The EU is taking the lead with legislation for artificial intelligence as it passes its AI Act to minimize risks posed by the technology.
The European Union (EU) Parliament has passed the EU AI Act to oversee the development and application of artificial intelligence in the region. On Wednesday, Parliament passed the law with 499 votes in favor, 93 abstaining, and 28 votes against.
The EU AI Act became necessary as government officials worried that the ongoing explosion of AI development and use could be dangerous. Officials are concerned that, as with many other technologies, artificial intelligence could be put to illegal or criminal use if not properly regulated.
The new act puts the EU ahead of the US regarding the control and adoption of artificial intelligence.
Specifics of the EU’s AI Act
The act requires companies to clearly label all content generated by artificial intelligence. Officials expect this to help quickly identify non-human content and stem the spread of false information. Companies must also publish summaries of any copyrighted data used to train their AI models. This disclosure requirement is meant to protect publishers from large companies that might otherwise unduly benefit from their work.
The legislation also protects the general public from undue targeting by government bodies, such as law enforcement agencies. The EU AI Act bans tools that analyze data to predict criminal behavior. The law would also ban tools capable of influencing voter behavior in elections.
Furthermore, the EU AI Act would restrict untargeted facial recognition and biometric surveillance systems, along with AI applications that could potentially harm people or the environment. Any application classified as “high risk” would receive further scrutiny.
Reactions to the Act
A major talking point in regulating AI is the difficulty of controlling such a fast-evolving, general-purpose technology. Regardless, many countries believe that artificial intelligence should not be left unregulated.
Speaking to The New York Times, Ada Lovelace Institute acting director Francine Bennet said regulating AI is important:
“Fast-moving and rapidly repurposable technology is of course hard to regulate, when not even the companies building the technology are completely clear on how things will play out. But it would definitely be worse for us all to continue operating with no adequate regulation at all.”
On the other hand, ChatGPT maker OpenAI might feel differently about the new law. While the act is still a draft, OpenAI has said it could leave Europe if the final text contains requirements it cannot meet. Interestingly, OpenAI CEO Sam Altman supported AI regulation in his testimony before the US Congress, saying regulation is necessary to maximize proper AI use and minimize potential risks.
The bill will now undergo further negotiations among EU member states and institutions, and the final EU AI Act may not be concluded until later in the year.