European lawmakers have taken a major step toward regulating artificial intelligence, as two key committees of the European Parliament have given preliminary approval to the bloc's AI rules. The legislation, known as the Artificial Intelligence Act, would be the first comprehensive law of its kind worldwide, marking a milestone in the governance of advanced technology.
Historic Approval by European Parliament Committees
On Tuesday, the Internal Market and Civil Liberties committees voted 71-8, with seven abstentions, in favor of the Artificial Intelligence Act. The vote follows rigorous negotiations with member states and paves the way for formal endorsement by the full legislative assembly, expected in April.
Protecting Fundamental Rights and Promoting Innovation
The European Parliament emphasized that the regulation is designed to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while also fostering innovation and positioning Europe at the forefront of the field. The Act establishes clear obligations for AI systems based on their potential risks and level of impact, ensuring a balanced approach to regulation.
“The AI Act takes a critical step forward: MEPs in @EP_Justice and @EP_SingleMarket have endorsed the provisional agreement on an Artificial Intelligence Act that ensures safety and complies with fundamental rights,” stated a representative from the European Parliament committees on the social media platform X, formerly known as Twitter.
The rules set forth guidelines for large language models (LLMs) and generative AI tools, addressing concerns raised by entities such as OpenAI, backed by Microsoft (NASDAQ:MSFT). This development comes after EU countries and lawmakers tentatively agreed on AI governance rules in December 2023. Notably, France recently withdrew its opposition to the AI Act after securing conditions that balance transparency requirements with the protection of business secrets and ease the administrative burden on high-risk AI systems.
Strikingly, the EU’s stance on AI governance has been markedly more stringent than other global approaches. While Japan and the U.S. have leaned toward a lighter regulatory touch to bolster economic growth, Southeast Asian nations and China have likewise adopted more business-friendly AI governance strategies. In the U.S., President Joe Biden issued an executive order in October 2023 to manage the risks associated with AI, underscoring the diverse global landscape for AI regulation.
Explosive Growth in Generative AI Services
The recent surge in generative AI services has garnered widespread attention following the launch of OpenAI’s ChatGPT. Companies worldwide are actively developing their own LLMs, offering an array of services—including content, image, and voice generation—that harness the capabilities of AI to meet diverse consumer needs.
Meta Platforms’ (META) Emu Video, Emu Edit, AudioCraft, SeamlessM4T, and Llama 2, Alibaba’s (BABA) Tongyi Qianwen 2.0 and Tongyi Wanxiang, Baidu’s (BIDU) Ernie Bot, OpenAI’s text-to-image tool DALL·E 3, Alphabet’s (NASDAQ:GOOG) (GOOGL) Bard, Samsung’s (OTCPK:SSNLF) Gauss, and Getty Images’ (GETY) Generative AI model are just a few examples of the LLMs being pioneered by industry leaders.