EU Council and Parliament Reach Landmark Agreement on AI Regulation

Balancing Innovation and Rights in Europe's First AI Legislation

After three days of negotiations, the Council of the European Union and the European Parliament reached an agreement on the world's first comprehensive legislation on Artificial Intelligence (AI). The AI Act establishes harmonized rules for the safety of AI systems placed on the EU market while respecting citizens' fundamental rights.

The preliminary agreement, which still requires formal approval by the Council and Parliament under the standard procedures, also includes provisions to stimulate investment and innovation in the European AI sector.

The EU institutions aim for this legislation to become a global standard for AI technologies, much as the GDPR shaped personal data protection practices in other markets.

In cases of violation, financial penalties can reach 35 million euros or 7% of the company's global annual turnover, whichever is higher. More proportionate fines are envisaged for small and medium-sized enterprises and startups.

"The agreement maintains an extremely delicate balance," stated Spanish Deputy Minister for Digitization and Artificial Intelligence, Carme Artigas, on behalf of the Spanish Presidency of the EU Council. The balance pertains to "fostering innovation and the adoption of AI across Europe while fully respecting citizens' fundamental rights," she said.

The AI Act regulates AI technologies on a risk-based principle: the greater the risk a particular application poses, the stricter the rules that apply. As agreed so far, the legislation will take effect two years after its final approval, with certain exceptions.

The agreement reached by the Council and Parliament adds to the Commission's initial proposal of April 2021 rules for high-impact, general-purpose AI models that may pose systemic risks in the future, as well as for high-risk AI systems.

AI Under Watch

The legislation prohibits AI uses whose risk is deemed unacceptable. This category includes behavioral manipulation, untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in workplaces and educational institutions, social scoring, biometric categorization to infer sensitive data such as sexual orientation or religious beliefs, and certain cases of predictive policing.

A governance system is also established, giving the EU level certain responsibilities for enforcing the rules. It includes an AI Office within the Commission to oversee the most advanced AI models, assisted by a scientific panel of independent experts, coordinated through an AI Board of member state representatives, and supported by an advisory forum providing technical input from industry, civil society, and academic research.

However, law enforcement authorities will be able to use remote biometric identification in public spaces, with safeguards against misuse.

Users of high-risk AI systems that are public entities must register in the EU database before putting such systems into use, while users of emotion recognition systems must inform the individuals exposed to them.

Moreover, entities deploying high-risk AI systems are required to conduct an impact assessment on fundamental rights before putting them into use.

Overall, the AI legislation sets out specific criteria for defining what constitutes an AI system and for determining the scope of the rules. It will not cover member states' national security responsibilities or entities entrusted with them, systems used exclusively for military or defense purposes, or systems used exclusively for research and innovation.

For low-risk systems, the rules will be limited to light transparency obligations, while high-risk systems will be permitted subject to specific requirements and obligations before they can be placed on the EU market.
