AI Act: a step closer to the first rules on Artificial Intelligence

EUROPARL, 11/05/2023



Once approved, they will be the world’s first rules on Artificial Intelligence
MEPs include bans on biometric surveillance, emotion recognition, predictive policing AI systems
Tailor-made regimes for general-purpose AI and foundation models like GPT
The right to make complaints about AI systems
To ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe, MEPs endorsed new transparency and risk-management rules for AI systems.


On Thursday, the Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the first ever rules for Artificial Intelligence with 84 votes in favour, 7 against and 12 abstentions. In their amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want to have a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow.


Risk-based approach to AI - Prohibited AI practices

The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).


MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems such as:


“Real-time” remote biometric identification systems in publicly accessible spaces;
“Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorisation;
Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
Predictive policing systems (based on profiling, location or past criminal behaviour);
Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).