EU Parliament sends a global message to protect human rights from AI

Today, the Internal Market (IMCO) and Civil Liberties (LIBE) committees took several important steps to make this landmark legislation more people-focused by banning AI systems used for biometric surveillance, emotion recognition and predictive policing. Disappointingly, the MEPs stopped short of protecting the rights of migrants.

Today, May 11, the IMCO and LIBE committees of the European Parliament voted to put people first in the AI Act. This vote comes at a crucial time for the global regulation of AI systems and is a massive win for our fundamental rights, as the Parliament heeded the demands of diverse civil society voices.

Work on the EU AI Act started in 2020, and EDRi’s network and partners have been pushing for a people-first, fundamental rights-based approach from the beginning.

The Parliament is sending a globally significant message to governments and AI developers with its list of bans, siding with civil society’s demands that some uses of AI are just too harmful to be allowed. Unfortunately, the European Parliament’s support for people’s rights stops short of protecting migrants from AI harms.

Sarah Chander, Senior Policy Advisor, EDRi

 


MEPs bring down the hammer on unacceptably risky AI systems

The European Parliament’s lead committees send a clear signal that certain uses of AI are simply too harmful to be allowed, including predictive policing systems, many emotion recognition and biometric categorisation systems, and biometric identification in public spaces. These systems present severe threats to fundamental rights, and perpetuate systematic discrimination against already marginalised groups, including racial minorities.

We are delighted to see Members of the European Parliament (MEPs) stepping up to prohibit so many of the practices that amount to biometric mass surveillance. With this vote, the EU shows it is willing to put people over profits, freedom over control, and dignity over dystopia.

Ella Jakubowska, Senior Policy Advisor, EDRi

 

MEPs have heeded the warning of over 80 civil society groups and tens of thousands of supporters in the Reclaim Your Face campaign, electing to put a stop to many of the key practices which amount to biometric mass surveillance (BMS). This is a significant victory in the fight against practices that violate our privacy and dignity, and turn our public spaces into places of suspicion and the suppression of our democratic rights and freedoms.

In particular, this ban covers all real-time and most retrospective (‘post’) remote biometric identification (RBI) in public spaces, as well as discriminatory biometric categorisation and emotion recognition in unacceptably risky sectors. This is a historic step to protect people in the EU from many BMS practices by both state and private actors.

Whilst we welcome these steps, RBI is already a reality in law and practice in Europe. We will continue to advocate at EU level and in every Member State to end all BMS practices which chill our rights and participation in public life.

Push for transparency, accountability, and the right to redress

A key demand of civil society has been to require all actors rolling out high-risk AI (‘deployers’) to be more transparent and accountable about where and how they use certain AI systems. MEPs have agreed that deployers must perform a fundamental rights impact assessment before making use of AI systems.

However, MEPs only require public authorities and ‘gatekeepers’ (large companies) to publish the results of these assessments. This is an arbitrary distinction and an oversight, leaving the public with less information when other companies use high-risk systems.

In addition, transparency requirements have been added for ‘foundation models’, the large language models sitting behind systems like ChatGPT, including an obligation to disclose the computing power they require.

Significant steps have also been taken to empower people affected by the use of AI systems, including a requirement to notify and explain AI-based decisions or outcomes to those affected, and remedies for when rights have been violated.

 

Lingering concerns on the definition of ‘high risk’ and AI in migration

There is a danger that the safeguards the Parliament has put in place against risky AI systems will be compromised by the proposed changes to the risk classification process in Article 6 of the AI Act.

The change provides a large loophole allowing AI developers (who have an incentive to under-classify) to argue that they should not be subject to the legislation’s requirements. The changes proposed to Article 6 pave the way for legal uncertainty and fragmentation, and ultimately risk undermining the EU AI Act. We will continue pushing against these loopholes, which favour industry actors over people’s rights.

Furthermore, the European Parliament has stopped short of protecting the rights of migrants from discriminatory surveillance. The MEPs failed to add to the list of prohibited practices the use of AI to facilitate illegal pushbacks or to profile people in a discriminatory manner. Without these prohibitions, the European Parliament is paving the way for a panopticon at the EU border.

Unfortunately, the Parliament is proposing some very worrying changes relating to what counts as 'high-risk' AI. With the changes in the text, developers will be able to decide if their system is 'significant' enough to be considered high risk, a major red flag for the enforcement of this legislation.

Sarah Chander, Senior Policy Advisor, EDRi

 


What’s Next

A plenary vote with all MEPs is expected to take place in June, finalising the Parliament’s position on the AI Act. After that, a period of inter-institutional negotiations with the Member States will begin before this regulation can be passed and become EU law. The broad civil society coalition will continue to centre people in these negotiations.