People should have proper control over how AI shapes their lives.

02 June 2023 | Artificial Intelligence, Op-Ed

The AI Act is intended to be a major legal framework introducing protections for fundamental rights in a fast-developing technology that is already shaping all aspects of society. In the recent votes in the Parliament’s Internal Market (IMCO) and Civil Liberties (LIBE) committees, MEPs adopted amendments recommended by a coalition of CSOs, including ECF. The amendments introduce a number of fundamental rights protections that were missing from the Commission’s and Council’s versions of the text. We hope that when the amended Act goes to the plenary, MEPs will endorse the proposals put forward by their colleagues. However, not all important issues have been addressed, or even discussed.

The development of AI systems must serve people’s needs and the common good, without exemption. The Act will introduce a risk-based classification system for AI systems, ranging from “unacceptable risk” (prohibited) to “minimal risk”. But several fundamental questions about this approach remain unanswered. What will be the criteria for determining each AI system’s level of risk? Who will be responsible for the assessment? And perhaps most importantly, will all details of every assessment be public and subject to scrutiny?

The need to regulate AI is not first and foremost about organising a market. It is primarily about safeguarding people’s fundamental rights. Many crucial questions about AI systems’ very functioning and legitimacy have not yet been publicly discussed. Indeed, the urgency of regulating a rapidly developing technology has come at the expense of keeping people informed about the problems that need to be solved, and has forestalled a wider conversation about what we actually want from AI.