Svenja Hahn (FDP/Renew Europe), the liberal lead negotiator in the Internal Market Committee on the European AI Act, is criticising the final legislative text.
"The AI Act is turning into an Anti-Innovation Act. The combination of too much innovation-hampering regulation and too little protection of civil rights is a toxic mix for our European economy and societies. After a chaotic legislative process and a marathon trilogue, the result is disappointing."
Hahn is upset with the final text regarding civil rights: "We are sleepwalking into a China-light scenario. Though the European Parliament was unable to achieve a ban on real-time biometric surveillance against the resistance of the member states, we managed to include a number of safeguards for the use of these systems. There are still loopholes, such as a reference to the threat of a foreseeable terrorist attack, as well as several national security exceptions throughout the law, which can potentially lead to mass surveillance. The retrospective biometric identification of individuals is possible almost without any legal safeguards, such as prior judicial authorisation. There is no threshold on crimes, so biometric identification can be used even for minor offences. In addition, there are insufficient restrictions on predictive policing. The AI Act might become a wildfire instead of a firewall for civil rights."
Hahn also sees major threats for AI innovation made in Europe: "Big tech companies with large compliance departments will be able to afford the bureaucratic and financial burdens that arise from the new regulation. Smaller and medium-sized companies will suffer from the bureaucratic burdens and the high level of legal uncertainty. An important goal of the AI Act, to create a framework that promotes innovation for AI made in Europe, has not been met. The AI Act could become a barrier rather than a driver of innovation and could lead to an exodus of AI development from the EU."
Hahn sees improvements compared to the Commission's proposal: "We were able to save AI developers from the most impractical and expensive rules that had no added value for consumer protection. For example, a simple and harmless AI used in a high-risk area is no longer automatically considered high-risk. Take a doctor's appointment system, for example. The definition of AI is now aligned with that of the OECD, which at least ensures international compatibility. Regulatory sandboxes will be established as a new tool, which is especially important for innovative start-ups. It is crucial that research and development are excluded from the scope of the AI Act and that responsibilities are clarified for the various actors along the AI value chain. With these and a number of other improvements, we managed to fix at least some of the most problematic points of the Commission's original proposal."
Hahn criticises the fact that the rules in many areas are not clear enough in the legal text and are only to be clarified by guidelines that the Commission still has to draft: "Vague political compromises create massive legal uncertainty instead of a clear legal framework. Developers and even users are being overburdened. This will massively set back AI made in Europe. China simply acts, the USA invents, and the EU regulates."