AI Act – trilogue outcome

December 9, 2023

Svenja Hahn (FDP), shadow rapporteur for the European AI Act on behalf of the liberal Renew Europe group, has mixed views on the trilogue result of the AI Act:

"In 38 hours of negotiations over three days, we were able to prevent massive overregulation of AI innovation and safeguard rule of law principles in the use of AI in law enforcement. Overall I wanted to see more openness to innovation and an even stronger commitment to civil rights."

Hahn regrets that a majority for a complete ban on real-time biometric identification was not achievable, but emphasizes that biometric mass surveillance was prevented:

"We succeeded in preventing biometric mass surveillance. Despite an uphill battle over several days of negotiations, a complete ban on real-time biometric identification was not achievable against the massive headwind from the member states, which wanted biometric surveillance to remain as unregulated as possible; only the German government had called for a ban. We were able to enshrine decisive safeguards for the rule of law. The technology may only be used for the targeted identification of victims of abduction, human trafficking and sexual exploitation, of missing persons, or of persons explicitly wanted for very serious crimes such as kidnapping or rape. No other persons may be biometrically identified in the video material."

For Hahn, the regulations on General Purpose AI (GPAI) contain both positive and negative aspects:

"We were able to prevent a blanket high-risk classification of GPAI systems and to establish burden sharing along the value chain. This is an extremely important success for European companies: they can build secure systems without shouldering all the compliance costs or being held responsible for malfunctions of GPAI systems. In particular, small and medium-sized companies that integrate GPAI systems such as ChatGPT into their own products will not be saddled with unnecessary regulatory burdens. The planned regulation of GPAI models, also known as foundation models, could have been more balanced. The agreed two-tier solution for GPAI models makes more sense than across-the-board excessive requirements for all models. Many of the requirements will apply only to the most impactful models, but the requirements for all other models are too extensive and unnecessarily bureaucratic. A code of practice as an interim solution until standards are available can make compliance easier, particularly for small and medium-sized companies, because the alternative of a conformity assessment can quickly become costly. This code of practice must now be developed quickly."

Hahn predicts:

"This political deal still needs to be fine-tuned on many technical aspects. We need to take a very critical look at the trilogue agreements in terms of civil rights, innovation and legal certainty. This will determine whether the AI Act can ultimately be approved."