A view from Brussels: EU AI Act adoption is ‘not the arrival point for AI legislation’ – International Association of Privacy Professionals

Unless you have been living in a cave with no internet connection this week, you will have seen ample reporting — including from the IAPP — on the European Parliament’s adoption of the EU AI Act.

AI Act co-rapporteurs Dragoş Tudorache and Brando Benifei celebrated Wednesday’s vote as a major achievement for the EU, marking the end of a long parliamentary journey that even predates the European Commission’s original proposal. While now is the time for implementation, they also conveyed the EU’s ambition moving forward: to support implementation of the regulation and promote its safe and human-centric approach globally.

This week’s vote is not the arrival point for AI legislation. In the short term, Parliament officials have already stated that a corrigendum to the AI Act will be published in April; both texts will then be approved by the Council of member states before publication in the Official Journal of the European Union.

In the medium term, the next European Commission will have more to do on AI. It will have to tackle AI in the workplace and work on attracting investment in Europe. It will also pick up negotiations on the AI liability proposal and will likely look at AI and intellectual property, not to mention keep an eye on the implementation of the AI Act and that of the updated Product Liability Directive which is about to enter into force.

“It’s complicated” doesn’t even begin to describe the state of affairs. The IAPP is launching a suite of resources to help privacy pros navigate what this means and how to get started, including LinkedIn Lives, 101 infographics and, of course, ongoing reporting as the clock starts to tick soon on the implementation period for many organizations.

Incidentally, on the same day Parliament adopted the AI Act, France’s President Emmanuel Macron received a report from the country’s AI Commission. Set up last September, the commission was tasked with making suggestions to strengthen France’s position on AI, and the 25 recommendations in its report focus on six areas:

  • A nationwide training and awareness plan for all sectors.
  • AI innovation financing with the creation of a 10 billion euro short-term fund.
  • Super-computing power.
  • Data access, including facilitating access to personal and public sector data, deleting certain authorization requirements for health data, reducing the CNIL’s response time to requests, creating sectoral databases, and clarifying data sharing rules — all of which are already addressed at the EU level.
  • Public research and collaboration with the private sector.
  • Global governance, creating a global AI organization and setting up an international fund to support ethical AI development.


  • Also this week, the European Parliament formally adopted the revised Product Liability Directive. The directive governs compensation for damage suffered due to a product defect, establishing a regime of strict liability and updating 40-year-old rules. It expands the scope of products to cover digital products like software and AI; expands the concept of damage to include loss or corruption of data; changes the burden of proof in cases where proving defectiveness or a causal link is difficult due to technical or scientific complexity; changes how a liable party is identified, including cases where a product has been significantly modified and reintroduced into the market; and extends the liability period in exceptional cases. Once it enters into force, this updated law will impact many manufacturers, importers and distributors in the EU.
  • The European Parliament also adopted the Cyber Resilience Act this week, with 517 votes in favor, 12 against and 78 abstentions. The Council must formally approve the text next before it is published in the OJEU and enters into force. The European Commission originally proposed this regulation for “products with digital elements” with two main objectives: to encourage a life-cycle approach to the cybersecurity of connected devices and ensure they are placed on the market with fewer vulnerabilities, and to allow users to take cybersecurity into account when selecting and using connected devices. The CRA defines the chain of responsibility in the cybersecurity ecosystem and introduces, among other new obligations, a cybersecurity risk assessment in the technical documentation of a new connected device, as well as requirements to report incidents impacting the security of connected devices and actively exploited vulnerabilities, both within 24 hours of becoming aware of the incident.
  • The European Union Agency for Cybersecurity presented an overview of its Cybersecurity Certification framework. This work stems from the Cybersecurity Act, adopted in 2019, which created an EU-level framework for EU-wide rules on the cybersecurity certification of products, processes and services. The framework has been the basis for drafting certification schemes covering common criteria, 5G and, perhaps more significantly, trusted cloud services. This last scheme has been the source of intense political and technical discussions going right to the heart of sovereignty debates, as France has been promoting the inclusion of sovereignty requirements in EU-level schemes. The debate is not yet resolved, which explains the delays in finalizing the trusted cloud scheme. This work could also be relevant for AI, as ENISA is assessing whether and how this cybersecurity certification workstream could apply to the technology, as well as how schemes under development could be reused. An amendment to the certification framework portion of the CSA was proposed in April 2023 to extend its application to “managed security services.”
