EU AI Act sets global standard for managing AI – TechTarget


The European Parliament passed some of the world’s first comprehensive artificial intelligence regulations Wednesday, meaning that enterprise businesses will need to start taking steps toward compliance.

The European Union’s AI Act regulates the technology by sorting AI models into risk tiers ranging from minimal risk to unacceptable risk. The EU classifies as unacceptable risk AI models such as real-time facial recognition systems in public spaces and systems that build facial recognition databases through untargeted internet scraping. Minimal-risk models include spam filters. Generative AI models such as ChatGPT won’t be classified as high risk under the EU AI Act, but their providers will have to disclose when content is AI-generated.

The risk-based approach outlined in the EU AI Act is “unique,” said Nitish Mittal, a partner at global research firm Everest Group. The approach addresses some of the AI paranoia attached to the growing use of large language models.

Classifying AI models by risk and naming unacceptable use cases upfront makes the regulation more practical, he said.

“It allows you to take an appropriate measure for the appropriate level of risk,” Mittal said.

Forrester Research analyst Enza Iannopollo applauded the new AI rules, noting that the EU AI Act marks the world’s first set of requirements to mitigate the technology’s risks.


“The goal is to enable institutions to exploit AI fully, in a safer, more trustworthy and inclusive manner,” she said in a statement. “Like it or not, with this regulation, the EU established the de facto standard for trustworthy AI, AI risk mitigation and responsible AI. Every other region can only play catch-up.”

Businesses need to prepare for EU AI Act

The EU AI Act will be enforced by EU member states, which will be required to establish or designate a market surveillance authority and a notifying authority to ensure regulation implementation. The EU Commission, its AI board and AI office, and other entities will also oversee implementation.

Unacceptable-risk AI models must be phased out within six months of the EU AI Act entering into force. Companies with high-risk models will have 24 months. Businesses that fail to comply with the EU AI Act could face hefty fines.

Iannopollo said the fines and extraterritorial effect of the rules could extend across the “AI value chain,” meaning that most global organizations using AI must comply with the EU AI Act. Some of the law’s requirements, especially those concerning unacceptable-risk models, will start being enforced later this year.

“There is a lot to do and little time to do it,” Iannopollo said. “Organizations must assemble their AI compliance team to get started. Meeting the requirements effectively will require strong collaboration among teams, from IT and data science to legal and risk management, and close support from the C-suite.”

[Graphic: Five things to know about the EU AI Act. European lawmakers have approved the landmark EU AI Act, establishing rules governing AI use.]

Indeed, the EU AI Act demands immediate attention from companies as they face a limited time frame for compliance, said Volker Smid, CEO of Acrolinx, a Berlin-based content marketing software platform provider.

“Compliance is not just a regulatory requirement,” he said in a statement. “It’s a strategic imperative for survival in a globally interconnected market.”

Everest Group’s Mittal said CIOs assessing compliance with the EU AI Act should focus on data. Most CIOs and C-suites need to get ahead of data governance and the data pipelines feeding AI models, and focus on AI’s value in specific use cases rather than rushing to adopt and scale the newest AI technologies.

“Don’t think it’s an AI problem — it’s a data problem,” he said. “What I mean is, when a lot of clients look at any of these AI issues … most of the problems happen at the data layer, which is who owns the data, where do we get the data [and] how do you factor that in.”

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
