EU and US agree to chart common course on AI regulation – CIO


“AI regulation necessitates joint efforts from the international community and governments to agree a set of regulatory processes and agencies,” Angelo Cangelosi, professor of machine learning and robotics at the University of Manchester in England, told CIO.

“The latest UK-US agreement is a good step in this direction, though details on the practical steps are not fully clear at this stage, but we hope that this will continue at a wider international level, for example with integration with the EU AI agencies, as well as in the wider UN framework,” he added.

Risks of AI misuse

Dr Kjell Carlsson, head of AI strategy at Domino Data Lab, argued that focusing on the regulation of commercial AI offerings loses sight of the real and growing threat: the misuse of artificial intelligence by criminals to create deepfakes and more convincing phishing scams.

“Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are designed to effectively tackle these threats as they mostly focus on commercial AI offerings that criminals do not use,” Carlsson said. “As such, many of these regulatory efforts will damage innovation and increase costs, while doing little to improve actual safety.”

“At this stage in the development of AI, investment in testing and safety is far more effective than regulation,” Carlsson argued.

Research on how to effectively test AI models, mitigate their risks and ensure their safety, carried out through new AI Safety Institutes, represents an “excellent public investment” in ensuring safety whilst fostering the competitiveness of AI developers, Carlsson said.

Many mainstream companies are using AI to analyze, transform, and even produce data – developments that are already throwing up legal challenges on myriad fronts.

Ben Travers, a partner at law firm Knights who specializes in AI, IP and IT issues, explained: “Businesses should have an AI policy, which dovetails with other relevant policies, such as those relating to data protection, IP and IT procurement. The policy should set out the rules on which employees can (or cannot) engage with AI.”

Recent instances have raised awareness of the risks to employers when employees upload otherwise protected or confidential information to AI tools, while the technology also poses issues in areas such as copyright infringement.

“Businesses need to decide how they are going to address these risks, reflect these in relevant policies and communicate these policies to their teams,” Travers concluded.
