Deepfake AI regulation a tightrope walk for Congress
U.S. lawmakers will need to strike the right balance between regulating the use of tools such as generative AI and maintaining the free speech protections guaranteed by the First Amendment.
That’s according to witnesses discussing draft legislation at a hearing Tuesday concerning AI-generated voice and visual replicas. Issues with deepfake AI, or AI used to create realistic but deceptive audio, images and video of an individual, have escalated over the last few years. From an audio call impersonating President Joe Biden to AI-generated songs mimicking artists such as Beyoncé and Rihanna, Sen. Chris Coons (D-Del.) said use of such tools raises pressing legal questions that need to be answered.
“These issues aren’t theoretical,” Coons said during the hearing held by the Senate Judiciary Subcommittee on Intellectual Property. “As AI tools have become increasingly sophisticated, it’s become easier to replicate and distribute fake images of someone — fakes of their voice, fakes of their likeness — without consent. We can’t let this challenge go unanswered, and inaction should not be an option.”
Indeed, U.S. federal enforcement agencies, the U.S. Congress and the European Union are zeroing in on the use of generative AI to create fake videos, sounds and pictures of individuals. Members of the U.S. House of Representatives proposed legislation targeting this issue in January with a bipartisan bill called the No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act. States are also advancing deepfake AI legislation, including Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act.
In October, a bipartisan group of U.S. senators proposed the Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act, a draft proposal that takes aim at generative AI and implements protections for a person’s voice and visual likeness from unauthorized recreations. The NO FAKES Act also includes language to hold platforms such as Meta’s Facebook and Instagram liable for hosting unauthorized digital replicas.
While some experts support specific legislation for regulating deepfake AI, others believe existing laws cover unlawful uses of the technology and caution against overly broad rules that could hinder innovation.
Stakeholders testify on regulating AI
Robert Kyncl, CEO of Warner Music Group, testified during the hearing that deepfake AI poses a threat to individuals' voices and likenesses and needs to be regulated.
He cautioned that the technology could affect the world at large, including business leaders, whose images or voices could be manipulated in a way that damages business relationships.
“Untethered deepfake technology has the potential to impact everyone,” Kyncl said.
A bill like the NO FAKES Act should include enforceable intellectual property rights for an individual’s likeness and voice, he said, as well as effective deterrence for AI model builders and digital platforms that knowingly violate a person’s IP rights.
Kyncl added that while some argue requiring responsible AI threatens freedom of speech, he disagrees.
“AI can put words in your mouth and AI can make you say things you didn’t say or don’t believe,” Kyncl said. “That’s not freedom of speech.”
Musical artist and performer Tahliah Debrett Barnett, known as FKA Twigs, also testified in support of legislation. She said Congress must enact a law to protect against misappropriation of artists’ work.
“I stand before you today because you have it in your power to protect artists and their work from the dangers of exploitation and the theft inherent in this technology if it remains unchecked,” she said.
Ben Sheffner, senior vice president and associate general counsel of law and policy at the Motion Picture Association, testified that while the NO FAKES Act is a “thoughtful contribution” to the debate about how to establish guardrails against abuses of the technology, legislating around AI-generated content involves regulating the content of speech, which the First Amendment “sharply limits.”
“It will take very careful drafting to accomplish the bill’s goals without inadvertently chilling or even prohibiting legitimate, constitutionally protected uses of technology to enhance storytelling,” he said. “This is technology that has entirely legitimate uses that are fully protected by the First Amendment and do not require the consent of those being depicted.”
In addition, Sheffner said it’s important for Congress to pause and ask whether the harms it seeks to address are already covered by existing law prohibiting defamation or fraudulent activities. He said if there is a gap in those laws in certain areas, such as election-related deepfakes, the best answer is “narrow, specific legislation targeting that specific problem.”
Lisa Ramsey, a law professor at the University of San Diego School of Law, agreed with Sheffner, testifying that the NO FAKES Act is inconsistent with First Amendment protections because it’s “overbroad and vague.” However, she said the bill could be revised to address those concerns by not suppressing protected speech more than necessary.
Deepfake AI draws national, global scrutiny
Congress isn’t the only entity acting on this issue. The Federal Communications Commission made AI-generated voices in robocalls illegal in February. In addition, the Federal Trade Commission is seeking public comment on proposed rulemaking that would prohibit impersonation of individuals, according to a news release.
In the release, the FTC said it’s taking action due to a surge in complaints and public outcry around fraudulent impersonations. The FTC pointed to emerging technology such as AI-generated deepfakes as further escalating the issue. The FTC’s proposed rulemaking is also considering whether the rule should declare it unlawful for AI platforms that create images, video or text to provide a service that “they know or have reason to know is being used to harm consumers through impersonation.”
While it’s important to remove unauthorized AI-generated content and prevent deceptive practices, it’s also important to consider how existing rules, regulations and laws prohibiting unlawful behavior already apply to AI, Linda Moore, president and CEO of TechNet, a network of senior tech executives focused on advancing innovation, said in a statement.
Moore said the FTC’s proposed rule is overly broad and could result in unintended consequences hindering the application of existing laws as well as AI innovation.
“A more tailored rule would more effectively prevent impersonations of individuals, allow innovation to flourish and encourage companies to implement strong compliance programs,” she said in the statement.
The European Union is also acting on this issue. The European Commission, the EU’s enforcement arm, opened a formal proceeding this week to assess whether Meta breached the Digital Services Act with its practices and policies around political disinformation.
Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.