U.S. Rep. Shontel Brown fears AI election targeting of Black and brown voters

WASHINGTON, D.C. – Fake political audio recordings, photographs and videos generated by artificial intelligence (AI) have alarmed lawmakers, including U.S. Rep. Shontel Brown.

The Warrensville Heights Democrat on Wednesday led a bipartisan group of U.S. House of Representatives members in a letter asking several federal agencies to probe AI’s potential election misuse, including its use as a weapon by adversaries of the United States.

The letter, which Brown sent to the Department of Justice, the Department of Homeland Security, and the Election Assistance Commission (EAC) along with 32 colleagues, seeks information about AI’s use to intimidate, threaten, or misinform voters during the 2024 election cycle.

The letter expresses concern that these technologies make it easier to spread disinformation, citing recent cases in which a deepfake, AI-generated audio robocall imitating President Biden was used to discourage New Hampshire Democrats from voting in their presidential primary, and in which a super PAC supporting Florida Gov. Ron DeSantis’ presidential campaign released an ad featuring fake audio of ex-President Donald Trump.

In Ohio, state lawmakers from both parties have introduced legislation targeting deepfakes, including one bill that would create civil liability for those who create or share deepfakes to influence the results of an election.

“We have particular concern about the concentrated deception targeted at Black and brown and other minority communities,” the letter says. “The U.S. Senate Select Committee on Intelligence found that during the 2016 and 2020 presidential election cycles the Russian government created disinformation content on social media to support former President Trump which was aimed specifically at the Black community. As noted in the report, Russia’s objective was to cause political instability in the United States.”

The letter asks how the agencies plan to collaborate to ensure generative AI is not used to intimidate, dissuade, or mislead voters in the 2024 presidential election cycle, and whether the Election Assistance Commission plans to update its AI toolkit to include practical, usable instructions for responding to AI-generated disinformation, threats, and other forms of voter intimidation and suppression.

A statement in support of the letter from Alex Ault, policy counsel at the Lawyers’ Committee for Civil Rights Under Law, described Black and brown Americans as “the number one target in recent elections for mass disinformation and misinformation campaigns.

“The widespread adoption of artificial intelligence is no excuse to endure more supercharged attacks on Black power and participation at the ballot box,” the statement said. “The time for action is now. Bad actors cannot hide behind new technologies to attack our democracy with impunity.”

“This elusive, unregulated technology has the potential to disrupt our democracy through the spread of mis and disinformation,” said another statement from Cedric C. Haynes, the NAACP’s vice president of policy and legislative affairs. “That’s why the NAACP stands firm in our belief that generative AI must not be used to further this aim. We will continue to educate our communities on this threat, but we can’t do this alone. Our government must have a plan and take the lead on addressing and mitigating the danger that generative AI poses.”

Sabrina Eaton writes about the federal government and politics in Washington, D.C., for cleveland.com and The Plain Dealer.
