Microsoft warns China could influence elections with AI content – UPI News


April 5 (UPI) — Microsoft warned in a report Friday that China and North Korea pose artificial intelligence threats aimed at influencing U.S., South Korean and Indian elections this year with AI-generated false content.

Chinese malign cyber efforts use AI to create deepfake videos, audio and false "news reports" complete with AI-generated phony news anchors, aimed at influencing election outcomes, according to Microsoft.


“As populations in India, South Korea, and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections,” Microsoft’s report said.

While the impact of this content in altering the opinions of viewers "remains low," the report warned that China's "increasing experimentation in augmenting memes, videos, and audio will continue, and may prove effective down the line."


These AI tools have been used for, among other things, amplifying controversial issues in the United States using AI-generated memes that criticize the Biden administration.

According to Microsoft, China used AI-generated images while posting false stories that a U.S. government “weather weapon” caused the Maui, Hawaii fires. China used posts in at least 31 languages across dozens of websites and platforms to promote the false narrative that the American government deliberately set the fires.

China has also targeted nearly every South Pacific Island country using a China-based espionage group called Gingham Typhoon.

The report said during the summer of 2023 this campaign was observed by Microsoft “hitting international organizations, government entities, and the IT sector with complex phishing campaigns. Victims also included vocal critics of the Chinese government.”

Microsoft also said a Chinese cyber group it calls Storm-0062 “targets military entities and critical infrastructure in the United States,” while the most prolific Chinese user of AI content is Storm-1376, a Chinese Communist Party-linked actor also known as “Spamouflage” or “Dragonbridge.”

It added China targeted Taiwan using Chinese company ByteDance’s tools to create phony AI-generated “news anchors” and content featuring Taiwanese officials.

One example was AI-generated audio clips of Foxconn owner Terry Gou, who ran as an independent in the presidential race there before bowing out. Videos used what sounded like his voice to endorse another candidate in Taiwan’s presidential race, according to Microsoft.


China also used AI-enhanced videos in Canada to target Chinese members of parliament using the likeness of a Canada-based Chinese dissident. The campaign included harassment of the politicians via social media accounts.

AI-generated propaganda is used to influence politics and elections in targeted countries with false information designed to appear real, often exacerbating political divisions in ways that benefit the malign cyber actors.

“Microsoft has observed several notable cyber and influence trends from China and North Korea since June 2023 that demonstrate not only doubling down on familiar targets, but also attempts to use more sophisticated influence techniques to achieve their goals,” Microsoft’s report said.

Microsoft said North Korea is also expected to launch cyberattacks using “increasingly sophisticated cryptocurrency heists and supply chain attacks targeted at the defense sector.”

The report said North Korea in 2023 stole hundreds of millions of dollars in cryptocurrency and conducted supply chain cyberattacks while targeting its perceived national security adversaries.
