Meta overhauls rules on deepfakes, other AI-generated media ahead of election season – Business Today


In anticipation of the upcoming election season, Meta, the parent company of Facebook, announced significant alterations to its policies regarding digitally manipulated media. These changes aim to confront the challenge posed by deceptive content generated by cutting-edge artificial intelligence technologies.

According to Monika Bickert, Vice President of Content Policy at Meta, the social media giant will roll out “Made with AI” labels starting in May. These labels will be applied to AI-generated videos, images, and audio shared across Meta’s platforms. Previously, Meta’s policy only targeted a limited scope of doctored videos.

Bickert outlined that Meta will introduce distinct and more conspicuous labels for digitally altered media presenting a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether AI or other tools were used in their creation.

“We plan to start labelling AI-generated content in May 2024, and we’ll stop removing content solely on the basis of our manipulated video policy in July. This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media,” Bickert said.

This shift marks a departure from Meta’s previous approach to manipulated content. Rather than focusing primarily on removal, Meta now intends to keep such content online while giving viewers information about how it was created.

Previously, Meta disclosed plans to identify images produced using third-party generative AI tools through embedded invisible markers within the files. However, no specific commencement date was provided at the time of the announcement.

A spokesperson for Meta confirmed to Reuters that the revised labelling strategy will apply to content shared on Meta’s Facebook, Instagram, and Threads platforms. Different rules govern Meta’s other services, including WhatsApp and Quest virtual reality headsets.

These policy changes precede the highly anticipated US presidential election scheduled for November as well as elections in multiple countries including India. Tech researchers have raised concerns about the potential impact of novel generative AI technologies on the electoral landscape. Political campaigns have already begun leveraging AI tools, prompting a reevaluation of guidelines by providers like Meta and industry leader OpenAI.

In February, Meta’s oversight board criticised the company’s existing rules on manipulated media as “incoherent.” This assessment followed the board’s review of a manipulated video featuring US President Joe Biden, posted on Facebook last year. The video, which inaccurately suggested inappropriate behaviour by Biden, remained accessible on the platform.

Meta’s existing policy on “manipulated media” currently targets misleadingly altered videos produced by AI or those that manipulate speech. However, the oversight board recommended extending these guidelines to non-AI content, asserting that such content can be equally misleading. Additionally, the board advocated for applying these standards to audio-only content and videos depicting fabricated actions.
