The AI Perils Buried in the Fine Print

If there is one craze that’s taken hold on Wall Street, it is the growth potential of generative artificial intelligence. “If we succeed, everyone who uses our services will have a world-class AI assistant to help get things done,” Meta CEO Mark Zuckerberg said Feb. 1, with Amazon CEO Andy Jassy predicting that the tech will “drive tens of billions of dollars in revenue” in coming years.

And with Hollywood increasingly competing with the likes of Apple and Google for viewer time, the investments those companies are making in Gen AI become all the more critical for the industry. But the emerging tech is also facing a moment of uncertainty: How will copyright law apply? Will new laws emerge? Will a Gen AI watch The Matrix and think, “Hey, that seems like a cool idea”?


The Hollywood Reporter examines the state of play, based on what companies have stated are “risk factors” in their annual corporate reports.

It can cause serious harm to society

AI might give everyone a personal assistant, but the stated “risk factors” of many of the biggest tech companies show there are concerns about what the tech could do to the world.

“Unintended consequences, uses or customization of our AI tools and systems may negatively affect human rights, privacy, employment or other social concerns,” Google owner Alphabet wrote in its Jan. 31 annual report. Microsoft wrote in its own report that the tech could “have broad impacts on society,” adding that as a result of “their impact on human rights, privacy, employment or other social, economic or political issues, we may experience brand or reputational harm.”

Translation: Sorry in advance for the impact on your human rights.

It makes competing harder for Hollywood

Is it possible for major studios to keep up in a world where content creators or their traditional competitors can use AI to create anything they imagine at zero marginal cost? The studios themselves flag the risk: “New technological developments, including the development and use of generative artificial intelligence, are rapidly evolving,” Netflix wrote in its latest 10-K report, filed Jan. 26. “If our competitors gain an advantage by using such technologies, our ability to compete effectively … could be adversely impacted.”

YouTube, for its part, is now requiring AI labels for generated content that is realistic, though it will allow its users to skip those labels for “clearly unrealistic content, such as animation or someone riding a unicorn through a fantastical world.” So be sure to generate those unicorns for your next YouTube upload, kids.

It could be used for other nasty stuff

Another fear is that AI could become an automatic defamation machine (like an ATM, but for making stuff up) or a hacker’s helper: “We cannot guarantee that third parties will not use such AI technologies for improper purposes, including through the dissemination of illegal, inaccurate, defamatory or harmful content, intellectual property infringement or misappropriation, furthering bias or discrimination, cybersecurity attacks, data privacy violations, other activities that threaten people’s safety or well-being on- or offline,” Meta wrote Feb. 2 in its latest annual report.

Alphabet added, “Increased use of AI in our offerings and internal systems may create new avenues of abuse for bad actors.”

It’s a concern top of mind for entertainment companies, too: “The techniques used to access, disable or degrade service or sabotage systems change frequently and continue to become more sophisticated and targeted, and the increasing use of artificial intelligence may intensify cybersecurity risks,” Fox Corp. wrote Feb. 7 in its annual report.

It could run into lawmakers’ regulations or tough court rulings

AI legislation is still in flux, as are a slew of high-profile court cases. How those play out is still TBD. “We may not always be able to anticipate how courts and regulators will apply existing laws to AI, predict how new legal frameworks will develop to address AI, or otherwise respond to these frameworks as they are still rapidly evolving,” Meta noted in its risk factors.

AI rules “remain unsettled, and these developments may affect aspects of our existing business model, including revenue streams for the use of our IP and how we create our entertainment products,” Disney added, with Comcast striking a similar tone: “The legal landscape for new technologies, including artificial intelligence (‘AI’), remains uncertain, and development of the law in this area could impact our ability to protect against unauthorized third-party use, misappropriation, reproduction or infringement.”

Translation: If the Midjourney prompt “CGI cartoon of a toy cowboy and his astronaut friend” holds up in court, watch out!

And …

How Hollywood Is Lobbying for Safeguards

Neither Hollywood nor Big Tech is letting that legal uncertainty slow it down. According to a review of federal lobbying disclosure reports, a small army of lobbyists is descending on Capitol Hill to talk AI on behalf of companies like Comcast, Apple and News Corp., not to mention unions like SAG-AFTRA, the WGA, the Communications Workers of America and the NFL Players Association, and trade groups like the National Association of Broadcasters and the Recording Industry Association of America. And they will be competing for time with the likes of OpenAI, Meta and Alphabet, which have robust lobbying bills of their own.

A union source tells THR that its efforts have been focused on the AI threat to jobs, while a studio source says that copyright concerns are a priority. But if it’s purely a matter of money, Big Tech would seem to have a leg up based on the number of registered lobbyists alone.

This story first appeared in the March 27 issue of The Hollywood Reporter magazine.
