AI Is One Tweak Away From Ending Careers – Newsweek

Many who experience generative artificial intelligence reach the same two quick conclusions. First: It’s amazing, since it writes and reasons better than many of my colleagues! Second: It’s terrifying, since it writes and reasons better than many of my colleagues! For the fretters there has been a single saving grace: AI messes up.

How badly? Comically. People who couldn’t write a short story to save their lives ask ChatGPT to find a review of their latest novel, then delight when the bot obligingly produces one. My job is somewhat safe, they think. But if they’re smart, a little voice might add: For now.

This inability to trust the results of queries (delivered though they are with what reads like supreme assurance) has been at the center of several panel discussions I’ve been on about AI and journalism. For now, the industry is being very careful with AI. But an obvious question hangs in the air: What happens if they fix the problem? After all, if there’s one thing that applies to all technology except Microsoft updates, it’s that it keeps improving.

A photo taken on Feb. 26 shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, Germany.


I took this concern to ChatGPT, as anyone would, and received this: “I believe that as AI continues to advance, there will be significant improvements in its ability to verify the results of inquiries. Currently, AI systems [cannot] independently verify the accuracy of the information they produce. Future generations are expected to … cross-reference information, evaluate sources, and discern between reliable and unreliable sources.

“These advancements could lead to AI systems that … provide more accurate and trustworthy responses,” it added. “Techniques such as fact-checking algorithms, knowledge graph integration, and probabilistic reasoning could enable AI systems to assess the credibility of information more effectively.”

Of course, “as an AI natural language model”—which it reminds you of tirelessly—the bot has no agenda beyond pure analysis (though that can be a fine line, as anyone who’s written a “news analysis” can confirm). But the overeager detail of that last sentence sure seemed to contain a glimmer of a gloat. Perhaps a poke in the eye of the salaried stiff.

The claim is convincing—but as we know, it could also be untrue. ChatGPT might have found a satirical article about probabilistic reasoning in The Onion. So, I consulted Elik Eizenberg, a successful London-based serial entrepreneur whose Scroll.AI startup develops tools relevant to this discussion.

“As we grapple with how to move forward, trust is becoming a major concern for AI,” confirmed Elik, who does not deny being a human with opinions. “For journalists and other content creators, AI can be tremendous if you use it right, and lead to potentially devastating breakdowns if misused. I agree that coming iterations will address this in profound ways.”

If this is true—if it becomes extremely likely that the result of AI research is rock-solid and unassailable—the results will be dramatic. It will mean the little voice was right.

Imagine a scenario where a generative AI system not only produces articles, reports, or marketing content but also includes footnotes, hyperlinks, and references to convincingly substantiate its claims. Imagine if editors started to realize that the chances of fooling the AI, or of receiving false outputs, are infinitesimally small. Smaller, indeed, than the chance of deliberate sabotage by a rogue reporter, which has happened here and there, even in calmer days.

Consider the outcome if key stakeholders concluded that from news articles and press releases to marketing campaigns and consulting reports, AI-generated content would be indistinguishable from the human-created product as regards accuracy and credibility.

It would mean, basically, that you could have AI write this column instead of me and there would be no greater reason to suspect that it is nonsense. I might argue that this would not contain the same degree of humor, “lived experience” and je ne sais quoi—but do you believe me? I am a human with an agenda.

(Or am I?)

It would have seismic consequences for content-creation industries such as journalism, marketing, advertising, and strategic consulting.

Distressed businesses would have the choice of either hiring consultants from McKinsey & Company and spending months dealing with youthful squads of the arrogant besuited, or uploading the company profile, annual report, mission statement and financials, and waiting about three seconds. Then asking for a tweak, and waiting about three more seconds. Until it got it right.

It would be the inflection point compelling widespread adoption of AI-generated content, which could in turn trigger mass layoffs, job displacement, and the erosion of traditional roles across a vast array of industries.

So, how much time do we have?

OpenAI, the maker of ChatGPT, has shed any notion of social-conscience caution from its early messaging, and marches on fearlessly into the night.

At recent events it presented a series of developments, including allowing each user to build their own customized version of the bot, which they can even sell in a dedicated app store. And the most hyped announcement—right on cue—relates to the verification issue. A new model, called GPT-4 Turbo, is expected to be “much more accurate in long contexts,” according to CEO Sam Altman, as it will be able to take in far larger amounts of text for use as instructions—from a few thousand tokens previously to the equivalent of some 300 pages.

Dealing with this will be a major societal challenge. It may finally flip the narrative on the idea that Luddites are to be ridiculed. Yes, humanity emerged not just intact but stronger from the industrial revolution and the automation of the textile industry, which freaked out the original Luddites. But the challenge facing new Luddites is of a different order.

We’ll need to find useful ways to repurpose present and future workers into functions and paradigms that generative AI cannot replicate. Largely, that will involve areas of endeavor that depend on true personality, genuine inspiration, and unique brilliance. It may shift tremendous demand to some types of manual labor. Politics and prostitution too, I fear. Maybe poetry as well; that one’s not so clear.

Journalists could argue that while AI-driven content may be cost-effective and scalable, it lacks the human touch, investigative rigor, and ethical judgment that define quality journalism. If media organizations prioritize AI-generated content over original reporting, they risk a decline in journalistic standards and an erosion of public trust. In the end, this will come down to what the public wants; current indicators suggest only a minority is willing to pay top dollar for true quality.

In public relations and marketing, one could argue that without human creativity, intuition, and emotional intelligence, AI-generated communications may fail to resonate with target audiences, undermining the effectiveness of marketing efforts and diluting brand identities. Here, too, the proof will be found in the market.

One surging industry may be the regulation of AI, especially if the new Luddites gain political traction fast enough. They could argue that there is an urgent need not only for ethical guidelines, regulations, and accountability mechanisms to ensure that AI-generated content upholds integrity, consumer protection, and public safety, but also for limits on its use through hyper-regulation and punitive taxation.

Either way, society may be facing the Terminator of tenure. Instead of this issue dominating the discourse, we run around arguing about Critical Race Theory. So here’s the thing: Humanity will get pretty much what it deserves.

Dan Perry, a lapsed computer programmer, is the former Cairo-based Middle East editor and London-based Europe/Africa editor of the Associated Press.

The views expressed in this article are the writer’s own.

Uncommon Knowledge

Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.

