5-ish Things on AI: Tech Giants to Pay for AI Upskilling, Meta Adding More Labels to AI Content – CNET


Aside from death and taxes, the other item to add to the list of things you can be certain of is that new technologies disrupt the job market. That’s certainly true of the latest technology, generative AI, with economists, educators, regulators and business analysts predicting that gen AI will require changes to job descriptions and the skills needed to be a successful member of the workforce in the not-too-distant future. 

And it’s why I’ve written many, many times about the gen AI effect on jobs, and the need to reskill today’s workforce starting now.

I’m glad to know I’m not the only one who thinks reskilling is essential. Nine companies, led by Cisco and including Google, IBM, Intel and Microsoft, joined forces on April 4 to announce a new task force whose goal is to evaluate “how AI is changing the jobs and skills workers need to be successful.” The first order of business for the AI-Enabled Information and Communication Technology (ICT) Workforce Consortium will be to produce a report about how to upskill workers, after it identifies and evaluates “the impact of AI on 56 ICT job roles.”

“These job roles include 80% of the top 45 ICT job titles garnering the highest volume of job postings for the period February 2023-2024 in the United States and five of the largest European countries by ICT workforce numbers (France, Germany, Italy, Spain, and the Netherlands) according to Indeed Hiring Lab,” the consortium noted at its debut. “Collectively, these countries account for a significant segment of the ICT sector, with a combined total of 10 million ICT workers.”

The consortium also said its members recognize “the need to build an inclusive workforce with family-sustaining opportunities.” 

That’s good news, right? I hope it’s also a recognition, as AI economists have noted, that it will be faster, and likely more cost-effective, to retrain and upskill today’s employees than to continue the current tack of firing tons of people because of a shift to AI — and then waiting years for our educational systems to deliver the AI-skilled workers companies say they’ll need to achieve the productivity and profit gains they’re betting gen AI will deliver. (The exception: today’s AI engineers, who are being wooed away from rivals. Cases in point: Microsoft last month hiring the co-founder of DeepMind, Mustafa Suleyman, and a bunch of talent from his startup, Inflection AI, and Tesla CEO Elon Musk calling out the “craziest talent war” as the reason he’s going to pay his AI engineers more.)

At the end of the day, many people hope that tech companies, which say the disruption brought by new AI tech will make life better, will be part of the effort to ensure that disruption doesn't leave their employees behind.

“We have, as a society, been through technological advances before. And they have all promised a utopian life without drudgery. And the reality is they come for our jobs,” comedian Jon Stewart said during a Daily Show segment about “the false promises of AI.” 

After playing clips of notable tech executives extolling the benefits of AI, including venture capitalist Marc Andreessen, Meta CEO Mark Zuckerberg, OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella and Google CEO Sundar Pichai, Stewart called for their assurances that AI won't remove "the human from the loop." His ask came after he also showed clips of CEOs talking about the need for more productivity and how AI is really all about "labor replacement tools" and how the tech will cut the cost of doing business by reducing the "people tax" — that is, the cost of paying human workers.

For Stewart, promises of new types of jobs — "prompt engineer" is a fancy way of saying "types-question guy," he said — won't be enough to stem the great job disruption already happening. "I've been thinking about this all wrong," Stewart said mockingly, after clips of some CEOs talking about how AI will free up time for all of us to pursue other creative passions. "It's not joblessness. It's self-actualizing me time."


Will a consortium of nine private-sector companies make a difference in the gen AI job displacement narrative? Time will tell. But we can start by watching to see if those companies, and others, deliver on their upskilling promises. So far, those promises include: 

  • Cisco training 25 million people with cybersecurity and digital skills by 2032.
  • IBM ensuring 30 million people have digital skills by 2030, including 2 million in AI.
  • Intel empowering over 30 million people with AI skills for current and future jobs by 2030.
  • Microsoft training and certifying 10 million people from underserved communities with in-demand digital skills for jobs and livelihood opportunities in the digital economy by 2025.
  • SAP upskilling 2 million people worldwide by 2025.
  • Google spending 25 million euros to support AI training and skills for people across Europe.

Here are the other doings in AI worth your attention:

Apple says its new AI model outperforms OpenAI’s GPT-4

Apple has already given us a strong hint that it will showcase new AI advancements at its Worldwide Developers Conference in June, with marketing chief Greg Joswiak alluding to AI in his post on X that WWDC 2024 will be “Absolutely Incredible.”

The capital A and I in that phrase are the hint about what he's talking about. But you might not need hints like that in the future, based on work Apple's researchers have shared about their new AI model, called ReALM. Their 15-page paper posted March 29 describes an approach to how a Large Language Model, or LLM, can provide context for every element on a screen — including text, images and "forms of entities on screen that are not traditionally conducive to being reduced to a text-only modality."

“We propose reconstructing the screen using parsed entities and their locations to generate a purely textual representation of the screen that is visually representative of the screen content,” Apple’s researchers wrote. “The parts of the screen that are entities are then tagged, so that the LM has context around where entities appear, and what the text surrounding them is (Eg: call the business number). To the best of our knowledge, this is the first work using a Large Language Model that aims to encode context from a screen.”
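To make the idea concrete, here's a minimal sketch (not Apple's code; the entity fields and tagging format are assumptions for illustration) of what the paper describes: rendering parsed on-screen entities, ordered by their locations, into a purely textual representation with entities tagged so a language model knows where they appear.

```python
# Hypothetical sketch of ReALM-style screen-to-text reconstruction.
# Each parsed element carries text, a screen position, and an entity flag.

def screen_to_text(elements):
    """Render elements in reading order (top-to-bottom, left-to-right),
    tagging entities so a language model gets positional context."""
    ordered = sorted(elements, key=lambda e: (e["y"], e["x"]))
    lines = []
    for i, e in enumerate(ordered):
        if e.get("is_entity"):
            # Tag the entity so the model can refer to it (e.g. "call the number").
            lines.append(f"[entity {i}: {e['type']}] {e['text']}")
        else:
            lines.append(e["text"])
    return "\n".join(lines)

# Example: a pharmacy listing with a tappable phone number.
screen = [
    {"text": "Joe's Pharmacy", "x": 10, "y": 5, "is_entity": False},
    {"text": "(415) 555-0100", "x": 10, "y": 20, "is_entity": True, "type": "phone"},
]
print(screen_to_text(screen))
```

A query like "call the bottom listed number" can then be resolved against the tagged text alone, with no image input needed.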

Apple’s researchers also boast that their model outperforms GPT-3.5 and GPT-4, the LLMs designed by OpenAI, maker of ChatGPT. “We also benchmark against GPT-3.5 and GPT-4, with our smallest model achieving performance comparable to that of GPT-4, and our larger models substantially outperforming it,” they wrote.

What does this all mean, besides that the battle for AI dominance among tech giants continues to ramp up? Reading the research paper, it’s obvious Apple is looking to give an AI boost to its Siri voice assistant. “Enabling the user to issue queries about what they see on their screen is a crucial step in ensuring a true hands-free experience in voice assistants,” they wrote. 

"The hope is that ReALM could improve Siri's ability to understand context in a conversation, process onscreen content, and detect background activities," CNET sister site ZDNET reports.

How? “For example,” adds Analytics India Magazine, “a user asks about nearby pharmacies, which can be done by Siri, leading to a list being presented. Later, the user asks to call the bottom listed number (present on-screen). Siri would not perform this particular task. However, with ReALM, the language model can comprehend the context by analysing on-device data and fulfilling the query.”

WWDC starts June 10.

Meta plans to label more AI imagery to help separate fact from fakery

Based on recommendations from its Oversight Board on how to handle AI-manipulated images and video on its sites, and after the board called Meta’s labeling standards “incoherent,” Meta said that starting in May, it “will begin labeling a wider range of video, audio and image content as ‘Made with AI’ when we detect industry standard AI image indicators or when people disclose that they’re uploading AI-generated content.” 

That labeling will affect content posted on Facebook, Instagram and Threads, the company said in an April 5 blog post. The new labeling guidelines are intended to balance concerns about misleading content and about restricting freedom of speech on its popular platforms. 

“We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say,” wrote Meta’s head of content policy, Monika Bickert. 

“Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos. In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving,” she added. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.”

The new labeling push comes after Meta said in February that it would work with other companies to set technical standards on how to identify AI-generated content, including audio and video. The new labels mean that content intended to mislead or deceive can remain on Meta's sites — unless it violates "our policies against voter interference, bullying and harassment, violence and incitement, or any other policy," Bickert added — but that content will now be tagged with stronger disclosures.

“If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” Bickert wrote. “This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.”

The bottom line: It will still be up to you, the recipient of all this information, to read the labels in the hopes that the added context in those disclosures will prevent you from being fooled by AI-generated misinformation and deepfakes. Good luck with that, fellow humans.

Worth a watch: Nova offers its take on the ‘A.I. Revolution’

In May 1951, noted British mathematician Alan Turing gave a short lecture on BBC Radio, talking about the potential of computers. “I think it is probable,” he said, “that at the end of the century, it will be possible to program a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine.”

With gen AI, we’re pretty much there. And that’s why Nova, PBS’ award-winning science series, refers to the origins of the Turing Test in a new one-hour film that asks the question: “Can we harness the power of artificial intelligence to solve some of the world’s most challenging problems without creating an uncontrollable force that ultimately destroys us?”

Featuring interviews with researchers and CEOs working on the front lines of AI, including DeepMind co-founder Mustafa Suleyman and AI pioneer Yoshua Bengio, who's now focused on the threat that AI poses to humankind, "A.I. Revolution" traces the history of the technology and how we've gotten to the current state of gen AI.

PBS correspondent Miles O’Brien looks at the opportunities AI creates — taking it away, he says, from the Terminator narrative — to show that artificial intelligence is already helping solve complex problems, from identifying breast cancer earlier than humans can and helping formulate new drugs to detecting wildfires before they rage out of control and predicting how they might spread.

“AI is a tool for helping us to understand the world around us, predict what’s likely to happen and then invent solutions that help improve efficiency and help improve the world around us,” says Suleyman, who just joined Microsoft to lead its new AI division.  

Professors at UC Berkeley also show how easy it is to create a deepfake video of O’Brien as the Terminator with just a short snippet of his voice and a photo of his face.

I thought the show was worthwhile, giving viewers a good enough overview to be part of the conversation about the opportunities and challenges AI represents. We need more people to be up to speed on AI tools so we can, as Nova co-executive producer Julia Cort says, “make informed decisions about the best path forward.”  

As to the question about whether we can use AI for good “without creating an uncontrollable force that ultimately destroys us,” I’d say the jury is still out on that one — though I’m rooting for humanity to figure it out.

“A.I. Revolution” is available to watch at pbs.org/nova, on Nova’s YouTube channel, and on the PBS app.  

State Department testing AI to help identify career paths

With all the talk about how gen AI will lead to the end of some jobs, the reinvention of others, and the need to upskill almost every worker, I found some interesting news about how, according to the Federal News Network, the US State Department will use the tech to help “employees chart the next step in their careers.”

Don Bauer, chief technical officer of the State Department’s Bureau of Global Talent Management, described how his organization is using the technology to work with sensitive internal data, including employees’ resumes and information about the training courses they’ve taken.

“We have a demonstration project to extract skills from resumes and start building out pipelines for civil servants, as far as career progression,” Bauer said during a panel discussion moderated by the Federal News Network. “If I identify a career path for you, then I’m using publicly available position descriptions, extracting those out, and then building up the ability for you to recognize skills you need. Then we’re going to tie that with our learning management system, so we can actually say, ‘If you want to be this person, here’s the skills you need and here’s how you can go get trained.'”

Bauer also said gen AI is being used to help ferret out “bad data” across systems, including those that track retirement benefits. “There’s so many different legal authorities and combinations of information that human beings could probably do it, but we’re really honing in on the ability to actually start looking at that as a data cleanup exercise. … Everything’s decision data driven now, so you have to have good data.”

Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.

