Microsoft Boosts Responsible AI Team From 350 to 400 Personnel – Yahoo Finance
(Bloomberg) — Microsoft Corp. expanded the team responsible for ensuring its artificial intelligence products are safe, boosting personnel from 350 to 400 last year.
Most Read from Bloomberg
More than half of the group focuses on the task full-time, the company said Wednesday in its first annual AI transparency report, which outlines measures to ensure its services are rolled out responsibly. The team’s additional members include new hires as well as existing employees.
Last year, Microsoft dissolved its Ethics and Society team amid broader layoffs across the technology sector that gutted trust and safety teams at various companies, including Meta Platforms Inc. and Alphabet Inc.’s Google.
Microsoft is keen to boost trust in its generative AI tools amid mounting concerns about their tendency to generate strange content. In February, the company investigated incidents involving its Copilot chatbot, whose responses ranged from weird to harmful.
The following month, a Microsoft software engineer sent letters to the board, lawmakers and the Federal Trade Commission warning that the tech giant wasn’t doing enough to safeguard its AI image generation tool, Copilot Designer, from creating abusive and violent content.
“At Microsoft, we recognize our role in shaping this technology,” the Redmond, Washington-based company said in the report.
Microsoft’s approach to deploying AI safely is based on a framework devised by the National Institute for Standards and Technology. The agency, which is part of the Department of Commerce, was tasked with creating standards for the emerging technology as part of an executive order issued last year by President Joe Biden.
In its inaugural report, Microsoft said it has rolled out 30 responsible AI tools, including ones that make it harder for people to trick AI chatbots into acting bizarrely. The company’s “prompt shields” are designed to detect and block deliberate attempts — also known as prompt injection attacks or jailbreaks — to make an AI model behave in an unintended way.
Most Read from Bloomberg Businessweek
©2024 Bloomberg L.P.
This post was originally published on 3rd party site mentioned in the title of the post.