Microsoft intros new responsible AI tools in Azure Studio – TechTarget

author
3 minutes, 8 seconds Read
Ads
Best Mobile Games Directory

Best Mobile Games Marketplace

Microsoft introduced new responsible AI tools in Azure AI Studio aimed at reducing many of the hesitations enterprises have around generative AI systems.

The company on Thursday introduced prompt shields, Groundedness Detection, system safety messages, safety evaluations, and risk and safety monitoring.

The new tools come as the tech giant, and its rival, Google, work to address challenges with their generative AI tools in recent months. For example, a Microsoft whistleblower wrote to the Federal Trade Commission detailing safety concerns about Microsoft Copilot Designer.

Meanwhile, Google turned off the image-generating feature of its Gemini large language model after it generated biased images of key historical figures.

Google also expanded its Search Generative Experience (SGE) by allowing it to answer user questions. However, there are reports that SGE’s responses are spammy and prompt malware and scams.

Addressing enterprise concerns

The new responsible AI tools address concerns of enterprises that are hesitant to use generative AI.

“One of the barriers to adoption of generative AI among enterprises today is trust, a lack of trust in these systems,” Forrester Research analyst Brandon Purcell said.

Many enterprises are concerned about hallucinations, where an LLM or AI tool generates incorrect information, and the fact that the tools are susceptible to IP leakage.

“Microsoft is… releasing products that are hopefully going to help generate trust in the market,” Purcell said.

For example, Prompt Shields detects and blocks prompt injection attacks. It is currently available in preview. Prompt injection is when a user with bad intentions tries to make the LLM do something it is not supposed to do, such as provide access to its training data or engage in hate speech or sexualized content.

Another tool, Groundedness Detection, helps detect hallucinations in model outputs. That tool is coming soon.

“Reducing hallucinations is probably one of the main seemingly unsolvable challenges in adopting generative AI for mission-critical business use cases,” Gartner analyst Jason Wong said.

Since most language models tend to hallucinate, a tool that reduces hallucinations will be critical to enterprises.

“Groundedness Detection should reduce the hallucination rate and give businesses confidence and trust that the system is working as it should,” Purcell said.

Responding to regulations

Microsoft’s new responsible AI tools also show how the vendor is responding to some of the new regulations coming out of both the European Union and the U.S., Northeastern University AI policy adviser Michael Bennett said.

Earlier this month, the EU approved the EU AI Act. The Act regulates AI systems that interact with humans in industries including education, employment and public systems.

Thus, having these responsible AI safeguards eases the minds of enterprises conducting business in the EU, Bennett said.

“These types of safeguards will probably put those larger companies at greater ease, not erase the concern altogether,” he said.

Enterprises will also feel comfortable using the systems in the U.S., where different state districts have introduced individual AI laws he added.

However, despite vendors’ safeguards, enterprises must perform their due diligence, Purcell said.

“No matter how many great features Microsoft or other companies roll out, a company that is using generative AI needs to have a stringent monitoring system in place to be able to detect when the model is not performing and leading to poor business outcomes,” he said.

Other responsible AI tools introduced by Microsoft include safety system messages, safety evaluations, and risk and safety monitoring.

Safety system messages steer the model’s behavior toward safe outputs is coming soon. Safety evaluations assess applications’ vulnerability to jailbreak attacks. It is available in preview. Risk and safety monitoring understand what model inputs, outputs, and end users trigger content filters. It is also available in preview.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.

This post was originally published on this site

Similar Posts