7 takeaways from a year of building generative AI responsibly and at scale


While AI can already do a lot to make life easier, it’s far from perfect. It’s a good practice for users to verify information they receive from AI-enabled systems, which is why Microsoft provides links to cited sources at the end of any chat-produced output.

Since 2019, Microsoft has been releasing “transparency notes” that give customers of the company’s platform services detailed information about capabilities, limitations, intended uses and guidance for responsibly integrating and using AI. The company also includes user-friendly notices in consumer-facing products such as Copilot, disclosing topics like risk identification and the potential for AI to make errors or generate unexpected content, and reminding people that they are interacting with AI.

As generative AI technology and its uses continue to expand, it will be critical to keep strengthening systems, adapting to new regulation, updating processes and striving to build AI systems that deliver the experiences people want.

“We need to be really humble in saying we don’t know how people will use this new technology, so we need to hear from them,” says Sarah Bird. “We have to keep innovating, learning and listening.”
