Organizations across industries are leveraging Microsoft Azure OpenAI Service and Copilot services and capabilities to drive growth, increase productivity, and create value-added experiences. From advancing medical breakthroughs to streamlining manufacturing operations, our customers trust that their data is protected by robust privacy and data governance practices. As they continue to expand their use of our AI solutions, customers can be confident that their data is safeguarded by industry-leading privacy practices in the most trusted cloud on the market today.
At Microsoft, we have a long-standing practice of protecting our customers’ information. Our approach to Responsible AI is built on a foundation of privacy, and we remain dedicated to upholding core values of privacy, security, and safety in all our generative AI products and solutions.
Microsoft’s existing privacy commitments extend to our commercial AI products
Commercial and public sector customers can rest assured that the privacy commitments they have long relied on for our enterprise cloud products also apply to our enterprise generative AI solutions, including Azure OpenAI Service and our Copilots.
- You are in control of your organization’s data. Your data is not used in undisclosed ways or without your permission. You may choose to customize your use of Azure OpenAI Service by opting to use your data to fine-tune models for your organization’s own use (see the sketch after this list). If you do use your organization’s data to fine-tune, any fine-tuned AI solutions created with your data will be available only to you.
- Your access control and enterprise policies are maintained. To protect privacy within your organization when using enterprise products with generative AI capabilities, your existing permissions and access controls continue to apply, ensuring that your organization’s data is displayed only to users you have granted the appropriate permissions.
- Your organization’s data is not shared. Microsoft does not share your data with third parties without your permission. Your data, including the data generated through your organization’s use of Azure OpenAI Service or Copilots – such as prompts and responses – is kept private and is not disclosed to third parties.
- Your organization’s data privacy and security are protected by design. Security and privacy are incorporated through all phases of design and implementation of Azure OpenAI Service and Copilots. As with all our products, we provide a strong privacy and security baseline and make available additional protections that you can choose to enable. As external threats evolve, we will continue to advance our solutions and offerings to ensure world-class privacy and security in Azure OpenAI Service and Copilots, and we will continue to be transparent about our approach.
- Your organization’s data is not used to train foundation models. Microsoft’s generative AI solutions, including Azure OpenAI Service and Copilot services and capabilities, do not use your organization’s data to train foundation models without your permission. Your data is not available to OpenAI or used to train OpenAI models.
- Our products and solutions continue to comply with global data protection regulations. The Microsoft AI products and solutions you deploy continue to be compliant with today’s global data protection and privacy regulations. As we continue to navigate the future of AI together, including the implementation of the EU AI Act and other laws globally, organizations can be certain that Microsoft will be transparent about our privacy, safety, and security practices. We will comply with laws globally that govern AI, and back up our promises with clear contractual commitments.
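To make the fine-tuning option mentioned above concrete, here is a minimal sketch of starting a fine-tuning job against an Azure OpenAI resource using the openai Python SDK. The endpoint, API version, file name, and base-model name are placeholders, and the models and API versions available to you depend on your resource and region.

```python
import os
from openai import AzureOpenAI  # openai Python SDK v1+

# Placeholder endpoint and API version; substitute your own resource values.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Upload your organization's training examples (a JSONL file of chat examples).
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job. The customized model is created inside your own
# Azure OpenAI resource and is available only to your organization.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo",  # example base model; availability varies by region
)
print(job.id, job.status)
```

The point that matters for privacy is in the final comment: the training file and the resulting customized model live in your own Azure OpenAI resource, consistent with the commitments above.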
You can find additional details about how Microsoft’s privacy commitments apply to Azure OpenAI and Copilots here.
We provide programs, transparency documentation, and tools to assist your AI deployment
To support our customers and empower their use of AI, Microsoft offers a range of solutions, tooling, and resources to assist in their AI deployment, from comprehensive transparency documentation to a suite of tools for data governance, risk, and compliance. Dedicated programs such as our industry-leading AI Assurance Program and Customer Copyright Commitment further broaden the support we offer commercial customers in addressing their needs.
Microsoft’s AI Assurance Program helps customers ensure that the AI applications they deploy on our platforms meet the legal and regulatory requirements for responsible AI. The program includes support for regulatory engagement and advocacy, risk framework implementation, and the creation of a customer council.
For decades, we’ve defended our customers against intellectual property claims relating to our products. Building on our previous AI customer commitments, Microsoft announced our Customer Copyright Commitment, which extends our intellectual property indemnity support to both our commercial Copilot services and our Azure OpenAI Service. Now, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or Azure OpenAI Service, or for the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer has used the guardrails and content filters we have built into our products.
Our comprehensive transparency documentation for Azure OpenAI Service and Copilot, together with the customer tools we provide, helps organizations understand how our AI products work and gives them choices they can use to influence system performance and behavior.
Azure’s enterprise-grade protections provide a strong foundation upon which customers can build their data privacy, security, and compliance systems to confidently scale AI while managing risk and ensuring compliance. With a range of solutions in the Microsoft Purview family of products, organizations can further discover, protect, and govern their data when using Copilot for Microsoft 365 within their organizations.
With Microsoft Purview, customers can discover risks associated with data and users, such as which prompts include sensitive data. They can protect that sensitive data with sensitivity labels and classifications, which means Copilot will only summarize content for users who have the right permissions to that content. And when sensitive data is included in a Copilot prompt, the Copilot-generated output automatically inherits the label from the reference file. Similarly, if a user asks Copilot to create new content based on a labeled document, the Copilot-generated output automatically inherits the sensitivity label along with all its protections, such as data loss prevention policies.
A Copilot conversation inherits the sensitivity label of the content it references.
Finally, our customers can govern their Copilot usage to comply with regulatory and code-of-conduct policies through audit logging, eDiscovery, data lifecycle management, and machine-learning-based detection of policy violations.
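As one illustration of the audit-logging piece of that governance story, the sketch below pulls recent audit content from the Office 365 Management Activity API and filters for Copilot interaction events. It assumes an app registration with the ActivityFeed.Read permission, an already started Audit.General subscription, and environment variables for the tenant and app credentials; the "CopilotInteraction" operation name is an assumption to verify against your tenant's audit schema.

```python
import os
import requests
import msal  # pip install msal requests

TENANT = os.environ["TENANT_ID"]  # Microsoft Entra tenant ID (assumed env var)
BASE = f"https://manage.office.com/api/v1.0/{TENANT}/activity/feed"

# App-only token for the Office 365 Management Activity API.
app = msal.ConfidentialClientApplication(
    os.environ["CLIENT_ID"],
    authority=f"https://login.microsoftonline.com/{TENANT}",
    client_credential=os.environ["CLIENT_SECRET"],
)
token = app.acquire_token_for_client(scopes=["https://manage.office.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# List available audit content blobs (assumes the Audit.General subscription
# has already been started for this tenant).
blobs = requests.get(
    f"{BASE}/subscriptions/content",
    params={"contentType": "Audit.General"},
    headers=headers,
).json()

# Download each blob and keep Copilot interaction events.
# "CopilotInteraction" as the operation name is an assumption to verify.
for blob in blobs:
    for record in requests.get(blob["contentUri"], headers=headers).json():
        if record.get("Operation") == "CopilotInteraction":
            print(record.get("CreationTime"), record.get("UserId"))
```

In practice, most organizations review these events through Microsoft Purview Audit and eDiscovery rather than the raw API, but the sketch shows that Copilot activity surfaces through the same audit pipeline used for other Microsoft 365 workloads.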
As we continue to innovate and provide new kinds of AI solutions, Microsoft will continue to offer industry-leading tools, transparency resources, and support for our customers in their AI journey, and remain steadfast in protecting our customers’ data.