White House outlines new rules for AI use in federal agencies

The Biden Administration on Thursday announced new government-wide policies from the White House Office of Management and Budget governing the use of artificial intelligence at federal agencies, including many focused on healthcare.

WHY IT MATTERS
The aim of the new policies, which build on President Biden’s sweeping executive order from October, is to “mitigate risks of artificial intelligence and harness its benefits,” the White House said in a fact sheet.

By December 1, 2024, OMB says, federal agencies will be required to have concrete safeguards in place whenever they use AI in a way that “could impact Americans’ rights or safety.”

Such safeguards include a wide array of “mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI.”

If an agency can’t demonstrate that those safeguards are in place, it “must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations,” according to the White House.

The new rules put a focus on AI governance and algorithm transparency – and seek to find a way forward for innovation that capitalizes on the technology’s benefits while protecting against its potential harms.

For instance, the OMB policy requires all federal agencies to designate Chief AI Officers to coordinate the use of AI across their organizations.

They must also stand up AI Governance Boards to govern that use within their own agencies. (The Departments of Defense, Veterans Affairs and others have already done so.)

The policies also require federal agencies to improve public transparency in their use of AI – mandating that they:

  • Release expanded annual inventories of their AI use cases, including identifying use cases that impact rights or safety and how the agency is addressing the relevant risks.

  • Report metrics about the agency’s AI use cases that are withheld from the public inventory because of their sensitivity.

  • Notify the public of any AI exempted by a waiver from complying with any element of the OMB policy, along with the justification for the exemption.

  • Release government-owned AI code, models, and data, where such releases do not pose a risk to the public or government operations.

The White House says the OMB rules, rather than being prohibitive, are meant to foster safe and responsible innovation and “remove unnecessary barriers” to it.

The new fact sheet, for example, cites AI’s potential to advance public health – noting that the Centers for Disease Control and Prevention is using AI to predict the spread of disease and detect the illicit use of opioids, while the Centers for Medicare & Medicaid Services is using the technology to reduce waste and identify anomalies in drug costs.

The policies also seek to bolster the AI workforce through initiatives such as a National AI Talent Surge, which aims to hire 100 AI professionals by this summer to promote the safe use of AI across the government, and an additional $5 million to expand a government-wide AI training program that drew 7,500 participants from 85 federal agencies in 2023.

THE LARGER TREND
In October 2023, the White House issued President Biden’s landmark executive order on AI, a sprawling and many-faceted document that outlined ways to prioritize development of technology that is “safe, secure and trustworthy.”

Among its many provisions, the EO called for the U.S. Department of Health and Human Services to develop and implement a mechanism to collect reports of “harms or unsafe healthcare practices” – and act to remedy them, wherever possible.

ON THE RECORD
“All leaders from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its full benefit,” said Vice President Kamala Harris on a press call about the new OMB rules on Thursday.

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” said the Vice President, who offered an example: “If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.”

The American people, she added, “have a right to know when and how their government is using AI, and that it is being used in a responsible way. And we want to do it in a way that holds leaders accountable for the responsible use of AI.”
