Fostering Responsible AI in Health Care – Kaiser Permanente


The path to responsible AI

At Kaiser Permanente, AI tools must advance our core mission of delivering high-quality, affordable care for our members. This means that AI technologies must demonstrate a “return on health,” such as improved patient outcomes and experiences.

We evaluate AI tools for safety, effectiveness, accuracy, and equity. Kaiser Permanente is fortunate to have one of the most comprehensive datasets in the country, thanks to our diverse membership base and powerful electronic health record system. We can use this anonymized data to develop and test our AI tools before we ever deploy them for our patients, care providers, and communities.

We are careful to make sure that the AI tools we use support the delivery of equitable, evidence-based care for our members and communities. We do this by testing and validating the accuracy of AI tools across our diverse populations. We are also working to develop and deploy AI tools that can help us identify and proactively address the health and social needs of our members. This can lead to more equitable health outcomes.

Finally, once a new AI tool is implemented, we continuously monitor its outcomes to ensure it is working as intended. We stay vigilant because AI technology is rapidly advancing, and its applications are constantly changing.

Policymakers can help set guardrails

While Kaiser Permanente and other leading health care organizations work to advance responsible AI, policymakers have a role to play too. We encourage action in the following areas:

  • National AI oversight framework — An oversight framework should provide an overarching structure for guidelines, standards, and tools. It should be flexible and adaptable to keep pace with rapidly evolving technology. New breakthroughs in AI are occurring monthly.
  • Standards governing AI in health care — Policymakers should work with health care leaders to develop national, industry-specific standards to govern the use, development, and ethics of AI in health care. By working closely with health care leaders, policymakers can establish standards that are effective, useful, timely, and not overly prescriptive. This is important because standards that are too rigid can stifle innovation, which would limit the ability of patients and providers to experience the many benefits AI tools could help deliver. 

Guardrails: Progress so far

The National Academy of Medicine convened a steering committee to establish a Health Care AI Code of Conduct that draws from health care and technology experts, including Kaiser Permanente. This is a promising start to developing an oversight framework.

In addition, Kaiser Permanente appreciates the opportunity to be an inaugural member of the U.S. AI Safety Institute Consortium. The consortium is a multisector work group setting safety standards for the development and use of AI, with a commitment to protecting innovation.

Considerations for policymakers

As policymakers develop AI standards, we urge them to keep a few important points top of mind.

  • Lack of coordination creates confusion. Government bodies should coordinate at the federal and state levels to ensure AI standards are consistent and not duplicative or conflicting. 
  • Standards need to be adaptable. As health care organizations continue to explore new ways to improve patient care, they should work with regulators and policymakers to ensure standards can be adopted by organizations of all sizes and levels of sophistication and infrastructure. This will allow all patients to benefit from AI technologies while also being protected from potential harm.

AI has enormous potential to help make our nation’s health care system more robust, accessible, efficient, and equitable. At Kaiser Permanente, we’re excited about AI’s future, and are eager to work with policymakers and other health care leaders to ensure all patients can benefit.
