Oxford study highlights AI risks in patient care management

A study by the University of Oxford found that some care providers had been using generative artificial intelligence (AI) to create patient care plans.

The Guardian reported that Dr Caroline Green, an early career research fellow at the Oxford Institute for Ethics in AI, warned that carers might act on faulty or biased information and inadvertently cause harm. Dr Green added that care plans generated by AI might also be of substandard quality.

“If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model. That personal data could be generated and revealed to somebody else,” said Dr Green, as reported in the Guardian.

However, Dr Green said AI could help with ‘administrative heavy work’ and allow people to revisit care plans more often. 

“At the moment, I would not encourage anyone to do that, but there are organisations working on creating apps and websites to do exactly that,” added Dr Green.

AI Care Concerns Echo Roundtable Findings 

Dr Green’s concerns echoed the findings of last month’s ‘AI in adult social care’ roundtable, hosted at the University of Oxford and co-organised by Dr Green, the Digital Care Hub, and Casson Consulting.

On 1 February, thirty organisations and individuals in adult social care convened at the University of Oxford to deliberate on the advantages and drawbacks of employing generative AI in social care. Attendees included Skills for Care, the National Care Forum, and Care England.

The roundtable concurred that, without diligent oversight and transparency, AI risks could impact people’s human rights. The Oxford Institute for Ethics in AI found that AI has the potential to affect core issues such as safeguarding, data privacy, data security, equality, choice and control, and the quality of care.

“Social care and tech providers integrating AI chatbots into their services need to develop specialist skills and understanding of the current applications immediately to capture the positive approaches and minimise risk,” said the Oxford Institute for Ethics in AI.

The roundtable identified inherent risks in AI, such as biased outputs and unreliable information. Participants also noted risks associated with inappropriate or irresponsible use, including inputting personal data without consent and failing to ensure the safety and reliability of outputs.

Roundtable Advocates for Actionable Guidelines for Generative AI Deployment

The Oxford Institute for Ethics in AI stressed the need for the development of a shared, co-produced framework to underpin the responsible use of generative AI in adult social care.

The institute and roundtable participants agreed that responsible use of generative AI depends on prioritising human rights and fostering trusting relationships among those involved in care, including family members, social workers, and regulators.

The roundtable advocated for a thoughtful and values-driven approach to the integration of AI in social care, with a focus on improving care quality, respecting individual autonomy, and promoting well-being for both those receiving care and those providing it.

The Institute also stressed that these guidelines for responsible generative AI should include identifying current applications, understanding government regulations, and gathering insights from around the world. Its future engagement will involve stakeholders such as care recipients, workers, caregivers, tech firms, advocacy groups, and academics.

The Institute recognised generative AI is only one part of AI in social care and pledged to extend similar processes to other AI technologies within six months. It urged all social care stakeholders to initiate research, discussions, and knowledge sharing on various AI applications, including evaluating risks and benefits and determining training and resource needs.

In November, UK Prime Minister Rishi Sunak unveiled a £100 million ($121 million) investment to advance healthcare with AI.

The investment will be channelled through the AI Life Sciences Accelerator Mission, which will draw on the UK’s strengths in secure health data and AI. The funding will be directed to areas of the UK with the greatest clinical need, and new technologies will be tested and trialled over the next 18 months.
