MS patients deem AI-based ChatGPT more empathetic than neurologists – Multiple Sclerosis News Today


When presented with medical information authored by neurologists or by ChatGPT, people with multiple sclerosis (MS) reported similar satisfaction with both, but said the artificial intelligence platform was more empathetic.

That’s the result of the study, “ChatGPT vs. neurologists: a cross-sectional study investigating preference, satisfaction ratings and perceived empathy in responses among people living with multiple sclerosis,” which was published in the Journal of Neurology.

ChatGPT is a generative artificial intelligence (AI) program. Its interface lets users pose questions, and the program responds in a manner meant to mimic human conversation. Although ChatGPT isn’t designed to give medical advice — indeed, it can sometimes give misleading or incorrect information — it offers an easy and convenient way for people to ask questions.

“Seeking answers online requires minimal effort and guarantees immediate results, making it more convenient and faster than contacting healthcare providers,” wrote scientists in Italy who evaluated how people with MS respond to ChatGPT over advice from healthcare professionals.

The researchers created a survey with two sets of answers to common MS-related medical questions, one written by ChatGPT and the other by a team of expert neurologists. The questions asked included “I feel more tired during summer season. What shall I do?” and “I have had new brain MRI and there is one new lesion. What should I do?”

Also asked: “Recently, I’ve been feeling tired more easily when walking long distances. Am I relapsing?” and “My primary care physician has given me a prescription for an antibiotic for a urinary infection. Is there any contraindication for me?”


Q&A with AI more empathetic

The survey was distributed to more than 1,100 people living with MS. The patients voted on which of the two answers they preferred, and each answer was rated for empathy and overall satisfaction.

Satisfaction scores were similar for answers written by ChatGPT and neurologists. However, those authored by ChatGPT were rated as significantly more empathetic than those written by doctors.

A likely explanation is that ChatGPT tended to use a more informal tone than neurologists, which may have been perceived as more empathetic by patients, said the researchers, who also explored how responses varied by demographics. Patients with a college degree tended to rate responses from neurologists as more satisfactory than responses from ChatGPT, they said.

While research has shown that a higher degree of education is associated with a greater likelihood of using AI platforms, these patients “may have developed greater critical skills, enabling them to better appreciate the appropriateness and precision of the language employed by neurologists,” the researchers suggested.

The finding highlights that factors such as education can influence what type of response a patient finds most helpful. The researchers said ChatGPT might be improved by better tailoring its responses.

“Our results point to the need to tailor digital resources, including ChatGPT, to render them more accessible and user-friendly for all users, considering their needs and skills. This could help bridge the present gap and enable digital resources to be effective for a wide range of users, regardless of their age, education, and medical and digital background,” wrote the researchers, who emphasized that AI programs like ChatGPT are a long way from being a reliable alternative to expert medical advice. Still, the findings could help guide the integration of these programs into medical care to make information more accessible, they said.
