Study shows AI tool’s high accuracy in answering genetic counseling questions – News-Medical.Net


An artificial intelligence tool correctly answered 83 percent of common genetic counseling questions, including those about genetic testing and genetic syndromes, a new study found.

Presented at the Society of Gynecologic Oncology’s Annual Meeting on Women’s Cancer on March 17, 2024, in San Diego, the study examined the capabilities of a form of artificial intelligence (AI) called generative AI. Such tools predict likely options for the next word in any sentence based on how billions of people used words in context on the internet. A side effect of this next-word prediction is that generative AI chatbots like ChatGPT can generate replies to questions in realistic language and produce clear summaries of complex texts.

Led by researchers at NYU Langone Health and its Perlmutter Cancer Center, the current study explored the application of ChatGPT to answering genetic counseling questions about syndromes related to gynecologic oncology.

“Our data suggest that this tool has the potential to answer common questions from patients to reduce anxiety and keep them informed. More data input from gynecologic oncologists is needed before the tool can help to educate patients on their cancers, and only as an assistant to human providers.”

Jharna M. Patel, MD, lead study investigator and gynecologic oncology fellow at NYU Langone

40 questions

For the study, the research team consulted with gynecologic oncologists to choose 40 questions often asked by patients and in line with professional society guidelines. They then entered the questions as prompts into ChatGPT version 3.5 and recorded its answers.

The authors then asked attending gynecologic oncologists to rate the chatbot’s answers on the following scale: 1) correct and comprehensive, 2) correct but not comprehensive, 3) some correct, some incorrect, and 4) completely incorrect. The proportion of responses earning each score was calculated overall and within each question category.

Specifically, ChatGPT was found to have provided correct and comprehensive answers to 33/40 (82.5 percent) questions, correct but not comprehensive answers to 6/40 (15 percent) questions, and partially incorrect answers to 1/40 (2.5 percent) questions. None of the answers were scored as completely incorrect.

The genetic counseling category of questions (e.g., how do you know if someone has an inherited or familial cancer syndrome?) had the highest proportion of AI answers rated both correct and comprehensive, with ChatGPT earning that rating on all 20 questions in the category. ChatGPT performed nearly as well on questions about specific genetic disorders. The gynecologic oncologists found, for instance, that 88.2 percent (15/17) of the answers were correct and comprehensive on testing for variants in the BRCA1 or BRCA2 genes, known drivers of cancer risk, while 66.7 percent (2/3) were correct and comprehensive on Lynch syndrome, the most common form of hereditary colorectal cancer.

“We think we can further improve on these results by continuing to train the AI tool on more data, and by learning to ask better sets of questions,” says senior study author Marina Stasenko, MD, an assistant professor in the Department of Obstetrics and Gynecology at NYU Grossman School of Medicine. “The goal is to deploy this in the clinic when ready, but only as an assistant to human providers.”

Along with Dr. Patel and Dr. Stasenko, study authors in the Department of Obstetrics and Gynecology’s Division of Gynecologic Oncology were Catherine Hermann, MD, and Whitfield B. Growdon, MD. Study author Emeline M. Aviki, MD, MBA, was from the Division of Gynecologic Oncology in the Department of Obstetrics and Gynecology at NYU Grossman Long Island School of Medicine.

