ChatGPT – a conversational generative artificial intelligence (AI) released to the public in November 2022 – has continued to make waves, particularly when it comes to healthcare. One study showed it has a better bedside manner than real doctors, and ChatGPT has even passed the United States Medical Licensing Exam (USMLE). Now, research is showing that ChatGPT could prove useful for patient education. Here’s a look at what the study revealed and the benefits and risks of using ChatGPT for patient education.
ChatGPT and Patient Education Study
Researchers tasked ChatGPT with answering common patient questions about colonoscopies, a routine procedure used to screen for certain cancers and various gastrointestinal conditions.
While information about colonoscopies is widely available, researchers wanted to know if ChatGPT could offer accurate and accessible insights to patients. They identified eight common questions about the procedure and provided them to ChatGPT as prompts. Then, they assessed the AI-generated answers, rating them based on scientific adequacy, ease of understanding, and overall satisfaction with the response.
Overall, ChatGPT's answers were found to be of similar quality to those posted on the sample hospital websites used for comparison. Additionally, the scoring physicians rated ChatGPT's responses slightly higher, on average, for ease of understanding.
While the researchers felt it was too early to claim that ChatGPT is a suitable option for patient education, the study shows the technology’s potential. It could make information more accessible, particularly with provider shortages plaguing the medical industry and the increasing likelihood of turning to digital resources to find information.
Benefits and Risks of Using ChatGPT for Patient Education
Generally, the most significant benefit of using ChatGPT for patient education is the speed at which it can share information. ChatGPT is functionally a real-time conversational model, so not only can it present information in a way that's often easy to understand, it can also clarify details on request in real time. That gives patients additional clarity when a point is complex or outside their knowledge base.
However, there are risks associated with using ChatGPT. OpenAI – the company behind ChatGPT – openly states on the ChatGPT main page that it can't guarantee the accuracy of responses generated by the technology.
Primarily, that’s because ChatGPT was trained using large amounts of data from the internet. Not all resources are correct, and some may include blatant misinformation or errors. However, ChatGPT can’t perfectly discern accuracy when considering those data sources. As a result, it may share details that aren’t correct and may even provide dangerous guidance.
How likely ChatGPT is to respond with inaccurate information may vary depending on how much accurate information is available on a particular topic. If numerous resources in its training data make similar claims related to a question, ChatGPT may favor those points when generating text over lesser-used claims. When less information is available about a topic, ChatGPT may struggle to identify which details reflect a broad consensus within the medical community, making inaccurate statements more likely.
However, precisely how ChatGPT decides what to include in responses isn't clear, and researchers have found that ChatGPT can provide incorrect information or, for lack of a better term, essentially lie – a behavior often called "hallucination." Since inaccuracies in health-related content can prove detrimental to patients, many feel it's too early to rely on ChatGPT as a primary resource for patient education. Still, its potential is undeniable, and as ways to improve accuracy continue to evolve, that may change over time.
Improving Patient Education Through Language Services
ChatGPT can technically prove useful for patient education, including when communicating with a diverse population that relies on many languages. However, depending on ChatGPT for translations and information sharing comes with risks. While ChatGPT has reasonable translation accuracy for widely used languages with a strong online presence, it's not perfect. Additionally, its error rates are higher for lesser-used languages, making inaccuracies that could harm patients more likely.
Patient education requires reliable, accurate communication. By partnering with a leading language services provider, you achieve better results. Acutrans can provide professional certified document translations in 24 hours. Plus, our team can assist with general translation and post-machine translation services, and offers specialized solutions designed explicitly for the healthcare sector. Acutrans also has over-the-phone, video remote, and on-site interpretation programs covering over 200 languages, including an industry-specific program for the medical sector. Contact us for a free quote today.