Have you ever asked ChatGPT to create a bio on yourself? Was it spot-on? Somewhat accurate? Total fiction? ChatGPT and similar programs, such as Google’s Med-PaLM 2 and Bard, and Microsoft’s Bing Chat, are generative artificial intelligence (AI) programs that use large language models (LLMs) to respond to prompts. Generative AI programs can produce text, images, and other media, such as music and videos; they learn by applying neural network machine learning techniques to their training data and then generate new content with similar characteristics. Along the same lines, Med-PaLM 2, an AI program specific to medical applications, uses Google’s LLM to answer health-related questions. It was the first LLM to perform at an “expert” level on professional medical board examination data sets (MedQA and MedMCQA).1
AI clearly has some useful applications in areas such as customer service and human resources support. But what if ChatGPT or Med-PaLM 2 were to provide healthcare advice to patients and providers? Would you be inclined to trust the information?
Tebra, a healthcare technology company, surveyed 1000 Americans and 500 healthcare professionals regarding the use of AI in healthcare, and the results may surprise you. Some key takeaways include the following2:
Although 25% of Americans reported that they would not visit a healthcare provider who refused to embrace AI technology, many still harbor important reservations about it. Their most common concern, cited by 53% of respondents, was that AI technology cannot fully replace the expertise and experience of well-trained healthcare providers.2 In addition, almost one-half (47%) reported concerns about the reliability of AI-generated information regarding diagnoses and treatment. Data privacy and security concerns were noted by 42% of Americans, and 33% worried that biased algorithms could result in unfair or discriminatory treatment.2
According to the survey results, almost 90% of healthcare providers are not currently using AI technologies, but many expressed the intention to do so in the future. When asked how they intend to use it, 52% cited data entry and 42% cited scheduling appointments, both fairly low-level tasks. However, 31% indicated that they would use AI for diagnosis and treatment purposes.
To assess healthcare professionals’ opinions of the quality of answers to common medical questions from ChatGPT, Bard, and Bing, the Tebra survey posed the same questions to each AI program and had medical professionals evaluate the answers. A total of 44% of respondents believed ChatGPT was the best of the 3 programs, followed by Bard (42%) and Bing (14%).
The healthcare professionals then assessed medical guidance provided by ChatGPT on self-examinations for various cancers and arthritis. After this exercise, 46% reported feeling more optimistic about the use of AI in healthcare.
The rapid rise and improvement of AI technology are remarkable and offer potential healthcare benefits, such as increased time efficiency, cost savings, and increased accessibility.2 Concerns linger, however, about the ability of this technology to reliably assume tasks that require the expertise and experience of human healthcare providers, and many Americans expressed a basic preference for human interaction.2
The complete results from Tebra’s survey can be found at: www.tebra.com/blog/research-perceptions-of-ai-in-healthcare.
Subscribe to receive the free, monthly TON print publication and TON weekly e‑newsletter.