Head‐to‐Head Comparison of ChatGPT Versus Google Search for Medical Knowledge Acquisition
Noel F. Ayoub, Yu‐Jin Lee, David Grimm, Vasu Divi
Otorhinolaryngology, Surgery
Abstract
Objective
Chat Generative Pre-trained Transformer (ChatGPT) is the newest iteration of OpenAI's generative artificial intelligence (AI), with the potential to influence many facets of life, including health care. This study sought to assess ChatGPT's capabilities as a source of medical knowledge, using Google Search as a comparator.
Study Design
Cross‐sectional analysis.
Setting
Online using ChatGPT, Google Search, and Clinical Practice Guidelines (CPG).
Methods
CPG Plain Language Summaries for 6 conditions were obtained. Questions relevant to specific conditions were developed and input into ChatGPT and Google Search. All questions were written from the patient perspective and sought (1) general medical knowledge or (2) medical recommendations, with varying levels of acuity (urgent or emergent vs routine clinical scenarios). Two blinded reviewers scored all passages and compared results from ChatGPT and Google Search, using the Patient Education Material Assessment Tool (PEMAT‐P) as the primary outcome. Additional customized questions were developed that assessed the medical content of the passages.
Results
The overall average PEMAT‐P score for medical advice was 68.2% (standard deviation [SD]: 4.4) for ChatGPT and 89.4% (SD: 5.9) for Google Search (p < .001). There was a statistically significant difference in PEMAT‐P scores by source (p < .001) but not by urgency of the clinical situation (p = .613). For patient education questions, ChatGPT scored significantly higher than Google Search (87% vs 78%, p = .012).
Conclusion
ChatGPT fared better than Google Search when offering general medical knowledge, but it scored worse when providing medical recommendations. Health care providers should strive to understand the potential benefits and ramifications of generative AI to guide patients appropriately.