Generative AI vs web search for patient education: a comparative evaluation of OSA information quality.
Patients with obstructive sleep apnea (OSA) frequently seek information online, yet the comparative quality of content delivered by web search engines versus generative AI systems is unclear. This study evaluated how different digital information sources perform in answering common patient questions about OSA.
Thirty high-volume, patient-facing OSA questions were identified using Google Trends. Each question was submitted verbatim to four general-purpose large language models (GPT-4, GPT-5, DeepSeek, Mistral), a medically specialized retrieval-augmented model (OpenEvidence), and Google Search. Seven otolaryngologists with clinical experience in OSA independently rated each response for accuracy, clarity, completeness, relevance, and usefulness using a five-point rubric. Composite and domain scores were analyzed using one-way analysis of variance with multiple-comparison correction; inter-rater reliability was assessed with two-way random-effects intraclass correlation coefficients.
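The statistical workflow described above (one-way ANOVA across systems with multiple-comparison correction, plus a two-way random-effects ICC for average-rater reliability) can be sketched as follows. This is an illustrative reconstruction on simulated scores, not the study's data or code; the system names and score distributions are assumptions loosely based on the reported means.

```python
# Hypothetical sketch of the analysis pipeline: one-way ANOVA across
# information sources, Bonferroni-corrected pairwise tests, and an
# ICC(2,k) for inter-rater reliability. All scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated composite scores (30 questions per system), centered near
# the reported means: OpenEvidence 4.33, general LLMs ~4.0, Google 3.15.
systems = {
    "OpenEvidence": rng.normal(4.33, 0.3, 30),
    "GPT-4":        rng.normal(4.02, 0.3, 30),
    "Google":       rng.normal(3.15, 0.3, 30),
}

# One-way ANOVA across systems.
f_stat, p_val = stats.f_oneway(*systems.values())

# Bonferroni-corrected pairwise t-tests (one common correction method).
names = list(systems)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
corrected = {
    (a, b): min(1.0, stats.ttest_ind(systems[a], systems[b]).pvalue * len(pairs))
    for a, b in pairs
}

def icc2k(x):
    """ICC(2,k): two-way random effects, absolute agreement, average of k raters."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects (items)
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    sse = ((x - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# 180 question-system items rated by 7 raters, simulated with a shared
# per-item effect so the raters agree more than chance.
item_effect = rng.normal(0, 0.6, (180, 1))
ratings = np.clip(3.8 + item_effect + rng.normal(0, 0.35, (180, 7)), 1, 5)
print(f"ANOVA p = {p_val:.2e}, ICC(2,k) = {icc2k(ratings):.2f}")
```

With between-system gaps of this size the omnibus ANOVA and the OpenEvidence-versus-Google contrast are strongly significant, and the simulated rating matrix yields an average-rater ICC in the "good" range, mirroring the pattern of results reported here.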
A total of 180 question-system pairs received 6295 domain-level ratings. OpenEvidence achieved the highest mean composite score (4.33), followed by a tightly clustered group of LLMs (means 4.00-4.04). Google Search scored significantly lower (3.15). Differences among systems were statistically significant across all domains (p < 0.001), with large effect sizes for comparisons of OpenEvidence and general LLMs versus Google. Composite average-rater reliability was good (ICC = 0.70).
For common OSA questions, generative AI systems, particularly a retrieval-augmented medical model, produced higher-quality patient-facing information than standard web search. These findings support cautious consideration of GenAI tools to supplement patient education in OSA, while underscoring the need for ongoing evaluation across diseases, disciplines, and patient populations.
Patients with obstructive sleep apnea (OSA) frequently rely on online sources such as Google Search to understand symptoms, testing, and treatment, yet the quality of patient-facing information varies widely. As generative artificial intelligence tools are increasingly used for health questions, their comparative performance for OSA education has not been systematically evaluated using blinded expert review.
In this blinded comparative study, generative AI systems, particularly a retrieval-augmented medical model, provided more accurate, clear, complete, and useful answers to common OSA questions than standard web search. These findings highlight that the choice of digital information source can meaningfully influence the quality of patient education in sleep medicine and support further evaluation of AI tools within clinical practice.
Authors
Hack, Locatello, Boldes, Biadsee, Dall'Agnese, Cannavicci, Kaminer, Gouveia, Roxbury, Karni, Hayu, Zalzal, Ahmed, Takashima