Letter to the editor: "Chatbots for breast cancer education: a systematic review and meta-analysis".

This correspondence critically evaluates the meta-analysis by Lin et al. on chatbot-led breast cancer education, focusing on specific methodological weaknesses that may undermine its core findings.

We performed a rigorous appraisal of the original study's methodology, specifically scrutinizing their data synthesis process, the inclusion criteria for trials, and the conceptual clarity of the outcome measures used to gauge chatbot performance.

Our analysis revealed three primary concerns: (1) the reliance on only six studies raises concerns about the generalizability of the results; (2) a unit-of-analysis error occurred in which multiple intervention arms from a single study were incorrectly "double-counted" as independent data points; and (3) there was a clear conceptual conflation between "usability" and "educational efficacy," leading to ambiguous interpretations of the chatbots' actual impact.

Given these identified flaws, the evidence presented in the meta-analysis remains inconclusive and should be approached with skepticism. To advance the field, we urge the development of larger-scale randomized controlled trials and the consistent application of validated instruments, such as the System Usability Scale (SUS), to ensure more reliable data in digital health research.
Keywords: Cancer, Access, Care/Management, Advocacy, Education

Authors

Xie Xie, Wang Wang, Ning Ning