Applicable Scenarios, Desired Features, and Risks of AI Psychotherapists in Depression Treatment From the Patient's Perspective: Exploratory Qualitative Study.
Depression is a pervasive global mental health issue, yet access to trained professionals remains severely limited. With the rapid advancement of artificial intelligence (AI), digital tools are increasingly seen as a viable way to address this shortage. However, questions remain about how digital platforms for mental health care can be effectively designed.
This study aimed to investigate, from an end user's (patient's) perspective, the potential use scenarios, desired features, and perceived risks of AI psychotherapists in depression treatment, providing design guidelines for their development.
A grounded theory approach was applied to analyze qualitative responses from 452 individuals recruited via Amazon Mechanical Turk. Data were collected through a scenario-based online survey on AI-assisted depression treatment administered between March 2023 and May 2023. Participants responded to 3 open-ended questions regarding the potential use of AI in treating depression, the characteristics expected from an AI psychotherapist, and the associated perceived risks, along with demographic, control, and contextual measures. The open-ended responses were inductively coded into themes, with intercoder reliability established (Cohen κ=0.80). In addition, variations in themes were further examined across participant profiles, including social stigma, current depression severity, trust in an AI psychotherapist, and privacy awareness.
Participants envisioned AI psychotherapists across 5 primary scenarios: diagnosis, treatment, consultation, self-management, and companionship. Key desired features included professionalism, warmth, precision care, empathy, remote service delivery, active listening, personalization, flexible treatment options, patience, trustworthiness, and serving as a basic treatment alternative, while critical concerns included diagnostic inaccuracy, treatment errors, privacy breaches, lack of human interaction, technical malfunctions, and lack of emotional engagement. Based on these findings, a general MoSCoW (must have, should have, could have, and won't have) prioritization framework was proposed as a conceptual starting point for future AI system design and empirical validation in mental health care. Notably, feature prioritization varied across user profiles: individuals with higher stigma placed greater emphasis on privacy protection, those with more severe depression prioritized precision care and timely access, low-trust users de-emphasized remote services, and privacy-sensitive individuals showed reduced preference for features requiring extensive data disclosure. These patterns highlight the need for context-sensitive design.
This study provides a patient-centered framework for designing AI psychotherapists and complements the existing literature by highlighting the importance of balancing clinical effectiveness with relational considerations. The findings offer actionable guidelines for designing AI mental health care tools that are aligned with user expectations and sensitive to individual differences.