TRUST: A Large Language Model-Based Dialogue System for Trauma Understanding and Structured Assessments.

While large language models (LLMs) have been widely used to assist clinicians and support patients, no existing work has explored dialogue systems for standard diagnostic interviews and assessments. This study aims to bridge the gap in mental healthcare accessibility by developing an LLM-powered dialogue system that replicates clinician behavior.

We introduce TRUST, a framework of cooperative LLM modules capable of conducting formal diagnostic interviews and assessments for post-traumatic stress disorder (PTSD) following the Clinician-Administered PTSD Scale for DSM-5 (CAPS-5). To guide the generation of appropriate clinical responses, we propose a Dialogue Acts schema specifically designed for clinical interviews. Additionally, we develop a patient simulation approach based on real-life interview transcripts to replace time-consuming and costly manual testing by clinicians.
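To make the idea of a Dialogue Acts schema concrete, the following is a minimal illustrative sketch, not the actual TRUST schema: the act labels, the policy function, and its inputs are all hypothetical, since the abstract does not specify them.

```python
from enum import Enum

class DialogueAct(Enum):
    """Hypothetical clinician dialogue acts for a structured interview."""
    GREETING = "greeting"
    INFORMATION_GATHERING = "information_gathering"
    CLARIFICATION = "clarification"
    EMPATHY = "empathy"
    TRANSITION = "transition"

def select_act(patient_utterance: str, question_answered: bool) -> DialogueAct:
    """Toy policy: pick the next clinician act from the current turn state.

    A real system would condition an LLM on the schema; this rule-based
    stub only shows how a schema can constrain response generation.
    """
    if not patient_utterance.strip():
        # Empty or inaudible response: ask the patient to elaborate.
        return DialogueAct.CLARIFICATION
    if question_answered:
        # Current CAPS-5 item covered: move to the next item.
        return DialogueAct.TRANSITION
    return DialogueAct.INFORMATION_GATHERING

print(select_act("I have nightmares most nights.", question_answered=True).value)
```

Constraining generation to a fixed set of acts like this is one way a system could keep an LLM's responses within the bounds of a formal interview protocol.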

A comprehensive set of evaluation metrics is designed to assess the dialogue system from both the agent and patient simulation perspectives. Expert evaluations by conversation and clinical specialists show that TRUST performs comparably to real-life clinical interviews.

The system delivers clinical quality approaching that of human clinicians, with room for improvement in communication style and response appropriateness.

The TRUST framework thus shows promise for expanding access to mental healthcare.
Keywords: Mental Health; Access; Care/Management

Authors

Tu, Powers, Doogan, Choi