
UW Medicine researchers have found that algorithms are as good as trained human raters at identifying red flags in text messages from people with serious mental illness. The finding opens a promising line of research that could help address deficits in psychiatry training and care.

The findings were published at the end of September in the journal Psychiatric Services.

Text messaging is increasingly part of mental health care and assessment, but these remote psychiatric interventions may lack the emotional cues that therapists use to navigate face-to-face conversations with patients.

The research team, based in the Department of Psychiatry and Behavioral Sciences, used natural language processing for the first time to help detect and identify text messages displaying “cognitive distortions” that may slip past an undertrained or overworked clinician. The research may also help more patients find help.
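To make the idea concrete, here is a vastly simplified sketch of flagging candidate cognitive-distortion language in a message. This is not the paper's model: the study used natural language processing with trained raters' labels, whereas the keyword cues below are invented examples purely for illustration.

```python
# Toy illustration (NOT the study's method): flag messages whose wording
# matches simple cue words for two distortion categories. Real systems
# use trained classifiers rather than hand-written word lists.

DISTORTION_CUES = {
    "overgeneralizing": {"always", "never", "everyone", "nobody"},
    "catastrophizing": {"disaster", "ruined", "unbearable", "worst"},
}

def flag_distortions(message):
    """Return the set of distortion categories whose cue words appear."""
    words = set(message.lower().split())
    return {
        category
        for category, cues in DISTORTION_CUES.items()
        if words & cues
    }

flags = flag_distortions("Nobody ever listens, this is the worst")
print(sorted(flags))
```

A real pipeline would replace the word lists with a model trained on clinician-labeled messages, but the input/output shape (message in, distortion categories out) is the same.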

“When we meet people in person, we have different contexts,” said Justin Tauscher, lead author of the paper and an acting assistant professor at the University of Washington School of Medicine. “We have visual cues, we have auditory cues, things that don’t come out in a text message. These are things we have been trained to rely on. The hope is that technology can provide clinicians with an additional tool to augment the information they rely on to make clinical decisions.”

The study examined thousands of unique, unprompted text messages exchanged between 39 people with serious mental illness and a history of hospitalization and their mental health providers. Human raters scored the texts for multiple cognitive distortions, as is customary in treating patients. Assessors look for subtle or overt language indicating that the patient is overgeneralizing, catastrophizing, or jumping to conclusions, all of which can be clues to problems.

The researchers also programmed computers to perform the same text evaluation task and found that humans and artificial intelligence rated the same in most of the categories studied.
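Agreement between human and automated raters is typically quantified with a chance-corrected statistic such as Cohen's kappa. The paper's actual statistics and categories are not reproduced here; the sketch below, with invented labels, just shows how such a comparison can be computed for one binary category.

```python
# Hypothetical sketch: chance-corrected agreement (Cohen's kappa)
# between a human rater and an automated rater on binary labels
# (1 = message flagged for a given distortion, 0 = not flagged).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

human = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]  # invented example labels
model = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
print(round(cohens_kappa(human, model), 2))  # → 0.78
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.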

“The ability to have systems that can help support clinical decision-making is, I think, extremely relevant and potentially important for those who work in this field, who sometimes lack access to training, sometimes lack access to supervision, and sometimes are just tired, overworked and burned out, which makes it hard to stay present in all the interactions they have,” said Tauscher, who came to the research after a decade in clinical settings.

Helping clinicians would be an immediate benefit, but the researchers also see future applications that would work in parallel with wearable fitness bands or a monitoring system on a phone. Dror Ben-Zeev, director of the UW Behavioral Research in Technology and Engineering Center and a co-author of the paper, said the technology could eventually provide real-time feedback that alerts the therapist to looming problems.

“The same way you get blood oxygen levels and a heart rate and other inputs,” Ben-Zeev said, “we can get a note that shows the patient is jumping to conclusions and catastrophizing. Just the ability to draw attention to a thought pattern is what we envision for the future. People will have these feedback loops with their technologies, where they get a sense of themselves.”


Additional information:
Justin C. Tauscher et al., Automated detection of cognitive distortions in text-based communication between clinicians and people with serious mental illness, Psychiatric Services (2022). DOI: 10.1176/

Citation: AI equals human in text message mental health trial (2022, October 11) Retrieved October 11, 2022, from -mental.html

This document is subject to copyright. Except in good faith for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.