Machine learning improves detection of suicide risk among children

In a significant stride towards improving child suicide-risk detection, researchers have developed machine learning models that are markedly better at identifying children at risk of self-harm than the methods health systems typically rely on.

A new study from UCLA Health researchers finds that the typical ways health systems store and track data on children receiving emergency care miss a sizable portion of those who are having self-injurious thoughts or behaviours. 

The researchers also found that several machine learning models they designed were significantly better at identifying those children at risk of self-harm. 

Amid a nationwide youth mental health crisis, mental health providers are trying to improve their understanding of which children are at risk of suicide or self-harm so that providers can intervene earlier. 

However, health systems often do not have a full understanding of who is coming through their doors for self-injurious thoughts or behaviours, meaning that many risk-prediction models designed to flag children at future risk are based on incomplete data, limiting prediction accuracy. 

“Our ability to anticipate which children may have suicidal thoughts or behaviours in the future is not great – a key reason is our field jumped to prediction rather than pausing to figure out if we are actually systematically detecting everyone who is coming in for suicide-related care,” said Juliet Edgcomb, MD, PhD, the study’s lead author and Associate Director of UCLA’s Mental Health Informatics and Data Science (MINDS) Hub. 

“We sought to understand if we can first get better at detection.” 

Many risk-prediction models for suicide and self-harm rely on how providers categorise the care they’ve provided through diagnostic codes from the International Classification of Diseases, 10th Revision (ICD-10).

However, this may exclude many children who have self-injurious thoughts or behaviours but have been coded in their health records for an underlying mental health diagnosis, such as depression or anxiety. 

Another commonly used method for flagging at-risk patients is the “chief complaint”, a brief statement recorded at the beginning of a health care visit describing why a patient is seeking care. However, children may not always report suicidal thoughts and behaviours when they first come into the emergency department.

Experts reviewed clinical notes for 600 emergency department visits for children ages 10-17 at a large health system to understand how well ICD-10 codes and chief complaints identify children with self-injurious thoughts or behaviours. 

Experts who reviewed the patients’ clinical notes found that ICD codes missed 29 percent of children who came to the emergency department for self-injurious thoughts or behaviours, while the chief complaint missed over half (54 percent) of those patients. Using the ICD code and the chief complaint together still missed about 22 percent of those patients. 
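The arithmetic behind those figures can be illustrated with a sketch. The counts below are hypothetical, chosen only to reproduce the article’s reported miss rates, and are not the study’s chart-level data:

```python
# Hypothetical cohort of 100 children whose visits truly involved
# self-injurious thoughts or behaviours (illustrative numbers only,
# picked to match the miss rates reported in the article).
true_cases = set(range(100))

# Children flagged by each screening method (hypothetical overlap).
icd_flagged = set(range(71))        # ICD codes catch 71 of 100
chief_flagged = set(range(32, 78))  # chief complaint catches 46 of 100

def miss_rate(flagged):
    """Share of true cases a screening method fails to flag."""
    return len(true_cases - flagged) / len(true_cases)

print(f"ICD codes alone miss:       {miss_rate(icd_flagged):.0%}")      # 29%
print(f"Chief complaint misses:     {miss_rate(chief_flagged):.0%}")    # 54%
print(f"Either method still misses: {miss_rate(icd_flagged | chief_flagged):.0%}")  # 22%
```

The point the sketch makes is that the two flags overlap heavily: combining them recovers some, but far from all, of the children each one misses.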

Screening methods that relied on ICD codes or chief complaints were also more likely to miss male children than female children, as well as preteens compared to teens. There was also a signal that Black and Latino youth were more likely to be left out, raising concerns that these groups could be disproportionately underrepresented in risk prediction models. 

Researchers designed three different machine learning models to test whether an automated system could do a better job of flagging children with self-injurious thoughts or behaviours. 

The most comprehensive model incorporated 84 data points available in a patient’s electronic record, including previous medical care, medications, demographic information, and whether the child lives in a disadvantaged neighbourhood, among others. 

A second model used all mental health diagnostic codes, rather than just the suicide-related codes drawn from the CDC’s suicide surveillance program, and a third looked at other indicators, such as a patient’s medications and lab tests.

All three machine learning models were better at identifying children with self-injurious thoughts and behaviours than ICD codes and chief complaints alone. No model performed significantly better than the others, indicating that health systems could improve their ability to flag at-risk patients without having to build especially sophisticated models.
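The study’s models are not reproduced here, but the general approach they describe, a supervised classifier over structured electronic-record features, can be sketched in outline. Everything below (feature names, the synthetic data, the simple logistic-regression model) is hypothetical and illustrative only, not the study’s method:

```python
import math
import random

random.seed(0)

# Hypothetical structured-EHR features per visit; the study used 84
# predictors, none of which are reproduced here.
FEATURES = ["prior_mh_visit", "psychotropic_rx", "age_over_13", "ed_visits_past_year"]

def make_visit(at_risk):
    # Synthetic data: at-risk visits are more likely to show each feature.
    p = 0.7 if at_risk else 0.2
    return [1.0 if random.random() < p else 0.0 for _ in FEATURES], at_risk

data = [make_visit(i % 2 == 0) for i in range(400)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a plain logistic regression with stochastic gradient descent.
w = [0.0] * len(FEATURES)
b = 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - (1.0 if y else 0.0)
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def flag(x, threshold=0.3):
    # A deliberately low threshold trades extra false positives for
    # fewer missed children, mirroring the sensitivity point below.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= threshold

caught = sum(1 for x, y in data if y and flag(x))
total = sum(1 for _, y in data if y)
print(f"sensitivity on training data: {caught / total:.0%}")
```

The threshold parameter is where the false-positive trade-off discussed below lives: lowering it flags more children for review, at the cost of more charts for an analyst to double-check.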

“Adding more information helps, but you don’t necessarily need a bells-and-whistles approach to get better detection,” Edgcomb said. 

The machine learning models did produce more false positives, flagging some patients who were not at risk of self-harm, but Edgcomb said there is little downside to using these more sensitive screening tools.

“Depending on the situation, it may be better to have some false positives and have a medical records analyst double-check those charts that screen positive than to miss many children entirely,” she said. 

Edgcomb’s upcoming research will continue to examine ways of improving youth suicide risk prediction models, including those for primary-school-age children, which have been particularly scarce. 
