Finding the forgotten: AI-driven suicide risk prevention through tailored resource referrals for U.S. Veterans

Abstract: The US Department of Veterans Affairs (VA) purports to provide universal screening for veteran suicide risk. But a closer examination of the term ‘universal’ reveals a glaring caveat: only veterans who engage with in-person care are actually screened. For many reasons – including the VA being embroiled in numerous public scandals since at least 2014 – roughly 6.84 million veterans have never engaged with the VA at all and, by extension, have never been screened. Worse still, Department of Defense (DOD) data indicate that suicide prevalence is highest within this unscreened population – the very group the VA has deemed ‘forgotten’. This research harmonizes several ML techniques to help mental health professionals (MHPs) ‘find’, screen, and connect members of this previously unscreened population to resources that work for them. For the discriminative modeling portion (depression and anxiety), 14 candidate models were evaluated using PyCaret’s soft Auto-ML. Better-than-baseline XGBoost (AUC-ROC .942) and LightGBM (AUC-ROC .948) models were identified, tuned, and implemented, respectively. From there, modern explainable AI techniques – principally local SHapley Additive exPlanations (SHAP) – were used to perform a root-cause analysis of why a specific veteran is experiencing depression or anxiety. As a contribution to the literature, the focus is thereby shifted from global explanations like “many veterans struggle with sleep disturbances” to personalized insights like “this veteran is anxious because they have been struggling to meet their family’s financial obligations for the past few months”.
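The shift from global to local explanations described above can be sketched in a few lines of Python. This is a minimal illustration only: it assumes per-feature SHAP values have already been computed for a single veteran (e.g. by a tree explainer over the tuned XGBoost/LightGBM model), and the feature names and template phrasing are hypothetical stand-ins, not the study's actual schema.

```python
# Minimal sketch: turning local (per-veteran) SHAP values into a
# personalized, plain-language insight. The SHAP values, feature names,
# and templates below are illustrative placeholders.

# Hypothetical local SHAP values for one veteran (positive values push
# the model toward predicting elevated anxiety/depression risk).
local_shap = {
    "financial_strain": 0.41,
    "sleep_disturbance": 0.12,
    "recent_deployment": -0.05,
    "social_support": -0.22,
}

# Hypothetical plain-language templates keyed by feature name.
TEMPLATES = {
    "financial_strain": "struggling to meet financial obligations",
    "sleep_disturbance": "experiencing sleep disturbances",
    "recent_deployment": "recently deployed",
    "social_support": "lacking social support",
}

def top_risk_drivers(shap_values, k=2):
    """Return the k features contributing most toward elevated risk."""
    positive = {f: v for f, v in shap_values.items() if v > 0}
    return sorted(positive, key=positive.get, reverse=True)[:k]

def personalized_insight(shap_values):
    """Render the top local drivers as a single plain-language sentence."""
    drivers = top_ris_k = top_risk_drivers(shap_values)
    reasons = " and ".join(TEMPLATES[f] for f in drivers)
    return f"This veteran's elevated risk is driven primarily by {reasons}."

print(personalized_insight(local_shap))
```

Because the ranking uses only the veteran's own SHAP attributions, the same template machinery yields a different explanation for every individual, rather than one population-level summary.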
Using the individualized information gleaned from the previous step, the system employs Retrieval Augmented Generation (RAG) with Generative AI models to identify local non-profits (based on any US Zip Code and radius), grouped by their National Taxonomy of Exempt Entities (NTEE) codes, that are best positioned to address the root causes – whether financial, housing, family stress, or any socio-health-economic blend. Then, using regular expressions (REGEX), the system determines whether there is any overlap between a veteran’s personal interests, military context, and their identified needs, and intelligently subsets NGOs based on that match – leaving MHPs or the veteran with a list of targeted, actionable referrals to community organizations that align with their unique profile and circumstances. Finally, we implemented Large Language Models as a Judge (LLMaaJ) to evaluate output quality against a stringent rubric developed in conjunction with industry leaders. All aspects of explainable AI – including decisions, reasoning, and suggested referrals – are made available to the person administering the screener; this reinforces the key tenets of accountability, explainability, and trust within the system. In concordance with other literature, this research confirms that ensemble tree-based models continue to outperform other model types, even on a different data set; this was validated using a newly designed methodology called Dispersion-Penalized Convex Mesh (DPCM) for Model Class Evaluation. The study also presents previously undocumented population-level explanations using global SHAP bee swarm diagrams, thereby presenting compelling prima facie evidence for data seasonality / distribution shifts driven by younger veteran demographics.
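The REGEX matching step can be sketched with Python's standard library alone. This is a hedged illustration under stated assumptions: the NGO records, NTEE codes, and keyword lists below are hypothetical placeholders invented for the example, not data or patterns from the study.

```python
import re

# Minimal sketch of the REGEX matching step: check for overlap between a
# veteran's identified needs / personal interests and each NGO's mission
# text, then subset the candidate list to matching organizations.
# All NGO records and keywords are hypothetical placeholders.

ngos = [
    {"name": "Homefront Finance Aid", "ntee": "P51",
     "mission": "Emergency financial assistance and debt counseling for veterans."},
    {"name": "Trail Buddies", "ntee": "N32",
     "mission": "Outdoor hiking and fishing outings for former service members."},
    {"name": "City Arts Collective", "ntee": "A40",
     "mission": "Community painting and sculpture classes."},
]

def matching_ngos(needs, interests, candidates):
    """Keep NGOs whose mission text matches any need or interest keyword."""
    # Word-boundary anchors so e.g. 'art' does not match 'partner'.
    patterns = [re.compile(rf"\b{re.escape(term)}", re.IGNORECASE)
                for term in needs + interests]
    return [n for n in candidates
            if any(p.search(n["mission"]) for p in patterns)]

needs = ["financial", "debt"]
interests = ["fishing", "hiking"]
referrals = matching_ngos(needs, interests, ngos)
print([n["name"] for n in referrals])
# → ['Homefront Finance Aid', 'Trail Buddies']
```

In this toy example the arts organization is filtered out because neither the veteran's needs nor their interests overlap with its mission, leaving only targeted referrals.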
Most critically, through the implementation of a robust web-based application, twenty synthetically created veteran profiles, and the LLMaaJ architecture, we learned how this system could meaningfully improve the daily workflow of MHPs and its usability in ‘open-air’ field conditions. This research delivers a portable, proactive, and privacy-respecting tool. It was built to overcome current limitations and gaps in the literature by meeting veterans where they are – rather than where they are expected to be. Through a secure, web-based application, MHPs can screen people anywhere there is internet or cellular service. Better still, all questions are optional and no personally identifiable information (PII) is required to receive results. Once submitted, the data is fed to a distributed, GPU-enabled, Python-based backend designed to increase throughput. These system instances peel off requests incrementally and quickly queue them for discriminative and natural language processing using a suite of in-series (chained) Large Language Models (LLMs). The results are manifold: reduced strain on the already-overburdened mental health system; precise, local resource referrals across the country; and a deeper understanding of what is actually driving veteran mental health – so that no one slips through the cracks again. So that no one is forgotten.
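The queuing pattern in the backend description can be illustrated with Python's standard library. This is a single-process sketch only: the actual backend is distributed and GPU-enabled, and the worker count and the stand-in `process` function here are hypothetical, with the real pipeline chaining discriminative models and LLMs.

```python
import queue
import threading

# Single-process sketch of the backend pattern: incoming screener
# submissions land in a queue, and worker threads peel requests off
# incrementally for downstream processing. `process` is a stand-in
# for the chained discriminative + LLM pipeline.

requests = queue.Queue()
results = []
results_lock = threading.Lock()

def process(submission):
    # Placeholder for the real discriminative + chained-LLM processing.
    return {"id": submission["id"], "status": "processed"}

def worker():
    while True:
        submission = requests.get()
        if submission is None:          # sentinel: shut this worker down
            requests.task_done()
            break
        outcome = process(submission)
        with results_lock:
            results.append(outcome)
        requests.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for i in range(10):                     # simulated screener submissions
    requests.put({"id": i})
for _ in workers:                       # one sentinel per worker
    requests.put(None)

requests.join()                         # wait until every item is handled
for w in workers:
    w.join()

print(len(results))  # → 10
```

Decoupling intake from processing this way lets the front end return immediately while back-end instances drain the queue at their own rate, which is the throughput property the abstract describes.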
