
Healthcare Quarterly 28(3) October 2025: 30–34. doi:10.12927/hcq.2025.27734
Digital Tools to Support Mental Health

Improved Outcomes in Mental Healthcare Using Artificial Intelligence

Andrew Lustig, Masooma Hassan, Ethan Kim, Keith D’Souza, Adam Tasca, Tania Tajirian and David Gratzer

Abstract

Artificial intelligence (AI) presents opportunities and challenges in post-discharge psychiatric care. Leveraging structured data and machine learning, the Centre for Addiction and Mental Health aims to predict adverse outcomes, including readmissions, among patients recently discharged from psychiatric units. By identifying high-risk individuals, AI can guide referrals to resource-intensive outpatient clinics, enhancing continuity of care and improving outcomes. A governance framework addressing ethics, transparency and fairness underpins the development and implementation process. The study emphasizes using interpretable AI models over black-box systems to foster trust and clinical utility, aligning AI advancements with ethical mental health practices.

Introduction

At the Centre for Addiction and Mental Health (CAMH), Canada's largest mental health facility, we are investigating ways to harness recent advances in machine learning (ML) and artificial intelligence (AI) to improve outcomes for Ontario's mental health system and for people recently discharged from the hospital after a psychiatric in-patient admission.

There is currently a wave of exuberance for AI. AI-powered large language models (LLMs), such as OpenAI's ChatGPT, Google's Gemini and Meta's Llama, have recently left the laboratory and become widely available to members of the public. AI has been called “the new electricity” by Andrew Ng, a cofounder of Google Brain (Ng 2017), and a recent ad campaign pithily opined that the data that power modern AI applications are “the new gold.” The current mood was set when OpenAI launched ChatGPT in November 2022 and culminated in Geoffrey Hinton and John Hopfield being awarded the Nobel Prize in Physics in October 2024 for their contributions to artificial neural networks.

This is not the first time that the public has been gripped by enthusiasm for AI. However, prior waves have ended with disappointing results. The Dartmouth Summer Research Project on Artificial Intelligence of 1956 is often identified as the event that initiated AI as a cohesive research discipline. From 1956 until the mid-1970s, AI underwent a “golden age” of sorts. Research in AI was seen as a gold rush, and projects were amply funded. Experts predicted that machines would exhibit intelligence rivalling that of humans within several years. Although there were some significant advances in AI in the 1960s and 1970s, these heady predictions were not borne out. Computing was too slow, and storage was too expensive by several orders of magnitude, to achieve the lofty goals that had been set. As a result, it was felt that AI had achieved as much as it could given the extant constraints on computation. The first AI golden age came to an end in the mid-1970s and was followed by the first of two AI “winters,” during which interest in AI dwindled.

However, as predicted by Moore's law, the speed of computer processing has increased exponentially over the past several decades. This led to the advent of inexpensive storage and graphics processing units that perform vast numbers of computations in parallel. A new golden age of AI is upon us. The modern AI boom began in the 2010s with the availability of large quantities of digitized data, advancements in hardware and the deep learning revolution. Deep learning approaches rely on computational representations of neural networks. In such models, the individual nodes, which are digital representations of neurons, are connected to many other nodes. These networks resemble the structure of neurons in the brain, earning them the moniker “neural networks.”
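As a purely illustrative sketch (our own, not a model discussed in this article; the layer sizes, weights and inputs are arbitrary placeholders), a tiny feedforward network can be written in a few lines of Python with NumPy:

import numpy as np

rng = np.random.default_rng(0)
# Toy input: four features for a single example (placeholder values).
x = rng.normal(size=4)
# Weight matrices: every node in one layer connects to every node in the next.
W1 = rng.normal(size=(4, 8))  # input layer -> 8 hidden nodes ("neurons")
W2 = rng.normal(size=(8, 1))  # hidden layer -> 1 output node

def relu(z):
    # Simple non-linear activation applied at each node.
    return np.maximum(0.0, z)

hidden = relu(x @ W1)  # each hidden node sums many weighted inputs
output = hidden @ W2   # the output node combines all hidden nodes
print(output)

Deep learning stacks many such layers and, crucially, learns the weights from data rather than fixing them at random.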

In light of the recent advances in AI and ML, medical practitioners, health researchers and industry leaders have been working feverishly to develop tools that harness these new powers to improve care. They have developed many sophisticated and accurate AI models. However, for the most part, these have not been successfully deployed in clinical care.

Broadly speaking, AI applications in healthcare fall into several categories: automating tasks such as writing progress notes or entering orders, automating diagnosis, suggesting treatments and assisting with risk prediction. Although such approaches have the potential to revolutionize medicine (Topol 2019), ML methods have failed to make a substantial translational impact thus far (Imrie et al. 2023). They also have the potential to be misused and cause patient harm. The vast majority of medical predictive algorithms have been developed and published, yet never deployed (Jiang et al. 2023).

At CAMH, we have been investigating the possibility of using tools that incorporate AI to assist with predicting the risk of adverse outcomes in people who are discharged from in-patient psychiatric units. Our hope is to use these tools to support people after discharge and prevent adverse outcomes, including readmission, suicide, self-harm and violence. In addition to attending to the technical aspects of model development, we also have a framework in place to address issues of fairness, ethics and bias in model development. We are also considering how to integrate an AI model into clinicians' workflow to try to ensure that any model we develop will actually be used in daily clinical work and contribute to improved outcomes.

Post-Discharge Psychiatric Care

Since 2017, we have used several different approaches to support patients in the post-discharge period and promote ongoing recovery after discharge. In that year, we launched a clinic designed to provide psychiatric care to every person discharged from the in-patient psychiatric units. Psychiatric patients are at high risk of adverse outcomes after discharge (Coleman et al. 2006; Fazel et al. 2016), and high rates of readmission in the early post-discharge period are viewed as indicative of a deficiency in care (Sfetcu et al. 2017). Improved continuity of care after discharge predicts improved outcomes in this population (Choi et al. 2020). However, it remains unclear how best to structure care after discharge and how to ensure continuity of care.

In April 2023, we launched a new enhanced outpatient clinic at CAMH to provide additional support for people discharged from the hospital who were deemed to be at higher risk of readmission and other adverse events. This new clinic, known as the in-patient bridging transitions clinic, allows in-patient psychiatrists to continue to provide outpatient care for patients after discharge, with the support of an interdisciplinary team. Because patients receive care from the same physician who cared for them on the in-patient unit, the clinic offers improved continuity of care compared with more traditional clinics, where outpatient care is provided by a different team than in-patient care. As Figure 1 illustrates, people attending the clinic have a lower risk of readmission than others. However, the clinic is resource intensive, and it remains unclear which patients would benefit most from this and similar clinics. We see this as an opportunity for AI to contribute to patient care by identifying people at high risk of readmission and other adverse outcomes so that these high-risk patients can receive more intensive outpatient follow-up.


[Figure 1]

Previous studies have investigated the feasibility of using ML to predict psychiatric readmission (Boag et al. 2021). They found that ML algorithms outperformed humans in predicting the risk of psychiatric readmission. The same authors also note that psychiatric readmissions are more difficult to predict than medical readmissions.

As this new model of care is resource intensive, we would like to implement criteria to refer to the clinic those people who are most likely to benefit from it. We believe this is an appropriate use case for an AI model, and we are working to develop a model for this purpose.

There are several barriers and pitfalls that pertain to the successful implementation of AI in mental healthcare delivery. AI models can be overused or misused and can cause patient harm. One potential source of harm is the use of so-called “black box models,” such as LLMs (Rudin 2019). Such models are able to make accurate predictions in some instances, but they are not transparent: a user cannot decipher which input data points resulted in a particular predicted outcome. They are akin to an oracle that delivers wise pronouncements but does not share the rationale behind them.

In contrast, “white box models,” also known as transparent AI, generally make somewhat less accurate predictions. However, their workings are amenable to scrutiny, allowing users to understand how and why a prediction was made.
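To make the distinction concrete, the following sketch (hypothetical throughout: the feature names, data and labels are invented and are not CAMH data or the model under development) fits a transparent logistic regression in Python. Because the model is a white box, its fitted coefficients can be read directly to see which inputs push a predicted risk up or down:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Hypothetical structured features and synthetic labels (illustration only).
feature_names = ["prior_admissions", "length_of_stay_days", "age", "lives_alone"]
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 2, size=500)  # synthetic 30-day readmission labels

model = LogisticRegression().fit(X, y)

# A white-box model exposes its reasoning: one inspectable coefficient per feature.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")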

Initially, we were interested in applying an LLM to the problem of predicting the risk of 30-day readmission. However, upon discussing this approach with data scientists at our hospital, we came to realize that it is highly computationally intensive and that we lack the computational resources (or “compute,” in the parlance of data scientists) to implement such a model. In addition, some stakeholders expressed concern that an LLM might make accurate predictions but that its lack of transparency would pose ethical and practical limitations. From an ethical perspective, there is a risk that an LLM may perpetuate and amplify biases present in its training data. From a practical point of view, if a predictive model indicated that a particular patient was at high risk of an adverse outcome, such as readmission, but we could not understand why, then the prediction would be of limited value. Clinicians would be unable to determine what changes could be made to modify the risk in a favourable way. The patient might need a longer period of hospitalization, a medication change, more comprehensive outpatient support, a community treatment order or some or all of the above; with a black box model, there would be no way to know. There have been efforts to append separate “explanation models” to black box ML models to help users understand how such models arrive at their predictions, but such efforts may further complicate matters. Rather than using an LLM, which relies on large amounts of unstructured data, we are therefore using an ML approach based on structured data.
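Continuing the same hypothetical sketch, a transparent model can also show why an individual patient was flagged: each feature's contribution to the predicted risk is simply its coefficient multiplied by the patient's value, which is precisely the rationale a black box withholds. (Again, the features, data and model below are invented for illustration only.)

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
feature_names = ["prior_admissions", "length_of_stay_days", "age", "lives_alone"]
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 2, size=500)  # synthetic 30-day readmission labels
model = LogisticRegression().fit(X, y)

# Decompose one patient's predicted risk into per-feature contributions.
patient = X[0]
contributions = model.coef_[0] * patient
for name, value, contrib in zip(feature_names, patient, contributions):
    print(f"{name} = {value:+.2f} contributes {contrib:+.3f} to the log-odds")

# The intercept plus the summed contributions gives the log-odds,
# which maps to the predicted probability of readmission.
log_odds = model.intercept_[0] + contributions.sum()
print(f"Predicted 30-day readmission risk: {1 / (1 + np.exp(-log_odds)):.2f}")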

In an effort to implement such ML models successfully, CAMH has established a governance framework based on identified barriers to AI adoption that affect trust, such as ethics, transparency, explainability and fairness, among others (Hassan et al. 2024a). The governance model oversees all stages of the process, from problem identification and solution design to development, implementation and sustainability (Hassan et al. 2024b). To begin, a comprehensive needs assessment was conducted to capture the projected outcomes of deploying a predictive indicator for readmission risk. The assessment also reviewed evidence from models validated in similar settings, providing a strong foundation for advancing this ML solution.

Broadly speaking, there are three major stages in creating an ML model: exploratory data analysis, data preparation and model development. The most commonly used programming language for ML development is Python, and additional software, such as Jupyter (Granger and Pérez 2021) and Pandas (McKinney 2010), assists immensely with model development. We started our exploratory data analysis by examining admissions-related data from the emergency department, including statistical characteristics (the amount of missing data, the statistical distributions of the variables, etc.) and other key demographic variables, such as gender and age. The next step, data preparation, also known as “feature engineering,” involves applying multiple transformations to the data to build a dataset suitable for training an ML model. These can include imputing and/or removing missing data, encoding text-based variables into numerical representations and more. Like any other analysis or investigation, these steps need systematic methods for tracking and documentation to ensure that they are reusable and reproducible. To that end, and working hand in hand with the aforementioned governance framework, CAMH has established an operational process known as “machine learning operations” to develop and monitor ML- and AI-enabled solutions. The outputs of this first exploratory stage will be deliberated on for fairness, ethics and safety before we move toward a viable and equitable ML model. Additional governance deliberations at later stages will ensure that the model progresses into clinical evaluation and implementation while remaining grounded in principles of safety, equity and ethics.
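As a rough sketch of what the exploratory data analysis and feature engineering steps can look like in practice (the column names and values below are invented placeholders, not CAMH data), a Pandas workflow might resemble the following:

import numpy as np
import pandas as pd

# Hypothetical admissions extract with invented columns and values.
df = pd.DataFrame({
    "age": [34, 52, np.nan, 27, 61],
    "gender": ["F", "M", "F", None, "M"],
    "prior_admissions": [0, 3, 1, 2, 0],
    "readmitted_within_30_days": [0, 1, 0, 1, 0],
})

# Exploratory data analysis: missingness and basic distributions.
print(df.isna().mean())            # fraction of missing data per variable
print(df.describe(include="all"))  # statistical distributions of the variables

# Feature engineering: impute missing data and encode text-based variables.
df["age"] = df["age"].fillna(df["age"].median())   # impute missing numeric values
df["gender"] = df["gender"].fillna("unknown")      # flag missing categorical values
features = pd.get_dummies(df.drop(columns="readmitted_within_30_days"),
                          columns=["gender"])      # encode text as numerical columns
labels = df["readmitted_within_30_days"]
print(features.head())

In practice, each of these transformations would be tracked and documented under the machine learning operations process described above so that the resulting dataset remains reusable and reproducible.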

The American Psychiatric Association suggests using the term augmented intelligence, rather than AI, to highlight the point that if AI has a role in healthcare, it is to assist clinicians in decision-making, rather than to replace them. In a widely reported incident, an eating disorders AI chatbot dispensed advice on losing weight (Sharp et al. 2023), highlighting the potential pitfalls of unsupervised AI implementation.

That said, AI assistance with the more quotidian aspects of psychiatric practice, such as routine office work, billing and clinical documentation, is right around the corner, and in some districts, already in place. In fact, the largest electronic medical record providers, including Epic, Cerner and Allscripts, have introduced AI functionality in their existing products. AI-powered scribes are already a reality. These applications can listen to an encounter between a patient and a physician and generate a draft note autonomously.

However, the more challenging and interesting question is whether AI-powered tools can assist with those aspects of psychiatric care that are so complex that they have strained the limits of human observation and comprehension. We are cautiously optimistic that they will have an impact, but only time will tell.

About the Author(s)

Andrew Lustig, MD, MSc, FRCPC, is a general psychiatrist. He is the in-patient medical head of the division of General Adult Psychiatry and Health Systems at the Centre for Addiction and Mental Health (CAMH), Toronto, Ontario, Canada. Andrew can be reached by e-mail at andrew.lustig@camh.ca.

Masooma Hassan, MS, is a senior program manager and program lead of artificial intelligence governance implementation and adoption at CAMH in Toronto, Ontario, Canada, and an adjunct lecturer at the Institute for Health Policy, Management and Evaluation at the University of Toronto in Toronto, Ontario, Canada.

Ethan Kim, MS, is a research methods specialist at the Krembil Centre for Neuroinformatics at CAMH in Toronto, Ontario, Canada.

Keith D'Souza, MSW, is the clinical director of adult in-patient and psychosis recovery and treatment outpatient services at CAMH in Toronto, Ontario, Canada.

Adam Tasca, MD, FRCPC, is a general adult psychiatrist at CAMH in Toronto, Ontario, Canada. His professional interests include promoting wellness through the integration of technology into the physician workflow.

Tania Tajirian, MD, MHI, is an academic hospitalist and the chief health information officer at CAMH, Toronto, Ontario, Canada. She is also an associate professor in the Department of Family and Community Medicine at the University of Toronto in Toronto, Ontario, Canada. She focuses on advancing digital quality initiatives that aim to optimize healthcare delivery and lessen the burden of documentation.

David Gratzer, MD, FRCPC, is a general psychiatrist and co-chief of the division of General Adult Psychiatry and Health Systems at CAMH in Toronto, Ontario, Canada.

References

Boag, W., O. Kovaleva, T.H. McCoy Jr, A. Rumshisky, P. Szolovits and R.H. Perlis. 2021. Hard for Humans, Hard for Machines: Predicting Readmission After Psychiatric Hospitalization Using Narrative Notes. Translational Psychiatry 11(1): 32. doi:10.1038/s41398-020-01104-w.

Choi, Y., C.M. Nam, S.G. Lee, S. Park, H.-G. Ryu and E.-C. Park. 2020. Association of Continuity of Care With Readmission, Mortality and Suicide After Hospital Discharge Among Psychiatric Patients. International Journal for Quality in Health Care 32(9): 569–76. doi:10.1093/intqhc/mzaa093.

Coleman, E.A., C. Parry, S. Chalmers and S.-J. Min. 2006. The Care Transitions Intervention: Results of a Randomized Controlled Trial. Archives of Internal Medicine 166(17): 1822–28. doi:10.1001/archinte.166.17.1822.

Fazel, S., Z. Fimińska, C. Cocks and J. Coid. 2016. Patient Outcomes Following Discharge From Secure Psychiatric Hospitals: Systematic Review and Meta-Analysis. The British Journal of Psychiatry 208(1): 17–25. doi:10.1192/bjp.bp.114.149997.

Granger, B.E. and F. Pérez. 2021. Jupyter: Thinking and Storytelling With Code and Data. Computing in Science and Engineering 23(2): 7–14. doi:10.1109/MCSE.2021.3059263.

Hassan, M., J.A. Santisteban and N. Shen. 2024a. Implementation of a Clinical, Patient-Level Dashboard at a Mental Health Hospital: Lessons Learned From Two Pilot Clinics. The Role of Digital Health Policy and Leadership 312: 41–46. doi:10.3233/SHTI231308.

Hassan, M., A. Kushniruk and E. Borycki. 2024b. Barriers to and Facilitators of Artificial Intelligence Adoption in Health Care: Scoping Review. JMIR Human Factors 11: e48633. doi:10.2196/48633.

Imrie, F., R. Davis and M. van der Schaar. 2023. Multiple Stakeholders Drive Diverse Interpretability Requirements for Machine Learning in Healthcare. Nature Machine Intelligence 5(8): 824–29. doi:10.1038/s42256-023-00698-2.

Jiang, L.Y., X.C. Liu, N.P. Nejatian, M. Nasir-Moin, D. Wang, A. Abidin et al. 2023. Health System-Scale Language Models Are All-Purpose Prediction Engines. Nature 619(7969): 357–62. doi:10.1038/s41586-023-06160-y.

McKinney, W. 2010. Data Structures for Statistical Computing in Python. In S. van der Walt and J. Millman, eds., Proceedings of the 9th Python in Science Conference (pp. 56–61). Austin, TX: SciPy Press.

Ng, A. 2017, September 19. AI Is the New Electricity. O'Reilly Media. Retrieved October 30, 2025. <https://www.oreilly.com/radar/ai-is-the-new-electricity/>.

Rudin, C. 2019. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence 1(5): 206–15. doi:10.1038/s42256-019-0048-x.

Sharp, G., J. Torous and M.L. West. 2023. Ethical Challenges in AI Approaches to Eating Disorders. Journal of Medical Internet Research 25: e50696. doi:10.2196/50696.

Sfetcu, R., S. Musat, P. Haaramo, M. Ciutan, G. Scintee, C. Vladescu et al. 2017. Overview of Post-Discharge Predictors for Psychiatric Re-Hospitalisations: A Systematic Review of the Literature. BMC Psychiatry 17: 227. doi:10.1186/s12888-017-1386-z.

Topol, E.J. 2019. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nature Medicine 25(1): 44–56. doi:10.1038/s41591-018-0300-7.
