HealthcarePapers
What Problem Are We Trying to Solve With Artificial Intelligence for Healthcare in Canada?
Overview
The application of artificial intelligence (AI) in healthcare is not a “flash in the pan.” As Howell et al. (2024) have described, AI has been evolving since the 1950s, from decision trees to machine learning to generative AI that can create new content. These developments were foreshadowed by science fiction writer Isaac Asimov in a story first published in 1942, in which he outlined three laws of robotics, to the effect that robots must not harm humans (Asimov 1950). Fast forward to 2015, when Ashrafian (2015) proposed an additional law for AI systems that interact with each other: “all robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood.”
As health systems worldwide strive to achieve the quintuple aim, AI is often presented as a solution to pressing challenges such as workforce shortages and the administrative burden on health professionals (Bajwa et al. 2021; Harris 2024; Meskó et al. 2018). However, its rapid integration into healthcare raises critical societal concerns, including the potential to exacerbate inequities and compromise patient and health professional safety (Abràmoff et al. 2023). At the Canadian Medical Association (CMA), we hear a mix of cautious optimism and deep concern from health professionals, policy makers, researchers and patient partners about the swift acceleration of AI in healthcare. This underscores the need for a clearer understanding of the current state and implications of AI within the Canadian context. Most importantly, experts are asking a fundamental question: What problem are we trying to solve with AI? This highlights the need to critically assess whether AI is being applied as an appropriate solution to an important problem or simply embraced for its perceived promise in healthcare.
In this issue, Kueper and Pandit (2025) and the commentators have done a brilliant job of setting out the challenges and opportunities of introducing AI in Canadian healthcare. Several strong themes emerge from the contributions: the need for education for health professionals, regulatory oversight, greater attention to equity, and patient engagement. There is both an urgency and an imperative to address these issues.
Educating Health Professionals
Recent surveys of nurses and physicians have shown that, for both groups, the top two priorities to support AI use were appropriate training and education on the use of AI, and appropriate regulation and accreditation of AI-based technologies (Canada Health Infoway 2024a, 2024b).
Risling and Strudwick (2025) observe in their commentary that “the advancement of AI will not be as forgiving to hesitation from healthcare professions as previous digital health integration has been” and they call for the development of interdisciplinary AI competencies for all practitioners. In a 2023 interview, American Medical Association President Jesse Ehrenfeld said, “It is clear to me that AI will never replace physicians – but physicians who use AI will replace those who don't” (Schumaker et al. 2023).
In his commentary, Hodges (2025) underscores the importance of education and training for health professionals on the use of AI and the need to acquire new competencies to work effectively and safely with it. He states that the most important of these is metacognition, which he describes as the awareness of one's own cognitive and emotional processes.
Regulatory Oversight
Patients and health professionals alike want reassurance that AI tools used in diagnosis and treatment are valid and reliable. Tsuei (2025) raises several important questions about the adequacy of the Medical Devices Regulations and the proposed AI and Data Act (AIDA) to regulate AI systems in healthcare, citing issues of flexible definitions of risk, levels of required evidence and jurisdiction (non-commercial/commercial and inter/intra-provincial). Moreover, Bill C-27, which contains AIDA, has yet to pass second reading since it was first tabled in June 2022 (House of Commons 2022).
In a recent discussion paper, the CMPA (2024) detailed medico-legal risks in the areas of civil liability, privacy and data protection; human rights; and intellectual property, and identified considerations for regulators and legislators for the safe introduction of AI systems in healthcare. Unless physicians and other health professionals can have confidence that such safeguards are in place, they may be reluctant to use these systems. As the College of Physicians and Surgeons of Manitoba states in its Advice to the Profession, “if a GenAI tool produces clinical decision support or advice related to a specific patient's care, the registrant accepts the responsibility for care delivered” (CPSM 2024).
Equity
Equity is presented across commentaries as a critical concern in the adoption of AI in healthcare. Garies et al. (2025) emphasize that insufficient high-quality data, particularly on the social determinants of health, hamper AI's ability to address inequities and increase the risk of bias. To mitigate this, the authors advocate for community-led, culturally safe approaches to collecting and using race-based and Indigenous identity data, guided by frameworks such as OCAP (ownership, control, access and possession). They stress the need to prioritize evidence-based strategies that advance health equity as AI is integrated into healthcare. In her commentary, Paprica (2025) sheds light on the critical role of appropriate training data in ensuring effective and equitable health AI tools. She highlights the importance of engaging policy makers, health professionals and patient partners in assessing datasets to identify who will benefit from these tools, prioritize validation efforts and pinpoint areas for improvement. Simplifying how training data are presented can further enhance understanding of AI's capabilities, guide investment decisions and ultimately promote equity (Paprica 2025).
In this issue, equity is also addressed through AI applications in under-resourced care settings, such as primary care and northern and rural communities. Bhattacharyya et al. (2025) highlight the untapped potential of AI in primary care, particularly its ability to reduce administrative burdens and alleviate health professional burnout in environments facing severe workforce shortages. Their commentary emphasizes how early evaluations of tools like AI scribes have shown reduced documentation time and received positive feedback from both patients and health professionals, further underscoring the promise of AI in primary care. Cava and Wood (2025) address the challenges of equitable AI adoption in Canada's fragmented health systems, particularly in northern and rural communities. The commentators explain that these regions face infrastructure and funding gaps but offer opportunities for community-centred approaches that could be leveraged for equitable AI adoption in these care contexts. Strategic partnerships, such as embedding AI researchers who have deep roots within northern and rural communities, can foster innovative AI adoption, close the equity gaps that these health systems face and ultimately ensure that the AI revolution does not further exacerbate system and population inequities.
Patient Engagement
While AI has the potential to enhance patient and health professional experiences, patient engagement is essential to determine whether it is the right solution for healthcare challenges (McCradden and Kirsch 2023). This aligns with what we have been hearing from CMA's Patient Voice; in fact, one member wrote a commentary reflecting on his concerns about trust, safety and accuracy with AI in healthcare based on his care experience (Pratt 2023). In this issue, reflecting on a 2021 virtual policy design laboratory convened by Healthcare Excellence Canada, Zelmer and McKinnon (2025) present four guiding principles: AI solutions must remain flexible as they evolve, address the right problems, be co-designed with patients and health professionals and prioritize equity to avoid perpetuating biases and disparities. Annette McKinnon's patient perspective underscores both AI's transformative potential and its risks, including data security, fairness and trust. She highlights concerns about patient consent, algorithmic bias and the need for meaningful patient involvement in AI development. As AI applications expand, patient engagement will be essential to move toward equitable and patient-centred solutions in healthcare.
Conclusion
AI represents a transformative opportunity for Canadian healthcare, but its promise will only be realized if its adoption is underpinned by a commitment to education, robust regulation, equity and patient engagement. The lead paper and commentaries in this issue make clear that Canada's health system must prepare its workforce for the profound changes AI will bring while addressing the unique challenges of fragmented systems and diverse populations. By embedding equity-focused AI solutions within underserved regions, fostering interdisciplinary competencies and co-designing solutions, Canada can set a global example for equitable and responsible AI integration.
However, this progress requires sustained investment and a coordinated effort across all levels of the health system. Addressing gaps in regulation, data interoperability and infrastructure is essential to ensure that AI enhances patient safety without exacerbating disparities. AI has evolved from speculative fiction to a practical tool for innovation, but its future depends on actions taken today. By embracing AI's potential with vigilance and inclusivity and clearly determining what problem we want AI to solve, Canada can drive meaningful health system transformation that reflects its commitment to equity and excellence in care.
About the Author(s)
Ashley Chisholm, PhD, Strategic Advisor, Strategy and Innovation, Canadian Medical Association, Toronto, ON
Owen Adams, PhD, Senior Advisor to the Chief Executive Officer, Canadian Medical Association, Ottawa, ON
Sara Allin, PhD, Associate Professor, Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON; Director, North American Observatory on Health Systems and Policies, Toronto, ON
Audrey Laporte, MA, PhD, Director, Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON
References
Abràmoff, M.D., M.E. Tarver, N. Loyo-Berrios, S. Trujillo, D. Char, Z. Obermeyer et al. 2023. Considerations for Addressing Bias in Artificial Intelligence for Health Equity. npj Digital Medicine 6(1): 170. doi:10.1038/s41746-023-00913-9.
Ashrafian, H. 2015. AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Science and Engineering Ethics 21: 29–40. doi:10.1007/s11948-013-9513-9.
Asimov, I. 1950. I, Robot. Fawcett Crest.
Bajwa, J., U. Munir, A. Nori and B. Williams. 2021. Artificial Intelligence in Healthcare: Transforming the Practice of Medicine. Future Healthcare Journal 8(2): e188–94. doi:10.7861/fhj.2021-0095.
Bhattacharyya, O., P. Agarwal, E. Ha, J. Yong and E. Montague. 2025. Accelerating AI Adoption for Reducing Administrative Burden in Primary Care: Insights From Evaluating AI Scribes. Healthcare Papers 22(4): 63–68. doi:10.12927/hcpap.2025.27568.
Canada Health Infoway. 2024a, May. 2023 Canadian Survey of Nurses: Use of Digital Health Technology in Practice. Quantitative Research Report. Retrieved December 17, 2024. <https://www.infoway-inforoute.ca/en/component/edocman/6481-2023-canadian-survey-of-nurses-use-of-digital-health-technology-in-practice/view-document?Itemid=103>.
Canada Health Infoway. 2024b, July 9. 2024 National Survey of Canadian Physicians: Use of Digital Health and Information Technologies in Practice. Retrieved April 19, 2025. <https://insights.infoway-inforoute.ca/docs/component/edocman/414-2024-national-survey-of-canadian-physicians-use-of-digital-health-and-information-technologies-in-practice/viewdocument/414>.
Canadian Medical Protective Association (CMPA). 2024, September. The Medico-Legal Lens on AI Use by Canadian Physicians: A Deep Dive. Retrieved April 19, 2025. <https://www.cmpa-acpm.ca/en/research-policy/public-policy/the-medico-legal-lens-on-ai-use-by-canadian-physicians>.
Cava, D. and B. Wood. 2025. Workforce Investments to Accelerate Learning Health Systems With Artificial Intelligence in Northern and Rural Settings. Healthcare Papers 22(4): 69–73. doi:10.12927/hcpap.2025.27567.
College of Physicians and Surgeons of Manitoba (CPSM). 2024, July 8. Advice to the Profession on the Responsible Use of Artificial Intelligence in the Practice of Medicine. Retrieved December 17, 2024. <https://cpsm.mb.ca/news/advice-to-the-profession-on-the-responsible-use-of-artificial-intelligence-in-the-practice-of-medicine>.
Garies, S., J.K. Holodinsky, J.E. Black and T. Williamson. 2025. Achieving Health Equity for All Canadians: Is AI Currently Up to the Task? Healthcare Papers 22(4): 52–57. doi:10.12927/hcpap.2025.27570.
Harris, E. 2024. AI-Drafted Responses to Patients Reduced Clinician Burnout. JAMA 331(17): 1440. doi:10.1001/jama.2024.5157.
Hodges, B.D. 2025. Education and the Adoption of AI in Healthcare: “What Is Happening?” Healthcare Papers 22(4): 39–43. doi:10.12927/hcpap.2025.27572.
House of Commons. 2022. Bill C-27: An Act to Enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to Make Consequential and Related Amendments to Other Acts. Retrieved April 19, 2025. <https://www.parl.ca/documentviewer/en/44-1/bill/C-27/first-reading>.
Howell, M.D., G.S. Corrado and K.B. DeSalvo. 2024. Three Epochs of Artificial Intelligence in Health Care. JAMA 331(3): 242–44. doi:10.1001/jama.2023.25057.
Kueper, J.K. and J. Pandit. 2025. AI in the Canadian Healthcare System: Scaling From Novelty to Utility. Healthcare Papers 22(4): 79–83. doi:10.12927/hcpap.2025.27574.
McCradden, M.D. and R.E. Kirsch. 2023. Patient Wisdom Should Be Incorporated Into Health AI to Avoid Algorithmic Paternalism. Nature Medicine 29(4): 765–66. doi:10.1038/s41591-023-02224-8.
Meskó, B., G. Hetényi and Z. Győrffy. 2018. Will Artificial Intelligence Solve the Human Resource Crisis in Healthcare? BMC Health Services Research 18: 545. doi:10.1186/s12913-018-3359-4.
Paprica, P.A. 2025. Training Data Tell Us a Lot About Whom Health AI Tools Are Likely to Benefit. Healthcare Papers 22(4): 58–62. doi:10.12927/hcpap.2025.27569.
Pratt, A. 2023, July 11. Artificial Intelligence in Healthcare: A Patient Perspective. IASLC News. Retrieved December 17, 2024. <https://www.ilcn.org/artificial-intelligence-in-healthcare-a-patient-perspective/>.
Risling, T. and G. Strudwick. 2025. Through the Nursing Lens: How AI Will Change Healthcare Practice and Professions. Healthcare Papers 22(4): 32–39. doi:10.12927/hcpap.2025.27573.
Schumaker, E., B. Leonard, C. Paun and E. Peng. 2023, July 10. AMA President: AI Will Not Replace Doctors. Politico. Retrieved December 17, 2024. <https://www.politico.com/newsletters/future-pulse/2023/07/10/ai-will-not-replace-us-ama-president-says-00105374>.
Tsuei, S.H-T. 2025. How Are Canadians Regulating Artificial Intelligence? A Brief Analysis of Current Legal Direction, Challenges and Deficiencies. Healthcare Papers 22(4): 44–51. doi:10.12927/hcpap.2025.27571.
Zelmer, J. and A. McKinnon. 2025. Tipping the Balance Toward Positive Futures for Patients: AI in Healthcare. Healthcare Papers 22(4): 74–78. doi:10.12927/hcpap.2025.27566.