Insights
A Practical Blueprint for Enabling AI Use in Healthcare Organizations
In healthcare, artificial intelligence (AI) promises to make our work more efficient, innovative and sustainable. But how, exactly? Where do we start? And how do we overcome the hurdles?
At VHA Home HealthCare (VHA), we started by defining a vision and objectives for AI to ensure alignment with our broader purpose and strategic priorities. Our AI vision is to integrate AI and human expertise to make home care more efficient, empowering, innovative and sustainable. We are committed to making technology and compassion work hand-in-hand to deliver safe and spectacular care.
At the outset of our journey, we defined three objectives. We will enhance care delivery by optimizing resources, personalizing care plans and improving client outcomes. We will empower staff by automating administrative tasks, simplifying workflows and freeing up time for meaningful, face-to-face client care. We will advance research, innovation and business growth.
To ensure the proper support and alignment as we move forward, we drafted a set of guiding principles and articulated the enablers that would be key to our success. These include robust governance, risk management and a user-centric approach.
Taming the Data
With these foundational pieces in place, it was time to get serious about data.
AI is only as good as the data it learns from, and the stakes are high in healthcare. VHA has long been deeply committed to research, with a data-informed approach to everything we do, so we were already able to draw upon a robust collection of data, with millions of fresh data points being captured every week from all parts of our operations.
The trick was to tame all this data. Like many organizations, for years we had multiple, disconnected datasets in a variety of formats; we needed to convert these “shoeboxes full of receipts” into a coherent, organized, linked data repository.
It was no small task. We made an investment in data engineering talent, defined common standards and built the infrastructure and governance controls to support large-scale data analysis.
We also needed to change how we thought about data as an organization, moving from a fragmented, guarded approach to a shared understanding of data as a resource with tremendous strategic potential. Today, we have a high-quality, accessible, growing repository of information.
Practical applications designed in partnership with clinicians
With the data under control, the next step was to unlock its insights and spot the deeper patterns. It wasn’t about counting different types of events; it was about understanding how they interact and sometimes lead to a particular outcome. For that, clinicians needed to be involved. Our success with AI has been contingent upon a productive partnership between our IT and research teams and our clinical specialists.
Recognizing opportunities in various areas of our work, we are piloting AI applications spanning predictive modelling, generative AI and agentic AI.
Predictive modelling uses large volumes of data and statistical techniques to uncover patterns and create forecasts about future events. At VHA, we use it to identify risks, optimize resource allocation and improve patient outcomes. For example, we can use VHA’s administrative data to predict which clients are most at risk of hospital readmission within 30 days of discharge. With this knowledge, care providers can intervene and potentially prevent these events.
Predictive modelling can also help us personalize care by identifying clients at higher risk of falls or complications based on their individual health history and behaviour. In this way, artificial intelligence, when driven by high-quality data and the expertise of data scientists and clinicians to refine and validate models, actually helps us personalize care and treatment plans.
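To make the readmission example concrete, the following is a minimal illustrative sketch, not VHA's actual model: the features, coefficients and data are invented, and a real model would draw on curated administrative records and clinical validation. It shows the general shape of training a logistic-regression risk model and ranking clients by predicted 30-day readmission risk.

```python
# Hypothetical sketch of a 30-day readmission risk model.
# All features and data here are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Invented administrative features: client age, count of prior
# admissions, and scheduled home-care visits per week.
age = rng.normal(78, 8, n)
prior_admissions = rng.poisson(1.2, n)
visits_per_week = rng.integers(1, 8, n)
X = np.column_stack([age, prior_admissions, visits_per_week])

# Synthetic outcome: readmission risk rises with age and prior admissions.
logits = 0.04 * (age - 78) + 0.8 * prior_admissions - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted probabilities let clinicians prioritize follow-up
# for the highest-risk clients rather than acting on a yes/no flag.
risk = model.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, risk)
```

In practice the model's outputs would be reviewed by clinicians before any intervention, consistent with the human-in-the-loop principle described below.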
Generative AI has the potential to automate routine tasks, support clinical decision-making and free up time for care providers to spend with their clients. If properly trained, generative AI can produce effective clinical summaries and notes, capture critical information and reduce the risk of errors. VHA’s AI-enabled Chart Coach addresses a longstanding challenge in home care: documentation can be delayed or incomplete because of the practical challenge of typing up notes in a client’s home. We anticipate the richer, more complete records we’re seeing with Chart Coach will lead to deeper insights and improved documentation. We are also piloting the use of generative AI for translation services, which will enable clearer, safer conversations to help our team members care for our diverse client population.
The third type of AI we’re testing is agentic AI, which can take on tasks like answering routine client queries, rescheduling visits and optimizing workers’ schedules. Acting as a clinical assistant, it can provide real-time support to care providers in the field, searching the latest clinical research and internal policies and offering best-practice recommendations or step-by-step guidance on complex procedures. As ever, this approach requires robust human oversight to ensure quality and safety. From our perspective, a human must always be in the loop. Another example of agentic AI is the chatbot we have developed, which will be integrated into our myVHA client portal to simplify the client experience.
In designing all of these models, we draw on the expertise of clinicians who understand the nuances of client care and the technical skills of data scientists who can translate these insights into working algorithms. This partnership approach improves the quality of our AI outputs and ensures that our tools are practical and effective in the complex, unpredictable world of home care.
Managing potential risks
These promising opportunities must be balanced with careful consideration of privacy, transparency and accountability, as well as the risk that AI could produce biased or incorrect outputs that can pose serious operational, clinical or ethical risks. In addition to building intelligent systems, we must ensure that these systems align with the values and expectations of clients and clinicians. This requires robust human oversight and clear guardrails. It also involves ongoing collaboration with ethics committees, privacy officers and frontline staff to ensure that AI systems are safe, fair and respectful of client autonomy and staff expertise. VHA’s cross-functional AI Governance Committee is tasked with ensuring we understand and address these risks.
Privacy and security are critical concerns. Healthcare organizations are stewards of sensitive client data, and all our systems must keep private information out of the public domain. The guiding principles and processes we’ve developed ensure these considerations remain top of mind at VHA. We’ll always be transparent about how we use data, and we proceed with great caution, partnering only with organizations that take data protection as seriously as we do.
The human factor is a critical consideration. AI is a big change for people. It’s important that teams understand what we are trying to achieve and how we are going about it. And it is essential that clinicians are involved. Without their unique insights, AI solutions would be technically impressive but practically ineffective, or worse. All of VHA’s AI models and systems are extensively tested to ensure that the outputs are consistent with what a clinician would expect and advise. Human oversight ensures that we understand what the AI is doing and have validated that it is doing it correctly. In the foreseeable future, AI will not operate fully autonomously at VHA.
And finally, the practical question: buy or build? The decision depends on your organization’s resources, technical expertise and the specific problems you are trying to solve. Off-the-shelf solutions can be deployed quickly with fewer upfront costs. But even with pre-built tools, skilled data scientists and technical staff must validate models, ensure data quality, manage the risk of bias or inappropriate outputs and customize as needed. Custom-built tools may better fit more specialized or unique use cases and data sets, provided the organization has the resources, internal capacity and close vendor partnerships to develop them responsibly.
VHA’s journey with AI is just beginning, but by focusing on data quality, collaboration and ethical practice, we have created a framework for responsible AI adoption that aligns with our core purpose: care at home, delivered with heart, led by science.
As more healthcare providers consider integrating AI into their operations, we hope our experience can offer insights and guidance and serve as an example of what is possible when technology is used thoughtfully and ethically.
About the Author(s)
Alistair Forsyth, Vice President, Digital Health and Chief Information Officer at VHA Home HealthCare
Sandra McKay, PhD, VHA Vice President, Research & Innovation and Chief Scientific Officer