
Artificial intelligence (AI) is transforming how care is delivered in the home. But as the technology evolves, so does the need for thoughtful governance, ethical implementation and strategic adoption. To help care-at-home organizations navigate these shifts, Axxess recently launched a multi-part webinar series examining the role of AI in modern care delivery, beginning with a conversation on the foundational elements of responsible AI use.
Moderated by Axxess Senior Vice President of Public Policy Deborah Hoyt, the conversation featured insights from Axxess Chief Technology Officer Andrew Olowu and Axxess Chief Operating Officer Tom Codd. Together, they unpacked what providers need to know to implement AI in a way that is secure, scalable and aligned with regulatory standards.
Understanding the Fundamentals of AI
Olowu started the discussion by clarifying the difference between basic and advanced AI tools.
“Basic AI tools are assistive,” he explained. “They help with efficiency but they don’t optimize over time. Advanced tools are adaptive. They pull data, they continuously learn, and they help predict outcomes and streamline your operations.”
Understanding this distinction is critical, as advanced AI tools position providers to deliver higher-quality care.
Olowu outlined four core capabilities that providers should expect from any AI solution:
- Deep Integration: “AI tools must be tightly embedded within the EMR or the back-office system,” he said. “If it’s not integrated tightly, it’s not scalable.”
- Accuracy and Transparency: “AI is very powerful, but it’s not infallible,” he said. “Hallucinations are real. Put guardrails in place to ensure that it’s not fabricating or misinterpreting your data.”
- Scalability: “As your organization grows, you want the AI to continue to perform,” he said. “You want it to grow with you, so assess whether that AI can handle more data, more users, more complexity without breaking down or requiring you to completely overhaul it.”
- Adaptability: “The best AI systems learn and evolve,” he said. “They don’t just automate; they optimize. Executives should look for tools that continuously improve outcomes, not just drive efficiency up.”
Managing Risk and Governance
Codd emphasized that AI must be part of every organization’s enterprise risk management strategy.
“In January 2023, the National Institute of Standards and Technology (NIST) introduced a risk management framework for AI,” he said, breaking down the four key components of the framework: govern, map, measure and manage. These steps help organizations build trust into the design, development and use of AI.
Codd also stressed the importance of having an acceptable use policy that clearly outlines how employees and others can use the organization’s technology and data.
“Areas that you would usually address in a good acceptable use policy would be your ownership and general use of your technology, security and handling confidential information, types of unacceptable use of systems, [standards for] blogging and social media, email and communications guidelines, and consequences for violation of these policies,” he said.
Ethics and Trust in AI
To further support ethical implementation, Codd encouraged organizations to develop a formal AI ethics statement.
“A solid AI ethics statement should include the organization’s approach to AI, where AI is used in its products, principles which are followed when using AI, including privacy by design, how AI models are evaluated, and how AI is governed,” he said.
He recommended the free NIST AI Risk Management Framework Playbook, which includes a roadmap for AI risk management and top ten priorities for building a trustworthy AI framework.
Practical Strategies for AI Adoption
As the discussion turned toward practical implementation, Olowu shared actionable insights for organizations just beginning their AI journey. He recommended starting small but thinking long-term.
“Choose the right use cases where the technology is mature enough, the risk to your patient care is low, and there’s potential for short-term value,” he said. “Automating clinical documentation is a great example. It’s proven. It’s safe. It delivers immediate time-savings.”
He also emphasized the importance of engaging staff early in the process.
“Use your early adopters and your innovators to bring your late majority and your laggards into the future,” he said. “You have to go after the people who are excited about technology, so your staff really must see AI as a partner, not a replacement.”
Defining success metrics is another key strategy. Olowu encouraged organizations to set clear goals, such as reducing documentation time by 40% or cutting reimbursement time by 20%, and to use those wins to build momentum.
Finally, he stressed the importance of choosing the right technology partner.
“If I could leave our audience with one strategic priority, it would be to choose the right technology partner,” Olowu said. “The right technology partner should feel like an extension of your team, someone that really helps you grow confidently, scale responsibly, and stay ahead of what’s next.”
Register now for our next AI webinar to continue the conversation on responsible AI adoption and use.
