Table of Contents:
Are You Scaling Excellence, or Scaling Problems?
Verification and Validation: The Questions Nobody Was Asking
ROI and Scale: The Fully Loaded Question
Vendor Reality and AI Fatigue on the Table
Denver, April 21, 2026. I hosted four rotations of infusion leaders moving through 25-minute roundtable discussions on a topic that will define the next three years of this industry: Operationalizing AI in Infusion: Governance, Risk, and Return.
I came in expecting to field questions about which AI vendors to trust or whether the automation promises in the exhibit hall were real. What I got was something better: a candid, grounded conversation about what AI adoption actually requires in practice. Not in a demo environment, but in real infusion operations managing complex therapies, prior authorizations, and billing cycles that directly affect patient access to care.
As in other industries, AI can drive tremendous value for infusion operators. Several technology platforms are doing an excellent job using AI to automate specific tasks. However, not every vendor is equally strong at every task, which makes it challenging to choose the right tools, ensure they play well together, and monitor the performance of each.
Are You Scaling Excellence, or Scaling Problems?

The first question I put to each group was a bit uncomfortable: If you automated your current process tomorrow, would you be scaling excellence or scaling inconsistency?
This is the under-appreciated first challenge of AI in infusion. Before any tool is deployed, the underlying workflow needs clearly defined start and end points, task-level ownership, and measurable handoffs. AI applied to an undefined process does not fix that process. It accelerates whatever is already there, variability included.
The follow-on question that generated real discussion: how do you currently measure variation across sites in the same workflow? Operators who had invested in centralized reporting could answer with data. Those who had not were making educated guesses. That distinction is the foundation of every subsequent AI decision.
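For the operators who had the data, the measurement itself does not require anything exotic. Here is a minimal sketch of one way to quantify cross-site variation, assuming a hypothetical task-level export with columns named site, workflow_step, and turnaround_hours (the file and column names are placeholders, not a prescribed schema):

```python
# Minimal sketch: quantifying cross-site variation in one workflow.
# Assumes a hypothetical task-level export with columns "site",
# "workflow_step", and "turnaround_hours" -- adapt to your own schema.
import pandas as pd

tasks = pd.read_csv("task_export.csv")

# Per-site turnaround statistics for a single workflow step.
stats = (
    tasks[tasks["workflow_step"] == "benefits_verification"]
    .groupby("site")["turnaround_hours"]
    .agg(["count", "mean", "std"])
)

# Coefficient of variation gives one comparable number per site; a wide
# spread means automating now would scale inconsistency, not excellence.
stats["cv"] = stats["std"] / stats["mean"]
print(stats.sort_values("cv", ascending=False))
```

A coefficient of variation that differs sharply across sites is exactly the inconsistency the opening question warns against automating.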
We use the term Data Spine to describe the integrated data foundation connecting an infusion operator's fragmented systems for referrals, intake, EHR, billing, and remit files into one continuous view. The Data Sophistication Framework we work with identifies six levels of maturity, from basic reporting through centralized data lakes to predictive and generative AI. Most mid-market infusion organizations sit at Level 1 or 2. The critical insight from the roundtable discussions: organizations at Level 2 are not just lacking in analytics sophistication; they are also in no position to quantitatively evaluate their AI vendors.
Verification and Validation: The Questions Nobody Was Asking

This section sparked the most sustained engagement. Most operators described Verification intuitively as "did the task complete?" and Validation as "did it complete correctly?" That is directionally right, and it maps to the V3 Framework I published on the NICA blog last December.
But the harder question followed: what infrastructure, audit trails, and monitoring would you actually need to trust an AI model's output? Most organizations are running AI tools without any formal Verification & Validation protocol. They rely on vendor accuracy claims, spot-check outputs informally, and hope the errors are rare enough not to matter.
One participant asked a question that I think a lot of operators are sitting on: How do we actually do verification and validation reporting if we don't have a lot of data infrastructure in place? My recommendation was to ask your vendor, at minimum, for a weekly activity log. A simple CSV file, delivered weekly, listing every task they worked, with referral ID, patient name, drug, timestamps, and model commentary. If you can't do full Verification & Validation, you can at least cross-reference those numbers against what you see in your EHR and intake systems. It's a fair ask. Any credible vendor should be able to agree to it before the engagement starts.
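For operators who take the weekly-log approach, the cross-referencing can start very small. Below is a minimal sketch under assumed inputs: the vendor CSV described above and a referral export from your EHR that shares a referral_id column (file names and columns are hypothetical and will differ by system):

```python
# Minimal cross-reference: vendor weekly activity log vs. EHR referral export.
# File names and column names are assumptions; map them to your systems.
import pandas as pd

vendor = pd.read_csv("vendor_weekly_log.csv")  # referral_id, drug, timestamps, ...
ehr = pd.read_csv("ehr_referrals.csv")         # referral_id, status, ...

merged = ehr.merge(vendor, on="referral_id", how="outer", indicator=True)

# Tasks the vendor claims to have worked with no matching EHR referral.
only_vendor = merged[merged["_merge"] == "right_only"]
# Referrals in the EHR that the vendor log never touched.
only_ehr = merged[merged["_merge"] == "left_only"]

print(f"{len(only_vendor)} vendor tasks with no matching EHR referral")
print(f"{len(only_ehr)} EHR referrals absent from the vendor log")
```

Even this crude outer join surfaces the two failure modes that matter most: tasks the vendor claims but your systems never saw, and referrals your systems hold that the vendor never worked.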
But as you scale, the “Data Spine” matters beyond analytics. Applying V3 in practice requires a single source of truth across referrals and revenue cycle, standardized task definitions across sites, and centralized reporting that integrates EHR, billing, and AI platforms.
In infusion, the stakes make this urgent. The question of where a false positive or false negative would be tolerable versus unacceptable drew some of the sharpest thinking of each session. An AI scheduling error in a low-acuity workflow is recoverable. An AI error in a prior authorization decision can easily cause an unresolved denial leading to an eventual $40K+ hit to EBITDA. Participants found it useful to map their workflows against that tolerance spectrum before evaluating any specific tool.
The human-in-the-loop question was equally revealing: what criteria determine whether a workflow stays supervised or goes fully automated? Most organizations had not formalized those criteria or created a governance policy. They were making those calls case by case, often under vendor pressure to move faster.
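One way to move from case-by-case calls to a policy is to write the tolerance spectrum down as data. The sketch below is illustrative only; the workflows, error costs, error rates, and threshold are invented for the example, and a real policy would draw those numbers from your own Verification & Validation reporting:

```python
# Illustrative automation gate -- every figure here is invented for the
# example; a real policy would pull error rates from your own V&V data.
from dataclasses import dataclass

@dataclass
class WorkflowPolicy:
    name: str
    cost_per_error: float       # dollars, worst plausible downstream cost
    observed_error_rate: float  # measured, not taken from vendor claims

    def mode(self, max_expected_loss_per_task: float = 100.0) -> str:
        # Expected loss per task decides supervised vs. fully automated.
        expected_loss = self.cost_per_error * self.observed_error_rate
        if expected_loss <= max_expected_loss_per_task:
            return "fully_automated"
        return "human_in_the_loop"

policies = [
    WorkflowPolicy("appointment_scheduling", cost_per_error=150, observed_error_rate=0.02),
    WorkflowPolicy("prior_authorization", cost_per_error=40_000, observed_error_rate=0.01),
]
for p in policies:
    print(f"{p.name}: {p.mode()}")
```

Even a rough version of this forces the conversation the roundtables kept circling: the criteria exist somewhere in everyone's head; governance means writing them down.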
ROI and Scale: The Fully Loaded Question

As we pushed into questions around financial accountability, the V3 framework's Value dimension came to life.
The question that landed hardest: On a fully loaded basis, how does the ROI of this AI initiative compare to the cost of simply adding well-trained staff to solve the same problem?
Platform fees are visible. Token usage, configuration time, compliance overhead, and the cost of rework when the AI produces wrong outputs are not always in the initial calculation. Several operators acknowledged they had approved pilots based on vendor-provided ROI estimates that did not account for the human oversight layer that remained necessary after deployment. The 80/20 reality of AI/Human hybrid workflows means staffing costs do not disappear.
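The fully loaded comparison itself is simple arithmetic once the hidden terms are named. A sketch with placeholder figures (every number below is invented for illustration; substitute your own):

```python
# Fully loaded AI cost vs. adding trained staff -- all figures are
# placeholders for illustration; substitute your own.
platform_fee = 120_000           # annual license
usage_costs = 18_000             # tokens / per-transaction fees
configuration = 25_000           # implementation and integration time
compliance_overhead = 15_000     # audit trails, V&V reporting, reviews
oversight_fte = 0.5 * 70_000     # the human layer that remains post-deployment
rework = 12_000                  # correcting wrong AI outputs downstream

ai_fully_loaded = (platform_fee + usage_costs + configuration
                   + compliance_overhead + oversight_fte + rework)
staff_alternative = 2 * 70_000 * 1.3  # two FTEs, fully burdened

print(f"AI, fully loaded:     ${ai_fully_loaded:,.0f}")
print(f"Staffing alternative: ${staff_alternative:,.0f}")
```

The point is not the totals; it is that several of these line items never appear in a vendor-provided ROI estimate.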
Finally, we discussed the exit cost: what would it cost, financially and operationally, to unwind an AI solution in 6 to 12 months if it underperforms? Most operators had not modeled that scenario. For a mission-critical workflow that has been partially automated, the answer can be far more disruptive than expected.
Vendor Reality and AI Fatigue on the Table

Many leaders were already frustrated with the noise from vendors, unproven solutions, and six-week AI pilots that were still open after six months.
When it comes to the use of AI, all technology platforms are building the plane as they fly it. References and tenure are useful, but the more important signal is whether a vendor will sit down with you, scope a six-month pilot, and define quantitative metrics that both sides track together.
I spend roughly 25% of my time talking with vendors and exploring use cases on behalf of the organizations I work with (yes, you read that right). Not all platforms are created equal, nor are all tasks. It is worth being clear-eyed about where a platform you're considering is genuinely performing well in infusion right now versus where it is still maturing. The assumption that a vendor that handles fax processing really well must be equally good at BIV or prior authorization has caught a number of operators off guard. A platform that manages prior authorization well for the pharmacy benefit may not perform as well on the medical benefit.
Despite everything, there was no cynicism about AI among infusion leaders. These are operators managing organizations that care for patients receiving high-cost, life-critical therapies. They want better tools and are ready for this technology.
What they needed, and what I hope the discussion provided, was a framework for asking harder questions before committing. The V3 Framework, with its insistence on Verification, Validation, and honest Value measurement, is that framework. It does not slow AI adoption; it makes it durable.
The operators who build governance infrastructure before the pressure peaks, who define their Verification & Validation protocols before the first pilot goes live, and who ask the fully loaded ROI question before signing the contract are the ones who will still be describing their AI investments as successes in three years.
Chris Hilger is the CEO of SolisRx and a contributor to NICA on AI adoption in infusion operations. He moderated the "Operationalizing AI in Infusion" roundtable at NHIA 2026 in Denver. He also spoke on "Better Denial Management with Simple AI Models". The V3 Framework was originally published on infusioncenter.org in December 2025.