The clinical case for AI-assisted radiology doesn't sell itself to hospital IT. And it shouldn't — that's not IT's job. Their job is to protect a clinical environment where a PACS outage during peak ED hours has direct patient care consequences, where a HIPAA breach costs seven figures in penalties and indefinite reputational damage, and where every new vendor connection is a new attack surface. The clinical value conversation is for department heads and CMOs. The IT conversation is different, and it needs to happen on different terms.
We've completed IT security reviews at 14 hospital systems over the past two years. Here's what we've learned about where the friction actually comes from — and what actually resolves it.
The PACS Integration Question Is the First One
Before security architecture, before uptime SLAs, the first question from hospital IT is almost always about PACS compatibility. The PACS is the one clinical system that most hospital IT departments are genuinely reluctant to touch. PACS integrations have historically been brittle, upgrade-sensitive, and poorly documented. Vendors have gone out of business. Custom integrations have been abandoned. The institutional memory of a painful PACS migration is long.
The question is whether AI integration requires touching the PACS directly, and the honest answer is: it depends on the architecture. DICOM-compliant pull integrations — where the AI system requests studies from the PACS via standard DICOM protocols without requiring any agent installation on the PACS server — are the path of least resistance. They're what we use. The AI system is a DICOM node on the network, like any other. It requests studies, processes them, and returns structured results via HL7 or DICOM SR, depending on what the downstream workflow needs.
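For the engineers in the room, the pull pattern is easy to demonstrate. Below is a minimal sketch of the kind of Study Root C-FIND our node issues, written against pynetdicom 2.x; the AE titles, hostname, and port are placeholders for illustration, not site configuration.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE

# Study Root Query/Retrieve - FIND SOP Class UID (spelled out as a UID
# string to avoid version-specific pynetdicom import names)
STUDY_ROOT_FIND = '1.2.840.10008.5.1.4.1.2.2.1'

ae = AE(ae_title='AI_NODE')                  # placeholder AE title
ae.add_requested_context(STUDY_ROOT_FIND)

query = Dataset()
query.QueryRetrieveLevel = 'STUDY'
query.ModalitiesInStudy = 'CT'
query.StudyInstanceUID = ''                  # empty = return this attribute

assoc = ae.associate('pacs.example.internal', 104, ae_title='HOSP_PACS')
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, STUDY_ROOT_FIND):
        # 0xFF00 / 0xFF01 are the C-FIND "pending" statuses, i.e. a match
        if status and status.Status in (0xFF00, 0xFF01):
            print(identifier.StudyInstanceUID)
    assoc.release()
```

From the PACS's point of view this is indistinguishable from any other DICOM node on the network, which is exactly the point.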
This architecture has been accepted by every IT team we've worked with. It doesn't require PACS vendor involvement, doesn't require custom middleware, and doesn't create a dependency on the PACS upgrade cycle. The integration test we run before deployment is a synthetic DICOM study routing exercise that the hospital's own PACS administrator can run and validate independently. That procedural step matters — it means IT isn't taking our word for compatibility, they're verifying it themselves.
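A minimal version of that routing exercise can be scripted with pydicom and pynetdicom: build a fully synthetic Secondary Capture study and C-STORE it to the AI node, then confirm the structured result lands where the workflow expects it. The identifiers, hostnames, and AE titles below are illustrative.

```python
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid
from pynetdicom import AE

SECONDARY_CAPTURE = '1.2.840.10008.5.1.4.1.1.7'  # SC Image Storage

# Build a fully synthetic study: no patient data anywhere in the test
ds = Dataset()
ds.file_meta = FileMetaDataset()
ds.file_meta.TransferSyntaxUID = ExplicitVRLittleEndian
ds.SOPClassUID = SECONDARY_CAPTURE
ds.SOPInstanceUID = generate_uid()
ds.StudyInstanceUID = generate_uid()
ds.SeriesInstanceUID = generate_uid()
ds.PatientName = 'ROUTING^TEST'              # clearly synthetic identifiers
ds.PatientID = 'SYNTH-0001'
ds.Modality = 'OT'
ds.Rows = ds.Columns = 64
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = 'MONOCHROME2'
ds.BitsAllocated = ds.BitsStored = 8
ds.HighBit = 7
ds.PixelRepresentation = 0
ds.PixelData = b'\x00' * (64 * 64)           # blank 64x64 test image

ae = AE(ae_title='PACS_ADMIN')
ae.add_requested_context(SECONDARY_CAPTURE)
assoc = ae.associate('ai-node.example.internal', 11112, ae_title='AI_NODE')
if assoc.is_established:
    status = assoc.send_c_store(ds)
    print(f'C-STORE status: 0x{status.Status:04X}')  # 0x0000 = success
    assoc.release()
```

A PACS administrator can run this from any workstation on the imaging network; nothing in it touches patient data.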
Data Residency and PHI Handling
The HIPAA question is always about where patient data goes. Medical imaging studies are PHI under HIPAA, and any vendor system processing them is a business associate. The hospital's information security team needs to see the Business Associate Agreement, understand the data flow architecture, and confirm that PHI handling meets their institutional policies — which often go beyond the HIPAA baseline.
For most of our hospital clients, the critical issue is on-premises processing. They want the AI model running inside their network perimeter, with studies never leaving the hospital environment. We support this deployment model. The inference engine runs on hardware within the hospital's data center or cloud environment — not on our infrastructure. Study data moves within the hospital network and doesn't traverse the public internet for processing.
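One check information security teams can run themselves during validation: confirm the inference host has no route to the public internet. A minimal sketch, assuming the host is expected to fail any outbound connection attempt:

```python
import socket

def assert_no_internet_egress(probe_host='8.8.8.8', probe_port=53, timeout=3):
    """Pass only if this host cannot open an outbound connection to a
    well-known public endpoint. Run from the inference host itself."""
    try:
        socket.create_connection((probe_host, probe_port), timeout).close()
    except OSError:
        print('OK: no outbound route from this host')
        return
    raise AssertionError('Outbound route to the public internet exists')
```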
There is a separate question about what diagnostic data we use for model improvement. Our standard contracts are explicit: we don't use patient data from deployed hospitals for model training without a separate research agreement, IRB coverage, and explicit hospital authorization. De-identified aggregate performance metrics used for monitoring don't leave the hospital. This was a sticking point in negotiations until we clarified the contractual language: make sure your BAA and your SaaS agreement actually say what you think they say on this point.
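Where the de-identification mechanics do come up in review, they are simple to show. A sketch of tag-level PHI stripping with pydicom; the keyword list is an illustrative subset, not a full de-identification profile:

```python
from pydicom.dataset import Dataset

# Illustrative subset only; a real profile follows DICOM PS3.15 Annex E
PHI_KEYWORDS = ['PatientName', 'PatientID', 'PatientBirthDate',
                'AccessionNumber', 'ReferringPhysicianName']

def strip_phi(ds: Dataset) -> Dataset:
    """Remove direct identifiers before metrics aggregation."""
    for keyword in PHI_KEYWORDS:
        if keyword in ds:            # pydicom supports keyword membership
            delattr(ds, keyword)     # ...and attribute-style deletion
    return ds
```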
Uptime and Failover
IT needs to know what happens when the AI system is unavailable. The clinical answer is straightforward: the AI is a decision support tool, not a prerequisite for radiology reads. If it's down, radiologists read the old way. But IT wants to see this documented in the clinical workflow design, not just asserted. They've been burned by vendors who said their tool was "optional" and turned out to be deeply embedded in workflows that broke when the tool was unavailable.
We address this with a designed fallback mode that's tested during go-live validation. When the AI system is unreachable — network outage, planned maintenance, unplanned downtime — the worklist reverts to standard FIFO ordering automatically, without any manual IT intervention. Radiologists see a status indicator showing AI triage is offline. Clinical workflow continues. This design requirement isn't onerous to build, but it needs to be explicit, tested, and contractually defined in the uptime SLA.
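In code terms, the fallback is a few lines of defensive logic rather than an architectural feature. The sketch below is illustrative only; `triage_client` and its `scores` method are hypothetical stand-ins for the real integration:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Study:
    accession: str
    arrival_time: datetime

def order_worklist(studies, triage_client, timeout_s=2.0):
    """AI-ranked ordering when the triage service answers; plain
    arrival-order FIFO the moment it doesn't."""
    try:
        # `triage_client.scores` is a hypothetical stand-in for the real call
        scores = triage_client.scores([s.accession for s in studies],
                                      timeout=timeout_s)
        return sorted(studies, key=lambda s: -scores.get(s.accession, 0.0))
    except (ConnectionError, TimeoutError):
        # AI triage offline: revert to FIFO automatically, no manual step
        return sorted(studies, key=lambda s: s.arrival_time)
```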
Our standard SLA is 99.5% availability during defined clinical hours, with a 4-hour recovery time objective for unplanned outages. More important than the number is the escalation path — who do IT staff call at 2am when something is wrong, and what can they expect in terms of response time and initial triage? We maintain a 24/7 technical support line staffed by engineers who know the integration architecture. That's something IT asks for explicitly and rarely receives from AI vendors whose support model is "submit a ticket."
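It's worth doing the arithmetic on what 99.5% actually permits. Assuming a 12-hour clinical window for illustration (the contract defines the real one):

```python
CLINICAL_HOURS_PER_DAY = 12   # assumption: defined clinical window
DAYS_PER_MONTH = 30
AVAILABILITY = 0.995

budget_h = CLINICAL_HOURS_PER_DAY * DAYS_PER_MONTH * (1 - AVAILABILITY)
print(f'Permitted downtime: {budget_h:.1f} h per month')  # -> 1.8 h
```

That 1.8 hours per month is the number IT will hold against the 4-hour RTO, so be prepared to explain how the two interact.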
The Security Review Process
Most hospital systems now conduct vendor security assessments using the HITRUST framework or equivalent. The review typically covers access controls, encryption standards, vulnerability management, penetration testing documentation, incident response procedures, and vendor third-party risk management. Plan for this process to take 60-90 days at institutions with mature security programs, and have your security documentation ready before you ask IT to schedule the review.
The assessments we've been through have consistently flagged two areas requiring additional documentation: our software update and patching procedures (specifically, how we handle security patches to the AI inference engine within the hospital environment without requiring a full re-deployment), and our supply chain security documentation covering the open-source components in our inference stack. Both are legitimate concerns and worth having detailed answers for before you walk into the review.
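On the patching point, what reviewers want to see is that a security fix can be verified and activated in place. A hedged sketch of the checksum step; the manifest format and file handling here are hypothetical, not our actual update tooling:

```python
import hashlib
import json
from pathlib import Path

def verify_patch(artifact: Path, manifest: Path) -> bool:
    """Compare the patch artifact's SHA-256 against the vendor manifest
    before the new inference engine version is activated."""
    expected = json.loads(manifest.read_text())['sha256']
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return actual == expected
```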
The IT teams that have been most difficult to work with aren't the ones with the most rigorous security requirements — those teams have a defined process and clear criteria. The hardest reviews are with departments that are understaffed, haven't done a medical AI vendor review before, and don't have a clear framework for what "good" looks like. In those cases, the most useful thing we've done is bring our own reference checklist based on previous reviews and offer to walk the IT team through it, which helps them structure the assessment rather than leaving everything open-ended.
IT pushback on AI integration is rational, well-founded, and best addressed head-on rather than worked around. The vendors who've built trust with hospital IT teams aren't the ones with the slickest clinical pitch — they're the ones who show up with complete documentation, patient data handling commitments in writing, and a technical team that can speak to the integration architecture at a level of specificity that convinces skeptical engineers.