The hospital is a 200-bed regional medical center with 1,200 employees, part of a three-hospital health system serving a mid-size metropolitan area. It operates emergency, surgical, diagnostic imaging, and outpatient services. Like every hospital in the country, it faces the same relentless pressures: staffing shortages, documentation burden, reimbursement complexity, and a regulatory environment that punishes errors with six- and seven-figure penalties.
AI arrived as a relief valve. Physicians heard about AI scribes that could eliminate hours of daily documentation work. Revenue cycle managers read about AI coding assistants that could accelerate medical billing. Radiologists explored AI image analysis tools that promised faster preliminary reads. In each case, the motivation was the same: clinicians and administrators drowning in work found tools that genuinely helped them stay afloat.
The hospital's IT department had done its job with the tools it knew about. Five AI tools were on the approved list. Each one had been security-reviewed. Each one had a Business Associate Agreement (BAA) in place. Each one was documented in the compliance management system. The CISO was confident in his governance posture — and for those five tools, he was right to be.
The problem was the 14 tools he did not know about. They were adopted by individual departments, individual physicians, and individual administrators who needed to work faster and found tools that helped. Nobody checked with IT. Nobody consulted compliance. Nobody thought to ask whether a BAA was required. The tools simply appeared in clinical and administrative workflows — silently, productively, and in direct violation of HIPAA requirements that the hospital did not even realize were being triggered.
The discovery began during a routine compliance review. The hospital's compliance officer was preparing for a state survey and asked the CISO a standard question: "Can you confirm the complete list of AI tools in use at this facility, along with their BAA status and data handling documentation?" The CISO pulled the approved tools list, verified the five entries, and was ready to sign off.
Then he paused. He had attended a healthcare cybersecurity conference the previous month where a peer CISO described discovering 30+ unauthorized AI tools at a comparable hospital. The story had been alarming enough to stick with him. He decided to spend one afternoon verifying that the approved list was actually complete before certifying it to compliance.
That afternoon turned into two weeks. The initial investigation — a combination of network traffic analysis, expense report review, and targeted conversations with department heads — revealed AI tools that IT had never seen, approved, or documented. The consulting radiologist was running a trial of an AI image analysis tool. The hospitalist group had adopted an AI scribe. The revenue cycle team was using an AI coding assistant. The nursing informatics team was testing an AI documentation tool. Each department had made its own decision. Each one believed they were using a helpful productivity tool. None of them had involved IT or compliance in the decision.
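What does that network traffic analysis actually look for? A minimal sketch, assuming a web proxy log in CSV form and a hand-maintained watchlist of AI vendor domains (both the log format and the domains here are illustrative assumptions, not details from the engagement):

```python
# Minimal sketch: flag outbound traffic to known AI SaaS endpoints in a web
# proxy log. The domain watchlist and log columns are illustrative assumptions.
import csv
from collections import Counter

# Hypothetical watchlist. In practice this is a maintained feed of AI vendor
# domains, updated as new tools appear on the market.
AI_DOMAINS = {
    "api.example-scribe.com",
    "upload.example-imaging.ai",
    "app.example-coder.io",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests to watched AI domains, grouped by source subnet."""
    hits = Counter()
    with open(path, newline="") as f:
        # Assumes columns: timestamp, src_ip, dest_host
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                subnet = row["src_ip"].rsplit(".", 1)[0]  # first three octets
                hits[(host, subnet)] += 1
    return hits

if __name__ == "__main__":
    for (host, subnet), count in scan_proxy_log("proxy.log").most_common():
        print(f"{host:40s} {subnet}.x  {count} requests")
```

Grouping hits by source subnet is what turns raw telemetry into the targeted department conversations the investigation relied on: the investigator walks in knowing which floor the traffic is coming from.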
When the full audit was complete, the count stood at 19 AI tools in active use. Of the 14 unapproved tools, 7 were being used in departments that handle Protected Health Information. Not theoretical exposure. Active, daily use with patient data — names, diagnoses, medication lists, treatment plans, imaging studies — flowing through tools that had never been security-reviewed and had no BAA in place.
The CISO's conversation with the hospital's Chief Medical Officer was direct: "We have a HIPAA compliance problem that I did not know about until this week. Seven AI tools are handling patient data without Business Associate Agreements. Under HIPAA, each one of these represents an independent violation exposure. I need to brief the board, and I need outside help to remediate this correctly."
The board was notified within 48 hours. Coriven was engaged within the week.
The seven unapproved tools touching PHI represented a range of clinical and administrative use cases. Three incidents stood out — not because they were the most egregious, but because they illustrated the three most common patterns of shadow AI adoption in healthcare: the clinical productivity tool, the administrative efficiency tool, and the departmental trial that never went through procurement.
Every one of these incidents followed the same pattern: a clinician or administrator with a legitimate workflow problem found a tool that solved it, adopted the tool without organizational involvement, and unknowingly created a HIPAA compliance exposure that the organization had no visibility into and no mechanism to detect. The common thread was not negligence — it was the absence of a detection and governance system capable of keeping pace with the speed of AI adoption in a clinical environment.
Coriven's ARIA (AI Risk and Impact Assessment) framework classified every discovered tool by the data it handles, the contractual protections in place, the regulatory exposure it creates, and the remediation complexity involved. The classification was not based on what the tools could theoretically access — it was based on what the tools were actually processing, verified through data flow tracing, usage analysis, and department interviews.
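The details of the ARIA rubric are Coriven's, but the four classification axes named above can be sketched as a simple data structure. The field names and tier thresholds below are illustrative assumptions, not the actual scoring model:

```python
# Illustrative encoding of the four ARIA classification axes: data handled,
# contractual protections, regulatory exposure, remediation complexity.
# Tier logic here is an assumption for the sketch, not Coriven's rubric.
from dataclasses import dataclass
from enum import Enum

class PHIExposure(Enum):
    DIRECT = "direct"          # names, diagnoses, imaging, treatment plans
    CONTEXTUAL = "contextual"  # scheduling data, case references identifying patients
    NONE = "none"

@dataclass
class ToolAssessment:
    name: str
    phi: PHIExposure        # what the tool was verified to process, not vendor claims
    baa_in_place: bool      # contractual protection
    remediation_days: int   # estimated remediation complexity

    @property
    def tier(self) -> str:
        if self.phi is PHIExposure.DIRECT and not self.baa_in_place:
            return "T1 Critical"   # independent HIPAA violation exposure
        if self.phi is PHIExposure.CONTEXTUAL and not self.baa_in_place:
            return "T2 High"
        if self.phi is not PHIExposure.NONE:
            return "T3 Managed"    # PHI present, but covered by an executed BAA
        return "T4 Low"

scribe = ToolAssessment("AI scribe (personal account)", PHIExposure.DIRECT,
                        baa_in_place=False, remediation_days=10)
print(scribe.name, "->", scribe.tier)   # AI scribe (personal account) -> T1 Critical
```

The design point worth noticing: the `phi` field records verified data flow, which is exactly why the classification had to be grounded in tracing and interviews rather than vendor documentation.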
The ARIA classification gave the hospital something it had never had: a prioritized, risk-weighted view of its entire AI landscape. Instead of treating all 19 tools as equally urgent — which would have overwhelmed the compliance team — the classification allowed the hospital to focus remediation effort where the risk was highest and work systematically through lower tiers.
The three T1 Critical tools — the AI scribe, the billing coder, and the radiology trial — each represented an independent HIPAA violation exposure. Under HIPAA, a covered entity that allows a business associate to access PHI without a BAA has committed a violation. Not "might have a problem." Has one. The Office for Civil Rights (OCR) penalty tiers range from $141 to $2,134,831 per violation category per year — and each of these three tools constituted a separate violation category. Across three independent categories, the theoretical annual exposure therefore runs from a few hundred dollars to more than $6.4 million.
It is worth noting what HIPAA does and does not say about AI. HIPAA was enacted in 1996. It does not mention artificial intelligence, large language models, machine learning, or AI scribes. It does not need to. The law is clear: any entity that creates, receives, maintains, or transmits PHI on behalf of a covered entity is a business associate and requires a BAA. Every AI tool that touches PHI falls squarely within that definition. The question is not whether HIPAA applies to AI. The question is whether your hospital knows which AI tools are touching PHI right now.
The hospital executed the remediation plan over 21 business days. The urgency was driven by two factors: the board had been notified of the exposure, and the state survey was approaching. There was no room for a slow rollout. Every T1 tool needed to be remediated or replaced before the survey, and every T2 tool needed a BAA in place or a documented remediation timeline.
The remediation was not a blunt instrument. Each tool was handled based on its specific situation. The AI scribe vendor had an enterprise healthcare tier with full BAA support — the physician had simply signed up for a personal account instead of going through procurement. Migration to the enterprise tier took two weeks and preserved the clinical workflow the physician depended on. The billing coding tool had no healthcare-compliant offering, but the hospital's EHR vendor already offered an AI coding module as an add-on — the migration preserved the productivity gain while bringing the tool inside the BAA perimeter. The radiology trial was paused, a proper vendor security assessment was completed, a BAA was signed, and the trial resumed using de-identified data only.
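For the radiology trial, "de-identified data only" hinges on the metadata scrubbing step. One way to approximate it with the open-source pydicom library is sketched below; a production pipeline would implement the full DICOM PS3.15 confidentiality profile, and this sketch blanks only a handful of obvious identifier tags:

```python
# Minimal sketch of DICOM identifier scrubbing with the open-source pydicom
# library. A real de-identification pipeline follows the DICOM PS3.15 profile;
# this version only clears a few obvious identifier elements.
import pydicom

IDENTIFIER_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName",
    "InstitutionName", "AccessionNumber",
]

def scrub(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for tag in IDENTIFIER_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""  # blank the value, keep the element type
    ds.remove_private_tags()  # vendor-private tags frequently carry identifiers too
    ds.save_as(out_path)

scrub("study_0001.dcm", "study_0001_deid.dcm")
```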
The goal was not to eliminate AI. The goal was to bring every AI tool that touches PHI inside the hospital's governance and compliance framework — with verifiable evidence that the coverage is real, not aspirational.
Governance coverage measures the percentage of AI tools in the environment that have been security-reviewed, risk-classified, and contractually covered with appropriate agreements. At the close of remediation, 17 of the 19 discovered tools met that bar, a coverage rate of 89%. The remaining 11% represents two T4 low-risk tools in the vendor evaluation pipeline with documented remediation timelines.
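The arithmetic behind those figures is simple division, assuming the two T4 tools are the only entries still outside the perimeter:

```python
# Governance coverage = covered tools / total tools, assuming the two T4 tools
# in the vendor-evaluation pipeline are the only uncovered entries.
total_tools = 19
uncovered = 2  # T4 low-risk, with documented remediation timelines
coverage = (total_tools - uncovered) / total_tools
print(f"coverage:  {coverage:.0%}")      # -> 89%
print(f"remaining: {1 - coverage:.0%}")  # -> 11%
```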
Each finding was scored on Coriven's 5-point weighted model adapted for healthcare: PHI exposure severity, HIPAA violation tier, remediation urgency, clinical workflow impact, and organizational complexity. The scoring ensured that the highest-risk exposures were closed first while preserving the clinical productivity that the tools had legitimately created.
| Finding | Score | State at Audit | State After |
|---|---|---|---|
| **AI Scribe — PHI in Non-BAA Cloud Storage**<br>HIPAA Compliance · Clinical Data | 5.00 · Do First | Hundreds of patient encounters recorded and stored in cloud environment with no BAA — audio, transcripts, and clinical notes all exposed | Migrated to vendor's enterprise healthcare tier — BAA executed, historical data migrated to compliant storage, personal account closed |
| **Radiology AI Trial — Patient-Identifiable Imaging Data**<br>HIPAA Compliance · Imaging Data | 4.90 · Do First | DICOM imaging studies with patient identifiers transmitted to vendor with no BAA, no security review, no procurement involvement | Trial paused — vendor security assessment completed, BAA signed, trial resumed with de-identified data only, DICOM metadata scrubbed |
| **AI Billing Coder — Diagnosis Codes as PHI**<br>HIPAA Compliance · Revenue Cycle | 4.70 · Do First | Clinical documentation and diagnosis codes processed through tool with no BAA — billing team unaware that coding data constitutes PHI in context | Tool replaced with EHR-native AI coding module already covered under existing BAA — productivity preserved, compliance restored |
| **4 T2 Tools — Missing BAAs, Contextual PHI Exposure**<br>Compliance · Data Governance | 3.80 · Do Next | Four tools handling scheduling data, department communications, and case references that constitute PHI in context — vendors have SOC 2 but no BAA | BAAs negotiated and executed with all four vendors — two had standard BAA templates ready, two required negotiation (completed in 10 business days) |
| **No AI Discovery or Monitoring Capability**<br>Governance · Continuous Compliance | 3.50 · Do Next | New AI tool adoption completely invisible to IT and compliance — detected only through manual investigation after suspected exposure | Continuous AI monitoring deployed — new tool adoption detected automatically, classified by ARIA tier, and routed to compliance review workflow |
| **No AI-Specific Compliance Documentation**<br>Audit Readiness · Survey Preparation | 3.20 · Do Next | Compliance officer unable to produce AI-specific documentation for state survey — only a general acceptable use policy with no AI-specific provisions | Complete compliance matrix produced — tool-by-tool BAA status, data classification, PHI exposure assessment, and cryptographic verification of audit data |
Before the Coriven engagement, the hospital's answer to "how do you govern AI?" was a policy document. A well-written policy document, but a policy document nonetheless — a statement of intent with no verification mechanism, no discovery capability, and no way to prove that the policy was being followed. When a state surveyor or Joint Commission auditor asks "how do you govern AI?", a policy document is the minimum acceptable answer. It is not a strong one.
After remediation, the hospital has something fundamentally different: a compliance matrix with cryptographic verification on every data point. Every AI tool in the environment is inventoried. Every data flow is classified. Every tool that handles PHI has a BAA in place or has been replaced with a tool that does. The compliance officer does not hand the surveyor a policy and hope they do not dig deeper. She hands them a verified, auditable, tool-by-tool compliance matrix and invites them to dig as deep as they want.
That is the difference between a governance program and a governance document.
The hospital now runs continuous AI monitoring. Every new AI tool adoption is detected, classified by ARIA risk tier, and routed to compliance review before PHI can flow through it. Not because of a policy that says "check with IT first" — because of a detection system that sees it happen and triggers the workflow automatically. HIPAA does not mention AI. It does not need to. Every AI tool that touches PHI is in scope. The question is not whether HIPAA applies. The question is whether your hospital can prove it is compliant. This one can.
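As a sketch of what that detect-classify-route pipeline looks like in principle (the event shape, tier logic, and ticketing call below are illustrative assumptions, not the monitoring product itself):

```python
# Illustrative detect -> classify -> route loop for continuous AI monitoring.
# Event fields, tier heuristic, and the ticketing stand-in are assumptions
# for this sketch, not the deployed system described above.
from dataclasses import dataclass

@dataclass
class NewToolEvent:
    domain: str       # e.g. a first-seen AI SaaS endpoint from proxy/DNS telemetry
    department: str   # resolved from the source subnet

def classify(event: NewToolEvent) -> str:
    """Assign a provisional ARIA tier; PHI-adjacent departments escalate by default."""
    phi_departments = {"radiology", "hospitalist", "revenue-cycle", "nursing"}
    return "T1-review" if event.department in phi_departments else "T4-review"

def route_to_compliance(event: NewToolEvent, tier: str) -> None:
    # Stand-in for a ticketing/workflow API call (e.g. ServiceNow or Jira).
    print(f"[{tier}] {event.domain} adopted in {event.department}: review opened")

def on_new_tool_detected(event: NewToolEvent) -> None:
    route_to_compliance(event, classify(event))

on_new_tool_detected(NewToolEvent("api.new-scribe.ai", "hospitalist"))
```

The point of the automation is sequencing: the compliance review is opened when the tool is first seen on the network, before PHI has had time to flow through it.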
This hospital is one of three in the health system. The same shadow AI dynamics that produced 19 tools at this facility are almost certainly present at the other two. The board has approved a system-wide AI governance initiative using the same methodology — discovery, classification, remediation, and continuous monitoring — applied across all three hospitals. The goal is a unified compliance posture that can be presented to regulators, payers, and patients as evidence of responsible AI governance.
When the surveyor asks "which AI tools touch PHI and what are the BAA arrangements?", the answer needs to be a verified compliance matrix — not a policy document and a promise. Coriven maps your AI landscape, classifies every tool by PHI exposure, and builds the governance framework that makes your answer audit-ready.
Start the Conversation →

Disclaimer: This use case is based on a composite engagement profile using the Coriven Method. The hospital described is a representative profile, not a specific client. All findings reflect the methodology Coriven applies to real engagements. Green numbers are verified from source data. Indigo numbers are calculated using defined methodology. Gold numbers are estimated from baseline data and implementation modeling. Actual results vary.
Every number in this use case is confidence-tagged by color — because we believe if we can't prove it, we should say so.