Coriven Use Case — Healthcare

Healthcare AI Compliance: A 200-Bed Hospital Discovers 19 AI Tools — 7 of Them Touch PHI Without a BAA

IT knew about 5 AI tools. The HIPAA compliance team assumed that was the complete count. The actual number was 19 — and 7 of the unapproved tools were handling Protected Health Information.
Number confidence key: Verified — measured directly from source data · Calculated — derived with methodology · Estimated — projected from baseline data

A Regional Hospital Where AI Arrived Before Governance Did

The hospital is a 200-bed regional medical center with 1,200 employees, part of a three-hospital health system serving a mid-size metropolitan area. It operates emergency, surgical, diagnostic imaging, and outpatient services. Like every hospital in the country, it faces the same relentless pressures: staffing shortages, documentation burden, reimbursement complexity, and a regulatory environment that punishes errors with six- and seven-figure penalties.

AI arrived as a relief valve. Physicians heard about AI scribes that could eliminate hours of daily documentation work. Revenue cycle managers read about AI coding assistants that could accelerate medical billing. Radiologists explored AI image analysis tools that promised faster preliminary reads. In each case, the motivation was the same: clinicians and administrators drowning in work found tools that genuinely helped them stay afloat.

The hospital's IT department had done their job with the tools they knew about. Five AI tools were on the approved list. Each one had been security-reviewed. Each one had a Business Associate Agreement (BAA) in place. Each one was documented in the compliance management system. The CISO was confident in his governance posture — and for those five tools, he was right to be.

The problem was the 14 tools he did not know about. They were adopted by individual departments, individual physicians, and individual administrators who needed to work faster and found tools that helped. Nobody checked with IT. Nobody consulted compliance. Nobody thought to ask whether a BAA was required. The tools simply appeared in clinical and administrative workflows — silently, productively, and in direct violation of HIPAA requirements that the hospital did not even realize were being triggered.

1,200
Employees across clinical and administrative departments
19
AI tools discovered in active use (IT knew about 5)
7
Unapproved tools handling PHI with no BAA in place

The CISO Discovers the Gap Between Policy and Reality

The discovery began during a routine compliance review. The hospital's compliance officer was preparing for a state survey and asked the CISO a standard question: "Can you confirm the complete list of AI tools in use at this facility, along with their BAA status and data handling documentation?" The CISO pulled the approved tools list, verified the five entries, and was ready to sign off.

Then he paused. He had attended a healthcare cybersecurity conference the previous month where a peer CISO described discovering 30+ unauthorized AI tools at a comparable hospital. The story had been alarming enough to stick with him. He decided to spend one afternoon verifying that the approved list was actually complete before certifying it to compliance.

That afternoon turned into two weeks. The initial investigation — a combination of network traffic analysis, expense report review, and targeted conversations with department heads — revealed AI tools that IT had never seen, approved, or documented. The consulting radiologist was running a trial of an AI image analysis tool. The hospitalist group had adopted an AI scribe. The revenue cycle team was using an AI coding assistant. The nursing informatics team was testing an AI documentation tool. Each department had made its own decision. Each one believed they were using a helpful productivity tool. None of them had involved IT or compliance in the decision.
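Network traffic analysis in this context usually means matching egress logs against a watchlist of known AI vendor domains. The sketch below illustrates that idea against a hypothetical proxy-log CSV; the column name, file name, and domain list are illustrative assumptions, not the actual detection tooling used in the engagement.

```python
# Minimal sketch: flag outbound traffic to known AI SaaS domains in a
# proxy-log export. Domain list and log format are illustrative only.
import csv
from collections import Counter

# Illustrative watchlist: a real one would come from a maintained
# AI-vendor intelligence feed with thousands of entries.
AI_DOMAINS = {
    "api.openai.com",
    "scribe-vendor.example.com",      # hypothetical AI scribe backend
    "coding-assistant.example.com",   # hypothetical billing-coder API
}

def discover_ai_traffic(log_path: str) -> Counter:
    """Count requests per AI domain in a proxy-log CSV with a 'host' column."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match exact domains and their subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in discover_ai_traffic("proxy_log.csv").most_common():
        print(f"{host}: {count} requests")
```

A scan like this only surfaces candidates; expense reports and department interviews remain necessary to confirm what each tool actually processes.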

When the full audit was complete, the count stood at 19 AI tools in active use. Of the 14 unapproved tools, 7 were being used in departments that handle Protected Health Information. Not theoretical exposure. Active, daily use with patient data — names, diagnoses, medication lists, treatment plans, imaging studies — flowing through tools that had never been security-reviewed and had no BAA in place.

The CISO's conversation with the hospital's Chief Medical Officer was direct: "We have a HIPAA compliance problem that I did not know about until this week. Seven AI tools are handling patient data without Business Associate Agreements. Under HIPAA, each one of these represents an independent violation exposure. I need to brief the board, and I need outside help to remediate this correctly."

The board was notified within 48 hours. Coriven was engaged within the week.

Three Departments. Three Tools. Three Distinct PHI Exposures.

The seven unapproved tools touching PHI represented a range of clinical and administrative use cases. Three incidents stood out — not because they were the most egregious, but because they illustrated the three most common patterns of shadow AI adoption in healthcare: the clinical productivity tool, the administrative efficiency tool, and the departmental trial that never went through procurement.

Incident 1 — AI Scribe Storing Patient Recordings in Non-BAA Cloud

A hospitalist physician adopted an AI scribe tool to transcribe patient encounters and generate structured clinical notes. The tool records the physician-patient conversation, processes it through a speech-to-text model, and produces a SOAP note ready for the EHR. The physician found it at a medical conference exhibit booth and signed up for a personal account on-site. The tool stores audio recordings and generated transcripts in a cloud environment that has no BAA with the hospital. Every patient encounter recorded through that tool — patient names, chief complaints, diagnoses, medication lists, treatment plans, family history, and social history — was being stored by a company with zero contractual obligation to protect it under HIPAA. The physician had been using it for four months. Hundreds of patient encounters. The physician's intent was entirely clinical: "I was spending three hours a night on documentation. This tool gave me my evenings back." He was not wrong about the value. He was entirely unaware of the risk.

Incident 2 — AI Billing Coder Processing Diagnosis Codes Without BAA

The revenue cycle team started using an AI coding assistant to accelerate medical billing. Coders would paste clinical documentation into the tool, and it would suggest ICD-10 and CPT codes along with supporting rationale. The tool processes diagnosis descriptions, procedure narratives, and frequently the underlying clinical notes themselves. Diagnosis codes combined with dates of service and provider information constitute PHI under HIPAA — a fact that the billing team did not consider because they thought of coding as "administrative, not clinical." The tool had no BAA, no SOC 2 certification, and no documented data retention policy. The billing manager's reasoning was practical: "Our coders are backlogged three weeks. This tool cuts coding time by 40%. I didn't think we needed IT involved for a billing tool." She was wrong about not needing IT. She was right about the productivity gain — which made the remediation conversation harder, not easier.

Incident 3 — Radiology AI Trial Never Security-Reviewed

The radiology department initiated a trial of an AI image analysis tool to assist with preliminary reads on chest X-rays and CT scans. The trial was set up directly between the department chair and the vendor's sales team. No procurement process was followed. No security review was conducted. No BAA was signed. The vendor was given access to a test set of imaging studies that included patient identifiers — names, medical record numbers, and dates of birth embedded in the DICOM metadata. The department chair's perspective: "We're evaluating whether the technology works. Once we decide, we'll go through the formal process." The formal process should have come first. The "trial" had been running for six weeks. Patient-identifiable imaging data had been transmitted to and stored by a vendor with no contractual obligation to protect it, delete it, or even acknowledge that it constitutes PHI.
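The DICOM metadata exposure is worth making concrete. The sketch below uses the pydicom library to blank a handful of identifying tags before data leaves the hospital. It is a minimal illustration only: a compliant de-identification pipeline follows the full DICOM PS3.15 confidentiality profile (dozens of tags plus review of burned-in pixel data), and the tag subset here is an assumption for demonstration.

```python
# Minimal sketch of scrubbing patient identifiers from DICOM metadata
# using pydicom. Not a complete de-identification pipeline.
import pydicom

# Illustrative subset of identifying tags, not an exhaustive list.
TAGS_TO_BLANK = ["PatientName", "PatientID", "PatientBirthDate",
                 "ReferringPhysicianName", "InstitutionName"]

def scrub_dicom(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for tag in TAGS_TO_BLANK:
        if tag in ds:
            setattr(ds, tag, "")   # blank the identifying value
    ds.remove_private_tags()       # drop vendor-specific private tags
    ds.save_as(out_path)

scrub_dicom("study_0001.dcm", "study_0001_deid.dcm")
```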

Every one of these incidents followed the same pattern: a clinician or administrator with a legitimate workflow problem found a tool that solved it, adopted the tool without organizational involvement, and unknowingly created a HIPAA compliance exposure that the organization had no visibility into and no mechanism to detect. The common thread was not negligence — it was the absence of a detection and governance system capable of keeping pace with the speed of AI adoption in a clinical environment.

19 Tools Classified. 7 with PHI Exposure. 3 at Critical.

Coriven's ARIA (AI Risk and Impact Assessment) framework classified every discovered tool by the data it handles, the contractual protections in place, the regulatory exposure it creates, and the remediation complexity involved. The classification was not based on what the tools could theoretically access — it was based on what the tools were actually processing, verified through data flow tracing, usage analysis, and department interviews.

The ARIA classification gave the hospital something it had never had: a prioritized, risk-weighted view of its entire AI landscape. Instead of treating all 19 tools as equally urgent — which would have overwhelmed the compliance team — the classification allowed the hospital to focus remediation effort where the risk was highest and work systematically through lower tiers.

3
T1 Critical — Active PHI processing, no BAA, no security review, independent HIPAA violation exposure
4
T2 High — Contextual PHI exposure (scheduling, case references), missing BAAs, vendor has SOC 2
5
T3 Medium — Administrative data, no PHI, limited compliance risk, governance gap
7
T4 Low — General productivity, non-clinical, no sensitive data exposure
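As an illustration, the tier definitions above reduce to a simple decision rule over a few verified facts about each tool. The sketch below is an assumption about how such a rule might look in code; the actual ARIA rubric weighs more factors than this.

```python
# Illustrative reduction of the ARIA tier definitions to a decision rule.
# Inputs reflect verified facts from data flow tracing, not vendor claims.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    processes_phi: bool      # active PHI processing, verified
    contextual_phi: bool     # scheduling data, case references, etc.
    has_baa: bool
    security_reviewed: bool

def aria_tier(tool: AITool) -> str:
    if tool.processes_phi and not tool.has_baa:
        return "T1 Critical"   # active PHI, no contractual protection
    if tool.contextual_phi and not tool.has_baa:
        return "T2 High"       # PHI in context, BAA missing
    if not tool.security_reviewed:
        return "T3 Medium"     # no PHI, but an ungoverned tool
    return "T4 Low"            # non-clinical, no sensitive data

print(aria_tier(AITool("AI scribe", True, False, False, False)))  # T1 Critical
```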

The three T1 Critical tools — the AI scribe, the billing coder, and the radiology trial — each represented an independent HIPAA violation exposure. Under HIPAA, a covered entity that allows a business associate to access PHI without a BAA has committed a violation. Not "might have a problem." Has one. The Office for Civil Rights (OCR) penalty tiers range from $141 to $2,134,831 per violation category per year — and each of these three tools constituted a separate violation category.
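For scale, a quick back-of-envelope calculation using the figures cited above:

```python
# Three independent violation categories at OCR's per-category annual range.
PER_CATEGORY_MIN, PER_CATEGORY_MAX = 141, 2_134_831   # USD per year
categories = 3   # AI scribe, billing coder, radiology trial
print(f"${categories * PER_CATEGORY_MIN:,} to ${categories * PER_CATEGORY_MAX:,} per year")
# -> $423 to $6,404,493 per year
```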

It is worth noting what HIPAA does and does not say about AI. HIPAA was enacted in 1996. It does not mention artificial intelligence, large language models, machine learning, or AI scribes. It does not need to. The law is clear: any entity that creates, receives, maintains, or transmits PHI on behalf of a covered entity is a business associate and requires a BAA. Every AI tool that touches PHI falls squarely within that definition. The question is not whether HIPAA applies to AI. The question is whether your hospital knows which AI tools are touching PHI right now.

From 26% Governance Coverage to 89% in 21 Business Days

The hospital executed the remediation plan over 21 business days. The urgency was driven by two factors: the board had been notified of the exposure, and the state survey was approaching. There was no room for a slow rollout. Every T1 tool needed to be remediated or replaced before the survey, and every T2 tool needed a BAA in place or a documented remediation timeline.

The remediation was not a blunt instrument. Each tool was handled based on its specific situation. The AI scribe vendor had an enterprise healthcare tier with full BAA support — the physician had simply signed up for a personal account instead of going through procurement. Migration to the enterprise tier took two weeks and preserved the clinical workflow the physician depended on. The billing coding tool had no healthcare-compliant offering, but the hospital's EHR vendor already offered an AI coding module as an add-on — the migration preserved the productivity gain while bringing the tool inside the BAA perimeter. The radiology trial was paused, a proper vendor security assessment was completed, a BAA was signed, and the trial resumed using de-identified data only.

The goal was not to eliminate AI. The goal was to bring every AI tool that touches PHI inside the hospital's governance and compliance framework — with verifiable evidence that the coverage is real, not aspirational.

Before — State at Audit
19 AI tools in active use: IT aware of 5 — the remaining 14 were shadow AI adopted by individual departments without organizational visibility
7 tools handling PHI without BAAs: Patient data flowing through tools with no contractual obligation to protect it under HIPAA
26% governance coverage: Only 5 of 19 tools had been security-reviewed, risk-classified, and contractually covered
No AI discovery mechanism: New tool adoption invisible to IT and compliance — detected only through manual investigation or incident
Survey readiness unknown: Compliance officer unable to answer "how do you govern AI?" with anything beyond a policy document
After — 21 Business Days Later
All 19 tools inventoried and classified: Complete AI landscape mapped with ARIA risk tier, data sensitivity, BAA status, and assigned owner per tool
0 T1 tools without BAA: AI scribe migrated to enterprise tier, billing coder replaced with EHR-native module, radiology trial restructured with BAA and de-identified data
89% governance coverage: 17 of 19 tools fully governed — remaining 2 low-risk tools in vendor evaluation pipeline with documented timeline
Continuous AI monitoring active: New tool adoption detected automatically and routed to compliance review before PHI can flow through unapproved tools
Survey-ready compliance matrix: Tool-by-tool documentation with BAA status, data classification, PHI exposure assessment, and cryptographic verification
AI Governance Coverage — Before & After
26%
Before — 5 of 19 tools governed
89%
After — 17 of 19 tools governed

Governance coverage measures the percentage of AI tools in the environment that have been security-reviewed, risk-classified, and contractually covered with appropriate agreements. The remaining 11% represents two T4 low-risk tools in the vendor evaluation pipeline with documented remediation timelines.

6 Findings. Every One a Compliance Exposure. Every One Resolved.

Each finding was scored on Coriven's 5-point weighted model adapted for healthcare: PHI exposure severity, HIPAA violation tier, remediation urgency, clinical workflow impact, and organizational complexity. The scoring ensured that the highest-risk exposures were closed first while preserving the clinical productivity that the tools had legitimately created.
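A weighted model like this is straightforward to express. The sketch below uses illustrative weights (the actual Coriven weights are not published here) and shows how a finding that maxes every factor lands at 5.00.

```python
# Hedged sketch of a 5-factor weighted score. Weights are assumptions.
WEIGHTS = {
    "phi_exposure": 0.30,
    "hipaa_tier": 0.25,
    "urgency": 0.20,
    "workflow_impact": 0.15,
    "org_complexity": 0.10,
}

def finding_score(ratings: dict[str, float]) -> float:
    """Combine 1-5 factor ratings into a single weighted score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return round(sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS), 2)

# A finding that maxes every factor, like the AI scribe, scores 5.00.
print(finding_score({k: 5 for k in WEIGHTS}))  # 5.0
```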

Each finding below lists its score and priority, the state at audit, and the state after remediation.

AI Scribe — PHI in Non-BAA Cloud Storage
HIPAA Compliance · Clinical Data
Score: 5.00 — Do First
At audit: Hundreds of patient encounters recorded and stored in a cloud environment with no BAA — audio, transcripts, and clinical notes all exposed
After: Migrated to the vendor's enterprise healthcare tier — BAA executed, historical data migrated to compliant storage, personal account closed

Radiology AI Trial — Patient-Identifiable Imaging Data
HIPAA Compliance · Imaging Data
Score: 4.90 — Do First
At audit: DICOM imaging studies with patient identifiers transmitted to a vendor with no BAA, no security review, no procurement involvement
After: Trial paused — vendor security assessment completed, BAA signed, trial resumed with de-identified data only, DICOM metadata scrubbed

AI Billing Coder — Diagnosis Codes as PHI
HIPAA Compliance · Revenue Cycle
Score: 4.70 — Do First
At audit: Clinical documentation and diagnosis codes processed through a tool with no BAA — billing team unaware that coding data constitutes PHI in context
After: Tool replaced with the EHR-native AI coding module already covered under the existing BAA — productivity preserved, compliance restored

4 T2 Tools — Missing BAAs, Contextual PHI Exposure
Compliance · Data Governance
Score: 3.80 — Do Next
At audit: Four tools handling scheduling data, department communications, and case references that constitute PHI in context — vendors have SOC 2 but no BAA
After: BAAs negotiated and executed with all four vendors — two had standard BAA templates ready, two required negotiation (completed in 10 business days)

No AI Discovery or Monitoring Capability
Governance · Continuous Compliance
Score: 3.50 — Do Next
At audit: New AI tool adoption completely invisible to IT and compliance — detected only through manual investigation after suspected exposure
After: Continuous AI monitoring deployed — new tool adoption detected automatically, classified by ARIA tier, and routed to compliance review workflow

No AI-Specific Compliance Documentation
Audit Readiness · Survey Preparation
Score: 3.20 — Do Next
At audit: Compliance officer unable to produce AI-specific documentation for the state survey — only a general acceptable use policy with no AI-specific provisions
After: Complete compliance matrix produced — tool-by-tool BAA status, data classification, PHI exposure assessment, and cryptographic verification of audit data

The Hospital Can Now Prove How It Governs AI

Before the Coriven engagement, the hospital's answer to "how do you govern AI?" was a policy document. A well-written policy document, but a policy document nonetheless — a statement of intent with no verification mechanism, no discovery capability, and no way to prove that the policy was being followed. When a state surveyor or Joint Commission auditor asks "how do you govern AI?", a policy document is the minimum acceptable answer. It is not a strong one.

After remediation, the hospital has something fundamentally different: a compliance matrix with cryptographic verification on every data point. Every AI tool in the environment is inventoried. Every data flow is classified. Every tool that handles PHI has a BAA in place or has been replaced with a tool that does. The compliance officer does not hand the surveyor a policy and hope they do not dig deeper. She hands them a verified, auditable, tool-by-tool compliance matrix and invites them to dig as deep as they want.

That is the difference between a governance program and a governance document.
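What "cryptographic verification" can mean in practice: one standard approach is to hash each compliance-matrix record and chain the digests, so any retroactive edit changes every subsequent hash and is detectable. The sketch below shows that pattern; it is an illustration under that assumption, not a description of Coriven's actual verification scheme.

```python
# Minimal sketch of tamper-evident compliance records via hash chaining.
import hashlib
import json

def chain_records(records: list[dict]) -> list[str]:
    """Return one SHA-256 digest per record, each chained to the last."""
    digests, prev = [], "0" * 64
    for rec in records:
        payload = json.dumps(rec, sort_keys=True) + prev
        prev = hashlib.sha256(payload.encode()).hexdigest()
        digests.append(prev)
    return digests

# Illustrative matrix rows; real records carry far more fields.
matrix = [
    {"tool": "AI scribe", "tier": "T1", "baa": "executed", "phi": True},
    {"tool": "Radiology AI", "tier": "T1", "baa": "executed", "phi": True},
]
for rec, digest in zip(matrix, chain_records(matrix)):
    print(rec["tool"], digest[:16])
```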

89%
AI Governance Coverage — Up from 26%
0
T1 Critical tools without BAA (was 3)
19 tools
Fully inventoried and ARIA-classified
7 tools
PHI exposures remediated
21 days
Total remediation timeline
0
AI tools processing PHI without a Business Associate Agreement (was 7)
100%
T1 and T2 tools now governed with BAAs and security reviews
Audit Ready
Compliance matrix with cryptographic verification — survey-ready documentation

The hospital now runs continuous AI monitoring. Every new AI tool adoption is detected, classified by ARIA risk tier, and routed to compliance review before PHI can flow through it. Not because of a policy that says "check with IT first" — because of a detection system that sees it happen and triggers the workflow automatically. HIPAA does not mention AI. It does not need to. Every AI tool that touches PHI is in scope. The question is not whether HIPAA applies. The question is whether your hospital can prove it is compliant. This one can.
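The routing step can be as simple as: unknown AI domain detected, PHI-bearing traffic held, compliance review opened. A minimal sketch of that logic follows, with the ticket call stubbed out; a real deployment would integrate with the hospital's GRC or ITSM platform, and the approved-domain list here is a hypothetical placeholder.

```python
# Sketch of routing a newly detected AI tool to compliance review
# before any PHI-bearing traffic is approved.
APPROVED = {"enterprise-scribe.example.com"}   # governed tools (illustrative)

def on_new_ai_domain(domain: str, department: str) -> str:
    """Allow governed tools; hold unknown ones and open a review."""
    if domain in APPROVED:
        return "allow"
    ticket = {
        "summary": f"Unreviewed AI tool detected: {domain}",
        "department": department,
        "action": "hold PHI-bearing traffic pending ARIA classification",
    }
    print("compliance review opened:", ticket)  # stub for a GRC/ITSM API call
    return "hold"

print(on_new_ai_domain("new-ai-tool.example.com", "Radiology"))  # hold
```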

Scaling Governance Across the Three-Hospital System

This hospital is one of three in the health system. The same shadow AI dynamics that produced 19 tools at this facility are almost certainly present at the other two. The board has approved a system-wide AI governance initiative using the same methodology — discovery, classification, remediation, and continuous monitoring — applied across all three hospitals. The goal is a unified compliance posture that can be presented to regulators, payers, and patients as evidence of responsible AI governance.

Can your hospital prove how it governs AI?

When the surveyor asks "which AI tools touch PHI and what are the BAA arrangements?", the answer needs to be a verified compliance matrix — not a policy document and a promise. Coriven maps your AI landscape, classifies every tool by PHI exposure, and builds the governance framework that makes your answer audit-ready.

Start the Conversation →

Disclaimer: This use case is based on a composite engagement profile using the Coriven Method. The hospital described is a representative profile, not a specific client. All findings reflect the methodology Coriven applies to real engagements. Green numbers are verified from source data. Indigo numbers are calculated using defined methodology. Gold numbers are estimated from baseline data and implementation modeling. Actual results vary.

The Coriven Creed: every number in this use case is confidence-tagged by color — because we believe if we can't prove it, we should say so.