Coriven Use Case — Data Leakage Prevention

Data Leakage Prevention: Every AI Prompt Is a Potential Breach — A 600-Person Firm Discovers 11 Tools Leaking Client Data

A consultant pasted a client financial model into ChatGPT Free. A recruiter ran 200 resumes through an AI summarizer with no BAA. A legal associate uploaded 14 contracts to an unvetted review tool. All in one week.

A Professional Services Firm Built on Client Trust

The firm is a 600-person professional services organization spanning three practice areas: management consulting, staffing and recruitment, and legal services. Approximately $140M in annual revenue. Their entire business model rests on a single foundation: client trust. Fortune 500 companies hand over financial models, strategic plans, personnel records, litigation documents, and competitive intelligence with the reasonable expectation that this information stays within the engagement team and the firm's secure systems.

AI adoption had been enthusiastic and, by most visible measures, successful. The firm's leadership actively encouraged consultants, recruiters, and associates to find ways to work smarter with AI. The managing partner said it plainly at a quarterly all-hands: "If AI can help you do better work faster, use it. We are not going to be the firm that falls behind." What they did not say — because it had not occurred to anyone to say it — was what "use it" actually meant in the context of client confidentiality, data processing agreements, and regulatory exposure.

The result was predictable in hindsight. A shadow AI ecosystem grew organically across all three practice areas. Each practice discovered different tools. Each team made independent decisions about what data was acceptable to put into an AI prompt. No one had a complete picture. IT knew about a handful of approved tools. Compliance assumed the AI footprint was small. The CISO had flagged AI as a "future risk" in the most recent board report — not a current one. That assumption was about to be destroyed.

600 · Employees across consulting, staffing, and legal services
47 · AI tools discovered in use across the organization
11 · Tools processing client data with no enterprise agreement

The CISO Gets a Phone Call That Changes Everything

It started with a routine vendor security questionnaire. A Fortune 500 client — one of the firm's largest accounts — sent their annual third-party risk assessment. Most of the questions were familiar: encryption standards, access controls, incident response procedures. But one question was new, and it stopped the CISO cold: "Does your organization use AI tools in the delivery of services to our company? If so, list all tools, their data handling policies, and any relevant enterprise agreements or BAAs."

The CISO's first instinct was to check with IT. IT's answer came back quickly and cleanly: "We have 5 approved AI tools. All enterprise-licensed. All documented." That answer felt manageable. But the CISO had been reading enough industry breach reports to know that the gap between "approved tools" and "tools actually in use" was often enormous — sometimes by a factor of five or more. She decided to dig deeper before signing off on the questionnaire response.

What she found in the first 48 hours of investigation was enough to delay the questionnaire response indefinitely and escalate directly to the managing partners. Expense report analysis alone revealed 23 AI tool subscriptions that IT had never approved. Browser extension audits on a random sample of 50 employee machines found 9 additional tools. Network traffic analysis identified API calls to 15 distinct AI services that were not on any approved list. Some employees were using multiple unapproved tools simultaneously — a summarization tool for one task, a different generation tool for another, a third tool for document review.
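The consolidation step this kind of discovery requires can be sketched in a few lines: merge the per-source tool lists, then diff against what IT has approved. This is a minimal illustration, not the audit's actual tooling; the tool names, source labels, and the `consolidate_inventory` helper are all hypothetical.

```python
# Hypothetical sketch of consolidating shadow-AI discovery sources into one
# inventory. Tool names and source labels are illustrative, not from the audit.

def consolidate_inventory(sources: dict[str, set[str]], approved: set[str]) -> dict:
    """Merge per-source tool sets and flag which discovered tools are unapproved."""
    discovered = set().union(*sources.values())
    return {
        "total": len(discovered),
        "unapproved": sorted(discovered - approved),
        # For each tool, record which discovery methods surfaced it.
        "seen_by": {t: [s for s, tools in sources.items() if t in tools]
                    for t in sorted(discovered)},
    }

sources = {
    "expense_reports": {"SummarizeAI", "ChatGPT", "ContractBot"},
    "browser_audit":   {"ChatGPT", "ResumeParser"},
    "network_traffic": {"ContractBot", "ResumeParser", "TranscribeNow"},
}
approved = {"ChatGPT"}

report = consolidate_inventory(sources, approved)
print(report["total"])        # 5 distinct tools across the three sources
print(report["unapproved"])   # every discovered tool not on the approved list
```

The point of the `seen_by` field is the one the CISO hit in practice: no single source sees the whole picture, so the union across sources is always larger than any one channel suggests.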

The total count when the full audit was complete: 47 AI tools in active use across the organization. IT knew about 5 of them. The CISO had just told the board that AI was a "future risk." It was not a future risk. It was a current, active, unquantified exposure — and 11 of those 47 tools were processing client data with no enterprise agreement, no data processing addendum, and in some cases, terms of service that explicitly permitted the AI provider to train on input data.

The CISO's phone call to the managing partner was brief: "We have a problem. It is bigger than I thought. We need outside help, and we need it this week."

That was when Coriven was brought in.

Three Incidents. One Week. Zero Malicious Intent.

What made this situation politically complicated — and operationally dangerous — was that none of the people involved had done anything they believed was wrong. They were doing exactly what leadership had asked them to do: work smarter with AI. The problem was not the people. The problem was the complete absence of guardrails, policy, or organizational signal about what "working smarter" actually meant when client confidentiality was at stake.

Three incidents surfaced during the first week of the Coriven engagement. Each one came from a different practice area, involved a different tool, and exposed a different category of client data. Together, they painted a picture of systemic risk that was both invisible and inevitable — the natural consequence of enthusiastic adoption without governance.

Incident 1 — Consulting Practice: Client Financial Model

A senior consultant copied a client's complete three-year financial projection model — including revenue forecasts, margin targets, acquisition cost assumptions, and competitive positioning data — into ChatGPT Free (consumer tier, no enterprise agreement) to "help restructure the layout and add executive commentary." The financial model contained material non-public information for a publicly traded company. ChatGPT Free's terms of service at the time permitted the use of input data for model training. The consultant had no awareness this was a risk. He had been doing it for months. The engagement partner had no visibility that it was happening. There was no DLP system monitoring AI prompts, no policy that mentioned AI-specific data handling, and no technical control that would have flagged or prevented this action. The consultant's reasoning was simple and sincere: "It saved me two hours on the reformatting alone."

Incident 2 — Staffing Practice: 200 Candidate Resumes

A recruiting coordinator used a consumer-grade AI summarization tool — one she had found through a Google search — to process 200 candidate resumes for a bulk staffing engagement. The resumes contained full names, home addresses, phone numbers, email addresses, employment histories, salary expectations, and in several cases, disability accommodation notes and veteran status indicators. The AI tool had no Business Associate Agreement, no data processing addendum, no SOC 2 certification, and no enterprise tier. Its privacy policy permitted data retention for up to 30 days and did not exclude training use. The coordinator's reasoning was practical and reasonable: "It would have taken me two full days to summarize these manually. The tool did it in 20 minutes." She was right about the time savings. She had no way of knowing — and no reason to suspect — that those 200 resumes were now stored on a startup's servers with no contractual obligation to protect them.

Incident 3 — Legal Practice: 14 Client Contracts

A legal associate uploaded 14 client contracts — including NDAs, licensing agreements, a joint venture term sheet, and a merger agreement with strict confidentiality provisions — to an AI-powered contract review tool. The associate had found the tool through a LinkedIn post from a legal technology influencer. The tool was built by a startup with fewer than 50 employees, no SOC 2 certification, no enterprise tier, and terms of service that granted the provider a "non-exclusive license to use uploaded content for service improvement purposes." The associate's supervising partner did not know the tool existed. The firm's legal malpractice insurance carrier did not know client contracts were being processed through unvetted third-party AI. The exposure was both a confidentiality breach under the firm's engagement letters and a potential malpractice trigger if any client data surfaced in the tool's outputs to other users.

The pattern is what matters. None of these three employees intended harm. All three were trying to be more productive. All three would have stopped immediately if someone had told them there was a risk. But no one had told them — because no one in the organization had mapped the AI footprint, classified the data sensitivity, or created a policy framework that distinguished between acceptable and unacceptable AI use. The breach did not come from a malicious actor. It came from well-meaning employees who pasted the wrong data into the wrong tool on an ordinary Tuesday afternoon.

You Cannot Ban Productivity

The CISO and managing partners faced a reality that every professional services firm confronts when shadow AI surfaces: the people using these tools are not casual experimenters. They are senior consultants, experienced recruiters, and producing associates who have built their daily workflows around AI. They are measurably faster. Their output quality, in many cases, has improved. Several of them had been recognized in recent performance reviews for their productivity gains.

A blanket AI ban would have been the easiest policy decision. It also would have been the worst business decision. The firm would have lost genuine productivity value, alienated its highest-performing employees, and signaled to the market that it was regressing on technology adoption. Several of the firm's competitors were actively marketing their AI capabilities to clients. Banning AI was not just operationally painful — it was competitively dangerous.

The managing partners were clear about the objective: "We need to close the data exposure without killing the productivity gains. We need to do it in a way that does not make our best people feel like they are being punished for being innovative. And we need to answer that Fortune 500 client's questionnaire with something stronger than a policy document."

This is the balance Coriven was engaged to achieve. Not a technology project. Not a compliance checkbox exercise. A change management challenge that required mapping the full AI landscape, classifying every data flow by sensitivity, scoring every tool for risk and value simultaneously, and then building a governance framework that protected clients without destroying the legitimate productivity that AI had genuinely created.

47 Tools. 4 Sensitivity Tiers. 8 at Critical.

The Coriven audit classified every discovered AI tool by the sensitivity of the data flowing through it. This was not a theoretical exercise or a survey-based assessment. The audit team traced actual data flows by reviewing usage patterns in tool admin dashboards, interviewing tool owners and department heads, examining browser histories with employee consent, and analyzing network traffic logs. Each tool was assigned a sensitivity tier based on the most sensitive data type it had processed in the previous 90 days.
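The tier-assignment rule above reduces to a simple maximum: a tool inherits the tier of the most sensitive data type it touched in the trailing window. The sketch below assumes that rule as described; the data-type list and the `assign_tier` helper are illustrative, not Coriven's actual classifier.

```python
# Sketch of the tier-assignment rule: a tool's tier is the tier of the most
# sensitive data type it processed in the previous 90 days. Tier names come
# from the audit; the data-type mapping is an illustrative assumption.

TIER_ORDER = ["Low", "Medium", "High", "Critical"]

DATA_TYPE_TIER = {
    "public_info": "Low",
    "project_plans": "Medium",
    "internal_strategy": "High",
    "employee_records": "High",
    "pii": "Critical",
    "phi": "Critical",
    "client_financials": "Critical",
    "privileged_material": "Critical",
}

def assign_tier(observed_data_types: list[str]) -> str:
    """Return the highest sensitivity tier among the data types a tool touched."""
    tiers = [DATA_TYPE_TIER[d] for d in observed_data_types]
    return max(tiers, key=TIER_ORDER.index)

# A summarizer that mostly handled project plans but touched PII once still
# lands at Critical: classification follows the most sensitive observed flow.
print(assign_tier(["project_plans", "pii"]))   # Critical
```

The design choice matters: averaging sensitivity would let one Critical flow hide behind many Low ones, which is exactly the failure mode the audit was built to expose.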

The classification revealed that the firm's most sensitive data — the data that clients trust them to protect above all else — was flowing through tools with the weakest protections. The eight Critical-tier tools were processing PII, PHI, client financial models, and legally privileged material. Of those eight, only two had enterprise agreements in place. The remaining six were operating on consumer-grade or free-tier terms that offered essentially no data governance guarantees.

8 · Critical — PII, PHI, client financials, legally privileged material
15 · High — internal strategy, competitive intel, employee records
12 · Medium — operational data, project plans, internal communications
12 · Low — public information, general research, non-sensitive content

The Critical-tier tools were the immediate remediation priority. For a firm whose entire value proposition is "we protect your most sensitive information," having six tools process that information with no contractual protections was not a risk to manage — it was an exposure to eliminate.

From Invisible Risk to Verified Control in 8 Weeks

The engagement ran for eight weeks. The first two weeks were pure discovery and classification — mapping the full AI landscape and tracing every data flow. Weeks three and four focused on risk triage: which tools needed to be blocked immediately, which could be migrated to enterprise tiers with proper agreements, and which could be replaced with approved alternatives that provided equivalent functionality. Weeks five through eight were implementation: policy deployment, tool migration, employee training, and verification testing.

The goal was not to reduce the number of AI tools. The goal was to ensure that every tool processing client data had appropriate contractual protections, data handling guarantees, and organizational oversight. Some tools were eliminated because they had no path to enterprise compliance. Others were upgraded. A few were replaced. The net result was a smaller, governed, contractually protected AI landscape — with productivity preserved for every tool that could be made compliant.
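The triage logic described above (block, migrate, or replace) can be sketched as a small decision function. This is a simplified reading of the text's three outcomes under stated assumptions; the `Tool` fields and tool names are hypothetical.

```python
# Sketch of the weeks-3-and-4 triage rule: migrate when the vendor offers an
# enterprise path, replace when an approved alternative covers the same
# function, block when neither exists. Fields and names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Tool:
    name: str
    has_enterprise_tier: bool
    approved_alternative: Optional[str]

def triage(tool: Tool) -> str:
    if tool.has_enterprise_tier:
        return "migrate"   # upgrade to enterprise licensing with DPA/BAA
    if tool.approved_alternative:
        return "replace"   # swap for a governed equivalent
    return "block"         # no viable path to compliance

print(triage(Tool("ContractBot", False, None)))          # block
print(triage(Tool("SummarizeAI", True, None)))           # migrate
print(triage(Tool("ResumeParser", False, "HR Suite")))   # replace
```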

Before — State at Audit
47 AI tools in active use: IT aware of 5 — the remaining 42 were shadow AI with zero organizational visibility or oversight
11 tools processing client data: No enterprise agreement, no DPA, no BAA — consumer-grade terms permitting data retention and training
Zero data classification: No framework for determining what data could go into which tools — 600 employees making independent decisions daily
No AI-aware DLP: Traditional DLP monitored email and file transfers but had zero visibility into data pasted into AI prompts or uploaded to AI tools
No AI-specific policy: Acceptable use policy mentioned "cloud services" generically — no guidance for AI tools, prompt data, or model training risks
After — 60 Days Post-Engagement
Complete AI inventory: All 47 tools cataloged with assigned owner, data sensitivity tier, enterprise agreement status, and scheduled review date
4 tools blocked entirely: Consumer-grade tools processing Critical-tier data with no viable path to enterprise agreements — access revoked firm-wide
6 tools migrated to enterprise tiers: Upgraded from consumer or free accounts to enterprise licensing with BAAs and data processing addendums in place
3 tools replaced: Unvetted tools swapped for approved alternatives providing equivalent functionality with proper data governance and contractual protections
AI Acceptable Use Policy live: Data classification matrix, tool approval workflow, quarterly review cycle, and incident response procedure — deployed and trained across all 600 employees
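The "no AI-aware DLP" gap in the before state can be illustrated with a minimal pre-submission check: scan outbound prompt text for sensitive patterns before it reaches an external tool. This assumes prompts can be intercepted at all (for example via a browser proxy), and the regex patterns are deliberately simplistic, not a production detector.

```python
# Minimal sketch of an AI-aware DLP check, assuming outbound prompt text can
# be intercepted (e.g. via a managed browser or proxy). Patterns are
# illustrative stand-ins, not a production-grade sensitive-data detector.

import re

SENSITIVE_PATTERNS = {
    "ssn":           re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email":         re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "dollar_figure": re.compile(r"\$\d[\d,]*(?:\.\d+)?[MBK]?\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the sensitive-data categories detected in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

prompt = "Summarize: jane.doe@client.com, SSN 123-45-6789, FY25 target $40M"
print(scan_prompt(prompt))   # ['ssn', 'email', 'dollar_figure']
```

A real deployment would pair detection with an enforcement action (block, warn, or log per the data classification matrix); the sketch only shows the visibility layer that traditional email-and-file DLP lacked.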

5 Findings. Scored. Prioritized. Resolved.

Each finding was scored on Coriven's 5-point weighted model: data sensitivity impact, regulatory exposure, speed to resolve, organizational complexity, and strategic importance. The scoring determined remediation order — highest-risk exposures closed first, lower-risk items addressed in sequence. No finding was left unresolved.
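A 5-factor weighted score of this shape can be sketched as a weighted average of 1-to-5 ratings. The factor weights below are an illustrative assumption; the text names the five factors but does not specify how Coriven weights them.

```python
# Sketch of a 5-factor weighted score. Weights are illustrative assumptions
# (Coriven's actual weighting is not specified); factors come from the text.

FACTORS = ["data_sensitivity", "regulatory_exposure",
           "speed_to_resolve", "org_complexity", "strategic_importance"]

WEIGHTS = {  # assumption: sensitivity and regulatory exposure weighted heaviest
    "data_sensitivity": 0.30, "regulatory_exposure": 0.25,
    "speed_to_resolve": 0.15, "org_complexity": 0.15,
    "strategic_importance": 0.15,
}

def score(ratings: dict[str, float]) -> float:
    """Weighted average of 1-5 factor ratings, rounded to two decimals."""
    return round(sum(WEIGHTS[f] * ratings[f] for f in FACTORS), 2)

# A finding rated 5 on every factor scores 5.00 and lands in "Do First".
print(score({f: 5 for f in FACTORS}))   # 5.0
```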

4 Tools — No Enterprise Agreement, Processing Client Data
Data Governance · Compliance Risk
Score: 5.00 · Do First
State at audit: Consumer-grade AI tools processing Critical-tier client data — no enterprise agreement, no DPA, no data retention controls of any kind
State after: All 4 tools blocked — access revoked firm-wide, users migrated to approved alternatives with enterprise agreements and BAAs

2 Tools Actively Training on Client Input Data
Data Leakage · Model Training Risk
Score: 4.80 · Do First
State at audit: Two AI tools with terms of service explicitly permitting use of input data for model training — client financial data and PII included in training corpus
State after: Both tools replaced — one with enterprise alternative offering training opt-out, one with self-hosted equivalent under firm's direct control

8 Critical-Tier Tools — Insufficient Data Governance
Data Classification · Risk Prioritization
Score: 4.60 · Do First
State at audit: 8 tools processing PII, PHI, client financials, or legally privileged material — only 2 of 8 had enterprise agreements with appropriate data protections
State after: All 8 Critical-tier tools now fully governed — enterprise agreements, BAAs, and data processing addendums in place or access revoked

Free-Tier AI Usage with Client-Sensitive Data
Cost Governance · Data Risk
Score: 3.90 · Do Next
State at audit: Multiple employees across all three practices using free tiers of AI tools for client-facing work — weakest available data protections, broadest training permissions
State after: Free-tier use prohibited for any data classified Medium or above — enterprise licenses provisioned for approved tools, policy enforced with quarterly audit

No DLP Policy or Technical Controls for AI Tools
Governance · Process Control
Score: 3.50 · Do Next
State at audit: Existing DLP systems monitored email and file transfers but had zero visibility into data entered through AI tool interfaces, browser extensions, or API calls
State after: AI-specific acceptable use policy deployed — data classification matrix, tool approval workflow, quarterly review cycle, and AI incident response procedure live across all practices

From Unquantified Exposure to Verified Zero Active Risk

When the engagement began, the firm could not answer the most basic question about its AI data exposure: "How many AI tools are processing our client data, and under what terms?" The answer was not "we don't know the exact number." The answer was "we have no mechanism to know." The AI landscape was completely invisible to IT, compliance, and the CISO. Every tool was a potential leak, and no organizational function had visibility into which tools existed, what data flowed through them, or what the providers' terms of service permitted them to do with that data.

Eight weeks later, that question had a precise, documented, auditable answer. Every tool was inventoried. Every data flow was classified by sensitivity tier. Every Critical-tier tool either had an enterprise agreement with appropriate data protections in place, or had been blocked and replaced. The exposure was not reduced incrementally — it was eliminated systematically. Not through hope, not through policy language, but through verified technical and contractual controls with an audit trail.

$0 · Verified Active Data Leakage Risk — down from unquantified
4 tools · Blocked entirely
6 tools · Migrated to enterprise tiers
3 tools · Replaced with approved alternatives
47 tools · Fully inventoried and classified
0 · Critical-tier tools without enterprise agreements (was 6)
0 · Tools training on client input data (was 2)
100% · AI tools now classified by data sensitivity tier

The firm can now answer the Fortune 500 client's vendor security questionnaire with specificity and confidence: a complete AI tool inventory, a data classification matrix, enterprise agreement documentation for every tool processing client data, and a quarterly review schedule that ensures the answer stays current. The questionnaire that triggered the engagement is now a competitive advantage — demonstrable proof that the firm takes data governance seriously, backed by auditable evidence rather than policy language and good intentions.

Sustained Governance: Keeping the AI Landscape Visible

The initial audit and remediation closed the immediate exposure. But AI adoption does not pause — it accelerates. New tools emerge weekly. Employees discover new use cases. Providers change their terms of service. The firm's governance framework must evolve at least as fast as the AI landscape it governs. The next phase focuses on making AI data governance a permanent operational capability, not a one-time cleanup project.

Do you know what data your employees are putting into AI tools?

Most organizations discover their AI data exposure the hard way — a client complaint, a regulatory inquiry, or a breach notification. Coriven maps your full AI landscape, classifies every data flow, and builds the governance framework so your answer is always defensible. Not a policy document. Proof.

Start the Conversation →

Disclaimer: This use case is based on a composite engagement profile using the Coriven Method. The company described is a representative profile, not a specific client. All findings reflect the methodology Coriven applies to real engagements. Green numbers are verified from source data. Indigo numbers are calculated using defined methodology. Gold numbers are estimated from baseline data and implementation modeling. Actual results vary.

Every number in this use case is confidence-tagged by color — because we believe if we can't prove it, we should say so.
