A 400-employee mid-market technology company generating approximately $65M in annual revenue. Three product lines, six departments, and an engineering culture that prized autonomy. Teams were encouraged to "move fast and pick the tools you need." So they did. AI adoption happened department by department — without coordination, without IT awareness, and without any governance framework. When the CISO started asking questions about data flows, nobody could answer them. Not because they were hiding anything — because nobody had ever asked.
The IT team knew about 5 AI tools. The actual count was 28 — 5 approved and 23 operating in the shadows. Marketing alone had 4 separate writing tools: Jasper, Copy.ai, Writesonic, and individual ChatGPT Plus subscriptions scattered across the team. Engineering had 3 code assistant tools — Cursor, GitHub Copilot on personal accounts, and individual ChatGPT Plus seats — none provisioned through IT, none with SSO, none with audit logging. And then there was Legal.
Legal had adopted an AI-powered contract review tool. It was processing client contracts, NDAs, and vendor agreements — documents containing confidential terms, pricing, and obligations. The tool had no SOC 2 attestation, no data processing agreement, and no BAA. Legal found it on Product Hunt and signed up with a company credit card. IT learned about it from the Coriven audit.
Shadow AI doesn't show up in a single scan. It hides across expense reports, browser extensions, personal subscriptions, and SaaS sprawl. We used four discovery vectors to build the full picture.
**Vector 1: Expense analysis.** Pulled 12 months of expense data across all department cards and reimbursement requests, flagging every line item that matched known AI vendors plus anything filed under the generic "software" and "subscription" categories. Expense data alone surfaced 14 tools, including 3 that had been expensed as "research tools" and never flagged as AI.
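A minimal sketch of that matching pass in Python. The vendor list, category names, and CSV columns are illustrative assumptions, not the actual audit tooling:

```python
import csv

# Illustrative vendor and catch-all category lists; not the real audit ruleset.
KNOWN_AI_VENDORS = {"openai", "jasper", "copy.ai", "writesonic", "cursor", "anthropic"}
GENERIC_CATEGORIES = {"software", "subscription", "research tools"}

def flag_expense_lines(path: str) -> list[dict]:
    """Return expense rows that match a known AI vendor or hide in a generic category."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: vendor, category, amount, department
            vendor = row["vendor"].lower()
            if any(name in vendor for name in KNOWN_AI_VENDORS):
                flagged.append({**row, "reason": "known AI vendor"})
            elif row["category"].lower() in GENERIC_CATEGORIES:
                flagged.append({**row, "reason": "generic category, needs manual review"})
    return flagged
```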
**Vector 2: Browser telemetry.** Deployed the Coriven Proof Sensor across consenting employee browsers. It detected 7 AI browser extensions (grammar assistants, writing enhancers, and prompt tools) that never appeared in any expense report because they were free-tier or bundled with other subscriptions. The sensor also confirmed active usage patterns for tools found through the other vectors.
**Vector 3: SSO log cross-reference.** Cross-referenced SSO logs against known AI tool domains. 18 of the 23 shadow tools had no SSO integration: users were logging in with personal credentials, no MFA, no centralized session management. That meant IT couldn't revoke access when an employee left; three former employees still had active accounts on shadow tools at the time of the audit.
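In practice the cross-reference is a set difference: domains known to be in use, minus domains that appear in SSO sign-in events. A minimal sketch, assuming the identity provider's logs export an app-domain field (the field name is hypothetical):

```python
# AI tool domains discovered through the other vectors (illustrative).
AI_TOOL_DOMAINS = {"jasper.ai", "copy.ai", "writesonic.com", "cursor.sh", "chat.openai.com"}

def shadow_logins(ai_domains: set[str], sso_events: list[dict]) -> set[str]:
    """AI tools with no SSO sign-in events: access IT cannot centrally revoke."""
    sso_covered = {event["app_domain"] for event in sso_events}  # assumed log field
    return ai_domains - sso_covered

# Example: only Cursor sign-ins flow through SSO, so every other tool is shadow access.
events = [{"app_domain": "cursor.sh", "user": "dev1"}]
print(sorted(shadow_logins(AI_TOOL_DOMAINS, events)))
```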
**Vector 4: Department interviews.** Ran 30-minute structured interviews with each department head. Every one disclosed at least one tool the other three vectors had missed. Engineering's lead disclosed team-wide Cursor usage. Marketing's VP mentioned "a tool someone on the team found," which turned out to be an AI image generator processing brand assets through a consumer API.
Each finding was scored on a 5-point weighted model across five factors: risk exposure, cost impact, speed to resolve, governance complexity, and strategic importance. A sketch of the scoring math follows the table.
| Finding | Score | State at Audit | State After |
|---|---|---|---|
| **Unvetted Legal AI on Client Contracts** *(Compliance Risk · Data Governance)* | 5.00 · Do First | Consumer-grade contract review AI processing client NDAs, pricing, and terms; no DPA, no SOC 2, no BAA | Migrated to a SOC 2-attested platform with a DPA and BAA: IT-provisioned, SSO-enabled, audit-logged |
| **4 Duplicate Writing Tools in Marketing** *(Cost Waste · Tool Rationalization)* | 4.60 · Do First | Jasper ($99/mo), Copy.ai ($49/mo), Writesonic ($79/mo), and 6 ChatGPT Plus seats ($120/mo): $347/month for overlapping writing capability | Consolidated to Jasper (team plan); 3 tools cancelled, ChatGPT Plus seats converted to team API access |
| **Engineering Code Assistants Without SSO** *(Security · Access Control)* | 4.40 · Do First | Cursor (8 seats, personal accounts), individual ChatGPT Plus (4 seats), GitHub Copilot on personal GitHub; no SSO, no audit trail, no offboarding process | Migrated to GitHub Copilot Business: SSO-integrated, centrally managed, audit-logged, with automated offboarding |
| **3 Former Employee Accounts Still Active** *(Security · Offboarding Gap)* | 4.20 · Do First | 3 employees who left in the past 6 months still had active accounts on shadow AI tools; IT couldn't revoke what IT didn't know about | All orphaned accounts deactivated; AI tools added to the offboarding checklist, with the SSO requirement preventing future orphaned access |
| **7 Shadow AI Browser Extensions** *(Visibility · Data Exposure)* | 3.80 · Do Next | 7 AI browser extensions detected by the Proof Sensor: grammar tools, prompt assistants, and writing enhancers with broad page-access permissions | 3 extensions approved and added to the managed list; 4 blocked via browser policy, with user notification and approved alternatives |
| **No AI Acceptable Use Policy** *(Governance · Policy Gap)* | 3.50 · Do Next | No written policy governing AI tool adoption, data classification for AI use, or prohibited use cases; employees had no guidance | AI Acceptable Use Policy published, covering tool approval, data classification, prohibited use cases, and violation escalation |
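For concreteness, here is a minimal sketch of how a composite score like the 5.00 or 4.60 above might be computed. The factor weights and bucket thresholds are illustrative assumptions chosen to be consistent with the table; the actual Coriven weighting is not published in this use case.

```python
# Illustrative weights (sum to 1.0); the real Coriven weighting is an assumption here.
WEIGHTS = {
    "risk_exposure": 0.30,
    "cost_impact": 0.20,
    "speed_to_resolve": 0.15,
    "governance_complexity": 0.15,
    "strategic_importance": 0.20,
}

def priority_score(ratings: dict[str, int]) -> float:
    """Weighted average of 1-5 factor ratings, yielding a 1.00-5.00 composite."""
    assert set(ratings) == set(WEIGHTS), "every factor must be rated"
    return round(sum(WEIGHTS[f] * r for f, r in ratings.items()), 2)

def bucket(score: float) -> str:
    """Map a composite score to an action bucket (thresholds assumed from the table)."""
    if score >= 4.0:
        return "Do First"
    if score >= 3.0:
        return "Do Next"
    return "Do Later"

# The legal contract-review finding maxes out every factor.
legal = {factor: 5 for factor in WEIGHTS}
score = priority_score(legal)
print(score, bucket(score))  # 5.0 Do First
```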
Shadow AI spend reduced from $47K/year ungoverned to $16K/year governed and consolidated. Every remaining tool has an owner, a documented use case, SSO integration, and a quarterly review date. The CISO can answer the question now.
Shadow AI is not a one-time problem. New tools launch weekly. Employees discover them daily. The governance framework built during this engagement creates a sustainable detection and response loop — not a point-in-time snapshot.
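As a sketch of what that loop can look like operationally: each cycle, union the output of all four discovery vectors, diff it against the approved-tool register, and triage anything new through the same weighted scoring model. The function shape and cadence below are assumptions, not the Coriven implementation.

```python
import datetime

def run_discovery_cycle(vector_results: list[set[str]], approved: set[str]) -> dict:
    """Union every discovery vector's output, subtract the approved register,
    and report newly discovered shadow tools for triage."""
    seen = set().union(*vector_results)
    return {
        "ran_at": datetime.date.today().isoformat(),
        "new_shadow_tools": sorted(seen - approved),
    }

# Example quarterly run: expense, sensor, SSO, and interview outputs (illustrative).
delta = run_discovery_cycle(
    [{"jasper.ai", "newtool.ai"}, {"prompt-helper-extension"}, {"newtool.ai"}, set()],
    approved={"jasper.ai"},
)
print(delta["new_shadow_tools"])  # ['newtool.ai', 'prompt-helper-extension']
```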
Most companies know about 30-40% of the AI tools their employees actually use. We find the rest — and build the governance framework so you never lose visibility again.
Start Shadow AI Discovery →

Disclaimer: This use case is based on a simulated engagement using the Coriven Method. Company details are representative. All findings reflect the methodology Coriven applies to real engagements. Green numbers are verified from source data. Indigo numbers are calculated with documented methodology. Gold numbers are estimated from baseline data. Actual results vary.
Every number in this use case is confidence-tagged by color — because we believe if we can't prove it, we should say so.