What AI do you have, and what are you spending on it?
That's not a philosophical question. It's the literal first step of EU AI Act compliance. And most organizations can't answer it.
The EU AI Act is now rolling out its major compliance deadlines. Non-compliance carries fines up to €35 million or 7% of worldwide annual turnover. The "wait and see" window is closed. Companies operating in or selling into the EU need a plan — and every compliance plan starts with the same thing: a complete AI inventory.
Here's the problem. You can't classify risk on AI systems you don't know exist. You can't build governance frameworks around tools that aren't inventoried. And you can't budget for compliance when you don't know what your current AI spend looks like.
The compliance work is real and complex. But the foundation it all sits on is simple: visibility.
What the EU AI Act Actually Requires
The Act creates a risk-based framework. Every AI system in use must be categorized:
- Prohibited — social scoring, real-time biometric surveillance, manipulative AI. Must be eliminated.
- High-risk — HR tools, credit scoring, medical devices, critical infrastructure. Strictest requirements: risk management, data governance, human oversight, technical documentation, continuous monitoring.
- General-purpose AI (GPAI) — foundation models like GPT-4, Claude, Gemini. Transparency obligations, systemic risk assessments for the largest models.
- Limited risk — chatbots, content generators. Transparency requirements (users must know they're interacting with AI).
Beyond classification, organizations must establish quality management systems, lock down data governance for training and validation data, ensure human oversight capabilities, prepare technical documentation, and build AI literacy across teams.
That's a lot. And it's all important. But none of it is possible without the first step.
Step Zero: The AI Inventory Nobody Has
Before you classify a single system, you need to know what systems exist. All of them. Not just the ones IT approved.
In most organizations, AI adoption has outpaced procurement. Departments adopted tools independently. Employees signed up for personal accounts. Teams bought subscriptions on corporate cards without going through IT. Browser extensions, API keys, embedded AI features in existing SaaS tools — the footprint is wider than anyone realizes.
This is the shadow AI problem, and it's not just a security issue. It's a financial visibility issue. Every AI tool that exists in your environment without being inventoried is a compliance liability under the EU AI Act. And the way you find those tools is by following the money.
You cannot govern what you do not know exists. And you cannot inventory what you cannot see in the spend data.
Procurement records, corporate card charges, department budgets, SaaS management data, API billing — these are the artifacts that reveal the true AI footprint. The compliance team needs the risk classification. But finance has the map.
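Following the money can be sketched concretely. The snippet below merges several spend sources into one vendor-level inventory; the record shapes, vendor names, and amounts are hypothetical examples, and real procurement or card exports would each need their own parsing step first.

```python
# Illustrative sketch: collapsing spend artifacts into a single AI tool inventory.
# Record fields (vendor, department, monthly_cost) are assumptions, not a real schema.
from collections import defaultdict

corporate_card = [
    {"vendor": "OpenAI", "department": "Marketing", "monthly_cost": 240.0},
    {"vendor": "Midjourney", "department": "Design", "monthly_cost": 96.0},
]
procurement = [
    {"vendor": "OpenAI", "department": "Engineering", "monthly_cost": 1800.0},
]
api_billing = [
    {"vendor": "Anthropic", "department": "Engineering", "monthly_cost": 650.0},
]

def build_inventory(*sources):
    """Collapse every spend source into one vendor -> departments/cost map."""
    inventory = defaultdict(lambda: {"departments": set(), "monthly_cost": 0.0})
    for source in sources:
        for record in source:
            entry = inventory[record["vendor"]]
            entry["departments"].add(record["department"])
            entry["monthly_cost"] += record["monthly_cost"]
    return dict(inventory)

inventory = build_inventory(corporate_card, procurement, api_billing)
for vendor, entry in sorted(inventory.items()):
    print(vendor, sorted(entry["departments"]), round(entry["monthly_cost"], 2))
```

Note that the same vendor surfaces from multiple sources with multiple departments attached; that overlap is exactly what reveals tools used more widely than anyone approved.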
The Cost Question Nobody Is Asking
Every article about the EU AI Act talks about what you need to do. Almost none of them talk about what it costs.
Compliance isn't free. Risk assessments, quality management systems, technical documentation, data governance reviews, human oversight mechanisms, staff training, ongoing monitoring — all of this requires budget. And boards don't approve compliance budgets based on "we need to do this because the regulation says so." They approve budgets that are grounded in specifics:
- How many AI systems do we have? — You need the count before you can estimate per-system compliance costs.
- How many are high-risk versus limited risk? — High-risk systems carry 10-50x the compliance burden of limited-risk systems. The mix determines the budget.
- What do we currently spend on AI? — Compliance costs typically run 10-20% of existing AI spend for well-organized companies, significantly more for companies starting from zero visibility.
- What's the cost of non-compliance versus the cost of compliance? — €35M or 7% of annual turnover makes the ROI calculation straightforward, but only if you can quantify the compliance side.
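These questions can be turned into a back-of-the-envelope budget model. In the sketch below, the per-system base cost is an assumed placeholder to be replaced with your own estimates; the 10-50x high-risk multiplier and the 10-20%-of-spend rule of thumb come from the figures above. The structure of the calculation, not the numbers, is the point.

```python
# Illustrative compliance budget estimate. LIMITED_RISK_BASE is an assumed
# placeholder figure; replace it with your own per-system cost estimate.
LIMITED_RISK_BASE = 5_000      # assumed annual compliance cost per limited-risk system
HIGH_RISK_MULTIPLIER = 25      # midpoint of the 10-50x high-risk burden range

def compliance_budget(limited_count, high_count, annual_ai_spend):
    # Bottom-up: count systems, weight high-risk ones by the multiplier.
    per_system = (limited_count * LIMITED_RISK_BASE
                  + high_count * LIMITED_RISK_BASE * HIGH_RISK_MULTIPLIER)
    # Top-down sanity check: compliance typically runs 10-20% of existing AI spend.
    rule_of_thumb = (0.10 * annual_ai_spend, 0.20 * annual_ai_spend)
    return per_system, rule_of_thumb

estimate, (low, high) = compliance_budget(limited_count=12, high_count=3,
                                          annual_ai_spend=2_000_000)
print(f"Per-system estimate: €{estimate:,}")
print(f"Rule-of-thumb range: €{low:,.0f}-€{high:,.0f}")
```

When the bottom-up count and the top-down spend percentage land in the same range, the budget is defensible in front of a board; when they diverge sharply, the inventory is probably incomplete.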
The CFO is going to ask these questions. The board is going to ask these questions. If you walk in with "we don't know how many AI tools we have" and "we're not sure what we spend on AI," the compliance initiative stalls before it starts.
Where Financial Visibility Meets Compliance Readiness
This is where the AI inventory and the AI spend picture converge. The compliance team needs to know what AI exists. Finance needs to know what AI costs. Both start from the same data.
| Compliance Need | Financial Visibility Provides |
|---|---|
| Complete AI system inventory | Every tool with a billing trail — approved and shadow AI — mapped to department, vendor, and contract |
| Risk classification input | What each tool does, who uses it, and how central it is to operations (usage data informs risk) |
| Compliance budgeting | Current AI spend baseline, per-system cost breakdowns, and forecast models for compliance investment |
| Board-level reporting | Executive briefs that translate AI posture into financial terms — cost of compliance vs. cost of fines |
| Ongoing monitoring | Spend tracking catches new tools as they appear — continuous inventory, not a one-time scan |
Financial visibility doesn't replace the compliance work. It's the foundation the compliance work is built on. Without it, you're classifying risk on an incomplete inventory and budgeting compliance costs based on estimates.
What Coriven Proof Does for EU AI Act Readiness
Coriven Proof is an AI spend accountability platform. It's not a compliance tool, and we don't pretend it is. But it solves the foundational problem that every compliance effort depends on: knowing what AI you have and what it costs.
Here's what Proof provides that directly supports EU AI Act readiness:
- Complete AI tool inventory. Every AI tool, subscription, API, and embedded feature — mapped to department, vendor, and spend. Including the tools nobody remembers approving.
- Department-level spend attribution. Who is using what, where, and how much it costs. When the compliance team asks "which departments use high-risk AI tools?" — finance can answer immediately.
- Vendor and contract visibility. Every AI vendor relationship in one view. Contract terms, renewal dates, spend trends. Essential for assessing third-party AI obligations under the Act.
- Cost forecasting for compliance budgeting. Model what compliance will cost based on your actual AI footprint. Scenario modeling: "What if we eliminate 3 shadow AI tools and consolidate vendors?" The board gets real numbers, not estimates.
- Continuous monitoring. New AI tools don't wait for annual audits. Proof tracks spend continuously, so new tools are visible as soon as they hit a billing cycle. Your inventory stays current without manual effort.
- Board-ready reporting. Auto-generated briefs that frame AI posture in financial terms. Compliance readiness, current spend, identified risks, recommended actions — all confidence-tagged so the board knows exactly what's measured vs. estimated.
What Proof Doesn't Do (And Who Does)
Transparency matters. Here's what falls outside Proof's scope:
- Risk classification and scoring — This requires legal and compliance expertise specific to your AI systems and use cases.
- Quality management systems — This is organizational process work, typically led by your compliance or quality team.
- Data governance for training data — Bias testing, representativeness, and data quality are data science and legal functions.
- Technical documentation and logging — This lives with your engineering and product teams.
- Robustness and cybersecurity testing — Security testing is a separate discipline with dedicated tools.
- GDPR alignment — Your DPO and legal team own this intersection.
We're honest about scope because the companies that try to solve everything with one tool usually solve nothing well. Proof gives your compliance team the financial foundation and inventory they need to do the classification, risk assessment, and governance work effectively.
A Practical Starting Point
If the EU AI Act applies to your organization, here's a realistic first-week plan:
1. Get the AI inventory done. Pull procurement records, corporate card data, SaaS management exports, and API billing logs. Map every AI tool to a department and an owner. This is the foundation everything else depends on.
2. Quantify the spend. Total AI spend across the organization. Break it down by department, vendor, and tool type. This gives your compliance team context and your CFO a baseline.
3. Identify shadow AI. Look for AI subscriptions that didn't go through procurement. Personal accounts billed to corporate cards. Browser extensions. Embedded AI features in tools you already use. These are the highest-risk blind spots.
4. Estimate compliance costs. Based on the number of systems and their likely risk classification, build a preliminary compliance budget. Frame it against the non-compliance fines so the board understands the stakes.
5. Brief the board. One page. How many AI systems. What they cost. What compliance will require. What happens if you don't act. CFOs and boards respond to numbers, not regulatory summaries.
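The shadow AI step in particular reduces to a simple set comparison: billed vendors against the procurement-approved list. A minimal sketch, with hypothetical vendor names and an assumed approved list:

```python
# Flag AI spend that never went through procurement.
# Vendor names and the approved list are hypothetical examples.
approved_vendors = {"OpenAI", "Anthropic"}

billing_records = [
    {"vendor": "OpenAI", "department": "Engineering", "monthly_cost": 1800.0},
    {"vendor": "Jasper", "department": "Marketing", "monthly_cost": 125.0},
    {"vendor": "Midjourney", "department": "Design", "monthly_cost": 96.0},
]

# Anything billed but never approved is a compliance blind spot.
shadow_ai = [r for r in billing_records if r["vendor"] not in approved_vendors]

for record in shadow_ai:
    print(f"Shadow AI: {record['vendor']} "
          f"({record['department']}, €{record['monthly_cost']:.2f}/month)")
```

The hard part in practice is not the comparison but normalizing vendor names across card statements, invoices, and SaaS exports so the match is reliable.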
Steps 1 through 5 are exactly what Coriven Proof automates. Connect your data, and the inventory, spend analysis, and board brief are ready in 24 hours.
The Bottom Line
The EU AI Act is real, the deadlines are here, and the fines are significant. But compliance doesn't start with hiring a law firm or buying a governance platform. It starts with answering two questions:
What AI do we have?
What does it cost?
Everything else — risk classification, quality management, data governance, documentation, training — builds on top of those answers. Without them, you're building compliance on a foundation of assumptions.
The EU AI Act isn't just a regulatory challenge. It's a financial visibility challenge. The companies that know what AI they have and what it costs will comply faster, spend less on compliance, and avoid the fines. The ones that don't will learn the hard way that you can't govern what you can't see.
Start with visibility. The rest follows.