The hardest question in business right now: what is our AI actually producing?
Why Is AI ROI So Hard to Measure?
Three reasons. First, most AI tool benefits are diffuse: time saved is real but invisible in the P&L. Second, nobody captures baselines before deploying tools, so there's no before-and-after comparison. Third, vendors have an incentive to inflate ROI numbers because inflated numbers drive renewals.
What Is the Right Framework for Measuring AI ROI?
Four steps:
1. Baseline before you deploy. Measure the current state: time per task, error rate, cost per output. If you don't measure before, the after is meaningless.
2. Tag every number. Is the time savings measured from timesheets? Estimated from a 3-day sample? Projected from benchmarks? The tag determines how much weight the number carries (see the sketch after this list).
3. Measure at 30, 60, 90 days. Week-one results aren't ROI; they're novelty. Real ROI shows up when adoption stabilizes and the tool has been running long enough to produce patterns.
4. Separate measured from projected. Don't annualize a 2-week data point and call it ROI. A measured weekly savings of 5 hours is a fact; an annualized projection of 260 hours is a forecast. Label them differently.
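
Steps 2 and 4 amount to one discipline: attach provenance to every figure, and never let an extrapolation inherit the credibility of a measurement. Here is a minimal Python sketch of one way to encode that; the `Metric` and `Provenance` names and the sample numbers are illustrative assumptions, not a prescribed tool:

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    """How a number was obtained; determines how much weight it carries."""
    MEASURED = "measured"    # e.g., pulled from timesheets
    ESTIMATED = "estimated"  # e.g., from a 3-day sample
    PROJECTED = "projected"  # e.g., extrapolated from benchmarks

@dataclass
class Metric:
    name: str
    value: float
    unit: str
    provenance: Provenance

    def label(self) -> str:
        return f"{self.name}: {self.value} {self.unit} [{self.provenance.value}]"

def annualize(weekly: Metric, weeks: int = 52) -> Metric:
    """Annualizing always downgrades provenance to PROJECTED:
    a measured weekly number becomes a forecast once extrapolated."""
    return Metric(
        name=f"{weekly.name} (annualized)",
        value=weekly.value * weeks,
        unit=weekly.unit.replace("/week", "/year"),
        provenance=Provenance.PROJECTED,
    )

# Baseline captured before deployment, then the same measurement after.
baseline = Metric("time per Monday report", 95, "min/week", Provenance.MEASURED)
after = Metric("time per Monday report", 53, "min/week", Provenance.MEASURED)

saved = Metric("time saved", baseline.value - after.value, "min/week",
               Provenance.MEASURED)
print(saved.label())             # time saved: 42 min/week [measured]
print(annualize(saved).label())  # time saved (annualized): 2184 min/year [projected]
```

The design choice that matters: `annualize` returns a PROJECTED metric no matter what it receives, so a forecast can never masquerade as a fact.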
What Does Honest AI ROI Look Like?
Honest ROI looks like: "We measured 42 minutes/week saved on the Monday report [measured]. Over 52 weeks, that projects to 36 hours annually [projected]. At $45/hour loaded cost, that's $1,620/year in recovered labor [projected from measured inputs]." Three sentences. Three tags. Complete transparency. That's what boards should expect, and what most vendors refuse to provide.
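
The derivation behind that statement is one measured input plus two labeled extrapolations. A short sketch reproduces the figures (the variable names are illustrative):

```python
# The only measured fact; everything below is derived from it.
measured_min_per_week = 42        # [measured] from timesheets
weeks_per_year = 52
loaded_cost_per_hour = 45.00      # fully loaded labor cost, $/hour

hours_per_year = measured_min_per_week * weeks_per_year / 60     # ~36.4 h
dollars_per_year = round(hours_per_year) * loaded_cost_per_hour  # uses rounded 36 h

print(f"{measured_min_per_week} min/week saved [measured]")
print(f"{hours_per_year:.0f} hours/year [projected]")
print(f"${dollars_per_year:,.0f}/year recovered labor [projected from measured inputs]")
```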