Here's something that bothers me about consulting: the numbers are usually made up.

Not maliciously. But structurally. A vendor comes in, builds something, and six weeks later sends you a case study that says "saved $200K annually." You nod along because the project felt useful. But if you pressed them — really pressed — could they show you where that number came from? Could they tell you what was measured versus what was estimated versus what was a projection based on assumptions?

Almost never.

I've been on both sides of this. I've seen the internal conversations where someone says, "We need a good ROI number for the case study," and then reverse-engineers a figure that sounds impressive. It's not fraud — it's just what happens when nobody has a system for honest measurement.

When I built Coriven, I decided we'd do it differently. Not because I'm a better person than other consultants. Because I think dishonest measurement is bad business. If a number is real, it should sell itself. If it's not real, dressing it up just delays the moment your customer realizes the project didn't land.

What Are the Three Confidence Tags?

Every number in every report we produce gets one of three tags:

[measured] — This is a hard number pulled from a system, a log, or direct observation. Time tracked before and after. Transactions counted. Error rates recorded. If it says [measured], we can show you the data.

[estimated] — This is a calculated number based on measured inputs plus reasonable assumptions. For example: we measured that a dispatcher spends 22 minutes per manual scheduling adjustment [measured], and they do roughly 8 per day [estimated based on 3-day sample]. The inputs are real. The math is transparent. But it's not a hard count.

[projected] — This is a forward-looking number. Based on current trends, we project that automating this report will save 14 hours per month over the next quarter [projected]. We're telling you what we think will happen, not what has happened.

That's it. Three tags. No hiding behind a single impressive number and hoping nobody asks where it came from.
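One way to keep a tag from getting quietly dropped is to attach it to the number itself rather than to the surrounding prose. Here's a hypothetical sketch in Python — not Coriven's actual tooling, just an illustration — of what a tagged figure might look like as a data structure:

```python
from dataclasses import dataclass
from typing import Literal

# The three confidence tags, and nothing else.
Tag = Literal["measured", "estimated", "projected"]

@dataclass
class Figure:
    """One number in a report, carrying its tag and its provenance."""
    value: float
    unit: str
    tag: Tag
    source: str  # where the number came from: a log, a sample, an assumption

    def __str__(self) -> str:
        return f"{self.value:g} {self.unit} [{self.tag}] ({self.source})"

weekly_savings = Figure(42, "minutes/week", "measured", "before/after timing")
print(weekly_savings)  # 42 minutes/week [measured] (before/after timing)
```

Because the tag is a required field, a report generator built this way can't emit a number without saying how it knows it.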

Why Should Buyers Care About Evidence Tagging?

When someone hands you an ROI report, you're being asked to make a decision: renew, expand, or move on. If the numbers behind that decision are a mix of hard data, educated guesses, and forward projections — and they're all presented identically — you can't actually evaluate the report. You're trusting vibes.

The tags give you something most ROI reports don't: the ability to read critically. When you see [projected], you know to apply your own judgment. When you see [measured], you know we can back it up. When you see [estimated], you know the method and can challenge the assumptions.

How Does Honest ROI Measurement Work in Practice?

Example 1: The weekly operations report. A regional services company had a team lead spending 45 minutes every Monday morning pulling data from three systems, copying it into a spreadsheet, formatting it, and emailing it to the GM. Every Monday. For years. We timed it: 47 minutes [measured]. We built an automated report. Post-automation, the team lead spends about 5 minutes reviewing [measured at 30-day check-in]. Net savings: 42 minutes per week [measured], roughly 36 hours per year [projected].
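The arithmetic behind that example, restated as a short sketch. The 52-week year is an assumption we're adding here for illustration, which is exactly why the annual figure gets the [projected] tag:

```python
before_min = 47   # [measured] weekly report prep, before automation
after_min = 5     # [measured] review time at 30-day check-in

saved_per_week = before_min - after_min            # 42 minutes/week [measured]
saved_per_year = saved_per_week * 52 / 60          # [projected] assumes 52 working weeks

print(saved_per_week, round(saved_per_year, 1))    # 42 36.4
```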

Example 2: The dispatch bottleneck. A staffing company had dispatchers manually matching open shifts to available workers. We observed an average of 3.2 hours per day on manual matching [estimated from 3-day observation]. After automation, dispatchers reported 40 minutes per day [measured at 60-day check-in]. That's roughly 2.5 hours saved per dispatcher per day [estimated, since the baseline was a sampled figure, not a hard count]. With 3 dispatchers, that's 7.5 hours daily [estimated — assumes consistent workload].
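Restating that math with the tags carried along as comments (illustrative only; the team size and the workload assumption come from the example above):

```python
before_h = 3.2       # [estimated] hours/day on manual matching, from a 3-day observation
after_h = 40 / 60    # [measured] 40 minutes/day at 60-day check-in, converted to hours

saved_per_dispatcher = round(before_h - after_h, 1)  # ~2.5 hours/day [estimated]
team_savings = saved_per_dispatcher * 3              # [estimated] assumes consistent workload

print(saved_per_dispatcher, team_savings)            # 2.5 7.5
```

Note how the tag propagates: because the baseline was estimated, everything computed from it is at best an estimate, no matter how precisely the after-state was measured.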

Example 3: The data entry double-touch. An insurance agency entered the same client information into their CRM and quoting system separately. We measured the duplicate entry at roughly 6 minutes per new client [measured] across about 15 new records per week [estimated]. After connecting the two systems, duplicate entry dropped to zero [measured]. Annual time recovered: approximately 78 hours [projected].
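The annual projection is just the measured and estimated inputs multiplied out. A quick sketch (the 52-week year is our added assumption, hence [projected]):

```python
minutes_per_client = 6   # [measured] duplicate entry per new client
clients_per_week = 15    # [estimated] new records per week

hours_per_year = minutes_per_client * clients_per_week * 52 / 60  # [projected]
print(hours_per_year)    # 78.0
```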

How Do You Measure ROI at 30, 60, and 90 Days?

We don't hand you a report at the end of the project and disappear. We come back at 30, 60, and 90 days to measure actual results against the baselines we captured at the start. At 30 days, we're confirming changes are sticking. At 60 days, we see real patterns. By 90 days, we have enough data to give you a credible picture of what the engagement actually delivered — with every number tagged honestly.

What Should You Ask Every Vendor About Their ROI Numbers?

The next time a consultant, vendor, or agency shows you an ROI number, ask them one question:

"Is that number measured, estimated, or projected?"

If they can answer clearly and show their work, you're probably dealing with someone honest. If they can't — or if they look at you like you're speaking a foreign language — that tells you something too.

We're building Coriven on the idea that honest measurement compounds. Every engagement adds real data. Every tagged number builds trust. And over time, that trust is worth more than any inflated case study.