When Finance Has to Ask Engineering to Change a Price, Something Has Gone Wrong
How we moved pricing out of hardcoded macros and into the hands of Finance — no ticket, no PR, no waiting.
Every quarter, Finance needed to update a pricing rate.
Every quarter, they filed a ticket with Engineering.
Every quarter, Engineering got to it when they could.
This is the story of how that stopped — and what we found before it did.
We were brought in to migrate a cloud security platform's revenue analytics to a modern stack. A few weeks in, we started pulling apart how pricing actually worked.
Pricing had been living inside the engineering codebase since 2021. Not in a spreadsheet. Not in a table anyone could open. In code — files with rates written as literal numbers, managed by engineers, deployed like software.
Five years of pricing changes had accumulated there. Each one was a review, a merge, a production run. Finance filed tickets. Engineering handled them when it could.
Finance had been asking for permission to change their own numbers for five years.
When we pulled all those rates out of the code and laid them flat, something became visible that hadn't been before.
A 118% price increase with no documentation. It had been running in production, applied to real revenue calculations, since 2025. Nobody on the current team knew it was there.
We found it because our validation showed a gap — we worked backward through the data until we found a rate roughly double what anyone expected. It had never been written down anywhere. It just existed, quietly, getting applied to every invoice.
That's what five years of pricing changes in code produces: a history nobody can read.
Here's the thing we kept coming back to: pricing is data. It was being stored as code.
Data should be editable by the people who own it. Code should be managed by engineers. When you store pricing as code, Finance loses the ability to manage their own numbers — not because they can't be trusted, but because the system was never built for them to touch it.
Engineers become involuntary gatekeepers. Finance becomes dependent on someone else's sprint cycle to do work that's fundamentally theirs.
How to know if you have this problem
The pattern isn't rare. Here are the signals:
- Finance files a ticket to change a single rate. Even a small one. Even one they've changed before.
- Pricing lives in .sql files or Jinja macros — not a table, not a spreadsheet, in code.
- The only way to understand rate history is to read git blame.
- There's a number somewhere in production nobody on the current team can explain.
- A pricing change requires a PR, a review, and a deploy before it's live.
If two or three of those are true, your pricing is almost certainly in the wrong place.
The first step was moving every rate into a table.
One row per rate. One column for the effective date. The full history that had been scattered across years of code changes was suddenly visible in a single place — five years of rate changes, in order, readable by anyone.
Adding a new rate meant adding a row. Old rates stayed in place. The system always knew what rate applied on any given date. No coordination required, no risk of losing history.
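The temporal pattern is simple enough to sketch. Here is a minimal illustration using Python's built-in sqlite3 — the table, products, and rates are hypothetical stand-ins (the real model lived in Snowflake and dbt), but the core idea is the same: the rate that applies on any date is the latest row whose effective date is on or before it.

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pricing_rates (
        product        TEXT NOT NULL,
        rate           REAL NOT NULL,
        effective_date TEXT NOT NULL,   -- ISO dates sort correctly as text
        PRIMARY KEY (product, effective_date)
    )
""")

# History stays in place: adding a rate means adding a row, never an update.
conn.executemany(
    "INSERT INTO pricing_rates VALUES (?, ?, ?)",
    [
        ("scanner-pro", 0.10, "2021-01-01"),
        ("scanner-pro", 0.12, "2023-07-01"),
        ("scanner-pro", 0.15, "2025-01-01"),
    ],
)

def rate_on(product: str, as_of: date) -> float:
    """Return the rate in effect for a product on a given date:
    the latest row whose effective_date is on or before as_of."""
    row = conn.execute(
        """
        SELECT rate FROM pricing_rates
        WHERE product = ? AND effective_date <= ?
        ORDER BY effective_date DESC
        LIMIT 1
        """,
        (product, as_of.isoformat()),
    ).fetchone()
    if row is None:
        raise ValueError(f"no rate for {product} on {as_of}")
    return row[0]

print(rate_on("scanner-pro", date(2024, 3, 15)))  # 0.12 — the 2023 rate still applies
```

Because old rows are never overwritten, a rerun of any historical period reproduces the rates that were in force at the time.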
Finance could now edit the table. But they still needed database access to do it — which meant they still needed engineering.
So we took one more step.
We moved the pricing tables into Sigma, the dashboarding tool Finance already used every day. Updating a rate meant opening a familiar interface, picking from a dropdown, typing a number, saving. Same system they used for reports. Same login. Nothing new to learn.
The dropdowns mattered more than the interface did. Without them, a typo in a product name or region code would break revenue calculations silently — wrong output, no error, no warning. The dropdowns constrained every field to valid values. Finance could only enter things the model already understood.
That's what made self-service actually safe: not trusting that the right values would be entered, but making wrong values impossible to enter.
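The constraint the dropdowns enforce is the same one a foreign key enforces: every entry must already exist in a dimension the model knows. A hedged sketch of the idea — the product and region lists here are invented, and the real check was enforced by Sigma's dropdowns rather than application code:

```python
# Valid values come from the model's own dimensions, not free text.
# (These sets are illustrative; in practice they'd be read from dimension tables.)
VALID_PRODUCTS = {"scanner-pro", "scanner-lite"}
VALID_REGIONS = {"us-east", "eu-west", "apac"}

def validate_entry(product: str, region: str, rate: float) -> list[str]:
    """Return a list of problems; an empty list means the entry is safe to save."""
    problems = []
    if product not in VALID_PRODUCTS:
        problems.append(f"unknown product: {product!r}")
    if region not in VALID_REGIONS:
        problems.append(f"unknown region: {region!r}")
    if rate <= 0:
        problems.append(f"rate must be positive, got {rate}")
    return problems

print(validate_entry("scanner-pro", "us-east", 0.12))  # [] — valid entry
print(validate_entry("scaner-pro", "us-east", 0.12))   # typo is caught, not silently applied
```

A dropdown makes the invalid case unreachable rather than merely detectable — the user never gets the chance to type "scaner-pro" in the first place.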
The stack was Snowflake for the data warehouse, dbt for the transformation layer, and Sigma for the Finance-facing interface — but the pattern works anywhere Finance has a tool that can write directly to your warehouse.
The first time Finance updated a rate themselves, it took three minutes.
No ticket. No waiting. No conversation about sprint priorities.
They picked the product. Entered the rate. Set the effective date. Saved. Ran the validation query from the guide we left them. Closed the tab.
When this pattern doesn't apply
Not every pricing change belongs in a table. The distinction that matters: is this a rate update, or does it change how the model computes?
Rate updates — a new price, a changed multiplier, a revised discount — are data. They belong in a table Finance can edit. The model logic stays the same; only the inputs change.
Structural changes are different. A new product line that introduces a pricing dimension that didn't exist before. A new billing model that requires new transformation logic. A regional expansion that changes how tiers are calculated, not just what the rates are. Those still belong in engineering — because you're not updating a value, you're changing how the system works.
The test: if a wrong entry in the table would produce a bad number but the model would still run, that's data. If a wrong entry could break the model or produce nonsense silently, that's engineering territory.
Most day-to-day pricing changes pass the first test. The ones that don't will be obvious.
Frequently asked questions
Do I need Sigma specifically, or will any BI tool work?
You need a BI tool that can write back to your warehouse — not just read from it. Sigma has this built in for Snowflake. Other tools in this space (Omni, some configurations of Looker) support it too. The specific tool matters less than the capability: Finance needs to be able to save a value directly to a table your dbt models can read. If your current BI tool can't do that, you'll need either a different tool or a lightweight form layer on top of your warehouse.
What about audit trails? How do we know who changed what?
This is where the temporal pattern pays off. Because each rate row has an effective date rather than overwriting the previous value, the full history is always in the table. You know every rate that was ever active and when. For compliance purposes, you can layer warehouse-level audit logging on top — Snowflake's query history, for example, captures who ran which write at what time. The result is more traceable than code ever was: git blame requires reading diffs; a table with effective dates is just a query.
What if Finance makes a mistake?
The same way any data mistake gets caught: validation. We built a validation query Finance runs after every update — it checks that the new rate is within expected bounds, that the product and region codes are valid, and that the effective date doesn't conflict with existing rows. The dropdowns eliminate the most common error class (invalid foreign keys). The validation query catches anything else before it reaches production reporting. No system eliminates human error entirely; this one makes errors visible immediately rather than letting them run quietly for years.
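As a sketch of what a bounds check like this can look like — names, rates, and the 50% threshold are all assumptions for illustration (the real query ran against the Snowflake tables) — the following flags any rate that jumps suspiciously far from its predecessor, exactly the kind of anomaly that surfaced the undocumented increase:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pricing_rates (
        product TEXT, rate REAL, effective_date TEXT
    );
    INSERT INTO pricing_rates VALUES
        ('scanner-pro', 0.11, '2023-07-01'),
        ('scanner-pro', 0.24, '2025-01-01');  -- roughly double: should be flagged
""")

# Flag rates more than 50% above the previous rate for the same product.
# (The threshold is an assumption; tune it to your pricing norms.)
suspicious = conn.execute("""
    SELECT curr.product, prev.rate, curr.rate, curr.effective_date
    FROM pricing_rates AS curr
    JOIN pricing_rates AS prev
      ON prev.product = curr.product
     AND prev.effective_date = (
         SELECT MAX(effective_date) FROM pricing_rates
         WHERE product = curr.product
           AND effective_date < curr.effective_date
     )
    WHERE curr.rate > prev.rate * 1.5
""").fetchall()

for product, old, new, on_date in suspicious:
    print(f"{product}: {old} -> {new} on {on_date} ({new / old - 1:.0%} increase)")
    # scanner-pro: 0.11 -> 0.24 on 2025-01-01 (118% increase)
```

Run after every edit, a check like this turns "quietly wrong for years" into "flagged in minutes."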
Ask yourself one question before you build something like this: who should own this data after you're gone?
If the answer is a business team — Finance, RevOps, Sales Ops — it should live somewhere they can edit without needing help. If you're fielding engineering tickets for routine value changes, the data is in the wrong place.
Not everything belongs in Finance's hands. New product lines, new pricing structures, new regions — those change how the model computes. Those still belong in engineering.
But a rate increase? A discount adjustment? A new regional multiplier? That's data. It should live in a table, owned by the people who decided it.
Finance looked at us like we'd performed a miracle. They hadn't expected to be able to update pricing themselves. Nobody had told them it was possible.
We hadn't performed a miracle. We'd just stopped storing their data in our code.
Clarivant builds analytics infrastructure that business teams can actually operate. clarivant.ai
Arturo Cárdenas
Founder & Chief Data Analytics & AI Officer
Arturo is a senior analytics and AI consultant helping mid-market companies cut through data chaos to unlock clarity, speed, and measurable ROI.
