
Finance Asked If They Could Update Pricing Without a Jira Ticket. We Said Yes.

How we gave a Finance team full pricing update capability — no engineering, no git, no Jira tickets — using Sigma input tables wired to dbt seeds with dropdown validation sourced from live pipeline data.

Arturo Cárdenas
Founder & Chief Data Analytics & AI Officer
March 20, 2026 · Updated March 20, 2026 · 8 min read

Key Takeaway

A cloud security company's Finance team was filing Jira tickets to change a single pricing rate — two weeks of waiting for one CSV row. We built a three-stage solution: hardcoded macros → CSV seeds → Sigma input tables with dropdown validation. The result: Finance edits pricing directly in Sigma, requests a dbt run, and verifies in the same dashboard. Engineering is out of the loop entirely. The hard part wasn't the tooling — it was getting the data model clean enough that a non-technical edit couldn't break production.

They looked at us like we'd performed a miracle.

It wasn't a miracle. It was a Sigma input table wired to a dbt seed, four dropdown-validated columns, and a workflow that took three months to build right. But from the finance team's perspective, the experience genuinely was: open dashboard, edit a cell, pricing is updated.

No ticket. No PR. No waiting.


The ticket problem

At a cloud security company, pricing changes required a developer to edit SQL macros. Not because anyone wanted it that way. It's just where the system had landed after five years of incremental growth.

The earliest macros looked like this:

{% macro calc_node_amount(node_quantity, period, plan_name) %}
case
  when {{ plan_name }} = 'Product-A-tier' and {{ period }} < '2023-04-01' then
    cast([rate_v1] as number(38, 9)) * cast({{ node_quantity }} as number(38, 9))
  when {{ plan_name }} = 'Product-A-tier' and {{ period }} < '2024-01-01' then
    cast([rate_v2] as number(38, 9)) * cast({{ node_quantity }} as number(38, 9))
  when {{ plan_name }} = 'Product-A-tier' then
    cast([rate_v3] as number(38, 9)) * cast({{ node_quantity }} as number(38, 9))
end
{% endmacro %}

Every rate change was a deployment. Every regional multiplier adjustment was a PR. An undocumented 118% price increase had been applied at the database level with no git record. The most important business logic was either buried in macros or invisible.

Finance couldn't update anything without filing a ticket. And the engineering team was fielding pricing questions when they had infrastructure to run.

The full story of migrating this codebase is a separate write-up on the technical migration. This post covers the piece that took Finance from frustrated to fully self-sufficient: the input table pattern.


The evolution: three stages

We went through three distinct approaches before landing on something Finance would actually use.

Stage 1: Hardcoded macros. Any rate change required developer time, a PR, and a deployment. Zero self-service. Not viable long-term.

Stage 2: CSV seed files. We extracted all rates into five CSV files — product pricing, regional multipliers, ELA discounts, account discounts, quarterly floors. A real improvement. Finance could edit a spreadsheet. But they still needed to commit to GitHub, which meant git access and git training. In practice, this meant a developer was still in the loop.
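To make Stage 2 concrete, here is a sketch of what one of those seed files might have looked like — the columns match the regional seed schema described later in this post, but the rows and values are invented for illustration:

```csv
region,percentage_adj,effective_date,notes
us-south,1.1600,2025-03-01,Regional uplift
eu-central,1.0800,2025-03-01,Regional uplift
```

Editing a file like this is easy; committing it through git review is the part that kept a developer in the loop.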

Stage 3: Sigma input tables. Finance edits directly in the Sigma dashboard they already use. Zero code interaction. The workflow is: edit in Sigma, request a dbt run, verify in Sigma. The engineering team is out of the loop entirely.

That last step — removing git from the workflow — was the design decision that made this actually work.


How Sigma input tables work with dbt seeds

Sigma's input tables let you create editable spreadsheet-like surfaces inside a dashboard. Users can add rows, edit cells, and save changes — all through the Sigma UI. Those changes write back to a Snowflake table.

The dbt side reads from that table as a seed source. When the Finance team updates a pricing rate in Sigma and a dbt run executes, the new rate flows through the entire downstream model graph automatically.

Here's the seed configuration for the regional pricing table:

seeds:
  - name: regional_pricing_rates
    config:
      schema: pricing
      column_types:
        region: varchar(50)
        percentage_adj: number(5,4)
        effective_date: date
        notes: varchar(255)

Explicit column types matter here. Without them, seed type inference can silently mangle values: a rate like 1.16 may land in a column with the wrong precision, and account IDs with leading zeros get inferred as numbers and lose them. The Finance team edits the values; the schema contract prevents silent corruption.

The dbt model consuming this seed uses a temporal lookup pattern — no end dates; each rate simply applies from its effective_date until a newer row supersedes it — so Finance can add new rows to update rates without deleting old history:

-- int_regional_pricing.sql
select
    usage.account_id,
    usage.product,
    usage.billing_period,
    usage.gross_usage,
    rates.percentage_adj as regional_multiplier,
    usage.gross_usage * rates.percentage_adj as adjusted_revenue
from {{ ref('stg_usage') }} usage
left join {{ ref('regional_pricing_rates') }} rates
    on rates.region = usage.region
    and rates.effective_date <= usage.billing_period::date
qualify row_number() over (
    partition by usage.account_id, usage.billing_period
    order by rates.effective_date desc
) = 1

Adding a new regional rate in Sigma adds a new row with a new effective_date. All historical calculations stay unchanged. All future calculations use the new rate. No engineer needed.
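To make the append-only pattern concrete, here is a hypothetical before/after — rates, region, and dates are invented for illustration:

```sql
-- regional_pricing_rates after Finance adds a new us-south row in Sigma:
--
--   region    percentage_adj  effective_date
--   us-south  1.0000          2023-01-01
--   us-south  1.1600          2025-03-01   <- new row; old row untouched
--
-- For a billing period in March 2025, the qualify clause in the model
-- above keeps the 1.1600 row (most recent effective_date <= period).
-- For any earlier period it keeps the 1.0000 row, so every historical
-- calculation reproduces exactly as before.
```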


Dropdown validation: the detail that makes it safe

Giving Finance a free-form editable table would be worse than a CSV — at least with CSV review, an engineer would catch a typo before it reached production.

The solution is dropdown validation. Every column that references a dimension — product, region, plan, account — is constrained to a dropdown list in Sigma. The dropdown values are pulled directly from the actual pipeline data.

Seven dimension tables power the validation:

Table                    Values
dim_regions              11 cloud regions (exact slugs matching billing data)
dim_products             7 product lines
dim_plan_names           Product-A-tier, Platform, Product-B-tier, ALL
dim_segments             Internal, External POC, External Upside, Unknown
dim_accounts             Account ID + display name
dim_product_categories   Detection, Observability, All
dim_plans                Observability, Detection, Platform

These dimensions materialize as tables and auto-update on every dbt run. When a new product launches, dim_products updates automatically, and the dropdown in Sigma reflects it on next run. No configuration change required.

Finance can't type "us-sout" instead of "us-south". They can't enter a plan name that doesn't exist. The dimension tables enforce referential integrity without a database constraint — because the constraint lives in the UI where they're actually working.
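A dimension table that feeds a dropdown can be a very small dbt model over the pipeline's own staging data. Here is a sketch of what dim_regions might look like — the model and column names are assumptions for illustration, not the project's actual code:

```sql
-- models/dims/dim_regions.sql (illustrative sketch; names assumed)
-- Materialized as a table so Sigma can point a dropdown at it.
{{ config(materialized='table') }}

select distinct region
from {{ ref('stg_usage') }}
where region is not null
order by region
```

Because the model reads from the same staging data the billing pipeline uses, the dropdown values can only ever be slugs that actually exist in production — and a new region appears in the dropdown on the next run with no configuration change.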

Self-service flow: Finance edits Sigma input table → dbt run → downstream models update → Finance verifies in Sigma dashboard. Engineering out of loop entirely.


The Finance workflow: what it actually looks like

We wrote a three-layer guide for the Finance team. Not a technical runbook — a practical workflow with copy-paste commands for the one code interaction they still need.

Layer 1 — Quick reference:

What to update           Where to edit                     Where to verify
Product pricing rates    Pricing Input Table (Sigma)       Revenue Dashboard
Regional multipliers     Regional Input Table (Sigma)      ARR by Region
Account discounts        Account Discount Table (Sigma)    Account Detail
ELA quarterly floors     ELA Payments Table (Sigma)        ELA Compliance

Layer 2 — Step-by-step (example: add a new account discount):

  1. Open Pricing Input Tables in Sigma
  2. Navigate to the Account Discounts tab
  3. Add a new row: select account from dropdown, select product category, enter discount percentage, set effective date
  4. Save
  5. Request a dbt run (one command, provided verbatim in the guide)
  6. Open the Account Detail dashboard in Sigma and verify the new rate is applied

Layer 3 — Validation queries:

Finance can run four pre-written SQL queries directly in Sigma to verify their changes took effect (shown below in dbt templating syntax; the copies in the guide run against the materialized tables):

-- Verify discount application
select
    account_id,
    applied_discount_pct,
    discount_source
from {{ ref('fct_account_discounts') }}
where account_id = '{{ account_id }}'
order by effective_date desc
limit 1

This matters more than it sounds. Finance had previously submitted tickets and waited for engineering to confirm the update worked. Now they can verify it themselves, in the same tool where they made the change. The feedback loop closed.


One timing detail Finance needed to know

Monthly and quarterly models behave differently, and this confused the Finance team until we documented it explicitly.

Monthly models include the current month immediately — a pricing change on March 15 appears in March revenue on the next dbt run.

Quarterly models exclude the current incomplete quarter — a Q1 change won't appear in Q1 numbers until Q1 is complete. This is by design: incomplete quarters produce misleading ARR figures.
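That exclusion can be a single guard in the quarterly rollup. A sketch, with column and model names assumed rather than taken from the actual project:

```sql
-- Illustrative sketch of the "exclude the incomplete quarter" guard.
select
    date_trunc('quarter', billing_period::date) as quarter,
    sum(adjusted_revenue)                       as quarterly_revenue
from {{ ref('int_regional_pricing') }}
-- Drop everything from the current, still-incomplete quarter:
where billing_period::date < date_trunc('quarter', current_date)
group by 1
```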

Without this explanation, Finance would update a rate, run a validation, see no change in the quarterly ARR dashboard, and assume something broke. We put this in the guide with a specific example. After that, no confusion.


What this actually changed

Before this pattern, the Finance team filed Jira tickets for pricing updates. The tickets sat in a backlog behind infrastructure work. A regional multiplier change for the São Paulo region — one row in a CSV — could take two weeks to reach production.

After: the Finance team updates pricing in Sigma, requests a dbt run via Slack, and verifies in the dashboard. End to end, under an hour.

The revenue analytics migration case study covers the full scope of what this project delivered. The self-service layer was one of its outcomes, but the underlying pricing engine — how we got from hardcoded macros to seed-driven lookups — had to exist before Finance self-service was even possible.

You can't give Finance a self-service interface to logic that's buried in Jinja macros. The data model has to separate rate data from calculation logic before the rate data can be made editable. That separation is the prerequisite. The Sigma input tables are the interface on top of it.

Topics

finance self-service analytics, Sigma input tables, dbt seeds, finance analytics automation, self-service BI, pricing update automation, dbt finance workflow, finance team analytics

Arturo Cárdenas

Founder & Chief Data Analytics & AI Officer

Arturo is a senior analytics and AI consultant helping mid-market companies cut through data chaos to unlock clarity, speed, and measurable ROI.
