The Hidden Tax of Snowflake RBAC: Permission Patterns That Actually Work
Snowflake RBAC ate 17–26 engineering hours in the first two months of our engagement. Here are the role patterns — future grants, functional/access split, per-developer isolation — that eliminated most of it.

Key Takeaway
Over a 5-month cloud security analytics engagement, permission-related blockers averaged two to three per week in the first two months — 17–26 engineering hours lost before the migration was even done. This post breaks down the role hierarchy patterns, future grant setup, and per-developer isolation that eliminated the recurring tax.
It was 8pm on a Friday. Our infra engineer had just granted the third round of permissions for the dbt service role.
"Ok, let me know what the next error is."
Reader, there was a next error.
The short version
Snowflake RBAC is powerful and painful in equal measure. The "simple role hierarchy" the docs describe becomes a multi-hour troubleshooting cycle the moment you add a service account, a CI/CD pipeline, and a team of developers who all need slightly different access. The cost isn't the architecture — the cost is the time: the error → ask admin → wait → retry loops that nobody tracks and everybody pays.
This post is about that cost, and the role patterns that eliminate most of it.
"Simple role hierarchy" says the docs
Let's revisit that phrase.
Snowflake's official guidance shows a clean diagram: SYSADMIN owns objects, SECURITYADMIN owns roles, USERADMIN manages users, functional roles sit neatly below them. It looks like four boxes and three arrows. It implies an afternoon of setup.
What actually happens on a real engagement:
- Developer needs access to a new schema → missing USAGE on the database
- Service role runs a dbt model → missing USAGE on the warehouse
- CI/CD pipeline kicks off → missing CREATE TABLE on the target schema
- Prod deployment → missing MODIFY on the schema the dev role does have
- Sigma tries to query → missing SELECT on the new models the service role just built
Each of these is a separate ticket, a separate Slack ping, a separate interruption for whoever holds the SECURITYADMIN hat. Over a 5-month engagement migrating a cloud security company to a modern analytics stack (dbt + Snowflake + Sigma), we tracked these interruptions. Permission-related blockers averaged two to three per week in the first two months.
The hidden tax isn't any single incident. It's the accumulated overhead of a role structure that wasn't designed for how the team actually works.
The patterns that actually work
Here's what we landed on after enough late-night permission grants to develop opinions.
Separate functional roles from access roles
The core principle: never grant privileges directly to users. Grant them to roles, grant roles to users, and keep functional roles (what people do) separate from access roles (what they can see).
-- Access roles: what you can see
CREATE ROLE db_analytics_read;
CREATE ROLE db_analytics_readwrite;
-- Functional roles: who you are
CREATE ROLE developer;
CREATE ROLE analyst;
CREATE ROLE reporter;
-- Wire them together
GRANT ROLE db_analytics_read TO ROLE reporter;
GRANT ROLE db_analytics_readwrite TO ROLE analyst;
GRANT ROLE db_analytics_readwrite TO ROLE developer;
-- Grant privileges to access roles, not to functional roles
GRANT USAGE ON DATABASE analytics TO ROLE db_analytics_read;
GRANT USAGE ON ALL SCHEMAS IN DATABASE analytics TO ROLE db_analytics_read;
GRANT SELECT ON ALL TABLES IN DATABASE analytics TO ROLE db_analytics_read;
This gives you one place to change when you add a new schema: the access role. Functional roles don't change. Users don't change.
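To make that "one place to change" concrete, here's what onboarding a new schema looks like under this structure (the schema name marts_finance is illustrative):

```sql
-- A new schema arrives: only the access role needs grants
CREATE SCHEMA analytics.marts_finance;

GRANT USAGE ON SCHEMA analytics.marts_finance TO ROLE db_analytics_read;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.marts_finance TO ROLE db_analytics_read;
GRANT SELECT ON FUTURE TABLES IN SCHEMA analytics.marts_finance TO ROLE db_analytics_read;

-- Functional roles (developer, analyst, reporter) and users are untouched:
-- they inherit the new access through the role graph.
```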
Service accounts need their own role hierarchy
The dbt service account is the most common source of Friday-night permission failures. It needs a specific, reproducible set of privileges — and those privileges need to survive a schema change, a new model, a new environment.
-- Create a dedicated service role for dbt
CREATE ROLE dbt_service_role;
-- Warehouse access
GRANT USAGE ON WAREHOUSE transform_wh TO ROLE dbt_service_role;
-- Database + schema access
GRANT USAGE ON DATABASE analytics TO ROLE dbt_service_role;
GRANT USAGE ON DATABASE analytics_dev TO ROLE dbt_service_role;
-- Schema-level create/modify for dbt to build models
GRANT CREATE SCHEMA ON DATABASE analytics_dev TO ROLE dbt_service_role;
GRANT USAGE ON ALL SCHEMAS IN DATABASE analytics TO ROLE dbt_service_role;
GRANT CREATE TABLE ON ALL SCHEMAS IN DATABASE analytics_dev TO ROLE dbt_service_role;
GRANT CREATE VIEW ON ALL SCHEMAS IN DATABASE analytics_dev TO ROLE dbt_service_role;
-- Future grants: this is the one people forget
GRANT SELECT ON FUTURE TABLES IN DATABASE analytics TO ROLE dbt_service_role;
GRANT SELECT ON FUTURE VIEWS IN DATABASE analytics TO ROLE dbt_service_role;
That last block — GRANT ... ON FUTURE TABLES — is what stops the access review cycle from recurring every time you add a model. Without future grants, every new table or view your dbt job creates is invisible to the roles that need to read it. This is the single most common source of "why can't Sigma see the new model?" tickets.
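One nuance worth knowing about future grants (the schema name analytics.marts here is illustrative): in Snowflake, a future grant defined at the schema level takes precedence over the database-level future grant for the same object type — it replaces it rather than adding to it.

```sql
-- Database-level future grant (from the block above):
-- covers new tables in every schema of the database...
GRANT SELECT ON FUTURE TABLES IN DATABASE analytics TO ROLE dbt_service_role;

-- ...until someone defines a schema-level future grant for the same
-- object type. Schema-level future grants take precedence over the
-- database-level one, so new tables in analytics.marts would stop
-- picking up the database-level grant above.
GRANT SELECT ON FUTURE TABLES IN SCHEMA analytics.marts TO ROLE reporter;
```

If future grants silently stop working for one schema but not others, this precedence rule is the first thing to check.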
Per-developer isolated schemas, production-safe
On this engagement we ran per-developer development databases: DBT_DEV__ALEX, DBT_DEV__BOB, and so on. Each developer had full CREATE TABLE / VIEW / SCHEMA rights in their own database and no rights in production except SELECT on the staging layer.
-- Developer role: full rights in dev, read-only in prod staging
CREATE ROLE developer_alex;
GRANT USAGE ON DATABASE dbt_dev__alex TO ROLE developer_alex;
GRANT CREATE SCHEMA ON DATABASE dbt_dev__alex TO ROLE developer_alex;
GRANT ALL PRIVILEGES ON ALL SCHEMAS IN DATABASE dbt_dev__alex TO ROLE developer_alex;
-- Read-only in prod staging for reference
GRANT USAGE ON DATABASE analytics TO ROLE developer_alex;
GRANT USAGE ON SCHEMA analytics.staging TO ROLE developer_alex;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.staging TO ROLE developer_alex;
-- Assign to user
GRANT ROLE developer_alex TO USER alex_user;
The isolation means an accidental dbt run in the wrong target (something you prevent in code with a validate_dev_target macro — see Running dbt Natively in Snowflake) hits your own dev database, not production. RBAC as a second safety layer.
Database roles: the 2024_08 bundle behavior
Snowflake introduced database roles as a way to scope grants inside a single database — useful for multi-tenant architectures or when you want Sigma-specific read grants without a proliferating account-role hierarchy.
One behavior change worth knowing: the 2024_08 BCR bundle changed how GRANT ... ON ALL SCHEMAS interacts with future schemas. If your Snowflake account opted into 2024_08 (or was auto-enrolled), a grant that previously covered new schemas may not anymore. Check your account with:
SELECT SYSTEM$BEHAVIOR_CHANGE_BUNDLE_STATUS('2024_08');
If the output is ENABLED, audit your ON FUTURE SCHEMAS grants. The shift was subtle and broke access reviews for teams that didn't know to look for it.
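A quick way to run that audit is to compare the future grants you think you have against what Snowflake actually reports (database and schema names here follow the earlier examples):

```sql
-- Future grants defined at the database level
SHOW FUTURE GRANTS IN DATABASE analytics;

-- And at the schema level, where overrides can hide
SHOW FUTURE GRANTS IN SCHEMA analytics.staging;
```

Run both: a role that looks covered at the database level may be shadowed by a narrower schema-level definition.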
The actual troubleshooting query
When a dbt service role can't access something, the diagnosis usually takes longer than the fix. This speeds it up:
-- What can a role actually do?
SHOW GRANTS TO ROLE dbt_service_role;
-- What roles does a user have?
SHOW GRANTS TO USER service_user;
-- What privileges exist on a specific object?
SHOW GRANTS ON SCHEMA analytics.marts;
-- Who owns what (useful when grants look right but still fail)
SELECT table_schema, table_name, table_owner
FROM information_schema.tables
WHERE table_schema = 'MARTS'
ORDER BY table_name;
The table_owner query is the one that solves the confusing case: a table that shows up in SHOW GRANTS but still throws an access error. If the table was created by the wrong role, ownership determines what the grant actually allows. The fix is GRANT OWNERSHIP ON TABLE ... TO ROLE dbt_service_role, not another GRANT SELECT.
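A sketch of that ownership fix (the table name fct_events is hypothetical; the executing role needs MANAGE GRANTS or ownership of the object):

```sql
-- Transfer ownership of one table, preserving its existing grants
GRANT OWNERSHIP ON TABLE analytics.marts.fct_events
  TO ROLE dbt_service_role COPY CURRENT GRANTS;

-- Or fix a whole schema's worth of mis-owned tables at once
GRANT OWNERSHIP ON ALL TABLES IN SCHEMA analytics.marts
  TO ROLE dbt_service_role COPY CURRENT GRANTS;
```

Without COPY CURRENT GRANTS, the transfer drops all outbound privileges on the object — which trades one access error for several new ones.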
The tax you're already paying
Let's put a rough number on it.
Per-engineer, a permission error that requires an admin costs: ~5 minutes to diagnose and report, ~15–30 minutes wait time for the admin to respond (assuming they're available), ~5 minutes to verify and move on. Call it 30–40 minutes per incident.
On this 5-month engagement, the first two months averaged two to three permission-related blockers per week across a team of four engineers — and, as the Friday-night anecdote suggests, most blockers took more than one grant-and-retry round to clear. Call it roughly 30–45 admin round-trips in the first phase alone. At 35 minutes each: 17–26 engineering hours lost to permission management before the team had even finished the initial migration.
That's not counting the context switch cost. An engineer waiting on a permission grant doesn't sit idle — they context-switch to something else, then context-switch back when the grant arrives. Research on interruption recovery puts the re-engagement cost at 15–20 minutes per context switch. The real number is higher than the raw wait time suggests.
The fix isn't zero-cost: setting up a clean role hierarchy with future grants takes a dedicated half-day at project start. But it's a one-time investment against a recurring tax. The math is obvious once you count it.
What we'd do differently
Invest the half-day before the first deployment. We spent the first two months paying the tax before the role hierarchy was right. A 4-hour RBAC setup sprint at project kickoff would have recovered most of that time inside the first month.
Document the role hierarchy like you document the schema. We eventually added an RBAC section to the project's CLAUDE.md — what roles exist, who owns what, what the future grant setup covers. When a new integration (Sigma OAuth, Streamlit, Cortex Analyst) needed access, the engineer could self-serve from documentation instead of pinging the infra team.
Use the Okta + network policy interaction as an input to role design. When Okta SSO is in play — covered in depth in our post on the Okta + Snowflake auth dance — network policies interact with role grants in non-obvious ways. Developer IPs are dynamic. Service accounts need different network policy treatment than human users. This is an RBAC design input, not just a security configuration.
Lessons learned
The Snowflake RBAC docs show the architecture. They don't show you what happens when a dbt job runs at 2am and the service role is missing MODIFY on the schema that got recreated in the last deployment.
The hidden tax of RBAC is real, it's measurable, and it compounds across a multi-month engagement. The cure is boring and front-loaded: functional roles, future grants, per-developer isolation, documented ownership. None of this is clever. All of it saves the Friday-night pings.
"Let me know what the next error is" is a funny line when the infra engineer says it at 8pm. It's less funny when it's the third time this week.
Arturo Cárdenas
Founder & Chief Data Analytics & AI Officer
Arturo is a senior analytics and AI consultant helping mid-market companies cut through data chaos to unlock clarity, speed, and measurable ROI.


