
AI in Analytics Engineering: What Executives Need to Know (Not the Hype)

What AI actually did — and didn't do — on a 5-month analytics migration with real financial data.

Arturo Cárdenas
Founder & Chief Data Analytics & AI Officer
March 23, 2026 · Updated March 23, 2026 · 6 min read

Key Takeaway

We used AI as a development tool on a production analytics migration handling real revenue data. Here's an honest assessment: where AI multiplied productivity, where it fell short, and the guardrail framework that made it work safely on financial systems.

What AI Did Well

1. Accelerated Repetitive Pattern Work

The project required 12 staging models — one per cloud region — all following the same pattern with minor variations (different source tables, different region identifiers). AI generated these in minutes instead of hours.
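The repeated pattern might look like the following sketch of a dbt staging model. The source, column, and region names here are hypothetical, not taken from the actual project; each of the 12 real models would follow the same shape with a different source and region identifier.

```sql
-- Hypothetical staging model: models/staging/stg_region_us_east.sql
-- The other 11 regions follow this exact shape with a different
-- source() reference and region code.
with source as (

    select * from {{ source('billing_us_east', 'usage_events') }}

),

renamed as (

    select
        event_id,
        account_id,
        'us-east' as region_code,
        usage_quantity,
        cast(event_timestamp as timestamp) as event_at
    from source

)

select * from renamed
```

Because the variation between models is mechanical (source name, region code), this is exactly the kind of work where AI generation is low-risk and high-leverage.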

Similarly, creating schema documentation (YAML files with column descriptions and tests) for 80+ models is tedious work that AI handles efficiently.
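A dbt schema file of the kind described might look like this sketch; the model and column names are illustrative, not from the actual project.

```yaml
# Hypothetical schema file: models/staging/_staging__models.yml
# Illustrates the per-model documentation and tests generated for 80+ models.
version: 2

models:
  - name: stg_region_us_east
    description: "Usage events for the us-east region, one row per event."
    columns:
      - name: event_id
        description: "Unique identifier for the usage event."
        tests:
          - unique
          - not_null
      - name: usage_quantity
        description: "Metered usage for the billing period."
        tests:
          - not_null
```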

2. Maintained Consistency Across 161 Models

With a CLAUDE.md file defining naming conventions, architecture patterns, and coding standards, the AI produced code that was stylistically consistent across the entire project. No "different developer, different style" drift.

3. Caught Edge Cases in Financial Logic

During pricing implementation, the AI flagged potential precision issues (FLOAT vs NUMBER types) and suggested explicit casts. It also identified potential fan-out risks in joins that could inflate revenue calculations.
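The precision issue is easy to demonstrate in a few lines of Python (a generic illustration of the failure mode, not code from the project): binary floating point cannot represent most decimal amounts exactly, so small errors accumulate across many rows, while exact decimal types do not drift.

```python
from decimal import Decimal

# Ten line items of $0.10 each should total exactly $1.00.
float_total = sum([0.1] * 10)                         # binary float arithmetic
decimal_total = sum([Decimal("0.1")] * 10, Decimal("0"))  # exact decimal arithmetic

print(float_total == 1.0)                 # False: accumulated rounding error
print(decimal_total == Decimal("1.00"))   # True: exact to the cent
```

The same logic is why the guardrails mandate NUMBER-style exact types, not FLOAT, for anything monetary in the warehouse.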

4. Made Documentation Happen

Let's be honest — engineers rarely write documentation voluntarily. With AI assistance, every model got descriptions, every column got documented, and the self-service guides for Finance were drafted and refined iteratively.


[Figure: AI in Analytics: Hype vs. Reality — what the marketing says versus what actually happened in production]


What AI Did NOT Do

1. Make Financial Decisions

The AI never decided what a pricing rate should be, how a discount should apply, or whether a variance was acceptable. Every financial calculation was proposed by AI, reviewed by a human, and approved by Finance stakeholders.

This was enforced by design — the project's CLAUDE.md file explicitly prohibits AI from changing financial macros without human approval.

2. Replace Domain Expertise

The AI had no idea that a product category's pricing changed significantly mid-year because the rate existed in a database, not in any code or documentation. A human had to reverse-engineer the multiplier by comparing legacy and new system outputs.

AI accelerates work within a known domain. It doesn't discover undocumented business rules.
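The reverse-engineering step described above can be sketched as a validation query. Table and column names here are hypothetical; the idea is to put legacy and new outputs side by side so an undocumented rate change shows up as a step in the implied multiplier.

```sql
-- Hypothetical validation query: compare legacy and migrated revenue
-- per category and month; a mid-year pricing change appears as a
-- sudden shift in implied_multiplier.
select
    l.product_category,
    l.billing_month,
    l.revenue as legacy_revenue,
    n.revenue as new_revenue,
    round(n.revenue / nullif(l.revenue, 0), 4) as implied_multiplier
from legacy_revenue_by_category l
join new_revenue_by_category n
  on l.product_category = n.product_category
 and l.billing_month = n.billing_month
order by l.product_category, l.billing_month
```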

3. Navigate Organizational Complexity

Snowflake permissions, Okta SSO configurations, GitHub token approvals, role hierarchies — none of this was AI-solvable. The "permission dance" (try → fail → request access → wait → try again) consumed real project time that no AI could shortcut.

4. Run Without Guardrails

Early in the project, without proper constraints, the AI would occasionally suggest running broad dbt commands that could affect production. The solution was explicit guardrails:

  • Never run dbt run without --select (limits scope)
  • Never use FLOAT for monetary values
  • Never modify legacy reference code
  • Always validate before claiming correctness

Without these rules, AI assistance would have been net negative on a financial system.


[Figure: AI Guardrail Layers — how constraints flow from context to code to production]


The Framework: How to Actually Use AI in Analytics

The CLAUDE.md Pattern

The single most impactful decision was creating a project-level instruction file (CLAUDE.md) that defines:

Architecture rules — where each type of code belongs (staging, intermediate, marts), naming conventions, materialization patterns.

Hard boundaries — operations that require human approval (financial logic changes, production deployments, destructive commands).

Domain context — what the project does, what the data represents, what precision standards apply.

Workflow patterns — how to approach common tasks (migration, validation, testing).

This file serves a dual purpose: it constrains the AI during development and documents institutional knowledge for human team members. When the contractor left, the team had everything needed to continue — both with and without AI assistance.
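To make the pattern concrete, here is an illustrative skeleton of such an instruction file. The headings mirror the four categories above; the specific rules are examples drawn from this article, not a copy of the actual project file.

```markdown
# CLAUDE.md — project instructions (illustrative sketch)

## Architecture rules
- Staging models live in models/staging/, one per source; marts in models/marts/.
- Name models by layer: stg_*, int_*, fct_*, dim_*.

## Hard boundaries (require human approval)
- Never modify financial macros or pricing logic.
- Never run `dbt run` without `--select`.
- Never use FLOAT for monetary values.

## Domain context
- This project migrates revenue reporting; outputs feed Finance dashboards.
- Monetary values must be exact to the cent.

## Workflow patterns
- Validate model output against the legacy reference before claiming correctness.
```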

The Guardrail Investment

Time spent writing guardrails: ~2 days across the project. Time saved by AI across 5 months: significant (difficult to quantify precisely, but the 285 commits from the primary developer suggest high velocity).

The ROI on guardrails isn't just safety — it's quality. Constrained AI produces better code than unconstrained AI because it follows patterns instead of guessing.


What Executives Should Ask Their Teams

1. "What guardrails are in place?"

If the answer is "we're just using the AI tool as-is," that's a risk — especially on financial systems. Every project should have documented constraints.

2. "Who approves AI-generated financial logic?"

There should be a human in the loop for any calculation that affects revenue, billing, or compliance reporting. "The AI wrote it and it passed tests" is not sufficient approval.

3. "Is the AI making us faster, or just busier?"

AI can generate code quickly. But if that code needs extensive review, debugging, and rework, the net velocity gain may be smaller than it appears. Measure cycle time, not just output volume.

4. "What happens when we turn off the AI?"

If the codebase only makes sense to the AI that wrote it, you have a dependency problem. Well-structured AI-assisted code should be readable and maintainable by humans. Documentation, naming conventions, and clear architecture matter more with AI, not less.

5. "Are we using AI where it has leverage?"

AI excels at: repetitive patterns, documentation, code consistency, test generation. AI struggles with: undocumented business rules, organizational navigation, precision-critical financial decisions.

Deploy it where it has leverage. Don't force it where it doesn't.


The Bottom Line for Executives

AI in analytics engineering is real and valuable — but it's a productivity multiplier, not a replacement for domain expertise or human judgment.

The project delivered 161 models in 5 months with a team of 4. That velocity would not have been possible without AI assistance. But it also would not have been possible without:

  • Clear architecture patterns
  • Explicit guardrails on financial logic
  • Human review of every pricing calculation
  • Organizational navigation that no AI can automate

The companies that will get the most value from AI in analytics aren't the ones with the best AI tools. They're the ones with the best guardrails, the clearest documentation, and the strongest domain expertise to guide the AI.

Invest in the framework, not just the tool.


Evaluating AI for your data team? We can help you separate what works from what's marketing. Let's talk.

Topics

AI analytics engineering · AI guardrails · Claude Code · dbt · AI development tools · AI in data engineering · enterprise AI governance

Arturo Cárdenas

Founder & Chief Data Analytics & AI Officer

Arturo is a senior analytics and AI consultant helping mid-market companies cut through data chaos to unlock clarity, speed, and measurable ROI.

Ready to turn data into decisions?

Let's discuss how Clarivant can help you achieve measurable ROI in months.