Stage I — Implementation

Stand up AI analytics on top of the systems you already have

Today's leading AI agents already work for analytics. You don't need a specialized AI analyst product — you need the context and trust layer around the agent.

Context: GitHub · Notion · Snowflake Cortex · Atlan
Agents: Claude Code · Codex · Gemini · Claude
Warehouse: Snowflake · BigQuery · Redshift
BI: Sigma · Tableau · Hex · Looker
analyst@nodal — answered
  Q: What's our revenue last quarter?
  A: $4.2M (Q4 2025 · ↑ 12% vs Q3 2025)
  Source: fct_revenue (dbt) · finance domain
  Freshness: 3h ago
  Trust: 94 / 100 · context match
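
The card is just structured provenance rendered for a reader. Here is a minimal sketch of the payload behind it, assuming a simple dataclass shape; every field name is illustrative, not Nodal's actual schema:

    from dataclasses import dataclass

    @dataclass
    class AnsweredQuestion:
        """Hypothetical shape of an answer with provenance attached."""
        question: str           # the user's original question
        value: str              # the headline answer
        period: str             # resolved time grain, e.g. "Q4 2025"
        source_model: str       # warehouse model the number came from
        domain: str             # business domain that owns the model
        freshness_hours: float  # age of the underlying data
        trust_score: int        # 0-100, derived from context-match signals

    answer = AnsweredQuestion(
        question="What's our revenue last quarter?",
        value="$4.2M",
        period="Q4 2025",
        source_model="fct_revenue (dbt)",
        domain="finance",
        freshness_hours=3.0,
        trust_score=94,
    )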
Stage II — Safe rollout

Catch under-specified questions before they get wrong answers

Non-technical users don't ask fully specified questions. Nodal makes the gaps visible before SQL runs — so widening access doesn't scale plausible-but-wrong answers. The move from demo to internal deployment runs through the business user; the system has to be usable in the flow of work, not just technically correct.

  • Question reframed with defaults from your documentation; assumptions in brackets the user can change.
  • Confidence score from auditable signals — entity resolution, schema grounding, doc coverage, context freshness (a minimal scoring sketch follows this list).
  • You approve the interpretation, not the SQL.
  • Every under-specified question becomes a signal — and a candidate test case for the eval suite.
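
One way a score like that could be assembled: a minimal sketch that blends four normalized signals into a single 0-100 number. The signal names come from the list above; the weights are illustrative, not Nodal's actual formula.

    def confidence_score(signals: dict[str, float],
                         weights: dict[str, float] | None = None) -> int:
        """Blend auditable 0-1 signals into a single 0-100 score.

        Illustrative weights only; a real system would calibrate
        them against labeled outcomes.
        """
        weights = weights or {
            "entity_resolution": 0.35,  # did "revenue" resolve to one metric?
            "schema_grounding": 0.30,   # do referenced columns exist, typed?
            "doc_coverage": 0.20,       # are the touched models documented?
            "context_freshness": 0.15,  # how recently was context updated?
        }
        total = sum(weights[k] * signals.get(k, 0.0) for k in weights)
        return round(100 * total)

    score = confidence_score({
        "entity_resolution": 1.0,
        "schema_grounding": 0.95,
        "doc_coverage": 0.9,
        "context_freshness": 0.85,
    })  # -> 94, the score shown on the card above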
Stage III — Continuous reliability

Regression tests for AI analytics — every commit, every piece.

The same discipline software engineers apply to code, applied to AI analytics. Every dbt commit, doc edit, prompt change, or model swap triggers a re-run. Drift gets attributed to the specific change that caused it. Accuracy and cost-benefit get measured per piece of the system — not assumed.

  • Re-run on every change — schema migrations, dbt commits, doc edits, prompt changes, model swaps. Failures get pinned to the commit that caused them, with affected questions, SQL diffs, and result deltas.
  • Ablation tests on each context source — drop a data dictionary, a Notion page, a glossary entry; measure the answer-quality delta against the token-cost delta (see the harness sketch after this list).
  • Model trade-off tests — swap Claude for a cheaper model, Codex for Gemini; read off pass rate vs. cost per run.
  • Cost optimization stops being a guess — every piece of the system is benchmarked against the trust it actually delivers.
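
In harness form, the re-run and ablation loops could look like the sketch below. It assumes a run_eval(questions, context) callable that executes the suite and grades the answers; the names are hypothetical, not a real Nodal API.

    def run_benchmark(questions, context_sources, run_eval):
        """Full re-run plus one ablation per context source.

        `run_eval` stands in for whatever executes the questions
        against the agent and grades the answers; it is assumed to
        return an object with .pass_rate and .token_cost.
        """
        baseline = run_eval(questions, context_sources)

        ablations = {}
        for name in context_sources:
            # Re-run the suite with exactly one context source removed.
            reduced = {k: v for k, v in context_sources.items() if k != name}
            result = run_eval(questions, reduced)
            ablations[name] = {
                "pass_rate_delta": result.pass_rate - baseline.pass_rate,
                "token_cost_delta": result.token_cost - baseline.token_cost,
            }
        return baseline, ablations

The same loop handles model trade-off tests: hold the context fixed, swap the model behind run_eval, and compare pass rate against cost per run.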
Benchmark Run — April 8, 2026

Trigger: dbt model change (commit a3f8c2d)

92 questions evaluated · 88 passed · 4 drifted · 0 failed

Affected: dim_patients.enrollment_status

Drifted questions
  1. "Active Medicare patients by region" — result changed
  2. "Enrollment trend by quarter" — confidence score dropped 12 points
  3. "Payer mix for active patients" — SQL changed
  4. "Patient count by enrollment status" — result changed
Documentation health report
  • 67% of answered questions relied on dbt column descriptions
  • 23% used Confluence documentation — but 40% of those pages hadn't been updated in over a year
  • 15% lower consistency on questions grounded in stale docs
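
Numbers like these fall out of joining answer traces to doc metadata. A minimal sketch, assuming each trace records the doc that grounded it and that doc's last-edit date (the trace shape is hypothetical):

    from datetime import datetime, timedelta

    def stale_doc_share(traces, now=None):
        """Share of doc-grounded answers whose doc is over a year old.

        `traces` is assumed to be a list of dicts like
        {"doc": "...", "doc_last_edited": datetime(...)}.
        """
        now = now or datetime.now()
        cutoff = now - timedelta(days=365)
        grounded = [t for t in traces if t.get("doc_last_edited")]
        if not grounded:
            return 0.0
        stale = [t for t in grounded if t["doc_last_edited"] < cutoff]
        return len(stale) / len(grounded)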
§ Pre-flight

Self-service analytics is adding AI context to the tools you already have.

Four context layers decide whether self-service AI analytics works inside a real organization. Best practices for each below; a sketch of a combined context manifest follows the list.

  1. Data warehouse(s)

     Snowflake, BigQuery, or Redshift — with a dbt project sitting on top. The system of record the agent queries against.

  2. Data lineage

     Column-level flow from raw tables through transformations to dashboards — so the agent knows where every number actually comes from.

  3. Code as context

     dbt project, DAG pipelines (Airflow, Dagster), and scripts repo — the queries your team has already written are the ground truth.

  4. Business-context layer

     What metrics actually mean — the piece most teams skip, and the one that decides whether the rest works.
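
Taken together, the four layers amount to a context manifest the agent reads before it writes SQL. A minimal sketch of what one could contain; the keys, paths, and names are illustrative placeholders, not a Nodal format:

    CONTEXT_MANIFEST = {
        "warehouse": {
            "platform": "snowflake",             # or "bigquery" / "redshift"
            "dbt_project": "acme/analytics-dbt",  # illustrative repo name
        },
        "lineage": {
            "granularity": "column",             # raw tables -> models -> dashboards
            "sources": ["dbt artifacts", "BI metadata"],
        },
        "code_as_context": [
            "dbt models",                        # reviewed, tested SQL
            "Airflow/Dagster DAGs",              # pipeline dependencies
            "analytics scripts repo",            # queries the team already wrote
        ],
        "business_context": {
            "metric_glossary": "Finance Metric Glossary",  # illustrative doc title
            "owners": {"finance": "data-team"},
        },
    }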

Walkthrough — what deployment looks like
End of dispatch

Get this running on your stack.

Request demo