NDL · 03 Use cases

Who Nodal is for.

Teams at three stages: getting AI analytics live, widening access safely, and keeping the system reliable as usage scales.

Stage I — Deploying for the first time

Teams standing up AI analytics on their existing stack

You think self-service AI analytics should work on your stack. The real problem is connecting the warehouse, dbt, dashboards, docs, and business definitions into something usable.

Nodal helps you stand up the deployment using the infrastructure you already have, grounded in real business context instead of a demo-only setup.
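
What the wiring can look like. A minimal sketch in Python; the source kinds, URIs, and refresh field are illustrative assumptions, not Nodal's actual configuration schema.

    # Illustrative only. Not Nodal's real configuration API.
    # A deployment registers the sources you already operate.
    from dataclasses import dataclass

    @dataclass
    class ContextSource:
        kind: str              # "warehouse" | "dbt" | "dashboards" | "docs"
        uri: str               # where the context is read from
        refresh: str = "daily"

    DEPLOYMENT = [
        ContextSource("warehouse",  "snowflake://analytics/prod"),
        ContextSource("dbt",        "git://analytics-repo/target/manifest.json"),
        ContextSource("dashboards", "https://bi.internal/api/boards"),
        ContextSource("docs",       "https://wiki.internal/space/DATA"),
    ]

The point is registration, not rebuilding: every entry is infrastructure you already run.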

Stage II — Widening access without breaking trust

Teams scaling AI analytics beyond the data team

A clear-sounding question. A confident number. No way to tell what the system actually did under the hood.

The bottleneck on broad rollout isn't capability; it's giving business users the trust signals and workflow fit they need to use the system correctly, repeatedly. Nodal shows the assumptions, the sources, and a confidence score before any SQL runs, so access can widen without the blast radius of plausible-but-wrong answers growing unchecked.
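
What that gate can look like before anything executes. A minimal sketch; the plan shape, the 80-point floor, and the helper callables are assumptions for illustration, not Nodal's API.

    # Illustrative pre-execution gate. Shapes are assumed, not Nodal's API.
    CONFIDENCE_FLOOR = 80  # below this, clarify instead of running SQL

    def review_then_run(plan, run_sql, ask_user):
        """Surface assumptions, sources, and confidence before SQL runs."""
        print("Assumptions:", *plan["assumptions"], sep="\n  - ")
        print("Sources:", *plan["sources"], sep="\n  - ")
        print(f"Confidence: {plan['confidence']}")
        if plan["confidence"] < CONFIDENCE_FLOOR:
            return ask_user("Low confidence. Clarify the question or approve?")
        return run_sql(plan["sql"])

Below the floor, the question routes to clarification instead of execution. That is what keeps the blast radius bounded as access widens.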

Stage III — Already in production, need continuous reliability

Teams who need to know which definitions are breaking answers right now

You know stale Confluence pages, missing dbt descriptions, and competing definitions are problems. You need to know which ones are actually breaking answers in production.

Nodal turns evaluations, observability, and the production question corpus into a prioritized fix list, grounded in what actually broke an answer rather than what merely feels stale. Ablation tests on each context source show which ones are worth their maintenance cost; a sketch follows the report below.

Documentation Health — Last 30 Days
Questions answered        183
Confidence score 80+      134 (73%)
Required clarification     31 (17%)
Could not answer           18 (10%)
Top documentation gaps
  • dim_accounts.account_tier — no description, referenced in 12 questions
  • "enterprise" — 3 competing definitions across dashboards
  • fct_payments — no Confluence documentation, stale dbt description
Context source usage
  • 67% relied on dbt column descriptions
  • 23% used Confluence docs (40% of those pages > 1 year old)
  • Questions grounded in stale docs: 15% lower benchmark consistency
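
How an ablation pass over context sources could work. A hypothetical sketch; answer_question and the benchmark pairs stand in for whatever eval harness you already run.

    # Hypothetical ablation harness. Names are stand-ins, not a real API.
    def ablation_impact(answer_question, benchmark, sources):
        """Re-run the benchmark with each context source removed."""
        def accuracy(active):
            hits = sum(answer_question(q, active) == a for q, a in benchmark)
            return hits / len(benchmark)

        baseline = accuracy(sources)
        return {s: baseline - accuracy([x for x in sources if x != s])
                for s in sources}  # big drop = the source earns its keep

A source whose removal barely moves the score is maintenance cost without a reliability payoff.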
§ Footnote — the consensus problem

The problem nobody talks about: analytical consensus.

When every department asks its own data questions, every department gets its own version of the numbers. It's not a data quality problem — it's a consensus problem, and AI makes it worse.

Marketing's "active users" doesn't match Product's. Finance's revenue doesn't match Sales'. The eval suite is where consensus gets enforced: shared test cases, shared definitions, drift caught the moment someone redefines a metric.
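
What enforcement can look like in the suite itself. A minimal sketch; the registry shape and the canonical definitions are illustrative assumptions, not a prescribed format.

    # Illustrative drift check. Registry shape is assumed, not prescribed.
    CANONICAL = {
        "active_users": "distinct user_id with an event in the last 28 days",
        "revenue": "recognized revenue, net of refunds",
    }

    def check_definitions(team_definitions):
        """Fail the suite the moment a team redefines a shared metric."""
        drifted = {m: d for m, d in team_definitions.items()
                   if m in CANONICAL and d != CANONICAL[m]}
        assert not drifted, f"Definition drift: {drifted}"

Run it in CI against every department's semantic layer; a redefinition fails the build instead of quietly forking the numbers.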

More questions lead to more alignment, not fragmentation.

End of dispatch

Launch AI analytics on your stack — then see where trust breaks.

Request demo