AI analytics tools generate SQL and return numbers. They do it fast and they do it confidently — even when the question was ambiguous, the definition was wrong, or the time window was assumed.
The missing piece isn't a better model. It's continuous improvement, driven by observability and benchmark testing.
When someone asks a vague question like "what's our retention?", Nodal reframes it before doing anything — filling in defaults from your actual documentation. The user sees exactly what was assumed and can change any of it before the query runs.
You approve the interpretation, not the SQL. The answer comes with a confidence score so you know when to act and when to check with the data team.
User: What's our retention?
Nodal: What's the [30-day] retention rate [for all users] [in the last 90 days] [compared with the previous 90-day period]?
Defaults pulled from your documentation. Change any [bracket] before I run it. Should I run this, or would you like to change any of the defaults?
User: Change to enterprise users only. Run it.
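The exchange above can be sketched roughly as follows. Everything here is illustrative, not Nodal's actual implementation: the class, the template, and the defaults are hypothetical, and in practice the defaults would be pulled from your documentation rather than hard-coded.

```python
from dataclasses import dataclass


@dataclass
class Interpretation:
    """An ambiguous question reframed with explicit, overridable defaults."""
    template: str
    assumptions: dict

    def reframed(self) -> str:
        # Wrap each assumed default in [brackets] so the user sees exactly
        # what was filled in before any SQL runs.
        return self.template.format(
            **{k: f"[{v}]" for k, v in self.assumptions.items()}
        )

    def override(self, key: str, value: str) -> None:
        assert key in self.assumptions, f"unknown assumption: {key}"
        self.assumptions[key] = value


# Defaults hard-coded for illustration only.
interp = Interpretation(
    template="What's the {window} retention rate {population} {period} {comparison}?",
    assumptions={
        "window": "30-day",
        "population": "for all users",
        "period": "in the last 90 days",
        "comparison": "compared with the previous 90-day period",
    },
)
print(interp.reframed())
interp.override("population", "for enterprise users only")
print(interp.reframed())
```

The point of the design is that the user edits the interpretation, not the SQL: every bracketed value is an assumption the user can swap before anything executes.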
Every question asked through Nodal becomes visible — including the ones that would have gone unasked. Data teams see which questions resolve cleanly, which hit documentation gaps, and where competing definitions create inconsistent answers across dashboards.
Nodal turns the question corpus into an actionable signal: which parts of your data are well-documented and which are silently confusing people.
Nodal continuously benchmarks its own answers. When a schema migration, dbt model change, or documentation update causes answers to drift, Nodal detects it and tells you which questions were affected and why.
This is evaluation as a product — not an internal engineering concern. Your data team gets a clear signal: what changed, what broke, and what to fix.
Alert: Since your last dbt update, accuracy on customer segmentation questions dropped from 87% → 61%.
Affected: The 3 questions most affected all involve the "enterprise" account definition.
Root cause: dim_accounts.account_tier definition changed in commit a3f8c2d (April 7, 2026).
Suggested fix: Update the "enterprise" entity definition in business-context to match the new account_tier values.
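The core of that detection can be sketched as a simple before/after comparison over a benchmark suite. The function name, threshold, and data below are illustrative assumptions, not Nodal's actual mechanism; per-question results are modeled as 1 (answer matched the benchmark) or 0 (it drifted).

```python
def detect_drift(baseline: dict, current: dict, threshold: float = 0.1):
    """Return question groups whose accuracy dropped by more than `threshold`."""
    drifted = []
    for group, base_results in baseline.items():
        base = sum(base_results) / len(base_results)
        cur_results = current[group]
        cur = sum(cur_results) / len(cur_results)
        if base - cur > threshold:
            drifted.append((group, base, cur))
    return drifted


# Illustrative benchmark results before and after a dbt model change.
baseline = {
    "customer segmentation": [1, 1, 1, 1, 1, 1, 1, 0],
    "retention":             [1, 1, 1, 1, 1, 1, 1, 1],
}
after_dbt_change = {
    "customer segmentation": [1, 0, 1, 0, 1, 1, 0, 1],
    "retention":             [1, 1, 1, 1, 1, 1, 1, 1],
}

for group, before, after in detect_drift(baseline, after_dbt_change):
    print(f"{group}: {before:.1%} -> {after:.1%}")
```

Rerunning the same benchmark after every schema migration, dbt change, or documentation update is what turns evaluation from a one-off QA exercise into a continuous signal.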
Claude is the analytical engine — it generates SQL, reasons about data, and executes analytical workflows. Nodal is the trust layer around it: the context, disambiguation, confidence scoring, and evaluation system that makes the output reliable for business decisions.
If your team doesn't have dbt, a documented warehouse, or a semantic layer yet — that's a common starting point. We help you connect your warehouse, set up dbt with best practices, establish metric definitions, and build the documentation layer that makes AI analytics reliable.
It's hands-on work that gets you production-ready for Nodal.