Today's leading AI agents already work for analytics. You don't need a specialized AI analyst product — you need the context and trust layer around the agent.
Non-technical users don't ask fully-specified questions. Nodal makes the gaps visible before SQL runs — so widening access doesn't scale plausible-but-wrong answers. The move from demo to internal deployment runs through the business user; the system has to be usable in the flow of work, not just technically correct.
User: How does length of stay compare across facilities in the Northeast?
Nodal: What is the [mean inpatient days] per facility for [all facility types] in [the Northeast region] over [trailing 12 months, assumed]?
Nodal: Defaults pulled from your documentation. Should I run this, or would you like to change any of the [bracketed] defaults first?
User: Narrow to acute care only. Run it.
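Mechanically, a clarification like this is just documented defaults merged with user overrides before any SQL is rendered. A minimal sketch, assuming a hypothetical `build_query` helper and a `fct_inpatient_stays` table; none of these names are Nodal's:

```python
# Minimal sketch: documented defaults merged with user overrides, then
# rendered as SQL. All names here (DEFAULTS, build_query,
# fct_inpatient_stays) are hypothetical, not Nodal's API.

DEFAULTS = {
    "facility_type": "all facility types",  # no filter unless the user narrows
    "region": "Northeast",                  # stated in the question
    "months_back": 12,                      # assumed; flagged to the user
}

def build_query(overrides: dict | None = None) -> str:
    """Merge user overrides into documented defaults, then render SQL."""
    p = {**DEFAULTS, **(overrides or {})}
    type_filter = (
        "" if p["facility_type"] == "all facility types"
        else f"  AND facility_type = '{p['facility_type']}'\n"
    )
    return (
        "SELECT facility_id, AVG(inpatient_days) AS mean_inpatient_days\n"
        "FROM fct_inpatient_stays\n"
        f"WHERE region = '{p['region']}'\n"
        f"  AND discharge_date >= DATEADD(month, -{p['months_back']}, CURRENT_DATE)\n"
        f"{type_filter}"
        "GROUP BY facility_id;"
    )

# "Narrow to acute care only. Run it." maps to a single override:
print(build_query({"facility_type": "acute care"}))
```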
The same discipline software engineers apply to code, applied to AI analytics. Every dbt commit, doc edit, prompt change, or model swap triggers a re-run. Drift gets attributed to the specific change that caused it. Accuracy and cost-benefit get measured for each piece of the system, not assumed.
Trigger: dbt model change (commit a3f8c2d)
Affected: dim_patients → enrollment_status
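One way to implement that loop: a pinned suite of question-and-answer cases that re-executes on every tracked change, with any drift reported against the triggering commit. A sketch under stated assumptions; `EvalCase`, `run_agent`, and the suite contents are hypothetical illustrations, not Nodal's implementation:

```python
# Regression-eval sketch: re-run a fixed question suite after a change
# and attribute any drift to the triggering commit. Everything here
# (EvalCase, run_agent, the suite itself) is hypothetical.
from dataclasses import dataclass

@dataclass
class EvalCase:
    question: str
    expected: float  # known-good answer pinned from a prior run

def run_agent(question: str) -> float:
    """Stand-in for the agent: question in, numeric answer out."""
    raise NotImplementedError("wire this to the real agent")

SUITE = [
    EvalCase("Mean inpatient days per facility, Northeast, trailing 12 months", 6.4),
    EvalCase("Enrollment count by status, current month", 1204.0),
]

def run_suite(trigger: str) -> None:
    """Re-run every case; report drift against the pinned answers."""
    for case in SUITE:
        got = run_agent(case.question)
        if abs(got - case.expected) > 1e-6:
            print(f"DRIFT [{trigger}] {case.question!r}: "
                  f"expected {case.expected}, got {got}")

# Called by CI on every dbt commit, doc edit, prompt change, or model swap:
# run_suite("dbt model change (commit a3f8c2d)")
```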
Four context layers decide whether self-service AI analytics works inside a real organization. Best practice for each is below; a configuration sketch follows the list.
Warehouse: Snowflake, BigQuery, or Redshift, with a dbt project sitting on top. This is the system of record the agent queries against.
Lineage: column-level flow from raw tables through transformations to dashboards, so the agent knows where every number actually comes from.
Code: the dbt project, DAG pipelines (Airflow, Dagster), and the scripts repo. The queries your team has already written are the ground truth.
Semantics: what metrics actually mean. This is the piece most teams skip, and the one that decides whether the rest works.
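Concretely, the four layers can be pictured as one context manifest the agent loads before it answers anything. The shape and field names below are illustrative assumptions, not Nodal's format:

```python
# Hypothetical context manifest wiring the four layers together before
# the agent answers anything. Structure and field names are illustrative.
CONTEXT = {
    "warehouse": {
        "kind": "snowflake",            # or bigquery / redshift
        "dbt_project": "analytics/",    # system of record the agent queries
    },
    "lineage": {
        # column-level edges: raw tables -> transformations -> dashboards
        "source": "column-level lineage export",
    },
    "code": {
        # existing queries are the ground truth for how metrics get computed
        "repos": ["dbt project", "airflow/dags", "scripts/"],
    },
    "semantics": {
        # the layer most teams skip: what each metric actually means
        "metrics": {
            "mean_inpatient_days": "AVG(inpatient_days) per facility, "
                                   "trailing 12 months by default",
        },
    },
}
```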