How to get the most out of your Count trial
What to prove
A successful Count trial demonstrates:
- Speed: Faster time from question to decision with collaboration & AI agents
- Trust: Analysis with full context, explanation, and audit trails
- Capability: Deeper exploration enabled by the compute layer
- Governance: Safe self-service with proper controls
Minimal setup
- Connect a data source (Athena, Synapse, BigQuery, Databricks, MySQL, Postgres, Redshift, Snowflake, SQL Server, Google Sheets)
- Or just work with CSVs
- Invite the core team (analyst + business owner + stakeholder)
  - Learn about workspace permissions ->
- Optional: set up Count Metrics if you want to test governed, reusable metrics
  - Count Metrics ->
  - GitHub integration ->
- Start asking the agent!
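If you take the CSV route above, it can help to sanity-check the file locally before uploading it. Here is a minimal sketch using only Python's standard library; the file contents and column names are invented for illustration:

```python
import csv
import io

# Hypothetical CSV contents standing in for a file you would upload to Count.
raw = """date,channel,revenue
2024-01-01,web,120.0
2024-01-01,store,80.0
2024-01-02,web,200.0
"""

# Quick sanity check: do the columns parse, and do the totals look plausible?
rows = list(csv.DictReader(io.StringIO(raw)))
by_channel = {}
for r in rows:
    by_channel[r["channel"]] = by_channel.get(r["channel"], 0.0) + float(r["revenue"])
print(by_channel)
```

Once the file looks sane, drag it into a canvas and let the agent take it from there.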
Some useful agent examples:
The Count AI Agent works best as a copilot. It speeds things up, but you’re still in control. Here are a few short examples:
- Turning vague concerns into structured analysis 🧐
  Start with something messy like “sales is up but margins feel worse… why?” The agent breaks the problem into clear drivers, starts the investigation, and gives you a structured summary to build on. Watch the video here.
- Making sense of spaghetti SQL 🍝
  You open an old query. It’s yours. It’s a mess! Paste it into a canvas and ask for a step-by-step breakdown. The agent splits each CTE out, explains what it’s doing with your data, shows the output at each stage, and flags anything worth a closer look. That's better. Watch the video here.
- Churn prediction model 📉
  The agent is great with open-ended questions when you’re not quite sure what you want (e.g. “help us understand churn risk”). In this example, Ollie also shows how easy it is to add context in the canvas when you do have a clear direction in mind. Watch the video here.
- Chaining agents for investigation 🖇️
  Don’t worry about crafting one GIANT prompt. Take smaller steps. Let one agent highlight an area worth investigating, then use another to dig deeper. Watch the video here.
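The spaghetti-SQL workflow above (splitting each CTE into its own step and inspecting its output) can be mimicked by hand, which is a useful way to see why it helps. Here is a rough sketch using Python's built-in sqlite3; the table, data, and query are all hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
INSERT INTO orders VALUES (1,'EU',100),(2,'EU',250),(3,'US',400);
""")

# A small "spaghetti" query broken into stages: the CTE on its own,
# then the full query that consumes it.
stages = {
    "regional_totals": "SELECT region, SUM(amount) AS total FROM orders GROUP BY region",
    "final": """
        WITH regional_totals AS (
            SELECT region, SUM(amount) AS total FROM orders GROUP BY region
        )
        SELECT MAX(total) FROM regional_totals
    """,
}

# Inspect each stage separately, like splitting CTEs into canvas cells.
for name, sql in stages.items():
    print(name, con.execute(sql).fetchall())
```

In Count, the agent does this splitting for you and renders each stage as its own cell; the sketch just shows the underlying idea.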
Pick 1–2 use cases to focus on
Instead of rebuilding dashboards or migrating BI logic, we suggest focusing on the areas where Count delivers transformational impact: analyst efficiency, scalable governed compute, organisational clarity and safe AI-powered self-serve.
Choose what matters most to your team, for example:
- Exploratory analysis & storytelling: answer a real business question, document your thinking, present findings from the same canvas.
- Metric map/funnel: build a visual representation of how key metrics relate and roll up.
- Governed metrics catalog: define reusable metrics, enable self-service for non-technical users.
- Collaborative model review (for dbt users): reference dbt models in Count, annotate changes, document decisions.
End-of-trial deliverables
For a trial, prioritise one or two key deliverables: depth beats breadth. Focusing narrowly makes it easier to prove value, avoid scope creep, and make clear before/after comparisons.
These are some suggested end-of-trial deliverables:
- Improvement Cycle canvas: a single live canvas that tells the story from problem → analysis → decision.
- Ad-hoc analysis example: one real request answered end-to-end on a canvas (steps, assumptions, lineage) presented via Present mode.
- Impact note: brief summary of time saved, iterations reduced, and warehouse compute avoided (via DuckDB).
- Stakeholder + analyst feedback: quick pulse on clarity/usefulness and a “happy explorers” check (e.g., “Was this trial easier/faster than your normal workflow?”).
Remember, the agent is there to help you speed up your workflows throughout!