How to get the most out of your Count trial

What to prove

A successful Count trial demonstrates:

  • Speed: Faster time from question to decision with collaboration & AI agents
  • Trust: Analysis with full context, explanation, and audit trails
  • Capability: Deeper exploration enabled by the compute layer
  • Governance: Safe self-service with proper controls

Minimal setup

  • Connect a data source (Athena, Synapse, BigQuery, Databricks, MySQL, Postgres, Redshift, Snowflake, SQL Server, Google Sheets)
  • Or just work with CSVs
  • Invite the core team (analyst + business owner + stakeholder)
    • Learn about workspace permissions ->
  • Optional: Set up Count metrics if you want to test governed, reusable metrics
    • Count Metrics ->
    • GitHub integration ->

Pick 1–2 use cases

Choose what matters most to your team, for example:

  • Exploratory analysis & storytelling: answer a real business question, document your thinking, and present findings from the same canvas.
  • Metric map/funnel: build a visual representation of how key metrics relate and roll up.
  • Governed metrics catalog: define reusable metrics and enable self-service for non-technical users.
  • Collaborative model review (for dbt users): reference dbt models in Count, annotate changes, and document decisions.
Whichever use case you pick, work it through a simple loop:

  • Identify: frame goals and assumptions on the canvas (stickies + context)
  • Explore: query live; snapshot infrequently; iterate locally with DuckDB; use parameters for scenarios
  • Decide: switch to Present mode, show steps/lineage; capture comments and approvals
  • Monitor: add scorecards and alerts to keep outcomes visible

End-of-trial deliverables

For a trial, prioritise one or two key deliverables: depth beats breadth. Focusing narrowly makes it easier to prove value, avoid scope creep, and make clear before/after comparisons.

These are some suggested end-of-trial deliverables:

  • Improvement Cycle canvas: a single live canvas that tells the story from problem → analysis → decision.
  • Ad-hoc analysis example: one real request answered end-to-end on a canvas (steps, assumptions, lineage) presented via Present mode.
  • Impact note: brief summary of time saved, iterations reduced, and warehouse compute avoided (via DuckDB).
  • Stakeholder + analyst feedback: quick pulse on clarity/usefulness and a “happy explorers” check (e.g., “Was this trial easier/faster than your normal workflow?”).