How to get the most out of your Count trial
What to prove
- Faster time-to-decision with real-time collaboration and Present mode
- Clearer storytelling and buy-in (context, lineage, comments)
- Lower exploratory compute (query the warehouse once, then iterate locally in DuckDB)
- Reusable, widely available logic via governed metrics
Minimal setup
- Connect a data source (Athena, Synapse, BigQuery, Databricks, MySQL, Postgres, Redshift, Snowflake, SQL Server)
- Or just work with CSVs
- Invite the core team (analyst + business owner + stakeholder)
Pick 1–2 use cases
- Exploratory analysis & story: answer a real question; present from the same canvas
- Metric map / funnel: align on goals and how they roll up
- Mini catalog: define a handful of reusable metrics/datasets and use them in a report
- Collaborative model review: reference dbt models, annotate changes, share decisions
How to work (Improvement Cycle)
- Identify: frame goals and assumptions on the canvas (stickies + context)
- Explore: query live; snapshot infrequently; iterate locally with DuckDB; use parameters for scenarios
- Decide: switch to Present mode, show steps/lineage; capture comments and approvals
- Monitor: add scorecards and alerts to keep outcomes visible
End-of-trial deliverables
For a trial, prioritise one or two key deliverables: depth beats breadth. A narrow focus makes it easier to prove value, avoid scope creep, and produce clear before/after comparisons.
These are some suggested end-of-trial deliverables:
- Improvement Cycle canvas: a single live canvas that walks through problem → analysis → decision.
- Ad-hoc analysis example: one real request answered end-to-end on a canvas (steps, assumptions, lineage) presented via Present mode.
- Impact note: brief summary of time saved, iterations reduced, and warehouse compute avoided (via DuckDB).
- Stakeholder + analyst feedback: quick pulse on clarity/usefulness and a “happy explorers” check (e.g., “Was this trial easier/faster than your normal workflow?”).