# Context best practice
Count's agent combines and observes context at multiple levels. This lets you build consistency and improve the reliability of results at the organizational level, while still adapting to the specific needs of individual projects or ad hoc queries.
| | Workspace | Catalog | Project |
|---|---|---|---|
| Governed by | Workspace owners and business leaders | Data team | Project leads and analysts |
| Rate of change | Quarterly or less | Often and iteratively | Throughout project life-cycle |
| Nature of knowledge | Organisational policy, brand, and regulation | Data structure and meaning | Analytical scope and intent |
| Reusability | Across all projects | Across projects using specific views and datasets | Within this project |
Additionally, agent queries can be given objects and data sources from the canvas itself as a final context layer: for example, questions from a kick-off meeting captured in sticky notes, or additional CSVs to consider alongside catalogs and database cells.
There is no fixed rule for where context should live. However, the following framework can simplify the decision:
| | Workspace | Catalog | Project | Query |
|---|---|---|---|---|
| Identity & Expression | Tone of voice, approved terminology, formality standards, naming standards, currency/date formats | Labels for dimensions/measures, physical-to-logical name mappings, unit specifications (e.g., "revenue is in USD cents") | Project-specific voice adjustments (e.g. "give it to me straight without interpretation") | Annotations |
| Business Foundations | Industry vertical, business model, company-wide KPIs, fiscal calendar, reporting periods | Business logic, known data quirks, temporal semantics (event time vs load time), SCD type, refresh schedules | Domain focus (marketing vs finance), project objectives, analysis windows | Specific hypothesis, current analytical thread |
| Semantic data model | | Canonical metric definitions, dimension hierarchies, entity relationships, calculated fields | Project-specific derived metrics, scope-limited definitions, "for this analysis X means Y" | Objects in view: what a specific chart shows, sticky note contents |
| Data Governance | Compliance frameworks (GDPR/HIPAA), approved sources, data retention policies, org-wide access principles | Sensitivity classifications (per table/column), PII flags, row-level security rules, data quality scores, freshness metadata | | What's currently selected, filtered, or in focus |
| Analytical Standards | Preferred statistical methods, significance thresholds | Default aggregations, null handling rules, grain definitions | Methodological choices for this project, assumptions log | |
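To make the framework concrete, here is a hypothetical example of how the same revenue domain might be described at each level. All names, values, and dates below are illustrative, not drawn from any real workspace:

```text
# Workspace
All revenue figures are reported in USD. The fiscal year starts 1 February.

# Catalog (orders dataset)
`revenue_cents` is stored in USD cents; divide by 100 before reporting.
`created_at` is event time in UTC, not load time.

# Project
For this churn analysis, "active customer" means at least one order in the
trailing 90 days.

# Query
Focus on the cohort chart currently selected; ignore test accounts.
```

Note how each line becomes more specific and shorter-lived as it moves down the levels, matching the governance and rate-of-change split in the first table.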
## Quality checklist
When writing context, consider:
- **Actionable?** Does this actually change what the agent does? You can test this with side-by-side agents in a canvas query.
- **Unambiguous?** Could two reasonable people interpret this differently?
- **Current?** Is this still true, and can you verify it?
- **Scoped correctly?** Would this be more effective and useful at a different context level?
- **Minimal?** Is there a shorter articulation that doesn't sacrifice clarity?
- **Testable?** Can you demonstrate an improvement in side-by-side testing?
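The checklist is easiest to apply as a before/after exercise on a single context entry. A hypothetical example (the column names are illustrative):

```text
# Before: vague, untestable
Be careful with dates.

# After: actionable, unambiguous, scoped to the catalog level
`created_at` is event time in UTC; `loaded_at` is load time. Use
`created_at` for all user-facing reporting.
```

The rewritten entry changes agent behavior in a way you can verify side by side, leaves no room for interpretation, and lives at the level (the catalog) where it applies to every project using that dataset.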