Context best practice

Count's agent combines and observes context at multiple levels. This system lets you build consistency and improve the reliability of results at an organizational level, while still adapting to the specific needs of individual projects or ad hoc queries.

| | Workspace | Catalog | Project |
| --- | --- | --- | --- |
| Governed by | Workspace owners and business leaders | Data team | Project leads and analysts |
| Rate of change | Quarterly or less | Often and iteratively | Throughout project life-cycle |
| Nature of knowledge | Organisational policy, brand, and regulation | Data structure and meaning | Analytical scope and intent |
| Reusability | Across all projects | Across projects using specific views and datasets | Within this project |

Additionally, agent queries can be given objects and data sources from the canvas itself as a final context layer. For example, questions from a kick-off meeting captured in stickies, or additional CSVs to consider alongside catalogs and database cells.

There is no fixed system for what context lives where. However, here is a simple framework to guide those decisions:

| | Workspace | Catalog | Project | Query |
| --- | --- | --- | --- | --- |
| Identity & Expression | Tone of voice, approved terminology, formality standards, naming standards, currency/date formats | Labels for dimensions/measures, physical-to-logical name mappings, unit specifications (e.g. "revenue is in USD cents") | Project-specific voice adjustments (e.g. "give it to me straight without interpretation") | Annotations |
| Business Foundations | Industry vertical, business model, company-wide KPIs, fiscal calendar, reporting periods | Business logic, known data quirks, temporal semantics (event time vs load time), SCD type, refresh schedules | Domain focus (marketing vs finance), project objectives, analysis windows | Specific hypothesis, current analytical thread |
| Semantic data model | | Canonical metric definitions, dimension hierarchies, entity relationships, calculated fields | Project-specific derived metrics, scope-limited definitions, "for this analysis X means Y" | Objects in view: what a specific chart shows, sticky note contents |
| Data Governance | Compliance frameworks (GDPR/HIPAA), approved sources, data retention policies, org-wide access principles | Sensitivity classifications (per table/column), PII flags, row-level security rules, data quality scores, freshness metadata | | What's currently selected, filtered, or in focus |
| Analytical Standards | Preferred statistical methods, significance thresholds | Default aggregations, null handling rules, grain definitions | Methodological choices for this project, assumptions log | |

Quality checklist

When writing context, consider:

  • Actionable? - Does this actually change what the agent does? You can test this within a query in side-by-side agents in the canvas.
  • Unambiguous? - Could two reasonable people interpret this differently?
  • Current? - Is this still true and verifiable?
  • Scoped correctly? - Would this be more effective at a different context level?
  • Minimal? - Is there a shorter articulation that doesn't sacrifice clarity?
  • Testable? - Can you prove a positive improvement in side-by-side testing?