Telemetry
What is telemetry?
Telemetry is the automated collection of data from your product: how users behave (clicks, flows, events), how the system performs (latency, errors), and optionally business outcomes (conversions, revenue). You use it to inform experimentation, A/B testing, continuous discovery, and product decisions.
Use it when: you need evidence for what’s actually happening in the product (not just what users say). Telemetry supports feedback loops, release habits, and problem statements grounded in data.
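To make “events” concrete, here is a minimal sketch of what a single telemetry event might look like. The field names and shape are illustrative assumptions, not a standard:

```typescript
// Illustrative shape of one telemetry event (not a standard).
interface TelemetryEvent {
  name: string;          // e.g. "signup_completed"; consistent naming matters
  timestamp: string;     // ISO 8601, set when the event fires
  anonymousId: string;   // random ID, never a name or email (no PII)
  properties?: Record<string, string | number | boolean>;
}

const example: TelemetryEvent = {
  name: "project_created",
  timestamp: new Date().toISOString(),
  anonymousId: crypto.randomUUID(),
  properties: { source: "dashboard" },
};
```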
Copy/paste checklist (what to instrument)
- [ ] Key events – Sign-up, sign-in, core actions (e.g. “project created”, “invoice sent”), and outcomes you care about (see the event-catalog sketch after this list).
- [ ] Funnels – Steps in critical flows (e.g. sign-up steps, checkout) so you can see drop-off.
- [ ] Errors – Client and server errors (anonymised, no PII in logs) so you can fix and monitor.
- [ ] Performance – Core Web Vitals or equivalent (e.g. LCP, CLS, and INP, which replaced FID) so you know when the product is slow or broken.
- [ ] Privacy and consent – What you collect, why, and how users can opt out; comply with GDPR/privacy policy. No sensitive data without consent.
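As referenced in the key-events item above, here is a minimal sketch of purpose-led instrumentation: a small named event catalog, a consent check, and one `track()` helper. The `/telemetry` endpoint and the consent storage are assumptions about your setup, not a prescribed design:

```typescript
// A small, purpose-led event catalog plus one track() helper.
// "/telemetry" and the consent check are assumptions about your setup.
const EVENTS = [
  "signup_started",
  "signup_completed",
  "project_created",
  "invoice_sent",
] as const;

type EventName = (typeof EVENTS)[number];

function hasConsent(): boolean {
  // Placeholder: read from your real consent-management state.
  return localStorage.getItem("telemetry_consent") === "granted";
}

function track(name: EventName, properties: Record<string, string | number> = {}): void {
  if (!hasConsent()) return; // respect opt-out before anything leaves the browser
  // sendBeacon survives page unloads, so events are not lost on navigation
  navigator.sendBeacon("/telemetry", JSON.stringify({ name, properties, ts: Date.now() }));
}

track("signup_started", { plan: "free" });
```

Typing `name` as a union of the catalog entries means a renamed or deleted event fails at compile time rather than silently vanishing from your data.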
Why telemetry matters
- Informs A/B tests and experimentation with real behaviour and outcomes.
- Surfaces problems (errors, drop-off, performance) before they become crises.
- Supports continuous discovery and feedback loops so you learn from usage.
- Gives problem statements and prioritisation a data backbone.
What good telemetry includes
Checklist
- [ ] Purpose-led – You collect data that answers product questions (e.g. “Do people complete onboarding?” “Where do they drop off?”). Avoid “collect everything.”
- [ ] Key events defined – A short list of events that matter for experimentation and discovery; consistent naming.
- [ ] Privacy-respectful – Minimal PII; consent where required; documented in privacy policy.
- [ ] Reliable – Instrumentation is tested; events fire when they should; you don’t lose data silently (see the test sketch after this list).
- [ ] Used – Data feeds dashboards, A/B tests, or reviews; not stored and forgotten.
- [ ] Errors and performance – You know when the product or a flow is broken or slow.
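A minimal sketch of the “Reliable” item: in tests, swap the real transport for a capture function, trigger the flow, and assert the event fired. The track/transport split is an assumed setup, not a prescribed one:

```typescript
// Capture events in a test instead of sending them, then assert they fire.
type Sent = { name: string; properties?: Record<string, unknown> };

let transport: (e: Sent) => void = () => {}; // real sender in production
const track = (name: string, properties?: Record<string, unknown>) =>
  transport({ name, properties });

// Test setup: record events locally instead of hitting the network.
const captured: Sent[] = [];
transport = (e) => captured.push(e);

// Code path under test (simplified).
function completeSignup(): void {
  track("signup_completed", { method: "email" });
}

completeSignup();
console.assert(
  captured.some((e) => e.name === "signup_completed"),
  "signup_completed should fire when sign-up finishes"
);
```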
Common formats
- Event-based: Each meaningful action is an event (e.g. signup_started, checkout_completed). Use for funnels and A/B testing.
- Session/aggregate: Page views, session length, bounce. Use for high-level engagement.
- Performance: Load times, errors, Core Web Vitals. Use for stability and release habits.
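For the performance format, a short sketch using the open-source `web-vitals` package (its `onLCP`, `onCLS`, and `onINP` entry points); the `/telemetry` endpoint is an assumption:

```typescript
// Report Core Web Vitals with the web-vitals package.
// npm install web-vitals; "/telemetry" is an assumed endpoint.
import { onLCP, onCLS, onINP } from "web-vitals";

function report(metric: { name: string; value: number; id: string }): void {
  navigator.sendBeacon(
    "/telemetry",
    JSON.stringify({ name: `web_vital_${metric.name}`, value: metric.value, id: metric.id })
  );
}

onLCP(report); // Largest Contentful Paint
onCLS(report); // Cumulative Layout Shift
onINP(report); // Interaction to Next Paint (INP replaced FID)
```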
Examples
Example (the realistic one)
- Events: signup_started, signup_step_2_completed, signup_completed, first_project_created.
- Funnel: Sign-up flow; you see 60% complete step 2 and 40% complete sign-up. You form a problem statement (“Users drop at step 2”) and run user research or usability testing to understand why.
- Errors: Front-end errors logged with message and stack trace (no PII); you alert on spikes.
- Privacy: No names or emails in event payloads; IP addresses anonymised; consent banner where required.
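A sketch of how the funnel numbers above could be computed from raw events; the event-log shape and the `funnelConversion` helper are illustrative assumptions:

```typescript
// Count unique users reaching each funnel step and express drop-off
// as a percentage of those who entered. The log shape is an assumption.
type LoggedEvent = { name: string; anonymousId: string };

const FUNNEL = ["signup_started", "signup_step_2_completed", "signup_completed"];

function funnelConversion(events: LoggedEvent[]) {
  const reached = FUNNEL.map(
    (step) => new Set(events.filter((e) => e.name === step).map((e) => e.anonymousId))
  );
  const entered = reached[0].size || 1; // avoid divide-by-zero on empty logs
  return FUNNEL.map((step, i) => ({
    step,
    users: reached[i].size,
    pct: Math.round((reached[i].size / entered) * 100),
  }));
}

// With 10 users starting, 6 reaching step 2 and 4 completing,
// this returns pct values 100, 60, 40 – the numbers in the example above.
```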
Common pitfalls
- Collecting everything: No clear question; data swamp. → Do this instead: Define the 10–20 events that support experimentation and discovery; add more only when you have a question.
- No key events: You have page views but not “completed sign-up” or “first value.” → Do this instead: Instrument outcomes (e.g. activation, conversion) so you can measure impact.
- Ignoring privacy: Collecting PII or not documenting. → Do this instead: Minimise PII; document what you collect and why; follow privacy policy and consent.
- Data never used: Logs and dashboards exist but nobody looks. → Do this instead: Tie telemetry to feedback loops and release habits; review funnels and errors regularly.
- Brittle instrumentation: Events break after a refactor and nobody notices. → Do this instead: Treat instrumentation as part of the product; test critical events; monitor for gaps.
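One way to “monitor for gaps”, sketched under the assumption that you can query daily event counts from your analytics store: flag critical events whose volume collapses relative to a baseline.

```typescript
// Flag critical events whose daily volume falls far below baseline –
// a typical symptom of instrumentation broken by a refactor.
const CRITICAL_EVENTS = ["signup_completed", "project_created"];

function findGaps(
  todayCounts: Record<string, number>,
  baselineCounts: Record<string, number>,
  threshold = 0.5 // alert below 50% of baseline volume
): string[] {
  return CRITICAL_EVENTS.filter((name) => {
    const today = todayCounts[name] ?? 0;
    const baseline = baselineCounts[name] ?? 0;
    return baseline > 0 && today < baseline * threshold;
  });
}

console.log(
  findGaps(
    { signup_completed: 3, project_created: 120 },
    { signup_completed: 80, project_created: 110 }
  )
); // ["signup_completed"] – likely a broken event, not a real usage drop
```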
Telemetry vs. related concepts
- Telemetry vs A/B testing: A/B testing uses telemetry (events, metrics) to measure variants; telemetry is the data layer that makes experiments measurable.
- Telemetry vs user research: User research explains why; telemetry shows what and how much. Use both: data for “where” and “how many,” research for “why.”
- Telemetry vs analytics: “Analytics” often means the tools and reports; telemetry is the data you collect and send. They overlap; telemetry is the input, analytics the use of it.
Related terms
- A/B testing – needs telemetry to measure outcomes.
- Experimentation – telemetry informs hypotheses and results.
- Continuous discovery – data from telemetry feeds discovery.
- Feedback loop – telemetry is one source of feedback.
- Release habits – monitor errors and performance after release.
- Problem statement – often grounded in telemetry (e.g. drop-off, errors).
- Feature prioritisation – usage and funnels inform what to build next.
Next step
List the 5–10 events that would answer your current product questions (e.g. “Do people reach first value?” “Where do they drop?”). Instrument those events if they’re missing, and add one funnel or dashboard that you’ll review regularly. Ensure privacy and consent are documented. Read Experimentation to connect telemetry to hypothesis testing.