
Success Metrics

What are success metrics?

A success metric is a specific, observable outcome that tells you whether a change worked. It connects a design or feature decision to a real-world signal: what will measurably improve if you solve the problem correctly?

Use it when: you're defining a project scope, writing a problem statement, or closing a design sprint — before you start building, so "done" is unambiguous.

Copy/paste template

Use one row per metric. Fill in all four columns before you start.

| Metric name | Definition | Target | Time window |
| --- | --- | --- | --- |
| [Metric name] | [What it measures and how it's tracked] | [The number or threshold that signals success] | [When you'll measure it] |

Example filled row:

| Checkout completion rate | % of users who reach order confirmation after clicking "Proceed to checkout" (tracked via analytics funnel) | Increase from 61% to 70% | 4 weeks post-launch |

Add 2–3 metrics per project. More than five usually means the goal isn't clear enough.
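The four-column template above can also be kept next to the code as a small data structure, which makes the "fill in all four columns" rule checkable. A minimal Python sketch — the class name and field names are illustrative choices, not part of the template:

```python
from dataclasses import dataclass, fields

@dataclass
class SuccessMetric:
    """One row of the template: all four columns are required."""
    name: str         # e.g. "Checkout completion rate"
    definition: str   # what it measures and how it's tracked
    target: str       # the number or threshold that signals success
    time_window: str  # when you'll measure it

    def is_complete(self) -> bool:
        """True only when every column has been filled in."""
        return all(getattr(self, f.name).strip() for f in fields(self))

# The example row from the table above:
checkout = SuccessMetric(
    name="Checkout completion rate",
    definition='% of users who reach order confirmation after clicking "Proceed to checkout"',
    target="Increase from 61% to 70%",
    time_window="4 weeks post-launch",
)
print(checkout.is_complete())  # True: every column is filled
```

A row with an empty column fails `is_complete()`, which is exactly the signal that the scope isn't clear enough yet.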

Why success metrics matter

  • Stop "did it work?" becoming a political argument. When you define the signal up front, the team evaluates evidence — not opinion.
  • Make trade-offs legible. If two designs test well against different metrics, you know which one actually moves the needle on what matters.
  • Enable faster iteration. Clear metrics mean you can run a focused A/B test or short experiment rather than a multi-month build.
  • Connect design to business outcomes. Stakeholders back projects that track impact in terms they recognise.

Metrics vs KPIs vs OKRs

These terms overlap but are used at different scopes:

| Term | Scope | Example |
| --- | --- | --- |
| Success metric | Single feature or project | Checkout completion rate increases from 61% to 70% within 4 weeks |
| KPI | Product or team | Monthly active users; NPS; support ticket volume |
| OKR | Company or team quarter | O: Reduce friction in checkout. KR: Increase completion rate by 10% |

A success metric is usually a KR (Key Result) zoomed into a single change. If you're measuring the impact of one feature, "success metric" is the right label.

What good success metrics include

Checklist

  • [ ] Observable: you can collect the data today (or know exactly how you'll instrument it).
  • [ ] Specific: it names the exact behaviour being measured, not a category.
  • [ ] Time-bound: it has a window (4 weeks, next quarter, 30 days post-launch).
  • [ ] Directional: it says whether higher or lower is better.
  • [ ] Realistic: the target is achievable, not aspirational in a way that makes failure inevitable.

Weak vs strong examples

Weak (feature: onboarding redesign):
"Improve new user experience."

This is not a metric — it's a goal statement. There's no number, no time window, and no way to know when it's achieved.

Strong:
"Increase the percentage of new users who complete at least one core action (create, import, or connect) within 24 hours of sign-up from 28% to 40%, measured over 6 weeks post-launch."


Weak (feature: redesigned pricing page):
"Better conversion."

Conversion of what? Compared to what baseline? Over what time window? "Better" is not a metric.

Strong:
"Increase pricing page → free trial sign-up conversion from 3.2% to 5.0%, tracked via UTM + goal completion in analytics, measured 30 days after launch."


Weak (feature: error message rewrite):
"Fewer support tickets."

No baseline, no target, no window.

Strong:
"Reduce support tickets tagged 'login errors' from an average of 40/week to 20/week within 8 weeks of the updated error messages going live."

Common pitfalls

  • "Better" is not a metric: vague adjectives hide what should actually change. → Do this instead: name the specific behaviour, the current baseline, and the target number.
  • No baseline: you can't measure improvement without knowing where you started. → Do this instead: check analytics or run a baseline measurement before you build.
  • Measuring activity, not outcome: "users click the new button" is activity. "Users complete onboarding" is outcome. → Do this instead: always ask: what's the result, not just the action?
  • Too many metrics: tracking 10 metrics means you're tracking nothing clearly. → Do this instead: pick the 2–3 that most directly reflect the change you're making.
  • Metrics defined after the fact: post-hoc metric selection invites cherry-picking. → Do this instead: document metrics before any code ships, ideally in the problem statement.

How to measure (implementation)

  1. Define the event: what user action signals success? (e.g. "reaches order confirmation screen")
  2. Instrument before launch: add the tracking event in your analytics tool (PostHog, Mixpanel, GA4) before the change ships.
  3. Set a baseline: capture the current rate for at least 1–2 weeks before launch.
  4. Choose a comparison method: A/B test (parallel), before/after (sequential), or segment comparison.
  5. Document the window: log when the change shipped, and when you'll evaluate results.
  6. Share the outcome: write up what moved, what didn't, and what the team will do next.
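The comparison step (A/B or before/after) can be sketched as a two-proportion z-test: did the rate move more than chance would explain? A minimal stdlib-only Python sketch, using the 61% → 70% checkout example — the session counts are illustrative assumptions, not real data:

```python
import math

def completion_rate(completed: int, started: int) -> float:
    """Fraction of users who reached the success event."""
    return completed / started

def two_proportion_z(p1: float, n1: int, p2: float, n2: int):
    """Two-proportion z-test: is the difference between rates beyond noise?"""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: baseline 61% of 5,000 sessions,
# post-launch 70% of 5,200 sessions.
baseline = completion_rate(3050, 5000)   # 0.61
post = completion_rate(3640, 5200)       # 0.70
z, p = two_proportion_z(baseline, 5000, post, 5200)
print(f"baseline={baseline:.2%} post={post:.2%} z={z:.2f} p={p:.4f}")
```

With samples this size a 9-point lift is far outside noise (a large z, a tiny p-value); with a few hundred sessions the same lift might not be, which is why step 3's baseline window matters.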
Related distinctions

  • Success metric vs hypothesis: a hypothesis is a testable prediction ("We believe that simplifying checkout will increase completion rate"). The success metric is what you measure to confirm or reject that prediction.
  • Success metric vs KPI: a KPI is a standing measure for a product or team; a success metric is tied to a specific change or experiment.
  • Success metric vs acceptance criteria: acceptance criteria define whether a feature is built correctly; success metrics define whether it works in the real world.
Related topics

  • Problem statement – the place to define your success signal before the team starts building.
  • Design sprint – sprints end with user testing; success metrics tell you what to watch in the longer post-launch window.
  • A/B testing – a method for measuring whether your metric moved in the right direction.
  • Continuous discovery – ongoing learning needs ongoing metrics to stay grounded.
  • Feature prioritisation – projects with clear success metrics are easier to prioritise and deprioritise.
  • Hypothesis testing – hypothesis + success metric = the minimal testable unit of product thinking.

Next step

Take the next feature or redesign you're planning. Write its success metric using the template above before anyone opens Figma. If you can't fill in all four columns (name, definition, target, time window), the scope isn't clear yet — read Problem statement to sharpen it first.