
Usability Testing

What is usability testing?

Usability testing is a research method where you observe real users trying to complete tasks with your product or prototype. You watch where they struggle, get stuck, or misunderstand, and use that to fix problems before or after launch.

Use it when: you have something people can interact with (prototype or live product) and you need to learn whether they can complete key tasks and where the experience breaks down.

Copy/paste checklist (run a round)

  • [ ] Objective: What do we need to learn? (e.g. “Can users complete checkout?”)
  • [ ] Tasks: 3–5 realistic tasks that map to that objective.
  • [ ] Participants: 5–8 people who match your target users (recruit and screen).
  • [ ] Script: Brief intro, think-aloud reminder, task list, post-task questions.
  • [ ] Artefact: Summary of issues (severity + frequency) and 3–5 recommendations.

Why usability testing matters

  • Surfaces real problems that analytics and opinions miss.
  • Catches issues early when they’re cheap to fix.
  • Gives the team shared evidence and reduces “I think” debates.
  • Improves completion rates and satisfaction by removing friction.

What good usability testing includes

Checklist

  • [ ] Clear tasks – participants know what to do but not how; no leading.
  • [ ] Think-aloud – participants say what they’re thinking so you see reasoning, not just clicks.
  • [ ] Representative users – people who could actually use the product.
  • [ ] Documented findings – issues plus severity and recommended fixes.
  • [ ] A deliberate format choice – moderated for depth and follow-up questions; unmoderated for scale and lower cost.
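The "documented findings" item above is easier to act on if you rank issues consistently. A minimal sketch of one common approach, scoring each issue by severity times frequency; the issue names, scales, and weighting here are illustrative assumptions, not a standard:

```python
# Minimal sketch: prioritise usability issues by severity x frequency.
# Issue names, the 1-3 severity scale, and the scoring rule are illustrative.

issues = [
    # (issue, severity 1-3 where 3 = blocks the task, participants affected out of 6)
    ("Sign-up CTA not found on homepage", 3, 5),
    ("Form error message unclear", 2, 4),
    ("Password rules shown only after failure", 2, 2),
    ("Logo link expected to go home", 1, 1),
]

# Simple priority score: severity weighted by how many participants hit the issue.
ranked = sorted(issues, key=lambda i: i[1] * i[2], reverse=True)

for name, severity, frequency in ranked:
    print(f"severity {severity}, seen by {frequency}/6: {name}")
```

Any scheme works as long as the team applies it the same way every round; the point is a shared, defensible ordering rather than a precise number.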

Common formats

  • Moderated: facilitator guides the session, can probe. Best for discovery and complex flows.
  • Unmoderated: participants complete tasks on their own; you review recordings. Best for volume and simple tasks.
  • Remote vs lab: remote is common and flexible; lab gives more control and equipment (e.g. eye-tracking) when needed.

Example

  • Objective: Can new users sign up and complete their first action?
  • Tasks: (1) Find the sign-up option, (2) Create an account, (3) Complete the one “first action” we’ve defined.
  • Participants: 6 people who’ve never used the product and match our primary segment.
  • Output: 2–3 critical issues (e.g. “CTA not found”, “form error unclear”) with severity and recommended changes.
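Alongside the issue list, a per-task completion tally gives the output a quick quantitative summary. A sketch of that tally for the example round above; the task keys and pass/fail results are made-up illustration data:

```python
# Minimal sketch: tally per-task completion across sessions.
# Task keys and True/False outcomes below are illustrative sample data.

sessions = [
    {"find_signup": True,  "create_account": True,  "first_action": False},
    {"find_signup": False, "create_account": True,  "first_action": False},
    {"find_signup": True,  "create_account": True,  "first_action": True},
    {"find_signup": True,  "create_account": False, "first_action": False},
    {"find_signup": False, "create_account": True,  "first_action": False},
    {"find_signup": True,  "create_account": True,  "first_action": True},
]

n = len(sessions)
for task in sessions[0]:
    completed = sum(s[task] for s in sessions)
    print(f"{task}: {completed}/{n} completed ({completed / n:.0%})")
```

With only 5–8 participants these percentages are directional, not statistically significant; treat them as a pointer to which task deserves the deepest qualitative analysis.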

Common pitfalls

  • Leading the user: “Click the blue button” or “You’d use this to save, right?” → Do this instead: give the goal only (“Save this item so you can find it later”) and watch what they do.
  • Too many tasks: long sessions tire participants and dilute focus. → Do this instead: 3–5 tasks per round; run another round if you need more.
  • Only testing with internal users: colleagues already know the product. → Do this instead: recruit people who match your target users and haven’t used it (or use it rarely).
  • No clear outcome: notes in a drawer, no recommendations. → Do this instead: summarise issues, severity, and next steps; share with the team and tie to a decision.
Usability testing vs other methods

  • Usability testing vs A/B testing: usability testing explains why (qualitative, task-based); A/B testing measures which version wins (quantitative, at scale). Use both: usability testing to find issues, A/B testing to validate fixes.
  • Usability testing vs heuristic evaluation: a heuristic evaluation is an expert review against best practices; usability testing is done with real users. Do both when you can; users catch things experts miss.
  • Usability testing vs user research: usability testing is one method within user research, focused on “can they use it?”. Broader user research includes discovery, interviews, and other methods.

Related terms

  • User research – usability testing is one research method within it.
  • Prototype – what you often test before building.
  • Heuristic evaluation – expert review; combine with usability testing.
  • A/B testing – test solutions at scale after usability testing finds issues.
  • User flow – map flows, then test them with users.
  • Problem statement – what you’re solving; usability testing checks whether your solution works.
  • Accessibility – include accessibility in your test plan and participant recruitment.

Next step

Pick one critical flow (e.g. sign-up or checkout), write 3 tasks, and run 5 sessions. Share a one-page summary of issues and recommendations. If you don’t have a prototype yet, build a clickable one first.