Heuristic Evaluation

What is heuristic evaluation?

Heuristic evaluation is a usability inspection method where one or more evaluators review an interface against a set of usability principles (heuristics) and note violations. It’s expert-based, not user-based: no users are in the room, but the evaluators know usability best practices.

Use it when: you need a quick, cheap pass over an interface to find obvious issues before or alongside usability testing. It doesn’t replace testing with users but can catch many problems fast.

Copy/paste checklist (Nielsen’s 10 heuristics)

  1. Visibility of system status – Users see what’s going on (feedback, loading, state).
  2. Match system and real world – Language and concepts users know.
  3. User control and freedom – Undo, back, exit; no dead ends.
  4. Consistency and standards – Platform and product conventions followed.
  5. Error prevention – Constraints and confirmations to avoid mistakes.
  6. Recognition over recall – Options visible; minimal memory load.
  7. Flexibility and efficiency – Shortcuts and defaults for experts; simple for novices.
  8. Aesthetic and minimalist design – No irrelevant clutter.
  9. Help with errors – Clear messages and recovery.
  10. Help and documentation – Findable when needed; concise.

For each heuristic, note violations (where the interface fails) and severity (e.g. critical / major / minor).

Why heuristic evaluation matters

  • Finds many usability issues quickly and cheaply, without recruiting users.
  • Covers the interface systematically, provided you walk through all the key flows rather than spot-checking screens.
  • Complements usability testing: heuristics catch expert-visible issues; users reveal real behaviour and confusion.
  • Good for early concepts or when you can’t run user tests yet.

What a good heuristic evaluation includes

Checklist

  • [ ] Clear scope – which flows or screens (e.g. “sign-up and onboarding”).
  • [ ] Heuristic set – e.g. Nielsen’s 10; use the same set for consistency.
  • [ ] Severity – rate each issue so you can prioritise (critical / major / minor).
  • [ ] Recommendation – what to change, not just “violates heuristic X”.
  • [ ] Multiple evaluators – 2–3 evaluators find more issues than one; merge findings (one way to record and merge them is sketched below).
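
If findings are captured as structured records rather than free-form notes, merging the evaluators’ lists becomes mechanical. A minimal sketch in TypeScript – the field names (screen, element, heuristic, severity, recommendation) and the function name mergeFindings are illustrative assumptions, not a standard schema:

```typescript
// Hypothetical sketch: one way to record findings so that several
// evaluators' lists can be merged. All field names are illustrative.

type Severity = "critical" | "major" | "minor";

interface Finding {
  screen: string;         // where the issue occurs, e.g. "Payment"
  element: string;        // the specific UI element, e.g. "Submit button"
  heuristic: string;      // e.g. "H1: Visibility of system status"
  severity: Severity;
  recommendation: string; // what to change, not just the violation
  evaluator: string;      // who reported it, for traceability
}

const severityRank: Record<Severity, number> = {
  critical: 3,
  major: 2,
  minor: 1,
};

// Merge findings from several evaluators: deduplicate issues that point
// at the same screen/element/heuristic, keep the highest severity given
// to a duplicate, and collect all evaluators who reported it.
function mergeFindings(all: Finding[]): Finding[] {
  const byKey = new Map<string, Finding>();
  for (const f of all) {
    const key = `${f.screen}|${f.element}|${f.heuristic}`;
    const existing = byKey.get(key);
    if (!existing) {
      byKey.set(key, { ...f });
    } else {
      if (severityRank[f.severity] > severityRank[existing.severity]) {
        existing.severity = f.severity;
      }
      existing.evaluator += `, ${f.evaluator}`;
    }
  }
  return [...byKey.values()];
}
```

Keeping the highest severity on duplicates is one reasonable rule; teams may prefer to discuss disagreements and settle each rating together.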

Common formats

  • Nielsen’s 10: the standard set; use unless you have a reason to add/remove (e.g. accessibility heuristics).
  • Task-based: walk through specific tasks (e.g. “sign up”, “complete purchase”) and check each step against heuristics.
  • Report: list of issues with heuristic, location, severity, and recommendation; optionally grouped by heuristic or by screen (a grouping sketch follows this list).
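
Grouping the report is easy to automate once findings are structured. A minimal sketch, again with an assumed Finding shape and a hypothetical groupBy helper rather than any standard API:

```typescript
// Hypothetical sketch: grouping findings for a report, either by
// heuristic or by screen. `Finding` is a minimal stand-in type.

interface Finding {
  screen: string;
  heuristic: string;
  severity: "critical" | "major" | "minor";
  recommendation: string;
}

// Group findings by a string-valued field: "heuristic" or "screen".
function groupBy(
  findings: Finding[],
  field: "heuristic" | "screen"
): Map<string, Finding[]> {
  const groups = new Map<string, Finding[]>();
  for (const f of findings) {
    const key = f[field];
    const group = groups.get(key) ?? [];
    group.push(f);
    groups.set(key, group);
  }
  return groups;
}

// Usage: print a summary grouped by heuristic.
// for (const [heuristic, issues] of groupBy(findings, "heuristic")) {
//   console.log(`${heuristic}: ${issues.length} issue(s)`);
// }
```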

Example

  • Scope: Checkout flow (cart → payment → confirmation).
  • Evaluators: 2 people.
  • Process: Each evaluator goes through the flow, notes violations of Nielsen’s 10, rates severity, and suggests a fix.
  • Output: 12 issues, e.g. “No visible loading state when submitting payment (H1: Visibility of system status). Severity: Major. Recommendation: Show a spinner or progress indicator and disable submit until a response arrives.”
  • Follow-up: Fix the critical and major issues, then run usability testing to validate the fixes and find more.

Common pitfalls

  • Replacing user testing: heuristics don’t tell you what users actually do or want. → Do this instead: use heuristics as a first pass; always test with users for important flows.
  • Vague findings: “Inconsistent.” → Do this instead: name the screen, element, and heuristic; say what to do differently.
  • No severity: everything looks equally important. → Do this instead: rate so the team can prioritise (e.g. critical = blocks task; major = significant friction; minor = polish).
  • One evaluator only: single perspective misses issues. → Do this instead: at least 2 evaluators; merge and deduplicate.

Heuristic evaluation vs related methods

  • Heuristic evaluation vs usability testing: heuristics = expert review; usability testing = real users. Use both: heuristics for breadth and speed; user testing for validity and “why”.
  • Heuristic evaluation vs accessibility audit: accessibility has its own guidelines (e.g. WCAG); you can add accessibility heuristics or run a separate audit.
  • Heuristic evaluation vs cognitive walkthrough: walkthrough is task-focused (“can the user complete step X?”); heuristics are principle-focused. Different angles; both are expert methods.

Related concepts

  • Usability testing – validate with users after or alongside heuristic review.
  • Usability – the goal heuristics help you assess.
  • User research – broader research; heuristics are one method.
  • Prototype – what you often evaluate (before or after user testing).
  • UX design – heuristics support UX quality.
  • Accessibility – consider adding accessibility checks to your review.

Next step

Pick one critical flow, run a heuristic evaluation using Nielsen’s 10 (yourself or with a colleague), and document issues with severity and recommendations. Then run usability testing on the same flow to see what users do and whether your fixes are right.