How to Run a Design Audit That Actually Changes Something

Most design audits produce a slide deck. The slide deck gets shared once. Then it lives in a folder nobody visits again. Six months later, a new designer joins and asks if anyone has done an audit recently.

This is not a process problem. It is a scoping problem. And it starts with how most teams think about what a design audit is for.

A design audit is a diagnostic, not a compliance exercise. The question is not whether your product follows the design system. The question is whether it works for the person trying to do something real. That reframe changes what you look for, how you scope it, and what you do with the findings.

What a design audit is (the short version)

A design audit is a systematic review of a product's UI — looking for inconsistency, accessibility gaps, and coverage failures across states and flows. Teams run them before major releases, after a redesign, or when a new designer is trying to understand what they've inherited.

The useful version produces a prioritised fix list. Every item has a severity, an owner, and a clear next action. The useless version produces observations.

When to run one, and when not to

Run a design audit when:

  • You are preparing a significant release and need to know what is broken before users do
  • A new designer or design lead is getting up to speed on the product
  • You have recurring bug reports or support tickets that point to UI confusion
  • You are consolidating or replacing parts of the design system

Do not run one when:

  • You are mid-sprint with no capacity to act on findings. An audit without remediation time is documentation of your problems, not a path to fixing them.
  • The product is in early exploration. You do not audit a prototype. You audit a shipped product.
  • You want ammunition for a conversation you have already decided to have. That is not an audit — it is a brief with extra steps.

The most common audit failure is running one at the wrong time, when the findings have no path to action. If you cannot answer "what will we do with this?" before you start, don't start.

Scope it ruthlessly

The second most common failure is scoping too broadly. "Let's audit the whole product" means nobody agrees on what done looks like, nothing is prioritised, and the final output covers everything equally — which means it guides nothing.

Scope by surface, not by category. Pick one critical user flow — onboarding, upgrade, error recovery — and audit it end to end. Or pick one component type, such as all your form patterns or all your empty states, and go deep on that.

A focused audit on the checkout flow is worth more than a broad audit that lists 200 inconsistencies with no clear priority.

Questions that help scope the work:

  • What is the most user-visible part of this product right now?
  • Where do we have the most support tickets or drop-off data?
  • What is shipping in the next cycle that is adjacent to this area?

If you can answer the first two, you have your scope.

What to look for

Visual inconsistency

This is the category most teams default to, and it matters — but only when you go beyond "these paddings don't match."

The meaningful version of a visual audit catches:

  • Token drift: components that stopped using the correct token and are now hardcoded to a value that will diverge the next time the system updates
  • State gaps: interactive elements that have a default state and an active state, but nothing for hover, focus, disabled, or loading
  • Spacing that isn't systematic: padding that looks roughly right but does not map to any grid or spacing scale, which means it will not hold up under responsive changes

The question to ask of every component: if the design system updates, will this break silently or update correctly?
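One way to make the spacing check concrete: pull the computed padding and margin values for the components in scope and flag anything that does not land on your spacing scale. The scale and observed values below are illustrative, not from any real product.

```python
# Hypothetical spacing scale; substitute your design system's actual tokens.
SPACING_SCALE = {4, 8, 12, 16, 24, 32, 48}

def off_scale(values):
    """Return the observed spacing values that are not on the scale."""
    return sorted(v for v in set(values) if v not in SPACING_SCALE)

# Values (hypothetically) pulled from a component's computed styles
observed = [16, 24, 15, 8, 23]
print(off_scale(observed))  # → [15, 23]
```

Anything this flags is either a hardcoded value that will diverge on the next system update, or a missing step in the scale — both worth an audit item.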

Accessibility gaps

Not "does every image have alt text." The accessibility issues that actually exclude users are structural, and they tend to be invisible until a keyboard or screen reader hits them.

  • Tab order problems: modals that trap focus, forms that tab in the wrong sequence, custom interactive components that are skipped entirely by keyboard navigation
  • Screen reader compatibility: components that have no accessible name, or that announce the wrong role to assistive technology — a button that reads as a div, a modal that announces nothing on open
  • Contrast that passes but still fails: the WCAG threshold is a floor, not a target. Small body text sitting exactly at 4.5:1 is technically compliant and still hard to read for a large portion of your users
  • Touch targets: 44×44px is the widely used minimum (WCAG's AAA target-size criterion; Apple's guidelines use 44pt). Anything below that is a bug, not a design choice

These are the gaps that produce real exclusion and real support tickets. They are also often straightforward to fix once found, which makes them high-severity by default.
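The contrast check, at least, is mechanical. The sketch below implements the WCAG relative-luminance and contrast-ratio formulas so you can spot-check a colour pair without a browser extension; the grey-on-white example is illustrative.

```python
def _linear(channel):
    """Linearize one sRGB channel (0-255) per the WCAG formula."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """WCAG relative luminance of an (r, g, b) tuple."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, always >= 1."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

Remember the point from the list above: a pair that lands just over 4.5:1 passes the floor but is still worth a finding if it carries small body text.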

Flow and state coverage

This is the forgotten 20%. Most screens get designed for the happy path. Everything else gets shipped in whatever state a developer guessed at under time pressure.

Check for:

  • Empty states: what does the screen look like before a user has any data? Is it useful, or is it just blank with a broken layout?
  • Error states: does the message help someone recover, or does it just tell them they're wrong? Does it point to the thing they need to fix?
  • Partial states: what does a dashboard look like with one item? With a hundred? While loading? When the API times out?
  • Edge cases in forms: very long names, special characters, inputs that overflow their containers, fields that accept one format but receive another

Audit these by walking the flow and breaking it deliberately. It takes less time than you think, and it surfaces things that automated tooling will never catch.
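"Break it deliberately" works better with a prepared input set than with improvisation. A minimal sketch, with hypothetical inputs and a hypothetical field-renderer interface, of what that set might look like:

```python
# Deliberately awkward inputs for form fields. Each entry targets one
# failure mode named in the audit checklist above.
EDGE_INPUTS = {
    "very_long": "A" * 300,                       # overflows containers
    "unicode": "Zoë Müller-Łukasiewicz 李小龙",    # names beyond ASCII
    "markup": "<script>alert('x')</script>",      # unescaped rendering
    "whitespace": "  leading and trailing  ",     # trim handling
    "empty": "",                                  # required-field behaviour
}

def run_through(field_renderer):
    """Feed every edge input to a renderer; a crash is itself a finding."""
    failures = []
    for label, value in EDGE_INPUTS.items():
        try:
            field_renderer(value)
        except Exception as exc:
            failures.append((label, repr(exc)))
    return failures
```

The same list doubles as manual test data: paste each value into the live form and note what the layout and the error messaging do with it.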

Handoff drift

What was designed is not always what shipped. This gap is rarely intentional — it is a consequence of pressure, ambiguity, and the fact that developers make hundreds of small decisions during implementation that nobody reviews before release.

The method here is direct comparison. Pull the live product and the most recent design file side by side. Check spacing, font sizes, interactive states, and border radii. Note where they diverge. The interesting question is not just what is different, but why — the answer usually reveals either a gap in the handoff process or an ambiguity in the spec that will produce the same divergence again next cycle.
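If you record the spec values and the shipped values as you go, the divergence list falls out automatically. A minimal sketch, with made-up property names and values:

```python
# Hypothetical values: what the design file specifies vs. what shipped.
spec = {"padding": 16, "font_size": 14, "border_radius": 8}
live = {"padding": 16, "font_size": 13, "border_radius": 6}

# Each divergence maps a property to its (spec, live) pair.
drift = {k: (spec[k], live[k]) for k in spec if spec[k] != live[k]}
print(drift)  # → {'font_size': (14, 13), 'border_radius': (8, 6)}
```

The dictionary is the "what"; the "why" behind each entry is the part you still have to chase down in conversation.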

What to do with the output

A finding with no owner and no priority is a slide deck waiting to be ignored.

Every item in your audit output needs three things:

  1. Severity — not a label, a consequence. "Low" means it will not break anything or affect anyone significantly if it ships next month. "High" means it is breaking something or excluding someone right now.
  2. Owner — a named person. Not "design" or "dev." A person.
  3. Next action — not "fix this." The specific thing that needs to happen next: a Figma component update, a developer ticket, a token change, a conversation with a particular stakeholder.

If you end up with 40 findings, you have too many. Group them. If the same root cause explains twelve different visual inconsistencies, the root cause gets one item — probably a token update or a component refactor — and the twelve symptoms resolve along with it.
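The grouping step is mechanical once each finding carries a root cause. A sketch with hypothetical findings, using the three required fields plus a root_cause key for grouping:

```python
from collections import defaultdict

# Hypothetical findings; the fields mirror the three requirements above.
findings = [
    {"symptom": "button padding off", "root_cause": "hardcoded spacing token",
     "severity": "low", "owner": "Dana", "next_action": "update spacing token"},
    {"symptom": "card padding off", "root_cause": "hardcoded spacing token",
     "severity": "low", "owner": "Dana", "next_action": "update spacing token"},
    {"symptom": "modal traps focus", "root_cause": "missing focus management",
     "severity": "high", "owner": "Sam", "next_action": "file dev ticket"},
]

def group_by_root_cause(items):
    """Collapse symptoms into one entry per root cause."""
    grouped = defaultdict(list)
    for f in items:
        grouped[f["root_cause"]].append(f["symptom"])
    return dict(grouped)

# Three symptoms become two actionable items
print(len(group_by_root_cause(findings)))  # → 2
```

Each key in the result is one item on the fix list; the symptom list under it is the evidence that the root cause is worth fixing.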

The output of a useful audit is not a comprehensive record of everything you found. It is the shortest path to the most impactful improvements.

Anything systemic — patterns that indicate a missing rule or an unclear standard — should feed into your design standards. If the same problem appears in five different places, the fix is a rule, not five tickets.

Findings that reveal a genuine user problem — not just a design inconsistency, but a task the product is failing to support — deserve a proper problem statement before anyone starts designing a solution. The audit surfaces the problem. The problem statement defines it. The work that follows fixes it.

The 90-minute version

The thorough audit that never happens is worth less than the lightweight one that does.

If you need a fast pass — a new designer onboarding, a pre-release sanity check, or a forcing function for a conversation that has been deferred too long — here is a version that fits inside a morning.

10 minutes: define the scope. One flow, or one component type. Not the whole product.

20 minutes: walk the flow as a user. Tab through it. Try to break it. Check it on a small viewport. Try it at 200% browser zoom. Note anything that surprises you or requires more effort than it should.

30 minutes: check states and edge cases. For every interactive component in scope: does it have all the states it needs? What does it look like when empty? When the input is unusually long? When the action fails?

20 minutes: compare design to live. Open your design tool and the live product side by side. Note the three biggest divergences.

10 minutes: write the top five findings. Severity, owner, next action. If you cannot get it to five, you have not prioritised. Pick the five that matter most.

This version will not catch everything. It will catch the things that matter most, in a format that can actually be acted on. That is the point of a design audit. Not the catalogue. The change.