
How to Run a Design Audit That Actually Changes Something

Most design audits produce a slide deck. The slide deck gets shared once. Then it lives in a folder nobody visits again. Six months later, a new designer joins and asks if anyone has done an audit recently.

This is not a process problem. It is a scoping problem. And it starts with how most teams think about what a design audit is for.

A design audit is a diagnostic, not a compliance exercise. The question is not whether your product follows the design system. The question is whether it works for the person trying to do something real. That reframe changes what you look for, how you scope it, and what you do with the findings.

What a design audit is (the short version)

A design audit is a systematic review of a product's UI, looking for inconsistency, accessibility gaps, and coverage failures across states and flows. Teams run them before major releases, after a redesign, or when a new designer is trying to understand what they've inherited.

The useful version produces a prioritised fix list. Every item has a severity, an owner, and a clear next action. The useless version produces observations.

When to run one, and when not to

Run a design audit when:

  • You are preparing a significant release and need to know what is broken before users do
  • A new designer or design lead is getting up to speed on the product
  • You have recurring bug reports or support tickets that point to UI confusion
  • You are consolidating or replacing parts of the design system

Do not run one when:

  • You are mid-sprint with no capacity to act on findings. An audit without remediation time is documentation of your problems, not a path to fixing them.
  • The product is in early exploration. You do not audit a prototype. You audit a shipped product.
  • You want ammunition for a conversation you have already decided to have. That is not an audit; it is a brief with extra steps.

The most common audit failure is running one at the wrong time, when the findings have no path to action. If you cannot answer "what will we do with this?" before you start, don't start.

Scope it ruthlessly

The second most common failure is scoping too broadly. "Let's audit the whole product" means nobody agrees on what done looks like, nothing is prioritised, and the final output covers everything equally, which means it guides nothing.

Scope by surface, not by category. Pick one critical user flow: onboarding, upgrade, or error recovery. Audit it end to end. Or pick one component type, such as all your form patterns or all your empty states, and go deep on that.

A focused audit on the checkout flow is worth more than a broad audit that lists 200 inconsistencies with no clear priority.

Questions that help scope the work:

  • What is the most user-visible part of this product right now?
  • Where do we have the most support tickets or drop-off data?
  • What is shipping in the next cycle that is adjacent to this area?

If you can answer the first two, you have your scope.

What to look for

Visual inconsistency

This is the category most teams default to, and it matters, but only when you go beyond "these paddings don't match."

The meaningful version of a visual audit catches:

  • Token drift: components that stopped using the correct token and are now hardcoded to a value that will diverge the next time the system updates
  • State gaps: interactive elements that have a default state and an active state, but nothing for hover, focus, disabled, or loading
  • Spacing that isn't systematic: padding that looks roughly right but does not map to any grid or spacing scale, which means it will not hold up under responsive changes

The question to ask of every component: if the design system updates, will this break silently or update correctly?
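That question can be partially automated. As a minimal sketch, assuming your tokens are exposed as CSS custom properties and hardcoded hex colours are the drift you care about, a script like this flags candidates for review (the CSS here is illustrative):

```python
import re

# Flag raw hex colours that appear outside the token definitions in :root.
# Assumes tokens are CSS custom properties; real stylesheets need a parser.
HEX_COLOUR = re.compile(r"#[0-9a-fA-F]{3,8}\b")

def find_token_drift(css: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs with a hardcoded hex colour."""
    drift = []
    in_root = False
    for n, line in enumerate(css.splitlines(), start=1):
        if ":root" in line:
            in_root = True
        if in_root:
            if "}" in line:
                in_root = False
            continue  # token definitions are allowed to contain raw values
        if HEX_COLOUR.search(line):
            drift.append((n, line.strip()))
    return drift

css = """
:root { --color-primary: #4f46e5; }
.button { background: var(--color-primary); }
.button--legacy { background: #4e46e0; } /* drifted */
"""
for line_no, text in find_token_drift(css):
    print(f"line {line_no}: {text}")
```

Real stylesheets need a proper CSS parser, and spacing or typography drift needs equivalent checks, but even a crude scan turns "look for token drift" into a finite list.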

Accessibility gaps

Not "does every image have alt text." The accessibility issues that actually exclude users are structural, and they tend to be invisible until a keyboard or screen reader hits them.

  • Tab order problems: modals that trap focus, forms that tab in the wrong sequence, custom interactive components that are skipped entirely by keyboard navigation
  • Screen reader compatibility: components that have no accessible name, or that announce the wrong role to assistive technology, such as a button that reads as a div, or a modal that announces nothing on open
  • Contrast that passes but still fails: the WCAG threshold is a floor, not a target. Small body text sitting exactly at 4.5:1 is technically compliant and still hard to read for a large portion of your users
  • Touch targets: 44px is the minimum. Anything below that is a bug, not a design choice

These are the gaps that produce real exclusion and real support tickets. They are also often straightforward to fix once found, which makes them high-severity by default.
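The contrast check in particular is mechanical. The WCAG 2.x relative-luminance formula is public, so you can verify the 4.5:1 floor directly instead of eyeballing it; a minimal sketch:

```python
# WCAG 2.x contrast ratio from sRGB values (0-255 per channel).
def _channel(c: float) -> float:
    c /= 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Remember the point from the list above: 4.5:1 is a floor. A body-text pairing that lands at exactly 4.5 passes the letter of the standard and still deserves a finding.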

Flow and state coverage

This is the forgotten 20%. Most screens get designed for the happy path. Everything else gets shipped in whatever state a developer guessed at under time pressure.

Check for:

  • Empty states: what does the screen look like before a user has any data? Is it useful, or is it just blank with a broken layout?
  • Error states: does the message help someone recover, or does it just tell them they're wrong? Does it point to the thing they need to fix?
  • Partial states: what does a dashboard look like with one item? With a hundred? While loading? When the API times out?
  • Edge cases in forms: very long names, special characters, inputs that overflow their containers, fields that accept one format but receive another

Audit these by walking the flow and breaking it deliberately. It takes less time than you think, and it surfaces things that automated tooling will never catch.
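When breaking forms deliberately, it helps to keep a fixed set of stress inputs so every audit exercises the same edge cases. These strings are illustrative, not exhaustive:

```python
# Illustrative stress inputs for deliberately breaking form fields in scope.
EDGE_CASE_INPUTS = [
    "A" * 256,               # very long value: does it overflow its container?
    "O'Brien-Smith, Jr.",    # punctuation a strict validator might reject
    "名前のテスト",            # non-Latin characters
    "  padded  ",            # leading/trailing whitespace handling
    "user+tag@example.com",  # valid email format some validators refuse
]

for value in EDGE_CASE_INPUTS:
    print(repr(value[:30]))
```

Paste each one into every field in scope and note what the UI does, not just what the validator says.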

Handoff drift

What was designed is not always what shipped. This gap is rarely intentional; it is a consequence of pressure, ambiguity, and the fact that developers make hundreds of small decisions during implementation that nobody reviews before release.

The method here is direct comparison. Pull the live product and the most recent design file side by side. Check spacing, font sizes, interactive states, and border radii. Note where they diverge. The interesting question is not just what is different, but why: the answer usually reveals either a gap in the handoff process or an ambiguity in the spec that will produce the same divergence again next cycle.
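The side-by-side comparison can be recorded as structured data rather than screenshots. In this sketch the property names and values are hypothetical; in practice the "live" side comes from your browser's computed-styles panel and the "spec" side from the design file:

```python
# Diff a design spec against values inspected from the live product.
# Property names and values here are illustrative, not a real spec format.
def handoff_drift(spec: dict, live: dict) -> dict:
    """Return {property: (spec_value, live_value)} for every divergence."""
    return {
        prop: (spec[prop], live.get(prop))
        for prop in spec
        if spec[prop] != live.get(prop)
    }

spec = {"font-size": "16px", "padding": "12px 16px", "border-radius": "8px"}
live = {"font-size": "16px", "padding": "12px 14px", "border-radius": "6px"}
print(handoff_drift(spec, live))
# {'padding': ('12px 16px', '12px 14px'), 'border-radius': ('8px', '6px')}
```

Diffing this way also gives the "why" conversation a concrete artefact: each divergent property is a question for the handoff process, not an accusation.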


What to do with the output

A finding with no owner and no priority is a slide deck waiting to be ignored.

Every item in your audit output needs three things:

  1. Severity: not a label, a consequence. "Low" means it will not break anything or affect anyone significantly if it ships next month. "High" means it is breaking something or excluding someone right now.
  2. Owner: a named person. Not "design" or "dev." A person.
  3. Next action: not "fix this." The specific thing that needs to happen next: a Figma component update, a developer ticket, a token change, a conversation with a particular stakeholder.

If you end up with 40 findings, you have too many. Group them. If the same root cause explains twelve different visual inconsistencies, the root cause gets one item, probably a token update or a component refactor, and the twelve symptoms resolve along with it.
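The grouping step is mechanical once each finding carries a root cause. A minimal sketch, with illustrative field names:

```python
from collections import defaultdict

# Collapse symptom-level findings into one item per root cause.
# Field names and example findings are illustrative.
findings = [
    {"symptom": "Button padding off on billing page",
     "root_cause": "spacing token missing", "severity": "low"},
    {"symptom": "Card padding off on settings page",
     "root_cause": "spacing token missing", "severity": "low"},
    {"symptom": "Modal traps focus on close",
     "root_cause": "focus management bug", "severity": "high"},
]

grouped = defaultdict(list)
for finding in findings:
    grouped[finding["root_cause"]].append(finding["symptom"])

for cause, symptoms in grouped.items():
    print(f"{cause}: {len(symptoms)} symptom(s)")
```

One item per root cause is what goes on the fix list; the symptoms become its evidence.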

The output of a useful audit is not a comprehensive record of everything you found. It is the shortest path to the most impactful improvements.

Anything systemic, meaning patterns that point to a missing rule or an unclear standard, should feed into your design system. If the same problem appears in five different places, the fix is a rule, not five tickets. This is where audits become leverage: a single rule prevents the same gap from happening again.

Findings that reveal a genuine user problem, not just a design inconsistency but a task the product is failing to support, deserve a proper problem statement before anyone starts designing a solution. The audit surfaces the problem. The problem statement defines it. The work that follows fixes it.

The 90-minute version

The thorough audit that never happens is worth less than the lightweight one that does.

If you need a fast pass, whether for a new designer's onboarding, a pre-release sanity check, or as a forcing function for a conversation that has been deferred too long, here is a version that fits inside a morning.

10 minutes: define the scope. One flow, or one component type. Not the whole product.

20 minutes: walk the flow as a user. Tab through it. Try to break it. Check it on a small viewport. Try it at 200% browser zoom. Note anything that surprises you or requires more effort than it should.

30 minutes: check states and edge cases. For every interactive component in scope: does it have all the states it needs? What does it look like when empty? When the input is unusually long? When the action fails?

20 minutes: compare design to live. Open your design tool and the live product side by side. Note the three biggest divergences.

10 minutes: write the top five findings. Severity, owner, next action. If you cannot get it down to five, you have not prioritised. Pick the five that matter most.

This version will not catch everything. It will catch the things that matter most, in a format that can actually be acted on. That is the point of a design audit. Not the catalogue. The change.

Design Audit Checklist

Use this checklist during your audit to systematically review the areas that matter most. Print it out, share it with your team, or adapt it to your product.

Visual Consistency

  • Token drift: are all colors, spacing, and typography using design tokens or hardcoded values?
  • Font sizes and weights: do they match the design system spec across the product?
  • Spacing and padding: does layout follow a documented grid or spacing scale?
  • Border radii, shadows, and effects: are they consistent or ad-hoc?
  • Iconography: are all icons from the same system and used consistently?
  • Color usage: are brand colors, neutrals, and semantic colors applied correctly?

Component Audit

  • All interactive components have default, hover, active, focus, and disabled states
  • Buttons: size, padding, icon placement, and loading states are consistent
  • Forms: input fields, labels, validation states, and error messaging follow a pattern
  • Dropdowns and selects: open and closed states, keyboard navigation, empty states
  • Cards and containers: spacing, borders, shadows, and corner radii are systematic
  • Modals and overlays: consistent padding, close buttons, focus management

Spacing and Layout

  • Gutters and margins between sections follow the spacing scale
  • Padding inside components is proportional and consistent
  • Line height and text spacing are readable and follow the system
  • Responsive breakpoints maintain alignment and spacing logic
  • Container widths and max-widths are documented and applied
  • Whitespace is used intentionally, not as a leftover

Typography

  • Heading hierarchy is clear: H1, H2, and H3 sizes and weights are distinct
  • Body text is readable at smallest viewport and largest zoom level
  • Font line-height supports readability (aim for 1.5 or higher)
  • Letter spacing and font kerning look intentional
  • Link styling is distinguishable from body text
  • All text meets minimum contrast requirements (4.5:1 for body, 3:1 for large text)

Accessibility Basics

  • All form inputs have associated labels or accessible names
  • Images and icons have descriptive alt text or aria-labels
  • Color is not the only way to convey information (error states use icons and text)
  • Focus indicators are visible on all interactive elements
  • Tab order is logical and matches visual flow
  • Interactive components announce their role and state to screen readers

Interaction Patterns

  • Loading states are shown (spinners, skeletons, disabled buttons)
  • Error states provide specific guidance, not just "Error"
  • Success feedback is clear (toast messages, confetti, state change)
  • Hover effects and transitions are performant and purposeful
  • Keyboard shortcuts (if any) are documented and consistent
  • Touch targets are at least 44px on mobile

Copy and Microcopy

  • Button text is action-oriented, not generic ("Save settings" not "OK")
  • Error messages help recovery ("Email is already in use. Try a different one." not "Invalid input")
  • Empty states are helpful ("No projects yet. Create one to get started." not "No data")
  • Placeholder text is not a substitute for labels
  • Confirmation dialogs use specific language, not "Are you sure?"
  • All jargon and acronyms are either explained or removed

Performance Indicators

  • Page load states are shown (not a blank screen)
  • Network errors are handled gracefully (not a generic timeout)
  • Infinite scroll or pagination is clearly labeled
  • Data-heavy tables have sorting, filtering, or search
  • Large forms have progress indicators or step labels
  • Real-time updates (sync status, notifications) are visually clear

How to use this checklist: Print it or share it in your audit planning doc. Assign sections to team members or review it together. Not every item applies to every product, so adapt and remove sections that are out of scope. The point is not to check every box, but to ensure you are looking at the areas that matter most.

Adam Marsden

Behind Unicorn Club

Hey 👋 I'm Adam Marsden. I've been designing and building products for 13 years, mostly SaaS and fintech.

I started Unicorn Club as a weekly newsletter for product builders. A small handful of reads each week, picked because they hold up when you get back to the work. Something you can use straight away, or take into a conversation with your team.

Every issue I ask myself one question: does this actually help someone ship better work this week?