QA Processes
What are QA Processes?
QA processes (quality assurance processes) are systematic approaches to testing and validating interfaces before they reach users. They ensure that interfaces meet functional requirements, accessibility standards, performance expectations, and user experience goals across devices and contexts.
Use them when: you need to ship with confidence that interfaces work as intended, meet standards, and won't break existing functionality.
Copy/paste template
- Scope: [what’s in scope for this release / test cycle]
- Types of testing: [functional, accessibility, visual regression, performance, UAT]
- Definition of done: [e.g. no P0/P1 bugs, WCAG AA, Core Web Vitals pass]
- Who runs what: [automated vs manual; who signs off]
- Sign-off: [who approves release and on what criteria]
Why QA Processes matter
- Catch issues early when they're cheaper to fix and before users are affected.
- Keep quality consistent across releases and reduce regression risk.
- Build confidence that releases will work as expected in production.
- Support accessibility and compliance (e.g. WCAG) through repeatable checks.
- Reduce stress and rework by having clear gates and expectations.
What a good QA process includes
Checklist
- [ ] Clear scope and definition of done for each release or sprint.
- [ ] Functional testing (core flows and edge cases) and regression where relevant.
- [ ] Accessibility testing (automated plus manual where needed) against WCAG or your standard; see the automated-scan sketch after this checklist.
- [ ] Cross-browser / device checks for your supported matrix.
- [ ] Performance checks (e.g. Core Web Vitals, budgets) where it matters.
- [ ] Ownership defined (who runs tests, who signs off).
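For the accessibility item in this checklist, an automated scan can run on every build. Here's a minimal sketch, assuming Playwright with the @axe-core/playwright package and a hypothetical /checkout route; adapt the URL and rule tags to your own standard:

```ts
// a11y.spec.ts — automated WCAG A/AA scan for one critical page.
// Assumes @playwright/test and @axe-core/playwright are installed and that
// baseURL is set in playwright.config.ts; "/checkout" is a hypothetical route.
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("checkout page has no detectable WCAG A/AA violations", async ({ page }) => {
  await page.goto("/checkout");

  // Run axe against the rendered page, limited to WCAG 2.x A and AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa"])
    .analyze();

  // Any violation fails the build. Automated scans catch only a subset of issues,
  // so keep manual keyboard and screen reader checks for critical flows.
  expect(results.violations).toEqual([]);
});
```

Automated scans like this are a floor, not a ceiling: they reliably catch things like missing labels and poor contrast, but not whether a flow actually makes sense with a screen reader.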
Common formats
- Sprint QA: tests run within each sprint as part of the definition of done.
- Release QA: dedicated test cycle before release with sign-off gates and QA guardrails (e.g. automated gates in CI).
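For release QA, the automated gate is often just a script that runs every required check and blocks the release if any fail. A minimal sketch, assuming Node/TypeScript in CI; the npm script names are hypothetical and should map to whatever commands your project actually runs:

```ts
// release-gate.ts — run as a CI step before sign-off; any failing check blocks the release.
// The npm script names below are assumptions, not a prescribed setup.
import { execSync } from "node:child_process";

const checks = [
  { name: "functional smoke tests", cmd: "npm run test:smoke" },
  { name: "accessibility scan", cmd: "npm run test:a11y" },
  { name: "visual regression", cmd: "npm run test:visual" },
];

let failed = false;
for (const check of checks) {
  try {
    // Stream the underlying tool's output into the CI log.
    execSync(check.cmd, { stdio: "inherit" });
    console.log(`PASS: ${check.name}`);
  } catch {
    console.error(`FAIL: ${check.name}`);
    failed = true;
  }
}

// Most CI systems treat a non-zero exit code as a failed gate, blocking merge or release.
process.exit(failed ? 1 : 0);
```

Keeping the gate in one versioned script makes the release criteria visible to the whole team instead of living in someone's head.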
Examples
Example (the realistic one)
A team ships a new checkout flow. QA process: (1) automated smoke tests and accessibility (axe) in CI; (2) manual test of happy path and key edge cases; (3) visual regression on critical screens; (4) UAT with one stakeholder. Definition of done: all automated checks green, no P0/P1 bugs, WCAG AA. Only then does the release go out.
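Steps (1) and (3) of that process could look like this in an end-to-end suite. A minimal sketch, assuming Playwright; the route, field labels, button text, and test card number are hypothetical placeholders:

```ts
// checkout.spec.ts — happy-path smoke test plus a visual regression snapshot.
// Routes, labels, and data below are placeholders; adapt them to your checkout flow.
import { test, expect } from "@playwright/test";

test("happy path: customer can complete checkout", async ({ page }) => {
  await page.goto("/checkout");
  await page.getByLabel("Email").fill("qa@example.com");
  await page.getByLabel("Card number").fill("4242 4242 4242 4242");
  await page.getByRole("button", { name: "Place order" }).click();

  // The confirmation screen is the assertion that matters for the smoke test.
  await expect(page.getByRole("heading", { name: "Order confirmed" })).toBeVisible();
});

test("visual regression: checkout summary matches the approved baseline", async ({ page }) => {
  await page.goto("/checkout");
  // Compares against a stored baseline screenshot; the first run creates it.
  await expect(page).toHaveScreenshot("checkout-summary.png", { maxDiffPixelRatio: 0.01 });
});
```

The manual edge-case pass and UAT in steps (2) and (4) stay human; the automated pieces just guarantee the basics before anyone spends time on them.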
Common pitfalls
- Testing too late: QA only at the end, so bugs are expensive to fix. → Do this instead: shift left by involving QA in requirements and design, and run automated checks on every commit.
- No accessibility in the loop: treating a11y as optional or one-off. → Do this instead: add automated a11y checks to CI, and test critical flows manually with a keyboard and screen reader (a focus-order sketch follows this list).
- Vague "done": no clear criteria for release. → Do this instead: define and document a definition of done and quality gates; use QA guardrails where possible.
- Only manual testing: slow, inconsistent, hard to scale. → Do this instead: automate repetitive and critical path tests; use manual testing for exploration and edge cases.
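Manual keyboard and screen reader testing can't be automated away, but a basic focus-order smoke check can back it up on every commit. A minimal sketch, assuming Playwright and the same hypothetical /checkout route as above:

```ts
// focus-order.spec.ts — keyboard reachability smoke check for a critical flow.
// This complements manual keyboard/screen reader testing; it does not replace it.
import { test, expect } from "@playwright/test";

test("primary checkout action is reachable by keyboard", async ({ page }) => {
  await page.goto("/checkout"); // hypothetical route

  const reached: string[] = [];
  for (let i = 0; i < 20; i++) {
    await page.keyboard.press("Tab");
    // Record the currently focused element (tag plus accessible name, where present).
    const label = await page.evaluate(() => {
      const el = document.activeElement as HTMLElement | null;
      if (!el) return "none";
      return `${el.tagName.toLowerCase()}:${el.getAttribute("aria-label") ?? el.textContent?.trim() ?? ""}`;
    });
    reached.push(label);
  }

  // The primary action must be reachable with the keyboard alone, within a reasonable number of tabs.
  expect(reached.some((l) => l.includes("Place order"))).toBe(true);
});
```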
QA Processes vs. related concepts
- QA processes vs QA guardrails: Processes are the overall approach (what to test, when, who). Guardrails are automated or manual checks that block or flag issues (e.g. in CI, before merge). Guardrails support the process.
- QA vs UAT: QA validates that the product meets spec and standards; UAT validates that it meets user/business acceptance criteria, often with real users or stakeholders.
Next step
If you're setting up quality gates and automation, read QA guardrails. If you're validating with users, use usability testing. Ensure accessibility is part of your QA checklist.