Reading a QA Report
When a QA review runs, Gradial generates a report showing how content performed against each active rule.

Report summary
At the top of every report:

| Field | What it shows |
|---|---|
| Rules failed | Count of rules the content did not pass |
| Rules passed | Count of rules the content met |
| Incomplete | Checks that require manual verification |
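
For a concrete picture of how these counts relate to individual checks, here is a minimal sketch in TypeScript. The field and type names are hypothetical, chosen for illustration; they are not Gradial's actual schema or API.

```typescript
// Hypothetical model of a report summary (illustrative only;
// not Gradial's actual schema).
type Status = "passed" | "failed" | "incomplete";

interface ReportSummary {
  rulesFailed: number; // rules the content did not pass
  rulesPassed: number; // rules the content met
  incomplete: number;  // checks that still need manual verification
}

// The summary is a tally of per-rule statuses.
function summarize(statuses: Status[]): ReportSummary {
  return {
    rulesFailed: statuses.filter((s) => s === "failed").length,
    rulesPassed: statuses.filter((s) => s === "passed").length,
    incomplete: statuses.filter((s) => s === "incomplete").length,
  };
}

console.log(summarize(["passed", "failed", "passed", "incomplete"]));
// => { rulesFailed: 1, rulesPassed: 2, incomplete: 1 }
```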
Result detail
Each rule result includes:

| Field | Description |
|---|---|
| Title | The rule name |
| Type | Whether this is a brand/compliance Rule or an Accessibility check |
| Status | Passed, Failed, or Incomplete |
| Explanation | What was found and why it failed |
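
Continuing the sketch above, each row of the result detail could be modeled as a record like the one below. The field names remain hypothetical.

```typescript
// Hypothetical shape of a single rule result (illustrative only).
interface RuleResult {
  title: string;                              // the rule name
  type: "rule" | "accessibility";             // brand/compliance rule vs. accessibility check
  status: "passed" | "failed" | "incomplete"; // see status definitions below
  explanation: string;                        // what was found and why it failed
}

const example: RuleResult = {
  title: "Meta description present",
  type: "rule",
  status: "failed",
  explanation: "No meta description found; the SEO rule requires one.",
};
```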
Status definitions
- Passed — Content meets the standard
- Failed — A violation was detected that must be corrected
- Incomplete — Automated testing found something that requires manual verification before a final determination can be made
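
Each status implies a different next action. A small helper, hypothetical but self-contained, makes the distinction concrete:

```typescript
type Status = "passed" | "failed" | "incomplete";

// Illustrative mapping from a check's status to the action it implies.
function nextAction(status: Status): string {
  switch (status) {
    case "passed":
      return "No action needed: content meets the standard.";
    case "failed":
      return "Correct the violation, then re-run QA.";
    case "incomplete":
      return "Verify manually before making a final determination.";
  }
}

console.log(nextAction("incomplete"));
// => "Verify manually before making a final determination."
```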
Issue types
QA reports surface issues across five categories:

| Category | What it catches |
|---|---|
| Brand & editorial | Voice and tone violations, unapproved terminology, messaging inconsistencies |
| SEO | Missing or duplicate meta descriptions, heading hierarchy problems, keyword usage |
| Compliance | Missing required disclosures, unapproved claims, regional regulatory language |
| Accessibility | Alt-text, heading structure, ARIA, color contrast — see Accessibility Reports for full detail |
| Functional | Missing asset references, broken links, incomplete metadata |
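
When triaging a large report, grouping failures by category lets each owning team work its own list. A minimal sketch, assuming each result carries a category field (an assumption for illustration, not a documented Gradial schema):

```typescript
type Category = "brand-editorial" | "seo" | "compliance" | "accessibility" | "functional";

interface CategorizedResult {
  title: string;
  category: Category; // hypothetical field, assumed for this sketch
  status: "passed" | "failed" | "incomplete";
}

// Collect failed checks per category so each can be routed to its owner.
function failuresByCategory(results: CategorizedResult[]): Map<Category, CategorizedResult[]> {
  const groups = new Map<Category, CategorizedResult[]>();
  for (const r of results) {
    if (r.status !== "failed") continue;
    const bucket = groups.get(r.category) ?? [];
    bucket.push(r);
    groups.set(r.category, bucket);
  }
  return groups;
}
```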
QA in your content workflow
1. Author creates content. The authoring agent produces output guided by active rules. Planning-agent rules apply early, before authoring begins; integration-agent rules apply during content creation.
2. QA runs automatically. After the authoring agent completes its work, the QA agent evaluates the output against all active rules for that workspace and integration.
3. Report surfaces issues. Failed and incomplete checks appear in the report with explanations, so content with failures is flagged before it reaches a human reviewer.
4. Author resolves and re-runs. Fix the flagged issues and re-run QA to confirm (a sketch of this loop follows the list). Content with a clean report moves to review with confidence.
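
The resolve-and-re-run step is an iterate-until-clean loop. In the sketch below, runQa and fix are hypothetical stand-ins backed by in-memory sample data; a real integration would invoke Gradial's QA review and apply actual corrections:

```typescript
interface Report {
  failures: string[]; // explanations of failed checks
}

// Hypothetical stand-ins with sample data; a real integration
// would call Gradial here instead.
let pendingIssues = ["Missing alt text on hero image", "Heading levels skip from h1 to h3"];

async function runQa(): Promise<Report> {
  return { failures: [...pendingIssues] };
}

async function fix(issue: string): Promise<void> {
  pendingIssues = pendingIssues.filter((p) => p !== issue);
}

// Iterate until the report is clean or the retry budget runs out.
async function resolveAndRerun(maxRounds = 3): Promise<boolean> {
  for (let round = 0; round < maxRounds; round++) {
    const report = await runQa();
    if (report.failures.length === 0) return true; // clean: ready for review
    for (const issue of report.failures) {
      await fix(issue);
    }
  }
  return false; // still failing: escalate to a human reviewer
}

resolveAndRerun().then((clean) => console.log(clean ? "Ready for review" : "Needs attention"));
```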
Scaling QA across teams
QA reports are most valuable when distributed teams — regional marketers, agency partners, extended business teams — are creating content without deep familiarity with every brand guideline.

The report explains, not just flags. Each failure includes the reasoning: what the rule requires, what was found, and how to fix it. Authors self-correct before submitting for review, which reduces back-and-forth.

Pass rates improve over time. As authors encounter and resolve the same rule types repeatedly, failures decrease. Teams that track pass rates see measurable quality improvement without increased review overhead (see the sketch at the end of this page).

Add rules as patterns emerge. If the same issue keeps appearing and no rule currently catches it, add one. The rules library grows in response to real production patterns.

To configure the rules that power QA reports, see Rules & Skills. For accessibility-specific results and WCAG remediation guidance, see Accessibility Reports.
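
Pass-rate tracking is simple arithmetic over successive report summaries. A minimal sketch, reusing the hypothetical summary fields from earlier and entirely made-up numbers:

```typescript
interface SummarySnapshot {
  period: string;      // e.g. a month of QA runs
  rulesPassed: number;
  rulesFailed: number;
}

// Pass rate = passed / (passed + failed). Incomplete checks are
// excluded because they have no final determination yet.
function passRate(s: SummarySnapshot): number {
  return s.rulesPassed / (s.rulesPassed + s.rulesFailed);
}

// Sample history, purely for illustration.
const history: SummarySnapshot[] = [
  { period: "2025-01", rulesPassed: 40, rulesFailed: 20 },
  { period: "2025-02", rulesPassed: 48, rulesFailed: 12 },
  { period: "2025-03", rulesPassed: 55, rulesFailed: 5 },
];

for (const snap of history) {
  console.log(`${snap.period}: ${(passRate(snap) * 100).toFixed(0)}% pass rate`);
}
```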