QA reports evaluate content against the rules your organization has defined — surfacing failures, explaining why they occurred, and guiding resolution before content reaches reviewers or goes live. Rules are configured in the Rules Manager. Once rules are active, QA runs automatically on every task output.

Reading a QA Report

When a QA review runs, Gradial generates a report showing how content performed against each active rule.

Report summary

At the top of every report:
| Field | What it shows |
| --- | --- |
| Rules failed | Count of rules the content did not pass |
| Rules passed | Count of rules the content met |
| Incomplete | Count of checks that require manual verification |
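For teams consuming these counts programmatically, the summary can be modeled as a simple record. This is an illustrative sketch, not Gradial's actual API; the interface and field names below are assumptions.

```typescript
// Hypothetical shape for the report summary counts shown above.
// Field names are illustrative, not Gradial's actual schema.
interface QaReportSummary {
  rulesFailed: number; // rules the content did not pass
  rulesPassed: number; // rules the content met
  incomplete: number;  // checks awaiting manual verification
}

// A report is "clean" only when nothing failed and nothing is
// left for a human to verify.
function isClean(summary: QaReportSummary): boolean {
  return summary.rulesFailed === 0 && summary.incomplete === 0;
}
```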

Result detail

Each rule result includes:
| Field | Description |
| --- | --- |
| Title | The rule name |
| Type | Whether this is a brand/compliance Rule or an Accessibility check |
| Status | Passed, Failed, or Incomplete |
| Explanation | What was found and why it failed |
Click any result to expand the full explanation — including the exact content that triggered the failure and guidance on how to fix it.

Status definitions

  • Passed — Content meets the standard
  • Failed — A violation was detected that must be corrected
  • Incomplete — Automated testing found something that requires manual verification before a final determination can be made
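Taken together, the result fields and status values above suggest a shape like the following. This is a minimal sketch with hypothetical names; Gradial's actual data model may differ.

```typescript
// Hypothetical model of a single rule result; names are illustrative.
type QaStatus = "Passed" | "Failed" | "Incomplete";

interface RuleResult {
  title: string;                  // the rule name
  type: "Rule" | "Accessibility"; // brand/compliance rule or accessibility check
  status: QaStatus;
  explanation: string;            // what was found and why it failed
}

// Failed results must be corrected; Incomplete results need a human
// to verify before a final determination can be made.
function needsAttention(result: RuleResult): boolean {
  return result.status !== "Passed";
}
```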

Issue types

QA reports surface issues across five categories:
| Category | What it catches |
| --- | --- |
| Brand & editorial | Voice and tone violations, unapproved terminology, messaging inconsistencies |
| SEO | Missing or duplicate meta descriptions, heading hierarchy problems, keyword usage |
| Compliance | Missing required disclosures, unapproved claims, regional regulatory language |
| Accessibility | Alt text, heading structure, ARIA, color contrast; see Accessibility Reports for full detail |
| Functional | Missing asset references, broken links, incomplete metadata |
Not all issues carry equal weight. Compliance and accessibility failures have regulatory implications; editorial issues may be lower priority. Use the severity and explanation fields to triage.
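As one way to apply that triage guidance, a consumer of report data might sort failures so regulatory-risk categories surface first. The category names match the table above; the priority ranking is an assumption, not a Gradial-defined ordering.

```typescript
// Hypothetical triage ordering: lower number = handle first.
// The ranking below is an assumption based on the guidance above.
const categoryPriority: Record<string, number> = {
  "Compliance": 0,    // regulatory implications
  "Accessibility": 0, // regulatory implications
  "Functional": 1,
  "SEO": 2,
  "Brand & editorial": 3,
};

interface FlaggedIssue {
  category: string;
  explanation: string;
}

// Sort flagged issues so the highest-risk categories come first.
function triage(issues: FlaggedIssue[]): FlaggedIssue[] {
  return [...issues].sort(
    (a, b) => (categoryPriority[a.category] ?? 9) - (categoryPriority[b.category] ?? 9)
  );
}
```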

QA in your content workflow

1. Author creates content. The authoring agent produces output guided by active rules. Planning-agent rules apply early, before authoring begins; integration-agent rules apply during content creation.

2. QA runs automatically. After the authoring agent completes its work, the QA agent evaluates the output against all active rules for that workspace and integration.

3. Report surfaces issues. Failed and incomplete checks appear in the report with explanations. Content with failures is flagged before it reaches a human reviewer.

4. Author resolves and re-runs. Fix the flagged issues and re-run QA to confirm (a sketch of this loop follows the list). Content with a clean report moves to review with confidence.

5. Reviewers focus on higher-value feedback. With baseline quality validated automatically, reviewers can focus on strategic messaging, creative direction, and stakeholder alignment rather than catching compliance issues.
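The fix-and-re-run step can be pictured as a simple loop. Everything here is hypothetical: runQa and applyFixes stand in for whatever mechanism your workspace uses to re-run QA and resolve issues; they are not Gradial API calls.

```typescript
// Illustrative loop for step 4; runQa and applyFixes are hypothetical
// stand-ins, not Gradial APIs.
async function fixUntilClean(
  runQa: () => Promise<{ rulesFailed: number; incomplete: number }>,
  applyFixes: () => Promise<void>,
  maxRounds = 3,
): Promise<boolean> {
  for (let round = 0; round < maxRounds; round++) {
    const summary = await runQa();
    // A clean report has no failures and nothing awaiting verification.
    if (summary.rulesFailed === 0 && summary.incomplete === 0) return true;
    await applyFixes(); // author resolves flagged issues, then QA re-runs
  }
  return false; // still not clean; escalate to a reviewer
}
```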

Scaling QA across teams

QA reports are most valuable when distributed teams — regional marketers, agency partners, extended business teams — are creating content without deep familiarity with every brand guideline.

The report explains, not just flags. Each failure includes the reasoning: what the rule requires, what was found, and how to fix it. Authors self-correct before submitting for review, which reduces back-and-forth.

Pass rates improve over time. As authors encounter and resolve the same rule types repeatedly, failures decrease. Teams that track pass rates over time see measurable quality improvement without increased review overhead.

Add rules as patterns emerge. If the same issue keeps appearing and no rule currently catches it, add one. The rules library grows in response to real production patterns.
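The pass-rate tracking described above reduces to simple arithmetic: rules passed divided by total rules evaluated per report. A minimal sketch, assuming the hypothetical summary shape from earlier:

```typescript
// Pass rate per report: rules passed out of all rules evaluated.
// Incomplete checks are excluded until they resolve to pass/fail.
function passRate(s: { rulesPassed: number; rulesFailed: number }): number {
  const total = s.rulesPassed + s.rulesFailed;
  return total === 0 ? 1 : s.rulesPassed / total;
}

// Tracking the rate across successive reports makes the improvement
// described above measurable.
const trend = [
  { rulesPassed: 12, rulesFailed: 6 },
  { rulesPassed: 15, rulesFailed: 3 },
  { rulesPassed: 17, rulesFailed: 1 },
].map(passRate); // e.g. [0.67, 0.83, 0.94], a rising pass rate
```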
To configure the rules that power QA reports, see Rules & Skills. For accessibility-specific results and WCAG remediation guidance, see Accessibility Reports.