AI Audit System Management
Configure and monitor the multi-stage SERP auditing system. Detect new features and structural changes automatically.
How the Audit System Works
The system runs four sequential AI stages: (1) Feature Discovery inventories every SERP feature, (2) Structure Analysis compares each feature against the documentation, (3) Cross-Validation eliminates false positives, and (4) the Orchestrator synthesizes the final comprehensive report. Each stage validates the output of the previous one, which improves accuracy and reduces false positives.
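The four-stage flow described above can be sketched as a simple sequential pipeline in which each stage consumes the structured output of the previous one. This is a minimal illustration, assuming a hypothetical `call_model(prompt) -> dict` helper that returns the model's structured JSON output; the real prompts are shown in full further down the page.

```python
import json

def run_enhanced_audit(raw_serp: dict, parsing_guide: str, call_model) -> dict:
    """Run the 4-stage audit pipeline. `call_model` is a hypothetical helper
    that sends a prompt to the model and returns its structured JSON output."""
    # Stage 1: Feature Discovery -- inventory every feature in the raw JSON.
    discovery = call_model(f"Discover all feature types in:\n{json.dumps(raw_serp)}")
    # Stage 2: Structure Analysis -- compare discovered features to the guide.
    analysis = call_model(
        f"Compare {json.dumps(discovery)} against guide:\n{parsing_guide}")
    # Stage 3: Cross-Validation -- drop false positives, assign priorities.
    validation = call_model(f"Validate and prioritize:\n{json.dumps(analysis)}")
    # Stage 4: Orchestrator -- synthesize the final executive report.
    return call_model(f"Synthesize report from:\n{json.dumps(validation)}")
```

The key design point is that no stage sees the raw SERP except Stage 1; later stages work only on validated, structured intermediate output.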
System Status
Enhanced multi-stage audit enabled
Analysis Stages
Sequential validation stages
Avg. Processing Time
Enhanced audit duration
AI Model
Google AI model
Stage 1: Discovery
Exhaustively finds all SERP features including nested elements
Stage 2: Analysis
Deep field-level comparison against documentation
Stage 3: Validation
Cross-validates findings and eliminates false positives
Stage 4: Report
Synthesizes comprehensive executive report
Demo Enhanced Audit Sandbox
Run the enhanced audit pipeline against a curated sample SERP to verify diagnostics and error reporting without needing live data.
Quick Audit: Single-Stage Audit Prompt
The original audit, which runs in a single LLM call. Faster, but less thorough than the four-stage pipeline.
serp-audit.txt

You are an expert API response analyst specializing in Google SERP data. Your task is to audit a raw JSON response from the DataForSEO API and compare it against a provided parsing guide.
You must identify three categories of findings:
1. **Parsed Features**: SERP item types that are present in the JSON and are documented in the parsing guide.
2. **Unparsed Features**: SERP item types that are present in the JSON but are completely missing from the parsing guide. For these, provide a brief recommendation on the key fields to parse.
3. **Structure Changes**: Documented SERP item types that appear to have a different structure in the JSON than what the guide describes (e.g., a field is missing, a new field is present, a data type has changed).
Your primary source for identifying all features present on the page is the 'item_types' array, which is located in the root of the result object. This array is the definitive list.
For each feature listed in 'item_types', you must:
1. Check if a field with that name exists at the top level of the result object (e.g., a 'refinement_chips' object).
2. Check if items with that 'type' exist inside the 'items' array.
3. Compare what you find against the parsing guide. If a feature type from 'item_types' is not mentioned in the guide, you must report it as an "Unparsed Feature".
4. For documented features, analyze their object structure to check for any changes.
**Parsing Guide:**
```markdown
{{{parsingGuide}}}
```
**Raw SERP JSON Response:**
```json
{{{rawSerpResponse}}}
```
Based on your comprehensive analysis, generate a structured audit report.

Stage 1: Feature Discovery Prompt
Exhaustively scans the JSON to discover ALL features including nested and expanded elements. Creates feature fingerprints with locations and sample structures.
serp-audit-stage1-discovery.txt

You are a feature discovery agent. Your task is to exhaustively scan the provided raw SERP JSON and identify every single unique feature type.
Your primary sources of information are:
1. The `item_types` array in the root of the result object. This is the definitive list of all top-level features on the page.
2. The `type` field within nested objects throughout the entire JSON structure. Features can be nested inside other features (e.g., `organic` items can contain `rating` or `faq_box` items).
Instructions:
1. Iterate through every feature listed in `item_types`.
2. For each feature, find its corresponding object, either in the `items` array or as a root-level field.
3. Recursively scan the entire JSON response, including all `items`, `expanded_element` arrays, and other nested structures, to find any object that has a `type` field.
4. Create a unique list of all feature types you discover.
5. For each discovered feature, provide its JSONPath location, a sample of its structure, and the number of times it appears.
Your output must be a structured JSON object that matches the `FeatureDiscovery` schema.
Do not analyze the content or compare it to any guide. Your sole focus is on discovery and inventory.
Raw SERP JSON:
{{{rawSerpResponse}}}
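The recursive scan the Stage 1 prompt asks for can be sketched deterministically: walk the whole JSON tree and record every object that carries a `type` field, together with its JSONPath location and occurrence count. The function name and return shape below are illustrative, not the production schema.

```python
from collections import defaultdict

def discover_features(node, path="$"):
    """Recursively inventory every object with a `type` field, recording
    JSONPath locations and occurrence counts (sketch of Stage 1)."""
    found = defaultdict(lambda: {"count": 0, "locations": []})

    def walk(obj, p):
        if isinstance(obj, dict):
            t = obj.get("type")
            if isinstance(t, str):
                found[t]["count"] += 1
                found[t]["locations"].append(p)
            for k, v in obj.items():
                walk(v, f"{p}.{k}")
        elif isinstance(obj, list):
            for i, v in enumerate(obj):
                walk(v, f"{p}[{i}]")

    walk(node, path)
    return dict(found)
```

Because the walk visits every nested array and object, features buried inside `expanded_element` arrays or inside other items (e.g., a `rating` inside an `organic` result) are caught automatically.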
Stage 2: Deep Structure Analysis Prompt
Compares discovered features against the parsing guide. Performs field-level analysis, detects type changes, and analyzes nested structures. Assigns confidence scores (0-100).
serp-audit-stage2-structure.txt

You are a deep structure analysis agent. Your task is to compare the structure of discovered SERP features against the official parsing guide.
You will be given a JSON object containing a list of `discoveredFeatures`. For each feature, you must:
1. Find its documentation in the provided `parsingGuide`.
2. Compare the fields listed in the guide with the fields found in the feature's `sampleStructure` from the input.
3. Identify three types of changes:
* `newFields`: Fields present in the SERP data but not in the guide. For each new field, determine its data type, estimate its importance (critical/moderate/low), and provide a sample value.
* `missingFields`: Fields present in the guide but not in the SERP data.
* `typeChanges`: Fields where the data type in the response differs from the guide.
4. Assign a `confidence` score (0-100) based on how closely the feature's structure matches the documentation.
* 100: Perfect match.
* 80-99: Minor, non-breaking changes (e.g., new optional fields).
* 60-79: Several changes, some of which may require parser updates.
* <60: Significant or breaking changes detected.
5. If a feature is not in the guide at all, classify it as `undocumented`.
Your output must be a structured JSON object that matches the `StructureAnalysis` schema.
Parsing Guide:
{{{parsingGuide}}}
Discovered Features Data:
{{{discoveredFeatures}}}
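The field-level diff at the heart of Stage 2 can be sketched as a set comparison between documented fields and a feature's sample structure. The scoring weights below are illustrative assumptions chosen to land in the bands the prompt describes, not the production heuristic.

```python
def compare_structure(documented_fields, sample):
    """Diff documented fields against a sample structure and derive a rough
    0-100 confidence score (sketch of Stage 2; weights are assumptions)."""
    sample_fields = set(sample)
    documented = set(documented_fields)
    new_fields = sorted(sample_fields - documented)
    missing_fields = sorted(documented - sample_fields)
    if not new_fields and not missing_fields:
        confidence = 100  # perfect match
    else:
        # Penalize missing fields more heavily than new ones, since a missing
        # field is likelier to break an existing parser (assumption).
        confidence = max(0, 100 - 5 * len(new_fields) - 15 * len(missing_fields))
    return {
        "newFields": new_fields,
        "missingFields": missing_fields,
        "confidence": confidence,
    }
```

For example, a feature documented with `title`, `url`, and `rating` whose sample contains `title`, `url`, and a new `snippet` field would score 80: one new field (minus 5) and one missing field (minus 15).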
Stage 3: Cross-Validation Prompt
Validates findings from previous stages, eliminates false positives, assigns priority levels (urgent/high/medium/low), and generates specific actionable recommendations.
serp-audit-stage3-validation.txt

You are a cross-validation and prioritization agent. Your task is to review the findings from the previous analysis stage, eliminate false positives, assign priority, and generate actionable recommendations.
You will be given the `StructureAnalysis` JSON. Your job is to act as a senior engineer reviewing the initial analysis.
Instructions:
1. **Eliminate False Positives**: Review `missingFields`. If a field is known to be optional (e.g., `rating`, `description`, `price` on an organic result), do not classify its absence as a structural change. Mark this as a `false_positive`.
2. **Verify New Features**: Confirm that `undocumented` features are truly new and not just a variation of an existing feature.
3. **Assign Priority**: For each validated finding (new feature or structural change), assign a priority based on its potential impact:
* `urgent`: Breaking changes in critical features like `organic`, `knowledge_graph`, or `ai_overview`.
* `high`: A new, important feature or significant changes to a common one.
* `medium`: Minor changes or a new niche feature.
* `low`: Changes to optional fields or metadata.
4. **Generate Action Items**: For each validated finding, create a concise list of actionable steps for the engineering team (e.g., "Add `new_field` to the OrganicResult parser," "Update `SERP_PARSING_GUIDE.md` with the new `example_feature` structure").
5. **Calculate Overall Health**: Based on the number and priority of issues, determine the `overallHealth` status (`excellent`, `good`, `needs_attention`, `critical`).
Your output must be a structured JSON object that matches the `CrossValidation` schema.
Structure Analysis Data:
{{{structureAnalysis}}}
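The triage rules in the Stage 3 prompt can be sketched as two passes: filter out absences of known-optional fields as false positives, then roll the remaining findings up into an overall health status. The optional-field list comes from the prompt's example; the health thresholds and finding shape are assumptions for illustration.

```python
# Fields whose absence is expected on some results (from the prompt's example).
OPTIONAL_FIELDS = {"rating", "description", "price"}

def triage(findings):
    """Sketch of Stage 3: drop false positives, then derive overall health
    from the validated findings' priorities (thresholds are assumptions)."""
    validated, false_positives = [], []
    for f in findings:
        if f.get("kind") == "missing_field" and f.get("field") in OPTIONAL_FIELDS:
            false_positives.append(f)  # optional field absent: not a change
        else:
            validated.append(f)
    urgent = sum(1 for f in validated if f.get("priority") == "urgent")
    high = sum(1 for f in validated if f.get("priority") == "high")
    if urgent:
        health = "critical"
    elif high:
        health = "needs_attention"
    elif validated:
        health = "good"
    else:
        health = "excellent"
    return {"validated": validated, "falsePositives": false_positives,
            "overallHealth": health}
```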
Stage 4: Orchestrator Prompt
Synthesizes all stage results into a comprehensive executive report with detailed findings, parsing recommendations, implementation complexity, and action items.
serp-audit-orchestrator.txt

You are the final orchestrator agent. Your task is to synthesize all validated findings from the previous stages into a single, comprehensive, and human-readable executive report.
You will be given the `CrossValidation` JSON, which contains the final, validated findings.
Instructions:
1. **Create an Executive Summary**: Generate a high-level summary including `totalFeaturesAnalyzed`, `newFeaturesCount`, `structuralChangesCount`, `highPriorityIssuesCount`, `overallHealth`, and the `analysisTimestamp`.
2. **Detail New Features**: For each `new_feature` finding, create a detailed entry. Include its `type`, `confidence`, `priority`, `location`, a breakdown of its `structure` (key fields, data types, sample values), a clear `parsingRecommendation`, estimated `implementationComplexity`, and a list of `actionItems`.
3. **Detail Structural Changes**: For each `structural_change` finding, create a detailed entry. Include its `type`, `confidence`, `priority`, a list of specific `changes` (new field, removed field, type change), describe the `affectedParsing` logic, and list the `actionItems`.
4. **List Verified Features**: Create a simple list of all features that were `verified_match` with no issues.
5. **Compile Metadata**: Include metadata about the analysis, such as the `stagesCompleted`, `totalAnalysisTime`, and the `modelUsed`.
Your output must be a structured JSON object that strictly conforms to the `EnhancedAuditReport` schema. Ensure all fields are populated accurately based *only* on the provided input data.
Validated Findings Data:
{{{validatedFindings}}}
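The executive-summary roll-up the orchestrator performs can be sketched as a few counts over the validated findings. This assumes the Stage 3 output carries a `validated` list and an `overallHealth` value, and that each finding has `category` and `priority` fields as named in the prompts above; anything beyond those names is illustrative.

```python
from datetime import datetime, timezone

def build_executive_summary(cross_validation):
    """Sketch of the Stage 4 summary roll-up over validated findings."""
    findings = cross_validation["validated"]
    return {
        "totalFeaturesAnalyzed": cross_validation.get(
            "totalFeaturesAnalyzed", len(findings)),
        "newFeaturesCount": sum(
            1 for f in findings if f["category"] == "new_feature"),
        "structuralChangesCount": sum(
            1 for f in findings if f["category"] == "structural_change"),
        "highPriorityIssuesCount": sum(
            1 for f in findings if f["priority"] in ("urgent", "high")),
        "overallHealth": cross_validation["overallHealth"],
        "analysisTimestamp": datetime.now(timezone.utc).isoformat(),
    }
```

The detailed new-feature and structural-change sections of the report are then built per finding, but the summary above is what surfaces in the dashboard cards.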