An AI visibility audit tests whether AI platforms -- ChatGPT, Perplexity, Gemini, Google AI Overviews -- know your brand exists, describe it accurately, and recommend it to shoppers. Most ecommerce brands have never run one. The result: competitors get mentioned while they stay invisible. This 6-point checklist gives you a structured, repeatable process to audit your AI presence in under an hour.
AI referral traffic to ecommerce sites grew 805% year-over-year through late 2025 (Adobe Analytics). That traffic comes from AI platforms recommending specific brands in response to shopper queries. If AI platforms do not mention your brand, that traffic flows to competitors who do get mentioned.
Analytics Agent for Shopify automates this audit through the AI Ranking Tracker and AI brand mentions monitor. But you can run the audit manually first -- this guide shows you how to do both. By the end, you will have a scored assessment of your AI visibility with specific actions to fix every gap.
The complete AI search optimization guide covers the broader strategy. This checklist is the diagnostic step -- figure out where you stand before you start optimizing.
Why you need an AI visibility audit
Most ecommerce brands are invisible to AI. They have invested years in Google rankings, social media presence, and email marketing. But when a shopper asks ChatGPT "what is the best [product category] for [use case]," those brands do not appear.
This is not a future problem. It is happening now:
- 50% of Google queries now trigger AI Overviews that answer questions before the user clicks any result (BrightEdge, 2026)
- ChatGPT shopping features recommend products directly to users, citing brands with strong structured data and review presence
- Perplexity has become a primary research tool for considered purchases, citing sources explicitly in every answer
- AI shopping agents are emerging on Shopify and other platforms, using structured data to evaluate and recommend products programmatically
The brands that appear in these AI responses capture a growing share of high-intent traffic. The brands that do not appear lose market share silently -- because they do not even know they are missing.
An AI visibility audit answers three questions:
- Do AI platforms know your brand exists?
- Do they describe you accurately?
- Do they recommend you for relevant queries?
If the answer to any of these is "no" or "I don't know," the audit is overdue.
Action: Before reading further, open ChatGPT and search "best [your product category]." If your brand does not appear, keep reading.
Checkpoint 1 -- Direct brand mention test
What you are testing: Do AI platforms recognize your brand when users ask about it specifically?
This is the baseline check. If AI platforms cannot describe your brand when users ask directly, they certainly will not recommend you in competitive category queries.
How to run it
Query each platform with these four prompts (replace [Brand] with your brand name):
- "What is [Brand]?"
- "Tell me about [Brand] [product category]"
- "Is [Brand] good?"
- "[Brand] reviews"
Test across four platforms:
- ChatGPT (chatgpt.com)
- Perplexity (perplexity.ai)
- Google Gemini (gemini.google.com)
- Google Search (check the AI Overview at the top of results)
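The four prompts and four platforms above form a 16-cell test matrix. A short script can generate the worksheet so no combination gets skipped. This is a minimal sketch; the brand name and category are hypothetical placeholders to substitute with your own.

```python
from itertools import product

# Hypothetical brand and category -- replace with your own.
BRAND = "Acme Coffee"
CATEGORY = "pour-over kettles"

PROMPTS = [
    "What is {brand}?",
    "Tell me about {brand} {category}",
    "Is {brand} good?",
    "{brand} reviews",
]
PLATFORMS = ["ChatGPT", "Perplexity", "Gemini", "Google AI Overview"]

def build_worksheet(brand, category):
    """One row per prompt-platform pair, ready to paste into a spreadsheet.

    Scoring happens per platform (max 5) after you run all four prompts,
    so the notes field here just captures what each response said.
    """
    rows = []
    for template, platform in product(PROMPTS, PLATFORMS):
        rows.append({
            "platform": platform,
            "prompt": template.format(brand=brand, category=category),
            "response_notes": "",  # fill in by hand after querying
        })
    return rows

worksheet = build_worksheet(BRAND, CATEGORY)
print(len(worksheet))  # 16 prompt-platform combinations
```

Run the prompts manually on each platform, then score each platform once against the table below.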
How to score it
For each platform, record:
| Criteria | Score |
|---|---|
| Brand is mentioned and recognized | +2 |
| Description is accurate | +1 |
| Products/services are correctly identified | +1 |
| No significant inaccuracies | +1 |
| Maximum per platform | 5 |
Total possible: 20 points (5 per platform x 4 platforms)
- 16-20: Strong brand recognition across AI platforms
- 10-15: Partial recognition -- some platforms know you, others do not
- Below 10: AI platforms barely know you exist. This is your top priority.
What to do if you fail
If AI platforms do not recognize your brand, the problem is usually entity clarity. Your brand lacks sufficient web presence for AI training data to include it. Fix this by:
- Ensuring your homepage has complete Organization schema
- Publishing a comprehensive "About" page
- Getting listed on relevant directories, review sites, and industry publications
- Maintaining consistent brand naming across all platforms
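The Organization schema item on that list is the highest-leverage fix. Here is a minimal sketch of the markup, built as a Python dict and serialized to JSON-LD; every value (brand name, URLs, description) is a hypothetical placeholder.

```python
import json

# Hypothetical brand details -- replace with your own.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Coffee",
    "url": "https://www.acmecoffee.example",
    "logo": "https://www.acmecoffee.example/logo.png",
    "description": "Acme Coffee designs pour-over kettles and brewing gear.",
    "sameAs": [
        "https://www.instagram.com/acmecoffee",
        "https://www.facebook.com/acmecoffee",
    ],
}

# Embed the output in your homepage <head> inside
# <script type="application/ld+json"> ... </script>
json_ld = json.dumps(organization, indent=2)
print(json_ld)
```

The `sameAs` links matter: they tie your website entity to the social profiles AI platforms already know about, which reinforces brand recognition.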
Checkpoint 2 -- Product discovery test
What you are testing: When shoppers search for your product category (without naming your brand), do AI platforms recommend you?
This is the higher-value test. Branded queries mean the shopper already knows you. Category queries mean they are deciding between options -- and AI is influencing that decision.
How to run it
Query each platform with these prompts:
- "Best [your product category] 2026"
- "What [product category] should I buy for [your primary use case]?"
- "Top [product category] for [your target customer type]"
- "Recommend a [product category] for [common need your product serves]"
How to score it
For each query-platform combination (4 queries x 4 platforms = 16 combinations):
| Criteria | Score |
|---|---|
| Brand mentioned at all | +1 |
| Listed as a top recommendation (first three mentioned) | +1 |
| Specific product mentioned by name | +1 |
| Maximum per combination | 3 |
Total possible: 48 points
- 35-48: Strong product discovery presence. AI platforms actively recommend you.
- 20-34: Moderate presence. You appear for some queries but not consistently.
- Below 20: Weak product discovery. Competitors are capturing this traffic.
Most brands score below 15 on their first product discovery test. If you score above 25, you are ahead of most ecommerce competitors.
Checkpoint 3 -- Competitor comparison
What you are testing: When AI platforms recommend products in your category, which competitors get mentioned instead of (or alongside) you?
How to run it
Use the same category queries from Checkpoint 2 but focus on the competitive landscape:
- List every brand mentioned in each AI response
- Count how many times each competitor appears across all queries and platforms
- Note which competitors are described more favorably
- Record which competitors appear as "first mention" (the first brand the AI names)
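The tallying steps above are tedious by hand across 16 responses. A small sketch with `collections.Counter` does the counting; the brand names and response lists below are hypothetical.

```python
from collections import Counter

# Hypothetical audit notes: brands in order of mention, one list per AI response.
responses = [
    ["CompetitorA", "Acme Coffee", "CompetitorB"],  # e.g. ChatGPT, query 1
    ["CompetitorA", "CompetitorC"],                 # e.g. Perplexity, query 1
    ["Acme Coffee", "CompetitorA"],                 # e.g. Gemini, query 2
]

# Total mentions across all queries and platforms.
mentions = Counter(brand for r in responses for brand in r)

# "First mention" counts: the first brand each AI names.
first_mentions = Counter(r[0] for r in responses if r)

print(mentions.most_common())
print(first_mentions.most_common())
```

Sorting by `most_common()` immediately shows which competitor dominates and whether your brand ever leads a response.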
How to score it
| Scenario | Score |
|---|---|
| Your brand mentioned more than any competitor | +3 |
| Your brand mentioned equally with top competitor | +2 |
| Your brand mentioned but less than competitors | +1 |
| Your brand not mentioned; competitors dominate | 0 |
Score per query-platform combination. Total possible: 48 points.
The competitive comparison reveals your "Share of Model" -- the percentage of AI recommendations that include your brand versus competitors. A score below 15 means competitors have an established AI presence and you have a gap to close.
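The Share of Model calculation itself is simple. A worked sketch, with hypothetical results for the 16 query-platform combinations:

```python
# Hypothetical results: one flag per query-platform combination (16 total),
# True where the AI response mentioned your brand.
brand_appeared = [
    True, False, False, True,
    False, True, False, False,
    False, True, False, False,
    False, False, True, False,
]

# Share of Model: percentage of AI responses that include your brand.
share_of_model = 100 * sum(brand_appeared) / len(brand_appeared)
print(f"Share of Model: {share_of_model:.1f}%")  # 5 of 16 responses
```

Tracking this number over time is more useful than any single audit score, because it moves when competitors gain or lose ground.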
What this tells you
- Specific competitors dominating: Study what they have that you do not (usually: more reviews, better structured data, stronger content, more third-party mentions)
- Niche competitors appearing: Smaller brands sometimes outperform larger ones in AI results because they have better entity clarity for specific queries
- Different competitors per platform: ChatGPT, Perplexity, and Gemini often recommend different brands for the same query -- each platform draws from different data sources
Checkpoint 4 -- Accuracy check
What you are testing: When AI platforms mention your brand, is the information correct?
Inaccurate AI mentions are worse than no mention at all. A customer who reads incorrect pricing, discontinued features, or wrong product descriptions in ChatGPT develops false expectations that damage trust when they visit your site.
How to run it
From your Checkpoint 1 and Checkpoint 2 results, review every mention of your brand for:
- Pricing accuracy: Does the AI quote correct prices?
- Product descriptions: Are features and capabilities described correctly?
- Availability: Does the AI say products are available that are actually discontinued (or vice versa)?
- Brand positioning: Is the AI's characterization of your brand fair and accurate?
- Competitor comparisons: If the AI compares you to competitors, are the claims accurate?
How to score it
For each inaccuracy found:
| Severity | Deduction |
|---|---|
| Critical inaccuracy (wrong pricing, discontinued products listed as available) | -3 |
| Moderate inaccuracy (outdated features, incomplete description) | -2 |
| Minor inaccuracy (slightly off positioning, outdated stat) | -1 |
Start at 20 points. Deduct for each inaccuracy. Minimum score: 0.
- 16-20: Accurate representation. Minor issues only.
- 10-15: Some concerning inaccuracies. Needs attention.
- Below 10: Serious accuracy problems. Prioritize corrections.
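The deduction scheme above translates directly into a scoring helper. A minimal sketch, with hypothetical findings in the example call:

```python
# Point deductions per severity, matching the table above.
DEDUCTIONS = {"critical": 3, "moderate": 2, "minor": 1}

def accuracy_score(inaccuracies):
    """Start at 20, deduct per inaccuracy by severity, floor at 0."""
    score = 20 - sum(DEDUCTIONS[severity] for severity in inaccuracies)
    return max(score, 0)

# Hypothetical findings: one wrong price, two outdated feature lists.
print(accuracy_score(["critical", "moderate", "moderate"]))  # 20 - 7 = 13
```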
How to fix inaccuracies
AI platforms pull information from your website, third-party sites, and their training data. You cannot edit AI responses directly, but you can improve the source material:
- Update your website with current, accurate information
- Complete your structured data markup with correct Product schema (price, availability, description)
- Ensure third-party listings (Google Business Profile, review sites, directories) have current information
- Publish clear, factual content that AI platforms can reference
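For the structured data fix, the Product schema should carry exactly the fields AI platforms most often get wrong: price and availability. A minimal sketch with hypothetical product values:

```python
import json

# Hypothetical catalog entry -- replace with real product data.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Gooseneck Kettle",
    "description": "1L variable-temperature pour-over kettle.",
    "image": "https://www.acmecoffee.example/kettle.jpg",
    "brand": {"@type": "Brand", "name": "Acme Coffee"},
    "offers": {
        "@type": "Offer",
        "price": "79.00",
        "priceCurrency": "USD",
        # Use https://schema.org/OutOfStock or /Discontinued when true --
        # stale availability here is what produces wrong AI answers.
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_schema, indent=2))
```

Keeping `price` and `availability` synchronized with your storefront is the single most direct lever on AI accuracy.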
Checkpoint 5 -- Citation source audit
What you are testing: Which of your pages get cited by AI platforms, and which important pages are being ignored?
AI platforms do not cite your entire site. They cite specific pages. Understanding which pages earn citations -- and which do not -- reveals where to focus content investment.
How to run it
From all your test queries, compile a list of every page on your domain that was cited by any AI platform. Then compare against the pages you want to be cited:
| Page Type | Cited? | Notes |
|---|---|---|
| Homepage | | |
| Top product pages (5-10) | | |
| Category/collection pages | | |
| Key blog posts | | |
| About page | | |
| FAQ page | | |
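Compiling the cited-page list is easier with a filter that keeps only your domain and groups citations by site section. A minimal sketch; the domain and URLs are hypothetical.

```python
from urllib.parse import urlparse

OWN_DOMAIN = "acmecoffee.example"  # hypothetical -- use your domain

# Hypothetical citation URLs collected from AI responses.
cited_urls = [
    "https://www.acmecoffee.example/blogs/news/pour-over-guide",
    "https://www.acmecoffee.example/products/gooseneck-kettle",
    "https://competitor.example/review",
    "https://www.acmecoffee.example/",
]

def own_pages(urls, domain):
    """Keep citations on your domain, bucketed by first path segment."""
    buckets = {}
    for url in urls:
        parsed = urlparse(url)
        if not parsed.netloc.endswith(domain):
            continue  # third-party citation, not one of your pages
        segment = parsed.path.strip("/").split("/")[0] or "homepage"
        buckets.setdefault(segment, []).append(url)
    return buckets

print(own_pages(cited_urls, OWN_DOMAIN))
```

The bucket keys ("products", "blogs", "homepage") map directly onto the page-type rows in the table above.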
How to score it
| Criteria | Score |
|---|---|
| Homepage cited at least once | +3 |
| At least one product page cited | +3 |
| Blog content cited for informational queries | +3 |
| Multiple page types cited (not just blog) | +3 |
| Pages cited with direct source links (not just brand mention) | +3 |
| Total possible | 15 |
- 12-15: Diverse citation sources. AI platforms pull from multiple parts of your site.
- 7-11: Narrow citation base. Typically only blog content or only the homepage.
- Below 7: Minimal citations. Most of your site is invisible to AI platforms.
What to do with the data
- Product pages not cited: Usually a structured data problem. Complete Product schema increases citation probability. Run a JSON-LD audit to check.
- Blog cited but products not: Your content strategy is working but needs better product integration. Add product recommendations and structured data to blog posts.
- Only homepage cited: Your brand has authority but your deep content is not structured for AI extraction. Add direct-answer paragraphs and question-formatted headings to key pages.
- Nothing cited: Start with the fundamentals. Complete structured data, publish comprehensive content, and build external citations.
Checkpoint 6 -- Technical foundation
What you are testing: Does your site have the technical infrastructure that AI platforms need to discover, understand, and cite your content?
This checkpoint evaluates the backend elements that determine whether AI platforms can even process your content effectively.
Schema and structured data
Check each element:
| Element | Status |
|---|---|
| Organization schema on homepage (name, URL, logo, description, sameAs social links) | |
| Product schema on product pages (name, description, image, price, availability, brand, reviews) | |
| BreadcrumbList schema on all pages | |
| FAQ schema on pages with 3+ Q&A pairs | |
| Article schema on blog posts | |
Score 3 points for each element fully implemented. Total possible: 15 points.
Analytics Agent's JSON-LD Audit automates this check across your entire catalog, scoring each page and auto-fixing common errors. Brands that fix structured data gaps typically see AI citation improvements within 4-8 weeks.
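To spot-check a single page by hand, you can extract its JSON-LD blocks with the standard library and list the schema types it declares. A minimal sketch; the sample HTML is hypothetical, and in practice you would feed in the fetched, rendered page source.

```python
import json
from html.parser import HTMLParser

class JsonLdCollector(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buffer = []
        self.raw_blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True
            self._buffer = []

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            self.raw_blocks.append("".join(self._buffer))

    def handle_data(self, data):
        if self._in_jsonld:
            self._buffer.append(data)

# Hypothetical page source.
html = '''<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization", "name": "Acme Coffee"}
</script>
</head><body></body></html>'''

parser = JsonLdCollector()
parser.feed(html)
blocks = [json.loads(raw) for raw in parser.raw_blocks]
types = [block.get("@type") for block in blocks]
print(types)  # ['Organization']
```

Comparing the extracted types against the checklist table above shows at a glance which schema elements a page is missing.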
Entity markup
Check that your brand entity is clearly defined:
| Element | Status |
|---|---|
| Brand name consistent across all schema markup | |
| sameAs property links to all official social profiles | |
| Product brand field references your Organization entity | |
| Brand description included in Organization schema | |
Score 2 points per element. Total possible: 8 points.
Content structure for AI
Review your top 10 pages for AI-friendly structure:
| Element | Status |
|---|---|
| Direct-answer paragraphs (30-60 words) after main headings | |
| Question-formatted H2s where natural | |
| Content freshness signals ("Last updated" dates, current year references) | |
| Your brand name in the first 100 words of the page | |
Score 2 points per element. Total possible: 8 points.
How to score Checkpoint 6
Total possible: 31 points (15 schema + 8 entity + 8 content structure)
- 25-31: Strong technical foundation. AI platforms can effectively process your content.
- 15-24: Gaps in technical foundation. Prioritize structured data for Shopify and content structure fixes.
- Below 15: Significant technical debt. This likely explains low scores on earlier checkpoints.
Action: If you score below 15 on Checkpoint 6, start here before working on the other checkpoints. Technical foundations enable everything else.
Scoring your audit results
Add up your scores from all six checkpoints:
| Checkpoint | Your Score | Maximum |
|---|---|---|
| 1. Brand mention test | | 20 |
| 2. Product discovery test | | 48 |
| 3. Competitor comparison | | 48 |
| 4. Accuracy check | | 20 |
| 5. Citation source audit | | 15 |
| 6. Technical foundation | | 31 |
| Total | | 182 |
Overall assessment
| Score Range | Rating | Priority Actions |
|---|---|---|
| 140-182 | Excellent | Maintain and expand query coverage. Monitor competitors weekly. |
| 100-139 | Good | Fix specific gaps. Likely need better structured data or content freshness. |
| 60-99 | Needs work | Focus on checkpoints where you scored lowest. Build from technical foundation up. |
| 30-59 | Poor | Start with Checkpoint 6 (technical foundation), then Checkpoint 1 (brand recognition). |
| Below 30 | Critical | AI platforms do not know you exist. Prioritize structured data, entity clarity, and content publishing. |
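The score bands translate into a one-line lookup if you want to track audits in a spreadsheet or script. A minimal sketch of the mapping:

```python
def overall_rating(total):
    """Map a 0-182 audit total to the assessment bands above."""
    if total >= 140:
        return "Excellent"
    if total >= 100:
        return "Good"
    if total >= 60:
        return "Needs work"
    if total >= 30:
        return "Poor"
    return "Critical"

print(overall_rating(112))  # "Good"
```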
Pass/fail by checkpoint
For a quick assessment, use this pass/fail framework:
- Checkpoint 1: Pass = 12+ points. Your brand is recognized.
- Checkpoint 2: Pass = 20+ points. You appear in product discovery queries.
- Checkpoint 3: Pass = 15+ points. You compete meaningfully with rivals.
- Checkpoint 4: Pass = 16+ points. Information about you is accurate.
- Checkpoint 5: Pass = 7+ points. Multiple page types earn citations.
- Checkpoint 6: Pass = 15+ points. Technical foundations are in place.
If you fail three or more checkpoints, AI visibility should become a quarterly priority. If you fail five or more, it needs immediate attention.
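The pass/fail framework is easy to automate across repeat audits. A minimal sketch; the checkpoint keys and example scores are hypothetical labels for the six checkpoints above.

```python
# Pass thresholds per checkpoint, matching the list above.
PASS_THRESHOLDS = {
    "brand_mention": 12,
    "product_discovery": 20,
    "competitor_comparison": 15,
    "accuracy": 16,
    "citations": 7,
    "technical": 15,
}

def failed_checkpoints(scores):
    """Return the checkpoints scoring below their pass threshold."""
    return [name for name, cutoff in PASS_THRESHOLDS.items()
            if scores.get(name, 0) < cutoff]

# Hypothetical first-audit scores.
scores = {"brand_mention": 14, "product_discovery": 9,
          "competitor_comparison": 6, "accuracy": 18,
          "citations": 3, "technical": 12}
failed = failed_checkpoints(scores)
print(failed)  # four failures -> AI visibility is a quarterly priority
```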
FAQ
How often should I run an AI visibility audit?
Run a full audit quarterly. AI platform responses change frequently as models are updated and new content enters their training data. Between full audits, use automated tracking -- Analytics Agent's AI Ranking Tracker provides weekly snapshots so you can catch changes between audits.
Can I automate this entire audit?
Most of it. Tools like Analytics Agent automate checkpoints 1-3 (brand mentions, product discovery, competitor tracking) and checkpoint 5 (citation sources). Checkpoint 4 (accuracy) requires human judgment to verify facts. Checkpoint 6 (technical foundation) can be partially automated with JSON-LD audits and schema validation tools.
Which checkpoint matters most for ecommerce?
Checkpoint 6 (technical foundation) matters most because it enables everything else. Without complete structured data and clear entity markup, AI platforms lack the signals to mention your brand. Fix Checkpoint 6 first, then work up through the other checkpoints.
How does this audit relate to my AI visibility score?
An AI visibility score is a single composite number. This audit breaks that score into six diagnostic dimensions. Think of the score as your blood pressure reading and the audit as the full health checkup -- both are useful, but the audit tells you what to fix. Run the audit first, implement fixes, then track progress with an ongoing AI visibility score.
What if my competitors score poorly too?
That is an opportunity. If your entire product category has low AI visibility, the first brand to invest in structured data, entity clarity, and content optimization will capture disproportionate AI recommendations. Early movers in AI visibility gain a compounding advantage -- the more AI platforms cite you, the more data reinforces your brand entity, making future citations more likely.
What to do next
You now have a scored, diagnostic view of your AI visibility. The audit identifies exactly where to focus -- no guesswork.
If you scored below 100, start with your lowest-scoring checkpoint. Technical foundations (Checkpoint 6) enable improvements everywhere else, so that is usually the right starting point.
If you scored above 100, focus on the specific gaps. Maybe your brand is recognized (Checkpoint 1) but your products are not discovered (Checkpoint 2). Or maybe you appear often (Checkpoint 2) but with inaccurate information (Checkpoint 4).
To automate ongoing monitoring, run an AI Ranking Report in Analytics Agent. It tracks your brand across ChatGPT, Perplexity, Gemini, and AI Overviews weekly -- giving you the data to measure whether your fixes are working. Pair it with the AI brand mentions monitor to catch accuracy issues as they appear.
Run the audit. Fix the gaps. Measure the results. Repeat quarterly.