The SEO game has changed, and most of us are still playing by the old rules.
We're checking keyword rankings, obsessing over SERP positions, and running the same competitive analyses we've done for years. Meanwhile, a growing share of our audience never sees those search results.
They're getting their answers from AI Overviews. From ChatGPT. From Perplexity. And in those AI-generated responses, your competitors might be dominating the conversation while you're completely invisible.
Here's what makes this particularly tricky: you can rank #1 for a keyword and still be nowhere in the AI answer. You can have better content, more backlinks, and stronger domain authority, and still lose the visibility battle before a user ever clicks.
This guide will show you how to analyze which competitors are winning in AI search, why they're appearing when you're not, and what you can actually do about it.
Traditional competitive SEO analysis was straightforward. You'd identify your top competitors, see which keywords they ranked for that you didn't, analyze their backlink profiles, and reverse-engineer their content strategy. The goal was simple: outrank them.
But AI-powered search doesn't work that way.
When someone asks ChatGPT, "What's the best project management tool for remote teams?" or Google serves an AI overview on "how to choose accounting software," these systems aren't ranking pages 1 through 10. They're synthesizing information from a handful of sources they've already decided to trust, then presenting that synthesis as a single answer.
This creates three new competitive dynamics:
Visibility is binary. In traditional search, being #4 instead of #3 matters, but you're still visible. In AI search, you're either cited or you're not. There's no "page 2" to fall back on.
Sources are concentrated. AI systems don't pull from thousands of pages. They reuse a tight set of sources across many queries. Once a page or domain enters that trusted circle, it shows up repeatedly. If you're outside it, you're invisible across dozens of related prompts.
Mentions matter even without clicks. Even when AI doesn't cite your URL directly, being mentioned as an option influences perception. If five prompts about your category all mention the same three competitors and never mention you, that's a visibility problem that won't show up in any rank tracker.
The shift from "who ranks higher" to "who gets cited more often" requires a completely different analytical approach.
The foundation of AI visibility analysis isn't keywords; it's prompts.
You need to know which questions your potential customers are actually asking AI systems, and which of those questions generate answers in your space.
These are the query types where AI engines most commonly generate direct answers:
Recommendation prompts: "Best X for Y" or "Top X tools for Y."
Comparison prompts: "X vs Y" or "Difference between X and Y."
Decision support: "Should I use X or Y?" or "Which X is right for me if..."
Explainer prompts: "How does X work?" or "What is X and why does it matter?"
Implementation questions: "How to set up X" or "How to get started with X."
The reason these matter: they're the prompts where AI systems feel confident enough to provide substantive answers rather than just returning links.
Start with 20-30 prompts that represent real questions in your category. Make sure you include:
Direct product/service comparisons
Category-level "best of" questions
Common implementation or setup questions
Pain point or problem-framing questions
For example, if you're in the email marketing space, your prompt library might include:
"Best email marketing platform for small businesses"
"Mailchimp vs ConvertKit: Which is better?"
"How to choose an email marketing tool."
"What's the easiest email marketing software for beginners?"
"Email marketing tools with best automation"
Run your initial prompts and note which ones generate AI responses (AI Overviews in Google, direct answers in ChatGPT or Perplexity). Focus your analysis on prompts that consistently trigger AI-generated content.
Some prompts won't trigger AI answers at all; they'll just return traditional search results. That's fine. Those aren't your priority for this analysis.
Now comes the detailed work: running your prompts across multiple AI platforms and logging your findings.
Platforms to test:
At a minimum, test these three: Google AI Overviews, ChatGPT, and Perplexity.
If you have the resources, also test Claude, Gemini, and any vertical-specific AI tools in your industry.
What to log for each prompt
Create a simple spreadsheet with these columns:
Prompt text (exact wording)
Platform (Google AIO, ChatGPT, etc.)
Brands mentioned (list all brands referenced)
URLs cited (specific pages linked or referenced)
Answer framing (how the answer is structured: list format, comparison table, or narrative explanation)
Your brand mentioned? (yes/no)
Competitor positioning (are they recommended, mentioned neutrally, or compared?)
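If you'd rather script the log than maintain a spreadsheet by hand, the same columns map directly onto a CSV file. A minimal sketch, assuming the column set above (the file name and helper function are illustrative):

```python
import csv
from pathlib import Path

LOG_FILE = Path("ai_visibility_log.csv")  # illustrative file name

# Columns mirror the spreadsheet described above.
FIELDS = [
    "prompt", "platform", "brands_mentioned", "urls_cited",
    "answer_framing", "our_brand_mentioned", "competitor_positioning",
]

def log_observation(row: dict) -> None:
    """Append one prompt/platform observation to the CSV log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Example observation, transcribed by hand after running one prompt:
log_observation({
    "prompt": "best CRM for real estate agents",
    "platform": "Google AIO",
    "brands_mentioned": "HubSpot; Salesforce; BoomTown",
    "urls_cited": "https://www.forbes.com/...",  # paste the actual cited URLs
    "answer_framing": "list",
    "our_brand_mentioned": "no",
    "competitor_positioning": "recommended",
})
```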
Example of what you're looking for
Let's say you run "best CRM for real estate agents" across platforms:
Google AI Overview cites a Forbes article and mentions HubSpot, Salesforce, and BoomTown
ChatGPT references Salesforce, Zillow Premier Agent, and LionDesk, pulling from G2 reviews and industry blogs
Perplexity cites a NerdWallet comparison and a Real Estate Technology article, mentioning the same three as ChatGPT, plus Follow Up Boss
You'd note that Salesforce appears across all three. That's a signal. You'd also note that certain articles (the Forbes piece, the NerdWallet comparison) are being reused across platforms.
How much data do you need?
Run all your prompts (20-30) across at least three platforms. That gives you 60-90 data points. Patterns will start emerging clearly after the first 30-40.
You're not trying to be scientifically exhaustive here. You're trying to spot which competitors show up reliably and which sources AI systems trust enough to cite repeatedly.
After logging your results, the next step is to recognize patterns.
Some pages will show up over and over again. You might find:
The same "best X tools" article cited for five different prompts
A single comparison page referenced across multiple platforms
One comprehensive guide that AI systems keep pulling from
These are your high-leverage targets. If AI systems already trust these pages, being mentioned on them (or creating something similar) has a disproportionate impact.
Beyond specific URLs, notice which domains appear most often. You might discover:
A particular review site (G2, Capterra, TrustRadius) is heavily cited
Industry publications dominate certain types of queries
A competitor's blog shows up more than anyone else's
This shows where AI systems look for trustworthy information in your space.
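If you kept the log in the CSV format sketched earlier, surfacing these repeat citations takes only a few lines. A minimal sketch, assuming semicolon-separated brand and URL columns as in the logging example:

```python
import csv
from collections import Counter
from urllib.parse import urlparse

url_counts, domain_counts, brand_counts = Counter(), Counter(), Counter()

with open("ai_visibility_log.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Count every cited URL and its domain.
        for url in filter(None, (u.strip() for u in row["urls_cited"].split(";"))):
            url_counts[url] += 1
            domain_counts[urlparse(url).netloc] += 1
        # Count every brand mention.
        for brand in filter(None, (b.strip() for b in row["brands_mentioned"].split(";"))):
            brand_counts[brand] += 1

print("Most-cited pages:", url_counts.most_common(10))
print("Most-cited domains:", domain_counts.most_common(10))
print("Most-mentioned brands:", brand_counts.most_common(10))
```

The pages and domains at the top of those lists are the clusters described next.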
Often, you'll find that AI systems pull from a cluster of 3-5 sources for an entire category of prompts. For example:
All "best of" prompts in marketing automation cite the same two industry roundup articles
Every comparison prompt references a combination of G2, a specific Software Advice page, and one competitor's comparison chart
Implementation questions consistently pull from the same two educational blogs
The key insight: You're not competing against the entire internet. You're competing to be included in a very small set of sources that AI already trusts.
Once you clearly see this clustering, the path forward becomes much clearer.
Now that you know which competitors dominate AI visibility, you need to understand why.
Some competitors don't just appear occasionally; they're everywhere. When you see a brand mentioned in 60-70% of AI responses across varied prompts, that's a citation monopoly.
This usually happens because:
They're prominently featured in third-party lists. If a competitor appears in every "top 10" article, every comparison guide, and every industry roundup that AI systems trust, they benefit from compounding citations. AI doesn't have to trust the competitor's own domain; it trusts the sources that keep mentioning that competitor.
They have strong entity recognition. When a brand is consistently associated with specific categories across the web (in Wikipedia, in knowledge graphs, in structured data), AI systems connect that brand to relevant queries more reliably.
They've created citation-friendly content. Some competitors excel at creating the exact content formats AI systems prefer to cite: clear comparison tables, well-structured "best for" lists, scannable feature breakdowns.
Sometimes you're invisible not because competitors are stronger, but because you're structurally excluded from where AI looks.
Common structural gaps:
Missing from review platforms. If you're not on G2, Capterra, or TrustRadius (or if your listings are sparse), you're invisible when AI pulls from these sources.
Absent from industry roundups. If every major "best of" article in your space mentions 8-10 tools and you're never one of them, AI won't spontaneously add you to the list.
No comparison content. If competitors all have "us vs them" pages and you don't, AI can't cite you when answering comparison queries.
Coverage gaps. If competitors have dedicated pages for specific use cases, industries, or pain points that you've never written about, AI won't mention you for those subtopics, even if your product handles them well.
Sometimes your content exists, but AI systems don't trust it enough to cite it.
Signs of a trust deficit:
Your pages appear in traditional search results but never in AI citations
AI cites competitor pages that are objectively less comprehensive than yours
Your domain is cited for some topics but ignored for others
Possible causes:
Newer domain without established authority signals
Thin backlink profile compared to competitors
Lack of mentions on authoritative third-party sites
Content that's too promotional rather than educational
Missing structured data or entity markup
Look at the gaps between where competitors are cited and where you aren't:
Are they mentioned on third-party sites you're absent from? → Structural exclusion
Do they appear even when their content is weaker? → Entity recognition or citation momentum
Are their pages formatted differently from yours? → Format mismatch with AI preferences
Do they cover subtopics or use cases you haven't addressed? → Coverage gap
Usually, it's a combination of factors. But identifying the primary gap helps you prioritize where to focus.
In practice, you're usually dealing with one of three situations: one competitor dominates 60-70% of AI responses (a citation monopoly), you're missing from the places AI systems look (structural exclusion), or your content exists but isn't trusted enough to be cited (a trust deficit).
Analysis without action is just trivia. Here's how to convert what you've learned into actual improvements.
The fastest way to improve AI visibility is to get included in sources AI already trusts.
Get added to high-frequency citation sources
If your analysis shows that AI repeatedly cites the same Capterra page, the same Forbes roundup, or the same industry blog comparison, those are your targets.
Actions:
Update your profiles on cited review platforms (add screenshots, details, customer quotes)
Reach out to authors of frequently-cited articles with updates or new product information
Pitch to be included in upcoming roundups or comparisons
One inclusion can unlock visibility across dozens of prompts.
Create the missing comparison pages
If you noticed that "X vs Y" prompts consistently cite competitor comparison pages, and you don't have equivalent pages, create them.
Focus on:
[Your product] vs [top 3 competitors]
[Your product] vs [emerging alternatives]
"Best [category] for [specific use case]" pages where you should rank
Make these pages genuinely helpful, not just promotional. AI systems cite content that comprehensively answers the question, not thinly-veiled sales pitches.
Fill obvious coverage gaps
If competitors appear for certain use cases, industries, or implementation questions that you've never written about, create that content.
Example: If you noticed competitors dominate prompts like "best CRM for nonprofits" and you serve nonprofits well but have never created dedicated content about it, that's a straightforward gap to fill.
The next set of changes takes longer to pay off but builds sustainable AI visibility.
Enhance your most relevant pages
Find pages that should be cited for core prompts but aren't. Improve them by:
Adding comparison tables or structured feature breakdowns
Including specific use cases and examples
Updating with recent information (AI often prefers recency)
Improving factual density (more specific claims, data points, examples)
Adding clear section headers that match how people ask questions
Develop format diversity
AI systems cite different content formats for different query types:
List articles for "best of" prompts
Comparison tables for "X vs Y" prompts
Step-by-step guides for "how to" prompts
FAQ sections for common questions
If all your content is blog posts, you're missing citation opportunities. Diversify your content formats to match what AI is already citing.
Build entity signals
Help AI systems recognize your brand as a legitimate option in your category:
Get mentioned in industry news articles
Appear in podcast discussions or video content
Maintain updated Wikipedia presence (if applicable)
Ensure your structured data is comprehensive (see the sketch below)
Build consistent NAP (name, address, phone) across platforms
These won't show results immediately, but they create the conditions for sustained AI visibility.
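On the structured data point in particular, schema.org Organization markup is one concrete way to tell machines what your brand is and where else it lives on the web. A minimal sketch that generates the JSON-LD you'd embed in a page; every detail below is a placeholder to swap for your own:

```python
import json

# Placeholder organization details - replace with your own brand, URL, and profiles.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Email Co",
    "url": "https://www.example.com",
    "description": "Email marketing platform for small businesses.",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example",        # only if the page actually exists
        "https://www.linkedin.com/company/example",
        "https://www.g2.com/products/example",
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on your site.
print(json.dumps(organization, indent=2))
```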
Systematic third-party presence
Make it a regular practice to:
Update and maintain review platform profiles
Contribute expert perspectives to industry publications
Get featured in case studies and success stories
Appear in industry research reports
Build relationships with frequently-cited sources
If certain authors, publications, or platforms are repeatedly cited in your analysis, build ongoing relationships with them:
Become a go-to source for quotes or expert input
Share relevant data or research they might reference
Contribute guest content or collaborative pieces
Create citation-worthy research
Original data, surveys, or research studies get cited more reliably than opinion content. If you can regularly publish proprietary insights, you increase the chances AI systems will reference you as a source.
Traditional rank tracking won't tell you if your AI visibility is improving. You need different metrics.
Brand mention frequency
Count how many times your brand is mentioned in AI responses across your prompt library.
Track this weekly or monthly. Improvement looks like: your brand appearing in 40% of responses instead of 20%.
Citation share vs. competitors
For each prompt, note what percentage of AI platforms cite you vs. your top competitors.
Example: For "best email marketing tools," you might be cited in 1 out of 3 platforms, while Mailchimp appears in 3 out of 3. Track how this ratio changes over time.
Prompt coverage expansion
Track how many unique prompts in your library generate brand mentions.
If you started with mentions in 8 out of 30 prompts, and after three months, you're mentioned in 18 out of 30, that's meaningful progress.
Position in AI responses
When you are mentioned, note whether you're:
Listed first or early in recommendations
Included in a "best for [use case]" qualification
Mentioned neutrally alongside others
Relegated to a brief mention at the end
Being mentioned first or being the recommended option for specific scenarios is more valuable than simply being included in a list.
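To make the first three metrics concrete, here's a minimal sketch that computes them from the CSV log used throughout this guide; the brand names are placeholders:

```python
import csv
from collections import defaultdict

# Illustrative brand names - swap in your own brand and the competitor you track.
OUR_BRAND, COMPETITOR = "Example Email Co", "Mailchimp"

with open("ai_visibility_log.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

def mentioned(row: dict, brand: str) -> bool:
    """Naive substring check against the semicolon-separated brands column."""
    return brand.lower() in row["brands_mentioned"].lower()

# 1. Brand mention frequency: share of all logged responses that mention you.
mention_rate = sum(mentioned(r, OUR_BRAND) for r in rows) / len(rows)

# 2. Citation share per prompt: you vs. one competitor.
per_prompt = defaultdict(lambda: {"us": 0, "them": 0, "total": 0})
for r in rows:
    counts = per_prompt[r["prompt"]]
    counts["total"] += 1
    counts["us"] += mentioned(r, OUR_BRAND)
    counts["them"] += mentioned(r, COMPETITOR)

# 3. Prompt coverage: unique prompts that mention you on at least one platform.
covered = sum(1 for c in per_prompt.values() if c["us"] > 0)

print(f"Mention frequency: {mention_rate:.0%} of {len(rows)} responses")
print(f"Prompt coverage: {covered} of {len(per_prompt)} prompts")
for prompt, c in sorted(per_prompt.items()):
    print(f'  {prompt}: us {c["us"]}/{c["total"]}, {COMPETITOR} {c["them"]}/{c["total"]}')
```

Re-run this against a fresh log each month and keep the outputs side by side; the trend matters more than any single month's numbers.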
Monthly tracking is usually sufficient. AI citation patterns change more slowly than daily rankings.
Run your core prompt set (15-20 high-priority prompts) across platforms monthly. Track trends over 3-6 months to see if your changes are working.
Don't expect an overnight transformation. AI visibility compounds gradually.
Realistic progress:
Month 1-2: Little to no change (your changes haven't been picked up yet)
Month 3-4: Mentions start appearing for a few prompts
Month 6+: Consistent presence across multiple prompts and platforms
The key inflection point is when you cross from "occasionally mentioned" to "regularly cited." That's when you've entered the trusted source pool.
Pitfall 1: Assuming better content automatically wins
AI systems don't always cite the "best" content. They cite content from sources they already trust, in formats they can easily parse, covering topics with clear signals of authority.
Your comprehensive 5,000-word guide might be objectively better than a competitor's 800-word listicle, but if the listicle is on a domain AI trusts and formatted in a citation-friendly way, it'll get picked.
Pitfall 2: Ignoring third-party presence
Many brands focus exclusively on their own content and overlook that AI systems heavily weight third-party sources such as review sites, industry publications, and aggregator content.
You can publish perfect content all day, but if you're not present on the third-party sites AI already trusts, you'll stay invisible.
Pitfall 3: Expecting rank tracking tools to show progress
Your traditional SEO dashboard won't capture AI visibility improvements. You need to manually track citations or use emerging GEO-specific tools designed for this purpose.
Pitfall 4: Over-optimizing for one platform
Don't obsess over Google AI Overviews while ignoring ChatGPT and Perplexity. Different audiences use different tools, and citation patterns vary across platforms.
A balanced approach means appearing across multiple AI surfaces, not dominating just one.
Here's how to actually implement this:
Weeks 1-2: Research and analysis
Build your competitive prompt library (20-30 prompts)
Run prompts across Google AI Overviews, ChatGPT, and Perplexity
Log all brand mentions, citations, and patterns
Identify your top 3-5 competitors in AI visibility
Weeks 3-4: Diagnosis
Map which sources AI systems trust most in your space
Identify where competitors appear that you don't
Catalog your specific gaps (structural exclusions, coverage gaps, format mismatches)
List high-frequency citation sources you should target
Weeks 5-8: Quick wins
Update profiles on frequently-cited review platforms
Create 3-5 missing comparison or "best for" pages
Reach out to authors of high-frequency citation sources
Fill obvious coverage gaps
Weeks 9-12: Medium-term improvements
Enhance your 5 most important existing pages
Develop diverse content formats (tables, FAQs, structured comparisons)
Build out entity signals and structured data
Start relationship building with frequently cited publications
Ongoing: Measurement and iteration
Track brand mentions monthly across your prompt library
Monitor which new prompts generate citations
Adjust strategy based on what's working
Expand prompt coverage as you gain visibility
Competitive analysis in the age of AI search isn't about who ranks #1.
It's about who gets remembered, cited, and included in the answers users see before they ever think about clicking through to a website.
Your competitors who figure this out first will build citation momentum that gets harder to displace over time. AI systems reinforce what they already trust, which means early movers in building AI visibility have a compounding advantage.
The good news: most companies haven't adapted their competitive analysis yet. They're still tracking rankings and wondering why their traffic patterns are changing in ways their dashboards don't explain.
You now have a framework for identifying what they're missing and closing the visibility gap before it becomes permanent.
Start with your prompt library. Run the analysis. Find the patterns.
The competitors dominating AI visibility in your space aren't doing magic. They're just showing up in the right places, in the right formats, with the right signals.