Competitor Analysis: How to Spy on AI Conversations
You track your competitor's domain authority. You monitor their keyword rankings daily. But do you know what ChatGPT recommends when a customer asks, "Who is the best alternative to [Your Brand]?"
Traditional competitive intel doesn't capture what happens inside AI conversations. Your competitors might be winning AI recommendations without ranking #1 on Google. This is "Share of Model" (SoM): the percentage of AI responses where a brand gets recommended for a specific query. It's the generative engine optimization (GEO) equivalent of Share of Voice.
This guide shows you how to manually audit competitor visibility on ChatGPT and Perplexity, and when automation makes sense.
How to Audit Competitors on ChatGPT
ChatGPT acts like a consultative advisor. To understand your competitive position, you need to think like a confused buyer.
Your objective: Measure recall and recommendation. Does the AI know your competitor? Does it recommend them over you?
Start with these prompts:
The Discovery Prompt
"I am a [Persona, e.g., CFO of a mid-sized startup]. I need software to [Problem, e.g., automate expense reporting]. What are the top 3 options and why?"
The Direct Comparison
"Compare [Your Brand] vs. [Competitor Name] for an enterprise client. Which one is better for security?"
The Weakness Probe
"What are the most common complaints users have about [Competitor Name]?"
Pay attention to:
Language and positioning: What descriptive language does the AI use? (e.g., "Expensive but enterprise-grade" vs. "Budget-friendly but limited features")
Hallucinations: Is the AI inventing features the competitor doesn't have? This reveals a brand knowledge gap you can exploit by publishing clearer, more authoritative content.
Recommendation order: Being mentioned first matters. If you're consistently third, you're losing Share of Model.
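Recommendation order is easy to eyeball for one query, but tedious across dozens. As a minimal sketch, here's one way to score mention order from a saved response (the helper, brand names, and sample answer are all illustrative, not real audit data):

```python
def recommendation_rank(response_text: str, brands: list[str]) -> dict:
    """Return each brand's mention order (1 = mentioned first) in an AI
    response, or None if the brand never appears. Uses a simple
    case-insensitive substring match."""
    text = response_text.lower()
    positions = {b: text.find(b.lower()) for b in brands}
    mentioned = sorted((pos, b) for b, pos in positions.items() if pos != -1)
    ranks = {b: None for b in brands}
    for rank, (_, b) in enumerate(mentioned, start=1):
        ranks[b] = rank
    return ranks

# Example with a fabricated discovery-prompt answer
answer = "For expense automation, Expensify leads, followed by Ramp. Brex is a third option."
print(recommendation_rank(answer, ["Ramp", "Brex", "Expensify"]))
# → {'Ramp': 2, 'Brex': 3, 'Expensify': 1}
```

Run this over every saved response and a "consistently third" pattern becomes a number you can track week over week.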
How to Find Competitor Citation Sources on Perplexity
Now that you know what AI says, let's find out why. While ChatGPT is the advisor, Perplexity is the researcher that cites its sources. Those citations are your roadmap.
Your objective: Identify the seed sources shaping AI perception of your competitor.
Start with these prompts:
The Citation Audit
"What are the main pros and cons of [Competitor Name]? Please cite at least 5 authoritative reviews or articles."
The Trend Check
"What recent news or updates has [Competitor Name] released in the last 6 months? List sources."
Analyze the citations:
Look at every source Perplexity provides:
Is the claim quoted from a specific G2 review?
Is it a niche industry blog you've ignored?
Is it a Reddit thread?
Are the citations from the last 6 months or outdated?
Your strategy: If Perplexity trusts these sources for your competitor, you need to be mentioned there too. This is where you earn Share of Model—not through keywords, but through authoritative presence in the sources AI platforms trust.
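Once you've collected the cited URLs (copied by hand from Perplexity's answers, or exported however you gather them), a quick tally shows where your competitor's authority concentrates. A sketch with made-up URLs:

```python
from collections import Counter
from urllib.parse import urlparse

def top_citation_domains(citation_urls: list[str], n: int = 5) -> list[tuple[str, int]]:
    """Count which domains appear most often in a competitor's citations."""
    domains = [urlparse(u).netloc.removeprefix("www.") for u in citation_urls]
    return Counter(domains).most_common(n)

# Illustrative URLs, not real audit data
cites = [
    "https://www.g2.com/products/acme/reviews",
    "https://www.reddit.com/r/sysadmin/comments/abc",
    "https://www.g2.com/products/acme/reviews?page=2",
    "https://niche-blog.example.com/acme-review",
]
print(top_citation_domains(cites))
# → [('g2.com', 2), ('reddit.com', 1), ('niche-blog.example.com', 1)]
```

The top of that list is your outreach priority: the domains AI platforms already trust for your category.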
When Manual Monitoring Breaks Down
Running these prompts manually for 5 competitors across 3 AI models (GPT-4, Claude, Perplexity) means 45+ queries per week. That's before you factor in different user personas, locations, or tracking changes over time.
Manual audits work for initial research. But if you're serious about defending Share of Model, you need three things the manual approach can't provide:
Scale: Testing hundreds of query variations daily across multiple AI platforms
Consistency: Running the same prompts on a schedule so you can spot trends
Benchmarking: Quantifying exactly how often you're recommended vs. competitors
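The arithmetic behind that query explosion is just a cross product. A sketch with placeholder personas, competitors, and model names:

```python
from itertools import product

personas = ["CFO of a mid-sized startup", "IT manager at an enterprise"]
competitors = [f"Competitor {x}" for x in "ABCDE"]
models = ["gpt-4", "claude", "perplexity"]
templates = [
    "I am a {persona}. What are the top 3 tools to replace {competitor}, and why?",
    "Compare {competitor} vs. our product for an enterprise client.",
    "What are the most common complaints users have about {competitor}?",
]

# Every combination of model, persona, competitor, and prompt template
queries = [
    (model, t.format(persona=p, competitor=c))
    for model, p, c, t in product(models, personas, competitors, templates)
]
print(len(queries))  # → 90  (3 models × 2 personas × 5 competitors × 3 templates)
```

Add locations or a second weekly run and the total multiplies again, which is why manual auditing stops scaling quickly.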
This is where Deca automates the process. We run thousands of test queries daily, track Share of Model over time, and identify the exact sources giving your competitors an edge. You get alerts when your SoM drops and gap analysis showing which citations to target.
Manual audits teach you what to look for. Automation lets you actually defend your position at scale.
Frequently Asked Questions
Can I influence what ChatGPT says about my competitor?
No, you can't directly edit their training data. But you can create content that corrects misconceptions or highlights your advantages in high-authority sources that AI reads. Focus on the platforms Perplexity cites most often.
Why does Perplexity recommend my competitor but Google ranks me higher?
Perplexity prioritizes direct answers and discussion-based signals, often pulling from forums like Reddit or detailed reviews that traditional SEO overlooks. Your competitor might have a stronger "discussion footprint" than "keyword footprint." Perplexity refreshes its index every 2-3 weeks, so recent authoritative mentions matter more than static SEO.
How often should I check AI visibility?
For manual audits, monthly is the minimum. Perplexity refreshes its index every 2-3 weeks, and ChatGPT's context evolves with new training data. Real-time monitoring (via tools like Deca) is ideal if you're in a competitive vertical or launching new products.
Is Share of Model an actual metric?
Yes. It measures the percentage of AI responses where your brand is the primary recommendation for a specific intent. It's calculated by running standardized test prompts across AI platforms and tracking recommendation frequency. Think of it like Share of Voice, but for AI recommendations instead of search visibility.
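As a rough sketch of the calculation, assuming each standardized test prompt has already been run and its top recommendation recorded (the data below is fabricated):

```python
def share_of_model(results: list[dict], brand: str) -> float:
    """SoM = % of AI responses where `brand` is the primary (first-ranked)
    recommendation. Each result dict holds one test prompt and its winner."""
    if not results:
        return 0.0
    wins = sum(1 for r in results if r["top_pick"] == brand)
    return 100 * wins / len(results)

# Fabricated run of 4 standardized prompts
runs = [
    {"query": "best expense tool", "top_pick": "YourBrand"},
    {"query": "expense tool for startups", "top_pick": "Competitor"},
    {"query": "automate expense reports", "top_pick": "YourBrand"},
    {"query": "YourBrand alternatives", "top_pick": "Competitor"},
]
print(share_of_model(runs, "YourBrand"))  # → 50.0
```

Tracked per intent and per platform, this is the trend line you're defending.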
Does this work for B2B?
Absolutely. B2B buyers use AI to compare vendors and summarize technical documentation more than B2C consumers. With longer sales cycles, AI reputation compounds over time. Losing Share of Model in your category means losing mindshare before prospects even visit your site.
Ready to automate your AI visibility tracking? Try Deca free and start monitoring Share of Model across ChatGPT, Perplexity, and Google AI Overviews.