Why LLM visibility matters now
A growing share of high-intent buying research starts inside an LLM instead of a search engine. A buyer asking ChatGPT “what’s the best CRM for a 10-person consultancy?” doesn’t see the same SERP a Google searcher sees — they see the model’s synthesized answer, plus a handful of cited sources. If your brand isn’t in that answer, you don’t exist for that buyer at that moment.
The shift is uneven across categories (informational research has migrated faster than transactional shopping), but every category is trending in the same direction. Tracking which LLMs mention your brand, which ones cite your domain, and how those mentions and citations correlate with traffic is the AI-era equivalent of classical organic-rank tracking.
How LLMs decide what to mention
Two broad mechanisms, depending on the provider. Grounded providers (Perplexity, Grok, Gemini with search, Google AI Overviews, Copilot) run a real-time web search to retrieve passages, then synthesize an answer over them and expose citations. For these, your visibility depends on classical SEO fundamentals: ranking well for the query, having clear answer-shaped content, and being on a domain the model treats as authoritative.
Pure completion providers (ChatGPT without browse, Claude without tools, DeepSeek, Meta AI / Together Llama) generate from training data only. For these, your visibility depends on whether the brand was mentioned often enough during pre-training to make it into the model’s parametric memory — and whether that mention was favorable enough to surface in answers about your category. Both mechanisms reward the same thing: durable, high-signal coverage of your brand on the open web.
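The practical consequence for measurement: a grounded answer gives you two signals (brand mention and domain citation rank), a pure completion answer gives you one. A minimal sketch of that signal extraction, assuming a hypothetical normalized answer shape — `ProviderAnswer`, `visibility_signal`, and all field names are illustrative, not rank.ai's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class ProviderAnswer:
    """Hypothetical normalized answer; field names are illustrative."""
    text: str                                            # the model's synthesized answer
    citations: list[str] = field(default_factory=list)   # empty for pure completion providers

def visibility_signal(answer: ProviderAnswer, brand: str, domain: str) -> dict:
    """Extract both signals described above: brand mention (all providers)
    and domain citation rank (grounded providers only)."""
    mentioned = brand.lower() in answer.text.lower()
    citation_rank = next(
        (i + 1 for i, url in enumerate(answer.citations) if domain in url),
        None,  # None: not cited, or the provider exposes no citations
    )
    return {"mentioned": mentioned, "citation_rank": citation_rank}
```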
What rank.ai measures across 9 LLM surfaces
We track 9 provider surfaces, each implemented as a separate adapter in backend/app/core/ai_rank/providers/ (a sketch of the adapter shape follows the list):
- OpenAI / ChatGPT — GPT-class completion provider; mention-only signal.
- Anthropic / Claude — Claude completion provider; mention-only signal.
- Google Gemini — Gemini with optional grounding; citations exposed when grounded.
- Perplexity (Sonar) — grounded search-and-synthesize; full citation set per answer.
- Grok (xAI) — grounded with X / open-web search; citations exposed.
- Google AI Overviews — the SERP surface, covered separately in our AI Overviews glossary entry.
- DeepSeek — DeepSeek chat completions; mention-only signal, no grounding.
- Meta AI (Llama via Together AI, P-24) — production path uses Together AI’s hosted Llama 3.3-70B-Instruct-Turbo. This is the same base weight family as the consumer Meta AI rollout, but not a faithful replica of Meta’s in-product answer — Meta doesn’t expose a public API for that surface. Treat it as a Llama-class signal.
- Microsoft Copilot — currently stub-mode (P-17); switches to live as soon as the upstream API allows.
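The adapter layer is what lets nine very different upstream APIs roll up into one comparable signal. The sketch below is an assumption about its shape, not the actual contents of backend/app/core/ai_rank/providers/; it reuses the `ProviderAnswer` dataclass from the earlier sketch, and `call_sonar` / `call_claude` stand in for hypothetical HTTP helpers:

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """Illustrative base class; the real interface may differ."""
    name: str
    grounded: bool  # True if the provider runs a live web search and exposes citations

    @abstractmethod
    async def ask(self, prompt: str) -> ProviderAnswer:
        """Send one tracked prompt and normalize the provider's response."""

class PerplexityAdapter(ProviderAdapter):
    name, grounded = "perplexity", True

    async def ask(self, prompt: str) -> ProviderAnswer:
        raw = await call_sonar(prompt)  # hypothetical HTTP helper
        # Grounded: the answer arrives with a citation list attached.
        return ProviderAnswer(text=raw["answer"], citations=raw["citations"])

class ClaudeAdapter(ProviderAdapter):
    name, grounded = "claude", False

    async def ask(self, prompt: str) -> ProviderAnswer:
        raw = await call_claude(prompt)  # hypothetical HTTP helper
        # Pure completion: text only, so citations stay empty and the
        # mention check is the only signal this surface can produce.
        return ProviderAnswer(text=raw["text"])
```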
For each provider × prompt pair we record: whether the brand was mentioned by name, the citation list (when the provider exposes one), and the brand’s citation rank within that list. Aggregated across prompts, these roll up into the same share-of-voice metric we report on the local side — “percentage of measurement points where you appear in the visible result” — except the denominator is provider × prompt instead of grid cells.
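As a worked sketch of that rollup (the result structure is assumed, carrying the signal dict from the first sketch):

```python
def share_of_voice(results: dict[tuple[str, str], dict]) -> float:
    """results maps (provider, prompt) -> {"mentioned": bool, "citation_rank": int | None}.

    Share of voice = points where the brand appears / total provider x prompt
    points, the same metric as the local grid with a different denominator.
    """
    if not results:
        return 0.0
    hits = sum(
        1 for r in results.values()
        if r["mentioned"] or r["citation_rank"] is not None
    )
    return hits / len(results)
```

For example, with 9 providers and 20 tracked prompts you have 180 measurement points; appearing in 45 of them yields 45 / 180 = 25% share of voice.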
LLM visibility vs Google SEO
Same goal — be the visible answer — but the algorithms are different, the time-decay is different, and the failure modes are different. Google’s organic ranking is a real-time crawl-and-index system that re-ranks daily and exposes the same SERP to most searchers asking the same query. An LLM’s parametric memory is fixed at training time — until the next pre-train, you can’t move what the model “knows” about your category. Grounded providers patch this by running a fresh web search, but even there, the retrieval index and the source weighting are model-specific.
Practically: classical SEO fundamentals (authoritative content, clear answer structure, page authority, topical depth, schema markup) help on both surfaces. But LLM-specific levers (direct, answer-shaped page intros; explicit category framing in the body; consistent brand mentions across credible third-party sites) gain extra ground on the LLM side without hurting the Google side.
Tracking LLM visibility consistently
LLM answers are non-deterministic. Ask the same model the same question twice and you can get slightly different words, sometimes different citations, occasionally a different recommendation. That variance makes single-shot tracking useless — one bad run doesn’t mean your visibility dropped; one lucky run doesn’t mean it improved.
We address this with three practices. First, daily re-checks of every tracked prompt on every provider, so trend lines smooth the day-to-day variance. Second, deterministic prompt phrasing — the prompt text is fixed per tracked query so we’re always measuring the same input. Third, P-15 embed sharing — every AI-rank result has a shareable embed widget so agencies can drop the dashboard into a client portal and the client sees the same per-provider trend the agency sees. Same data, no re-run, no “but it looked different yesterday” ambiguity.
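One way to picture the smoothing: a trailing mean over the daily checks, so a single noisy run moves one day's raw point but not the trend. The window length below is an assumed illustration, not rank.ai's actual setting:

```python
from collections import deque

def smooth_daily_sov(daily_values: list[float], window: int = 7) -> list[float]:
    """Trailing mean over the last `window` daily share-of-voice readings.

    A single lucky or unlucky run shifts one raw point; the smoothed
    line only moves when several consecutive days agree.
    """
    buf: deque[float] = deque(maxlen=window)
    smoothed = []
    for value in daily_values:
        buf.append(value)
        smoothed.append(sum(buf) / len(buf))
    return smoothed
```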