Daniel Ternyak
May 4, 2026

Announcing AI Rank: Track Your Brand Across ChatGPT, Claude & Gemini

AI Rank is now live. See exactly when ChatGPT, Claude, and Gemini cite your domain — plus the underlying Google searches each model runs under the hood.

Why AI Rank, and why now

When customers research a service today, an increasing share of them never see a Google SERP. They open ChatGPT, type their question, and accept the synthesized answer.

That answer might recommend you. It might recommend a competitor. It might cite three URLs from your domain — or none at all. You have no visibility into any of that unless you specifically check, prompt by prompt, model by model, day after day. Nobody has time for that.

AI Rank closes the loop. Plug in a prompt your customers actually type, and we run it across ChatGPT, Claude, and Gemini — every day — capturing:

  • Whether your domain appears in the citation list
  • Whether your brand is mentioned in the answer prose
  • Where in the answer it lands (citation rank, mention position)
  • The sentiment of how the model talks about you
  • The underlying Google searches the model issued before composing its answer

If you've used Profound, Peec, or Otterly, you'll find the surface familiar. We took the best parts of each and built it on top of the existing Rank.AI platform — same login, same credit pool, same MCP server.

What's in the box

Per-prompt detail page. Click any tracked prompt and see, per provider:

  • The full answer the model just produced
  • The exact Google searches it ran (yes, you can see those: OpenAI, Anthropic, and Google all expose them through their search-grounded APIs)
  • Each cited URL, color-coded as you (green) / a tracked competitor (red) / a third party (blue)
  • Citation positions anchored to the answer text where supported

Share of Voice. Add competitor brands and we'll tag every mention across runs. The detail page renders a stacked bar showing your share against each competitor over the last 7 days.
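Share of voice is simple arithmetic over tagged mentions. A minimal sketch of the calculation, assuming a flat list of brand tags collected across runs in the window (the function name and data shape are illustrative, not the actual implementation):

```python
from collections import Counter

def share_of_voice(mentions: list[str]) -> dict[str, float]:
    """Compute each brand's share of tagged mentions as a percentage.

    `mentions` is the flat list of brand tags collected across all
    runs in the window (e.g. the last 7 days).
    """
    counts = Counter(mentions)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {brand: round(100 * n / total, 1) for brand, n in counts.items()}

# Example: 10 tagged mentions across a week of runs
tags = ["YourBrand"] * 6 + ["CompetitorA"] * 3 + ["CompetitorB"] * 1
print(share_of_voice(tags))
# {'YourBrand': 60.0, 'CompetitorA': 30.0, 'CompetitorB': 10.0}
```

Each bar segment in the stacked chart is one of those percentages for one day's window.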

Source Authority. Across all your prompts, which domains do AI assistants cite most often? Bar chart sorted by citation count, with breadth (distinct prompts) shown alongside. Useful for spotting which third-party sites — Reddit, G2, Wikipedia, niche industry blogs — are quietly shaping how AI describes your category.

Sentiment timeline. Stacked bars showing positive / neutral / mixed / negative mentions over 30 days. If a model's framing of your brand shifts negative, you see it.

Auto-suggest. We pull the SEO keywords you already track in Rank.AI's National Rank Tracker and have an LLM rewrite them as natural-language prompts a real customer would type. One click to accept.

Native MCP, day one

Rank.AI ships an MCP server. Today's release adds two new tool families:

  • national_rank.* — list your tracked SERP keywords, pull rank history
  • ai_rank.* — list prompts, create new ones, fire ad-hoc checks, drill into a specific run's answer + queries + citations, generate suggestions
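The dot-namespaced families above map naturally onto a small dispatch layer. A sketch of that shape, where the family prefixes come from this post but the member names, handlers, and payloads are hypothetical stand-ins, not the real Rank.AI server:

```python
# Illustrative dispatcher for dot-namespaced MCP tool families.
# The `ai_rank.*` / `national_rank.*` prefixes come from the post;
# everything else here is a hypothetical stand-in.

def list_prompts() -> list[dict]:
    # Placeholder: would query the AI Rank prompt store.
    return [{"id": "p1", "text": "best med spa in tampa"}]

def rank_history(keyword: str) -> list[int]:
    # Placeholder: would pull SERP rank history for the keyword.
    return [4, 3, 3, 2]

TOOLS = {
    "ai_rank.list_prompts": list_prompts,
    "national_rank.history": rank_history,
}

def call_tool(name: str, **kwargs):
    """Route a namespaced tool name to its handler."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)

print(call_tool("national_rank.history", keyword="med spa tampa"))
# [4, 3, 3, 2]
```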

We're the second product in this category to ship a native MCP — Peec was first. If you run Claude Desktop, Cursor, or Cline with the Rank.AI MCP wired up, your AI assistant can directly query your visibility data and trigger checks. Try things like:

  • "What's our visibility trend in ChatGPT this week?"
  • "Track this keyword in AI Rank too."
  • "Show me the underlying searches Claude ran for our 'best Botox in Tampa' prompt."

How it actually works

Each daily check is a real API call to each provider with their search-grounded mode turned on:

  • OpenAI Responses API + web_search tool. We capture web_search_call.action.query for the underlying queries and output_text.annotations for citations with character offsets.
  • Anthropic Messages API + web_search_20250305 tool. Captures server_tool_use.input.query per call and per-text-block citations[] with cited-text snippets.
  • Google Gemini + google_search grounding. Captures groundingMetadata.webSearchQueries and groundingChunks with confidence scores per citation.
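Each provider returns queries and citations in a differently shaped payload, so every run gets normalized into one internal record. A sketch for the Gemini case, using the field names listed above over a mocked, simplified payload (the real response carries more structure, and the helper name is ours):

```python
# Normalize one provider's grounded response into (queries, citations).
# The mock payload is a simplified stand-in shaped after the Gemini
# fields named above: groundingMetadata.webSearchQueries and
# groundingChunks. It is not a verbatim API response.

def parse_gemini_grounding(metadata: dict) -> tuple[list[str], list[str]]:
    queries = metadata.get("webSearchQueries", [])
    citations = [
        chunk["web"]["uri"]
        for chunk in metadata.get("groundingChunks", [])
        if "web" in chunk
    ]
    return queries, citations

mock = {
    "webSearchQueries": ["best med spa tampa", "botox tampa reviews"],
    "groundingChunks": [
        {"web": {"uri": "https://example.com/med-spas", "title": "Med Spas"}},
        {"web": {"uri": "https://yourbrand.com/botox", "title": "Botox"}},
    ],
}

queries, cited = parse_gemini_grounding(mock)
print(queries)  # the underlying Google searches
print(cited)    # the cited URLs
```

Equivalent adapters sit in front of the OpenAI and Anthropic payloads, so downstream metrics never care which provider produced a run.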

Each prompt runs 3 samples per provider per day by default (configurable 1–10). Multiple samples smooth over LLM stochasticity — visibility metrics are rate-based ("brand cited in 67% of runs"), not binary. After the day's runs settle, we roll them up into a daily aggregate that powers the dashboard.
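The rollup described above can be sketched in a few lines. The run records here are a hypothetical simplification of what a stored check might look like:

```python
# Roll multiple samples into a rate-based daily metric. Each run record
# is a hypothetical simplification of a stored (provider x sample) check.

def daily_citation_rate(runs: list[dict], domain: str) -> float:
    """Fraction of runs whose citation list includes `domain`."""
    if not runs:
        return 0.0
    hits = sum(1 for r in runs if domain in r["cited_domains"])
    return hits / len(runs)

# 3 providers x 3 samples = 9 runs for one prompt on one day
runs = (
    [{"provider": "openai", "cited_domains": ["yourbrand.com"]}] * 2
    + [{"provider": "openai", "cited_domains": []}]
    + [{"provider": "anthropic", "cited_domains": ["yourbrand.com"]}] * 3
    + [{"provider": "gemini", "cited_domains": ["competitor.com"]}] * 3
)

rate = daily_citation_rate(runs, "yourbrand.com")
print(f"brand cited in {rate:.0%} of runs")  # 5 of 9 runs -> 56%
```

A single unlucky sample moves a rate by one ninth instead of flipping a yes/no flag, which is the whole point of sampling more than once.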

A second cheap LLM pass extracts every brand mention from the answer prose — separate from the formal citations list — so we catch "ChatGPT recommends Stripe" even when ChatGPT didn't link to stripe.com.
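The real pass uses an LLM; a plain regex matcher is enough to illustrate the idea that prose mentions are caught even when the brand's domain never appears in the citation list (this is a simplified stand-in, not the production extractor):

```python
import re

# Simplified stand-in for the LLM mention-extraction pass: find brand
# names in answer prose, independent of the formal citation list.

def find_mentions(answer: str, brands: list[str]) -> list[str]:
    found = []
    for brand in brands:
        if re.search(rf"\b{re.escape(brand)}\b", answer, re.IGNORECASE):
            found.append(brand)
    return found

answer = "For payments, ChatGPT recommends Stripe over legacy gateways."
print(find_mentions(answer, ["Stripe", "Adyen"]))  # ['Stripe'], no link needed
```

The LLM pass earns its keep on the cases regex can't handle: misspellings, possessives, and indirect references like "the Tampa clinic mentioned earlier."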

Pricing

AI Rank is gated to Starter+ tiers:

  • Starter ($50/mo): 3 AI Rank prompts × all 3 providers × daily
  • Pro ($250/mo): 15 AI Rank prompts + share-of-voice + competitor tracking
  • Boost / Growth / Dominate: 50 / 150 / 500 prompt seats

Each (provider × sample) check costs 50 credits. Default daily setup (3 providers × 3 samples) = 450 credits/day per prompt. Your existing audit credit pool covers it.
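The credit math is multiplicative, so costs are easy to budget. A quick sketch (the function is illustrative; the 50-credit rate and defaults come from the paragraph above):

```python
CREDITS_PER_CHECK = 50  # per (provider x sample), per the pricing above

def daily_credits(prompts: int, providers: int = 3, samples: int = 3) -> int:
    """Daily credit cost for a set of tracked prompts."""
    return prompts * providers * samples * CREDITS_PER_CHECK

print(daily_credits(1))   # 450 credits/day for one default prompt
print(daily_credits(15))  # 6750 credits/day for a full Pro allotment
```

Dropping samples from 3 to 1 on low-priority prompts cuts their cost by two thirds, at the price of noisier rates.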

Roadmap

What's next:

  • Perplexity — once it exposes underlying search queries (Sonar today returns only a count, so tracking there would leave a noisy gap in the data)
  • Push + email alerts when visibility shifts materially day-over-day
  • Persona-aware tracking (segment prompts by buyer persona, similar to Scrunch)
  • Streaming answer text in the UI — watch the model write its answer in real time during run-now

Try it

Existing Starter+ customers: open the Rank Tracker hub, click into the AI tab, hit Auto-suggest, and you'll have 10 starter prompts in under 30 seconds. The first daily run kicks off at 04:30 UTC.

Free-tier users: this is a big enough capability that we couldn't fit it into the free plan. Upgrade to Starter; pricing is unchanged.

We've been heads-down on this for weeks. It's the most-requested feature we've shipped this year. We can't wait to see what you find.
