MCP server

Run rank.ai audits and query reports from any MCP-aware AI assistant. Quickstart below.

Quickstart

Three steps. We'll use Claude Desktop as the example; every other host is one or two field-name swaps away.

  1. Generate an API key. Open your account settings → API Keys → New key. Copy the plaintext (you'll see it once).
  2. Add to your MCP config. For Claude Desktop, edit ~/Library/Application Support/Claude/claude_desktop_config.json:
    {
      "mcpServers": {
        "rank-ai": {
          "url": "https://api.rank.ai/mcp",
          "headers": {
            "Authorization": "Bearer rnk_live_..."
          }
        }
      }
    }
  3. Restart your MCP host and ask your assistant: "Run an audit for fusefinance.com." It'll walk through the conversational flow described below.

Auth

Every request must carry Authorization: Bearer rnk_live_.... Sessions/cookies are not accepted on the MCP endpoint — only API keys minted in your dashboard. Each key is scoped to your org; revoking it is immediate.

Per-user rate limits: 60 req/min and 5,000 req/day across all of your keys. Exceeding either returns a structured error with retry_after_seconds.
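
If you script against the endpoint directly instead of going through an MCP host, the sketch below shows the shape of an authenticated call plus a simple backoff on the rate-limit error. It assumes the server accepts MCP JSON-RPC tools/call requests over plain HTTP POST and replies with plain JSON, and that retry_after_seconds lives in the error's data object; a production client should use an MCP SDK, which also handles the initialize handshake this sketch skips.

    import time
    import requests

    MCP_URL = "https://api.rank.ai/mcp"
    HEADERS = {
        "Authorization": "Bearer rnk_live_...",  # an API key minted in the dashboard
        "Content-Type": "application/json",
    }

    def call_tool(name: str, arguments: dict) -> dict:
        """POST one JSON-RPC tools/call request; sleep and retry when rate-limited."""
        body = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                "params": {"name": name, "arguments": arguments}}
        while True:
            resp = requests.post(MCP_URL, json=body, headers=HEADERS, timeout=30)
            data = resp.json()
            err = data.get("error") or {}
            retry = (err.get("data") or {}).get("retry_after_seconds")  # field location assumed
            if retry:
                time.sleep(retry)  # honour the structured rate-limit error
                continue
            return data

    print(call_tool("audit.usage", {}))  # cheap balance check before starting an audit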

The conversational audit flow

Audits aren't fire-and-forget — locations + keywords need confirmation before we charge credits. The MCP models this as an explicit gate. Here's the typical back-and-forth between an LLM and our server:

User: Run an audit on fusefinance.com

Claude calls audit.create(domain: "fusefinance.com")
   → state=scraping

Claude polls audit.next(session_id)
   → state=awaiting_review
     proposed_payload: {
       locations: [...],
       keywords: ["loan origination software", ...]
     }
     credit_cost_estimate: 1200

Claude shows you the proposal; you say "looks good"

Claude calls audit.answer(session_id, payload, confirm=true)
   → state=running

Claude polls audit.next(session_id) until state=complete
   → report_id available

Claude calls report.summary(report_id) and starts answering
questions like "so what's the takeaway?"

Tool reference

Two families: audit.* and report.*. Token costs are advertised in each tool's description so the model can budget. List tools default to a limit of 20 rows (max 100) with cursor pagination; numeric grids come back as histograms unless you explicitly drill into grid_points.
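
Cursor pagination over any of the list tools follows the usual pattern: pass the cursor back until the server stops returning one. A sketch, with the request and response field names (limit, cursor, items, next_cursor) assumed for illustration; call_tool is the hypothetical JSON-RPC helper from the Auth section, injected as a callable so the function stands alone.

    from typing import Any, Callable, Dict, List

    ToolCaller = Callable[[str, Dict[str, Any]], Dict[str, Any]]

    def list_all(call_tool: ToolCaller, tool: str, args: Dict[str, Any],
                 page_size: int = 100) -> List[Dict[str, Any]]:
        """Follow the cursor until the server stops returning one."""
        rows: List[Dict[str, Any]] = []
        cursor = None
        while True:
            page = call_tool(tool, {**args, "limit": page_size, "cursor": cursor})
            rows.extend(page["items"])          # row field name is illustrative
            cursor = page.get("next_cursor")    # cursor field name is illustrative
            if not cursor:
                return rows

    # e.g. every keyword row of a report, in the server's default search_volume order:
    # keywords = list_all(call_tool, "report.list_keywords", {"report_id": report_id})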

audit.*

audit.create

~200 tok

Start a session. Returns state=scraping plus a session_id. Optional `category` biases keyword extraction.

audit.next

~500 tok

Single source of truth for what to do next. Per-state shapes: scraping → progress; awaiting_review → proposed_payload + schema + credit_cost_estimate; running → progress + report_id; complete → report_id + summary_pointer; failed/cancelled → error_message.

audit.answer

~400 tok

Send `{payload, confirm}`. Validates against the schema returned by audit.next. confirm=false saves the payload but stays in awaiting_review (lets the LLM iterate). confirm=true dispatches the audit and transitions to running — credits are charged here.
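
Because confirm=false validates and stores the payload without charging, an assistant (or a script) can iterate safely and only dispatch once the user signs off. A minimal sketch of that two-step pattern, again taking the hypothetical call_tool helper as a parameter; any payload fields beyond locations and keywords are assumptions.

    from typing import Any, Callable, Dict, List

    ToolCaller = Callable[[str, Dict[str, Any]], Dict[str, Any]]

    def revise_then_confirm(call_tool: ToolCaller, session_id: str,
                            payload: Dict[str, Any], extra_keywords: List[str]) -> None:
        """Save an edited payload without charging, then dispatch once approved."""
        payload = {**payload, "keywords": payload["keywords"] + extra_keywords}
        # confirm=False: validated against the audit.next schema and stored;
        # the session stays in awaiting_review and no credits are charged.
        call_tool("audit.answer", {"session_id": session_id,
                                   "payload": payload, "confirm": False})
        # ...show the revised proposal to the user; once they approve:
        call_tool("audit.answer", {"session_id": session_id,
                                   "payload": payload, "confirm": True})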

audit.cancel

~100 tok

Mark the session cancelled. Idempotent. No-ops on already-terminal states.

audit.usage

~100 tok

Current month's audit-credit balance for the caller. Cheap — call before audit.create when in doubt.

audit.list

~50 tok/row

List the org's recent audit sessions. Optional status filter; cursor pagination keyed on (updated_at desc, id desc).

report.*

report.summary

~800 tok

Always start here. Returns header KPIs (avg gmb/organic rank, review summary), totals, current vs potential estimates, biggest_gap pointer, and next_tools hints.

report.list_keywords

~50 tok/row

Per-keyword aggregation across all locations. Sort by `search_volume` (default), `organic_rank`, `gmb_rank`, or `keyword`. Cursor pagination.

report.list_locations

~40 tok/row

Per-location rows with avg ranks + current/potential lead estimates.

report.location

~2k tok

Single-location detail: rankings dict, lead estimates, review summary. No raw grid points.

report.grid_summary

~200 tok

Bucketed histogram {top_3, top_10, top_20, beyond_20} for one (location, keyword) pair, plus geographic centroid and the worst-ranked point. Always prefer this over grid_points unless explicitly asked.

report.grid_points

~30 tok/pt

Drill-down to individual grid points. `ranks="poor"` (default) returns only points ranking >10. `ranks="all"` returns everything (use limit). Cursor pagination.
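
When a user asks why one keyword underperforms at one location, the cheap path is grid_summary first and grid_points only if the buckets look bad. The sketch below encodes that rule, assuming the histogram carries the documented top_3/top_10/top_20/beyond_20 counts and that the pair is addressed by report_id, location_id, and keyword (parameter names are illustrative); call_tool is the hypothetical helper from the Auth section.

    from typing import Any, Callable, Dict

    ToolCaller = Callable[[str, Dict[str, Any]], Dict[str, Any]]

    def grid_detail(call_tool: ToolCaller, report_id: str,
                    location_id: str, keyword: str) -> Dict[str, Any]:
        """Histogram first (~200 tok); raw points only when most of the grid ranks poorly."""
        args = {"report_id": report_id, "location_id": location_id, "keyword": keyword}
        summary = call_tool("report.grid_summary", args)
        total = sum(summary[b] for b in ("top_3", "top_10", "top_20", "beyond_20"))
        if total == 0 or summary["beyond_20"] / total < 0.5:
            return {"summary": summary}          # the histogram answers the question
        # Mostly poor: drill into the default ranks="poor" slice (points ranking >10).
        points = call_tool("report.grid_points", {**args, "ranks": "poor", "limit": 50})
        return {"summary": summary, "poor_points": points}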

report.competitor

~1.5k tok

List competitors with presence counts (sorted by occurrences then leads), or detail for a single domain when `domain` is provided.

report.findings

~2k tok

Enriched findings for non-local site reviews. Optional `severity` filter ("critical"/"needs_work"/"good"). Returns 404 for client-audit reports — those use summary/list_keywords/list_locations instead.

report.insights

~1.5k tok

Pre-computed insight cards: high-volume blind spots, competitor dominance, geographic gaps, quick wins, diminishing returns. Cheap to fetch repeatedly.

FAQ

What's MCP?

Model Context Protocol — Anthropic's open spec for connecting AI assistants to external tools. The rank.ai server speaks the HTTP variant. See modelcontextprotocol.io.

Why API keys instead of OAuth?

Simpler for v1, and the relationship is "your laptop talks to your account" rather than "third party talks to user". OAuth makes sense once we ship third-party integrations.

What does it cost?

The MCP server itself is free on any plan. Audits charge credits at the same rate as the dashboard; see Pricing.

Can I keep grid data out of the LLM context?

Yes — that's the default. Every read tool except grid_points trims raw arrays to histograms or aggregates. Drill into points only when the user explicitly asks for them.
