Run rank.ai audits and query reports from any MCP-aware AI assistant. Quickstart below.
Setup is three steps: mint an API key in your dashboard, add the server block below to your host's config, and restart the host. We'll use Claude Desktop as the example; every other MCP host is one or two field-name swaps away.
In `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "rank-ai": {
      "url": "https://api.rank.ai/mcp",
      "headers": {
        "Authorization": "Bearer rnk_live_..."
      }
    }
  }
}
```

Every request must carry `Authorization: Bearer rnk_live_...`. Sessions and cookies are not accepted on the MCP endpoint; only API keys minted in your dashboard work. Each key is scoped to your org, and revoking one takes effect immediately.
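If you want to sanity-check the endpoint outside an MCP host, a raw JSON-RPC POST works. A minimal sketch in Python: the URL and key format come from this page, but `tools/list` and the `Accept` header come from the MCP Streamable HTTP spec, not rank.ai docs, and a conforming client would run the `initialize` handshake first.

```python
import requests

MCP_URL = "https://api.rank.ai/mcp"
API_KEY = "rnk_live_..."  # mint one in your dashboard

# Single JSON-RPC request carrying the required Authorization header.
# A conforming MCP client would send an `initialize` request first and
# echo any Mcp-Session-Id header it gets back; this sketch skips both.
resp = requests.post(
    MCP_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        # The Streamable HTTP transport expects both accept types.
        "Accept": "application/json, text/event-stream",
    },
    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}},
    timeout=30,
)
resp.raise_for_status()
# If the server answers with SSE rather than plain JSON, parse the
# event stream instead of calling .json().
print(resp.json())
```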
Per-user rate limits: 60 req/min and 5,000 req/day across all of your keys. Exceeding either returns a structured error with `retry_after_seconds`.
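A sketch of how a client might honor that error. `call_tool` is a hypothetical wrapper around whatever MCP client you use; only the `retry_after_seconds` field itself is documented.

```python
import time

def call_with_backoff(call_tool, name, args, max_attempts=5):
    """Retry a tool call when the server signals rate limiting.

    `call_tool` is a hypothetical MCP-client wrapper. The error shape
    assumes the documented structured error carrying retry_after_seconds.
    """
    for _ in range(max_attempts):
        result = call_tool(name, args)
        if isinstance(result, dict) and "retry_after_seconds" in result:
            time.sleep(result["retry_after_seconds"])  # wait exactly as told
            continue
        return result
    raise RuntimeError(f"{name}: still rate limited after {max_attempts} attempts")
```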
Audits aren't fire-and-forget — locations + keywords need confirmation before we charge credits. The MCP models this as an explicit gate. Here's the typical back-and-forth between an LLM and our server:
User: Run an audit on fusefinance.com
Claude calls audit.create(domain: "fusefinance.com")
→ state=scraping
Claude polls audit.next(session_id)
→ state=awaiting_review
proposed_payload: {
locations: [...],
keywords: ["loan origination software", ...]
}
credit_cost_estimate: 1200
Claude shows you the proposal, you say "looks good"
Claude calls audit.answer(session_id, payload, confirm=true)
→ state=running
Claude polls audit.next(session_id) until state=complete
→ report_id available
Claude calls report.summary(report_id) and starts answering
"so what's the takeaway?"Two families. Token costs are advertised in each tool's description so the model can budget. List tools cap default limit at 20 (max 100) with cursor pagination; numeric grids return as histograms unless you explicitly drill into grid_points.
Two families of tools: `audit.*` runs sessions, `report.*` reads results. Token costs are advertised in each tool's description so the model can budget. List tools default to a limit of 20 (max 100) with cursor pagination; numeric grids return as histograms unless you explicitly drill into grid_points.

audit.*

audit.create: Starts a session. Returns state=scraping plus a session_id. Optional `category` biases keyword extraction.

audit.next: The single source of truth for what to do next. Per-state shapes: scraping → progress; awaiting_review → proposed_payload + schema + credit_cost_estimate; running → progress + report_id; complete → report_id + summary_pointer; failed/cancelled → error_message.

audit.answer: Sends `{payload, confirm}`, validated against the schema returned by audit.next. confirm=false saves the payload but stays in awaiting_review, letting the LLM iterate (see the sketch after this list). confirm=true dispatches the audit and transitions to running; credits are charged here.

audit.cancel: Marks the session cancelled. Idempotent; a no-op on already-terminal states.

audit.usage: Current month's audit-credit balance for the caller. Cheap to call; check it before audit.create when in doubt.

audit.list: Lists the org's recent audit sessions. Optional status filter; cursor pagination keyed on (updated_at desc, id desc).
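The `confirm` flag is what lets the model negotiate the payload before any credits move. A sketch of that iteration, with the same hypothetical `call_tool` wrapper; the keyword filter is a made-up example edit.

```python
def refine_and_dispatch(call_tool, session_id):
    """Iterate on the proposed payload before dispatch.

    confirm=false saves the edit and stays in awaiting_review, so the
    model (or user) can keep refining; confirm=true charges credits.
    """
    step = call_tool("audit.next", {"session_id": session_id})
    payload = step["proposed_payload"]

    # Example edit: drop keywords the user rejected (hypothetical filter).
    payload["keywords"] = [k for k in payload["keywords"] if "software" in k]

    # Save without dispatching; the server validates against the schema
    # returned by audit.next and the session stays in awaiting_review.
    call_tool("audit.answer", {"session_id": session_id,
                               "payload": payload, "confirm": False})

    # Once the user signs off, dispatch for real. Credits are charged here.
    call_tool("audit.answer", {"session_id": session_id,
                               "payload": payload, "confirm": True})
```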
report.*

report.summary: Always start here (a read-path sketch follows this list). Returns header KPIs (avg GMB/organic rank, review summary), totals, current vs potential estimates, a biggest_gap pointer, and next_tools hints.

report.list_keywords: Per-keyword aggregation across all locations. Sort by `search_volume` (default), `organic_rank`, `gmb_rank`, or `keyword`. Cursor pagination.

report.list_locations: Per-location rows with average ranks plus current/potential lead estimates.

report.location: Single-location detail: rankings dict, lead estimates, review summary. No raw grid points.

report.grid_summary: Bucketed histogram {top_3, top_10, top_20, beyond_20} for one (location, keyword) pair, plus the geographic centroid and the worst-ranked point. Always prefer this over grid_points unless explicitly asked.

report.grid_points: Drill-down to individual grid points. `ranks="poor"` (default) returns only points ranking worse than 10; `ranks="all"` returns everything (use a limit). Cursor pagination.

report.competitor: Lists competitors with presence counts (sorted by occurrences, then leads), or detail for a single domain when `domain` is provided.

report.findings: Enriched findings for non-local site reviews. Optional `severity` filter ("critical"/"needs_work"/"good"). Returns 404 for client-audit reports; those use summary/list_keywords/list_locations instead.

report.insights: Pre-computed insight cards: high-volume blind spots, competitor dominance, geographic gaps, quick wins, diminishing returns. Cheap to fetch repeatedly.
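Reading a report follows the same budget-conscious path: summary first, aggregates next, raw points only on demand. A sketch with the same hypothetical `call_tool`; the `cursor`/`next_cursor`/`items` field names, the `sort` parameter name, and the shape of `biggest_gap` are assumptions, since only cursor pagination and the sort keys themselves are documented.

```python
def iter_pages(call_tool, name, args):
    """Page through a cursor-paginated list tool.

    Assumes requests take `cursor` and responses carry `items` and
    `next_cursor`; those names are assumptions, only cursor pagination
    itself is documented.
    """
    cursor = None
    while True:
        page = call_tool(name, {**args, **({"cursor": cursor} if cursor else {})})
        yield from page["items"]
        cursor = page.get("next_cursor")
        if cursor is None:
            return

def takeaway(call_tool, report_id):
    summary = call_tool("report.summary", {"report_id": report_id})

    # Aggregate view: every keyword across all locations, highest volume first.
    keywords = list(iter_pages(call_tool, "report.list_keywords",
                               {"report_id": report_id,
                                "sort": "search_volume", "limit": 100}))

    # Prefer the bucketed histogram over raw grid points. Assumes
    # biggest_gap is a pointer to one (location, keyword) pair.
    gap = summary["biggest_gap"]
    grid = call_tool("report.grid_summary", {"report_id": report_id, **gap})
    return summary, keywords, grid
```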
What is MCP? Model Context Protocol, Anthropic's open spec for connecting AI assistants to external tools. The rank.ai server speaks the HTTP variant. See modelcontextprotocol.io.

Why API keys instead of OAuth? Simpler for v1, and the relationship is "your laptop talks to your account" rather than "third party talks to user". OAuth makes sense once we ship third-party integrations.

Does the MCP cost extra? No. The MCP itself is free for accounts on any plan; audits charge credits at the same rate as the dashboard. See Pricing.

Is output trimmed by default? Yes. Every read tool except grid_points trims raw arrays to histograms or aggregates. Drill into points only when the user explicitly asks for them.
