For AI Agents

AI agents on Polymarket data — MCP, OpenAPI, and llms.txt

Resolved Markets ships everything an autonomous coding agent or chat assistant needs to call our API correctly on the first try: a native Model Context Protocol server, a full OpenAPI 3.1 specification, and two LLM-optimized markdown reference files. Drop them into Claude, ChatGPT, Gemini, Cursor, Windsurf, LangChain, CrewAI, or anything else that speaks tool-use.



Three integration paths

All three use the same rm_ API key. Pick whichever matches the runtime you are wiring up.

1. Native MCP (Claude Desktop, Claude Code, Cursor, Windsurf, Cline)

The Resolved Markets MCP server exposes six tools — list_markets, get_orderbook, get_snapshot, query_snapshots, get_market_summary, get_system_stats — and two resources (markets://live, prices://latest). Configure once, and any MCP-capable client can call them as first-class tools with no glue code.

# Claude Desktop / Claude Code config (claude_desktop_config.json)
{
  "mcpServers": {
    "resolved-markets": {
      "command": "npx",
      "args": ["-y", "@elcara-hq/resolvedmarkets-mcp"],
      "env": {
        "HF_API_URL": "https://api.resolvedmarkets.com",
        "HF_API_KEY": "rm_your_key_here"
      }
    }
  }
}

HTTP transport is also supported (MCP_TRANSPORT=http MCP_PORT=3002) for self-hosted agents that prefer remote MCP over stdio.

2. Tool-use / function-calling with the OpenAPI spec

The OpenAPI 3.1 spec at https://resolvedmarkets.com/openapi.json can be imported into Postman, Insomnia, RapidAPI, GPT Actions, or any code generator. For Anthropic and OpenAI tool-use, expose the operations in the spec as your tool definitions and the model will produce typed calls automatically.

# Python — turn the OpenAPI spec into OpenAI function-calling tool definitions
import requests

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

spec = requests.get("https://resolvedmarkets.com/openapi.json").json()
tools = []
for path, ops in spec["paths"].items():
    for method, op in ops.items():
        if method not in HTTP_METHODS:
            continue  # skip path-level keys such as "parameters"
        # Prefer a JSON request body schema; fall back to query/path parameters
        schema = (op.get("requestBody", {}).get("content", {})
                    .get("application/json", {}).get("schema", {}))
        if not schema and op.get("parameters"):
            schema = {
                "type": "object",
                "properties": {p["name"]: p.get("schema", {})
                               for p in op["parameters"]},
                "required": [p["name"] for p in op["parameters"] if p.get("required")],
            }
        tools.append({
            "type": "function",
            "function": {
                "name": op["operationId"],
                "description": op.get("summary", ""),
                "parameters": schema,
            },
        })
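Once the model emits a tool call, you need to route it back to the matching REST endpoint. A minimal dispatcher sketch (the names `build_route_table` and `dispatch` are illustrative, not part of the API; path-parameter handling assumes OpenAPI's `{id}`-style templates):

```python
import requests

def build_route_table(spec):
    """Map each operationId to (HTTP method, path) for dispatching tool calls."""
    routes = {}
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            if isinstance(op, dict) and "operationId" in op:
                routes[op["operationId"]] = (method.upper(), path)
    return routes

def dispatch(tool_name, arguments, routes,
             base="https://api.resolvedmarkets.com",
             api_key="rm_your_key_here"):
    """Execute a model-issued tool call as a REST request."""
    method, path = routes[tool_name]
    # Substitute path parameters like {id}; remaining arguments become query params
    for key in list(arguments):
        if "{" + key + "}" in path:
            path = path.replace("{" + key + "}", str(arguments.pop(key)))
    return requests.request(method, base + path,
                            params=arguments, headers={"X-API-Key": api_key})
```

Feed the model's `function.name` and parsed `arguments` straight into `dispatch` and return the JSON body as the tool result.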

3. Plain context — paste llms.txt into the system prompt

For chat assistants without tool-use (or for one-shot questions), paste /llms.txt (~10 KB summary) or /llms-full.txt (~43 KB full reference) into the system prompt. The model can then answer questions about endpoints, parameters, and response shapes — and produce correct curl or Python that hits real URLs.

Drop-in system prompt

Copy this into any LLM's system prompt to give it grounded knowledge of the Resolved Markets API. The agent will quote the right endpoints and stop hallucinating routes.

You have access to live and historical Polymarket prediction market and Hyperliquid perpetual futures orderbook data via the Resolved Markets API.

API base URL: https://api.resolvedmarkets.com
Authentication: header "X-API-Key: rm_..." (the user will provide their key)

Reference docs (read these first if you need details):
- Short reference: https://resolvedmarkets.com/llms.txt
- Full reference: https://resolvedmarkets.com/llms-full.txt
- OpenAPI spec: https://resolvedmarkets.com/openapi.json

Coverage: crypto (BTC, ETH, SOL, XRP), sports (NBA, NFL, EPL), economics (FOMC, NFP), weather (~30 city / climate markets), and Hyperliquid perpetual futures. Snapshots include full bid/ask depth, mid, spread, sequence numbers, and the paired crypto spot price at capture time.

When answering questions about Polymarket markets, prefer calling these endpoints over guessing:
- GET /v1/markets/live           — list active markets (filter with ?category= and ?subcategory=)
- GET /v1/markets/:id/orderbook  — live UP and DOWN orderbook for a market
- GET /v1/markets/:id/snapshots  — historical snapshots (?from= ?to= ?limit=500)
- GET /v1/markets/:id/summary    — aggregated stats: spread, depth, price range
- GET /v1/public-stats           — platform-wide stats (no auth required)

Probabilities are USDC prices in 0.01..0.99. Each market has UP and DOWN tokens; UP + DOWN ≈ 1.00.
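The UP + DOWN ≈ 1.00 invariant makes a handy sanity check on any answer the agent produces. A small helper sketch (function names are illustrative; the mid-price-as-probability reading follows the pricing note above):

```python
def implied_probability(best_bid: float, best_ask: float) -> float:
    """Mid-price of a token's book, read as the market-implied probability."""
    return round((best_bid + best_ask) / 2, 4)

def books_consistent(up_mid: float, down_mid: float, tol: float = 0.02) -> bool:
    """Sanity-check the UP + DOWN ≈ 1.00 invariant within a tolerance."""
    return abs((up_mid + down_mid) - 1.0) <= tol
```

If `books_consistent` fails, the two books were likely sampled at different times; re-fetch rather than average stale prices.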

Compatible runtimes

MCP clients (Claude Desktop, Claude Code, Cursor, Windsurf, Cline, Continue) connect to the MCP server directly. OpenAI, Anthropic, and Gemini tool-use, LangChain, LlamaIndex, CrewAI, AutoGen, and plain HTTP clients consume the REST API via the OpenAPI spec.

60-second quickstart (Python)

import os, requests

BASE = "https://api.resolvedmarkets.com"
H = {"X-API-Key": os.environ["RM_API_KEY"]}

# 1. Find an active BTC market
markets = requests.get(f"{BASE}/v1/markets/live",
                       params={"category": "crypto", "subcategory": "BTC"},
                       headers=H).json()
market_id = markets["markets"][0]["condition_id"]

# 2. Pull live orderbook
ob = requests.get(f"{BASE}/v1/markets/{market_id}/orderbook", headers=H).json()
print(ob["up"]["best_bid"], ob["up"]["best_ask"])

# 3. Pull last hour of snapshots
snaps = requests.get(f"{BASE}/v1/markets/{market_id}/snapshots",
                     params={"limit": 500}, headers=H).json()
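Snapshots carry mid and spread values per the coverage notes above, though the exact JSON key name is an assumption here. A small helper to summarize the pulled window:

```python
def snapshot_range(snapshots):
    """Min/max/last mid price over a snapshot window.

    Assumes each snapshot dict carries a numeric 'mid' field, as suggested
    by the coverage notes; adjust the key to match the real payload.
    """
    mids = [s["mid"] for s in snapshots if s.get("mid") is not None]
    if not mids:
        return None
    return {"low": min(mids), "high": max(mids), "last": mids[-1]}
```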

Frequently asked questions

How do AI agents call the Resolved Markets API?

Three options. (1) Native MCP — point Claude Desktop, Claude Code, or any MCP-compatible agent at our MCP server (stdio or HTTP) and the agent calls list_markets, get_orderbook, get_snapshot, query_snapshots, get_market_summary, and get_system_stats as tools with no glue code. (2) Tool-use / function calling — feed the agent our OpenAPI spec at https://resolvedmarkets.com/openapi.json and let it generate REST calls. (3) Plain context — paste https://resolvedmarkets.com/llms-full.txt into the system prompt and the model can answer questions about endpoints, parameters, and response shapes directly.

Which agent frameworks are supported?

Anything that speaks MCP (Claude Desktop, Claude Code, Cursor, Windsurf, Cline, Continue) works out of the box. Anything that speaks OpenAI tool-use, Anthropic tool-use, Gemini function calling, LangChain, LlamaIndex, CrewAI, AutoGen, or custom HTTP clients can use the REST API directly with the OpenAPI spec.

Do I need a separate API key for MCP?

No. The same rm_ API key works for REST, WebSocket, and MCP. Set HF_API_KEY=rm_your_key in your MCP server environment.

How do I make my AI agent quote correct endpoint information?

Paste the contents of https://resolvedmarkets.com/llms.txt or https://resolvedmarkets.com/llms-full.txt into the agent system prompt. Both files are plain markdown optimized for LLM context windows. The shorter llms.txt (~10 KB) covers basics; llms-full.txt (~43 KB) includes every endpoint, every parameter, and code samples.

Is there an OpenAPI spec?

Yes — OpenAPI 3.1 at https://resolvedmarkets.com/openapi.json. Import it into Postman, Insomnia, RapidAPI, GPT Actions, Cursor, or any code generator to scaffold a typed client in seconds.

Will MCP requests count against my rate limit?

Yes — MCP and REST share the same per-key rate limit and credit budget. Free tier: 60 req/min, 24-hour history, read-only MCP. Pro: 500 req/min, full history, full MCP. Enterprise: 3,000 req/min.
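Since MCP and REST draw from one budget, agents that burst past their tier benefit from backoff on HTTP 429. A generic sketch (the API's 429 response headers aren't documented here, so this just doubles a local delay rather than reading `Retry-After`):

```python
import time

def with_backoff(do_request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a request callable while it returns HTTP 429, doubling the wait.

    `do_request` is any zero-argument callable returning an object with a
    `status_code` attribute, e.g. functools.partial(requests.get, url, headers=H).
    """
    delay = base_delay
    resp = do_request()
    for _ in range(max_retries):
        if resp.status_code != 429:
            break
        sleep(delay)
        delay *= 2
        resp = do_request()
    return resp
```

Injecting `sleep` keeps the helper testable; in production the defaults just use `time.sleep`.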