## Prerequisites
Before you begin, make sure you have the following installed:
| Tool | Version | Purpose |
|---|---|---|
| Node.js | 20+ | Runtime for backend and build tooling |
| pnpm | 9+ | Package manager (monorepo workspaces) |
| Docker | 20+ | Runs ClickHouse and Redis containers |
| Docker Compose | v2+ | Orchestrates infrastructure services |
Run `npm install -g pnpm` if you do not already have it.

## Installation
### Clone and install
```bash
git clone <your-repo-url>
cd resolvedmarkets
pnpm install
```

This installs dependencies for all three packages (backend, frontend, mcp-server) via pnpm workspaces.
### Start infrastructure
The backend requires ClickHouse (time-series storage) and Redis (caching):
```bash
pnpm db:up    # docker-compose up -d
pnpm db:down  # stop infrastructure
```

This starts ClickHouse on ports 8123/9000 and Redis on port 6379.
### Configure environment
```bash
cp packages/backend/.env.example packages/backend/.env
```

For local development the defaults work out of the box. You will need Clerk keys for authentication -- see the Configuration section below.
## Quick Start
### Run everything

```bash
pnpm dev  # starts infra + backend + frontend
```

### Verify the setup

```bash
curl http://localhost:3001/health  # should return OK
```

Open http://localhost:5173 to see the frontend.
### Make your first API call
Generate an API key from the `/api-keys` page, then:
```bash
curl -H "X-API-Key: rm_your_key_here" \
  http://localhost:3001/v1/markets/live
```

This returns a JSON array of active Polymarket crypto prediction markets.
## Data Pipeline
The backend captures orderbook data from Polymarket via an event-driven pipeline:
```
Polymarket WS              Binance WS
      |                        |
      v                        v
CLOBCollector          CryptoPriceCollector
      |                        |
      v                        |
OrderbookManager <-------------+
      |
      v  (event callback)
SnapshotCapturer
      |
      v  (batch)
ClickHouseBatchWriter --> ClickHouse
```

### Key components
- MultiMarketFetcher -- Queries the Polymarket Gamma API every 30s for active BTC/ETH/SOL/XRP markets across 4 timeframes (5m, 15m, 1h, 1d).
- CLOBCollector -- Single WebSocket to Polymarket receiving `book` (full snapshot) and `price_change` (delta) events.
- OrderbookManager -- In-memory orderbook state per token. Produces sorted snapshots with best bid/ask, mid price, spread, depth, and sequence numbers.
- SnapshotCapturer -- Event-driven capture throttled to 50ms min interval (~20Hz/token). Deduplicates via top-5 bid/ask fingerprinting.
- CryptoPriceCollector -- Streams BTC/ETH/SOL/XRP trades from Binance WebSocket with staleness tracking.
- ClickHouseBatchWriter -- Batch size 800 snapshots / 500 trades, 2s flush interval, 3 retries.
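The batch writer's buffering behavior (size-triggered and timer-triggered flushes with bounded retries) can be sketched as follows. This is an illustrative model, not the actual implementation: the class and method names are ours, and a synchronous sink stands in for the real async ClickHouse insert.

```typescript
// Minimal sketch of size/time batching with retries (names are illustrative).
class BatchWriter<T> {
  private buffer: T[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly sink: (rows: T[]) => void, // stand-in for the ClickHouse insert
    private readonly maxBatch = 800,            // snapshots per batch (from the docs)
    private readonly flushMs = 2000,            // flush interval (from the docs)
    private readonly retries = 3                // retry budget (from the docs)
  ) {}

  add(row: T): void {
    this.buffer.push(row);
    if (this.buffer.length >= this.maxBatch) {
      this.flush(); // size-triggered flush
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushMs); // time-triggered
    }
  }

  flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    if (this.buffer.length === 0) return;
    const rows = this.buffer.splice(0); // take ownership of the current batch
    for (let attempt = 1; attempt <= this.retries; attempt++) {
      try { this.sink(rows); return; }
      catch (err) { if (attempt === this.retries) throw err; } // give up after 3 tries
    }
  }
}
```

The important property is that a batch is handed to the sink exactly once: `splice(0)` empties the buffer before writing, so rows arriving during a flush start a fresh batch.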
## Storage
### ClickHouse
Primary storage for all time-series data in the polymarket database:
| Table | Purpose | Engine |
|---|---|---|
| `snapshots_hf` | Orderbook snapshots with bid/ask arrays, timestamps, sequence numbers | MergeTree, partitioned by day |
| `trades` | Individual trade events | MergeTree, partitioned by day |
| `market_stats` | Pre-aggregated market statistics | Materialized View |
| `api_keys` | Hashed API keys with usage tracking | ReplacingMergeTree |
| `api_request_log` | Per-request analytics | MergeTree, 90-day TTL |
| `user_tiers` | User tier and credit tracking | ReplacingMergeTree |
| `payments` | Payment transaction log | MergeTree |
### Redis
Response caching with per-endpoint TTLs and pipeline health statistics. Redis is not used as a message queue -- data flows directly from capturers to the batch writer.
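The per-endpoint TTL idea can be sketched like this. A `Map` stands in for Redis purely for illustration (the real backend would use a Redis client and `SET ... EX`), and the TTL values and endpoint names here are hypothetical, not the actual configuration.

```typescript
// Sketch of per-endpoint response caching; Map is a stand-in for Redis.
const TTL_SECONDS: Record<string, number> = {
  "/v1/markets/live": 5,      // hypothetical TTL
  "/v1/markets/history": 60,  // hypothetical TTL
};

interface Entry { value: string; expiresAt: number; }
const cache = new Map<string, Entry>();

function cacheGet(endpoint: string, now = Date.now()): string | null {
  const e = cache.get(endpoint);
  if (!e || e.expiresAt <= now) return null; // missing or expired
  return e.value;
}

function cacheSet(endpoint: string, value: string, now = Date.now()): void {
  const ttl = TTL_SECONDS[endpoint] ?? 10; // assumed default TTL
  cache.set(endpoint, { value, expiresAt: now + ttl * 1000 });
}
```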
## API Layer
Express server on port 3001 serving REST and WebSocket endpoints.
### WebSocket authentication
Clients authenticate via message (not query string):
1. Connect to `/ws/orderbook`
2. Send: `{ "type": "auth", "apiKey": "rm_..." }`
3. Receive: `{ "type": "auth", "status": "ok" }`
4. Send: `{ "type": "subscribe", "crypto": "BTC" }`

A 5-second timeout is enforced. Unauthenticated connections close with code 4001.
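The handshake above can be modeled as a tiny client-side state machine: given the last server message (or `null` right after connect), produce the next frame to send. Only the wire messages come from the docs; the function name and shape are ours, and a real client would feed this from a WebSocket `message` handler.

```typescript
// Sketch of the auth-then-subscribe sequence (helper name is illustrative).
type ServerMsg = { type: string; status?: string };

function nextClientMessage(
  apiKey: string,
  crypto: string,
  incoming: ServerMsg | null
): string | null {
  if (incoming === null) {
    // First frame after connect: authenticate before the 5s server deadline.
    return JSON.stringify({ type: "auth", apiKey });
  }
  if (incoming.type === "auth" && incoming.status === "ok") {
    // Auth confirmed: subscribe to a crypto's orderbook stream.
    return JSON.stringify({ type: "subscribe", crypto });
  }
  return null; // data frames need no reply
}
```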
## Markets
Polymarket runs recurring binary prediction markets: will a cryptocurrency be at or above its opening price at the end of a time window?
Each market has two tokens -- UP and DOWN -- each with its own orderbook. Prices are inversely correlated and should sum to ~1.00.
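The sum-to-~1.00 invariant is easy to check from the two orderbooks' mid prices. A minimal sketch, with function names of our own choosing:

```typescript
// Mid price of one side's orderbook top.
function midPrice(bestBid: number, bestAsk: number): number {
  return (bestBid + bestAsk) / 2;
}

// How far the UP and DOWN mids deviate from summing to 1.00.
// A large deviation suggests a stale or crossed market.
function sumDeviation(
  up: { bid: number; ask: number },
  down: { bid: number; ask: number }
): number {
  return midPrice(up.bid, up.ask) + midPrice(down.bid, down.ask) - 1.0;
}
```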
### Tracked cryptos and timeframes
| Crypto | 5m | 15m | 1h | 1d |
|---|---|---|---|---|
| BTC | Yes | Yes | Yes | Yes |
| ETH | Yes | Yes | Yes | Yes |
| SOL | Yes | Yes | Yes | Yes |
| XRP | Yes | Yes | Yes | Yes |
### Identifiers
- `conditionId` (`market_id`) -- Primary identifier, a hex string for each market instance.
- `tokenId` -- Two per market (UP and DOWN). Used for WebSocket subscriptions.
- `slug` -- Human-readable identifier, e.g. `btc-updown-5m-1772448000`. Note that slugs use `sol-` for short timeframes but `solana-` for hourly/daily series; this inconsistency comes from Polymarket's naming.

### Market discovery
The backend queries the Polymarket Gamma API:
```
GET https://gamma-api.polymarket.com/events?tag_id=102127&active=true&closed=false&limit=100
```

Markets with `enableOrderBook: false` are skipped (typically near resolution time).
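Building that discovery request and filtering out untradeable markets might look like the following sketch. The query parameters and the `tag_id` value come from this doc; the function names and the event type are ours.

```typescript
// Build the Gamma API discovery URL with the documented query parameters.
function gammaEventsUrl(limit = 100): string {
  const params = new URLSearchParams({
    tag_id: "102127",   // tag for these markets, taken from this doc
    active: "true",
    closed: "false",
    limit: String(limit),
  });
  return `https://gamma-api.polymarket.com/events?${params}`;
}

// Drop markets whose orderbook is disabled (typically near resolution).
function tradeable<T extends { enableOrderBook: boolean }>(events: T[]): T[] {
  return events.filter((e) => e.enableOrderBook);
}
```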
## Data Fidelity
The system captures snapshots in response to every orderbook event -- not by polling at fixed intervals. This event-driven approach means no changes are missed (subject to WebSocket delivery guarantees).
### Throttling and deduplication
A 50ms minimum interval per token caps capture at ~20Hz. Unchanged states are detected via top-5 bid/ask fingerprinting and skipped.
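A sketch of that capture gate, combining the 50ms per-token throttle with a top-5 fingerprint for dedup. Names are illustrative; the real capturer also increments the `captureThrottled` / `captureDeduplicated` counters mentioned under Sequence numbers.

```typescript
// [price, size] level as it appears in a snapshot.
type Level = [price: number, size: number];

// Fingerprint only the top 5 levels per side -- enough to detect real changes.
function fingerprint(bids: Level[], asks: Level[]): string {
  return JSON.stringify([bids.slice(0, 5), asks.slice(0, 5)]);
}

const lastCapture = new Map<string, { at: number; fp: string }>();

function shouldCapture(
  tokenId: string,
  bids: Level[],
  asks: Level[],
  now: number
): boolean {
  const fp = fingerprint(bids, asks);
  const prev = lastCapture.get(tokenId);
  if (prev && now - prev.at < 50) return false; // throttled (~20Hz cap per token)
  if (prev && prev.fp === fp) return false;     // deduplicated (state unchanged)
  lastCapture.set(tokenId, { at: now, fp });
  return true;
}
```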
### Dual timestamps
| Field | Source | Meaning |
|---|---|---|
| `eventTimestamp` | Polymarket WS | When the exchange emitted the event |
| `captureTimestamp` | Local | When we processed and stored it |
The delta between them measures full pipeline latency. Both are stored as `DateTime64(3, 'UTC')` in ClickHouse.
### Sequence numbers
Each token's orderbook maintains a monotonically increasing sequence number. Gaps indicate missed events -- check captureThrottled and captureDeduplicated counters to distinguish intentional skips from data loss.
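A gap scan over one token's stored snapshots can be as simple as the sketch below. Note that a nonzero gap count alone does not prove data loss; as stated above, compare it against the throttle and dedup counters first.

```typescript
// Count breaks in a monotonically increasing sequence (assumed sorted by time).
function countSequenceGaps(sequenceNumbers: number[]): number {
  let gaps = 0;
  for (let i = 1; i < sequenceNumbers.length; i++) {
    if (sequenceNumbers[i] !== sequenceNumbers[i - 1] + 1) gaps++;
  }
  return gaps;
}
```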
## Tiers & Payments
| Feature | Free | Pro ($29/mo) | Enterprise ($99/mo) |
|---|---|---|---|
| Rate limit | 60/min | 500/min | 3000/min |
| History | 1 hour | 30 days | Full archive |
| WebSocket | No | 2 connections | 10 connections |
| MCP access | No | Read-only | Full |
| API keys | 1 | 5 | 20 |
Payments are processed via NOWPayments (cryptocurrency). Prepaid credit packs are also available from the Pricing page.
## Configuration
The backend uses dotenv. Copy `packages/backend/.env.example` to `.env`.
### Required variables
| Variable | Description |
|---|---|
| `CLICKHOUSE_HOST` | ClickHouse hostname (default: localhost) |
| `CLICKHOUSE_PORT` | ClickHouse HTTP port (default: 8123) |
| `CLICKHOUSE_USER` | ClickHouse username |
| `CLICKHOUSE_PASSWORD` | ClickHouse password |
| `REDIS_HOST` | Redis hostname (default: localhost) |
| `REDIS_PORT` | Redis port (default: 6379) |
| `PORT` | Backend server port (default: 3001) |
| `CLERK_PUBLISHABLE_KEY` | Clerk frontend key |
| `CLERK_SECRET_KEY` | Clerk backend secret |
### Optional variables
| Variable | Description |
|---|---|
| `NOWPAYMENTS_API_KEY` | NOWPayments API key for crypto payments |
| `NOWPAYMENTS_WEBHOOK_SECRET` | HMAC secret for payment webhooks |
| `HF_API_URL` | MCP server target API URL |
| `HF_API_KEY` | MCP server API key |
| `MCP_TRANSPORT` | MCP transport: stdio (default) or http |
| `MCP_PORT` | MCP HTTP transport port |
## ClickHouse Schema
All tables live in the `polymarket` database. The primary table for orderbook data:
```sql
CREATE TABLE polymarket.snapshots_hf (
    market_id String,
    token_id String,
    token_side Enum8('UP' = 1, 'DOWN' = 2),
    crypto LowCardinality(String),
    timeframe LowCardinality(String),
    best_bid Float64,
    best_ask Float64,
    mid_price Float64,
    spread Float64,
    bid_depth Float64,
    ask_depth Float64,
    bids Array(Tuple(Float64, Float64)),
    asks Array(Tuple(Float64, Float64)),
    crypto_price Float64,
    crypto_price_age_ms Int32,
    event_timestamp DateTime64(3, 'UTC'),
    capture_timestamp DateTime64(3, 'UTC'),
    sequence_number UInt64
) ENGINE = MergeTree()
PARTITION BY toDate(capture_timestamp)
ORDER BY (crypto, timeframe, market_id, token_side, capture_timestamp);
```