# Scaling and Shared Stores

When running multiple tollbooth instances, in-memory state is no longer safe: each instance would keep its own private copy. Use Redis for anything that must stay consistent across instances.
## What must be shared

| Store | Why |
|---|---|
| Rate-limit counters | All instances must read/increment the same counters |
| Time/session state | Session started on instance A must be visible on instance B |
| Verification cache | Prevents duplicate verification and settlement races |
Without shared stores: rate limits become per-instance, sessions break across instances, and autoscaling creates fresh empty caches.
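The rate-limit failure mode is easy to see with a toy model (this is an illustration, not tollbooth code): two instances each keeping a local fixed-window counter will together admit twice the intended limit, while a single shared counter enforces it globally.

```python
# Illustration: why rate-limit counters must live in a shared store.
# Limit of 5 requests per window; 10 requests arrive for one client.

LIMIT = 5

def allowed(counter: dict, key: str) -> bool:
    """Fixed-window check: increment the counter, allow while under LIMIT."""
    counter[key] = counter.get(key, 0) + 1
    return counter[key] <= LIMIT

# Per-instance state: each instance has its own private counter.
instance_a, instance_b = {}, {}
# A load balancer alternates the 10 requests across the two instances.
per_instance = sum(
    allowed(instance_a if i % 2 == 0 else instance_b, "client-1")
    for i in range(10)
)

# Shared state: both instances increment the same store (the role Redis plays).
shared = {}
shared_allowed = sum(allowed(shared, "client-1") for i in range(10))

print(per_instance)    # 10 -- every request passes; the limit is never hit
print(shared_allowed)  # 5  -- the limit is enforced globally
```

Each local counter only ever reaches 5, so neither instance refuses anything; the shared counter sees all 10 requests and rejects the second half.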
## Configuration

```yaml
stores:
  redis:
    url: "redis://localhost:6379"
    prefix: "tollbooth-prod"

rateLimit:
  backend: redis

verificationCache:
  backend: redis

timeSession:
  backend: redis
```

You can override Redis connection details per store:
```yaml
stores:
  redis:
    url: "redis://shared-cache:6379"
    prefix: "tollbooth"

verificationCache:
  backend: redis
  redis:
    url: "redis://verification-cache:6379"
    prefix: "tollbooth-vc"
```

## Docker Compose example
```yaml
services:
  redis:
    image: redis:7-alpine
    command: ["redis-server", "--appendonly", "yes"]
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

  tollbooth:
    image: ghcr.io/x402-tollbooth/gateway:latest
    depends_on: [redis]
    ports:
      - "3000:3000"
    environment:
      REDIS_URL: redis://redis:6379
    volumes:
      - ./tollbooth.config.yaml:/app/tollbooth.config.yaml:ro

volumes:
  redis_data:
```
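One caveat with the plain `depends_on` form: Compose only waits for the Redis container to start, not for Redis to accept connections, so tollbooth can briefly race its store on cold boot. A hedged variant of the same services that gates startup on a Redis healthcheck:

```yaml
services:
  redis:
    image: redis:7-alpine
    command: ["redis-server", "--appendonly", "yes"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  tollbooth:
    image: ghcr.io/x402-tollbooth/gateway:latest
    depends_on:
      redis:
        condition: service_healthy
```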
## Production recommendations

- Use a managed Redis service (Upstash, ElastiCache, Redis Cloud) with TLS and auth enabled.
- Place Redis close to your tollbooth instances to minimize per-request latency.
- Enable shared Redis before scaling to multiple instances, so no in-memory state is lost in the switch.
- Monitor Redis latency, error rates, and memory usage.
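With a managed provider, the only config change is usually the URL. Assuming tollbooth's Redis client accepts standard Redis URIs, the `rediss://` scheme (note the double `s`) selects TLS; the hostname and password below are placeholders, not real values:

```yaml
stores:
  redis:
    # rediss:// enables TLS; credentials and host are placeholders
    url: "rediss://default:<password>@your-instance.example-provider.com:6379"
    prefix: "tollbooth-prod"
```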