We build Real-Time Bidding infrastructure that wins auctions in under 10ms — custom DSPs, SSPs, ad exchanges, and OpenRTB-compliant bid stacks operating at hundreds of thousands of requests per second.
OpenRTB conveys each auction's hard deadline in the bid request's tmax field; 100ms is a typical value, measured from auction start. In practice, budgeting for 80ms gives you margin. Our bidding engines consistently respond in 3–8ms at p99, leaving headroom for network jitter and SSP processing time.
We achieve this by loading all campaign targeting data, frequency caps, and budget states into memory at startup, updating asynchronously via Kafka events, and never hitting a database at bid-time.
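One common shape for that pattern, sketched here with hypothetical names: keep an immutable snapshot of campaign state behind an atomic pointer, so bid-time reads are lock-free while a single consumer goroutine (the Kafka side in our description) publishes fresh snapshots.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Campaign holds the targeting data a bidder needs at bid-time.
// Fields here are illustrative.
type Campaign struct {
	ID         string
	DailyCents int64
}

// Store keeps an immutable snapshot of all campaigns behind an atomic
// value: the hot path reads without locks or I/O, and the updater swaps
// in a whole new snapshot rather than mutating shared state.
type Store struct {
	snapshot atomic.Value // holds map[string]Campaign
}

func NewStore() *Store {
	s := &Store{}
	s.snapshot.Store(map[string]Campaign{})
	return s
}

// Lookup is the bid-time path: no locks, no database.
func (s *Store) Lookup(id string) (Campaign, bool) {
	m := s.snapshot.Load().(map[string]Campaign)
	c, ok := m[id]
	return c, ok
}

// Apply is called by the event consumer: copy the old map, apply the
// update, and atomically publish the new snapshot. A single consumer
// goroutine serializes these writes.
func (s *Store) Apply(c Campaign) {
	old := s.snapshot.Load().(map[string]Campaign)
	next := make(map[string]Campaign, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[c.ID] = c
	s.snapshot.Store(next)
}

func main() {
	s := NewStore()
	s.Apply(Campaign{ID: "cmp-1", DailyCents: 50000})
	c, ok := s.Lookup("cmp-1")
	fmt.Println(ok, c.DailyCents) // true 50000
}
```

Copy-on-write maps trade some memory churn on updates for zero contention on reads, which is the right trade at bid-request volumes.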
Distributed pacing is one of the hardest problems in RTB. Centralising budget state creates a bottleneck; fully distributing it leads to overspend. We implement a token-bucket algorithm with Redis atomic operations and a configurable overspend tolerance, typically set to 1–2%.
For very high QPS campaigns, we shard budget state across Redis cluster slots to eliminate hotspot contention, with each bidder pod communicating with a deterministic subset of shards.
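The key layout matters here. A sketch of one way to do the deterministic assignment (key names and shard count are illustrative): give each budget slice its own key with no {hash tag}, so Redis Cluster spreads the slices across slots, and derive each pod's shard from a stable hash of its identity.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// nShards is how many Redis keys a high-QPS campaign's budget is split
// across. Without a {hash tag}, each key lands in its own cluster slot.
const nShards = 8

// shardKey names one slice of a campaign's budget state.
func shardKey(campaignID string, shard int) string {
	return fmt.Sprintf("budget:%s:%d", campaignID, shard)
}

// podShard deterministically maps a bidder pod to the shard it spends
// from, via a stable hash of the pod's identity. Every pod computes the
// same assignment with no coordination, and a fleet of pods spreads
// roughly evenly across the shards.
func podShard(podName string) int {
	h := fnv.New32a()
	h.Write([]byte(podName))
	return int(h.Sum32() % nShards)
}

func main() {
	fmt.Println(shardKey("cmp-42", podShard("bidder-0")))
	fmt.Println(shardKey("cmp-42", podShard("bidder-1")))
}
```

Reconciling the shard totals back into one campaign spend figure then happens off the hot path, on the same asynchronous loop that tunes refill rates.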
User segment targeting (interest categories, demographics, geo, device) uses a pre-compiled bitset representation stored in Redis. Segment matching for a bid request completes in under 50 microseconds, even with 500 active segments per user.
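The core of that matching step is just a word-wise AND plus a popcount. A minimal sketch, assuming segment IDs are small integers indexing into a fixed-width bitmap:

```go
package main

import (
	"fmt"
	"math/bits"
)

// Bitset is a fixed-width segment bitmap: bit i set means segment i
// applies. Stored as raw bytes in Redis, a user's bitmap can be ANDed
// against a campaign's target mask in a few nanoseconds per word.
type Bitset [8]uint64 // room for 512 segment IDs

func (b *Bitset) Set(segment uint) {
	b[segment/64] |= 1 << (segment % 64)
}

// Matches reports whether the user holds at least minHits of the
// campaign's target segments.
func (b *Bitset) Matches(target *Bitset, minHits int) bool {
	hits := 0
	for i := range b {
		hits += bits.OnesCount64(b[i] & target[i])
	}
	return hits >= minHits
}

func main() {
	var user, camp Bitset
	user.Set(3)
	user.Set(130)
	user.Set(400)
	camp.Set(130)
	camp.Set(200)
	fmt.Println(user.Matches(&camp, 1)) // true: segment 130 overlaps
	fmt.Println(user.Matches(&camp, 2)) // false: only one overlap
}
```

Eight 64-bit AND-and-popcount operations cover 512 segments, which is why the per-request cost stays in the tens of microseconds even with deserialization included.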
Ad quality and brand safety checks run in a separate gRPC microservice with its own SLA. Low-priority signals (contextual category, domain block-list, viewability prediction) are pre-computed and cached; real-time checks are reserved for compliance-critical rules only.
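The decision flow can be sketched as a cache-first check, with the slow path reserved for compliance rules. Names here are hypothetical, and the real-time check stands in for what would be a gRPC call with its own deadline:

```go
package main

import "fmt"

// SafetyVerdict is the brand-safety outcome for one impression.
type SafetyVerdict struct {
	Allowed bool
	Reason  string
}

// checkSafety consults pre-computed signals first (cheap lookups kept
// hot in memory) and only invokes the expensive real-time check, in
// production a gRPC call with its own SLA, for compliance-critical
// rules.
func checkSafety(domain string, blocklist map[string]bool, complianceCheck func(string) bool) SafetyVerdict {
	if blocklist[domain] {
		return SafetyVerdict{Allowed: false, Reason: "domain blocklisted (cached)"}
	}
	if !complianceCheck(domain) {
		return SafetyVerdict{Allowed: false, Reason: "failed compliance rule"}
	}
	return SafetyVerdict{Allowed: true}
}

func main() {
	blocklist := map[string]bool{"bad.example": true}
	compliance := func(domain string) bool { return domain != "restricted.example" }
	fmt.Println(checkSafety("bad.example", blocklist, compliance).Reason)
	fmt.Println(checkSafety("ok.example", blocklist, compliance).Allowed)
}
```

Ordering the checks cheapest-first means most requests never touch the network at all.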
Define peak QPS, target bid response latency (p50/p99), win rate targets, and budget accuracy tolerance before any architecture decisions.
We build a mock SSP that fires realistic bid request streams so we can test the full auction cycle in isolation, without depending on third-party sandbox environments.
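The generator side of such a mock SSP can be very small. This sketch emits a deterministic stream covering only a minimal subset of the OpenRTB 2.x request object (a real mock populates far more fields); seeding the RNG makes load-test runs reproducible:

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/rand"
)

// BidRequest is a minimal subset of an OpenRTB 2.x bid request, enough
// to drive a bidder in tests.
type BidRequest struct {
	ID   string `json:"id"`
	Imp  []Imp  `json:"imp"`
	Tmax int    `json:"tmax"`
}

// Imp is a single impression within the request.
type Imp struct {
	ID       string  `json:"id"`
	BidFloor float64 `json:"bidfloor"`
}

// generate produces n synthetic requests with randomized bid floors.
// A fixed seed makes every run identical, so regressions reproduce.
func generate(n int, seed int64) []BidRequest {
	rng := rand.New(rand.NewSource(seed))
	reqs := make([]BidRequest, n)
	for i := range reqs {
		reqs[i] = BidRequest{
			ID:   fmt.Sprintf("req-%d", i),
			Imp:  []Imp{{ID: "1", BidFloor: float64(rng.Intn(500)) / 100}},
			Tmax: 100,
		}
	}
	return reqs
}

func main() {
	for _, r := range generate(2, 42) {
		b, _ := json.Marshal(r)
		fmt.Println(string(b))
	}
}
```

Firing these at the bidder over HTTP, at a controlled rate, gives a full auction cycle without any third-party sandbox in the loop.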
Starting from 1K QPS, we scale to target production load in steps, profiling the bidder at each stage and addressing bottlenecks before they compound.
Traffic is introduced gradually (1% → 10% → 50% → 100%) with automated rollback triggers if p99 latency exceeds SLA or error rate spikes above threshold.
We benchmark every code path with Go pprof and flame graphs. Target: p99 bidder response <10ms at 200K QPS. We profile under realistic conditions — concurrent goroutines, actual targeting logic, actual Redis latency.
Budget accuracy is verified against a test harness that fires 10M bid events at various QPS levels. We measure actual spend vs target spend and tune pacing parameters to stay within the defined tolerance band.
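The pass/fail criterion at the end of that harness is a relative-error check against the tolerance band. A minimal version:

```go
package main

import (
	"fmt"
	"math"
)

// withinTolerance reports whether actual spend stayed inside the agreed
// band around target spend (e.g. tolerance 0.02 for plus or minus 2%).
func withinTolerance(actualCents, targetCents int64, tolerance float64) bool {
	if targetCents == 0 {
		return actualCents == 0
	}
	relErr := math.Abs(float64(actualCents-targetCents)) / float64(targetCents)
	return relErr <= tolerance
}

func main() {
	fmt.Println(withinTolerance(101_500, 100_000, 0.02)) // true: +1.5%
	fmt.Println(withinTolerance(103_000, 100_000, 0.02)) // false: +3%
}
```

Working in integer cents for the spend figures avoids accumulating floating-point drift across millions of events; the division happens once, at verification time.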
OpenRTB request/response contracts are validated against the IAB spec using a custom conformance test suite. We verify all required fields, correct handling of unknown extensions, and appropriate timeout responses.
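One slice of such a conformance suite, sketched with a deliberately partial struct: check the fields OpenRTB 2.x marks required on the top-level request, and confirm that unknown extensions are tolerated (which decoding into a partial struct gives for free, since unmatched JSON keys are ignored).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// bidRequest declares only the required top-level fields of an OpenRTB
// 2.x bid request: id, and at least one imp carrying its own id.
type bidRequest struct {
	ID  string `json:"id"`
	Imp []struct {
		ID string `json:"id"`
	} `json:"imp"`
}

// validateBidRequest returns nil for a conformant request and a
// descriptive error naming the first missing required field.
func validateBidRequest(raw []byte) error {
	var r bidRequest
	if err := json.Unmarshal(raw, &r); err != nil {
		return err
	}
	if r.ID == "" {
		return fmt.Errorf("missing required field: id")
	}
	if len(r.Imp) == 0 {
		return fmt.Errorf("missing required field: imp")
	}
	for i, imp := range r.Imp {
		if imp.ID == "" {
			return fmt.Errorf("imp[%d] missing required field: id", i)
		}
	}
	return nil
}

func main() {
	ok := []byte(`{"id":"r1","imp":[{"id":"1"}],"ext":{"custom":true}}`)
	bad := []byte(`{"id":"r1","imp":[]}`)
	fmt.Println(validateBidRequest(ok))  // <nil>: unknown ext is ignored
	fmt.Println(validateBidRequest(bad)) // missing required field: imp
}
```

The full suite extends the same pattern field by field through the spec's required and recommended objects.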
We test Redis cluster failover, Kafka broker loss, and bidder pod eviction — confirming that the system degrades gracefully (e.g., no-bid responses) rather than returning errors to the SSP.
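The degradation rule reduces to one mapping at the response boundary, sketched here: any internal failure becomes HTTP 204, the conventional OpenRTB no-bid, never a 5xx the SSP would count against you.

```go
package main

import (
	"errors"
	"fmt"
)

// errDegraded stands in for any dependency failure: Redis failover in
// progress, Kafka lag, missing local state.
var errDegraded = errors.New("dependency unavailable")

// respond maps bidder outcomes onto the wire: a valid bid returns
// HTTP 200 with a body, while any internal failure or empty bid
// becomes HTTP 204, a clean no-bid rather than an error.
func respond(bid string, err error) (status int, body string) {
	if err != nil || bid == "" {
		return 204, "" // degrade to no-bid, never surface a 5xx
	}
	return 200, bid
}

func main() {
	s, _ := respond("", errDegraded)
	fmt.Println(s) // 204
	s, b := respond(`{"id":"resp-1"}`, nil)
	fmt.Println(s, b) // 200 {"id":"resp-1"}
}
```

Chaos tests then only need to assert one invariant under every injected failure: the SSP-facing status is always 200-with-bid or 204, never anything else.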
Whether you're starting from scratch or need to scale an existing system, we can help. Tell us your QPS target and current architecture.