RTB & AdTech

Bidding Engines Built for
Millisecond Markets

We build Real-Time Bidding infrastructure that wins auctions in under 10ms — custom DSPs, SSPs, ad exchanges, and OpenRTB-compliant bid stacks operating at hundreds of thousands of requests per second.

Discuss your project

The RTB Technology Stack

Go / C++
Bidding Engine Core
Goroutine-based concurrency in Go handles thousands of simultaneous bid requests. C++ is used for latency-critical scoring paths where nanoseconds matter.
Redis
Frequency Cap & Targeting
Sub-millisecond reads for frequency capping, user segment lookups, and campaign budget state — all in-memory with configurable persistence.
Kafka
Win/Loss & Impression Events
Durable, ordered event stream for bid wins, impressions, clicks, and conversions. Downstream consumers update pacing and attribution in real time.
ClickHouse
Reporting & Analytics
Columnar storage allows advertisers to query campaign performance across billions of impression rows in under 2 seconds.
Nginx + Lua
Edge Request Handling
OpenResty (Nginx + LuaJIT) handles bid request parsing and validation at the edge, offloading work from the Go bidder tier.
OpenRTB 2.x/3.x
Protocol Standard
Full implementation of the IAB OpenRTB specification including native, video, banner, and VAST ad formats with GDPR consent handling.
PostgreSQL
Campaign & Creative Store
Campaign configuration, targeting rules, creatives, and budget definitions. Replicated to Redis at campaign start for low-latency runtime reads.
Docker / K8s
Infrastructure
Containerised bidder pods with resource limits. HPA scales bidder replicas in response to QPS, ensuring capacity meets demand without over-provisioning.

Designing RTB Systems

The 100ms auction window

OpenRTB exchanges set a per-request response deadline (the tmax field), conventionally around 100ms from auction start. In practice, budgeting for 80ms gives you margin. Our bidding engines consistently respond in 3–8ms at p99, leaving headroom for network jitter and SSP processing time.

We achieve this by loading all campaign targeting data, frequency caps, and budget states into memory at startup, updating asynchronously via Kafka events, and never hitting a database at bid-time.
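That read path can be sketched as a copy-on-write snapshot behind an atomic pointer — a minimal illustration with a hypothetical Campaign shape and the Kafka consumer elided:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Campaign is a stand-in for the runtime state a bidder reads at
// bid-time; the fields are illustrative, not a real schema.
type Campaign struct {
	ID         string
	BidCPM     float64
	BudgetLeft float64
}

// Store holds an immutable snapshot of all campaigns behind an
// atomic pointer: bid-path reads are lock-free, while a single
// updater goroutine (fed by Kafka events) swaps in fresh snapshots.
type Store struct {
	snapshot atomic.Pointer[map[string]Campaign]
}

func NewStore() *Store {
	s := &Store{}
	empty := map[string]Campaign{}
	s.snapshot.Store(&empty)
	return s
}

// Lookup runs on the hot bid path; it never blocks or locks.
func (s *Store) Lookup(id string) (Campaign, bool) {
	c, ok := (*s.snapshot.Load())[id]
	return c, ok
}

// Apply copies the current snapshot, adds the update, and swaps the
// pointer — copy-on-write keeps concurrent readers safe without locks.
func (s *Store) Apply(c Campaign) {
	old := *s.snapshot.Load()
	next := make(map[string]Campaign, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[c.ID] = c
	s.snapshot.Store(&next)
}

func main() {
	store := NewStore()
	store.Apply(Campaign{ID: "cmp-1", BidCPM: 2.5, BudgetLeft: 1000})
	c, _ := store.Lookup("cmp-1")
	fmt.Printf("bid-time read: %s at $%.2f CPM\n", c.ID, c.BidCPM)
}
```

Copy-on-write trades update cost for zero read contention, which suits the bid path's read-heavy profile.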

Budget pacing without central state

Distributed pacing is one of the hardest problems in RTB. Centralising budget state creates a bottleneck; fully distributing it leads to overspend. We implement a token-bucket algorithm with Redis atomic operations and a configurable overspend tolerance, typically set to 1–2%.

For very high QPS campaigns, we shard budget state across Redis cluster slots to eliminate hotspot contention, with each bidder pod communicating with a deterministic subset of shards.

Targeting and segment matching

User segment targeting (interest categories, demographics, geo, device) uses a pre-compiled bitset representation stored in Redis. Segment matching for a bid request completes in under 50 microseconds, even with 500 active segments per user.

Creative and brand safety

Ad quality and brand safety checks run in a separate gRPC microservice with its own SLA. Low-priority signals (contextual category, domain block-list, viewability prediction) are pre-computed and cached; real-time checks are reserved for compliance-critical rules only.

How We Build RTB Systems

01

Traffic model and SLA definition

Define peak QPS, target bid response latency (p50/p99), win rate targets, and budget accuracy tolerance before any architecture decisions.

02

Mock SSP integration and contract testing

We build a mock SSP that fires realistic bid request streams so we can test the full auction cycle in isolation, without depending on third-party sandbox environments.

03

Incremental load testing

Starting from 1K QPS, we scale to target production load in steps, profiling the bidder at each stage and addressing bottlenecks before they compound.

04

Controlled production launch

Traffic is introduced gradually (1% → 10% → 50% → 100%) with automated rollback triggers if p99 latency exceeds SLA or error rate spikes above threshold.

Latency Benchmarking

We benchmark every code path with Go pprof and flame graphs. Target: p99 bidder response <10ms at 200K QPS. We profile under realistic conditions — concurrent goroutines, actual targeting logic, actual Redis latency.

Overspend / Underspend Testing

Budget accuracy is verified against a test harness that fires 10M bid events at various QPS levels. We measure actual spend vs target spend and tune pacing parameters to stay within the defined tolerance band.

Protocol Conformance

OpenRTB request/response contracts are validated against the IAB spec using a custom conformance test suite. We verify all required fields, correct handling of unknown extensions, and appropriate timeout responses.
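A minimal slice of such a required-field check, assuming only the spec's two required top-level fields (a non-empty id and at least one imp object), might look like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// BidRequest models only the fields this check needs; the real
// OpenRTB 2.x object has many more.
type BidRequest struct {
	ID   string            `json:"id"`
	Imp  []json.RawMessage `json:"imp"`
	TMax int               `json:"tmax,omitempty"`
	Ext  json.RawMessage   `json:"ext,omitempty"` // unknown extensions pass through untouched
}

// validate enforces the spec's required top-level fields.
func validate(raw []byte) error {
	var req BidRequest
	if err := json.Unmarshal(raw, &req); err != nil {
		return fmt.Errorf("malformed JSON: %w", err)
	}
	if req.ID == "" {
		return fmt.Errorf("missing required field: id")
	}
	if len(req.Imp) == 0 {
		return fmt.Errorf("missing required field: imp")
	}
	return nil
}

func main() {
	good := []byte(`{"id":"abc","imp":[{"id":"1"}],"tmax":100}`)
	bad := []byte(`{"id":"abc","imp":[]}`)
	fmt.Println(validate(good), validate(bad) != nil)
}
```

Keeping unknown extensions as raw JSON, rather than dropping them during unmarshalling, is what makes "correct handling of unknown extensions" testable.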

Failover Scenarios

We test Redis cluster failover, Kafka broker loss, and bidder pod eviction — confirming that the system degrades gracefully (e.g., no-bid responses) rather than returning errors to the SSP.

RTB Systems We've Built

AdTech Startup · Global
A programmatic advertising startup needed to build a custom DSP from scratch to connect to 12 SSPs, support CPC/CPM/CPA bidding models, and achieve bid response latency competitive with established players.
We built a Go-based bidding engine with in-memory targeting state, a Redis-backed frequency cap and budget system, and Kafka pipelines for win/impression/click attribution. The reporting stack uses ClickHouse for sub-second query performance across billions of events.
500K+ Bid requests/sec
<5ms p99 bid latency
99.99% Uptime
Media Group · Eastern Europe
A publisher network wanted to build their own SSP to capture the margin lost to third-party SSPs, while maintaining fill rates equivalent to their existing waterfall setup.
We designed an OpenRTB-compliant SSP with a second-price auction engine, floor price management, and deal ID support. We integrated with 8 DSPs simultaneously, implemented unified auction logic to prevent duplicate demand, and built a real-time reporting dashboard for the publisher's yield team.
35% Revenue uplift
8 DSPs connected
<100ms Auction window
Performance Agency · DACH Region
A performance marketing agency needed a white-label bidding platform for their clients that could handle multi-advertiser campaigns with isolated budgets, separate reporting, and role-based access control.
We built a multi-tenant DSP with per-advertiser budget isolation, a self-serve campaign management UI, and an ML-based bid price optimizer that modelled conversion probability from historical click and conversion data.
40+ Advertisers on platform
ROAS improvement
1% Budget overspend tolerance
Mobile Ad Network · Global
A mobile advertising network needed to migrate from a PHP-based ad server handling 50K QPS to a new architecture supporting 300K QPS with native and video ad formats and GDPR consent string handling.
We rewrote the bid endpoint in Go, migrated targeting state to a Redis cluster, implemented VAST 4.x for video ads, and added TCF 2.0 consent string parsing. The migration ran in parallel for 4 weeks with live traffic comparison before cutover.
300K QPS capacity
Throughput gain
0 Downtime during migration

Building an RTB platform?

Whether you're starting from scratch or need to scale an existing system, we can help. Tell us your QPS target and current architecture.