Congestion Control Options

The transport decides what gets carried. Congestion control decides when the senders slow down. They're co-designed — pick a transport and you've largely picked the CC algorithm that pairs with it.

This page is the CC design space. If transports are the wire, these are the brakes.

[Figure: The congestion control landscape, a 2×2 grid of families.
Family 1, TCP family (react after loss): Reno, CUBIC, BBR, Vegas.
Family 2, Datacenter/RDMA (ECN and delay signals, built for tight DC networks): DCTCP, DCQCN, HPCC, Swift.
Family 3, Hyperscaler custom (hardware-co-designed control loops with multipath telemetry): MRC CC, Falcon CC, UET CC, SRD.
Family 4, Research/exotic (influential ideas with partial deployment): Homa, NDP, ExpressPass, EQDS.]
The CC landscape at a glance. Each family pairs with a transport family — picking a transport mostly picks your CC.

Four decades in three eras

[Timeline: Congestion control evolution, 1985–2026. Milestones: 1988 TCP Reno / NewReno (Berkeley/IETF); 2001 ECN standardized (RFC 3168); 2006 CUBIC; 2010 DCTCP paper; 2015 DCQCN + TIMELY; 2016 BBR (Google); 2017 NDP / Trim; 2018 Homa, AWS SRD CC; 2019 HPCC paper; 2020 Swift; 2022 PowerTCP; 2023 Spectrum-X CC, Falcon CC; 2024 MRC CC; 2025 UET CC. Legend: active spec/standard, vendor/proprietary, superseded.]

Era 1 — TCP's reactive CC (1988–2010)

CC was born in 1988 with TCP slow start and congestion avoidance — a pure software algorithm reacting after loss. CUBIC (2006) became Linux's default by aggressively filling bandwidth between losses. ECN was standardized in 2001 (RFC 3168) — the network could finally mark instead of drop — but for a decade almost nobody used it. The whole era was reactive: the sender knew there was congestion only after packets had already been thrown away.
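The reactive loop itself is only a few lines of arithmetic. Below is a minimal Python sketch of Reno-style slow start, congestion avoidance, and the post-loss multiplicative cut; the constants and the injected loss pattern are purely illustrative, not taken from any RFC.

```python
def on_ack(state):
    # per-ACK update: slow start adds 1 MSS per ACK (roughly doubling cwnd each RTT),
    # congestion avoidance adds 1/cwnd per ACK (about +1 MSS per RTT)
    if state["cwnd"] < state["ssthresh"]:
        state["cwnd"] += 1
    else:
        state["cwnd"] += 1 / state["cwnd"]

def on_loss(state):
    # the reactive part: only after a drop do we halve the window
    state["ssthresh"] = max(state["cwnd"] / 2, 2)
    state["cwnd"] = state["ssthresh"]

state = {"cwnd": 1.0, "ssthresh": 32.0}
for rtt in range(16):
    for _ in range(int(state["cwnd"])):   # roughly one ACK per in-flight segment
        on_ack(state)
    if rtt % 5 == 4:                      # inject a loss every few rounds (illustrative)
        on_loss(state)
    print(f"rtt={rtt:2d}  cwnd={state['cwnd']:6.1f}")
```

Run it and you get the classic sawtooth: exponential ramp, linear climb, a halving at every loss event.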

Era 2 — Datacenter needs precision (2010–2015)

DCTCP (Microsoft / Stanford, 2010) proved that fine-grained ECN-based CC works in tight datacenter networks — small queues, proportional reaction, no waiting for loss. DCQCN (Microsoft Research, 2015) ported the idea to RoCE v2 and became the canonical RDMA CC. TIMELY (Google, 2015) showed that delay (RTT gradient) is a viable signal too, no ECN needed. CC moved from reactive to proactive, and from software-only to NIC-offloaded.
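DCTCP's proportional reaction is the key mechanism: smooth the fraction of ECN-marked packets into an estimate alpha, then cut the window in proportion to it rather than halving on loss. A minimal Python sketch of that per-RTT update, following the 2010 paper's rules (the marking trace below is made up to show the behaviour; g = 1/16 is the paper's suggested gain):

```python
G = 1 / 16          # EWMA gain g for smoothing the marked fraction

def dctcp_round(cwnd, alpha, acked, marked):
    """One RTT: estimate the marked fraction F, fold it into alpha,
    then cut cwnd in proportion to alpha instead of halving on loss."""
    f = marked / max(acked, 1)
    alpha = (1 - G) * alpha + G * f
    if marked:                       # react to ECN marks, not drops
        cwnd = cwnd * (1 - alpha / 2)
    else:
        cwnd += 1                    # otherwise normal additive increase
    return cwnd, alpha

cwnd, alpha = 100.0, 0.0
for marked in [0, 0, 5, 20, 40, 10, 0, 0]:     # marks seen in successive RTTs
    cwnd, alpha = dctcp_round(cwnd, alpha, acked=100, marked=marked)
    print(f"marked={marked:3d}  alpha={alpha:.3f}  cwnd={cwnd:6.1f}")
```

Light marking barely dents the window; sustained marking cuts it hard. That proportionality is what keeps DC queues short without starving throughput.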

Era 3 — AI-fabric CC: hardware-co-designed, fabric-wide (2015 → today)

Modern AI-fabric CC isn't an algorithm in the kernel — it's a control loop the switch and the NIC implement together. Swift (Google, 2020) became the basis for Falcon CC (2023, with CSIG + Carousel). HPCC (Alibaba, 2019) uses In-band Network Telemetry (INT) for precise per-link signals. MRC CC (2024) brings programmable CC and μs rerouting. UET CC (2025) is sender- and receiver-driven, built for a packet-sprayed environment. Spectrum-X CC (NVIDIA, 2023) is co-designed across switch and NIC.
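The common thread in these loops is a hardware-measured delay target. A rough Python sketch of a Swift-style delay-target update follows; the constants are illustrative, and the real Swift and Falcon loops add scaled targets, flow-count scaling, pacing, and NIC offload.

```python
TARGET_US = 50.0     # illustrative fabric delay target (microseconds)
AI = 1.0             # additive increase per round while under the target
BETA = 0.8           # multiplicative-decrease scaling
MAX_MD = 0.5         # never cut more than half in one round

def delay_based_update(cwnd, delay_us):
    """One control-loop iteration driven purely by measured delay."""
    if delay_us < TARGET_US:
        return cwnd + AI                              # under target: grow
    overshoot = (delay_us - TARGET_US) / delay_us     # how far past the target
    return cwnd * max(1 - BETA * overshoot, 1 - MAX_MD)

cwnd = 80.0
for delay in [30, 40, 55, 90, 140, 70, 45]:           # made-up delay samples in µs
    cwnd = delay_based_update(cwnd, delay)
    print(f"delay={delay:4d}µs  cwnd={cwnd:6.1f}")
```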

Congestion control moved from a reactive software algorithm (TCP) to a proactive, hardware-co-designed, fabric-wide control loop (DCQCN, HPCC, UET).


CC algorithm families

Pick the family you care about; each is a self-contained reference:

TCP family: what runs the internet.

| Algorithm | Family | Signal | Key trait | Where used |
| --- | --- | --- | --- | --- |
| Reno / NewReno | TCP | Loss | Classic AIMD; baseline | Legacy TCP everywhere |
| CUBIC | TCP | Loss | Cubic growth — default in Linux/Windows | Most internet TCP today |
| Vegas / Westwood | TCP | Delay / bandwidth estimate | Delay-based; low queueing | Niche / research |
| Compound TCP | TCP | Loss + delay | Hybrid | Older Windows |
| BBR v1 / v2 / v3 | TCP / QUIC | Bandwidth + RTT | Model-based; fills bottleneck without filling buffers | Google services, YouTube, QUIC, Linux kernel |

Standout: BBR. Doesn't wait for loss — models the bottleneck bandwidth and pacing rate, fills the pipe without filling buffers. Bridges the TCP era and the modern model-based approach.
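The model fits in a few lines: keep a windowed max of delivery rate (BtlBw) and a windowed min of RTT (RTprop), then pace at a gain times BtlBw and cap data in flight at a gain times the bandwidth-delay product. A hedged Python sketch of that idea; the gains and sample-window sizes are illustrative, not the Linux kernel's exact parameters.

```python
from collections import deque

class BBRModel:
    def __init__(self):
        self.bw_samples = deque(maxlen=10)     # delivery-rate samples (bytes/sec)
        self.rtt_samples = deque(maxlen=10)    # RTT samples (seconds)

    def on_ack(self, delivered_bytes, interval_s, rtt_s):
        # each ACK yields a delivery-rate sample and an RTT sample
        self.bw_samples.append(delivered_bytes / interval_s)
        self.rtt_samples.append(rtt_s)

    def pacing_rate(self, gain=1.0):
        return gain * max(self.bw_samples)     # BtlBw estimate = windowed max

    def cwnd(self, gain=2.0):
        bdp = max(self.bw_samples) * min(self.rtt_samples)   # BtlBw * RTprop
        return gain * bdp                      # bytes allowed in flight

m = BBRModel()
m.on_ack(delivered_bytes=1_500_000, interval_s=0.01, rtt_s=0.020)
m.on_ack(delivered_bytes=1_200_000, interval_s=0.01, rtt_s=0.018)
print(f"pacing ≈ {m.pacing_rate()/1e6:.0f} MB/s, cwnd ≈ {m.cwnd()/1e3:.0f} KB")
```

Because the pipe is filled by pacing against the model rather than by probing until drops, standing queues stay small even on deep-buffered paths.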


Mental model

Most modern AI fabrics combine four ideas: packet spraying + delay-based CC + ECN/INT signals + microsecond failover.

Pick any production AI-fabric CC algorithm — MRC, Falcon, SRD, UET CC, Spectrum-X CC — and you'll find some combination of these four. The PFC-only era is ending.
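None of those production algorithms expose an API like the one below; this is a hypothetical composite sketch showing how the four ideas compose in one per-flow loop. Every class name, threshold, and signal here is invented for illustration.

```python
import random

class FabricFlow:
    def __init__(self, paths, target_delay_us=40.0):
        self.paths = {p: {"alive": True, "last_ack_us": 0.0} for p in paths}
        self.rate_gbps = 50.0
        self.target = target_delay_us

    def pick_path(self):
        # idea 1, packet spraying: spread load over all currently healthy paths
        return random.choice([p for p, s in self.paths.items() if s["alive"]])

    def on_telemetry(self, path, delay_us, ecn_marked, now_us):
        # ideas 2 + 3, delay-based reaction fed by ECN/INT-style signals:
        # grow while under the delay target, cut when marked or over it
        self.paths[path]["last_ack_us"] = now_us
        if ecn_marked or delay_us > self.target:
            self.rate_gbps *= 0.9
        else:
            self.rate_gbps = min(self.rate_gbps + 1.0, 400.0)

    def check_failover(self, now_us, timeout_us=500.0):
        # idea 4, microsecond failover: stop spraying onto paths that went quiet
        for path, s in self.paths.items():
            s["alive"] = (now_us - s["last_ack_us"]) < timeout_us

flow = FabricFlow(paths=["p0", "p1", "p2", "p3"])
flow.on_telemetry("p1", delay_us=65.0, ecn_marked=True, now_us=100.0)
flow.check_failover(now_us=550.0)          # only p1 has acked recently
print(flow.pick_path(), f"{flow.rate_gbps:.1f} Gb/s")
```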


Who built what — full reference table
| Tech | Owner / standards body | When |
| --- | --- | --- |
| TCP Reno / NewReno | Berkeley / IETF | 1988 |
| ECN (RFC 3168) | IETF | 2001 |
| CUBIC | NCSU → Linux | 2006 |
| DCTCP | Microsoft / Stanford | SIGCOMM 2010 |
| DCQCN | Microsoft Research | SIGCOMM 2015 |
| TIMELY | Google | SIGCOMM 2015 |
| BBR | Google | 2016 |
| NDP / Trim | Cambridge | 2017 |
| AWS SRD CC | AWS | 2018 |
| Homa | Stanford | 2018 |
| HPCC | Alibaba | SIGCOMM 2019 |
| Swift | Google | SIGCOMM 2020 |
| PowerTCP | Research consortium | NSDI 2022 |
| Spectrum-X CC | NVIDIA | 2023 |
| Falcon CC | Google + Intel | 2023 |
| MRC CC | OpenAI + Microsoft + NVIDIA + AMD + Broadcom + Intel | 2024 |
| UET CC | Ultra Ethernet Consortium | 2024–2025 |

📄 Transports & Congestion Control — One-Pager — same content, denser, single-sheet print format


Next: What This Curriculum Picks → — the one transport + CC combo this course teaches, and why.