The Linux Network Stack

You know router CLI cold: show interfaces, show ip route, show running-config, configure terminal. The commands on Linux are different, but the concepts are mostly things you already know. This page is the translation.

We'll skip the things you can already do (TCP/IP fundamentals, BGP, routing) and focus on what's specific to running a Linux host that does AI networking — network namespaces, the ip command, virtual interfaces, and the bits that touch RDMA later.


The big translation table

| Router CLI you know | Linux equivalent |
| --- | --- |
| show interfaces | ip link show |
| show ip route | ip route show |
| show ip interface brief | ip -br addr show |
| configure terminal / interface eth0 / ip address 10.0.0.1/24 | ip addr add 10.0.0.1/24 dev eth0 |
| interface ... / no shutdown | ip link set eth0 up |
| show arp | ip neigh show |
| clear arp | ip neigh flush all |
| traceroute | tracepath or mtr |
| show running-config | /etc/netplan/*.yaml or /etc/network/interfaces or nmcli con show (depends on distro) |
| show ip bgp summary | vtysh -c "show ip bgp summary" (FRR) |
| VRF | network namespace (similar, more flexible) |

Most things have a 1:1 mapping. The two big new concepts are network namespaces and virtual interfaces.


Network namespaces

A network namespace is its own complete networking stack — its own interfaces, routing table, iptables rules, sockets, the works. Linux can have thousands of them on one host.

This is what makes containers possible. Each pod in Kubernetes runs in its own network namespace.

# Create a namespace
ip netns add testns

# List namespaces
ip netns list

# Run a command inside a namespace
ip netns exec testns ip link show
ip netns exec testns ping 8.8.8.8 # ← fails, no interfaces in here yet

# Delete a namespace
ip netns del testns

Mental model: namespace ≈ VRF, but applied to everything — interfaces, sockets, processes, even the loopback. A process running in a namespace literally cannot see traffic in another namespace.

You won't manage namespaces by hand — k8s does it. But you'll need to understand them to debug "why is this pod's networking weird."
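
A hedged sketch of how that debugging usually starts, run from the host. It assumes crictl and jq are on the node and that your runtime's crictl inspect exposes the PID at .info.pid (containerd and CRI-O do); the container ID is a placeholder.

# Find the container's PID ('crictl ps' lists container IDs)
PID=$(crictl inspect <container-id> | jq -r '.info.pid')

# Enter its network namespace and run ip commands there,
# without installing anything inside the pod
nsenter -t "$PID" -n ip -br addr show
nsenter -t "$PID" -n ip route show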


Interfaces and the ip command

Linux has more interface types than a switch:

| Type | Used for |
| --- | --- |
| eth0, eth1, ... | Physical NIC (or a VF that looks like physical) |
| lo | Loopback (always 127.0.0.1) |
| veth | Virtual ethernet pair — a "cable" between namespaces |
| bridge | Linux software switch (between veths) |
| tap / tun | Layer-2 / Layer-3 software interface (VPN, VMs) |
| bond | Like router port-channels — combine multiple physical links |
| vlan | 802.1Q tagged sub-interface |
| macvlan | Multiple MAC addresses on one physical |
| ipvlan | Multiple IP addresses on one MAC |

# List all interfaces (any type)
ip -br link show
# Brief, like 'show ip interface brief'

# Show one interface in detail
ip -d link show enp1s0
# '-d' adds detail: link-type attributes, min/max MTU, queue counts

# Bring an interface up / down
ip link set enp1s0 up
ip link set enp1s0 down

# Add an IP
ip addr add 10.50.0.10/16 dev enp1s0

# Add a route
ip route add 10.51.0.0/16 via 10.50.0.1 dev enp1s0
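
Since bond appears in the table above, here is the port-channel translation as well. A minimal sketch assuming two spare NICs; the interface names and address are examples.

# Create an 802.3ad (LACP) bond, like 'channel-group ... mode active'
ip link add bond0 type bond mode 802.3ad

# Member links must be down before they can be enslaved
ip link set enp1s0 down
ip link set enp1s0 master bond0
ip link set enp2s0 down
ip link set enp2s0 master bond0

# Bring it up and address it like any other interface
ip link set bond0 up
ip addr add 10.60.0.10/16 dev bond0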

The ip command from iproute2 is the modern interface to all of this. Older docs reference ifconfig, route, arp — all deprecated. Stick with ip.
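
If muscle memory fires anyway, the translations are mechanical:

ip addr show              # replaces: ifconfig
ip link set eth0 up       # replaces: ifconfig eth0 up
ip route show             # replaces: route -n
ip neigh show             # replaces: arp -a
ip -s link show           # replaces: netstat -i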


veth pairs — Linux "cables"

A veth pair is two interfaces wired back-to-back. Anything sent on one side comes out the other. This is how a container's eth0 connects to a bridge or host interface — a veth pair, with one end inside the container's namespace and the other end on the host.

# Create a veth pair
ip link add veth0 type veth peer name veth1

# Move one end into a namespace (re-create testns first if you deleted it above)
ip link set veth1 netns testns

# Configure both ends
ip addr add 10.0.0.1/24 dev veth0
ip link set veth0 up
ip netns exec testns ip addr add 10.0.0.2/24 dev veth1
ip netns exec testns ip link set veth1 up

# Now ping across
ping 10.0.0.2

Multus with the SR-IOV CNI does the same thing under the hood, except instead of a veth end it moves a real VF into the pod's namespace. The pattern is the same.
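
A sketch of that pattern with a real interface instead of a veth; the VF name and address are examples, and testns stands in for the pod's namespace.

# Move a VF (or any physical NIC) into a namespace, exactly like the veth end
ip link set enp1s0v0 netns testns
ip netns exec testns ip addr add 10.52.0.10/16 dev enp1s0v0
ip netns exec testns ip link set enp1s0v0 up

# The interface disappears from the host's 'ip link show' until the namespace goes away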


Bridges — Linux soft-switches

A Linux bridge is a software L2 switch. Plug multiple veths or physical interfaces into it and they're in the same broadcast domain.

# Create a bridge
ip link add br0 type bridge

# Add interfaces to it
ip link set veth0 master br0
ip link set veth1 master br0
ip link set enp2s0 master br0

# Bring up
ip link set br0 up

Kubernetes CNI plugins use these building blocks to wire pod networks: Flannel plugs each pod's veth into a cni0 bridge, while Calico skips the bridge and routes each pod's veth directly. You don't configure them by hand, but run ip link show on a k8s node and you'll see dozens of veths (and often a bridge); that's what they are.
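
For the router-brain equivalent of show mac address-table, use the companion bridge command (also part of iproute2):

# List bridge ports, roughly 'show interfaces status' for the bridge
bridge link show

# Show the MAC learning table, roughly 'show mac address-table'
bridge fdb show br br0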


Routing and policy routing

By default there's a single routing table, same as you know. The main table holds most routes:

ip route show
ip route show table main

But Linux supports multiple routing tables (like VRFs). You set up rules that pick which table to use based on source IP, packet mark, and so on:

# Add a route to a custom table (table 100 is created on first use)
ip route add 10.0.0.0/24 via 192.168.1.1 dev eth0 table 100

# Route packets from a specific source through that table
ip rule add from 10.50.0.10 lookup 100

For AI workloads, this matters because each rail might be in its own routing table — different next-hop for each NIC. With Multus + SR-IOV CNI, this is usually handled automatically, but if traffic isn't going where you expect, ip rule show is your friend.
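
A quick inspection sequence for that situation, reusing the addresses from the examples above:

# List the rules, in priority order
ip rule show

# Dump the custom table directly
ip route show table 100

# Ask the kernel which route a given source address would actually use
ip route get 10.0.0.5 from 10.50.0.10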


Persistent config — three flavors

The painful part of Linux network config is that how you make it permanent depends on the distro:

| Distro | Config tool |
| --- | --- |
| Ubuntu 18.04+ | Netplan (/etc/netplan/*.yaml) → renders to systemd-networkd or NetworkManager |
| RHEL/CentOS/Rocky 8+ | NetworkManager (nmcli con add ...) |
| Older Debian/Ubuntu | /etc/network/interfaces |
| Most container hosts | Often none — networking comes from CNI/cloud-init |

For AI training hosts, your network team probably standardizes on one. Find out which, and learn that one's syntax. Don't use a bare ip addr add as permanent config — it disappears on reboot.
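
As a taste of what "learn that one's syntax" means, a minimal Netplan sketch for a single NIC; the file name, interface name, and address are examples.

# Write a Netplan definition for one interface
cat > /etc/netplan/60-rail0.yaml <<'EOF'
network:
  version: 2
  ethernets:
    enp1s0:
      addresses: [10.50.0.10/16]
      mtu: 9000
EOF

# Apply it ('netplan try' first gives you a rollback-safe test)
netplan apply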


What's the same as your router experience

To anchor: most of what you do on Linux maps directly to router CLI.

  • IP addressing, routing tables, BGP, OSPF — same protocols, same RFCs, FRR has the same show commands.
  • MTU, jumbo frames, MTU mismatch debugging — same problems.
  • ARP — same protocol, ip neigh shows it.
  • VLANs — same 802.1Q, ip link add link eth0 name eth0.100 type vlan id 100 (completed in the sketch after this list).
  • BFD, LACP, BGP unnumbered — all supported by FRR.
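
Completing the VLAN bullet above, since it's the translation you'll type most often; the interface name and address are examples.

# Tagged sub-interface, like 'encapsulation dot1q 100' on a sub-interface
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 10.100.0.1/24 dev eth0.100
ip link set eth0.100 up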

What's different (this is where AI hosts bite you)

  • Namespaces are everywhere. A pod isn't on the host network; it's in its own namespace. ip netns exec <ns> ip ... to look inside.
  • The ip command does everything. No ifconfig, no route. Just ip.
  • Interfaces come and go. VFs appear when SR-IOV is enabled, disappear when disabled. CNI plugins create veths on every pod schedule.
  • Multiple routing tables matter. With 8 RDMA rails, you might have 8 routing tables — one per rail. ip rule show to see which.
  • The kernel itself is tunable. Sysctl knobs (net.ipv4.*, net.core.*) affect performance. We cover that on the next page.

What you should remember

  • The ip command is your CLI. Learn ip link, ip addr, ip route, ip neigh, ip rule, ip netns.
  • Network namespaces ≈ VRFs, but applied to everything. Pods live in namespaces.
  • veth pairs are Linux "cables." Bridges are Linux soft-switches. CNI uses these to wire pods.
  • ip addr add isn't permanent — it disappears on reboot. Use Netplan / NetworkManager / interfaces depending on the distro.
  • Routing rules matter when you have multiple NICs (8 RDMA rails = often 8 routing tables).

Next: Kernel Tuning for RDMA → — the kernel command line, IOMMU, hugepages, and the sysctl knobs that turn a stock Linux box into an RDMA-capable host.