Network Drivers (bridge, host, none, macvlan)
LEVEL 0
The Problem
So far, every network you’ve created has been a bridge network. And bridge networks work great for most use cases.
But imagine these scenarios:
Scenario 1: You’re running a performance monitoring tool that needs to see ALL network traffic on the host. It needs direct access to the host’s network interfaces—no virtual networking, no translation, maximum performance. Bridge networking adds overhead.
Scenario 2: You’re running a security-sensitive workload that processes classified data. This container should have ZERO network access. Not even to other containers. Complete isolation.
Scenario 3: You’re containerizing a legacy application that was designed to run on a physical server with its own MAC address on the corporate LAN. IT expects to see this application as a physical device at a specific IP address.
Scenario 4: You need containers to participate directly in your VLAN infrastructure, appearing as if they’re physical machines on specific VLANs.
Bridge networking can’t solve all these problems. You need different network drivers.
LEVEL 1
The Concept — Different Ways to Travel
The Concept
Imagine you need to get from your house to downtown.
Bridge Network = Taking the subway. You walk to the station (host’s network), take the train (virtual bridge), get off downtown (destination container). It’s efficient and gets most people where they need to go. There’s infrastructure involved, but it works well.
Host Network = Walking directly there. No intermediary, no stations, no transfers. You’re on the street the whole time. Maximum speed and directness, but you’re exposed to everything on the street—no protective subway car around you.
None Network = Staying home. You don’t travel at all. Complete isolation and maximum security, but no external connectivity.
Macvlan Network = Having your own personal road that connects directly to the main highway. You appear as another vehicle on the highway, with your own lane and address. Other drivers (network devices) see you as just another car, not a subway passenger.
Each travel method has its place. Sometimes you need the subway. Sometimes you need to walk. Sometimes you shouldn’t travel at all.
LEVEL 2
The Mechanics — The Four Network Drivers
The Mechanics
Docker provides four built-in network drivers. Each creates a different type of network with different characteristics.
1. Bridge (default)
We’ve been using this. Docker creates a virtual bridge (switch), and each container gets a virtual ethernet interface connected to it.
- Containers get private IPs (172.17.x.x, 172.18.x.x, etc.)
- Containers can talk to each other through the bridge
- Port mapping (-p) exposes container ports to the host
- NAT handles outbound connections
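To see the bridge model in action, here's a minimal sketch (the network and container names are just examples) that creates a user-defined bridge and lets two containers reach each other by name:
docker network create my-bridge
docker run -d --name web --network my-bridge nginx
docker run --rm --network my-bridge alpine ping -c 1 web
# "web" resolves via Docker's embedded DNS; the ping crosses the virtual bridge
docker network inspect my-bridge
# Shows the network's subnet and each connected container's private IP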
2. Host
The container completely shares the host’s network namespace. There’s no network isolation at all.
docker run --network host nginx
What this means:
- The container sees all the host’s network interfaces (eth0, wlan0, etc.)
- When nginx binds to port 80, it binds to the HOST’s port 80
- The container’s IP addresses are the host’s IP addresses
- No need for port mapping—the container uses the host’s ports directly
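A quick way to verify this (a sketch, output abbreviated):
docker run --rm --network host alpine ip addr
# Lists the host's real interfaces (lo, eth0, wlan0, ...), not a container-private eth0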
Advantages:
- Maximum network performance (no virtual network overhead)
- No NAT, no bridge traversal
- Container can bind to any host port without port mapping
Disadvantages:
- Zero network isolation
- Port conflicts: only one container can bind to each port
- Security risk: container has full access to host’s network
- Can’t run multiple instances of the same containerized app on the same port
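To see the port conflict for yourself, here's a minimal sketch (the container names are examples):
docker run -d --network host --name nginx-a nginx
docker run -d --network host --name nginx-b nginx
# nginx-a owns host port 80; nginx-b's nginx can't bind it and exits
docker logs nginx-b
# Shows the bind failure on port 80 (address already in use)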
3. None
The container gets NO network interfaces except loopback (lo / 127.0.0.1).
docker run --network none alpine sh
Inside the container:
ip addr
# Only shows:
# 1: lo: <LOOPBACK,UP> ...
# inet 127.0.0.1/8
No eth0. No network access. Complete isolation.
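You can confirm the isolation with a quick test (a sketch; the exact error text varies by tool):
docker run --rm --network none alpine ping -c 1 8.8.8.8
# Fails immediately: there is no non-loopback interface and no route to any network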
Use cases:
- Security-sensitive workloads that should have zero network access
- Batch processing jobs that work on local data only
- When you want to manually configure networking yourself (advanced)
- Testing how applications behave with no network
4. Macvlan
Each container gets its own MAC address and appears as a physical device on your network.
# Create macvlan network
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
my-macvlan-net
# Run container with specific IP on your LAN
docker run -d \
--network my-macvlan-net \
--ip 192.168.1.50 \
nginx
Now the nginx container is accessible at 192.168.1.50 on your LAN, just like a physical machine would be.
How it works:
- The container’s network interface is linked directly to a physical interface on the host (eth0)
- The container gets an IP from your actual network subnet (not a Docker private network)
- Other devices on your LAN can reach the container directly
- No port mapping needed—the container IS on the network
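A quick check, assuming the setup above (run it from another machine on the same LAN, not from the Docker host itself, which by default can't reach macvlan containers through the parent interface):
ping -c 1 192.168.1.50
curl http://192.168.1.50/
# The container answers like any other host on the LAN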
Advantages:
- Containers appear as physical devices on your LAN
- Legacy applications that expect specific network setups work seamlessly
- No NAT, no port mapping complexity
- Can integrate with existing VLAN infrastructure
Disadvantages:
- Requires the host NIC and upstream network to allow promiscuous mode / multiple MAC addresses per port (often blocked by cloud providers and on Wi-Fi)
- Can exhaust MAC addresses on your network
- More complex to set up
- Container-to-host communication requires special routing
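On that last point: traffic can't hairpin through the parent interface, so the Docker host itself can't reach its macvlan containers directly. A common workaround, sketched here with an example interface name and a spare address from the same subnet, is to give the host its own macvlan interface and route container traffic through it:
# Run on the host; "macvlan-shim" and 192.168.1.223 are example values
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.223/32 dev macvlan-shim
ip link set macvlan-shim up
ip route add 192.168.1.50/32 dev macvlan-shim
# The host can now reach the container at 192.168.1.50 via macvlan-shim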
LEVEL 3
How Docker Implements Each Driver
Bridge Implementation:
Docker creates a Linux bridge (virtual switch):
ip link add name docker0 type bridge
For each container, Docker creates a veth pair:
ip link add veth0 type veth peer name veth1
# veth0 goes to bridge, veth1 goes into container's netns
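A rough sketch of the full wiring, using a named namespace as a stand-in for the container's (Docker does the equivalent through the netlink API; the addresses are docker0's defaults but are otherwise examples):
ip netns add demo                               # stand-in for the container's netns
ip link add veth0 type veth peer name veth1     # create the pair
ip link set veth0 master docker0                # attach the host end to the bridge
ip link set veth0 up
ip link set veth1 netns demo                    # move the other end into the namespace
ip netns exec demo ip addr add 172.17.0.10/16 dev veth1
ip netns exec demo ip link set veth1 up
ip netns exec demo ip route add default via 172.17.0.1   # docker0's gateway address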
Host Implementation:
Docker simply doesn’t create a new network namespace:
docker run --network host ...
# No `unshare --net` - container uses host's network namespace directly
None Implementation:
Docker creates a new network namespace but doesn’t add any interfaces to it:
unshare --net /bin/bash
# New network namespace with only a loopback interface (no eth0, no connectivity)
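You can reproduce this from the host (requires root; output abbreviated):
sudo unshare --net ip addr
# 1: lo: <LOOPBACK> ... state DOWN
# Only loopback exists, and nothing has brought it up yet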
Macvlan Implementation:
Docker creates a macvlan interface:
ip link add macvlan0 link eth0 type macvlan mode bridge
This interface allows multiple MAC addresses on a single physical NIC.
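If you create such an interface manually as above, the -d (details) flag shows the driver-specific info:
ip -d link show macvlan0
# ... macvlan mode bridge ... (each macvlan interface gets its own MAC on top of eth0)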
LEVEL 4
When to Use Each Driver
Use Bridge when:
- You want isolated container networks (most applications)
- You need port mapping to selectively expose services
- You want container-to-container communication via custom networks
- Default choice for 90% of use cases
Use Host when:
- You need maximum network performance (monitoring tools, packet capture)
- You’re running network tools that need to see all interfaces
- Legacy applications that bind to specific host interfaces
- Only when you truly need host-level network access
Use None when:
- Processing sensitive data with no network requirements
- Running batch jobs that operate on mounted volumes
- Building custom network setups manually
- Maximum security isolation
Use Macvlan when:
- Legacy applications expect to be on a physical network
- You need containers to have their own IP addresses on your LAN
- Integrating with existing VLAN infrastructure
- DHCP servers or other services that need direct network access
- Physical network integration requirements
LEVEL 5
Practical Examples
Example 1: High-performance monitoring with host network
docker run -d \
--name netdata \
--network host \
--cap-add SYS_PTRACE \
netdata/netdata
The Netdata monitoring tool uses host networking so it can see all of the host’s interfaces and collect metrics without virtual-network overhead.
Example 2: Secure processing with no network
docker run \
--network none \
--volume /data:/input:ro \
--volume /results:/output \
my-processing-job
The job reads input from /input (the host’s /data, mounted read-only), writes results to /output (the host’s /results), and has zero network access, so it can’t exfiltrate data.
Example 3: Macvlan for legacy application
# Create macvlan network on eth0
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
--ip-range=192.168.1.192/27 \
-o parent=eth0 \
lan-network
# Run legacy app with specific IP
docker run -d \
--network lan-network \
--ip 192.168.1.200 \
--name legacy-app \
old-application
The application is now accessible at 192.168.1.200 on your LAN, appearing as a physical device.
Example 4: Comparing bridge vs. host performance
# Bridge network (default)
docker run -d -p 8080:80 --name nginx-bridge nginx
ab -n 10000 -c 100 http://localhost:8080/
# Host network
docker run -d --network host --name nginx-host nginx
ab -n 10000 -c 100 http://localhost:80/
You’ll typically see 5-15% better performance with host networking due to no NAT/bridge overhead.