This page describes networking for swarm services.
A Docker swarm generates two different kinds of traffic:

- Control and management plane traffic: This includes swarm management messages, such as requests to join or leave the swarm. This traffic is always encrypted.
- Application data plane traffic: This includes container traffic and traffic to and from external clients.
Key network concepts
The following three network concepts are important to swarm services:

- Overlay networks manage communications among the Docker daemons participating in the swarm. You can create overlay networks in the same way as user-defined networks for standalone containers. You can also attach a service to one or more existing overlay networks, to enable service-to-service communication. Overlay networks are Docker networks that use the `overlay` network driver.
- The ingress network is a special overlay network that facilitates load balancing among a service's nodes. When any swarm node receives a request on a published port, it hands that request off to a module called IPVS. IPVS keeps track of all the IP addresses participating in that service, selects one of them, and routes the request to it, over the `ingress` network. The `ingress` network is created automatically when you initialize or join a swarm. Most users do not need to customize its configuration, but Docker allows you to do so.
- The docker_gwbridge is a bridge network that connects the overlay networks (including the `ingress` network) to an individual Docker daemon's physical network. By default, each container a service is running is connected to its local Docker daemon host's `docker_gwbridge` network. The `docker_gwbridge` network is created automatically when you initialize or join a swarm. Most users do not need to customize its configuration, but Docker allows you to do so.
Tip

See also the Networking overview for more details about Swarm networking in general.
Docker daemons participating in a swarm need the ability to communicate with each other over the following ports:

- Port `7946` TCP/UDP for container network discovery.
- Port `4789` UDP (configurable) for the overlay network (including ingress) data path.
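As an illustration, if your hosts use `ufw` as their firewall (an assumption; this page does not prescribe a particular firewall tool), the ports above could be opened on each node like this:

```shell
# Open the swarm networking ports listed above (run on each swarm node).
ufw allow 7946/tcp   # container network discovery
ufw allow 7946/udp   # container network discovery
ufw allow 4789/udp   # overlay network (VXLAN) data path
```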
When setting up networking in a swarm, special care should be taken. Consult the tutorial for an overview.
Overlay networking
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:

- An overlay network called `ingress`, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the `ingress` network by default.
- A bridge network called `docker_gwbridge`, which connects the individual Docker daemon to the other daemons participating in the swarm.
Create an overlay network
To create an overlay network, specify the `overlay` driver when using the `docker network create` command:

```console
$ docker network create \
  --driver overlay \
  my-network
```

The above command doesn't specify any custom options, so Docker assigns a subnet and uses default options. You can see information about the network using `docker network inspect`.
When no containers are connected to the overlay network, its configuration is not very exciting:

```console
$ docker network inspect my-network
[
    {
        "Name": "my-network",
        "Id": "fsf1dmx3i9q75an49z36jycxd",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]
```
In the above output, notice that the driver is `overlay` and that the scope is `swarm`, rather than the `local`, `host`, or `global` scopes you might see in other types of Docker networks. This scope indicates that only hosts which are participating in the swarm can access this network.

The network's subnet and gateway are dynamically configured when a service connects to the network for the first time. The following example shows the same network as above, but with three containers of a `redis` service connected to it.
```console
$ docker network inspect my-network
[
    {
        "Name": "my-network",
        "Id": "fsf1dmx3i9q75an49z36jycxd",
        "Created": "2017-05-31T18:35:58.877628262Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "0e08442918814c2275c31321f877a47569ba3447498db10e25d234e47773756d": {
                "Name": "my-redis.1.ka6oo5cfmxbe6mq8qat2djgyj",
                "EndpointID": "950ce63a3ace13fe7ef40724afbdb297a50642b6d47f83a5ca8636d44039e1dd",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            },
            "88d55505c2a02632c1e0e42930bcde7e2fa6e3cce074507908dc4b827016b833": {
                "Name": "my-redis.2.s7vlybipal9xlmjfqnt6qwz5e",
                "EndpointID": "dd822cb68bcd4ae172e29c321ced70b731b9994eee5a4ad1d807d9ae80ecc365",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            },
            "9ed165407384f1276e5cfb0e065e7914adbf2658794fd861cfb9b991eddca754": {
                "Name": "my-redis.3.hbz3uk3hi5gb61xhxol27hl7d",
                "EndpointID": "f62c686a34c9f4d70a47b869576c37dffe5200732e1dd6609b488581634cf5d2",
                "MacAddress": "02:42:0a:00:00:04",
                "IPv4Address": "10.0.0.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "moby-e57c567e25e2",
                "IP": "192.168.65.2"
            }
        ]
    }
]
```
Customize an overlay network
There may be situations where you don't want to use the default configuration for an overlay network. For a full list of configurable options, run the command `docker network create --help`. The following are some of the most common options to change.
Configure the subnet and gateway
By default, the network's subnet and gateway are configured automatically when the first service is connected to the network. You can configure these when creating a network using the `--subnet` and `--gateway` flags. The following example extends the previous one by configuring the subnet and gateway.
```console
$ docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  --gateway 10.0.9.99 \
  my-network
```
Using custom default address pools
To customize subnet allocation for your Swarm networks, you can optionally configure them during `swarm init`.
For example, the following command is used when initializing Swarm:
```console
$ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 26
```
Whenever a user creates a network but does not use the `--subnet` command line option, the subnet for that network is allocated sequentially from the next available subnet in the pool. If the specified network is already allocated, that network will not be used for Swarm.
Multiple pools can be configured if discontiguous address space is required. However, allocation from specific pools is not supported. Network subnets will be allocated sequentially from the IP pool space and subnets will be reused as they are deallocated from networks that are deleted.
The default mask length can be configured and is the same for all networks. It is set to `/24` by default. To change the default subnet mask length, use the `--default-addr-pool-mask-length` command line option.
Note
Default address pools can only be configured on `swarm init` and cannot be altered after cluster creation.
Overlay network size limitations
Docker recommends creating overlay networks with `/24` blocks. The `/24` overlay network blocks limit the network to 256 IP addresses.

This recommendation addresses limitations with swarm mode. If you need more than 256 IP addresses, do not increase the IP block size. You can either use `dnsrr` endpoint mode with an external load balancer, or use multiple smaller overlay networks. See Configure service discovery for more information about different endpoint modes.
Configure encryption of application data
Management and control plane data related to a swarm is always encrypted. For more details about the encryption mechanisms, see the Docker swarm mode overlay network security model.
Application data among swarm nodes is not encrypted by default. To encrypt this traffic on a given overlay network, use the `--opt encrypted` flag on `docker network create`. This enables IPSEC encryption at the level of the VXLAN. This encryption imposes a non-negligible performance penalty, so you should test this option before using it in production.
Note
You must customize the automatically created ingress network to enable encryption. By default, all ingress traffic is unencrypted, as encryption is a network-level option.
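For instance, a sketch of creating an encrypted overlay network on a manager node (the network name `my-encrypted-network` is illustrative, not from this page):

```shell
# Create an overlay network whose application data traffic is
# IPSEC-encrypted between swarm nodes. Run on a swarm manager.
docker network create \
  --driver overlay \
  --opt encrypted \
  my-encrypted-network
```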
To attach a service to an existing overlay network, pass the `--network` flag to `docker service create`, or the `--network-add` flag to `docker service update`.
```console
$ docker service create \
  --replicas 3 \
  --name my-web \
  --network my-network \
  nginx
```
Service containers connected to an overlay network can communicate with each other across it.
To see which networks a service is connected to, use `docker service ls` to find the name of the service, then `docker service ps <service-name>` to list the networks. Alternately, to see which services' containers are connected to a network, use `docker network inspect <network-name>`. You can run these commands from any swarm node which is joined to the swarm and is in a `running` state.
Configure service discovery
Service discovery is the mechanism Docker uses to route a request from your service's external clients to an individual swarm node, without the client needing to know how many nodes are participating in the service or their IP addresses or ports. You don't need to publish ports which are used between services on the same network. For instance, if you have a WordPress service that stores its data in a MySQL service, and they are connected to the same overlay network, you do not need to publish the MySQL port to the client, only the WordPress HTTP port.
Service discovery can work in two different ways: internal connection-based load balancing at Layers 3 and 4 using the embedded DNS and a virtual IP (VIP), or external and customized request-based load balancing at Layer 7 using DNS round robin (DNSRR). You can configure this per service.
By default, when you attach a service to a network and that service publishes one or more ports, Docker assigns the service a virtual IP (VIP), which is the "front end" for clients to reach the service. Docker keeps a list of all worker nodes in the service, and routes requests between the client and one of the nodes. Each request from the client might be routed to a different node.
If you configure a service to use DNS round-robin (DNSRR) service discovery, there is not a single virtual IP. Instead, Docker sets up DNS entries for the service such that a DNS query for the service name returns a list of IP addresses, and the client connects directly to one of these.
DNS round-robin is useful in cases where you want to use your own load balancer, such as HAProxy. To configure a service to use DNSRR, use the `--endpoint-mode dnsrr` flag when creating a new service or updating an existing one.
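A sketch of the flag in use (the service and network names are illustrative):

```shell
# Create a service that uses DNSRR instead of a VIP, so DNS queries for
# "my-dnsrr-service" return the task IP addresses directly.
docker service create \
  --replicas 3 \
  --name my-dnsrr-service \
  --network my-network \
  --endpoint-mode dnsrr \
  nginx
```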
Customize the ingress network
Most users never need to configure the `ingress` network, but Docker allows you to do so. This can be useful if the automatically-chosen subnet conflicts with one that already exists on your network, if you need to customize other low-level network settings such as the MTU, or if you want to enable encryption.
Customizing the `ingress` network involves removing and recreating it. This is usually done before you create any services in the swarm. If you have existing services which publish ports, those services need to be removed before you can remove the `ingress` network.

During the time that no `ingress` network exists, existing services which do not publish ports continue to function but are not load-balanced. This affects services which publish ports, such as a WordPress service which publishes port 80.
1. Inspect the `ingress` network using `docker network inspect ingress`, and remove any services whose containers are connected to it. These are services that publish ports, such as a WordPress service which publishes port 80. If all such services are not stopped, the next step fails.

2. Remove the existing `ingress` network:

   ```console
   $ docker network rm ingress

   WARNING! Before removing the routing-mesh network, make sure all the nodes
   in your swarm run the same docker engine version. Otherwise, removal may not
   be effective and functionality of newly created ingress networks will be
   impaired.
   Are you sure you want to continue? [y/N]
   ```

3. Create a new overlay network using the `--ingress` flag, along with the custom options you want to set. This example sets the MTU to 1200, sets the subnet to `10.11.0.0/16`, and sets the gateway to `10.11.0.2`.

   ```console
   $ docker network create \
     --driver overlay \
     --ingress \
     --subnet=10.11.0.0/16 \
     --gateway=10.11.0.2 \
     --opt com.docker.network.driver.mtu=1200 \
     my-ingress
   ```

   Note

   You can name your `ingress` network something other than `ingress`, but you can only have one. An attempt to create a second one fails.

4. Restart the services that you stopped in the first step.
Customize the docker_gwbridge

The `docker_gwbridge` is a virtual bridge that connects the overlay networks (including the `ingress` network) to an individual Docker daemon's physical network. Docker creates it automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a Docker device. It exists in the kernel of the Docker host. If you need to customize its settings, you must do so before joining the Docker host to the swarm, or after temporarily removing the host from the swarm.
You need to have the `brctl` application installed on your operating system in order to delete an existing bridge. The package name is `bridge-utils`.
1. Stop Docker.

2. Use the `brctl show docker_gwbridge` command to check whether a bridge device exists called `docker_gwbridge`. If so, remove it using `brctl delbr docker_gwbridge`.

3. Start Docker. Do not join or initialize the swarm.

4. Create or re-create the `docker_gwbridge` bridge with your custom settings. This example uses the subnet `10.11.0.0/16`. For a full list of customizable options, see Bridge driver options.

   ```console
   $ docker network create \
     --subnet 10.11.0.0/16 \
     --opt com.docker.network.bridge.name=docker_gwbridge \
     --opt com.docker.network.bridge.enable_icc=false \
     --opt com.docker.network.bridge.enable_ip_masquerade=true \
     docker_gwbridge
   ```

5. Initialize or join the swarm.
Use a separate interface for control and data traffic
By default, all swarm traffic is sent over the same interface, including control and management traffic for maintaining the swarm itself and data traffic to and from the service containers.
You can separate this traffic by passing the `--data-path-addr` flag when initializing or joining the swarm. If there are multiple interfaces, `--advertise-addr` must be specified explicitly, and `--data-path-addr` defaults to `--advertise-addr` if not specified. Traffic about joining, leaving, and managing the swarm is sent over the `--advertise-addr` interface, and traffic among a service's containers is sent over the `--data-path-addr` interface. These flags can take an IP address or a network device name, such as `eth0`.
This example initializes a swarm with a separate `--data-path-addr`. It assumes that your Docker host has two different network interfaces: 10.0.0.1 should be used for control and management traffic, and 192.168.0.1 should be used for traffic relating to services.
```console
$ docker swarm init --advertise-addr 10.0.0.1 --data-path-addr 192.168.0.1
```
This example joins the swarm managed by host `192.168.99.100:2377` and sets the `--advertise-addr` flag to `eth0` and the `--data-path-addr` flag to `eth1`.
```console
$ docker swarm join \
  --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2d7c \
  --advertise-addr eth0 \
  --data-path-addr eth1 \
  192.168.99.100:2377
```
Publish ports on an overlay network

Swarm services connected to the same overlay network effectively expose all ports to each other. For a port to be accessible outside of the service, that port must be published using the `-p` or `--publish` flag on `docker service create` or `docker service update`. Both the legacy colon-separated syntax and the newer comma-separated value syntax are supported. The longer syntax is preferred because it is somewhat self-documenting.
| Flag value | Description |
|---|---|
| `-p 8080:80` or `-p published=8080,target=80` | Map TCP port 80 on the service to port 8080 on the routing mesh. |
| `-p 8080:80/udp` or `-p published=8080,target=80,protocol=udp` | Map UDP port 80 on the service to port 8080 on the routing mesh. |
| `-p 8080:80/tcp -p 8080:80/udp` or `-p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp` | Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh. |
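For example, the long publish syntax from the table can be combined with `docker service create` like this (the service name `my-web` is illustrative):

```shell
# Publish the service's TCP port 80 as port 8080 on the routing mesh,
# using the self-documenting long syntax.
docker service create \
  --name my-web \
  --publish published=8080,target=80 \
  nginx
```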
Learn more
- Deploy services to a swarm
- Swarm administration guide
- Swarm mode tutorial
- Networking overview
- Docker CLI reference