# Container Network Interface (CNI)
This page shows you how to install the [CNI reference plugins][cni-plugin-docs] on your Linux distribution and configure bridge networking on your Nomad clients. You can apply this guide's workflow to install any plugin that complies with the Container Network Interface (CNI) Specification, but you should verify plugin compatibility with Nomad before deploying in production.
## Introduction
Nomad implements custom networking through a combination of CNI reference plugin binaries and CNI configuration files. Networking features, like bridge network mode and Consul service mesh, leverage the CNI reference plugins to provide an operating-system agnostic interface to configure workload networking.
## Requirements
- You are familiar with [CNI reference plugins][cni-plugin-docs].
- You are familiar with how Nomad uses Container Network Interface (CNI) plugins for bridge networking.
- You are running Nomad on Linux.
## CNI plugins and bridge networking workflow
Perform the following on each Nomad client:
- Install CNI reference plugins.
- Configure bridge module to route traffic through iptables.
- Verify cgroup controllers.
- Create a bridge mode configuration.
- Configure Nomad clients.
### Install CNI reference plugins
Nomad uses CNI plugins to configure network namespaces when using the `bridge` network mode. You must install the CNI plugins on all Linux Nomad client nodes that use network namespaces. Refer to the [CNI reference plugins][cni-plugin-docs] external guide for details on individual plugins.
The following series of commands determines your operating system architecture, downloads the CNI plugins 1.5.1 release, and then extracts the CNI plugin binaries into the `/opt/cni/bin` directory. Update the `CNI_PLUGIN_VERSION` value to use a different release version.
```shell-session
$ export ARCH_CNI=$( [ $(uname -m) = aarch64 ] && echo arm64 || echo amd64)
$ export CNI_PLUGIN_VERSION=v1.5.1
$ curl -L -o cni-plugins.tgz "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGIN_VERSION}/cni-plugins-linux-${ARCH_CNI}-${CNI_PLUGIN_VERSION}.tgz" && \
  sudo mkdir -p /opt/cni/bin && \
  sudo tar -C /opt/cni/bin -xzf cni-plugins.tgz
```
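After extraction, you can confirm that the plugins this guide relies on are present. This check is a convenience rather than part of the official workflow, and it assumes the default `/opt/cni/bin` install path used above.

```shell-session
$ ls /opt/cni/bin | grep -E 'loopback|bridge|firewall|portmap'
bridge
firewall
loopback
portmap
```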
### Configure bridge module to route traffic through iptables
Nomad's task group networks integrate with Consul's service mesh using bridge networking and iptables to send traffic between containers.
**Warning:** Newer Linux versions, such as Ubuntu 24.04, may not enable bridge networking by default. Use `sudo modprobe bridge` to load the bridge module if it is missing.
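Note that `modprobe` loads the module only until the next reboot. On systemd-based distributions, one way to make the module load persistent is to list it in a file under `/etc/modules-load.d/`; the filename `bridge.conf` below is an arbitrary choice, not a Nomad requirement.

```shell-session
$ echo "bridge" | sudo tee /etc/modules-load.d/bridge.conf
```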
The Linux kernel bridge module has three tunable parameters that control whether iptables processes traffic crossing the bridge. Some operating systems, including Red Hat Enterprise Linux, CentOS, and Fedora, might have iptables rules that are not correctly configured for guest traffic because these tunable parameters are optimized for VM workloads.
Ensure your Linux operating system distribution is configured to allow iptables to route container traffic through the bridge network. Run the following commands to set the tunable parameters to allow iptables processing for the bridge network.
```shell-session
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```
To preserve these settings on startup of a client node, add a file to the `/etc/sysctl.d/` directory or remove the file your Linux distribution puts in that directory. The following example configures the tunable parameters for a client node.
`/etc/sysctl.d/bridge.conf`

```ini
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
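The file takes effect on the next boot. To apply it immediately and confirm one of the values, you can reload the sysctl configuration; this step is a convenience and assumes the bridge module is already loaded.

```shell-session
$ sudo sysctl --system
$ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
```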
### Verify cgroup controllers
On Linux, Nomad uses cgroups to control access to resources like CPU and memory. Nomad supports both cgroups v2 and the legacy cgroups v1. When Nomad clients start, they determine the available cgroup controllers and include the attribute `os.cgroups.version` in their fingerprint.
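If you want to confirm the value a running client fingerprinted, one option is to query the local agent. This assumes the `nomad` CLI is installed and can reach the client agent.

```shell-session
$ nomad node status -self -verbose | grep os.cgroups.version
```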
On cgroups v2, you can run the following command to verify that you have all required controllers.
```shell-session
$ cat /sys/fs/cgroup/cgroup.controllers
cpuset cpu io memory pids
```
On legacy cgroups v1, this same list of required controllers appears as a series of subdirectories under the `/sys/fs/cgroup` directory.
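If you are unsure which cgroup version a host runs, a common check, offered here as a convenience, is to inspect the filesystem type mounted at `/sys/fs/cgroup`: `cgroup2fs` indicates cgroups v2, while `tmpfs` indicates the legacy v1 hierarchy.

```shell-session
$ stat -fc %T /sys/fs/cgroup
cgroup2fs
```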
Refer to the cgroup controller requirements for more details and for instructions on enabling missing controllers.
### Create a bridge mode configuration
Nomad itself uses CNI plugins and configuration as the underlying implementation for the `bridge` network mode, using the `loopback`, `bridge`, `firewall`, and `portmap` CNI reference plugins configured together to create Nomad's bridge network.
Nomad uses the following configuration template when setting up a bridge network.
```json
{
  "cniVersion": "1.0.0",
  "name": "nomad",
  "plugins": [
    {
      "type": "loopback"
    },
    {
      "type": "bridge",
      "bridge": "nomad",
      "ipMasq": true,
      "isGateway": true,
      "forceAddress": true,
      "hairpinMode": false,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [
            {
              "subnet": "172.26.64.0/20"
            }
          ]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" }
        ]
      }
    },
    {
      "type": "firewall",
      "backend": "iptables",
      "iptablesAdminChainName": "NOMAD-ADMIN"
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "snat": true
    }
  ]
}
```
The template placeholders have been replaced with the default configuration values for `bridge_network_name`, `bridge_network_subnet`, and an internal constant that provides the value for `iptablesAdminChainName`. You can use this template as a basis for your own CNI-based bridge network configuration in cases where you need access to an option the default configuration does not support, like hairpin mode. When making your own bridge network based on this template, ensure that you change the `iptablesAdminChainName` to a unique value for your configuration.
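For example, a customized copy of the template might rename the network, enable hairpin mode, and use a unique admin chain name. The following sketch is illustrative: the network name `mynet`, the bridge name, the subnet, and the chain name are assumed values, not Nomad defaults.

`/opt/cni/config/mynet.conflist`

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    { "type": "loopback" },
    {
      "type": "bridge",
      "bridge": "mynet0",
      "ipMasq": true,
      "isGateway": true,
      "forceAddress": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "172.26.80.0/20" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "firewall",
      "backend": "iptables",
      "iptablesAdminChainName": "MYNET-ADMIN"
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "snat": true
    }
  ]
}
```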
This configuration uses the following CNI reference plugins:
- `loopback`: The loopback plugin sets the default local interface, `lo0`, created inside the bridge network's network namespace to UP. This allows workloads running inside the namespace to bind to a namespace-specific loopback interface.

- `bridge`: The bridge plugin creates a bridge (virtual switch) named `nomad` that resides in the host network namespace. Because this bridge is intended to provide network connectivity to allocations, it is configured to be a gateway by setting `isGateway` to `true`. This tells the plugin to assign an IP address to the bridge interface.

  The bridge plugin connects allocations on the same host into a bridge (virtual switch) that resides in the host network namespace. By default, Nomad creates a single bridge for each client. Since Nomad's bridge network is designed to provide network connectivity to the allocations, it configures the bridge interface to be a gateway for outgoing traffic by providing it with an address using an `ipam` configuration. The default configuration creates a host-local address for the host side of the bridge in the `172.26.64.0/20` subnet at `172.26.64.1`. When associating allocations to the bridge, it creates addresses for the allocations from that same subnet using the `host-local` plugin. The configuration also specifies a default route for the allocations via the host-side bridge address.

- `firewall`: The firewall plugin creates firewall rules to allow traffic to and from the allocation's IP address via the host network. Nomad uses the iptables backend for the firewall plugin. This configuration creates two new iptables chains, `CNI-FORWARD` and `NOMAD-ADMIN`, in the filter table and adds rules that allow the given interface to send and receive traffic.

  The firewall plugin creates an admin chain using the name provided in the `iptablesAdminChainName` attribute; in this case, it is called `NOMAD-ADMIN`. The admin chain is a user-controlled chain for custom rules that run before rules managed by the firewall plugin. The firewall plugin does not add, delete, or modify rules in the admin chain; the example after this list shows adding a custom rule to it.

  A new chain, `CNI-FORWARD`, is added to the `FORWARD` chain. `CNI-FORWARD` is the chain where rules are added when allocations are created and removed when those allocations stop. The `CNI-FORWARD` chain first sends all traffic to the `NOMAD-ADMIN` chain.

  Use the `iptables` command to list the iptables rules present in each chain.

  ```shell-session
  $ sudo iptables -L
  ```

- `portmap`: Nomad needs to be able to map specific ports from the host to tasks running in the allocation namespace. The portmap plugin forwards traffic from one or more ports on the host to the allocation using network address translation (NAT) rules.

  The plugin sets up two sequences of chains and rules:

  - One "primary" `DNAT` (destination NAT) sequence to rewrite the destination.
  - One `SNAT` (source NAT) sequence that masquerades traffic as needed.

  Use the `iptables` command to list the iptables rules in the NAT table.

  ```shell-session
  $ sudo iptables -t nat -L
  ```
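As a hedged illustration of how you might use the admin chain, the following inserts a custom rule into `NOMAD-ADMIN` that blocks allocation traffic to a documentation-only example address; the firewall plugin leaves rules in this chain untouched.

```shell-session
$ sudo iptables -I NOMAD-ADMIN -d 192.0.2.10 -j DROP
```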
Save your bridge network configuration file to a Nomad-accessible directory. By default, Nomad loads configuration files from the `/opt/cni/config` directory. However, you may configure a different location using the `cni_config_dir` parameter. Refer to the Configure Nomad clients section for an example.
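For instance, to place the example `mynet.conflist` file from the previous section into the default location (both names assume the defaults and examples used in this guide):

```shell-session
$ sudo mkdir -p /opt/cni/config && sudo cp mynet.conflist /opt/cni/config/
```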
### Configure Nomad clients
The CNI specification defines a network configuration format for administrators. It contains directives for both the orchestrator and the plugins to consume. At plugin execution time, this configuration format is interpreted by Nomad and transformed into arguments for the plugins.
Nomad reads the following files from the directory specified by the `cni_config_dir` parameter, `/opt/cni/config` by default:

- `.conflist`: Nomad loads these files as network configurations that contain a list of plugin configurations.
- `.conf` and `.json`: Nomad loads these files as individual plugin configurations for a specific network.
Add the `cni_path` and `cni_config_dir` parameters to each client's `client.hcl` file.
`/etc/nomad.d/client.hcl`

```hcl
client {
  enabled        = true
  cni_path       = CNI_PATH
  cni_config_dir = CNI_CONFIG_DIR
}
```
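A filled-in sketch using the default paths from this guide; both parameters take string values, so quote them:

```hcl
client {
  enabled        = true
  cni_path       = "/opt/cni/bin"
  cni_config_dir = "/opt/cni/config"
}
```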
## Use CNI networks with Nomad jobs
To specify that a job should use a CNI network, set the task group's network `mode` attribute to the value `cni/<your_cni_config_name>`. Nomad then schedules the workload on client nodes that have fingerprinted a CNI configuration with the given name. For example, to use the configuration named `mynet`, set the task group's network mode to `cni/mynet`. Nodes that have a network configuration defining a network named `mynet` in their `cni_config_dir` are eligible to run the workload.
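A minimal job specification sketch using this mode; the job and group names are placeholders, and `cni/mynet` assumes a node has fingerprinted the `mynet` configuration shown earlier.

```hcl
job "example" {
  group "app" {
    network {
      # Matches the "name" field of a .conflist file in the client's cni_config_dir.
      mode = "cni/mynet"
    }

    # Task definitions omitted for brevity.
  }
}
```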