Bonding NIC Teaming on Ubuntu 12.04
Published: 14-02-2014 | Author: Remy van Elst | Text only version of this article
❗ This post is over seven years old. It may no longer be up to date. Opinions may have changed.
Bonding, also called port trunking or link aggregation, means combining several network interfaces (NICs) into a single link, providing high availability, load balancing, maximum throughput, or a combination of these.
Warning! Make sure you have iLO/BMC out-of-band remote access to your server. You are going to change vital network settings; doing this wrong may result in loss of connectivity.
Install required packages
ifenslave is used to attach and detach slave network interfaces to a bonding
device. Install the package:
apt-get install ifenslave
Before Ubuntu can configure your network cards into a NIC bond, you need to
ensure that the correct kernel module, bonding, is present and loaded at boot.
Edit the file /etc/modules and add the word bonding on a line of its own:

```
bonding
```

Also, load the module manually for now:

```
modprobe bonding
```
Bonding network config
Edit the file /etc/network/interfaces:
Example config for a round-robin load balancing setup:

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    # For jumbo frames, change mtu to 9000
    mtu 1500
    address 172.16.20.1
    netmask 255.255.255.0
    network 172.16.20.0
    broadcast 172.16.20.255
    gateway 172.16.20.1
    dns-nameservers 172.16.20.2
    # MII link monitoring frequency in milliseconds. This determines how
    # often the link state of each slave is inspected for link failures.
    bond-miimon 100
    # Time, in milliseconds, to wait before disabling a slave after a
    # link failure has been detected.
    bond-downdelay 200
    # Time, in milliseconds, to wait before enabling a slave after a
    # link recovery has been detected.
    bond-updelay 200
    bond-mode 0
    # The slaves are already defined above with bond-master
    bond-slaves none
```
For round-robin load balancing, use bond-mode 0 (balance-rr), as in the example above.
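After bringing the bond up (for example with ifdown/ifup or a reboot), the kernel exposes the bond state in /proc/net/bonding/bond0. The sketch below pulls the link state out of that output; the sample text is an assumed, abridged capture, since the file only exists on a host with an active bond — on a real server, read the file itself:

```shell
# Abridged sample of /proc/net/bonding/bond0 (assumed output); on a real
# host you would read the file directly instead of this variable.
sample='Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up'

# Print the MII link state of the bond and of each slave, in order:
printf '%s\n' "$sample" | awk -F': ' '/^MII Status/ {print $2}'
```

The first "MII Status" line belongs to the bond itself; each following one belongs to a slave, so a quick glance shows whether any leg of the bond is down.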
Bonding modes explained
Mode 0 - balance-rr
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
Mode 1 - active-backup
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
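For a failover-only setup, the bond0 stanza from the config above only needs a different mode and, optionally, a preferred primary slave. A sketch, assuming eth0 should be the preferred active interface (the eth0/eth1 stanzas stay as shown earlier):

```
auto bond0
iface bond0 inet static
    address 172.16.20.1
    netmask 255.255.255.0
    gateway 172.16.20.1
    bond-miimon 100
    bond-mode 1
    # Preferred active slave; the bond fails over to eth1 if eth0 goes down
    bond-primary eth0
    bond-slaves none
```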
Mode 2 - balance-xor
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
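The selection can be sketched with a toy hash. The real driver does this in the kernel; this bash function is only an illustration, XOR'ing the final byte of each MAC address, which is what the classic layer-2 hash effectively comes down to:

```shell
# Toy version of the balance-xor slave selection:
#   slave_index = (src_mac XOR dst_mac) mod slave_count
# Only the last octet of each MAC is used here, for illustration.
xor_slave() {
    local src_byte=$((16#${1##*:}))   # last octet of source MAC
    local dst_byte=$((16#${2##*:}))   # last octet of destination MAC
    echo $(( (src_byte ^ dst_byte) % $3 ))
}

# The same source/destination pair always maps to the same slave:
xor_slave 00:11:22:33:44:55 66:77:88:99:aa:bb 2   # → 0
xor_slave 00:11:22:33:44:55 66:77:88:99:aa:ba 2   # → 1
```

This is why balance-xor keeps a given conversation on one slave: the hash depends only on the address pair, not on per-packet state.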
Mode 3 - broadcast
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.
Mode 4 - 802.3ad
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites:
- Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
- A switch that supports IEEE 802.3ad Dynamic link aggregation (LACP). Most switches will require some type of configuration to enable 802.3ad mode.
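With a LACP-capable switch configured accordingly, the bond0 stanza switches to mode 4. A sketch of the changed stanza; bond-lacp-rate controls how often LACPDUs are sent (slow = every 30 seconds, fast = every second), and bond-xmit-hash-policy selects how traffic is spread over the aggregated links:

```
auto bond0
iface bond0 inet static
    address 172.16.20.1
    netmask 255.255.255.0
    gateway 172.16.20.1
    bond-miimon 100
    bond-mode 4
    # LACPDU transmit rate: slow (30s) or fast (1s)
    bond-lacp-rate slow
    # Hash policy for distributing traffic across the aggregated links
    bond-xmit-hash-policy layer2
    bond-slaves none
```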
Mode 5 - balance-tlb
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Prerequisite:
- Ethtool support in the base drivers for retrieving the speed of each slave.
Mode 6 - balance-alb
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server.
- Linux kernel documentation on network bonding: https://www.kernel.org/doc/Documentation/networking/bonding.txt