
Adding IPv6 to a keepalived and haproxy cluster

Published: 24-09-2017 | Author: Remy van Elst | Text only version of this article


At work I regularly build high-availability clusters for customers, where the setup is distributed over multiple datacenters with failover software. If one component fails, the service doesn't experience issues or downtime due to the failure. Recently I was tasked with expanding a cluster setup so that it is also reachable via IPv6. This article goes over the settings and configuration required for haproxy and keepalived for IPv6. The internal cluster will only be IPv4; the loadbalancer terminates HTTP and HTTPS connections.

If you like this article, consider sponsoring me by trying out a Digital Ocean VPS. With this link you'll get $100 credit for 60 days. (referral link)

Cluster setup


This diagram gives a general idea of the clusters I often build at work. The CloudVPS network is fully redundant over multiple data centers, so I don't have to worry about that part. A setup often consists of the following components per data center: a loadbalancer, multiple application servers (php, apache, rails, python, java) and a database server (mysql/galera or postgresql). So you have three loadbalancers, three database servers and three or more application servers in total. Often there are extra components like DRBD/NFS for file storage, Redis as a key/value store, mongodb or elasticsearch (all of which can be clustered). Because we have three datacenters there is enough for a quorum. Sometimes customers choose just two datacenters for cost reasons; then we explain the issues without quorum and make them sign off on the risks in the contract.

The clusters are IPv4 internally, with keepalived or Corosync/Pacemaker handling the high-available IP address (VIP). The loadbalancers all have their own IP and share one or more external VIPs via the cluster software. They also have an internal VIP because they function as a gateway for the internal servers. If one loadbalancer fails, VRRP detects that and the VIP becomes active on one of the other servers.

For complex setups with dependencies and ordering we use Corosync, for example with DRBD/NFS, to make sure the starting order is correct: first the DRBD mount, then the VIP, then NFS. Most of the time keepalived is enough.
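As a rough sketch of what such ordering constraints can look like in Pacemaker (crm shell syntax; the resource names drbd_fs, cluster_vip and nfs_server are hypothetical, not from this setup):

```
# start the DRBD filesystem before the VIP, and the VIP before NFS
order ord-drbd-before-vip inf: drbd_fs cluster_vip
order ord-vip-before-nfs inf: cluster_vip nfs_server
```

Pacemaker then stops the resources in the reverse order on failover, which is exactly what you want with a mounted filesystem underneath NFS.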

Adding IPv6 is surprisingly easy, so this is a short article covering the required changes.

The internal network stays the same; the load balancer terminates all traffic and sends it on over IPv4 to the application servers, which do not need to be configured with IPv6.

In our case all servers come with a /64 IPv6 range natively, so no network configuration on switches or routers is included in this guide.

Operating System

The clusters run Ubuntu (16.04) most of the time, so in /etc/network/interfaces there must be an IPv6 address:

iface eth0 inet6 static
    address 2a02:123:45:67ab::1/48
    netmask 48
    gateway 2a02:123:45::1

This is not the IPv6 address you'll use as the VIP, but a local IPv6 address for the machine. You don't configure the VIP on the OS.
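As a quick illustration of the addressing above: the machine address and the gateway sit in the same /48, which you can eyeball by comparing the first three 16-bit groups. This is plain shell string handling, a simplified check that only works for addresses written out like these, not a general prefix comparison:

```shell
# simplified /48 comparison: take the first three hextets of each address
addr='2a02:123:45:67ab::1'
gw='2a02:123:45::1'
addr48=$(echo "$addr" | cut -d: -f1-3)
gw48=$(echo "$gw" | cut -d: -f1-3)
if [ "$addr48" = "$gw48" ]; then
    echo "same /48: $addr48::/48"
fi
```

This prints `same /48: 2a02:123:45::/48`, confirming the gateway is on-link for the configured prefix.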

We have ACL rules on the backend in our hypervisor environment, so I added an extra IPv6 range to the cluster for use with high availability:


(An example range in this case, which will be used inside haproxy and keepalived as the IPv6 VIP.)

You also don't need to configure the nonlocal bind sysctl (net.ipv6.ip_nonlocal_bind) for IPv6:


We handle that inside of haproxy and keepalived.


This is tested with keepalived in Ubuntu 16.04, version 1.2.19. Adding the IPv6 address to the virtual_ipaddress section and restarting keepalived is enough:

vrrp_sync_group VG_1 {
    group {
        EXTERN
        INTERN
    }
}

vrrp_instance EXTERN {
    interface eth0
    virtual_router_id 12
    state EQUAL
    advert_int 1
    smtp_alert
    notify /usr/local/bin/
    authentication {
        auth_type PASS
        auth_pass hunter2
    }
    virtual_ipaddress {
        2a02:123:45:67bb::1/32
    }
}


haproxy is surprisingly easy with IPv6. Just add it to your frontend section as a bind option:

frontend http-in
    mode http
    bind
    bind 2a02:123:45:67bb::1:80 transparent
    option httplog
    option forwardfor
    option http-server-close
    option httpclose
    reqadd X-Forwarded-Proto:\ http
    http-request add-header X-Real-IP %[src]
    default_backend appserver

You must add the transparent option; otherwise, haproxy will not start if the VIP is not on the machine itself (kind of like the nonlocal.bind sysctl).

haproxy is intelligent enough to understand the port number in the address. No need to screw around with brackets like [2a02:123:45:67bb::1]:80 or special options.
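To illustrate that parsing rule outside of haproxy (plain shell string handling, not haproxy's actual parser): everything after the last colon is taken as the port, and everything before it as the address:

```shell
# split 'address:port' on the LAST colon, the way the bind line above reads it
bindspec='2a02:123:45:67bb::1:80'
port="${bindspec##*:}"   # strip everything up to and including the last colon
host="${bindspec%:*}"    # strip the last colon and what follows
echo "address=$host port=$port"
```

This prints `address=2a02:123:45:67bb::1 port=80`, matching the listener shown in the netstat output below.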

Restart haproxy and it's configured:

netstat -tlpn | grep haproxy


tcp        0      0*               LISTEN      1163/haproxy
tcp        0      0*               LISTEN      1163/haproxy
tcp6       0      0 2a02:123:45:67bb::1:80  :::*                    LISTEN      1163/haproxy
tcp6       0      0 2a02:124:45:67bb::1:443 :::*                    LISTEN      1163/haproxy

The version of haproxy is the one from Ubuntu 16.04, 1.6.3.

Tags: articles, cluster, heartbeat, high-availability, ipv6, keepalived, network, vrrp