High Availability Load-balancer

My cluster uses HAProxy to load-balance traffic to the single web-server. This is currently quite pointless. It was, however, the easiest place to start removing single points of failure. To do this, I created a backup load-balancer: a new DigitalOcean droplet with exactly the same configuration as the primary load-balancer.

DigitalOcean offers what it calls floating IP addresses, which can be moved between droplets using their API. This would allow the backup to take over from the master almost immediately: traffic would start hitting the backup server as soon as the IP address is switched. This is far better than simply changing the IP address in the DNS, as DNS records can be slow to propagate.

I used keepalived to allow the load-balancers to monitor each other, and for master determination via VRRP (Virtual Router Redundancy Protocol). This ensures that the backup becomes the master when the primary load-balancer is down, but also that the primary load-balancer becomes the master once again when it returns.

I configured keepalived as follows (in the form of a template as used by Puppet):

global_defs {
    notification_email {
    }
    notification_email_from keepalived@caconym.co.uk
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
    script "pidof haproxy"
    interval 2
}

vrrp_instance VI_1 {
    interface eth1
    state BACKUP
    priority 100

    virtual_router_id 33
    unicast_src_ip <%= @ipaddress_eth1 %>
    unicast_peer {
        <%= @peer_ipaddress_eth1 %>
    }

    authentication {
        auth_type AH
        auth_pass <%= @password %>
    }

    track_script {
        chk_haproxy
    }

    notify_master "/usr/bin/assign-ip <%= @floating_ip %> <%= @droplet_id %>"
}

This defines a command used to check the health of each load-balancer, pidof haproxy, which checks that HAProxy has a process ID. It also defines the interface to communicate over and the list of peers. Authentication is provided by the Authentication Header (AH) IPsec protocol. Finally, a command is defined that is run whenever a load-balancer is newly elected as the master: /usr/bin/assign-ip <%= @floating_ip %> <%= @droplet_id %>

The command assign-ip is a program I wrote in Go to use the DigitalOcean API to assign the floating IP to the specified droplet. This means that, upon becoming master, a load-balancer will claim the floating IP address for itself. It is worth noting that even if, due to a connection problem between the load-balancers, we end up with multiple masters, the IP address will still point at a load-balancer that is able to serve requests; specifically, the IP address will point at the load-balancer that most recently became master.

The code for the assign-ip program is as follows:

package main

import (
    "fmt"
    "io/ioutil"
    "os"
    "strconv"

    "github.com/digitalocean/godo"
    "golang.org/x/oauth2"
    "gopkg.in/yaml.v2"
)

const doConfigFile = "/etc/digitalocean.yaml"

type TokenSource struct {
    AccessToken string
}

type Config struct {
    APIToken string `yaml:"apiToken,omitempty"`
}

func (t *TokenSource) Token() (*oauth2.Token, error) {
    token := &oauth2.Token{
        AccessToken: t.AccessToken,
    }
    return token, nil
}

func main() {
    yamlFile, err := ioutil.ReadFile(doConfigFile)
    if err != nil {
        fmt.Printf("IO error: %s\n", err)
        os.Exit(1)
    }

    var config Config
    err = yaml.Unmarshal(yamlFile, &config)
    if err != nil {
        fmt.Printf("Config error: %s\n", err)
        os.Exit(1)
    }

    tokenSource := &TokenSource{
        AccessToken: config.APIToken,
    }
    oauthClient := oauth2.NewClient(oauth2.NoContext, tokenSource)
    client := godo.NewClient(oauthClient)

    ip := os.Args[1]
    dropletID, err := strconv.Atoi(os.Args[2])
    if err != nil {
        fmt.Printf("Argument error: %s\n", err)
        os.Exit(1)
    }

    _, _, err = client.FloatingIPActions.Assign(ip, dropletID)
    if err != nil {
        fmt.Printf("API error: %s\n", err)
        os.Exit(1)
    } else {
        fmt.Println("IP assigned")
    }
}
This uses godo, a DigitalOcean Go API client library, to make the API call.

So now I have twice as many load-balancers as web-servers, but I hope to rectify that soon. With the web-server, I have the additional complication of having to replicate the content of the site across to the secondary web-server. I also anticipate both primary and secondary web-servers being used to serve most requests (I need to put those load-balancers to good use, after all), so the backup will not be a true backup, more of a spare.