How to use Let’s Encrypt certificates with Nginx

Back in early 2011, I wrote a post on the most common reasons why SSL isn’t turned on by default for all websites, and one of these reasons at the time was cost.

Standard SSL certificates can be quite cheap these days, yet nothing beats free. According to their website, Let’s Encrypt – which entered public beta on December 3 – is

a new Certificate Authority: It’s free, automated, and open.

So this essentially means you can get valid, trusted TLS/SSL certificates for free. Besides the cost, one thing I really like about Let’s Encrypt is how easy and quick it is to get a new certificate! Normally you’d have to generate a Certificate Signing Request (CSR) and a private key on the server, then send the CSR to a provider/Certificate Authority in order to get the actual certificate. In many cases, the certificate you receive from the provider is a bundle of several certificates that you have to combine into a single certificate you can then install on the server. You need to repeat the process each time you need to renew the certificate.
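As a concrete illustration of that traditional flow, here’s roughly what generating the key and CSR looks like with OpenSSL (the domain and subject fields here are hypothetical examples):

```shell
# Generate a 2048-bit private key and a CSR to send to the CA (hypothetical domain)
openssl genrsa -out /tmp/example.com.key 2048
openssl req -new -key /tmp/example.com.key -out /tmp/example.com.csr \
  -subj "/C=GB/L=London/O=Example Ltd/CN=example.com"
# Once the CA returns the signed certificate and intermediates, you'd combine them:
# cat example.com.crt intermediate.crt > example.com.chained.crt
```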

The process overall isn’t complicated but is made much easier and quicker with Let’s Encrypt. If you use Apache, everything is pretty much automated with the Let’s Encrypt python tools, in that the certificate will be generated and installed in Apache automatically for you. The same level of support for Nginx is still in the works, but generating a certificate you can install with Nginx as well is quite straightforward.

First, you need to clone the git repo which contains the python tools you will use to generate new certificates:

git clone
cd letsencrypt

Next, you need to stop Nginx before proceeding… I know this sounds like it may be a problem, but there is a reason for this, which I will explain in a moment.

service nginx stop

Now you can run the python tool which will generate the certificate for you:

./letsencrypt-auto --agree-dev-preview --server auth

This will require that you accept the terms and conditions and enter the domain or domains you need the certificate for. For example, you may want a certificate for a domain with and without the www subdomain.

Once the tool has done its stuff, you will find the new certificate in /etc/letsencrypt/live by default, with a directory for each domain which contains the following files:

cert.pem chain.pem fullchain.pem privkey.pem

The important files which you will use with Nginx are fullchain.pem and privkey.pem.

So open the relevant virtual host file (usually in /etc/nginx/sites-enabled) and add the following lines to the server block:

server {
listen 443 ssl;

server_name <domain name>;

ssl on;
ssl_certificate /etc/letsencrypt/live/<domain name>/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/<domain name>/privkey.pem;
}


Of course replace domain name with the actual domain name (or names for the server_name directive if more than one, e.g. with and without www).

These are the minimum settings you need to add in order to enable https for your site, but I recommend you have a look at Mozilla’s SSL config generator for additional settings to improve the security of your setup. For example I’m currently using the following settings:

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

add_header Strict-Transport-Security max-age=15768000;

ssl_stapling on;
ssl_stapling_verify on;

Once you have completed the configuration, reload or restart Nginx and test the configuration with this service.

If all is configured properly you should get a very good score, e.g.:


Optionally, you may also want to redirect all the plain http traffic to the https ‘version’ of your site. To do this, just add another server block to the virtual hosts like the following:

server {
listen 80;
server_name <domain name>;
rewrite ^/(.*) https://<domain name>/$1 permanent;
}

So, why do you need to stop Nginx before generating a certificate with Let’s Encrypt? When you request a certificate with a typical provider, they need to verify that you own the domain and this is done, for example, by sending an email to an email address of that domain with a confirmation link. If you own the domain, of course you have access to that email address and therefore you can proceed with the next steps required to get the certificate.

With Let’s Encrypt, everything is automated but they still need to verify ownership of the domain first. So when you run letsencrypt-auto, it starts an HTTP server listening on port 80 and requests a certificate from the Let’s Encrypt CA. The CA, in order to verify that you own the domain, makes an HTTP request to your domain, which will be served by letsencrypt-auto’s server, confirming that you own the domain. Because this HTTP server runs on port 80, you can’t run your Nginx server on port 80 at the same time, so while you generate a certificate with letsencrypt-auto you will need to stop Nginx first. It doesn’t take long to get a certificate, but this may be a problem depending on the application, especially considering that – as we’ll see later – Let’s Encrypt certificates must be renewed every 90 days. There is a module for Apache that does all of this automatically without downtime, but as mentioned, the same support for Nginx is still in the works, so in the meantime you will have to stop Nginx while generating the certificate. Please note that what I described is the easiest way to obtain and install a certificate with Let’s Encrypt, so there may be other ways to do this without downtime even with Nginx. Update: I found this which might be of interest.
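Until native Nginx support lands, the stop/renew/start dance described above can be scripted so the downtime window stays as short as possible. A minimal sketch, assuming the letsencrypt checkout lives in /root/letsencrypt (the paths are assumptions, and the letsencrypt-auto arguments are the same ones used earlier):

```shell
#!/bin/sh
# Hypothetical wrapper: stop Nginx only for the duration of the certificate request
set -e
cd /root/letsencrypt
service nginx stop
# re-run the same letsencrypt-auto invocation used above
./letsencrypt-auto --agree-dev-preview --server auth
service nginx start
```

You could then run this from cron before the 90-day expiry comes around.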


Unfortunately, Let’s Encrypt certificates come with some limitations:

  • only Domain Validation (DV) certificates are issued, so browsers will show the padlock as expected. However, Organisation Validation and Extended Validation certificates are not available, and apparently Let’s Encrypt has no plans to offer them because they require some human intervention and thus cost money, so their generation cannot be fully automated nor offered for free, which are the key features of Let’s Encrypt.
  • wildcard certificates aren’t available either; you can get certificates for multiple subdomains though. This may be a problem with some applications.
  • certificates expire in 90 days, which seems a bit too short. See this for an explanation.
  • there is a limit of 5 certificates per registered domain in 7 days; this limit should be lifted when Let’s Encrypt is out of beta. So, for example, if you request separate certificates for three subdomains, these will be counted as 3 certificates for the same domain. But of course you can request a single certificate covering multiple subdomains at once.
  • all major browsers are supported, but some devices don’t recognise these certificates. See this list for more info.

Even with these limitations, Let’s Encrypt is an exciting initiative and it is likely that things will improve when LE is out of beta. It’s a great service because by offering free certificates that are also easier to obtain, it will surely speed up the adoption of TLS/SSL encryption, making for a more secure web.

I don’t have any particular reason for enabling encryption on all pages of this blog, since it doesn’t manage any user data and I am outsourcing comments to Disqus, but I am planning on switching anyway because another added benefit of https is that it helps improve search engine ranking.

So if you haven’t yet, check Let’s Encrypt out!

Setting up an Ubuntu server for Ruby and PHP apps

There are several guides on the Internet on setting up an Ubuntu server, but I thought I’d add some notes here on how to set up a server capable of running both Ruby and PHP apps at the same time. Ubuntu’s latest Long Term Support (LTS) release is 14.04, so this guide is based on that release.

I will assume you already have a server with the basic Ubuntu Server Edition installed – be it a dedicated server or a VPS from your provider of choice – with just SSH access enabled and nothing else. We’ll be bootstrapping the basic system and installing all the dependencies required for running Ruby and PHP apps. I usually use Nginx as the web server, so we’ll also be using Phusion Passenger as the application server for Ruby, and FastCGI for PHP, to make things easier.

First steps

Before anything else, it’s a good idea to update the system with the latest updates available. So SSH into the new server with the IP and credentials you’ve been given and (recommended) start a screen session with

screen -S <session-name>

Now change the root password with

passwd
then open /root/.ssh/authorized_keys with an editor and ensure no SSH keys other than yours have been added; if you see any other keys, I recommend you comment them out and uncomment them only if you ever need to ask your provider for support.

Done that, as usual run:

apt-get update
apt-get upgrade -y

to update the system.

Next, edit /etc/hostname with vi or any other editor and change the hostname to the one you will be using to connect to this server; also edit /etc/hosts and add the correct hostname there as well. Reboot:

reboot now

SSH access

It’s a good idea to use a port other than the default one for SSH access, and a user other than root. In this guide, we’ll be:

  • using the example port 17239
  • disabling the root access and enabling access for the user deploy (only) instead
  • switching from password authentication to public key authentication for good measure.

Of course you can choose whichever port and username you wish.

For convenience, on your client computer (that is, the computer you will be connecting to the server from) edit ~/.ssh/config and add the following content:

Host my-server (or whichever name you prefer)
Hostname <the ip address of the server>
Port 22
User root

So you can more easily SSH into the new server with just

ssh my-server

As you can see for now we are still using the default port and user until the SSH configuration is updated.

Unless your public key has already been added to /root/.ssh/authorized_keys during the provisioning of the new server, still on the client machine run

ssh-copy-id <hostname or ip of the server>

to copy your public key over. You should now be able to SSH into your server without password.

Back on the server, it’s time to set up the user which you will use to SSH into the server instead of root:

adduser deploy

Edit /etc/sudoers and add:

deploy ALL=(ALL:ALL) ALL

On the client, ensure you can SSH into the server as deploy using your key:

ssh-copy-id deploy@my-server

You should now be able to login as deploy without password.

Now edit /etc/ssh/sshd_config and change settings as follows:

Port 17239
PermitRootLogin no
PasswordAuthentication no
UseDNS no
AllowUsers deploy

This will:

  • change the port
  • disable root login
  • disable password authentication so we are forced to use public key authentication
  • disable DNS lookups to speed up logins
  • only allow the user deploy to SSH into the system

Restart SSH server with:

service ssh restart

Keep the current session open just in case for now. On the client, open again ~/.ssh/config and update the configuration of the server with the new port and user:

Host my-server (or whichever name you prefer)
Hostname <the ip address of the server>
Port 17239
User deploy

Now if you run

ssh my-server

you should be in as deploy without password. You should no longer be able to login as root though; to test run:

ssh root@my-server date

you should see an error:

Permission denied (publickey).


Firewall

Now that SSH access is sorted, it’s time to configure the firewall to lock down the server so that only the services we want (such as SSH, HTTP/HTTPS and mail) are allowed. Edit the file /etc/iptables.rules and paste the following:

# Generated by iptables-save v1.4.4 on Sat Oct 16 00:10:15 2010
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -d ! -i lo -j DROP
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 17239 -j ACCEPT
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables [Positive[False?]: " --log-level 7
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -j LOG
-A INPUT -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Sat Oct 16 00:10:15 2010
# Generated by iptables-save v1.4.4 on Sat Jun 12 23:55:23 2010
*nat
-A PREROUTING -p tcp --dport 25 -j REDIRECT --to-port 587
COMMIT
# Completed on Sat Jun 12 23:55:23 2010

It’s a basic configuration I have been using for some years. It locks all incoming traffic apart from SSH access, web traffic (since we’ll be hosting Ruby and PHP apps) and mail. Of course, make sure you specify the SSH port you’ve chosen here if other than 17239 as in the example.

To apply the setting now, run:

iptables-restore < /etc/iptables.rules

and verify with

iptables -L

You should see the following output:

Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
DROP all -- anywhere loopback/8
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere tcp dpt:https
ACCEPT tcp -- anywhere anywhere tcp dpt:submission
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:17239
LOG all -- anywhere anywhere limit: avg 5/min burst 5 LOG level debug prefix "iptables [Positive[False?]: "
ACCEPT icmp -- anywhere anywhere icmp echo-request
LOG all -- anywhere anywhere LOG level warning
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere

Now if you reboot the server, these settings will be lost, so you need to persist them in either of two ways:

1) open /etc/network/interfaces and add, in the eth0 section, the following line:

post-up iptables-restore < /etc/iptables.rules

So the file should now look similar to the following:

auto eth0
iface eth0 inet static
address ...
netmask ...
gateway ...
up ip addr add dev eth0
post-up iptables-restore < /etc/iptables.rules


2) Run

apt-get install iptables-persistent

Either way, reboot now and verify again with iptables -L that the settings are persisted.

ZSH shell, editor (optional)

If, like me, you prefer ZSH over BASH and use VIM as your editor, first install ZSH with:

apt-get install zsh git-core
curl -L | sh
ln -s ~/dot-files/excid3.zsh-theme ~/.oh-my-zsh/themes

Then you may want to use my VIM configuration so as to have a nicer editor environment:

cd; git clone
cd dot-files; ./

I’d repeat the above commands for both the deploy user and root (as usual, you can use sudo -i for example to login as root). Under deploy, you’ll additionally need to run:

chsh

and specify /usr/bin/zsh as your shell.

Dependencies for Ruby apps

You’ll need to install the various dependencies required to compile Ruby and install various gems:

apt-get install build-essential curl wget openssl libssl-dev libreadline-dev libmysqlclient-dev ruby-dev mysql-client ruby-mysql xvfb firefox libsqlite3-dev sqlite3 libxslt1-dev libxml2-dev

You’ll also need to install nodejs for the assets compilation (Rails apps):

apt-get install software-properties-common
add-apt-repository ppa:chris-lea/node.js
apt-get update
apt-get install nodejs

Next, as deploy:
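The commands for installing rbenv and Ruby appear to be missing at this point; a typical rbenv-based setup, matching the PATH and version referenced below, would look something like this (the repository URLs and the Ruby version are assumptions):

```shell
# Install rbenv plus the ruby-build plugin, then compile Ruby (2.2.4 here)
git clone ~/.rbenv
git clone ~/.rbenv/plugins/ruby-build
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
rbenv install 2.2.4
rbenv global 2.2.4
```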

Ensure the following lines are present in the shell rc files (.zshrc and .zprofile) and reload the shell so the new Ruby can be “found”:

export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
eval "$(rbenv init -)"

ruby -v should now output the expected version number, 2.2.4 in the example.

Optionally, you may want to install the rbenv-vars plugin for environment variables support with rbenv:

git clone ~/.rbenv/plugins/rbenv-vars
chmod +x ~/.rbenv/plugins/rbenv-vars/bin/rbenv-vars

Dependencies for PHP apps

Install the various packages required for PHP-FPM:

apt-get install php5-fpm php5-mysql php5-curl php5-gd php5-intl php-pear php5-imagick php5-mcrypt php5-memcache php5-memcached php5-ming php5-ps php5-pspell php5-recode php5-snmp php5-sqlite php5-tidy php5-xmlrpc php5-xsl php5-geoip php-apc php5-imap


MySQL

I am assuming here you will be using MySQL – I usually use the Percona distribution. If you plan on using some other database system, skip this section.

First, install the dependencies:

apt-get install curl build-essential flex bison automake autoconf bzr libtool cmake libaio-dev libncurses-dev zlib1g-dev libdbi-perl libnet-daemon-perl libplrpc-perl libaio1
gpg --keyserver hkp:// --recv-keys 1C4CBDCDCD2EFD2A
gpg -a --export CD2EFD2A | sudo apt-key add -

Next edit /etc/apt/sources.list and add the following lines:

deb trusty main
deb-src trusty main

Install Percona server:

apt-get update
apt-get install percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5 percona-xtradb-cluster-galera-2.x

Test that MySQL is running:

mysql -uroot -p

Getting web apps up and running

First install Nginx with Passenger for Ruby support (also see this):

apt-key adv --keyserver --recv-keys 561F9B9CAC40B2F7
apt-get install apt-transport-https ca-certificates

Edit /etc/apt/sources.list.d/passenger.list and add the following:

deb trusty main

Set the correct permissions on the file and update the sources:

chown root: /etc/apt/sources.list.d/passenger.list
chmod 600 /etc/apt/sources.list.d/passenger.list
apt-get update

Then install Phusion Passenger for Nginx:

apt-get install nginx-extras passenger

Edit /etc/nginx/nginx.conf and uncomment the passenger_root and passenger_ruby lines, making sure the latter points to the version of Ruby installed with rbenv, otherwise it will point to the default Ruby version in the system. Make the following changes:

user deploy;
worker_processes auto;
pid /run/;

events {
use epoll;
worker_connections 2048;
multi_accept on;
}

http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /home/deploy/.rbenv/shims/ruby;
passenger_show_version_in_header off;
# ...rest of the http block...
}

Restart nginx with

service nginx restart

Test that nginx works by opening http://the_ip_or_hostname in your browser.

For PHP apps, we will be using fastcgi with unix sockets. Create for each app a profile in /etc/php5/fpm/pool.d/, e.g. /etc/php5/fpm/pool.d/myapp. Use the following template:

[<app name>]
listen = /tmp/<app name>.php.socket
listen.backlog = -1
listen.owner = deploy = deploy

; Unix user/group of processes
user = deploy
group = deploy

; Choose how the process manager will control the number of child processes.
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500

; Pass environment variables
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

; host-specific php ini settings here
; php_admin_value[open_basedir] = /var/www/DOMAINNAME/htdocs:/tmp

To allow communication between Nginx and PHP-FPM via fastcgi, ensure each PHP app’s virtual host includes some configuration like the following:

location / {
try_files $uri /index.php?$query_string;
}

location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/tmp/<app name>.php.socket;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
Edit /etc/php5/fpm/php.ini and set cgi.fix_pathinfo to 0. Restart both FPM and Nginx:

service php5-fpm restart
service nginx restart
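The php.ini change above can be scripted with sed; shown here against a scratch copy of the file (on a real server you’d point it at /etc/php5/fpm/php.ini instead):

```shell
# Create a scratch copy simulating the stock php.ini line, then flip the setting
printf ';cgi.fix_pathinfo=1\n' > /tmp/php.ini
sed -i 's/^;\{0,1\}cgi\.fix_pathinfo=1/cgi.fix_pathinfo=0/' /tmp/php.ini
grep 'cgi.fix_pathinfo' /tmp/php.ini   # prints cgi.fix_pathinfo=0
```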

Congrats, you should now be able to run both Ruby and PHP apps.


Backups

There are so many ways to back up a server… what I usually use on my personal servers is a combination of xtrabackup for MySQL databases and duplicity for file backups.

As root, clone my admin scripts:

cd ~
git clone
apt-key adv --keyserver --recv-keys 1C4CBDCDCD2EFD2A

Edit /etc/apt/sources.list and add:

deb trusty main
deb-src trusty main

Proceed with the installation of the packages:

apt-get update
apt-get install duplicity xtrabackup

Next refer to this previous post for the configuration.

Schedule the backups with crontab -e by adding the following lines:

MAILTO = <your email address>

00 02 * * sun /root/admin-scripts/backup/ full
00 02 * * mon-sat /root/admin-scripts/backup/ incr
00 13 * * * /root/admin-scripts/backup/ incr


Mail

  • install postfix and dovecot with
apt-get install postfix dovecot-common mailutils
  • run dpkg-reconfigure postfix and set the following:
  • General type of mail configuration -> Internet Site
  • System mail name -> same as the server’s hostname
  • Root and postmaster email recipient -> your email address
  • Force synchronous updates on mail queue -> no
  • Local networks -> leave default
  • Mailbox size limit (bytes) -> set 10485760 (10MB) or so, to prevent the default mailbox from growing with no limits
  • Internet protocols to use -> all

  • SMTP authentication: run

postconf -e 'home_mailbox = Maildir/'
postconf -e 'smtpd_sasl_type = dovecot'
postconf -e 'smtpd_sasl_path = private/auth'
postconf -e 'smtpd_sasl_local_domain ='
postconf -e 'smtpd_sasl_security_options = noanonymous'
postconf -e 'broken_sasl_auth_clients = yes'
postconf -e 'smtpd_sasl_auth_enable = yes'
postconf -e 'smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination'
  • TLS encryption: run
mkdir /etc/postfix/certificate && cd /etc/postfix/certificate
openssl genrsa -des3 -out server.key 2048
openssl rsa -in server.key -out server.key.insecure
mv server.key
mv server.key.insecure server.key
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

postconf -e 'smtp_tls_security_level = may'
postconf -e 'smtpd_tls_security_level = may'
postconf -e 'smtp_tls_note_starttls_offer = yes'
postconf -e 'smtpd_tls_key_file = /etc/postfix/certificate/server.key'
postconf -e 'smtpd_tls_cert_file = /etc/postfix/certificate/server.crt'
postconf -e 'smtpd_tls_loglevel = 1'
postconf -e 'smtpd_tls_received_header = yes'
postconf -e 'myhostname = <hostname>'
  • SASL
  • edit /etc/dovecot/conf.d/10-master.conf, and uncomment the following lines so that they look as follows (first line is a comment so leave it…commented out):

# Postfix smtp-auth
unix_listener /var/spool/postfix/private/auth {
mode = 0666
}

  • edit /etc/dovecot/conf.d/10-auth.conf and change the setting auth_mechanisms to “plain login”
  • edit /etc/postfix/ and a) comment out smtp, b) uncomment submission
  • restart postfix: service postfix restart
  • restart dovecot: service dovecot restart
  • verify that all looks good:

root@nl:/etc/postfix/certificate# telnet localhost 587
Connected to localhost.
Escape character is '^]'.
220 <hostname> ESMTP Postfix (Ubuntu)
ehlo <hostname>
250-SIZE 10240000
250 DSN

Test email sending:

echo "" | mail -s "test" <your email address>

There’s a lot more that could be done, but this should get you started. Let me know in the comments if you run into any issues.

CentOS Parallels VM and missing network configuration

I was using CentOS with Parallels today, and had problems with networking after cloning a template VM into several VMs. Basically, after cloning the template, the clones appear to report only the loopback interface and one eth interface which seems to be inactive, so of course Internet doesn’t work:

[root@centos ~]# ifconfig -a
eth1 Link encap:Ethernet HWaddr 00:1C:42:22:36:26
RX packets:2464262 errors:0 dropped:0 overruns:0 frame:0
TX packets:1221954 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3716624972 (3.4 GiB) TX bytes:106808282 (101.8 MiB)

lo Link encap:Local Loopback
inet addr: Mask:
inet6 addr: ::1/128 Scope:Host
RX packets:3502 errors:0 dropped:0 overruns:0 frame:0
TX packets:3502 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:359663 (351.2 KiB) TX bytes:359663 (351.2 KiB)
[root@centos ~]# ping
connect: Network is unreachable

I am not too familiar with CentOS so I googled and found out that networking is disabled in the default installation or something like that.

Anyway, in case someone runs into the same issue, if you run ifup it complains that the configuration for the eth interface could not be found:

[root@centos ~]# ifup eth1
/sbin/ifup: configuration for eth1 not found.
Usage: ifup <device name>

I’ve had this particular issue – missing network configuration – only with CentOS VMs, but networking doesn’t work with Ubuntu VMs either after cloning. On Ubuntu however I usually run

rm /etc/udev/rules.d/70-persistent-net.rules

and then reboot the VM, and that usually fixes it. I tried the same on the CentOS clones but it didn’t work.

It turns out that on the CentOS clones there is a profile for the loopback interface and a profile for eth0, but not for eth1 – which is the interface I see in the VMs after cloning – and that’s the reason why the configuration could not be found:

[root@centos ~]# ls /etc/sysconfig/network-scripts/ifcfg*
/etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-lo

So the way I fixed the missing configuration was by making a copy of the eth0 profile for eth1, and updating the content of the new profile with the correct device name and MAC address. First, make a copy of the profile:

[root@centos ~]# cd /etc/sysconfig/network-scripts/
[root@centos network-scripts]# cp ifcfg-eth0 ifcfg-eth1

Then, open the new profile with any editor and make sure the DEVICE name is eth1 (or whatever ethX it is for you if you have removed/added virtual NICs) and that HWADDR is set to the MAC address of the VM:
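The snippet below is a sketch of what the edited profile might contain, using the MAC address from the ifconfig output above (the remaining settings are assumptions and depend on your setup):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=00:1C:42:22:36:26
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
```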


You can find the MAC address in the Network > Advanced Settings of the virtual machine:


Then, run

[root@centos network-scripts]# ifup eth1

Determining IP information for eth1... done.

And Internet should now work:

[root@centos network-scripts]# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=53 time=25.9 ms

That’s it. Not sure why this happens but anyway it’s easy to fix.

Easier backups with duplicity and xtrabackup

A little while ago I wrote a couple of scripts to take backups with duplicity and xtrabackup more easily; I am a little allergic to all the options and arguments you can use with both duplicity and xtrabackup, so these scripts use simple configuration files instead.

You can find these scripts on Github at


Xtrabackup is a great tool for taking backups (both full and incremental) of your MySQL databases without bringing them offline. When you first launch the script – admin-scripts/backup/ – without arguments, it will generate the simple configuration file as ~/.xtrabackup.config, containing the following configuration settings – you only need to set the MySQL credentials, customise the paths of source and destination, and choose how many backup chains to keep:


A backup chain is, as usual, made of one full backup and subsequent incrementals. The script – admin-scripts/backup/ – accepts a single argument when you are taking backups, either full or incr. As the names suggest, in the first case a full backup will be taken, while in the second case it will be an incremental. Backups are stored in the destination directory with the structure below:

├── full
│ ├── 2014-03-04_20-39-39
│ ├── 2014-03-09_02-00-04
│ ├── 2014-03-16_02-00-01
│ └── 2014-03-23_02-00-02
└── incr
├── 2014-03-04_20-39-53
├── 2014-03-04_20-41-21
├── 2014-03-05_02-00-02
├── 2014-03-05_13-00-02
├── 2014-03-06_02-00-07

I chose to store the incrementals separately from the full backups so as to always have full backups ready for a simple copy if needed, but restoring from incrementals will work just fine. In order to restore, you can choose any of the backups available – either full or incremental. To see the list of all the backups available you can use the list argument, which shows something like this:

> admin-scripts/backup/ list
Loading configuration from /root/.xtrabackup.config.
Available backup chains (from oldest to latest):

Backup chain 1:

Backup chain 2:

Backup chain 3:
Full: 2014-03-16_02-00-01
Incremental: 2014-03-16_13-00-01
Incremental: 2014-03-17_02-00-02
Incremental: 2014-03-21_13-00-01
Incremental: 2014-03-22_02-00-01
Incremental: 2014-03-22_13-00-02
Backup chain 4:
Full: 2014-03-23_02-00-02
Incremental: 2014-03-23_13-00-01
Incremental: 2014-03-24_02-00-03
Incremental: 2014-03-24_13-00-01
Incremental: 2014-03-25_02-00-01
Incremental: 2014-03-25_13-00-02

Latest backup available:
Incremental: 2014-03-25_13-00-02

Then, to restore any of the backups available you can run the script with the restore argument, e.g.

admin-scripts/backup/ restore 2014-03-25_02-00-01 <destination directory>

Once the restore is complete, the final result will be a destination directory ready for use with MySQL, so all you need to do at this stage (as the script will suggest) is:

  • stop MySQL
  • replace the content of MySQL’s datadir with the contents of the destination directory you’ve used for the restore
  • ensure the MySQL datadir is owned by the mysql user
  • start MySQL again

MySQL should happily work again with the restored data.
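The post-restore steps above can be sketched as follows (the datadir path /var/lib/mysql, the restore destination, and the init script name are assumptions that depend on your setup):

```shell
# Hypothetical sequence after the restore completes
service mysql stop
mv /var/lib/mysql /var/lib/mysql.old            # keep the old datadir around, just in case
mv /restore/destination /var/lib/mysql          # the directory you restored into
chown -R mysql:mysql /var/lib/mysql             # MySQL must own its datadir
service mysql start
```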


The other script is a useful wrapper which makes it a bit easier to take backups of data with duplicity; like the other script, it uses a configuration file instead of lots of options and arguments, and this configuration file is generated as ~/.duplicity.config when you first run the script with no arguments. The content of this configuration file is as follows:

INCLUDE=(/backup /etc /home /root /usr/local/configuration /var/log /var/lib/mysql /var/www)




# Set ENCRYPT_KEY if you want to use GPG pub key encryption. Otherwise duplicity will just use symmetric encryption.

# Optionally use a different key for signing

COMPRESSION_LEVEL=6 # 1-9; 0 disables compression; it currently works only if encryption is enabled

VERBOSITY=4 # 0 Error, 2 Warning, 4 Notice (default), 8 Info, 9 Debug (noisiest)

# Comment out the following if you want to run one or more scripts before duplicity backup.

# Comment out the following if you want to run one or more scripts after duplicity backup.

Most of these settings should be self-explanatory. backups_repository uses duplicity’s rsync backend by default, so you need SSH access to the destination server. max_volume_size: duplicity automatically splits the backup into volumes, and the script uses settings that have duplicity generate one volume while the previous one is being asynchronously transferred to the destination, which should make backups faster. The ideal value for max_volume_size is difficult to determine as it depends on many things, but in my case I have found that a value of 250, together with the other settings I use for compression and encryption, makes backups fairly fast.

encryption enables/disables the encryption of the backup. If you are backing up on site to servers you own and that no one else controls, I’d disable this option so as to make backups quicker; otherwise I recommend enabling it if others have access to the backup files. Encryption can be done both with (GPG) keys, or without keys, using symmetric encryption with a passphrase.

Then, you can set the compression level; I’d recommend the value 6, as from my tests higher compression slows down backups for little gain. As the comment in the configuration file suggests, compression is currently available only when encryption is also enabled.

Lastly, as you can see you can choose to run other scripts before and/or after the backup with duplicity is performed. In the configuration above you can also see that I normally run the backup with the xtrabackup script first, so that the backup taken with duplicity also includes the latest MySQL backup. I find this pretty useful. Like for the other script, you need to specify the full or incr argument when taking backups; this argument will automatically be passed to the scripts specified in run_before and run_after so, for example, when taking an incremental backup with duplicity, an incremental backup with xtrabackup is taken first.

Restoring latest backup available


duplicity -v debug rsync://user@host//backup_directory <destination>

Note: Duplicity will not overwrite an existing file.

duplicity – other useful commands

Restoring from backups with duplicity is a little more straightforward than backing up, so I haven’t really added any commands for this in the script. However, I’ll add here, for reference, some useful commands you may need when restoring or otherwise working directly with duplicity. These examples assume you use duplicity with symmetric encryption, in which case you need to have the PASSPHRASE environment variable set and available:

export PASSPHRASE=... # the passphrase you've used in the configuration file; you'll need this with all the following commands

If you add these commands to some other scripts, remember to unset this variable afterwards with unset PASSPHRASE.
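For example, a small wrapper script might keep the passphrase around only for as long as it is needed (the passphrase value and repository URL here are obviously placeholders):

```shell
#!/bin/sh
# Make the passphrase available to the duplicity commands that follow...
export PASSPHRASE="your-passphrase-here"

# ...run the duplicity commands here, e.g.:
# duplicity -v debug collection-status rsync://user@host//backup_directory

# ...and clear it afterwards so it doesn't leak into the rest of the session.
unset PASSPHRASE
```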

Listing available backups
duplicity -v debug collection-status rsync://user@host//backup_directory
Listing all files in current backup
duplicity -v debug list-current-files rsync://user@host//backup_directory
Restoring by date / specific files (e.g. 3 days ago)
duplicity -v debug -t 3D --file-to-restore FILENAME rsync://user@host//backup_directory <destination>


Restoring from a specific time
duplicity -v debug --restore-time 1308655646 rsync://user@host//backup_directory <destination> (unix time)
duplicity -v debug --restore-time 2011-06-21T11:27:26+02:00 rsync://user@host//backup_directory <destination>

Note: timestamps shown when listing available backups are already converted to the local timezone, while the time on the server is in UTC. So a backup made e.g. on 24/2/2014 at 02:00 UTC on the server will be listed as Mon Feb 24 04:00:35 2014 in a UTC+2 timezone; restoring this backup means using the timestamp 2014-02-24T04:00:35+02:00 (or the equivalent unix time).
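If in doubt, you can convert between the two timestamp formats duplicity accepts using GNU date (available on virtually all Linux distros):

```shell
# An ISO 8601 timestamp with timezone offset and the corresponding unix time
# refer to the same instant; GNU date converts one into the other.
date -d "2014-02-24T04:00:35+02:00" +%s   # prints 1393207235
```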

If you are looking to use free tools, these scripts and commands should cover most of your server backup needs.

Using Nginx to comply with a third-party API’s rate limits

API rate limits: the problem

I have just started a little pet project today that involves the integration of APIs of various social networks. In order to prevent abuse, among other reasons, these APIs usually restrict the number of requests that a client (normally identified by IP address) can make in a given amount of time, through rate limiting practices; an example is the Reddit API, which according to its access rules only allows 30 requests/minute per client.

Complying with this sort of API rate limit at application level, while possible, can be quite complicated, because of the need to maintain some shared state across the various instances of the application, so that the rate limits are not exceeded regardless of which instance makes requests at any given time. I’m a Ruby developer, so in the past I have used a gem called SlowWeb to comply with a third party API’s rate limits. Unfortunately this gem is no longer maintained (the last updates were 3 years ago); plus, it is limited in that it doesn’t work by itself with multiple instances of the application, since it doesn’t share any state across them.

A simple solution

Wouldn’t it be cool if there were a way to comply with a third party API’s rate limits independently of our application, and without reinventing the wheel? That way there would no longer be any need to maintain shared state across multiple instances of the application, since the rate limiting would be handled separately. There’s a simple answer to this: web servers. It is trivial to implement such a solution with a web server like Apache or Nginx.

I normally use Nginx, so I’ll give you a very simple example (for Reddit API) with this web server. First, we need to add the following lines to Nginx’s main configuration:

http {
    # ...
    limit_req_zone $binary_remote_addr zone=api_name:10m rate=30r/m;
    # ...
}

Then we need to add the following lines to a virtual host we’ll dedicate as wrapper for the third party API:

server {
    listen 80;
    server_name your_url.ext;

    location / {
        limit_req zone=api_name burst=30;
        proxy_pass http://api_url.ext/;
    }
}
That’s it! Now you can just use your custom URL in your application and stop worrying about the API rate limits. How it works is very simple: Nginx uses the built-in HttpLimitReqModule to limit the number of requests per client in a given amount of time. In the example above, we first define a ‘zone’ specifying that we want to limit requests to 30 per minute; then, in the virtual host, we let Nginx proxy all requests to the API’s URL, allowing some “burstiness” (drop the burst parameter if the third party API does not tolerate it). Another bit of configuration you may want to add to the Nginx virtual host would be for caching, but I usually prefer handling this at application level, for example with Redis.
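A quick way to verify that the limit is being enforced is to fire a burst of requests at the wrapper and count the response codes (your_url.ext is the placeholder hostname from the configuration above, so this assumes the virtual host is up):

```shell
# Requests exceeding the configured rate plus burst should be rejected;
# nginx answers those with a 503 status by default.
for i in $(seq 1 40); do
  curl -s -o /dev/null -w "%{http_code}\n" http://your_url.ext/
done | sort | uniq -c
```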

Know of other tricks to easily comply with API rate limits? Please let me know in the comments.

Copy files via an intermediate server with SSH/SCP

Sometimes I need to copy files from a remote server, but I can only access it through an intermediate server due to some restrictions.

I can never remember how to do it, so I thought I’d add some notes here which may hopefully save others some time; I bet there are many ways of doing this, but here are a couple. The first one is with SCP, using an SSH tunnel. First, start the tunnel with:

ssh -L <local-port>:<target-server>:<target-ssh-port> <user>@<intermediate-server>

So, for example, given

  • I can SSH into the intermediate server (with settings stored in ~/.ssh/config for example)
  • from the intermediate server, I can SSH into the target server with the user vito and SSH listening on the port 9876

I would start the tunnel with the command:

ssh -L 5000: vito@

where 5000 is just a random port I choose. Once the tunnel is started, to copy files from the remote server, through the tunnel, you can run:

scp -P 5000 vito@ /local/destination/folder/

That’s it. SCP should copy files just as it would do if there were a direct SSH connection with the target server. Unfortunately, it is not always possible to copy files this way because SSH forwarding may have been disabled on the intermediate server for security reasons; in this case, using an SSH tunnel won’t work and you’ll see errors like:

channel 3: open failed: administratively prohibited: open failed

There is another trick we can use in this case, though, to copy files with SSH directly and without a forwarding tunnel. An example:

ssh "ssh \"cat /path/to/file\"" | pv > /local/destination/folder/
477MiB 0:05:05 [1.81MiB/s] [ <=> ]

I like this trick because it doesn’t require any particular configuration on either the intermediate server or the target server, and it uses a tool like cat which is available on virtually all distros; pv is optional but quite handy, since it shows in realtime how much has been copied and the transfer speed. In the example above pv won’t show the % of the file already copied, but that is easy to fix by passing the -s SIZE argument (you need to know the size of the file in advance for the progress bar to be accurate).
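For example (intermediate and target here are placeholder SSH aliases, and this assumes stat is available on the target to read the file size):

```shell
# First ask the target for the file size, then feed it to pv -s
# so the progress bar can show the percentage copied.
SIZE=$(ssh intermediate "ssh target \"stat -c %s /path/to/file\"")
ssh intermediate "ssh target \"cat /path/to/file\"" | pv -s "$SIZE" > /local/destination/file
```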

I would be curious to know if there are other tricks to copy files via an intermediate server, so please leave a comment if you are aware of any others. 🙂

MySQL Cluster with Percona/Galera

Why a MySQL cluster

I have been using MySQL for many years as the RDBMS of choice for most applications; it does have its quirks, and it may lack some features I wish it had (features that other relational databases such as PostgreSQL do have), but generally speaking it works fairly well, with good performance and reliability; I am also more familiar with it than with other databases. While these days the buzz is mainly about the so-called NoSQL/schemaless/document-store/key-value-store/you-name-it alternatives, in my opinion relational databases are still a good option in many cases, and are often easier to use.

For a while, the typical solution I relied on to scale MySQL databases was based on asynchronous replication, partitioning, and sharding, depending on the case. However, I got tired of slaves going out of sync, and sharding can be a very good or a very bad idea depending on how it is done and on how well you can guess and plan in advance how the data will be used. In the past I’ve also tried the ‘standard’ MySQL Cluster, multi master replication setups, and various proxying configurations; however, none of these were 100% reliable or easy enough to set up and maintain. About a year ago I started playing with a different type of MySQL cluster based on the synchronous replication provided by the Galera plugin (by Codership – who are also based here in Finland); Galera enables virtually synchronous replication to allow for reading from/writing to any node; furthermore, it automatically handles node provisioning. Better performance than the ‘standard’ MySQL cluster, no more slaves out of sync, true multi master replication and scalability out of the box with very little maintenance. Yay!

Rather than using the plugin directly with the standard ‘distribution’ of MySQL, I prefer using Percona‘s own distribution, which includes many optimisations as well as the XtraDB storage engine, a drop-in replacement for InnoDB that performs a lot better in many scenarios; in addition, Percona XtraDB Cluster also includes the Galera plugin, so you are ready to configure a new MySQL cluster in a very short time. You can find instructions on how to set up a MySQL cluster on Percona’s website as well, but here I’d like to add some slightly different instructions on how to use packages you can download directly, rather than the repositories for your Linux distribution provided by Percona. The reason I prefer these packages is that in a couple of cases I have noticed that the packages available for download are newer than those you’d install from the repositories. I will also cover some firewalling and proxying configuration, so as to have a secure and highly available MySQL cluster.

I will assume here that you want to set up a complete MySQL cluster from scratch; you can skip some steps as you wish if that’s not your case. I will also assume you already have Linux boxes with at least the basic OS up to date; the following instructions will work as they are with Debian-based distros (I normally use Ubuntu).

SSH setup

First things first: let’s lock down each node by configuring SSH authentication and the firewall. We need to set up public key authentication and disable the weaker password-based authentication. From your client computer, copy your public key to your new server; there are various ways to do this, but perhaps the easiest is with the utility ssh-copy-id, already available with most distros (if you are on OSX and use Homebrew, you can install it with brew install ssh-copy-id). Assuming your first node is called something like node1:

ssh-copy-id -i ~/.ssh/ node1

Now test the public key authentication by SSH’ing into the box; you shouldn’t be required to enter your password this time. Next, if all looks good, edit /etc/ssh/sshd_config and change the port number defined at the top of the file to the port you want to use; then uncomment the line with the setting PasswordAuthentication yes and change it to no, so as to force authentication with public keys, which is more secure. Now restart SSH with

service ssh restart

making sure you don’t close your current terminal session until you have successfully tested the new configuration.
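After these edits, the relevant lines in /etc/ssh/sshd_config would look something like this (the port number here is just an example):

```
Port 2222
PasswordAuthentication no
```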
Next, from your client computer, edit ~/.ssh/config and paste the following:

Host node1 # or whatever
HostName … # ip or hostname of the box
User … # user account you'll be using on the box
Port … # custom port

Replace the placeholder text with the actual IP or hostname of the server, the username you’ll be using on the box, and the SSH port you’ve chosen earlier; I recommend using a different port than the default one (22). Now try again to SSH into the box with just

ssh node1

You should be logged in if all went OK.

Firewall with iptables

For now, we’ll lock down the box with a restrictive iptables configuration; later we’ll open the ports required for the MySQL cluster to function. Edit /etc/iptables.rules and paste the following:

# Generated by iptables-save v1.4.4 on Tue Feb 19 23:11:06 2013
*filter
-A INPUT -i lo -j ACCEPT
-A INPUT -d ! -i lo -j DROP
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport <your SSH port> -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Tue Feb 19 23:11:06 2013

This is the basic configuration I usually start with; I then open ports or make changes as required. To apply these firewall rules right away, run

iptables-restore < /etc/iptables.rules

To ensure these rules are also applied each time the server starts, edit /etc/network/interfaces and add post-up iptables-restore < /etc/iptables.rules:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address ...
netmask ...
broadcast ...
network ...
gateway ...
post-up iptables-restore < /etc/iptables.rules

Of course make sure you specify your correct network settings here.


Installing dependencies

I install the same dependencies required when installing Percona XtraDB MySQL Cluster from source code, to be sure nothing is missed.

apt-get install curl build-essential flex bison automake autoconf \
bzr libtool cmake libaio-dev libncurses-dev zlib1g-dev libdbi-perl \
libnet-daemon-perl libplrpc-perl libaio1

There’s one more dependency you need, but it is only available from Percona’s repositories, so we need to add them to apt’s sources:

gpg --keyserver hkp:// --recv-keys 1C4CBDCDCD2EFD2A
gpg -a --export CD2EFD2A | sudo apt-key add -

Then edit /etc/apt/sources.list and append the following lines to enable these repositories:

deb lucid main
deb-src lucid main

Lastly, to install the remaining dependency:

apt-get update
apt-get install libmysqlclient18

Installing the Percona packages

It’s time now to install the Percona packages; you’ll need to install both packages for the Percona server and for Xtrabackup, a hot backup tool also from Percona which I cover in more detail in another post. You will need Xtrabackup if you use this tool as the strategy to use for the provisioning of nodes in the MySQL cluster – more on this later.

You can download the packages to install the Percona server from here and the one required to install Xtrabackup from here. At the moment of this writing, the latest versions available are 5.5.29-23.7.2-389 for Percona server and 2.0.5-499 for Xtrabackup. I am using Ubuntu Lucid x86-amd64 so in the following example I am downloading the packages for this version:

cd /usr/local/src


Then, install these packages and stop the MySQL/Percona server since we need to configure the server as the first node of the cluster.

dpkg -i percona*.deb
service mysql stop

MySQL configuration

Next, edit the MySQL configuration at /etc/mysql/my.cnf and paste the content of this gist which already includes the required configuration for the MySQL cluster nodes. An important note is in order here: the configuration in that gist is what I am currently using with a small MySQL cluster in which each node has 8 GB of ram, so you may want to tweak some settings depending on your case. I have included them as they have worked pretty well for me. You could just include the settings in the Galera synchronous replication section and you’d be fine as far as the configuration of the MySQL cluster is concerned. So it’s up to you if you want to try the other settings too.

Notes on some of the settings in the gist:

  • max-connections: this setting really depends on many things. I’ve set it to 500 but the correct value depends on how you will be using MySQL;
  • transaction-isolation: MySQL’s default setting is REPEATABLE-READ which isn’t optimal; I prefer READ-COMMITTED (which happens to be the default setting in PostgreSQL instead);
  • skip-name-resolve: prevents the server from performing a DNS lookup each time a client connects to it, speeding up connections a little bit;
  • innodb_support_xa: this is required by the Galera replication;
  • innodb_import_table_from_xtrabackup: it allows restores of single tables by replacing the tablespace even at runtime, which can be pretty handy when you don’t need to restore the whole database;
  • innodb_log_file_size: I can’t remember exactly how I determined the value of 50M but the important thing to keep in mind concerning this setting is that you won’t be able to use a datadir with InnoDB logs created with a different value (so you’d have to delete the existing logs and restart MySQL if you change the value);
  • innodb_file_per_table: this is a recommended setting for two reasons: it uses disk space better by storing the data in separate files for the various tables vs a single gigantic file that can become bloated overtime; it also allows for restores of single tables together with the previous setting;

As for the Galera synchronous replication section, you should basically use those settings as they are apart from:

  • server-id: this has to be a unique id for each node; you can choose any arbitrary value;
  • wsrep_cluster_name: of course this is the name you want to give to the MySQL cluster; it’s important that all nodes in the cluster have the same value;
  • wsrep_node_name: this as well should be different for each node; I usually use names such as db1,db2,…,dbN or node1,node2,…,nodeN;
  • wsrep_slave_threads: the recommended setting is 4 threads per CPU core;
  • wsrep_cluster_address: this very important setting determines the role of a node in the MySQL cluster; as we’ll see later, it should be set to gcomm:// on the first node when bootstrapping a new cluster. Once the cluster is ready and all the nodes have been configured, it is convenient to set it to gcomm://db1,db2,…,dbN on each node instead; this way a node, when restarted or rebuilt, will automatically try one node at a time in the list until it finds one that is available and ‘synced’, which can then become its ‘donor’ when it joins or rejoins the cluster;
  • wsrep_sst_method: this determines the synchronisation strategy to use when a node joins or rejoins the MySQL cluster after being offline for maintenance or else; at the moment I tend to use the rsync strategy as it seems to be somewhat more stable, but another good option is Percona’s own xtrabackup; the main difference is that with the rsync strategy both joiner and donor are seen as unavailable during the transfer of data, while with xtrabackup the donor is supposed to be available. I haven’t yet tried this though.
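To put the settings above together, the Galera section of my.cnf on the first node ends up looking roughly like this (a sketch with illustrative values, not the full gist; the provider path is the usual location of the Galera library on Debian-based systems, and binlog_format=ROW with innodb_autoinc_lock_mode=2 are required by Galera replication):

```
# Galera synchronous replication (sketch; see the gist for the full settings)
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
server-id = 1
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_cluster_name = my_cluster
wsrep_node_name = db1
wsrep_slave_threads = 16        # recommended: 4 per CPU core
wsrep_cluster_address = gcomm://   # bootstrap value for the first node
wsrep_sst_method = rsync
```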

So go ahead and bootstrap the MySQL cluster with the first node you’ve just set up, by setting wsrep_cluster_address to gcomm://. Then restart MySQL, which should now apply all the settings in the /etc/mysql/my.cnf configuration file. Before restarting, though, if you have set innodb_log_file_size to some custom value, you’ll need to delete the existing InnoDB log files, otherwise MySQL won’t start. MySQL’s datadir is by default /var/lib/mysql, so to delete the old log files you can run:

rm /var/lib/mysql/ib_logfile*
service mysql restart

If MySQL fails to restart, try starting it “manually” with

mysqld -u mysql

which will show information that may be useful for debugging the problem. Otherwise, the first node is now ready, and you can go ahead with adding one node at a time; for an optimal configuration you’ll want at least two more nodes.
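Once MySQL starts successfully, you can verify that the node has bootstrapped the cluster by checking a couple of Galera’s status variables (this assumes a working local MySQL client connection; you may need to pass credentials):

```shell
# On a freshly bootstrapped cluster, the cluster size should be 1
# and the local state should read 'Synced'.
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
```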

Adding nodes to the MySQL cluster

Adding more nodes to the cluster is an almost identical process to that required to set up the very first node, apart from a few key differences. In the MySQL configuration for each new node, make sure that

  • server-id and wsrep_node_name have different and unique values, i.e. not in use by any other nodes in the MySQL cluster;
  • wsrep_cluster_address: it should be set to the address of the first node, or in any case of a node that is already synced and available to be used as donor, so the joiner can receive data from it.

Having updated the MySQL configuration, stop MySQL for now on the joiner node(s), and update the firewall configuration on all nodes so that they can eventually exchange data with each other. I usually prefer using hostnames or aliases rather than IP addresses in iptables’ configuration, since it’s easier to see at a glance what each rule is for. So open /etc/hosts on each node and add entries for the IPs in use by the other nodes. For example, if I am on node1 in a 3-node MySQL cluster, I’d change the /etc/hosts file so it looks something like:

localhost node1.localdomain node1

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
node2
node3

The two lines at the bottom are the important ones (of course, make sure you specify the correct IP addresses). Next, we need to update the firewall rules. Open /etc/iptables.rules again and add the following rules before the -A INPUT -j REJECT rule:

-A INPUT -i eth0 -p tcp -m tcp --source node2 --dport 4567 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --source node2 --dport 4568 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --source node2 --dport 4444 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --source node2 --dport 3306 -j ACCEPT

Explanation: 4567 is the port another node will knock on to check whether this node is synced and available to become its donor; 4568 is used when an incremental state transfer (IST) is possible, as opposed to a snapshot state transfer (SST), which is basically a copy of all of the data (whether IST is possible or not depends on how much the data on the joiner differs from the data on the donor); 4444 is the port used with the rsync strategy, while 3306 is of course the default port at which MySQL listens for clients.

In the example above, I was on node1, so I added rules for node2. It is important to replicate those four rules for each of the other nodes in the MySQL cluster, so as to allow each node to exchange data with any of the others. To apply the changes right away, run

iptables-restore < /etc/iptables.rules

Once this is done, you can start MySQL on one joiner at a time, and it will start receiving data from the donor you have specified in my.cnf. Once all the nodes are up, running and synced, I recommend you set wsrep_cluster_address to gcomm://node1,node2,…,nodeN. This way you don’t have to change that setting each time you take a node offline and online again for maintenance or else, because the joiner will automatically find the first node in the list that is available to provide it with the data. If all went well, when you start a newly configured node you can see it become a joiner and receive data from the donor by watching the MySQL related processes (e.g. with watch "ps waux | grep mysql"):

root 5167 0.1 0.1 19396 1952 pts/2 S+ 12:04 0:00 /bin/bash /etc/init.d/mysql start
root 5195 0.1 0.0 4108 728 pts/2 S+ 12:04 0:00 /bin/sh /usr/bin/mysqld_safe
mysql 5837 0.5 3.3 245612 33980 pts/2 Sl+ 12:04 0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/lib/mysql/node3.err --pid-file=/var/lib/mysql/node3
mysql 5884 0.0 0.0 4108 612 pts/2 S+ 12:04 0:00 sh -c wsrep_sst_rsync --role 'joiner' --address '' --auth '' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '5837'
mysql 5886 0.2 0.1 19248 1764 pts/2 S+ 12:04 0:00 /bin/bash -ue /usr//bin/wsrep_sst_rsync --role joiner --address --auth --datadir /var/lib/mysql/ --defaults-file /etc/mysql/my.cnf --parent 5837
mysql 5909 0.0 0.0 10984 676 ? Ss 12:04 0:00 rsync --daemon --port 4444 --config /var/lib/mysql//rsync_sst.conf

In the example above I was using the rsync strategy; the output would look slightly different with the xtrabackup strategy. This is instead what you would see on the donor while the SST is happening:

root 746 0.0 0.0 4108 688 ? S 11:38 0:00 /bin/sh /usr/bin/mysqld_safe
mysql 1448 0.1 10.6 1118380 108624 ? Sl 11:38 0:03 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/lib/mysql/node2.err --pid-file=/var/lib/mysql/node2
mysql 6938 0.0 0.0 4108 616 ? S 12:22 0:00 sh -c wsrep_sst_rsync --role 'donor' --address '' --auth '(null)' --socket '/var/run/mysqld/mysqld.sock' --datadir '/var/lib/mysql/' --defaults-fil
mysql 6939 1.0 0.1 17732 1592 ? S 12:22 0:00 /bin/bash -ue /usr//bin/wsrep_sst_rsync --role donor --address --auth (null) --socket /var/run/mysqld/mysqld.sock --datadir /var/lib/mysql/ --defa
mysql 6949 33.0 0.1 21112 1636 ? R 12:22 0:00 rsync --archive --no-times --ignore-times --inplace --delete --quiet --whole-file -f + /ib_lru_dump -f + /ibdata* -f + /ib_logfile* -f + */ -f -! */* /var/lib/mysql/ rsync:/

Once you have configured all the nodes, your shiny new MySQL cluster is ready to be used as is (yay!), but in order to take full advantage of it you will need to split reads/writes either in your application or with a load balancer such as haproxy, which I cover next.

Load balancing and failover

Once you have two or (better) more nodes in the MySQL cluster, you could already use it as is and split connections or reads/writes at application level; however, it’s perhaps easiest to use something like haproxy, which will handle this for you and will also ensure that nodes that are not in sync are ignored. Setting this up is quite easy; first, unless you already have haproxy installed, install it with

apt-get install haproxy

Next, edit /etc/haproxy/haproxy.cfg and paste the following lines:

listen mysql-cluster
mode tcp
balance leastconn
option tcpka
option httpchk

server db1 node1:3306 check port 9200 inter 5000 rise 3 fall 3 maxconn 400
server db2 node2:3306 check port 9200 inter 5000 rise 3 fall 3 maxconn 400
server db3 node3:3306 check port 9200 inter 5000 rise 3 fall 3 maxconn 400

Ensure you have listed all the nodes; do not restart haproxy yet. First, we need to configure a service on each node that haproxy will use to monitor the nodes, and automatically ignore nodes that are offline or not in sync with the rest of the MySQL cluster. This is typically done with xinetd, although there are certainly other ways to achieve the same result. If you don’t have xinetd installed yet, run apt-get install xinetd; then create the new file /etc/xinetd.d/mysqlchk if it doesn’t exist yet (it appears that this configuration is now done automatically in the newest versions of Percona XtraDB Cluster) and paste the following:

# default: on
# description: mysqlchk
service mysqlchk
{
	# this is a config for xinetd, place it in /etc/xinetd.d/
	disable = no
	flags = REUSE
	socket_type = stream
	port = 9200
	wait = no
	user = nobody
	server = /usr/bin/clustercheck
	log_on_failure += USERID
	only_from =
	# recommended to put the IPs that need
	# to connect exclusively (security purposes)
	per_source = UNLIMITED
}

Next, edit /usr/bin/clustercheck, which may or may not exist depending on the version of the MySQL cluster you have set up. If the file exists, just ensure that the variables MYSQL_USERNAME and MYSQL_PASSWORD are set to the correct MySQL credentials. If the file doesn’t already exist, create it and paste the following:


#!/bin/bash
# Set these to valid credentials for your MySQL cluster
MYSQL_USERNAME="clustercheckuser"
MYSQL_PASSWORD="clustercheckpassword!"
ERR_FILE="/dev/null"
AVAILABLE_WHEN_DONOR=0

# Perform the query to check the wsrep_local_state
WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
then
    # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
    /bin/echo -en "HTTP/1.1 200 OK\r\n"
    /bin/echo -en "Content-Type: text/plain\r\n"
    /bin/echo -en "\r\n"
    /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
    /bin/echo -en "\r\n"
    exit 0
else
    # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
    /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    /bin/echo -en "Content-Type: text/plain\r\n"
    /bin/echo -en "\r\n"
    /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
    /bin/echo -en "\r\n"
    exit 1
fi

If you run /usr/bin/clustercheck manually on an active, synced node, you’ll see the following output as expected:

HTTP/1.1 200 OK
Content-Type: text/plain

Percona XtraDB Cluster Node is synced.

Now restart xinetd with /etc/init.d/xinetd restart and then test that the script can also be run via the port specified in the xinetd configuration (9200):

root@node1:~# telnet localhost 9200
Connected to localhost.
Escape character is '^]'.
HTTP/1.1 200 OK
Content-Type: text/plain

Percona XtraDB Cluster Node is synced.

Connection closed by foreign host.

Now you can reload haproxy as well with

service haproxy reload

and ensure your applications connect to the load balancer instead of connecting to any of the MySQL cluster’s nodes directly. One last thing I’d like to suggest, which I find very useful, is to use haproxy’s web interface to check the status of the nodes, especially when you take one node offline for maintenance and want to check that it rejoins the cluster correctly afterwards. Edit /etc/haproxy/haproxy.cfg again and add the following lines (ensure you use a good combination of username and password, and optionally a custom port):

listen stats
bind :8282
mode http
option httpchk
balance roundrobin
stats uri /
stats refresh 10s
stats realm Haproxy\ Statistics
stats auth username:password

Once you reload haproxy again, you will be able to see the status of the MySQL cluster’s nodes from the web UI at the specified port (8282 or whichever you have chosen):



Testing the MySQL cluster is quite easy: just take a node offline, or kill -9 MySQL and delete all data on a node, and see what happens when you restart MySQL :p


I think that although both Galera and Percona XtraDB Cluster are relatively new, this combination is definitely the best setup I have worked with so far for MySQL databases; it’s nice to have the peace of mind that nodes can be taken offline at any time for maintenance, and have them resynced automatically without downtime, while at the same time scaling reads and, to some extent, writes too. I will certainly play again with alternatives such as MongoDB and similar, but I have been using a MySQL cluster with Percona/Galera in production for a while now and it’s been pretty stable, requiring very little maintenance; that’s why, for the time being, I will stick to MySQL rather than rethinking the applications I am working on so as to adapt them to other solutions. I will, however, very likely look into similar clustering solutions for PostgreSQL, since I am getting more and more interested in this database these days.

I would be interested to hear others’ experiences with a MySQL cluster with Percona/Galera or any alternatives that have worked well for them.