
MySQL hot backups with Percona Xtrabackup


A better alternative to mysqldump?

In the previous post, we saw how to perform backups and restores of MySQL databases using the mysqldump tool bundled with the MySQL server distribution, together with a few tricks that help speed up both backups and restores, among other things. In this post, I'll show how to perform MySQL hot backups and restores with Percona XtraBackup. You may also want to check out the next post, in which I describe how to set up a Percona Galera MySQL cluster, as the two topics are closely related.

mysqldump is very likely the most popular tool for backing up MySQL databases, and in most cases it just works, and pretty well too. It’s easy to use, free, comes with MySQL and produces simple text files containing standard DROP/CREATE/INSERT SQL statements that, once replayed during a restore, will simply recreate the databases, the tables and all the data on the destination MySQL server. By producing simple text files based on SQL statements, mysqldump may also be the perfect tool when migrating data between different versions of MySQL, or when you need to change something in the text file dump before restoring it.

However, mysqldump may not be up to the job if you need to back up MySQL databases that must stay online without any downtime, serving writes as well as reads; for example, when the databases are replicated and you need to copy them to another server instance to set up a new slave or resync an existing one. In these circumstances, if you resort to mysqldump anyway, you have to choose between locking all the tables during the backup (thus blocking writes in the meantime, which in some cases means breaking applications) and producing dumps that may contain inconsistent data. But you hardly want your users to experience downtime, and in any case you don't want your databases to change while you are exporting the data, do you?

Another issue is performance. While mysqldump works great with small databases, it is not efficient with large ones, and restoring a SQL dump of a large database can take a very, very long time. With the tricks we saw in the previous post it is possible to speed up both dumps and restores significantly, but this still does not work well for very large databases. So, unless you are very patient and can afford to sit watching your MySQL server for a week while it restores your large db, sooner or later you will have to look for other, more efficient options.

LVM snapshots

Among the other options, there are both commercial and free tools. One popular (free) choice is LVM snapshots. They work well when you can use them (in many cases you can't with virtual private servers, for example, unless you are given the possibility to customise the disk layout), but they have their weaknesses too; we'll look at this option in more detail in a future post. Suffice it to say here that LVM snapshots can significantly impact disk I/O, and therefore the performance of the whole server. Plus, an LVM snapshot only captures what MySQL has already flushed to disk at that moment, not the data it may still hold in memory, so restoring from an LVM snapshot may require a crash recovery.

Hot backups!

A better alternative yet is MySQL Enterprise Backup (formerly InnoDB Hot Backup), which can back up a live MySQL server without downtime while ensuring that the backups are consistent even if the databases are being written to during the process. Plus, restores are incredibly fast. Sounds great, yeah? Unfortunately, MySQL Enterprise Backup is a pricey commercial product that may be beyond reach in many cases (5K USD per server), especially for companies with tight budgets that chose MySQL for its cost (none) in the first place, as well as for its good performance.

But fear not, if you are on the cheap like me: luckily for us, there is also a free, open source solution by the good people at Percona (a MySQL consultancy company), which is very, very similar to Enterprise Backup/InnoDB Hot Backup. It's called XtraBackup, and while it's still pretty young, having first been developed less than a couple of years ago, it's a solid solution that works very well despite some limitations. It also compares well with the commercial alternative: it only takes a bit longer to back up the same data, and requires a little more disk space. I haven't had a chance to test this myself yet, so I'll trust what I've read on pythian.com: Taste test: Innobackup vs. Xtrabackup (link no longer available).

Here are the main features available with the current release, from the home page:

  • Create hot InnoDB backups without pausing your database
  • Make incremental backups of MySQL
  • Stream compressed MySQL backups to another server
  • Move tables between MySQL servers online
  • Create new MySQL replication slaves easily
  • Backup MySQL without adding load to the server

While the product itself is free, Percona also provides commercial support if needed. So, let's now see how to use XtraBackup for backing up and restoring MySQL databases.

MySQL hot backups

For starters, you'll need to install XtraBackup. In this example, we'll install a Debian package on a Ubuntu server. Head to XtraBackup's download page and grab the latest version available (1.4 at the time of this writing) with wget or similar, making sure you choose the right package for your OS.

wget https://www.percona.com/downloads/XtraBackup/XtraBackup-1.4/deb/lucid/x86_64/xtrabackup_1.4-193.lucid.25_amd64.deb
dpkg -i xtrabackup_1.4-193.lucid.25_amd64.deb
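
Before moving on, it doesn't hurt to check that the package installed correctly and that both binaries are in your path (a quick sanity check, nothing more):

# The package should provide both the xtrabackup binary and the innobackupex-1.5.1 script
dpkg -l | grep xtrabackup
which xtrabackup innobackupex-1.5.1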

Backing up

The package installs a binary named xtrabackup that takes care of the actual backups and restores, and a Perl script named innobackupex-1.5.1 that makes XtraBackup a bit easier to use and also backs up MyISAM tables. Backing up is pretty simple:

innobackupex-1.5.1 --user=... --password=... --slave-info \
--defaults-file=/etc/mysql/my.cnf --databases="db1 db2" --stream=tar ./ \
| gzip -c -1 > /backups/backup.`date +%Y-%m-%d-%H-%M-%S`.tar.gz

You should see an output similar to this:

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex-1.5.1
prints "completed OK!".
innobackupex-1.5.1: Using mysql Ver 14.14 Distrib 5.1.41, for debian-linux-gnu (i486)
using readline 6.1
innobackupex-1.5.1: Using mysql server version Copyright 2000-2008 MySQL, 2008 Sun
Microsystems, Inc.
innobackupex-1.5.1: Created backup directory /backups
101214 21:47:59 innobackupex-1.5.1: Starting mysql with options:
--defaults-file="/etc/mysql/my.cnf" --password=... --user=... --unbuffered --
101214 21:47:59 innobackupex-1.5.1: Connected to database with mysql child process
(pid=17917)
101214 21:48:03 innobackupex-1.5.1: Connection to database server closed
101214 21:48:03 innobackupex-1.5.1: Starting ibbackup with command: xtrabackup
--defaults-file="/etc/mysql/my.cnf" --backup --suspend-at-end --log-stream
--target-dir=./
innobackupex-1.5.1: Waiting for ibbackup (pid=17944) to suspend
innobackupex-1.5.1: Suspend file '/var/lib/mysql/xtrabackup_suspended'
xtrabackup: suspend-at-end is enabled.
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: Target instance is assumed as followings.
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 5242880
xtrabackup: Stream mode.
>> log scanned up to (92295311910)

xtrabackup: Transaction log of lsn (92294278211) to (92298258089) was copied.
101214 22:47:23 innobackupex-1.5.1: All tables unlocked
101214 22:47:23 innobackupex-1.5.1: Connection to database server closed
innobackupex-1.5.1: Backup created in directory '/backups'
innobackupex-1.5.1: MySQL binlog position: filename 'mysql-bin.000730',
position 16838362
innobackupex-1.5.1: MySQL slave binlog position: master host '192.168.162.110',
filename 'mysql-bin.000651', position 99728693
innobackupex-1.5.1: You must use -i (--ignore-zeros) option for extraction
of the tar stream.
101214 22:47:23 innobackupex-1.5.1: completed OK!

In this first example, we have instructed XtraBackup to stream the backup data in tar format; the output is then compressed with gzip into a file with the current timestamp in its name. You may have noticed that I am using gzip's lowest compression level: higher levels only consume more CPU and other resources for little improvement, especially if you pipe XtraBackup's output to the compressor while the backup is still in progress.

Actually, I have often had problems compressing the backup data while the backup is still running; in many cases I found that backing up first, and then compressing the backup once it's done, works better. In my experience, whether compressing while backing up succeeds also depends on the value of the wait_timeout system variable in MySQL. On my servers, whenever I set this variable to a low value, I've always had problems getting backup and compression to work in one step. There may be better settings or workarounds, but I've found that setting wait_timeout to a value higher than the average time taken by the backup process usually makes for a successful compressed backup without requiring a separate compression step. Keep in mind, though, that a high value for wait_timeout can cause a whole lot of other issues depending on the usage pattern of your MySQL instances.
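
If compressing on the fly keeps failing for you, the two-step approach is easy enough to script. Here is a minimal sketch, assuming a /backups destination and placeholder credentials; when given a target directory, innobackupex creates a timestamped subdirectory under it, whose name you then pass to tar:

# Step 1: back up to a local directory, without streaming or compression;
# innobackupex-1.5.1 creates a timestamped subdirectory under /backups
innobackupex-1.5.1 --user=... --password=... \
--defaults-file=/etc/mysql/my.cnf /backups/

# Step 2: compress the finished backup in a separate step, once the backup
# has completed (replace <timestamped-dir> with the directory created above)
tar czf /backups/backup.`date +%Y-%m-%d-%H-%M-%S`.tar.gz -C /backups <timestamped-dir>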

For this reason, you may want to keep wait_timeout at a not-too-high value, and raise it only for XtraBackup's session. You can easily do this by patching the innobackupex-1.5.1 script as suggested by reader "Gu Lei" in a bug report filed for exactly this issue. So edit the file /usr/bin/innobackupex-1.5.1 and change it as follows:

if (compare_versions($mysql_server_version, '4.0.22') == 0
    || compare_versions($mysql_server_version, '4.1.7') == 0) {
    # MySQL server version is 4.0.22 or 4.1.7
    mysql_send "COMMIT;";
    mysql_send "set interactive_timeout=28800;"; # ADD THIS
    mysql_send "set wait_timeout=28800;";        # ADD THIS
    mysql_send "FLUSH TABLES WITH READ LOCK;";
} else {
    # MySQL server version is other than 4.0.22 or 4.1.7
    mysql_send "set interactive_timeout=28800;"; # ADD THIS
    mysql_send "set wait_timeout=28800;";        # ADD THIS
    mysql_send "FLUSH TABLES WITH READ LOCK;";
    mysql_send "COMMIT;";
}

Of course, the timeout value must be high enough to cover the time XtraBackup could possibly take to back up your data.
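
If you'd rather not patch the script, an alternative is to raise wait_timeout globally just for the duration of the backup and then put it back. This is a sketch under the assumption that innobackupex opens a fresh connection, which picks up the new global value; the numbers are examples:

# Raise the timeout before the backup; this affects new connections only
mysql --user=... --password=... -e "SET GLOBAL wait_timeout = 28800;"

# ...run the innobackupex-1.5.1 backup command here...

# Afterwards, restore your usual value (600 here is just an example)
mysql --user=... --password=... -e "SET GLOBAL wait_timeout = 600;"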

The arguments --slave-info and --databases are optional. The former saves the master status info together with the data (assuming the server you are backing up is a master, or an existing slave, in a replicated environment), which is useful if you are setting up a new slave or resyncing an existing one. The latter lets you specify which databases to back up, if you don't want all of them; it accepts a space-separated list of database names. Obviously, if your MySQL configuration file is in another location on your server, remember to change the path in the relevant argument accordingly.

Streaming backups to another MySQL server

The command above works pretty well if you want to make backups manually, or if you want to schedule, for example, daily full backups with cron. However, if you are backing up with the purpose of restoring to another server, you may want to skip creating a local file that then needs to be transferred to the destination server for the restore. Using netcat, you can instead stream XtraBackup's output directly to the destination server and create the backup archive there.

Netcat, which we already met in the previous post on backing up with mysqldump, is a simple yet very useful utility that streams data over a TCP connection; you can then, for example, save that data to a file on the destination host. First, let's get netcat running on the destination host, listening on a port of our choice (say 6789), and redirect its output to a file on disk:

nc -l 6789 > /backups/master-data.tar

Then, on the server that hosts the data we want to migrate:

innobackupex-1.5.1 --user=... --password=... --slave-info \
--defaults-file=/etc/mysql/my.cnf --databases="db1 db2" --stream=tar ./ \
| nc xxx.xxx.xxx.xxx 6789

where xxx.xxx.xxx.xxx is the IP address of the destination host. XtraBackup will take its time depending on the amount of data to transfer; once the process completes, netcat will automatically close on the destination host, with all the data copied into the target file. If all went well, XtraBackup will end the process on the source host saying:

innobackupex-1.5.1: You must use -i (--ignore-zeros) option
for extraction of the tar stream.
101214 22:47:23 innobackupex-1.5.1: completed OK!

If not, well, you are going to have trouble, of course.
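
If bandwidth between the two hosts is a concern, you can also compress the stream in transit, with the same caveats about compressing during the backup discussed earlier. A sketch, using the same placeholder IP and port:

# On the destination host: save the compressed stream as-is
nc -l 6789 > /backups/master-data.tar.gz

# On the source host: compress the tar stream before it hits the wire
innobackupex-1.5.1 --user=... --password=... --slave-info \
--defaults-file=/etc/mysql/my.cnf --databases="db1 db2" --stream=tar ./ \
| gzip -c -1 | nc xxx.xxx.xxx.xxx 6789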

Restoring

If backing up with XtraBackup was easy, restoring isn't any more difficult; it just requires a few steps rather than a single one. First, on the destination host (we are done on the source host), back up the existing MySQL data folder if needed, then delete or rename it, so that we can create a new, empty data folder in the same location to contain the restored data.

service mysql stop # you need to shut down MySQL before restoring
mv /var/lib/mysql /var/lib/mysql.bak && mkdir /var/lib/mysql

Next, extract the tar archive created while streaming the data during the backup into the new MySQL data folder. As XtraBackup reminds us at the end of the backup process, we need to use the --ignore-zeros option (or -i) when untarring. Remember to also use the -z option if you are restoring from a gzipped tarball; here, for example's sake, I am assuming you used netcat instead.

cd /var/lib/mysql && tar xfvi /backups/master-data.tar
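
If you are restoring from a gzipped tarball instead (as in the very first backup example), just add the z option; the filename here is a placeholder for whatever timestamped name your backup got:

cd /var/lib/mysql && tar xfvzi /backups/backup.<timestamp>.tar.gz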

Unless something is wrong with the archive for some reason, you should now have a new MySQL data folder identical to the one on the source host at the moment the backup was taken. One thing, though: this data folder is not yet ready for use with MySQL; it first needs to be "prepared", to apply the changes that XtraBackup captured from the transaction logs during the backup. Once this is done, fix the permissions and start MySQL again:

cd /var/lib/mysql && innobackupex-1.5.1 --apply-log --ibbackup=xtrabackup_51 \
./ && chown -R mysql:mysql /var/lib/mysql

service mysql start

If all went well so far, MySQL should start correctly with the restored database(s).
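
At this point a quick sanity check doesn't hurt; for example (db1 being one of the databases you backed up, and the credentials being yours):

mysql --user=... --password=... -e "SHOW DATABASES;"
mysql --user=... --password=... -e "SHOW TABLES FROM db1;"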

Setting up the new/updated instance as replication slave

Assuming you now want to use this new server as a slave of the instance you backed the data from, and that you either know how to set up replication (we'll look at how to do this in another post) or this instance already was a slave that you just need to resync, you will have to update the information MySQL uses to connect to the master and replicate data. If you used the --slave-info option with XtraBackup during the backup, as in my example, the new data folder should also contain some additional files with exactly this information.

Nothing complicated here, but you need to be careful about which file to look at. There are two possibilities:

Scenario 1 – You are cloning an existing slave

That is, you are adding a new slave. Let's call the host you previously backed the data from Slave-0. Slave-0 itself was already configured as a slave of another MySQL server instance, its master, which we'll call Master. Right now you basically want to "clone" Slave-0; that is, you want to add a new slave to Master. Let's call this new slave Slave-1. So, to recap: both Slave-0 AND the new Slave-1 will end up replicating from the same Master, while there won't be any link between Slave-0 and Slave-1. Hope this is clear!

If this is what you are trying to do here, the file you need to look at is xtrabackup_slave_info. If you look at the content of this file,

cat /var/lib/mysql/xtrabackup_slave_info
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000834', MASTER_LOG_POS=50743116

you’ll see that it already has a “CHANGE MASTER..” command ready for you to use. So open a MySQL console, and just execute that command:

mysql> STOP SLAVE;  -- optional: needed only if this instance was already a
                    -- slave and MySQL runs without the skip-slave-start option
mysql> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000834', MASTER_LOG_POS=50743116;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS \G

If everything was OK, you’ll see something like this:

...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Seconds_Behind_Master: 1643
...

Seconds_Behind_Master is an estimate (not too accurate, I must say) of the time in seconds it will take for this new slave to catch up with the master. How long it takes depends on how long before this point the backup was taken from the slave you cloned, and on how much data has changed on the master in the meantime.

Update Tue, May 17 2011: As reader Mora points out in the comments, this definition of Seconds_Behind_Master is not correct:

Seconds_Behind_Master is how many seconds the slave is behind the master. It is not an estimate of how long it's going to take to catch up: if the slave has been stopped for 1 hour, it will be 3600 seconds behind, but it can take anywhere from a few seconds to a few days (or more) to catch up.

Thanks to Mora for the clarification.
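
Either way, while the slave catches up you can keep an eye on the lag from the shell with something like this (a quick sketch; credentials elided as usual):

# Print the replication lag every 10 seconds
watch -n 10 "mysql --user=... --password=... -e 'SHOW SLAVE STATUS \G' | grep Seconds_Behind_Master"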

Note: you may have noticed that the file containing the information about the master only contains the name and position of the master's binary log file in use at the moment the backup was taken; there is no username or password, since credentials are not saved there for obvious reasons. So unless you have added values for MASTER_HOST, MASTER_USER and MASTER_PASSWORD to your my.cnf file, you may need to add these settings to the "CHANGE MASTER.." command above.

Scenario 2 – You are setting up a new slave, or resyncing an existing one, whose master is the MySQL instance you backed the data from

In this second scenario, instead, say that

  • you want to use the MySQL server instance you previously backed your data from as master in a typical replication; let's call this instance Master;
  • you want to use the new instance you are currently setting up from that backup as Slave; or, the master-slave replication between these two already existed and you just need to resync this slave.

If this is what you are trying to do, the file you need to look at is xtrabackup_binlog_info instead. If you look at the content of this file,

cat /var/lib/mysql/xtrabackup_binlog_info
mysql-bin.000228 56875717

you'll see something slightly different from what the other file contains. The first value is the name of the master's binary log file that you need to attach this slave to; the second is the position within that binary log from which MySQL has to start catching up with changes from the master. Optionally, you may also see a third value: a list of names of databases excluded from the replication.

Now you can use this information to update the link with the master. Start a MySQL console and enter the following commands to do this, then start the slave:

mysql> CHANGE MASTER TO MASTER_HOST='...', MASTER_USER='...',
MASTER_PASSWORD='...', MASTER_LOG_FILE='mysql-bin.000228',
MASTER_LOG_POS=56875717;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS \G

In this case too, you'll have to enter the username and password you are using for the replication process, and be careful to enter the log file name and position correctly, according to your xtrabackup_binlog_info file.

Finally, if all worked as expected, you should see:

...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Seconds_Behind_Master: 2184
...

Cool, you’re done.

I am pretty sure you'll see right away, as soon as you try XtraBackup, the huge benefits of this alternative over mysqldump. Make sure, though, that you test backups and restores thoroughly before using this system in production!

Know of any other tips related to XtraBackup, or of a similar or even better alternative (still free)? Please share them in the comments!
