If you need to back up a MySQL server, mysqldump is usually the way to go, as long as you can live with limitations like:

  • having a single backup file
  • having DDL and DML in the same file (unless you choose not to)
  • slow restore times, since every index has to be rebuilt
  • etc…

If you want to get around these kinds of issues, I recommend taking a look at Percona Xtrabackup (we will talk about it in a future post), an excellent option that solves many of the problems mentioned above.

But if your server is small enough, mysqldump is still the best choice: just run it when the server's load is low and you will get the job done.

One situation I had to work around was recovering just one database or table from a backup… and if you use mysqldump as is, you then need to restore the whole backup just to grab the one part you need.

So I came up with a bash script that, using mysqldump plus a few other pieces, helped me accomplish the task. The core idea is to iterate through each database and export every single table; this way, if you need to restore table X you only have to go into the folder of database Y and restore it.
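For example, with placeholder names (a database `shop` and a table `orders`, backed up on a Monday), the layout and the single-table restore look like this:

```shell
# Mock of the folder layout the script produces (placeholder names):
#   <weekday>/<database>/<table>.sql
mkdir -p monday/shop
touch monday/shop/orders.sql

# Restoring just that one table is then a single command
# (commented out here since it needs a running MySQL server):
#   mysql -u root -p shop < monday/shop/orders.sql
ls monday/shop
```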

In addition to handling the backup, the script will also copy the result to an S3 bucket using s3cmd.

The script keeps a daily backup for a week, so it's: SU MO TU WE TH FR SA, then SU MO and so on again (each run overwrites the dump from the same weekday of the previous week).

If you wish to change this behavior, you only need to change the logic of the $BACKUP_NAME variable.
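For instance (an illustration, not part of the original script), switching from a weekly to a monthly rotation only takes a different date format:

```shell
# Original behavior: lowercased weekday name, rotates every 7 days
BACKUP_NAME=$(LC_ALL=en_US.utf8 date +%A | tr '[:upper:]' '[:lower:]')
echo "weekly slot: $BACKUP_NAME"

# Alternative: day of the month (01..31), keeps roughly a month of history
BACKUP_NAME=$(date +%d)
echo "monthly slot: $BACKUP_NAME"
```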

Let's see the script piece by piece:

Some defaults and config variables


CMD_S3='/root/s3cmd/s3cmd --access_key=XXXXXXXXXXXX --secret_key=YYYYYYYYYYYYY put --reduced-redundancy' # s3cmd with some options

CMD_MYSQL="/usr/bin/mysql" # mysql client binary, used below (adjust the path if needed)
CMD_MYSQLDUMP="/usr/bin/mysqldump" # mysqldump binary
STORAGE_FODER="s3://" # the destination bucket
MYSQL_USER="" # mysql user with enough grants to read whatever you want to back up
MYSQL_PASS="" # mysql password
MYSQL_HOST="localhost" # mysql host

BACKUP_NAME=`LC_ALL=en_US.utf8 /bin/date +%A | tr "[:upper:]" "[:lower:]"` # lowercased weekday name, e.g. "monday"

A logger function, to keep things DRY

function _log {
	echo -n `date`
	echo " - $1"
}

The _dump function will create a directory for the database given in $1 and dump the table named in $2

function _dump  {
	_log "dumping $1.$2"
	mkdir -p "$BACKUP_NAME/$1" && $CMD_MYSQLDUMP -h $MYSQL_HOST -u $MYSQL_USER --password=$MYSQL_PASS $1 $2 > "$BACKUP_NAME/$1/$2.sql" # one folder per database inside the weekday folder
}

The main loop

_log ":: starting backup process"

_log "removing old dumps"
rm -rf "$BACKUP_NAME" # drop last week's dump for the same weekday

Here we retrieve the list of all databases on the current server. If you prefer, you can hardcode the list by setting one database name per line in $MYSQL_DBS
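For example (placeholder database names), a hardcoded list would look like this:

```shell
# One database name per line, instead of the "show databases" query:
MYSQL_DBS="shop
blog
analytics"

# The main loop below iterates over it exactly the same way:
for database in $MYSQL_DBS; do
	echo "will back up: $database"
done
```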

_log "starting backup"
MYSQL_DBS=`$CMD_MYSQL -h $MYSQL_HOST -u $MYSQL_USER --password=$MYSQL_PASS -B -N -e "show databases"`

Once we have the databases, we iterate over every table in each of them. We do so by getting all tables from the information_schema.TABLES table

for database in $MYSQL_DBS; do

	_log "exporting \"$database\" database"

	TABLES=`$CMD_MYSQL -h $MYSQL_HOST -u $MYSQL_USER --password=$MYSQL_PASS -B -N -e "select TABLE_NAME from information_schema.TABLES where TABLE_SCHEMA='$database'"`

	for table in $TABLES; do
		_dump $database $table
	done
done
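One optional tweak (not in the original script): skip MySQL's internal schemas so you don't waste time dumping information_schema or performance_schema. A case statement at the top of the outer loop does the trick:

```shell
# Sketch of the filter; the database list here is hardcoded for illustration
for database in information_schema performance_schema mysql shop blog; do
	case $database in
		information_schema|performance_schema|mysql) continue ;;
	esac
	echo "backing up $database"
done
```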

Packaging and uploading the backup

_log "packaging backup"
tar czf "$BACKUP_NAME.tar.gz" "$BACKUP_NAME"

_log "uploading to S3"
$CMD_S3 "$BACKUP_NAME.tar.gz" "$STORAGE_FODER"

_log ":: backup process finished"

The script is pretty naive and assumes everything works fine, so there are several things to improve, for example:

  • flushing tables to disk before starting the backup
  • the ability to stop MySQL replication if a replica is detected
  • error detection
  • etc.
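As a starting point for the error-detection item, bash's strict mode aborts the script on the first failing command instead of silently uploading a broken backup (a sketch, not part of the original script):

```shell
#!/bin/bash
set -e          # abort as soon as any command fails
set -u          # unset variables (e.g. a typoed $MYSQL_USERR) become errors
set -o pipefail # a failing command inside a pipeline fails the whole pipeline

echo "strict mode enabled"
```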

You can download the whole script from this gist on GitHub

I hope this is as helpful for you as it is for me. Just fill in the blanks and you will have it running.