MySQL Backups

Dolphins love data managers!  It is known.

MySQL is a fantastic open source database management system that manages data for a huge number of incredibly popular websites.  Twitter, Google, Facebook and YouTube all use MySQL to store and recall data, and thanks to the amazing movement of open source so can we!

Personally, I use MySQL to store a number of things: several time-clock programs, this WordPress blog, and my Cthulhu Character Creator, to name a few.  However, the problem I came up against was that all this data was stored in one location.  If the storage hardware or the operating system of the device holding it failed in some manner, I could lose it forever.

Unacceptable!

So after giving it some thought, I set up a good number of redundant backup systems, but the one that I found to be the easiest and the most comforting was this:

  1. Create a script to dump the entire MySQL system
  2. Set up an hourly execution of that dumping script
  3. Set up an hourly offload of that dump to a different device
  4. Archive the daily dumps with timestamps, keeping the previous 30 days
  5. Snicker about using the word dump so many times

Dumping the File

Creating the script to dump the entire MySQL system was pretty simple once I knew the command for it.

mysqldump -u root -p<password> --all-databases > <backupfilename.ext>

This command creates a file readable by MySQL for importing, and has everything you could want from the databases you have stored.
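Restoring from that dump is the mirror image; a sketch, assuming the dump was written to a file named backup.sql (any other filename works the same way):

```shell
# Feed the --all-databases dump back into the server;
# you will be prompted for the root password
mysql -u root -p < backup.sql
```

Worth doing a test restore onto a scratch server at least once, so you know the backup actually works before you need it.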

Automation & Data Archiving

Since my MySQL database was residing on a Linux server, the easiest way to set up an automated, recurring execution is through the crontab. Setting it up there lets the OS worry about it so that you don’t have to.
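An hourly crontab entry for the dump script might look like this (the path /home/user/mysql_backup.sh is a hypothetical placeholder; substitute wherever your script actually lives):

```shell
# Edit the crontab with `crontab -e` and add a line like:
# minute hour day-of-month month day-of-week  command
0 * * * * /home/user/mysql_backup.sh
```

The leading `0 * * * *` means "at minute zero of every hour, every day."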

Setting up the hourly offload was also fairly simple.  Offloading the dumped database file is just a matter of s/ftp-ing it somewhere for duplication.  For this I created an expect script and added it to the end of the dump script.  As a part of the script, the sftp command includes renaming the sql file to include the day of the month tacked on to the end for archival purposes.  Because it’s only the day of the month, it will overwrite the file monthly so that you will have a month’s worth of snapshot data.   The script itself looks something like this:

#!/bin/bash
# Build a filename like backup.sql07 using the day of the month
FILENAME1="backup.sql"
THISDATE=`date +%d`
FILENAME=$FILENAME1$THISDATE
echo $FILENAME
# Push the dump over sftp, renaming it on the remote side
expect <<EOF > logfile.txt
set timeout 50
spawn sftp <USER>@<SFTP HOST>
expect "assword: "
send "<PASSWORD>\r"
expect "sftp>"
send "put backup.sql $FILENAME\r"
expect "sftp>"
send "exit\r"
EOF

Considerations

Obviously this method has its downsides.  The two scripts require that you store the passwords for both the MySQL root user and the receiving s/ftp account in plain text files.  If you don't have those scripts locked down, or if the server itself is widely accessible, then this approach may not work for you.  However, I have the servers pretty well secured, so it hasn't been an issue yet.  But I could see how this might not be a good fit for some people, especially if they are in an enterprise environment with strict security requirements.
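One partial mitigation I'll mention (an assumption on my part, not part of the setup above) is MySQL's option file, which at least keeps the database password off the command line and out of the dump script itself:

```ini
# ~/.my.cnf -- make it readable only by the backup user (chmod 600)
[mysqldump]
user=root
password=<password>
```

With that in place, mysqldump can be run without the -p<password> argument.  It doesn't solve the s/ftp password problem, but it narrows where the secrets live.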
