MySQL ships with some powerful replication technology. It can be configured to replicate everything, a subset of schemas, or even individual tables. Those slave databases can themselves act as masters, feeding the master's stream of transactions on to still more databases.
Despite the technology's obvious power, it is not bulletproof. There are times when replication breaks, throwing errors into the slave's error log and stopping the flow of transactions in its tracks.
Picking up from where we left off in our last article, we'll discuss what to do to make your replication setup more resilient. It may not be bulletproof, but by being proactive you'll keep your data in sync better, and be alerted when it's not.
As of MySQL 5.1, many of the issues we discuss here have been resolved by row-based replication. In mixed mode, MySQL switches between statement-based and row-based replication as required by the SQL being written to the binary log. Mixed mode is not enabled by default; it was in early 5.1 releases, but the default has reverted to statement-based because some common application code breaks when mixed is set. You can choose the mode (mixed, statement, or row) with the --binlog-format option.
As you might guess, there are pros and cons. Giuseppe Maxia notes that updates to very large tables may take a performance hit under row-based replication.
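As a quick sketch, here's how you might inspect and change the logging format on a 5.1 server (this assumes a user with the SUPER privilege; setting it in my.cnf under [mysqld] survives restarts, while SET GLOBAL does not):

```sql
-- In my.cnf, under [mysqld]:  binlog-format=MIXED

-- Or at runtime, with the SUPER privilege:
SHOW VARIABLES LIKE 'binlog_format';   -- see what's currently in effect
SET GLOBAL binlog_format = 'MIXED';    -- or 'STATEMENT' / 'ROW'
```

Note that SET GLOBAL only affects new connections; existing sessions keep the format they started with.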
Testing With A Sandbox Environment
If you haven't set up your MySQL environment yet, we commend you for staying ahead of the curve and reading about these problems before they find you. Either way, it can be quite a boon to have a sandbox environment on which to work, play, test, break, and fix replication. Luckily, there's a tool for exactly that.
Giuseppe Maxia has built a tool that creates a MySQL sandbox for just this purpose. It creates four different instances of MySQL on the same physical server by running them on different ports, each with its own config file. It also comes with handy scripts such as make_replication_sandbox, plus scripts that start and stop all the instances and help you reach the right one when you want to, even setting the MySQL prompt so you know where you are.
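The details vary by version, but a session with the sandbox might look something like the transcript below (the tarball name, install path, and helper script names are assumptions based on the sandbox's conventions; check its README for your release):

```shell
# Build a replicated sandbox (a master plus slaves) from a server tarball
./make_replication_sandbox /path/to/mysql-5.1.30.tar.gz

cd ~/sandboxes/rsandbox_5_1_30   # assumed default install location
./start_all                      # start the master and all the slaves
./m -e "CREATE DATABASE test1"   # ./m connects you to the master
./s1 -e "SHOW DATABASES"         # ./s1 connects you to the first slave
```

If test1 shows up on the slave, replication is flowing; now you can break it on purpose and practice fixing it.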
Verifying MySQL Replication Setup
Now that we've gotten you used to the idea that the data on your slaves is probably somewhat out of sync with your master, what can be done about it? The first step is to find out where your problems lie. For that we'll apply checksums to our tables.
You might wonder: if my master database is up and running, how can I checksum its tables? The folks who developed Maatkit thought of that. Their tool computes the checksums on the master, stores the results in a table, and that data propagates to the slaves through replication. A neat solution, I must say.
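A hypothetical session might look like this (the host, credentials, and result-table name are placeholders, and the option spellings are assumptions; consult your Maatkit version's documentation before running anything):

```shell
# On the master: checksum the tables, storing results in test.checksum,
# a table that replicates down to the slaves like any other write
mk-table-checksum --replicate=test.checksum h=master-host,u=dba,p=secret

# Later: walk the slaves and compare their copies of test.checksum
# against the master's, reporting any tables that differ
mk-table-checksum --replicate=test.checksum --replicate-check=2 \
    h=master-host,u=dba,p=secret
```

Because the checksum rows travel through the same replication stream as your data, each slave ends up checksumming exactly the data it received, with no need to lock or quiesce the master.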
The MySQL Toolkit provides another checksum-based solution, which you can also take a look at:
Monitoring MySQL Replication
As part of your enterprise-wide monitoring, you'll want to monitor your MySQL slaves. Nagios is a good tool to provide this support. If you'd like to go further, wrap some of the checksum tools in a script, and call that from Nagios as a custom plugin.
Get a copy of Nagios here:
Learn about writing Nagios plugins here:
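As a starting point, here is a minimal sketch of a custom plugin that checks slave lag (the thresholds, option handling, and messages are all illustrative assumptions, not from any official plugin; it assumes a local mysql client whose user has the REPLICATION CLIENT privilege):

```shell
#!/bin/sh
# Hypothetical Nagios plugin sketch: alert when a slave falls behind its master.
WARN=${1:-60}     # warning threshold, seconds behind master (assumed default)
CRIT=${2:-300}    # critical threshold, seconds behind master (assumed default)

check_lag() {
    # SHOW SLAVE STATUS\G prints a "Seconds_Behind_Master: N" line,
    # or NULL when the slave threads are stopped
    lag=$(mysql -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print $2}')
    if [ -z "$lag" ] || [ "$lag" = "NULL" ]; then
        echo "CRITICAL - replication is not running"
        return 2
    elif [ "$lag" -ge "$CRIT" ]; then
        echo "CRITICAL - slave is $lag seconds behind master"
        return 2
    elif [ "$lag" -ge "$WARN" ]; then
        echo "WARNING - slave is $lag seconds behind master"
        return 1
    fi
    echo "OK - slave is $lag seconds behind master"
    return 0
}
```

In a real deployment you would end the script with `check_lag; exit $?`, since Nagios reads the plugin's exit code (0 = OK, 1 = WARNING, 2 = CRITICAL) along with its one-line output.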
Now that you know your replication may have problems, and how to find those problems, you're way ahead of the game. Knowing what's going wrong also makes it easier to search Google for those issues. Here are some other documents for further reading.
Baron Schwartz - How to Make MySQL Replication Reliable
General Replication documentation
Replication Features and Issues
Giuseppe Maxia's - Advanced MySQL Replication Techniques
Replication in MySQL promises a lot. It's fairly trivial to get up and running, almost too easy. But there are lots of headaches you'll likely encounter along the way.
Follow our guidance, be proactive, and set up monitoring, and you'll be on your way to a better, more reliable replication setup for MySQL.
» See All Articles by Columnist Sean Hull