Multi-master replication for MySQL is easy to set up, works much like master-slave replication, and provides a host of features and options not available with the regular master-slave setup. Add the MMM database management tool, and you have a flexible, highly available MySQL cluster on which to host your application.
Introduction
Multi-master replication for MySQL provides a host of new features and options not available with the regular master-slave setup. Also known as circular replication, multi-master replication gives you one active and one passive server, both configured in otherwise exactly the same way.
We’re going to introduce this configuration, then show you how to use the Multi-master Replication Manager for MySQL.
Easy Failover with Multi-Master Replication
Multi-master replication, above and beyond anything else, gives you very simple failover. Why is that? Because you don’t need to make any configuration changes to promote the other server. Here’s how.
With standard MySQL replication, the master database receives changes and records them in its binary log. The slave’s IO thread watches the master’s binary log for activity, and when it sees new transactions there, it copies them into the slave’s own relay log file. The slave’s SQL thread then kicks in and applies those transactions one by one, serially, to the slave database, in the order they were committed.
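You can watch both threads at work on the slave: Master_Log_File and Read_Master_Log_Pos show where the IO thread is reading in the master’s binary log, while Relay_Log_File and Exec_Master_Log_Pos show how far the SQL thread has applied. The log names and positions below are only illustrative:
mysql> show slave status\G
...
      Master_Log_File: mysql-bin.000001
  Read_Master_Log_Pos: 107
       Relay_Log_File: mysql-relay.000002
     Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
  Exec_Master_Log_Pos: 107
...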
In that configuration, if you want to promote a slave to a master, you have to stop the slave, change some other settings, possibly turn off read-only mode, and/or set the server_id if that hasn’t been done.
Enter multi-master replication. Here we have the master and slave set up as above: db2 is a slave of db1, for example. In addition, though, you also make db1 a slave of db2. As above, transactions flow from master to slave. They then flow back upstream to the master. The master sees that the server_id recorded in the transaction is its OWN server_id and knows not to apply it. In other words, db1 will only apply a transaction that ORIGINATED on db2. In all other respects, the servers are configured exactly the same way. In fact you don’t even *NEED* to enable read-only mode, though we recommend it.
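Since that loop prevention hinges entirely on the server_id, a quick sanity check is to confirm the two servers report different values. Run this on each server; if the values match, fix the configuration before going any further:
mysql> SELECT @@server_id;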
Setting up Master-Master Replication
Begin by setting up master-slave replication as you normally would. In this setup, also make a few additions to the MySQL configuration on both servers. Be sure to set:
server_id         = 12345   # could use last digits of IP address, must be globally unique
log_slave_updates = 1
read_only         = 1
log_bin           = /path/to/mysql-bin.log
relay_log         = /path/to/mysql-relay.log
Go ahead and set these; be sure your replication user exists on both master and slave, restart both servers, and make sure replication is happening to the slave.
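If the replication user doesn’t exist yet, a minimal grant on each server looks something like this. The 'abc' user and password match the CHANGE MASTER TO command below; the '%' host wildcard is an assumption you’ll want to tighten for your own network:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'abc'@'%' IDENTIFIED BY 'abc';
mysql> FLUSH PRIVILEGES;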
There is now only one last step. On the *master*, go ahead and issue the same CHANGE MASTER TO command you would normally use to point db2 at your master db1, but in reverse, so the master points to the slave. Be sure to grab the master_log_file and position on the slave so you can run this command:
mysql> change master to master_user='abc', master_password='abc', master_host='db2', master_log_file='mysql-bin.000001', master_log_pos=12345;
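To find the master_log_file and master_log_pos values used above, run this on db2 and copy the File and Position columns from its output (since db2 has only been replaying db1’s changes so far, its current coordinates are a safe starting point):
mysql> show master status;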
Then issue:
mysql> start slave;
mysql> show slave status\G
Look for these two lines:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Building Highly Available MySQL with MMM
I’m sure you can see the advantages of MySQL multi-master replication already. But there’s a lot more in store for you. Next up you’ll want to take a look at MMM. This toolset provides a monitoring daemon and agents that run on the various MySQL nodes. It monitors activity and makes sure your master is available. If the master becomes unavailable, MMM will automatically switch your application to point to your alternate master.
MMM exposes a virtual IP (VIP) which your application can use to connect. That VIP always points to the writer *role*, following whichever database in your multi-master setup currently holds that role. There are also reader roles, with associated IP addresses, for your application to use. MMM manages all of these for you.
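In practice that means your application connects to the role IPs rather than to db1 or db2 directly. Using the writer IP from the mmm_common.conf example below, and a hypothetical app_user account, that could look like:
$ mysql -h 192.168.1.100 -u app_user -p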
What’s more, if the monitor daemon dies for some reason, the currently assigned VIPs persist. So obviously you’ll want monitoring in place to make sure that process is running, but its death won’t bring down your application.
Setting up MMM
First, set up the EPEL repository; a quick Google search should get you the details. Next use yum or apt-get to install your packages. For yum:
$ yum install mysql-mmm-agent
$ yum install mysql-mmm-monitor
$ yum install mysql-mmm-tools
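On Debian or Ubuntu the equivalent is apt-get; the package names below simply mirror the yum packages and are an assumption to verify against your distribution’s repositories:
$ apt-get install mysql-mmm-agent
$ apt-get install mysql-mmm-monitor
$ apt-get install mysql-mmm-tools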
There are a few files to edit to configure MMM, but they’re not too bad.
First, create an mmm_common.conf file and copy it to /etc/mysql-mmm/ on the monitor server as well as on each server running the agent daemon:
active_master_role writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mysql-mmm/mmmd_agent.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        replication
    replication_password    abc
    agent_user              mmm_agent
    agent_password          abc
</host>

<host db1>
    ip      192.168.1.86
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.1.87
    mode    master
    peer    db1
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.1.100
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2
    ips     192.168.1.186, 192.168.1.187
    mode    balanced
</role>
Next edit the mmm_agent.conf file on each machine it will run on (adjust the "this" line as needed):
include mmm_common.conf
this db1
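So on db2, for example, the same two-line file would read:
include mmm_common.conf
this db2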
Lastly, edit the mmm_mon.conf file on the box it runs on:
include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mysql-mmm/mmmd_mon.pid
    bin_path        /usr/libexec/mysql-mmm
    status_path     /var/lib/mysql-mmm/mmmd_mon.status
    ping_ips        192.168.1.86, 192.168.1.87
    auto_set_online 60
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    abc
</host>

debug 0
Got all those files edited? Great, start the agent on each MySQL server:
$ /etc/init.d/mysql-mmm-agent start
Then on the monitor box:
$ /etc/init.d/mysql-mmm-monitor start
Once you’re up and running, use the control tool. Try these commands:
[root@mon init.d]# mmm_control show
  db1(192.168.1.86) master/ONLINE. Roles: reader(192.168.1.187), writer(192.168.1.100)
  db2(192.168.1.87) master/ONLINE. Roles: reader(192.168.1.186)

[root@mon init.d]# mmm_control move_role writer db2
OK: Role 'writer' has been moved from 'db1' to 'db2'. Now you can wait some time and check new roles info!

[root@mon init.d]# mmm_control show
  db1(192.168.1.86) master/ONLINE. Roles: reader(192.168.1.187)
  db2(192.168.1.87) master/ONLINE. Roles: reader(192.168.1.186), writer(192.168.1.100)

[root@mon init.d]# mmm_control move_role writer db1
OK: Role 'writer' has been moved from 'db2' to 'db1'. Now you can wait some time and check new roles info!

[root@mon init.d]# mmm_control show
  db1(192.168.1.86) master/ONLINE. Roles: reader(192.168.1.187), writer(192.168.1.100)
  db2(192.168.1.87) master/ONLINE. Roles: reader(192.168.1.186)
Consistency Checks With Maatkit
We’ve discussed MySQL replication in depth in the past, and in particular how replication can get out of sync. Just as with any MySQL replication topology, multi-master has that same limitation. Given that, be sure you are using the Maatkit mk-table-checksum and mk-table-sync tools to alert you when the servers drift out of sync, and to fix them when they do.
Mk-table-checksum works by calculating checksums of all your tables on the master and gathering the results into a table. That table then propagates through replication as well. You then run another check command on the slave side (or the current passive master) to verify that everything matches. See Fixing MySQL Replication for details.
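As a rough sketch of that workflow (the DSN user and password reuse the 'abc' credentials from earlier, and you should confirm the flags against your Maatkit version’s documentation):
# Checksum all tables on the active master, storing results in test.checksum,
# which then replicates to the passive master.
$ mk-table-checksum --replicate=test.checksum h=db1,u=abc,p=abc
# Compare the replicated results for differences.
$ mk-table-checksum --replicate=test.checksum --replicate-check 1 h=db1,u=abc,p=abc
# Preview (without applying) the statements mk-table-sync would use to repair any drift.
$ mk-table-sync --replicate test.checksum --print h=db1,u=abc,p=abc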
Conclusion
As you can see, multi-master replication is not overly complicated to set up and works a lot like master-slave replication. Despite that, it provides a whole host of advantages, like icing on an already great cake. Add to that the MMM management tool, and you have a very flexible, highly available MySQL cluster on which to host your application.
Additional Resources
MySQL Reference Manual, 17.6.10. MySQL Cluster Replication: Multi-Master and Circular Replication
Installing and Managing Multi-Master Replication Manager (MMM) for MySQL High Availability