Heartbeat automates all the moving parts, and it works as well with the MySQL master-master active/passive solution as it does with the MySQL & DRBD solution. It manages the virtual IP address used by the database, directs DRBD to take over or relinquish primary duties, mounts the /dev/drbd0 device, and starts and stops MySQL as needed.
DRBD provides an alternative high availability solution for MySQL. By effectively putting a mirrored filesystem beneath
the database, the Linux operating system replicates all of your data
without the database even knowing about it. In our first two articles on
the topic, we discussed some of the strengths and weaknesses of native MySQL
statement-based replication and then
compared and contrasted those with the DRBD-based
solution. The advantages are simpler management and the
elimination of some of the anomalous behavior of MySQL
replication. The potential tradeoffs, though, are some performance
impact, both in how your filesystem responds, since it has to wait for
acknowledgement of writes at the remote destination, and a performance hit
when you fail over, as the target MySQL database is starting up fresh and has to
warm its cache before performance will equal that of the former primary.
The final piece of the puzzle is the heartbeat software. Heartbeat automates all
the moving parts that we've discussed so far. What's more, heartbeat can
also be used with the native MySQL master-master active/passive replication
solution, so it's likely to be in the mix whatever high availability solution
you go with.
What is heartbeat?
Heartbeat is a daemon that runs on all of the nodes in your cluster.
It communicates with the other nodes in the cluster to decide which will be primary,
and what that primary will do. If one of the nodes becomes unresponsive,
heartbeat will automatically remove it as primary, and make one of the other nodes
take over. It also exposes a virtual IP address for your primary MySQL
database to run on.
How does it work with DRBD and MySQL?
As you'll recall from previous articles, when switching to an alternate primary, one must
(a) stop the database, (b) unmount /dev/drbd0, and (c) tell the drbd daemon to become
secondary, that is, relinquish primary duties. On the alternate node, you
must (d) tell drbd that it is now primary, (e) mount /dev/drbd0, and finally (f)
start up MySQL. Heartbeat performs all of these steps, and further, it
exposes a virtual IP address upon which to run your MySQL database. Each
of the nodes in your cluster will have its my.cnf configured for a
virtual IP address of your choosing. Heartbeat will manage that
virtual IP address, bringing it up on the primary server before starting
MySQL for you.
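To make steps (a) through (f) concrete, here is a sketch of the manual sequence that heartbeat automates. The DRBD resource name "mysql" and the use of init scripts to stop and start MySQL are assumptions for illustration; the mount point /mnt/mysql matches the one used later in this article.

```
# On the old primary:
$ /etc/init.d/mysql stop        # (a) stop the database (assumes an init script)
$ umount /mnt/mysql             # (b) unmount /dev/drbd0
$ drbdadm secondary mysql       # (c) relinquish primary duties

# On the alternate node:
$ drbdadm primary mysql         # (d) become primary
$ mount /dev/drbd0 /mnt/mysql   # (e) mount the device
$ /etc/init.d/mysql start       # (f) start up MySQL
```

Heartbeat performs exactly this choreography for you, in order, on the correct nodes.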
What that means is that when your MySQL client, be it the mysql shell, the Apache web server,
or what have you, connects to the database, it will do so through this virtual
IP address. When heartbeat moves that address to a new machine, your application
will transparently start hitting the alternate database without even knowing
the rug was pulled out from under it. Ingenious!
Setup and Configuration
There are three main files that control heartbeat, and we'll look at each of them in turn. First up is the ha.cf file:
-- code section --
keepalive 1 # time between heartbeat packets
deadtime 10 # How fast heartbeat notices dead node
warntime 5 # alert about a late heartbeat
initdead 120 # startup grace period before declaring nodes dead
udpport 694 # UDP port for heartbeat
ping 192.168.1.4 # virtualbox host ip address (of your laptop)
bcast eth0 # interface to use for heartbeat packets
# names of our node hostnames
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes # send messages to /var/log/messages
#use_logd no # send messages to /var/log/hb.log
logfile /var/log/hb.log # various heartbeat system messages
debugfile /var/log/heartbeat-debug.log # debug messages
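The node directives themselves don't appear above; they list each cluster member by hostname, one per line. The first hostname below comes from the haresources example later in this article; the second is a placeholder you would replace with your own second node's name:

```
node adelie
node penguin   # placeholder; use your second node's hostname
```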
The above ha.cf file tells heartbeat how to start up and sets its defaults
for pinging, noticing that nodes are down, switching over, switching back, and the names of
those nodes and ports. It doesn't tell heartbeat anything about the
resources it will manage. We'll look at that next, in the haresources file:
-- code section --
adelie IPaddr::192.168.1.30/24/eth0 drbddisk::mysql \
The above haresources lines tell heartbeat (a) what virtual IP address to manage, (b) what
device and filesystem to mount and/or unmount and how, and then (c) what service
to start, which for us is our database.
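The haresources line shown above ends in a continuation backslash, so its second half is cut off. A complete line typically also names the filesystem resource and the MySQL service, something like the following; the ext3 filesystem type and the "mysqld" resource script name are assumptions, and the device and mount point match those used elsewhere in this article:

```
adelie IPaddr::192.168.1.30/24/eth0 drbddisk::mysql \
    Filesystem::/dev/drbd0::/mnt/mysql::ext3 mysqld
```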
-- code section --
The authkeys configuration above basically disables authentication. Obviously, if you are using this in
production you'll want to configure it properly, but for our purposes in the VirtualBox
demo, it won't be necessary.
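For reference, an authkeys file that effectively disables authentication uses the crc method, which performs only a checksum and adds no real security. A minimal sketch:

```
auth 1
1 crc
```

Note that heartbeat requires this file to be readable only by root (chmod 600), or it will refuse to start.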
Once you've edited these three files on both of your nodes, you'll want to stop MySQL and unmount
the filesystem, then start up heartbeat as follows:
$ kill %1 # assuming you've run mysqld_safe &
$ umount /mnt/mysql
$ /etc/init.d/heartbeat start
You'll then want to monitor /var/log/messages or hb.log to see what is happening:
$ tail -f /var/log/messages
Heartbeat will take a moment to start up, and you'll see messages to that effect. It
will wait for a period of time, decide which node to make primary, then start
services on that node, and start MySQL on its new virtual IP address.
Now verify that you can connect to MySQL on the virtual IP address:
$ mysql --host=192.168.1.30 --port=3306
And that's it, you've done it!
This brings together the last piece of the puzzle in setting up a high availability
cluster for MySQL using DRBD. What makes it even better is that heartbeat works
as well with MySQL standard replication as it does with DRBD, so you have
options there as well. Lastly, by exposing a virtual IP address and
managing the starting and stopping of all the components, heartbeat
keeps everything seamless: you get a single virtual database target to
hit, with multiple underlying server nodes to protect you, eliminating a single
point of failure.
See All Articles by Columnist Sean Hull