Cloud deployments of internet sites are becoming more prevalent every day.
Let’s take a look at the special considerations you’ll need to make to host
MySQL successfully in the cloud.
1. Pay Attention to Disk I/O
For all databases, disk I/O is an important consideration, and MySQL is of
course no different. When hosting a database in the cloud the concepts remain
the same; how you achieve fast I/O is slightly different.
If you use instance store boot images, remember that everything on the root
volume will be lost when the instance dies, and instances die much more often
than servers in traditional managed hosting facilities do. EBS-rooted images
boot faster, allow you to back up the root volume, and provide root-volume
persistence after reboot.
For the database proper, store the MySQL datadir on its own EBS volume. This
gives you a lot of flexibility, and much better disk I/O performance than
straight instance store. If you want even better performance, stripe
(RAID 0) over multiple EBS volumes. Also, remember that EBS is
virtualized network storage, so redundancy is already built in. You
don’t need to mirror (RAID 1) or mirror + stripe (RAID 10).
Since you’ll have your database on a separate mount point, adjust your MySQL
start scripts accordingly. Either wait and watch for that mount point to appear
before starting MySQL, or disable the automatic starting of the database.
However, you’ll still want MySQL to shut down automatically when the server does.
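The "wait for the mount point" approach can be sketched roughly as follows. The `/mnt/db` mount point and the init-script invocation are assumptions; adjust them to your own layout:

```shell
#!/bin/sh
# Block until the EBS-backed datadir mount appears, then start MySQL.
# /mnt/db is a hypothetical mount point; change it to match your setup.
wait_for_mount() {
    mnt="$1"
    tries="${2:-30}"    # give up after this many one-second checks
    while [ "$tries" -gt 0 ]; do
        # /proc/mounts lists one mounted filesystem per line, with the
        # mount point as the second space-delimited field.
        if grep -qs " $mnt " /proc/mounts; then
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Example usage (init-script path is an assumption):
# wait_for_mount /mnt/db && /etc/init.d/mysql start
```

A loop like this can run from your own startup script in place of the distribution's automatic MySQL start.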
2. Manage Your Backups
Backing up your persistent data, the core of your internet application, is as
important in the cloud as it is in any hosting environment. However, there are
a few new considerations.
Amazon provides a method for snapshotting EBS volumes, another good reason for
putting your datadir on EBS. Issue these commands, or script them:
mysql> FLUSH TABLES WITH READ LOCK;
mysql> system xfs_freeze -f /mnt/db
mysql> system ec2-create-snapshot
mysql> system xfs_freeze -u /mnt/db
mysql> UNLOCK TABLES;
These commands take only a brief moment to run, even though the snapshot will
continue completing in the background for some time.
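Scripted, that same sequence might look like the sketch below. The mount point, the volume ID, and passwordless local `mysql` access are all placeholders or assumptions; `ec2-create-snapshot` additionally needs your EC2 credentials configured:

```shell
#!/bin/sh
# Sketch: hold a read lock, freeze the filesystem, snapshot, then thaw.
# Running everything in one mysql session keeps the lock held across the
# system calls. /mnt/db and vol-xxxxxxxx are placeholders.
mysql -u root <<'SQL'
FLUSH TABLES WITH READ LOCK;
system xfs_freeze -f /mnt/db
system ec2-create-snapshot vol-xxxxxxxx
system xfs_freeze -u /mnt/db
UNLOCK TABLES;
SQL
```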
Note that as powerful as these backups are, they can only be used within EC2.
You’ll also want to make some other types of backups. Consider hotbackups
with Percona’s xtrabackup weekly, and retain your binary logs to provide
point-in-time recovery ability for any point between those backups and the
current moment. Lastly, perform database dumps a couple of times per week as a
further precaution, and copy these hot and logical backups offsite
periodically. As a final note, consider encrypting your
backups in S3. Since you don’t know where your data physically resides, or how
safe it really is in S3, encrypting it can give you additional peace of mind.
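One way to sketch the encrypt-before-upload idea, using `openssl` for symmetric encryption. The file names, the passphrase file, and the `s3cmd` upload step are illustrative assumptions, not a prescribed toolchain:

```shell
#!/bin/sh
# Encrypt a logical backup before it leaves the server.
# backup.sql and pass.txt are hypothetical names; substitute your own.
DUMP=backup.sql
PASSFILE=pass.txt

if [ -f "$DUMP" ] && [ -f "$PASSFILE" ]; then
    # Symmetric AES-256 encryption with a key derived from the passphrase.
    openssl enc -aes-256-cbc -salt -pbkdf2 \
        -in "$DUMP" -out "$DUMP.enc" -pass "file:$PASSFILE"
    # Then upload only the encrypted copy, e.g. with a tool like s3cmd
    # (assumed installed and configured):
    # s3cmd put "$DUMP.enc" s3://your-backup-bucket/
fi
```

Decrypting with `openssl enc -d` and the same passphrase recovers the original dump, so keep the passphrase somewhere safer than the bucket itself.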
3. Be Serious About Disaster Recovery
One of the things that deployments in the cloud force you to do is be very serious
about disaster recovery. That’s because these cloud servers are even less
reliable than commodity hardware in a managed hosting center. This
discipline is a good thing though, as it forces documenting processes, and
scripting server spinups. In the end you become *more* resilient. While we
may all agree that good disaster recovery practices are important, they do
sometimes get put off for higher priorities. In the cloud, they get moved
to the center of your priority list.
Be sure to regularly test your DR capability with fire drills. Consider
spinning up your entire environment in a different availability zone, or even a
different region. This will give you a sense of how quickly you can
restore your entire infrastructure from the proverbial "bare metal".
How do you restore your database master in the cloud? Here are the likely
steps you’ll go through:
a. Spin up a new database server and mount your former EBS datadir volume.
b. If (a) fails due to corruption, promote your slave to be the master.
c. If (b) fails for some reason, dump your data from the slave, spin up a new master database, and load that data.
d. If these steps fail, go to your newest backup and attempt point-in-time recovery.
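Step (a) might look roughly like the following with the classic EC2 command-line tools. The AMI, instance type, keypair, volume ID, instance ID, and device name are all placeholders:

```shell
#!/bin/sh
# Sketch of restoring a master: launch a replacement instance, then
# reattach the surviving EBS datadir volume to it.
# ami-xxxxxxxx, my-keypair, vol-xxxxxxxx, i-xxxxxxxx, and /dev/sdf
# are placeholders for your own values.
ec2-run-instances ami-xxxxxxxx -t m1.large -k my-keypair
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf

# Then, on the new instance:
# mount /dev/sdf /mnt/db
# /etc/init.d/mysql start
```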
4. Scale Your Database Capacity
Scaling capacity in a traditional hosting environment involved a lot of
planning for expected capacity needs. The cloud really opens up a lot of new
options for you here. You can avoid waste by using less capacity up front and
adding more as necessary. Monitor your usage patterns, and do some benchmarking
to get a sense of how your application responds to increased load.
You’ll also want to build the application from the start to support horizontal
read scaling. Start with a single master and a single slave database, sending
read traffic to the slave and write traffic to the master. As your traffic
increases, you can add additional slaves to scale out. Since your application
has built-in support for spreading requests around, you make good use of the
cloud’s dynamic ability to spin up and take down servers this way.
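Adding a slave typically boils down to pointing it at the master's internal address and starting replication. A sketch follows; the host, credentials, and binlog coordinates are all placeholders you would take from your own master (e.g. from `SHOW MASTER STATUS` at the time the slave's data was copied):

```shell
#!/bin/sh
# On a freshly loaded slave, point replication at the master's
# internal EC2 IP and start it. All values below are placeholders.
mysql -u root <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='10.0.0.10',
  MASTER_USER='repl',
  MASTER_PASSWORD='replpass',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SQL
```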
Also pay serious attention to how the cloud changes the procurement process.
With traditional managed hosting, you plan for specific machines, get approval,
and order them. With the cloud, all of these levers are controlled in
software, so a cap on maximum monthly spending is something that many
organizations will want to consider.
5. Manage Network Traffic
In EC2, your servers get spun up with new internal and external IPs each time
they start. Be sure to use the internal IP addresses for communication between
webservers and databases, and between databases in replication. Depending on
whether you use an elastic IP for the master or not, you may have to notify the
webservers, as well as the replication slave, of the master database’s new
internal IP and name. Internal network traffic is not charged for, and is of
course more secure.
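For scripting those notifications, an instance can discover its own current addresses from the EC2 instance metadata service (reachable only from inside EC2):

```shell
#!/bin/sh
# Query the EC2 instance metadata service for this instance's
# current internal and public IP addresses.
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
echo "internal: $INTERNAL_IP  public: $PUBLIC_IP"
```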
We’ve covered a few important considerations for hosting MySQL in the cloud.
Our next piece will hit on five more areas of importance, including disaster
recovery, legal implications, service levels, considerations for building
robust internet applications in the cloud, and a few miscellaneous topics.