Live schema changes on high-volume tables

We’ve just finished some fairly serious data shuffling in the WLD database.

We’ve been working on changes to reduce the storage space that private messages take up, aiming to cut storage requirements by around 80%, and also to dramatically lower the database hit each time a new message is sent (a write-query reduction of around 75%).

However, one of the changes required a schema change to the main messages table (the table that stores the references to messages for each member). The problem we had: there were nearly a quarter of a billion records in that table.

We estimated that running an ALTER TABLE on that, waiting for the changes to replicate over all the slaves, and then for the slaves to catch up again, could take up to two days, which would mean two days of effective downtime. Obviously, this wasn’t an option, so we had to come up with an alternative solution.
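To give a sense of the shape of the operation, a direct ALTER on a table like this would look something like the sketch below. The column changes shown are purely illustrative, not the actual alteration we needed:

```sql
-- Hypothetical illustration only: the real schema change isn't shown here.
-- On the MySQL versions of the time, an ALTER like this rebuilds the entire table,
-- blocking writes for the duration and then repeating the work on every slave.
ALTER TABLE messages
    DROP COLUMN body,                                        -- e.g. move message bodies out of this table
    ADD COLUMN body_ref BIGINT UNSIGNED NOT NULL DEFAULT 0;  -- and store a reference instead
```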

What we ended up doing was dumping all the data in the existing messages table to a file, and then reimporting it onto each slave (and the master, unreplicated) individually, into a new table, messages_new. That in itself took around 36 hours on and off, with various DB slaves dropped out of the database pool along the way, and our superstar operations manager Ian working from around 2am–5am this morning while the system was quiet.
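Roughly, the per-server step looks like this in MySQL; the new schema and the file path shown are placeholders rather than the real ones:

```sql
-- Sketch only; the exact dump/import mechanism isn't shown here.
-- Run on each slave in turn (and on the master with binary logging disabled
-- for the session, so the bulk import doesn't replicate):
SET SESSION sql_log_bin = 0;

CREATE TABLE messages_new (
    id        BIGINT UNSIGNED NOT NULL PRIMARY KEY,  -- columns are illustrative,
    member_id INT UNSIGNED    NOT NULL,              -- not the real schema
    body_ref  BIGINT UNSIGNED NOT NULL,
    KEY idx_member (member_id)
);

-- Import the file produced earlier with SELECT ... INTO OUTFILE:
LOAD DATA INFILE '/tmp/messages.dump' INTO TABLE messages_new;

SET SESSION sql_log_bin = 1;
```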

By the time we’d imported all this data, the original messages table had another two days’ worth of data which also had to be migrated, so we repeated what we’d already done (dump and import), but only for the new data (which took significantly less time, but was still a very manual process).
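Assuming an auto-increment id marks where the first export stopped (a simplification, with placeholder values), the catch-up pass is just the same dump restricted to the newer rows:

```sql
-- Sketch of the catch-up export, assuming an auto-increment id marks the cut-off.
SELECT id, member_id, body_ref
FROM messages
WHERE id > 123456789              -- highest id included in the first dump (placeholder)
INTO OUTFILE '/tmp/messages_delta.dump';
-- ...then LOAD DATA INFILE into messages_new on each server, as before.
```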

At this point, we only had an hour or so of new data to catch up with, which we deemed a small enough amount to run in a single transaction. So it was “the big switch time”… The process was (sketched in SQL after the list):

  1. Lock any message tables so no new messages got written;
  2. Sync any missing messages by running an INSERT INTO ... SELECT FROM straight on the master and letting it replicate;
  3. Re-run any message deletes and reads that had happened since the initial data export (we’d had the foresight to start recording all these before the export);
  4. Rename the messages table to messages_old, and the messages_new table to messages (i.e. put the new table into service);
  5. Unlock all tables.
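
In SQL, the sequence looks roughly like this; the column names and id cut-off are placeholders, and the delete/read replay in step 3 is application-specific, so it’s only indicated as a comment:

```sql
-- Illustrative sketch of the switch-over; not the exact statements we ran.

-- 1. Stop new message writes.
LOCK TABLES messages WRITE, messages_new WRITE;

-- 2. Copy anything written since the last catch-up pass; this runs on the
--    master and replicates to the slaves as a normal statement.
INSERT INTO messages_new (id, member_id, body_ref)
SELECT id, member_id, body_ref
FROM messages
WHERE id > 123456789;             -- last id already copied (placeholder)

-- 3. Replay the deletes and read-flag updates recorded since the export
--    (application-specific, so not shown here).

-- 4. Swap the tables; RENAME TABLE is atomic. Note that renaming tables held
--    under LOCK TABLES only became possible in MySQL 8.0.13; on older versions
--    the locks need releasing first, or writes frozen at the application level.
RENAME TABLE messages TO messages_old, messages_new TO messages;

-- 5. Let writes through again.
UNLOCK TABLES;
```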

This whole process took less than 30 seconds on the master, and the same again to replicate across all slaves. Any queries which had backed up because of the table locks got pushed through as soon as the lock was released, so other than a small wait, no one would be any the wiser.

So, we’re now running on the new table schema, and everything’s running smoothly. It’s been an interesting couple of days: we’ve not performed any serious maintenance on tables of anywhere near that size before, but it’s something that we’ll be doing more and more as the service grows. And it’s great to be able to say that we completed the migration with zero system downtime.

Note: since this article was written, there have been a lot of changes in the world of live schema alterations. I suggest you use a tool like Percona’s pt-online-schema-change to do the job for you.

This post was originally published on the globaldev site.