memcached invalidation for sets of keys

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

We use memcached extensively throughout the White Label Dating platform, primarily for write-through data caching. For example, composite member and site objects are serialised to JSON and stored under simple keys such as member:1234567 or site:1234, then deserialised and injected back into objects when required.
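
As a rough sketch of that pattern in Python (using pymemcache; the helper names and the idea of caching a plain attribute dict are illustrative assumptions, not our actual platform code):

import json

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def save_member(member_id, attributes):
    # Write to the primary datastore first (elided), then write through
    # to the cache so the next read is a hit.
    cache.set("member:%d" % member_id, json.dumps(attributes))

def load_member(member_id):
    # get() returns None on a miss; fall back to the database in that case.
    raw = cache.get("member:%d" % member_id)
    if raw is not None:
        return json.loads(raw)
    # ... cache miss: load from the database and repopulate the cache ...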

Although this data caching is what takes up most of our caching space, we also store fully rendered HTML pages in a read-through fashion. We do this, rather than using a caching reverse proxy such as Squid or Varnish, because it gives us the ability to control expiration and invalidation of the cached items directly.

Due to the heavy customisation of each logged-in member’s pages, we can only perform full-page caching on the visitor and search-results pages; with the likes of Google now taking page-load speed into account for search rankings, serving crawler-visible pages quickly becomes even more important (if you could ever argue that there was anything more important than user experience when talking about load times).

So far, so good. We’re currently getting around a 60% hit rate for our view cache which, while not fantastic, is reasonable considering the number of variations for each page, based on site and query string parameters (as a comparison, we’re getting 96.5% for the data cache).

However, it’s exactly this number of variations which has recently been causing us trouble. We cache complete pages with a key format similar to:

page:{site_id}:{page_id}:{query_string_hash}

This means that for each active site (current count is just over 4,500), we may potentially have (total number of cacheable pages * total hash space) keys. Which is a lot. Obviously, in practice, the used key-space is a lot lower, but even a cursory glance over the lookup logs for just one of our web servers shows potentially thousands of keys for a given site.
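
To make that concrete, a key for a given request might be assembled something like this (the MD5-of-a-sorted-query-string scheme is an assumption; any stable digest would do):

import hashlib
from urllib.parse import urlencode

def page_cache_key(site_id, page_id, query_params):
    # Sort the parameters so ?a=1&b=2 and ?b=2&a=1 map to the same key.
    canonical = urlencode(sorted(query_params.items()))
    digest = hashlib.md5(canonical.encode("utf-8")).hexdigest()
    return "page:%d:%d:%s" % (site_id, page_id, digest)

page_cache_key(4501, 17, {"q": "brighton", "page": "2"})
# => 'page:4501:17:<32-char hex digest>'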

While everything’s stable, this isn’t a problem at all, but we’ve been on a push to migrate sites from our old (“standard”) template set to a new (“premium”) one; this has meant a large number of design changes to sites as they’re moved over. Because we have no way of knowing exactly which cache keys need to be invalidated when a site’s design changes, we’ve been taking the interim step of flushing the entire view cache each time. Ouch.

This got me thinking about some of the ways to invalidate large sets of cache keys like this…

 Use a key registry

Each time you write a key to memcached, record the key separately, either in memcached itself or in a persistent datastore such as MySQL (redis’ set datatype would be perfect for this kind of storage). Then, when you need to invalidate the set, loop through the registered keys and delete each one.
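
A sketch of this approach, pairing redis’ set type for the registry with pymemcache for the cache (client setup and the per-site registry key name are illustrative assumptions):

import redis
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))
registry = redis.Redis()  # defaults to localhost:6379

def cache_page(site_id, key, html):
    cache.set(key, html)
    # Track every key written for this site in a redis set.
    registry.sadd("view_keys:%d" % site_id, key)

def invalidate_site(site_id):
    registry_key = "view_keys:%d" % site_id
    # smembers returns bytes, which pymemcache accepts as keys.
    for key in registry.smembers(registry_key):
        cache.delete(key)
    registry.delete(registry_key)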

This method, although “clean” (leaving no old keys around), requires the extra overhead of keeping the registry up to date. Plus, if you end up making an extra write to a database each time to add a key to memcached, you’re potentially reducing the benefits of the use of the cache.

 Rely on timeouts

Set each key with a low(-ish) timeout and just wait until the key naturally expires. Of course, you can’t force invalidation of individual keys, and you’re likely to have inconsistencies as individual items in the set expire at different times. This is by far the simplest approach, but if you need to be able to clear a set of cache keys on-demand, it’s not the solution for you.
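
In code this amounts to nothing more than passing an expiry with every write; the five-minute TTL here is an arbitrary example:

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))
html = "<html>...rendered page...</html>"

# expire is in seconds; the key vanishes on its own after five minutes.
cache.set("page:4501:17:abc123", html, expire=300)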

 Key versioning

Version each key based on some repeatable string. This could be a manually controlled version number which is incremented when the data to cache is changed, or the last-updated timestamp. If we take the perennial example of a shopping cart broken into visible pages, we could version it as follows:

cart:{cart_id}:{page}:{version}  # format for storing data for a given cart page
cart:12345:1:4                   # explicit version 4 of page 1 of this cart
cart:12345:1:1280824586          # page 1 of this cart based on last-updated stamp

Then, when a user makes a change to their cart (adds a new item, for example), we simply increment the version on the cart, and any references to cached keys will start looking for the new version. In the verbose memcached log below, a miss comes back as a bare END, while a hit shows the key being sent:

<20 get cart:12345:1:4
>20 END
<20 set cart:12345:1:4
>20 STORED
<20 get cart:12345:1:4
>20 sending key cart:12345:1:4
>20 END
<20 get cart:12345:1:5
>20 END
<20 set cart:12345:1:5
>20 STORED
<20 get cart:12345:1:5
>20 sending key cart:12345:1:5
>20 END
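
In application code, the explicit-version variant might look something like this sketch (the cart object, its version attribute, and the renderer are assumed for illustration):

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def cart_page_key(cart_id, page, version):
    return "cart:%d:%d:%d" % (cart_id, page, version)

def cart_page_html(cart, page):
    # Reads always use the cart's *current* version, so bumping the
    # version implicitly invalidates every key built from older ones.
    key = cart_page_key(cart.id, page, cart.version)
    html = cache.get(key)
    if html is None:
        html = render_cart_page(cart, page)  # hypothetical slow render
        cache.set(key, html)
    return html

def add_item(cart, item):
    cart.items.append(item)
    cart.version += 1  # persist this; the old keys are simply never read again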

An alternative option is versioning by “namespace”, as described in the memcached FAQ, whereby a group of related keys shares an additional namespace key. The namespace value acts as the version and, by changing it, you effectively invalidate every key built against that namespace:

# find an existing namespace
$ns_key = $memcache->get("foo_namespace_key");
# if not set, initialize it
if ($ns_key === false) $memcache->set("foo_namespace_key", rand(1, 10000));
# cleverly use the ns_key
$my_key = "foo_" . $ns_key . "_12345";
$my_val = $memcache->get($my_key);

# to clear the namespace do:
$memcache->increment("foo_namespace_key");

This means that you have to make an additional round-trip per lookup to retrieve the namespace, but it’s a good option if you don’t have a specific field you could version on (and you don’t want to add one).

 Our choice

In our case, we chose to version our keys based on the epoch-translated timestamp of the last-modified date for the site in question. We felt that this method had the best combination of costs and benefits for us: there’s no separate registry to maintain, no extra round-trip for a namespace key, and every cached page for a site is invalidated automatically the moment the site record is touched; the orphaned keys simply age out through expiry and LRU eviction.
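
A sketch of how such a key might be built (the position of the version within the key and the site object’s fields are illustrative assumptions):

import hashlib
from urllib.parse import urlencode

def page_cache_key(site, page_id, query_params):
    canonical = urlencode(sorted(query_params.items()))
    digest = hashlib.md5(canonical.encode("utf-8")).hexdigest()
    # Any write to the site record moves updated_at forward, silently
    # orphaning every previously cached page for that site.
    version = int(site.updated_at.timestamp())
    return "page:%d:%d:%s:%d" % (site.id, page_id, digest, version)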

Everyone’s invalidation technique will be different, and most people will use more than one depending on the type of data. There are plenty of resources out there to help you find the perfect one for your situation.

This article was originally posted on the globaldev website.