Well. Uh.
Sometimes ya just can't, yaknow? :)
In an ideal world your cache never expires, memcached never flaps, and
you have tools updating caches in the background that work flawlessly.
Unfortunately developers don't always have the time to make these
perfect, but it's "easy" to patch in one of the previous suggestions to
deal with the problem.
Let's say you're a typical startup and you're faced with a problem:
You have a complex bit of parsing code for special data. There was never
any code written to automatically, or programmatically, update this
data. Sometime in the future caching is added. This is easy; cache the
result of the parsing request into memcached, let it expire once per
minute so it's easy to propagate changes (which are made by hand,
rarely). It's wrong, but it's what happened.
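To make that concrete, here's roughly what that pattern ends up looking
like (Python with pymemcache standing in for whatever client you've got;
the key name and parse_special_data() are made up):

# The naive pattern described above: check the cache, and on a miss
# re-run the expensive parse and store the result with a 60 second
# expiry. When the key expires, every request that arrives before the
# first rebuild finishes also misses -- that's the stampede.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def parse_special_data():
    # stand-in for the complex, hand-edited parsing work
    return "parsed result"

def get_special_data():
    value = cache.get("special_data")
    if value is None:
        value = parse_special_data()                  # slow
        cache.set("special_data", value, expire=60)   # once a minute
    return value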
It might take someone a nontrivial amount of time to fix this. Write
code to serialize the data into the DB, write a CLI tool or webpage to
manage the data, so the cache can be updated after the data is edited
(either by hand, or whatever). Good luck getting your boss to sign off
on that. There're magic widgets that aren't writing themselves!
I do realize there's an easy way to update the data by hand, then run a
tool to just refresh the cache, but that's beside the point ;) Imagine
again that you have a lot of these situations, where caching was plugged
in as an afterthought. For new development I've _always_ recommended a
cron to update the data (getting PHP devs to write crons is like pulling
teeth!), or a tool that updates the cache. It doesn't always happen.
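The cron version is only a few lines; something like this sketch (same
made-up key and parser as above), run once a minute from crontab:

#!/usr/bin/env python
# Rebuild the cached value on a schedule so readers never take the miss.
# Run from cron, e.g.:  * * * * * /usr/local/bin/refresh_special_data
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def parse_special_data():
    return "parsed result"  # stand-in for the real parsing code

if __name__ == "__main__":
    # expire=0 means the item never expires; the cron keeps it fresh
    # instead, so readers only ever see a warm key.
    cache.set("special_data", parse_special_data(), expire=0)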
At Gaia it's also common to see this where simple 'query caching' was
plugged in as the caching methodology. Everywhere there's SQL that's
relatively static, adding an ->enableCache(blah) call makes it faster!
Right? Right... Turns out you can also plug in one of the aforementioned
algorithms to mitigate this brain damage.
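For the record, one of those algorithms, roughly as it usually gets
described on the list, is a simple add()-based mutex so only one client
rebuilds the entry while the rest wait for it. A sketch, with made-up
key names and a fake query standing in for the real SQL:

# One flavor of stampede mitigation: whoever wins the add() race
# rebuilds the cached query result; everyone else waits briefly for the
# fresh copy instead of piling onto the database.
import time
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def run_query():
    return "query result"  # stand-in for the relatively static SQL

def get_cached_query(ttl=60, lock_ttl=10):
    value = cache.get("query:result")
    if value is not None:
        return value
    # Cache miss: only the client that wins the add() recomputes.
    if cache.add("query:lock", "1", expire=lock_ttl, noreply=False):
        try:
            value = run_query()
            cache.set("query:result", value, expire=ttl)
        finally:
            cache.delete("query:lock")
        return value
    # Lost the race: wait a moment for the winner to repopulate the key,
    # and only hit the database ourselves if it never shows up.
    for _ in range(10):
        time.sleep(0.05)
        value = cache.get("query:result")
        if value is not None:
            return value
    return run_query()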
Also, if you have a cluster configured to not auto-rehash, and memcached
instances can stay down for multiple minutes during a failure, you will
get a similar stampeding problem anyway. At that point you should just
cache the data in APC as well.
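The idea behind that, translated out of PHP (APC is just a local,
in-process cache; a plain dict stands in for it here, and the helper
names are made up):

# Keep a short-lived local copy in the web process so a multi-minute
# memcached outage doesn't turn into a stampede on the backend.
import time
from pymemcache.client.base import Client
from pymemcache.exceptions import MemcacheError

remote = Client(("localhost", 11211))
local = {}  # key -> (value, fetched_at)

def get_with_local_fallback(key, load, local_ttl=60):
    value, fetched_at = local.get(key, (None, 0.0))
    if value is not None and time.time() - fetched_at < local_ttl:
        return value
    try:
        value = remote.get(key)
    except (MemcacheError, OSError):
        value = None  # the memcached node is down
    if value is None:
        value = load()  # regenerate from the database / parser
        try:
            remote.set(key, value, expire=60)
        except (MemcacheError, OSError):
            pass
    local[key] = (value, time.time())
    return value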
Well. In summary: you're right. On that note, I realize in all the wiki
updatery I hadn't really stressed what you mentioned nearly enough. It's
there, but it's not a highlight.
-Dormando
Steven Grimm wrote:
I admit I'm a bit baffled by this discussion (and I also admit I have
only been skimming it, so this might be a retread). It seems like one of
two situations should be true:
1. The underlying data has not changed. The cache is therefore still
correct.
2. The underlying data has changed, and the cache is now stale.
In the first case, just don't set an expiration time and you're done,
yes? Since the item is frequently hit (hence the stampedes) it will
never get LRUed out.
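(In memcached terms that's just an expiration time of 0; with a Python
client like pymemcache as a stand-in:)

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))
# expire=0: the item never expires; it can only be evicted by the LRU,
# which a frequently-hit key effectively never is.
cache.set("hot_key", "current value", expire=0)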
In the second case, why are you waiting around for some unknown amount
of time to pass -- and for some client to get an actual cache miss --
before refreshing the cache? If you have a few hot keys that change
often but for whatever reason you can't invalidate / update the cache at
the time the underlying data gets updated, then another approach is to
have some background task periodically updating the hot items to their
current values. Again, you don't let the item expire in this scenario;
it just gets updated every once in a while. This way nobody has to deal
with a cache miss, and the values still stay as current as you want them
to (adjust the frequency of your background task's updates to taste)
with no stampedes.
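As a sketch of what I mean (Python with pymemcache as a stand-in; the
key names and loaders are made up), the background task is just a loop:

# A background task that keeps a handful of hot keys refreshed on its
# own schedule, so no client ever sees a miss or a read staler than the
# refresh interval.
import time
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

HOT_KEYS = {
    "special_data": lambda: "parsed result",  # stand-in loaders for the
    "query:result": lambda: "query result",   # real regeneration code
}

def refresh_loop(interval=30):
    while True:
        for key, load in HOT_KEYS.items():
            # never expires; the loop keeps it current instead
            cache.set(key, load(), expire=0)
        time.sleep(interval)  # adjust update frequency to taste

if __name__ == "__main__":
    refresh_loop()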
What am I missing?
-Steve