I'm running this by you guys to make sure we're not trying something
completely insane. ;)

We already rely on memcached quite heavily to minimize load on our DB
with stunning success, but as a music streaming service, we also serve
up lots and lots of 5-6MB files, and right now we don't have a
distributed cache of any kind, just lots and lots of really fast
disks. Due to the nature of our content, we have some files that are
insanely popular, and a lot of long tail content that gets played
infrequently. I don't remember the exact numbers, but I'd guesstimate
that the top 50GB of our many TB of files accounts for 40-60% of our
streams on any given day.

What I'd love to do is get those popular files served from memory,
which should alleviate load on the disks considerably. Obviously the
file system cache does some of this already, but since it's not
distributed it uses the space a lot less efficiently than a
distributed cache would: if one popular file lives on 3 stream nodes,
it gets cached in memory 3 separate times instead of just once. We
have multiple stream servers, and between them we could probably
scrounge up 50GB or more for memcached, theoretically removing the
disk load for all of the most popular content.
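Just as a sketch of what I'm picturing on each stream node (hypothetical
host names and key scheme, python-memcached as the client; note that the
client library has its own 1MB value cap that would need raising alongside
the server's):

    import memcache

    # One shared pool across all stream nodes, so a given key maps to
    # exactly one memcached instance no matter which node asks for it.
    MC = memcache.Client(['stream1:11211', 'stream2:11211', 'stream3:11211'])

    def read_track(path):
        # Hypothetical key scheme; real keys must stay under 250 chars
        # and contain no whitespace.
        key = 'track:' + path
        data = MC.get(key)
        if data is None:
            # Cache miss: fall back to disk and repopulate.
            with open(path, 'rb') as f:
                data = f.read()
            # set() fails quietly if the value is over the client's or
            # server's size cap, so the stream still works, it just
            # doesn't get cached.
            MC.set(key, data)
        return data

Since every node would use the same server list, a given key hashes to the
same memcached instance everywhere, which is what gets us the "cached once
instead of 3 times" behavior.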

My favorite memory cache is of course memcached, so I'm wondering if
this would be an appropriate use (with the slab page size and max item
size turned way up, obviously, since the default 1MB item cap won't fit
our 5-6MB files). We're going to start doing some experiments with it,
but I'm wondering what the community thinks.
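For the experiments we'll probably start each instance with something
along these lines (the memory figure is just a guess at what each box can
spare, and -I to raise the item cap only exists in newer memcached
releases; older builds needed a recompile to change the slab sizing):

    memcached -d -p 11211 -m 16384 -I 8m    # ~16GB of cache, 8MB max item size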

Thanks,

Jay
