Some clients do compression, though that's no guarantee that your object
will compress to < 1MB.
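
For illustration, the check might look something like this (a rough
Python sketch using zlib; the client's set() is assumed here, not any
particular library's API):

import zlib

MAX_ITEM_SIZE = 1024 * 1024  # memcached's default item size limit

def set_compressed(client, key, value):
    # Compress first, then verify the result actually fits under 1MB.
    compressed = zlib.compress(value)
    if len(compressed) < MAX_ITEM_SIZE:
        return client.set(key, compressed)
    return False  # still too big; you'd have to split it instead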

On Wed, Sep 16, 2009 at 1:32 PM, EugeneV <[email protected]> wrote:

>
> Hi,
>
> In my scenario, typical items that I would like to cache are much
> smaller than 1MB. However, there is no guarantee that I will never
> encounter larger items. I would like to be able to store them as well,
> but also would like to avoid recompiling memcached or changing the
> back end.
>
> If I split items > 1MB and store the array of keys for the parts as a
> value with some "master" key, I can retrieve the parts via get_multi
> when I need to re-assemble the item. I just need to be able to
> determine when this has to be done. Rather than examining data each
> time, I can store "master" keys in a separate namespace. Then the
> logic for retrieving data from cache becomes: (1) check "main" cache;
> (2) if key is not there, check "master" cache and re-assemble the
> value; (3) if key is not there, do something... Obviously, deleting the
> "master" key should delete its parts as well.
>
> I was wondering if any existing clients out there abstract what I just
> described and whether it is a good idea.
>
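
For what it's worth, here's a rough sketch of the scheme you describe,
assuming a python-memcached-style client (set/get/get_multi/delete); the
"chunk:"/"master:" key prefixes and the chunk size are made up for
illustration:

CHUNK_SIZE = 1024 * 1024 - 1024  # stay safely under the 1MB item limit

def set_large(client, key, value):
    # Small values go straight into the "main" cache.
    if len(value) <= CHUNK_SIZE:
        client.set(key, value)
        return
    # Split oversized values and record the part keys under a "master" key.
    chunk_keys = []
    for i in range(0, len(value), CHUNK_SIZE):
        ck = "chunk:%s:%d" % (key, i // CHUNK_SIZE)
        client.set(ck, value[i:i + CHUNK_SIZE])
        chunk_keys.append(ck)
    client.set("master:" + key, chunk_keys)

def get_large(client, key):
    # (1) check the "main" cache
    value = client.get(key)
    if value is not None:
        return value
    # (2) check the "master" namespace and reassemble via get_multi
    chunk_keys = client.get("master:" + key)
    if chunk_keys is None:
        return None  # (3) not cached at all; do something...
    chunks = client.get_multi(chunk_keys)
    if len(chunks) != len(chunk_keys):
        return None  # a part was evicted, so the whole item is a miss
    return b"".join(chunks[ck] for ck in chunk_keys)

def delete_large(client, key):
    # Deleting the "master" key deletes the parts as well.
    for ck in client.get("master:" + key) or []:
        client.delete(ck)
    client.delete("master:" + key)
    client.delete(key)

One caveat worth noting: the parts can be evicted independently of the
"master" key, so the reassembly path has to treat a missing part as a
full miss.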



-- 
"If you see a whole thing - it seems that it's always beautiful. Planets,
lives... But up close a world's all dirt and rocks. And day to day, life's a
hard job, you get tired, you lose the pattern."
Ursula K. Le Guin

What's different about data in the cloud? http://www.azuredba.com

http://www.finsel.com/words,-words,-words.aspx (My blog) -
http://www.finsel.com/photo-gallery.aspx (My Photogallery)  -
http://www.reluctantdba.com/dbas-and-programmers/blog.aspx (My Professional
Blog)

