I need to do more and longer testing, but it appears that on HR, compressing
data before writing it to memcache and decompressing it on read is faster
than doing the full-sized read.  I'm sure the results vary with the size of
the data being written, and there may be an added CPU cost, but at least in
our app this seems to have reduced our latency, solved most of our problems
with writing more than 1M to memcache (compressed, we can fit as much as 3M
of data into a single memcache entry), and increased the number of entries
that can live in memcache, reducing the number of misses we see.

 

This was not an obvious thing to test, and we only decided to try it
because we were having issues where splitting data in two to make it fit
within the 1M limit was resulting in much higher cache miss rates: if each
half had a 20% chance of missing, we now effectively had 20% + 20%.
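To put a number on that (a quick sketch, assuming the two halves miss independently): a read of the split value fails if *either* half misses, so two 20% miss rates compound to a 36% combined miss rate, nearly double.

```python
# Combined miss rate when a value is split across two memcache entries,
# assuming each half misses independently with the same probability.
p_miss = 0.20
combined = 1 - (1 - p_miss) ** 2  # fail unless BOTH halves hit
print(round(combined, 2))  # 0.36
```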

 

This may not be a "best practice," but it may be something you want to
consider.  We used the default compression level (6) and haven't tried
other settings yet.
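In case it helps anyone try this, here's a minimal sketch of the compress-on-write / decompress-on-read pattern using zlib at the default level (6). I'm using a plain dict to stand in for the memcache client so the example is self-contained; in App Engine you'd swap in the real client's set/get calls, and the exact size limit constant is an assumption you should check against your environment.

```python
import zlib

# Stand-in for the memcache client; swap in your real client's
# set()/get() calls in production (assumption for illustration).
cache = {}

MEMCACHE_LIMIT = 1000000  # approximate per-entry value limit (~1M)

def set_compressed(key, data, level=6):
    """Compress before writing; return False if it still won't fit."""
    packed = zlib.compress(data, level)
    if len(packed) > MEMCACHE_LIMIT:
        return False
    cache[key] = packed
    return True

def get_compressed(key):
    """Read the compressed entry and decompress; None on a miss."""
    packed = cache.get(key)
    if packed is None:
        return None
    return zlib.decompress(packed)

# Usage: a repetitive payload well over 1M raw compresses to fit
# in a single entry, so no splitting is needed.
big = b"some repetitive payload " * 200000  # ~4.8M raw
assert set_compressed("blob", big)
assert get_compressed("blob") == big
```

How well this works obviously depends on how compressible your data is; highly repetitive text or JSON shrinks a lot, while already-compressed blobs (images, etc.) won't.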

 

Again, your mileage may vary, but it's worth considering if you use memcache.

 

 


Brandon Wirtz 
LockerGnome.com: Corporate VP Business Strategy 
BlackWaterOps: President / Lead Mercenary 




Work: 510-992-6548 
Toll Free: 866-400-4536 

IM: [email protected] (Google Talk) 
Skype: drakegreene 

Lockergnome.com: http://www.lockergnome.com
BlackWater Ops: http://www.blackwaterops.com


-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
