Clarification for last post: LRU means that if an item has not been used for
a long time, it will likely be evicted, whereas an item that was recently
used is less likely to be evicted.
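To make the policy concrete, here is a minimal LRU cache sketched in Python. This is an illustration of the eviction policy only, not App Engine's actual memcache implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: the least recently used entry is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touching "a" makes it the most recently used
cache.put("c", 3)  # capacity exceeded: "b", the least recently used, is evicted
```

After this sequence, "a" and "c" are still cached while "b" is gone, even though "a" was inserted first.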

--
Ikai Lan
Developer Programs Engineer, Google App Engine
plus.ikailan.com | twitter.com/ikai



On Fri, Aug 5, 2011 at 5:38 PM, Ikai Lan (Google) <[email protected]> wrote:

> Agree with Tim. The way Memcache evicts items is on an LRU - least recently
> used - basis.
>
> Are you familiar with the concept of a "working set"? The idea is that the
> majority of your data reads for a given window will go to a small minority
> of your total dataset. This is why caching in general works so well. Faster
> caches have less storage and cost more, but because you only
> ever work with a very small part of your entire dataset most of the time, it
> doesn't matter. With an LRU based cache, if something is retrieved, it is
> *usually* likely that object will be used again in the near future, and you
> gain the benefits of caching.
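The working-set effect can be demonstrated with a small simulation. Here `functools.lru_cache` stands in for memcache, and the key names, cache size, and access pattern are all made up for illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=8)
def fetch(key):
    # stand-in for an expensive datastore read
    return "value-for-" + key

hot = ["a", "b"]                        # the small working set
cold = ["c%d" % i for i in range(100)]  # the long tail, each key read once

# four hot reads for every cold read: the working set stays resident
for i in range(100):
    for k in hot:
        fetch(k)
        fetch(k)
    fetch(cold[i])

info = fetch.cache_info()
hit_rate = info.hits / (info.hits + info.misses)  # roughly 0.8 here
```

Even though the cache holds only 8 of the 102 keys ever touched, the hit rate stays high because the hot keys are re-touched often enough that LRU never evicts them.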
>
> It sounds like in your scenario, this is what will happen. If someone
> requests data that isn't frequently accessed, you win because you only
> rarely pay the cost of hitting the datastore. If someone requests data
> that is frequently accessed, again, you win because that data will, in
> the overwhelming majority of cases, be served from the cache. Where you
> lose is when your data access is totally random.
>
> So I guess my advice is this: build it first, use caching, graph everything
> and watch cache hits over time. When it's a problem, it's a problem and you
> can deal with it then, but my intuition tells me that it probably won't be
> for a while.
>
> --
> Ikai Lan
> Developer Programs Engineer, Google App Engine
> plus.ikailan.com | twitter.com/ikai
>
>
>
> On Fri, Aug 5, 2011 at 5:31 PM, Tim Hoffman <[email protected]> wrote:
>
>> Basically you can't tell.
>>
>> Also, your memcache capacity is finite, so stuffing it with data just in
>> case you need it will inevitably evict something else.
>>
>> Your basic design pattern needs to be:
>>
>> Check memcache
>> if not there, fetch from the datastore
>> if you think the data will be re-used, stick it in memcache
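The three steps above are the classic cache-aside pattern. A minimal sketch in Python, where a plain dict stands in for memcache and `datastore_get` is a hypothetical stand-in for a `db.get()` call, neither being the real App Engine APIs:

```python
cache = {}  # stands in for memcache

def datastore_get(key):
    # pretend this is an expensive datastore read
    return {"key": key, "data": "row-" + key}

def get_with_cache(key, cacheable=True):
    value = cache.get(key)            # 1. check memcache
    if value is None:
        value = datastore_get(key)    # 2. on a miss, fetch from the datastore
        if cacheable:
            cache[key] = value        # 3. store it only if it's likely re-used
    return value
```

The `cacheable` flag captures the last step: data you don't expect to be read again is deliberately not cached, so it can't evict anything more valuable.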
>>
>> Various scenarios may lend themselves to preloading the cache (but note
>> the point at the top). If you can identify a user's requirements
>> specifically, then when they log in you could fire off a task in the
>> background to pre-load the cache with data you think they will look at
>> shortly, but you still have to deal with cache misses.
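On App Engine the background work would normally go through the Task Queue API; the thread below is only a stand-in used to sketch the shape, and all names and data here are made up:

```python
import threading

cache = {}  # stands in for memcache

def datastore_fetch(key):
    # hypothetical expensive datastore read
    return "data-for-" + key

def preload_for_user(keys):
    # background warm-up: cache data the user will probably look at shortly
    for k in keys:
        cache[k] = datastore_fetch(k)

def get(key):
    # readers must still handle misses: the preload may not have run yet,
    # or the preloaded entries may already have been evicted
    value = cache.get(key)
    if value is None:
        value = datastore_fetch(key)
        cache[key] = value
    return value

# at login, fire off the warm-up task in the background
warmup = threading.Thread(target=preload_for_user, args=(["a", "b"],))
warmup.start()
warmup.join()  # joined here only to make the example deterministic
```

The key point is in `get`: preloading is an optimization of the cache-aside pattern, not a replacement for its miss handling.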
>>
>> I don't think running your own caching mechanism in the backend would be
>> particularly useful unless you can guarantee a huge cache hit rate, since
>> it is a finite resource.
>>
>> Also think about how you can fetch the data from the datastore more
>> efficiently. Getting data by key can be very fast. Given that the client
>> will only fetch the data once, and you don't know when they will do it,
>> keeping the data hot in memcache seems an expensive exercise, especially
>> if you can optimize the data in the datastore so the client can fetch it
>> with a single db.get().
>>
>> How are you detecting the data change that triggers the notification?
>>
>> Just my 2c
>>
>> Rgds
>>
>> Tim
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Google App Engine" group.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msg/google-appengine/-/QQMA5FBKJWMJ.
>>
>> To post to this group, send email to [email protected].
>> To unsubscribe from this group, send email to
>> [email protected].
>> For more options, visit this group at
>> http://groups.google.com/group/google-appengine?hl=en.
>>
>
>
