I get that. I think I even spoke to it. I don't care if things disappear. But as I scale up, my cache hit ratio goes down because the total unique data goes up. So my multi-tenant app currently has to be "Right Scaled": too few users and instances don't stay live, so I eat warm-ups; too many users and the turnover on the FIFO buffer is so high that my cache hit ratio falls off.
From: [email protected] [mailto:[email protected]] On Behalf Of Andrius A Sent: Thursday, November 24, 2011 3:01 AM To: [email protected] Subject: Re: [google-appengine] Potential Feature Request Memcache Allocation and Delayed Writes You don't get it how memcache is working, even if google come up with new service allowing you to allocate memory in memcache, there still be no guarantee for that data to stay for period of time since everything is in "RAM" which can fail any time and data is not recoverable. If that was possible google wouldn't bother implementing what we have today called datastore. On 24 November 2011 09:05, Brandon Wirtz <[email protected]> wrote: Realizing my app is different than from most everybody elses.. I got to thinking about the thread where we were talking about reading keys from memcache. I know all the reasons this is a bad idea. But I got to thinking what if it wasn't? My app is just a big optimized cache, but I rely on the 3 tiers of storage to make it all work and do so quickly. The sum of all the data for the day is about 2 gigs. In a virtual machine environment I would typically allocate a bunch of ram and every so often dump that to longer term storage, but since most my caching is measured in minutes, some is in days, and the longest I ever care about data is a month. the only reason I need long term storage is so that when the memory gets reset, or a new "instance" comes online that I don't have a 100% miss rate. Why can't I do that with Memcache? Allocate 2 gigs, populate it with data only on a version change. Once a day take all the values and dump them back to datastore so that if the world ends that I don't have to start from nothing. (maybe only write all the values that have an expiration so many hours away) Since Backends share Memcache this "long" operation could be a scheduled task and execute in the background. In my case this would save a lot of cycles since my writes are Local Memory, MemCache, Datastore. 
And I do so with every piece of data, because I can't count on getting a hit from local memory or memcache, given their volatility. But if I had a set amount of memcache I wouldn't need to worry: it wouldn't be volatile, and Google could charge me for the resource. It doesn't even have to be perfectly non-volatile, because even if I only "back up" 75% of the data, that's fine; it is just a cache.

--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
