As far as I know, memcache persists across deployments (it's a feature). To get automatic invalidation on deployment, you need to work the code revision number into the key.
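A minimal sketch of the versioned-key idea: bake the deployed revision into every cache key, so a new deployment simply stops reading the old entries (they expire on their own). `CODE_VERSION` and the dict standing in for memcache are illustrative assumptions, not web2py or GAE API:

```python
CODE_VERSION = "r123"   # illustrative: the revision of the deployed code
_store = {}             # stand-in for the real memcache client

def versioned_key(key):
    # Prefix every key with the code revision.
    return "%s:%s" % (CODE_VERSION, key)

def cache_set(key, value):
    _store[versioned_key(key)] = value

def cache_get(key):
    return _store.get(versioned_key(key))

cache_set("controllers/default.py", "<bytecode>")
assert cache_get("controllers/default.py") == "<bytecode>"

# After a redeploy the revision changes, so old entries are never read again:
CODE_VERSION = "r124"
assert cache_get("controllers/default.py") is None
```

With a real memcache the stale entries are not deleted, just orphaned; they fall out via normal eviction.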
The main reason to use memcache is to get consistency between nodes for data that could potentially be inconsistent. In this case, you do not need to worry about consistency between nodes, because the code is the same at *every* node. I would put the compiled bytecode in RAM (a dict). The access time will be faster than memcache, and you do not have to worry about invalidation. And if the environment were known (e.g. production), you could aggressively cache the code in RAM by default.

Robin

On Nov 30, 1:43 am, mdipierro <[EMAIL PROTECTED]> wrote:
> Not sure any more... some of my other tests now take longer than
> before. I need independent tests and verification.
>
> Massimo
>
> On Nov 30, 1:15 am, mdipierro <[EMAIL PROTECTED]> wrote:
> > Done! GAE caching of bytecode-compiled code is in trunk and works.
> > It makes my test code 3-4 times faster (from 960ms to 270ms).
> >
> > Open question: does GAE clear memcache when I re-upload web2py? It
> > seems so.
> >
> > Please test it. This is still to be considered experimental and there
> > is still room for further improvements.
> >
> > Massimo
> >
> > On Nov 30, 12:37 am, mdipierro <[EMAIL PROTECTED]> wrote:
> > > Hold on! While it is a good idea to write CacheInRam on top of GAE
> > > memcache, I thought of a better way to solve the above problem.
> > >
> > > Massimo
> > >
> > > On Nov 29, 11:45 pm, mdipierro <[EMAIL PROTECTED]> wrote:
> > > > Hi Robin,
> > > >
> > > > following your suggestion, look into the latest gluon/restricted.py
> > > > in trunk.
> > > > On the top you will find:
> > > >
> > > >     ### FIX THIS
> > > >     CACHE_TIME=0
> > > >     import cache
> > > >     cache_pyc=cache.CacheInRam()
> > > >     ###
> > > >
> > > > If CACHE_TIME is set to a large number, say 3600 (one hour), all
> > > > bytecode-compiled code is cached in RAM using cache.CacheInRam.
> > > >
> > > > If you (or somebody else) could rewrite CacheInRam on top of
> > > > google.memcache (look into cache.py and contrib/memcache/__init__.py
> > > > for examples), then we should achieve a sensible speedup on GAE.
> > > >
> > > > Else, if I have time, I will do it next week.
> > > >
> > > > There is room for further improvements, but this is a quick solution
> > > > that should solve the problem.
> > > >
> > > > Massimo

You received this message because you are subscribed to the Google Groups "web2py Web Framework" group. For more options, visit http://groups.google.com/group/web2py?hl=en
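Robin's dict-in-RAM suggestion can be sketched as follows. The names (`_code_cache`, `get_code`) are illustrative, not web2py's actual API; in web2py the analogous hook is `cache.CacheInRam` as used by `gluon/restricted.py`:

```python
# A module-level dict survives across requests within one process,
# so each source file is compiled at most once per process.
_code_cache = {}

def get_code(filename, source):
    """Return a code object, compiling only on first use."""
    code = _code_cache.get(filename)
    if code is None:
        code = compile(source, filename, 'exec')
        _code_cache[filename] = code
    return code

# First call compiles; the second returns the same cached object.
c1 = get_code("model.py", "x = 1 + 1")
c2 = get_code("model.py", "x = 1 + 1")
assert c1 is c2
ns = {}
exec(c1, ns)
assert ns['x'] == 2
```

Because the cache key is the filename and the code is identical on every node, no cross-node invalidation is needed: a redeploy restarts the processes and empties the dict for free.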
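For the other direction discussed in the thread, a rough shape for rewriting CacheInRam on top of memcache might look like this. It keeps web2py's cache calling convention (`cache(key, f, time_expire)`), but the plain dict stands in for `google.appengine.api.memcache`, and the expiry handling is simplified (the real client takes the expiry in `set()` and evicts for you), so treat this as an interface sketch under those assumptions:

```python
import time

class CacheOnMemcache(object):
    """Sketch: CacheInRam-style interface backed by a memcache-like store."""

    def __init__(self):
        self._client = {}   # stand-in for the memcache client

    def __call__(self, key, f, time_expire=300):
        item = self._client.get(key)
        now = time.time()
        if item is not None and now - item[0] < time_expire:
            return item[1]          # cache hit, still fresh
        value = f()                 # miss or expired: recompute
        self._client[key] = (now, value)
        return value

cache_pyc = CacheOnMemcache()
calls = []
def build():
    calls.append(1)
    return "compiled"

assert cache_pyc("restricted.py", build, 3600) == "compiled"
assert cache_pyc("restricted.py", build, 3600) == "compiled"
assert len(calls) == 1   # second call served from cache
```

Note that code objects cannot be pickled into a real memcache, which is one reason the in-process dict above is the better fit for bytecode specifically.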

