Here are the __init__ and __call__ functions for the CacheInRam class:

http://code.google.com/p/web2py/source/browse/gluon/cache.py#133
http://code.google.com/p/web2py/source/browse/gluon/cache.py#165
It looks like they both acquire and release the lock during the course of the function call, but neither holds the lock beyond the duration of the call (which should be very short).

I also ran a quick test involving two Ajax calls from a page. One waited a second before accessing the cache, and the other waited several seconds after accessing the cache. The function that waited before accessing the cache was still able to return first, so it did not have to wait for the other function to complete.

Anthony

On Wednesday, May 9, 2012 2:39:26 PM UTC-4, cyan wrote:
>>> Thanks for confirming this. The only kind of update would be adding new
>>> key-value pairs. Having confirmed that this sort of update will be
>>> consistent on the cache, I am now a little worried about the performance
>>> of doing so, given the mention of the mutex-lock design of the cache.
>>>
>>> If I understand this correctly, each thread (from a request) will lock
>>> the cache so that all other threads (requests) will have to wait. I intend
>>> to store multiple dictionaries (say 10) in the cache, and each dictionary
>>> will handle the data from a fixed set of users (say 30 of them) for a given
>>> period of time. If the cache truly behaves as above, then while one thread
>>> is updating the cache, all the other 10 * 30 - 1 = 299 threads will be
>>> blocked and will have to wait. This might drag down the efficiency of the
>>> server side.
>>
>> As far as I can tell, the RAM cache is not locked for the entire duration
>> of the request -- it is only locked very briefly to delete keys, update
>> access statistics, etc. So, I don't necessarily think this will pose a
>> performance problem.
>>
>> Anthony
>
> That's great news! May I ask where you found these details in the source
> code? I'd just want to double-check and make sure that is the case, as this
> is important to my design and implementation. Many thanks again!
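To illustrate the point, here is a minimal, simplified sketch of the lock-scoped pattern described above (a cache that holds its mutex only while touching the shared dict, and runs the expensive computation outside the lock). This is not web2py's actual CacheInRam code; the class name `RamCacheSketch` and the exact control flow are assumptions for illustration only:

```python
import threading
import time

class RamCacheSketch:
    """Illustrative sketch of a lock-scoped RAM cache (NOT web2py's
    actual CacheInRam implementation)."""

    def __init__(self):
        self.storage = {}
        self.locker = threading.RLock()

    def __call__(self, key, f, time_expire=300):
        # The lock is held only long enough to read the shared dict.
        self.locker.acquire()
        item = self.storage.get(key, None)
        self.locker.release()
        if item is not None and item[0] > time.time() - time_expire:
            return item[1]  # cache hit: no lock held while returning
        # On a miss, f() runs OUTSIDE the lock, so a slow computation
        # in one thread does not block other threads' cache access.
        value = f()
        # Lock again only briefly to store the new key-value pair.
        self.locker.acquire()
        self.storage[key] = (time.time(), value)
        self.locker.release()
        return value
```

Under this pattern, a thread computing a slow value for one key never blocks another thread that is reading or writing a different key, which is consistent with the Ajax test above.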

