>> I've looked deeper into this, and in the web2py docs:
>> http://www.web2py.com/examples/static/epydoc/web2py.gluon.cache.CacheInRam-class.html
>>
>> it mentions: "This is implemented as global (per process, shared by all
>> threads) dictionary. A mutex-lock mechanism avoid conflicts."
>>
>> Does this mean that when each request thread is accessing and modifying
>> the content of the cache (e.g. a dictionary in my case), every other cache
>> access is blocked and has to wait until the current request thread finishes
>> with it? If so, it seems to me that the race condition we feared above
>> should not happen. Please correct me if I get this wrong. Thanks.
>
> Well, because cache.ram returns a reference to the object rather than a
> copy, I think you're OK in terms of avoiding conflicts when updating the
> dict (if you're just adding new keys). However, you might have to think
> about what happens when you hit the required number of entries. What if one
> request comes in with the final entry, but while that request is still
> processing (before it has cleared the cache), another request comes in --
> what do you do with that new item? It may be workable, but you'll have to
> think carefully about how it should work.
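One way to sidestep the "final entry arrives while another request is still clearing the cache" race described above is to make the check-and-reset step atomic: under a lock, add the entry, and if the threshold is reached, swap in a fresh dict and hand the full batch to the caller. A late-arriving request then simply lands in the new batch. This is just an illustrative sketch (the `BatchCollector` class and its threshold are hypothetical, not web2py code):

```python
import threading

class BatchCollector:
    """Illustrative sketch (not web2py code): collect entries until a
    threshold, then atomically swap in a fresh dict so a request that
    arrives mid-flush simply lands in the new batch."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.lock = threading.Lock()
        self.entries = {}

    def add(self, key, value):
        """Add an entry; return the completed batch if this add filled it."""
        with self.lock:
            self.entries[key] = value
            if len(self.entries) >= self.threshold:
                # Swap out the full batch atomically; process it
                # outside the lock so other writers are not blocked.
                full_batch, self.entries = self.entries, {}
                return full_batch
        return None

collector = BatchCollector(threshold=3)
assert collector.add('a', 1) is None
assert collector.add('b', 2) is None
batch = collector.add('c', 3)
assert batch == {'a': 1, 'b': 2, 'c': 3}
assert collector.entries == {}
```

Whichever thread's `add` fills the batch gets it back and can persist it; every other thread only ever sees either the old dict (before the swap) or the fresh one (after), never a half-cleared state.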
Thanks for confirming this. The only kind of update would be adding new key-value pairs. Having confirmed that this sort of update will be consistent on the cache, I am now a little worried about the performance of doing so, given the mutex-lock design of the cache. If I understand this correctly, each thread (from a request) will lock the cache, so all other threads (requests) will have to wait. I intend to store multiple dictionaries (say 10) in the cache, with each dictionary handling the data from a fixed set of users (say 30 of them) for a given period of time. If the cache truly behaves as above, then when one thread is updating the cache, all the other 10 * 30 - 1 = 299 threads will be blocked and will have to wait. This could degrade server-side performance.
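One point that may ease the performance worry: if CacheInRam's mutex is held only for the brief moment a cache entry is looked up or stored (rather than for the whole request), then other threads wait only for that short critical section. The flip side, however, is that mutations you make to the dict *returned* by `cache.ram` would then happen outside the cache's lock, so you may want your own lock for those writes. The sketch below is a simplified stand-in, not the actual web2py implementation, and all names (`get_or_create`, `record`, the lock variables) are hypothetical:

```python
import threading

_cache = {}                      # stands in for the cache's backing store
_cache_lock = threading.Lock()   # stands in for CacheInRam's mutex
_update_lock = threading.Lock()  # our own lock for mutating cached dicts

def get_or_create(key):
    # Mimics a cache.ram(key, lambda: {}, ...) lookup: the cache's
    # lock is held only while reading/creating the entry, so other
    # threads block only for this short critical section.
    with _cache_lock:
        return _cache.setdefault(key, {})

def record(bucket_key, user_id, value):
    bucket = get_or_create(bucket_key)
    # Mutating the returned dict happens outside the cache's lock.
    # A single dict assignment is atomic under CPython's GIL, but an
    # explicit lock makes the thread-safety intent unambiguous.
    with _update_lock:
        bucket[user_id] = value

record('group1', 'user30', 42)
assert get_or_create('group1')['user30'] == 42
```

Under this scheme, 299 waiting threads would each stall only for a dict lookup or a single key assignment, not for the duration of another request, which is unlikely to be the server's bottleneck.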

