>
> Regarding the pros and cons of this approach (in comparison to David's 
> database approach), I wonder what the potential pitfalls/risks of the 
> cache approach are. For example (but not limited to):
>
> 1. Is there any consistency issue for the data stored in web2py cache?
>

Yes, this could be a problem. The nice thing about using a db with 
transactions is that you can ensure any operations get rolled back if an 
error occurs. In web2py, each request is wrapped in a transaction, so if 
there is an error during the request, any db operations during that request 
are rolled back. The other issue is volatility -- if your server goes 
down, you lose the contents of RAM, but not what's stored in the db (or 
written to a file).
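
To illustrate the rollback point, here is a minimal sketch using plain sqlite3 (not web2py's DAL, and the table/column names are made up for the example) showing how a transaction undoes work when an error interrupts the request:

```python
import sqlite3

# Hypothetical "hits" counter; the names here are just for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES ('hits', 0)")
conn.commit()

try:
    conn.execute("UPDATE counters SET value = value + 1 WHERE name = 'hits'")
    raise RuntimeError("simulated error during the request")
    conn.commit()  # never reached
except RuntimeError:
    # web2py issues this rollback for you when a request raises an error.
    conn.rollback()

value = conn.execute(
    "SELECT value FROM counters WHERE name = 'hits'"
).fetchone()[0]
print(value)  # 0 -- the update was rolled back
```

An in-RAM cache has no equivalent: once you mutate the cached object, there is nothing to roll back if the rest of the request fails.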
 

> 2. Is there any size limit on the data stored in web2py cache?
>

I think it's just limited to the amount of RAM available.
 

> 3. Is it thread-safe? For instance, if I have two threads A and B (two 
> requests from different users) trying to access the same object (e.g. 
> 'user_data' dict) stored in the cache at the same time, would that cause 
> any problem? This especially concerns the corner case where A and B bear 
> the very last two pieces of data expected to meet '
> some_pre_defined_number'.
>

That's a good point -- your current design would introduce a potential race 
condition (I was originally thinking each request would add separate 
entries to the cache, not update a single shared object). Of course, a db 
could have a similar problem if you were repeatedly updating a single 
record (rather than inserting new records each time).
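
A minimal sketch of the race, using plain Python threading rather than web2py (the `cache` dict and key name are stand-ins for your cached 'user_data' object): two "requests" read the same value, then both write back, losing one update. Serializing the read-modify-write with a lock avoids it.

```python
import threading
import time

cache = {"user_data_count": 0}  # stand-in for the shared cached object
lock = threading.Lock()

def unsafe_increment():
    # Non-atomic: read, pause (simulating request work), then write.
    current = cache["user_data_count"]
    time.sleep(0.05)
    cache["user_data_count"] = current + 1

def safe_increment():
    with lock:  # only one thread at a time does the read-modify-write
        current = cache["user_data_count"]
        time.sleep(0.05)
        cache["user_data_count"] = current + 1

def run(worker):
    cache["user_data_count"] = 0
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return cache["user_data_count"]

print(run(unsafe_increment))  # likely 1 -- one increment is lost
print(run(safe_increment))    # 2 -- the lock prevents the race
```

The same reasoning applies to your "last two pieces of data" corner case: without some serialization, both requests can see the count one short of 'some_pre_defined_number' and neither triggers the threshold.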

Anthony
