On Tuesday, April 5, 2011 1:08:54 PM UTC-4, VP wrote:
> > My question is: in your locking policy above, what's the purpose of 
> > locking at all? 
>
> If you don't lock during writes (e.g. when updating the counter), two 
> writes might happen "simultaneously" and cause inconsistent results. 

There are a lot of ways this can go wrong.  Forget for the moment that we 
are talking specifically about web session data.  More generally we are 
talking about a distributed cache.  That's a tough topic.  There may be 
some uses of web2py and other frameworks where data correctness isn't 
critical -- counting page hits or something -- and an occasional goofy 
write is OK.  But for any kind of business or commerce application, these 
issues are critical.  Having sessions read and written incoherently by more 
than one client has to be approached with real caution.  (And the 
session.forget() approach is pretty limited, unless you can be really 
confident that there's no function seven layers away from the controller 
that may have to write to the session.)
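VP's point about unlocked writes can be made concrete. Here is a minimal Python sketch (session_store, read_session, and write_session are hypothetical stand-ins, not web2py internals) of the classic lost-update interleaving: two requests both read the session before either writes back, and one increment disappears.

```python
# Hypothetical stand-in for a pickled session file; not web2py internals.
session_store = {"counter": 0}

def read_session():
    # Each request gets its own copy, like unpickling the session file.
    return dict(session_store)

def write_session(data):
    # Writing back replaces the stored state wholesale.
    session_store.update(data)

# Interleaving with no lock: both requests read before either writes.
a = read_session()          # request A reads counter == 0
b = read_session()          # request B reads counter == 0
a["counter"] += 1
b["counter"] += 1
write_session(a)            # stores counter == 1
write_session(b)            # also stores counter == 1: A's update is lost

print(session_store["counter"])  # prints 1, not 2
```

With a lock held from read through write, request B would have seen A's update and the final count would be 2.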

I suspect the answer probably has something to do with using other cache 
mechanisms (someone mentioned Redis) so callbacks can be used to respond to 
asynchronous changes; and maybe creating a separation between a core / 
super-stable session (sets session cookies, hands out XSRF tokens etc.) and 
optional user-created session objects that can have varying levels of lock 
/ concurrency behaviors depending on the importance and access pattern of 
the information they carry.  That sounds like a VERY big and difficult 
undertaking.
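One way to picture that split: a minimal sketch (the class and method names below are my own invention, not a web2py or Redis API) in which a stable core holds cookie and XSRF state behind a single lock, while user-created keys opt in to locking per key depending on how much their data matters.

```python
import threading

class SplitSession:
    """Hypothetical sketch of a two-tier session: a lock-protected core
    plus user data whose keys choose their own concurrency behavior."""

    def __init__(self):
        self._core = {}                 # session cookie, XSRF token, etc.
        self._core_lock = threading.Lock()
        self._user = {}                 # optional user-created objects
        self._user_locks = {}           # per-key locks, created on demand

    def set_core(self, key, value):
        with self._core_lock:           # core writes are always serialized
            self._core[key] = value

    def get_core(self, key):
        with self._core_lock:
            return self._core.get(key)

    def set_user(self, key, value, locked=False):
        if locked:                      # important data pays the lock cost
            lock = self._user_locks.setdefault(key, threading.Lock())
            with lock:
                self._user[key] = value
        else:                           # hit counters etc. can skip it
            self._user[key] = value

    def get_user(self, key):
        return self._user.get(key)
```

The point of the design is that a race on a hit counter is harmless, so those keys skip the lock, while the core and anything marked locked=True always serialize.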

In the meantime, if we need to have just one model, I want the fail-safe 
approach, which is what we have now.

Regards --
