I may not understand the whole picture here, but this is what I had
surmised :-)  We've implemented our client using exclusive write locks,
so only the person who holds the DAV lock on a file can modify it.
Access to the filesystem for modifying the file is therefore
synchronized through the object lock: as long as only one DAV
application can obtain the DAV lock, only one of the Slide servers in
the cluster will ever make the modification. The transaction locks also
appear to be implemented as standard DAV locks (they show up in the db
LOCK table too), so transaction synchronization also happens on a DAV
lock. The DAV locks are implemented in the jdbc store, and updates to
the lock table are synchronized by the jdbc client and should be safe
-- as long as the locks aren't cluster cached and the db transaction
isolation level is set appropriately.
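
For illustration, the kind of lock-table update I have in mind looks
roughly like the sketch below. The OBJECT_LOCK table and its columns
are made-up names, not Slide's actual schema; this is just to make the
discussion concrete.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative only: OBJECT_LOCK and its columns are hypothetical,
// not Slide's real LOCK table. Assumes auto-commit is off.
public class NaiveLockCheck {

    // Returns true if we inserted a lock row, i.e. this caller "won".
    static boolean tryLock(Connection con, String objectUri, String owner)
            throws Exception {
        // Step 1: check whether the object is already locked.
        try (PreparedStatement check = con.prepareStatement(
                "SELECT OWNER FROM OBJECT_LOCK WHERE URI = ?")) {
            check.setString(1, objectUri);
            try (ResultSet rs = check.executeQuery()) {
                if (rs.next()) {
                    return false;        // someone else holds the lock
                }
            }
        }
        // Step 2: insert our lock row. A second caller that raced past
        // step 1 before our commit can reach this point as well.
        try (PreparedStatement insert = con.prepareStatement(
                "INSERT INTO OBJECT_LOCK (URI, OWNER) VALUES (?, ?)")) {
            insert.setString(1, objectUri);
            insert.setString(2, owner);
            insert.executeUpdate();
        }
        con.commit();
        return true;
    }
}

The check and the insert are two separate statements, which is why the
isolation level (and the absence of cluster caching) matters here.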

So my thought here is that even if the server takes filesystem locks
while modifying file content, the DAV lock on that object guarantees
that only one user, and so only one Slide server acting on their
behalf, can modify the content, while other readers continue to see
what appears to them to be the current version.

So the cluster caching breaks this model by potentially allowing two
servers to make a lock request at exactly the same time, causing
undefined behaviour.

After reviewing this and fixing some bugs in the transaction code,
though, I'm now concerned about the transaction isolation level I'm
using. READ_COMMITTED seems to allow other clients to keep reading the
lock table to see whether an object is locked while another thread is
in the process of locking it. That doesn't look like the atomic
operation normally required to ensure that only one lock requester gets
the lock. Does anybody know more about how the Slide server ensures
that only one caller will get a lock?
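
For what it's worth, one way I could imagine enforcing the "only one
caller wins" guarantee regardless of the isolation level is to let the
database arbitrate through a unique constraint on the lock row. Again,
the table and column names are hypothetical and this isn't Slide's
code, just a sketch of the idea:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical schema: assumes a UNIQUE constraint on OBJECT_LOCK.URI
// and auto-commit switched off. The database then decides the race:
// exactly one INSERT succeeds, the loser gets a constraint violation.
public class AtomicLockAcquire {

    static boolean tryLock(Connection con, String objectUri, String owner) {
        try (PreparedStatement insert = con.prepareStatement(
                "INSERT INTO OBJECT_LOCK (URI, OWNER) VALUES (?, ?)")) {
            insert.setString(1, objectUri);
            insert.setString(2, owner);
            insert.executeUpdate();
            con.commit();
            return true;                 // we won the race for the lock
        } catch (SQLException duplicate) {
            // Unique-key violation (or any other insert failure): someone
            // else already holds the lock, so roll back and report failure.
            try { con.rollback(); } catch (SQLException ignored) { }
            return false;
        }
    }
}

SELECT ... FOR UPDATE on the lock row, or bumping the isolation level
to SERIALIZABLE, would be other options, but I don't know which, if
any, of these Slide actually relies on.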

Warwick


> -----Original Message-----
> From: Alessandro Apostoli [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, November 24, 2004 11:25 AM
> To: Slide Users Mailing List
> Subject: Re: slide clustering support question
> 
> 
> What I was really concerned about were the filesystem locks, in your
> case NFS, that the txStore has to maintain on each node of the cluster
> to support transactions for file accesses. Both stores are indeed
> accessing the same filesystem, thus violating the rule of one
> FileResourceManager per directory. If this holds true, then even
> disabling all caches does not guarantee atomic operations on files,
> i.e. file transactions are not reliable.
> 
> Warwick Burrows wrote:
> 
> >We've implemented that configuration: a jdbc nodestore with tx
> >filesystem store much like you've outlined, with an HTTP load balancer
> >between our DAV clients and Slide servers. It is untested in this
> >target (load balanced) configuration, but we have tested in a simpler
> >configuration that shares the jdbc store and the content (using NFS)
> >between two Slide servers.
> >
> >Unfortunately the clustering implementation is untested in terms of
> >how locking will work. I.e. when a lock is taken by one client, a
> >notification is sent to the other servers in the cluster to let them
> >know that this object has changed. But it's not certain what will
> >happen if two requests for the lock come in at exactly the same time,
> >as they would both take the lock and send a notification off to the
> >other clustered servers. I believe that there's no code to resolve
> >this issue.
> >
> >So for our deployment we've gone with disabling the cache for the
> >nodestore altogether, so that updates for locks and other metadata are
> >written directly to the db. The content store also has its cache
> >disabled right now, as it seems that caching for both node and content
> >stores is controlled from the encompassing <store> definition.
> >
> >So far we think it will meet our particular performance requirements
> >even with the caches disabled. A fully distributed (and so lock-safe)
> >cache would be great, but the question is: would it be more performant
> >than just writing directly to the db?... particularly when you
> >consider that any negotiation for the lock between cluster caches
> >would be over the network and subject to network latency. Anyone have
> >any ideas as to how distributed caches actually perform in the real
> >world?
> >
> >Warwick
> >
> >
> >  
> >
> >>-----Original Message-----
> >>From: Alessandro Apostoli [mailto:[EMAIL PROTECTED]
> >>Sent: Wednesday, November 24, 2004 5:28 AM
> >>To: Slide Users Mailing List
> >>Subject: slide clustering support question
> >>
> >>
> >>I have a couple of questions about the clustering features of
> >>Slide. Suppose a scenario where you have a distributed, replicated
> >>filesystem such as Coda and a replicated db running on each node.
> >>Each node has the same data both on the filesystem and in the db,
> >>and the nodes are part of a big WAN with link speeds around 2 Mbit.
> >>The idea would be to use the Slide tx store for resources and
> >>revisions whilst using a jdbc store for properties, users, groups,
> >>roles and security.
> >>1) In such a scenario, how would the locks on the filesystem behave?
> >>I guess that the transaction support in the commons.transaction.file
> >>package would be broken, for there would be two or more instances of
> >>FileResourceManager accessing the same directory. Or am I missing
> >>something?
> >>2) For ideal clustering support would I be confined to the JDBC
> >>store?
> >>3) If the Tx Store still works in this configuration, how does Slide
> >>solve the above distributed transaction problem?
> >>
> >>Alessandro
> >>
> >>
> >>
> >
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
