What I was really concerned about are the filesystem locks (in your case over NFS)
that the tx store has to maintain on each node of the cluster to support transactional
file access. Both stores are indeed accessing the same filesystem, which violates the
rule of one FileResourceManager per directory. If that holds true, then even disabling
all caches does not guarantee atomic operations on files, i.e. file transactions are not reliable.
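To make the concern concrete, here is a minimal sketch in plain Java (it does not use the commons-transaction API; the class and its prepare-then-move behaviour are invented for illustration) of why two independent resource managers over the same NFS directory cannot give atomicity: each instance tracks its locks only in its own JVM, so one node never sees a transaction that is in flight on the other.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    class NaiveTxFileStore {
        private final Path storeDir;
        private final Path workDir;
        // Locks live in this JVM only; a second instance on another node
        // keeps its own, separate set and never sees these.
        private final Set<String> localLocks = ConcurrentHashMap.newKeySet();

        NaiveTxFileStore(Path storeDir, Path workDir) {
            this.storeDir = storeDir;
            this.workDir = workDir;
        }

        void write(String resource, byte[] content) throws IOException {
            if (!localLocks.add(resource)) {
                throw new IllegalStateException("locked in this JVM: " + resource);
            }
            try {
                Files.createDirectories(workDir);
                Path tmp = workDir.resolve(resource + ".tmp");
                Files.write(tmp, content);
                // The "commit": move the prepared file into the store directory.
                // Across NFS clients this move is not guaranteed to be atomic, and
                // nothing stops the other node from replacing the same target at
                // the same moment, so the last writer silently wins.
                Files.move(tmp, storeDir.resolve(resource),
                           StandardCopyOption.REPLACE_EXISTING);
            } finally {
                localLocks.remove(resource);
            }
        }
    }

Run one instance of this per node against the same storeDir and concurrent writes to the same resource interleave freely; that is essentially the situation the one-FileResourceManager-per-directory rule is meant to forbid.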


Warwick Burrows wrote:

We've implemented that configuration: a JDBC node store with a transactional (tx)
filesystem store, much like you've outlined, with an HTTP load balancer between our DAV
clients and the Slide servers. It is untested in the target (load-balanced)
configuration, but we have tested a simpler configuration that shares the
JDBC store and the content (over NFS) between two Slide servers.


Unfortunately, the clustering implementation is untested in terms of how
locking will work. That is, when a lock is taken by one client, a notification is
sent to the other servers in the cluster to let them know that this object
has changed. But it's not certain what will happen if two requests for the
lock come in at exactly the same time: both servers would take the lock and
send a notification off to the other clustered servers. I believe there's no
code to resolve this issue.
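To illustrate the race (this is not Slide's code, just a sketch): both servers check for an existing lock, find none, grant it, and notify each other. One possible way to serialize the grant would be to let the shared node store's database decide, e.g. via a unique constraint, as in the hypothetical snippet below (the table and class names are made up):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class ClusterLockDao {
        private final Connection conn;

        ClusterLockDao(Connection conn) {
            this.conn = conn;
        }

        /** Returns true only for the single node whose INSERT wins the race. */
        boolean tryAcquire(String resourceUri, String ownerNode) {
            // Hypothetical table: webdav_locks(uri PRIMARY KEY, owner_node).
            String sql = "INSERT INTO webdav_locks (uri, owner_node) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, resourceUri);
                ps.setString(2, ownerNode);
                ps.executeUpdate();
                return true;   // constraint satisfied: this node owns the lock
            } catch (SQLException duplicateKey) {
                return false;  // another node's INSERT got there first
            }
        }
    }

Whichever INSERT commits first wins; the loser gets a constraint violation and can refuse the lock instead of notifying the rest of the cluster.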

So for our deployment we've gone with disabling the cache for the nodestore
altogether so that updates for locks and other metadata are written directly
to the db. The content store also has its cache disabled right now as it
seems that caching for both node and content stores is controlled from the
encompassing <store> definition.

So far we think it will meet our particular performance requirements even
with the caches disabled. A fully distributed (and therefore lock-safe) cache
would be great, but the question is whether it would be more performant than just
writing directly to the db, particularly when you consider that any
negotiation for the lock between cluster caches would happen over the network
and be subject to network latency. Does anyone have any ideas as to how
distributed caches actually perform in the real world?

Warwick




-----Original Message-----
From: Alessandro Apostoli [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 24, 2004 5:28 AM
To: Slide Users Mailing List
Subject: slide clustering support question



I have a couple of questions about the clustering features of Slide. Suppose a scenario where you have a distributed, replicated filesystem such as Coda and a replicated database running on each node. Each node has the same data both on the filesystem and in the database, and the nodes are part of a big WAN with link speeds around 2 Mbit/s. The idea would be to use the Slide tx store for resources and revisions whilst using a JDBC store for properties, users, groups, roles and security.
1) In such a scenario, how would the locks on the filesystem behave? I guess the transaction support in the commons.transaction.file package would be broken, since there would be two or more instances of FileResourceManager accessing the same directory. Or am I missing something?
2) For ideal clustering support, would I be confined to the JDBC store?
3) If the tx store still works in this configuration, how does Slide solve the
above distributed transaction problem?


Alessandro


