> -----Original Message-----
> From: Marcel Reutegger [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 09, 2006 11:03 AM
> To: [email protected]
> Subject: Re: jackrabbit & clustering
>
> Giota Karadimitriou wrote:
> > Hi Marcel and the rest of the list,
> >
> > please bear with me once more. I would like to ask if the following
> > scenario makes sense before applying it in practice.
> >
> > Let's assume that I have 2 clustered nodes and that I am able to
> > access the SharedItemStateManagers on both (possibly I will make
> > some wrapper around SharedItemStateManager and use RMI or something
> > to accomplish this, but this part is of secondary importance now).
> >
> > I will name them shism1 and shism2 where shism1=shared item state
> > manager on cluster node 1 and shism2=shared item state manager on
> > cluster node 2
> > (shismn==shared item state manager on cluster node n).
> >
> > a) The first problem is making the write lock distributed. I thought
> > that maybe this could be accomplished by doing the following:
> >
> > When invoking shism1.acquireWriteLock, override it in order to also
> > invoke shism2.acquireWriteLock ... shismn.acquireWriteLock
> >
> > This way the write lock will have been acquired on all
> > shareditemstatemanagers.
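> >
> > Roughly, the wrapper I have in mind might look like the sketch below
> > (RemoteItemStateManager, ClusterLockHelper and the method names are
> > just made up for the example, not existing Jackrabbit classes, and
> > the two types would of course go into separate source files):
> >
> > // hypothetical remote view of a shared item state manager,
> > // e.g. exported as an RMI stub by each cluster node
> > public interface RemoteItemStateManager extends java.rmi.Remote {
> >     void acquireWriteLock() throws java.rmi.RemoteException;
> >     void releaseWriteLock() throws java.rmi.RemoteException;
> > }
> >
> > public class ClusterLockHelper {
> >
> >     private final RemoteItemStateManager[] others; // shism2 .. shismn
> >
> >     public ClusterLockHelper(RemoteItemStateManager[] others) {
> >         this.others = others;
> >     }
> >
> >     // called from a wrapper around shism1.acquireWriteLock so that the
> >     // write lock is held on every node before the update starts
> >     public void acquireOnAllNodes() throws java.rmi.RemoteException {
> >         for (int i = 0; i < others.length; i++) {
> >             others[i].acquireWriteLock();
> >         }
> >     }
> > }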
> >
> > b) Once the update operation is finished and the changes are
> > persisted in permanent storage, perform the remaining two operations:
> >
> > 1. shism1.releaseWriteLock (I will create such a method)
> > which will perform
> >
> > // downgrade to read lock
> > acquireReadLock();
> > rwLock.writeLock().release();
> > holdingWriteLock = false;
> >
> > and which will be invoked on all the shared item state managers 2,3...n:
> > shism2.releaseWriteLock ... shismn.releaseWriteLock
> >
> > Before releasing the write lock I will also perform
> >
> > shism2.cache.evict(...), shismn.cache.evict(...)
> >
> > where (...) will be all the item state ids that existed in
> > shism1.shared.
> >
> > This way all the item states persisted in cluster node 1 will be
> > evicted from the caches of the other nodes, thus forcing them to
> > fetch those states from the persistent storage again on the next
> > read or write operation.
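> >
> > Put together, the sequence on node 1 after a successful store would
> > roughly be the sketch below (again only a draft: evictAll is another
> > method I would have to add to the hypothetical RemoteItemStateManager
> > interface above, and passing the ids as an ItemId[] is just one
> > possible way to ship them over the wire):
> >
> > // e.g. another method on the ClusterLockHelper above, called on
> > // node 1 once the change log has been written to storage
> > void finishClusterUpdate(ItemId[] persistedIds,
> >                          RemoteItemStateManager[] others)
> >         throws java.rmi.RemoteException {
> >     // 1. tell the other nodes which cached states are now stale
> >     for (int i = 0; i < others.length; i++) {
> >         others[i].evictAll(persistedIds);
> >     }
> >     // 2. let the other nodes give up their write locks again
> >     for (int i = 0; i < others.length; i++) {
> >         others[i].releaseWriteLock();
> >     }
> >     // 3. finally downgrade the local write lock to a read lock,
> >     //    as in the snippet above
> > }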
> >
> > Do you think this makes sense?
>
> I see a couple of issues with your approach:
>
> Simply acquiring the write locks on the other cluster nodes in a random
> sequential order may lead to a deadlock situation, unless the cluster
> defines a strict order which is known to all cluster nodes and locks
> are always acquired in that order.
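>
> For example, keep the (local and remote) lock managers in a map that
> every node sorts by the same cluster node id and always iterate in that
> order. Reusing the RemoteItemStateManager interface from your sketch,
> that could look roughly like this (only an illustration, the class and
> names are made up):
>
> import java.util.Iterator;
> import java.util.TreeMap;
>
> public class OrderedClusterLocking {
>
>     // node id -> lock manager; the TreeMap keeps the entries sorted by
>     // node id, so every node acquires the write locks in the same order
>     private final TreeMap managers = new TreeMap();
>
>     public void register(String nodeId, RemoteItemStateManager manager) {
>         managers.put(nodeId, manager);
>     }
>
>     public void acquireAll() throws java.rmi.RemoteException {
>         for (Iterator it = managers.values().iterator(); it.hasNext();) {
>             ((RemoteItemStateManager) it.next()).acquireWriteLock();
>         }
>     }
> }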
>
> I'm not sure if evicting the states from the caches will do its job.
> There might be local (and transient) states that are connected to the
> shared states. Simply removing them from the cache will not work in
> that case.
Right, I was afraid that removing the state id from the shared item
state cache wouldn't be enough.
What I want to accomplish is to let the other shisms know that the
persistent storage that these states depend upon has changed. I took a
more careful look at the code, but the problem is that I don't know
where to fire the proper events: should it be on the item state itself
or on the underlying item state of an item state? And what should I do
if the underlying state does not exist? I tried to answer these
questions myself, however I do not know if I am on the right path.
Please take a look at the following draft code, which handles just the
modified states.
For every *id* obtained from shism1.shared.modified, do the following on
shism2..shismn:
Iterator it = shism1.shared.modifiedStates();
while (it.hasNext()) {
    ItemState state = (ItemState) it.next();
    ItemId id = state.getId();
    if (shism2.hasItemState(id)) {
        if (shism2.hasNonVirtualItemState(id)) {
            if (shism2.cache.isCached(id)) {
                shism2.cache.evict(id);
            }
        } else {
            // virtual or transient
            ItemState is = shism2.getItemState(id);
            // WHAT TO DO NOW? IS THIS CORRECT?
            if (is.hasOverlayedState()) {
                ItemState over = is.getOverlayedState();
                // over.copy(state); necessary?
                over.notifyStateUpdated();
            } else {
                // use shism2.loadItemState(id) instead of state?
                is.connect(state);
            }
        }
    }
}
Again, many thanks for the support you have provided me with so far. I
do not want to monopolize the list with my questions; however, some
things in the code are difficult to understand without help.
regards
Giota
>
> Finally, what happens if a cluster node crashes while holding 'remote'
> write locks on other nodes? Will they be released?
>
> regards
> marcel