I have no idea about JGroups, so I cannot comment on this directly. However, I thought that sending only notifications that something has changed and should be reloaded from persistent storage into the cache should not do any harm, right? Especially when it is done directly after a commit.
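
Just to make the idea concrete, here is a rough sketch of what I have in mind (all class and method names are made up for illustration - nothing of this exists in Slide): after a commit the node broadcasts only the URIs that changed, and the receiving nodes drop those entries from their caches, so the next read goes back to persistent storage.

import java.io.Serializable;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: invalidate cache entries on remote nodes after a commit.
// "InvalidatingCache" and "ClusterNotifier" are illustrative names, not Slide classes.
public class InvalidatingCache {

    /** Message carrying only the URIs that changed in a committed transaction. */
    public static class InvalidationMessage implements Serializable {
        public final List<String> changedUris;
        public InvalidationMessage(List<String> changedUris) {
            this.changedUris = changedUris;
        }
    }

    /** Transport abstraction; could be implemented with JGroups, JMS, etc. */
    public interface ClusterNotifier {
        void broadcast(InvalidationMessage msg);
    }

    private final Map<String, Object> entries = new ConcurrentHashMap<>();

    /** Called locally right after a successful commit: tell the other nodes. */
    public void afterCommit(List<String> changedUris, ClusterNotifier notifier) {
        notifier.broadcast(new InvalidationMessage(changedUris));
    }

    /** Called when an invalidation message arrives from another node. */
    public void onInvalidation(InvalidationMessage msg) {
        // Drop the stale entries; the next read reloads them from persistent storage.
        msg.changedUris.forEach(entries::remove);
    }
}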

So, what I do not understand is why a transaction log would be necessary.

Oliver

Mortimore, Jamie [IT] wrote:

I agree that clustering shouldn't be done using the session since a session is user 
scoped and a store is server scoped.

I think it might be possible to get a store clustered at the slide level written 
pretty quickly using JGroups.

I would look at using the RpcDispatcher 
(http://www.jgroups.org/javagroupsnew/docs/newuser/node53.html) to ensure methods on 
all stores are called in the correct order. I would write a store proxy (that could 
wrap any store implementation) that would forward store method calls onto the wrapped 
store implementation using the RpcDispatcher.

It doesn't sound like it would be too much work - I might give this a shot myself.

For this solution to work all data involved in a store method call must be 
serializable.
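
As a rough illustration, the proxy could look like the sketch below. It uses a recent JGroups API (not identical to the one in the docs linked above), and the SlideStore interface is just a made-up stand-in for the real store interfaces. Each write call is turned into a MethodCall and dispatched to all members, including the sender, so every node applies the change to its own wrapped store. The proxy dispatches to a separate applyStoreObject() method rather than to storeObject() itself, so the remote invocation is not re-dispatched in a loop.

import java.io.Serializable;

import org.jgroups.JChannel;
import org.jgroups.blocks.MethodCall;
import org.jgroups.blocks.RequestOptions;
import org.jgroups.blocks.ResponseMode;
import org.jgroups.blocks.RpcDispatcher;

// Hypothetical store interface standing in for the real Slide store API.
interface SlideStore {
    void storeObject(String uri, Serializable data) throws Exception;
}

/** Sketch of a proxy that replicates store writes to all cluster nodes via RpcDispatcher. */
public class ReplicatingStoreProxy implements SlideStore {

    private final SlideStore delegate;        // the wrapped store implementation
    private final RpcDispatcher dispatcher;   // invokes methods on every group member

    public ReplicatingStoreProxy(SlideStore delegate, String clusterName) throws Exception {
        this.delegate = delegate;
        JChannel channel = new JChannel();
        // Remote calls are dispatched onto this object, which applies them locally.
        this.dispatcher = new RpcDispatcher(channel, this);
        channel.connect(clusterName);
    }

    /** Public API: forward the call to every member (including ourselves). */
    @Override
    public void storeObject(String uri, Serializable data) throws Exception {
        MethodCall call = new MethodCall("applyStoreObject",
                new Object[] { uri, data },
                new Class<?>[] { String.class, Serializable.class });
        // GET_ALL blocks until all members have applied the call, which keeps ordering.
        dispatcher.callRemoteMethods(null, call, new RequestOptions(ResponseMode.GET_ALL, 10000));
    }

    /** Invoked by JGroups on every node; applies the change to the local wrapped store. */
    public void applyStoreObject(String uri, Serializable data) throws Exception {
        delegate.storeObject(uri, data);
    }
}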

However, it doesn't let a node that is brought back into the cluster after a failure catch up 
with the existing nodes - a transaction log would be required to provide this functionality.

Jamie.

-----Original Message-----
From: Oliver Zeigermann [mailto:[EMAIL PROTECTED]
Sent: 10 July 2004 08:48
To: Slide Developers Mailing List
Subject: Re: StandardStore, AbstractStore, caching


James Mason wrote:

I don't know that I'm the best tutor for this, so you bets your money,
you takes your chances ;).
Comments below..



[EMAIL PROTECTED] 7/9/2004 2:05:08 PM >>>

I do not see how this would work, but am ready to be your pupil in this

matter.

What I do not understand is:

(1) If I add the cache to the session, the application server is responsible for storing it persistently. This certainly is not what is intended for a cache, right?
[James>>]
Correct me if I'm wrong, but the current cache implementation isn't
persistent (it's all in memory). Putting the cache in the session
wouldn't be for persistence (although since Tomcat supports persistent
sessions, that's something to watch out for); rather, it would be to take
advantage of an existing replication technique.


... I guess replication will take at least as much time as persisting the data, right?


(2) Additionally, I do not know how a session can be used to pass information between arbitrary cluster nodes. As you said, the session will be taken over only if one node fails.
[James>>]
Sorry if I implied it was only good for failover. After every request
Tomcat sends out any changes that happened to the session during that
request. This allows the other nodes in the cluster to pick up the
changes and be ready in case the first node fails. The only reason to
put the cache in the session is to take advantage of this replication.


AFAIK, if I put the whole cache into the session, Tomcat will have no idea it has changed, as the reference stays the same all the time. What you would have to do is store every entry in the session individually, right?
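
To illustrate what I mean - a rough sketch, assuming Tomcat's delta replication only sees attributes that are explicitly (re)set during a request; the CacheEntry class is made up:

import java.io.Serializable;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Sketch only: mutating an object already stored in the session is not detected,
// so every changed entry would have to be written back explicitly.
public class SessionCacheSketch {

    /** Hypothetical serializable cache entry. */
    public static class CacheEntry implements Serializable {
        public final String uri;
        public final byte[] content;
        public CacheEntry(String uri, byte[] content) {
            this.uri = uri;
            this.content = content;
        }
    }

    public void cacheEntry(HttpServletRequest request, CacheEntry entry) {
        HttpSession session = request.getSession();
        // Re-setting the attribute is what marks it as changed for replication;
        // modifying a cache map already held in the session would go unnoticed.
        session.setAttribute("cache:" + entry.uri, entry);
    }
}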



Oliver

[James>>]
The more I look at this the less appealing using the session is,
though. For one, it's tied to the current user. Since the cache kinda
needs to be global this would be a problem ;).

Here's info on how Tomcat replicates sessions:
http://jakarta.apache.org/tomcat/tomcat-5.0-doc/cluster-howto.html.
The same technique would work for replicating changes to a cache. In
fact, looking through this nice list of java caching solutions
(http://www.manageability.org/blog/stuff/distributed-cache-java/view) it
looks like all the ones that support distributed caching do use a
similar technique.

I think either taking a similar approach to replicating changes or
finding a caching library that fits well with the transaction
architecture are the best choices here.


Agreed. I would go for replicating changes, as every component you will need is already there. And except for the management, this should not be too hard to implement.


One thing I was thinking about with clustering, though: are LOCKs
cached at the moment? I can easily see two LOCK requests coming in to
different nodes in a cluster in close proximity. If the locks are cached,
there's a potential for data loss.


Sounds correct. Currently, locks have a cache of their own. It would be very easy to switch it off by configuration.

Oliver

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

