<snip>
However, I thought having only notifications that stuff has changed and 
should be reloaded from persistent storage into the cache should not do 
any harm, right?
</snip>

Correct - assuming both stores use the same underlying data source (e.g. the same 
database). If both servers use different databases (to avoid a single point of 
failure) or file stores then this wouldn't work.

Also if you only use notifications and not group messaging then there is a lag in the 
update reaching the server being notified. This could mean clients get inconsistent 
results.
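To make the same-data-source case concrete, here is a minimal sketch (class and method names are my own for illustration, not Slide APIs) of a cache that only receives change notifications and lazily reloads from the shared store. The usage note below also shows the stale window that exists before a notification arrives:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of notification-only cache coherence: both servers read
// from the same backing store, so a peer's notification only has to evict the
// cached entry; the next read reloads the current value from shared storage.
public class NotifyingCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> backingStore; // shared database lookup

    public NotifyingCache(Function<String, String> backingStore) {
        this.backingStore = backingStore;
    }

    public String get(String key) {
        // Serve from cache, loading from the shared store on a miss.
        return cache.computeIfAbsent(key, backingStore);
    }

    // Called when another node notifies us that 'key' changed after a commit.
    public void onChangeNotification(String key) {
        cache.remove(key); // reload lazily from the shared store on next access
    }
}
```

Until `onChangeNotification` is delivered, a `get` on this node still returns the old cached value even though the shared store already holds the new one, which is exactly the inconsistency window described above.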

<snip>
So, what I do not understand is why a transaction log would be necessary.
</snip>

Assuming that both servers use different underlying data sources then a transaction 
log would be useful under the following circumstance:

Consider a situation with two servers (server A and server B) where store replication 
is enabled.

1. Client updates server A
2. Server B is updated successfully using replication
3. Server B goes down
4. Client updates server A
5. Server B can't be updated
...
6. Server B is brought back into the cluster.
7. Client updates server A
8. Server B update via replication fails because the store used by server B is not in 
sync with the store used by server A.

If each store maintains a transaction log then when a node is brought back into the 
cluster after a failure it can retrieve any missed updates from another node in the 
cluster by comparing their transaction logs.
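As a rough illustration, the comparison step might look like the following Java sketch (the `LogEntry` record, sequence numbers, and `missedEntries` helper are all hypothetical names, not Slide or JGroups APIs):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of transaction-log based recovery. Each store keeps an
// ordered log of committed updates; a node rejoining the cluster asks a peer
// for every entry newer than the last one it applied before going down.
public class TxLogRecovery {

    // A single committed update, identified by a monotonically increasing
    // sequence number (names are illustrative only).
    public record LogEntry(long seq, String operation) {}

    // Entries the rejoining node missed: everything in the peer's log with a
    // sequence number greater than the last one the node applied locally.
    public static List<LogEntry> missedEntries(List<LogEntry> peerLog,
                                               long lastAppliedSeq) {
        List<LogEntry> missed = new ArrayList<>();
        for (LogEntry entry : peerLog) {
            if (entry.seq() > lastAppliedSeq) {
                missed.add(entry);
            }
        }
        return missed;
    }
}
```

On rejoin, server B would run something like `missedEntries` against server A's log using its own highest applied sequence number and replay the result against its local store; without a log, that catch-up step has no data to work from.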

If this functionality is not provided then recovery must be performed manually. This 
means taking all servers in the cluster off-line temporarily in order to take a 
snapshot of an up-to-date data source and restore the failed server's data source 
from that.

Providing automatic recovery is potentially a lot of work, so I would just get 
replication working and think about recovery after that.

Jamie.

-----Original Message-----
From: Oliver Zeigermann [mailto:[EMAIL PROTECTED]
Sent: 12 July 2004 12:24
To: Slide Developers Mailing List
Subject: Re: StandardStore, AbstractStore, caching


I have no idea about jgroups, so I can not comment on this directly. 
However, I thought having only notifications that stuff has changed and 
should be reloaded from persistent storage into the cache should not do 
any harm, right? Especially, when it is done directly after a commit.

So, what I do not understand is why a transaction log would be necessary.

Oliver

Mortimore, Jamie [IT] wrote:

> I agree that clustering shouldn't be done using the session since a session is user 
> scoped and a store is server scoped.
> 
> I think it might be possible to get a store clustered at the slide level written 
> pretty quickly using JGroups.
> 
> I would look at using the RpcDispatcher 
> (http://www.jgroups.org/javagroupsnew/docs/newuser/node53.html) to ensure methods on 
> all stores are called in the correct order. I would write a store proxy (that could 
> wrap any store implementation) that would forward store method calls onto the 
> wrapped store implementation using the RpcDispatcher.
> 
> It doesn't sound like it would be too much work - I might give this a shot myself.
> 
> For this solution to work all data involved in a store method call must be 
> serializable.
> 
> However it doesn't provide recovery against existing nodes in a cluster when a node 
> is brought back into a cluster after failure - a transaction log would be required to 
> provide this functionality.
> 
> Jamie.
> 
> -----Original Message-----
> From: Oliver Zeigermann [mailto:[EMAIL PROTECTED]
> Sent: 10 July 2004 08:48
> To: Slide Developers Mailing List
> Subject: Re: StandardStore, AbstractStore, caching
> 
> 
> James Mason wrote:
> 
>>I don't know that I'm the best tutor for this, so you bets your money,
>>you takes your chances ;).
>>Comments below..
>>
>>
>>
>>>>>[EMAIL PROTECTED] 7/9/2004 2:05:08 PM >>>
>>
>>I do not see how this would work, but am ready to be your pupil in this
>>
>>matter.
>>
>>What I do not understand is:
>>
>>(1) If I add the cache to the session, the application server is 
>>responsible for storing it persistently. This certainly is not what is 
>>intended with the cache, right?
>>[James>>]
>>Correct me if I'm wrong, but the current cache implementation isn't
>>persistent (it's all in memory). Putting the cache in the session
>>wouldn't be for persistence (although since Tomcat support persistent
>>sessions that's something to watch out for), rather it would be to take
>>advantage of an existing replication technique.
> 
> 
> ... I guess replication will take at least as much time as persisting 
> the data, right?
> 
> 
>>(2) Additionally, I do not know how a session can be used to pass 
>>information between arbitrary cluster nodes. As you said the session 
>>will be taken over only if one node fails.
>>[James>>]
>>Sorry if I implied it was only good for failover. After every request
>>Tomcat sends out any changes that happened to the session during that
>>request. This allows the other nodes in the cluster to pick up the
>>changes and be ready in case the first node fails. The only reason to
>>put the cache in the session is to take advantage of this replication.
> 
> 
> AFAIK if I put the whole cache into the session, Tomcat will have no idea 
> it has changed, as the reference stays the same all the time. What you 
> would have to do is store every changed entry into the session explicitly, right?
> 
> 
> 
>>Oliver
>>
>>[James>>]
>>The more I look at this the less appealing using the session is,
>>though. For one, it's tied to the current user. Since the cache kinda
>>needs to be global this would be a problem ;).
>>
>>Here's info on how Tomcat replicates sessions:
>>http://jakarta.apache.org/tomcat/tomcat-5.0-doc/cluster-howto.html.
>>The same technique would work for replicating changes to a cache. In
>>fact, looking through this nice list of java caching solutions
>>(http://www.manageability.org/blog/stuff/distributed-cache-java/view) it
>>looks like all the ones that support distributed caching do use a
>>similar technique.
>>
>>I think either taking a similar approach to replicating changes or
>>finding a caching library that fits well with the transaction
>>architecture are the best choices here.
> 
> 
> Agreed. I would go for replicating changes as every component you will 
> need is already there. And except for the management this should not be 
> too hard to implement.
> 
> 
>>One thing I was thinking about with clustering, though. Are LOCKs
>>cached at the moment? I can easily see two LOCK requests coming in to
>>different nodes in a cluster in close proximity. If the locks are cached
>>there's a potential for data loss.
> 
> 
> Sounds correct. Currently, locks have a cache of their own. It would be 
> very easy to switch it off by configuration.
> 
> Oliver
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
> 

