Alex, how about this.

'Updating caches' to me means 'reconfiguring the business tier' (in
quantity, in quality, or in both aspects).

What we ideally want is for all changes to be immediately introduced to all
PCs in the cluster (which I call a group) without any compromise in integrity.
This means:
"... each machine must be capable of detecting when one of the servers it's
attempted to update has become unavailable... at which point it removes the
server from the list of servers to update (and propagates this information
to all other servers)."

For that to happen (roughly the contract sketched in the snippet below):
 - all PCs must behave as a group, and the group is the long-lived entity
 - each and every PC in the group must experience the same events in the
same order as all the other PCs in the group
 - the group must contain only healthy members (fast failure detection)
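
Here is a minimal sketch of that contract as a C# interface. The name
IGroupChannel and its members are purely illustrative assumptions of mine,
not the API of any real toolkit:

    using System;

    public interface IGroupChannel
    {
        // Broadcast to the whole group; the protocol guarantees that every
        // surviving member delivers it, and in the same total order.
        void Send(byte[] message);

        // Raised once per delivered message, in that agreed order.
        event Action<byte[]> MessageDelivered;

        // Raised whenever the membership ("view") changes, e.g. when a
        // failed member is excluded or a recovered one rejoins.
        event Action<string[]> ViewChanged;
    }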

To implement this I recommend using one of the group multicast
communication protocols. Multicasting is fast and cheap, and the group
protocols make it even more efficient. It works for me!

If you are interested, search for 'Isis', 'Transis' or 'Totem' at
http://citeseer.nj.nec.com/cs
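
Just to show how cheap the raw transport is, here is a minimal sketch using
UdpClient; the group address and port are arbitrary examples of mine. Note
that bare IP multicast on its own gives you none of the ordering or
membership guarantees above; that is exactly what the group protocols add
on top:

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    IPAddress group = IPAddress.Parse("239.0.0.1");   // example group address
    const int port = 5000;                            // example port

    // Sender: one datagram reaches every machine that joined the group.
    using (var sender = new UdpClient())
    {
        byte[] payload = Encoding.UTF8.GetBytes("cache-invalidate:Customers");
        sender.Send(payload, payload.Length, new IPEndPoint(group, port));
    }

    // Receiver: join the group and block for the next datagram.
    using (var receiver = new UdpClient(port))
    {
        receiver.JoinMulticastGroup(group);
        IPEndPoint remote = null;
        byte[] data = receiver.Receive(ref remote);
        Console.WriteLine(Encoding.UTF8.GetString(data));
    }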

Once you have that group stuff underneath your BCs, you can be sure that if
you saw a thing then everybody saw the same thing, and the opposite is true
as well. If somebody failed to see the thing, the next thing everybody sees
is that member leaving the group. Great stuff, isn't it!
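
As a hypothetical illustration (again assuming the IGroupChannel sketch
from above, not a real API), a cache sitting on top of such a channel could
look like this: every member applies invalidations in the same total order,
and a view change is the moment you learn that a machine which may have
missed an update is no longer in the group:

    using System;
    using System.Collections.Generic;
    using System.Text;

    public class GroupedCache
    {
        private readonly IGroupChannel _channel;
        private readonly Dictionary<string, object> _items =
            new Dictionary<string, object>();

        public GroupedCache(IGroupChannel channel)
        {
            _channel = channel;
            _channel.MessageDelivered += OnInvalidate;  // same order everywhere
            _channel.ViewChanged += OnViewChanged;      // failed members drop out
        }

        // Any member can publish; the group protocol delivers it to all.
        public void Invalidate(string key)
        {
            _channel.Send(Encoding.UTF8.GetBytes(key));
        }

        private void OnInvalidate(byte[] message)
        {
            string key = Encoding.UTF8.GetString(message);
            _items.Remove(key);          // next read reloads from the DB
        }

        private void OnViewChanged(string[] members)
        {
            // A member that missed an update has, by definition, left this view.
            Console.WriteLine("Current group: " + string.Join(", ", members));
        }
    }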

stef

On Tue, 2 Jul 2002 14:09:53 +1200, Alex Henderson
<[EMAIL PROTECTED]> wrote:

>Updating caches within the business tier across multiple machines is an
>issue I'm currently wrestling with as well!
>
>So far it's been suggested:
>
>1. That an extended sql proc (dll) + triggers on tables could be used to
>push updates from the db server to all the webservers/business object
>servers in the farm when cached items must be updated.
>
>2. Remoting could be used to notify all servers when one changes.
>
>The key concern of mine is just how the server will know if its cache is
>stale when a cache update event fails to work properly for whatever reason
>(i.e. one server ends up out of sync).
>
>Currently my plan is to go for option 2 - this allows the business objects
>to swap caches (via datasets/whatever) without having to hit the DB again
>and ensures they are all "truly" in sync with each other, which has a very
>"peer-to-peer" feel to it.  To make this truly robust I feel that each
>machine must be capable of detecting when one of the servers it's attempted
>to update has become unavailable... at which point it removes the server
>from the list of servers to update (and propagates this information to all
>other servers).
>
>Secondly when a website/business object server/whatever starts up it should
>make an attempt to contact a server (if it's a web farm this should be easy)
>to post its return to being online and as a result should be sent the
>updated cache information.
>
>The only problem is determining the best way for a server to establish it's
>no longer part of the "cache update loop" so it could close itself down (of
>course you'd have to be careful in case it was the only machine left online)
>... any suggestions or does anyone know how this is handled in other
>environments - i.e. how do EJB containers etc. handle this across multiple
>servers (I assume they must...).  And when are we going to see Business
>object infrastructures like EJB's etc. creeping into the .Net Framework
>arena (though I have been rolling my own for a while now...)
>
>Any suggestions/discussion gladly appreciated!
>
>- Alex
>
>-----Original Message-----
>From: Stefan Avramtchev [mailto:[EMAIL PROTECTED]]
>Sent: Monday, July 01, 2002 11:26 PM
>To: [EMAIL PROTECTED]
>Subject: Re: [ADVANCED-DOTNET] Synchronsing events caches across web
>farms...
>
>On Sat, 29 Jun 2002 13:59:36 +0800, Ben Kloosterman
><[EMAIL PROTECTED]> wrote:
>
>>Another solution is, after the DB has been updated, to update each cache and
>>ensure readers of those items are locked until the entire transaction
>>update has finished. At least that way you see all of the transaction or none
>>of the transaction in the cache. Is this what you meant by OLTP
>>synchronizing the cache? This lock could be a big performance hit though.
>
>Yes, this is exactly what I meant by OLTP and its well-known performance
>costs.
>
>In my case I can't get away with a simpler architecture than distributed
>business logic, and at the same time availability, recoverability and
>performance are selling points, so I have to take into account (that is,
>eliminate) the potential black-outs from failed cache updates etc.
>
>And it's probably just me, but I hate firing transactions somehow orthogonal to
>the normal (production) transaction flow.
>
>That's why I'm looking for another solution...
>
>stef
>

You can read messages from the Advanced DOTNET archive, unsubscribe from Advanced DOTNET, or
subscribe to other DevelopMentor lists at http://discuss.develop.com.
