Thinking it through, I think the Synchronised OLTP approach is the best.
Performance could be improved by creating an input queue for each transaction
and, once the transaction has finished, applying the queued updates in a
single batch. You could then trigger an event so that any other caches could
fetch the data and do the same thing. Since caches have write locks anyway,
this would mean a lower performance hit, at the cost of more complicated
transaction code.
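A minimal sketch of that batching idea in Python (the class and method names
here are mine, not from this thread, and a real implementation would be .NET):

```python
import threading
from collections import defaultdict

class BatchedCache:
    """Sketch: writes for each transaction are queued per transaction,
    then applied to the cache in one short locked batch on commit, and
    an event notifies peer caches so they can apply the same batch."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()      # the cache's existing write lock
        self._pending = defaultdict(list)  # transaction id -> queued updates
        self._listeners = []               # peer caches notified on commit

    def queue_update(self, txn_id, key, value):
        # No cache lock needed here: each transaction owns its own queue.
        self._pending[txn_id].append((key, value))

    def apply_batch(self, batch):
        # One lock acquisition for the whole batch, not one per update.
        with self._lock:
            for key, value in batch:
                self._data[key] = value

    def commit(self, txn_id):
        batch = self._pending.pop(txn_id, [])
        self.apply_batch(batch)
        for on_commit in self._listeners:  # "trigger an event" so other
            on_commit(batch)               # caches can do the same thing
```

A peer cache would subscribe its own `apply_batch` via `_listeners.append`,
so a commit on one cache propagates the whole batch to the others at once.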

Ben


-----Original Message-----
From: Moderated discussion of advanced .NET topics.
[mailto:[EMAIL PROTECTED]]On Behalf Of Stefan
Avramtchev
Sent: Monday, 1 July 2002 7:26 PM
To: [EMAIL PROTECTED]
Subject: Re: [ADVANCED-DOTNET] Synchronising events caches across web
farms...


On Sat, 29 Jun 2002 13:59:36 +0800, Ben Kloosterman
<[EMAIL PROTECTED]> wrote:

>Another solution is, after the DB has been updated, to update each cache and
>ensure that readers of those items are locked until the entire transaction
>update has finished. At least that way you see all of the transaction or
>none of it in the cache. Is this what you meant by OLTP synchronising the
>cache? This lock could be a big performance hit, though.

Yes, this is exactly what I meant by OLTP, and its well-known performance
costs.

In my case I can't get away with a simpler architecture than distributed
business logic, and at the same time availability, recoverability and
performance are selling points, so I have to take into account (that is,
eliminate) the potential black-outs from failed cache updates etc.

And it's probably just me, but I hate firing transactions somehow orthogonal
to the normal (production) transaction flow.

That's why I'm looking for another solution...

stef

You can read messages from the Advanced DOTNET archive, unsubscribe from
Advanced DOTNET, or
subscribe to other DevelopMentor lists at http://discuss.develop.com.
