Wouldn't it be easier to just buy an external RAID enclosure with dual
SCSI ports, and connect both machines to the same disk
backend? Your replication problem goes away, and you have half as
many disks that can fail.
Some models even allow for connection among three or more machines,
although they can get pretty pricey. I'm sure I have some URLs around
somewhere, if anyone wants more information.
Todd
At 09:54 AM 4/2/01, Paul Cotter wrote:
>[OT] - but...
>
>Do not throw out database replication as a solution. Trying to maintain
>synchronicity across multiple databases is not a trivial task. You have to
>cope with single-point failure at the transaction level (e.g. the update on
>server 'one' works, but server 'two' fails), clashes (the same record
>updated differently on two servers), communication failures between servers,
>single-server maintenance, 'non-authorized' updates to a single server, and
>so on.
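A minimal DBI sketch of that first failure mode, for anyone who hasn't hit
it yet (the DSNs, table and column names are made up for illustration): even
with both updates wrapped in an eval, a crash between the two commits leaves
the servers disagreeing.

    use DBI;

    # Same schema on both servers; AutoCommit off so we control the commits.
    my $dbh_one = DBI->connect('dbi:Pg:dbname=app;host=server-one', 'user', 'pw',
                               { RaiseError => 1, AutoCommit => 0 });
    my $dbh_two = DBI->connect('dbi:Pg:dbname=app;host=server-two', 'user', 'pw',
                               { RaiseError => 1, AutoCommit => 0 });

    my $sql = 'UPDATE accounts SET balance = balance - ? WHERE id = ?';

    eval {
        $dbh_one->do($sql, undef, 100, 42);
        $dbh_two->do($sql, undef, 100, 42);   # dies here => roll both back below
        $dbh_one->commit;
        $dbh_two->commit;                     # die (or crash) here => out of sync
    };
    if ($@) {
        warn "dual update failed: $@";
        eval { $_->rollback for ($dbh_one, $dbh_two) };
    }
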
>
>Even simplistic methods, such as replacing all updates with stored
>procedures that can update across multiple databases (e.g. Oracle, Sybase,
>etc.), will have problems. If you wish to separate the servers
>geographically, then loss of the network will cause problems unless you
>adopt a store-and-forward basis.
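On the store-and-forward point, a rough sketch of the idea using Storable:
updates that cannot reach the peer get appended to a local spool file and
replayed in order once the link comes back. The spool path and update format
are invented for the example.

    use Storable qw(nstore retrieve);

    my $spool_file = '/var/spool/myapp/pending-updates';

    # Called when the remote server is unreachable.
    sub queue_update {
        my ($update) = @_;                       # e.g. { sql => ..., args => [...] }
        my $pending = -e $spool_file ? retrieve($spool_file) : [];
        push @$pending, $update;
        nstore($pending, $spool_file);
    }

    # Called once the network is back; $apply is a coderef that sends one
    # update to the peer and dies on failure.
    sub flush_updates {
        my ($apply) = @_;
        return unless -e $spool_file;
        my $pending = retrieve($spool_file);
        while (@$pending) {
            my $update = $pending->[0];
            last unless eval { $apply->($update); 1 };   # stop on failure, keep order
            shift @$pending;
            nstore($pending, $spool_file);               # persist progress after each send
        }
    }
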
>
>One method that has a reasonable level of success is a transaction log
>analyser. Updates (as opposed to reads) are applied to a single server, and
>the transaction log analyser applies them to the other servers. However, for
>ease of creation and maintenance, a publish-and-subscribe replication system
>is easiest. Remember that only certain data need be replicated. I have tried
>in the past to extract to cached flat files for performance. At the end of
>the day it is usually cheaper to go out and buy more hardware.
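Extracting to cached flat files can be as simple as the sketch below:
Storable plus a cron-driven refresh against the one authoritative server,
with the web servers only ever reading the file. The file name, DSN and
query are invented for the example.

    use DBI;
    use Storable qw(nstore retrieve);

    my $cache_file = '/var/cache/myapp/products.stor';

    # Run periodically (e.g. from cron) on or against the master database.
    sub refresh_cache {
        my $dbh  = DBI->connect('dbi:Pg:dbname=app;host=db-master', 'user', 'pw',
                                { RaiseError => 1 });
        my $rows = $dbh->selectall_arrayref(
            'SELECT id, name, price FROM products', { Slice => {} });
        nstore($rows, "$cache_file.tmp");
        rename "$cache_file.tmp", $cache_file;   # atomic swap, no partial reads
    }

    # Called from the web servers; just a local file read, no database hit.
    sub read_cache {
        return retrieve($cache_file);
    }
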
>
> > This is important when clustering for redundancy purposes,
> >
> > I'm trying to address 2 issues:
> >
> > A. Avoiding a single point of failure associated with having
> > a central repository for the data, such as an NFS share or
> > a single database server.
> > B. Avoiding the overhead from using heavyweight tools like
> > database replication.
> >
> > So I've been thinking about how to pull that off, and I think
> > I've figured out how, as long as I don't need every machine to
> > have exactly the same version of the data structure at all times.
> >
> > What it comes down to is implementing 2 classes: one is a daemon
> > running on each server in the cluster, responsible for handling
> > requests to update the data across the network; the other is a
> > class usable inside mod_perl that handles local updates and
> > informs the other servers of them.
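For what it's worth, a very rough sketch of what the daemon half could look
like, using Net::Daemon for the listen/accept loop and Cache::FileCache for
the local store. The port number and the one-line "SET key value" wire
protocol are invented, and there is no authentication or handling of complex
values.

    package Cache::Clustered::Server;
    use strict;
    use base 'Net::Daemon';
    use Cache::FileCache;

    # Net::Daemon calls Run() once per client connection.
    sub Run {
        my $self  = shift;
        my $sock  = $self->{socket};
        my $cache = Cache::FileCache->new({ namespace => 'cluster' });

        while (defined(my $line = $sock->getline)) {
            chomp $line;
            if (my ($key, $value) = $line =~ /^SET (\S+) (.*)$/) {
                $cache->set($key, $value);        # apply the peer's update locally
                $sock->print("OK\n");
            }
            else {
                $sock->print("ERR unknown command\n");
            }
        }
    }

    package main;
    # Standard Net::Daemon options: fork per connection, no pidfile.
    Cache::Clustered::Server->new(
        { localport => 7050, mode => 'fork', pidfile => 'none' }, \@ARGV
    )->Bind();
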
> >
> > I believe I wouldn't be the only person finding something like
> > this terrifically useful. Furthermore, I see that Cache::Cache
> > could be the underlying basis for those classes. Most of the
> > deep network programming is already there in Net::Daemon.
> >
> > What say y'all to something like Cache::Clustered::Server and
> > Cache::Clustered::Client::* ?
> >
> > --Christopher Everett
> >
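
And a matching sketch of the client half, i.e. what Cache::Clustered::Client
could boil down to: update the local cache first, then do a best-effort push
of the change to each peer daemon. The package name, peer list format and
protocol just mirror the made-up server sketch above; a real version would
need serialization, retries, and a plan for peers that are down.

    package Cache::Clustered::Client;
    use strict;
    use Cache::FileCache;
    use IO::Socket::INET;

    sub new {
        my ($class, %args) = @_;
        return bless {
            cache => Cache::FileCache->new({ namespace => 'cluster' }),
            peers => $args{peers} || [],     # e.g. [ 'web2:7050', 'web3:7050' ]
        }, $class;
    }

    sub get { my ($self, $key) = @_; return $self->{cache}->get($key) }

    sub set {
        my ($self, $key, $value) = @_;
        $self->{cache}->set($key, $value);   # the local update always wins here

        for my $peer (@{ $self->{peers} }) { # best-effort notification of the others
            my $sock = IO::Socket::INET->new(PeerAddr => $peer, Timeout => 2)
                or next;
            $sock->print("SET $key $value\n");
            $sock->getline;                  # wait for the OK, but ignore failures
            $sock->close;
        }
    }

    1;

Inside a mod_perl handler that would just be $client->set('some_key',
$new_value), with every box in the cluster converging a round trip or so
later, modulo the failure cases Paul lists above.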