On Fri, 8 Oct 1999, Tom Livingston wrote:
> Leonhard Zachl wrote:
> > but how could i tell the md driver on the primary server that  the local
> > disk is faulty and the 'nbd-disk' from the second server is ok
> 
> have you tried this?  I haven't done anything with nbd, but I would expect
> the following to happen:
> * primary server comes up again
> * primary server starts raid set with the local disk & the nbd
> * it reads both superblocks, and will see that the nbd is more up to date,
> and start the raid using only the nbd
> * then do raidhotadd /dev/md0 /dev/localdisk, and it should rebuild
> 
> note: from everything I've heard you should basically expect horrible
> performance from at least the rebuild from the nbd, if not all the time.
> You'll want to hack the raid code to never do reads from the nbd (currently
> it will load balance between the two).  And you'd probably want to set it up
> so that the secondary server only relinquishes control after the primary
> server's disk is rebuilt...  which could take forever with nbd?
> 
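Tom's recovery sequence above could be sketched with the raidtools commands (a sketch only -- the device names /dev/nd0 for the nbd export and /dev/sdb1 for the local disk are assumptions, not taken from Leonhard's setup):

```shell
# On the recovered primary: start the mirror.  md reads both
# superblocks, sees the nbd copy has the higher event count, and
# runs the array degraded on the nbd alone.
raidstart /dev/md0

# Hot-add the stale local disk; md rebuilds it from the nbd mirror.
raidhotadd /dev/md0 /dev/sdb1

# Watch the resync progress (this is the part that can crawl over nbd).
cat /proc/mdstat
```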
Perhaps this would be true on a normal network connection, but on an nbd 
raid setup I would expect one would want to run fiber-optic connections, or 
the super-fast proprietary Linux network interconnects, between the two 
machines to improve access time. This should give bandwidth between 
machines comparable to the bandwidth to the disks themselves.

I'd sure like to see a setup like this in operation. I would think for 
optimum performance it would be a 3- or 4-machine cluster:

1) machine with disk 1
2) machine with disk 2
3) diskless nbd raid box -- no moving parts, boot from diskette
4) redundant version of 3) -- no moving parts

Machine 3 would provide an NFS server that could be mounted by the web or 
database engine, with a failover path to 4 should 3 fail. Any individual 
component should be able to die without affecting the running system.
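For machine 3, the md side of such a setup might look like this minimal /etc/raidtab sketch (a guess at the layout -- /dev/nd0 and /dev/nd1 as the nbd client devices pointing at machines 1 and 2 are assumed names):

```
# RAID-1 mirror built entirely from nbd client devices
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              4
        # nbd export from machine 1
        device                  /dev/nd0
        raid-disk               0
        # nbd export from machine 2
        device                  /dev/nd1
        raid-disk               1
```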

Michael
