On Mon, Mar 20, 2006 at 08:31:42AM +0100, Anthony Howe wrote:
> Joachim Schipper wrote:
> >On Fri, Mar 17, 2006 at 07:36:13PM +0100, Anthony Howe wrote:
> >>Joachim Schipper wrote:
> >>>>--wd0a----------    --wd1a----------
> >>>>/     (bootable)    /     (bootable)
> >>>>/tmp                /tmp
> >>>>/usr                /usr
> >>>>/var                /var
> >>>>
> >>>>--wd0d----------    --wd1d----------      
> >>>>raid0    (root)     raid0     (root)
> >>>>
> >>>>  --raid0a-----       --raid0a-----
> >>>>  /                   /
> >>>>  /usr                /usr
> >>>Hmm - why include / and /usr again? OpenBSD will boot just fine off a
> >>>RAID array, even a failed one, provided you can get the kernel read
> >>>somehow.
> >>You have to have a RAID slice with / and /usr. If you mount just wd0a
> >>for / and /usr, then if wd0 dies you have to reboot to mount with
> >>wd1a. If you happen to be a long way away from the console, then you're
> >>toast, unless you went the extra distance and set up a backup fstab on
> >>wd1a in advance.
> >>
> >>If you have them in a RAID and a disk dies, you can continue to use
> >>the system (degraded, of course) without having to reboot until the
> >>new disk and you are present at the console.
> >
> >Maybe I don't understand, but how does it follow from the above that it
> >is useful to have a third and fourth copy?
> >
> >I see the point in keeping / and /usr on RAID - the system will stay
> >running and come up even if one of the underlying disks fails.
> 
> How would you reboot a degraded system where wd0, containing your / & 
> /usr, is dead? How would you reboot a system in order to reconstruct a 
> replacement? Maybe a full /usr is unnecessary, but what if you had to 
> rebuild the kernel for some reason before you could autoconfigure and 
> transfer to the RAID? Suppose you only have the one machine at hand, 
> which is the one with the RAID. Both disks must be bootable and should 
> have all the tools you deem necessary to recover.

Both disks must be bootable, yes, but only sufficiently far to
(auto)mount the RAID. RAID sets can be configured to be automounted as
root at boot; if you make sure that *some* kernel with RAID support is
always accessible, you should always be able to mount / and /usr.
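For concreteness, here is a minimal sketch of what such a setup might look
like with OpenBSD's RAIDframe, assuming a RAID 1 mirror over wd0d and wd1d
as in the layout quoted above. The serial number and the 128-sector stripe
size are illustrative values, not taken from this thread:

```shell
# /etc/raid0.conf -- hypothetical RAID 1 mirror over the two 'd' slices
# START array: 1 row, 2 columns (disks), 0 spares
START array
1 2 0

START disks
/dev/wd0d
/dev/wd1d

# sectors per stripe unit, stripe units per parity unit,
# stripe units per reconstruction unit, RAID level 1
START layout
128 1 1 1

START queue
fifo 100

# One-time setup (run as root; destroys any data on the slices):
#   raidctl -C /etc/raid0.conf raid0   # force-configure the set the first time
#   raidctl -I 2006032001 raid0        # stamp component labels with a serial
#   raidctl -iv raid0                  # initialize parity (sync the mirror)
#   raidctl -A root raid0              # mark the set auto-configurable as root
```

With the set marked auto-configurable as root, and a kernel built with
`option RAID_AUTOCONFIG`, either disk's boot blocks can load a kernel that
then assembles raid0 on its own and mounts it as /, which is the behaviour
described above.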

Now, it might be useful to have a second, non-autoconfiguring kernel,
plus a minimal system, lying around; this could be very valuable if the
RAID is somehow hosed badly enough that it will not mount.

However, having two such systems seems a little excessive. Of course,
failing disks do strange things... so it might be useful. But it should
not typically be necessary.

> Now, on smallish disks, installing more than just the base system might 
> not be possible (or necessary) space-wise, and so you have alternative 
> recovery methods ready (if you can remember where you put them), but 
> when you're talking 40G+ disks, there is ample space. Today's hard 
> disks are so large that I worry how SOHO sites can afford suitable 
> backup solutions, but that's another discussion. The point being, if 
> I'm building a RAID, it's typically with large disks, and I don't want 
> to take any chances being caught short when one of those disks dies, so 
> I burn one or two gigabytes for bootable self-sufficient rescue slices 
> per disk.

Of course, that's the other side - there is little incentive not to be
wasteful. Nonetheless, it *is* wasteful.

                Joachim
