On Wed, May 13, 2009 at 01:11:43PM +0800, Uwe Dippel wrote:
> Beautiful, by the looks of it!
> 
> I tried it here on two 300 GB U320 disks, and the setup went through
> without any warnings (?? most users seem to encounter some?).
> What I did was: (my system disk is sd0)
> 
> fdisk -iy sd1
> fdisk -iy sd2
> 
> printf "a\n\n\n\nRAID\nw\nq\n\n" | disklabel -E sd1
> printf "a\n\n\n\nRAID\nw\nq\n\n" | disklabel -E sd2
> 
> bioctl -c 1 -l /dev/sd1a,/dev/sd2a softraid0
> 
> dd if=/dev/zero of=/dev/rsd3c bs=1m count=1
> disklabel -E sd3 (creating my partitions/slices)
> 
> newfs /dev/rsd3a
> newfs /dev/rsd3b
> 
> mount /dev/sd3b /mnt/
> cd /mnt/
> [pull one hot-swap out]

If I were to try this (search the archives; what and how to restore a
broken mirror has been discussed recently), this is where I would
search very hard for error indications. Does bioctl really say
nothing? Try "bioctl sd3", "bioctl softraid0", "bioctl -q sd3",
"bioctl -q softraid0".

> echo Nonsense > testo
> [push the disk back in]

That pushed-in disk is most probably still regarded by softraid as
broken. The feature of starting a rebuild with something like
"bioctl -R sd2" is missing from softraid.

The existing repair option, as I recall it (again, search the
archives), is to back up the still-working filesystems sd3a and sd3b
from the broken mirror, re-create the array from scratch, and restore
them.
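
As a sketch only (device names follow the setup quoted above, the
dump file paths are placeholders, and each step should be checked
against bioctl(8), dump(8) and restore(8) before trusting it with
real data):

  # back up the still-working filesystems from the degraded mirror
  dump -0auf /backup/sd3a.dump /dev/rsd3a
  dump -0auf /backup/sd3b.dump /dev/rsd3b

  # detach the degraded volume, then re-create the mirror from scratch
  umount /mnt
  bioctl -d sd3
  bioctl -c 1 -l /dev/sd1a,/dev/sd2a softraid0

  # re-create the disklabel and filesystems, then restore each dump
  disklabel -E sd3
  newfs /dev/rsd3a
  newfs /dev/rsd3b
  mount /dev/sd3a /mnt && (cd /mnt && restore -rf /backup/sd3a.dump)

Note that the re-created array is not guaranteed to attach as sd3
again; check dmesg for the actual device name before running newfs.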

> [pull the other disk]

That sounds fatal. You should repair the RAID mirror,
not break the working half. Now both mirror halves
are probably regarded as broken. Your RAID is doomed.

> # ls -l
> total 4
> -rw-r--r--  1 root  wheel  9 May 13 12:00 testo
> [everything okay until here]
> # rm testo 
> 
> rm: testo: Input/output error
> [I guess this may still happen]
> 
> But now my question: All posts say all info is in 'man softraid' and 
> 'man bioctl'. There is nothing about *warnings* in there. I also tried 
> bioctl -a/-q, but neither would indicate that anything was wrong when
> one of the drives was pulled.
> 
> This will be a production server, but it can take downtime if necessary.
> However:
> 1. I *need to know* when a disk goes offline.
> 2. I need to know, in real life(!), whether I can simply use the broken
> mirror to save my data, and how I can mount it in another machine.
> Alas, softraid and bioctl are silent about these two.
> 
> Another reason for asking:
> Next I issued 'reboot', and could play hangman (the machine hung) :(
> 
> After the reboot, I got:
> ...
> softraid0 at root
> softraid0: sd3 was not shutdown properly
> scsibus3 at softraid0: 1 targets, initiator 1
> sd3 at scsibus3 targ 0 lun 0: <OPENBSD, SR RAID 1, 003> SCSI2 0/direct fixed
> sd3: 286094MB, 36471 cyl, 255 head, 63 sec, 512 bytes/sec, 585922538 sec 
> total
> 
> Now I wonder what to do. Will a traditional fsck do, or do I have to 
> recreate the softraid?
> 
> Can anyone please help me further?
> 
> Uwe

-- 

/ Raimo Niskanen, Erlang/OTP, Ericsson AB
