New RAID-1 system with SuSE 6.1/kernel 2.2.10: one IDE HD as the boot
device, and two 9 GB SCSI disks as the RAID-1 devices.

escudo:~ # cat /etc/fstab 
/dev/hda4       swap                      swap            defaults   0   0
/dev/hda1       /                         ext2            defaults   1   1
/dev/hda2       /home                     ext2            defaults   1   2

# RAID-1 DEVICES see /etc/raidtab for more details
/dev/md0        /u01                      ext2            defaults     1   2
/dev/md5        /u02                      ext2            defaults     1   2
/dev/md6        /u03                      ext2            defaults     1   2
/dev/md7        /u04                      ext2            defaults     1   2
/dev/md8        /u05                      ext2            defaults     1   2

none            /proc                     proc            defaults   0   0
none            /dev/pts                  devpts          defaults   0   0
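
For reference, the /etc/raidtab stanza behind /dev/md0 looks roughly like
this (only a sketch -- the second mirror is assumed to sit on /dev/sdb1,
and the chunk-size is a guess; md5-md8 follow the same pattern):

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1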

Everything was working great, but I wanted to simulate an HD crash to
see the recovery process.  I halted the box, unplugged one of the two
SCSI disks, and booted.  Everything came up fine with the error
messages one would expect, and I was able to access my data from the
one live SCSI disk.  I then halted the system again, plugged the 'bad'
SCSI disk back in, and rebooted to see what would happen.  This is
what I get now:

escudo:~ # cat /proc/mdstat
Personalities : [raid1] 
read_ahead 1024 sectors
md8 : active raid1 sda8[0] 2056192 blocks [2/1] [U_]
md7 : active raid1 sda7[0] 2032064 blocks [2/1] [U_]
md6 : active raid1 sda6[0] 2032064 blocks [2/1] [U_]
md5 : active raid1 sda5[0] 2040128 blocks [2/1] [U_]
md0 : active raid1 sda1[0] 722816 blocks [2/1] [U_]
unused devices: <none>

and from 'dmesg':
sda1's event counter: 00000015
md0: max total readahead window set to 128k
md0: 1 data-disks, max readahead per data-disk: 128k
raid1: device sda1 operational as mirror 0
raid1: md0, not all disks are operational -- trying to recover array
raid1: raid set md0 active with 1 out of 2 mirrors
md: updating md0 RAID superblock on device
sda1 [events: 00000016](write) sda1's sb offset: 722816
md: recovery thread got woken up ...
md0: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...

(I get the same for each RAID partition.)

From the above I assume I am still running off of the one disk in the
array that I did not unplug.  Now the questions:

How do I re-sync these RAID devices? 
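
(My guess is that the failed mirror has to be hot-added back by hand with
raidtools' raidhotadd -- something like the following, assuming the
re-plugged disk shows up again as /dev/sdb with its partitions intact --
but is that the right approach?)

escudo:~ # raidhotadd /dev/md0 /dev/sdb1
escudo:~ # raidhotadd /dev/md5 /dev/sdb5
escudo:~ # raidhotadd /dev/md6 /dev/sdb6
escudo:~ # raidhotadd /dev/md7 /dev/sdb7
escudo:~ # raidhotadd /dev/md8 /dev/sdb8

and then watch 'cat /proc/mdstat' to see whether a reconstruction starts.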

Is it necessary to have a 'spare disk' to be able to recover my array?
(e.g., could a partition on the IDE drive act as the spare disk?)
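
If a spare really is required, I assume it could be declared in
/etc/raidtab by setting nr-spare-disks to 1 and appending the spare
device to the end of the stanza, something like this (the /dev/hda3
partition is purely hypothetical -- just to illustrate handing a slice
of the IDE drive to the array):

        device                  /dev/hda3
        spare-disk              0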

There is documentation on how to set up RAID, but I can't find much on
one of the more important aspects: how to recover. :)  Any URLs I'm
missing?

If anyone in the RAID community has done similar tests, I would like to
hear about your experiences.

Thanks
--
JT
[EMAIL PROTECTED]
