[I'm sorry if this is a repeat...]

Hi all.  I've been playing around with 2.0.36 and
raid0145-19990309-2.0.36, and I think I've screwed it up.  It was working
fine (I believe) until I added a PCI EIDE controller.  I disabled mounting
of /dev/md0 in /etc/fstab until I could update /etc/raidtab to reflect the
new device names, but once I did update it and brought the array back up,
it came up using only one disk of the mirror:

(read) hdb1's sb offset: 6353088 [events: 0000001a]
autorun ...
considering hdb1 ...
  adding hdb1 ...
created md0
bind<hdb1,1>
running: <hdb1>
now!
hdb1's event counter: 0000001a
md0: max total readahead window set to 128k
md0: 1 data-disks, max readahead per data-disk: 128k
raid1: device hdb1 operational as mirror 0
raid1: md0, not all disks are operational -- trying to recover array
raid1: raid set md0 active with 1 out of 2 mirrors
md: updating md0 RAID superblock on device
hdb1 [events: 0000001b](write) hdb1's sb offset: 6353088
md: recovery thread got woken up ...
md0: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...

The following is the contents of /proc/mdstat:

Personalities : [raid1] 
read_ahead 1024 sectors
md0 : active raid1 hdb1[0] 6353088 blocks [2/1] [U_]
unused devices: <none>

Now that I've removed the PCI controller, updated all the necessary files,
and am back on the onboard controller, what can I do to rectify this
situation?
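
My guess is that I need to hot-add the second partition back into the
degraded array and let the kernel reconstruct it, roughly like this (the
/dev/hdc1 name is my assumption for where the second half of the mirror
lives now that I'm back on the onboard controller, so please correct me
if this is the wrong approach):

  # confirm md0 is still running degraded
  cat /proc/mdstat

  # re-add the missing half and let the reconstruction kick in
  raidhotadd /dev/md0 /dev/hdc1

  # watch the rebuild progress
  cat /proc/mdstat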

Is resyncing the mirrors in place like that actually possible?  And what
should I have done to correctly move a mirror from hdb/hdc to hde/hdf in
the first place?
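
For reference, this is roughly what I had put in /etc/raidtab for the
hde/hdf setup (reconstructed from memory, so the exact entries may be
slightly off):

  raiddev /dev/md0
          raid-level              1
          nr-raid-disks           2
          nr-spare-disks          0
          persistent-superblock   1
          device                  /dev/hde1
          raid-disk               0
          device                  /dev/hdf1
          raid-disk               1

In other words, the only change from the original file was swapping
/dev/hdb1 and /dev/hdc1 for /dev/hde1 and /dev/hdf1.  Was anything else
needed to tell the kernel that the devices had moved, or should that
have been enough?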

Hmm... also, can I expect a significant performance improvement from a PCI
EIDE controller versus my onboard IDE controller?

Thanks,
Dave Wreski

