On 2.0.37+raid990713 (recent), when I stop my raid1 device (using raidstop),
it refuses to 'raidstart' (I have to reboot to get /dev/md0 active again).
Is it a bug or a feature? :)
Here it is:
---<cut>---
ego2:~# cat /proc/mdstat
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda5[0] 3108480 blocks [2/2] [UU]
unused devices: <none>
ego2:~# raidstop /dev/md0
ego2:~# dmesg -c
interrupting MD-thread pid 5
raid1d(5) flushing signals.
marking sb clean...
md: updating md0 RAID superblock on device
hdc1 [events: 00000018](write) hdc1's sb offset: 3108544
hda5 [events: 00000018](write) hda5's sb offset: 3108480
.
unbind<hdc1,1>
export_rdev(hdc1)
unbind<hda5,0>
export_rdev(hda5)
md0 stopped.
ego2:~# raidstart /dev/md0
/dev/md0: Invalid argument
ego2:~# dmesg -c
md: can not import hda5, has active inodes!
could not import hda5!
autostart hda5 failed!
huh12?
ego2:~#
---<cut>---
Nothing from /dev/hda is in use (neither mounted nor used as swap); the partitions
are of type FD and are autodetected on boot (roughly how I check is shown below).
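(For the record, nothing fancy -- just /proc and fdisk; the device names are
obviously specific to my box, and /proc/swaps may not be there on every kernel:)
---<cut>---
ego2:~# grep hda /proc/mounts      # should print nothing
ego2:~# grep hda /proc/swaps       # should print nothing (if the kernel has /proc/swaps)
ego2:~# fdisk -l /dev/hda          # hda5 should be listed with Id "fd" (Linux raid autodetect)
---<cut>---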
A small note: I get the same error ('active inodes') when I try to 'raidhotadd'
this partition to my array while it is running in degraded mode (simulating a
disk failure); the exact sequence I use is below. Do I need an extra partition
to recover from this?
Thanks,
Egon Eckert