have seen this problem since the first 2.0.36 patches. i usually make the arrays
with the part. type set to 83, then WITHOUT rebooting, i set up the fs, move the
data, THEN fdisk again and change the part. type to fd. then i reboot. this has
worked on a couple of boxes, 2.2.1 and 2.0.36. another alternative is to build the
arrays under 2.0.35, then upgrade afterward (not recommended, but it works)
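spelled out, the sequence above looks roughly like this (a sketch only -- the
device names, mount points, and paths are examples, not from anyone's actual box,
and the fdisk type changes are interactive steps shown here as comments):

```shell
# workaround sketch for raidtools-era kernels (2.0.36 / 2.2.x).
# /dev/md0, /dev/sda, and all paths below are placeholder names.
fdisk /dev/sda               # interactively set the member partitions to type 83
mkraid /dev/md0              # build the array from /etc/raidtab -- no reboot yet
mke2fs /dev/md0              # set up the fs on the new array
mount /dev/md0 /mnt/new
cp -a /olddata/. /mnt/new/   # move the data across
fdisk /dev/sda               # only NOW change the member partitions to type fd
reboot                       # autodetection starts the array cleanly at boot
```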

as for your second problem, use "raidhotadd /dev/md1 /dev/sda7"
i think :) check the archives for discussions of this.
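in full, the hot-add looks roughly like this (raidtools 0.90 commands; md1 and
sda7 are taken from your mail, the rest is assumption -- it obviously needs a
real degraded array underneath, so treat this as illustrative):

```shell
# re-inserting a kicked mirror half with raidtools 0.90 (sketch)
raidhotadd /dev/md1 /dev/sda7   # re-add the stale member; the kernel resyncs it
cat /proc/mdstat                # reconstruction progress shows up here
```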

i have also had that happen when i had several raid arrays with many devices. i
seemed to hit a wall at 18 raid partitions, but that was with 2.0.35

al

"Hans-Georg v. Zezschwitz" <[EMAIL PROTECTED]> said: 

> Hi,
> 
> thanks to Jakob's really helpful HOWTO I managed to set up
> a well-working RAID1 based on the 990309 edition, including
> RAID1 of the boot partition. However, smaller problems remain:
> 
> - If you change the type of a partition to "FD", an active inode
>   will be kept for this partition after auto-detection, even if
>   it does not contain a RAID superblock.
> 
>   What happened is that I wanted to build up a RAID partition.
>   I first changed the type from 83 to FD, and as an old-fashioned
>   man I rebooted the computer. When I tried to set up my RAID1
>   on that partition, mkraid aborted.
> 
>   The failure of the fs_may_mount(newdev) function seems to
>   be the reason. Moreover, some other strange messages in
>   /var/log/messages seem to result from open, active inodes:
> 
>   E.g.:
> 
> Mar 12 15:54:01 katheter kernel: VFS: inode busy on removed device 08:18
> Mar 12 15:54:01 katheter kernel: VFS: inode busy on removed device 08:17
> Mar 12 15:54:01 katheter kernel: VFS: inode busy on removed device 08:16
> 
>   or:
> 
> Mar 12 15:54:51 katheter kernel: md: can not import sda2, has active inodes!
> Mar 12 15:54:51 katheter kernel: md: error, md_import_device() returned -16 
> 
> 
> I suppose it will be easy to reproduce the bug: Empty a partition
> e.g. by dd if=/dev/zero of=/dev/xxxx, change its partition type to
> FD and reboot. Try to do anything with it, like mke2fs and a mount,
> or, as intended, RAID.
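
[the reproduction recipe quoted above, as a command sketch -- /dev/sdb3 stands
in for the unnamed partition, and the fdisk type change is an interactive step
shown as a comment:]

```shell
# reproduction sketch; /dev/sdb3 is a placeholder for any spare partition
dd if=/dev/zero of=/dev/sdb3 bs=512 count=1024   # wipe old superblock/fs data
fdisk /dev/sdb   # interactively set the partition type to fd, then write
reboot
# after the reboot, attempts to use the partition fail with "active inodes":
mke2fs /dev/sdb3
mount /dev/sdb3 /mnt
```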
> 
> 
> 2)
> My *really* simple question is: Once I take one of the two disks of a
> mirror down (e.g. by this sequence: shutdown, removal of one disk, start of
> Linux, shutdown, insertion of the second disk, start),
> I can't get it back into the mirror again:
> 
> E.g:
> Mar 12 21:27:09 katheter kernel: .
> Mar 12 21:27:09 katheter kernel: considering sdb7 ...
> Mar 12 21:27:09 katheter kernel:   adding sdb7 ...
> Mar 12 21:27:09 katheter kernel:   adding sda7 ...
> Mar 12 21:27:09 katheter kernel: created md1
> Mar 12 21:27:09 katheter kernel: bind<sda7,1>
> Mar 12 21:27:09 katheter kernel: bind<sdb7,2>
> Mar 12 21:27:09 katheter kernel: running: <sdb7><sda7>
> Mar 12 21:27:09 katheter kernel: now!
> Mar 12 21:27:09 katheter kernel: sdb7's event counter: 00000037
> Mar 12 21:27:09 katheter kernel: sda7's event counter: 00000031
> Mar 12 21:27:09 katheter kernel: md: superblock update time inconsistency -- using the most recent one
> Mar 12 21:27:09 katheter kernel: freshest: sdb7
> Mar 12 21:27:09 katheter kernel: md: kicking non-fresh sda7 from array!
> Mar 12 21:27:09 katheter kernel: unbind<sda7,1>
> Mar 12 21:27:09 katheter kernel: export_rdev(sda7)
> Mar 12 21:27:09 katheter kernel: md1: max total readahead window set to 128k
> Mar 12 21:27:09 katheter kernel: md1: 1 data-disks, max readahead per data-disk: 128k
> Mar 12 21:27:09 katheter kernel: raid1: device sdb7 operational as mirror 1
> Mar 12 21:27:09 katheter kernel: raid1: md1, not all disks are operational -- trying to recover array
> Mar 12 21:27:09 katheter kernel: raid1: raid set md1 active with 1 out of 2 mirrors
> Mar 12 21:27:09 katheter kernel: md: updating md1 RAID superblock on device
> Mar 12 21:27:09 katheter kernel: sdb7 [events: 00000038](write) sdb7's sb offset: 513984
> Mar 12 21:27:09 katheter kernel: md: recovery thread got woken up ...
> Mar 12 21:27:09 katheter kernel: md1: no spare disk to reconstruct array! -- continuing in degraded mode
> Mar 12 21:27:09 katheter kernel: md: recovery thread finished ...
> 
> 
> How can I achieve what was formerly done by ckraid? (Syncing the
> mirror and making it active again?) I have no spare disk, I just want my
> mirror back'n working :-)
> 
> Thanks a lot for any answers, and I hope my bug report will help,
> and thanks for the work that has been done,
> 
> 
> Georg v.Zezschwitz
> 
> 



--
boo
