> I think _both_ lines are necessary to release the inode.
> ...
> +       clear_inode(rdev->inode);

It works for me!  However, I think there's another bug hiding.  When I erase
the contents of one partition (to finally learn how to recover from a RAID1
failure :) ), the 'active inodes' show up again:

---<cut>---
ego2:/home/egon# dd if=/dev/zero of=/dev/hda5 bs=1048576
dd: /dev/hda5: No space left on device
3036+0 records in
3035+0 records out
ego2:/home/egon# raidstart /dev/md0
/dev/md0: Invalid argument
ego2:/home/egon# dmesg -c
(read) hda5's sb offset: 3108480 [events: 00000000]
md: invalid raid superblock magic on hda5
md: hda5 has invalid sb, not importing!
could not import hda5!
autostart hda5 failed!
huh12?
ego2:/home/egon# cat /proc/mdstat
Personalities : [raid1] [raid5] 
read_ahead 1024 sectors
unused devices: <none>
ego2:/home/egon# raidstart /dev/md0
/dev/md0: Invalid argument
ego2:/home/egon# dmesg -c
md: can not import hda5, has active inodes!
could not import hda5!
autostart hda5 failed!
huh12?
---<cut>---

md_import_device() calls lock_rdev() (which is where the dummy inode gets
allocated), but when it then rejects the partition it fails without ever
calling unlock_rdev(), so the inode stays pinned and every later import
attempt runs into the 'active inodes' check.
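Just to illustrate what I mean (this is only a sketch from memory -- I'm
guessing at helper names like read_disk_sb()/check_disk_sb() and at the
label names, so please don't take it as a real patch):

---<cut>---
static int md_import_device(kdev_t newdev, int on_disk)
{
	int err = 0;
	mdk_rdev_t *rdev = kmalloc(sizeof(*rdev), GFP_KERNEL);

	if (!rdev)
		return -ENOMEM;
	memset(rdev, 0, sizeof(*rdev));

	if (lock_rdev(rdev)) {		/* this allocates the dummy inode */
		err = -EINVAL;
		goto abort_free;
	}

	if (read_disk_sb(rdev) || check_disk_sb(rdev)) {
		/* the "invalid raid superblock magic" / "has invalid sb"
		 * rejections from my dmesg end up somewhere around here */
		err = -EINVAL;
		goto abort_unlock;	/* must not skip unlock_rdev()! */
	}

	/* ... rest of the import ... */
	return 0;

abort_unlock:
	unlock_rdev(rdev);		/* releases the dummy inode again */
abort_free:
	kfree(rdev);
	return err;
}
---<cut>---

The point is just that every failure exit taken after lock_rdev() has
succeeded should pass through unlock_rdev() before the rdev is freed;
right now the rejected partition keeps its dummy inode forever.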

Unfortunately, I'm an application programmer, so I can't propose a proper
kernel patch. :)

Egon Eckert
