I had seen this same problem in a prior version and had emailed the
following comment to the list, but got no reply:
...
With linux 2.0.36 and patch raid0145-19990421-2.0.36 applied, if a disk is
looked at for autostart but rejected because its superblock is out of
date, you can't raidhotadd or raidhotremove it because the dummy inode is
still out there.  This would be no problem if I had really replaced the disk
with a fresh one that deserved to be rejected by the autostart process.  In
my case, the same disk was simply put back online, giving me the message
"md: can not import /dev/sda1, has active inodes!"

At some time between patches 19990108 and 19990421 the line:

        clear_inode(rdev->inode);

was replaced with

        blkdev_release(rdev->inode);

I think _both_ lines are necessary to release the inode.
...
Try this patch to md.c:
...
*** md.c-19990713       Wed Jul 21 15:54:13 1999
--- md.c        Wed Jul 21 16:51:01 1999
***************
*** 682,687 ****
--- 682,688 ----
  static void unlock_rdev (mdk_rdev_t *rdev)
  {
        blkdev_release(rdev->inode);
+       clear_inode(rdev->inode);
  }
...
  static void export_rdev (mdk_rdev_t * rdev)
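
For reference, with the patch applied unlock_rdev() would end up looking
like this (a sketch reconstructed from the hunk above, not copied from an
actual kernel tree):

        static void unlock_rdev (mdk_rdev_t *rdev)
        {
                /* close the underlying block device */
                blkdev_release(rdev->inode);
                /* then clear the dummy inode so a later import/autostart
                   doesn't fail with "md: can not import ..., has active inodes!" */
                clear_inode(rdev->inode);
        }

That keeps the newer blkdev_release call while restoring the clear_inode
behavior the pre-19990421 code had.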

Rich Bollinger

----- Original Message -----
From: Egon Eckert <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, July 22, 1999 4:04 PM
Subject: raidstop; raidstart fails


> On 2.0.37+raid990713 (recent), when I stop my raid1 device (using raidstop),
> it refuses to 'raidstart' (I have to reboot to get /dev/md0 active again).
> Is it a bug or a feature? :)
>
> Here it is:
>
> ---<cut>---
>
> ego2:~# cat /proc/mdstat
> Personalities : [raid1] [raid5]
> read_ahead 1024 sectors
> md0 : active raid1 hdc1[1] hda5[0] 3108480 blocks [2/2] [UU]
> unused devices: <none>
> ego2:~# raidstop /dev/md0
> ego2:~# dmesg -c
> interrupting MD-thread pid 5
>   raid1d(5) flushing signals.
> marking sb clean...
> md: updating md0 RAID superblock on device
> hdc1 [events: 00000018](write) hdc1's sb offset: 3108544
> hda5 [events: 00000018](write) hda5's sb offset: 3108480
> .
> unbind<hdc1,1>
> export_rdev(hdc1)
> unbind<hda5,0>
> export_rdev(hda5)
> md0 stopped.
> ego2:~# raidstart /dev/md0
> /dev/md0: Invalid argument
> ego2:~# dmesg -c
> md: can not import hda5, has active inodes!
> could not import hda5!
> autostart hda5 failed!
> huh12?
> ego2:~#
>
> ---<cut>---
>
> Nothing from /dev/hda is used (mounted nor swap), partitions are of type FD,
> autodetected on boot.
>
> A small note: I get the same error ('active inodes') when I try to
> 'raidhotadd' this partition to my array running in degraded mode, simulating a
> disk failure.  Do I need an extra partition to recover from this?
>
> Thanks,
>
> Egon Eckert
>
