On Fri, Aug 13, 1999 at 03:23:52PM -0400, Drenning Bruce wrote:
...
> 
> It was originally built with hda7 as a failed-disk 1. I tried switching it
> to 0 in raidtab to see what would happen. Apparently nothing. cat
> /proc/mdstat still shows:
> 
>   Personalities : [raid1] 
>   read_ahead 1024 sectors
>   md2 : active raid1 hdc2[0] hda2[1] 264000 blocks [2/2] [UU]
>   md5 : active raid1 hdc5[0] hda5[1] 526080 blocks [2/2] [UU]
>   md6 : active raid1 hdc6[0] hda6[1] 66432 blocks [2/2] [UU]
>   md7 : active raid1 hdc7[0] hda7[1] 66432 blocks [2/2] [UU]
>   md8 : active raid1 hdc8[0] hda8[1] 34176 blocks [2/2] [UU]
>   unused devices: <none>
> 
> even after raidstop, raidstart, and even a reboot. It appears, then, that
> raidtab is only used by mkraid. Correct?

Yes
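
Roughly: the raidtab is there to tell mkraid what to create. Once the array
exists, its state lives in the persistent superblocks on the member partitions,
which is why editing raidtab changes nothing for a running (or rebooted) array.
Taking md7 from your mdstat as the example:

    # create the array from the description in /etc/raidtab
    mkraid /dev/md7

    # from here on the kernel goes by the on-disk superblocks
    cat /proc/mdstat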

> 
> The reason for this is that it seems the failed-disk directive would be nice
> for bringing the machine back up with a new disk after a failure. However,
> the docs say that failed-disk cannot be first. What happens if hdc fails?

Read on....   You don't need failed-disk directives for putting in new disks.
The failed-disk directive is just a nice option to have when initially building
a boot-on-raid system. It has no use (that I know of, at least) on a system once
it's set up.
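
For the record, the kind of stanza such an initial build uses, taking md7 /
hdc7 / hda7 from your mdstat above (the chunk-size is just a placeholder;
mkraid wants one even for raid1):

    raiddev /dev/md7
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              4
        device                  /dev/hdc7
        raid-disk               0
        device                  /dev/hda7
        failed-disk             1

The usual idea is that the disk still holding the live system is marked failed
until everything has been copied onto the degraded array; afterwards you
repartition it and raidhotadd it back in.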

> If I bring up the PC with a new hdc, I expect RAID would come up in degraded
> mode of some kind; I could do a raidhotremove, partition hdc, and then
> raidhotadd. Is this right?

It's already removed if the machine comes up in degraded mode, so there's no
raidhotremove to do.

Partition and raidhotadd.  Reconstruction will start, and you'll be back in
business.
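
Something along these lines, assuming the replacement hdc should be partitioned
identically to hda (the sfdisk copy is just one way to do that) and using the
md devices from your mdstat above:

    # copy hda's partition table onto the new hdc
    sfdisk -d /dev/hda | sfdisk /dev/hdc

    # hot-add the new halves; reconstruction starts right away
    raidhotadd /dev/md2 /dev/hdc2
    raidhotadd /dev/md5 /dev/hdc5
    raidhotadd /dev/md6 /dev/hdc6
    raidhotadd /dev/md7 /dev/hdc7
    raidhotadd /dev/md8 /dev/hdc8

    # watch the resync
    cat /proc/mdstat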

................................................................
: [EMAIL PROTECTED]  : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:
