Makoto,

The normal raid driver only handles 12 disk entries (slots). Unfortunately, a
spare disk counts as another slot, and you need a spare slot to rebuild the
failed disk onto. With your 12 disk raid 5 setup, you have already defined all
of the available slots.

To recover your 12 disk raid 5 system, you will need to modify your kernel and
raid tools to accommodate more disks. Fortunately, the current 12 disk limit
comes from an erroneous calculation, and the superblock has room for many more
disks (I don't remember the actual limit, but it is over 24). There has been
some talk of this subject in the past; if you look in the list archive for the
thread "the 12 disk limit", there is information on what needs to be done to
modify the kernel.
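
To give you an idea of what is involved (this is a sketch from memory, not the
actual patch, so check that thread before trusting the numbers): the 0.90
superblock reserves a fixed area for disk descriptors, and the limit boils
down to how the disk count is defined in the superblock header (md_p.h, of
which the patched kernel and the raid tools each carry a copy, and which must
agree):

    /* Sketch from memory of the 0.90 superblock constants -- verify
     * against the real md_p.h in your kernel and raidtools source.    */
    #define MD_SB_BYTES              4096   /* superblock is one 4K block      */
    #define MD_SB_DESCRIPTOR_WORDS   32     /* one disk descriptor = 128 bytes */

    /* The descriptor area has room for 27 entries (if I remember right),
     * but the affected versions derive the usable count as 12.  Raising
     * this -- and rebuilding the kernel and the raid tools against the
     * same value -- is the change discussed in that thread:             */
    #define MD_SB_DISKS              27     /* effectively 12 in the broken calculation */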

This brings up a question, though: can an existing superblock created under
the 12 disk limit work with a kernel that supports more than 12 disks? I'd
think so, since the unused areas are zeroed out, but I don't know of anybody
who has tried it.
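
If you want to convince yourself before trying it, one way is to dump the
superblock from one of the working members and look at the descriptor area.
A rough sketch, using /dev/sdc1 purely as an example (the 0.90 superblock is
4K, 64K-aligned, at least 64K from the end of the partition -- double-check
the offsets before relying on them):

    SIZE=`fdisk -s /dev/sdc1`            # partition size in 1K blocks
    SB=`expr $SIZE / 64 \* 64 - 64`      # 0.90 superblock offset in 1K blocks
    dd if=/dev/sdc1 bs=1k skip=$SB count=4 2>/dev/null | od -A d -t x4

The first word should be the raid superblock magic (0xa92b4efc, if I recall),
and if I have the layout right, the disk descriptors start 512 bytes in at 128
bytes apiece, so slots 12 and up (roughly bytes 2048 through 3967 of the dump)
should come out as zeros on a 12 disk array.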

The tools should, but don't, limit the number of devices in a raid 5 array to
one less than the maximum disk slots in the raid superblock, so that the last
slot can be used as a spare. Unfortunately, you ran into this trap.
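
For anyone setting up a new large array, the safer raidtab layout is to define
one fewer raid-disk and declare the last drive as a spare, something like this
(device names here are only for illustration):

    raiddev             /dev/md0
    raid-level          5
    nr-raid-disks       11
    nr-spare-disks      1
    chunk-size          4
    parity-algorithm    left-symmetric

    device              /dev/sdb1
    raid-disk           0
    # ... raid-disks 1 through 10 ...
    device              /dev/sdm1
    spare-disk          0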

Good luck, <>< Lance.


Makoto Kurokawa wrote:

> Hello, All.
>
> I have a problem with a failed HDD in a raid-5 (raid-0.90) array on Redhat 6.0.
>
> The raid-5 is now running in degraded mode.
> To be exact, I can't repair or replace the failed HDD (with a new HDD).
> Would you tell me how to recover it?
>
> "/proc/mdstat" is as follows:
>
> [root@oem /root]# cat /proc/mdstat
> Personalities : [raid5]
> read_ahead 1024 sectors
> md0 : active raid5 sdm1[11] sdl1[10] sdk1[9] sdj1[8] sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1]
>       97192128 blocks level 5, 4k chunk, algorithm 2 [12/11] [_UUUUUUUUUUU]
> unused devices: <none>
>
> "sdb1[0]" is failed, I think.
>
> "/etc/raidtab" is as follows:
>
> # Sample raid-5 configuration
> raiddev             /dev/md0
> raid-level          5
> nr-raid-disks       12
> chunk-size          4
>
> # Parity placement algorithm
>
> #parity-algorithm   left-asymmetric
>
> #
> # the best one for maximum performance:
> #
> parity-algorithm    left-symmetric
>
> #parity-algorithm   right-asymmetric
> #parity-algorithm   right-symmetric
>
> # Spare disks for hot reconstruction
> #nr-spare-disks          0
>
> device              /dev/sdb1
> raid-disk      0
>
> device              /dev/sdc1
> raid-disk      1
>
> device              /dev/sdd1
> raid-disk      2
>
> device              /dev/sde1
> raid-disk      3
>
> device              /dev/sdf1
> raid-disk      4
>
> device              /dev/sdg1
> raid-disk      5
>
> device              /dev/sdh1
> raid-disk      6
>
> device              /dev/sdi1
> raid-disk      7
>
> device              /dev/sdj1
> raid-disk      8
>
> device              /dev/sdk1
> raid-disk      9
>
> device              /dev/sdl1
> raid-disk      10
>
> device              /dev/sdm1
> raid-disk      11
>
> First, I restarted the PC and tried "raidhotadd" and "raidhotremove"; the
> results are as follows:
>
> [root@oem /root]# raidhotadd /dev/md0 /dev/sdb1
> /dev/md0: can not hot-add disk: disk busy!
>
> [root@oem /root]# raidhotremove /dev/md0 /dev/sdb1
> /dev/md0: can not hot-remove disk: disk not in array!
>
> Next, I replaced the failed HDD, /dev/sdb, with a new HDD; as a result, the
> system hung up at boot time with the message "/dev/md0 is invalid."
>
> What should I do to recover the raid-5 from degraded mode back to normal mode?
>
> Makoto Kurokawa
> Engineer, OEM Sales Engineering
> Storage Products Marketing, Fujisawa, IBM-Japan
> Tel:+81-466-45-1441 FAX:+81-466-45-1045
> E-mail:[EMAIL PROTECTED]
