> orb:/etc # mdadm --detail --scan -vvv
> /dev/md0:
>         Version : 00.90.03
>   Creation Time : Sun Dec  2 17:27:38 2007
>      Raid Level : raid5
>      Array Size : 2441912320 (2328.79 GiB 2500.52 GB)
>   Used Dev Size : 488382464 (465.76 GiB 500.10 GB)
>    Raid Devices : 6
>   Total Devices : 6
> Preferred Minor : 0
>     Persistence : Superblock is persistent
>
>     Update Time : Sun Dec  2 17:27:38 2007
>           State : clean, degraded
>  Active Devices : 5
> Working Devices : 6
>  Failed Devices : 0
>   Spare Devices : 1
>
>          Layout : left-symmetric
>      Chunk Size : 4096K
>
>            UUID : 837eecb4:28f004fb:0e4f8c91:f584a4b1
>          Events : 0.1
>
>     Number   Major   Minor   RaidDevice State
>        0       8        1        0      active sync   /dev/sda1
>        1       8       17        1      active sync   /dev/sdb1
>        2       8       49        2      active sync   /dev/sdd1
>        3       8       65        3      active sync   /dev/sde1
>        4       8       81        4      active sync   /dev/sdf1
>        5       0        0        5      removed
>
>        6       8       97        -      spare   /dev/sdg1
>

All your disks are there: the numbering runs from 0 to 4, which accounts
for your 5 active disks, plus the spare (number 6, /dev/sdg1).  The entry
at slot 5 shows as "removed", and that's why the array reports itself as
degraded; it looks like a disk was removed from the array at some point.
mdadm should let you either add a disk back into that empty slot or let
the spare rebuild into it (I'm not that familiar with the tool myself),
and that should get you out of the degraded state.
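For reference, the usual mdadm commands for this look something like the
sketch below.  The device name /dev/sdh1 is a placeholder for whatever
partition you actually want in the array, and the "detached" keyword may
not exist in older mdadm versions, so check your man page first:

```shell
# Add a new partition into the empty slot; mdadm should start a
# rebuild automatically (replace /dev/sdh1 with your real device):
mdadm /dev/md0 --add /dev/sdh1

# A hot spare (/dev/sdg1 here) normally rebuilds into a failed slot
# on its own; you can watch rebuild progress with:
cat /proc/mdstat

# Newer mdadm versions can also clean out records of devices that
# are no longer present:
mdadm /dev/md0 --remove detached
```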

BTW with RAID 5 one drive's worth of capacity goes to parity, so if all
6 of your 500G drives end up active you get about 83% usable space,
roughly 2.5TB, which matches the Array Size reported above.  If you
instead keep one of the six as a dedicated hot spare, it drops to about
67%, or just over 2TB.
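The arithmetic, as a quick sanity check for both layouts:

```shell
# RAID 5 usable space is (N - 1) * drive size, where N counts only
# the active members; a dedicated hot spare does not count toward N.
drives=6
spare=1
size_gb=500
echo "all 6 active:   $(( (drives - 1) * size_gb )) GB usable"
echo "with hot spare: $(( (drives - spare - 1) * size_gb )) GB usable"
```

This prints 2500 GB for the 6-active case and 2000 GB with a spare held back.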

Cheers
Todd


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
