Kenneth Burgener wrote:
> On 7/7/2009 1:03 PM, Mike Lovell wrote:
>
>> I have a machine that has 4 disks in a raid 10 using md.
>>
>> [ 28.575149] md: raid10 personality registered for level 10
>> [ 28.610827] md: md0 stopped.
>> [ 28.688678] md: bind<sdu1>
>> [ 28.688981] md: bind<sdv1>
>> [ 28.689269] md: bind<sdw1>
>> [ 28.689566] md: bind<sdx1>
>>
>
> Are you able to boot into the OS? What does 'cat /proc/mdstat' show?
> What does 'mdadm --examine /dev/sdu1' (or sdv,sdw,sdx) show?
I am able to boot into the OS as it is on a separate single disk. Here
are the results of /proc/mdstat:
# cat /proc/mdstat
Personalities : [raid10]
md0 : inactive sdx1[3]
976759936 blocks
unused devices: <none>
I also took a look at the md superblocks on the devices. sd{u,v,w}1 look
like they are fine. sdx1 looks funky: its superblock shows sdx1 as active,
one other disk as removed, and the remaining two as faulty. It looks like
the metadata on sdx1 got messed up while the others are fine.
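To compare the superblocks side by side, something like the following should
work (assuming the --examine output includes the usual UUID, event-count, and
state lines):

# quick comparison of the superblocks on all four members
mdadm --examine /dev/sd[uvwx]1 | grep -iE 'uuid|events|state'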
> Normally
> if only one disk has failed, the array should be able to activate, but
> in a degraded state. For some reason your system thinks that sdu, sdv,
> sdw are all in an invalid state, which means there are not enough
> devices to reassemble the array. I haven't seen the "non-fresh" error
> before. This could simply mean it avoided assembling the array due to
> some sort of minor out-of-date or out-of-sequence issue. As a last
> resort you could try to forcefully reassemble the array (no guarantees):
>
> mdadm --examine /dev/sdu1 | grep -i uuid
> # copy and paste the uuid into the following
> mdadm --assemble /dev/md0 --force --uuid=[UUID_from_previous_command]
>
>
I am about to try an mdadm --assemble command to see if it helps.
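For the record, the rough sequence I'm planning (assuming md0 is still sitting
there inactive and the device names are as above) is:

# stop the half-assembled array first
mdadm --stop /dev/md0
# then force assembly with the members listed explicitly
mdadm --assemble --force /dev/md0 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1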
mike