Managed to work around YaST and delete/recreate /dev/md0 in one step
without performing a format. mdadm began rebuilding the array
automatically. Hopefully without another URE...
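For the record, a recreate of this kind looks roughly like the sketch below. The device names and 7-drive RAID 5 geometry are assumptions on my part; the real level, drive order, chunk size, and metadata version must match the original array exactly, or the data is gone.

```shell
# DANGER: re-creating an md array in place only preserves data if the
# geometry and drive order exactly match the original. Assumed names.
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/sd[b-h]
# Without --assume-clean, md immediately resyncs parity across the
# members -- which would appear as the "rebuilding" seen above.
```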

On Wed, Jan 14, 2009 at 6:27 PM, Chris Louden <[email protected]> wrote:
> I remembered something I saw on SGVLUG a few days ago...
>
> http://blogs.zdnet.com/storage/?p=162
>
> from that article....
>
> "SATA drives are commonly specified with an unrecoverable read error
> rate (URE) of 10^14. Which means that once every 100,000,000,000,000
> bits, the disk will very politely tell you that, so sorry, but I
> really, truly can't read that sector back to you."
>
> "With a 7 drive RAID 5 disk failure, you'll have 6 remaining ### TB
> drives. As the RAID controller is busily reading through those
> remaining disks to reconstruct the data from the failed drive, it is
> almost certain it will see an URE."
>
> (Which is now what I think happened...)
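Quick sanity check on that "almost certain" claim. The article's drive size is elided above, so assume 2 TB survivors and take the quoted spec of one URE per 10^14 bits read:

```python
import math

# Rough odds of hitting at least one URE during a RAID 5 rebuild.
# 2 TB per surviving drive is an assumed figure; the quoted spec is
# one unrecoverable read error per 1e14 bits.
URE_RATE = 1e-14            # UREs per bit read
DRIVES = 6                  # survivors in a 7-drive RAID 5
BITS_PER_DRIVE = 2e12 * 8   # 2 TB expressed in bits

expected_errors = DRIVES * BITS_PER_DRIVE * URE_RATE
p_at_least_one = 1 - math.exp(-expected_errors)  # Poisson approximation
print(f"expected UREs during rebuild: {expected_errors:.2f}")   # ~0.96
print(f"P(at least one URE): {p_at_least_one:.0%}")             # ~62%
```

Even at 2 TB per drive that is roughly a coin flip, not a certainty; but the probability climbs with capacity, which is presumably the article's point.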
>
> "So the read fails. And when that happens, you are one unhappy camper.
> The message "we can't read this RAID volume" travels up the chain of
> command until an error message is presented on the screen."
>
> Scheiße!
>
> -Chris
>
>
>
>
> On Wed, Jan 14, 2009 at 5:17 PM, Chris Louden <[email protected]> wrote:
>> On Wed, Jan 14, 2009 at 4:59 PM, Peter Manis <[email protected]> wrote:
>>> When I built my file server the RAID card kept swapping device order with
>>> the boot drive.  Despite mounting by UUID and spending a lot of time on it,
>>> I never got it fixed until I moved it to CentOS.  If something like that is
>>> happening it would explain the movement in slots.  I would check the serial
>>> numbers a couple of times after rebooting to see if this is happening.  You may
>>
>> I think you are referring to SCSI order. This is eSATA in a DAS. Not
>> sure it works the same way.
>>
>>> need to erase the drives to clean all possible information about the array.
>>
>> I need to make every effort to save the data. This was the backup
>> location for production data.
>>
>>> I had to when I created a test array once, erasing the MBR wasn't enough for
>>> some reason.
>>>
>>> On Wed, Jan 14, 2009 at 7:51 PM, Chris Louden <[email protected]> wrote:
>>>>
>>>> On Wed, Jan 14, 2009 at 4:46 PM, Peter Manis <[email protected]> wrote:
>>>> > Have you moved any drives around?  What distro is this?
>>>> >
>>>>
>>>> No movement, possible drive failure. SLES10
>>>> _______________________________________________
>>>> LinuxUsers mailing list
>>>> [email protected]
>>>> http://socallinux.org/cgi-bin/mailman/listinfo/linuxusers
>>>
>>>
>>>
>>> --
>>> Peter Manis
>>> (678) 269-7979
>>>
>>>
>>>
>>
>
