On Thu, Oct 6, 2011 at 6:14 PM, Michael Mol <mike...@gmail.com> wrote:
>
> On Oct 6, 2011 9:06 PM, "Mark Knecht" <markkne...@gmail.com> wrote:
>>
>> On Thu, Oct 6, 2011 at 1:28 PM, Michael Mol <mike...@gmail.com> wrote:
>> > On Thu, Oct 6, 2011 at 4:21 PM, Mark Knecht <markkne...@gmail.com>
>> > wrote:
>> >> On Thu, Oct 6, 2011 at 1:03 PM, Paul Hartman
>> >> My worry was that if the mdraid daemon saw one drive gone - either
>> >> when starting to spin down or when one spins up slowly - and if mdraid
>> >> didn't understand that all this stuff was taking place intentionally
>> >> then it might mark that drive as having failed.
>> >
>> > Does mdraid even have an awareness of timeouts, or does it leave that
>> > to the lower-level drivers? I think the latter is more likely.
>> >
>> > I suspect, though, that if your disk fails to spin up reasonably
>> > quickly, it's already failed.
>> >
>>
>> In general I agree. However, drives that are designed for RAID
>> have a feature known as Time-Limited Error Recovery (TLER) which
>> supposedly guarantees that the drive starts responding again fast
>> enough to avoid being marked as failed in the RAID array:
>>
>> http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
>>
>> When I built my first RAID I bought some WD 1TB Green drives,
>> built the RAID and immediately had drives dropping out because
>> they didn't have this feature. I replaced them with RAID Edition
>> drives that do have TLER and have never had a problem since.
>> (Well, I actually bought all new drives and kept the six 1TB
>> Green drives, which I've since mostly used for other things like
>> external eSATA backup drives, etc...)
>>
>> Anyway, I'm possibly over-sensitized to this sort of timing
>> problem specifically in a RAID, which is why I asked Paul the
>> question in the first place.
>
> My first RAID was with three Seagate economy 1.5TB drives in RAID 5,
> shortly followed by three 1TB WD Black drives in RAID 0. I never had
> the problems you describe, though I rebuilt the RAID5 several times
> as I was figuring things out. (The 3TB RAID0 was for some heavy-duty
> scratch space.)

Yeah, I understand. This sort of problem, I found out after joining
the linux-raid list, is _very_ dependent on the _exact_ model of
drives chosen to build the RAID. I've had exactly ZERO problems with
any of the 2-drive RAID0s, 3- and 5-drive RAID1s, and 5-drive RAID6s
that I built using WD RAID Edition drives. They've run for 18 months
and nothing has ever gone offline or needed attention of any kind.
They just work.
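
(For what it's worth, the only "attention" they get is an occasional
glance at the array state. A minimal sketch, with /dev/md0 just a
placeholder for whatever your array is called:

  cat /proc/mdstat
  mdadm --detail /dev/md0

A drive that mdraid has kicked out shows up as faulty/removed in the
--detail output, and /proc/mdstat shows the array degraded, e.g.
[U_U] instead of [UUU] for a three-device RAID1.)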

On the other hand, all the RAID0s and RAID1s that I built using the
WD 1TB _Green_ drives simply wouldn't work reliably. They'd go
offline every day or two, and I'm talking about the very same
computer, with no other hardware differences. I've heard of people
using the same drive model (but possibly different firmware) having
similar problems until they got a WD utility to twiddle the firmware
settings, and of others who never got the drives working well at all.
The drives are perfectly fine outside of a RAID.
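
(Somewhat related, and hedged since I haven't checked every firmware
revision: on drives that expose it, you can query or set the TLER/ERC
timeout from Linux with smartmontools; /dev/sda below is just an
example device:

  smartctl -l scterc /dev/sda        # show current read/write ERC timeouts
  smartctl -l scterc,70,70 /dev/sda  # set both to 7.0 seconds (value in 100 ms units)

Whether a Green drive accepts that at all seems to depend on the
firmware, and on some drives the setting doesn't survive a power
cycle; the RAID Edition drives come with it enabled from the
factory.)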

I appreciate the inputs. It's an interesting subject and hearing other
people's experiences helps put some shape around the space.

Cheers,
Mark
