Thanks for all the great replies.

Ken/Asa: It would be awesome if you guys could keep us up to date with
ZFS (BSD) and btrfs (Linux). That is an important topic.

Nobody recommended any software to exercise a drive to try to catch
infant mortality before putting the drives into service.  Someone must
have written a go/no-go program to run overnight for that purpose.  The
first thing I've come across is built into smartctl, which has short and
long offline self-tests, but what do those actually do?  In any case,
some external test should be run as well to catch cable issues (thanks
Josh, and I will buy spares if everything goes well setting this up).
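
For what it's worth, here's the kind of overnight go/no-go run I have in
mind. This is only a sketch: /dev/sdX is a placeholder, and note that
badblocks -w is destructive, so it is only for a brand-new, empty drive:

```shell
#!/bin/sh
# Burn-in sketch for a NEW, EMPTY drive.  badblocks -w DESTROYS all data.
# /dev/sdX is a placeholder; substitute the real device.
DEV=/dev/sdX

# Kick off SMART's built-in long self-test (runs inside the drive's
# firmware, surface-scanning every sector; can take hours on a 2T drive).
smartctl -t long "$DEV"

# Destructive write/read pass over every sector from the host side,
# which also exercises the cable and controller path.
badblocks -wsv "$DEV"

# Afterward, check the self-test log and the SMART attribute counters
# (reallocated/pending sectors are the ones to watch).
smartctl -l selftest "$DEV"
smartctl -A "$DEV"
```

The SMART self-test only exercises the drive internally, which is why
the badblocks pass over the bus is there too.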

Jonathan: It is true that Western Digital has disabled the ERC parameter
on these drives (for marketing reasons, I suppose). My understanding is
that recent Linux software RAID can handle that; the claim is widespread
in forums, for example in this post:

http://marc.info/?l=linux-raid&m=128641258325333&w=2

"As for the read errors/kicking drives from the array, I'm not sure why
it gets kicked reading some sectors and not others, however I know there
were changes to the md stuff which handled that more gracefully earlier
this year. I had the same problem -- on my 2.6.32 kernel, a rebuild of
one drive would hit a bad sector on another and drop the drive, then hit
another bad sector on a different drive and drop it as well, making the
array unusable. However, with a 2.6.35 kernel it recovers gracefully and
keeps going with the rebuild. (I can't find the exact patch, but Neil
had it in an earlier email to me on the list; maybe a month or two ago?)
So again, I'd suggest trying a newer kernel if you're having trouble."

...but I've had a hard time finding the patch or anything official. Does
anyone have something concrete?  It is strange that the Linux RAID wiki
doesn't mention this one way or the other.
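
For the record, on drives where the firmware hasn't disabled it,
smartctl can query and set the ERC timeouts via the SCT interface. The
7-second value below is just the conventional RAID setting, not anything
specific to these WD drives, and on the WD20EARS the command reportedly
fails, which is exactly the complaint:

```shell
# Query the current SCT error recovery control timeouts (if supported).
smartctl -l scterc /dev/sdX

# Set read and write recovery timeouts to 7.0 seconds (units of 100 ms),
# so the drive gives up on a bad sector quickly and lets md rewrite it
# from parity instead of stalling the whole array.
smartctl -l scterc,70,70 /dev/sdX
```

The setting doesn't survive a power cycle on most drives, so it would
have to go in a boot script anyway.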

Another quirk with these drives is that they park their heads a lot,
which makes some people uncomfortable, but I think they are designed to
work that way, and you can turn the behavior off if you desire.
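
If anyone wants to watch the parking behavior, the SMART attribute to
track is Load_Cycle_Count (ID 193), and the usual way to change the
timer on these WD greens is WD's wdidle3 DOS utility or the third-party
idle3ctl tool; I haven't verified either myself:

```shell
# Each head park bumps Load_Cycle_Count (attribute 193); run this twice
# a few minutes apart to see how fast it is climbing.
smartctl -A /dev/sdX | grep -i load_cycle

# Disable the idle3 park timer with the third-party tool (takes effect
# after a power cycle):
# idle3ctl -d /dev/sdX
```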

On the plus side, these newer WD20EARS have 3 platters, vs. 4 on the
originals. To me these seem like the best of the inexpensive 2T drives
out there, but there is no hard data, just tons of conflicting anecdotes
and reviews.

Thanks for pointing out the RE4, it does look like an awesome drive for
a high speed/high availability array, but it isn't clear cut in my
application.  I don't want fast drives. This is a backup server, and it
will sit around doing nothing most of the day but staying cool. I don't
think I need a highish rpm drive with special vibration sensors designed
for huge arrays.

This isn't my plan, but consider this:

  $260*2 RE4 = $520 -> two disk raid0

  vs.

  $79*7 WD20EARS=$553 -> four disk raid0 + two hot spares + one cold spare

The math for the latter really beats the former.  Inexpensive means
something in RAID.
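
Just to show the arithmetic, with the prices as quoted above:

```shell
# Cost comparison from the figures above.
re4=$((260 * 2))    # two RE4 drives
ears=$((79 * 7))    # seven WD20EARS: 4-disk array + 2 hot + 1 cold spare
echo "RE4 pair: \$$re4  vs.  WD20EARS x7: \$$ears"
```

So for $33 more you get twice the disks in service plus three spares.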

BTW, the RE4 has just as many failure reviews on Newegg as all the
other drives, maybe more.  I suspect that shipping is the problem.  I've
got four drives from two shipments, so I can put two drives from
different shipments into my server, which should help some.  It would
have been even better to order from different vendors.  I put some data
in my original post about the shipping containers.

No time to proof this email, hopefully it all makes sense.

-- 
Anthony Carrico
