hi.

OK. Now I'm VAGUEly worried for all of your data...

0. Enterprise disks are NOT the same thing as commodity disks; they differ in
many respects. See:

http://www.pantz.org/hardware/disks/what_makes_a_hard_drive_enterprise_class.html
http://forum.synology.com/wiki/index.php/Differences_between_an_Enterprise-Class_HDD_and_a_Desktop-Class_HDD
http://download.intel.com/support/motherboards/server/sb/enterprise_class_versus_desktop_class_hard_drives_.pdf
http://wdc.custhelp.com/app/answers/detail/a_id/1397/p/227%2C294/

1. I advise you all to NEVER RAID recent commodity drives. The firmware for
non-enterprise SATA hardware is NOT designed to participate in any kind of RAID
array and will, in fact, actively damage the array. The RAID technology,
whether RAID-Z (ZFS) or RAID 1-6, software, hardware, or the
netherworld of BIOS-based motherboard "software" RAID, is immaterial. When a
member disk sleeps, it will degrade the array, among other *bad things.* All of
those *bad things* can lead to data loss, and do so rapidly. I expect this is
exactly what happened to Mr. Hooker's friend.
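
Aside, and purely a sketch of my own (none of the links above include it): one
concrete, checkable difference is the error-recovery timeout, which WD calls
TLER and the ATA spec calls SCT ERC. Enterprise firmware lets you cap it at a
few seconds; most desktop firmware either doesn't support it or ships with it
off, so a struggling disk can stall for minutes and get kicked out of the
array. Assuming smartmontools is installed and your disks show up as /dev/sdX,
something like this will tell you which of your drives support it:

  #!/usr/bin/env python
  # Rough sketch: report SCT Error Recovery Control (TLER) support per drive.
  # Assumes smartmontools is installed; run as root, e.g.:
  #   ./erccheck.py /dev/sda /dev/sdb
  import subprocess
  import sys

  def scterc_report(device):
      """Return smartctl's SCT ERC section for one drive."""
      result = subprocess.run(
          ["smartctl", "-l", "scterc", device],
          capture_output=True, text=True, check=False,
      )
      return result.stdout

  if __name__ == "__main__":
      for dev in sys.argv[1:]:
          print("==> %s" % dev)
          print(scterc_report(dev))

On drives that do support it, "smartctl -l scterc,70,70 /dev/sdX" caps recovery
at 7 seconds, which, as I understand it, is roughly what the RE-class firmware
ships with.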

2. A perusal of the "green" drive reviews on Newegg will fill in the user
"horror story" details. After becoming aware of this issue, I now only buy
enterprise-class disks for RAID purposes. Google for "green drive raid" and
you'll see what I'm talking about.

In the 3.5" form factor, Western Digital sells the RE3 now RE4 series. Seagate 
has a nice Constellation ES series. Hitachi just announced their new 3TB 
Ultrastar. These disks are NOT designed for "quiet" operation the way that 
desktop disks are. Most of the quiet operation is accomplished at the cost of 
performance. So called "green" drives are, in my opinion, one of the worst 
things ever conceived. Desktop HDDs don't eat that much power in comparison to 
say, an Intel i7 980X at 130 watts, or an nVidia GTX480 at 350 watts [3].  In 
comparison, an RE4 disk uses only... wait for it... 10.70 watts at peak 
consumption [4]. Note: all of those numbers ignore the cooling power costs.

3. Based on my limited knowledge, new enterprise arrays are now being populated
with SSDs and small-form-factor 2.5" disks that offer a smaller power envelope
per disk along with higher data density. Their firmware is not hacked to
"reduce the power consumption" by modeling typical Desktop User Workloads. Ugh.
"Buyer beware" has a whole new meaning in this arena these days. Gone are the
days of awesome Micropolis disks, as are the days of SCSI Fast/Wide, and, uh,
you can see my point mired in the past somewhere there... ;-)

If you have purchased some new desktop disks for RAID, I encourage you to
exchange them for the enterprise variants. You get what you pay for. Is your
data really that disposable? No amount of testing will fix the fact that the
firmware for commodity drives is (1) not upgradeable 90% of the time and (2)
not made for RAID. You might find a good drive listed at [5].

For some enterprise drives, see:

Seagate Constellation ES 1TB 3.5" SATA 3.0Gb/s Internal Hard Drive -Bare Drive
http://www.newegg.com/Product/Product.aspx?Item=N82E16822148590

Western Digital RE4 2TB 3.5" SATA 3.0Gb/s Internal Hard Drive
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136579

Hitachi Ultrastar 3.5-Inch 2 TB 7200RPM SATA II 32 MB Cache Enterprise Hard 
Drive with 24x7 Duty Cycle 0F10452
http://www.amazon.com/Hitachi-Ultrastar-3-5-Inch-Enterprise-0F10452/dp/B00492BU4A

4. Monitoring: mdadm, SMART, and lm-sensors are your friends.
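
For the md side, mdadm can nag you directly (put a MAILADDR line in mdadm.conf
and run "mdadm --monitor --scan"), and smartd does the same for SMART trouble.
As a backstop, here's a rough sketch of my own, in the same spirit as the
raidcheck script Sam attached below (but not that script): it scans
/proc/mdstat for arrays with failed or missing members, so you can run it from
cron and mail yourself the output. It assumes Linux software RAID.

  #!/usr/bin/env python
  # Rough sketch, assuming Linux md RAID: flag arrays whose member map in
  # /proc/mdstat contains a '_' (a failed or missing member). Exits nonzero
  # so a cron wrapper or Nagios check can alert on it.
  import re
  import sys

  def degraded_arrays(mdstat_path="/proc/mdstat"):
      """Yield (array_name, member_map) for arrays that look degraded."""
      current = None
      with open(mdstat_path) as f:
          for line in f:
              m = re.match(r"^(md\d+)\s*:", line)
              if m:
                  current = m.group(1)
              # Status lines look like "976630336 blocks [2/2] [UU]"; an
              # underscore in the bracketed map ("[U_]") means trouble.
              status = re.search(r"\[([U_]+)\]", line)
              if current and status and "_" in status.group(1):
                  yield current, status.group(1)

  if __name__ == "__main__":
      problems = list(degraded_arrays())
      for name, members in problems:
          print("DEGRADED: %s [%s]" % (name, members))
      sys.exit(1 if problems else 0)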

5. ZFS still wins. It is under active development in a number of places for
different reasons. FreeBSD supports it, as does a recent port to Linux.
OpenSolaris, per se, was taken in-house by Oracle, but you can find forks at:

http://openindiana.org/
http://www.illumos.org/
http://www.nexenta.org/

For a full list, see: http://en.wikipedia.org/wiki/OpenSolaris

Finally, since this is VAGUE: Those who hold your data control your data. From 
formats to locations. From your hard drive to Google's super-computer. If you 
care about your data, you have to take care of it.

Aside: About a year after the big Seagate Firmware Fiasco of 2009 for the
7200.11 series of disks, I got curious and read numerous articles on the
issue, including one interesting post from someone claiming to be an engineer
at Seagate. His description of the problem was enlightening. See [1,2].

References:

[1] 
http://www.tomshardware.com/news/seagate-firmware-7200.11-drives-bricking,6885.html
[2] http://it.slashdot.org/comments.pl?sid=1098793&cid=26542735
[3] http://www.tomshardware.com/reviews/geforce-gtx-480,2585-15.html
[4] http://www.wdc.com/global/products/specs/?driveID=732&language=1
[5] http://www.storagereview.com/

Thanks.

have a day.yad
jdpf


On Feb 24, 2011, at 11:44 AM, Sam Hooker wrote:

> 
> ----- Original Message -----
>> Anthony Carrico <[email protected]> writes:
>>> I've got two 2T drives (WD20EARS) and an old motherboard for a
>>> backup
>>> server (not an easy time for me, but my old backup system, which
>>> supports a few people, is running out of space). I plan to use raid
>>> to
>>> mirror them. Actually I've got four of these drives, two for another
>>> computer.
>> 
>> I have 4 750GB drives in a raid 1+0. I've had drive failures due to a
>> bad SATA cable (easily recoverable, of course), another due to a
>> proper
>> drive failure, and its replacement is already showing SMART
>> Raw_Read_Error_Rate and Hardware_ECC_Recovered counts. "raid 0 and
>> pray" is good advice for just 2 drives :) . In that case, I'd also
>> suggest having a spare already on hand, and adopting the policy that a
>> drive failure is a non-maskable interrupt, that must be dealt with
>> before anything else.
> 
> My personal stuff (hosting box; media server; backup server, even) is all 
> Linux software RAID1 at minimum, since work- and family-obligation interrupts 
> frequently preempt "hobby stuff" for days or even weeks on end. (Congrats on 
> your *ahem* "new family-obligation interrupts", BTW!)
> 
>>> Ongoing monitoring?
>> 
>> /me ♥ logwatch and smartd, but I'm an amateur at this stuff.
> 
> Again, speaking only for my personal boxes: the physical machines' logwatches 
> are the only ones I read religiously, scanning daily for security stuff and 
> the dreaded SMART errors. I backstop that with a rudimentary check (15-minute 
> cron job, but it could easily be parlayed into a Nagios check, for instance) 
> that emails on md array/member failure. That's attached (perl script), 
> without warranty of fitness for any particular purpose blah blah blah etc I 
> rest my case. (Read: I wrote it late at night and it may miss critical 
> conditions. Let me know, will ya? ;-))
> 
>>> Is ZFS so great and wonderful that it
>>> is worth running BSD (instead of Linux) on a backup server?
>> 
>> ZFS always seemed awesome. But, Apple stopped using ZFS, and Oracle
>> already cared about btrfs before buying Sun. Is it even an option?
> 
> I'm certainly no ZFS expert (barely even familiar, really), so please take 
> this with a grain of salt, but a good buddy of mine (Windows guy) decided to 
> dip his toe in the FOSS waters by building a FreeNAS[1] box with ~6TB of 
> formatted ZFS, only to have it crap out in mysterious and opaque ways. This 
> was at least a year ago, and he turned to me as his "Unix guy" friend to help 
> him recover all his family's digital photos. We tried all manner of juju over 
> the course of a week or two, and got nothing. I recall that the docs (both 
> the ZFS util man pages and web-available stuff) were not terribly helpful, 
> and the whole affair left a bad taste in my mouth. That may have changed in 
> the intervening time, though. Oh, and his were WD "green" drives, of some 
> stripe. Caveat administrator. :-)
> 
> 
> Cheers,
> 
> -sth
> 
> [1]http://freenas.org
> 
> sam hooker|[email protected]|http://www.noiseplant.com
> 
>    "Elmo: The Other Red Meat."
>                               -akw
> <raidcheck>
