Hi Jonathan,

----- Original Message -----
> 0. Enterprise disks are NOT the same thing as commodity disks, they
> differ in many respects. See:
> 
> [snip]

Just so we're on the same page here, I agree with "point 0". But...

> 1. I advise you all to NEVER RAID recent commodity drives. The
> firmware for non-enterprise SATA hardware is NOT designed to
> participate in any kind of RAID array, and will, in-fact, actively
> damage the array. The RAID technology, whether RAID-Z (ZFS) or RAID
> 1-6, software, hardware, or the
> netherworld-of-bios-based-motherboard-software-raid is immaterial.
> When a member disk sleeps, it will degrade the array, among other *bad
> things.* All of those *bad things* can lead to data loss, and do so
> rapidly. I expect this is exactly what happened to Mr. Hooker's
> friend.

...I need to diverge from you here. I'd contend that desktop disks are suitable 
for RAID use given certain parameters:

1) the RAID level includes some redundancy/parity ("RAID0? What's that?");
2) the data are personal* and backed-up;
3) the environment isn't too punishing;
4) you're comfortable with the quality of the disks; and
5) the disk behavior is tuned (if possible) to prevent sleeping (see the sketch just below this list).

* and/or the data owner is aware of the risk and accepts it
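
On point 5, here's roughly what I mean by "tuned": an untested Python sketch 
(run as root; the device names are placeholders -- substitute your own) that 
shuts off the standby timer with hdparm and reports the SMART load-cycle 
counter via smartctl. Note that the WD "green" head-parking behavior is 
governed by the idle3 timer, which wants the vendor's wdidle3 (or idle3ctl) 
rather than hdparm.

    #!/usr/bin/env python
    # Rough, untested sketch: disable spindown and report head-parking counts.
    # Assumes hdparm and smartctl are installed; /dev/sdb and /dev/sdc are
    # placeholders for your actual array members.
    import subprocess

    DISKS = ["/dev/sdb", "/dev/sdc"]  # placeholder device names

    for disk in DISKS:
        # -S 0 disables the standby (spindown) timer entirely
        subprocess.check_call(["hdparm", "-S", "0", disk])

        # dump SMART attributes and pull out the load-cycle counter, if present
        out = subprocess.check_output(["smartctl", "-A", disk]).decode()
        for line in out.splitlines():
            if "Load_Cycle_Count" in line:
                print("%s: %s" % (disk, line.strip()))

If that counter is climbing by thousands a day, hdparm alone won't save you -- 
that's idle3 territory.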

Again citing my personal (home/family/hobby) rigs: RAID1 everywhere; important 
stuff is backed-up, and I have no illusions with respect to what it takes to 
keep those data available; mine is not an excessively vibratory, high-density, 
thermally dicey environment (clean, cool, dry basement, up on shelves, no more 
than four disks per chassis); I only deploy Samsung and occasionally Hitachi 
disks**, and only *new* for anything I care about -- I've found my requirements 
for increased storage quite nearly keep pace with the failure replacement 
schedule; and "sleep"? Your disks *sleep*? You need to push those boxes more, 
man. ;-) (Seriously, though -- tune disks where possible.)

** not looking to start a flame war -- I know everyone's got their faves/hates

Given that the cost differential is at least 2x, I think it'll be a while 
before I'm feeling rich enough to start using enterprise-class disk under these 
circumstances. And lest one think, "Well, sure, enterprise-class disk costs at 
least twice as much as commodity, but you've said yourself that you've got this 
backed up to *another set of commodity disks*," you'd better believe I'm 
RAIDing and backing up the data irrespective of which class of disk the primary 
copy lives on. So, for my applications at least, commodity disk responsibly 
managed is OK by me. It's all about the user's expectations. :-) 


Cheers,

-sth

        "I'm the operator of my pocket calculator."
                                                                                
                                Kraftwerk
                                "Pocket Calculator"


> 2. Perusal of the "Green" drive reviews on Newegg will fill in the
> user "horror story" details. After becoming aware of this issue, I now
> only buy Enterprise class disks for RAID purposes. Google for "green
> drive raid" and you'll see what I'm talking about.
> 
> In the 3.5" form factor, Western Digital sells the RE3 now RE4 series.
> Seagate has a nice Constellation ES series. Hitachi just announced
> their new 3TB Ultrastar. These disks are NOT designed for "quiet"
> operation the way that desktop disks are. Most of the quiet operation
> is accomplished at the cost of performance. So-called "green" drives
> are, in my opinion, one of the worst things ever conceived. Desktop
> HDDs don't eat that much power compared to, say, an Intel i7 980X
> at 130 watts, or an nVidia GTX480 at 350 watts [3]. In comparison, an
> RE4 disk uses only... wait for it... 10.70 watts at peak consumption
> [4]. Note: all of those numbers ignore the cooling power costs.
> 
> 3. Based on my limited knowledge, new enterprise arrays are now being
> populated by SSD and small form-factor 2.5" disks that offer smaller
> power-envelopes per disk, along with a higher data density. The
> firmware is not hacked to "reduce the power consumption" by modeling
> typical Desktop User Workloads. Ugh. Buyer beware has a whole new
> meaning in this arena these days. Gone are the days of awesome
> Micropolis disks, as are the days of SCSI Fast/Wide, and uh, you can
> see my point mired in the past somewhere there... ;-)
> 
> If you have purchased some new desktop disks for RAID, I encourage you
> to exchange them with the Enterprise variants. You get what you pay
> for. Is your data really that disposable? No amount of testing will
> fix the fact that the firmware for commodity drives is 1. Not
> Upgradeable 90% of the time, and 2. Not made for RAID. You might find
> a good drive listed at [5].
> 
> For some enterprise drives, see:
> 
> Seagate Constellation ES 1TB 3.5" SATA 3.0Gb/s Internal Hard Drive
> -Bare Drive
> http://www.newegg.com/Product/Product.aspx?Item=N82E16822148590
> 
> Western Digital RE4 2TB 3.5" SATA 3.0Gb/s Internal Hard Drive
> http://www.newegg.com/Product/Product.aspx?Item=N82E16822136579
> 
> Hitachi Ultrastar 3.5-Inch 2 TB 7200RPM SATA II 32 MB Cache Enterprise
> Hard Drive with 24x7 Duty Cycle 0F10452
> http://www.amazon.com/Hitachi-Ultrastar-3-5-Inch-Enterprise-0F10452/dp/B00492BU4A
> 
> 4. Monitoring: mdadm, SMART, and lm-sensors are your friends.
> 
> 5. ZFS still wins. It is under active development in a number of
> places for different reasons. FreeBSD supports it, as does a recent
> port to Linux. OpenSolaris per se was "taken in-house" by Oracle,
> but you can find forks at http://openindiana.org/
> http://www.illumos.org/ and http://www.nexenta.org/ -- for a full list, see:
> http://en.wikipedia.org/wiki/OpenSolaris
> 
> Finally, since this is VAGUE: Those who hold your data control your
> data. From formats to locations. From your hard drive to Google's
> super-computer. If you care about your data, you have to take care of
> it.
> 
> Aside: About a year after the big Seagate Firmware Fiasco of 2009 for
> the 7200.11 series of disks, I got curious, and read numerous articles
> on the issue. I read one interesting post from someone claiming to be
> an engineer at Seagate. His description of the problem was
> enlightening. See [1,2].
> 
> References:
> 
> [1]
> http://www.tomshardware.com/news/seagate-firmware-7200.11-drives-bricking,6885.html
> [2] http://it.slashdot.org/comments.pl?sid=1098793&cid=26542735
> [3] http://www.tomshardware.com/reviews/geforce-gtx-480,2585-15.html
> [4] http://www.wdc.com/global/products/specs/?driveID=732&language=1
> [5] http://www.storagereview.com/
> 
> Thanks.
> 
> have a day.yad
> jdpf
> 
> 
> On Feb 24, 2011, at 11:44 AM, Sam Hooker wrote:
> 
> > ----- Original Message -----
> >> Anthony Carrico <[email protected]> writes:
> >>> I've got two 2T drives (WD20EARS) and an old motherboard for a backup
> >>> server (not an easy time for me, but my old backup system, which
> >>> supports a few people, is running out of space). I plan to use raid to
> >>> mirror them. Actually I've got four of these drives, two for another
> >>> computer.
> >>
> >> I have 4 750GB drives in a raid 1+0. I've had drive failures due to a
> >> bad SATA cable (easily recoverable, of course), another due to a proper
> >> drive failure, and its replacement is already showing SMART
> >> Raw_Read_Error_Rate and Hardware_ECC_Recovered counts. "raid 0 and
> >> pray" is good advice for just 2 drives :) . In that case, I'd also
> >> suggest having a spare already on hand, and adopting the policy that a
> >> drive failure is a non-maskable interrupt, that must be dealt with
> >> before anything else.
> >
> > My personal stuff (hosting box; media server; backup server, even)
> > is all Linux software RAID1 at minimum, since work- and
> > family-obligation interrupts frequently preempt "hobby stuff" for
> > days or even weeks on end. (Congrats on your *ahem* "new
> > family-obligation interrupts", BTW!)
> >
> >>> Ongoing monitoring?
> >>
> >> /me ♥ logwatch and smartd, but I'm an amateur at this stuff.
> >
> > Again, speaking only for my personal boxes: the physical machines'
> > logwatches are the only ones I read religiously, scanning daily for
> > security stuff and the dreaded SMART errors. I backstop that with a
> > rudimentary check (15-minute cron job, but it could easily be
> > parlayed into a Nagios check, for instance) that emails on md
> > array/member failure. That's attached (perl script), without
> > warranty of fitness for any particular purpose blah blah blah etc I
> > rest my case. (Read: I wrote it late at night and it may miss
> > critical conditions. Let me know, will ya? ;-))
> >
> >>> Is ZFS so great and wonderful that it
> >>> is worth running BSD (instead of Linux) on a backup server?
> >>
> >> ZFS always seemed awesome. But, Apple stopped using ZFS, and Oracle
> >> already cared about btrfs before buying Sun. Is it even an option?
> >
> > I'm certainly no ZFS expert (barely even familiar, really), so
> > please take this with a grain of salt, but a good buddy of mine
> > (Windows guy) decided to dip his toe in the FOSS waters by building
> > a FreeNAS[1] box with ~6TB of formatted ZFS, only to have it crap
> > out in mysterious and opaque ways. This was at least a year ago, and
> > he turned to me as his "Unix guy" friend to help him recover all his
> > family's digital photos. We tried all manner of juju over the course
> > of a week or two, and got nothing. I recall that the docs (both the
> > ZFS util man pages and web-available stuff) were not terribly
> > helpful, and the whole affair left a bad taste in my mouth. That may
> > have changed in the intervening time, though. Oh, and his were WD
> > "green" drives, of some stripe. Caveat administrator. :-)
> >
> >
> > Cheers,
> >
> > -sth
> >
> > [1] http://freenas.org
> >
> > sam hooker|[email protected]|http://www.noiseplant.com
> >
> >    "Elmo: The Other Red Meat."
> >                               -akw
