What I've noticed is that when my drives sit in a situation with little airflow, and hence hotter operating temperatures, they fail quite quickly. I've now moved my systems into large cases with large amounts of airflow, using the Icy Dock brand of removable drive enclosures.


I use the SASUC8I SATA/SAS controller to access 8 drives.


I put it in a PCI-e x16 slot on "graphics heavy" motherboards, which might have as many as 4x PCI-e x16 slots. I am replacing an old motherboard with one of these.


The case that I found to be a good match for my needs is the Raven.


It has enough external bays (7) to hold two 3-in-2 and one 4-in-3 Icy Dock enclosures, providing 10 drives in hot-swap bays.

I really think the big issue is that you must move the air. The drives need to stay cool, or you will see degraded performance and/or data loss much more often.

Gregg Wonderly

On 1/24/2012 9:50 AM, Stefan Ring wrote:
After having read this mailing list for a little while, I get the
impression that there are at least some people who regularly
experience on-disk corruption that ZFS should be able to report and
handle. I’ve been running a raidz1 on three 1TB consumer disks for
approx. 2 years now (about 90% full), and I scrub the pool every 3-4
weeks and have never had a single error. From the oft-quoted 10^14
error rate that consumer disks are rated at, I should have seen an
error by now -- the scrubbing process is not the only activity on the
disks, after all, and the data transfer volume from that alone clocks
in at almost exactly 10^14 by now.

Not that I’m worried, of course, but it comes at a slight surprise to
me. Or does the 10^14 rating just reflect the strength of the on-disk
ECC algorithm?
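As a rough sanity check on that expectation (a sketch only; the 1-error-per-10^14-bits rating is the commonly quoted consumer-drive spec, and the workload figures below are estimated from the post: 3x 1TB disks at ~90% full, scrubbed every ~3.5 weeks for ~2 years):

```python
# Back-of-the-envelope: expected unrecoverable read errors (UREs)
# under the common consumer-drive rating of 1 error per 1e14 bits read.
# Workload numbers are illustrative estimates from the post above.

disks = 3
bytes_per_disk_per_scrub = 0.9 * 1e12   # ~90% of a 1 TB disk read per scrub
scrubs = int(2 * 52 / 3.5)              # roughly 29 scrubs over two years

bits_read = disks * bytes_per_disk_per_scrub * scrubs * 8
expected_ures = bits_read * 1e-14       # rating: 1 URE per 1e14 bits

print(f"bits read over two years of scrubs: {bits_read:.2e}")
print(f"expected UREs at the 1e-14 rating: {expected_ures:.1f}")
```

On these assumptions the scrubs alone read several times 10^14 bits, so a naive reading of the spec predicts a handful of errors, which is exactly why seeing zero is mildly surprising; the rating is likely a conservative worst-case bound rather than an observed mean.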
zfs-discuss mailing list