On 04/07/2014 09:34 PM, Prentice Bisbal wrote:
Was it wear-out, or some other failure mode?

And if wear-out, was it because consumer SSDs have lame wear-leveling or
something like that?

Here's how I remember it. You took the capacity of the disk, figured out
how much data would have to be written to it to wear it out, and then
divided that by the write bandwidth of the drive to figure out how long
it would take to write that much data if data was constantly being
written to the disk. I think the answer was on the order of 5-10 years,
which is a bit more than the expected lifespan of a cluster, making it a
non-issue.

This would be the ideal case, but it requires perfect wear-leveling and a write amplification factor of 1. Unfortunately, those properties rarely hold.
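
For what it's worth, here's that back-of-envelope arithmetic as a quick Python sketch. Every number below is made up purely for illustration; substitute your drive's actual capacity, rated P/E cycles, average write rate, and write amplification factor:

# Back-of-envelope SSD endurance estimate. All figures are
# hypothetical; plug in your own drive's specs.

capacity_bytes = 256e9   # 256 GB drive (assumed)
pe_cycles      = 3000    # rated program/erase cycles per cell (assumed)
avg_write_rate = 5e6     # sustained average host writes, bytes/s (assumed)
write_amp      = 1.0     # the ideal case; consumer drives are often > 1

total_writable = capacity_bytes * pe_cycles       # host bytes before wear-out
effective_rate = avg_write_rate * write_amp       # bytes actually hitting flash per second
lifetime_years = total_writable / effective_rate / (3600 * 24 * 365)

print("%.1f years" % lifetime_years)   # ~4.9 years with these made-up numbers

Note the divisor: a higher write amplification factor or a higher sustained write rate shortens the lifetime proportionally, which is why the ideal-case assumptions matter so much.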

However, again, in the case of using it as a Hadoop intermediate disk, write amplification would be a non-issue because you'd be blowing away the data after runs (make sure to use a scripted trim or something, unless the filesystem auto-trims, which you may not want). Wear-leveling would also be less important because the data written and read would be large and highly sequential; wear-leveling is trivial under those conditions.
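
The scripted trim could be as simple as something like this (just a sketch: it assumes util-linux's fstrim is installed, and the /scratch mount point and directory name are hypothetical):

#!/usr/bin/env python3
# Hypothetical post-job cleanup: delete the run's intermediate data,
# then TRIM the scratch filesystem so the SSD knows the blocks are free.
# Assumes util-linux's fstrim and a scratch SSD mounted at /scratch.

import shutil, subprocess

SCRATCH_DIR = "/scratch/hadoop-intermediate"   # hypothetical path

shutil.rmtree(SCRATCH_DIR, ignore_errors=True)            # blow away the run's data
subprocess.run(["fstrim", "-v", "/scratch"], check=True)  # tell the drive the space is free

Running that after each job batch keeps the drive's free-block pool full without mounting with continuous auto-trim (e.g. the discard mount option), which you may not want.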

ellis

