On 2012-04-08 6:06, Richard Elling wrote:
> You can't get past the age-old idiom: you get what you pay for.

True... but it can be somewhat countered with the Dr. House-era
idiom: people lie, even when they don't mean to ;)

Rhetoric follows ;)

Hardware breaks sooner or later, due to poor design, Brownian
motion of the ICs' atoms, or a flock of space death rays.
So in the extreme case, software built for reliability should
assume that nothing is the way it seems or the way the hardware
reports it. From that viewpoint, the 10^-14 vs. 10^-16 BER, or
the fact that expensive disks usually complete flush requests
while cheaper ones likely don't, is just a change of non-zero
factors in the probability of an actual error and data loss.
Even the hashes have a minuscule but non-zero chance of
collision.
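
For instance, a rough back-of-envelope sketch (the 2 TB drive
size is just an assumption for illustration) of how those BER
figures translate into the odds of hitting an unreadable bit on
one full read of the disk:

    # Hedged illustration, not any vendor's spec sheet: error odds
    # over one full read of a hypothetical 2 TB drive.
    import math

    bits_read = 2e12 * 8                  # ~1.6e13 bits per full pass
    for ber in (1e-14, 1e-16):
        expected = ber * bits_read        # expected unreadable bits
        # probability of at least one unreadable bit in the pass
        p_any = -math.expm1(bits_read * math.log1p(-ber))
        print("BER %g: ~%.4f expected errors, P(>=1) ~ %.4f"
              % (ber, expected, p_any))

Roughly 15% vs. 0.16% per full pass - the pricier drive shifts
the odds by a couple of orders of magnitude, but neither gets you
to zero.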

This is reminiscent of the approach of building dedicated
network segments for higher-security tasks - networked devices
(especially ones connected to "the outside") are assumed to have
been hacked into already. If not yet, then some zero-day exploit
will come along and they will be.

It is understandable that X%-reliable systems can probably be
built more easily if you start with more reliable components,
but those components are not infinitely better than "unreliable"
ones.

So, is there really a fundamental requirement to avoid
cheap hardware, and are there no good ways to work around
its inherently higher instability and lack of dependability?

Or is it just a harder goal (indefinitely far away on the
project roadmap)?

Just a thought...
//Jim

