On Jan 14, 2012, at 6:36 AM, Stefan Ring wrote:
> Inspired by the paper "End-to-end Data Integrity for File Systems: A
> ZFS Case Study", I've been thinking about whether it is possible to devise
> a way in which a minimal in-memory data corruption would cause massive data loss.
For enterprise-class systems, you will find hardware protection such as ECC
and other mechanisms all the way up and down the datapath. For example,
if you build an ALU, you can add a few transistors to also detect the various
failure modes that afflict data flowing through it. This is one of the things
that differentiates a mainframe or SPARC64 processor from a run-of-the-mill PeeCee.
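The same principle as those extra ALU transistors shows up in ECC memory: a few redundant bits let the hardware detect and correct a single flipped bit. A toy Python sketch of a Hamming(7,4) code (illustrative only, not how any particular CPU or DIMM actually implements it):

```python
def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit Hamming(7,4) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]  # data bits d1..d4
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword bit positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(code):
    """Return (corrected nibble, error position); position 0 means no error."""
    c = code[:]  # positions 1..7 stored at indices 0..6
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3  # syndrome names the faulty bit position
    if pos:
        c[pos - 1] ^= 1  # correct the single flipped bit
    nibble = c[2] | (c[4] << 1) | (c[5] << 2) | (c[6] << 3)
    return nibble, pos

# Flip one bit "in memory"; the decoder both detects and corrects it.
word = hamming74_encode(0b1011)
word[4] ^= 1  # simulate a single-bit fault at position 5
value, err = hamming74_decode(word)
assert value == 0b1011 and err == 5
```

Real ECC DIMMs use a wider SECDED code over 64-bit words, but the detect-and-correct idea is the same.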
> I could imagine a scenario where an entire directory branch
> drops off the tree structure, for example. Since I know too little
> about ZFS's structure, I'm also asking myself if it is possible to
> make old snapshots disappear via memory corruption or lose data blocks
> to leakage (not containing data, but not marked as available).
Sure. If you'd like a fright, read the errata sheet for a modern microprocessor.
> I'd appreciate it if someone with a good understanding of ZFS's
> internals and principles could comment on the possibility of such scenarios.
ZFS does expect that the processor, memory, and I/O systems work to some
degree. The only way to get beyond this sort of dependency is to implement a
redundant system like we do for avionics.
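By "like we do for avionics" I mean something along the lines of triple modular redundancy: run the computation on independent lanes and vote on the result, so a fault in any single lane is masked. A toy sketch (function names are mine, purely illustrative):

```python
from collections import Counter

def tmr_vote(results):
    """Majority vote over redundant computations (triple modular redundancy)."""
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one lane faulted")
    return winner

def redundant_add(a, b, fault_lane=None):
    """Run the same ALU operation on three 'lanes'; optionally corrupt one."""
    lanes = [a + b, a + b, a + b]
    if fault_lane is not None:
        lanes[fault_lane] ^= 0x40  # simulate a stuck bit in one lane
    return tmr_vote(lanes)

assert redundant_add(2, 3) == 5
assert redundant_add(2, 3, fault_lane=1) == 5  # single-lane fault is masked
```

The cost, of course, is 3x the hardware plus a voter, which is why you see it in flight computers and not in commodity servers.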
>  http://www.usenix.org/event/fast10/tech/full_papers/zhang.pdf
Yes. NetApp has funded those researchers in the past. Looks like a FUD piece to me.
Look out everyone, the memory system you bought from Intel might suck!
zfs-discuss mailing list