[Still waiting for answers on my earlier questions]

So I take it that ZFS solves one problem very well: integrity of data 
blocks. It uses checksums and atomic writes for this purpose, and as far as I 
can tell from following this list, nobody has ever had problems in that respect.
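To make that distinction concrete, here is a minimal sketch (in Python, not ZFS's actual code or on-disk format) of what per-block checksumming buys you: corruption in a block is detected the moment you read it, but the checksum says nothing about whether you still know where the block fits.

```python
import hashlib

def write_block(data: bytes) -> tuple[bytes, bytes]:
    # Hypothetical on-disk pair: the data plus its checksum.
    return data, hashlib.sha256(data).digest()

def read_block(data: bytes, stored_checksum: bytes) -> bytes:
    # Verify the block against its checksum before returning it.
    if hashlib.sha256(data).digest() != stored_checksum:
        raise IOError("checksum mismatch: block is corrupt")
    return data

# A flipped bit is caught on read:
block, csum = write_block(b"hello zfs")
try:
    read_block(b"hellp zfs", csum)  # one byte corrupted
except IOError:
    print("corruption detected")
```

Note that `read_block` can only tell you the bytes are intact; if the pointer structure that says "this block is inode 5 of filesystem X" is gone, a perfectly verified block is still unreachable.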
However, it also looks to me as if there is a chance of holding a disk with 
100% correct data blocks but no way to retrieve a single one, under the 
unfortunate circumstance that the semantics of those blocks is lost. From what 
I can gather here (and correct me if I am wrong), the problem lies not so much 
with the individual file system to which these correct blocks belong as with 
the overall structure that ties those file systems together.
If that is the case, a copy/mirror as used in FAT32 might be one solution, 
though perhaps not the most elegant one. Could another approach be to give 
each file system a (virtual) self-contained, basic pool to which it belongs, 
and from which it could be recovered? A pool that is overruled by the 
existence of a consistent higher-level pool (the one the user has created 
and interacts with)?
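The FAT32-style idea above can be sketched in a few lines: keep more than one checksummed copy of the critical structure and, on read, fall through to the next copy when the preferred one fails to verify. (This is my illustration of the proposal, not how ZFS stores its metadata; names and layout are invented.)

```python
import hashlib

def checksummed(data: bytes) -> tuple[bytes, bytes]:
    # Pair the data with its checksum, as in the earlier sketch.
    return data, hashlib.sha256(data).digest()

def read_with_fallback(copies):
    # Try each stored copy in turn; return the first that verifies.
    for data, csum in copies:
        if hashlib.sha256(data).digest() == csum:
            return data
    raise IOError("all copies corrupt")

meta = b"pool layout"
primary = (b"pool l\x00yout", hashlib.sha256(meta).digest())  # primary copy damaged
mirror = checksummed(meta)                                    # mirror still intact
print(read_with_fallback([primary, mirror]))  # -> b'pool layout'
```

The "overruled" rule in the text maps onto the ordering of `copies`: the higher-level pool is tried first, and the basic fallback pool only matters when the preferred copy no longer verifies.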
I concede that these ideas may be impossible for one reason or another, but 
conceptually at least, a fall-back pool is thinkable. Nobody expects 
consistency from a file whose drive is yanked mid-write. But an 'atomic' 
update before and after could be useful: one that propagates through to the 
upper level, so that the state of the pool is consistent at every moment, 
with or without the changes of the underlying file system.
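The 'atomic before and after' idea is essentially the copy-on-write pattern: build the new state off to the side, then make it visible in a single atomic step, so an interruption leaves either the complete old state or the complete new one. A minimal sketch using a POSIX `rename` as the atomic "pointer flip" (file names here are invented for illustration):

```python
import json
import os
import tempfile

def atomic_update(path: str, state: dict) -> None:
    # Write the new state beside the old one, then atomically rename it
    # into place. On POSIX, rename() within one filesystem is atomic, so
    # a crash at any point leaves either the old or the new state intact.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())  # data durable before the flip
    os.rename(tmp, path)      # the atomic "pointer flip"

atomic_update("pool_state.json", {"txg": 1, "root": "old"})
atomic_update("pool_state.json", {"txg": 2, "root": "new"})
with open("pool_state.json") as f:
    print(json.load(f)["txg"])  # -> 2
```

Propagating this through to the upper level would mean the pool's own root is flipped the same way, after all the file-system-level flips below it have completed, which is roughly what the text is asking for.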
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
