Jeremy Teo wrote:
The whole RAID does not fail -- we are talking about corruption
here.  If you lose some inodes, your whole partition is not gone.

My ZFS pool could not be salvaged -- poof, the whole thing was gone
(granted, it was a test pool and not a raidz or mirror yet).  But
still, given what happened, I cannot believe that 20 GB of data got
messed up because a 1 GB cache was not correctly flushed.

If you mess up 4 critical blocks (all copies of the same uberblock
metadata), then this is possible.  This is expected to be a highly
unlikely event, since the copies are spread across the device.
Inserting a cache between what ZFS thinks is the device and the
actual device complicates this a bit, but it is not clear to me what
alternative would be better across all possible failure modes.
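For example, zdb can dump the vdev labels so you can see those copies
for yourself (the device path below is just an example -- substitute
your own):

  # zdb -l /dev/dsk/c0t0d0s0

Two labels sit at the front of the device and two at the end, and each
one carries an array of uberblocks; the pool is imported using the
uberblock with the highest transaction group number that still passes
its checksum, which is why all of the copies have to be damaged before
the pool is lost.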

Chad, I think what you're asking for is a way for a zpool to let you
salvage whatever remaining data still passes its checksums.

That is the way I read this thread.  Perhaps this is a job for zdb (I'm
speculating, since I've never used zdb)?  Perhaps a zdb expert can chime in...
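(A guess, for anyone who wants to poke around: invoking zdb with just
a pool name, e.g.

  # zdb mypool

dumps the pool configuration and metadata, and "zdb -d mypool" lists
the datasets.  I have not tried walking a damaged pool this way myself,
so take that with a grain of salt.)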
 -- richard

