Damon Atkins <damon_atk...@yahoo.com.au> writes:

> One problem could be block sizes: if a file is re-written over a long
> period of time (many txgs), it may end up the same size yet contain
> different ZFS record sizes within it (ignoring compression), so you
> could not use ZFS checksums to compare the two files.

The record size used for a file is chosen when that file is created; it
can't change afterwards. When the default record size for the dataset
changes, only files created after that point are affected. ZFS *must*
write a complete record even if you change just one byte (unless it's
the tail record, of course), since the block pointers offer no finer
granularity.
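A toy sketch of Damon's point above (not ZFS's actual on-disk checksum
pipeline, and the record sizes are just example values): two
byte-identical files stored with different record sizes produce
different per-record checksum lists, so per-record checksums alone
cannot prove the files are equal.

```python
import hashlib

def record_checksums(data: bytes, recsize: int) -> list:
    # Split the data into fixed-size records (the tail record may be
    # short) and checksum each one, loosely mimicking per-block
    # checksums.  SHA-256 stands in for whatever checksum is in use.
    return [hashlib.sha256(data[i:i + recsize]).hexdigest()
            for i in range(0, len(data), recsize)]

data = bytes(range(256)) * 1024  # 256 KiB of identical file content

# Same bytes, but stored with different record sizes (e.g. the files
# were created under different dataset recordsize settings).
a = record_checksums(data, 128 * 1024)  # 128 KiB records
b = record_checksums(data, 8 * 1024)    # 8 KiB records

print(a == b)   # False: the per-record checksum lists differ
print(hashlib.sha256(data).hexdigest() == hashlib.sha256(data).hexdigest())
                # True: a whole-file hash would still match
```

So a dedup or comparison tool has to hash the file contents itself
rather than reuse the per-record checksums, unless it knows both files
share the same record layout.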

-- 
Kjetil T. Homme
Redpill Linpro AS - Changing the game

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
