On Mon, Dec 14, 2009 at 9:53 PM, <casper....@sun.com> wrote:
>
>> On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote:
>>> ZFS deduplication is block-level, so to deduplicate one needs the data
>>> broken into blocks to be written. With compression enabled, you don't
>>> have these until the data is compressed. Looks like a waste of cycles
>>> indeed, but ...
>>
>> ZFS compression is also block-level. Both are done on ZFS blocks. ZFS
>> compression is not streamwise.
>
> And if you enable "verify" and you checksum the uncompressed data, you
> will need to uncompress before you can verify.
Right, but 'verify' seems to be an 'extreme safety' option and thus a
rather rare use case. The cycles saved by not compressing duplicates look
likely to outweigh the 'decompress before verify' overhead, imo.

Regards,
Andrey

> Casper
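To make the trade-off concrete, here is a toy sketch in plain Python of a
write path where the dedup checksum covers the uncompressed data, i.e. the
scenario Casper describes. All names (dedup_table, block_store, write_block)
are invented for illustration; this is a model of the ordering under
discussion, not the actual ZFS pipeline.

import hashlib
import zlib

dedup_table = {}   # checksum of uncompressed data -> id of stored block
block_store = {}   # block id -> compressed bytes on "disk"
next_id = 0

def write_block(data, verify=False):
    """Write one logical block. Dedup is keyed on the uncompressed
    data, so a verifying hit must decompress the stored copy."""
    global next_id
    checksum = hashlib.sha256(data).digest()  # over UNcompressed data

    hit = dedup_table.get(checksum)
    if hit is not None:
        if verify:
            # Stored block is compressed; to byte-compare it against
            # the new uncompressed data we must decompress it first --
            # the extra cost Casper points out.
            if zlib.decompress(block_store[hit]) == data:
                return hit           # true duplicate, nothing written
            # checksum collision: fall through and store a new copy
        else:
            return hit               # trust the checksum, skip compression

    # No usable duplicate: compress and store a new physical block.
    compressed = zlib.compress(data)
    block_id = next_id
    next_id += 1
    block_store[block_id] = compressed
    dedup_table[checksum] = block_id
    return block_id

Note that with verify off, a dedup hit returns before zlib.compress ever
runs (the saved cycles I mean above); with verify on, every hit instead
pays one zlib.decompress.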