I enabled compression on a ZFS filesystem with compression=gzip-9 - i.e. fairly 
slow compression - it stores backups of databases (which compress fairly 
well).
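
For reference, this is the property syntax I mean (the dataset name below is just a placeholder, not my actual pool):

```shell
# "tank/backups" is a placeholder dataset name
zfs set compression=gzip-9 tank/backups

# check the setting and the compression ratio actually achieved
zfs get compression,compressratio tank/backups
```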

The next question is:  Is the on-disk checksum computed over the uncompressed 
data (which seems more likely to be recoverable) or over the compressed 
data (which seems slightly less likely to be recoverable)?

Why? 

Because if you can dedup anyway, why bother to compress and THEN check? That 
SEEMS to be the behaviour - i.e. I suspect many of the files I'm writing are 
dups - yet I see high CPU use even though on some of the copies I see 
almost no disk writes.

If the dedup check happened first AND the block were a duplicate, I should see 
hardly any CPU use (because it wouldn't need to compress the data).
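
The ordering I'm describing could be sketched like this (illustrative Python, not ZFS source - function and variable names are made up). If dedup keys off the checksum of the *compressed* on-disk block, then every write has to be compressed before the dedup-table lookup, so even duplicate blocks pay the full compression CPU cost - which would match what I'm seeing:

```python
import hashlib
import zlib

# Sketch only: dedup table maps checksum-of-compressed-block -> stored block.
dedup_table = {}

def write_block(data: bytes) -> tuple[bytes, bool]:
    """Returns (checksum, was_duplicate) for a simulated block write."""
    compressed = zlib.compress(data, 9)             # compression happens first (gzip-9-ish cost)
    checksum = hashlib.sha256(compressed).digest()  # checksum over the compressed data
    if checksum in dedup_table:                     # dedup lookup comes last
        return checksum, True                       # duplicate: no new disk write, but CPU already spent
    dedup_table[checksum] = compressed
    return checksum, False

c1, dup1 = write_block(b"backup record\n" * 1000)
c2, dup2 = write_block(b"backup record\n" * 1000)  # identical data: dedup hit,
                                                   # yet compression still ran
```

Under this ordering, dup2 comes back True with no second stored copy - but the second call still burned the same compression CPU as the first, which is consistent with high CPU use alongside near-zero disk writes.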

Steve Radich
BitShop.com
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss