I'm curious as to how send/recv intersects with dedupe... if I send/recv a deduped filesystem, is the data sent in its de-duped form, i.e. each unique block sent just once, followed by references for subsequent duplicate data, or is the data sent in expanded form, with the recv-side system then having to redo the dedupe process?

Obviously sending it deduped is more efficient in terms of bandwidth and of CPU time on the recv side, but is it also more complicated to achieve?
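
For what it's worth, here's one way I could imagine testing this empirically (just a sketch; "tank/test" is a made-up dataset name, and it assumes a pool with dedup available):

   # write the same 100MB of data twice into a dedup-enabled dataset
   zfs create -o dedup=on tank/test
   dd if=/dev/urandom of=/tank/test/a bs=1M count=100
   cp /tank/test/a /tank/test/b
   zfs snapshot tank/test@snap

   # measure the raw send stream: a size near 100MB would mean the
   # duplicate blocks went over the wire only once, while ~200MB
   # would mean the stream was expanded
   zfs send tank/test@snap | wc -c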


Also - do we know yet what effect block size has on dedupe? My guess is that a smaller block size will perhaps give a better duplicate match rate, but at the cost of higher CPU usage and perhaps reduced performance, since the system will need to store larger de-dupe hash tables?
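
(As a way to get a feel for that, I suppose one could vary recordsize on a test dataset and let zdb estimate the resulting dedup table; "tank" and "tank/test" are placeholder names here:)

   # recordsize caps the block size for file data, so smaller records
   # mean more, smaller blocks to hash and track in the dedup table
   zfs set recordsize=8K tank/test    # versus the 128K default

   # simulate dedup across the pool and print a DDT histogram
   # plus an estimated dedup ratio
   zdb -S tank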

Regards,
   Tristan
