I have read some conflicting things regarding the ZFS record size setting.
Could you guys verify/correct these statements:

(These reflect my understanding, not necessarily the facts!)

1) The block size of a zvol (volblocksize, the zvol analogue of record size) 
is the unit at which dedup happens.  So, for a zvol that is shared to an NTFS 
machine, if the NTFS cluster size is smaller than the zvol block size, dedup 
will get dramatically worse, since identical clusters positioned differently 
within zvol blocks won't dedup.
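
To make the alignment concern concrete, here's a quick Python sketch
(the sizes are just assumptions for illustration: 4K NTFS clusters
written into 8K zvol blocks):

    import hashlib
    import os

    CLUSTER = 4096   # assumed NTFS cluster size
    BLOCK = 8192     # assumed zvol block size

    # Two zvols holding the same 4K clusters, but the second one has
    # the data shifted by one cluster.
    clusters = [os.urandom(CLUSTER) for _ in range(256)]
    vol_a = b"".join(clusters)
    vol_b = os.urandom(CLUSTER) + b"".join(clusters[:-1])

    def block_hashes(data, size):
        # Dedup matches whole blocks by checksum, so hash per block.
        return {hashlib.sha256(data[i:i + size]).hexdigest()
                for i in range(0, len(data), size)}

    shared = block_hashes(vol_a, BLOCK) & block_hashes(vol_b, BLOCK)
    print("blocks shared between volumes:", len(shared))  # prints 0

Even though nearly all the 4K clusters are identical, no 8K block
boundary lines up, so (if my understanding is right) nothing dedups.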

2) For shared folders, the record size is the allocation unit, so large 
records can waste a substantial amount of space when there are lots of very 
small files.  This is different from a HW RAID stripe size, which affects 
only performance, not space usage.
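
If that premise holds (I'm not sure it does -- recordsize may be an
upper bound, with small files stored in smaller blocks), the waste is
simple arithmetic.  A hypothetical workload of a million 2K files:

    def wasted_bytes(file_sizes, record_size):
        # Bytes lost to round-up if every file is padded to whole records.
        used = sum(-(-s // record_size) * record_size for s in file_sizes)
        return used - sum(file_sizes)

    files = [2048] * 1_000_000   # hypothetical: one million 2 KiB files
    for rs in (4096, 16384, 131072):
        gib = wasted_bytes(files, rs) / 2**30
        print(f"recordsize {rs:>6}: {gib:6.1f} GiB wasted")

At 128K records that's over 100 GiB lost to padding, versus about
2 GiB at 4K -- which is why I'd expect this to matter, unlike a HW
RAID stripe size.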

3) Although small record sizes carry a large RAM overhead for dedup tables, 
as long as the dedup table's working set fits in RAM, and the rest fits in 
L2ARC, performance will be good.
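
For scale, here's the back-of-the-envelope math I'm working from,
using ~320 bytes per in-core DDT entry (a ballpark figure I've seen
quoted, not one I can vouch for) and a hypothetical 10 TiB of unique
data:

    DDT_ENTRY_BYTES = 320      # assumed in-core cost per dedup-table entry
    POOL_BYTES = 10 * 2**40    # hypothetical 10 TiB of unique data

    for block in (4096, 8192, 131072):
        entries = POOL_BYTES // block
        gib = entries * DDT_ENTRY_BYTES / 2**30
        print(f"block {block:>6}: {entries:,} entries, ~{gib:,.0f} GiB of DDT")

Halving the block size doubles the table, so at 4K blocks the full
table would be ~800 GiB here -- hence my question about how much of
it really needs to sit in RAM versus L2ARC.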

Thanks,
   Rob