On Fri, May 27, 2011 at 04:32:03AM +0400, Jim Klimov wrote:
> One more rationale in this idea is that with deferred dedup
> in place, the DDT may be forced to hold only non-unique
> blocks (2+ references), and would require less storage in
> RAM, disk, L2ARC, etc. - in case we agree to remake the
> DDT on every offline-dedup operation.

This is an interesting point.  In this case, deferred dedup would be
the only way for a given block hash to gain two or more references,
but once an entry is in the DDT, further copies could be deduplicated
inline as normal.  This probably gives you most of the (space) benefit
for much less (memory) cost.
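To make the policy concrete, here is a hypothetical sketch (illustrative names, not real ZFS code) of a DDT that only ever stores hashes with 2+ references: unique blocks are written without an entry, and a deferred pass promotes duplicates into the table. The `bp_rewrite` of the duplicate blocks themselves is elided.

```python
# Sketch of a "multi-reference only" DDT. Assumption: the deferred
# pass can rescan all on-disk blocks; everything here is illustrative.
import hashlib

def block_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class DeferredDDT:
    def __init__(self):
        # hash -> refcount; invariant: only entries with 2+ references
        self.refcounts = {}

    def write(self, data: bytes) -> bool:
        """Inline write path: dedupe only if the hash is already known.
        Returns True if the write was deduplicated."""
        h = block_hash(data)
        if h in self.refcounts:
            self.refcounts[h] += 1
            return True
        return False  # unique so far: written normally, no DDT entry

    def offline_dedup(self, on_disk_blocks):
        """Deferred pass: count hashes across all blocks and add DDT
        entries only for those seen 2+ times (rewrite pass elided)."""
        seen = {}
        for data in on_disk_blocks:
            h = block_hash(data)
            seen[h] = seen.get(h, 0) + 1
        for h, n in seen.items():
            if n >= 2:
                self.refcounts[h] = max(self.refcounts.get(h, 0), n)
```

The memory saving comes from the invariant: unique blocks (usually the vast majority) never occupy a DDT entry, so the table scales with the duplicated data only.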

In reverse, pruning the DDT of single-instance blocks could be a
useful operation for recovering from a case where the DDT has grown
too large for the system.  It would still need a complex bp_rewrite.
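The pruning step itself is simple to state; the hard part is the rewrite of the surviving blocks. A minimal sketch (hypothetical names, treating the DDT as a hash-to-refcount map):

```python
# Illustrative only: drop single-reference entries from an oversized
# DDT. The real operation would also need a bp_rewrite pass to clear
# the dedup metadata on the blocks whose entries were removed.
def prune_unique_entries(ddt: dict) -> dict:
    """Keep only hashes referenced by two or more blocks."""
    return {h: refs for h, refs in ddt.items() if refs >= 2}
```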

--
Dan.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
