On Nov 2, 2009, at 9:07 AM, Victor Latushkin wrote:

Enda O'Connor wrote:
It works at the pool-wide level, with the ability to exclude at the dataset level, or the converse: if set to off on the top-level dataset, lower-level datasets can then be set to on. That is, you can include and exclude depending on each dataset's contents.
So largefile will get deduped in the example below.
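The inheritance described above can be sketched with the standard `zfs set` commands; the pool and dataset names here (`tank`, `tank/scratch`, `tank/vmimages`) are hypothetical examples, not from the thread:

```shell
# dedup is a per-dataset property that is inherited down the
# dataset tree, so it can be enabled broadly and then excluded
# where it doesn't pay off -- or the other way around.

# Enable dedup pool-wide, then exclude one dataset:
zfs set dedup=on tank
zfs set dedup=off tank/scratch

# Or the converse: off at the top, on for selected datasets:
zfs set dedup=off tank
zfs set dedup=on tank/vmimages

# Verify which datasets have it and where the value comes from:
zfs get -r dedup tank
```

The `SOURCE` column in the `zfs get` output shows whether each dataset's value is set locally or inherited from its parent.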

And you can use 'zdb -S' (which is a lot better now than it used to be before dedup) to see how much benefit is there (without even turning dedup on):
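A minimal sketch of the simulation mentioned above, assuming a hypothetical pool named `tank`:

```shell
# 'zdb -S' walks the pool and simulates dedup without modifying
# any data or requiring dedup to be enabled; it prints a
# histogram of block reference counts and an estimated dedup
# ratio, so you can judge the benefit before paying the cost.
zdb -S tank
```

Note that this reads every block in the pool, so it can take a while on a large pool.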

Forgive my ignorance, but what's the advantage of this new dedup over the existing compression option? Wouldn't full-filesystem compression naturally de-dupe?

-Jeremy
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
