> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ray Arachelian
> One thing you can do is enable dedup when you copy all your data from
> one zpool to another, then, when you're done, disable dedup.  It will no
> longer waste a ton of memory, and your new volume will have a high dedup
> ratio. 

That's not correct.  It sounds like you mistakenly believe the DDT is held 
in memory, but it's actually held on disk; because it gets used so heavily, 
large portions of it will likely end up in ARC/L2ARC.  Unfortunately, after you 
dedup a pool and then disable dedup, the DDT will still get used frequently, and 
will most likely still consume just as much memory.  But that's not the main 
concern anyway - the main concern is operations like snapshot destroy (or simply 
rm) which need to unlink blocks.  Unlinking a dedup'd block requires 
decrementing its refcount, which requires finding and rewriting the DDT entry, 
which means a flurry of essentially small random IO.  So the memory & 
performance with dedup disabled is just as bad, as long as you previously had 
dedup enabled for a significant percentage of your pool.
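
You can get a rough idea of how big the DDT on a given pool actually is with 
zdb (the pool name "tank" below is just a placeholder):

    # Summary of dedup statistics: entry counts, on-disk/in-core entry
    # sizes, and the overall dedup ratio.  entries * in-core size gives
    # a ballpark for the RAM/L2ARC needed to keep the table cached.
    zdb -D tank

    # Same, plus a histogram of entries by reference count:
    zdb -DD tank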

> Anyone know if zfs send | zfs recv will maintain the deduped files after
> this?  Maybe once deduped you can wipe the old pool, then use send|recv
> to get a deduped backup?

You can enable the dedup property on the receiving side, and then the data 
received will be dedup'd as it is written.  Whether data is stored 
deduplicated is determined entirely by the properties on the receiving side, 
not by how the data was stored on the sending pool.
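
A minimal sketch of that (pool, dataset, and snapshot names are placeholders):

    # Enable dedup on the receiving dataset before the receive starts;
    # dedup only applies to blocks written while the property is on.
    zfs set dedup=on backuppool/data

    # Replicate a snapshot; child datasets created under backuppool/data
    # inherit dedup=on, so the received blocks are dedup'd as they land.
    zfs snapshot tank/data@migrate
    zfs send tank/data@migrate | zfs recv backuppool/data/migrated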
