In case you are still trying to get this resolved:

Since you have enabled deduplication in the past, unfortunately the 
deduplication tables must be loaded into memory for every write. ZFS must 
check whether each written block matches a previously stored block, to 1) 
increment the dedup count for a match, or 2) decrement the dedup count for a 
no-longer-needed block.
This often means that the table must be loaded from disk repeatedly, especially 
if the table is large enough that it is purged from memory often. It is even 
possible to get your pool into a state where it cannot be imported due to the 
amount of memory required.
From the disk activity and slowness you described, it sounds like the dedup 
table (DDT) is being loaded, dumped, and loaded again repeatedly.

I don't know off the top of my head if you can use the 'zdb' command to examine 
the current DDT, but there is a way to simulate the effect of enabling dedup on 
a given dataset using zdb in order to estimate the size of the DDT that would 
result.
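If memory serves, the relevant invocations look roughly like the following 
(command flags from memory, so double-check them against your zdb and zpool 
man pages; 'tank' is a hypothetical pool name):

```shell
# Display statistics about the existing DDT, including a histogram
# and the in-core/on-disk size per entry:
zdb -DD tank

# Simulate enabling dedup on the whole pool and print the DDT histogram
# that would result (this reads and checksums every block, so it can
# take a long time and generate heavy I/O on a large pool):
zdb -S tank

# A shorter summary of the current DDT is also shown by:
zpool status -D tank
```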

The only resolution, as Allan stated, is to zfs send and zfs recv the dataset 
into a new copy that does not have dedup enabled, or recreate it from scratch. 
Depending on whether you enabled dedup on a single dataset, multiple datasets, 
or the whole pool, it may be easiest to recreate the whole pool. This is a 
side-effect of dedup that some have referred to as 'toxic', and some googling 
will show that it is a common issue encountered when experimenting with dedup 
(on all ZFS platforms).

Some guides will recommend only enabling dedup if you know that your data will 
have many duplicated blocks, and also that the tables will easily fit into 
memory. There are also guides that can help to estimate the amount of ram 
required to hold a DDT based on the blocksize and number of blocks in the 
dataset. This can be complicated by many factors, so it is usually best to 
avoid turning dedup on unless you are sure that it will help rather than hinder 
performance.
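As a rough back-of-the-envelope check, the estimate those guides describe can 
be sketched like this. The ~320 bytes per unique block is a commonly cited 
approximation for an in-core DDT entry, not an exact figure, and the real 
number varies by ZFS version:

```python
# Rough DDT memory estimate: each *unique* block needs a DDT entry,
# commonly approximated at ~320 bytes of RAM when held in core.
BYTES_PER_DDT_ENTRY = 320  # approximation; varies by ZFS version

def estimate_ddt_ram(dataset_bytes, avg_block_size, dedup_ratio=1.0):
    """Estimate RAM needed to hold the DDT for a dataset.

    dedup_ratio > 1.0 means fewer unique blocks (more duplicates).
    """
    total_blocks = dataset_bytes / avg_block_size
    unique_blocks = total_blocks / dedup_ratio
    return unique_blocks * BYTES_PER_DDT_ENTRY

# Example: 10 TiB of data at 128 KiB records, no duplicate blocks at all:
ram = estimate_ddt_ram(10 * 2**40, 128 * 2**10)
print(f"{ram / 2**30:.1f} GiB")  # -> 25.0 GiB just for the DDT
```

Note that a smaller recordsize (e.g. 8K for a database) multiplies the block 
count, and therefore the DDT size, accordingly.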

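For reference, the send/recv migration might look something like this. The 
pool and dataset names are hypothetical, and 'zfs recv -o' needs a reasonably 
recent OpenZFS; on older systems, set dedup=off on the destination's parent 
dataset before receiving instead:

```shell
# Hypothetical names: pool 'tank', dataset 'tank/data'.
zfs snapshot tank/data@migrate

# Receive into a new dataset with dedup explicitly off; the rewritten
# blocks will not pass through the DDT.
zfs send tank/data@migrate | zfs recv -o dedup=off tank/data.new

# After verifying the copy, destroy the old dataset so its DDT entries
# are freed, then rename the new dataset into place.
zfs destroy -r tank/data
zfs rename tank/data.new tank/data
```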
Hope this helps!