On 01/08/2010 02:42 PM, Lutz Schumann wrote:
> See the reads on the pool with the low I/O? I suspect that reading
> the DDT causes the writes to slow down.
> 
> See this bug:
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566
> It seems to give some background.
> 
> Can you test setting "primarycache=metadata" on the volume you
> test? That would be my initial test. My suggestion is that it may
> improve the situation because your ARC can be better utilized for
> the DDT (this does not make much sense for production without an
> SSD cache, because you practically disable all read caching when
> there is no L2ARC (aka SSD)!)
> 
> As I read the bug report above, it seems that if the DDT
> (deduplication table) does not fit into memory, or is dropped from
> it, the DDT has to be read back from disk, causing massive random I/O.

The symptoms described in that bug report do match up with mine.  I have
also experienced long hang times (>1hr) destroying a dataset while the
disk just thrashes.

I tried setting "primarycache=metadata", but that did not help.  I
pulled the DDT statistics for my pool, but don't know how to determine
its physical size-on-disk from that.  If deduplication ends up requiring
a separate sort-of log device, that will be a real shame.
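For reference, this is roughly what I ran (the dataset name "nest/test"
is just a placeholder; substitute the dataset you are testing):

```shell
# Restrict the ARC to caching metadata only (DDT blocks are metadata),
# so user data does not compete with the DDT for ARC space:
zfs set primarycache=metadata nest/test

# Confirm the property took effect:
zfs get primarycache nest/test

# Revert to the default once the test is done:
zfs set primarycache=all nest/test
```

Note that with primarycache=metadata and no L2ARC, file data is not
cached at all, so this is only sensible as a diagnostic, not in
production.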

> # zdb -DD nest
> DDT-sha256-zap-duplicate: 780321 entries, size 338 on disk, 174 in core
> DDT-sha256-zap-unique: 6188123 entries, size 335 on disk, 164 in core
> 
> DDT histogram (aggregated over all DDTs):
> 
> bucket              allocated                       referenced          
> ______   ______________________________   ______________________________
> refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
> ------   ------   -----   -----   -----   ------   -----   -----   -----
>      1    5.90M    752G    729G    729G    5.90M    752G    729G    729G
>      2     756K   94.0G   93.7G   93.6G    1.48M    188G    187G    187G
>      4    5.36K    152M   80.3M   81.5M    22.4K    618M    325M    330M
>      8      258   4.05M   1.93M   2.00M    2.43K   36.7M   16.3M   16.9M
>     16       30    434K     42K   50.9K      597   10.2M    824K   1003K
>     32        5    255K   65.5K   66.6K      204   10.5M   3.26M   3.30M
>     64       20   2.02M    906K    910K    1.41K    141M   62.0M   62.2M
>    128        4      2K      2K   2.99K      723    362K    362K    541K
>    256        1     512     512     766      277    138K    138K    207K
>    512        2      1K      1K   1.50K    1.62K    830K    830K   1.21M
>  Total    6.65M    846G    823G    823G    7.41M    941G    917G    917G
> 
> dedup = 1.11, compress = 1.03, copies = 1.00, dedup * compress / copies = 1.14
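One reading of the zdb header lines above is that "size N on disk, M in
core" is bytes per DDT entry; under that assumption (which I have not
verified against the source), the total footprint can be estimated by
multiplying out the entry counts:

```shell
# Entry counts and per-entry sizes taken from the zdb -DD output above;
# treating the "size" figures as bytes per entry (assumption).
dup_entries=780321;   dup_disk=338; dup_core=174
uniq_entries=6188123; uniq_disk=335; uniq_core=164

on_disk=$(( dup_entries * dup_disk + uniq_entries * uniq_disk ))
in_core=$(( dup_entries * dup_core + uniq_entries * uniq_core ))

echo "DDT on disk: $(( on_disk / 1024 / 1024 )) MiB"   # ~2228 MiB
echo "DDT in core: $(( in_core / 1024 / 1024 )) MiB"   # ~1097 MiB
```

If that reading is right, this pool's DDT needs on the order of a
gigabyte of ARC to stay resident, which would be consistent with the
thrashing once it gets evicted.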

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss