Reviewed by: Dan Kimmel <>
Reviewed by: Prashanth Sreenivasa <>
Reviewed by: Paul Dagnelie <>

With compressed ARC (bug #6950), we spend up to 25% of CPU decompressing
indirect blocks under a workload of random cached reads. To reduce this
decompression cost, we would like to increase the size of the dbuf cache so
that more indirect blocks can be stored uncompressed.

If we are caching entire large files with recordsize=8K, the indirect blocks
use 1/64th as much memory as the data blocks (assuming they have the same
compression ratio). We suggest making the dbuf cache 1/32nd of all memory, so
that in this scenario we can keep all the indirect blocks decompressed in the
dbuf cache. (We want more than the 1/64th that the indirect blocks themselves
would use because the dbuf cache needs to hold other things as well.)
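The 1/64th and 1/32nd figures above can be checked with quick arithmetic; a
sketch follows (the 128-byte block-pointer size is the standard ZFS blkptr_t
size, and the variable names are illustrative, not from the patch):

```python
# Back-of-envelope check of the ratios in the text.
BLKPTR_SIZE = 128      # bytes per ZFS block pointer in an indirect block
RECORDSIZE = 8 * 1024  # 8K data blocks, as in the example workload

# Each data block is referenced by one block pointer, so level-1
# indirect metadata takes 128/8192 of the data size.
indirect_ratio = BLKPTR_SIZE / RECORDSIZE
assert indirect_ratio == 1 / 64

# Proposed dbuf cache: 1/32nd of all memory -- twice the indirect-block
# footprint, leaving headroom for other cached dbufs.
dbuf_cache_fraction = 1 / 32

# Worst-case ARC shrinkage if the dbuf cache grows at the ARC's
# expense: about 3% of memory.
arc_reduction_pct = dbuf_cache_fraction * 100
print(f"indirect/data ratio: 1/{int(1 / indirect_ratio)}")
print(f"ARC reduction: ~{arc_reduction_pct:.1f}%")
```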

In real-world workloads, this won't help as dramatically as in the example
above, but we think it's still worthwhile because the risk of decreasing
performance is low. The potential negative impact is a slight reduction in
the size of the ARC (by ~3%).

Upstream bug: DLPX-46942
-- Commit Summary --

  * 9188 increase size of dbuf cache to reduce indirect block decompression

-- File Changes --

    M usr/src/uts/common/fs/zfs/dbuf.c (18)
