Reviewed by: Dan Kimmel <[email protected]>
Reviewed by: Prashanth Sreenivasa <[email protected]>
Reviewed by: Paul Dagnelie <[email protected]>
With compressed ARC (bug #6950) we use up to 25% of our CPU to decompress indirect blocks, under a workload of random cached reads. To reduce this decompression cost, we would like to increase the size of the dbuf cache so that more indirect blocks can be stored uncompressed.

If we are caching entire large files with recordsize=8K, the indirect blocks use 1/64th as much memory as the data blocks (assuming they have the same compression ratio). We suggest making the dbuf cache 1/32nd of all memory, so that in this scenario we should be able to keep all the indirect blocks decompressed in the dbuf cache. (We want it to be more than the 1/64th that the indirect blocks would use because we need to cache other things in the dbuf cache as well.)

In real-world workloads this won't help as dramatically as the example above, but we think it's still worth it because the risk of decreasing performance is low. The main potential negative impact is that we slightly reduce the size of the ARC (by ~3%).

Upstream bug: DLPX-46942

You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/564

-- Commit Summary --

  * 9188 increase size of dbuf cache to reduce indirect block decompression

-- File Changes --

    M usr/src/uts/common/fs/zfs/dbuf.c (18)

-- Patch Links --

  https://github.com/openzfs/openzfs/pull/564.patch
  https://github.com/openzfs/openzfs/pull/564.diff
