Thanks Colin,

I believe that bountysource.com URL is a scrape of
https://github.com/openzfs/zfs/issues/6223

I hadn't seen much of this information before.

Sadly my machine doesn't have enough memory to just keep turning the
knobs up; the working set far exceeds its RAM.

For a recent archive-wide search, I tried the following:

echo 126491074560 >> /sys/module/zfs/parameters/zfs_arc_max          # eight gigs less than RAM
echo 117901139968 >> /sys/module/zfs/parameters/zfs_arc_dnode_limit  # eight gigs less than zfs_arc_max
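
These sysfs writes won't survive a reboot, so if they turn out to help
I'd expect to pin them in a modprobe config instead. As far as I know
the usual convention on Ubuntu is a file like this, with the same
numbers as above:

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=126491074560
options zfs zfs_arc_dnode_limit=117901139968

(An initramfs rebuild may also be needed if the zfs module gets loaded
from the initramfs.)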

I picked these numbers entirely as a guess. zfs_arc_max had previously
been 0, which means the default of half of RAM (64 gigabytes here).
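
Rather than trusting the sysfs writes, the effective ceilings can be
read back from arcstats, roughly like this, where c_max should now
match zfs_arc_max (field names as I understand them for the module
version here):

cat /sys/module/zfs/parameters/zfs_arc_max
awk '$1 ~ /^(c_max|arc_meta_limit|arc_meta_used|dnode_size)$/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats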

This workload seemed to run fine for longer than usual: arc_prune CPU
usage was low, and zpool iostat reported good read IOPS (not an ideal
indicator of actual progress, but a reasonable proxy). Eventually,
though, the arc_prune tasks started consuming most of the CPUs and
read IOPS fell to the dozens.
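
Next time I'll try to capture that transition more directly rather
than inferring it from CPU time. Something along these lines is what I
have in mind, where "tank" stands in for the real pool name and I'm
assuming the arc_prune and dnode_size counters in arcstats mean what
their names suggest:

while sleep 60; do
    date
    awk '$1 ~ /^(arc_prune|dnode_size|arc_meta_used)$/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
done > arcstats.log &
zpool iostat tank 60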

I haven't tried a run with a deliberately low arc dnode limit yet,
but I'm curious...
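
If I do get to it, the sketch would be roughly the following, assuming
I'm reading the parameter descriptions right that a zero
zfs_arc_dnode_limit hands control to zfs_arc_dnode_limit_percent (a
percentage of arc_meta_limit, 10 by default):

echo 0 >> /sys/module/zfs/parameters/zfs_arc_dnode_limit
echo 3 >> /sys/module/zfs/parameters/zfs_arc_dnode_limit_percent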

Thanks

** Bug watch added: github.com/openzfs/zfs/issues #6223
   https://github.com/openzfs/zfs/issues/6223

-- 
https://bugs.launchpad.net/bugs/1814983

Title:
  zfs poor sustained read performance from ssd pool
