Excellent news.
--
https://bugs.launchpad.net/bugs/1814983
Title: zfs poor sustained read performance from ssd pool
Hi Colin, thanks so much for giving this another look. I don't see any
references to trying different zfs_arc_meta_limit* settings here!
I raised zfs_arc_meta_limit_percent from the default 75 to 95 and my
system has been running this ripgrep for four hours without falling to
pieces.
I think we can consider this resolved with tuning.
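In case it's useful to anyone following along, this is roughly how I set it (a sketch; the modprobe.d filename is just a local choice):
echo 95 | sudo tee /sys/module/zfs/parameters/zfs_arc_meta_limit_percent   # takes effect immediately
echo "options zfs zfs_arc_meta_limit_percent=95" | sudo tee -a /etc/modprobe.d/zfs.conf   # persists across reboots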
OK, it seems like we can resolve this with some tuning. I was able to
reproduce this on a single-drive SSD pool configuration with 30 clones
of the Linux source tree, grepping for various strings.
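In outline, the reproducer looks something like this (a sketch only: pool/dataset names and the search string are made up, and since the bug doesn't record whether these were zfs clones or plain copies, this assumes zfs clones of a snapshotted dataset):
# create 30 clones of a snapshotted linux source tree
for i in $(seq 1 30); do
  sudo zfs clone tank/linux@base tank/linux-clone-$i
done
# grep all of them concurrently to generate a metadata-heavy read load
for i in $(seq 1 30); do
  grep -r "mutex_lock" /tank/linux-clone-$i > /dev/null &
done
wait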
So, the ARC determines that it can't free up enough memory by releasing
unpinned buffers, so the arc_prune threads are woken to reclaim dentries
and inodes; that's the zfs_prune -> d_prune_aliases path that shows up
in the perf output on this bug.
The fix seems to be causing some issues, as reported here:
https://github.com/openzfs/zfs/pull/10331#issuecomment-636502835
...so I'm watching the upstream fixes to see how it all shakes out.
--
Thanks for the heads-up on this fix. It does not seem to have been
merged yet, so I'll wait for it to get through the regression tests,
and once it's upstream I'll backport it for you to test.
--
Hello Colin, this looks promising for my arc_prune spinlock contention
problems:
https://github.com/openzfs/zfs/pull/10331
with some background here:
https://github.com/openzfs/zfs/issues/7559
This might have a simple ~dozen-line fix! It's not yet reviewed by the
openzfs gurus, but it sure *looks* promising.
I forgot to mention, my l2arc is used on the second pool on this system:
$ zpool iostat -v
capacity operations
bandwidth
pool alloc free read write read
write
-
Colin, thanks for the link to https://github.com/openzfs/zfs/issues/9966;
unfortunately I think that's a different problem. My meta usage seems
less drastic than in the GitHub issue:
arc_prune 41859269059
arc_meta_used 43590932168
arc_meta_limit
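(For anyone wanting to check their own numbers, those counters come from the ARC kstats, e.g.:)
grep -E 'arc_prune|arc_meta' /proc/spl/kstat/zfs/arcstats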
Andreas, this system is running 18.04 LTS, 0.7.5-1ubuntu16.8,
4.15.0-91-generic.
It has 128 GB of RAM; the workload is running ripgrep on an entire
unpacked Ubuntu source archive, roughly 193 million files and 3.8 TB of
data, on a single raidz1 SSD vdev.
So I have no illusions that this workload fits in the ARC.
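For concreteness, the workload is essentially this (the pattern and mountpoint are placeholders):
rg --no-ignore 'some_pattern' /tank/ubuntu-sources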
Hi there, https://github.com/openzfs/zfs/issues/9966 contains some
advice that seems pertinent to this issue.
** Bug watch added: github.com/openzfs/zfs/issues #9966
https://github.com/openzfs/zfs/issues/9966
--
I've filed an issue with the upstream bug tracker:
https://github.com/openzfs/zfs/issues/10222
** Bug watch added: github.com/openzfs/zfs/issues #10222
https://github.com/openzfs/zfs/issues/10222
--
For what it's worth, I also have the arc_prune-eating-CPU problem. I
thought it was because my "machine" has just 3 GB of RAM, but this report
makes me think there is something wrong in that area with zfs 0.8.x,
since you have what, 64 GB of RAM?
The same 3 GB machine didn't have any such problems with 0.7.x.
Writing 8589934592 (eight gigabytes) into zfs_arc_dnode_limit seemed to
make an improvement: the arc_prune threads would periodically spike
above 20% CPU use, but would quickly drop back to 2-5%; it worked
well for a long time, perhaps even hours, before all the arc_prune
threads returned to ~5
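(For reference, a sketch of how that was set via the module parameter; the value is the 8 GiB figure above:)
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_dnode_limit
grep dnode /proc/spl/kstat/zfs/arcstats   # watch dnode usage against the new limit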
Thanks Colin,
I believe that bountysource.com URL is a scrape of
https://github.com/openzfs/zfs/issues/6223
I hadn't seen much of this information before.
Sadly my machine doesn't have enough memory to just keep turning up the
knobs -- the working set far exceeds the memory of my computer.
A re
It may be worth reading the following article as it has some tuning
tweaks that may help resolve your issue:
https://www.bountysource.com/issues/46175251-arc_prune-high-load-and-soft-lockups
** Bug watch added: github.com/openzfs/zfs/issues #7559
https://github.com/openzfs/zfs/issues/7559
** Changed in: zfs-linux (Ubuntu)
Assignee: (unassigned) => Colin Ian King (colin-king)
** Changed in: zfs-linux (Ubuntu)
Importance: Undecided => High
** Changed in: zfs-linux (Ubuntu)
Status: New => Confirmed
--
echo 1 > /proc/sys/vm/drop_caches # still slow
echo 2 > /proc/sys/vm/drop_caches # goes faster
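(For reference: writing 1 drops only the page cache, 2 drops reclaimable slab objects such as dentries and inodes, and 3 drops both; that 2 is the one that helps points straight at dentry/inode pruning.)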
Thanks
--
This is also repeatable on my pool of slower spinning metal disks, srv:
top - 22:59:43 up 71 days, 22:35, 3 users, load average: 18.88, 17.98, 10.68
Tasks: 804 total, 20 running, 605 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 58.0 sy, 0.0 ni, 41.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
Oh yes, and perf top output after dropping caches:
Samples: 11M of event 'cycles:ppp', Event count (approx.): 648517108833
Overhead Shared Object Symbol
19.77% [kernel] [k] zfs_prune
18.96% [kernel] [k] _raw_spin_lock
Debugging by voodoo dolls: I made a wild guess that stumbling over
cached objects was getting in the way. I dropped caches with:
echo 3 > /proc/sys/vm/drop_caches
and immediately the read IOPS to the pool skyrocketed, and the ripgrep
output sure looks like it's going WAY faster.
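An easy way to watch that is a one-second zpool iostat loop (the pool name here is assumed):
zpool iostat tank 1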
And top(1) output after dropping caches:
top - 18:42:09 up 70 days, 18:18, 3 users, load average: 17.33, 22.19, 27.87
Tasks: 826 total, 2 running, 644 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.2 us, 22.9 sy, 0.0 ni, 63.3 id, 11.5 wa, 0.0 hi, 1.1 si, 0.0 st
GiB Mem : 125.805 total, 3
sudo perf top output:
Samples: 1M of event 'cycles:ppp', Event count (approx.): 476944550835
Overhead Shared Object Symbol
24.93% [kernel] [k] _raw_spin_lock
21.52% [kernel] [k] zfs_prune
17.29% [kernel] [k] d_prune_aliases