I'd like to revive this PR - we have two solutions to this problem and need to 
make progress toward getting one or both of them upstream.

@bcantrill I agree that it's silly for `kmem_cache_reap_now()` to do 
`taskq_dispatch(); taskq_wait()`, and the solution in 
https://github.com/joyent/illumos-joyent/commit/daa3911f02365820bf2df2a1cdf96602eda66912
is nice and simple.  The changes in this PR achieve the same thing and more - 
we're decoupling two independent tasks:
1. Keep the ARC size (the amount of cached data) below the target.  Note that 
this doesn't directly impact free memory, since `kmem_cache_free()` doesn't 
free any pages.
2. Keep enough free memory, by reaping and optionally reducing the target ARC 
size.

The SmartOS change makes it so that (1) doesn't have to wait for (2), which is 
the key thing for fixing the performance problem.  This change accomplishes 
that, but it also separates the code for these two tasks, which improves the 
design, ensures that (2) doesn't have to wait for (1) either, and leaves the 
timing of the two tasks uncoupled.

What are your plans for upstreaming 
https://github.com/joyent/illumos-joyent/commit/daa3911f02365820bf2df2a1cdf96602eda66912
to illumos?  If you feel that the current `kmem_cache_reap_now()` is wrong and 
needs to be replaced by `kmem_cache_reap_soon()`, that's OK with me.  I would 
want to additionally make the changes in this PR on top of that.

Another option would be to make the changes in this PR and also change the 
`kmem_cache_reap_now()` interface to not use a taskq, but rather do the reaping 
in the calling thread.  (There are some assertions implying that the reaping 
should happen only from the taskq, but I couldn't find any comment explaining 
*why* that's required.)

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/486#issuecomment-358831728
------------------------------------------
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tf18bbbd46b0af4a7-M6d09b4377c7f2496b3ae3a1a