On Sat, 16 May 2020 14:47:40 +0800 Feng Tang <[email protected]> wrote:

> When checking a performance change for will-it-scale scalability
> mmap test [1], we found very high lock contention for spinlock of
> percpu counter 'vm_committed_as':
> 
>     94.14%     0.35%  [kernel.kallsyms]         [k] _raw_spin_lock_irqsave
>     48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
>     45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;
> 
> This heavy lock contention is not always necessary: 'vm_committed_as'
> only needs to be precise when the strict OVERCOMMIT_NEVER policy is
> set, which requires a rather small batch number for the percpu
> counter.
> 
> So keep the 'batch' number unchanged for the strict OVERCOMMIT_NEVER
> policy, and lift it to 64X for the OVERCOMMIT_ALWAYS and
> OVERCOMMIT_GUESS policies. Also add a sysctl handler to readjust it
> whenever the policy is changed.
> 
> Benchmarking with the same testcase in [1] shows a 53% improvement on
> an 8C/16T desktop and a 2097% (20X) improvement on a 4S/72C/144T
> server. We tested with the 0day test platforms (server, desktop and
> laptop), and 80%+ of the platforms show improvements with that test.
> Whether a platform improves depends on whether the test's mmap size
> exceeds the computed batch number.
> 
> If the lift is only 16X, 1/3 of the platforms show improvements,
> though the change should help mmap/munmap usage generally, as Michal
> Hocko mentioned:
> "
> I believe that there are non-synthetic workloads which would benefit
> from a larger batch. E.g. large in memory databases which do large
> mmaps during startups from multiple threads.
> "
> 
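
For reference, the policy-aware batch sizing described above boils down
to something like the following sketch.  Only mm_compute_batch(),
vm_committed_as_batch, sysctl_overcommit_memory and the 64X lift come
from the patch; the base heuristic is simplified here (the real code
also scales with total memory):

	void mm_compute_batch(void)
	{
		/* base: at least 32, grows with the CPU count */
		s32 batch = max_t(s32, num_present_cpus() * 2, 32);

		/*
		 * OVERCOMMIT_NEVER needs the counter to stay precise,
		 * so keep the batch small; ALWAYS/GUESS tolerate slack,
		 * so lift it 64X to cut the spinlock contention.
		 */
		if (sysctl_overcommit_memory != OVERCOMMIT_NEVER)
			batch *= 64;

		vm_committed_as_batch = batch;
	}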

This needed some adjustments to overcommit_policy_handler() after
linux-next's 32927393dc1c ("sysctl: pass kernel pointers to
->proc_handler").  Relevant parts are below.

--- a/include/linux/mm.h~mm-adjust-vm_committed_as_batch-according-to-vm-overcommit-policy
+++ a/include/linux/mm.h
@@ -205,6 +205,8 @@ int overcommit_ratio_handler(struct ctl_
                loff_t *);
 int overcommit_kbytes_handler(struct ctl_table *, int, void *, size_t *,
                loff_t *);
+int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
+               loff_t *);
 
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
 

--- a/mm/util.c~mm-adjust-vm_committed_as_batch-according-to-vm-overcommit-policy
+++ a/mm/util.c
@@ -746,6 +746,18 @@ int overcommit_ratio_handler(struct ctl_
        return ret;
 }
 
+int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
+               size_t *lenp, loff_t *ppos)
+{
+       int ret;
+
+       ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+       if (ret == 0 && write)
+               mm_compute_batch();
+
+       return ret;
+}
+
 int overcommit_kbytes_handler(struct ctl_table *table, int write, void *buffer,
                size_t *lenp, loff_t *ppos)
 {
_
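
For completeness, the new handler gets wired up through the
vm.overcommit_memory entry in kernel/sysctl.c.  A sketch of that hookup
(not part of the hunks above; the extra1/extra2 bounds are my
assumption, matching the valid 0..2 policy range):

	{
		.procname	= "overcommit_memory",
		.data		= &sysctl_overcommit_memory,
		.maxlen		= sizeof(sysctl_overcommit_memory),
		.mode		= 0644,
		.proc_handler	= overcommit_policy_handler,
		.extra1		= SYSCTL_ZERO,
		.extra2		= &two,
	},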
