Re: [lkp] [mm/vmstat] 6cdb18ad98: -8.5% will-it-scale.per_thread_ops

2016-01-20 Thread Huang, Ying
Heiko Carstens writes:

> On Wed, Jan 06, 2016 at 11:20:55AM +0800, kernel test robot wrote:
>> FYI, we noticed the below changes on
>> 
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>> commit 6cdb18ad98a49f7e9b95d538a0614cde827404b8 ("mm/vmstat: fix overflow in 
>> mod_zone_page_state()")
>> 
>> 
>> =========================================================================================
>> compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
>>   gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/ivb42/pread1/will-it-scale
>> 
>> commit: 
>>   cc28d6d80f6ab494b10f0e2ec949eacd610f66e3
>>   6cdb18ad98a49f7e9b95d538a0614cde827404b8
>> 
>> cc28d6d80f6ab494 6cdb18ad98a49f7e9b95d538a0 
>> ---------------- -------------------------- 
>>          %stddev     %change         %stddev
>>              \          |                \  
>>    2733943 ±  0%      -8.5%    2502129 ±  0%  will-it-scale.per_thread_ops
>>       3410 ±  0%      -2.0%       3343 ±  0%  will-it-scale.time.system_time
>>     340.08 ±  0%     +19.7%     406.99 ±  0%  will-it-scale.time.user_time
>>   69882822 ±  2%     -24.3%   52926191 ±  5%  cpuidle.C1-IVT.time
>>     340.08 ±  0%     +19.7%     406.99 ±  0%  time.user_time
>>     491.25 ±  6%     -17.7%     404.25 ±  7%  numa-vmstat.node0.nr_alloc_batch
>>       2799 ± 20%     -36.6%       1776 ±  0%  numa-vmstat.node0.nr_mapped
>>     630.00 ±140%    +244.4%       2169 ±  1%  numa-vmstat.node1.nr_inactive_anon
>
> Hmm... this is odd. I did review all callers of mod_zone_page_state() and
> couldn't find anything obvious that would go wrong after the int -> long
> change.
>
> I also tried the "pread1_threads" test case from
> https://github.com/antonblanchard/will-it-scale.git
>
> However the results seem to vary a lot after a reboot(!), at least on s390.
>
> So I'm not sure if this is really a regression.

Most of the regression is recovered in v4.4.  But because the change across
commits is "V"-shaped (the throughput drops after this commit, then mostly
recovers by v4.4), it is hard to bisect.

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/thread/24/debian-x86_64-2015-02-07.cgz/ivb42/pread1/will-it-scale

commit: 
  cc28d6d80f6ab494b10f0e2ec949eacd610f66e3
  6cdb18ad98a49f7e9b95d538a0614cde827404b8
  v4.4

cc28d6d80f6ab494 6cdb18ad98a49f7e9b95d538a0                       v4.4 
---------------- -------------------------- -------------------------- 
         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \  
   3083436 ±  0%      -9.6%    2788374 ±  0%      -3.7%    2970130 ±  0%  will-it-scale.per_thread_ops
      6447 ±  0%      -2.2%       6308 ±  0%      -0.3%       6425 ±  0%  will-it-scale.time.system_time
    776.90 ±  0%     +17.9%     915.71 ±  0%      +2.9%     799.12 ±  0%  will-it-scale.time.user_time
    316177 ±  4%      -4.6%     301616 ±  3%     -10.3%     283563 ±  3%  softirqs.RCU
    776.90 ±  0%     +17.9%     915.71 ±  0%      +2.9%     799.12 ±  0%  time.user_time
    777.33 ±  7%     +20.8%     938.67 ±  7%      +7.5%     836.00 ±  8%  slabinfo.blkdev_requests.active_objs
    777.33 ±  7%     +20.8%     938.67 ±  7%      +7.5%     836.00 ±  8%  slabinfo.blkdev_requests.num_objs
  74313962 ± 44%     -16.5%   62053062 ± 41%     -49.9%   37246967 ±  8%  cpuidle.C1-IVT.time
  43381614 ± 79%     +24.4%   53966568 ±111%    +123.9%   97135791 ± 33%  cpuidle.C1E-IVT.time
     97.67 ± 36%     +95.2%     190.67 ± 63%    +122.5%     217.33 ± 41%  cpuidle.C3-IVT.usage
   3679437 ± 69%    -100.0%       0.00 ± -1%    -100.0%       0.00 ± -1%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
   5177475 ± 82%    -100.0%       0.00 ± -1%    -100.0%       0.00 ± -1%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
  11726393 ±112%    -100.0%       0.00 ± -1%    -100.0%       0.00 ± -1%  latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    178.07 ±  0%      -1.3%     175.79 ±  0%      -0.8%     176.62 ±  0%  turbostat.CorWatt
      0.20 ±  2%     -16.9%       0.16 ± 18%     -11.9%       0.17 ± 17%  turbostat.Pkg%pc6
    207.38 ±  0%      -1.1%     205.13 ±  0%      -0.7%     205.99 ±  0%  turbostat.PkgWatt
      6889 ± 33%     -49.2%       3497 ± 86%     -19.4%       5552 ± 27%  numa-vmstat.node0.nr_active_anon
    483.33 ± 29%     -32.3%     327.00 ± 48%      +0.1%


Re: [lkp] [mm/vmstat] 6cdb18ad98: -8.5% will-it-scale.per_thread_ops

2016-01-07 Thread Heiko Carstens
On Wed, Jan 06, 2016 at 11:20:55AM +0800, kernel test robot wrote:
> FYI, we noticed the below changes on
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> commit 6cdb18ad98a49f7e9b95d538a0614cde827404b8 ("mm/vmstat: fix overflow in 
> mod_zone_page_state()")
> 
> 
> =========================================================================================
> compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
>   gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/ivb42/pread1/will-it-scale
> 
> commit: 
>   cc28d6d80f6ab494b10f0e2ec949eacd610f66e3
>   6cdb18ad98a49f7e9b95d538a0614cde827404b8
> 
> cc28d6d80f6ab494 6cdb18ad98a49f7e9b95d538a0 
> ---------------- -------------------------- 
>          %stddev     %change         %stddev
>              \          |                \  
>    2733943 ±  0%      -8.5%    2502129 ±  0%  will-it-scale.per_thread_ops
>       3410 ±  0%      -2.0%       3343 ±  0%  will-it-scale.time.system_time
>     340.08 ±  0%     +19.7%     406.99 ±  0%  will-it-scale.time.user_time
>   69882822 ±  2%     -24.3%   52926191 ±  5%  cpuidle.C1-IVT.time
>     340.08 ±  0%     +19.7%     406.99 ±  0%  time.user_time
>     491.25 ±  6%     -17.7%     404.25 ±  7%  numa-vmstat.node0.nr_alloc_batch
>       2799 ± 20%     -36.6%       1776 ±  0%  numa-vmstat.node0.nr_mapped
>     630.00 ±140%    +244.4%       2169 ±  1%  numa-vmstat.node1.nr_inactive_anon

Hmm... this is odd. I did review all callers of mod_zone_page_state() and
couldn't find anything obvious that would go wrong after the int -> long
change.

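For reference, the commit under discussion widens the delta argument of
mod_zone_page_state() from int to long.  A minimal standalone sketch of the
overflow class that widening fixes, assuming an LP64 target like the x86_64
box in the report; the helper names are illustrative, not the kernel's
prototypes:

  #include <stdio.h>

  static long nr_pages;   /* stand-in for a zone page counter */

  /* old-style helper: the delta is silently narrowed to 32 bits */
  static void mod_state_int(int delta)   { nr_pages += delta; }

  /* new-style helper: the delta stays 64 bits end to end */
  static void mod_state_long(long delta) { nr_pages += delta; }

  int main(void)
  {
          long delta = 3L << 30;  /* ~3.2e9: fits in long, exceeds INT_MAX */

          nr_pages = 0;
          mod_state_int(delta);   /* implicit narrowing wraps on common ABIs */
          printf("int delta:  %ld\n", nr_pages);

          nr_pages = 0;
          mod_state_long(delta);  /* widened parameter keeps the full value */
          printf("long delta: %ld\n", nr_pages);
          return 0;
  }
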
I also tried the "pread1_threads" test case from
https://github.com/antonblanchard/will-it-scale.git

However the results seem to vary a lot after a reboot(!), at least on s390.

So I'm not sure if this is really a regression.
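
For anyone who wants to double-check on their own hardware, a minimal
reproduction sketch; the make target and binary location are assumptions
based on the test name above, so check the repository's README first:

  git clone https://github.com/antonblanchard/will-it-scale.git
  cd will-it-scale
  make                  # assumed to build the per-test binaries
  ./pread1_threads      # threaded variant of the pread1 test case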



[lkp] [mm/vmstat] 6cdb18ad98: -8.5% will-it-scale.per_thread_ops

2016-01-05 Thread kernel test robot
FYI, we noticed the below changes on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 6cdb18ad98a49f7e9b95d538a0614cde827404b8 ("mm/vmstat: fix overflow in 
mod_zone_page_state()")


=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/ivb42/pread1/will-it-scale

commit: 
  cc28d6d80f6ab494b10f0e2ec949eacd610f66e3
  6cdb18ad98a49f7e9b95d538a0614cde827404b8

cc28d6d80f6ab494 6cdb18ad98a49f7e9b95d538a0 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   2733943 ±  0%      -8.5%    2502129 ±  0%  will-it-scale.per_thread_ops
      3410 ±  0%      -2.0%       3343 ±  0%  will-it-scale.time.system_time
    340.08 ±  0%     +19.7%     406.99 ±  0%  will-it-scale.time.user_time
  69882822 ±  2%     -24.3%   52926191 ±  5%  cpuidle.C1-IVT.time
    340.08 ±  0%     +19.7%     406.99 ±  0%  time.user_time
    491.25 ±  6%     -17.7%     404.25 ±  7%  numa-vmstat.node0.nr_alloc_batch
      2799 ± 20%     -36.6%       1776 ±  0%  numa-vmstat.node0.nr_mapped
    630.00 ±140%    +244.4%       2169 ±  1%  numa-vmstat.node1.nr_inactive_anon
      6440 ± 11%     -15.5%       5440 ± 16%  numa-vmstat.node1.nr_slab_reclaimable
     11204 ± 20%     -36.6%       7106 ±  0%  numa-meminfo.node0.Mapped
      1017 ±173%    +450.3%       5598 ± 15%  numa-meminfo.node1.AnonHugePages
      2521 ±140%    +244.1%       8678 ±  1%  numa-meminfo.node1.Inactive(anon)
     25762 ± 11%     -15.5%      21764 ± 16%  numa-meminfo.node1.SReclaimable
     70103 ±  9%      -9.8%      63218 ±  9%  numa-meminfo.node1.Slab
      2.29 ±  3%     +32.8%       3.04 ±  4%  perf-profile.cycles-pp.atime_needs_update.touch_atime.shmem_file_read_iter.__vfs_read.vfs_read
      1.10 ±  3%     -27.4%       0.80 ±  5%  perf-profile.cycles-pp.current_fs_time.atime_needs_update.touch_atime.shmem_file_read_iter.__vfs_read
      2.33 ±  2%     -13.0%       2.02 ±  3%  perf-profile.cycles-pp.fput.entry_SYSCALL_64_fastpath
      0.89 ±  2%     +29.6%       1.15 ±  7%  perf-profile.cycles-pp.fsnotify.vfs_read.sys_pread64.entry_SYSCALL_64_fastpath
      2.85 ±  2%     +45.4%       4.14 ±  5%  perf-profile.cycles-pp.touch_atime.shmem_file_read_iter.__vfs_read.vfs_read.sys_pread64
     63939 ±  0%     +17.9%      75370 ± 15%  sched_debug.cfs_rq:/.exec_clock.25
     72.50 ± 73%     -63.1%      26.75 ± 19%  sched_debug.cfs_rq:/.load_avg.1
     34.00 ± 62%     -61.8%      13.00 ± 12%  sched_debug.cfs_rq:/.load_avg.14
     18.00 ± 11%     -11.1%      16.00 ± 10%  sched_debug.cfs_rq:/.load_avg.20
     14.75 ± 41%    +122.0%      32.75 ± 26%  sched_debug.cfs_rq:/.load_avg.25
    278.88 ± 11%     +18.8%     331.25 ±  7%  sched_debug.cfs_rq:/.load_avg.max
     51.89 ± 11%     +13.6%      58.97 ±  4%  sched_debug.cfs_rq:/.load_avg.stddev
      7.25 ±  5%    +255.2%      25.75 ± 53%  sched_debug.cfs_rq:/.runnable_load_avg.25
     28.50 ±  1%     +55.3%      44.25 ± 46%  sched_debug.cfs_rq:/.runnable_load_avg.7
     72.50 ± 73%     -63.1%      26.75 ± 19%  sched_debug.cfs_rq:/.tg_load_avg_contrib.1
     34.00 ± 62%     -61.8%      13.00 ± 12%  sched_debug.cfs_rq:/.tg_load_avg_contrib.14
     18.00 ± 11%     -11.1%      16.00 ± 10%  sched_debug.cfs_rq:/.tg_load_avg_contrib.20
     14.75 ± 41%    +122.0%      32.75 ± 25%  sched_debug.cfs_rq:/.tg_load_avg_contrib.25
    279.29 ± 11%     +19.1%     332.67 ±  7%  sched_debug.cfs_rq:/.tg_load_avg_contrib.max
     52.01 ± 11%     +13.8%      59.18 ±  4%  sched_debug.cfs_rq:/.tg_load_avg_contrib.stddev
    359.50 ±  6%     +41.5%     508.75 ± 22%  sched_debug.cfs_rq:/.util_avg.25
    206.25 ± 16%     -13.1%     179.25 ± 11%  sched_debug.cfs_rq:/.util_avg.40
    688.75 ±  1%     +18.5%     816.00 ±  1%  sched_debug.cfs_rq:/.util_avg.7
    953467 ±  1%     -17.9%     782518 ± 10%  sched_debug.cpu.avg_idle.5
      9177 ± 43%     +73.9%      15957 ± 29%  sched_debug.cpu.nr_switches.13
      7365 ± 19%     -35.4%       4755 ± 11%  sched_debug.cpu.nr_switches.20
     12203 ± 28%     -62.2%       4608 ±  9%  sched_debug.cpu.nr_switches.22
      1868 ± 49%     -51.1%     913.50 ± 27%  sched_debug.cpu.nr_switches.27
      2546 ± 56%     -70.0%     763.00 ± 18%  sched_debug.cpu.nr_switches.28
      3003 ± 78%     -77.9%     663.00 ± 18%  sched_debug.cpu.nr_switches.33
      1820 ± 19%     +68.0%       3058 ± 33%  sched_debug.cpu.nr_switches.8
     -4.00 ±-35%    -156.2%       2.25 ± 85%  sched_debug.cpu.nr_uninterruptible.11
      4.00 ±133%    -187.5%      -3.50 ±-24%  sched_debug.cpu.nr_uninterruptible.17
      1.75 ± 74%    -214.3%      -2.00 ±-127%  sched_debug.cpu.nr_uninterruptible.25
      0.00 ±  2%      +Inf%       4.00 ± 39%  sched_debug.cpu.nr_uninterruptible.26
      2.50 ± 44%    -110.0%      -0.25 ±-591%
