[lkp-robot] [drm] e1f8a89c4f: pft.faults_per_sec_per_cpu -77.9% regression

2017-06-08 Thread kernel test robot

Greetings,

FYI, we noticed a -77.9% regression of pft.faults_per_sec_per_cpu due to commit:


commit: e1f8a89c4f33f5b32cfb047f07fdf6d38cda854a ("drm: introduce sync objects (v4)")
git://people.freedesktop.org/~airlied/linux.git drm-syncobj-sem

in testcase: pft
on test machine: qemu-system-x86_64 -enable-kvm -cpu Penryn -smp 4 -m 2G
with following parameters:

iterations: 20x

test-description: Pft is the page fault test micro benchmark.
test-url: https://github.com/gormanm/pft



Details are as below:


To reproduce:

git clone https://github.com/01org/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k job-script  # job-script is attached in this email

testcase/path_params/tbox_group/run: pft/20x/vm-vp-2G

                 v4.12-rc2            e1f8a89c4f33f5b32cfb047f07
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
    878620 ± 15%     -77.9%     194209 ± 12%  pft.faults_per_sec_per_cpu
      7.16 ± 23%    +257.2%      25.58 ± 12%  pft.time.elapsed_time
      7.16 ± 23%    +257.2%      25.58 ± 12%  pft.time.elapsed_time.max
      1223 ± 10%    +199.9%       3667 ± 15%  pft.time.involuntary_context_switches
     61828 ±  2%  +10106.0%    6310202 ±  0%  pft.time.minor_page_faults
    145.40 ±  3%     +20.6%     175.33 ±  5%  pft.time.percent_of_cpu_this_job_got
      9.37 ± 22%    +345.2%      41.72 ± 15%  pft.time.system_time
      5.04 ± 11%     -45.6%       2.74 ±  9%  mpstat.cpu.usr%
    131734 ± 12%     +20.3%     158487 ±  5%  softirqs.TIMER
      1335 ± 11%    +105.1%       2738 ± 19%  vmstat.system.cs
      5390 ±  4%     -27.6%       3904 ± 38%  slabinfo.kmalloc-96.active_objs
      5420 ±  3%     -21.5%       4252 ± 27%  slabinfo.kmalloc-96.num_objs
     68.30 ±209%  +13218.2%       9096 ± 84%  latency_stats.max.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
    588552 ± 32%     -74.2%     151873 ± 37%  latency_stats.sum.ep_poll.SyS_epoll_wait.do_syscall_64.return_from_SYSCALL_64
     74.20 ±203%  +12326.8%       9220 ± 82%  latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
    160550 ± 14%    -100.0%       0.00 ± -1%  meminfo.AnonHugePages
    204792 ±  0%    -100.0%       0.00 ± -1%  meminfo.CmaFree
    204800 ±  0%    -100.0%       0.00 ± -1%  meminfo.CmaTotal
      3041 ±  0%     +13.4%       3449 ±  0%  meminfo.KernelStack
     51198 ±  0%    -100.0%       0.00 ± -1%  proc-vmstat.nr_free_cma
      3033 ±  1%     +13.6%       3446 ±  0%  proc-vmstat.nr_kernel_stack
     73394 ±  9%   +8258.9%    6134982 ±  2%  proc-vmstat.numa_hit
     73394 ±  9%   +8258.9%    6134982 ±  2%  proc-vmstat.numa_local
     67006 ±  9%   +9043.7%    6126912 ±  2%  proc-vmstat.pgfault
     11161 ±  7%    -100.0%       0.00 ± -1%  proc-vmstat.thp_deferred_split_page
     11217 ±  7%    -100.0%       0.00 ± -1%  proc-vmstat.thp_fault_alloc
      7.16 ± 23%    +257.2%      25.58 ± 12%  time.elapsed_time
      7.16 ± 23%    +257.2%      25.58 ± 12%  time.elapsed_time.max
      1223 ± 10%    +199.9%       3667 ± 15%  time.involuntary_context_switches
     61828 ±  2%  +10106.0%    6310202 ±  0%  time.minor_page_faults




                         pft.faults_per_sec_per_cpu

  [ASCII plot omitted: per-run faults_per_sec_per_cpu, dropping from roughly
   9e+05 on v4.12-rc2 (*) to roughly 2e+05 with the patch applied (O)]
   
