[lkp] [fsnotify] 8f2f3eb59d: -4.0% will-it-scale.per_thread_ops

Date: 2015-08-22
From: kernel test robot
FYI, we noticed the below changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 8f2f3eb59dff4ec538de55f2e0592fec85966aab ("fsnotify: fix oops in fsnotify_clear_marks_by_group_flags()")


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  lkp-sbx04/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/read1
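
For context, will-it-scale's read1 testcase runs one thread per CPU, each
repeatedly read()ing the same small temporary file, and reports completed
reads as per_thread_ops. A minimal userspace sketch of that hot loop follows;
this is illustrative only (the real source lives in the will-it-scale
repository), and the file path and iteration count are made up:

    /*
     * Illustrative sketch of will-it-scale's read1 per-thread loop.
     * Not the actual benchmark source; path and count are arbitrary.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[1];
            unsigned long ops, iters = 10 * 1000 * 1000;
            int fd = open("/tmp/read1.tmp", O_CREAT | O_RDWR, 0600);

            if (fd < 0 || write(fd, "x", 1) != 1) {
                    perror("setup");
                    return 1;
            }
            for (ops = 0; ops < iters; ops++) {
                    /* each iteration enters the kernel read path */
                    if (pread(fd, buf, 1, 0) != 1) {
                            perror("pread");
                            return 1;
                    }
            }
            printf("%lu reads completed\n", ops);
            return 0;
    }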

commit: 
  447f6a95a9c80da7faaec3e66e656eab8f262640
  8f2f3eb59dff4ec538de55f2e0592fec85966aab

447f6a95a9c80da7 8f2f3eb59dff4ec538de55f2e0
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
   1844687 ±  0%      -4.0%    1770899 ±  0%  will-it-scale.per_thread_ops
    283.69 ±  0%      +9.5%     310.64 ±  0%  will-it-scale.time.user_time
      4576 ±  3%      -7.3%       4242 ±  6%  will-it-scale.time.voluntary_context_switches
      7211 ± 10%     +54.0%      11101 ± 18%  cpuidle.C1E-SNB.usage
     10636 ± 36%     +69.3%      18003 ± 36%  numa-meminfo.node1.Shmem
      1.07 ±  4%     -13.1%       0.93 ±  9%  perf-profile.cpu-cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_read.sys_read
      4576 ±  3%      -7.3%       4242 ±  6%  time.voluntary_context_switches
    526.75 ±104%     -94.2%      30.50 ± 98%  numa-numastat.node1.other_node
      1540 ± 35%     -74.2%     398.00 ± 90%  numa-numastat.node2.other_node
     32344 ±  5%      +7.4%      34722 ±  4%  numa-vmstat.node0.numa_other
      2658 ± 36%     +69.3%       4500 ± 36%  numa-vmstat.node1.nr_shmem
    935792 ±136%   +4247.3%   40682138 ±141%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    935792 ±136%   +4247.3%   40682138 ±141%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    935792 ±136%   +4247.3%   40682138 ±141%  latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
     12893 ±  2%      -9.1%      11716 ±  1%  slabinfo.kmalloc-192.active_objs
      1653 ±  9%     -10.3%       1483 ±  5%  slabinfo.mnt_cache.active_objs
      1653 ±  9%     -10.3%       1483 ±  5%  slabinfo.mnt_cache.num_objs
      1.75 ± 47%     -81.0%       0.33 ±141%  sched_debug.cfs_rq[10]:/.nr_spread_over
   -343206 ±-27%     -73.2%     -91995 ±-170% sched_debug.cfs_rq[14]:/.spread0
    533.25 ± 82%     -81.5%      98.75 ± 42%  sched_debug.cfs_rq[18]:/.blocked_load_avg
    541.75 ± 82%     -81.3%     101.25 ± 41%  sched_debug.cfs_rq[18]:/.tg_load_contrib
  -1217705 ± -5%     -30.2%    -850080 ±-15%  sched_debug.cfs_rq[26]:/.spread0
     89722 ±  9%      +9.8%      98495 ± 10%  sched_debug.cfs_rq[32]:/.exec_clock
    101180 ±132%    +180.8%     284154 ± 30%  sched_debug.cfs_rq[35]:/.spread0
     37332 ±473%    +725.2%     308082 ± 59%  sched_debug.cfs_rq[38]:/.spread0
     32054 ±502%    +981.6%     346689 ± 39%  sched_debug.cfs_rq[39]:/.spread0
      1.00 ±100%    +100.0%       2.00 ± 50%  sched_debug.cfs_rq[42]:/.nr_spread_over
   -125980 ±-218%   -307.1%     260875 ± 46%  sched_debug.cfs_rq[42]:/.spread0
   -111501 ±-102%   -288.7%     210354 ± 94%  sched_debug.cfs_rq[45]:/.spread0
   -173363 ±-34%    -221.0%     209775 ± 94%  sched_debug.cfs_rq[47]:/.spread0
   -302090 ±-43%    -121.8%      65953 ±322%  sched_debug.cfs_rq[4]:/.spread0
   -490175 ±-18%     -41.1%    -288722 ±-31%  sched_debug.cfs_rq[50]:/.spread0
   -594948 ±-10%     -59.7%    -239840 ±-33%  sched_debug.cfs_rq[51]:/.spread0
      1.00 ±100%   +6050.0%      61.50 ±141%  sched_debug.cfs_rq[53]:/.blocked_load_avg
     10.50 ±  8%    +614.3%      75.00 ±122%  sched_debug.cfs_rq[53]:/.tg_load_contrib
   -596043 ±-10%     -49.0%    -304277 ±-36%  sched_debug.cfs_rq[54]:/.spread0
     10.00 ±  0%   +2062.5%     216.25 ± 40%  sched_debug.cfs_rq[56]:/.tg_load_contrib
     17.75 ±173%   +1302.8%     249.00 ± 26%  sched_debug.cfs_rq[60]:/.blocked_load_avg
   -809633 ± -9%     -36.2%    -516886 ±-23%  sched_debug.cfs_rq[60]:/.spread0
     28.00 ±109%    +828.6%     260.00 ± 25%  sched_debug.cfs_rq[60]:/.tg_load_contrib
    277.75 ± 95%     -86.3%      38.00 ±171%  sched_debug.cfs_rq[7]:/.blocked_load_avg
    293.25 ± 90%     -81.8%      53.50 ±121%  sched_debug.cfs_rq[7]:/.tg_load_contrib
     17.50 ±  2%     -28.6%      12.50 ± 34%  sched_debug.cpu#0.cpu_load[2]
     17.00 ±  4%     -25.0%      12.75 ± 35%  sched_debug.cpu#0.cpu_load[3]
      2907 ± 12%    +195.9%       8603 ± 63%  sched_debug.cpu#0.sched_goidle
     16.50 ±  3%
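
A note on why an fsnotify fix can move a pure-read benchmark: fsnotify sits
on the read path, since every successful read() reports an access event to
any watcher of the file, so changes to fsnotify mark handling are at least
plausible suspects here. That hook can be observed from userspace with
inotify; the sketch below is illustrative only and the watched path is
arbitrary:

    /*
     * Shows that read(2) passes through fsnotify: an IN_ACCESS watch
     * fires for a read of the watched file. Linux-only, path arbitrary.
     */
    #include <sys/inotify.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
            int ifd = inotify_init1(0);
            int fd = open("/etc/hostname", O_RDONLY);

            if (ifd < 0 || fd < 0 ||
                inotify_add_watch(ifd, "/etc/hostname", IN_ACCESS) < 0) {
                    perror("setup");
                    return 1;
            }
            read(fd, buf, sizeof(buf));        /* the read that fsnotify sees */

            /* collect the queued event and check its mask */
            if (read(ifd, buf, sizeof(buf)) >= (ssize_t)sizeof(struct inotify_event)) {
                    const struct inotify_event *ev = (const void *)buf;
                    if (ev->mask & IN_ACCESS)
                            puts("IN_ACCESS: fsnotify ran on the read path");
            }
            return 0;
    }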
