[lkp] [mm/vmpressure.c] 3c1da7beee: No primary result change, 278.5% vm-scalability.time.involuntary_context_switches
FYI, we noticed the below changes on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 3c1da7b02560cd0f0c66c5a59fce3c6746e3 ("mm/vmpressure.c: fix subtree pressure detection")

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/ivb43/lru-file-mmap-read/vm-scalability

commit:
  30bdbb78009e67767983085e302bec6d97afc679
  3c1da7b02560cd0f0c66c5a59fce3c6746e3

30bdbb78009e6776 3c1da7b02560cd0f0c66c5
---------------- ----------------------
         %stddev     %change         %stddev
             \          |                \
    193661 ±  1%    +278.5%     733007 ±  1%  vm-scalability.time.involuntary_context_switches
    906499 ±  2%     +18.1%    1070404 ±  1%  softirqs.RCU
    193661 ±  1%    +278.5%     733007 ±  1%  time.involuntary_context_switches
      4216 ±  3%     +86.5%       7863 ±  1%  vmstat.system.cs
      0.74 ± 85%     -80.1%       0.15 ±113%  perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault
      0.74 ± 85%     -80.1%       0.15 ±113%  perf-profile.cycles-pp.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault
      0.74 ± 85%     -80.1%       0.15 ±113%  perf-profile.cycles-pp.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault
      1378 ±  5%      -9.8%       1242 ±  9%  slabinfo.file_lock_cache.active_objs
      1378 ±  5%      -9.8%       1242 ±  9%  slabinfo.file_lock_cache.num_objs
     14388 ±  3%      -7.8%      13262 ±  7%  slabinfo.kmalloc-512.num_objs
     16441 ± 75%    -100.0%       0.00 ± -1%  latency_stats.avg.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
     15932 ± 45%    +233.0%      53047 ± 43%  latency_stats.avg.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
     16991 ± 74%    -100.0%       0.00 ± -1%  latency_stats.max.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    189128 ± 86%     -72.1%      52770 ± 63%  latency_stats.max.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
     36438 ± 58%    +417.4%     188546 ±112%  latency_stats.max.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      7680 ±102%     -90.3%     741.25 ± 17%  latency_stats.max.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs4_file_open.[nfsv4].do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open
      0.00 ± -1%      +Inf%      20319 ±100%  latency_stats.sum.call_rwsem_down_read_failed.page_lock_anon_vma_read.rmap_walk.try_to_unmap.migrate_pages.migrate_misplaced_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      0.00 ± -1%      +Inf%      20492 ± 98%  latency_stats.sum.call_rwsem_down_read_failed.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
     22837 ± 72%    -100.0%       0.00 ± -1%  latency_stats.sum.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%       5388 ±106%  latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.do_swap_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
     17.00 ± 10%     +30.9%      22.25 ± 27%  sched_debug.cfs_rq:/.load.18
     18.00 ± 11%     +56.9%      28.25 ± 24%  sched_debug.cfs_rq:/.load_avg.13
     15.50 ±  9%     +88.7%      29.25 ± 32%  sched_debug.cfs_rq:/.load_avg.44
      2.00 ±-50%     +25.0%       2.50 ± 66%  sched_debug.cfs_rq:/.nr_spread_over.13
      2.50 ±100%    +690.0%      19.75 ±100%  sched_debug.cfs_rq:/.nr_spread_over.7
     98.25 ± 82%     -83.5%      16.25 ±  2%  sched_debug.cfs_rq:/.runnable_load_avg.35
      7118 ±387%    +637.5%      52502 ± 57%  sched_debug.cfs_rq:/.spread0.16
   -275108 ±-167%   -132.9%      90504 ± 43%  sched_debug.cfs_rq:/.spread0.27
   -524687 ±-161%   -118.7%      98148 ± 49%  sched_debug.cfs_rq:/.spread0.35
     46300 ± 50%     +97.7%      91531 ± 40%  sched_debug.cfs_rq:/.spread0.39
     72286 ± 17%     +51.4%     109469 ± 21%  sched_debug.cfs_rq:/.spread0.40
    913.75 ±  6%      -7.6%     844.00 ±  0%  sched_debug.cfs_rq:/.util_avg.0
     19.75 ±  9%     -17.7%      16.25 ±  2%  sched_debug.cpu.cpu_load[0].0
     98.25 ± 82%     -83.7%      16.00 ±  0%  sched_debug.cpu.cpu_load[0].35
     19.75 ±  9%     -16.5%      16.50 ±  3%  sched_debug.cpu.cpu_load[1].0
     98.25 ± 82%     -83.7%
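For readers unfamiliar with the table layout: each row shows the metric's mean for the base commit, its run-to-run standard deviation, the percentage change, and the mean for the patched commit. The %change column is just (new - old) / old. A minimal sanity-check sketch (the `pct_change` helper is illustrative, not part of the lkp tooling):

```python
# Hypothetical helper mirroring the %change column of an lkp
# comparison row: percentage change from base commit to patched commit.

def pct_change(old: float, new: float) -> float:
    """Return (new - old) / old as a percentage."""
    return (new - old) / old * 100.0

# Headline row from the table above:
#   193661 -> 733007  vm-scalability.time.involuntary_context_switches
change = pct_change(193661, 733007)
print(f"{change:+.1f}%")  # prints +278.5%, matching the reported value
```

The same arithmetic reproduces the other rows, e.g. vmstat.system.cs: 4216 -> 7863 gives +86.5%.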