[lkp] [mm] 089e25e502: -5.4% will-it-scale.per_process_ops
FYI, we noticed the below changes on

https://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git master
commit 089e25e5022ed5a1bb821f0c7ca6b48781ecc122 ("mm: gup: make get_user_pages_fast and __get_user_pages_fast latency conscious")

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
  lkp-sb03/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/futex2

commit:
  226c833d8e769b4bb1900eaffd3efb1cbb95f820
  089e25e5022ed5a1bb821f0c7ca6b48781ecc122

226c833d8e769b4b 089e25e5022ed5a1bb821f0c7c
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
   4287684 ±  0%      -5.4%    4055429 ±  0%  will-it-scale.per_process_ops
   1432530 ±  0%      -3.9%    1377184 ±  0%  will-it-scale.per_thread_ops
      0.60 ±  0%      -1.8%       0.59 ±  0%  will-it-scale.scalability
    142663 ± 18%    +171.2%     386864 ± 82%  cpuidle.C3-SNB.time
    220.25 ±  8%     +49.0%     328.25 ± 14%  cpuidle.C3-SNB.usage
  24244125 ± 41%     -91.9%    1963754 ±108%  cpuidle.POLL.time
    663.50 ± 12%     -77.8%     147.50 ±134%  cpuidle.POLL.usage
    344.25 ± 12%     -29.6%     242.25 ± 17%  slabinfo.kmem_cache.active_objs
    344.25 ± 12%     -29.6%     242.25 ± 17%  slabinfo.kmem_cache.num_objs
    474.00 ± 11%     -27.0%     346.00 ± 15%  slabinfo.kmem_cache_node.active_objs
    496.00 ± 10%     -25.8%     368.00 ± 14%  slabinfo.kmem_cache_node.num_objs
      1.50 ± 33%    +150.0%       3.75 ± 22%  sched_debug.cfs_rq[20]:/.nr_spread_over
      7.00 ± 58%    +285.7%      27.00 ± 35%  sched_debug.cfs_rq[25]:/.load
    151.50 ± 23%     +34.8%     204.25 ±  1%  sched_debug.cfs_rq[25]:/.util_avg
   -764780 ±-13%     -18.2%    -625478 ± -1%  sched_debug.cfs_rq[31]:/.spread0
     12046 ±  8%     -15.5%      10180 ± 14%  sched_debug.cpu#1.nr_switches
      7057 ±  8%     -33.8%       4669 ± 23%  sched_debug.cpu#10.ttwu_count
     24588 ± 17%     -31.8%      16770 ± 16%  sched_debug.cpu#11.nr_switches
     -2.50 ±-60%    -120.0%       0.50 ±223%  sched_debug.cpu#11.nr_uninterruptible
     25784 ± 16%     -29.9%      18064 ± 16%  sched_debug.cpu#11.sched_count
      1536 ± 15%    +124.0%       3441 ± 49%  sched_debug.cpu#13.ttwu_local
     -4.00 ±  0%    -106.2%       0.25 ±1479% sched_debug.cpu#14.nr_uninterruptible
      2563 ± 42%     -50.0%       1281 ± 12%  sched_debug.cpu#16.nr_switches
      2760 ± 39%     -47.0%       1463 ± 10%  sched_debug.cpu#16.sched_count
      3.25 ±107%    -192.3%      -3.00 ±-54%  sched_debug.cpu#2.nr_uninterruptible
      2635 ± 14%     -27.0%       1923 ± 10%  sched_debug.cpu#2.sched_goidle
    957.25 ± 14%     +44.7%       1385 ± 29%  sched_debug.cpu#21.nr_switches
      1140 ± 11%     +37.3%       1566 ± 25%  sched_debug.cpu#21.sched_count
      3293 ± 64%     -49.9%       1648 ± 74%  sched_debug.cpu#23.nr_switches
      4654 ± 43%     -60.8%       1824 ± 66%  sched_debug.cpu#23.sched_count
      1595 ± 25%     +79.0%       2856 ± 25%  sched_debug.cpu#25.curr->pid
      7.00 ± 58%    +285.7%      27.00 ± 35%  sched_debug.cpu#25.load
      2.25 ±184%    -355.6%      -5.75 ±-31%  sched_debug.cpu#3.nr_uninterruptible
      1.25 ±153%    +320.0%       5.25 ± 15%  sched_debug.cpu#31.nr_uninterruptible
     24696 ± 21%     -36.6%      15662 ± 29%  sched_debug.cpu#9.nr_switches
     11398 ± 24%     -43.2%       6474 ± 30%  sched_debug.cpu#9.sched_goidle

lkp-sb03: Sandy Bridge-EP
Memory: 64G

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
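As a sanity check, the headline figure can be recomputed by hand from the two per_process_ops means in the table: the %change column is simply (new - old) / old * 100. The snippet below is illustrative arithmetic only, not part of the lkp-tests tooling:

```shell
# Recompute the headline %change for will-it-scale.per_process_ops:
# parent commit 226c833d8e76 averaged 4287684 ops/s, 089e25e502 averaged 4055429.
awk -v old=4287684 -v new=4055429 \
    'BEGIN { printf "%+.1f%%\n", (new - old) * 100 / old }'
# prints -5.4%
```

The same formula reproduces every %change entry in the table from its two mean columns.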
Thanks,
Ying Huang

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: will-it-scale
default-monitors:
  wait: activate-monitor
  kmsg:
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
    interval: 10
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 60
cpufreq_governor:
default-watchdogs:
  oom-killer:
  watchdog:
commit: b58730ac9d33c9b2a59117f4b3f3d83368ff34f1
model: Sandy Bridge-EP
memory: 64G
hdd_partitions: "/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WCAV5F059499-part3"
swap_partitions:
rootfs_partition: "/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WCAV5F059499-part4"
category: benchmark
perf-profile:
  freq: 800
will-it-scale:
  test: futex2
queue: cyclic
testbox: lkp-sb03
tbox_group: lkp-sb03
kconfig: x86_64-rhel
enqueue_time: