Re: [LKP] [sched] a15b12ac36a: -46.9% time.voluntary_context_switches +1.5% will-it-scale.per_process_ops

2015-01-03, Huang Ying
Hi, Kirill,

Sorry for the late reply.

On Tue, 2014-12-23 at 11:57 +0300, Kirill Tkhai wrote:
> Hi, Huang,
> 
> What do these digits mean? What does the test do?
> 
> 23.12.2014, 08:16, "Huang Ying" <ying.hu...@intel.com>:
> > FYI, we noticed the below changes on
> >
> > commit a15b12ac36ad4e7b856a4ae54937ae26a51aebad ("sched: Do not stop cpu in 
> > set_cpus_allowed_ptr() if task is not running")
> >
> > testbox/testcase/testparams: lkp-g5/will-it-scale/performance-lock1
> >
> > 1ba93d42727c4400  a15b12ac36ad4e7b856a4ae549
> > ----------------  --------------------------

Above are the good commit and the bad commit.

> >          %stddev     %change         %stddev
> >              \          |                \
> >    1517261 ±  0%      +1.5%    1539994 ±  0%  will-it-scale.per_process_ops

We have a basic description of the data above: %stddev is the standard
deviation of the per-run results (relative to the mean), and %change is the
change between the two commits.
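For example, for the will-it-scale.per_process_ops row above, %change is
(1539994 - 1517261) / 1517261 ~= +1.5%, and %stddev tells you how noisy the
per-run results were.  If it helps, here is a rough Python sketch of how one
such row could be computed (illustrative only, with made-up per-run samples;
this is not the actual LKP report code):

    from statistics import mean, stdev

    # Hypothetical per-run samples of will-it-scale.per_process_ops;
    # the real report aggregates the actual runs for each commit.
    parent_runs  = [1516900, 1517300, 1517600]
    patched_runs = [1539700, 1540000, 1540300]

    def summarize(runs):
        m = mean(runs)
        return m, 100.0 * stdev(runs) / m   # mean and relative stddev in %

    parent_mean, parent_sd = summarize(parent_runs)
    patched_mean, patched_sd = summarize(patched_runs)

    # %change: relative difference of the two means, in percent
    pct_change = 100.0 * (patched_mean - parent_mean) / parent_mean

    print(f"{parent_mean:10.0f} ± {parent_sd:2.0f}%  {pct_change:+6.1f}%  "
          f"{patched_mean:10.0f} ± {patched_sd:2.0f}%  will-it-scale.per_process_ops")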

What more do you want?

Best Regards,
Huang, Ying

> >        247 ± 30%    +131.8%        573 ± 49%  sched_debug.cpu#61.ttwu_count
> >        225 ± 22%    +142.8%        546 ± 34%  sched_debug.cpu#81.ttwu_local
> >      15115 ± 44%     +37.3%      20746 ± 40%  numa-meminfo.node7.Active
> >       1028 ± 38%    +115.3%       2214 ± 36%  sched_debug.cpu#16.ttwu_local
> >          2 ± 19%    +133.3%          5 ± 43%  sched_debug.cpu#89.cpu_load[3]
> >         21 ± 45%     +88.2%         40 ± 23%  sched_debug.cfs_rq[99]:/.tg_load_contrib
> >        414 ± 33%     +98.6%        823 ± 28%  sched_debug.cpu#81.ttwu_count
> >          4 ± 10%     +88.2%          8 ± 12%  sched_debug.cfs_rq[33]:/.runnable_load_avg
> >         22 ± 26%     +80.9%         40 ± 24%  sched_debug.cfs_rq[103]:/.tg_load_contrib
> >          7 ± 17%     -41.4%          4 ± 25%  sched_debug.cfs_rq[41]:/.load
> >          7 ± 17%     -37.9%          4 ± 19%  sched_debug.cpu#41.load
> >          3 ± 22%    +106.7%          7 ± 10%  sched_debug.cfs_rq[36]:/.runnable_load_avg
> >        174 ± 13%     +48.7%        259 ± 31%  sched_debug.cpu#112.ttwu_count
> >          4 ± 19%     +88.9%          8 ±  5%  sched_debug.cfs_rq[35]:/.runnable_load_avg
> >        260 ± 10%     +55.6%        405 ± 26%  numa-vmstat.node3.nr_anon_pages
> >       1042 ± 10%     +56.0%       1626 ± 26%  numa-meminfo.node3.AnonPages
> >         26 ± 22%     +74.3%         45 ± 16%  sched_debug.cfs_rq[65]:/.tg_load_contrib
> >         21 ± 43%     +71.3%         37 ± 26%  sched_debug.cfs_rq[100]:/.tg_load_contrib
> >       3686 ± 21%     +40.2%       5167 ± 19%  sched_debug.cpu#16.ttwu_count
> >        142 ±  9%     +34.4%        191 ± 24%  sched_debug.cpu#112.ttwu_local
> >          5 ± 18%     +69.6%          9 ± 15%  sched_debug.cfs_rq[35]:/.load
> >          2 ± 30%    +100.0%          5 ± 37%  sched_debug.cpu#106.cpu_load[1]
> >          3 ± 23%    +100.0%          6 ± 48%  sched_debug.cpu#106.cpu_load[2]
> >          5 ± 18%     +69.6%          9 ± 15%  sched_debug.cpu#35.load
> >          9 ± 20%     +48.6%         13 ± 16%  sched_debug.cfs_rq[7]:/.runnable_load_avg
> >       1727 ± 15%     +43.9%       2484 ± 30%  sched_debug.cpu#34.ttwu_local
> >         10 ± 17%     -40.5%          6 ± 13%  sched_debug.cpu#41.cpu_load[0]
> >         10 ± 14%     -29.3%          7 ±  5%  sched_debug.cpu#45.cpu_load[4]
> >         10 ± 17%     -33.3%          7 ± 10%  sched_debug.cpu#41.cpu_load[1]
> >       6121 ±  8%     +56.7%       9595 ± 30%  sched_debug.cpu#13.sched_goidle
> >         13 ±  8%     -25.9%         10 ± 17%  sched_debug.cpu#39.cpu_load[2]
> >         12 ± 16%     -24.0%          9 ± 15%  sched_debug.cpu#37.cpu_load[2]
> >        492 ± 17%     -21.3%        387 ± 24%  sched_debug.cpu#46.ttwu_count
> >       3761 ± 11%     -23.9%       2863 ± 15%  sched_debug.cpu#93.curr->pid
> >        570 ± 19%     +43.2%        816 ± 17%  sched_debug.cpu#86.ttwu_count
> >       5279 ±  8%     +63.5%       8631 ± 33%  sched_debug.cpu#13.ttwu_count
> >        377 ± 22%     -28.6%        269 ± 14%  sched_debug.cpu#46.ttwu_local
> >       5396 ± 10%     +29.9%       7007 ± 14%  sched_debug.cpu#16.sched_goidle
> >       1959 ± 12%     +36.9%       2683 ± 15%  numa-vmstat.node2.nr_slab_reclaimable
> >       7839 ± 12%     +37.0%      10736 ± 15%  numa-meminfo.node2.SReclaimable
> >          5 ± 15%     +66.7%          8 ±  9%  sched_debug.cfs_rq[33]:/.load
> >          5 ± 25%     +47.8%          8 ± 10%  sched_debug.cfs_rq[37]:/.load
> >          2 ±  0%     +87.5%          3 ± 34%  sched_debug.cpu#89.cpu_load[4]
> >          5 ± 15%     +66.7%          8 ±  9%  sched_debug.cpu#33.load
> >          6 ± 23%     +41.7%          8 ± 10%  sched_debug.cpu#37.load
> >          8 ± 10%     -26.5%          6 ±  6%  sched_debug.cpu#51.cpu_load[1]
> >       7300 ± 37%     +63.6%      11943 ± 16%  softirqs.TASKLET
> >       2984 ±  6%     +43.1%       4271 ± 23%  sched_debug.cpu#20.ttwu_count
> >        328 ±  4%     +40.5%        462 ± 25%  sched_debug.cpu#26.ttwu_local
> >         10 ±  7%     -27.5%          7 ±  5%  sched_debug.cpu#43.cpu_load[3]

Re: [LKP] [sched] a15b12ac36a: -46.9% time.voluntary_context_switches +1.5% will-it-scale.per_process_ops

2014-12-23, Kirill Tkhai
Hi, Huang,

What do these digits mean? What does the test do?

23.12.2014, 08:16, "Huang Ying" <ying.hu...@intel.com>:
> FYI, we noticed the below changes on
>
> commit a15b12ac36ad4e7b856a4ae54937ae26a51aebad ("sched: Do not stop cpu in 
> set_cpus_allowed_ptr() if task is not running")
>
> testbox/testcase/testparams: lkp-g5/will-it-scale/performance-lock1
>
> 1ba93d42727c4400  a15b12ac36ad4e7b856a4ae549
> ----------------  --------------------------
>          %stddev     %change         %stddev
>              \          |                \
>    1517261 ±  0%      +1.5%    1539994 ±  0%  will-it-scale.per_process_ops
>    247 ± 30%    +131.8%    573 ± 49%  sched_debug.cpu#61.ttwu_count
>    225 ± 22%    +142.8%    546 ± 34%  sched_debug.cpu#81.ttwu_local
>  15115 ± 44% +37.3%  20746 ± 40%  numa-meminfo.node7.Active
>   1028 ± 38%    +115.3%   2214 ± 36%  sched_debug.cpu#16.ttwu_local
>  2 ± 19%    +133.3%  5 ± 43%  sched_debug.cpu#89.cpu_load[3]
> 21 ± 45% +88.2% 40 ± 23%  sched_debug.cfs_rq[99]:/.tg_load_contrib
>    414 ± 33% +98.6%    823 ± 28%  sched_debug.cpu#81.ttwu_count
>  4 ± 10% +88.2%  8 ± 12%  sched_debug.cfs_rq[33]:/.runnable_load_avg
> 22 ± 26% +80.9% 40 ± 24%  sched_debug.cfs_rq[103]:/.tg_load_contrib
>  7 ± 17% -41.4%  4 ± 25%  sched_debug.cfs_rq[41]:/.load
>  7 ± 17% -37.9%  4 ± 19%  sched_debug.cpu#41.load
>  3 ± 22%    +106.7%  7 ± 10%  sched_debug.cfs_rq[36]:/.runnable_load_avg
>    174 ± 13% +48.7%    259 ± 31%  sched_debug.cpu#112.ttwu_count
>  4 ± 19% +88.9%  8 ±  5%  sched_debug.cfs_rq[35]:/.runnable_load_avg
>    260 ± 10% +55.6%    405 ± 26%  numa-vmstat.node3.nr_anon_pages
>   1042 ± 10% +56.0%   1626 ± 26%  numa-meminfo.node3.AnonPages
> 26 ± 22% +74.3% 45 ± 16%  sched_debug.cfs_rq[65]:/.tg_load_contrib
> 21 ± 43% +71.3% 37 ± 26%  sched_debug.cfs_rq[100]:/.tg_load_contrib
>   3686 ± 21% +40.2%   5167 ± 19%  sched_debug.cpu#16.ttwu_count
>    142 ±  9% +34.4%    191 ± 24%  sched_debug.cpu#112.ttwu_local
>  5 ± 18% +69.6%  9 ± 15%  sched_debug.cfs_rq[35]:/.load
>  2 ± 30%    +100.0%  5 ± 37%  sched_debug.cpu#106.cpu_load[1]
>  3 ± 23%    +100.0%  6 ± 48%  sched_debug.cpu#106.cpu_load[2]
>  5 ± 18% +69.6%  9 ± 15%  sched_debug.cpu#35.load
>  9 ± 20% +48.6% 13 ± 16%  sched_debug.cfs_rq[7]:/.runnable_load_avg
>   1727 ± 15% +43.9%   2484 ± 30%  sched_debug.cpu#34.ttwu_local
> 10 ± 17% -40.5%  6 ± 13%  sched_debug.cpu#41.cpu_load[0]
> 10 ± 14% -29.3%  7 ±  5%  sched_debug.cpu#45.cpu_load[4]
> 10 ± 17% -33.3%  7 ± 10%  sched_debug.cpu#41.cpu_load[1]
>   6121 ±  8% +56.7%   9595 ± 30%  sched_debug.cpu#13.sched_goidle
> 13 ±  8% -25.9% 10 ± 17%  sched_debug.cpu#39.cpu_load[2]
> 12 ± 16% -24.0%  9 ± 15%  sched_debug.cpu#37.cpu_load[2]
>    492 ± 17% -21.3%    387 ± 24%  sched_debug.cpu#46.ttwu_count
>   3761 ± 11% -23.9%   2863 ± 15%  sched_debug.cpu#93.curr->pid
>    570 ± 19% +43.2%    816 ± 17%  sched_debug.cpu#86.ttwu_count
>   5279 ±  8% +63.5%   8631 ± 33%  sched_debug.cpu#13.ttwu_count
>    377 ± 22% -28.6%    269 ± 14%  sched_debug.cpu#46.ttwu_local
>   5396 ± 10% +29.9%   7007 ± 14%  sched_debug.cpu#16.sched_goidle
>   1959 ± 12% +36.9%   2683 ± 15%  numa-vmstat.node2.nr_slab_reclaimable
>   7839 ± 12% +37.0%  10736 ± 15%  numa-meminfo.node2.SReclaimable
>  5 ± 15% +66.7%  8 ±  9%  sched_debug.cfs_rq[33]:/.load
>  5 ± 25% +47.8%  8 ± 10%  sched_debug.cfs_rq[37]:/.load
>  2 ±  0% +87.5%  3 ± 34%  sched_debug.cpu#89.cpu_load[4]
>  5 ± 15% +66.7%  8 ±  9%  sched_debug.cpu#33.load
>  6 ± 23% +41.7%  8 ± 10%  sched_debug.cpu#37.load
>  8 ± 10% -26.5%  6 ±  6%  sched_debug.cpu#51.cpu_load[1]
>   7300 ± 37% +63.6%  11943 ± 16%  softirqs.TASKLET
>   2984 ±  6% +43.1%   4271 ± 23%  sched_debug.cpu#20.ttwu_count
>    328 ±  4% +40.5%    462 ± 25%  sched_debug.cpu#26.ttwu_local
> 10 ±  7% -27.5%  7 ±  5%  sched_debug.cpu#43.cpu_load[3]
>  9 ±  8% -30.8%  6 ±  6%  sched_debug.cpu#41.cpu_load[3]
>  9 ±  8% -27.0%  6 ±  6%  sched_debug.cpu#41.cpu_load[4]
> 10 ± 14% -32.5%  6 ±  6%  sched_debug.cpu#41.cpu_load[2]
>  16292 ±  6% +42.8%  23260 ± 25%  sched_debug.cpu#13.nr_switches
> 14 ± 28% +55.9% 23 ±  8%  sched_debug.cpu#99.cpu_load[0]

[LKP] [sched] a15b12ac36a: -46.9% time.voluntary_context_switches +1.5% will-it-scale.per_process_ops

2014-12-22, Huang Ying
FYI, we noticed the below changes on

commit a15b12ac36ad4e7b856a4ae54937ae26a51aebad ("sched: Do not stop cpu in 
set_cpus_allowed_ptr() if task is not running")

testbox/testcase/testparams: lkp-g5/will-it-scale/performance-lock1

1ba93d42727c4400  a15b12ac36ad4e7b856a4ae549
----------------  --------------------------
         %stddev     %change         %stddev
             \          |                \
   1517261 ±  0%      +1.5%    1539994 ±  0%  will-it-scale.per_process_ops
       247 ± 30%    +131.8%        573 ± 49%  sched_debug.cpu#61.ttwu_count
       225 ± 22%    +142.8%        546 ± 34%  sched_debug.cpu#81.ttwu_local
     15115 ± 44%     +37.3%      20746 ± 40%  numa-meminfo.node7.Active
      1028 ± 38%    +115.3%       2214 ± 36%  sched_debug.cpu#16.ttwu_local
         2 ± 19%    +133.3%          5 ± 43%  sched_debug.cpu#89.cpu_load[3]
        21 ± 45%     +88.2%         40 ± 23%  sched_debug.cfs_rq[99]:/.tg_load_contrib
       414 ± 33%     +98.6%        823 ± 28%  sched_debug.cpu#81.ttwu_count
         4 ± 10%     +88.2%          8 ± 12%  sched_debug.cfs_rq[33]:/.runnable_load_avg
        22 ± 26%     +80.9%         40 ± 24%  sched_debug.cfs_rq[103]:/.tg_load_contrib
         7 ± 17%     -41.4%          4 ± 25%  sched_debug.cfs_rq[41]:/.load
         7 ± 17%     -37.9%          4 ± 19%  sched_debug.cpu#41.load
         3 ± 22%    +106.7%          7 ± 10%  sched_debug.cfs_rq[36]:/.runnable_load_avg
       174 ± 13%     +48.7%        259 ± 31%  sched_debug.cpu#112.ttwu_count
         4 ± 19%     +88.9%          8 ±  5%  sched_debug.cfs_rq[35]:/.runnable_load_avg
       260 ± 10%     +55.6%        405 ± 26%  numa-vmstat.node3.nr_anon_pages
      1042 ± 10%     +56.0%       1626 ± 26%  numa-meminfo.node3.AnonPages
        26 ± 22%     +74.3%         45 ± 16%  sched_debug.cfs_rq[65]:/.tg_load_contrib
        21 ± 43%     +71.3%         37 ± 26%  sched_debug.cfs_rq[100]:/.tg_load_contrib
      3686 ± 21%     +40.2%       5167 ± 19%  sched_debug.cpu#16.ttwu_count
       142 ±  9%     +34.4%        191 ± 24%  sched_debug.cpu#112.ttwu_local
         5 ± 18%     +69.6%          9 ± 15%  sched_debug.cfs_rq[35]:/.load
         2 ± 30%    +100.0%          5 ± 37%  sched_debug.cpu#106.cpu_load[1]
         3 ± 23%    +100.0%          6 ± 48%  sched_debug.cpu#106.cpu_load[2]
         5 ± 18%     +69.6%          9 ± 15%  sched_debug.cpu#35.load
         9 ± 20%     +48.6%         13 ± 16%  sched_debug.cfs_rq[7]:/.runnable_load_avg
      1727 ± 15%     +43.9%       2484 ± 30%  sched_debug.cpu#34.ttwu_local
        10 ± 17%     -40.5%          6 ± 13%  sched_debug.cpu#41.cpu_load[0]
        10 ± 14%     -29.3%          7 ±  5%  sched_debug.cpu#45.cpu_load[4]
        10 ± 17%     -33.3%          7 ± 10%  sched_debug.cpu#41.cpu_load[1]
      6121 ±  8%     +56.7%       9595 ± 30%  sched_debug.cpu#13.sched_goidle
        13 ±  8%     -25.9%         10 ± 17%  sched_debug.cpu#39.cpu_load[2]
        12 ± 16%     -24.0%          9 ± 15%  sched_debug.cpu#37.cpu_load[2]
       492 ± 17%     -21.3%        387 ± 24%  sched_debug.cpu#46.ttwu_count
      3761 ± 11%     -23.9%       2863 ± 15%  sched_debug.cpu#93.curr->pid
       570 ± 19%     +43.2%        816 ± 17%  sched_debug.cpu#86.ttwu_count
      5279 ±  8%     +63.5%       8631 ± 33%  sched_debug.cpu#13.ttwu_count
       377 ± 22%     -28.6%        269 ± 14%  sched_debug.cpu#46.ttwu_local
      5396 ± 10%     +29.9%       7007 ± 14%  sched_debug.cpu#16.sched_goidle
      1959 ± 12%     +36.9%       2683 ± 15%  numa-vmstat.node2.nr_slab_reclaimable
      7839 ± 12%     +37.0%      10736 ± 15%  numa-meminfo.node2.SReclaimable
         5 ± 15%     +66.7%          8 ±  9%  sched_debug.cfs_rq[33]:/.load
         5 ± 25%     +47.8%          8 ± 10%  sched_debug.cfs_rq[37]:/.load
         2 ±  0%     +87.5%          3 ± 34%  sched_debug.cpu#89.cpu_load[4]
         5 ± 15%     +66.7%          8 ±  9%  sched_debug.cpu#33.load
         6 ± 23%     +41.7%          8 ± 10%  sched_debug.cpu#37.load
         8 ± 10%     -26.5%          6 ±  6%  sched_debug.cpu#51.cpu_load[1]
      7300 ± 37%     +63.6%      11943 ± 16%  softirqs.TASKLET
      2984 ±  6%     +43.1%       4271 ± 23%  sched_debug.cpu#20.ttwu_count
       328 ±  4%     +40.5%        462 ± 25%  sched_debug.cpu#26.ttwu_local
        10 ±  7%     -27.5%          7 ±  5%  sched_debug.cpu#43.cpu_load[3]
         9 ±  8%     -30.8%          6 ±  6%  sched_debug.cpu#41.cpu_load[3]
         9 ±  8%     -27.0%          6 ±  6%  sched_debug.cpu#41.cpu_load[4]
        10 ± 14%     -32.5%          6 ±  6%  sched_debug.cpu#41.cpu_load[2]
     16292 ±  6%     +42.8%      23260 ± 25%  sched_debug.cpu#13.nr_switches
        14 ± 28%     +55.9%         23 ±  8%  sched_debug.cpu#99.cpu_load[0]
         5 ±  8%     +28.6%          6 ± 12%  sched_debug.cpu#17.load
        13 ±  7%     -23.1%         10 ± 12%  sched_debug.cpu#39.cpu_load[3]
         7 ± 10%     -35.7%          4 ± 11%
