Re: [PATCH v2 1/2] mm: make vmstat_update periodic run conditional

2013-08-28 Thread Christoph Lameter
On Fri, 9 Aug 2013, Gilad Ben-Yossef wrote:

> If the code does not consider setting the vmstat_cpus bit in the mask
> unless we are running on a CPU in tickless state, then we will (almost)
> never set vmstat_cpus, since we will (almost) never be tickless while in
> a deferrable work item -

Sorry, I never got around to answering this one. Not sure what to do about
it.

How about this: disable the vmstat updates when there is no diff to handle
instead?  That would mean the OS was quiet during the earlier period, and
it gives you a criterion for switching the vmstat work off that is
independent of tickless. It would even work when there are multiple
processes running on the processor, as long as none of them causes counter
updates.
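
For illustration, a minimal sketch of that idea (not the posted patch; it
assumes refresh_cpu_vm_stats() is changed to return whether it found any
deltas, as the v2 patch below already does):

static void vmstat_update(struct work_struct *w)
{
	int this_cpu = smp_processor_id();

	/* Reschedule only if the previous interval actually had a diff. */
	if (refresh_cpu_vm_stats(this_cpu))
		schedule_delayed_work(&__get_cpu_var(vmstat_work),
			round_jiffies_relative(sysctl_stat_interval));
	/* else: the OS was quiet here, so let the vmstat work go idle. */
}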

In the meantime there are additional vmstat patches from me pending for
merge (not related to the conditional running of vmstat, but they may make
it easier to implement). So if you want to do any work, please do it on
top of the newer release available from Andrew's tree.

Re: [PATCH v2 1/2] mm: make vmstat_update periodic run conditional

2013-08-09 Thread Gilad Ben-Yossef
On Thu, Aug 8, 2013 at 5:59 PM, Christoph Lameter  wrote:
> On Thu, 8 Aug 2013, Gilad Ben-Yossef wrote:
>
>> vmstat_update runs from the vmstat work queue item, executed by the
>> workqueue kernel thread.
>>
>> If this code is running, it means there are at least two schedulable tasks:
>> 1. The workqueue kernel thread, because it is running.
>> 2. At least one more task, otherwise we would be in idle and the
>> workqueue kernel thread would not execute this work item.
>>
>> Unfortunately, having two schedulable tasks means we're not running
>> tickless, so the check will never trigger - or have I missed something
>> obvious?
>
> The vmstat update is deferrable work. As such it is not required to run
> and can be pushed off. It will not be considered for the calculation of
> the next timer interrupt. See __next_timer_interrupt().

Yes, I understand that. I was trying to say something else:

If the code does not consider setting the vmstat_cpus bit in the mask
unless we are running on a CPU in tickless state, then we will (almost)
never set vmstat_cpus, since we will (almost) never be tickless while in
a deferrable work item -

If there is no other task, we will be in idle and the deferrable work
will not be scheduled, since the timer will not fire.

If there is one task originally, the work item gets executed in the
workqueue kernel thread, so we now have two tasks and tickless will
disengage.

If there is more than one task, tickless is not engaged to begin with.

Bottom line - we will be in active tickless mode while running a
deferrable work item only if we happened to fire the timer that
scheduled the work and the previously running task happened to block.
This is rare enough that in practice we will almost never be in active
tickless mode when running the vmstat_update function.

I hope I managed to explain myself better this time.

Thanks,
Gilad



-- 
Gilad Ben-Yossef
Chief Coffee Drinker
gi...@benyossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru


Re: [PATCH v2 1/2] mm: make vmstat_update periodic run conditional

2013-08-08 Thread Christoph Lameter
On Thu, 8 Aug 2013, Gilad Ben-Yossef wrote:

> vmstat_update runs from the vmstat work queue item, executed by the
> workqueue kernel thread.
>
> If this code is running, it means there are at least two schedulable tasks:
> 1. The workqueue kernel thread, because it is running.
> 2. At least one more task, otherwise we would be in idle and the
> workqueue kernel thread would not execute this work item.
>
> Unfortunately, having two schedulable tasks means we're not running
> tickless, so the check will never trigger - or have I missed something
> obvious?

The vmstat update is deferrable work. As such it is not required to run
and can be pushed off. It will not be considered for the calculation of
the next timer interrupt. See __next_timer_interrupt().
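
For reference, the distinction is in how the work item is initialized.  A
minimal sketch with placeholder names (example_work, example_fn), not
taken from the patch:

static DEFINE_PER_CPU(struct delayed_work, example_work);

static void example_fn(struct work_struct *w)
{
	/* periodic bookkeeping that is allowed to be delayed */
}

static void example_setup(int cpu)
{
	/*
	 * INIT_DEFERRABLE_WORK() gives the item a deferrable timer:
	 * __next_timer_interrupt() skips it, so it will not wake an
	 * idle CPU by itself, unlike a plain INIT_DELAYED_WORK() timer.
	 */
	INIT_DEFERRABLE_WORK(&per_cpu(example_work, cpu), example_fn);
	schedule_delayed_work_on(cpu, &per_cpu(example_work, cpu),
				 round_jiffies_relative(HZ));
}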





Re: [PATCH v2 1/2] mm: make vmstat_update periodic run conditional

2013-08-08 Thread Gilad Ben-Yossef
On Thu, Jun 20, 2013 at 5:05 PM, Christoph Lameter  wrote:
>
> On Wed, 19 Jun 2013, Gilad Ben-Yossef wrote:
>
> > +static void vmstat_update(struct work_struct *w)
> > +{
> > +	int cpu, this_cpu = smp_processor_id();
> > +
> > +	if (unlikely(this_cpu == vmstat_monitor_cpu))
> > +		for_each_cpu_not(cpu, &vmstat_cpus)
> > +			if (need_vmstat(cpu))
> > +				start_cpu_timer(cpu);
> > +
> > +	if (likely(refresh_cpu_vm_stats(this_cpu) || (this_cpu == vmstat_monitor_cpu)))
> > +		schedule_delayed_work(&__get_cpu_var(vmstat_work),
> > +			round_jiffies_relative(sysctl_stat_interval));
> > +	else
> > +		cpumask_clear_cpu(this_cpu, &vmstat_cpus);
>
> The clearing of vmstat_cpus could be avoided if this processor is not
> running tickless. Frequent updates to vmstat_cpus could become an issue.

I like the idea of tying the vmstat disabling to the tickless logic, but I
seem to have run into a bit of a chicken-and-egg problem here:

vmstat_update runs from the vmstat work queue item, executed by the
workqueue kernel thread.

If this code is running, it means there are at least two schedulable tasks:
1. The workqueue kernel thread, because it is running.
2. At least one more task, otherwise we would be in idle and the
workqueue kernel thread would not execute this work item.

Unfortunately, having two schedulable tasks means we're not running
tickless, so the check will never trigger - or have I missed something
obvious?

Thanks,
Gilad


--
Gilad Ben-Yossef
Chief Coffee Drinker
gi...@benyossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com

"If you take a class in large-scale robotics, can you end up in a situation
where the homework eats your dog?"
 -- Jean-Baptiste Queru

Re: [PATCH v2 1/2] mm: make vmstat_update periodic run conditional

2013-08-07 Thread Christoph Lameter
Is there any work in progress on this issue?

Re: [PATCH v2 1/2] mm: make vmstat_update periodic run conditional

2013-06-20 Thread Christoph Lameter
On Wed, 19 Jun 2013, Gilad Ben-Yossef wrote:

> +static void vmstat_update(struct work_struct *w)
> +{
> +	int cpu, this_cpu = smp_processor_id();
> +
> +	if (unlikely(this_cpu == vmstat_monitor_cpu))
> +		for_each_cpu_not(cpu, &vmstat_cpus)
> +			if (need_vmstat(cpu))
> +				start_cpu_timer(cpu);
> +
> +	if (likely(refresh_cpu_vm_stats(this_cpu) || (this_cpu == vmstat_monitor_cpu)))
> +		schedule_delayed_work(&__get_cpu_var(vmstat_work),
> +			round_jiffies_relative(sysctl_stat_interval));
> +	else
> +		cpumask_clear_cpu(this_cpu, &vmstat_cpus);

The clearing of vmstat_cpus could be avoided if this processor is not
running tickless. Frequent updates to vmstat_cpus could become an issue.
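
One way to read that suggestion, purely as a sketch (not part of the
posted patch; it assumes tick_nohz_tick_stopped() from
kernel/time/tick-sched.c is usable here, and vmstat_update_tail() is a
made-up helper standing in for the tail of vmstat_update()):

static void vmstat_update_tail(int this_cpu, bool vm_activity)
{
	if (vm_activity || this_cpu == vmstat_monitor_cpu ||
	    !tick_nohz_tick_stopped())
		/*
		 * Busy, the monitor CPU, or still ticking: keep the
		 * periodic work and leave vmstat_cpus alone to avoid
		 * mask churn.
		 */
		schedule_delayed_work(&__get_cpu_var(vmstat_work),
			round_jiffies_relative(sysctl_stat_interval));
	else
		/* Quiet *and* tickless: drop this CPU from the mask. */
		cpumask_clear_cpu(this_cpu, &vmstat_cpus);
}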

>	case CPU_DOWN_PREPARE_FROZEN:
> -		cancel_delayed_work_sync(&per_cpu(vmstat_work, cpu));
> -		per_cpu(vmstat_work, cpu).work.func = NULL;
> +		if (cpumask_test_cpu(cpu, &vmstat_cpus)) {
> +			cancel_delayed_work_sync(&per_cpu(vmstat_work, cpu));
> +			per_cpu(vmstat_work, cpu).work.func = NULL;
> +			if (cpu == vmstat_monitor_cpu) {
> +				int this_cpu = smp_processor_id();
> +				vmstat_monitor_cpu = this_cpu;
> +				if (!cpumask_test_cpu(this_cpu, &vmstat_cpus))
> +					start_cpu_timer(this_cpu);
> +			}
> +		}
>		break;

If the disabling of vmstat is tied into the nohz logic then these portions
are no longer necessary.

> @@ -1237,8 +1299,10 @@ static int __init setup_vmstat(void)
>
>	register_cpu_notifier(&vmstat_notifier);
>
> +	vmstat_monitor_cpu = smp_processor_id();
> +

Drop the vmstat_monitor_cpu and use the dynticks processor.
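
One possible reading, again only as a sketch: let the CPU that keeps the
timekeeping duty under dynticks do the monitoring pass instead of a stored
vmstat_monitor_cpu.  tick_do_timer_cpu is internal to kernel/time, so an
accessor would be needed; the check below is illustrative only:

static inline bool vmstat_is_monitor_cpu(int cpu)
{
	/*
	 * Assumption: the dynticks timekeeping CPU (tick_do_timer_cpu)
	 * is made visible to mm/vmstat.c through some interface.
	 */
	return cpu == tick_do_timer_cpu;
}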


[PATCH v2 1/2] mm: make vmstat_update periodic run conditional

2013-06-19 Thread Gilad Ben-Yossef
vmstat_update runs every second from the work queue to update statistics
and drain per-cpu pages back into the global page allocator.

This is useful in most circumstances, but it is wasteful if the CPU does
not actually generate any VM activity. This can happen when the CPU is
idle or is running a CPU-bound long-term task (e.g. CPU isolation), in
which case the periodic vmstat_update timer needlessly interrupts the CPU.

This patch makes vmstat_update schedule itself for the next round only if
there was any work for it to do in the previous run. The assumption is
that if we did not see any VM activity for a whole second, it is
reasonable to assume that the CPU is not using the VM, because it is
either idle or running a long-term, single-CPU-bound task.

A scapegoat CPU is picked to periodically monitor the CPUs that have had
their vmstat_update work stopped, and to re-schedule them if VM activity
is detected. The scapegoat CPU never stops its own vmstat_update work
item.

Signed-off-by: Gilad Ben-Yossef 
CC: Christoph Lameter 
CC: Paul E. McKenney 
CC: linux-kernel@vger.kernel.org
CC: linux...@kvack.org
---
 include/linux/vmstat.h |    2 +-
 mm/vmstat.c            |   92 ++++++++++++++++++++++++++++++++++----------
 2 files changed, 79 insertions(+), 15 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index c586679..a30ab79 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -198,7 +198,7 @@ extern void __inc_zone_state(struct zone *, enum zone_stat_item);
 extern void dec_zone_state(struct zone *, enum zone_stat_item);
 extern void __dec_zone_state(struct zone *, enum zone_stat_item);
 
-void refresh_cpu_vm_stats(int);
+bool refresh_cpu_vm_stats(int);
 void refresh_zone_stat_thresholds(void);
 
 void drain_zonestat(struct zone *zone, struct per_cpu_pageset *);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index f42745e..6143c70 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -14,6 +14,7 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/cpu.h>
+#include <linux/cpumask.h>
 #include <linux/vmstat.h>
 #include <linux/sched.h>
 #include <linux/math64.h>
@@ -432,11 +433,12 @@ EXPORT_SYMBOL(dec_zone_page_state);
  * with the global counters. These could cause remote node cache line
  * bouncing and will have to be only done when necessary.
  */
-void refresh_cpu_vm_stats(int cpu)
+bool refresh_cpu_vm_stats(int cpu)
 {
struct zone *zone;
int i;
int global_diff[NR_VM_ZONE_STAT_ITEMS] = { 0, };
+   bool vm_activity = false;
 
for_each_populated_zone(zone) {
struct per_cpu_pageset *p;
@@ -483,14 +485,21 @@ void refresh_cpu_vm_stats(int cpu)
if (p->expire)
continue;
 
-   if (p->pcp.count)
+   if (p->pcp.count) {
+   vm_activity = true;
			drain_zone_pages(zone, &p->pcp);
+   }
 #endif
}
 
for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
-   if (global_diff[i])
+   if (global_diff[i]) {
			atomic_long_add(global_diff[i], &vm_stat[i]);
+   vm_activity = true;
+   }
+
+   return vm_activity;
+
 }
 
 /*
@@ -1172,24 +1181,69 @@ static const struct file_operations proc_vmstat_file_operations = {
 #endif /* CONFIG_PROC_FS */
 
 #ifdef CONFIG_SMP
+
+#define VMSTAT_NO_CPU (-1)
+
 static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
 int sysctl_stat_interval __read_mostly = HZ;
+static struct cpumask vmstat_cpus;
+static int vmstat_monitor_cpu __read_mostly = VMSTAT_NO_CPU;
 
-static void vmstat_update(struct work_struct *w)
+static inline bool need_vmstat(int cpu)
 {
-   refresh_cpu_vm_stats(smp_processor_id());
-   schedule_delayed_work(&__get_cpu_var(vmstat_work),
-   round_jiffies_relative(sysctl_stat_interval));
+   struct zone *zone;
+   int i;
+
+   for_each_populated_zone(zone) {
+   struct per_cpu_pageset *p;
+
+   p = per_cpu_ptr(zone->pageset, cpu);
+
+   for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
+   if (p->vm_stat_diff[i])
+   return true;
+
+   if (zone_to_nid(zone) != numa_node_id() && p->pcp.count)
+   return true;
+   }
+
+   return false;
 }
 
-static void __cpuinit start_cpu_timer(int cpu)
+static void vmstat_update(struct work_struct *w);
+
+static void start_cpu_timer(int cpu)
 {
 	struct delayed_work *work = &per_cpu(vmstat_work, cpu);
 
-   INIT_DEFERRABLE_WORK(work, vmstat_update);
+	cpumask_set_cpu(cpu, &vmstat_cpus);
schedule_delayed_work_on(cpu, work, __round_jiffies_relative(HZ, cpu));
 }
 
+static void __cpuinit setup_cpu_timer(int cpu)
+{
+	struct delayed_work *work = &per_cpu(vmstat_work, cpu);
+
+   INIT_DEFERRABLE_WORK(work, vmstat_update);
+   start_cpu_timer(cpu);
+}
+
+static void vmstat_update(struct work_struct *w)
+{
+	int cpu, this_cpu = smp_processor_id();
+
+	if (unlikely(this_cpu == vmstat_monitor_cpu))
+		for_each_cpu_not(cpu, &vmstat_cpus)
+			if (need_vmstat(cpu))
+				start_cpu_timer(cpu);
+
+	if (likely(refresh_cpu_vm_stats(this_cpu) || (this_cpu == vmstat_monitor_cpu)))
+		schedule_delayed_work(&__get_cpu_var(vmstat_work),
+			round_jiffies_relative(sysctl_stat_interval));
+	else
+		cpumask_clear_cpu(this_cpu, &vmstat_cpus);
+}
