Re: [PATCH] add support for dynamic ticks and preempt rcu

2008-02-02 Thread Steven Rostedt

On Sat, 2 Feb 2008, Ingo Molnar wrote:
>
> i've added it a few days ago and it's already in the sched-devel.git
> tree:
>
>  git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git


OK, thanks! I know everyone is extremely busy now and I just wanted to
make sure this didn't fall through the cracks.

-- Steve



Re: [PATCH] add support for dynamic ticks and preempt rcu

2008-02-02 Thread Ingo Molnar

* Steven Rostedt <[EMAIL PROTECTED]> wrote:

> [
>   Resending this patch because no one commented on it (besides Paul)
>   and I'm thinking it got lost in the noise.
>   Without this patch, preempt RCU is broken when NO_HZ is configured.
> ]

i've added it a few days ago and it's already in the sched-devel.git 
tree:

 git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git

Ingo



[PATCH] add support for dynamic ticks and preempt rcu

2008-02-01 Thread Steven Rostedt

[
  Resending this patch because no one commented on it (besides Paul)
  and I'm thinking it got lost in the noise.
  Without this patch, preempt RCU is broken when NO_HZ is configured.
]


The PREEMPT-RCU can get stuck if a CPU goes idle and NO_HZ is set.
The idle CPU will not progress the RCU through its grace period
and a synchronize_rcu() may get stuck. Without this patch I have
a box that will not boot when PREEMPT_RCU and NO_HZ are set.
That same box boots fine with this patch.
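[ Editor's illustration -- not part of the patch or the original mail.
  The fix rests on a per-CPU counter whose parity encodes the tick
  state: odd means the CPU is active, even means its ticks are stopped
  and no RCU readers can be running there, so the grace-period state
  machine may skip it. A minimal user-space C model of that convention,
  with made-up helper names, is sketched below. ]

#include <stdio.h>

#define NR_CPUS 4

static long dynticks_progress_counter[NR_CPUS];

/* CPU stops its tick and goes idle: counter becomes even. */
static void enter_dynticks_idle(int cpu)
{
	dynticks_progress_counter[cpu]++;	/* odd -> even */
}

/* CPU wakes up (e.g. an interrupt arrives): counter becomes odd. */
static void exit_dynticks_idle(int cpu)
{
	dynticks_progress_counter[cpu]++;	/* even -> odd */
}

/* Must the grace-period state machine still wait on this CPU? */
static int cpu_blocks_grace_period(int cpu)
{
	return dynticks_progress_counter[cpu] & 0x1;	/* odd == active */
}

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		dynticks_progress_counter[cpu] = 1;	/* boot value: active */

	enter_dynticks_idle(2);				/* CPU 2 goes idle */

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: %s\n", cpu,
		       cpu_blocks_grace_period(cpu) ?
		       "must be waited on" : "idle, safe to skip");
	return 0;
}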

Note: This patch came directly from the -rt patch where it
has been tested for several months.


Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
 include/linux/hardirq.h|   10 ++
 include/linux/rcuclassic.h |3
 include/linux/rcupreempt.h |   22 
 kernel/rcupreempt.c|  224 -
 kernel/softirq.c   |1
 kernel/time/tick-sched.c   |3
 6 files changed, 259 insertions(+), 4 deletions(-)

Index: linux-compile.git/kernel/rcupreempt.c
===
--- linux-compile.git.orig/kernel/rcupreempt.c	2008-01-29 11:03:21.000000000 -0500
+++ linux-compile.git/kernel/rcupreempt.c	2008-01-29 11:10:08.000000000 -0500
@@ -23,6 +23,10 @@
  * to Suparna Bhattacharya for pushing me completely away
  * from atomic instructions on the read side.
  *
+ *  - Added handling of Dynamic Ticks
+ *  Copyright 2007 - Paul E. McKenney <[EMAIL PROTECTED]>
+ * - Steven Rostedt <[EMAIL PROTECTED]>
+ *
  * Papers:  http://www.rdrop.com/users/paulmck/RCU
  *
  * Design Document: http://lwn.net/Articles/253651/
@@ -409,6 +413,212 @@ static void __rcu_advance_callbacks(stru
}
 }

+#ifdef CONFIG_NO_HZ
+
+DEFINE_PER_CPU(long, dynticks_progress_counter) = 1;
+static DEFINE_PER_CPU(long, rcu_dyntick_snapshot);
+static DEFINE_PER_CPU(int, rcu_update_flag);
+
+/**
+ * rcu_irq_enter - Called from Hard irq handlers and NMI/SMI.
+ *
+ * If the CPU was idle with dynamic ticks active, this updates the
+ * dynticks_progress_counter to let the RCU handling know that the
+ * CPU is active.
+ */
+void rcu_irq_enter(void)
+{
+   int cpu = smp_processor_id();
+
+   if (per_cpu(rcu_update_flag, cpu))
+   per_cpu(rcu_update_flag, cpu)++;
+
+   /*
+* Only update if we are coming from a stopped ticks mode
+* (dynticks_progress_counter is even).
+*/
+   if (!in_interrupt() &&
+   (per_cpu(dynticks_progress_counter, cpu) & 0x1) == 0) {
+   /*
+* The following might seem like we could have a race
+* with NMI/SMIs. But this really isn't a problem.
+* Here we do a read/modify/write, and the race happens
+* when an NMI/SMI comes in after the read and before
+* the write. But NMI/SMIs will increment this counter
+* twice before returning, so the zero bit will not
+* be corrupted by the NMI/SMI which is the most important
+* part.
+*
+* The only thing is that we would bring back the counter
+* to a position that it was in during the NMI/SMI.
+* But the zero bit would be set, so the rest of the
+* counter would again be ignored.
+*
+* On return from the IRQ, the counter may have the zero
+* bit be 0 and the counter the same as the return from
+* the NMI/SMI. If the state machine was so unlucky to
+* see that, it still doesn't matter, since all
+* RCU read-side critical sections on this CPU would
+* have already completed.
+*/
+   per_cpu(dynticks_progress_counter, cpu)++;
+   /*
+* The following memory barrier ensures that any
+* rcu_read_lock() primitives in the irq handler
+* are seen by other CPUs to follow the above
+* increment to dynticks_progress_counter. This is
+* required in order for other CPUs to correctly
+* determine when it is safe to advance the RCU
+* grace-period state machine.
+*/
+   smp_mb(); /* see above block comment. */
+   /*
+* Since we can't determine the dynamic tick mode from
+* the dynticks_progress_counter after this routine,
+* we use a second flag to acknowledge that we came
+* from an idle state with ticks stopped.
+*/
+   per_cpu(rcu_update_flag, cpu)++;
+   /*
+* If we take an NMI/SMI now, they will also increment
+* the rcu_update_flag, and will not update the
+* dynticks_progress_counter on exit. That is for
+* this IRQ to do.
+*/
+   }
+}
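[ Editor's sketch -- not part of the patch. The long comment above
  describes how NMI/SMI nesting keeps the parity bit intact: a handler
  that finds the counter even flips it odd on entry and back to even on
  the matching exit, while a nested NMI/SMI only bumps rcu_update_flag
  and so cannot corrupt the parity. Below is a self-contained
  user-space model of those rules; the exit-side logic is guessed from
  the entry-side comments, not copied from the (truncated) patch. ]

#include <assert.h>
#include <stdio.h>

static long dynticks_progress_counter = 2;	/* even: CPU idle */
static int rcu_update_flag;			/* handler nesting depth */

static void model_irq_enter(void)
{
	if (rcu_update_flag)
		rcu_update_flag++;		/* nested NMI/SMI */

	if ((dynticks_progress_counter & 0x1) == 0) {
		dynticks_progress_counter++;	/* even -> odd: active */
		rcu_update_flag++;		/* we flipped it, we unflip it */
	}
}

static void model_irq_exit(void)
{
	if (rcu_update_flag && --rcu_update_flag == 0)
		dynticks_progress_counter++;	/* odd -> even: idle again */
}

int main(void)
{
	model_irq_enter();			/* IRQ hits the idle CPU */
	model_irq_enter();			/* NMI nests inside the IRQ */
	assert(dynticks_progress_counter & 0x1);	/* still odd */
	model_irq_exit();			/* NMI returns */
	assert(dynticks_progress_counter & 0x1);	/* IRQ still live: odd */
	model_irq_exit();			/* IRQ returns */
	assert((dynticks_progress_counter & 0x1) == 0);	/* even: idle */
	printf("parity preserved: counter=%ld\n", dynticks_progress_counter);
	return 0;
}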

Re: [PATCH] add support for dynamic ticks and preempt rcu

2008-01-29 Thread Paul E. McKenney
On Tue, Jan 29, 2008 at 11:18:12AM -0500, Steven Rostedt wrote:
> 
> [
>  Paul, you had your Signed-off-by in the RT patch, so I attached it here
>   too
> ]

Works for me!!!

> The PREEMPT-RCU can get stuck if a CPU goes idle and NO_HZ is set. The
> idle CPU will not progress the RCU through its grace period and a
> synchronize_rcu() may get stuck. Without this patch I have a box that will
> not boot when PREEMPT_RCU and NO_HZ are set. That same box boots fine with
> this patch.
> 
> Note: This patch came directly from the -rt patch where it has been tested
> for several months.

For those who attended my lightning talk yesterday on changing RCU to
"let sleeping CPUs lie", this is the patch.

If your architecture calls rcu_irq_enter() or irq_enter() upon
NMI/SMI/MC/whatever handler entry and also calls rcu_irq_exit() or
irq_exit() upon NMI/SMI/MC/whatever handler exit, you are covered.

Alternatively, if none of your architecture's NMI/SMI/MC/whatever
handlers ever invoke rcu_read_lock()/rcu_read_unlock() and friends,
you are also covered.

I believe that we are covered, but I cannot claim to fully understand
all 20+ architectures.  ;-)
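[ Editor's sketch -- hypothetical architecture code, not from this
  thread. "Covered" means the arch brackets its NMI/SMI handler with
  rcu_irq_enter()/rcu_irq_exit() the same way ordinary interrupts are
  bracketed, so any read-side critical section inside the handler is
  visible to the grace-period machinery. Every name below other than
  rcu_irq_enter()/rcu_irq_exit() is a made-up stand-in, and this is a
  compile-only shape, not a buildable kernel unit. ]

extern void rcu_irq_enter(void);	/* added by this patch */
extern void rcu_irq_exit(void);		/* added by this patch */
extern void rcu_read_lock(void);	/* simplified prototypes for the sketch */
extern void rcu_read_unlock(void);
extern void handle_nmi_event(void);	/* hypothetical arch-specific work */

void arch_do_nmi(void)			/* hypothetical NMI entry point */
{
	rcu_irq_enter();		/* this CPU now counts as active */

	rcu_read_lock();		/* read-side section is now covered */
	handle_nmi_event();
	rcu_read_unlock();

	rcu_irq_exit();			/* CPU may be treated as idle again */
}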

Thanx, Paul

> Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
> Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
> ---
>  include/linux/hardirq.h|   10 ++
>  include/linux/rcuclassic.h |3
>  include/linux/rcupreempt.h |   22 
>  kernel/rcupreempt.c|  224 -
>  kernel/softirq.c   |1
>  kernel/time/tick-sched.c   |3
>  6 files changed, 259 insertions(+), 4 deletions(-)
> 
> Index: linux-compile.git/kernel/rcupreempt.c
> ===
> --- linux-compile.git.orig/kernel/rcupreempt.c	2008-01-29 11:03:21.000000000 -0500
> +++ linux-compile.git/kernel/rcupreempt.c	2008-01-29 11:10:08.000000000 -0500
> @@ -23,6 +23,10 @@
>   *   to Suparna Bhattacharya for pushing me completely away
>   *   from atomic instructions on the read side.
>   *
> + *  - Added handling of Dynamic Ticks
> + *  Copyright 2007 - Paul E. McKenney <[EMAIL PROTECTED]>
> + * - Steven Rostedt <[EMAIL PROTECTED]>
> + *
>   * Papers:  http://www.rdrop.com/users/paulmck/RCU
>   *
>   * Design Document: http://lwn.net/Articles/253651/
> @@ -409,6 +413,212 @@ static void __rcu_advance_callbacks(stru
>   }
>  }
> 
> +#ifdef CONFIG_NO_HZ
> +
> +DEFINE_PER_CPU(long, dynticks_progress_counter) = 1;
> +static DEFINE_PER_CPU(long, rcu_dyntick_snapshot);
> +static DEFINE_PER_CPU(int, rcu_update_flag);
> +
> +/**
> + * rcu_irq_enter - Called from Hard irq handlers and NMI/SMI.
> + *
> + * If the CPU was idle with dynamic ticks active, this updates the
> + * dynticks_progress_counter to let the RCU handling know that the
> + * CPU is active.
> + */
> +void rcu_irq_enter(void)
> +{
> + int cpu = smp_processor_id();
> +
> + if (per_cpu(rcu_update_flag, cpu))
> + per_cpu(rcu_update_flag, cpu)++;
> +
> + /*
> +  * Only update if we are coming from a stopped ticks mode
> +  * (dynticks_progress_counter is even).
> +  */
> + if (!in_interrupt() &&
> + (per_cpu(dynticks_progress_counter, cpu) & 0x1) == 0) {
> + /*
> +  * The following might seem like we could have a race
> +  * with NMI/SMIs. But this really isn't a problem.
> +  * Here we do a read/modify/write, and the race happens
> +  * when an NMI/SMI comes in after the read and before
> +  * the write. But NMI/SMIs will increment this counter
> +  * twice before returning, so the zero bit will not
> +  * be corrupted by the NMI/SMI which is the most important
> +  * part.
> +  *
> +  * The only thing is that we would bring back the counter
> +  * to a position that it was in during the NMI/SMI.
> +  * But the zero bit would be set, so the rest of the
> +  * counter would again be ignored.
> +  *
> +  * On return from the IRQ, the counter may have the zero
> +  * bit be 0 and the counter the same as the return from
> +  * the NMI/SMI. If the state machine was so unlucky to
> +  * see that, it still doesn't matter, since all
> +  * RCU read-side critical sections on this CPU would
> +  * have already completed.
> +  */
> + per_cpu(dynticks_progress_counter, cpu)++;
> + /*
> +  * The following memory barrier ensures that any
> +  * rcu_read_lock() primitives in the irq handler
> +  * are seen by other CPUs to follow the above
> +  * increment to dynticks_progress_counter. This is
> +  * required in order for other CPUs to correctly
> +  * determine when it is safe to advance the RCU
> +  * grace-period state machine.
> +  */

[PATCH] add support for dynamic ticks and preempt rcu

2008-01-29 Thread Steven Rostedt

[
 Paul, you had your Signed-off-by in the RT patch, so I attached it here
  too
]

The PREEMPT-RCU can get stuck if a CPU goes idle and NO_HZ is set. The
idle CPU will not progress the RCU through its grace period and a
synchronize_rcu() may get stuck. Without this patch I have a box that will
not boot when PREEMPT_RCU and NO_HZ are set. That same box boots fine with
this patch.

Note: This patch came directly from the -rt patch where it has been tested
for several months.


Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
 include/linux/hardirq.h|   10 ++
 include/linux/rcuclassic.h |3
 include/linux/rcupreempt.h |   22 
 kernel/rcupreempt.c|  224 -
 kernel/softirq.c   |1
 kernel/time/tick-sched.c   |3
 6 files changed, 259 insertions(+), 4 deletions(-)

Index: linux-compile.git/kernel/rcupreempt.c
===
--- linux-compile.git.orig/kernel/rcupreempt.c	2008-01-29 11:03:21.000000000 -0500
+++ linux-compile.git/kernel/rcupreempt.c	2008-01-29 11:10:08.000000000 -0500
@@ -23,6 +23,10 @@
  * to Suparna Bhattacharya for pushing me completely away
  * from atomic instructions on the read side.
  *
+ *  - Added handling of Dynamic Ticks
+ *  Copyright 2007 - Paul E. McKenney <[EMAIL PROTECTED]>
+ * - Steven Rostedt <[EMAIL PROTECTED]>
+ *
  * Papers:  http://www.rdrop.com/users/paulmck/RCU
  *
  * Design Document: http://lwn.net/Articles/253651/
@@ -409,6 +413,212 @@ static void __rcu_advance_callbacks(stru
}
 }

+#ifdef CONFIG_NO_HZ
+
+DEFINE_PER_CPU(long, dynticks_progress_counter) = 1;
+static DEFINE_PER_CPU(long, rcu_dyntick_snapshot);
+static DEFINE_PER_CPU(int, rcu_update_flag);
+
+/**
+ * rcu_irq_enter - Called from Hard irq handlers and NMI/SMI.
+ *
+ * If the CPU was idle with dynamic ticks active, this updates the
+ * dynticks_progress_counter to let the RCU handling know that the
+ * CPU is active.
+ */
+void rcu_irq_enter(void)
+{
+   int cpu = smp_processor_id();
+
+   if (per_cpu(rcu_update_flag, cpu))
+   per_cpu(rcu_update_flag, cpu)++;
+
+   /*
+* Only update if we are coming from a stopped ticks mode
+* (dynticks_progress_counter is even).
+*/
+   if (!in_interrupt() &&
+   (per_cpu(dynticks_progress_counter, cpu) & 0x1) == 0) {
+   /*
+* The following might seem like we could have a race
+* with NMI/SMIs. But this really isn't a problem.
+* Here we do a read/modify/write, and the race happens
+* when an NMI/SMI comes in after the read and before
+* the write. But NMI/SMIs will increment this counter
+* twice before returning, so the zero bit will not
+* be corrupted by the NMI/SMI which is the most important
+* part.
+*
+* The only thing is that we would bring back the counter
+* to a position that it was in during the NMI/SMI.
+* But the zero bit would be set, so the rest of the
+* counter would again be ignored.
+*
+* On return from the IRQ, the counter may have the zero
+* bit be 0 and the counter the same as the return from
+* the NMI/SMI. If the state machine was so unlucky to
+* see that, it still doesn't matter, since all
+* RCU read-side critical sections on this CPU would
+* have already completed.
+*/
+   per_cpu(dynticks_progress_counter, cpu)++;
+   /*
+* The following memory barrier ensures that any
+* rcu_read_lock() primitives in the irq handler
+* are seen by other CPUs to follow the above
+* increment to dynticks_progress_counter. This is
+* required in order for other CPUs to correctly
+* determine when it is safe to advance the RCU
+* grace-period state machine.
+*/
+   smp_mb(); /* see above block comment. */
+   /*
+* Since we can't determine the dynamic tick mode from
+* the dynticks_progress_counter after this routine,
+* we use a second flag to acknowledge that we came
+* from an idle state with ticks stopped.
+*/
+   per_cpu(rcu_update_flag, cpu)++;
+   /*
+* If we take an NMI/SMI now, they will also increment
+* the rcu_update_flag, and will not update the
+* dynticks_progress_counter on exit. That is for
+* this IRQ to do.
+*/
+   }
+}
+
+/**
+ * rcu_irq_exit -
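[ Editor's sketch -- the archive truncates the patch here, at the start
  of rcu_irq_exit(). The patch also declares a per-CPU
  rcu_dyntick_snapshot; the natural use of the counter/snapshot pair is
  for the grace-period state machine to record each CPU's counter when
  a grace-period stage starts, then later skip any CPU that was idle at
  snapshot time or has passed through idle since. The check below is a
  simplified guess at that logic, ignoring the memory-ordering
  refinements a real implementation needs; it is not the patch's
  literal code. ]

#include <stdio.h>

#define NR_CPUS 4

static long dynticks_progress_counter[NR_CPUS];
static long rcu_dyntick_snapshot[NR_CPUS];

/* Record each CPU's counter when a grace-period stage begins. */
static void dyntick_save_progress_counter(int cpu)
{
	rcu_dyntick_snapshot[cpu] = dynticks_progress_counter[cpu];
}

/* Must the state machine still wait for this CPU's acknowledgment? */
static int rcu_needs_cpu_ack(int cpu)
{
	long curr = dynticks_progress_counter[cpu];
	long snap = rcu_dyntick_snapshot[cpu];

	if ((snap & 0x1) == 0)	/* idle when the snapshot was taken */
		return 0;
	if (curr != snap)	/* has been through idle since then */
		return 0;
	return 1;		/* busy the whole time: must be waited on */
}

int main(void)
{
	dynticks_progress_counter[0] = 1;	/* CPU 0 busy since boot */
	dynticks_progress_counter[1] = 4;	/* CPU 1 idle (even) */

	dyntick_save_progress_counter(0);
	dyntick_save_progress_counter(1);
	dynticks_progress_counter[0] += 2;	/* CPU 0 idled, then woke */

	printf("cpu0 needs ack: %d\n", rcu_needs_cpu_ack(0));	/* 0 */
	printf("cpu1 needs ack: %d\n", rcu_needs_cpu_ack(1));	/* 0 */
	return 0;
}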
