On Sat, Feb 23, 2008 at 01:31:00PM +0100, Andi Kleen wrote:
> > *) compute the context-switch pair time average for the system. This is
> > your time threshold (CSt).
>
> This is not a uniform time. Consider the difference between
> context switch on the same hyperthread, context switch between
On Fri, Feb 22, 2008 at 11:55:45AM -0800, Sven-Thorsten Dietrich wrote:
>
> On Fri, 2008-02-22 at 11:43 -0800, Paul E. McKenney wrote:
> > On Fri, Feb 22, 2008 at 11:21:14AM -0800, Bill Huey (hui) wrote:
> > > On Fri, Feb 22, 2008 at 11:19 AM, Bill Huey (hui) <[EMAIL
On Fri, Feb 22, 2008 at 11:21:14AM -0800, Bill Huey (hui) wrote:
> On Fri, Feb 22, 2008 at 11:19 AM, Bill Huey (hui) <[EMAIL PROTECTED]> wrote:
> > Yeah, I'm not very keen on having a constant there without some
> > contention instrumentation to see how long the spins are. It would be
> > better
On Thu, Feb 21, 2008 at 05:41:09PM +0100, Andi Kleen wrote:
>
> > +config RTLOCK_DELAY
> > + int "Default delay (in loops) for adaptive rtlocks"
> > + range 0 10
> > + depends on ADAPTIVE_RTLOCK
>
> I must say I'm not a big fan of putting such subtle configurable numbers
> into Kconfig
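The bounded-spin idea being debated here can be sketched in ordinary userspace C. This is an illustrative model only: `RTLOCK_DELAY` stands in for the Kconfig knob, and `try_adaptive_spin` is a made-up name, not the kernel's actual rtlock code.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative only: spin for a bounded number of loops before giving up
 * and blocking.  RTLOCK_DELAY plays the role of the Kconfig knob under
 * discussion; the real patch tunes this against context-switch cost. */
#define RTLOCK_DELAY 10000

static atomic_flag lock_word = ATOMIC_FLAG_INIT;

/* Returns true if the lock was taken while spinning; false means the
 * caller should block (e.g. sleep on the rtmutex) instead of burning
 * more cycles. */
static bool try_adaptive_spin(void)
{
    for (long i = 0; i < RTLOCK_DELAY; i++) {
        if (!atomic_flag_test_and_set_explicit(&lock_word,
                                               memory_order_acquire))
            return true;   /* got the lock while spinning */
    }
    return false;          /* spin budget exhausted: go to sleep */
}
```

The thread's objection applies directly: without contention instrumentation there is no principled way to pick the constant.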
On Tue, Feb 19, 2008 at 05:03:18PM -0500, Mathieu Desnoyers wrote:
> * Paul E. McKenney ([EMAIL PROTECTED]) wrote:
> > On Tue, Feb 19, 2008 at 05:27:40PM +0100, Jan Kiszka wrote:
> > > Paul E. McKenney wrote:
> > > > On Mon, Feb 18, 2008 at 01:47:31PM +0100, Jan Kiszka wrote:
On Tue, Feb 19, 2008 at 03:33:26PM -0500, Mathieu Desnoyers wrote:
> * Jan Kiszka ([EMAIL PROTECTED]) wrote:
> > Paul E. McKenney wrote:
> > > On Mon, Feb 18, 2008 at 01:47:31PM +0100, Jan Kiszka wrote:
> > >> K. Prasad wrote:
> > >>> Hi Ingo,
On Tue, Feb 19, 2008 at 05:27:40PM +0100, Jan Kiszka wrote:
> Paul E. McKenney wrote:
> > On Mon, Feb 18, 2008 at 01:47:31PM +0100, Jan Kiszka wrote:
> >> K. Prasad wrote:
> >>> Hi Ingo,
> >>> Please accept these patches into the rt tree which convert the
On Mon, Feb 18, 2008 at 01:47:31PM +0100, Jan Kiszka wrote:
> K. Prasad wrote:
> > Hi Ingo,
> > Please accept these patches into the rt tree which convert the
> > existing RCU tracing mechanism for Preempt RCU and RCU Boost into
> > markers.
> >
> > These patches are based upon the 2.6.24-rc5
On Wed, Jan 30, 2008 at 11:45:01AM +0100, Wolfgang Grandegger wrote:
> Paul E. McKenney wrote:
> > On Wed, Jan 30, 2008 at 09:18:49AM +0100, Wolfgang Grandegger wrote:
> >> Paul E. McKenney wrote:
> >>> On Tue, Jan 29, 2008 at 02:38:04PM +0100, Wolfgang Grandegger
On Wed, Jan 30, 2008 at 09:18:49AM +0100, Wolfgang Grandegger wrote:
> Paul E. McKenney wrote:
> > On Tue, Jan 29, 2008 at 02:38:04PM +0100, Wolfgang Grandegger wrote:
> >> Luotao Fu wrote:
> >>> Hi,
> >>>
> >>> Wolfgang Grandegger wrote:
On Tue, Jan 29, 2008 at 02:38:04PM +0100, Wolfgang Grandegger wrote:
> Luotao Fu wrote:
> > Hi,
> >
> > Wolfgang Grandegger wrote:
> > ..
> >> Do you still get high latencies with:
> >>
> >> CONFIG_PREEMPT_RCU_BOOST=y
> >> CONFIG_RCU_TRACE=y
> >> CONFIG_NO_HZ is not set
> >>
> >> Wit
On Fri, Jan 11, 2008 at 08:58:35PM -0500, Steven Rostedt wrote:
>
>
> Hmm, I think this was caused by Paul's patch:
>
> http://lkml.org/lkml/2007/12/13/5
>
> I'll apply this too, unless Paul sees any reason not to.
They look good to me -- apologies for the hassle, Robert!
On Mon, Dec 31, 2007 at 09:14:30PM +0530, Sripathi Kodi wrote:
> Hi Paul,
>
> On Monday 31 December 2007 08:27, Paul E. McKenney wrote:
> > On Wed, Dec 26, 2007 at 09:53:25AM +0530, Chirag Jog wrote:
> > > * [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2007-12-24
On Wed, Dec 26, 2007 at 09:53:25AM +0530, Chirag Jog wrote:
> * [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2007-12-24 13:00:30]:
>
> Hi,
> > Do you have DEBUG and LOCKDEP configured?
> CONFIG_PREEMPT_DEBUG is enabled
> but LOCKDEP is not.
Strange... Neither PREEMPT_DEBUG nor LOCKDEP appear in my run
On Fri, Dec 14, 2007 at 03:51:14PM +0100, Johannes Weiner wrote:
> Hi,
>
> Gautham R Shenoy <[EMAIL PROTECTED]> writes:
>
> > diff --git a/kernel/rcuclassic.c b/kernel/rcuclassic.c
> > new file mode 100644
> > index 000..11c16aa
> > --- /dev/null
> > +++ b/kernel/rcuclassic.c
> > +/**
> > + *
On Thu, Dec 13, 2007 at 09:38:04PM +0100, Ingo Molnar wrote:
>
> * Gautham R Shenoy <[EMAIL PROTECTED]> wrote:
>
> > Hello everyone,
> >
> > This patchset is an updated version of the preemptible RCU patchset
> > that Paul McKenney had posted it in September earlier this year that
> > can be f
On Fri, Oct 05, 2007 at 06:51:14PM +0530, Gautham R Shenoy wrote:
> On Fri, Oct 05, 2007 at 08:24:21AM -0400, Steven Rostedt wrote:
> > On Fri, 5 Oct 2007, Gautham R Shenoy wrote:
> > > On Mon, Sep 10, 2007 at 11:39:01AM -0700, Paul E. McKenney wrote:
On Fri, Oct 05, 2007 at 08:21:49AM -0400, Steven Rostedt wrote:
>
> On Thu, 4 Oct 2007, Paul E. McKenney wrote:
>
> > On Wed, Oct 03, 2007 at 04:59:51PM -0400, Steven Rostedt wrote:
> > >
> > > PS. I got rid of your rcu_preeempt_task for rcu_preempt_tasks ;-)
>
==
> --- linux-2.6.23-rc9-rt1.orig/kernel/rcutorture.c
> +++ linux-2.6.23-rc9-rt1/kernel/rcutorture.c
> @@ -54,6 +54,7 @@ MODULE_AUTHOR("Paul E. McKenney
> static int nreaders = -1;    /* # reader threads, defaults to 2*ncpus */
> static int nfakewriters = 4; /* # fake writer threads */
On Mon, Oct 01, 2007 at 03:09:16PM -0700, Davide Libenzi wrote:
> On Mon, 1 Oct 2007, Paul E. McKenney wrote:
>
> > That would indeed be one approach that CPU designers could take to
> > avoid being careless or sadistic. ;-)
>
> That'd be the easier (unique maybe) a
On Mon, Oct 01, 2007 at 12:56:26PM -0700, Arjan van de Ven wrote:
> On Mon, 1 Oct 2007 21:27:39 +0200 (CEST)
>
> > > I already did this here by checking for cpu != 0. But it also needs
> > > either tracking or forbidding migrations of irq 0. I can take care
> > > of the patch.
> >
> > I was think
On Mon, Oct 01, 2007 at 11:44:25AM -0700, Davide Libenzi wrote:
> On Sun, 30 Sep 2007, Paul E. McKenney wrote:
>
> > On Sun, Sep 30, 2007 at 04:02:09PM -0700, Davide Libenzi wrote:
> > > On Sun, 30 Sep 2007, Oleg Nesterov wrote:
> > >
> > > > Ah, but I
On Sun, Sep 30, 2007 at 08:38:49PM +0400, Oleg Nesterov wrote:
> On 09/10, Paul E. McKenney wrote:
> >
> > --- linux-2.6.22-d-schedclassic/kernel/rcupreempt.c 2007-08-22
> > 15:45:28.0 -0700
> > +++ linux-2.6.22-e-hotplugcpu/kernel/rcupreempt.c 2007-08-22
On Sun, Sep 30, 2007 at 08:31:02PM +0400, Oleg Nesterov wrote:
> On 09/28, Paul E. McKenney wrote:
> >
> > On Fri, Sep 28, 2007 at 06:47:14PM +0400, Oleg Nesterov wrote:
> > > Ah, I was confused by the comment,
> > >
> > > smp_mb(); /* Don
On Sun, Sep 30, 2007 at 04:02:09PM -0700, Davide Libenzi wrote:
> On Sun, 30 Sep 2007, Oleg Nesterov wrote:
>
> > Ah, but I asked a different question. We must see CPU 1's stores by
> > definition, but what about CPU 0's stores (which could be seen by CPU 1)?
> >
> > Let's take a "real life" ex
On Fri, Sep 28, 2007 at 07:05:14PM -0400, Steven Rostedt wrote:
>
>
> --
> On Fri, 28 Sep 2007, Gautham R Shenoy wrote:
>
> > >
> > > +#ifdef CONFIG_PREEMPT_RCU_BOOST
> > > +/*
> > > + * Task state with respect to being RCU-boosted. This state is changed
> > > + * by the task itself in response
On Fri, Sep 28, 2007 at 06:47:14PM +0400, Oleg Nesterov wrote:
> On 09/27, Paul E. McKenney wrote:
> >
> > On Wed, Sep 26, 2007 at 07:13:51PM +0400, Oleg Nesterov wrote:
> > >
> > > Yes, yes, I see now. We really need these barriers, except I think
> > > r
On Wed, Sep 26, 2007 at 07:13:51PM +0400, Oleg Nesterov wrote:
> On 09/23, Paul E. McKenney wrote:
> >
> > On Sun, Sep 23, 2007 at 09:38:07PM +0400, Oleg Nesterov wrote:
> > > Isn't DEFINE_PER_CPU_SHARED_ALIGNED better for rcu_flip_flag and
> > > rcu_mb_flag?
makes it just another name for call_rcu for rcupreempt.
Looks good!
Acked-by: Paul E. McKenney <[EMAIL PROTECTED]>
> Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
>
> Index: linux-2.6.23-rc8-rt1/include/linux/rcupreempt.h
> ==
On Wed, Sep 26, 2007 at 05:07:33PM -0400, Steven Rostedt wrote:
>
> --
> On Wed, 26 Sep 2007, Peter Zijlstra wrote:
> >
> > On Wed, 2007-09-26 at 12:55 -0700, Paul E. McKenney wrote:
> >
> > > Well, we could make spin_lock_irqsave() invoke rcu_read_lock() and
On Wed, Sep 26, 2007 at 10:54:46PM +0200, Peter Zijlstra wrote:
>
> On Wed, 2007-09-26 at 12:55 -0700, Paul E. McKenney wrote:
>
> > Well, we could make spin_lock_irqsave() invoke rcu_read_lock() and
> > spin_lock_irqrestore() invoke rcu_read_unlock(), with similar adjustm
On Wed, Sep 26, 2007 at 01:44:22PM -0400, Steven Rostedt wrote:
> --
> On Wed, 26 Sep 2007, Dmitry Torokhov wrote:
> > > The synchronize_all_irqs() will not return until:
> > >
> > > 1. All pre-existing hardirqs have completed.
> > >
> > > 2. All pre-existing threaded irqs have completed.
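The two completion guarantees above can be modeled in userspace C. This is a sketch under stated assumptions: `synchronize_all_irqs_model` and the in-flight counter are illustrative names, and a real implementation would snapshot per-handler state rather than use one global count.

```c
#include <stdatomic.h>

/* Illustrative model of "wait for pre-existing handlers": every handler
 * (hardirq or threaded) bumps a count on entry and drops it on exit;
 * the synchronize call waits for that count to drain. */
static atomic_long in_flight;

static void handler_enter(void) { atomic_fetch_add(&in_flight, 1); }
static void handler_exit(void)  { atomic_fetch_sub(&in_flight, 1); }

static void synchronize_all_irqs_model(void)
{
    /* A real implementation must distinguish handlers that started
     * before this call from ones that start after it (otherwise a
     * steady stream of interrupts could stall us forever); this simple
     * global count over-waits but shows the completion rule. */
    while (atomic_load(&in_flight) != 0)
        ;  /* spin until all pre-existing handlers complete */
}
```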
On Wed, Sep 26, 2007 at 01:19:05PM -0400, Dmitry Torokhov wrote:
> On 9/26/07, Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> > On Wed, Sep 26, 2007 at 09:16:55AM -0400, Dmitry Torokhov wrote:
> >
> > > No, I don't think synchronize_irq() will work for me. Whil
On Wed, Sep 26, 2007 at 09:16:55AM -0400, Dmitry Torokhov wrote:
> Hi Paul,
>
> On 9/26/07, Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> > On Wed, Sep 26, 2007 at 10:28:33AM +0200, Peter Zijlstra wrote:
> > > On Tue, 25 Sep 2007 18:11:39 -0700 "Paul E. McKenn
On Wed, Sep 26, 2007 at 10:28:33AM +0200, Peter Zijlstra wrote:
> On Tue, 25 Sep 2007 18:11:39 -0700 "Paul E. McKenney"
> <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Sep 26, 2007 at 01:24:47AM +0200, Peter Zijlstra wrote:
> > > On Tue, 25 Sep 2007 16:02:45
On Wed, Sep 26, 2007 at 01:24:47AM +0200, Peter Zijlstra wrote:
> On Tue, 25 Sep 2007 16:02:45 -0400 (EDT) Steven Rostedt
> <[EMAIL PROTECTED]> wrote:
>
> > > This would of course require that synchronize_all_irqs() be in the
> > > RCU code rather than the irq code so that it could access the stat
On Tue, Sep 25, 2007 at 01:22:03PM -0400, Steven Rostedt wrote:
>
> --
> >
> > Passes light testing (five rounds of kernbench) on an x86_64 box.
> >
> > Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
> > ---
> >
> > include/linux/har
On Sun, Sep 23, 2007 at 09:38:07PM +0400, Oleg Nesterov wrote:
> On 09/10, Paul E. McKenney wrote:
> >
> > Work in progress, not for inclusion.
>
> Impressive work! a couple of random newbie's questions...
Thank you for the kind words, and most especially for the care
On Thu, Sep 20, 2007 at 06:12:22PM -0700, Paul E. McKenney wrote:
> Hello!
>
> Color me blind, but I don't see how the following race is avoided:
>
> CPU 0:A hardware interrupt is received for a threaded irq, which
> eventually results in do_hardirq(
thread_edge_irq().
Passes light testing (five rounds of kernbench) on an x86_64 box.
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/hardirq.h |4 +++-
kernel/irq/handle.c |2 ++
kernel/irq/manage.c | 25 +
3 files chang
On Fri, Sep 21, 2007 at 10:56:56PM -0400, Steven Rostedt wrote:
>
> [ sneaks away from the family for a bit to answer emails ]
[ same here, now that you mention it... ]
> --
> On Fri, 21 Sep 2007, Paul E. McKenney wrote:
>
> > On Fri, Sep 21, 2007 at 09:19:22PM -0400,
On Fri, Sep 21, 2007 at 11:15:42PM -0400, Steven Rostedt wrote:
> On Fri, 21 Sep 2007, Paul E. McKenney wrote:
> > On Fri, Sep 21, 2007 at 09:15:03PM -0400, Steven Rostedt wrote:
> > > On Fri, 21 Sep 2007, Paul E. McKenney wrote:
> > > > On Fri, Sep 21, 2007 at 10
On Fri, Sep 21, 2007 at 09:15:03PM -0400, Steven Rostedt wrote:
> On Fri, 21 Sep 2007, Paul E. McKenney wrote:
> > On Fri, Sep 21, 2007 at 10:40:03AM -0400, Steven Rostedt wrote:
> > > On Mon, Sep 10, 2007 at 11:34:12AM -0700, Paul
On Fri, Sep 21, 2007 at 09:19:22PM -0400, Steven Rostedt wrote:
>
> --
> On Fri, 21 Sep 2007, Paul E. McKenney wrote:
> > >
> > > In any case, I will be looking at the scenarios more carefully. If
> > > it turns out that GP_STAGES can indeed be cranked dow
On Fri, Sep 21, 2007 at 04:03:43PM -0700, Paul E. McKenney wrote:
> On Fri, Sep 21, 2007 at 11:20:48AM -0400, Steven Rostedt wrote:
> > On Mon, Sep 10, 2007 at 11:34:12AM -0700, Paul E. McKenney wrote:
[ . . . ]
> > Paul,
> >
> > Looking further into this, I s
On Fri, Sep 21, 2007 at 10:40:03AM -0400, Steven Rostedt wrote:
> On Mon, Sep 10, 2007 at 11:34:12AM -0700, Paul E. McKenney wrote:
Covering the pieces that weren't in Peter's reply. ;-)
And thank you -very- much for the careful and thorough review!!!
> > #endif /* __KERNEL__ */
On Fri, Sep 21, 2007 at 07:23:09PM -0400, Steven Rostedt wrote:
> --
> On Fri, 21 Sep 2007, Paul E. McKenney wrote:
>
> > If you do a synchronize_rcu() it might well have to wait through the
> > following sequence of states:
> >
> > Stage 0: (might have to wait t
On Fri, Sep 21, 2007 at 11:20:48AM -0400, Steven Rostedt wrote:
> On Mon, Sep 10, 2007 at 11:34:12AM -0700, Paul E. McKenney wrote:
> > +
> > +/*
> > + * PREEMPT_RCU data structures.
> > + */
> > +
> > +#define GP_STAGES 4
> > +struct rcu_data {
>
On Fri, Sep 21, 2007 at 06:31:12PM -0400, Steven Rostedt wrote:
> On Fri, Sep 21, 2007 at 05:46:53PM +0200, Peter Zijlstra wrote:
> > On Fri, 21 Sep 2007 10:40:03 -0400 Steven Rostedt <[EMAIL PROTECTED]>
> > wrote:
> >
> > > On Mon, Sep 10, 2007 at 11:3
On Fri, Sep 21, 2007 at 05:46:53PM +0200, Peter Zijlstra wrote:
> On Fri, 21 Sep 2007 10:40:03 -0400 Steven Rostedt <[EMAIL PROTECTED]>
> wrote:
>
> > On Mon, Sep 10, 2007 at 11:34:12AM -0700, Paul E. McKenney wrote:
>
> > Can you have a pointer somewhere that ex
On Fri, Sep 21, 2007 at 12:17:21AM -0400, Steven Rostedt wrote:
> [ continued here from comment on patch 1]
>
> On Mon, Sep 10, 2007 at 11:34:12AM -0700, Paul E. McKenney wrote:
> > /* softirq mask and active fields moved to irq_cpustat_t in
> > diff -urpNa -X dontdif
loop in thread_edge_irq() is a case in point. Can this
do-while execute indefinitely in real systems?
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
include/linux/hardirq.h |4 +++-
kernel/irq/manage.c | 27 +++
2 files changed, 30 insertions(+), 1 de
Hello!
Color me blind, but I don't see how the following race is avoided:
CPU 0: A hardware interrupt is received for a threaded irq, which
eventually results in do_hardirq() being invoked and the
descriptor lock being acquired. Because the IRQ_INPROGRESS
status bit is s
Work in progress, not for inclusion.
This patch updates the RCU documentation to reflect preemptible RCU as
well as recent publications. Fix an incorrect comment in the code.
Change the name ORDERED_WRT_IRQ() to ACCESS_ONCE() to better describe
its function.
Signed-off-by: Paul E. McKenney
.
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcupreempt.h | 15 ++
kernel/rcupreempt.c| 102 -
2 files changed, 115 insertions(+), 2 deletions(-)
diff -urpNa -X dontdiff linux-2.6.22-G-boosttorture/include
ly effective in -mm, run in presence
of CPU-hotplug operations.
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
rcutorture.c | 91 +--
1 file changed, 77 insertions(+), 14 deletions(-)
diff -urpNa -X dontdiff linux-2.6
permit RCU read-side
critical sections to be preempted, there is no need to boost the priority
of Classic RCU readers. Boosting the priority of a running process
does not make it run any faster, at least not on any hardware that I am
aware of. ;-)
Signed-off-by: Paul E. McKenney <[EM
Work in progress, not for inclusion.
This patch allows preemptible RCU to tolerate CPU-hotplug operations.
It accomplishes this by maintaining a local copy of a map of online
CPUs, which it accesses under its own lock.
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include
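The approach the patch describes, a private copy of the online-CPU map accessed only under its own lock, can be sketched in userspace C. Names here (`rcu_online_lock`, `rcu_cpu_online`, `NCPUS`) are invented for the example and are not the patch's identifiers.

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative sketch: RCU keeps its own snapshot of which CPUs are
 * online, updated only under rcu_online_lock, so grace-period code
 * never races with the global online map changing underneath it. */
#define NCPUS 8

static pthread_mutex_t rcu_online_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long rcu_online_map;   /* bit i set => CPU i online */

static void rcu_cpu_online(int cpu)
{
    pthread_mutex_lock(&rcu_online_lock);
    rcu_online_map |= 1UL << cpu;
    pthread_mutex_unlock(&rcu_online_lock);
}

static void rcu_cpu_offline(int cpu)
{
    pthread_mutex_lock(&rcu_online_lock);
    rcu_online_map &= ~(1UL << cpu);
    pthread_mutex_unlock(&rcu_online_lock);
}

static bool rcu_cpu_is_online(int cpu)
{
    pthread_mutex_lock(&rcu_online_lock);
    bool on = (rcu_online_map & (1UL << cpu)) != 0;
    pthread_mutex_unlock(&rcu_online_lock);
    return on;
}
```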
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]> (for RCU_SOFTIRQ)
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcuclassic.h | 79 +
include/linux/rcupdate.h | 30 --
include/linux/rcupreempt.h | 27 ++--
with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright IBM Corporation, 2006
+ *
+ * Authors: Paul E. McKenney <[EMAIL PROTECTED]>
+ * With thanks to Esben Nielsen, Bill Huey, and Ingo Molnar
Work in progress, not for inclusion.
Fix rcu_barrier() to work properly in preemptive kernel environment.
Also, the ordering of callbacks must be preserved while moving
callbacks to another CPU during CPU hotplug.
Signed-off-by: Dipankar Sarma <[EMAIL PROTECTED]>
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
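The ordering requirement can be made concrete with a small sketch. RCU callback lists in the kernel are singly linked with a tail pointer; appending the dying CPU's list at the destination's tail preserves invocation order. The struct and function names below are made up for illustration, not the patch's.

```c
#include <stddef.h>

/* Minimal model of an ordered callback list with a tail pointer. */
struct cb {
    struct cb *next;
};

struct cb_list {
    struct cb *head;
    struct cb **tail;   /* points at the terminating next pointer */
};

static void cb_list_init(struct cb_list *l)
{
    l->head = NULL;
    l->tail = &l->head;
}

static void cb_enqueue(struct cb_list *l, struct cb *c)
{
    c->next = NULL;
    *l->tail = c;
    l->tail = &c->next;
}

/* Append all of src's callbacks to dst's tail: both internal orders are
 * preserved, which is the property the fix insists on for hotplug. */
static void cb_splice_tail(struct cb_list *dst, struct cb_list *src)
{
    if (src->head == NULL)
        return;
    *dst->tail = src->head;
    dst->tail = src->tail;
    cb_list_init(src);
}
```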
Signed-off-by: Dipankar Sarma <[EMAIL PROTECTED]>
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcuclassic.h | 149 +++
include/linux/rcupdate.h | 151 +++-
kernel/Makefile|2
kernel/rcuclassic.c
Work in progress, still not for inclusion. But code now complete!
This is a respin of the following prior posting:
http://lkml.org/lkml/2007/9/5/268
This release adds an additional patch that adds fixes to comments and RCU
documentation, along with one macro being renamed. The rcutorture patch
On Thu, Sep 06, 2007 at 12:52:00PM -0500, Clark Williams wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Paul E. McKenney wrote:
> > On Fri, Jun 08, 2007 at 03:43:48PM -0400, Steven Rostedt wrote:
> >> On Fri, 2007-06-08 at 12:36 -0700, Paul E. McKenney
.
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcupreempt.h | 15 ++
kernel/rcupreempt.c| 102 -
2 files changed, 115 insertions(+), 2 deletions(-)
diff -urpNa -X dontdiff linux-2.6.22-G-boosttorture/include
extremely long
time periods, increasing the probability of their being preempted and
thus needing priority boosting. The fact that rcutorture's "nreaders"
module parameter defaults to twice the number of CPUs helps ensure lots
of the needed preemption.
Signed-off-by: Paul E. M
permit RCU read-side
critical sections to be preempted, there is no need to boost the priority
of Classic RCU readers. Boosting the priority of a running process
does not make it run any faster, at least not on any hardware that I am
aware of. ;-)
Signed-off-by: Paul E. McKenney <[EM
Work in progress, not for inclusion.
This patch allows preemptible RCU to tolerate CPU-hotplug operations.
It accomplishes this by maintaining a local copy of a map of online
CPUs, which it accesses under its own lock.
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include
/rcuclassic.c might be dealt with. ;-) At current writing,
Gautham Shenoy's most recent CPU-hotplug fixes seem likely to obsolete
this patch (which would be a very good thing indeed!).
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]> (for RCU_SOFTIRQ)
Signed-off-by: Paul E. McKenney <[E
with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright IBM Corporation, 2006
+ *
+ * Authors: Paul E. McKenney <[EMAIL PROTECTED]>
+ * With thanks to Esben Nielsen, Bill Huey, and Ingo Molnar
Work in progress, not for inclusion.
Fix rcu_barrier() to work properly in preemptive kernel environment.
Also, the ordering of callbacks must be preserved while moving
callbacks to another CPU during CPU hotplug.
Signed-off-by: Dipankar Sarma <[EMAIL PROTECTED]>
Signed-off-by: Paul E. Mc
Signed-off-by: Dipankar Sarma <[EMAIL PROTECTED]>
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcuclassic.h | 149 +++
include/linux/rcupdate.h | 151 +++-
kernel/Makefile|2
kernel/rcuclassic.c
Work in progress, still not for inclusion.
This is a respin of the following prior postings:
http://lkml.org/lkml/2007/8/7/276 (the four initial preemptible RCU patches)
http://lkml.org/lkml/2007/8/17/262 (hotplug CPU for preemptible RCU)
http://lkml.org/lkml/2007/8/22/348 (RCU priority boosting)
On Fri, Aug 24, 2007 at 01:51:21PM +0530, Gautham R Shenoy wrote:
> On Thu, Aug 23, 2007 at 08:55:26AM -0700, Paul E. McKenney wrote:
> > > Even if we use another cpumask_t, whenever a cpu goes down or comes up,
> > > that will be reflected in this map, no? So what's the
On Thu, Aug 23, 2007 at 07:52:11PM +0530, Gautham R Shenoy wrote:
> On Thu, Aug 23, 2007 at 06:15:01AM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 23, 2007 at 03:44:44PM +0530, Gautham R Shenoy wrote:
> > > On Thu, Aug 23, 2007 at 01:54:56AM -0700, Paul E. McKenney wrote:
>
On Thu, Aug 23, 2007 at 03:44:44PM +0530, Gautham R Shenoy wrote:
> On Thu, Aug 23, 2007 at 01:54:56AM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 23, 2007 at 09:56:39AM +0530, Gautham R Shenoy wrote:
> > >
> > > I feel we should still be able to use for_each_onlin
On Thu, Aug 23, 2007 at 09:56:39AM +0530, Gautham R Shenoy wrote:
> Hi Paul,
> On Wed, Aug 22, 2007 at 12:02:54PM -0700, Paul E. McKenney wrote:
> > +/*
> > + * Print out RCU booster task statistics at the specified interval.
> > + */
> > +static void
On Wed, Aug 22, 2007 at 02:41:54PM -0700, Andrew Morton wrote:
> On Wed, 22 Aug 2007 14:22:16 -0700
> "Paul E. McKenney" <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Aug 22, 2007 at 12:43:40PM -0700, Andrew Morton wrote:
> > > On Wed, 22 Aug 2007 12:02:5
On Wed, Aug 22, 2007 at 12:43:40PM -0700, Andrew Morton wrote:
> On Wed, 22 Aug 2007 12:02:54 -0700
> "Paul E. McKenney" <[EMAIL PROTECTED]> wrote:
>
> > Hello!
> >
> > This patch is a forward-port of RCU priority boosting (described in
> > http:/
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
rcutorture.c | 90 +--
1 file changed, 76 insertions(+), 14 deletions(-)
diff -urpNa -X dontdiff linux-2.6.22-f-boost/kernel/rcutorture.c
linux-2.6.22-g-boosttorture/kernel/rcutor
rcutorture on x86_64 and POWER, so OK for experimentation but
not ready for inclusion.
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/init_task.h | 12 +
include/linux/rcupdate.h | 13 +
include/linux/rcupreempt.h | 20 +
include/linux/sched.h | 16 +
init/
On Wed, Aug 08, 2007 at 11:10:32AM +0200, John Sigler wrote:
> [ Recipients list trimmed ]
>
> Paul E. McKenney wrote:
>
> >This patch was developed as a part of the -rt kernel development and
> >meant to provide better latencies when read-side critical sections
On Tue, Aug 07, 2007 at 09:18:29PM +0200, Peter Zijlstra wrote:
> On Tue, 2007-08-07 at 11:48 -0700, Paul E. McKenney wrote:
> > This patch implements a new version of RCU which allows its read-side
> > critical sections to be preempted. It uses a set of counter pairs
> > to k
On Wed, Aug 08, 2007 at 12:44:30AM +0530, Dipankar Sarma wrote:
> On Tue, Aug 07, 2007 at 11:52:26AM -0700, Paul E. McKenney wrote:
> > The combination of CPU hotplug and PREEMPT_RCU has resulted in deadlocks
> > due to the migration-based implementation of synchronize_sched() i
TED]> (for RCU_SOFTIRQ)
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcuclassic.h | 78 +-
include/linux/rcupdate.h | 30 --
include/linux/rcupreempt.h | 27 ++---
kernel/Makefile|
with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright IBM Corporation, 2006
+ *
+ * Authors: Paul E. McKenney <[EMAIL PROTECTED]>
+ * With thanks to Esben Nielsen, Bill Huey, and Ingo Molnar
+ * for pushing me away
Fix rcu_barrier() to work properly in preemptive kernel environment.
Also, the ordering of callbacks must be preserved while moving
callbacks to another CPU during CPU hotplug.
Signed-off-by: Dipankar Sarma <[EMAIL PROTECTED]>
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
TED]>
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcuclassic.h | 149
include/linux/rcupdate.h | 151 +++-
kernel/Makefile|2
kernel/rcuclassic.c| 558 +
kerne
Hello!
This patchset is an update of that posted by Dipankar last January
(http://lkml.org/lkml/2007/1/15/133). This is work in progress, not yet
ready for inclusion. It passes rcutorture on i386, x86_64, and ppc64
boxes as well as kernbench, so should be safe for experimentation. As
with Dipankar
On Mon, Aug 06, 2007 at 10:55:44AM +0200, John Sigler wrote:
> John Sigler wrote:
>
> >I wrote a Linux app where I need high-resolution timers. I went all the
> >way and installed the -rt patch, which includes the -hrt patches, as far
> >as I understand.
> >
> >Since I could not afford to change
On Sun, Aug 05, 2007 at 07:53:10PM +0200, Ingo Molnar wrote:
>
> * Steven Rostedt <[EMAIL PROTECTED]> wrote:
>
> > Paul and Ingo,
> >
> > Should we just remove the upper limit check, or is something like this
> > patch sound?
>
> i've changed the limit to 30 (the same depth limit is used by lo
On Sun, Aug 05, 2007 at 10:24:15AM -0400, Steven Rostedt wrote:
>
> --
>
> On Sun, 5 Aug 2007, Ingo Molnar wrote:
>
> >
> > * Steven Rostedt <[EMAIL PROTECTED]> wrote:
> >
> > > > I don't have time to look further now, and it's something that isn't
> > > > easily reproducible (Well, it happened
suffix added to
keep the linker happy).
If this patch does turn out to be the right approach, the #ifdefs in
kernel/rcuclassic.c will be dealt with. ;-)
Lightly tested only on x86 machines, bugs no doubt remain.
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcucla
.
Unfortunately, get_random_bytes() ends up acquiring normal spinlocks,
which can block in -rt, resulting in very large numbers of "scheduling
while atomic" messages.
This patch takes a very crude approach, simply substituting the time
of day for get_random_bytes().
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
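The "very crude approach" described above can be illustrated in userspace C: seed a cheap mixing function from the time of day instead of a real entropy source. Function names (`crude_prng_fill`, `crude_random_bytes`) are invented for the example; the constants are a well-known LCG multiplier pair. Fine for spreading out rcutorture delays, useless for cryptography.

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/time.h>

/* Deterministic filler: expand a 64-bit seed into len bytes via a
 * linear congruential generator.  Quality is poor, but rcutorture only
 * needs its delays decorrelated, not secure randomness. */
static void crude_prng_fill(uint64_t seed, void *buf, size_t len)
{
    unsigned char *p = buf;

    for (size_t i = 0; i < len; i++) {
        seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
        p[i] = (unsigned char)(seed >> 33);  /* take high-ish bits */
    }
}

/* The crude substitute itself: time of day in place of get_random_bytes(),
 * so no spinlocks are acquired and nothing can block under -rt. */
static void crude_random_bytes(void *buf, size_t len)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    crude_prng_fill((uint64_t)tv.tv_sec ^ (uint64_t)tv.tv_usec, buf, len);
}
```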
On Sun, Jul 29, 2007 at 07:45:35PM -0700, Daniel Walker wrote:
> Simple WARN_ON to catch any underflow in rcu_read_lock_nesting.
>
> Signed-off-by: Daniel Walker <[EMAIL PROTECTED]>
Acked-by: Paul E. McKenney <[EMAIL PROTECTED]>
> ---
> kernel/rcupreempt.c |6
On Mon, Jul 23, 2007 at 06:37:19AM -0700, Daniel Walker wrote:
> On Sun, 2007-07-22 at 20:13 -0700, Paul E. McKenney wrote:
> > On Sun, Jul 22, 2007 at 10:22:37AM -0700, Daniel Walker wrote:
> > >
> > > Strange rcu_read_unlock() which causes an imbalance, and boot hang
On Sun, Jul 22, 2007 at 10:22:37AM -0700, Daniel Walker wrote:
>
> Strange rcu_read_unlock() which causes an imbalance, and boot hang.. I
> didn't notice a reason for it, and removing it allows my system to make
> progress.
>
> This should go into the preempt-realtime-sched.patch
Strange. I have
On Wed, Jul 18, 2007 at 09:18:52AM +0200, Ingo Molnar wrote:
>
> * Fernando Lopez-Lezcano <[EMAIL PROTECTED]> wrote:
>
> > > does lockdep pinpoint anything?
> >
> > Lots of stuff, and at the end the lock report for the problem.
> > Hopefully some of this will help... I have attached the whole b
On Thu, Jul 12, 2007 at 02:09:37AM +0200, Ingo Molnar wrote:
>
> * Paul E. McKenney <[EMAIL PROTECTED]> wrote:
>
> > Hello!
> >
> > Just work in progress, not recommended for inclusion. Seems stable
> > under rigorous rcutorture testing, so should be
necessarily in this order.
Thoughts?
Thanx, Paul
Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---
include/linux/rcuclassic.h |3
include/linux/rcupreempt.h |2
include/linux/rcupreempt_trace.h | 36 +
include