On Mon, Aug 11, 2014 at 01:57:06PM +0200, Peter Zijlstra wrote:
> On Sun, Aug 10, 2014 at 08:30:48PM -0700, Paul E. McKenney wrote:
> > On Sun, Aug 10, 2014 at 10:14:25AM +0200, Peter Zijlstra wrote:
> >
> > > want want want, I want a damn pony but somehow I'm not getting one. Why
> > > are they getting this?
> >
> > We can only be glad that my daughters' old My Little Pony toys
On Sun, Aug 10, 2014 at 10:12:54AM +0200, Peter Zijlstra wrote:
> > Steven covered this earlier in this thread. One addition might be "For
> > the same reason that event tracing provides the _rcuidle suffix."
>
> I really don't think it's worth the cost.
Entirely untested, but something like the below
On Fri, Aug 08, 2014 at 01:58:26PM -0700, Paul E. McKenney wrote:
>
> > And on that, you probably should change rcu_sched_rq() to read:
> >
> > this_cpu_inc(rcu_sched_data.passed_quiesce);
> >
> > That avoids touching the per-cpu data offset.
>
> Hmmm... Interrupts are disabled,
No they are not,
On Sat, Aug 09, 2014 at 08:19:20PM +0200, Peter Zijlstra wrote:
> How about we simply assume 'idle' code, as defined by the rcu idle hooks,
> is safe? Why do we want to bend over backwards to cover this?
The thing is, we already have the special rcu trace hooks for tracing
inside this rcu-idle
On Sat, Aug 09, 2014 at 09:01:37AM -0700, Paul E. McKenney wrote:
> > That's so wrong it's not funny. If you need some abortion to deal with
> > NOHZ_FULL then put it under CONFIG_NOHZ_FULL, don't burden the entire
> > world with it.
>
> Peter, the polling approach actually -reduces- the common-case
On Sat, 9 Aug 2014 08:15:14 +0200
Peter Zijlstra wrote:
> As for idle tasks, I'm not sure about those, I think that we should say
> NO to anything that would require waking idle CPUs, push the pain to
> ftrace/kprobes, we should _not_ be waking idle cpus.
I agree, but I haven't had a chance
On Fri, Aug 08, 2014 at 09:13:26PM +0200, Peter Zijlstra wrote:
>
> So I think you can make the entire thing work with
> rcu_note_context_switch().
>
> If we have the sync thing do something like:
>
> for_each_task(t) {
> 	atomic_inc(&rcu_tasks);
> 	atomic_or(&t->rcu_attention, RCU_TASK);
> 	smp_mb__after_atomic();
On Tue, Aug 05, 2014 at 02:55:10PM -0700, Paul E. McKenney wrote:
> +/* Check for nohz_full CPUs executing in userspace. */
> +static void check_no_hz_full_tasks(void)
> +{
> +#ifdef CONFIG_NO_HZ_FULL
> + int cpu;
> + struct task_struct *t;
> +
> + for_each_online_cpu(cpu) {
> +
On 08/04, Paul E. McKenney wrote:
>
> OK, so I checked out my earlier concern about the group leader going away.
> It looks like the group leader now sticks around until all threads in
> the group have exited, which is a nice change from the behavior I was
> (perhaps incorrectly) recalling!
Ah, I
On 08/03, Paul E. McKenney wrote:
>
> On Sun, Aug 03, 2014 at 03:33:18PM +0200, Oleg Nesterov wrote:
> > It seems that you need another global list, a task should be visible on that
> > list until exit_rcu().
>
> As in create another global list that all tasks are added to when created
> and then removed
On 08/03, Paul E. McKenney wrote:
>
> If I understand correctly, your goal is to remove a synchronize_sched()
> worth of latency from the overall RCU-tasks callback latency. Or am I
> still confused?
Yes, exactly. But again, I am not sure this minor optimization makes sense,
mostly I tried to check
On 08/03, Paul E. McKenney wrote:
>
> On Mon, Aug 04, 2014 at 08:37:37AM +0800, Lai Jiangshan wrote:
> > An alternative solution:
> > srcu_read_lock() before exit_notify(), srcu_read_unlock() after the last
> > preempt_disable()
> > in the do_exit, and synchronize_srcu() in rcu_tasks_kthread().
>
On Mon, Aug 04, 2014 at 04:50:44AM -0700, Paul E. McKenney wrote:
> OK, I will bite...
>
> What kinds of tasks are on a runqueue, but neither ->on_cpu nor
> PREEMPT_ACTIVE?
Userspace tasks, they don't necessarily get PREEMPT_ACTIVE when
preempted. Now obviously you're not _that_ interested in
On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
> + rcu_read_lock();
> + for_each_process_thread(g, t) {
> + if (t != current && ACCESS_ONCE(t->on_rq) &&
> + !is_idle_task(t)) {
> + get_task_struct(t);