Paul E. McKenney wrote:
On Thu, Apr 07, 2005 at 05:58:40PM +1000, Nick Piggin wrote:
> OK, thanks for the good explanation. So I'll keep it as is for now,
> and whatever needs cleaning up later can be worked out as it comes
> up.
>
> Looking forward to the split of synchronize_kernel() into ...
Ingo Molnar wrote:
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> > At a minimum i think we need the fix+comment below.
>
> Well, if we say "this is actually RCU", then yes. And we should
> probably change the preempt_{dis|en}ables in other places to
> rcu_read_lock.
>
> OTOH, if we say we just want all running threads to ...
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> 4/5
>
> One of the problems with the multilevel balance-on-fork/exec is that it
> needs to jump through hoops to satisfy sched-domain's locking semantics
> (that is, you may traverse your own domain when not preemptable, and
> you may traverse others' domains when holding their runqueue lock).