Sorry for taking so long to review. So many other things to do :-/
On Fri, 23 Aug 2013 14:26:39 +0800
Lai Jiangshan wrote:
> [PATCH] rcu/rt_mutex: eliminate a kind of deadlock for rcu read site
"rcu read site"?
This is specific to boosting, thus boosting should be in the subject,
perhaps
On Sun, Aug 25, 2013 at 11:19:37PM +0800, Lai Jiangshan wrote:
> Hi, Steven
>
> Any comments about this patch?
For whatever it is worth, it ran without incident for two hours worth
of rcutorture on my P5 test (boosting but no CPU hotplug).
Lai, do you have a specific test for this patch? Your
[PATCH] rcu/rt_mutex: eliminate a kind of deadlock for rcu read site
The current rtmutex's lock->wait_lock does not disable softirqs or irqs, so it
can cause an rcu read-site deadlock when an rcu read-side critical section
overlaps with any softirq-context/irq-context lock.
@L is a spinlock of softirq or irq context.
CPU1
On Thu, 22 Aug 2013 22:23:09 +0800
Lai Jiangshan wrote:
> > By making it an irq-safe lock, we need to disable interrupts every time
> > it is taken, which means the entire pi-chain walk in
> > rt_mutex_adjust_prio_chain() will pretty much be with interrupts
> > disabled.
>
> I didn't catch
On Sat, Aug 10, 2013 at 08:07:15AM -0700, Paul E. McKenney wrote:
> On Sat, Aug 10, 2013 at 11:43:59AM +0800, Lai Jiangshan wrote:
[ . . . ]
> > So I have to narrow the range of suspect locks. Two choices:
> > A) don't call rt_mutex_unlock() from rcu_read_unlock(), only call it
> > from
On Fri, Aug 09, 2013 at 05:31:27PM +0800, Lai Jiangshan wrote:
> On 08/09/2013 04:40 AM, Paul E. McKenney wrote:
> > One problem here -- it may take quite some time for a set_need_resched()
> > to take effect. This is especially a problem for RCU priority boosting,
> > but can also needlessly delay
On Sat, Aug 10, 2013 at 11:43:59AM +0800, Lai Jiangshan wrote:
> Hi, Steven
>
> I was considering rtmutex's lock->wait_lock is a scheduler lock,
> But it is not, and it is just a spinlock of process context.
> I hope you change it to a spinlock of irq context.
rwmutex::wait_lock is irq-safe; it
Hi, Steven
I was considering rtmutex's lock->wait_lock to be a scheduler lock,
but it is not; it is just a spinlock of process context.
I hope you change it to a spinlock of irq context.
1) it makes the rcu read site more deadlock-prone, example:
x is a spinlock of softirq context.
CPU1
Background)
Although all articles declare that the rcu read site is deadlock-immune,
that is not true for rcu-preempt: it will deadlock if the rcu read site
overlaps with a scheduler lock.
Commits ec433f0c, 10f39bb1 and 016a8d5b only partially solve it. The rcu
read site is still not deadlock-immune. And the