Re: [PATCH v2] blk-cgroup: Use cond_resched() when destroy blkgs

2021-01-27 Thread Jens Axboe
On 1/27/21 8:49 PM, Baolin Wang wrote:
> 
> 
> On 2021/1/28 11:41, Jens Axboe wrote:
>> On 1/27/21 8:22 PM, Baolin Wang wrote:
>>> On a !PREEMPT kernel, we can hit the below softlockup when doing
>>> stress testing that repeatedly creates and destroys block cgroups.
>>> The reason is that it may take a long time to acquire the queue's
>>> lock in the loop of blkcg_destroy_blkgs(), or the system can
>>> accumulate a huge number of blkgs in pathological cases. We can add
>>> a need_resched() check on each loop iteration, and release the locks
>>> and do cond_resched() if true, to avoid this issue, since
>>> blkcg_destroy_blkgs() is not called from atomic contexts.
>>>
>>> [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
>>> [ 4757.010698] Call trace:
>>> [ 4757.010700]  blkcg_destroy_blkgs+0x68/0x150
>>> [ 4757.010701]  cgwb_release_workfn+0x104/0x158
>>> [ 4757.010702]  process_one_work+0x1bc/0x3f0
>>> [ 4757.010704]  worker_thread+0x164/0x468
>>> [ 4757.010705]  kthread+0x108/0x138
>>
>> Kind of ugly with the two clauses for dropping the blkcg lock, one
>> being a cpu_relax() and the other a resched. How about something
>> like this:
>>
>>
>> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
>> index 031114d454a6..4221a1539391 100644
>> --- a/block/blk-cgroup.c
>> +++ b/block/blk-cgroup.c
>> @@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
>>   */
>>  void blkcg_destroy_blkgs(struct blkcg *blkcg)
>>  {
>> +	might_sleep();
>> +
>>  	spin_lock_irq(&blkcg->lock);
>>  
>>  	while (!hlist_empty(&blkcg->blkg_list)) {
>> @@ -1023,14 +1025,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
>>  						struct blkcg_gq, blkcg_node);
>>  		struct request_queue *q = blkg->q;
>>  
>> -		if (spin_trylock(&q->queue_lock)) {
>> -			blkg_destroy(blkg);
>> -			spin_unlock(&q->queue_lock);
>> -		} else {
>> +		if (need_resched() || !spin_trylock(&q->queue_lock)) {
>> +			/*
>> +			 * Given that the system can accumulate a huge number
>> +			 * of blkgs in pathological cases, check to see if we
>> +			 * need to reschedule to avoid softlockup.
>> +			 */
>>  			spin_unlock_irq(&blkcg->lock);
>> -			cpu_relax();
>> +			cond_resched();
>>  			spin_lock_irq(&blkcg->lock);
>> +			continue;
>>  		}
>> +
>> +		blkg_destroy(blkg);
>> +		spin_unlock(&q->queue_lock);
>>  	}
>>  
>>  	spin_unlock_irq(&blkcg->lock);
>>
> 
> Looks better to me. Do I need to resend with your suggestion? Thanks.

Probably best, gives Tejun another chance to sign off on it :-)


-- 
Jens Axboe



Re: [PATCH v2] blk-cgroup: Use cond_resched() when destroy blkgs

2021-01-27 Thread Jens Axboe
On 1/27/21 8:22 PM, Baolin Wang wrote:
> On a !PREEMPT kernel, we can hit the below softlockup when doing
> stress testing that repeatedly creates and destroys block cgroups.
> The reason is that it may take a long time to acquire the queue's
> lock in the loop of blkcg_destroy_blkgs(), or the system can
> accumulate a huge number of blkgs in pathological cases. We can add
> a need_resched() check on each loop iteration, and release the locks
> and do cond_resched() if true, to avoid this issue, since
> blkcg_destroy_blkgs() is not called from atomic contexts.
> 
> [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
> [ 4757.010698] Call trace:
> [ 4757.010700]  blkcg_destroy_blkgs+0x68/0x150
> [ 4757.010701]  cgwb_release_workfn+0x104/0x158
> [ 4757.010702]  process_one_work+0x1bc/0x3f0
> [ 4757.010704]  worker_thread+0x164/0x468
> [ 4757.010705]  kthread+0x108/0x138

Kind of ugly with the two clauses for dropping the blkcg lock, one
being a cpu_relax() and the other a resched. How about something
like this:


diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 031114d454a6..4221a1539391 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
  */
 void blkcg_destroy_blkgs(struct blkcg *blkcg)
 {
+	might_sleep();
+
 	spin_lock_irq(&blkcg->lock);
 
 	while (!hlist_empty(&blkcg->blkg_list)) {
@@ -1023,14 +1025,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
 						struct blkcg_gq, blkcg_node);
 		struct request_queue *q = blkg->q;
 
-		if (spin_trylock(&q->queue_lock)) {
-			blkg_destroy(blkg);
-			spin_unlock(&q->queue_lock);
-		} else {
+		if (need_resched() || !spin_trylock(&q->queue_lock)) {
+			/*
+			 * Given that the system can accumulate a huge number
+			 * of blkgs in pathological cases, check to see if we
+			 * need to reschedule to avoid softlockup.
+			 */
 			spin_unlock_irq(&blkcg->lock);
-			cpu_relax();
+			cond_resched();
 			spin_lock_irq(&blkcg->lock);
+			continue;
 		}
+
+		blkg_destroy(blkg);
+		spin_unlock(&q->queue_lock);
 	}
 
 	spin_unlock_irq(&blkcg->lock);

-- 
Jens Axboe
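
For reference, here is how the destroy loop reads with the suggestion
above applied, reconstructed as a plain function from the diff's hunks
(a sketch, not a verbatim copy of block/blk-cgroup.c; the surrounding
declarations are assumed unchanged):

/*
 * Sketch of blkcg_destroy_blkgs() with the change above applied,
 * reconstructed from the diff hunks; not a verbatim copy of the tree.
 */
void blkcg_destroy_blkgs(struct blkcg *blkcg)
{
	might_sleep();

	spin_lock_irq(&blkcg->lock);

	while (!hlist_empty(&blkcg->blkg_list)) {
		struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
						    struct blkcg_gq, blkcg_node);
		struct request_queue *q = blkg->q;

		if (need_resched() || !spin_trylock(&q->queue_lock)) {
			/*
			 * A reschedule is pending or the queue lock is
			 * contended: drop the blkcg lock, give the scheduler
			 * (and the lock holder) a chance to run, then retry
			 * the same blkg.
			 */
			spin_unlock_irq(&blkcg->lock);
			cond_resched();
			spin_lock_irq(&blkcg->lock);
			continue;
		}

		blkg_destroy(blkg);
		spin_unlock(&q->queue_lock);
	}

	spin_unlock_irq(&blkcg->lock);
}

Folding need_resched() into the trylock condition leaves a single
unlock/relock path, and cond_resched() also covers the lock-contention
case that previously spun with cpu_relax().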



Re: [PATCH v2] blk-cgroup: Use cond_resched() when destroy blkgs

2021-01-27 Thread Baolin Wang




On 2021/1/28 11:41, Jens Axboe wrote:
> On 1/27/21 8:22 PM, Baolin Wang wrote:
>> On a !PREEMPT kernel, we can hit the below softlockup when doing
>> stress testing that repeatedly creates and destroys block cgroups.
>> The reason is that it may take a long time to acquire the queue's
>> lock in the loop of blkcg_destroy_blkgs(), or the system can
>> accumulate a huge number of blkgs in pathological cases. We can add
>> a need_resched() check on each loop iteration, and release the locks
>> and do cond_resched() if true, to avoid this issue, since
>> blkcg_destroy_blkgs() is not called from atomic contexts.
>>
>> [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
>> [ 4757.010698] Call trace:
>> [ 4757.010700]  blkcg_destroy_blkgs+0x68/0x150
>> [ 4757.010701]  cgwb_release_workfn+0x104/0x158
>> [ 4757.010702]  process_one_work+0x1bc/0x3f0
>> [ 4757.010704]  worker_thread+0x164/0x468
>> [ 4757.010705]  kthread+0x108/0x138
>
> Kind of ugly with the two clauses for dropping the blkcg lock, one
> being a cpu_relax() and the other a resched. How about something
> like this:
>
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 031114d454a6..4221a1539391 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
>   */
>  void blkcg_destroy_blkgs(struct blkcg *blkcg)
>  {
> +	might_sleep();
> +
>  	spin_lock_irq(&blkcg->lock);
>  
>  	while (!hlist_empty(&blkcg->blkg_list)) {
> @@ -1023,14 +1025,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
>  						struct blkcg_gq, blkcg_node);
>  		struct request_queue *q = blkg->q;
>  
> -		if (spin_trylock(&q->queue_lock)) {
> -			blkg_destroy(blkg);
> -			spin_unlock(&q->queue_lock);
> -		} else {
> +		if (need_resched() || !spin_trylock(&q->queue_lock)) {
> +			/*
> +			 * Given that the system can accumulate a huge number
> +			 * of blkgs in pathological cases, check to see if we
> +			 * need to reschedule to avoid softlockup.
> +			 */
>  			spin_unlock_irq(&blkcg->lock);
> -			cpu_relax();
> +			cond_resched();
>  			spin_lock_irq(&blkcg->lock);
> +			continue;
>  		}
> +
> +		blkg_destroy(blkg);
> +		spin_unlock(&q->queue_lock);
>  	}
>  
>  	spin_unlock_irq(&blkcg->lock);



Looks better to me. Do I need to resend with your suggestion? Thanks.


Re: [PATCH v2] blk-cgroup: Use cond_resched() when destroy blkgs

2021-01-27 Thread Tejun Heo
On Thu, Jan 28, 2021 at 11:22:00AM +0800, Baolin Wang wrote:
> On a !PREEMPT kernel, we can hit the below softlockup when doing
> stress testing that repeatedly creates and destroys block cgroups.
> The reason is that it may take a long time to acquire the queue's
> lock in the loop of blkcg_destroy_blkgs(), or the system can
> accumulate a huge number of blkgs in pathological cases. We can add
> a need_resched() check on each loop iteration, and release the locks
> and do cond_resched() if true, to avoid this issue, since
> blkcg_destroy_blkgs() is not called from atomic contexts.
> 
> [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
> [ 4757.010698] Call trace:
> [ 4757.010700]  blkcg_destroy_blkgs+0x68/0x150
> [ 4757.010701]  cgwb_release_workfn+0x104/0x158
> [ 4757.010702]  process_one_work+0x1bc/0x3f0
> [ 4757.010704]  worker_thread+0x164/0x468
> [ 4757.010705]  kthread+0x108/0x138
> 
> Suggested-by: Tejun Heo 
> Signed-off-by: Baolin Wang 

Acked-by: Tejun Heo 

Thanks.

-- 
tejun


[PATCH v2] blk-cgroup: Use cond_resched() when destroy blkgs

2021-01-27 Thread Baolin Wang
On a !PREEMPT kernel, we can hit the below softlockup when doing
stress testing that repeatedly creates and destroys block cgroups.
The reason is that it may take a long time to acquire the queue's
lock in the loop of blkcg_destroy_blkgs(), or the system can
accumulate a huge number of blkgs in pathological cases. We can add
a need_resched() check on each loop iteration, and release the locks
and do cond_resched() if true, to avoid this issue, since
blkcg_destroy_blkgs() is not called from atomic contexts.

[ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
[ 4757.010698] Call trace:
[ 4757.010700]  blkcg_destroy_blkgs+0x68/0x150
[ 4757.010701]  cgwb_release_workfn+0x104/0x158
[ 4757.010702]  process_one_work+0x1bc/0x3f0
[ 4757.010704]  worker_thread+0x164/0x468
[ 4757.010705]  kthread+0x108/0x138

Suggested-by: Tejun Heo 
Signed-off-by: Baolin Wang 
---
Changes from v1:
 - Add might_sleep() in blkcg_destroy_blkgs().
 - Add an explicit need_resched() check before releasing the lock.
 - Add some comments.
---
 block/blk-cgroup.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 3465d6e..94eeed7 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
  */
 void blkcg_destroy_blkgs(struct blkcg *blkcg)
 {
+	might_sleep();
+
 	spin_lock_irq(&blkcg->lock);
 
 	while (!hlist_empty(&blkcg->blkg_list)) {
@@ -1031,6 +1033,17 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
 			cpu_relax();
 			spin_lock_irq(&blkcg->lock);
 		}
+
+		/*
+		 * Given that the system can accumulate a huge number
+		 * of blkgs in pathological cases, check to see if we
+		 * need to reschedule to avoid softlockup.
+		 */
+		if (need_resched()) {
+			spin_unlock_irq(&blkcg->lock);
+			cond_resched();
+			spin_lock_irq(&blkcg->lock);
+		}
 	}
 
 	spin_unlock_irq(&blkcg->lock);
-- 
1.8.3.1
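
For context on the reproducer, the stress test described in the commit
message can be approximated from userspace by repeatedly creating and
removing a block cgroup directory. A minimal sketch, assuming a cgroup
v1 blkio hierarchy mounted at /sys/fs/cgroup/blkio (the path, directory
name, and iteration count are illustrative, not taken from the patch):

/*
 * Minimal userspace sketch of the create/destroy stress described in
 * the commit message. Assumes a cgroup v1 blkio hierarchy mounted at
 * /sys/fs/cgroup/blkio; path, name, and iteration count are
 * illustrative, not from the patch.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/blkio/softlockup-test";
	int i;

	for (i = 0; i < 1000000; i++) {
		/* Creating the directory instantiates a new blkcg. */
		if (mkdir(path, 0755) && errno != EEXIST) {
			perror("mkdir");
			return 1;
		}
		/*
		 * Removing it offlines the css; per the stack trace above,
		 * blkcg_destroy_blkgs() is then reached from the cgwb
		 * release work (cgwb_release_workfn).
		 */
		if (rmdir(path)) {
			perror("rmdir");
			return 1;
		}
	}
	return 0;
}

Since blkgs are created lazily, driving some I/O inside the cgroup
between the mkdir and rmdir makes it more likely that there are blkgs
to destroy on each iteration.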