Re: [Cluster-devel] [PATCH] gfs2: Fix loop in gfs2_rbm_find (2)

2019-01-28 Thread Sasha Levin
Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: 2d29f6b96d8f gfs2: Fix loop in gfs2_rbm_find.

The bot has tested the following trees: v4.20.5, v4.19.18, v4.14.96, v4.9.153, 
v4.4.172, v3.18.133.

v4.20.5: Build OK!
v4.19.18: Failed to apply! Possible dependencies:
281b4952d185 ("gfs2: Rename bitmap.bi_{len => bytes}")
e54c78a27fcd ("gfs2: Use fs_* functions instead of pr_* function where we can")

v4.14.96: Failed to apply! Possible dependencies:
281b4952d185 ("gfs2: Rename bitmap.bi_{len => bytes}")
dc8fbb03dcd6 ("GFS2: gfs2_free_extlen can return an extent that is too long")
e54c78a27fcd ("gfs2: Use fs_* functions instead of pr_* function where we can")

v4.9.153: Failed to apply! Possible dependencies:
0d1c7ae9d849 ("GFS2: Prevent BUG from occurring when normal Withdraws occur")
281b4952d185 ("gfs2: Rename bitmap.bi_{len => bytes}")
9862ca056e65 ("GFS2: Switch tr_touched to flag in transaction")
dc8fbb03dcd6 ("GFS2: gfs2_free_extlen can return an extent that is too long")
e54c78a27fcd ("gfs2: Use fs_* functions instead of pr_* function where we can")
ed17545d01e4 ("GFS2: Allow glocks to be unlocked after withdraw")

v4.4.172: Failed to apply! Possible dependencies:
0d1c7ae9d849 ("GFS2: Prevent BUG from occurring when normal Withdraws occur")
281b4952d185 ("gfs2: Rename bitmap.bi_{len => bytes}")
3e11e5304150 ("GFS2: ignore unlock failures after withdraw")
471f3db2786b ("gfs2: change gfs2 readdir cookie")
9862ca056e65 ("GFS2: Switch tr_touched to flag in transaction")
dc8fbb03dcd6 ("GFS2: gfs2_free_extlen can return an extent that is too long")
e54c78a27fcd ("gfs2: Use fs_* functions instead of pr_* function where we can")
ed17545d01e4 ("GFS2: Allow glocks to be unlocked after withdraw")

v3.18.133: Failed to apply! Possible dependencies:
0d1c7ae9d849 ("GFS2: Prevent BUG from occurring when normal Withdraws occur")
281b4952d185 ("gfs2: Rename bitmap.bi_{len => bytes}")
2e60d7683c8d ("GFS2: update freeze code to use freeze/thaw_super on all nodes")
3cdcf63ed2d1 ("GFS2: use kvfree() instead of open-coding it")
3e11e5304150 ("GFS2: ignore unlock failures after withdraw")
471f3db2786b ("gfs2: change gfs2 readdir cookie")
9862ca056e65 ("GFS2: Switch tr_touched to flag in transaction")
a3e3213676d8 ("gfs2: fix shadow warning in gfs2_rbm_find()")
dc8fbb03dcd6 ("GFS2: gfs2_free_extlen can return an extent that is too long")
e54c78a27fcd ("gfs2: Use fs_* functions instead of pr_* function where we can")
ed17545d01e4 ("GFS2: Allow glocks to be unlocked after withdraw")


How should we proceed with this patch?

--
Thanks,
Sasha



[Cluster-devel] [PATCH AUTOSEL 3.18 05/61] dlm: Don't swamp the CPU with callbacks queued during recovery

2019-01-28 Thread Sasha Levin
From: Bob Peterson 

[ Upstream commit 216f0efd19b9cc32207934fd1b87a45f2c4c593e ]

Before this patch, recovery would cause all callbacks to be delayed,
put on a queue, and afterward they were all queued to the callback
work queue. This patch does the same thing, but occasionally takes
a break after 25 of them so it won't swamp the CPU at the expense
of other RT processes like corosync.

Signed-off-by: Bob Peterson 
Signed-off-by: David Teigland 
Signed-off-by: Sasha Levin 
---
 fs/dlm/ast.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
index dcea1e37a1b7..f18619bc2e09 100644
--- a/fs/dlm/ast.c
+++ b/fs/dlm/ast.c
@@ -290,6 +290,8 @@ void dlm_callback_suspend(struct dlm_ls *ls)
                 flush_workqueue(ls->ls_callback_wq);
 }
 
+#define MAX_CB_QUEUE 25
+
 void dlm_callback_resume(struct dlm_ls *ls)
 {
         struct dlm_lkb *lkb, *safe;
@@ -300,15 +302,23 @@ void dlm_callback_resume(struct dlm_ls *ls)
         if (!ls->ls_callback_wq)
                 return;
 
+more:
         mutex_lock(&ls->ls_cb_mutex);
         list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) {
                 list_del_init(&lkb->lkb_cb_list);
                 queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
                 count++;
+                if (count == MAX_CB_QUEUE)
+                        break;
         }
         mutex_unlock(&ls->ls_cb_mutex);
 
         if (count)
                 log_rinfo(ls, "dlm_callback_resume %d", count);
+        if (count == MAX_CB_QUEUE) {
+                count = 0;
+                cond_resched();
+                goto more;
+        }
 }
 
-- 
2.19.1
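
For readers skimming the hunks above, here is a sketch of how dlm_callback_resume() reads with the patch applied. It is reconstructed from the diff; the clear_bit() line and the comments are context and explanation added here, not part of the quoted patch. The idea is simply to requeue the delayed callbacks in batches of MAX_CB_QUEUE and call cond_resched() between batches so other tasks (such as corosync) get CPU time during recovery.

#define MAX_CB_QUEUE 25

void dlm_callback_resume(struct dlm_ls *ls)
{
        struct dlm_lkb *lkb, *safe;
        int count = 0;

        /* Assumed surrounding context: recovery is over, stop delaying. */
        clear_bit(LSFL_CB_DELAY, &ls->ls_flags);

        if (!ls->ls_callback_wq)
                return;

more:
        mutex_lock(&ls->ls_cb_mutex);
        /* Requeue delayed callbacks, at most MAX_CB_QUEUE per pass. */
        list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) {
                list_del_init(&lkb->lkb_cb_list);
                queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
                count++;
                if (count == MAX_CB_QUEUE)
                        break;
        }
        mutex_unlock(&ls->ls_cb_mutex);

        if (count)
                log_rinfo(ls, "dlm_callback_resume %d", count);
        /* Stopped at the batch limit: yield the CPU, then continue. */
        if (count == MAX_CB_QUEUE) {
                count = 0;
                cond_resched();
                goto more;
        }
}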



[Cluster-devel] [PATCH AUTOSEL 4.4 08/80] dlm: Don't swamp the CPU with callbacks queued during recovery

2019-01-28 Thread Sasha Levin
From: Bob Peterson 

[ Upstream commit 216f0efd19b9cc32207934fd1b87a45f2c4c593e ]

Before this patch, recovery would cause all callbacks to be delayed,
put on a queue, and afterward they were all queued to the callback
work queue. This patch does the same thing, but occasionally takes
a break after 25 of them so it won't swamp the CPU at the expense
of other RT processes like corosync.

Signed-off-by: Bob Peterson 
Signed-off-by: David Teigland 
Signed-off-by: Sasha Levin 
---
 fs/dlm/ast.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
index dcea1e37a1b7..f18619bc2e09 100644
--- a/fs/dlm/ast.c
+++ b/fs/dlm/ast.c
@@ -290,6 +290,8 @@ void dlm_callback_suspend(struct dlm_ls *ls)
                 flush_workqueue(ls->ls_callback_wq);
 }
 
+#define MAX_CB_QUEUE 25
+
 void dlm_callback_resume(struct dlm_ls *ls)
 {
         struct dlm_lkb *lkb, *safe;
@@ -300,15 +302,23 @@ void dlm_callback_resume(struct dlm_ls *ls)
         if (!ls->ls_callback_wq)
                 return;
 
+more:
         mutex_lock(&ls->ls_cb_mutex);
         list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) {
                 list_del_init(&lkb->lkb_cb_list);
                 queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
                 count++;
+                if (count == MAX_CB_QUEUE)
+                        break;
         }
         mutex_unlock(&ls->ls_cb_mutex);
 
         if (count)
                 log_rinfo(ls, "dlm_callback_resume %d", count);
+        if (count == MAX_CB_QUEUE) {
+                count = 0;
+                cond_resched();
+                goto more;
+        }
 }
 
-- 
2.19.1



[Cluster-devel] [PATCH AUTOSEL 4.9 009/107] dlm: Don't swamp the CPU with callbacks queued during recovery

2019-01-28 Thread Sasha Levin
From: Bob Peterson 

[ Upstream commit 216f0efd19b9cc32207934fd1b87a45f2c4c593e ]

Before this patch, recovery would cause all callbacks to be delayed,
put on a queue, and afterward they were all queued to the callback
work queue. This patch does the same thing, but occasionally takes
a break after 25 of them so it won't swamp the CPU at the expense
of other RT processes like corosync.

Signed-off-by: Bob Peterson 
Signed-off-by: David Teigland 
Signed-off-by: Sasha Levin 
---
 fs/dlm/ast.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
index dcea1e37a1b7..f18619bc2e09 100644
--- a/fs/dlm/ast.c
+++ b/fs/dlm/ast.c
@@ -290,6 +290,8 @@ void dlm_callback_suspend(struct dlm_ls *ls)
                 flush_workqueue(ls->ls_callback_wq);
 }
 
+#define MAX_CB_QUEUE 25
+
 void dlm_callback_resume(struct dlm_ls *ls)
 {
         struct dlm_lkb *lkb, *safe;
@@ -300,15 +302,23 @@ void dlm_callback_resume(struct dlm_ls *ls)
         if (!ls->ls_callback_wq)
                 return;
 
+more:
         mutex_lock(&ls->ls_cb_mutex);
         list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) {
                 list_del_init(&lkb->lkb_cb_list);
                 queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
                 count++;
+                if (count == MAX_CB_QUEUE)
+                        break;
         }
         mutex_unlock(&ls->ls_cb_mutex);
 
         if (count)
                 log_rinfo(ls, "dlm_callback_resume %d", count);
+        if (count == MAX_CB_QUEUE) {
+                count = 0;
+                cond_resched();
+                goto more;
+        }
 }
 
-- 
2.19.1



[Cluster-devel] [PATCH AUTOSEL 4.14 014/170] dlm: Don't swamp the CPU with callbacks queued during recovery

2019-01-28 Thread Sasha Levin
From: Bob Peterson 

[ Upstream commit 216f0efd19b9cc32207934fd1b87a45f2c4c593e ]

Before this patch, recovery would cause all callbacks to be delayed,
put on a queue, and afterward they were all queued to the callback
work queue. This patch does the same thing, but occasionally takes
a break after 25 of them so it won't swamp the CPU at the expense
of other RT processes like corosync.

Signed-off-by: Bob Peterson 
Signed-off-by: David Teigland 
Signed-off-by: Sasha Levin 
---
 fs/dlm/ast.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
index 07fed838d8fd..15fa4239ae9f 100644
--- a/fs/dlm/ast.c
+++ b/fs/dlm/ast.c
@@ -290,6 +290,8 @@ void dlm_callback_suspend(struct dlm_ls *ls)
                 flush_workqueue(ls->ls_callback_wq);
 }
 
+#define MAX_CB_QUEUE 25
+
 void dlm_callback_resume(struct dlm_ls *ls)
 {
         struct dlm_lkb *lkb, *safe;
@@ -300,15 +302,23 @@ void dlm_callback_resume(struct dlm_ls *ls)
         if (!ls->ls_callback_wq)
                 return;
 
+more:
         mutex_lock(&ls->ls_cb_mutex);
         list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) {
                 list_del_init(&lkb->lkb_cb_list);
                 queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
                 count++;
+                if (count == MAX_CB_QUEUE)
+                        break;
         }
         mutex_unlock(&ls->ls_cb_mutex);
 
         if (count)
                 log_rinfo(ls, "dlm_callback_resume %d", count);
+        if (count == MAX_CB_QUEUE) {
+                count = 0;
+                cond_resched();
+                goto more;
+        }
 }
 
-- 
2.19.1



[Cluster-devel] [PATCH AUTOSEL 4.19 021/258] dlm: Don't swamp the CPU with callbacks queued during recovery

2019-01-28 Thread Sasha Levin
From: Bob Peterson 

[ Upstream commit 216f0efd19b9cc32207934fd1b87a45f2c4c593e ]

Before this patch, recovery would cause all callbacks to be delayed,
put on a queue, and afterward they were all queued to the callback
work queue. This patch does the same thing, but occasionally takes
a break after 25 of them so it won't swamp the CPU at the expense
of other RT processes like corosync.

Signed-off-by: Bob Peterson 
Signed-off-by: David Teigland 
Signed-off-by: Sasha Levin 
---
 fs/dlm/ast.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
index 562fa8c3edff..47ee66d70109 100644
--- a/fs/dlm/ast.c
+++ b/fs/dlm/ast.c
@@ -292,6 +292,8 @@ void dlm_callback_suspend(struct dlm_ls *ls)
                 flush_workqueue(ls->ls_callback_wq);
 }
 
+#define MAX_CB_QUEUE 25
+
 void dlm_callback_resume(struct dlm_ls *ls)
 {
         struct dlm_lkb *lkb, *safe;
@@ -302,15 +304,23 @@ void dlm_callback_resume(struct dlm_ls *ls)
         if (!ls->ls_callback_wq)
                 return;
 
+more:
         mutex_lock(&ls->ls_cb_mutex);
         list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) {
                 list_del_init(&lkb->lkb_cb_list);
                 queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
                 count++;
+                if (count == MAX_CB_QUEUE)
+                        break;
         }
         mutex_unlock(&ls->ls_cb_mutex);
 
         if (count)
                 log_rinfo(ls, "dlm_callback_resume %d", count);
+        if (count == MAX_CB_QUEUE) {
+                count = 0;
+                cond_resched();
+                goto more;
+        }
 }
 
-- 
2.19.1