Re: [Cluster-devel] [GFS2 PATCH 4/9] gfs2: Force withdraw to replay journals and wait for it to finish

2019-02-15 Thread Steven Whitehouse

Hi,

On 13/02/2019 15:21, Bob Peterson wrote:

When a node withdraws from a file system, it often leaves its journal
in an incomplete state. This is especially true when the withdraw is
caused by I/O errors while writing to the journal. Before this patch, a
withdraw would try to write a "shutdown" record to the journal and tell
dlm it was done with the file system; none of the other nodes were told
about the problem. Later, when the problem was fixed and the withdrawn
node was rebooted, it would discover that its own journal was
incomplete and replay it. However, replaying it at that point is almost
guaranteed to introduce corruption, because the other nodes are likely
to have used the affected resource groups recorded in that journal
since the time of the withdraw. The late replay overwrites any changes
those nodes made in the meantime, through no fault of dlm, which was
instructed during the withdraw to release those resources.

This patch makes file system withdraws visible to the entire cluster.
Withdrawing nodes dequeue their journal glock to allow recovery.

The remaining nodes check all the journals to see if they are
clean or in need of replay. They try to replay dirty journals, but
only the journals of withdrawn nodes will be "not busy" and
therefore available for replay.
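
For illustration only, not code from this patch: a minimal sketch of the
"check every journal and replay the dirty ones" pass described above. Only
gfs2_recover_journal() is an existing entry point; the loop, the function
name and the locking details here are assumptions.

static void replay_withdrawn_journals(struct gfs2_sbd *sdp)
{
	struct gfs2_jdesc *jd;

	list_for_each_entry(jd, &sdp->sd_jindex_list, jd_list) {
		if (jd->jd_jid == sdp->sd_lockstruct.ls_jid)
			continue; /* our own journal is still busy */
		/* Recovery grabs the journal glock with a try lock, so only
		   journals whose owner has withdrawn (or died) are actually
		   replayed; busy journals are skipped quickly. */
		gfs2_recover_journal(jd, true);
	}
}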

Until the journal replay is complete, no I/O-related glocks may be
handed out, to ensure that the replay does not cause the
aforementioned corruption: we cannot allow a journal replay to
overwrite blocks associated with a glock once that glock is held.
Glocks not affected by a withdraw may still be passed around as
normal. A new glops flag, GLOF_OK_AT_WITHDRAW, indicates glocks that
may be passed around freely while a withdraw is taking place.
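
As a sketch of how such a flag might be declared and attached to a glock
type (the bit value and the glops shown here are assumptions, not taken
from this diff):

/* glock.h: a new bit alongside the existing GLOF_* flags (value assumed) */
#define GLOF_OK_AT_WITHDRAW	8

/* glops.c: the nondisk type behind the "live" glock is an obvious user */
const struct gfs2_glock_operations gfs2_nondisk_glops = {
	.go_type  = LM_TYPE_NONDISK,
	.go_flags = GLOF_OK_AT_WITHDRAW,
};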

One such glock is the "live" glock, which is now used to signal a
withdraw. The withdrawing node dequeues the "live" glock and tries to
re-enqueue it in EX mode, which forces all the other nodes to see a
demote request by way of a "1CB" (one callback) try lock. The "live"
glock is never actually granted in EX; the callback is only used to
indicate that a withdraw has occurred.
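
A simplified sketch of that signalling step, assuming it reuses the
existing sd_live_gh holder (the function name is illustrative, not the
patch's):

static void signal_withdraw_to_cluster(struct gfs2_sbd *sdp)
{
	/* Give up our shared hold on the "live" glock... */
	gfs2_glock_dq(&sdp->sd_live_gh);

	/* ...then ask for it in EX with a one-callback try lock.  The
	   request is expected to fail; its only purpose is to make every
	   other node see a demote request and notice the withdraw. */
	gfs2_holder_reinit(LM_ST_EXCLUSIVE,
			   LM_FLAG_TRY_1CB | LM_FLAG_NOEXP | GL_EXACT,
			   &sdp->sd_live_gh);
	gfs2_glock_nq(&sdp->sd_live_gh);
}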

Note that all nodes in the cluster must wait for the recovering node
to finish replaying the withdrawing node's journal before continuing.
To this end, each node checks that the journals are clean multiple
times in a retry loop.
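
The wait could look roughly like the loop below; journal_needs_replay() is
a hypothetical stand-in for the per-journal cleanliness check and does not
exist as such:

static void wait_for_clean_journals(struct gfs2_sbd *sdp)
{
	struct gfs2_jdesc *jd;
	bool retry;

	do {
		retry = false;
		list_for_each_entry(jd, &sdp->sd_jindex_list, jd_list) {
			if (journal_needs_replay(sdp, jd)) { /* hypothetical */
				retry = true;
				break;
			}
		}
		if (retry)
			msleep(1000); /* let the recovering node finish */
	} while (retry);
}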

Signed-off-by: Bob Peterson 


This new algorithm seems rather complicated, so I think it will need a
lot of careful testing. It would be good if there were some way to
simplify things a bit here.




---
  fs/gfs2/glock.c  |  35 --
  fs/gfs2/glock.h  |   1 +
  fs/gfs2/glops.c  |  61 +-
  fs/gfs2/incore.h |   6 ++
  fs/gfs2/lock_dlm.c   |  32 ++
  fs/gfs2/log.c|  22 +--
  fs/gfs2/meta_io.c|   2 +-
  fs/gfs2/ops_fstype.c |  48 ++
  fs/gfs2/super.c  |  24 ---
  fs/gfs2/super.h  |   1 +
  fs/gfs2/util.c   | 148 ++-
  fs/gfs2/util.h   |   3 +
  12 files changed, 315 insertions(+), 68 deletions(-)

diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index c6d6e478f5e3..20fb6cdf7829 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -242,7 +242,8 @@ static void __gfs2_glock_put(struct gfs2_glock *gl)
gfs2_glock_remove_from_lru(gl);
spin_unlock(&gl->gl_lockref.lock);
GLOCK_BUG_ON(gl, !list_empty(&gl->gl_holders));
-   GLOCK_BUG_ON(gl, mapping && mapping->nrpages);
+   GLOCK_BUG_ON(gl, mapping && mapping->nrpages &&
+!test_bit(SDF_SHUTDOWN, &sdp->sd_flags));
trace_gfs2_glock_put(gl);
sdp->sd_lockstruct.ls_ops->lm_put_lock(gl);
  }
@@ -543,6 +544,8 @@ __acquires(&gl->gl_lockref.lock)
int ret;
  
	if (unlikely(withdrawn(sdp)) &&
+   !(glops->go_flags & GLOF_OK_AT_WITHDRAW) &&
+   (gh && !(LM_FLAG_NOEXP & gh->gh_flags)) &&
target != LM_ST_UNLOCKED)
return;
lck_flags &= (LM_FLAG_TRY | LM_FLAG_TRY_1CB | LM_FLAG_NOEXP |
@@ -561,9 +564,10 @@ __acquires(&gl->gl_lockref.lock)
(lck_flags & (LM_FLAG_TRY|LM_FLAG_TRY_1CB)))
clear_bit(GLF_BLOCKING, &gl->gl_flags);
spin_unlock(&gl->gl_lockref.lock);
-   if (glops->go_sync)
+   if (glops->go_sync && !test_bit(SDF_SHUTDOWN, &sdp->sd_flags))
glops->go_sync(gl);
-   if (test_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags))
+   if (test_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags) &&
+   !test_bit(SDF_SHUTDOWN, &sdp->sd_flags))
glops->go_inval(gl, target == LM_ST_DEFERRED ? 0 : 
DIO_METADATA);
clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags);
  
@@ -1091,7 +1095,8 @@ int gfs2_glock_nq(struct gfs2_holder *gh)
	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
	int error = 0;
 
-	if (unlikely(withdrawn(sdp)))
+   if (unlikely(withdrawn(sdp) && !(LM

[Cluster-devel] [GFS2 PATCH 4/9] gfs2: Force withdraw to replay journals and wait for it to finish

2019-02-13 Thread Bob Peterson
When a node withdraws from a file system, it often leaves its journal
in an incomplete state. This is especially true when the withdraw is
caused by I/O errors while writing to the journal. Before this patch, a
withdraw would try to write a "shutdown" record to the journal and tell
dlm it was done with the file system; none of the other nodes were told
about the problem. Later, when the problem was fixed and the withdrawn
node was rebooted, it would discover that its own journal was
incomplete and replay it. However, replaying it at that point is almost
guaranteed to introduce corruption, because the other nodes are likely
to have used the affected resource groups recorded in that journal
since the time of the withdraw. The late replay overwrites any changes
those nodes made in the meantime, through no fault of dlm, which was
instructed during the withdraw to release those resources.

This patch makes file system withdraws visible to the entire cluster.
Withdrawing nodes dequeue their journal glock to allow recovery.

The remaining nodes check all the journals to see if they are
clean or in need of replay. They try to replay dirty journals, but
only the journals of withdrawn nodes will be "not busy" and
therefore available for replay.

Until the journal replay is complete, no I/O-related glocks may be
handed out, to ensure that the replay does not cause the
aforementioned corruption: we cannot allow a journal replay to
overwrite blocks associated with a glock once that glock is held.
Glocks not affected by a withdraw may still be passed around as
normal. A new glops flag, GLOF_OK_AT_WITHDRAW, indicates glocks that
may be passed around freely while a withdraw is taking place.

One such glock is the "live" glock, which is now used to signal a
withdraw. The withdrawing node dequeues the "live" glock and tries to
re-enqueue it in EX mode, which forces all the other nodes to see a
demote request by way of a "1CB" (one callback) try lock. The "live"
glock is never actually granted in EX; the callback is only used to
indicate that a withdraw has occurred.

Note that all nodes in the cluster must wait for the recovering node
to finish replaying the withdrawing node's journal before continuing.
To this end, each node checks that the journals are clean multiple
times in a retry loop.

Signed-off-by: Bob Peterson 
---
 fs/gfs2/glock.c  |  35 --
 fs/gfs2/glock.h  |   1 +
 fs/gfs2/glops.c  |  61 +-
 fs/gfs2/incore.h |   6 ++
 fs/gfs2/lock_dlm.c   |  32 ++
 fs/gfs2/log.c|  22 +--
 fs/gfs2/meta_io.c|   2 +-
 fs/gfs2/ops_fstype.c |  48 ++
 fs/gfs2/super.c  |  24 ---
 fs/gfs2/super.h  |   1 +
 fs/gfs2/util.c   | 148 ++-
 fs/gfs2/util.h   |   3 +
 12 files changed, 315 insertions(+), 68 deletions(-)

diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index c6d6e478f5e3..20fb6cdf7829 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -242,7 +242,8 @@ static void __gfs2_glock_put(struct gfs2_glock *gl)
gfs2_glock_remove_from_lru(gl);
spin_unlock(&gl->gl_lockref.lock);
GLOCK_BUG_ON(gl, !list_empty(&gl->gl_holders));
-   GLOCK_BUG_ON(gl, mapping && mapping->nrpages);
+   GLOCK_BUG_ON(gl, mapping && mapping->nrpages &&
+!test_bit(SDF_SHUTDOWN, &sdp->sd_flags));
trace_gfs2_glock_put(gl);
sdp->sd_lockstruct.ls_ops->lm_put_lock(gl);
 }
@@ -543,6 +544,8 @@ __acquires(&gl->gl_lockref.lock)
int ret;
 
if (unlikely(withdrawn(sdp)) &&
+   !(glops->go_flags & GLOF_OK_AT_WITHDRAW) &&
+   (gh && !(LM_FLAG_NOEXP & gh->gh_flags)) &&
target != LM_ST_UNLOCKED)
return;
lck_flags &= (LM_FLAG_TRY | LM_FLAG_TRY_1CB | LM_FLAG_NOEXP |
@@ -561,9 +564,10 @@ __acquires(&gl->gl_lockref.lock)
(lck_flags & (LM_FLAG_TRY|LM_FLAG_TRY_1CB)))
clear_bit(GLF_BLOCKING, &gl->gl_flags);
spin_unlock(&gl->gl_lockref.lock);
-   if (glops->go_sync)
+   if (glops->go_sync && !test_bit(SDF_SHUTDOWN, &sdp->sd_flags))
glops->go_sync(gl);
-   if (test_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags))
+   if (test_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags) &&
+   !test_bit(SDF_SHUTDOWN, &sdp->sd_flags))
glops->go_inval(gl, target == LM_ST_DEFERRED ? 0 : 
DIO_METADATA);
clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags);
 
@@ -1091,7 +1095,8 @@ int gfs2_glock_nq(struct gfs2_holder *gh)
struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
int error = 0;
 
-   if (unlikely(withdrawn(sdp)))
+   if (unlikely(withdrawn(sdp) && !(LM_FLAG_NOEXP & gh->gh_flags) &&
+!(gl->gl_ops->go_flags & GLOF_OK_AT_WITHDRAW)))
return -EIO;
 
if (test_bit(GLF_LRU, &gl->gl_flags))
@@ -1135,11 +1140,28 @@ int gfs2_glock_poll(struct gf