Gitweb:     http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=756a1501ddbbe73098aa031939460930f6edc9cd
Commit:     756a1501ddbbe73098aa031939460930f6edc9cd
Parent:     de46c33745f5e2ad594c72f2cf5f490861b16ce1
Author:     Srinivas Eeda <[EMAIL PROTECTED]>
AuthorDate: Tue Apr 17 13:26:33 2007 -0700
Committer:  Mark Fasheh <[EMAIL PROTECTED]>
CommitDate: Thu Apr 26 13:33:02 2007 -0700

    ocfs2_dlm: fix race in dlm_remaster_locks
    
    There is a possibility that dlm_remaster_locks could overwrite node->state
    with DLM_RECO_NODE_DATA_REQUESTED after dlm_reco_data_done_handler has
    already set node->state to DLM_RECO_NODE_DATA_DONE. This could leave
    recovery stuck, requiring a cluster reboot. Synchronize the state
    transition with the dlm_reco_state_lock spinlock.
    
    Signed-off-by: Srinivas Eeda <[EMAIL PROTECTED]>
    Signed-off-by: Mark Fasheh <[EMAIL PROTECTED]>
---
 fs/ocfs2/dlm/dlmrecovery.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index 6d4a83d..c1807a4 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -611,6 +611,7 @@ static int dlm_remaster_locks(struct dlm_ctxt *dlm, u8 dead_node)
                        }
                } while (status != 0);
 
+               spin_lock(&dlm_reco_state_lock);
                switch (ndata->state) {
                        case DLM_RECO_NODE_DATA_INIT:
                        case DLM_RECO_NODE_DATA_FINALIZE_SENT:
@@ -641,6 +642,7 @@ static int dlm_remaster_locks(struct dlm_ctxt *dlm, u8 dead_node)
                                     ndata->node_num, dead_node);
                                break;
                }
+               spin_unlock(&dlm_reco_state_lock);
        }
 
        mlog(0, "done requesting all lock info\n");
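
For readers tracing the race itself, here is a minimal standalone sketch of
the pattern being closed. The types and function names below
(reco_node_data, reco_data_done, remaster_step) are illustrative stand-ins,
not the actual dlmrecovery.c code; only dlm_reco_state_lock and the
DLM_RECO_NODE_DATA_* states come from the real file.

    #include <linux/spinlock.h>

    /* Illustrative stand-ins for the real dlm recovery types. */
    enum {
            DLM_RECO_NODE_DATA_REQUESTING,
            DLM_RECO_NODE_DATA_REQUESTED,
            DLM_RECO_NODE_DATA_DONE,
    };

    struct reco_node_data {
            int state;
    };

    static DEFINE_SPINLOCK(dlm_reco_state_lock);

    /* Message handler path: a remote node reports that its lock data
     * has all been sent.  This side already ran under
     * dlm_reco_state_lock before the fix. */
    static void reco_data_done(struct reco_node_data *ndata)
    {
            spin_lock(&dlm_reco_state_lock);
            ndata->state = DLM_RECO_NODE_DATA_DONE;
            spin_unlock(&dlm_reco_state_lock);
    }

    /* Recovery master path.  Before the fix the switch below ran
     * unlocked: it could read REQUESTING, lose the CPU while
     * reco_data_done() stored DONE, then overwrite DONE with
     * REQUESTED.  Recovery would then wait forever for a DONE that
     * had already been delivered.  Taking the lock makes the
     * read-and-update atomic, so a node that reached DONE stays
     * DONE. */
    static void remaster_step(struct reco_node_data *ndata)
    {
            spin_lock(&dlm_reco_state_lock);
            switch (ndata->state) {
            case DLM_RECO_NODE_DATA_REQUESTING:
                    ndata->state = DLM_RECO_NODE_DATA_REQUESTED;
                    break;
            case DLM_RECO_NODE_DATA_DONE:
                    /* handler won the race; leave the state alone */
                    break;
            }
            spin_unlock(&dlm_reco_state_lock);
    }

With the switch inside the critical section, the read-check-write on
ndata->state is serialized against the DONE update, which is why the
two-line patch above is sufficient.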