Re: [Ocfs2-devel] [PATCH] ocfs2/dlm: clean up unused variable in dlm_process_recovery_data

2018-04-02 Thread piaojun
LGTM

On 2018/4/3 13:42, Changwei Ge wrote:
> Signed-off-by: Changwei Ge 
Reviewed-by: Jun Piao 
> ---
>  fs/ocfs2/dlm/dlmrecovery.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
> index ec8f758..be6b067 100644
> --- a/fs/ocfs2/dlm/dlmrecovery.c
> +++ b/fs/ocfs2/dlm/dlmrecovery.c
> @@ -1807,7 +1807,6 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm,
>   int i, j, bad;
>   struct dlm_lock *lock;
>   u8 from = O2NM_MAX_NODES;
> - unsigned int added = 0;
>   __be64 c;
>  
>   mlog(0, "running %d locks for this lockres\n", mres->num_locks);
> @@ -1823,7 +1822,6 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm,
>   spin_lock(&res->spinlock);
>   dlm_lockres_set_refmap_bit(dlm, res, from);
>   spin_unlock(&res->spinlock);
> - added++;
>   break;
>   }
>   BUG_ON(ml->highest_blocked != LKM_IVMODE);
> @@ -1911,7 +1909,6 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm,
>   /* do not alter lock refcount.  switching lists. */
>   list_move_tail(&lock->list, queue);
>   spin_unlock(&res->spinlock);
> - added++;
>  
>   mlog(0, "just reordered a local lock!\n");
>   continue;
> @@ -2037,7 +2034,6 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm,
>"setting refmap bit\n", dlm->name,
>res->lockname.len, res->lockname.name, ml->node);
>   dlm_lockres_set_refmap_bit(dlm, res, ml->node);
> - added++;
>   }
>   spin_unlock(&res->spinlock);
>   }
> 
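
For context: after earlier refactoring nothing ever reads `added' back, so it
is a classic unused-but-set variable. A minimal standalone illustration of the
pattern (a made-up example, not from the kernel tree), which gcc reports via
-Wunused-but-set-variable (enabled by -Wall):

	/* gcc -Wall -Wextra -c example.c
	 * warning: variable 'added' set but not used */
	int count_ready(const int *flags, int n)
	{
		int added = 0;	/* written on every match ... */
		int ready = 0;
		int i;

		for (i = 0; i < n; i++) {
			if (flags[i]) {
				added++;	/* ... but never read afterwards */
				ready++;
			}
		}
		return ready;	/* only 'ready' feeds the result */
	}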



[Ocfs2-devel] [PATCH] ocfs2/dlm: clean up unused variable in dlm_process_recovery_data

2018-04-02 Thread Changwei Ge
Signed-off-by: Changwei Ge 
---
 fs/ocfs2/dlm/dlmrecovery.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index ec8f758..be6b067 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -1807,7 +1807,6 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm,
int i, j, bad;
struct dlm_lock *lock;
u8 from = O2NM_MAX_NODES;
-   unsigned int added = 0;
__be64 c;
 
mlog(0, "running %d locks for this lockres\n", mres->num_locks);
@@ -1823,7 +1822,6 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm,
	spin_lock(&res->spinlock);
	dlm_lockres_set_refmap_bit(dlm, res, from);
	spin_unlock(&res->spinlock);
-   added++;
break;
}
BUG_ON(ml->highest_blocked != LKM_IVMODE);
@@ -1911,7 +1909,6 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm,
/* do not alter lock refcount.  switching lists. */
list_move_tail(>list, queue);
	spin_unlock(&res->spinlock);
-   added++;
 
mlog(0, "just reordered a local lock!\n");
continue;
@@ -2037,7 +2034,6 @@ static int dlm_process_recovery_data(struct dlm_ctxt *dlm,
 "setting refmap bit\n", dlm->name,
 res->lockname.len, res->lockname.name, ml->node);
dlm_lockres_set_refmap_bit(dlm, res, ml->node);
-   added++;
}
	spin_unlock(&res->spinlock);
}
-- 
2.7.4




Re: [Ocfs2-devel] [PATCH] ocfs2/dlm: wait for dlm recovery done when migrating all lock resources

2018-04-02 Thread Joseph Qi


On 18/3/15 20:59, piaojun wrote:
> Wait for dlm recovery to finish when migrating all lock resources, in
> case a new lock resource is left behind after leaving the dlm domain.
> Such a leftover lock resource will cause other nodes to BUG.
> 
>   NodeA   NodeBNodeC
> 
> umount:
>   dlm_unregister_domain()
> dlm_migrate_all_locks()
> 
>  NodeB down
> 
> do recovery for NodeB
> and collect a new lockres
> from other live nodes:
> 
>   dlm_do_recovery
> dlm_remaster_locks
>   dlm_request_all_locks:
> 
>   dlm_mig_lockres_handler
> dlm_new_lockres
>   __dlm_insert_lockres
> 
> at last NodeA becomes the
> master of the new lockres
> and leaves the domain:
>   dlm_leave_domain()
> 
>   mount:
> dlm_join_domain()
> 
>   touches a file and asks
>   for the owner of the new
>   lockres, but all the
>   other nodes say 'NO',
>   so NodeC decides to be
>   the owner, and sends an
>   assert msg to the other
>   nodes:
>   dlmlock()
> dlm_get_lock_resource()
>   dlm_do_assert_master()
> 
>   other nodes receive the msg
>   and find two masters exist,
>   which at last causes a BUG in
>   dlm_assert_master_handler()
>   -->BUG();
> 
> Fixes: bc9838c4d44a ("dlm: allow dlm do recovery during shutdown")
> 
Redundant blank line here.
But I've found Andrew has already fixed this when adding it to the -mm tree.

Acked-by: Joseph Qi 

> Signed-off-by: Jun Piao 
> Reviewed-by: Alex Chen 
> Reviewed-by: Yiwen Jiang 
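
A rough sketch of the idea, for readers following the diagram above. This is
illustrative only, not the literal patch: it assumes the fix waits out any
in-flight recovery and then latches the migrate_done flag discussed later in
this archive; dlm_wait_for_recovery() is an existing helper, while the exact
placement of the flag checks is a guess:

	static int dlm_migrate_all_locks(struct dlm_ctxt *dlm)
	{
		/* ... existing loop migrating every lockres in the hash ... */

		/* block until any recovery that is still remastering locks
		 * for a dead node has finished */
		dlm_wait_for_recovery(dlm);

		/* latch the flag so a later recovery pass cannot hand this
		 * departing node a brand-new lock resource */
		dlm->migrate_done = 1;
		return 0;
	}

	/* ... and the recovery path would short-circuit accordingly: */
	if (dlm->migrate_done) {
		mlog(0, "%s: all locks migrated, nothing to recover\n",
		     dlm->name);
		goto leave;	/* hypothetical placement */
	}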




Re: [Ocfs2-devel] [PATCH] ocfs2: Take inode cluster lock before moving reflinked inode from orphan dir

2018-04-02 Thread Joseph Qi


On 18/3/31 01:42, Ashish Samant wrote:
> While reflinking an inode, we create a new inode in the orphan directory,
> then take an EX lock on it, reflink the original inode to the orphan inode,
> and release the EX lock. Once the lock is released, another node might
> request it in PR mode, which causes a downconvert of the lock to PR mode.
> 
> Later we attempt to initialize the security acl for the orphan inode and
> move it to the reflink destination. However, while doing this we don't
> take an EX lock on the inode. So effectively we are doing all of this,
> and accessing the journal for this inode, while holding only the PR lock.
> While accessing the journal, we set
> 
> ci->ci_last_trans = journal->j_trans_id
> 
> At this point, if there is another downconvert request on this inode from
> another node (PR->NL), we will trip on the following condition in
> ocfs2_ci_checkpointed()
> 
> BUG_ON(lockres->l_level != DLM_LOCK_EX && !checkpointed);
> 
> because we hold the lock in PR mode and journal->j_trans_id is not greater
> than ci_last_trans for the inode.
> 
> Fix this by taking the orphan inode cluster lock in EX mode before
> initializing security and moving the orphan inode to the reflink
> destination. Use the __tracker variant while taking the inode lock to
> avoid recursive locking in the ocfs2_init_security_and_acl() call chain.
> 
> Signed-off-by: Ashish Samant 

Looks good.
Reviewed-by: Joseph Qi 
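
For reference, a sketch of the locking pattern the commit message describes
(paraphrased, not the literal diff; ocfs2_inode_lock_tracker() and
ocfs2_inode_unlock_tracker() are the existing dlmglue helpers that tolerate
recursive acquisition by the same task; new_inode, error and the out label
are placeholder names):

	struct ocfs2_lock_holder oh;
	struct buffer_head *new_bh = NULL;
	int had_lock;

	/* take the orphan inode's cluster lock in EX; the tracker records
	 * this task as the holder, so a nested ocfs2_inode_lock() from the
	 * ocfs2_init_security_and_acl() call chain won't deadlock */
	had_lock = ocfs2_inode_lock_tracker(new_inode, &new_bh, 1, &oh);
	if (had_lock < 0) {
		error = had_lock;
		mlog_errno(error);
		goto out;
	}

	/* ... init security/acl and move the orphan inode to the reflink
	 * destination, all while still holding the EX cluster lock ... */

	ocfs2_inode_unlock_tracker(new_inode, 1, &oh, had_lock);
	brelse(new_bh);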



Re: [Ocfs2-devel] [PATCH] ocfs2/dlm: wait for dlm recovery done when migrating all lock resources

2018-04-02 Thread Changwei Ge
On 2018/4/2 12:32, piaojun wrote:
> Hi Changwei,
> 
> On 2018/4/2 10:59, Changwei Ge wrote:
>> Hi Jun,
>>
>> It seems that you have posted this patch before, if I remember correctly.
>> My concern is still that if you disable dlm recovery via ::migrate_done,
>> then the flag DLM_LOCK_RES_RECOVERING can't be cleared.
> By the time migrate_done is set, all lockres have been migrated, so there
> won't be any lockres with DLM_LOCK_RES_RECOVERING. And I have tested this
> patch for a few months.

Oops,
Yes, it makes sense.

> 
> thanks,
> Jun
>>
>> So we can't purge the problematic lock resource since __dlm_lockres_unused()
>> needs to check that flag.
>>
>> Finally, umount will run the while loop in dlm_migrate_all_locks()
>> infinitely.
>> Or am I missing something?
>>
>> Thanks,
>> Changwei
>>
>> On 2018/3/15 21:00, piaojun wrote:

>>>
>>> Fixes: bc9838c4d44a ("dlm: allow dlm do recovery during shutdown")
>>>
>>> Signed-off-by: Jun Piao 
>>> Reviewed-by: Alex Chen 
>>> Reviewed-by: Yiwen Jiang 
Acked-by: Changwei Ge 

>>> ---
>>>    fs/ocfs2/dlm/dlmcommon.h   |  1 +
>>>    fs/ocfs2/dlm/dlmdomain.c   | 15 +++++++++++++++
>>>    fs/ocfs2/dlm/dlmrecovery.c | 13 ++++++++++---
>>>    3 files changed, 26 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/fs/ocfs2/dlm/dlmcommon.h b/fs/ocfs2/dlm/dlmcommon.h
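
For readers without the tree at hand, the purge predicate referred to above
looks roughly like this (paraphrased and simplified from
fs/ocfs2/dlm/dlmthread.c); it shows why a lockres stuck with
DLM_LOCK_RES_RECOVERING could never be purged, leaving umount to spin:

	int __dlm_lockres_unused(struct dlm_lock_resource *res)
	{
		if (__dlm_lockres_has_locks(res))
			return 0;
		if (res->inflight_locks)	/* locks being created */
			return 0;
		if (!list_empty(&res->dirty) ||
		    (res->state & DLM_LOCK_RES_DIRTY))
			return 0;
		if (res->state & (DLM_LOCK_RES_RECOVERING |
				  DLM_LOCK_RES_RECOVERY_WAITING))
			return 0;	/* the flag at issue in this thread */
		/* some other node still holds a reference on this resource */
		if (find_next_bit(res->refmap, O2NM_MAX_NODES, 0) <
		    O2NM_MAX_NODES)
			return 0;
		return 1;
	}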

