Re: [Gluster-users] Quota going crazy

2015-08-28 Thread Vijaikumar M

Hi Jonathan,

Are there any errors related to quota in the brick logs?
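
If it helps, something along these lines should surface marker/quota errors on
a brick node (just a sketch; it assumes the default brick log location under
/var/log/glusterfs/bricks):

  # scan all brick logs for quota/marker messages logged at warning or error level
  grep -iE 'marker|quota' /var/log/glusterfs/bricks/*.log | grep -E ' [EW] '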

Thanks,
Vijay


On Friday 28 August 2015 12:22 PM, Jonathan MICHALON wrote:

Hi,

I'm experiencing strange quota mismatches (sometimes too high, sometimes too low) with 3.6.4 on a 
setup that was upgraded from the 3.4 series.

In an attempt to reset quota and check from scratch without breaking service, I 
disabled quota and removed the quota-related xattrs on every file on every brick 
(this is a 3×2 setup on 6 bricks of 40 TB each).
I then re-enabled quota, waited a bit for the quota daemons to wake up, and 
then launched a `find` on one of the quota-limited subdirectories. The quota 
then showed the correct size.
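
(Roughly what I did, as a sketch rather than the exact commands; volume name
and brick path are placeholders:)

  gluster volume quota <volname> disable
  # on each brick, strip every trusted.glusterfs.quota.* xattr
  find /bricks/brick1 | while read -r f; do
      getfattr -d -m 'trusted.glusterfs.quota' -e hex --absolute-names "$f" 2>/dev/null \
        | awk -F= '/^trusted/ {print $1}' \
        | xargs -r -I{} setfattr -x '{}' "$f"
  done
  gluster volume quota <volname> enable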
But on another (bigger) directory, the reported size was a little too small. I 
re-ran the same `find`, and the final size was much, much greater than the 
real size (as reported by `du`). It should be around 4.1 TB and it showed 
something like 5.4 TB!
I relaunched the same `find` again and again, but the size kept growing, up to 
around 12.6 TB. Next I ran the `find` from another client and… it grew again.
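
(The comparison itself was nothing fancier than this; volume name, subdirectory
and client mount point are placeholders:)

  # what quota reports for the limited directory vs. what is actually there
  gluster volume quota <volname> list /some/limited/subdir
  du -sh /mnt/<volname>/some/limited/subdir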

I'm running out of ideas right now. If any of you have an idea about what I could 
do… thanks in advance.

--
Jonathan Michalon


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] cluster.min-free-disk is not working in distributed disperse volume

2015-08-28 Thread Mohamed Pakkeer
Hi Susant,

Thanks for your reply.

Can you mention whether you gave the force option while starting rebalance? If
the force flag is set, rebalance will move the data without considering whether
it is moving data from a sub-volume with more available space to one with less.

We are able to reproduce the same issue after stopping the rebalance that was
started with the force option and restarting it without the force option.

The source sub-volume is 91% full and the destination sub-volume is 93% full,
so data is still being moved from a disk with more available space to one with less.
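
The exact sequence was roughly the following (a sketch; the volume name is
taken from the rebalance log below):

  # stop the rebalance that had been started with 'force', restart it normally
  gluster volume rebalance qubevaultdr stop
  gluster volume rebalance qubevaultdr start        # no 'force' this time
  gluster volume rebalance qubevaultdr status
  # check whether cluster.min-free-disk has been reconfigured on the volume
  gluster volume info qubevaultdr | grep -i min-free-disk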

*Rebalance Log:*

[2015-08-28 10:13:05.130492] I [dht-rebalance.c:1002:dht_migrate_file]
0-qubevaultdr-dht:
/Packages/Features/MPEG/A/AlaaElaa_FTR_S_TE-XX_IN-UA_51_HD_PIX_RIH_IOP_OV/AlaaElaa_FTR_S_TE-XX_IN-UA_51_HD_PIX_20141124_RIH_IOP_OV/ALA-ELA_R5_AUDIO_241114.mxf:
attempting to move from qubevaultdr-disperse-34 to qubevaultdr-disperse-31

/dev/sdb1   3.7T  3.4T  310G  92% /media/disk1 - disperse0
/dev/sdc1   3.7T  3.3T  371G  91% /media/disk2
/dev/sdd1   3.7T  3.5T  224G  95% /media/disk3
/dev/sde1   3.7T  3.4T  301G  92% /media/disk4
/dev/sdf1   3.7T  3.3T  356G  91% /media/disk5
/dev/sdg1   3.7T  3.5T  242G  94% /media/disk6
/dev/sdh1   3.7T  3.4T  335G  92% /media/disk7
/dev/sdi1   3.7T  3.3T  356G  91% /media/disk8
/dev/sdj1   3.7T  3.4T  272G  93% /media/disk9
/dev/sdk1   3.7T  3.4T  302G  92% /media/disk10
/dev/sdl1   3.7T  3.4T  246G  94% /media/disk11
/dev/sdm1   3.7T  3.4T  330G  92% /media/disk12
/dev/sdn1   3.7T  3.4T  339G  91% /media/disk13
/dev/sdo1   3.7T  3.4T  266G  93% /media/disk14 -
/dev/sdp1   3.7T  3.4T  342G  91% /media/disk15 -disperse14
/dev/sdq1   3.7T  3.4T  267G  93% /media/disk16
/dev/sdr1   3.7T  3.3T  358G  91% /media/disk17
/dev/sds1   3.7T  3.3T  360G  91% /media/disk18
/dev/sdt1   3.7T  3.4T  259G  94% /media/disk19
/dev/sdu1   3.7T  3.4T  313G  92% /media/disk20
/dev/sdv1   3.7T  3.3T  364G  91% /media/disk21
/dev/sdw1   3.7T  3.3T  367G  91% /media/disk22
/dev/sdx1   3.7T  3.4T  291G  93% /media/disk23
/dev/sdy1   3.7T  3.4T  302G  92% /media/disk24
/dev/sdz1   3.7T  3.3T  350G  91% /media/disk25
/dev/sdaa1  3.7T  3.5T  209G  95% /media/disk26
/dev/sdab1  3.7T  3.4T  333G  92% /media/disk27
/dev/sdac1  3.7T  3.3T  374G  90% /media/disk28
/dev/sdad1  3.7T  3.4T  318G  92% /media/disk29
/dev/sdae1  3.7T  3.3T  371G  91% /media/disk30 - disperse-29
/dev/sdaf1  3.7T  3.3T  371G  91% /media/disk31  - disperse-30
/dev/sdag1  3.7T  3.4T  261G  93% /media/disk32  - disperse-31  <== destination
/dev/sdah1  3.7T  3.4T  273G  93% /media/disk33  - disperse-32
/dev/sdai1  3.7T  3.3T  370G  91% /media/disk34  - disperse-33
/dev/sdaj1  3.7T  3.3T  365G  91% /media/disk35  - disperse-34  <== source
/dev/sdak1  3.7T  3.3T  366G  91% /media/disk36  - disperse-35

Thanks
Backer


On Thu, Aug 27, 2015 at 6:20 PM, Susant Palai spa...@redhat.com wrote:

 Comments inline.

 - Original Message -
 From: Mohamed Pakkeer mdfakk...@gmail.com
 To: Susant Palai spa...@redhat.com
 Cc: Mathieu Chateau mathieu.chat...@lotp.fr, gluster-users 
 gluster-users@gluster.org, Gluster Devel gluster-de...@gluster.org,
 Vijay Bellur vbel...@redhat.com, Pranith Kumar Karampuri 
 pkara...@redhat.com, Ashish Pandey aspan...@redhat.com
 Sent: Thursday, 27 August, 2015 5:41:02 PM
 Subject: Re: [Gluster-users] cluster.min-free-disk is not working in
 distributed disperse volume


 Hi Susant,


 Thanks for your reply. I think we started the rebalance with the force
 option. I asked a question earlier in this mail thread about running the
 rebalance daemon on a dedicated peered node instead of letting the cluster
 pick a node automatically.
  Currently there is no such feature to start rebalance on a dedicated
 node.

 If I start the rebalance, the rebalance daemon runs the fix-layout on all
 nodes but migrates files on only one node (always node 1 in our cluster).
 The first node's CPU usage is always high during rebalance compared with the
 rest of the cluster nodes.


 To reduce the CPU usage of the rebalance node (node1), I peered a new node
 (without disks) for rebalancing and started the rebalance again. It still
 ran the rebalance on the same node1.


 Is there any way to force the rebalance daemon (the file migration) to run
 on a dedicated peered node?





 Regards
 Backer






 On Thu, Aug 27, 2015 at 3:00 PM, Susant Palai  spa...@redhat.com  wrote:


 Comments inline.
 ++CCing Pranith and Ashish to comment on the disperse behaviour in detail.

 - Original Message -
 From: Mohamed Pakkeer  mdfakk...@gmail.com 
 To: Susant Palai  spa...@redhat.com , Vijay Bellur 
 vbel...@redhat.com 
 Cc: Mathieu Chateau  mathieu.chat...@lotp.fr , gluster-users 
 gluster-users@gluster.org , Gluster Devel  gluster-de...@gluster.org 
 Sent: Wednesday, 26 August, 2015 2:08:02 PM
 Subject: Re: [Gluster-users] cluster.min-free-disk is not working in
 distributed disperse volume



Re: [Gluster-users] Quota going crazy

2015-08-28 Thread Jonathan MICHALON
Thanks, good catch. I didn't find anything in quota*.log but didn't have a look 
in the bricks subdir…

[2015-08-27 04:53:01.628979] W [marker-quota.c:1417:mq_release_parent_lock] 
(-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_release_parent_lock+0x271)[0x7fcfb93a5691]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_update_inode_contribution+0x3cc)[0x7fcfb93a631c]
 (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_lookup_cbk+0xc0)[0x7fcfbeebb9c0]
 (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_lookup_cbk+0xc0)[0x7fcfbeebb9c0]
 ) 0-img-data-marker: An operation during quota updation of path 
(/zone/programs/ProgsRX/Schrodinger2009/mmshare-v18212/lib/Linux-x86_64/lib/python2.6/site-packages/pytz/zoneinfo/Antarctica/Davis)
 failed (Invalid argument)
[2015-08-27 04:53:02.240268] E [marker-quota.c:1186:mq_get_xattr] (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_fetch_child_size_and_contri+0x516)[0x7fcfb93a6a96]
 (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_setxattr_cbk+0xa3)[0x7fcfbeebd363]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/access-control.so(posix_acl_setxattr_cbk+0xa3)[0x7fcfb9dec003]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/changelog.so(changelog_setxattr_cbk+0xe5)[0x7fcfb9ffc675]
 ) 0-: Assertion failed: !uuid null
[2015-08-27 04:53:02.240352] E 
[marker-quota.c:1831:mq_fetch_child_size_and_contri] (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_fetch_child_size_and_contri+0x516)[0x7fcfb93a6a96]
 (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_setxattr_cbk+0xa3)[0x7fcfbeebd363]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/access-control.so(posix_acl_setxattr_cbk+0xa3)[0x7fcfb9dec003]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/changelog.so(changelog_setxattr_cbk+0xe5)[0x7fcfb9ffc675]
 ) 0-: Assertion failed: !uuid null
[2015-08-27 04:53:02.240452] E [posix.c:150:posix_lookup] 0-img-data-posix: 
lstat on (null) failed: Invalid argument

Looks like the problem is about a null gfid. Now I have no idea how a null 
gfid can come about… :)

Searching for nulls, I found one in some xattrs:
trusted.glusterfs.quota.----0001.contri=0x0a1bafc7f200
This looks rather strange too; maybe it's related?
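
In case it helps, I spotted it with something along these lines (a sketch; the
brick path is a placeholder):

  # dump the quota contri xattrs on a brick, keeping the "# file:" headers
  getfattr -R -d -m 'trusted.glusterfs.quota' -e hex --absolute-names /path/to/brick 2>/dev/null \
    | grep -E '^# file:|contri'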

--
Jonathan Michalon
P.S. sorry for bad formatting, but have to use OWA…


From: Vijaikumar M vmall...@redhat.com
Sent: Friday, 28 August 2015 11:11
To: Jonathan MICHALON; gluster-users@gluster.org
Subject: Re: [Gluster-users] Quota going crazy

Hi Jonathan,

Are there any errors related to quota in the brick logs?

Thanks,
Vijay


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Quota going crazy

2015-08-28 Thread Vijaikumar M



On Friday 28 August 2015 05:33 PM, Jonathan MICHALON wrote:

Thanks, good catch. I didn't find anything in quota*.log but didn't have a look 
in the bricks subdir…

[2015-08-27 04:53:01.628979] W [marker-quota.c:1417:mq_release_parent_lock] (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6] (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_release_parent_lock+0x271)[0x7fcfb93a5691]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_update_inode_contribution+0x3cc)[0x7fcfb93a631c]
 (-- /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_lookup_cbk+0xc0)[0x7fcfbeebb9c0] 
(-- /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_lookup_cbk+0xc0)[0x7fcfbeebb9c0] 
) 0-img-data-marker: An operation during quota updation of path 
(/zone/programs/ProgsRX/Schrodinger2009/mmshare-v18212/lib/Linux-x86_64/lib/python2.6/site-packages/pytz/zoneinfo/Antarctica/Davis)
 failed (Invalid argument)
[2015-08-27 04:53:02.240268] E [marker-quota.c:1186:mq_get_xattr] (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6] (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_fetch_child_size_and_contri+0x516)[0x7fcfb93a6a96]
 (-- /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_setxattr_cbk+0xa3)[0x7fcfbeebd363] (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/access-control.so(posix_acl_setxattr_cbk+0xa3)[0x7fcfb9dec003]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/changelog.so(changelog_setxattr_cbk+0xe5)[0x7fcfb9ffc675]
 ) 0-: Assertion failed: !uuid null
[2015-08-27 04:53:02.240352] E [marker-quota.c:1831:mq_fetch_child_size_and_contri] (-- 
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcfbeeb5da6] (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/marker.so(mq_fetch_child_size_and_contri+0x516)[0x7fcfb93a6a96]
 (-- /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_setxattr_cbk+0xa3)[0x7fcfbeebd363] (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/access-control.so(posix_acl_setxattr_cbk+0xa3)[0x7fcfb9dec003]
 (-- 
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/xlator/features/changelog.so(changelog_setxattr_cbk+0xe5)[0x7fcfb9ffc675]
 ) 0-: Assertion failed: !uuid null
[2015-08-27 04:53:02.240452] E [posix.c:150:posix_lookup] 0-img-data-posix: 
lstat on (null) failed: Invalid argument

Looks like the problem is about a null gfid. Now I have no idea how a null 
gfid can come about… :)


I will try to re-create this problem with glusterfs-3.6.4 and 
glusterfs-3.6.5, and I will let you know about the root cause soon.


Thanks,
Vijay



Searching for nulls, I found one in some xattrs:
trusted.glusterfs.quota.----0001.contri=0x0a1bafc7f200
This looks rather strange too, maybe it's related?

--
Jonathan Michalon
P.S. sorry for bad formatting, but have to use OWA…




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Cannot upgrade from 3.6.3 to 3.7.3

2015-08-28 Thread Alastair Neil
Did you mean the "option rpc-auth-allow-insecure on" setting? I just did a
rolling upgrade from 3.6 to 3.7 without issue; however, I had already enabled
insecure connections because I had some clients running 3.7.
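
For reference, enabling that beforehand looked roughly like this (a sketch; the
volume name is a placeholder):

  # per volume: accept clients connecting from non-privileged ports
  gluster volume set <volname> server.allow-insecure on
  # on every server, add the following line to /etc/glusterfs/glusterd.vol:
  #     option rpc-auth-allow-insecure on
  # then restart glusterd on that server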

-Alastair


On 27 August 2015 at 10:04, Andreas Mather andr...@allaboutapps.at wrote:

 Hi Humble!

 Thanks for the reply. The docs do not mention anything related to the 3.6-to-3.7
 upgrade that applies to my case.

 I was able to resolve the issue in the meantime by following the steps mentioned
 in the 3.7.1 release notes (
 https://gluster.readthedocs.org/en/latest/release-notes/3.7.1/).

 Thanks,

 Andreas


 On Thu, Aug 27, 2015 at 3:22 PM, Humble Devassy Chirammal 
 humble.deva...@gmail.com wrote:

 Hi Andreas,

 
 Is it even possible to perform a rolling upgrade?
 


 The GlusterFS upgrade process is documented  @
 https://gluster.readthedocs.org/en/latest/Upgrade-Guide/README/



 --Humble


 On Thu, Aug 27, 2015 at 4:57 PM, Andreas Mather andr...@allaboutapps.at
 wrote:

 Hi All!

 I wanted to do a rolling upgrade of gluster from 3.6.3 to 3.7.3, but
 after the upgrade, the updated node won't connect.

 The cluster has 4 nodes (vhost[1-4]) and 4 volumes (vol[1-4]) with 2
 replicas each:
 vol1: vhost1/brick1, vhost2/brick2
 vol2: vhost2/brick1, vhost1/brick2
 vol3: vhost3/brick1, vhost4/brick2
 vol4: vhost4/brick1, vhost3/brick2

 I'm trying to start the upgrade on vhost4. After restarting glusterd,
 peer status shows all other peers as disconnected, and the log has repeated
 entries like this:

 [2015-08-27 10:59:56.982254] E [MSGID: 106167]
 [glusterd-handshake.c:2078:__glusterd_peer_dump_version_cbk] 0-management:
 Error through RPC layer, retry again later
 [2015-08-27 10:59:56.982335] E [rpc-clnt.c:362:saved_frames_unwind] (--
 /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (--
 /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f1a7a7229be] (--
 /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f1a7a722ace] (--
 /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x9c)[0x7f1a7a72447c] (--
 /lib64/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7f1a7a724c38] )
 0-management: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at
 2015-08-27 10:59:56.981550 (xid=0x2)
 [2015-08-27 10:59:56.982346] W [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk]
 0-management: socket disconnected
 [2015-08-27 10:59:56.982359] I [MSGID: 106004]
 [glusterd-handler.c:5051:__glusterd_peer_rpc_notify] 0-management: Peer
 vhost3-int (72e2078d-1ed9-4cdd-aad2-c86e418746d1), in state Peer in
 Cluster, has disconnected from glusterd.
 [2015-08-27 10:59:56.982491] W
 [glusterd-locks.c:677:glusterd_mgmt_v3_unlock] (--
 /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x541)[0x7f1a6f55ee91]
 (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)[0x7f1a6f4c6972]
 (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)[0x7f1a6f4bc90c]
 (-- /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f1a7a724c80] )
 0-management: Lock for vol vol1 not held
 [2015-08-27 10:59:56.982504] W [MSGID: 106118]
 [glusterd-handler.c:5073:__glusterd_peer_rpc_notify] 0-management: Lock not
 released for vol1
 [2015-08-27 10:59:56.982608] W
 [glusterd-locks.c:677:glusterd_mgmt_v3_unlock] (--
 /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x541)[0x7f1a6f55ee91]
 (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)[0x7f1a6f4c6972]
 (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)[0x7f1a6f4bc90c]
 (-- /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f1a7a724c80] )
 0-management: Lock for vol vol2 not held
 [2015-08-27 10:59:56.982618] W [MSGID: 106118]
 [glusterd-handler.c:5073:__glusterd_peer_rpc_notify] 0-management: Lock not
 released for vol2
 [2015-08-27 10:59:56.982728] W
 [glusterd-locks.c:677:glusterd_mgmt_v3_unlock] (--
 /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x541)[0x7f1a6f55ee91]
 (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)[0x7f1a6f4c6972]
 (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)[0x7f1a6f4bc90c]
 (-- /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f1a7a724c80] )
 0-management: Lock for vol vol3 not held
 [2015-08-27 10:59:56.982739] W [MSGID: 106118]
 [glusterd-handler.c:5073:__glusterd_peer_rpc_notify] 0-management: Lock not
 released for vol3
 [2015-08-27 10:59:56.982844] W
 [glusterd-locks.c:677:glusterd_mgmt_v3_unlock] (--
 /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7f1a7a9579e6] (--
 /usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x541)[0x7f1a6f55ee91]
 (--