Re: [Gluster-users] Does gluster have a "scrub"?

2016-04-13 Thread Gaurav Garg
Yes, the bitrot feature is available in the GlusterFS 3.7 release.

For bitrot command help, use:

#gluster volume help | grep bitrot
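
For example, once the sub-commands are visible in the help output, a typical sequence on a 3.7 volume looks roughly like this ("mainvol" is just the volume name used later in this thread, and the exact sub-commands can vary slightly between 3.7.x releases, so trust the help output):

#gluster volume bitrot mainvol enable
#gluster volume bitrot mainvol scrub-throttle lazy
#gluster volume bitrot mainvol scrub-frequency daily
#gluster volume bitrot mainvol scrub status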


Thanks,

~Gaurav

- Original Message -
> From: "Chen Chen" 
> To: "Lindsay Mathieson" 
> Cc: "gluster-users" 
> Sent: Wednesday, April 13, 2016 2:55:45 PM
> Subject: Re: [Gluster-users] Does gluster have a "scrub"?
> 
> I suppose it is already quite mature, because it is already listed as a
> feature in the Red Hat Gluster Storage 3.2 Administration Guide.
> 
> From my point of view, RHGS is somewhat an enterprise-stable version of
> Community Gluster, like RHEL vs Fedora.
> 
> On 4/13/2016 5:14 PM, Lindsay Mathieson wrote:
> > On 13 April 2016 at 19:09, Chen Chen  wrote:
> >> root@master# gluster volume get mainvol all | grep bitrot
> >> features.bitrot disable
> >
> >
> > Thanks.
> >
> > "gluster volume set help" doesn't list the feature - oversight? (3.7.9)
> >
> 
> --
> Chen Chen
> Shanghai SmartQuerier Biotechnology Co., Ltd.
> Add: 3F, 1278 Keyuan Road, Shanghai 201203, P. R. China
> Mob: +86 15221885893
> Email: chenc...@smartquerier.com
> Web: www.smartquerier.com
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Does gluster have a "scrub"?

2016-04-13 Thread Gaurav Garg
Use the "#gluster volume help | grep bitrot" command.

Thanks,
~Gaurav

- Original Message -
> From: "Lindsay Mathieson" 
> To: "Chen Chen" 
> Cc: "gluster-users" 
> Sent: Wednesday, April 13, 2016 2:44:37 PM
> Subject: Re: [Gluster-users] Does gluster have a "scrub"?
> 
> On 13 April 2016 at 19:09, Chen Chen  wrote:
> > root@master# gluster volume get mainvol all | grep bitrot
> > features.bitrot disable
> 
> 
> Thanks.
> 
> "gluster volume set help" doesn't list the feature - oversight? (3.7.9)
> 
> --
> Lindsay
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Impact of force option in remove-brick

2016-03-22 Thread Gaurav Garg
Comments inline.

- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org, gluster-de...@gluster.org
Sent: Tuesday, March 22, 2016 12:23:25 PM
Subject: Re: [Gluster-users] Impact of force option in remove-brick

On Tue, Mar 22, 2016 at 12:14 PM, Gaurav Garg <gg...@redhat.com> wrote:

> >> I just want to know what is the difference in the following scenario:
>
> 1. remove-brick without the force option
> 2. remove-brick with the force option
>
>
> remove-brick without the force option will perform the task based on the
> option you give; for example, the remove-brick start option will start
> migrating files from the given brick to the other available bricks in the
> cluster. You can check the status of this remove-brick task by issuing the
> remove-brick status command.
>
> But remove-brick with the force option will just forcefully remove the brick
> from the cluster. It will result in data loss in the case of a distributed
> volume, because it will not migrate files from the removed brick to the
> other available bricks in the cluster. In the case of a replicate volume you
> might not have a problem with remove-brick force, because later on, after
> adding a brick, you can issue the heal command and migrate files from the
> first replica set to the newly added brick.
>

So when you say forcefully remove the brick, does that mean it will remove
the brick even when the brick is not available, or is available but has a
different peer UUID, without generating any error?


Yes, it will remove the brick even when the brick is not available ("brick not
available" means the brick is hosted on a node which is down or unreachable).

I didn't get your point about how you are getting different UUIDs of peers.
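
For reference, a minimal sketch of the replicate case described above, with placeholder volume/brick names (not taken from your setup):

#gluster volume remove-brick VOLNAME replica 1 HOST2:/bricks/brick1 force
#gluster volume add-brick VOLNAME replica 2 HOST2:/bricks/brick1_new force
#gluster volume heal VOLNAME full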

~Gaurav

>
> Thanks,
>
> ~Gaurav
>
> - Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: gluster-users@gluster.org, gluster-de...@gluster.org
> Sent: Tuesday, March 22, 2016 11:35:52 AM
> Subject: [Gluster-users] Impact of force option in remove-brick
>
> Hi Team,
>
> I have the following scenario:
>
> 1. I have one replica 2 volume in which two bricks are available.
> 2. In this permutation and combination I got a UUID mismatch between peers.
> 3. Because of the UUID mismatch, when I tried to remove the brick on the
> second board I got the Incorrect Brick failure.
>
> Now, my question is: if I use the remove-brick command with the 'force'
> option, does that mean it should remove the brick in any situation, whether
> the brick is available or its UUID is mismatched?
>
> I just want to know what is the difference in the following scenario:
>
> 1. remove-brick without the force option
> 2. remove-brick with the force option
>
>
> Regards
> Abhishek
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Impact of force option in remove-brick

2016-03-22 Thread Gaurav Garg
>> I just want to know what is the difference in the following scenario: 

1. remove-brick without the force option 
2. remove-brick with the force option 


remove-brick without the force option will perform the task based on the
option you give; for example, the remove-brick start option will start
migrating files from the given brick to the other available bricks in the
cluster. You can check the status of this remove-brick task by issuing the
remove-brick status command.

But remove-brick with the force option will just forcefully remove the brick
from the cluster. It will result in data loss in the case of a distributed
volume, because it will not migrate files from the removed brick to the other
available bricks in the cluster. In the case of a replicate volume you might
not have a problem with remove-brick force, because later on, after adding a
brick, you can issue the heal command and migrate files from the first replica
set to the newly added brick.
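
For illustration, a minimal sketch of the two paths on a distributed volume (VOLNAME and the brick path are placeholders, not from your setup):

Data-migrating removal:
#gluster volume remove-brick VOLNAME HOST1:/bricks/brick1 start
#gluster volume remove-brick VOLNAME HOST1:/bricks/brick1 status
#gluster volume remove-brick VOLNAME HOST1:/bricks/brick1 commit    (run commit once status shows completed)

Forceful removal (no migration; data on that brick is no longer served by the volume):
#gluster volume remove-brick VOLNAME HOST1:/bricks/brick1 force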

Thanks,

~Gaurav

- Original Message -
From: "ABHISHEK PALIWAL" 
To: gluster-users@gluster.org, gluster-de...@gluster.org
Sent: Tuesday, March 22, 2016 11:35:52 AM
Subject: [Gluster-users] Impact of force option in remove-brick

Hi Team, 

I have the following scenario: 

1. I have one replica 2 volume in which two bricks are available.
2. In this permutation and combination I got a UUID mismatch between peers.
3. Because of the UUID mismatch, when I tried to remove the brick on the second
board I got the Incorrect Brick failure.

Now, my question is: if I use the remove-brick command with the 'force' option,
does that mean it should remove the brick in any situation, whether the brick
is available or its UUID is mismatched?

I just want to know what is the difference in the following scenario: 

1. remove-brick without the force option 
2. remove-brick with the force option 


Regards 
Abhishek 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] question about remove-brick force

2016-03-21 Thread Gaurav Garg
Hi songxin,

>> 1. What is the difference between running "gluster volume remove-brick gv0
>> replica 1 128.224.162.255:/data/brick/gv1" and "gluster volume
>> remove-brick gv0 replica 1 128.224.162.255:/data/brick/gv1 force"? What does
>> "force" mean when removing a brick?

"gluster volume remove-brick gv0 replica 1 128.224.162.255:/data/brick/gv1  
does not make any sense. you need to give option while doing remove-brick for 
eg: start|stop|status|commit|force.

"gluster volume remove-brick gv0 replica 1 128.224.162.255:/data/brick/gv1 
force " means that you are forcefully removing brick from your cluster.

Hope this makes sense to you :)

Regards,
Gaurav


- Original Message -
From: "songxin" 
To: "Atin Mukherjee" 
Cc: "gluster-user" 
Sent: Monday, March 21, 2016 1:36:41 PM
Subject: [Gluster-users] question about remove-brick force

Hi, 
When I run the command "gluster volume remove-brick gv0 replica 1
128.224.162.255:/data/brick/gv1 force", it returns failed.

My question:
1. What is the difference between running "gluster volume remove-brick gv0
replica 1 128.224.162.255:/data/brick/gv1" and "gluster volume remove-brick gv0
replica 1 128.224.162.255:/data/brick/gv1 force"? What does "force" mean when
removing a brick?

Thanks, 
Xin 






___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS cluster peer stuck in state: Sent and Received peer request (Connected)

2016-03-19 Thread Gaurav Garg
>> > I’ve sent the logs directly as they push this message over the size limit.

Where have you sent the logs? I was not able to find them. Could you send the
glusterd logs so that we can start analyzing this issue?
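
One simple way to collect them on each node, assuming the default log location (adjust the path if your logs live elsewhere):

#tar czf glusterd-logs-$(hostname).tar.gz /var/log/glusterfs/etc-glusterfs-glusterd.vol.log*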

Thanks,

Regards,
Gaurav 

- Original Message -
From: "Atin Mukherjee" 
To: "tommy yardley" , gluster-users@gluster.org
Sent: Wednesday, March 16, 2016 5:49:05 PM
Subject: Re: [Gluster-users] GlusterFS cluster peer stuck in state: Sent and 
Received peer request (Connected)

I couldn't look into this today, sorry about that. I can only look into
this case on Monday. Anyone else to take this up?

~Atin

On 03/15/2016 09:57 PM, tommy.yard...@baesystems.com wrote:
> Hi Atin,
> 
>  
> 
> All nodes are running 3.5.8 – the probe sequence is:
> 172.31.30.64
> 
> 172.31.27.27 (node having issue)
> 
> 172.31.26.134 (node the peer probe is ran on)
> 
> 172.31.19.46
> 
>  
> 
> I’ve sent the logs directly as they push this message over the size limit.
> 
>  
> 
> look forward to your reply,
> 
>  
> 
> Tommy
> 
>  
> 
>  
> 
> *From:*Atin Mukherjee [mailto:atin.mukherje...@gmail.com]
> *Sent:* 15 March 2016 15:58
> *To:* Yardley, Tommy (UK Guildford)
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] GlusterFS cluster peer stuck in state:
> Sent and Received peer request (Connected)
> 
>  
> 
> This indicates the peer handshaking didn't go through properly and your
> cluster is messed up. Are you running 3.5.8 version in all the nodes?
> Could you get me the glusterd log from all the nodes and mention the
> peer probe sequence? I'd be able to look at it tomorrow only and get back.
> 
> -Atin
> Sent from one plus one
> 
> On 15-Mar-2016 9:16 pm, "tommy.yard...@baesystems.com" wrote:
> 
> Hi All,
> 
>  
> 
> I’m running GlusterFS on a cluster hosted in AWS. I have a script which
> provisions my instances and thus will set up GlusterFS (specifically:
> glusterfs 3.5.8).
> 
> My issue is that this only works ~50% of the time and the other 50% of
> the time one of the peers will be ‘stuck’ in the following state:
> 
> root@ip-xx-xx-xx-1:/home/ubuntu# gluster peer status
> Number of Peers: 3
> 
> Hostname: xx.xx.xx.2
> Uuid: 3b4c1fb9-b325-4204-98fd-2eb739fa867f
> State: Peer in Cluster (Connected)
> 
> Hostname: xx.xx.xx.3
> Uuid: acfc1794-9080-4eb0-8f69-3abe78bbee16
> State: Sent and Received peer request (Connected)
> 
> Hostname: xx.xx.xx.4
> Uuid: af33463d-1b32-4ffb-a4f0-46ce16151e2f
> State: Peer in Cluster (Connected)
> 
> Running gluster peer status on the instance that is affected yields:
> 
> root@ip-xx-xx-xx-3:/var/log/glusterfs# gluster peer status
> Number of Peers: 1
> 
> Hostname: xx.xx.xx.1
> Uuid: c4f17e9a-893b-48f0-a014-1a05cca09d01
> State: Peer is connected and Accepted (Connected)
> 
> Of which the status (Connected) in this case, will fluctuate between
> ‘Connected’ and ‘Disconnected’.
> 
>  
> 
> I have been unable to locate the cause of this issue. Has this been
> encountered before, and if so is there a general fix? I haven’t been
> able to find anything as of yet.
> 
>  
> 
> Many thanks,
> 
>  
> 
> *Tommy*
> 
>  
> 
> Please consider the environment before printing this email. This message
> should be regarded as confidential. If you have received this email in
> error please notify the sender and destroy it immediately. Statements of
> intent shall only become binding when confirmed in hard copy by an
> authorised signatory. The contents of this email may relate to dealings
> with other companies under the control of BAE Systems Applied
> Intelligence Limited, details of which can be found at
> http://www.baesystems.com/Businesses/index.htm.
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> Please consider the environment before printing this email. This message
> should be regarded as confidential. If you have received this email in
> error please notify the sender and destroy it immediately. Statements of
> intent shall only become binding when confirmed in hard copy by an
> authorised signatory. The contents of this email may relate to dealings
> with other companies under the control of BAE Systems Applied
> Intelligence Limited, details of which can be found at
> http://www.baesystems.com/Businesses/index.htm.
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to recover after one node breakdown

2016-03-19 Thread Gaurav Garg
>> Could I run some glusterfs commands on the good node to recover the replicate
>> volume, if I don't copy the files, including glusterd.info and other files,
>> from the good node to the new node?


Running glusterfs commands is not enough to recover the replicate volume. For
recovery you need to follow these steps:

1) Remove the /var/lib/glusterd/* data from the new node (if it is present) and
then start glusterd on the new node.
2) Kill glusterd on the new node.
3) From the 1st node (which is in good condition) execute the #gluster peer
status command and copy the UUID from the peer status output (you will see one
failed-node entry with its hostname and UUID), then put this UUID into the new
node's /var/lib/glusterd/glusterd.info file. From the 1st node you can also get
the UUID of the failed node by doing #cat /var/lib/glusterd/peers/*; it will
show the UUID of the failed node along with its hostname/IP address.
4) Copy /var/lib/glusterd/peers/* to the new node.
5) Rename one of the /var/lib/glusterd/peers/* files (you can find that file on
the new node by matching the UUID of the new node, from
/var/lib/glusterd/glusterd.info, against the /var/lib/glusterd/peers/* file
names) to the UUID of the 1st node (from that node's
/var/lib/glusterd/glusterd.info) and modify the content of the same file so
that it contains the UUID and hostname of the 1st node.
6) Now start glusterd on the new node.
7) Your volume will recover.


The above steps are mandatory to recover the failed node.
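
A rough command-level sketch of steps 3) to 6) above; NODE1 is the good node, FAILED-UUID and NODE1-UUID are placeholders for the real UUIDs, and paths assume a standard install:

On NODE1 (good node):
#gluster peer status                  (note the UUID shown for the failed node: FAILED-UUID)
#cat /var/lib/glusterd/glusterd.info  (note NODE1's own UUID: NODE1-UUID)

On the new node, with glusterd stopped:
#sed -i 's/^UUID=.*/UUID=FAILED-UUID/' /var/lib/glusterd/glusterd.info
#scp NODE1:/var/lib/glusterd/peers/* /var/lib/glusterd/peers/
#mv /var/lib/glusterd/peers/FAILED-UUID /var/lib/glusterd/peers/NODE1-UUID
(edit that file so its uuid= line is NODE1-UUID and its hostname1= line is NODE1's address)
#glusterd    (or: service glusterd start)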

Thanks,

Regards,
Gaurav

- Original Message -
From: "songxin" 
To: "Alastair Neil" 
Cc: gluster-users@gluster.org
Sent: Thursday, March 17, 2016 8:56:58 AM
Subject: Re: [Gluster-users] How to recover after one node breakdown

Thank you very much for your reply.

In fact, I am using a new node, whose rootfs is new, to replace the failed node.
The new node has the same IP address as the failed one.

The brick is on an external hard disk. Because the hard disk is mounted on the
node, the data on the brick of the failed node will not be lost, but it may be
out of sync with the brick of the good node. The brick of the failed node will
be mounted on the new node.

Now my recovery steps are to run some glusterfs commands on the good node as
below, after starting glusterd on the new node:
1. remove the brick of the new node from the volume (the volume type changes
from replicate to distribute)
2. peer detach the new node's IP (the new node's IP is the same as the failed
node's)
3. peer probe the new node's IP
4. add the brick of the new node to the volume (the volume type changes back to
replicate)

But many problems, like data being out of sync or the peer state being wrong,
will happen.

My question is below.

Could I run some glusterfs commands on the good node to recover the replicate
volume, if I don't copy the files, including glusterd.info and other files,
from the good node to the new node?

Thanks 
Xin 




Sent from my iPhone

On 17 March 2016, at 04:54, Alastair Neil < ajneil.t...@gmail.com > wrote:




Hopefully you have a backup of /var/lib/glusterd/glusterd.info and
/var/lib/glusterd/peers; if so, I think you can copy them back and restart
glusterd, and the volume info should get populated from the other node. If not,
you can probably reconstruct these from the corresponding files on the other
node.

i.e.:
On the unaffected node, the peers directory should have an entry for the failed
node containing the UUID of the failed node. The glusterd.info file should
enable you to recreate the peer file on the failed node.


On 16 March 2016 at 09:25, songxin < songxin_1...@126.com > wrote: 



Hi, 
Now I face a problem. 
The reproduction steps are as below.
1. I create a replicate volume using two bricks on two boards.
2. Start the volume.
3. One board breaks down and all files in its rootfs, including
/var/lib/glusterd/*, are lost.
4. Reboot the board; its IP does not change.

My question:
How do I recover the replicate volume?

Thanks, 
Xin 




___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] MInutes of Gluster Community Bug Triage meeting at 12:00 UTC on 8th March, 2016

2016-03-08 Thread Gaurav Garg
Hi All,

Following are the meeting minutes for today's Gluster community bug triage
meeting.


Meeting ended Tue Mar  8 12:58:43 2016 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot .

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.html

Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.txt

Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.log.html


Meeting summary
---
* Roll call  (ggarg_, 12:02:27)
  * ACTION: kkeithley_ will come up with a proposal to reduce the number
of bugs against "mainline" in NEW state  (ggarg, 12:06:49)
  * LINK:
https://ci.centos.org/view/Gluster/job/gluster_libgfapi-python/
(ndevos, 12:08:31)
  * LINK:

https://ci.centos.org/view/Gluster/job/gluster_libgfapi-python/buildTimeTrend
(ndevos, 12:09:42)
  * ACTION: ndevos to continue work on  proposing  some test-cases for
minimal libgfapi test  (ggarg, 12:11:15)
  * ACTION: Manikandan and Nandaja will update on bug automation
(ggarg, 12:13:09)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage   (ggarg,
12:14:02)
  * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1315422   (rafi,
12:18:41)
  * LINK: http://ur1.ca/om4jt   (rafi, 12:46:38)

Meeting ended at 12:58:43 UTC.



Action Items

* kkeithley_ will come up with a proposal to reduce the number of bugs
  against "mainline" in NEW state
* ndevos to continue work on  proposing  some test-cases for minimal
  libgfapi test
* Manikandan and Nandaja will update on bug automation



Action Items, by person
---
* Manikandan
  * Manikandan and Nandaja will update on bug automation
* ndevos
  * ndevos to continue work on  proposing  some test-cases for minimal
libgfapi test

  * kkeithley_ will come up with a proposal to reduce the number of bugs
against "mainline" in NEW state




People Present (lines said)
---
* ggarg (54)
* rafi (37)
* ndevos (15)
* obnox (15)
* ira (15)
* jiffin (13)
* Manikandan (9)
* Saravanakmr (6)
* glusterbot (6)
* zodbot (3)
* ggarg_ (3)
* hgowtham (1)


Thanks,

Regards,
Gaurav

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2016-03-08 Thread Gaurav Garg
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,

Regards,
Gaurav
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs debug build

2016-03-07 Thread Gaurav Garg
Hi jayakrishnan,

Where do you want to get debug messages: in the CLI logs, the glusterd logs, or somewhere else?

To get debug logs on the CLI side, run the command with the "--log-level=DEBUG"
option, for example:
#gluster volume --log-level=DEBUG status


To get debug logs on the glusterd side, run glusterd with the -LDEBUG option,
for example:
#glusterd -LDEBUG


To set the log level on the client/brick side, use the volume options, for example:
#gluster volume set <volname> diagnostics.client-log-level DEBUG
(or diagnostics.brick-log-level for the brick side)
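
To confirm the option took effect you can read it back, e.g. (volume name is a placeholder):
#gluster volume get <volname> diagnostics.client-log-level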


thanks,

~Gaurav

- Original Message -
From: "jayakrishnan mm" 
To: gluster-users@gluster.org
Sent: Monday, March 7, 2016 3:02:04 PM
Subject: [Gluster-users] glusterfs debug build

Hi 

I am trying to get debug messages while running gluster commands through the CLI.
Steps:
 
make clean // for a clean build 

1. ./autogen.sh 
2. ./configure --enable-debug 
3. make 
4. sudo make install -- DESTDIR=/ 

But I am unable to get any debug messages on the console. 
I also added some printf()s; those are not producing any output either.
Am I missing something?

Best Regards 
JK 



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.7.6 volume set: failed: One or more connected clients cannot support the feature being set

2016-03-02 Thread Gaurav Garg
Hi Steve,

As Atin pointed out, take a statedump by running the #kill -SIGUSR1 $(pidof
glusterd) command. It will create a .dump file in the /var/run/gluster/
directory. The client-op-version information will be present in the dump file.
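
For example, something along these lines (the exact dump file name includes the PID and a timestamp, so it may differ; grep whichever new file appears):

#kill -SIGUSR1 $(pidof glusterd)
#ls /var/run/gluster/
#grep -i op-version /var/run/gluster/glusterdump.*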

Thanks,
~Gaurav

- Original Message -
From: "Steve Dainard" <sdain...@spd1.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: "gluster-users@gluster.org List" <gluster-users@gluster.org>
Sent: Thursday, March 3, 2016 12:07:25 AM
Subject: Re: [Gluster-users] gluster 3.7.6 volume set: failed: One or more 
connected clients cannot support the feature being set

From the client-side logs I can see version info on mount:

Final graph:
+--+
  1: volume storage-client-0
  2: type protocol/client
  3: option clnt-lk-version 1
  4: option volfile-checksum 0
  5: option volfile-key /storage
  6: option client-version 3.7.6
  7: option process-uuid
template-centos7-compute.compute.domain-2773-2016/03/02-18:28:34:328100-storage-client-0-0-0
  8: option fops-version 1298437
  9: option ping-timeout 42
 10: option remote-host 10.0.231.50
 11: option remote-subvolume /mnt/raid6-storage/storage
 12: option transport-type socket
 13: option send-gids true
 14: end-volume
 15:
 16: volume storage-client-1
 17: type protocol/client
 18: option clnt-lk-version 1
 19: option volfile-checksum 0
 20: option volfile-key /storage
 21: option client-version 3.7.6
 22: option process-uuid
template-centos7-compute.compute.domain-2773-2016/03/02-18:28:34:328100-storage-client-1-0-0
 23: option fops-version 1298437
 24: option ping-timeout 42
 25: option remote-host 10.0.231.51
 26: option remote-subvolume /mnt/raid6-storage/storage
 27: option transport-type socket
 28: option send-gids true
 29: end-volume
 30:
 31: volume storage-client-2
 32: type protocol/client
 33: option clnt-lk-version 1
 34: option volfile-checksum 0
 35: option volfile-key /storage
 36: option client-version 3.7.6
 37: option process-uuid
template-centos7-compute.compute.domain-2773-2016/03/02-18:28:34:328100-storage-client-2-0-0
 38: option fops-version 1298437
 39: option ping-timeout 42
 40: option remote-host 10.0.231.52
 41: option remote-subvolume /mnt/raid6-storage/storage
 42: option transport-type socket
 43: option send-gids true
 44: end-volume
 45:
 46: volume storage-client-3
 47: type protocol/client
 48: option clnt-lk-version 1
 49: option volfile-checksum 0
 50: option volfile-key /storage
 51: option client-version 3.7.6
 52: option process-uuid
template-centos7-compute.compute.domain-2773-2016/03/02-18:28:34:328100-storage-client-3-0-0
 53: option fops-version 1298437
 54: option ping-timeout 42
 55: option remote-host 10.0.231.53
 56: option remote-subvolume /mnt/raid6-storage/storage
 57: option transport-type socket
 58: option send-gids true
 59: end-volume
 60:
 61: volume storage-client-4
 62: type protocol/client
 63: option ping-timeout 42
 64: option remote-host 10.0.231.54
 65: option remote-subvolume /mnt/raid6-storage/storage
 66: option transport-type socket
 67: option send-gids true
 68: end-volume
 69:
 70: volume storage-client-5
 71: type protocol/client
 72: option ping-timeout 42
 73: option remote-host 10.0.231.55
 74: option remote-subvolume /mnt/raid6-storage/storage
 75: option transport-type socket
 76: option send-gids true
 77: end-volume
 78:
 79: volume storage-dht
 80: type cluster/distribute
 81: subvolumes storage-client-0 storage-client-1 storage-client-2
storage-client-3 storage-client-4 storage-client-5
 82: end-volume


But not the client op-version; how can I retrieve this info?

Thanks

On Tue, Mar 1, 2016 at 10:19 PM, Gaurav Garg <gg...@redhat.com> wrote:

> Hi Steve,
>
> Which version you have upgraded client, could you tell us client
> op-version after upgrade ?
>
>
> have you upgraded all of your clients ?
>
>
> Thanks,
> Gaurav
>
>
> - Original Message -
> From: "Steve Dainard" <sdain...@spd1.com>
> To: "gluster-users@gluster.org List" <gluster-users@gluster.org>
> Sent: Wednesday, March 2, 2016 1:10:27 AM
> Subject: [Gluster-users] gluster 3.7.6 volume set: failed: One or more
> connected clients cannot support the feature being set
>
> Gluster 3.7.6
> 'storage' is a distributed volume
>
> # gluster volume set storage rebal-throttle lazy
> volume set: failed: One or more connected clients cannot support the
> feature being set. These clients need to be upgraded or disconnected before
> running this command again
>
> I found a client connected using version 3.6.7 so

Re: [Gluster-users] gluster 3.7.6 volume set: failed: One or more connected clients cannot support the feature being set

2016-03-01 Thread Gaurav Garg
Hi Steve,

Which version have you upgraded the client to? Could you tell us the client
op-version after the upgrade?


Have you upgraded all of your clients?
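
For a quick server-side check of the versions involved (paths as on a standard 3.7 install; 'storage' is the volume from your mail):

#grep operating-version /var/lib/glusterd/glusterd.info
#grep op-version /var/lib/glusterd/vols/storage/info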


Thanks,
Gaurav


- Original Message -
From: "Steve Dainard" 
To: "gluster-users@gluster.org List" 
Sent: Wednesday, March 2, 2016 1:10:27 AM
Subject: [Gluster-users] gluster 3.7.6 volume set: failed: One or more 
connected clients cannot support the feature being set

Gluster 3.7.6 
'storage' is a distributed volume 

# gluster volume set storage rebal-throttle lazy 
volume set: failed: One or more connected clients cannot support the feature 
being set. These clients need to be upgraded or disconnected before running 
this command again 

I found a client connected using version 3.6.7 so I upgraded & umount/mount the 
gluster volume on the client but I'm still getting this error. 

I've run grep "accepted client from" /var/log/glusterfs/bricks/* | grep -v 
3.7.6 and I get a few returns from the client above, all dated last week. 

I've run 'gluster volume status storage clients' and checked the connected 
clients manually, they're all running 3.7.6. 

/var/log/gluster/etc-glusterfs-glusterd.vol.log: 
[2016-03-01 19:23:20.180821] E [MSGID: 106022] 
[glusterd-utils.c:10154:glusterd_check_client_op_version_support] 0-management: 
One or more c 
lients don't support the required op-version 
[2016-03-01 19:23:20.180853] E [MSGID: 106301] 
[glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of operation 
'Volume Set' fa 
iled on localhost : One or more connected clients cannot support the feature 
being set. These clients need to be upgraded or disconnected be 
fore running this command again 

Also tried setting the diagnostics.brick-log-level logging level and got the 
same error. 

/var/lib/glusterd/vols/storage/info: 
type=0 
count=6 
status=1 
sub_count=0 
stripe_count=1 
replica_count=1 
disperse_count=0 
redundancy_count=0 
version=26 
transport-type=0 
volume-id=26d355cb-c486-481f-ac16-e25390e73775 
username=eb9e2063-6ba8-4d16-a54f-2c7cf7740c4c 
password= 
op-version=3 
client-op-version=3 
quota-version=1 
parent_volname=N/A 
restored_from_snap=---- 
snap-max-hard-limit=256 
features.quota-deem-statfs=on 
features.inode-quota=on 
diagnostics.brick-log-level=WARNING 
features.quota=on 
performance.readdir-ahead=on 
performance.cache-size=1GB 
performance.stat-prefetch=on 
brick-0=10.0.231.50:-mnt-raid6-storage-storage 
brick-1=10.0.231.51:-mnt-raid6-storage-storage 
brick-2=10.0.231.52:-mnt-raid6-storage-storage 
brick-3=10.0.231.53:-mnt-raid6-storage-storage 
brick-4=10.0.231.54:-mnt-raid6-storage-storage 
brick-5=10.0.231.55:-mnt-raid6-storage-storage 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-03-01 Thread Gaurav Garg
Hi abhishek, 

Not yet,

I was busy with some other stuff. Will let you know about it.

Thanks,
~Gaurav

- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Tuesday, March 1, 2016 5:57:12 PM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

Have you had time to analyze the logs?

Regards,
Abhishek

On Thu, Feb 25, 2016 at 11:23 AM, Gaurav Garg <gg...@redhat.com> wrote:

> sure,
>
> Thanks,
> ~Gaurav
>
>  Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: "Gaurav Garg" <gg...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Thursday, February 25, 2016 10:40:11 AM
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>
> Hi Gaurav,
>
>
> Here, I am sharing the log.zip file having logs for both of the nodes and
> other logs as well.
>
> Now I think we can analyze the logs and find out the actual problem of this
> issue.
>
> Regards,
> Abhishek
>
> On Wed, Feb 24, 2016 at 2:44 PM, Gaurav Garg <gg...@redhat.com> wrote:
>
> > hi abhishek,
> >
> > i need to look further why are you falling in this situation. file name
> > and uuid in /var/lib/glusterd/peers  should be same. each file in
> > /var/lib/glusterd/peers having information about its peer in the cluster.
> >
> > could you join #gluster channel on freenode. just ping me (irc name:
> > ggarg) after joining the channel.
> >
> > Thanks,
> > Gaurav
> >
> >
> > - Original Message -
> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> > To: "Gaurav Garg" <gg...@redhat.com>
> > Cc: gluster-users@gluster.org
> > Sent: Wednesday, February 24, 2016 12:31:51 PM
> > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
> >
> > Hi Gaurav,
> >
> > I have noticed one more thing in etc-glusterfs-glusterd.vol.log file with
> > respect to UUID of Peer <10.32.1.144>
> > It has two UUID
> > Before removing
> >
> > UUID is - b88c74b9-457d-4864-9fe6-403f6934d7d1 and after inserting the
> node
> > UUID is - 5ec06937-5f85-4a9d-b29e-4227bbb7b4fa
> >
> > Also have one file in glusterd/peers/ directory with the same name of
> first
> > UUID.
> >
> > What does this file mean in peers directory? is this file providing some
> > kind of linking between both of the UUID?
> >
> > Please find this file as an attachment.
> >
> > Regards,
> > Abhishek
> >
> > On Wed, Feb 24, 2016 at 12:06 PM, Gaurav Garg <gg...@redhat.com> wrote:
> >
> > > Hi abhishek,
> > >
> > > yes i looked into configuration file's that you have provided. there
> > every
> > > things seems to be fine.
> > >
> > > seems like some other problem. i will look into it today and will come
> > > back to you.
> > >
> > > thanks,
> > >
> > > ~Gaurav
> > >
> > > - Original Message -
> > > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> > > To: "Gaurav Garg" <gg...@redhat.com>
> > > Cc: gluster-users@gluster.org
> > > Sent: Wednesday, February 24, 2016 12:02:47 PM
> > > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
> > >
> > > Hi Gaurav,
> > >
> > > Have you get the time to see the logs files which you asked yesterday?
> > >
> > > Regards,
> > > Abhishek
> > >
> > > On Tue, Feb 23, 2016 at 3:05 PM, ABHISHEK PALIWAL <
> > abhishpali...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Hi Gaurav,
> > > >
> > > > Please find the vol.tar file.
> > > >
> > > > Regards,
> > > > Abhishek
> > > >
> > > > On Tue, Feb 23, 2016 at 2:37 PM, Gaurav Garg <gg...@redhat.com>
> wrote:
> > > >
> > > >> Hi abhishek,
> > > >>
> > > >> >> But after analyzing the following logs from the 1st board seems
> > that
> > > >> the
> > > >> process which will update the second brick in output of "# gluster
> > > volume
> > > >> status c_glusterfs" takes sometime to update this table and before
> the
> > > >> updation of this table remove-brick is getting executed that is why
> it
>

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-24 Thread Gaurav Garg
sure,

Thanks,
~Gaurav

 Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Thursday, February 25, 2016 10:40:11 AM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,


Here, I am sharing the log.zip file having logs for both of the nodes and
other logs as well.

Now I think we can analyze the logs and find out the actual problem of this
issue.

Regards,
Abhishek

On Wed, Feb 24, 2016 at 2:44 PM, Gaurav Garg <gg...@redhat.com> wrote:

> hi abhishek,
>
> i need to look further why are you falling in this situation. file name
> and uuid in /var/lib/glusterd/peers  should be same. each file in
> /var/lib/glusterd/peers having information about its peer in the cluster.
>
> could you join #gluster channel on freenode. just ping me (irc name:
> ggarg) after joining the channel.
>
> Thanks,
> Gaurav
>
>
> - Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: "Gaurav Garg" <gg...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Wednesday, February 24, 2016 12:31:51 PM
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>
> Hi Gaurav,
>
> I have noticed one more thing in etc-glusterfs-glusterd.vol.log file with
> respect to UUID of Peer <10.32.1.144>
> It has two UUID
> Before removing
>
> UUID is - b88c74b9-457d-4864-9fe6-403f6934d7d1 and after inserting the node
> UUID is - 5ec06937-5f85-4a9d-b29e-4227bbb7b4fa
>
> Also have one file in glusterd/peers/ directory with the same name of first
> UUID.
>
> What does this file mean in peers directory? is this file providing some
> kind of linking between both of the UUID?
>
> Please find this file as an attachment.
>
> Regards,
> Abhishek
>
> On Wed, Feb 24, 2016 at 12:06 PM, Gaurav Garg <gg...@redhat.com> wrote:
>
> > Hi abhishek,
> >
> > yes i looked into configuration file's that you have provided. there
> every
> > things seems to be fine.
> >
> > seems like some other problem. i will look into it today and will come
> > back to you.
> >
> > thanks,
> >
> > ~Gaurav
> >
> > - Original Message -
> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> > To: "Gaurav Garg" <gg...@redhat.com>
> > Cc: gluster-users@gluster.org
> > Sent: Wednesday, February 24, 2016 12:02:47 PM
> > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
> >
> > Hi Gaurav,
> >
> > Have you get the time to see the logs files which you asked yesterday?
> >
> > Regards,
> > Abhishek
> >
> > On Tue, Feb 23, 2016 at 3:05 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com
> > >
> > wrote:
> >
> > > Hi Gaurav,
> > >
> > > Please find the vol.tar file.
> > >
> > > Regards,
> > > Abhishek
> > >
> > > On Tue, Feb 23, 2016 at 2:37 PM, Gaurav Garg <gg...@redhat.com> wrote:
> > >
> > >> Hi abhishek,
> > >>
> > >> >> But after analyzing the following logs from the 1st board seems
> that
> > >> the
> > >> process which will update the second brick in output of "# gluster
> > volume
> > >> status c_glusterfs" takes sometime to update this table and before the
> > >> updation of this table remove-brick is getting executed that is why it
> > is
> > >> getting failed.
> > >>
> > >> It should not take that much of time. If your peer probe is successful
> > >> and you are able to
> > >> see 2nd broad peer entry in #gluster peer status command then it have
> > >> updated all information
> > >> of volume internally.
> > >>
> > >> your gluster volume status showing 2nd board entry:
> > >>
> > >> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49153 0  Y
> > >> 2537
> > >> Self-heal Daemon on localhost   N/A   N/AY
> > >> 5577
> > >> Self-heal Daemon on 10.32.1.144 N/A   N/AY
> > >> 3850
> > >>
> > >> but its not showing 2nd board brick entry.
> > >>
> > >>
> > >> Did you perform any manual operation with configuration file which
> > >> resides in /var/lib/glusterd/* ?
> > >>
> > >> could you attach/paste the file
> >

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-24 Thread Gaurav Garg
hi abhishek,

I need to look further into why you are running into this situation. The file
name and the uuid inside each file in /var/lib/glusterd/peers should be the
same; each file in /var/lib/glusterd/peers holds information about one peer in
the cluster.
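
A quick, read-only way to cross-check this on each board:

#ls /var/lib/glusterd/peers/
#cat /var/lib/glusterd/peers/*
#gluster peer status

The file names should match the UUIDs that gluster peer status reports for the
other peers.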

Could you join the #gluster channel on Freenode? Just ping me (IRC name: ggarg)
after joining the channel.

Thanks,
Gaurav


- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Wednesday, February 24, 2016 12:31:51 PM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

I have noticed one more thing in the etc-glusterfs-glusterd.vol.log file with
respect to the UUID of peer <10.32.1.144>.
It has two UUIDs:
before removing, the
UUID is b88c74b9-457d-4864-9fe6-403f6934d7d1, and after inserting the node the
UUID is 5ec06937-5f85-4a9d-b29e-4227bbb7b4fa.

There is also one file in the glusterd/peers/ directory with the same name as
the first UUID.

What does this file mean in the peers directory? Is this file providing some
kind of link between the two UUIDs?

Please find this file as an attachment.

Regards,
Abhishek

On Wed, Feb 24, 2016 at 12:06 PM, Gaurav Garg <gg...@redhat.com> wrote:

> Hi abhishek,
>
> yes i looked into configuration file's that you have provided. there every
> things seems to be fine.
>
> seems like some other problem. i will look into it today and will come
> back to you.
>
> thanks,
>
> ~Gaurav
>
> - Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: "Gaurav Garg" <gg...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Wednesday, February 24, 2016 12:02:47 PM
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>
> Hi Gaurav,
>
> Have you get the time to see the logs files which you asked yesterday?
>
> Regards,
> Abhishek
>
> On Tue, Feb 23, 2016 at 3:05 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> >
> wrote:
>
> > Hi Gaurav,
> >
> > Please find the vol.tar file.
> >
> > Regards,
> > Abhishek
> >
> > On Tue, Feb 23, 2016 at 2:37 PM, Gaurav Garg <gg...@redhat.com> wrote:
> >
> >> Hi abhishek,
> >>
> >> >> But after analyzing the following logs from the 1st board seems that
> >> the
> >> process which will update the second brick in output of "# gluster
> volume
> >> status c_glusterfs" takes sometime to update this table and before the
> >> updation of this table remove-brick is getting executed that is why it
> is
> >> getting failed.
> >>
> >> It should not take that much of time. If your peer probe is successful
> >> and you are able to
> >> see 2nd broad peer entry in #gluster peer status command then it have
> >> updated all information
> >> of volume internally.
> >>
> >> your gluster volume status showing 2nd board entry:
> >>
> >> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49153 0  Y
> >> 2537
> >> Self-heal Daemon on localhost   N/A   N/AY
> >> 5577
> >> Self-heal Daemon on 10.32.1.144 N/A   N/AY
> >> 3850
> >>
> >> but its not showing 2nd board brick entry.
> >>
> >>
> >> Did you perform any manual operation with configuration file which
> >> resides in /var/lib/glusterd/* ?
> >>
> >> could you attach/paste the file
> >> /var/lib/glusterd/vols/c_glusterfs/trusted-*.tcp-fuse.vol file.
> >>
> >>
> >> Thanks,
> >>
> >> Regards,
> >> Gaurav
> >>
> >> - Original Message -
> >> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> >> To: "Gaurav Garg" <gg...@redhat.com>
> >> Cc: gluster-users@gluster.org
> >> Sent: Tuesday, February 23, 2016 1:33:30 PM
> >> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
> >>
> >> Hi Gaurav,
> >>
> >> For the network connectivity I am doing peer probe to the 10.32.1.144
> i.e.
> >> 2nd board thats working fine means connectivity is there.
> >>
> >> #peer probe 10.32.1.144
> >>
> >> if the above command get success
> >>
> >> I executed the the remove-brick command which is getting failed.
> >>
> >> So,  now it seems the the peer probe will not give the correct
> >> connectivity
> >> status to execute the remove-brick command.
> >>

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-23 Thread Gaurav Garg
Hi abhishek,

Yes, I looked into the configuration files that you provided. Everything there
seems to be fine.

It seems like some other problem. I will look into it today and come back to
you.

thanks,

~Gaurav

- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Wednesday, February 24, 2016 12:02:47 PM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

Have you had time to look at the log files that you asked for yesterday?

Regards,
Abhishek

On Tue, Feb 23, 2016 at 3:05 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Gaurav,
>
> Please find the vol.tar file.
>
> Regards,
> Abhishek
>
> On Tue, Feb 23, 2016 at 2:37 PM, Gaurav Garg <gg...@redhat.com> wrote:
>
>> Hi abhishek,
>>
>> >> But after analyzing the following logs from the 1st board seems that
>> the
>> process which will update the second brick in output of "# gluster volume
>> status c_glusterfs" takes sometime to update this table and before the
>> updation of this table remove-brick is getting executed that is why it is
>> getting failed.
>>
>> It should not take that much of time. If your peer probe is successful
>> and you are able to
>> see 2nd broad peer entry in #gluster peer status command then it have
>> updated all information
>> of volume internally.
>>
>> your gluster volume status showing 2nd board entry:
>>
>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49153 0  Y
>> 2537
>> Self-heal Daemon on localhost   N/A   N/AY
>> 5577
>> Self-heal Daemon on 10.32.1.144 N/A   N/AY
>> 3850
>>
>> but its not showing 2nd board brick entry.
>>
>>
>> Did you perform any manual operation with configuration file which
>> resides in /var/lib/glusterd/* ?
>>
>> could you attach/paste the file
>> /var/lib/glusterd/vols/c_glusterfs/trusted-*.tcp-fuse.vol file.
>>
>>
>> Thanks,
>>
>> Regards,
>> Gaurav
>>
>> - Original Message -
>> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>> To: "Gaurav Garg" <gg...@redhat.com>
>> Cc: gluster-users@gluster.org
>> Sent: Tuesday, February 23, 2016 1:33:30 PM
>> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>>
>> Hi Gaurav,
>>
>> For the network connectivity I am doing peer probe to the 10.32.1.144 i.e.
>> 2nd board thats working fine means connectivity is there.
>>
>> #peer probe 10.32.1.144
>>
>> if the above command get success
>>
>> I executed the the remove-brick command which is getting failed.
>>
>> So,  now it seems the the peer probe will not give the correct
>> connectivity
>> status to execute the remove-brick command.
>>
>> But after analyzing the following logs from the 1st board seems that the
>> process which will update the second brick in output of "# gluster volume
>> status c_glusterfs" takes sometime to update this table and before the
>> updation of this table remove-brick is getting executed that is why it is
>> getting failed.
>>
>> ++
>>
>> *1st board:*
>> # gluster volume info
>> status
>> gluster volume status c_glusterfs
>> Volume Name: c_glusterfs
>> Type: Replicate
>> Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
>> Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
>> Options Reconfigured:
>> nfs.disable: on
>> network.ping-timeout: 4
>> performance.readdir-ahead: on
>> # gluster peer status
>> Number of Peers: 1
>>
>> Hostname: 10.32.1.144
>> Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1
>> State: Peer in Cluster (Connected)
>> # gluster volume status c_glusterfs
>> Status of volume: c_glusterfs
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>>
>> --
>>
>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49153 0  Y
>> 2537
>> Self-heal Daemon on localhost   N/A   N/AY
>> 5577
>> Self-heal Daemon on 10.32.1.144 N/A   N/AY
>> 3850
>>
>> Task Status of

Re: [Gluster-users] question about replicate volume

2016-02-23 Thread Gaurav Garg
Hi songxin,

Please find comments inline.

- Original Message -
From: "songxin" 
To: gluster-users@gluster.org
Cc: gluster-users@gluster.org
Sent: Wednesday, February 24, 2016 7:16:02 AM
Subject: [Gluster-users] question about replicate volume

Hi all, 
I have a question about replicate volume as below. 

Precondition:
1. A node IP: 128.224.162.163
2. B node IP: 128.224.162.255
3. A node brick: /data/brick/gv0
4. B node brick: /data/brick/gv0

Reproduce steps:
1. gluster peer probe 128.224.162.255  // run on A node
2. gluster volume create gv0 128.224.162.163:/data/brick/gv0 force  // run on A node
3. gluster volume start gv0  // run on A node
4. mount -t glusterfs 128.224.162.163:/gv0 gluster  // run on A node
5. create some files (a,b,c) in directory gluster  // run on A node
6. gluster volume add-brick gv0 replica 2 128.224.162.255:/data/brick/gv0 force  // run on A node
7. create some files (d,e,f) in directory gluster  // run on A node
8. mount -t glusterfs 128.224.162.163:/gv0 gluster  // run on B node
9. ls gluster  // run on B node

My questions are as below.

After step 6, the volume type is changed from distribute to replicate.
The files (a,b,c) were created when the volume type was distribute.
The files (d,e,f) were created when the volume type was replicate.

>> After step 6, will the volume replicate the files (a,b,c) on the two bricks,
>> or will it only replicate the files (d,e,f) on the two bricks?
If I run "gluster volume heal gv0 full", will the volume replicate the files
(a,b,c) on the two bricks?


After step 6 the volume has been converted to a replicate volume, so any file
you create from the mount point will be replicated to all replica sets. In your
case, after step 6 it will replicate only the files (d,e,f), because before
step 6 the volume was distributed. To replicate all the files created before
step 6 you need to run #gluster volume heal <volname> full. After executing
this command the files in both replica bricks should be the same.
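
For example, with the volume from your steps:

#gluster volume heal gv0 full
#gluster volume heal gv0 info     (to watch the pending heal entries drain)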

Thanks,

~Gaurav

Thanks, 
Xin 






___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfs client crashes

2016-02-23 Thread Gaurav Garg
CCing an md-cache team member for this issue.

Thanks,
~Gaurav

- Original Message -
From: "Fredrik Widlund" <fredrik.widl...@gmail.com>
To: glus...@deej.net
Cc: gluster-users@gluster.org
Sent: Tuesday, February 23, 2016 5:51:37 PM
Subject: Re: [Gluster-users] glusterfs client crashes

Hi, 

I have experienced what looks like a very similar crash. Gluster 3.7.6 on
CentOS 7. No errors on the bricks or on the other clients mounted at the time.
Relatively high load at the time.

Remounting the filesystem brought it back online. 


pending frames: 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(STAT) 
frame : type(1) op(STAT) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(1) op(READ) 
frame : type(0) op(0) 
patchset: git:// git.gluster.com/glusterfs.git 
signal received: 6 
time of crash: 
2016-02-22 10:28:45 
configuration details: 
argp 1 
backtrace 1 
dlfcn 1 
libpthread 1 
llistxattr 1 
setfsid 1 
spinlock 1 
epoll.h 1 
xattr.h 1 
st_atim.tv_nsec 1 
package-string: glusterfs 3.7.6 
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7f83387f7012] 
/lib64/libglusterfs.so.0(gf_print_trace+0x31d)[0x7f83388134dd] 
/lib64/libc.so.6(+0x35670)[0x7f8336ee5670] 
/lib64/libc.so.6(gsignal+0x37)[0x7f8336ee55f7] 
/lib64/libc.so.6(abort+0x148)[0x7f8336ee6ce8] 
/lib64/libc.so.6(+0x75317)[0x7f8336f25317] 
/lib64/libc.so.6(+0x7cfe1)[0x7f8336f2cfe1] 
/lib64/libglusterfs.so.0(loc_wipe+0x27)[0x7f83387f4d47] 
/usr/lib64/glusterfs/3.7.6/xlator/performance/md-cache.so(mdc_local_wipe+0x11)[0x7f8329c8e5f1]
 
/usr/lib64/glusterfs/3.7.6/xlator/performance/md-cache.so(mdc_stat_cbk+0x10c)[0x7f8329c8f4fc]
 
/lib64/libglusterfs.so.0(default_stat_cbk+0xac)[0x7f83387fcc5c] 
/usr/lib64/glusterfs/3.7.6/xlator/cluster/distribute.so(dht_file_attr_cbk+0x149)[0x7f832ab2a409]
 
/usr/lib64/glusterfs/3.7.6/xlator/protocol/client.so(client3_3_stat_cbk+0x3c6)[0x7f832ad6d266]
 
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f83385c5b80] 
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1bf)[0x7f83385c5e3f] 
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f83385c1983] 
/usr/lib64/glusterfs/3.7.6/rpc-transport/socket.so(+0x9506)[0x7f832d261506] 
/usr/lib64/glusterfs/3.7.6/rpc-transport/socket.so(+0xc3f4)[0x7f832d2643f4] 
/lib64/libglusterfs.so.0(+0x878ea)[0x7f83388588ea] 
/lib64/libpthread.so.0(+0x7dc5)[0x7f833765fdc5] 
/lib64/libc.so.6(clone+0x6d)[0x7f8336fa621d] 



Kind regards, 
Fredrik Widlund 

On Tue, Feb 23, 2016 at 1:00 PM, < gluster-users-requ...@gluster.org > wrote: 


Date: Mon, 22 Feb 2016 15:08:47 -0500 
From: Dj Merrill < glus...@deej.net > 
To: Gaurav Garg < gg...@redhat.com > 
Cc: gluster-users@gluster.org 
Subject: Re: [Gluster-users] glusterfs client crashes 
Message-ID: < 56cb6acf.5080...@deej.net > 
Content-Type: text/plain; charset=utf-8; format=flowed 

On 2/21/2016 2:23 PM, Dj Merrill wrote: 
> Very interesting. They were reporting both bricks offline, but the 
> processes on both servers were still running. Restarting glusterfsd on 
> one of the servers brought them both back online. 

I realize I wasn't clear in my comments yesterday and would like to 
elaborate on this a bit further. The "very interesting" comment was 
sparked because when we were running 3.7.6, the bricks were not 
reporting as offline when a client was having an issue, so this is new 
behaviour now that we are running 3.7.8 (or a different issue entirely). 

The other point that I was not clear on is that we may have one client 
reporting the "Transport endpoint is not connected" error, but the other 
40+ clients all continue to work properly. This is the case with both 
3.7.6 and 3.7.8. 

Curious, how can the other clients continue to work fine if both Gluster 
3.7.8 servers are reporting the bricks as offline? 

What does "offline" mean in this context? 


Re: the server logs, here is what I've found so far listed on both 
gluster servers (glusterfs1 and glusterfs2): 

[2016-02-21 08:06:02.785788] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 
0-glusterfs: No change in volfile, continuing 
[2016-02-21 18:48:20.677010] W [socket.c:588:__socket_rwv] 
0-gv0-client-1: readv on (sanitized IP of glusterfs2):49152 failed (No 
data available) 
[2016-02-21 18:48:20.677096] I [MSGID: 114018] 
[client.c:2030:client_rpc_notify] 0-gv0-client-1: disconne

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-23 Thread Gaurav Garg
Hi abhishek,

>> But after analyzing the following logs from the 1st board, it seems that the
process which updates the second brick in the output of "# gluster volume
status c_glusterfs" takes some time to update this table, and remove-brick gets
executed before this table is updated; that is why it is failing.

It should not take that much time. If your peer probe was successful and you
are able to see the 2nd board's peer entry in the #gluster peer status output,
then glusterd has updated all the volume information internally.

Your gluster volume status output shows a 2nd board entry:

Brick 10.32.0.48:/opt/lvmdir/c2/brick   49153 0  Y
2537
Self-heal Daemon on localhost   N/A   N/AY
5577
Self-heal Daemon on 10.32.1.144 N/A   N/AY
3850

but it is not showing the 2nd board's brick entry.
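
As a quick check before retrying remove-brick, something like the following
(using the volume name and address from this thread; the grep is only
illustrative) shows at a glance whether the 2nd board's peer and brick have
actually appeared:

# gluster peer status
# gluster volume status c_glusterfs | grep 10.32.1.144

If the brick line for 10.32.1.144 is missing, or shows "N" in the Online
column, the brick process on the 2nd board is not up yet.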


Did you perform any manual operation on the configuration files which reside in
/var/lib/glusterd/* ?

Could you attach/paste the
/var/lib/glusterd/vols/c_glusterfs/trusted-*.tcp-fuse.vol file?


Thanks,

Regards,
Gaurav 

- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Tuesday, February 23, 2016 1:33:30 PM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

For the network connectivity check I am doing a peer probe to 10.32.1.144, i.e.
the 2nd board; that works fine, which means connectivity is there.

#peer probe 10.32.1.144

If the above command succeeds,

I execute the remove-brick command, which is failing.

So, now it seems that the peer probe does not give the correct connectivity
status for executing the remove-brick command.

But after analyzing the following logs from the 1st board, it seems that the
process which updates the second brick in the output of "# gluster volume
status c_glusterfs" takes some time to update this table, and remove-brick gets
executed before this table is updated; that is why it is failing.

++

*1st board:*
# gluster volume info
status
gluster volume status c_glusterfs
Volume Name: c_glusterfs
Type: Replicate
Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
Options Reconfigured:
nfs.disable: on
network.ping-timeout: 4
performance.readdir-ahead: on
# gluster peer status
Number of Peers: 1

Hostname: 10.32.1.144
Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1
State: Peer in Cluster (Connected)
# gluster volume status c_glusterfs
Status of volume: c_glusterfs
Gluster process TCP Port  RDMA Port  Online
Pid
--

Brick 10.32.0.48:/opt/lvmdir/c2/brick   49153 0  Y
2537
Self-heal Daemon on localhost   N/A   N/AY
5577
Self-heal Daemon on 10.32.1.144 N/A   N/AY
3850

Task Status of Volume c_glusterfs
--

There are no active volume tasks

+++

I'll try this with some delay, or wait to run remove-brick until the # gluster
volume status c_glusterfs command shows the second brick in the list.

Maybe this approach will resolve the issue.

Please comment if you agree with my observation.

Regards,
Abhishek

On Tue, Feb 23, 2016 at 1:10 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Gaurav,
>
> In my case we are removing the brick in the offline state with the force
> option like in the following way:
>
>
>
> *gluster volume remove-brick %s replica 1 %s:%s force --mode=script*
> but still getting the failure or remove-brick
>
> it seems that brick is not present which we are trying to remove here are
> the log snippet of both of the boards
>
>
> *1st board:*
> # gluster volume info
> status
> gluster volume status c_glusterfs
> Volume Name: c_glusterfs
> Type: Replicate
> Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
> Options Reconfigured:
> nfs.disable: on
> network.ping-timeout: 4
> performance.readdir-ahead: on
> # gluster peer status
> Number of Peers: 1
>
> Hostname: 10.32.1.144
> Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1
> State: Peer in Cluster (Connected)
> # gluster volume status c_glusterfs
> Status of volume: c_glusterfs
> Gluster process TCP Port  RDMA Port  Online
> Pid
> -

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-22 Thread Gaurav Garg
Hi abhishek,

>> Can we perform remove-brick operation on the offline brick? what is the
meaning of offline and online brick?

No, you can't perform a remove-brick operation on an offline brick. A brick
being offline means its brick process is not running. You can see this by
executing #gluster volume status: if a brick is offline, the respective brick
will show an "N" entry in the Online column of the #gluster volume status
output. Alternatively, you can check whether the glusterfsd process for that
brick is running by executing #ps aux | grep glusterfsd; this command lists all
the brick processes, and from them you can filter out which ones are online and
which are not.

But if you want to perform a remove-brick operation on an offline brick, then
you need to execute it with the force option: #gluster volume remove-brick
<volname> hostname:/brick_name force. This might lead to data loss.
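
Putting the above together, a rough sequence for the volume in this thread
(names taken from earlier mails; adjust them to your setup) would be:

# gluster volume status c_glusterfs
  (check the Online column; "N" means the brick process is down)
# ps aux | grep glusterfsd
  (confirm whether a glusterfsd process exists for that brick)
# gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force

Remember that force skips data migration, so on a pure distribute volume it can
lose data.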



>> Also, Is there any logic in gluster through which we can check the
connectivity of node established or not before performing the any operation
on brick?

Yes, you can check it by executing the #gluster peer status command.


Thanks,

~Gaurav


- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Tuesday, February 23, 2016 11:50:43 AM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

one general question related to gluster bricks.

Can we perform remove-brick operation on the offline brick? what is the
meaning of offline and online brick?
Also, Is there any logic in gluster through which we can check the
connectivity of node established or not before performing the any operation
on brick?

Regards,
Abhishek

On Mon, Feb 22, 2016 at 2:42 PM, Gaurav Garg <gg...@redhat.com> wrote:

> Hi abhishek,
>
> I went through your logs of node 1 and by looking glusterd logs its
> clearly indicate that your 2nd node (10.32.1.144) have disconnected from
> the cluster, because of that remove-brick operation failed. I think you
> need to check your network interface.
>
> But surprising things is that i did not see duplicate peer entry in
> #gluster peer status command output.
>
> May be i will get some more information from your (10.32.1.144) 2nd node
> logs. Could you also attach your 2nd node logs.
>
> after restarting glusterd, are you seeing duplicate peer entry in #gluster
> peer status command output ?
>
> will wait for 2nd node logs for further analyzing duplicate peer entry
> problem.
>
> Thanks,
>
> ~Gaurav
>
> - Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: "Gaurav Garg" <gg...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Monday, February 22, 2016 12:48:55 PM
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>
> Hi Gaurav,
>
> Here, You can find the attached logs for the boards in case of remove-brick
> failure.
> In these logs we do not have the cmd_history and
> etc-glusterfs-glusterd.vol.log for the second board.
>
> May be for that we need to some more time.
>
>
> Regards,
> Abhishek
>
> On Mon, Feb 22, 2016 at 10:18 AM, Gaurav Garg <gg...@redhat.com> wrote:
>
> > Hi Abhishek,
> >
> > >>  I'll provide the required log to you.
> >
> > sure
> >
> > on both node. do "pkill glusterd" and then start glusterd services.
> >
> > Thanks,
> >
> > ~Gaurav
> >
> > - Original Message -
> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> > To: "Gaurav Garg" <gg...@redhat.com>
> > Cc: gluster-users@gluster.org
> > Sent: Monday, February 22, 2016 10:11:48 AM
> > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
> >
> > Hi Gaurav,
> >
> > Thanks for your prompt reply.
> >
> > I'll provide the required log to you.
> >
> > As a workaround you suggested that restart the glusterd service. Could
> you
> > please tell me the point where I can do this?
> >
> > Regards,
> > Abhishek
> >
> > On Fri, Feb 19, 2016 at 6:11 PM, Gaurav Garg <gg...@redhat.com> wrote:
> >
> > > Hi Abhishek,
> > >
> > > Peer status output looks interesting where it have stale entry,
> > > technically it should not happen. Here few thing need to ask
> > >
> > > Did you perform any manual operation with GlusterFS configuration file
> > > which resides in /var/lib/glusterd/* folder.
> > >
> > > Can you provide output of "ls /var/lib/glusterd/peers"  from both of
> your
> > > nodes

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-22 Thread Gaurav Garg
Hi abhishek,

I went through your logs of node 1, and looking at the glusterd logs it clearly
indicates that your 2nd node (10.32.1.144) has disconnected from the cluster;
because of that, the remove-brick operation failed. I think you need to check
your network interface.

But the surprising thing is that I did not see a duplicate peer entry in the
#gluster peer status command output.

Maybe I will get some more information from your (10.32.1.144) 2nd node logs.
Could you also attach your 2nd node logs?

After restarting glusterd, are you seeing a duplicate peer entry in the
#gluster peer status command output?

I will wait for the 2nd node logs to further analyse the duplicate peer entry
problem.

Thanks,

~Gaurav

- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Monday, February 22, 2016 12:48:55 PM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

Here, You can find the attached logs for the boards in case of remove-brick
failure.
In these logs we do not have the cmd_history and
etc-glusterfs-glusterd.vol.log for the second board.

May be for that we need to some more time.


Regards,
Abhishek

On Mon, Feb 22, 2016 at 10:18 AM, Gaurav Garg <gg...@redhat.com> wrote:

> Hi Abhishek,
>
> >>  I'll provide the required log to you.
>
> sure
>
> on both node. do "pkill glusterd" and then start glusterd services.
>
> Thanks,
>
> ~Gaurav
>
> ----- Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: "Gaurav Garg" <gg...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Monday, February 22, 2016 10:11:48 AM
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>
> Hi Gaurav,
>
> Thanks for your prompt reply.
>
> I'll provide the required log to you.
>
> As a workaround you suggested that restart the glusterd service. Could you
> please tell me the point where I can do this?
>
> Regards,
> Abhishek
>
> On Fri, Feb 19, 2016 at 6:11 PM, Gaurav Garg <gg...@redhat.com> wrote:
>
> > Hi Abhishek,
> >
> > Peer status output looks interesting where it have stale entry,
> > technically it should not happen. Here few thing need to ask
> >
> > Did you perform any manual operation with GlusterFS configuration file
> > which resides in /var/lib/glusterd/* folder.
> >
> > Can you provide output of "ls /var/lib/glusterd/peers"  from both of your
> > nodes.
> >
> > Could you provide output of #gluster peer status command when 2nd node is
> > down
> >
> > Can you provide output of #gluster volume info command
> >
> > Can you provide full logs details of cmd_history.log and
> > etc-glusterfs-glusterd.vol.log from both the nodes.
> >
> >
> > You can restart your glusterd as of now as a workaround but we need to
> > analysis this issue further.
> >
> > Thanks,
> > Gaurav
> >
> > - Original Message -
> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> > To: "Gaurav Garg" <gg...@redhat.com>
> > Cc: gluster-users@gluster.org
> > Sent: Friday, February 19, 2016 5:27:21 PM
> > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
> >
> > Hi Gaurav,
> >
> > After the failure of add-brick following is outcome "gluster peer status"
> > command
> >
> > Number of Peers: 2
> >
> > Hostname: 10.32.1.144
> > Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
> > State: Peer in Cluster (Connected)
> >
> > Hostname: 10.32.1.144
> > Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
> > State: Peer in Cluster (Connected)
> >
> > Regards,
> > Abhishek
> >
> > On Fri, Feb 19, 2016 at 5:21 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com
> > >
> > wrote:
> >
> > > Hi Gaurav,
> > >
> > > Both are the board connect through the backplane using ethernet.
> > >
> > > Even this inconsistency also occurs when I am trying to bringing back
> the
> > > node in slot. Means some time add-brick executes without failure but
> some
> > > time following error occurs.
> > >
> > > volume add-brick c_glusterfs replica 2 10.32.1.144:
> /opt/lvmdir/c2/brick
> > > force : FAILED : Another transaction is in progress for c_glusterfs.
> > Please
> > > try again after sometime.
> > >
> > >
> > > You can also see the attached logs for add-brick failure scenario.
> > >
> > &

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-21 Thread Gaurav Garg
Hi Abhishek,

>>  I'll provide the required log to you.

Sure.

On both nodes, do "pkill glusterd" and then start the glusterd service.
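
For example, a minimal restart sketch (use whichever service manager your
distribution provides):

# pkill glusterd
# glusterd
  (or: service glusterd start / systemctl start glusterd)

This restarts only the management daemon; the brick (glusterfsd) processes and
existing mounts should keep running.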

Thanks,

~Gaurav

- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Monday, February 22, 2016 10:11:48 AM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

Thanks for your prompt reply.

I'll provide the required log to you.

As a workaround you suggested that restart the glusterd service. Could you
please tell me the point where I can do this?

Regards,
Abhishek

On Fri, Feb 19, 2016 at 6:11 PM, Gaurav Garg <gg...@redhat.com> wrote:

> Hi Abhishek,
>
> Peer status output looks interesting where it have stale entry,
> technically it should not happen. Here few thing need to ask
>
> Did you perform any manual operation with GlusterFS configuration file
> which resides in /var/lib/glusterd/* folder.
>
> Can you provide output of "ls /var/lib/glusterd/peers"  from both of your
> nodes.
>
> Could you provide output of #gluster peer status command when 2nd node is
> down
>
> Can you provide output of #gluster volume info command
>
> Can you provide full logs details of cmd_history.log and
> etc-glusterfs-glusterd.vol.log from both the nodes.
>
>
> You can restart your glusterd as of now as a workaround but we need to
> analysis this issue further.
>
> Thanks,
> Gaurav
>
> - Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: "Gaurav Garg" <gg...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Friday, February 19, 2016 5:27:21 PM
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node
>
> Hi Gaurav,
>
> After the failure of add-brick following is outcome "gluster peer status"
> command
>
> Number of Peers: 2
>
> Hostname: 10.32.1.144
> Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
> State: Peer in Cluster (Connected)
>
> Hostname: 10.32.1.144
> Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
> State: Peer in Cluster (Connected)
>
> Regards,
> Abhishek
>
> On Fri, Feb 19, 2016 at 5:21 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> >
> wrote:
>
> > Hi Gaurav,
> >
> > Both are the board connect through the backplane using ethernet.
> >
> > Even this inconsistency also occurs when I am trying to bringing back the
> > node in slot. Means some time add-brick executes without failure but some
> > time following error occurs.
> >
> > volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick
> > force : FAILED : Another transaction is in progress for c_glusterfs.
> Please
> > try again after sometime.
> >
> >
> > You can also see the attached logs for add-brick failure scenario.
> >
> > Please let me know if you need more logs.
> >
> > Regards,
> > Abhishek
> >
> >
> > On Fri, Feb 19, 2016 at 5:03 PM, Gaurav Garg <gg...@redhat.com> wrote:
> >
> >> Hi Abhishek,
> >>
> >> How are you connecting two board, and how are you removing it manually
> >> that need to know because if you are removing your 2nd board from the
> >> cluster (abrupt shutdown) then you can't perform remove brick operation
> in
> >> 2nd node from first node and its happening successfully in your case.
> could
> >> you ensure your network connection once again while removing and
> bringing
> >> back your node again.
> >>
> >> Thanks,
> >> Gaurav
> >>
> >> --
> >> *From: *"ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> >> *To: *"Gaurav Garg" <gg...@redhat.com>
> >> *Cc: *gluster-users@gluster.org
> >> *Sent: *Friday, February 19, 2016 3:36:21 PM
> >>
> >> *Subject: *Re: [Gluster-users] Issue in Adding/Removing the gluster node
> >>
> >> Hi Gaurav,
> >>
> >> Thanks for reply
> >>
> >> 1. Here, I removed the board manually here but this time it works fine
> >>
> >> [2016-02-18 10:03:40.601472]  : volume remove-brick c_glusterfs replica
> 1
> >> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
> >> [2016-02-18 10:03:40.885973]  : peer detach 10.32.1.144 : SUCCESS
> >>
> >> Yes this time board is reachable but how? don't know because board is
> >> detached.
> >>
> >> 2. Here, I attached the board this time its works fine in add-bricks

Re: [Gluster-users] glusterfs client crashes

2016-02-21 Thread Gaurav Garg
Hi Dj,

It seems that your brick processes are offline or all brick processes have
crashed. Could you paste the output of the #gluster volume status and #gluster
volume info commands and attach the core file?

CCing a dht-team member.

Thanks,

~Gaurav



- Original Message -
From: "Dj Merrill" 
To: gluster-users@gluster.org
Sent: Sunday, February 21, 2016 10:37:02 PM
Subject: [Gluster-users] glusterfs client crashes

Several weeks ago we started seeing some weird behaviour on our Gluster 
client systems.  Things would be working fine for several days, then the 
client could no longer access the Gluster filesystems, giving an error:

ls: cannot access /mnt/hpc: Transport endpoint is not connected

We were running version 3.7.6 and this version had been working fine for 
a few months until the above started happening.  Thinking that it may be 
an OS or kernel update causing the issue, when 3.7.8 came out, we 
upgraded in hopes that the issue might be addressed, but we are still 
having the issue.

All client machines are running Centos 7.2 with the latest updates, and 
the problem is happening on several machines.  Not every Gluster client 
machine has had the problem, but enough different machines to make us 
think that this is more of a generic issue versus one that only affects 
specific types of machines (Both Intel and AMD CPUs, different system 
manufacturers, etc).

The log file included below from /var/log/glusterfs seems to be showing 
a crash of the glusterfs process if I am interpreting it correctly.  At 
the top you can see an entry made on the 17th, then no further entries 
until the crash today on the 21st.

We would greatly appreciate any help in tracking down the cause and 
possible fix for this.

The only way to temporarily "fix" the machines seems to be a reboot, 
which allows the machines to work properly for a few days before the 
issue happens again (random amount of days, no pattern).


[2016-02-17 23:56:39.685754] I [MSGID: 109036] 
[dht-common.c:8043:dht_log_new_layout_for_dir_selfheal] 0-gv0-dht: 
Setting layout of /tmp/ktreraya/gms-scr/tmp/123277 with [Subvol_name: 
gv0-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
pending frames:
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(1) op(GETXATTR)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash:
2016-02-21 08:10:40
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.8
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7ff56ddcd042]
/lib64/libglusterfs.so.0(gf_print_trace+0x31d)[0x7ff56dde950d]
/lib64/libc.so.6(+0x35670)[0x7ff56c4bb670]
/lib64/libc.so.6(gsignal+0x37)[0x7ff56c4bb5f7]
/lib64/libc.so.6(abort+0x148)[0x7ff56c4bcce8]
/lib64/libc.so.6(+0x75317)[0x7ff56c4fb317]
/lib64/libc.so.6(+0x7d023)[0x7ff56c503023]
/usr/lib64/glusterfs/3.7.8/xlator/protocol/client.so(client_local_wipe+0x39)[0x7ff5600a46b9]
/usr/lib64/glusterfs/3.7.8/xlator/protocol/client.so(client3_3_getxattr_cbk+0x182)[0x7ff5600a7f62]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7ff56db9ba20]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1bf)[0x7ff56db9bcdf]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7ff56db97823]
/usr/lib64/glusterfs/3.7.8/rpc-transport/socket.so(+0x6636)[0x7ff5627a8636]
/usr/lib64/glusterfs/3.7.8/rpc-transport/socket.so(+0x9294)[0x7ff5627ab294]
/lib64/libglusterfs.so.0(+0x878ea)[0x7ff56de2e8ea]
/lib64/libpthread.so.0(+0x7dc5)[0x7ff56cc35dc5]
/lib64/libc.so.6(clone+0x6d)[0x7ff56c57c28d]


Thank you,

-Dj
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] upgrade from 3.7.3 to 3.7.8

2016-02-19 Thread Gaurav Garg
Hi Craig,

The steps are the same as for previous releases; there are no special steps.
You can follow [1] for more details. After upgrading you need to bump up the
op-version explicitly:

#gluster volume set all cluster.op-version 30707

Since we have not introduced a new op-version for 3.7.8, you still bump up to
op-version 30707.

[1]. http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.7
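
A rough per-node sketch on CentOS 7 (assuming the Gluster repository is already
configured; treat the guide above as the authoritative procedure):

# systemctl stop glusterd
# pkill glusterfs; pkill glusterfsd
  (stop any remaining client/brick processes on this node)
# yum update 'glusterfs*'
# systemctl start glusterd

Once every node is running 3.7.8, bump the op-version from any one node:

# gluster volume set all cluster.op-version 30707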

Thanks,
Gaurav

- Original Message -
From: "craig w" 
To: gluster-users@gluster.org
Sent: Friday, February 19, 2016 7:05:55 PM
Subject: [Gluster-users] upgrade from 3.7.3 to 3.7.8

Are there any special steps to upgrade gluster from 3.7.3 to 3.7.8? On CentOS 
7, I was thinking I could just install the new rpm's and restart the processes. 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] two same ip addr in peer list

2016-02-19 Thread Gaurav Garg
Hi Xin,

I haven't heard about this issue in GlusterFS 3.7.6/3.7.8 or any other version.
Only after checking all the logs can I say whether it is a real issue or
something else.

Thanks,
Gaurav

- Original Message -
From: "songxin" <songxin_1...@126.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Saturday, February 20, 2016 4:56:12 AM
Subject: Re: [Gluster-users] two same ip addr in peer list

Hi Gaurav,
Thank you for your reply. I will do these tests as you said.
I am facing this issue on glusterd version 3.7.6. Do you know if this issue has
been fixed in the latest version, 3.7.8?

 Thanks,
Xin

Sent from my iPhone

> On 20 Feb 2016, at 02:17, Gaurav Garg <gg...@redhat.com> wrote:
> 
> Hi xin,
> 
> Thanks for bringing up your Gluster issue.
> 
> Abhishek (another Gluster community member) also faced the same issue. I 
> asked below things for futher analysing this issue. could you provide me 
> following information?
> 
> 
> Did you perform any manual operation with GlusterFS configuration file which 
> resides in /var/lib/glusterd/* folder.?
> 
> Can you provide output of "ls /var/lib/glusterd/peers"  from both of your 
> nodes.
> 
> Can you provide output of #gluster volume info command
> 
> Could you provide output of #gluster peer status command when 2nd node is 
> down  
> 
> Down the glusterd on both node and bring glusterd one by one on both node and 
> provide me output of #gluster peer status command
> 
> Can you provide full logs details of cmd_history.log and 
> etc-glusterfs-glusterd.vol.log from both the nodes.
> 
> 
> following things will be very useful for analysing this issue.
> 
> You can restart your glusterd as of now as a workaround but we need to 
> analysis this issue further.
> 
> 
> Thanks,
> 
> ~Gaurav
> 
> - Original Message -
> From: "songxin" <songxin_1...@126.com>
> To: gluster-users@gluster.org
> Sent: Friday, February 19, 2016 7:07:48 PM
> Subject: [Gluster-users] two same ip addr in peer list
> 
> Hi, 
> I create a replicate volume with 2 brick.And I frequently reboot my two nodes 
> and frequently run “peer detach” “peer detach” “add-brick” "remove-brick". 
> A borad ip: 10.32.0.48 
> B borad ip: 10.32.1.144 
> 
> After that,  I run "gluster peer status" on A board and it show as below.
> 
> Number of Peers: 2 
> 
> Hostname: 10.32.1.144 
> Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e 
> State: Peer in Cluster (Connected) 
> 
> Hostname: 10.32.1.144 
> Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e 
> State: Peer in Cluster (Connected) 
> 
> 
> 
> 
> I don't understand why the 10.32.0.48 has two peers which are both 
> 10.32.1.144. 
> Does glusterd not check duplicate ip addr? 
> Any can help me to answer my quesion? 
> 
> Thanks, 
> Xin 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] two same ip addr in peer list

2016-02-19 Thread Gaurav Garg
Hi xin,

Thanks for bringing up your Gluster issue.

Abhishek (another Gluster community member) also faced the same issue. I asked
for the things below to further analyse this issue. Could you provide me the
following information?


Did you perform any manual operation on the GlusterFS configuration files which
reside in the /var/lib/glusterd/* folder?

Can you provide the output of "ls /var/lib/glusterd/peers" from both of your nodes?

Can you provide the output of the #gluster volume info command?

Could you provide the output of the #gluster peer status command when the 2nd node is down?

Bring glusterd down on both nodes, bring it back up one node at a time, and
provide the output of the #gluster peer status command.

Can you provide the full cmd_history.log and etc-glusterfs-glusterd.vol.log
from both nodes?


The above will be very useful for analysing this issue.

You can restart your glusterd for now as a workaround, but we need to analyse
this issue further.
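
For convenience, that information can be collected on each node with something
like this (log locations assume the default /var/log/glusterfs directory):

# gluster peer status
# gluster volume info
# ls /var/lib/glusterd/peers
# cat /var/log/glusterfs/cmd_history.log
# cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log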


Thanks,

~Gaurav

- Original Message -
From: "songxin" 
To: gluster-users@gluster.org
Sent: Friday, February 19, 2016 7:07:48 PM
Subject: [Gluster-users] two same ip addr in peer list

Hi, 
I create a replicate volume with 2 brick.And I frequently reboot my two nodes 
and frequently run “peer detach” “peer detach” “add-brick” "remove-brick". 
A borad ip: 10.32.0.48 
B borad ip: 10.32.1.144 

After that,  I run "gluster peer status" on A board and it show as below.

Number of Peers: 2 

Hostname: 10.32.1.144 
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e 
State: Peer in Cluster (Connected) 

Hostname: 10.32.1.144 
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e 
State: Peer in Cluster (Connected) 




I don't understand why the 10.32.0.48 has two peers which are both 10.32.1.144. 
Does glusterd not check duplicate ip addr? 
Any can help me to answer my quesion? 

Thanks, 
Xin 










___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-19 Thread Gaurav Garg
Hi Abhishek,

The peer status output looks interesting: it has a stale entry, which
technically should not happen. A few things I need to ask:

Did you perform any manual operation on the GlusterFS configuration files which
reside in the /var/lib/glusterd/* folder?

Can you provide the output of "ls /var/lib/glusterd/peers" from both of your nodes?

Could you provide the output of the #gluster peer status command when the 2nd node is down?

Can you provide the output of the #gluster volume info command?

Can you provide the full cmd_history.log and etc-glusterfs-glusterd.vol.log
from both nodes?


You can restart your glusterd for now as a workaround, but we need to analyse
this issue further.

Thanks,
Gaurav

- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Friday, February 19, 2016 5:27:21 PM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

After the failure of add-brick following is outcome "gluster peer status"
command

Number of Peers: 2

Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)

Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)

Regards,
Abhishek

On Fri, Feb 19, 2016 at 5:21 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Gaurav,
>
> Both are the board connect through the backplane using ethernet.
>
> Even this inconsistency also occurs when I am trying to bringing back the
> node in slot. Means some time add-brick executes without failure but some
> time following error occurs.
>
> volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick
> force : FAILED : Another transaction is in progress for c_glusterfs. Please
> try again after sometime.
>
>
> You can also see the attached logs for add-brick failure scenario.
>
> Please let me know if you need more logs.
>
> Regards,
> Abhishek
>
>
> On Fri, Feb 19, 2016 at 5:03 PM, Gaurav Garg <gg...@redhat.com> wrote:
>
>> Hi Abhishek,
>>
>> How are you connecting two board, and how are you removing it manually
>> that need to know because if you are removing your 2nd board from the
>> cluster (abrupt shutdown) then you can't perform remove brick operation in
>> 2nd node from first node and its happening successfully in your case. could
>> you ensure your network connection once again while removing and bringing
>> back your node again.
>>
>> Thanks,
>> Gaurav
>>
>> --
>> *From: *"ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>> *To: *"Gaurav Garg" <gg...@redhat.com>
>> *Cc: *gluster-users@gluster.org
>> *Sent: *Friday, February 19, 2016 3:36:21 PM
>>
>> *Subject: *Re: [Gluster-users] Issue in Adding/Removing the gluster node
>>
>> Hi Gaurav,
>>
>> Thanks for reply
>>
>> 1. Here, I removed the board manually here but this time it works fine
>>
>> [2016-02-18 10:03:40.601472]  : volume remove-brick c_glusterfs replica 1
>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>> [2016-02-18 10:03:40.885973]  : peer detach 10.32.1.144 : SUCCESS
>>
>> Yes this time board is reachable but how? don't know because board is
>> detached.
>>
>> 2. Here, I attached the board this time its works fine in add-bricks
>>
>> 2016-02-18 10:03:42.065038]  : peer probe 10.32.1.144 : SUCCESS
>> [2016-02-18 10:03:44.563546]  : volume add-brick c_glusterfs replica 2
>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>>
>> 3.Here, again I removed the board this time failed occur
>>
>> [2016-02-18 10:37:02.816089]  : volume remove-brick c_glusterfs replica 1
>> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick
>> 10.32.1.144:/opt
>> /lvmdir/c2/brick for volume c_glusterfs
>>
>> but here board is not reachable.
>>
>> why this inconsistency is there while doing the same step multiple time.
>>
>> Hope you are getting my point.
>>
>> Regards,
>> Abhishek
>>
>> On Fri, Feb 19, 2016 at 3:25 PM, Gaurav Garg <gg...@redhat.com> wrote:
>>
>>> Abhishek,
>>>
>>> when sometime its working fine means 2nd board network connection is
>>> reachable to first node. you can conform this by executing same #gluster
>>> peer status command.
>>>
>>> Thanks,
>>> Gaurav
>>>
>>> - Original Message -
>>> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>>> To: "Gaurav

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-19 Thread Gaurav Garg
Hi Abhishek, 

How are you connecting the two boards, and how exactly are you removing one 
manually? I need to know because if you are removing your 2nd board from the 
cluster (abrupt shutdown), then you should not be able to perform a remove-brick 
operation for the 2nd node from the first node, yet it is happening successfully 
in your case. Could you check your network connection once again while removing 
and bringing back your node? 

Thanks, 
Gaurav 

- Original Message -

From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com> 
To: "Gaurav Garg" <gg...@redhat.com> 
Cc: gluster-users@gluster.org 
Sent: Friday, February 19, 2016 3:36:21 PM 
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node 

Hi Gaurav, 

Thanks for reply 

1. Here, I removed the board manually here but this time it works fine 

[2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1 
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS 
[2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS 

Yes this time board is reachable but how? don't know because board is detached. 

2. Here, I attached the board this time its works fine in add-bricks 

2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS 
[2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2 
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS 

3.Here, again I removed the board this time failed occur 

[2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1 
10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick 
10.32.1.144:/opt 
/lvmdir/c2/brick for volume c_glusterfs 

but here board is not reachable. 

why this inconsistency is there while doing the same step multiple time. 

Hope you are getting my point. 

Regards, 
Abhishek 

On Fri, Feb 19, 2016 at 3:25 PM, Gaurav Garg < gg...@redhat.com > wrote: 


Abhishek, 

when sometime its working fine means 2nd board network connection is reachable 
to first node. you can conform this by executing same #gluster peer status 
command. 

Thanks, 
Gaurav 

- Original Message - 
From: "ABHISHEK PALIWAL" < abhishpali...@gmail.com > 
To: "Gaurav Garg" < gg...@redhat.com > 
Cc: gluster-users@gluster.org 
Sent: Friday, February 19, 2016 3:12:22 PM 
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node 

Hi Gaurav, 

Yes, you are right actually I am force fully detaching the node from the 
slave and when we removed the board it disconnected from the another board. 

but my question is I am doing this process multiple time some time it works 
fine but some time it gave these errors. 


you can see the following logs from cmd_history.log file 

[2016-02-18 10:03:34.497996] : volume set c_glusterfs nfs.disable on : 
SUCCESS 
[2016-02-18 10:03:34.915036] : volume start c_glusterfs force : SUCCESS 
[2016-02-18 10:03:40.250326] : volume status : SUCCESS 
[2016-02-18 10:03:40.273275] : volume status : SUCCESS 
[2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1 
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS 
[2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS 
[2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS 
[2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2 
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS 
[2016-02-18 10:30:53.297415] : volume status : SUCCESS 
[2016-02-18 10:30:53.313096] : volume status : SUCCESS 
[2016-02-18 10:37:02.748714] : volume status : SUCCESS 
[2016-02-18 10:37:02.762091] : volume status : SUCCESS 
[2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1 
10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick 
10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs 


On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg < gg...@redhat.com > wrote: 

> Hi Abhishek, 
> 
> Seems your peer 10.32.1.144 have disconnected while doing remove brick. 
> see the below logs in glusterd: 
> 
> [2016-02-18 10:37:02.816009] E [MSGID: 106256] 
> [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management: 
> Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs 
> [Invalid argument] 
> [2016-02-18 10:37:02.816061] E [MSGID: 106265] 
> [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management: 
> Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs 
> The message "I [MSGID: 106004] 
> [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer 
> <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state  Cluster>, has disconnected from glusterd." repeated 25 times between 
> [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458] 
> 
> 
> 
> If you are facing the same issue now, could you paste your # gluster peer 
> status command output here. 
> 
> Thanks, 
> ~Gaurav 
> 
> - Original Mes

Re: [Gluster-users] Issue in Adding/Removing the gluster node

2016-02-19 Thread Gaurav Garg
Abhishek,

When it sometimes works fine, that means the 2nd board's network connection is
reachable from the first node. You can confirm this by executing the same
#gluster peer status command.

Thanks,
Gaurav

- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Friday, February 19, 2016 3:12:22 PM
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node

Hi Gaurav,

Yes, you are right actually I am force fully detaching the node from the
slave and when we removed the board it disconnected from the another board.

but my question is I am doing this process multiple time some time it works
fine but some time it gave these errors.


you can see the following logs from cmd_history.log file

[2016-02-18 10:03:34.497996]  : volume set c_glusterfs nfs.disable on :
SUCCESS
[2016-02-18 10:03:34.915036]  : volume start c_glusterfs force : SUCCESS
[2016-02-18 10:03:40.250326]  : volume status : SUCCESS
[2016-02-18 10:03:40.273275]  : volume status : SUCCESS
[2016-02-18 10:03:40.601472]  : volume remove-brick c_glusterfs replica 1
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
[2016-02-18 10:03:40.885973]  : peer detach 10.32.1.144 : SUCCESS
[2016-02-18 10:03:42.065038]  : peer probe 10.32.1.144 : SUCCESS
[2016-02-18 10:03:44.563546]  : volume add-brick c_glusterfs replica 2
10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
[2016-02-18 10:30:53.297415]  : volume status : SUCCESS
[2016-02-18 10:30:53.313096]  : volume status : SUCCESS
[2016-02-18 10:37:02.748714]  : volume status : SUCCESS
[2016-02-18 10:37:02.762091]  : volume status : SUCCESS
[2016-02-18 10:37:02.816089]  : volume remove-brick c_glusterfs replica 1
10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick
10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs


On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg <gg...@redhat.com> wrote:

> Hi Abhishek,
>
> Seems your peer 10.32.1.144 have disconnected while doing remove brick.
> see the below logs in glusterd:
>
> [2016-02-18 10:37:02.816009] E [MSGID: 106256]
> [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management:
> Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
> [Invalid argument]
> [2016-02-18 10:37:02.816061] E [MSGID: 106265]
> [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management:
> Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
> The message "I [MSGID: 106004]
> [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer
> <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state  Cluster>, has disconnected from glusterd." repeated 25 times between
> [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]
>
>
>
> If you are facing the same issue now, could you paste your # gluster peer
> status command output here.
>
> Thanks,
> ~Gaurav
>
> - Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: gluster-users@gluster.org
> Sent: Friday, February 19, 2016 2:46:35 PM
> Subject: [Gluster-users] Issue in Adding/Removing the gluster node
>
> Hi,
>
>
> I am working on two board setup connecting to each other. Gluster version
> 3.7.6 is running and added two bricks in replica 2 mode but when I manually
> removed (detach) the one board from the setup I am getting the following
> error.
>
> volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick
> force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for
> volume c_glusterfs
>
> Please find the logs file as an attachment.
>
>
> Regards,
> Abhishek
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterd on one node using 89% of memory

2016-01-26 Thread Gaurav Garg
Hi Brian,

>>I have one (of 4) gluster node that is using almost all of the available 
>>memory on my box. It has been growing and is up to 89%

Could you attach the command history file and glusterd logs? We need to analyse
how it went to 89% of memory. There is a small memory leak, but it should not
reach 89% of memory.
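
In the meantime, a simple way to track how fast glusterd's memory grows is a
plain shell loop (nothing Gluster-specific; adjust the interval and output file
as you like):

# while true; do date; ps -o pid,rss,vsz,pmem,etime -C glusterd; sleep 300; done >> /var/tmp/glusterd-mem.log

A statedump of the running glusterd (kill -USR1 <glusterd-pid>; the dump should
land under /var/run/gluster/ by default) can also help show which allocations
are growing, if you can capture one while memory usage is high.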

Thanks,

Regards,
Gaurav 

- Original Message -
From: "Gaurav Garg" <gg...@redhat.com>
To: "Brian Contractor Andrus" <bdand...@nps.edu>
Cc: gluster-users@gluster.org
Sent: Wednesday, January 27, 2016 8:55:05 AM
Subject: Re: [Gluster-users] Glusterd on one node using 89% of memory

Hi Brian,

This seems to be know issue in Glusterd memory leak. Patch 
http://review.gluster.org/#/c/12927/ is already posted for review. Need to do 
some rework on that patch.

Thanks for reporting this.

Regards,
~Gaurav

- Original Message -
From: "Brian Contractor Andrus" <bdand...@nps.edu>
To: gluster-users@gluster.org
Sent: Wednesday, January 27, 2016 2:07:05 AM
Subject: [Gluster-users] Glusterd on one node using 89% of memory



All, 



I have one (of 4) gluster node that is using almost all of the available memory 
on my box. It has been growing and is up to 89% 

I have already done ‘echo 2 > /proc/sys/vm/drop_caches’ 

There seems to be no effect. 



Are there any gotcha to just restart glusterd? 

This is a CentOS 6.6 system with gluster 3.7.6 



Thanks in advance, 



Brian Andrus 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterd on one node using 89% of memory

2016-01-26 Thread Gaurav Garg
Hi Brian,

This seems to be a known Glusterd memory leak issue. Patch
http://review.gluster.org/#/c/12927/ has already been posted for review; some
rework is still needed on that patch.

Thanks for reporting this.

Regards,
~Gaurav

- Original Message -
From: "Brian Contractor Andrus" 
To: gluster-users@gluster.org
Sent: Wednesday, January 27, 2016 2:07:05 AM
Subject: [Gluster-users] Glusterd on one node using 89% of memory



All, 



I have one (of 4) gluster node that is using almost all of the available memory 
on my box. It has been growing and is up to 89% 

I have already done ‘echo 2 > /proc/sys/vm/drop_caches’ 

There seems to be no effect. 



Are there any gotcha to just restart glusterd? 

This is a CentOS 6.6 system with gluster 3.7.6 



Thanks in advance, 



Brian Andrus 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] migrating and removing the bricks

2015-11-02 Thread Gaurav Garg
hi lindsay,

>>  Is there any indicator of this process happening? how do you know when it
is finished?


Yes, you can execute the following command:

#gluster volume remove-brick <volname> <brick> status

The above command will give you statistics of the remove-brick operation.

Thanx,
Gaurav
 
- Original Message -
From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
To: "Gaurav Garg" <gg...@redhat.com>, "gluster-users" 
<gluster-users@gluster.org>
Sent: Monday, November 2, 2015 12:17:06 PM
Subject: Re: [Gluster-users] migrating and removing the bricks

On 2 November 2015 at 16:25, Gaurav Garg <gg...@redhat.com> wrote:

> you can remove the brick that is already part of the volume. once you
> start remove brick operation then internally it will trigger rebalance
> operation for moving data from removing brick to all other existing bricks.
>

Is there any indicator of this process happening? how do you know when it
is finished?


-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] migrating and removing the bricks

2015-11-01 Thread Gaurav Garg
>> But I can’t move to brick which is already part of volume and hence fails

You can remove the bricks that are already part of the volume. Once you start
the remove-brick operation, it will internally trigger a rebalance operation to
move data from the bricks being removed to all the other existing bricks.

How are you removing the bricks?
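
For reference, the usual decommission sequence looks roughly like the following
(the volume and brick names here are made up; on a replica 2 volume, remove the
added bricks as a pair so the replica count stays intact):

# gluster volume remove-brick myvol server3:/export/brick server4:/export/brick start
# gluster volume remove-brick myvol server3:/export/brick server4:/export/brick status
  (repeat until the migration shows as completed)
# gluster volume remove-brick myvol server3:/export/brick server4:/export/brick commit

Only run the commit once the status output shows the data migration has
completed.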

Regards,
Gaurav
 

- Original Message -
From: "L, Sridhar (Nokia - IN/Bangalore)" 
To: gluster-users@gluster.org
Sent: Friday, October 30, 2015 11:06:14 AM
Subject: [Gluster-users] migrating and removing the bricks

Hi, 
I have setup with glusterfs with Distributed-Replicated volume. 
Initially a volume is created with two bricks (replica 2). If more space is 
required I will add two more bricks to the same volume. If the extra space is 
no longer required, I will remove the added bricks so that I can use the space 
for some other purpose. 
I am facing a problem here. I have to migrate the data present in the last two 
bricks to the initial two bricks and then remove them. But I can’t move to 
brick which is already part of volume and hence fails. How can I achieve this? 
Regards, 
Sridhar L 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterd crashing

2015-10-02 Thread Gaurav Garg
>> Pulling those logs now but how do I generate the core file you are asking
for?

When there is a crash, a core file is automatically generated according to your
*ulimit* settings. You can find the core file in your root or current working
directory, or wherever you have set your core dump file location. The core file
gives you information about the crash, i.e. where exactly the crash happened.
You can find the appropriate core file by looking at the crash time in the
glusterd logs, searching for the "crash" keyword. You could also paste the few
lines just above the latest "crash" keyword in the glusterd logs.

Just for your curiosity, if you are willing to look at where it crashed, you
can debug it with #gdb -c <core-file> glusterd.
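
For example (paths are illustrative; on RPM-based installs the daemon binary is
usually /usr/sbin/glusterd):

# ulimit -c unlimited
  (set before reproducing the crash so a core can be written)
# gdb /usr/sbin/glusterd /path/to/core-file
(gdb) bt
(gdb) thread apply all bt

The backtrace ("bt") output is the most useful part to attach to the mail or
bug report.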

Thank you...

Regards,
Gaurav  

- Original Message -
From: "Gene Liverman" <glive...@westga.edu>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: "gluster-users" <gluster-users@gluster.org>
Sent: Friday, October 2, 2015 8:28:49 PM
Subject: Re: [Gluster-users] glusterd crashing

Pulling those logs now but how do I generate the core file you are asking
for?





--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!




On Fri, Oct 2, 2015 at 2:25 AM, Gaurav Garg <gg...@redhat.com> wrote:

> Hi Gene,
>
> you have paste glustershd log. we asked you to paste glusterd log.
> glusterd and glustershd both are different process. with this information
> we can't find out why your glusterd crashed. could you paste *glusterd*
> logs (/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log*) in
> pastebin (not in this mail thread) and give the link of pastebin in this
> mail thread. Can you also attach core file or you can paste backtrace of
> that core dump file.
> It will be great if you give us sos report of the node where the crash
> happen.
>
> Thanx,
>
> ~Gaurav
>
> - Original Message -
> From: "Gene Liverman" <glive...@westga.edu>
> To: "gluster-users" <gluster-users@gluster.org>
> Sent: Friday, October 2, 2015 4:47:00 AM
> Subject: Re: [Gluster-users] glusterd crashing
>
> Sorry for the delay. Here is what's installed:
> # rpm -qa | grep gluster
> glusterfs-geo-replication-3.7.4-2.el6.x86_64
> glusterfs-client-xlators-3.7.4-2.el6.x86_64
> glusterfs-3.7.4-2.el6.x86_64
> glusterfs-libs-3.7.4-2.el6.x86_64
> glusterfs-api-3.7.4-2.el6.x86_64
> glusterfs-fuse-3.7.4-2.el6.x86_64
> glusterfs-server-3.7.4-2.el6.x86_64
> glusterfs-cli-3.7.4-2.el6.x86_64
>
> The cmd_history.log file is attached.
> In gluster.log I have filtered out a bunch of lines like the one below due
> to make them more readable. I had a node down for multiple days due to
> maintenance and another one went down due to a hardware failure during that
> time too.
> [2015-10-01 00:16:09.643631] W [MSGID: 114031]
> [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-gv0-client-0: remote
> operation failed. Path: 
> (31f17f8c-6c96-4440-88c0-f813b3c8d364) [No such file or directory]
>
> I also filtered out a boat load of self heal lines like these two:
> [2015-10-01 15:14:14.851015] I [MSGID: 108026]
> [afr-self-heal-metadata.c:56:__afr_selfheal_metadata_do] 0-gv0-replicate-0:
> performing metadata selfheal on f78a47db-a359-430d-a655-1d217eb848c3
> [2015-10-01 15:14:14.856392] I [MSGID: 108026]
> [afr-self-heal-common.c:651:afr_log_selfheal] 0-gv0-replicate-0: Completed
> metadata selfheal on f78a47db-a359-430d-a655-1d217eb848c3. source=0 sinks=1
>
>
> [root@eapps-gluster01 glusterfs]# cat glustershd.log |grep -v 'remote
> operation failed' |grep -v 'self-heal'
> [2015-09-27 08:46:56.893125] E [rpc-clnt.c:201:call_bail] 0-glusterfs:
> bailing out frame type(GlusterFS Handshake) op(GETSPEC(2)) xid = 0x6 sent =
> 2015-09-27 08:16:51.742731. timeout = 1800 for 127.0.0.1:24007
> [2015-09-28 12:54:17.524924] W [socket.c:588:__socket_rwv] 0-glusterfs:
> readv on 127.0.0.1:24007 failed (Connection reset by peer)
> [2015-09-28 12:54:27.844374] I [glusterfsd-mgmt.c:1512:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
> [2015-09-28 12:57:03.485027] W [socket.c:588:__socket_rwv] 0-gv0-client-2:
> readv on 160.10.31.227:24007 failed (Connection reset by peer)
> [2015-09-28 12:57:05.872973] E [socket.c:2278:socket_connect_finish]
> 0-gv0-client-2: connection to 160.10.31.227:24007 failed (Connection
> refused)
> [2015-09-28 12:57:38.490578] W [socket.c:588:__socket_rwv] 0-glusterfs:
> readv on 127.0.0.1:24007 failed (No data available)
> [2015-09-28 12:57:49.054475] I [glusterfsd-mgmt.c:1512:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
> [2015-09-28 13:01:12.062960] W [glusterfsd.c:1219:cleanup_an

Re: [Gluster-users] glusterd crashing

2015-09-30 Thread Gaurav Garg
Hi Gene,

Could you paste or attach the core file, the glusterd log file, and cmd_history.log so
we can find the actual root cause (RCA) of the crash? What steps did you perform
before the crash?

>> How can I troubleshoot this?

If you want to troubleshoot this yourself, start by looking into the glusterd log
file and the core file.

Thank you..

Regards,
Gaurav

- Original Message -
From: "Gene Liverman" 
To: gluster-users@gluster.org
Sent: Thursday, October 1, 2015 7:59:47 AM
Subject: [Gluster-users] glusterd crashing

In the last few days I've started having issues with my glusterd service 
crashing. When it goes down it seems to do so on all nodes in my replicated 
volume. How can I troubleshoot this? I'm on a mix of CentOS 6 and RHEL 6. 
Thanks! 



Gene Liverman 
Systems Integration Architect 
Information Technology Services 
University of West Georgia 
glive...@westga.edu 


Sent from Outlook on my iPhone 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 3.7

2015-09-22 Thread Gaurav Garg
Hi Andreas,

>> Are there any restrictions as to when I'm allowed to make changes to the GlusterFS
>> volume (for instance: start/stop volume, add/remove brick or peer)?

There are no restrictions on when you can make changes to the GlusterFS volume; it all
depends on your needs, whether you want to start/stop the volume or add/remove a brick
or peer.

>> How will it handle such changes when one of my two replicated servers is 
>> down? 

When one of the replicated servers is down, clients look up the other server that is
still online, and any file operation you do from the mount point is applied to the
replica that is currently online. When the second replica comes back online you can
start a heal operation on the GlusterFS volume; it will bring the replica that was
offline back up to date.
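
A minimal sketch of that heal step, assuming a replicated volume named myvol
(substitute your own volume name):

    # trigger healing of the files that need it, once the second replica is back
    gluster volume heal myvol
    # check which entries still need healing
    gluster volume heal myvol info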

>> How will GlusterFS know which set of configuration files it can trust when
>> the other server is connected again and the files will contain different
>> information about the volume?

When the other node (server) comes back online, it does a handshake with the node
that stayed online and requests its configuration; the returning node then updates
its own configuration files based on the updated configuration it received. You
should not bypass the GlusterFS configuration by editing volfiles manually, to
avoid obscure behaviour.

I hope that answers your questions.

Thank you...

Regards,
Gaurav Garg
 

- Original Message -
From: "Andreas Hollaus" <andreas.holl...@ericsson.com>
To: gluster-users@gluster.org
Sent: Tuesday, September 22, 2015 1:32:08 PM
Subject: [Gluster-users] GlusterFS 3.7

Hi,

Are there any restrictions as to when I'm allowed to make changes to the 
GlusterFS
volume (for instance: start/stop volume, add/remove brick or peer)? How will it
handle such changes when one of my two replicated servers is down? How will 
GlusterFS
know which set of configuration files it can trust when the other server is 
connected
again and the files will contain different information about the volume? If 
these
were data files on the GlusterFS volume that would have been handled by the 
extended
file attributes, but how about the GlusterFS configuration itself?

Regards
Andreas

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] can you change a brick's mount point?

2015-06-07 Thread Gaurav Garg
comments inline.

- Original Message -
From: Atin Mukherjee atin.mukherje...@gmail.com
To: Nathan Hand (Business) nath...@manu.com.au
Cc: gluster-users@gluster.org
Sent: Sunday, June 7, 2015 8:08:19 PM
Subject: Re: [Gluster-users] can you change a brick's mount point?





Sent from Samsung Galaxy S4 
On 7 Jun 2015 20:04, Nathan Hand (Business)  nath...@manu.com.au  wrote: 
 
 I have a simple distributed volume with two bricks. 
 
 # gluster volume info myvolume 
 Volume Name: myvolume 
 Type: Distribute 
 Volume ID: b0193509-817b-4f77-8f65-4cfeb384bedb 
 Status: Started 
 Number of Bricks: 2 
 Transport-type: tcp 
 Bricks: 
 Brick1: zeus.lan:/mnt/pool-lvol1/brick0 
 Brick2: io.lan:/data/glusterfs/myvolume/brick2/anchor 
 
 I’d like to change Brick1’s path to follow Brick2’s naming convention, i.e. 
 /data/glusterfs/myvolume/brick1/anchor 
 
 I can easily change Brick1’s mount point, but I don’t know how to tell 
 gluster about the new location. 
 
 Options I’ve considered: 
 
 * Editing /var/lib/glusterd/vols by hand, but that could be foolish. 
 * Adding brick3, removing brick1, adding brick1, removing brick3. 

If you edit /var/lib/glusterd/vols by hand you might miss something. The add-brick and
remove-brick option seems the better choice.

This looks to be the best option IMO. You could go for the replace-brick option,
but that doesn't protect you from data loss.
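
As a hedged sketch of that add-brick/remove-brick sequence for this volume, using the
new path mentioned above (double-check the brick paths before running anything):

    # add the brick under its new mount point
    gluster volume add-brick myvolume zeus.lan:/data/glusterfs/myvolume/brick1/anchor
    # start migrating data off the old brick
    gluster volume remove-brick myvolume zeus.lan:/mnt/pool-lvol1/brick0 start
    # watch the migration until it reports completed
    gluster volume remove-brick myvolume zeus.lan:/mnt/pool-lvol1/brick0 status
    # only then drop the old brick for good
    gluster volume remove-brick myvolume zeus.lan:/mnt/pool-lvol1/brick0 commit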
 
 Is there a better way? Preferably one that doesn’t need brick3 because I 
 don’t have a brick that large lying around. 
 ___ 
 Gluster-users mailing list 
 Gluster-users@gluster.org 
 http://www.gluster.org/mailman/listinfo/gluster-users 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Adding a 3.6.3 peer to a 3.6.2 trusted pool fails

2015-05-11 Thread Gaurav Garg
Hi Adrian,

Did you try to upgrade/downgrade gluster03 before performing any operation? If you
attach a fresh node running 3.6.3 to a 3.6.2 cluster, this problem does not occur, so
we need to know what operations you performed on the gluster03 node before the peer
probe. If possible, could you attach cmd_history.log and the glusterd log?
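
As a quick way to collect that on each node (log paths assume a package install; they
differ for source builds), something like this may help:

    # confirm the installed version and the current peer state on every node
    gluster --version
    gluster peer status
    # the two files Gaurav asked for
    less /var/log/glusterfs/cmd_history.log
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log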

~ Gaurav

- Original Message -
From: Adrián Santos Marrero asma...@ull.edu.es
To: Gluster-users gluster-users@gluster.org
Sent: Monday, May 11, 2015 12:39:33 PM
Subject: [Gluster-users] Adding a 3.6.3 peer to a 3.6.2 trusted pool fails

Hi, 

I've been trying to add a third node to my two-node trusted pool with gluster 
peer probe: 



gluster02:~# gluster peer probe gluster03.stic.ull.es 
peer probe: success. 

The problem is that this node remains in Accepted peer request state: 



Hostname: gluster03.stic.ull.es 
Uuid: 9ca23fb2-cd76-4d1c-8a5f-f93306119539 
State: Accepted peer request (Connected) 

If I restart glusterd in gluster02, the state changes to Sent and Received 
peer request: 



Hostname: gluster03.stic.ull.es 
Uuid: 9ca23fb2-cd76-4d1c-8a5f-f93306119539 
State: Sent and Received peer request (Connected) 

And if I restart glusterd in gluster03 it doesn't start because it is "Unable to
find friend: gluster01.stic.ull.es":



[2015-05-11 06:59:39.035184] D 
[glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: 
Unable to find friend: gluster01.stic.ull.es 
[2015-05-11 06:59:39.037942] D [common-utils.c:2897:gf_is_local_addr] 
0-management: 10.107.34.17 
[2015-05-11 06:59:39.039392] D [common-utils.c:2897:gf_is_local_addr] 
0-management: 10.107.34.17 
[2015-05-11 06:59:39.040584] D [common-utils.c:2897:gf_is_local_addr] 
0-management: 10.107.34.17 
[2015-05-11 06:59:39.041790] D [common-utils.c:2913:gf_is_local_addr] 
0-management: gluster01.stic.ull.es is not local 

In fact, the only peer that gluster03 knows is gluster02: 



gluster03:~# ls -l /var/lib/glusterd/peers/ 
total 4 
-rw---. 1 root root 73 may 11 07:48 739d7c3f-99d5-4c8b-ac28-f85163e26322 
gluster03:~# cat /var/lib/glusterd/peers/739d7c3f-99d5-4c8b-ac28-f85163e26322 
uuid=739d7c3f-99d5-4c8b-ac28-f85163e26322 
state=5 
hostname1=10.107.34.18 

10.107.34.18 == gluster02 



gluster03:~# gluster --version 
glusterfs 3.6.3 built on Apr 23 2015 16:11:45 
Repository revision: git:// git.gluster.com/glusterfs.git 
Copyright (c) 2006-2011 Gluster Inc.  http://www.gluster.com  
GlusterFS comes with ABSOLUTELY NO WARRANTY. 
You may redistribute copies of GlusterFS under the terms of the GNU General 
Public License. 

All servers are running Centos 7, gluster01 and gluster02 have Gluster 3.6.2 
version and gluster03 3.6.3. 

Am I forced to upgrade gluster01 and gluster02 to 3.6.3? 

Regards. 

-- 
 
Adrián Santos Marrero 
Técnico de Sistemas - Área de Infraestructuras TIC 
Servicios de Tecnologías de la Información y Comunicación (STIC) 
Universidad de La Laguna (ULL) 
Teléfono/Phone: +34 922 845089 
 

This e-mail may contain confidential and/or privileged information. 
If you are not the intended recipient or have received this e-mail in 
error you must destroy it. 
Any unauthorised copying, disclosure or distribution of the material in 
this e-mail is strictly forbidden by current legislation. 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterD errors

2015-05-11 Thread Gaurav Garg
Hi Rastelli,

Could you tell us what steps you followed or what commands you executed before
getting these logs?
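
Since the first two messages come from glusterd running xfs_info against a brick, it
may also be worth running it by hand on one brick mount point (hypothetical path here,
substitute your own) to see why it exits non-zero:

    # check which filesystem the brick is on and whether xfs_info can read it
    df -T /data/glusterfs/brick1
    xfs_info /data/glusterfs/brick1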


~ Gaurav 

- Original Message -
From: RASTELLI Alessandro alessandro.raste...@skytv.it
To: gluster-users@gluster.org
Sent: Monday, May 11, 2015 2:05:16 PM
Subject: [Gluster-users] GlusterD errors


Hi, 

we’ve got a lot of these errors in /etc-glusterfs-glusterd.vol.log in our 
Glusterfs environment. 

Just wanted to know if I can do anything about that, or if I can ignore them. 

Thank you 



[2015-05-11 08:22:43.848305] E 
[glusterd-utils.c:7364:glusterd_add_inode_size_to_dict] 0-management: xfs_info 
exited with non-zero exit status 

[2015-05-11 08:22:43.848347] E 
[glusterd-utils.c:7390:glusterd_add_inode_size_to_dict] 0-management: failed to 
get inode size 

[2015-05-11 08:22:52.911718] E [glusterd-op-sm.c:207:glusterd_get_txn_opinfo] 
0-: Unable to get transaction opinfo for transaction ID : 
ace2f066-1acb-4e00-9cca-721f88691dce 

[2015-05-11 08:23:53.26] E [glusterd-syncop.c:961:_gd_syncop_commit_op_cbk] 
0-management: Failed to aggregate response from node/brick 



Alessandro 




From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Pierre Léonard 
Sent: Friday, 10 April 2015 16:18 
To: gluster-users@gluster.org 
Subject: Re: [Gluster-users] one node change uuid in the night 





Hi Atin and all, 




I have corrected it with the data in glusterd.info and removed the bad peers file. 
Could you clarify what steps you performed here? Also, could you try to 
start glusterd with -LDEBUG and share the glusterd log file with us? 
Also, do you see any delta in the glusterd.info file between node 10 and the 
other nodes? 
~Atin 


The problem is solved. It came from a mix-up of the uuid files and their contents on
node 10.
As we say here, "Ouf!" (phew!), because I am on vacation next week.

It might be worth backing up the peers directory, as many problems came from its
contents.

Since the log mentioned the volfile name in an error line, I searched the web and
found this page:
http://www.gluster.org/community/documentation/index.php/Understanding_vol-file 

I have added some sections from the example file. Are they appropriate for our
14-node cluster, or should I drop or change them, notably the number of threads?

Many thanks to all,


-- 
Pierre Léonard 
Senior IT Manager 
MetaGenoPolis 
pierre.leon...@jouy.inra.fr 
Tél. : +33 (0)1 34 65 29 78 
Centre de recherche INRA 
Domaine de Vilvert – Bât. 325 R+1 
78 352 Jouy-en-Josas CEDEX 
France 
www.mgps.eu 



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] cli.log logging level

2015-04-20 Thread Gaurav Garg
Hi Gabriele,

You can set the cli log level by passing an option when executing a command, e.g. for
the volume status command:

#gluster --log-level=<DEBUG|INFO|WARNING|...> volume status

Hope it works.
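
For Gabriele's case, where only WARNING and above should land in cli.log, a small
example (this only affects the cli process, not the bricks or clients) would be:

    # run the CLI with a higher log level so DEBUG lines stay out of cli.log
    gluster --log-level=WARNING volume status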

Thank you

Regards
~Gaurav

- Original Message -
From: Gabriele Paggi gabriele.pa...@gmail.com
To: gluster-users gluster-users@gluster.org
Sent: Monday, April 20, 2015 2:05:23 PM
Subject: [Gluster-users] cli.log logging level

Hi, 

I have a working distributed-replicated setup with two bricks. 
On both nodes the cli.log file fills up with: 

[2015-04-20 08:26:06.023184] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023209] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023219] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023229] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023238] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023249] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023260] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023271] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023283] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023293] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023302] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023312] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023321] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023331] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 
[2015-04-20 08:26:06.023340] D [registry.c:408:cli_cmd_register] 0-cli: 
Returning 0 

And its size averages 600-800MB a day. 
I assume the D in the log line stands for DEBUG. Is there a way to lower the
logging level so that only WARN and higher get logged? 

I already have these two options set for my volume: 

Options Reconfigured: 
diagnostics.brick-log-level: WARNING 
diagnostics.client-log-level: WARNING 

Thanks! 

-- 
Gabriele 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs 3.6.2: gluster volume add-brick: wrong op-version

2015-04-14 Thread Gaurav Garg
Hi Gregor,

The op-version is a number that indicates the feature set being run by a given
'glusterd' process.

Did you upgrade your cluster nodes from a lower version to 3.6.2, or did you do a
fresh installation of glusterfs 3.6.2?

Could you check the current op-version of your cluster with #cat
/var/lib/glusterd/glusterd.info
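
As a quick way to do that on every node (assuming the default glusterd working
directory):

    # run on each host in the cluster and compare the values
    grep operating-version /var/lib/glusterd/glusterd.info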

Thank you...

~ Gaurav

- Original Message -
From: Gregor Burck gre...@aeppelbroe.de
To: (glusterfs - Mailingliste) gluster-users@gluster.org
Sent: Tuesday, April 14, 2015 3:11:05 PM
Subject: [Gluster-users] glusterfs 3.6.2: gluster volume add-brick: wrong op-version

Hi,

I tried to add an additional brick to one volume; on both hosts I
installed glusterfs 3.6.2 via the Launchpad repositories:
glusterfs-server:
   Installed:   3.6.2-ubuntu1~trusty3
   Candidate: 3.6.2-ubuntu1~trusty3
   Version table:
  *** 3.6.2-ubuntu1~trusty3 0
 500 http://ppa.launchpad.net/gluster/glusterfs-3.6/ubuntu/  
trusty/main amd64 Packages

Both systems are ubuntu 14.04

Error message:
volume add-brick: failed: One or more nodes do not support the  
required op-version. Cluster op-version must atleast be 30600.

I don't understand what op-version stands for.

Bye

Gregor



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] replace-brick command modification

2015-04-02 Thread Gaurav Garg
Hi all,

Since GlusterFS version 3.6.0 the gluster volume replace-brick VOLNAME
SOURCE-BRICK NEW-BRICK {start [force]|pause|abort|status|commit} command
has been deprecated. Only the gluster volume replace-brick VOLNAME SOURCE-BRICK
NEW-BRICK commit force form is supported.
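
For reference, with hypothetical volume and brick names the only remaining form looks
like this (on a replicate volume, self-heal then repopulates the new brick):

    # one-step replacement of a brick; no start/status/commit phases any more
    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new commit force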

For bug https://bugzilla.redhat.com/show_bug.cgi?id=1094119, patch
http://review.gluster.org/#/c/10101/ removes the cli/glusterd code for the
gluster volume replace-brick VOLNAME BRICK NEW-BRICK {start
[force]|pause|abort|status|commit} command, so only the commit force
option remains supported for replace-brick.

Should we have a new command, gluster volume replace-brick VOLNAME
SOURCE-BRICK NEW-BRICK, instead of the gluster volume replace-brick
VOLNAME SOURCE-BRICK NEW-BRICK commit force command?


Thanks  Regards
Gaurav Garg
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] replace-brick command modification

2015-04-02 Thread Gaurav Garg
Hi all,

Thank you for your thoughts. "force" should be present in the command, so I will keep
it as commit force.
The replace-brick command will be gluster volume replace-brick VOLNAME
SOURCE-BRICK NEW-BRICK commit force

Regards
Gaurav

- Original Message -
From: Raghavendra Talur raghavendra.ta...@gmail.com
To: Kaushal M kshlms...@gmail.com
Cc: gluster-users@gluster.org
Sent: Thursday, 2 April, 2015 11:33:50 PM
Subject: Re: [Gluster-users] replace-brick command modification



On Thu, Apr 2, 2015 at 10:28 PM, Kaushal M  kshlms...@gmail.com  wrote: 



On Thu, Apr 2, 2015 at 7:20 PM, Kelvin Edmison 
 kelvin.edmi...@alcatel-lucent.com  wrote: 
 Gaurav, 
 
 I think that it is appropriate to keep the commit force options for 
 replace-brick, just to prevent less experienced admins from self-inflicted 
 data loss scenarios. 
 
 The add-brick/remove-brick pair of operations is not an intuitive choice for 
 admins who are trying to solve a problem with a specific brick. In this 
 situation, admins are generally thinking 'how can I move the data from this 
 brick to another one', and an admin that is casually surfing documentation 
 might infer that the replace-brick operation is the correct one, rather than 
 a sequence of commands that are somehow magically related. 
 
 I believe that keeping the mandatory commit force options for replace-brick 
 will help give these admins reason to pause and re-consider if this is the 
 right command for them to do, and prevent cases where new gluster admins 
 start shouting 'gluster lost my data'. 
 
 Regards, 
 Kelvin 
 
 
 
 On 04/02/2015 07:26 AM, Gaurav Garg wrote: 
 
 Hi all, 
 
 Since GlusterFs version 3.6.0 gluster volume replace-brick VOLNAME 
 SOURCE-BRICK NEW-BRICK {start [force]|pause|abort|status|commit } 
 command have deprecated. Only gluster volume replace-brick VOLNAME 
 SOURCE-BRICK NEW-BRICK commit force command supported. 
 
 for bug https://bugzilla.redhat.com/show_bug.cgi?id=1094119 , Patch 
 http://review.gluster.org/#/c/10101/ is removing cli/glusterd code for 
 gluster volume replace-brick VOLNAME BRICK NEW-BRICK {start 
 [force]|pause|abort|status|commit } command. so only we have commit force 
 option supported for replace-brick command. 
 
 Should we have new command gluster volume replace-brick VOLNAME 
 SOURCE-BRICK NEW-BRICK instead of having gluster volume replace-brick 
 VOLNAME SOURCE-BRICK NEW-BRICK commit force command. 
 
 
 Thanks  Regards 
 Gaurav Garg 
 ___ 
 Gluster-users mailing list 
 Gluster-users@gluster.org 
 http://www.gluster.org/mailman/listinfo/gluster-users 
 
 
 
 
 ___ 
 Gluster-users mailing list 
 Gluster-users@gluster.org 
 http://www.gluster.org/mailman/listinfo/gluster-users 

AFAIK, it was never the plan to remove 'replace-brick commit force'. 
The plan was always to retain it while removing the unsupported and 
unneeded options, ie 'replace-brick (start|pause|abort|status)'. 

Gaurav, your change is attempting to do the correct thing already and 
needs no changes (other than any that arise via the review process). 


I agree with Kelvin and Kaushal. 
We should retain commit force; force brings the implicit meaning 
that I fully understand what I am asking to be done is not the norm, 
but do proceed and I hold myself responsible for anything bad that 
happens. 




~kaushal 
___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users 



-- 
Raghavendra Talur 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users