Re: [Gluster-users] AFR Version used for self-heal

2016-02-25 Thread Joe Julian
On February 25, 2016 8:32:44 PM PST, Kyle Maas wrote: >On 02/25/2016 08:20 PM, Ravishankar N wrote: >> On 02/25/2016 11:36 PM, Kyle Maas wrote: >>> How can I tell what AFR version a cluster is using for self-heal? >> If all your servers and clients are 3.7.8, then

Re: [Gluster-users] AFR Version used for self-heal

2016-02-25 Thread Ravishankar N
On 02/26/2016 10:02 AM, Kyle Maas wrote: On 02/25/2016 08:20 PM, Ravishankar N wrote: On 02/25/2016 11:36 PM, Kyle Maas wrote: How can I tell what AFR version a cluster is using for self-heal? If all your servers and clients are 3.7.8, then they are by default running afr-v2. Afr-v2 was a

Re: [Gluster-users] AFR Version used for self-heal

2016-02-25 Thread Kyle Maas
On 02/25/2016 08:25 PM, Krutika Dhananjay wrote: > *From: *"Kyle Maas" > *To: *gluster-users@gluster.org > *Sent: *Thursday, February 25, 2016 11:36:53 PM > *Subject:

Re: [Gluster-users] AFR Version used for self-heal

2016-02-25 Thread Kyle Maas
On 02/25/2016 08:20 PM, Ravishankar N wrote: > On 02/25/2016 11:36 PM, Kyle Maas wrote: >> How can I tell what AFR version a cluster is using for self-heal? > If all your servers and clients are 3.7.8, then they are by default > running afr-v2. Afr-v2 was a re-write of afr that went in for 3.6,

Re: [Gluster-users] AFR Version used for self-heal

2016-02-25 Thread Krutika Dhananjay
- Original Message - > From: "Kyle Maas" > To: gluster-users@gluster.org > Sent: Thursday, February 25, 2016 11:36:53 PM > Subject: [Gluster-users] AFR Version used for self-heal > How can I tell what AFR version a cluster is using for self-heal? > The

Re: [Gluster-users] AFR Version used for self-heal

2016-02-25 Thread Ravishankar N
On 02/25/2016 11:36 PM, Kyle Maas wrote: How can I tell what AFR version a cluster is using for self-heal? If all your servers and clients are 3.7.8, then they are by default running afr-v2. Afr-v2 was a re-write of afr that went in for 3.6, so any gluster package from then on has this code,
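Ravi's rule of thumb (afr-v2 shipped with 3.6, so any 3.6+ package runs it by default) can be sketched as a quick version check. The helper below is hypothetical, not part of Gluster; it only encodes the "3.6 or newer means afr-v2" default and ignores any non-default volume options:

```shell
# Hypothetical helper: report which AFR implementation a given
# glusterfs version runs by default (afr-v2 landed in 3.6).
afr_version() {
  ver="$1"                     # e.g. "3.7.8"
  major=${ver%%.*}             # text before the first dot
  rest=${ver#*.}
  minor=${rest%%.*}            # text between the first two dots
  if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 6 ]; }; then
    echo "afr-v2"
  else
    echo "afr-v1"
  fi
}

# On a live node you would feed it the installed version, e.g.:
#   afr_version "$(glusterfs --version | awk 'NR==1 {print $2}')"
afr_version "3.7.8"   # afr-v2
afr_version "3.5.2"   # afr-v1
```

As the thread stresses, every server and client must actually be running the newer version for afr-v2 to be in effect.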

Re: [Gluster-users] Replicated Volume (mirror) on 17 nodes.

2016-02-25 Thread Joe Julian
https://joejulian.name/blog/glusterfs-replication-dos-and-donts/ On 02/24/2016 01:34 PM, Simone Taliercio wrote: Hi all :) I would need soon to create a pool of 17 nodes. Each node requires a copy of the same file "locally" so that can be accessed from the deployed application. * Do you

Re: [Gluster-users] Gluster 3.7.6 add new node state Peer Rejected (Connected)

2016-02-25 Thread Mohammed Rafi K C
On 02/26/2016 01:53 AM, Mohammed Rafi K C wrote: > > > On 02/26/2016 01:32 AM, Steve Dainard wrote: >> I haven't done anything more than peer thus far, so I'm a bit >> confused as to how the volume info fits in, can you expand on this a bit? >> >> Failed commits? Is this split brain on the

Re: [Gluster-users] Replicated Volume (mirror) on 17 nodes.

2016-02-25 Thread Kaleb KEITHLEY
On 02/24/2016 04:34 PM, Simone Taliercio wrote: > Hi all :) > > I would need soon to create a pool of 17 nodes. Each node requires a > copy of the same file "locally" so that can be accessed from the > deployed application. > > * Do you see any performance issue in creating a Replica Set on 17

Re: [Gluster-users] Gluster 3.7.6 add new node state Peer Rejected (Connected)

2016-02-25 Thread Mohammed Rafi K C
On 02/26/2016 01:32 AM, Steve Dainard wrote: > I haven't done anything more than peer thus far, so I'm a bit confused > as to how the volume info fits in, can you expand on this a bit? > > Failed commits? Is this split brain on the replica volumes? I don't > get any return from 'gluster volume

[Gluster-users] Replicated Volume (mirror) on 17 nodes.

2016-02-25 Thread Simone Taliercio
Hi all :) I will soon need to create a pool of 17 nodes. Each node requires a copy of the same file "locally" so that it can be accessed from the deployed application. * Do you see any performance issues in creating a Replica Set on 17 nodes? Any best practices that I should follow? * Another

Re: [Gluster-users] Gluster 3.7.6 add new node state Peer Rejected (Connected)

2016-02-25 Thread Steve Dainard
For clarity, "no return" from 'gluster volume heal info':
# gluster volume heal vm-storage info
Brick 10.0.231.50:/mnt/lv-vm-storage/vm-storage
Number of entries: 0
Brick 10.0.231.51:/mnt/lv-vm-storage/vm-storage
Number of entries: 0
Brick 10.0.231.52:/mnt/lv-vm-storage/vm-storage
Number of

Re: [Gluster-users] Gluster 3.7.6 add new node state Peer Rejected (Connected)

2016-02-25 Thread Mohammed Rafi K C
On 02/25/2016 11:45 PM, Steve Dainard wrote:
> Hello,
> I upgraded from 3.6.6 to 3.7.6 a couple weeks ago. I just peered 2 new
> nodes to a 4 node cluster and gluster peer status is:
> # gluster peer status *<-- from node gluster01*
> Number of Peers: 5
> Hostname: 10.0.231.51
> Uuid:

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-25 Thread ML mail
Hi Aravinda, Many thanks for the steps. I have a few questions about them:
- in your point number 3, can I simply do an "rm -rf /my/brick/.glusterfs/changelogs"?
- in your point number 4, do I need to remove the attributes from every master node?
- I am currently using GlusterFS 3.7.6, will

[Gluster-users] AFR Version used for self-heal

2016-02-25 Thread Kyle Maas
How can I tell what AFR version a cluster is using for self-heal? The reason I ask is that I have a two-node replicated 3.7.8 cluster (no arbiters) whose locking behavior during self-heal looks very similar to that of AFRv1 (it only heals one file at a time per self-heal daemon, and appears to

Re: [Gluster-users] failed to start glusterd after reboot

2016-02-25 Thread Avra Sengupta
Hi, /var/lib/glusterd/snaps/ contains only 1 file, called missed_snaps_list. Apart from it, there are only directories created with the snap names. Is .nfs01722f42, which you saw in /var/lib/glusterd, a file or a directory? If it's a file, then it was not placed there as part of

Re: [Gluster-users] failed to start glusterd after reboot

2016-02-25 Thread songxin
If I run "reboot" on the A node, there are no .snap files on the A node after reboot. Does the snap file only appear after an unexpected reboot? Why is its size 0 bytes? In this situation, is removing the snap file the right way to solve this problem? Thanks, xin. Sent from my iPhone > On 25 Feb 2016, at 19:05, Atin

Re: [Gluster-users] failed to start glusterd after reboot

2016-02-25 Thread Atin Mukherjee
+ Rajesh, Avra On 02/25/2016 04:12 PM, songxin wrote: > Thanks for your reply. > > Do I need to check all files in /var/lib/glusterd/*? > Must all files be the same on node A and node B? Yes, they should be identical. > > I found that the size of > file

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-25 Thread Aravinda
Steps to force Geo-rep to sync from the beginning:
1. Stop Geo-replication
2. Disable the Changelog using `gluster volume set changelog.changelog off`
3. Delete all changelogs and htime files from the Brick backend of the Master Volume, $BRICK/.glusterfs/changelogs
4. Remove stime xattrs from all Brick
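Aravinda's first three steps can be sketched as a shell sequence. The volume name, slave spec, and brick path below are placeholders, and the commands are echoed rather than executed so the sequence can be reviewed before running it for real:

```shell
#!/bin/sh
# Placeholders: substitute your own geo-rep session and brick path.
VOLUME=mastervol
SLAVE="slavehost::slavevol"
BRICK=/my/brick

run() { echo "+ $*"; }   # dry-run: print each command instead of executing it

run gluster volume geo-replication "$VOLUME" "$SLAVE" stop   # step 1
run gluster volume set "$VOLUME" changelog.changelog off     # step 2
run rm -rf "$BRICK/.glusterfs/changelogs"                    # step 3
# Step 4 (removing the stime xattrs from the bricks) is truncated in
# the quoted mail, so it is not reproduced here.
```

Note that the CLI expects the volume name in `gluster volume set`, which the snippet above the sketch elides.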

Re: [Gluster-users] failed to start glusterd after reboot

2016-02-25 Thread songxin
Thanks for your reply. Do I need to check all files in /var/lib/glusterd/*? Must all files be the same on node A and node B? I found that the size of the file /var/lib/glusterd/snaps/.nfs01722f42 is 0 bytes after the A board reboots. So glusterd can't restore from this snap file on node A.

Re: [Gluster-users] failed to start glusterd after reboot

2016-02-25 Thread Atin Mukherjee
I believe you and Abhishek are from the same group and share a common setup. Could you check whether the content of /var/lib/glusterd/* on board B (post reboot and before starting glusterd) matches /var/lib/glusterd/* from board A? ~Atin On 02/25/2016 03:48 PM, songxin wrote: > Hi, > I have a
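One way to do Atin's comparison is to checksum every file under /var/lib/glusterd on each board and diff the lists. A minimal sketch; `sum_state` is a made-up helper, and the output paths in the usage comment are placeholders:

```shell
# Print one "checksum  path" line per file, in a stable order, so two
# glusterd state directories can be compared with a plain diff.
sum_state() {
  (cd "$1" && find . -type f | sort | xargs -r md5sum)
}

# Usage (run on each board, or against copies pulled to one machine):
#   sum_state /var/lib/glusterd > board-A.sum      # on board A
#   sum_state /var/lib/glusterd > board-B.sum      # on board B
#   diff board-A.sum board-B.sum                   # no output => identical
```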

[Gluster-users] failed to start glusterd after reboot

2016-02-25 Thread songxin
Hi, I have a problem, described below, when I start glusterd after rebooting a board.
Precondition: I use two boards for this test. The version of glusterfs is 3.7.6.
A board ip: 128.224.162.255
B board ip: 128.224.95.140
Reproduce steps:
1. systemctl start glusterd (A board)
2. systemctl start

Re: [Gluster-users] Issues removing then adding a brick to a replica volume (Gluster 3.7.6)

2016-02-25 Thread Sahina Bose
On 02/23/2016 04:34 PM, Lindsay Mathieson wrote: On 23/02/2016 8:29 PM, Sahina Bose wrote: Late jumping into this thread, but curious - Is there a specific reason that you are removing and adding a brick? Will replace-brick not work for you? Testing procedures for replacing a failed

Re: [Gluster-users] Replicated Volume (mirror) on 17 nodes.

2016-02-25 Thread Simone Taliercio
2016-02-25 9:20 GMT+01:00 Ravishankar N : > Right. You can mount the replica 3 volume that you just created on any node. > Like I said it's just like accessing a remote share. Except that the 'share' > is a glusterfs volume that you just created. > If I understand your use

Re: [Gluster-users] Replicated Volume (mirror) on 17 nodes.

2016-02-25 Thread Ravishankar N
Hello, On 02/25/2016 11:42 AM, Simone Taliercio wrote: Hi Ravi, Thanks a lot for your prompt reply! 2016-02-25 6:07 GMT+01:00 Ravishankar N : I don't know what your use case is but I don't think you want to create so many replicas. I need to scale my application on
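The recurring advice in this thread is to keep the replica count small and mount the volume from all 17 nodes instead, like a remote share. A dry-run sketch with hypothetical hostnames and brick paths (commands are echoed, not executed):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print each command instead of executing it

# Three bricks are enough for redundancy; hostnames/paths are made up.
run gluster volume create appdata replica 3 \
    node1:/bricks/appdata node2:/bricks/appdata node3:/bricks/appdata
run gluster volume start appdata

# Each of the 17 application nodes then mounts the same volume:
run mount -t glusterfs node1:/appdata /mnt/appdata
```

This avoids the write penalty of a replica-17 volume: every client write would otherwise have to reach all 17 bricks.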