[Gluster-users] Is it necessary to back up the .glusterfs directory?

2018-01-24 Thread César E. Portela
Hi All, I have two glusterfs servers and backing them up is very slow, when it does not fail outright. I have thousands and thousands of files... Apparently the .glusterfs directory bears some responsibility for the backup failures. Is it necessary to make a backup of the .glusterfs
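For reference, a minimal sketch of backing up a brick while skipping the internal .glusterfs metadata directory (mostly gfid-named hard links and symlinks that backup tools crawl very slowly); the brick path and backup target below are hypothetical:

    # Copy the brick's user data but skip the .glusterfs metadata tree
    rsync -aHAX --exclude='.glusterfs' /data/brick1/ backup-host:/backups/brick1/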

Re: [Gluster-users] Stale locks on shards

2018-01-24 Thread Samuli Heinonen
Hi! Thank you very much for your help so far. Could you please give an example command showing how to use aux-gid-mount to remove locks? "gluster vol clear-locks" seems to mount the volume by itself. Best regards, Samuli Heinonen Pranith Kumar Karampuri 23 January 2018 at

Re: [Gluster-users] Stale locks on shards

2018-01-24 Thread Pranith Kumar Karampuri
On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen wrote: > Hi! > > Thank you very much for your help so far. Could you please give an example > command showing how to use aux-gid-mount to remove locks? "gluster vol clear-locks" > seems to mount the volume by itself. > You are correct,
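As a rough sketch of the two pieces being discussed, assuming the aux-gfid-mount fuse option and the documented clear-locks syntax; the volume name, server, path and lock range are hypothetical placeholders:

    # Mount the volume with gfid access enabled, so files can also be
    # reached under /mnt/myvol/.gfid/<GFID>
    mount -t glusterfs -o aux-gfid-mount server1:/myvol /mnt/myvol

    # Clear granted inode locks on a given path inside the volume
    gluster volume clear-locks myvol /path/to/file kind granted inode 0,0-0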

Re: [Gluster-users] geo-replication command rsync returned with 3

2018-01-24 Thread Kotresh Hiremath Ravishankar
It is clear that rsync is failing. Are the rsync versions the same on all master and slave nodes? I have seen mismatched versions cause problems before. -Kotresh HR On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz wrote: > Hi all, > I have made some tests on the latest Ubuntu
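A quick way to compare versions across the cluster, with hypothetical host names:

    # Print the rsync version on every master and slave node
    for h in master1 master2 slave1 slave2; do
        echo -n "$h: "; ssh "$h" rsync --version | head -1
    done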

[Gluster-users] Replacing a third data node with an arbiter one

2018-01-24 Thread Hoggins!
Hello, The subject says it all. I have a replica 3 cluster:
gluster> volume info thedude
Volume Name: thedude
Type: Replicate
Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
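One commonly described approach, sketched here with hypothetical node and brick paths and assuming heals are clean before starting, is to shrink the set to replica 2 and then add the arbiter back in:

    # Remove the third data brick, reducing the replica count to 2
    gluster volume remove-brick thedude replica 2 node3:/data/brick force

    # Add the arbiter brick, going back to a 3-way set with arbiter 1
    gluster volume add-brick thedude replica 3 arbiter 1 arbiter-node:/data/arbiter-brick

    # Let self-heal populate the arbiter and watch progress
    gluster volume heal thedude info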

Re: [Gluster-users] geo-replication command rsync returned with 3

2018-01-24 Thread Dietmar Putz
Hi all, I have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades were disabled... the configuration was always the same... a distributed replicated volume on 4 VMs with geo-replication to a dist. repl. volume on 4 VMs. I started with 3.7.20, upgraded to 3.8.15, to 3.10.9 to
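After each upgrade step the session health can be checked with the geo-replication status command; the master volume, slave user, host and volume names below are placeholders:

    # Show per-node geo-replication state, last-synced time and failure counts
    gluster volume geo-replication mastervol geoaccount@slavehost::slavevol status detail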

[Gluster-users] Split brain directory

2018-01-24 Thread Luca Gervasi
Hello, I'm trying to fix an issue with a directory split brain on gluster 3.10.3. The effect is that a specific file in this split directory randomly becomes unavailable on some clients. I have gathered all the information in this gist:
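For anyone hitting the same thing, the usual data to gather is the AFR changelog xattrs of the directory on every brick plus the self-heal daemon's view; the brick path and volume name below are placeholders:

    # Run on each brick server against the directory in question
    getfattr -d -m . -e hex /data/brick/path/to/affected-dir

    # Check what the self-heal daemon thinks is in split brain
    gluster volume heal <volname> info split-brain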

[Gluster-users] fault tolerancy in glusterfs distributed volume

2018-01-24 Thread atris adam
I have made a distributed replica3 volume with 6 nodes. I mean this:
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: f271a9bd-6599-43e7-bc69-26695b55d206
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.2:/brick
Brick2:
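For context, a volume with this layout is normally created in one command; the bricks below are taken from Aravinda's reply further down, and replica 3 means each consecutive group of three bricks forms one replica set:

    gluster volume create testvol replica 3 \
        10.0.0.2:/brick 10.0.0.3:/brick 10.0.0.1:/brick \
        10.0.0.5:/brick 10.0.0.6:/brick 10.0.0.7:/brick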

Re: [Gluster-users] Split brain directory

2018-01-24 Thread Karthik Subrahmanya
Hey, From the getfattr output you have provided, the directory is clearly not in split brain. If all the bricks are being blamed by the others then it is called split brain. In your case only client-13, that is Brick-14 in the volume info output, had a pending entry heal on the directory. That is the
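A quick way to confirm that only one brick has pending entries and to kick off the repair, with the volume name as a placeholder:

    # List files and directories with pending heals per brick
    gluster volume heal <volname> info

    # Trigger a heal of the pending entries
    gluster volume heal <volname>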

Re: [Gluster-users] fault tolerancy in glusterfs distributed volume

2018-01-24 Thread Aravinda
The volume will be available even if one brick in each subvolume goes down.
Subvolume 1 bricks: Brick1: 10.0.0.2:/brick Brick2: 10.0.0.3:/brick Brick3: 10.0.0.1:/brick
Subvolume 2 bricks: Brick4: 10.0.0.5:/brick Brick5: 10.0.0.6:/brick Brick6: 10.0.0.7:/brick
On Wednesday 24 January 2018
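Related to that, client quorum is what keeps each replica-3 subvolume writable only while a majority of its bricks is up; a hedged example of enforcing it, using the volume name from the thread:

    # Enforce majority quorum per replica set on the client side
    gluster volume set testvol cluster.quorum-type auto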