Re: [Gluster-users] geo replication issue

2018-10-24 Thread Krishna Verma
Hi Sunny, Thanks for your response. Yes, '/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py' was missing at the slave. I installed the "glusterfs-geo-replication.x86_64" rpm and the session is Active now. But now I am struggling with the indexing issue. Files more than 5GB in master
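
For anyone hitting the same failure, a minimal sketch of the check-and-install fix described above, assuming a yum-based slave host (the volume and host names below are placeholders):

    # on the slave: confirm the helper the session needs is present
    ls -l /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
    # if it is missing, install the geo-replication package
    yum install -y glusterfs-geo-replication
    # on the master: restart the session
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol start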

[Gluster-users] Log-file rotation on a Disperse Volume while a failed brick results in files that cannot be healed.

2018-10-24 Thread Jeff Byers
Hello, Regarding the issue: Bug 1642638 - Log-file rotation on a Disperse Volume while a failed brick results in files that cannot be healed. https://bugzilla.redhat.com/show_bug.cgi?id=1642638 Could anybody who has GlusterFS 4.1.x installed see if this problem exists there? If anyone
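
For anyone willing to test on 4.1.x, a rough outline of the reported scenario, assuming a 2+1 disperse volume with placeholder host and brick names; this is a sketch of the bug report, not a verified reproducer:

    # create and start a small disperse volume, then mount it at /mnt/disp1
    gluster volume create disp1 disperse 3 redundancy 1 host1:/bricks/d1 host2:/bricks/d1 host3:/bricks/d1
    gluster volume start disp1
    # fail one brick (e.g. kill its brick process), then rotate a log file on the mount
    mv /mnt/disp1/app.log /mnt/disp1/app.log.1
    : > /mnt/disp1/app.log
    # bring the brick back and check whether the rotated files heal
    gluster volume start disp1 force
    gluster volume heal disp1 info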

[Gluster-users] nfs volume usage

2018-10-24 Thread Oğuz Yarımtepe
Hi, How can I use my NFS exports from my storage as the peer's replicated volume? Any tip? Regards. -- Oğuz Yarımtepe http://about.me/oguzy

[Gluster-users] geo replication session status faulty

2018-10-24 Thread Christos Tsalidis
Hi all, I am testing the geo-replication service in Gluster 3.10.12 on CentOS Linux release 7.5.1804, and my session remains in a faulty state. On Gluster 3.12 we can run the following command to solve the problem: gluster vol geo-replication mastervol
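
While the exact 3.12 workaround is cut off above, the usual first steps for diagnosing a faulty session look like this (volume and host names are placeholders; the log directory layout varies slightly by version):

    gluster volume geo-replication mastervol slavehost::slavevol status detail
    gluster volume geo-replication mastervol slavehost::slavevol config
    # worker tracebacks are logged on the master side
    less /var/log/glusterfs/geo-replication/mastervol*/*.log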

[Gluster-users] How to use system.affinity/distributed.migrate-data on distributed/replicated volume?

2018-10-24 Thread Ingo Fischer
Hi, I have set up a glusterfs volume gv0 as distributed/replicated:
root@pm1:~# gluster volume info gv0
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 64651501-6df2-4106-b330-fdb3e1fbcdf4
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1:
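
For context, the flow the subject refers to is xattr-driven and run against a FUSE mount. This is a hedged sketch only, since the exact xattr names and value syntax vary across Gluster versions (the file path and subvolume name are placeholders):

    # see which brick pair currently holds the file
    getfattr -n trusted.glusterfs.pathinfo /mnt/gv0/somefile
    # pin the file to a target DHT subvolume, then ask DHT to move the data
    setfattr -n system.affinity -v gv0-client-2 /mnt/gv0/somefile
    setfattr -n distribute.migrate-data -v force /mnt/gv0/somefile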

Re: [Gluster-users] geo replication issue

2018-10-24 Thread Sunny Kumar
Hi Krishna, Please check whether this file exists at the slave: '/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py'. - Sunny On Wed, Oct 24, 2018 at 4:36 PM Krishna Verma wrote: > Hi Everyone, > I have created a 4*4 distributed gluster but when I start the
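
A quick way to run Sunny's check on the slave (standard coreutils/rpm commands):

    ls -l /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
    # if the file is present, this shows which package owns it
    rpm -qf /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py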

Re: [Gluster-users] GlusterFS 4.1.x deb packages missing for Debian 8 (jessie)

2018-10-24 Thread mabi
Anyone? I would really like to be able to install GlusterFS 4.1.x on Debian 8 (jessie). Debian 8 is still widely in use, and IMHO there should be a GlusterFS package for it. Many thanks in advance for your consideration. ‐‐‐ Original Message ‐‐‐ On Friday, October 19,

Re: [Gluster-users] Gluster clients intermittently hang until first gluster server in a Replica 1 Arbiter 1 cluster is rebooted, server error: 0-management: Unlocking failed & client error: bailing out

2018-10-24 Thread Ravishankar N
On 10/24/2018 05:16 PM, Hoggins! wrote: > Thank you, it's working as expected. I guess it's only safe to put cluster.data-self-heal back on when I get an updated version of GlusterFS? Yes, correct. Also, you would still need to restart shd whenever you hit this issue, until you upgrade. -Ravi

Re: [Gluster-users] Gluster clients intermittently hang until first gluster server in a Replica 1 Arbiter 1 cluster is rebooted, server error: 0-management: Unlocking failed & client error: bailing out

2018-10-24 Thread Hoggins!
Thank you, it's working as expected. I guess it's only safe to put cluster.data-self-heal back on when I get an updated version of GlusterFS? Hoggins! On 24/10/2018 at 11:53, Ravishankar N wrote: > On 10/24/2018 02:38 PM, Hoggins! wrote: >> Thanks, that's helping a lot, I will do that.
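
Once upgraded, re-enabling the option discussed here is a one-liner (the volume name is a placeholder):

    gluster volume set myvol cluster.data-self-heal on
    gluster volume get myvol cluster.data-self-heal   # confirm the setting took effect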

[Gluster-users] geo replication issue

2018-10-24 Thread Krishna Verma
Hi Everyone, I have created a 4*4 distributed gluster, but when I start the session it fails with the errors below. [2018-10-24 10:02:03.857861] I [gsyncdstatus(monitor):245:set_worker_status] GeorepStatus: Worker Status Change status=Initializing... [2018-10-24
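
For reference, a typical create/start sequence for such a session, which is where the log above is emitted (host and volume names are placeholders):

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status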

Re: [Gluster-users] Gluster clients intermittently hang until first gluster server in a Replica 1 Arbiter 1 cluster is rebooted, server error: 0-management: Unlocking failed & client error: bailing out

2018-10-24 Thread Ravishankar N
On 10/24/2018 02:38 PM, Hoggins! wrote: > Thanks, that's helping a lot, I will do that. One more question: should the glustershd restart be performed on the arbiter only, or on each node of the cluster? If you do a 'gluster volume start volname force', it will restart the shd on all nodes.
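
Spelled out, the restart Ravi describes (the volume name is a placeholder):

    gluster volume start myvol force   # safe on an already-started volume; respawns shd on every node
    gluster volume status myvol        # the "Self-heal Daemon" rows confirm the new PIDs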

Re: [Gluster-users] Gluster clients intermittently hang until first gluster server in a Replica 1 Arbiter 1 cluster is rebooted, server error: 0-management: Unlocking failed & client error: bailing out

2018-10-24 Thread Hoggins!
Thanks, that's helping a lot, I will do that. One more question: should the glustershd restart be performed on the arbiter only, or on each node of the cluster? Thanks! Hoggins! On 24/10/2018 at 02:55, Ravishankar N wrote: > On 10/23/2018 10:01 PM, Hoggins! wrote: >> Hello there, >>

[Gluster-users] Gluster Errors and configuration status

2018-10-24 Thread Vrgotic, Marko
Dear Gluster team, Since January 2018 I have been running GlusterFS with 4 nodes. The storage is attached to an oVirt system and has been running happily so far. I have three volumes: Gv0_she – a triple-replicated volume for the oVirt SelfHostedEngine (it’s a requirement); Gv1_vmpool – distributed
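
For context, a triple-replicated volume like Gv0_she is typically created along these lines (node names and brick paths are placeholders):

    gluster volume create Gv0_she replica 3 node1:/bricks/she node2:/bricks/she node3:/bricks/she
    gluster volume start Gv0_she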