Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Strahil Nikolov
If you can afford the extra space, set the logs to TRACE and, after a reasonable timeframe, lower them back. Although RH's Gluster versioning is different, this thread should help:
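The suggestion above can be sketched as follows. This is a hedged example, not from the thread: the volume name `VMS` and session `gluster03::VMS-BKP` are assumptions borrowed from later messages, and the exact option names may vary by Gluster version.

```shell
# Raise brick/client log verbosity (disk-hungry, so revert afterwards).
# Volume and session names here are hypothetical placeholders.
gluster volume set VMS diagnostics.brick-log-level TRACE
gluster volume set VMS diagnostics.client-log-level TRACE

# For the geo-replication session logs specifically:
gluster volume geo-replication VMS gluster03::VMS-BKP config log_level TRACE

# After collecting enough data, lower the levels back:
gluster volume set VMS diagnostics.brick-log-level INFO
gluster volume set VMS diagnostics.client-log-level INFO
gluster volume geo-replication VMS gluster03::VMS-BKP config log_level INFO
```

These commands require a live Gluster cluster, so they are shown as a configuration sketch only.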

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Strahil Nikolov
It could be a "simple" bug - software has bugs and regressions. I would recommend pinging the Debian mailing list - at least it won't hurt. Best Regards, Strahil Nikolov On Tuesday, 27 October 2020 at 20:10:39 GMT+2, Gilberto Nunes wrote: [SOLVED] Well... It seems to me

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Gilberto Nunes
Not so fast with my solution! After shutting down the other node, I get the FAULTY state again... The only failure I saw in this thing regards an xattr value... [2020-10-27 19:20:07.718897] E [syncdutils(worker /DATA/vms):110:gf_mount_ready] : failed to get the xattr value Don't know if I am

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Gilberto Nunes
[SOLVED] Well... It seems to me that pure Debian Linux 10 has some problem with XFS, which is the FS that I used. It does not accept the attr2 mount option. Interestingly enough, having now switched to Proxmox 6.x, which is Debian based, I am able to use the attr2 mount option. Then the Faulty
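The fix described above hinges on the XFS `attr2` mount option, which enables the v2 extended-attribute format that Gluster's xattr bookkeeping relies on. A hedged sketch follows; the device `/dev/sdb1` and mount point `/DATA` are hypothetical, matching only the brick path pattern seen in the thread.

```shell
# Hypothetical /etc/fstab entry for the brick filesystem
# (attr2 is the default on modern XFS, but can be stated explicitly):
# /dev/sdb1  /DATA  xfs  defaults,attr2,inode64  0 0

# Verify the active mount options and on-disk attribute version
# after (re)mounting:
mount | grep /DATA
xfs_info /DATA
```

If `attr2` is rejected at mount time, the on-disk filesystem may have been created with the older v1 attribute format, which is worth checking before blaming the distribution.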

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Gilberto Nunes
>> IIUC you're begging for split-brain ... Not at all! I have used this configuration and there isn't any split-brain at all! But if I do not use it, then I get a split-brain. Regarding count 2, I will look into it! Thanks --- Gilberto Nunes Ferreira On Tue., 27 Oct. 2020 at 09:37, Diego

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Diego Zuccato
On 27/10/20 13:15, Gilberto Nunes wrote: > I have applied these parameters to the 2-node gluster: > gluster vol set VMS cluster.heal-timeout 10 > gluster volume heal VMS enable > gluster vol set VMS cluster.quorum-reads false > gluster vol set VMS cluster.quorum-count 1 Urgh! IIUC you're

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Gilberto Nunes
Hi Aravinda, Let me thank you for that nice tool... It helps me a lot. And yes! Indeed I think this is the case, but why does gluster03 (which is the backup server) not continue, since gluster02 is still online?? That puzzles me... --- Gilberto Nunes Ferreira On Tue., 27 Oct. 2020 at

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Gilberto Nunes
Dear Felix, I have applied these parameters to the 2-node gluster: gluster vol set VMS cluster.heal-timeout 10 gluster volume heal VMS enable gluster vol set VMS cluster.quorum-reads false gluster vol set VMS cluster.quorum-count 1 gluster vol set VMS network.ping-timeout 2 gluster volume set VMS
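The settings quoted in this message, collected as one sequence (the snippet is truncated, so only the options actually visible in the thread are repeated; the cut-off final command is left out rather than guessed):

```shell
# Quorum/timeout tuning from the thread, volume name VMS.
# Caution (raised by a later reply): cluster.quorum-count 1 on a 2-way
# replica allows writes with a single brick up, which invites split-brain;
# an arbiter brick or quorum-count 2 is the safer configuration.
gluster volume set VMS cluster.heal-timeout 10
gluster volume heal VMS enable
gluster volume set VMS cluster.quorum-reads false
gluster volume set VMS cluster.quorum-count 1
gluster volume set VMS network.ping-timeout 2
```

A very low `network.ping-timeout` (2 s) also makes the cluster quick to declare peers dead, which can amplify flapping on a busy network.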

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Aravinda VK
Hi Gilberto, Happy to see the georepsetup tool is useful for you. I moved the repo to https://github.com/aravindavk/gluster-georep-tools (renamed “gluster-georep-setup”). I think the georep command failure is due to the respective node’s (peer)

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Felix Kölzow
Dear Gilberto, If I am right, you ran into server-quorum when you started a 2-node replica and shut down one host. From my perspective, it's fine. Please correct me if I am wrong here. Regards, Felix On 27/10/2020 01:46, Gilberto Nunes wrote: Well I do not reboot the host. I shut down the

Re: [Gluster-users] Geo-replication status Faulty

2020-10-26 Thread Strahil Nikolov
Usually there is always only 1 "master", but when you power off one of the 2 nodes, geo-rep should handle that and the second node should take over the job. How long did you wait after gluster1 was rebooted? Best Regards, Strahil Nikolov On Monday, 26 October 2020 at 22:46:21

Re: [Gluster-users] Geo-replication status Faulty

2020-10-26 Thread Gilberto Nunes
Well, I did not reboot the host. I shut down the host. Then after 15 min I gave up. Don't know why that happened. I will try it later --- Gilberto Nunes Ferreira On Mon., 26 Oct. 2020 at 21:31, Strahil Nikolov wrote: > Usually there is always only 1 "master", but when you power

Re: [Gluster-users] Geo-replication status Faulty

2020-10-26 Thread Gilberto Nunes
I was able to solve the issue by restarting all servers. Now I have another issue! I just powered off the gluster01 server and then the geo-replication entered faulty status. I tried to stop and start the gluster geo-replication like this: gluster volume geo-replication DATA

[Gluster-users] Geo-replication status Faulty

2020-10-26 Thread Gilberto Nunes
Hi there... I created a 2-node gluster volume and another gluster server acting as a backup server, using geo-replication. So on gluster01 I issued the commands: gluster peer probe gluster02;gluster peer probe gluster03 gluster vol create DATA replica 2 gluster01:/DATA/master01-data
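The setup described in this opening message can be sketched as below. The snippet is cut off after the first brick, so the second brick path (`gluster02:/DATA/master02-data`) and the slave volume name (`DATA-SLAVE`) are assumptions for illustration, not values from the thread.

```shell
# Two-node replica volume plus a third node for geo-replication.
gluster peer probe gluster02
gluster peer probe gluster03

# Second brick path is hypothetical (the thread truncates here):
gluster volume create DATA replica 2 \
    gluster01:/DATA/master01-data gluster02:/DATA/master02-data
gluster volume start DATA

# Geo-replication to the backup node; slave volume name assumed.
# push-pem distributes the ssh keys the session needs.
gluster volume geo-replication DATA gluster03::DATA-SLAVE create push-pem
gluster volume geo-replication DATA gluster03::DATA-SLAVE start
gluster volume geo-replication DATA gluster03::DATA-SLAVE status
```

Note that `replica 2` without an arbiter is exactly the configuration the later quorum discussion in this thread warns about.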

Re: [Gluster-users] geo-replication status faulty

2014-07-01 Thread Venky Shankar
...@gmail.com] Sent: Thursday, June 26, 2014 11:13 PM To: Chris Ferraro Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] geo-replication status faulty Hey Chris, '/nonexistent/gsyncd' is purposely used in the ssh connection so as to avoid insecure access via ssh. Fiddling with remote_gsyncd should

Re: [Gluster-users] geo-replication status faulty

2014-07-01 Thread Chris Ferraro
: exiting. Thanks again for any help Chris From: Venky Shankar [mailto:yknev.shan...@gmail.com] Sent: Thursday, June 26, 2014 11:13 PM To: Chris Ferraro Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] geo-replication status faulty Hey Chris

Re: [Gluster-users] geo-replication status faulty

2014-06-27 Thread Venky Shankar
Hey Chris, ‘/nonexistent/gsyncd’ is purposely used in the ssh connection so as to avoid insecure access via ssh. Fiddling with remote_gsyncd should be avoided (it's a reserved option anyway). As the log messages say, there seems to be a misconfiguration in the setup. Could you please list down

Re: [Gluster-users] geo-replication status faulty

2014-06-26 Thread Chris Ferraro
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'. As Steve mentions, the nonexistent reference in the logs looks like the culprit especially seeing that the ssh command trying to be run is printed on an

Re: [Gluster-users] geo-replication status faulty

2014-06-26 Thread Chris Ferraro
Found the remote_gsyncd config attribute is set to /nonexistent/gsyncd. However, the command to change the path fails. # gluster volume geo-replication gluster_vol0 node003::gluster_vol1 config remote_gsyncd /usr/libexec/glusterfs/gsyncd Reserved option geo-replication command failed Any
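Since `remote_gsyncd` is a reserved option, the CLI refuses to change it, as the error above shows. A hedged sketch of what can still be done (session names follow this 2014 thread; the config-file path is the conventional glusterd location and may vary by distribution and version):

```shell
# Reading the option is allowed even though writing it is reserved:
gluster volume geo-replication gluster_vol0 node003::gluster_vol1 \
    config remote_gsyncd

# The workaround discussed elsewhere in the thread is editing the
# session's gsyncd.conf directly; locate the current value first:
grep -r remote_gsyncd /var/lib/glusterd/geo-replication/
```

As later replies point out, the `/nonexistent/gsyncd` placeholder is intentional and normally overridden via the ssh authorized_keys command set up by push-pem, so hand-editing should be a last resort.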

Re: [Gluster-users] geo-replication status faulty

2014-04-30 Thread Venky Shankar
On 04/29/2014 11:12 PM, Steve Dainard wrote: Fixed by editing the geo-rep volume's gsyncd.conf file, changing /nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master nodes. That is not required. Did you invoke the create command with push-pem? Any reason why this is in the default
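Venky's question above points at the intended fix: creating the session with `push-pem` distributes the pem keys whose forced ssh command supplies the real gsyncd path, making the `/nonexistent/gsyncd` default harmless. A hedged sketch, reusing the session names from this thread (`force` is an assumption, useful only if a session already exists):

```shell
# Generate the common pem keys on the master cluster:
gluster system:: execute gsec_create

# (Re)create the geo-rep session so the keys are pushed to the slave:
gluster volume geo-replication gluster_vol0 node003::gluster_vol1 \
    create push-pem force
gluster volume geo-replication gluster_vol0 node003::gluster_vol1 start
```

With the keys in place, the slave's authorized_keys entry overrides the command run over ssh, which is why editing `remote_gsyncd` by hand is unnecessary.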

Re: [Gluster-users] geo-replication status faulty

2014-04-30 Thread Steve Dainard
On Wed, Apr 30, 2014 at 5:56 AM, Venky Shankar vshan...@redhat.com wrote: On 04/29/2014 11:12 PM, Steve Dainard wrote: Fixed by editing the geo-rep volumes gsyncd.conf file, changing /nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master nodes. That is not required. Did

Re: [Gluster-users] geo-replication status faulty

2014-04-29 Thread Steve Dainard
Fixed by editing the geo-rep volumes gsyncd.conf file, changing /nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master nodes. Any reason why this is in the default template? Also any reason why when I stop glusterd, change the template on both master nodes and start the gluster

Re: [Gluster-users] geo-replication status faulty

2014-04-29 Thread CJ Beck
sdain...@miovision.com Date: Tuesday, April 29, 2014 at 10:42 AM To: gluster-users@gluster.org List gluster-users@gluster.org Subject: Re: [Gluster-users] geo-replication status faulty Fixed by editing

Re: [Gluster-users] geo-replication status faulty

2014-04-29 Thread Justin Clift
On 29/04/2014, at 6:42 PM, Steve Dainard wrote: Fixed by editing the geo-rep volumes gsyncd.conf file, changing /nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master nodes. That doesn't sound good. :( Do you have the time/inclination to file a bug about this in Bugzilla?