Subject: Re: [Gluster-users] Geo-Replication - Changelog socket is not
present - Falling back to xsync
One last question, correct me if I’m wrong.
When you start a geo-rep process it starts with xsync, aka hybrid crawling
(sending files every 60s, with the file window set to 8192 files per send
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent: Friday, May 22, 2015 9:34:47 PM
Subject: Re: [Gluster-users] Geo-Replication - Changelog socket is not present
- Falling back to xsync
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users gluster-users@gluster.org
Sent: Friday, May 22, 2015 5:31:13 AM
Subject: Re: [Gluster-users] Geo-Replication - Changelog socket is not
present - Falling back to xsync
Thanks to JoeJulian / Kaushal I managed to re-enable the changelog option and
the socket is now present.
For the record, I had some clients running RHS gluster-fuse while our nodes are
running GlusterFS; the releases and op-versions are not “compatible”.
Now I have to wait for the init crawl to see if it
Hi,
Unfortunately,
# gluster vol set usr_global changelog.changelog off
volume set: failed: Staging failed on mvdcgluster01.us.alcatel-lucent.com.
Error: One or more connected clients cannot support the feature being set.
These clients need to be upgraded or disconnected before running this
Hi Cyril,
From the brick logs, it seems the changelog-notifier thread has been killed for
some reason, as notify is failing with EPIPE.
Try the following. It should probably help:
1. Stop geo-replication.
2. Disable changelog: gluster vol set master-vol-name changelog.changelog off
3. Enable changelog: gluster vol set master-vol-name changelog.changelog on
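A minimal sketch of the whole toggle sequence, assuming a session to a slave named slave-host::slave-vol (volume and slave names are placeholders):
gluster volume geo-replication master-vol-name slave-host::slave-vol stop
gluster vol set master-vol-name changelog.changelog off
gluster vol set master-vol-name changelog.changelog on
gluster volume geo-replication master-vol-name slave-host::slave-vol start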
One thing I've noticed is that you need to make sure that the SSH host
keys of _each_ of the slave bricks are in the known_hosts of
each of the master bricks. Failure to ensure this can cause failure in
a non-obvious way.
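A hedged way to pre-populate them from each master brick node (ssh-keyscan ships with OpenSSH; slave1 and slave2 are placeholder slave brick hostnames):
for h in slave1 slave2; do
  ssh-keyscan "$h" >> ~/.ssh/known_hosts   # append each slave brick host key
done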
Regards,
Paul
On 12 March 2015 at 20:29, John Gardeniers
Hi Paul,
That was certainly not made clear by the documentation, what there is of
it. I've done as you suggest and it's working now. Thank you.
regards,
John
On 16/03/15 09:22, Paul Mc Auley wrote:
One thing I've noticed is that you need to make sure that the SSH host
keys of _each_ of the
Just to make it clear, I *have* set up passwordless SSH between the node
where I'm running the command and the slave. I thought that should have
been obvious from my message. Also, the identity files are in the
standard location. So, back to the question I asked: what gives? More to
the point,
On 11 March 2015 at 06:30, John Gardeniers jgardeni...@objectmastery.com
wrote:
Using Gluster v3.5.3 and trying to follow the geo-replication instructions
(https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md), step by step,
gets me
Hi,
Thanks for reporting; we will work on having the correct path with the push-pem
command.
Created a Bug to track the progress.
https://bugzilla.redhat.com/show_bug.cgi?id=1199885
--
regards
Aravinda
On 03/08/2015 01:12 AM, ML mail wrote:
Hello,
I am setting up geo replication on Debian wheezy
Yes, my single slave node is a single brick. Here is the output of the
volume info just in case:
Volume Name: myslavevol
Type: Distribute
Volume ID: *REMOVED*
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfs1geo:/data/myslavevol/brick
Options Reconfigured:
Looks like node C is in disconnected state. Please let us know the output
of `gluster peer status` from all the master nodes and slave nodes.
--
regards
Aravinda
On 01/22/2015 12:27 AM, PEPONNET, Cyril N (Cyril) wrote:
So,
On the master node of my 3-node setup:
1) gluster system:: execute
Every node is connected:
[root@nodeA geo-replication]# gluster peer status
Number of Peers: 2
Hostname: nodeB
Uuid: 6a9da7fc-70ec-4302-8152-0e61929a7c8b
State: Peer in Cluster (Connected)
Hostname: nodeC
Uuid: c12353b5-f41a-4911-9329-fee6a8d529de
State: Peer in Cluster (Connected)
[root@nodeB
For the record, after adding
operating-version=2
on every node (A, B, C) AND the slave node, the commands are working.
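For anyone following along, a sketch of the change, assuming the line lives in glusterd's info file under the stock /var/lib/glusterd layout (restart glusterd afterwards):
grep operating-version /var/lib/glusterd/glusterd.info
sed -i 's/^operating-version=.*/operating-version=2/' /var/lib/glusterd/glusterd.info
service glusterd restart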
--
Cyril Peponnet
On Feb 2, 2015, at 9:46 AM, PEPONNET, Cyril N (Cyril)
cyril.pepon...@alcatel-lucent.com
wrote:
More information here:
I
Update:
It sounds like the active node is finally fixed, but it also sounds like rsync processes are
running from nodeA (so I don't understand the master notion) and nodeA is the
most used node, so its load average becomes dangerously high.
How to force a geo-replication to be started from a specific node
More information here:
I updated the state of the peer in the uuid file located in /v/l/g/peers from
state 10 to state 3 (as it is on the other node) and now the node is in the cluster.
gluster system:: execute gsec_create now creates a proper file from the master node
with every node's key in it.
Now
On 26/01/15 21:35, Rosemond, Sonny wrote:
I have a RHEL7 testing environment consisting of 6 nodes total, all
running Gluster 3.6.1. The master volume is distributed/replicated,
and the slave volume is distributed. Firewalls and SELinux have been
disabled for testing purposes. Passwordless SSH
Vishwanath Bhat vb...@redhat.com
Date: Tue, 27 Jan 2015 14:41:33 +0530
To: LANL User so...@lanl.gov,
gluster-users@gluster.org
Subject: Re: [Gluster-users
So,
On the master node of my 3-node setup:
1) gluster system:: execute gsec_create
in /var/lib/glusterd/geo-replication/common_secret.pub I have the pem pub keys from
master node A and node B (not node C).
On node C I don't have anything in /v/l/g/geo/ except the gsync template
config.
So here I
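For context, a sketch of the standard creation sequence being attempted here (mastervol and slavehost::slavevol are placeholders; push-pem distributes common_secret.pub to the slave):
gluster system:: execute gsec_create
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start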
Hi,
I'm ready for new testing. I deleted the geo-rep session between master and
slave, and removed the lines in the authorized_keys file on the slave.
I also removed the common secret pem from the slave and from the master. There is only
the gsyncd_template.conf in /var/lib/gluster now.
Here is our setup:
Site A:
On 01/20/2015 11:01 PM, PEPONNET, Cyril N (Cyril) wrote:
Hi,
I'm ready for new testing. I deleted the geo-rep session between master and
slave, and removed the lines in the authorized_keys file on the slave.
I also removed the common secret pem from the slave and from the master. There is only
the
Run the hook script directly on the master node to discover the root cause
for the error:
/var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
--volname=geo_test is_push_pem=1
pub_file=/var/lib/glusterd/geo-replication/common_secret.pem.pub
slave_ip=gluster-slave01
Also
3.5.2 everywhere.
sh -x
/var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
--volname=geo_test is_push_pem=1
pub_file=/var/lib/glusterd/geo-replication/common_secret.pem.pub
slave_ip=gluster01
++ echo is_push_pem=1
++ cut -d ' ' -f 1
+ key_val_pair1=is_push_pem=1
++
Thank you for the advice. After re-compiling gluster with the xml option, I
was able to get geo-replication started!
Is this output normal? This is a 2x2 distributed/replicated volume:
# gluster volume geo-rep shares gfs-a-bkp::bkpshares status
MASTER NODE MASTER VOL
Hi Dave,
Yes, it is normal that one among the replica pair will be active and
participate in syncing. If the active node goes down, the other node will
become active.
--
regards
Aravinda
http://aravindavk.in
On 12/14/2014 10:30 PM, David Gibbons wrote:
Thank you for the advice. After
Thanks for the feedback, answers inline below:
Have you followed all the upgrade steps w.r.t geo-rep
mentioned in the following link?
I didn't upgrade geo-rep, I disconnected the old replicated server and
started from scratch. So everything with regard to geo-rep is
fresh/brand-new.
Geo-replication depends on the XML output of Gluster CLI commands. For
example, before connecting to slave nodes it gets the node list from
both master and slave using the gluster volume info and status commands with
--xml.
The Python tracebacks you are seeing in logs are due to the inability to
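A quick sketch to confirm that XML output is available (myvol is a placeholder; both commands fail if gluster was built without xml support):
gluster volume info myvol --xml
gluster volume status myvol --xml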
From: David Gibbons david.c.gibb...@gmail.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users Gluster-users@gluster.org, vno...@stonefly.com
Sent: Tuesday, December 9, 2014 6:16:03 PM
Subject: Re: [Gluster-users] Geo-Replication Issue
Hi Kotresh,
Yes, I believe that I am. Can you tell me which symlinks are missing and cause
geo-replication to fail to start
- Original Message -
From: David Gibbons david.c.gibb...@gmail.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users Gluster-users@gluster.org, vno...@stonefly.com
Sent: Wednesday, December 10, 2014 6:12:00 PM
Subject: Re: [Gluster-users] Geo-Replication Issue
Gluster-users@gluster.org
Cc: vno...@stonefly.com
Sent: Monday, December 8, 2014 7:03:31 PM
Subject: Re: [Gluster-users] Geo-Replication Issue
and Regards,
Kotresh H R
- Original Message -
From: David Gibbons david.c.gibb...@gmail.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: gluster-users Gluster-users@gluster.org, vno...@stonefly.com
Sent: Tuesday, December 9, 2014 6:16:03 PM
Subject: Re: [Gluster-users] Geo-Replication
Apologies for sending so many messages about this! I think I may be running
into this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1105283
Would someone be so kind as to let me know which symlinks are missing when
this bug manifests, so that I can create them?
Thank you,
Dave
On Sun, Dec
Ok,
I was able to get geo-replication configured by
changing /usr/local/libexec/glusterfs/gverify.sh to use ssh to access the
local machine, instead of invoking bash -c directly. I then found that the
hook script was missing for geo-replication, so I copied that over
manually. I now have what
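A sketch of the manual hook copy, assuming an unpacked glusterfs source tree (adjust the source path to wherever your build keeps the hook scripts):
cp extras/hook-scripts/S56glusterd-geo-rep-create-post.sh \
  /var/lib/glusterd/hooks/1/gsync-create/post/
chmod +x /var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh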
@gluster.org
Subject: Re: [Gluster-users] geo-replication with lots of folders and files
Hello,
Please let us know the version of GlusterFS you are using. Do you see any
errors in the log files about sync failure? (Logs will be in
/var/log/glusterfs/geo-replication dir
On 20/10/14 21:48, Justin Clift wrote:
- Original Message -
The solution involves changelog crash consistency among other things.
Since this feature itself is targeted for glusterfs-3.7, I would say the
complete solution would be available with glusterfs-3.7
One of the major challenges in
- Original Message -
I can do it. I need some time (we have a long weekend coming up in
India for Diwali) and some help from Aravinda.
Thanks Vishwanath, that will really help. :)
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source,
To: Kingsley glus...@gluster.dogwind.com, James Payne
jimqwer...@hotmail.com
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster
3.6.0 beta3
Hi,
Right now, distributed-geo-rep has a bunch of known issues with deletes
and renames. Part of the issue
On 18/10/14 20:31, Justin Clift wrote:
- Original Message -
snip
Right now, distributed-geo-rep has a bunch of known issues with deletes
and renames. Part of the issue was solved with a patch sent upstream
recently. But it still doesn't solve the complete issue.
snip
Do we have an idea
- Original Message -
The solution involves changelog crash consistency among other things.
Since this feature itself is targeted for glusterfs-3.7, I would say the
complete solution would be available with glusterfs-3.7
One of the major challenges in solving it involves
the issue shouldn't be there in a replicate only scenario?
Regards
James
--- Original Message ---
From: M S Vishwanath Bhat vb...@redhat.com
Sent: 17 October 2014 20:53
To: Kingsley glus...@gluster.dogwind.com, James Payne
jimqwer...@hotmail.com
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users
- Original Message -
snip
Right now, distributed-geo-rep has a bunch of known issues with deletes
and renames. Part of the issue was solved with a patch sent upstream
recently. But it still doesn't solve the complete issue.
snip
Do we have an idea when the complete solution to this might
- Original Message -
Just adding that I have verified this as well with the 3.6 beta, I added a
log to the ticket regarding this.
https://bugzilla.redhat.com/show_bug.cgi?id=1141379
Please feel free to add to the bug report, I think we are seeing the same
issue. It isn't present
Hello,
we use GlusterFS Version 3.4.5-1.el6.x86_64.
In the logs appear no errors.
Regards,
Michael
From: Aravinda [mailto:avish...@redhat.com]
Sent: Thursday, 16 October 2014 06:48
To: Michael Rauch; gluster-users@gluster.org
Subject: Re: [Gluster-users] geo-replication with lots
Hi,
Right now, distributed-geo-rep has a bunch of known issues with deletes
and renames. Part of the issue was solved with a patch sent upstream
recently. But it still doesn't solve the complete issue.
So long story short, dist-geo-rep still has issues with short-lived
renames where the
I have added a comment to that bug report (a paste of my original
email).
Cheers,
Kingsley.
On Tue, 2014-10-14 at 22:10 +0100, James Payne wrote:
Just adding that I have verified this as well with the 3.6 beta, I added a
log to the ticket regarding this.
Hello,
Please let us know the version of GlusterFS you are using. Do you see
any errors in the log files about sync failure? (Logs will be in
/var/log/glusterfs/geo-replication dir in each master node and
/var/log/glusterfs/geo-replication-slaves in each slave node)
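A hedged one-liner per side to pull those errors (MASTER_VOL is a placeholder; gluster logs mark errors with ' E '):
grep -r ' E ' /var/log/glusterfs/geo-replication/MASTER_VOL/   # on each master node
grep -r ' E ' /var/log/glusterfs/geo-replication-slaves/       # on each slave node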
--
regards
Aravinda
It's worth me adding that since geo-replication broke, if I query the
volume status (in this instance, on test1), I get this:
test1# gluster volume status
Another transaction is in progress. Please try again after sometime.
It's still giving this error, 24 hours later.
Cheers,
Kingsley.
On
Just adding that I have verified this as well with the 3.6 beta, I added a
log to the ticket regarding this.
https://bugzilla.redhat.com/show_bug.cgi?id=1141379
Please feel free to add to the bug report, I think we are seeing the same
issue. It isn't present in the 3.4 series, which is the one
netstat shows that all the TCP sessions between the masters to the
slaves are in SYN-SENT state.
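A quick sketch to confirm that from a master node (netstat spells the state SYN_SENT; connections stuck there usually mean a firewall is dropping traffic to the slave):
netstat -tn | grep SYN_SENT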
On 9/30/14 2:26 PM, M S Vishwanath Bhat wrote:
On 30 September 2014 23:29, Bin Zhou lakerz...@yahoo.com wrote:
I managed to get around the
Hi,
Thanks for that. I may well give those a go, but as a test I've just
rebooted 3 of my brick servers (4 in total), but gluster didn't start up
automatically, so I'm going to try to figure out why that is before I
start playing with different versions.
Before rebooting, I used chkconfig to
I managed to get around the transport.address-family not specified issue by
using an IP address instead of a host name in the CLI when creating and starting the
geo-replication volume.
However, the replication to the slave is not happening. The status shows as Not
Started. Any suggestions?
On 30 September 2014 23:29, Bin Zhou lakerz...@yahoo.com wrote:
I managed to get around the transport.address-family not specified issue
by using an IP address instead of a host name in the CLI when creating and
starting the geo-replication volume.
However, the replication to the slave is not
Hi,
replies within.
On Mon, 2014-09-29 at 13:50 +0530, Aravinda wrote:
On 09/27/2014 04:45 AM, Kingsley wrote:
Hi,
I'm new to gluster so forgive me if I'm being an idiot. I've searched
the list archives back to May but haven't found the exact issue I've
come across, so I thought I'd
Hi,
That does appear to be at least part of the problem. I'm not using any
Windowsy stuff, but my test script does create files that may then soon
be renamed or deleted (which could happen in our production
environment).
I've put more detail into my reply to Aravinda's email, in case you
wanted
On Mon, 2014-09-29 at 13:50 +0530, Aravinda wrote:
[snip]
Do these fops involve renames and deletes of the same files? Geo-rep had an
issue with short-lived renamed files (now fixed in master:
http://review.gluster.org/#/c/8761/).
Hi,
Apologies if this is a newbie question but how do I get hold
BTW, I have an rsync log from both servers if it's of interest (command
args called, duration of execution and exit code), so let me know if you
want this.
I meant to remove the attached logs from my previous email once I'd
supplied the links, oops :-/
--
Cheers,
Kingsley.
On 29/09/14 17:42, Kingsley wrote:
On Mon, 2014-09-29 at 13:50 +0530, Aravinda wrote:
[snip]
Do these fops involve renames and deletes of the same files? Geo-rep had an
issue with short-lived renamed files (now fixed in master:
http://review.gluster.org/#/c/8761/).
Hi,
Apologies if this is a
Not sure, but is this the same as bug
https://bugzilla.redhat.com/show_bug.cgi?id=1141379
I have seen similar behaviour, but in my case it showed up due to using
Samba: every time a user created a folder (Windows calls it New Folder)
and renamed it quickly, the Geo Rep version became instantly
Hello,
I've found out what the problem was ...
I am running gluster on Debian wheezy
and the
/var/lib/glusterd/geo-replication/gsyncd_template.conf
was full of wrong paths like this one
[peersrx . %5Essh%3A]
remote_gsyncd = /nonexistent/gsyncd
After changing them, all ran OK!
On
Please share the log snippets (on every master brick node) from
/var/log/glusterfs/geo-replication/MASTER_VOL/*.log if you see any errors.
--
regards
Aravinda
On 09/13/2014 07:02 PM, HL wrote:
Hello
I've upgraded all my nodes from 3.3.x to 3.5.2 glusterfs
since the geo-replication was
Cc: David F. Robinson david.robin...@corvidtec.com;
gluster-users@gluster.org
Sent: 8/15/2014 6:25:04 AM
Subject: Re: [Gluster-users] geo replication help
On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat wrote:
On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone
: [Gluster-users] geo replication help
On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat wrote:
On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my
--
From: Niels de Vos nde...@redhat.com
To: M S Vishwanath Bhat vb...@redhat.com
Cc: David F. Robinson david.robin...@corvidtec.com;
gluster-users@gluster.org
Sent: 8/15/2014 6:25:04 AM
Subject: Re: [Gluster-users] geo replication help
On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat
On 15/08/14 15:55, Niels de Vos wrote:
On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat wrote:
On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup
On 18/08/14 14:35, M S Vishwanath Bhat wrote:
On 15/08/14 15:55, Niels de Vos wrote:
On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat wrote:
On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to
...@corvidtec.com; gluster-users@gluster.org
Sent: 8/13/2014 6:47:11 AM
Subject: Re: [Gluster-users] geo replication help
On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone could help me debug my geo-replication
On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat wrote:
On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my
data. What I wanted to do was to
On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my
data. What I wanted to do was to turn off the geo-replication (gluster
volume geo-replication homegfs
Hello,
I have deleted all the /var/lib/glusterd files and directories on the slave
and did all the steps once again and it worked... I was not able to
replicate the problem with the faulty slave anymore...
I am not sure what the problem was, but I think it was one of the old
steps from a previous geo
Hello Vishwanath,
thanks for pointing me in the right direction... This was helpful... I
thought the passwordless ssh connection was done by glusterfs using
the secret.pem in the initial run... but it wasn't... I had to create the
id_rsa in the /root/.ssh/ directory to be able to ssh to the slave
On 15/07/14 15:08, Stefan Moravcik wrote:
Hello Guys,
I have been trying to set up geo replication in our glusterfs test
environment and got a problem with the message invalid slave name
So first things first...
I have 3 nodes configured in a cluster. Those nodes are configured as
replica. On
Hello Vishwanath
thank you for your quick reply but I have a follow-up question if it is
ok... Maybe it is a different issue and I should open a new thread, but I will
try to continue to use this one...
So I followed the new documentation... let me show you what I have done
and what is the final
On 15/07/14 18:13, Stefan Moravcik wrote:
Hello Vishwanath
thank you for your quick reply but I have a follow-up question if it
is ok... Maybe it is a different issue and I should open a new thread, but I
will try to continue to use this one...
So I followed the new documentation... let me show
...@gmail.com]
Sent: Thursday, June 26, 2014 11:13 PM
To: Chris Ferraro
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] geo-replication status faulty
Hey Chris,
'/nonexistent/gsyncd' is purposely used in the ssh connection so as to avoid
insecure access via ssh. Fiddling with remote_gsyncd should
: exiting.
Thanks again for any help
Chris
From: Venky Shankar [mailto:yknev.shan...@gmail.com]
Sent: Thursday, June 26, 2014 11:13 PM
To: Chris Ferraro
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] geo-replication status faulty
Hey Chris
Hey Chris,
‘/nonexistent/gsyncd’ is purposely used in the ssh connection so as to
avoid insecure access via ssh. Fiddling with remote_gsyncd should be
avoided (it's a reserved option anyway).
As the log messages say, there seems to be a misconfiguration in the setup.
Could you please list down
Venky Shankar, can you follow up on these questions? I too have this issue and
cannot resolve the reference to '/nonexistent/gsyncd'.
As Steve mentions, the nonexistent reference in the logs looks like the culprit,
especially seeing that the ssh command trying to be run is printed on an
Found the remote_gsyncd config attribute is set to /nonexistent/gsyncd
However, the command to change the path fails.
# gluster volume geo-replication gluster_vol0 node003::gluster_vol1 config
remote_gsyncd /usr/libexec/glusterfs/gsyncd
Reserved option
geo-replication command failed
Any
Hi James,
Just checking back -- were you able to test it out with the config option?
Thanks,
-venky
On Wed, May 7, 2014 at 11:07 PM, Venky Shankar yknev.shan...@gmail.com wrote:
On Wed, May 7, 2014 at 6:10 PM, James Le Cuirot
ch...@aura-online.co.uk wrote:
Hello,
I know this has been
Hi Venky,
On Wed, May 7, 2014 at 11:07 PM, Venky Shankar
yknev.shan...@gmail.com wrote:
On Wed, May 7, 2014 at 6:10 PM, James Le Cuirot
ch...@aura-online.co.uk wrote:
I have set up geo-replication between two machines on my LAN for
testing. Both are using NTP and the clocks are
On Wed, May 7, 2014 at 6:10 PM, James Le Cuirot ch...@aura-online.co.uk wrote:
Hello,
I know this has been asked before but I felt that it wasn't fully
answered and I think the situation may have changed in 3.5.
I have set up geo-replication between two machines on my LAN for
testing. Both
On 04/29/2014 11:12 PM, Steve Dainard wrote:
Fixed by editing the geo-rep volume's gsyncd.conf file, changing
/nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master
nodes.
That is not required. Did you invoke the create command with push-pem?
Any reason why this is in the default
On Wed, Apr 30, 2014 at 5:56 AM, Venky Shankar vshan...@redhat.com wrote:
On 04/29/2014 11:12 PM, Steve Dainard wrote:
Fixed by editing the geo-rep volume's gsyncd.conf file, changing
/nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master
nodes.
That is not required. Did
Fixed by editing the geo-rep volume's gsyncd.conf file, changing
/nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master
nodes.
Any reason why this is in the default template? Also, any reason why, when I
stop glusterd, change the template on both master nodes, and start the
gluster
sdain...@miovision.com
Date: Tuesday, April 29, 2014 at 10:42 AM
To: gluster-users@gluster.org List
gluster-users@gluster.org
Subject: Re: [Gluster-users] geo-replication status faulty
Fixed by editing
On 29/04/2014, at 6:42 PM, Steve Dainard wrote:
Fixed by editing the geo-rep volume's gsyncd.conf file, changing
/nonexistent/gsyncd to /usr/libexec/glusterfs/gsyncd on both the master nodes.
That doesn't sound good. :(
Do you have the time/inclination to file a bug about this
in Bugzilla?
I have also tried enabling geo-replication using only a directory on the
slave server rather than a gluster volume and it fails in
the same way.
I've noticed that every time it fails, the following is logged on the master:
[2014-03-14 11:51:43.155292] I [fuse-bridge.c:3376:fuse_init]
Hi,
Thanks for the advice, I finally have time to go back to this issue now.
It doesn't seem to be sticking on any particular part of the file system as
far as I can tell.
One thing I've noticed is I always get an error about missing 'option
transport-type'
[2014-03-13 09:57:00.902189] E
Could you try again after changing the log-level to DEBUG using:
# gluster volume geo-replication master slave config log-level DEBUG
Also, logs from both master and slave would help.
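After setting it, a sketch of where to watch for the extra output, assuming the stock log layout (MASTER_VOL is a placeholder for the master volume name):
tail -f /var/log/glusterfs/geo-replication/MASTER_VOL/*.log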
Thanks,
-venky
On Wed, Feb 12, 2014 at 4:44 PM, John Ewing johnewi...@gmail.com wrote:
No, it's the latest
No, it's the latest 3.3 series release.
3.3.2 on both master and slave.
CentOS 6 on master, Amazon Linux on slave.
rsync 3.0.6 on both
Using unprivileged ssh user setup with mountbroker.
One thing I noticed was that the 3.3 manual says the base requirement is
for rsync 3.0.0 and higher and the
Is this from the latest master branch?
On Tue, Feb 11, 2014 at 4:35 PM, John Ewing johnewi...@gmail.com wrote:
I am trying to use geo-replication but it is running slowly and I keep
getting the
following logged in the geo-replication log.
[2014-02-11 10:56:42.831517] I
Hey Venky,
That sounds...promising. But, I would like to do stuff *after* a file is
changed on the source but *before* the change is pushed through a geo-repl
link to a target. When multi fan-out replication is done, it should even be
possible to have some custom stuff happening on one target and
Hey Fred,
You could implement this without touching Geo-Replication code. Gluster now
has the changelog translator (journaling mechanism) which records changes
made to the filesystem (on each brick). Journals can be consumed using the
changelog consumer library (libgfchangelog). Geo-Replication
Steve,
I think the best bet would be geo replication with LVM snaps:
1) Geo-replicate to another gluster install on separate hardware
2) Snap the LVM volume that your gluster bricks are on (see the sketch below)
If you snap once a day and retain for 7 days, that should achieve your
backup need.
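A minimal sketch of step 2, assuming the bricks sit on an LVM logical volume /dev/vg_gluster/brick1 with free space in the volume group for copy-on-write (all names and sizes are placeholders):
lvcreate --snapshot --size 10G --name brick1-snap-$(date +%F) /dev/vg_gluster/brick1
# keep 7 days' worth; drop older snapshots with lvremove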
Cheers,
Well I guess I'm carrying on a conversation with myself here, but I've
turned on Debug and gsyncd appears to be crashing in _query_xattr - which
is odd because as mentioned before I was previously able to get this volume
to sync the first 1TB of data before this started, but now it won't even do