Answers inline.
regards
Aravinda
http://aravindavk.in
On 02/23/2016 02:25 PM, Christian Rice wrote:
The subject line is a mouthful, but pretty much says it all.
apivision:~$ sudo gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER status

MASTER NODE    MASTER VOL    MASTER BRICK            SLAVE USER         SLAVE                                  SLAVE NODE    STATUS     CRAWL STATUS     LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
apivision      MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    ua610         Active     History Crawl    2016-02-22 21:45:56
studer900      MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    trident24     Passive    N/A              N/A
neve88rs       MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    trident24     Passive    N/A              N/A
ssl4000        MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    ua610         Active     History Crawl    2016-02-22 22:05:53
This seems to indicate that only one of my slave nodes is actively participating in
the geo-replication. That seems wrong to me; or did I misunderstand the new
geo-replication feature where multiple nodes participate in the process?
Can I get it to balance the rsyncs across more than one slave node?
Sync happens from a Master Volume mount to a Slave Volume mount. Both Active
workers connect to ua610, and each maintains a Slave Volume mount on that
node to sync data. So data is distributed in the Slave as usual (depending
on the Slave Volume topology).
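For example, the Slave node each worker is connected to is visible in the
status output above, and the detailed status adds per-worker sync counters
on top of that (same command with "detail" appended):

    sudo gluster volume geo-replication MIXER \
        svc-mountbroker@trident24::DR-MIXER status detail

Each Active worker holds a client mount of the Slave volume on its chosen
Slave node and writes through that mount, so placement on the Slave bricks
follows the Slave volume's own layout.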
I used georepsetup which, by the way, is a freaking awesome tool that did in a
few seconds what I had been tearing my hair out over for days: getting
geo-replication working with mountbroker. But even using simple root
geo-replication with manual setup, the balance fell this way every
time on the back end.
Glad that tool helped you to set up Geo-replication. Let us know if you
have any feedback on the tool
(https://github.com/aravindavk/georepsetup/issues).
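For anyone following along, the basic invocation from a Master node is just
master volume, slave host, and slave volume (the exact form for a
mountbroker/non-root session may differ; see the README):

    sudo georepsetup MIXER svc-mountbroker@trident24 DR-MIXER

It generates and distributes the SSH keys and runs the session create step,
which is the part that is painful to do by hand.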
Debian 8/Jessie, gluster 3.7.8-1, on zfs, with a 119TB volume at each end. Data is
distributing properly in the slave pool (at a cursory glance), and in general I'm
not aware of anything being outright broken. Front-end replica pairs are
apivision/neve88rs and ssl4000/studer900.
PS: it's in History Crawl at the moment due to pausing/resuming geo-replication.
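That is expected: after a pause/resume the worker restarts and replays the
pending changelogs via History Crawl, then switches back to Changelog Crawl
once it catches up. For reference:

    sudo gluster volume geo-replication MIXER \
        svc-mountbroker@trident24::DR-MIXER pause
    sudo gluster volume geo-replication MIXER \
        svc-mountbroker@trident24::DR-MIXER resume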
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users