Hello,
I am using the following documentation in order to set up geo-replication
between two sites
http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html
Unfortunately the step:
gluster volume geo-replication myvolume gfs...@gfs1geo.domain.com::myvolume
create push-pem
fails.
Refer:
http://blog.gluster.org/2015/09/introducing-georepsetup-gluster-geo-replication-setup-tool-2/
Refer to the README for both the usual (root-user based) and
mountbroker (non-root) setup details here:
https://github.com/aravindavk/georepsetup/blob/master/README.md
Thanks,
Saravana
On 09/13/2015 09:46 PM, ML mail wrote:
> Hello,
>
> I am using the following documentation in order to set up geo-replication
> between two sites
> http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html
>
> Unfortunately the step:
>
"gluster system:: execute gsec_create"
This needs to be done on the same node (on the Master) where you execute the
geo-rep create command.
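For reference, a typical sequence looks something like this (the volume and
user/host names are only taken over from this thread; adjust to your environment):
# gluster system:: execute gsec_create
# gluster volume geo-replication myvolume gfsgeo@gfs1geo.domain.com::myvolume create push-pem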
You can find the geo-replication-related logs here:
/var/log/glusterfs/geo-replication/
Please share the logs if you still face any issues.
Thanks,
Saravana
On 09/14/2015 11:23 PM, ML mail wrote:
Hello,
I am trying in vain to set up geo-replication, now on version 3.7.4 of GlusterFS,
but it still does not seem to work. I have at least managed to run the
georepsetup tool successfully using the following command:
georepsetup reptest gfsgeo@gfs1geo reptest
But as soon as I run:
gluster volume g
g to use stat command to
get inode details.
(stat command is provided by coreutils, which is quite a basic package).
Could you share your system details? Is it a Linux system?
PS: XFS is the recommended and widely tested filesystem for GlusterFS.
Thanks,
Saravana
On 09/19/2015 03:03 AM, ML mail wrote:
he "SLAVE USER".
Is this normal???
So to summarize, I've got geo-replication set up but it's quite patchy and messy
and does not run under the special replication user I wanted it to run under.
On Monday, September 21, 2015 8:07 AM, Saravanakumar Arumugam
wrote:
Replies inline.
Hello,
I just set up distributed geo-replication to a slave on my 2-node replicated
volume and so far it works, but every 60 seconds I see the following message in
the slave's geo-replication-slaves gluster log file:
[2016-01-31 17:38:48.027792] I [dict.c:473:dict_get]
(-->/usr/lib/x86_64-li
Hi Jiffin,
Thanks for fixing that, will be looking forward to this patch so that my log
files don't get so cluttered up ;)
Regards
ML
On Monday, February 1, 2016 6:54 AM, Jiffin Tony Thottan
wrote:
On 31/01/16 23:25, ML mail wrote:
> Hello,
>
> I just set up distributed g
Hello,
I just set up distributed geo-replication to a slave on my 2 nodes' replicated
volume and noticed quite a few error messages (around 70 of them) in the
slave's brick log file:
The exact log file is: /var/log/glusterfs/bricks/data-myvolume-geo-brick.log
[2016-01-31 22:19:29.524370] E [MS
Arumugam
wrote:
Hi,
On 02/01/2016 02:14 PM, ML mail wrote:
> Hello,
>
> I just set up distributed geo-replication to a slave on my 2 nodes'
> replicated volume and noticed quite a few error messages (around 70 of them)
> in the slave's brick log file:
>
> The exact
f92ad"
# grep 1c648409-e98b-4544-a7fa-c2aef87f92ad
/data/myvolume/brick/.glusterfs/changelogs -rn
Binary file /data/myvolume/brick/.glusterfs/changelogs/CHANGELOG.1454278219
matches
Regards
ML
On Monday, February 1, 2016 1:30 PM, Saravanakumar Arumugam
wrote:
Hi,
On 02/01/2016 02:14 PM, ML mail wrote:
Sure, I will just send it to you through an encrypted cloud storage app and
send you the password via private mail.
Regards
ML
On Monday, February 1, 2016 3:14 PM, Saravanakumar Arumugam
wrote:
On 02/01/2016 07:22 PM, ML mail wrote:
> I just found out I needed to run the getfattr o
189/
The following script can be used to find the problematic file in each brick backend.
https://gist.github.com/aravindavk/29f673f13c2f8963447e
regards
Aravinda
On 02/01/2016 08:45 PM, ML mail wrote:
> Sure, I will just send it to you through an encrypted cloud storage app and
> send you the password vi
but I will not use it anymore in production until the patch that fixes this is out.
Thanks again for your help; I am looking forward to the next release including
that patch.
Regards
ML
On Thursday, February 4, 2016 11:14 AM, Saravanakumar Arumugam
wrote:
Hi,
On 02/03/2016 08:09 PM, ML mail wrote:
Hello,
I would like to upgrade my Gluster 3.7.6 installation to Gluster 3.7.8 and have
put together the following procedure. Can anyone check it and let me know if it
is correct or if I am missing anything? Note here that I am using Debian 8 and the
Debian packages from Gluster's APT repository. I
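(For illustration only, a rolling upgrade on Debian usually looks roughly like
the following, run on one node at a time; the package and volume names here are
assumptions, not the exact procedure from this mail:)
# service glusterfs-server stop
# apt-get update && apt-get install --only-upgrade glusterfs-server glusterfs-client glusterfs-common
# service glusterfs-server start
# gluster volume heal myvolume info   (wait for self-heal to finish before upgrading the next node)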
released early due to some issues with 3.7.7, we couldn't
get the following Geo-rep patches in the release as discussed in
previous mails.
http://review.gluster.org/#/c/13316/
http://review.gluster.org/#/c/13189/
Thanks
regards
Aravinda
On 02/12/2016 01:38 AM, ML mail wrote:
> Hell
Hello,
I noticed that the geo-replication of a volume has STATUS "Faulty" and while
looking in the *.gluster.log file in /var/log/glusterfs/geo-replication-slaves/
on my slave I can see the following relevant problem:
[2016-02-15 10:58:40.402516] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
0-myvolum
CREATE f1.part
RENAME f1.part f1
DELETE f1
CREATE f1.part
RENAME f1.part f1
...
...
If not, then it would help if you could send the sequence
of file management operations.
--
Milind
- Original Message -
From: "Kotresh Hiremath Ravishankar"
To: "ML mail"
Cc: "Glust
Hi Milind,
Any news on this issue? I was wondering how I can fix and restart my
geo-replication. Can I simply delete the problematic file(s) on my slave and
restart geo-rep?
Regards
ML
On Wednesday, February 17, 2016 4:30 PM, ML mail wrote:
Hi Milind,
Thank you for your short analysis
ation
going into a Faulty state.
--
Milind
- Original Message -
From: "ML mail"
To: "Milind Changire" , "Gluster-users"
Sent: Monday, February 22, 2016 1:27:14 PM
Subject: Re: [Gluster-users] geo-rep: remote operation failed - No such file or
directory
Original Message -----
From: "ML mail"
To: "Milind Changire"
Cc: "Gluster-users"
Sent: Monday, February 22, 2016 9:10:56 PM
Subject: Re: [Gluster-users] geo-rep: remote operation failed - No such file or
directory
Hi Milind,
Thanks for the suggestion, I did th
path.
You should have geo-replication stopped when you are
setting the virtual xattr and start it when you are
done setting the xattr for the entire directory tree.
--
Milind
- Original Message -
From: "ML mail"
To: "Milind Changire"
Cc: "Gluster-users"
consistent gluster
state.
Unfortunately, you cannot selectively purge the changelogs.
You will have to delete the volume and empty the bricks
and recreate the volume with the empty bricks to start
all over again.
You can delete the volume with:
# gluster volume stop <volname>
# gluster volume delete <volname>
--
Milind
wrote:
We can provide workaround steps to resync from the beginning without
deleting the Volume(s).
I will send the Session reset details by tomorrow.
regards
Aravinda
On 02/24/2016 09:08 PM, ML mail wrote:
> That's right, I already saw a few error messages mentioning "Device or
> resour
ave,
you will have to set the virtual xattr on the entire directory tree
in pre-order listing i.e. set the virtual xattr on the directory
starting at OC_DEFAULT_MODULE and then on the entries inside the
directory, and so on down the directory tree.
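(For illustration, a pre-order walk over the FUSE mount can be done with find,
which visits each directory before its contents. The volume names, mount path,
xattr name and value below are all placeholders, since the actual xattr name was
cut off in this excerpt:)
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
# find /mnt/<mastervol>/OC_DEFAULT_MODULE -exec setfattr -n <virtual-xattr> -v "1" {} \;
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start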
--
Milind
- Original Message -----
From:
related to
this issue for glusterfs-3.7.9
Geo-rep should clean up these xattrs when the session is deleted; we will work
on that fix in future releases.
BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1311926
regards
Aravinda
On 02/24/2016 09:59 PM, ML mail wrote:
> That would be great thank you.
On Friday, February 26, 2016 5:54 AM, Aravinda wrote:
regards
Aravinda
On 02/26/2016 12:30 AM, ML mail wrote:
> Hi Aravinda,
>
> Many thanks for the steps. I have a few questions about it:
>
> - in your point number 3, can I simply do an "rm -rf
> /my/brick/.glusterfs/changelogs"
Hi,
I just upgraded from 3.7.6 to 3.7.8 and noticed the following 2 points:
1) Like others on the mailing list, I am also affected by a massive performance
drop on my FUSE mount. I used to have around 10 MB/s transfer, which is already
quite slow, but now since the upgrade I am at around 2 MB/s as
and after this timeout
respond again.
By the way is there a ChangeLog somewhere for 3.7.8?
Regards
ML
On Sunday, February 28, 2016 5:50 PM, Atin Mukherjee
wrote:
On 02/28/2016 04:48 PM, ML mail wrote:
> Hi,
>
> I just upgraded from 3.7.6 to 3.7.8 and noticed the following 2 point
Hi,
I recently updated GlusterFS from 3.7.6 to 3.7.8 on my two nodes master volume
(one brick per node) and slave node. Today I noticed a new type of warning
which I never saw before.
On the master node's geo-replication log file I see quite a few of these:
[2016-03-01 16:28:08.446464] W [mas
only when the parent directory does not exist
on the Slave or exists with a different GFID.
regards
Aravinda
On 03/01/2016 11:08 PM, ML mail wrote:
> Hi,
>
> I recently updated GlusterFS from 3.7.6 to 3.7.8 on my two nodes master
> volume (one brick per node) and slave node. Today I not
Sorry to jump into this thread but I also noticed the "unable to get index-dir"
warning in my gluster self-healing daemon log file since I upgraded to 3.7.8
and I was wondering what I can do to avoid this warning. I think someone asked
if he could manually create the "indices/dirty" directory bu
And a thank you from me too for this release, I am looking forward to a working
geo-replication...
btw: where can I find the changelog for this release? I always somehow forget
where it is located.
Regards
ML
On Tuesday, March 22, 2016 4:19 AM, Vijay Bellur wrote:
Hi all,
GlusterFS 3.7.9
Hello,
I just upgraded my 2 nodes replica from GlusterFS 3.7.8 to 3.7.10 on Debian 8
and noticed in the brick log file
(/var/log/glusterfs/bricks/myvolume-brick.log) the following warning message
each time I copy a file. For example I just copied one single 110 kBytes file
and got 19 times the
Hi,
I am also observing bad performance with small files on a GlusterFS 3.7.11
cluster. For example if I unpack the latest Linux kernel tar file it takes
roughly 9 minutes whereas on my laptop it takes 30 seconds.
Maybe there are some parameters on the GlusterFS side which could help to fine
tu
Hello,
Should the gluster nodes all be located on the same network or subnet as their
clients in order to get the best performance?
I am currently using Gluster 3.7.11 with a 2-node replica for cloud storage
and mounting on the clients with the native glusterfs protocol (mount -t
glusterfs) a
Hello,
I am running GlusterFS 3.7.11 and was wondering what the procedure is if I want
my volume to listen on an additional IP address on another network (VLAN). Is
this possible and what would be the procedure?
Regards
ML
Hello,
In order to avoid losing performance/latency I would like to have my Gluster
volumes available through one IP address on each of my networks/VLANs, so that
the gluster client and server are available on the same network. My clients
mount the volume using the native gluster protocol.
So my ques
Luciano, how do you enable direct-io-mode?
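(For reference, with the native client this is normally passed as a mount option,
something like the following; the host and paths are made up here:)
# mount -t glusterfs -o direct-io-mode=enable gfs1:/myvolume /mnt/myvolume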
On Wednesday, June 22, 2016 7:09 AM, Luciano Giacchetta
wrote:
Hi,
I have a similar scenario, a car classifieds site with millions of small files,
mounted with the gluster native client in a replica config.
The gluster server has 16gb RAM and 4 cores
Where's the package for Debian?
On Wednesday, June 22, 2016 3:48 PM, "Glomski, Patrick"
wrote:
If you're not opposed to another dependency, there is a glusterfs-nagios
package (python-based) which presents the volumes in a much more useful format
for monitoring.
http://download.gluste
Hi,
On my GlusterFS clients, when I do a lot of copying within the GlusterFS volume
(mounted as native glusterfs), I get quite a lot of these warnings in the kernel
log (Debian 8):
[Sat Jun 25 12:18:58 2016] net_ratelimit: 8000 callbacks suppressed
[Sat Jun 25 14:39:39 2016] net_ratelimit: 11
Hi,
I just set up distributed geo-replication on my two-node replica master
(glusterfs 3.7.11) towards my single-node slave replica and noticed that for
some reason it takes the hostname of my slave node instead of the fully
qualified domain name (FQDN), and this although I have specified the FQD
Hi guys,
Just a short mail to let you know that I have errors when using distributed
geo-replication on symlink files and directories with GlusterFS 3.7.11 and that
I have opened a bug for that:
https://bugzilla.redhat.com/show_bug.cgi?id=1350179
Regards
ML
also upload the geo-replication logs and glusterd logs. We will look into
> it.
>
> Thanks and Regards,
> Kotresh H R
>
> ----- Original Message -
>> From: "ML mail"
>> To: "Gluster-users"
>> Sent: Sunday, June 26, 2016 3:38:19 AM
>
Hi Gandalf
Not really suggesting anything here, just mentioning what I am using: I am using
an HBA adapter with 12 disks, so basically JBOD, but I am using ZFS and have an
array of 12 disks in RAIDZ2 (sort of RAID6 but ZFS-style). I am pretty happy
with that setup so far.
Cheers
ML
On Monday, July
Hi
On my GlusterFS clients using the FUSE mount I get a lot of these messages in
the kernel log:
net_ratelimit: 3 callbacks suppressed
Does anyone have a clue why, and how I can avoid the logs getting clogged with
these messages?
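(For anyone hitting the same thing: the net_ratelimit lines come from the
kernel's printk rate limiting, controlled by the net.core.message_cost and
net.core.message_burst sysctls, for example (values only illustrative):
# sysctl -w net.core.message_burst=50
Note that tuning these only changes how much gets logged or suppressed; it does
not address whatever in the network stack is generating the underlying messages.)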
Regards
ML
Hello,
I am planning to use snapshots on my geo-rep slave and as such wanted first to
ask if the following procedure regarding LVM thin provisioning is correct:
Create physical volume:
pvcreate /dev/xvdb
Create volume group:
vgcreate gfs_vg /dev/xvdb
Create thin pool:
lvcreate -L 4T -T gf
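(For completeness, a sketch of the steps that usually follow before the thin LV
can back a snapshot-capable brick. All names and sizes below are placeholders,
assuming the thin pool above ends up as gfs_vg/gfs_pool:)
Create a thin volume inside the pool:
lvcreate -V 4T -T gfs_vg/gfs_pool -n gfs_lv
Create the filesystem and mount it as the brick:
mkfs.xfs -i size=512 /dev/gfs_vg/gfs_lv
mount /dev/gfs_vg/gfs_lv /data/myvolume-geo/brick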
Hi,
Can someone explain to me what the op-version is that everybody is speaking
about on the mailing list?
Cheers
ML
s not bumped up automatically.
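(For illustration: the current cluster-wide value can be seen as the
"operating-version" line in /var/lib/glusterd/glusterd.info on each node, and it
is bumped with something like the following, where the number is only an example:)
# gluster volume set all cluster.op-version 30712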
HTH,
Atin
On Sunday 7 August 2016, ML mail wrote:
Hi,
Can someone explain to me what the op-version is that everybody is speaking
about on the mailing list?
Cheers
ML
Hi,
I just finished reading the documentation about arbiter
(https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/)
and would like to convert my existing replica 2 volumes to replica 3 volumes.
How do I proceed? Unfortunately, I did not find any documentatio
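(As a rough sketch only: on releases that support adding an arbiter brick to an
existing replica 2 volume, the command would presumably look like the following,
with the host and brick path made up:)
# gluster volume add-brick myvolume replica 3 arbiter 1 gfs3:/data/myvolume/brick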
Hi,
The Upgrading to 3.8 guide is missing from:
http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/
Regards,
ML
Good point Gandalf! I really don't feel adventurous on a production cluster...
On Wednesday, August 10, 2016 2:14 PM, Gandalf Corvotempesta
wrote:
On 10 Aug 2016 at 11:59, "ML mail" wrote:
>
> Hi,
>
> The Upgrading to 3.8 guide is missing from:
>
>
>
Hi,
I just discovered that one of my replicated glusterfs volumes is not being
geo-replicated to my slave node (STATUS Faulty). The log file on the geo-rep
slave node indicates an error with a directory which seems not to be empty.
Below you will find the full log entry for this problem which g
.
Regards,
ML
On Wednesday, September 14, 2016 6:14 AM, Aravinda wrote:
Please share the logs from Master node which is
Faulty(/var/log/glusterfs/geo-replication/__/*.log)
regards
Aravinda
On Wednesday 14 September 2016 01:10 AM, ML mail wrote:
> Hi,
>
> I just discovered that
failures = self.slave.server.entry_ops(entries)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError:
7:fuse_rename_cbk]
0-glusterfs-fuse: 25: /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File
2016.xlsx.ocTransferId1333449197.part ->
/.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx => -1
(Directory not empty)
regards
Aravinda
On Wednesday 14 September 2016 12:49 PM, ML mail wrote:
>
We can check in the brick backend:
ls -ld $BRICK_ROOT/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2
regards
Aravinda
On Thursday 15 September 2016 09:12 PM, ML mail wrote:
> So I ran on my master a "find /mybrick -name 'File 2016.xlsx'" and got the
> following
Hello,
I am testing GlusterFS for the first time and have installed the latest
GlusterFS 3.5 stable version on Debian 7 on brand new SuperMicro hardware with
ZFS instead of hardware RAID. My ZFS pool is a RAIDZ-2 with 6 SATA disks of 2
TB each.
After setting up a first and single test brick on
Hello,
I am currently testing GlusterFS and could not find any guidelines or even
rules of thumb on the minimal hardware requirements for a bare-metal
node.
My setup would be to start with two Gluster nodes using replication for HA. For
that I have two 4U SuperMicro storage servers w
Hi,
I have installed Gluster 3.5.3 on Debian 7 and have one single test volume
right now. Unfortunately after a reboot this volume does not get started
automatically: the glusterfsd process for that volume is missing although
the glusterd process is running.
After a boot running "glus
Hi,
Is it possible to convert a 2-node replicated volume to a 4-node
distributed-replicated volume? If yes, is it as simple as just issuing the
add-brick with the two additional nodes and then starting a rebalance?
And can this be repeated ad infinitum? Let's say I want to add again another 2
a
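(For reference, the usual form is something like the following, with host names
and brick paths invented here:)
# gluster volume add-brick myvolume gfs3:/data/myvolume/brick gfs4:/data/myvolume/brick
# gluster volume rebalance myvolume start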
ler.c:3713:__glusterd_brick_rpc_notify] 0-management:
Disconnected from gfs1a:/data/myvol/brick
[2015-02-05 13:13:06.286042] I [socket.c:2321:socket_event_handler]
0-transport: disconnecting now
On Thursday, February 5, 2015 12:02 PM, Pranith Kumar Karampuri
wrote:
On 02/05/2015 04:08 AM, ML
Yes, I have activated the SA xattr for my ZFS volume that I use for GlusterFS.
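(For reference, that is the per-dataset ZFS property, set at some point with
something like the following; the dataset name is made up here:)
# zfs set xattr=sa tank/gluster-brick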
On Thursday, February 5, 2015 12:22 PM, Vijay Bellur wrote:
On 02/02/2015 08:26 PM, ML mail wrote:
Is ZFS using SA based extended attributes here? Since GlusterFS makes
use of extended attributes for storing
Looking at the startup scripts of rc2.d,
ZFS gets started before glusterfs so the device should be available when
gluster starts.
On Thursday, February 5, 2015 5:03 PM, Pranith Kumar Karampuri
wrote:
On 02/05/2015 06:46 PM, ML mail wrote:
> Dear Pranith
>
> As asked you will find below
Hello,
I read in the Gluster Getting Started leaflet
(https://lists.gnu.org/archive/html/gluster-devel/2014-01/pdf3IS0tQgBE0.pdf)
that the max recommended brick size should be 100 TB.
Once my storage server nodes are filled up with disks they will have in total
192 TB of storage space, does this m
performance gain? For example in terms of MB/s throughput? Also are there maybe
any disadvantages of running two bricks on the same node, especially in my case?
On Saturday, February 7, 2015 10:24 AM, Niels de Vos wrote:
On Fri, Feb 06, 2015 at 05:06:38PM +, ML mail wrote:
> Hello,
>
>
This seems to be a workaround; isn't there another proper way to achieve this
through the configuration of the volume? I would not like to have to set up a
third fake server just in order to avoid that.
On Monday, February 9, 2015 2:27 AM, Kaamesh Kamalaaharan
wrote:
It works! Thanks t
Dear Pranith,
I would be interested to know what the cluster.ensure-durability off option
does; could you explain or point me to the documentation?
Regards
ML
On Thursday, February 12, 2015 8:24 AM, Pranith Kumar Karampuri
wrote:
On 02/12/2015 04:37 AM, Nico Schottelius wrote:
> Hello,
01:17 PM, ML mail wrote:
Dear Pranith
I would be interested to know what the cluster.ensure-durability off option
does, could you explain or point to the documentation?
By default the replication translator does fsyncs on the files at certain times so
that it doesn't lose dat
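(The option itself is toggled like any other volume option, e.g.:)
# gluster volume set myvolume cluster.ensure-durability off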
Hi,
I was wondering if turning on the performance.flush-behind option is dangerous
in terms of data integrity. Reading the documentation it seems to me that I
could benefit from it, especially since I have a lot of small files, but I
would like to stay on the safe side. So if anyone could tell me
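(For reference, enabling it would be a regular volume set, e.g.:)
# gluster volume set myvolume performance.flush-behind on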
For those interested here are the results of my tests using Gluster 3.5.2.
Nothing much better here either...
shell$ dd bs=64k count=4k if=/dev/zero of=test oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 51.9808 s, 5.2 MB/s
shell$ dd bs=64k count=4k if=/dev/zero
Dear Ben,
Very interesting answer from you on how to find out where the bottleneck is.
These commands and parameters (iostat, sar) should maybe be documented on the
Gluster wiki.
I have a question for you: in order to better use my CPU cores (6 cores per
node) I was wondering if I should crea
Just saw that my post below never got a reply and I would be very glad if
someone, maybe Niels, could comment on this. Cheers!
On Saturday, February 7, 2015 10:13 PM, ML mail wrote:
Thank you Niels for your input, that definitely makes me more curious... Now
let me tell you a bit more about
ther you set up 2 or 4 or 8 bricks (for rep 2), afaik.
I think there is no real advice for your setup; ZFS on Linux is not that
common, and to "gluster" it even less...
I hope my English is good enough to let you understand what I mean ;)
Greetings, Frank
On Monday, 23.02.2015, 08
Hello,
Is it required to have the GlusterFS servers in /etc/hosts for the gluster
servers themselves? I read many tutorials where people always add an entry in
their /etc/hosts file.
I am asking because my issue is that my volumes, or more precisely glusterfsd,
are not starting at system boot.
ster nodes MUST resolve each other through DNS (preferred) or
/etc/hosts."
An entry in /etc/hosts is probably even safer because you don't depend on
external DNS resolvers.
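(i.e. an entry for every gluster node on every gluster node, something like the
following; the addresses are made up here:)
10.0.0.1   gfs1.intra.domain.com   gfs1
10.0.0.2   gfs2.intra.domain.com   gfs2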
cheers,
ck
On Tue, Mar 3, 2015 at 8:43 AM, ML mail wrote:
Hello,
>
>Is it required to have the GlusterF
uture?
On Tuesday, March 3, 2015 12:57 PM, Claudio Kuenzler
wrote:
Can you resolve the other gluster peers with "dig"?
Are you able to "ping" the other peers, too?
On Tue, Mar 3, 2015 at 12:38 PM, ML mail wrote:
Well the weird thing is that my DNS resolver serve
ar 3, 2015 at 1:56 PM, ML mail wrote:
Yes, dig and ping work fine. I first used the short hostname gfs1 and then I
also tried gfs1.intra.domain.com. That did not change anything.
Currently for testing I only have a single node setup so my "gluster peer
status" output would be empty.
N
>issues after a reboot. But unfortunately I don't remember if I had to manually
>adapt something like the boot order of the init scripts).
>
>You can try and make your gluster scripts run at the very end of the boot
>process and see if that helps.
>
>
>
>On Tue, Mar
g fine if it was launched manually, did I understand that
right? It's only the automatic startup at boot which causes the lookup failure?
On Tue, Mar 3, 2015 at 2:54 PM, ML mail wrote:
Thanks for the tip but Debian wheezy does not use systemd at all, it's still
the old SysV-style init sc
and should
be fixed by the person responsible for packaging GlusterFS on Debian, who's
that btw?
On Tuesday, March 3, 2015 4:12 PM, ML mail wrote:
Yes, so I added a sleep 5 in the init script right before the startup of the
daemon here:
start-stop-daemon --start --quiet --oknodo --pi
Hello,
I have two gluster nodes in a replicated setup and have connected the two nodes
together directly through a 10 Gbit/s crossover cable. Now I would like to tell
gluster to use this separate private network for any communication between the
two nodes. Does that make sense? Will this bring
Thank you for the detailed explanation. Due to the fact that right now it does
not make much difference to split the traffic I will refrain from doing that
and simply wait for the new style replication. This looks like a very promising
feature and I am looking forward to it. My other concern her
Hello,
I just set up geo-replication from a 2-node master cluster to a 1-node slave
cluster and so far it has worked well. I just have one issue: on my slave, if I
check the files on my brick I just see the following:
drwxr-xr-x 2 root root 15 Mar 5 23:13 .gfid
drw--- 20 root root 21 Mar 5 23:13
.c:5534:init] 0-myslavevol-posix: Posix
access control list is not supported.
So now I have at least activated POSIX ACL but still the files are not there.
Could it be related to the ACL, or does it have nothing to do with that?
On Friday, March 6, 2015 2:22 PM, M S Vishwanath Bhat wrote:
On 6 March 2015 at 14:27,
Hello,
I am setting up geo-replication on Debian wheezy using the official 3.5.3
GlusterFS packages and noticed that when creating the geo-replication session
using the command:
gluster volume geo-replication myvol slavecluster::myvol create push-pem force
the authorized_keys SSH file (/root/.
Thanks Jeff for this blog post, looking forward to NSR and its chain
replication!
On Monday, March 9, 2015 1:00 PM, Jeff Darcy wrote:
> I would be very interested to read your blog post as soon as it's out and I
> guess many others would too. Please do post the link to this list as soon as
> it's on