I am adding nodes/bricks to both my slave and primary nodes under Gluster
3.8.
The slave rebalance is taking a very long time; it is now running on the 5th day.
I have paused my geo-replication while doing my rebalance. Can I re-enable
it while the rebalance is still going on for the slave volumes?
If
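For reference, a minimal sketch of the commands involved; the volume and
host names below are placeholders, not taken from this thread:

# Watch rebalance progress on the slave volume
gluster volume rebalance slavevol status

# Resume the paused geo-rep session once you decide to re-enable it
gluster volume geo-replication mastervol slavehost::slavevol resume

Resume continues the existing paused session; it does not recreate it from
scratch.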
Hi Axel,
No, geo-replication can't be used without SSH; it's not configurable.
The geo-rep master nodes connect to the slave and transfer data over SSH.
I assume you created the geo-rep session before starting it.
In the command above, the syntax is incorrect. It should use "::" and not
":/"
gluster
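To illustrate the corrected syntax (the volume and host names are
placeholders):

# Correct - "::" separates the slave host from the slave volume name
gluster volume geo-replication mastervol slavehost::slavevol create push-pem

# Incorrect - "slavehost:/slavevol" is read as a path rather than a volume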
We'd love to see more Gluster (and Gluster-adjacent) talks submitted
to mountpoint; our CFP closes at midnight Pacific time tomorrow,
June 15th.
So you actually have a day-ish to think about what you'd like to submit.
https://mountpoint.io/
On Fri, Jun 8, 2018 at 10:09 AM, Amye Scavarda
Hello,
First of all, thank you for the information.
As a first try, I want to set up geo-replication in a test system with virtual
machines.
So for the test I have a distributed Gluster with 3 nodes:
Status of volume: gpool
Gluster process TCP Port RDMA Port Online Pid
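As a rough sketch of the session setup that usually follows, assuming the
master volume gpool above and a slave volume slavevol on a host slavenode
(those two names are mine, not from the thread):

# One-time: collect and distribute SSH keys for geo-rep
gluster system:: execute gsec_create

# Create, start, and check the geo-rep session
gluster volume geo-replication gpool slavenode::slavevol create push-pem
gluster volume geo-replication gpool slavenode::slavevol start
gluster volume geo-replication gpool slavenode::slavevol status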
Our gluster keeps getting into a state where it becomes painfully slow and
many of our applications time out on read/write calls. When this happens, a
simple ls at the top-level directory from the mount takes somewhere between
8-25s (normally it is very fast, at most 1-2s). The top-level directory
only
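One way to get data on where the time goes is the built-in profiler,
something like the following (the volume name is a placeholder):

# Enable per-brick latency/FOP statistics
gluster volume profile myvol start
# ... reproduce the slow ls, then inspect the numbers
gluster volume profile myvol info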
Hi
We were trying to address some slowness issues we continue to have with
gluster
(http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html)
by using a suggestion from this list (disperse.eager-lock off). We were
unable to do this because we discovered that the clients were
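For completeness, the suggested change itself is a single volume option,
e.g. (volume name assumed):

gluster volume set myvol disperse.eager-lock off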
Thanks,
I have been able to remove the old ID entry
5953d666-fccd-48e2-aeb9-5308a53ad5a0 successfully.
On Thu, Jun 14, 2018 at 4:07 PM, Aravinda wrote:
> Hi,
>
> Thanks for the feedback, I will look into the issue.
>
> In Gluster 4.1, glustercli allows detaching a peer using the peer ID as
>
Hello Axel,
A warm welcome to you, and happy to see you board the Gluster ship.
Geo-replication requires two clusters: a master cluster, which is the
source cluster, and a second cluster, called the slave cluster, which is the
destination cluster.
Just as you have created a primary/master cluster,
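A minimal sketch of what creating the slave cluster looks like, assuming
three slave hosts named slave1-3 and a brick path /bricks/b1 (all names are
placeholders):

# On slave1: form the slave trusted pool
gluster peer probe slave2
gluster peer probe slave3

# Create and start the slave (destination) volume
gluster volume create slavevol slave1:/bricks/b1 slave2:/bricks/b1 slave3:/bricks/b1
gluster volume start slavevol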
Hi,
Thanks for the feedback, I will look into the issue.
In Gluster 4.1, glustercli allows detaching a peer using the peer ID as
well. As a workaround, can you try using the REST API as below?
curl -i -XDELETE http://localhost:24007/v1/peers/
For example, curl -i -XDELETE
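For completeness, here is that example filled in with the peer ID mentioned
earlier in this thread (substitute the ID of the peer you actually want to
detach):

curl -i -XDELETE http://localhost:24007/v1/peers/5953d666-fccd-48e2-aeb9-5308a53ad5a0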
Hello (I don't know the right name, sorry),
Thanks for the fast reply!
Now I understand it: so I build a Gluster system on the other side with
enough capacity, then I arrange geo-replication between these 2 Gluster
groups.
Another question regarding this issue:
In the documentation I always see SSH
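Since geo-rep rides on SSH, the usual prerequisite is a passwordless root
SSH login from one master node to the slave host before running create
push-pem; roughly (the host name is a placeholder):

ssh-keygen
ssh-copy-id root@slavehost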
Hi Axel,
You don't need a single server with 140 TB capacity for replication. The
slave (backup) is also a Gluster volume, similar to the master volume.
So create the slave (backup) Gluster volume with 4 or more nodes to meet
the capacity of the master, and set up geo-rep between these two volumes.
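For example, a plain distributed slave volume whose bricks add up to the
master's 140 TB might look like this, assuming 4 nodes with 35 TB bricks
each (host names, brick paths, and sizes are placeholders):

gluster volume create backupvol \
    bk1:/bricks/b1 bk2:/bricks/b1 bk3:/bricks/b1 bk4:/bricks/b1
gluster volume start backupvol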
Hello
I'm new to GlusterFS, so a warm hello to everyone here...
I have been testing GlusterFS for some weeks in different configurations for
a big media storage.
Currently, for a start, we plan a distributed/replicated Gluster with four
nodes (4x70TB).
I tried this within a test area on different virtual
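For the 4-node distributed-replicated layout described above, the volume
creation would look roughly like this (host names and brick paths are
placeholders):

# 2x2: two replica pairs, distributed
gluster volume create mediavol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1

With replica 2, the 4x70 TB of raw space yields about 140 TB usable.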
Hi Nithya
It seems that the problem can be solved by either turning parallel-readdir
off or downgrading the client to 3.10.12-1. Yesterday I downgraded some
clients to 3.10.12-1 and it seems to have fixed the problem. Today, when I
saw your email, I turned parallel-readdir off and the current client
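The option change referred to here is a single volume setting, e.g. (volume
name is a placeholder):

gluster volume set myvol performance.parallel-readdir off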