Hi folks,
I'm having trouble moving an arbiter brick to another server because of I/O load
issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
CCing the glusterd team for information.
On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr wrote:
> That makes for an interesting problem.
>
> I cannot open port 24007 to allow RPC access.
>
> On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
>
> Hi Alvin,
>
> Yes,
That makes for an interesting problem.
I cannot open port 24007 to allow RPC access.
On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
Hi Alvin,
Yes, geo-replication sync happens via SSH. The server port 24007 belongs
to glusterd.
glusterd will be listening on this port and all volume
Hi Alvin,
Yes, geo-replication sync happens via SSH. The server port 24007 belongs to
glusterd.
glusterd will be listening on this port, and all volume management
communication happens via RPC.
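As a quick check (the slave hostname below is a placeholder, not one from this setup),
you can confirm that glusterd is listening on its management port on each node:
# ss -tlnp | grep 24007
The data sync itself only needs SSH access to the slave:
# ssh root@slavehost 'gluster --version'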
Thanks,
Kotresh HR
On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr wrote:
> I am running
After a fresh installation, an IP-based peer probe gives the same error message: Staging of
operation 'Volume Create' failed on localhost : Host 51.15.77.14 is not in
'Peer in Cluster' state
node 1
root@pri:/var/log/glusterfs# cat glusterd.log
[2018-02-07 23:06:34.526018] I [MSGID: 100030]
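A generic way to check what glusterd thinks of the probed peers before retrying the
volume create (a general diagnostic, not a command taken from the thread):
# gluster peer status
Every probed node should report: State: Peer in Cluster (Connected)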
Yes, the last node is on a different subnet.
All servers are on scaleway.com;
node1 and node2 are on the same subnet,
node3 is on another subnet.
But they can all ping each other.
What about UUID generation?
I looked at the code, and if I am reading the right path it uses the hostname to generate the UUID.
Maybe peerinfo->hostname is not the same and
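To compare what each node actually recorded, the identities glusterd stores can be
inspected like this (default paths assumed):
# cat /var/lib/glusterd/glusterd.info
shows the local node's own UUID, and
# grep -r . /var/lib/glusterd/peers/
shows the UUIDs and hostnames recorded for the other peers.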
On 8/02/2018 4:45 AM, Gaurav Yadav wrote:
Looking at the command history, I can see that you have 3 nodes:
first you peer probe 51.15.90.60 and 163.172.151.120 from
51.15.77.14.
At that point you already have a 3-node cluster, yet after all this you go
to node 2 and peer probe again.
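For reference, the probe sequence described above amounts to roughly this, using the
IPs from the thread (a sketch, not the exact command history):
From 51.15.77.14 only:
# gluster peer probe 51.15.90.60
# gluster peer probe 163.172.151.120
# gluster peer status
Probing again from node 2 is not needed once the pool is formed.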
Answers inline
On Wed, Feb 7, 2018 at 8:44 PM, Marcus Pedersén
wrote:
> Thank you for your help!
> Just to make things clear to me (and get a better understanding of
> gluster):
> So, if I make the slave cluster just distributed and node 1 goes down,
> data (say
On 02/07/2018 04:02 PM, tino_ma...@gmx.de wrote:
Kotresh workaround works for me. But before I tried it, I created some
strace-logs for Florian.
Thanks. We have a fairly good understanding of what is going on now, and
we are considering our options for how to fix it properly (without reverting
Thank you for your help!
Just to make things clear to me (and get a better understanding of gluster):
So, if I make the slave cluster just distributed and node 1 goes down,
data (say file.txt) that belongs to node 1 will not be synced.
When node 1 comes back up, does the master not realize that
Hi,
Could you please send me the glusterd.log and cmd-history logs from all
the nodes.
Thanks
Gaurav
On Wed, Feb 7, 2018 at 5:13 PM, Ercan Aydoğan
wrote:
> Hello,
>
> I have 3 dedicated glusterfs 3.11.3 nodes. I can create volumes if I
> peer probe with
I am running gluster 3.8.9 and trying to set up a geo-replicated volume
over ssh.
It looks like the volume create command is trying to access the
server directly over port 24007.
The docs imply that all communication is over ssh.
What am I missing?
--
Alvin Starr ||
Hi,
Kotresh workaround works for me. But before I tried it, I created some
strace-logs for Florian.
Setup: 2 VMs (192.168.222.120 master, 192.168.222.121 slave), both with a
volume named vol, running Ubuntu 16.04.3, glusterfs 3.13.2, rsync 3.1.1.
Best regards,
Tino
root@master:~# cat
OK, steps 1 and 2 worked fine.
For step 3 I had to use "stop force", otherwise staging failed
because S3 didn't know about the replica and refused to stop it.
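For anyone following along, the forced stop would look something like this (the volume
and host names are placeholders, not the ones from this setup):
# gluster volume geo-replication mastervol slavehost::slavevol stop force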
Thank you Kotresh,
Stefano
On 7 February 2018 at 14:42, Kotresh Hiremath Ravishankar
wrote:
> Hi,
>
> When S3 is
Hi,
When S3 is added to the master volume as a new node, the following commands should
be run to generate and distribute the ssh keys:
1. Generate ssh keys from the new node
#gluster system:: execute gsec_create
2. Push those ssh keys of the new node to the slave
#gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem
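As a concrete, hypothetical example with a master volume mastervol geo-replicated
to slavehost::slavevol, the sequence would be:
On the newly added node (S3):
# gluster system:: execute gsec_create
On a master node (force is typically needed because the geo-rep session already exists):
# gluster volume geo-replication mastervol slavehost::slavevol create push-pem force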
Answers inline.
On Tue, Feb 6, 2018 at 6:24 PM, Marcus Pedersén
wrote:
> Hi again,
> I made some more tests and the behavior I get is that if any of
> the slaves are down the geo-replication stops working.
> Is this the way distributed volumes work, if one server goes
We are happy to help you out. Please find the answers inline.
On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén
wrote:
> Hi all,
>
> I am planning my new gluster system and tested things out in
> a bunch of virtual machines.
> I need a bit of help to understand how
Hello,
I have 3 dedicated glusterfs 3.11.3 nodes. I can create volumes if I peer
probe with hostnames.
But IP-based probing is not working.
What are the correct /etc/hosts and hostname values for an IP-based peer probe?
Do I still need the peers' FQDNs in /etc/hosts?
I need advice on how
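If name resolution turns out to matter, a conventional /etc/hosts layout for the
three nodes would be something like this (only the IPs come from the thread; the
short names are made up):
51.15.77.14      node1
51.15.90.60      node2
163.172.151.120  node3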
Hi all,
I had a replica 2 gluster 3.12 volume between S1 and S2 (1 brick per node),
geo-replicated to S5, where both S1 and S2 were visible in the
geo-replication status, with S2 "active" and S1 "passive".
I had to replace S1 with S3, so I did an
"add-brick replica 3 S3"
and then
"remove-brick replica 2