By any chance do you have any redundant peer entries in the
/var/lib/glusterd/peers directory? Could you please share the contents of this
directory from all the nodes?
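A quick way to gather that information (a minimal sketch; on a healthy N-node cluster each node should hold exactly N-1 peer files, one per *other* peer, named by that peer's UUID):

```shell
# Run on each node in turn.  Lists the peer files and dumps their
# contents; each file normally carries uuid=, state= and hostname1= lines.
ls -l /var/lib/glusterd/peers/
for f in /var/lib/glusterd/peers/*; do
    echo "== $f =="
    cat "$f"
done
```

A node appearing twice (two files with different UUIDs but the same hostname) would be the kind of redundant entry being asked about.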
On Tue, Jul 4, 2017 at 11:55 PM, Victor Nomura wrote:
> Specifically, I must stop glusterfs-server service on the
An arbiter brick is what you need.
Sent from my Windows 10 phone
From: Ernie Dunbar
Sent: Wednesday, 5 July 2017 4:28 AM
To: Gluster-users
Subject: [Gluster-users] I need a sanity check.
Hi everyone!
I need a sanity check on our Server Quorum Ratio settings to
ensure the maximum uptime for our virtual machines. I'd like to
modify them slightly, but I'm not really interested in
experimenting with live servers to see if what I'm doing is going
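For reference, the relevant settings can be inspected and changed with the gluster CLI (a sketch; the volume name "gv0" is a placeholder, and exact syntax can vary by Gluster version):

```shell
# Inspect the current server-side quorum settings.
gluster volume get gv0 cluster.server-quorum-type
gluster volume get all cluster.server-quorum-ratio

# Enable server-side quorum on the volume and set the
# cluster-wide ratio (percentage of peers that must be up).
gluster volume set gv0 cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%
```

Note that cluster.server-quorum-ratio is a cluster-wide option (set with "all"), so changing it affects every volume with server quorum enabled.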
Specifically, I must stop glusterfs-server service on the other nodes in order
to perform any gluster commands on any node.
From: Victor Nomura [mailto:vic...@mezine.com]
Sent: July-04-17 9:41 AM
To: 'Atin Mukherjee'
Cc: 'gluster-users'
Subject: RE: [Gluster-users] Gluster failure due to
The nodes have all been rebooted numerous times with no difference in outcome.
The nodes are all connected to the same switch, and I also replaced it to see
if that made any difference.
There are no network connectivity issues and no firewall in place between the
nodes.
I can’t do
Thanks. I think reusing the same volume was the cause of the lack of IO
distribution.
The latest profile output looks much more realistic and in line with what I
would expect.
Let me analyse the numbers a bit and get back.
-Krutika
On Tue, Jul 4, 2017 at 12:55 PM, wrote:
> Hi
On 07/03/2017 09:01 PM, Pat Haley wrote:
Hi Soumya,
When I originally did the tests I ran tcpdump on the client.
I have rerun the tests, this time running tcpdump on the server:
tcpdump -i any -nnSs 0 host 172.16.1.121 -w /root/capture_nfsfail.pcap
The results are in the same place
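For anyone replaying the capture, it can be filtered back down to the NFS traffic like this (a sketch; 2049 is the standard NFS port, and the .pcap path is the one from the command above):

```shell
# Read the saved capture and show only NFS (port 2049) packets,
# without resolving names or ports.
tcpdump -nn -r /root/capture_nfsfail.pcap port 2049 | head -50
```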
Hi Krutika,
Thank you so much for your reply. Let me answer each point:
1. I have no idea why it did not get distributed over all bricks.
2. Hm.. This is really weird.
And the others:
No. I use only one volume. When I tested sharded and striped volumes, I
manually stopped the volume,
Hi Gencer,
I just checked the volume-profile attachments.
Things that seem really odd to me as far as the sharded volume is concerned:
1. Only the replica pair having bricks 5 and 6 on both nodes 09 and 10
seems to have witnessed all the IO. No other bricks witnessed any write
operations. This
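The per-brick write counts discussed here come from the volume profiler; to reproduce the same kind of output (a sketch; "testvol" is a placeholder volume name):

```shell
# Start collecting per-brick latency and fop statistics.
gluster volume profile testvol start

# ...run the write workload under test...

# Dump the accumulated stats; each brick section shows its WRITE fop
# count, which is what reveals whether IO was spread across bricks.
gluster volume profile testvol info

# Stop profiling when done to avoid the bookkeeping overhead.
gluster volume profile testvol stop
```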