By the way, try to capture the traffic on the systems and compare whether
only specific packets are not delivered to the destination.
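For example, something along these lines on both ends (interface name,
peer addresses and file names below are just placeholders):

  # on node A, capture the traffic to/from its peer:
  tcpdump -i eth0 -s 0 -w /tmp/nodeA.pcap host 10.0.0.12
  # on node B, run the mirror capture at the same time:
  tcpdump -i eth0 -s 0 -w /tmp/nodeB.pcap host 10.0.0.11
  # afterwards look for retransmissions (needs wireshark/tshark):
  tshark -r /tmp/nodeA.pcap -Y tcp.analysis.retransmission

Packets that show up in the sender's capture but never in the receiver's
point at a drop somewhere on the path.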
Overall, jumbo frames won't give you a double-digit (percentage)
improvement, so in your case I would switch to an MTU of 1500.
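E.g. (interface name is a placeholder; make it persistent afterwards via
your distro's network configuration):

  # temporary change on the storage interface:
  ip link set dev eth0 mtu 1500
  # verify:
  ip link show dev eth0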
Best Regards,
Strahil Nikolov
I already updated the firmware of th
We are currently shooting in the dark... If possible, update the firmware
of the NICs and the firmware of the switch.
Have you checked whether other systems (on the same switch) have issues
with jumbo frames?
Best Regards,
Strahil Nikolov
Yes, I did test the ping with a jumbo frame MTU and it worked w
Usually that kind of problem can occur in many places.
When you set the MTU to 9000, did you test with ping and the "Do not
fragment" flag?
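Something like this, assuming <storage-node> is one of your peers
(9000 bytes MTU minus 28 bytes of IP + ICMP headers = 8972 bytes payload):

  ping -M do -s 8972 -c 3 <storage-node>
  # -M do sets the DF bit; if any hop only handles 1500 you typically get
  # "message too long" / "Frag needed and DF set" instead of a normal reply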
If there is a device on the path that is not configured for (or doesn't
support) MTU 9000, it will fragment all packets and that could lead to
excessive devic
On 2022-09-16 18:41, dpglus...@posteo.de wrote:
I have made extensive load tests in the last few days and figured out
that it's definitely a network-related issue. I changed from jumbo frames
(MTU 9000) to the default MTU of 1500. With an MTU of 1500 the problem
doesn't occur. I'm able to bump the io-wait of our gluster storage
servers to the max poss
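One way to see where along the path the MTU drops would be something like
this (the peer address is a placeholder):

  tracepath -n 10.0.0.12
  # with jumbo frames working end-to-end the reported pmtu should be 9000;
  # a pmtu of 1500 points at a hop (switch port, bond, bridge) still on the
  # default MTU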
Yes, the firmware update of the network adapters is planned for next
week.
The tcpdump is currently running and I will share the result with you.
The update to oVirt 4.4 (and to 4.5) is quite a big deal because of the
switch to CentOS Stream, where a full reinstall is required and there is
n
Did you try to tcpdump the connections to see who closes the connection
and how? A normal FIN-ACK, or a timeout? Maybe some network device in
between? (The latter is less likely, since you said you can trigger the
error with high load.)
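For example, something like this on one of the nodes (interface and port
range are placeholders; adjust to your brick ports):

  tcpdump -i eth0 -nn 'tcp[tcpflags] & (tcp-fin|tcp-rst) != 0 and portrange 49152-49251'
  # a FIN from one side is a normal close; an RST, or silence followed by
  # retransmissions, points at a timeout or a device dropping the session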
On Thu, 18 Aug 2022 at 12:38, wrote:
I just niced all glusterfsd processes on all nodes to a value of -10.
The problem just occurred, so it seems nicing the processes didn't help.
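Roughly the equivalent of the following on each node (a sketch; the brick
processes are simply matched by name):

  # renice all running brick processes to -10 (run as root):
  for pid in $(pgrep glusterfsd); do renice -n -10 -p "$pid"; done
  # verify:
  ps -C glusterfsd -o pid,ni,comm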
On 18.08.2022 09:54, Péter Károly JUHÁSZ wrote:
What if you renice the gluster processes to some negative value?
On Thu, 18 Aug 2022 at 09:45, wrote:
> Hi folks,
>
> I am running multiple GlusterFS servers in multiple datacenters. Every
> datacenter is basically the same setup: 3x storage nodes, 3x KVM
> hypervisors (oVirt) and 2x HPE switches which are a