I have run extensive load tests over the last few days and found that it is
definitely a network-related issue. I changed from jumbo frames (MTU 9000)
to the default MTU of 1500, and with an MTU of 1500 the problem does not
occur. I am able to push the io-wait of our gluster storage servers to the
max
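
For reference, the MTU change and the load test can be reproduced with
standard tools; this is only a rough sketch, and the interface name bond0,
the mount point and all fio parameters are my assumptions, not taken from
the thread:

  # show the current MTU of the bonded interface
  ip link show bond0 | grep mtu

  # temporarily switch back to the default MTU of 1500
  ip link set dev bond0 mtu 1500

  # drive up io-wait with random writes against a test directory
  # on the mounted gluster volume (placeholder path)
  fio --name=loadtest --directory=/mnt/glustervol/loadtest \
      --rw=randwrite --bs=4k --size=1G --numjobs=4 \
      --time_based --runtime=60 --group_reporting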
Hi folks,
my gluster volume isn't fully healing. We had an outage a couple of days
ago and all other files were healed successfully. Now, days later, I can
still see two gfids per node remaining in the heal list.
root@storage-001~# for i in `gluster volume list`; do gluster volume
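
The loop above is cut off in the message; a plausible full form, assuming
the intent was to print the pending heal entries for every volume, would be
something like:

  for i in $(gluster volume list); do
      echo "=== $i ==="
      gluster volume heal "$i" info
  done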
Yes, the firmware update for the network adapters is planned for next week.
The tcpdump is currently running and I will share the results with you.
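
A capture along these lines would be typical here; the interface name and
output path are assumptions, 24007 is GlusterD's standard management port,
and the port range is the usual default range for the brick processes:

  tcpdump -i bond0 -s 0 -w /tmp/gluster-$(hostname).pcap \
      'port 24007 or portrange 49152-49251'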
The update to oVirt 4.4 (and then to 4.5) is quite a big deal because of
the switch to CentOS Stream, where a full reinstall is required, and there
is
I have just reniced all glusterfsd processes on all nodes to a value of -10.
The problem just occurred again, so it seems renicing the processes didn't help.
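
For the record, the renicing can be done in one line per node; the value
-10 is from the message above, everything else is an assumed sketch:

  for pid in $(pidof glusterfsd); do renice -n -10 -p "$pid"; done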
On 18.08.2022 at 09:54, Péter Károly JUHÁSZ wrote:
What if you renice the gluster processes to some negative value?
On Thursday, 18 August 2022 at 09:45, wrote:
Hi folks,
I am running multiple GlusterFS servers in multiple datacenters. Every
datacenter has basically the same setup: 3x storage nodes, 3x KVM
hypervisors (oVirt), and 2x HPE switches acting as one logical unit. The
NICs of all servers are attached to both switches with a bonding
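
The message breaks off here. For illustration only, a bond of this kind is
often configured with nmcli roughly as follows; interface names, connection
names and the LACP (802.3ad) bond mode are assumptions, not taken from the
thread:

  nmcli con add type bond ifname bond0 con-name bond0 \
      bond.options "mode=802.3ad,miimon=100"
  nmcli con add type ethernet ifname eno1 con-name bond0-port1 master bond0
  nmcli con add type ethernet ifname eno2 con-name bond0-port2 master bond0
  nmcli con up bond0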