Yes, the firmware update of the network adapters is planned for next
week.
The tcpdump capture is currently running and I will share the results with you.
The update to oVirt 4.4 (and to 4.5) is quite a big deal because of the
switch to CentOS Stream, where a full reinstall is required, and there is
n
Did you try to tcpdump the connections to see who closes the
connection, and how? A normal FIN-ACK, or a timeout? Maybe some network device in between?
(The latter is less likely, since you said you can trigger the
error with high load.)
On Thu, Aug 18, 2022 at 12:38, wrote:
> I just niced all glusterfsd proc
I just niced all glusterfsd processes on all nodes to a value of -10.
The problem just occurred again, so it seems nicing the processes didn't help.
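(For anyone reproducing this: a sketch of the equivalent command, assuming pgrep is available, would be

  renice -n -10 -p $(pgrep glusterfsd)

run on each node; the exact invocation may have differed.)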
On 18.08.2022 at 09:54, Péter Károly JUHÁSZ wrote:
What if you renice the gluster processes to some negative value?
On Thu, Aug 18, 2022 at 09:45, wrote:
Hi folks,
Hi,
I am building RPMs of glusterfs 9.6 & glusterfs 10 but I am getting an error:
rcu-bp.h:170: undefined reference to `urcu_bp_register'
.libs/glusterd_la-glusterd-reset-brick.o: In function `_urcu_bp_read_unlock':
/usr/local/include/urcu/static/urcu-bp.h:186: undefined reference to
`urcu_bp_re
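(The urcu_bp_* symbols come from liburcu-bp in userspace-rcu, so this looks like a link problem rather than a GlusterFS code issue. A sketch of what to check, assuming the headers under /usr/local/include belong to a locally built liburcu and the matching library lives in /usr/local/lib:

  # distro development package on CentOS/Fedora, or a matching local liburcu
  dnf install userspace-rcu-devel
  # make sure the build links the library that matches the headers it found
  ./configure LDFLAGS="-L/usr/local/lib" LIBS="-lurcu-bp -lurcu-cds"
  make

A mismatch between the urcu headers configure finds and the liburcu version actually linked can also produce these undefined references.)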
What if you renice the gluster processes to some negative value?
On Thu, Aug 18, 2022 at 09:45, wrote:
> Hi folks,
>
> I am running multiple GlusterFS servers in multiple datacenters. Every
> datacenter is basically the same setup: 3x storage nodes, 3x KVM
> hypervisors (oVirt) and 2x HPE switches which are a
Hi folks,
I am running multiple GlusterFS servers in multiple datacenters. Every
datacenter is basically the same setup: 3x storage nodes, 3x KVM
hypervisors (oVirt) and 2x HPE switches which act as one logical
unit. The NICs of all servers are attached to both switches with a
bonding