New event:
Mar 28 14:37:32 ovirt-node3.ovirt vdsm[4288]: WARN executor state: count=5 workers={..., ..., ..., ..., ... at 0x7fcdc0010898> timeout=7.5, duration=7.50 at 0x7fcdc0010208> discarded task#=189 at 0x7fcdc0010390>}
Mar 28 14:37:32 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:32 829 [7438]: s4
It's difficult to answer, as the engine normally "freezes" or is taken down during these events... I will try to capture the logs next time.
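Since the engine tends to freeze exactly when these events happen, one option is to capture the journal context on the nodes themselves. Below is a minimal sketch (not from this thread) that follows the vdsmd and sanlock units with journalctl and dumps a context window whenever an executor warning or a delta_renew timeout shows up; the match patterns and output path are only illustrative assumptions.

#!/usr/bin/env python3
"""Hypothetical helper: follow the journal on a node and dump the recent
context to a file whenever a vdsm executor warning or a sanlock renewal
timeout appears, so the timeframe is captured even if the engine freezes."""
import collections
import datetime
import re
import subprocess

PATTERNS = re.compile(r"executor state|delta_renew|discarded task")
CONTEXT = collections.deque(maxlen=200)   # keep the last 200 journal lines

# Follow the vdsmd and sanlock units; -f, -o and -u are standard journalctl options.
proc = subprocess.Popen(
    ["journalctl", "-f", "-o", "short-iso", "-u", "vdsmd", "-u", "sanlock"],
    stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    CONTEXT.append(line)
    if PATTERNS.search(line):
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        with open(f"/var/tmp/storage-event-{stamp}.log", "w") as out:
            out.writelines(CONTEXT)   # dump the surrounding context window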
On Tue, Mar 28, 2023 at 3:30 PM Diego Ercolani wrote:
> No, it now seems "stable"; awaiting the next event
>
I mean the logs from around the time the problems arise... If the engine has not shut down, it will contain the logs generated during the problematic timeframe...
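If the engine stays up, its own log normally lives at /var/log/ovirt-engine/engine.log; a quick way to pull only the problematic timeframe is something like the sketch below. The window boundaries are placeholders, and it assumes the usual "YYYY-MM-DD HH:MM:SS,mmm" prefix on each line.

#!/usr/bin/env python3
"""Hypothetical filter: print the engine.log lines that fall inside a given
time window, assuming the default /var/log/ovirt-engine/engine.log location
and its usual 'YYYY-MM-DD HH:MM:SS,mmm' line prefix."""
from datetime import datetime

LOG = "/var/log/ovirt-engine/engine.log"
START = datetime(2023, 3, 28, 14, 30)   # placeholder window around the event
END = datetime(2023, 3, 28, 14, 45)

with open(LOG, errors="replace") as src:
    for line in src:
        try:
            stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue        # continuation lines (stack traces) carry no timestamp
        if START <= stamp <= END:
            print(line, end="")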
No, it now seems "stable"; awaiting the next event
On Tue, Mar 28, 2023 at 12:34 PM Diego Ercolani wrote:
> I record entries like this in the journal of every node:
> Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58
> 1191247 [4105511]: s9 delta_renew read timeout 10 sec offset 0
>
I record entries like this in the journal of every node:
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 [4105511]: s9 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids
Mar 28
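The delta_renew messages above mean that reads of the sanlock lease area on the gluster domain took longer than sanlock's default 10-second I/O timeout. A rough way to see whether the storage ever stalls that long is to time reads of the same ids file, as in this sketch; the 1 MiB read size and 20-second interval are only illustrative, and it does not use O_DIRECT the way sanlock does, so treat it as an approximation.

#!/usr/bin/env python3
"""Rough latency probe: periodically time a read of the sanlock 'ids' lease
file on the gluster domain and flag reads that exceed sanlock's default
10-second io timeout. Read size and interval are illustrative only."""
import os
import time

IDS = ("/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/"
       "4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids")

while True:
    start = time.monotonic()
    fd = os.open(IDS, os.O_RDONLY)      # plain buffered read, not O_DIRECT,
    try:                                # so this only approximates sanlock's I/O
        os.read(fd, 1024 * 1024)
    finally:
        os.close(fd)
    elapsed = time.monotonic() - start
    flag = "  <-- over the 10s sanlock timeout" if elapsed > 10 else ""
    print(f"{time.strftime('%H:%M:%S')} read latency: {elapsed:.2f}s{flag}")
    time.sleep(20)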
The scheduling policy was "Suspend Workload if needed" and parallel migration was disabled.
The problem is that the Engine (hosted on an external NFS domain served by a Linux box with no other VMs mapped to it) simply disappears. I have a single 10Gbps Intel ethernet link that I use to distribute
On Tue, Mar 28, 2023 at 11:50 AM Diego Ercolani wrote:
> Hello,
> in my installation I have to use poor storage... the oVirt installation
> doesn't handle such a case and begins to "balance" and move VMs around...
> taking too many snapshots and stressing the already poor performance, making a mess of the whole cluster
>