Hi,
I just installed a new host, which cannot connect to the master domain. The
domain is an iSCSI device on a NAS, and oVirt is otherwise fine with it; other
hosts have no problem. I'm sure I'm just overlooking something.
What I see in the log is:
2024-04-16 11:10:06,320-0400 INFO
One and a half months OK, then last night the same crash again. The crashes
are quite rare, but still magnitudes more frequent than before the upgrade.
Dunno what I did wrong.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to
Hello Albert,
thanks, and sorry for the late response.
On Sept. 26 vdsmd went down on the host again. (I didn't monitor it properly,
so I only realized it was down just now.)
This time I could not find any segvs in the logs.
When did it start?
Until now we used 4.3, and then upgraded first to
Hello,
since the upgrade from oVirt 4.3 to 4.5, I have encountered problems with one
host. vdsmd keeps crashing after a while, and it reports a segmentation
violation as the cause.
As a result, the host becomes unresponsive to ovirt-engine.
Environment:
Rocky Linux 8.6, up to date
oVirt 4.5.2
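For tracking down segfaults like this, checking the system journal and any
recorded core dumps usually helps. A minimal sketch, assuming systemd-coredump
is active on the host (it may not be on a default oVirt node):

```shell
# Look for segfault messages in the journal from the last week
journalctl -k --since "-7 days" | grep -i segfault

# List recorded core dumps for the vdsm process, if systemd-coredump is enabled
coredumpctl list vdsm

# Show details (and a backtrace, if debug symbols are available)
# for the most recent vdsm crash
coredumpctl info vdsm
```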
One
Problem solved. The reason was a failure in engine-setup.
Engine setup claimed the cluster was not in Global Maintenance mode, even
though the cluster was indeed in Global Maintenance. I had to force engine
setup using a hint from
https://www.ovirt.org/media/Hosted-Engine-4.3-deep-dive.pdf, "Major issues",
item 4.
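For reference, the global maintenance state can be checked and set from any
hosted-engine host before running engine-setup. A short sketch using the
standard hosted-engine CLI (not the forced workaround from the PDF):

```shell
# Show the current hosted-engine state, including the global maintenance flag
hosted-engine --vm-status

# Explicitly (re)set global maintenance before running engine-setup
hosted-engine --set-maintenance --mode=global

# ...run engine-setup on the engine VM, then leave maintenance again:
hosted-engine --set-maintenance --mode=none
```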
Good morning.
I have a hosted engine in my VM cluster that fails to start after upgrading
from 4.4.9 to 4.4.10.
Looking at the ovirt-engine boot.log, I can see the following:
06:19:46,860 ERROR [org.jboss.as.controller.management-operation] WFLYCTL0013:
Operation ("add") failed - address:
Thanks for your answer.
Yesterday we removed all content from /rhev/data-center/mnt/blockSD/* on our
nodes (in Local Maintenance mode) and manually updated the nodes with "dnf
update". After a reboot and a night's sleep, all 12 nodes are reporting up to
date status.
That is, we are now on
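The manual per-node procedure described above can be sketched roughly as
follows (run on each node after setting it to Maintenance in the engine UI;
the rm path is the one from our case, so double-check it applies to yours):

```shell
# On the node, after setting it to Maintenance in the engine:
# clear the stale block-storage-domain leftovers described above
rm -rf /rhev/data-center/mnt/blockSD/*

# update the node packages manually instead of via the engine
dnf update -y

# reboot and let the node reconnect to the engine
reboot
```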
Hi, Didi.
The command "find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l"
shows no storage domains, so I guess this would solve the "false positive" list
of storage domains that are not really mounted to the node we want to update.
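To cross-check which storage domains are actually mounted on the node (as
opposed to merely listed), something like this may help; a sketch assuming
the standard vdsm mount tree under /rhev/data-center/mnt:

```shell
# Show everything mounted under the vdsm data-center tree, recursively
findmnt -R /rhev/data-center/mnt

# Or simply grep the mount table for rhev mounts
mount | grep /rhev/data-center
```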
Hi again.
Just did a test. I unmapped the node from the iSCSI SAN and rebooted the node.
After the reboot, the storage domains were still listed. In other words, this
was not a solution.
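For completeness, the iSCSI sessions and stored node records can also be
removed on the host side with iscsiadm; a sketch, where the target IQN and
portal address are placeholders, not values from this thread:

```shell
# List the current iSCSI sessions on the node
iscsiadm -m session

# Log out of a specific target (placeholder target/portal)
iscsiadm -m node -T iqn.2000-01.com.example:target0 -p 192.0.2.10 -u

# Delete the stored node record so it is not re-logged-in at boot
iscsiadm -m node -T iqn.2000-01.com.example:target0 -p 192.0.2.10 -o delete
```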
Hi, Didi.
Thanks for your answer. Unfortunately, the suggested command shows the same
storage domains as I listed above. The node is in Maintenance (as it would be
when installing an update from the GUI). None of these storage domains are
mounted on the node (at least not visible when running
So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we are
attempting to upgrade it to the latest version, 4.4.2, but it fails as shown
below. The problem is that the storage domains listed are all located on an
external iSCSI
SAN. The Storage Domains were created in another cluster we had
Hi all oVirt experts.
So, I managed to f.. up a VM today. I had one VM running on an oVirt 4.3
cluster, where the disk was located on an iSCSI data domain. I stopped the VM,
put the storage domain in maintenance mode, detached and removed the storage
domain from the oVirt 4.3 cluster.
Then I