Hi oVirt land
Hope you are well. Running into this issue, I hope you can help.
CentOS 7, fully updated.
oVirt 4.3, latest packages.
My network config:
[root@mob-r1-d-ovirt-aa-1-01 ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback
Hi,
I have learned to install a self-hosted engine directly on the physical
interfaces.
Later you can move it, together with the hosted engine, into the different
bonds or VLANs.
It has worked fine for me around 20 times.
Best regards,
Marcel
On 3 February 2021 at 19:56:47 CET, Nardus Geldenhuys wrote:
>Hi oVirt land
In such a case, the disks shouldn't remain locked - sounds like a bug.
This one requires a deeper look.
If you're able to reproduce it again, please open a bug in Bugzilla (
https://bugzilla.redhat.com) with engine and vdsm logs,
so we'll be able to investigate it.
*Regards,*
*Shani Leviim*
On Wed, Feb 3, 2021 at 11:12 AM Roderick Mooi wrote:
>
> Hello and thanks for assisting!
>
> I think I may have found the problem :)
>
> /etc/ovirt-hosted-engine/hosted-engine.conf
>
> is blank.
>
> But I do have hosted-engine.conf~
Any idea how this happened?
Perhaps this is a backup done by
Since yesterday I found a couple VMs with locked disk. I don't know the
reason, I suspect some interaction made by our backup system (vprotect,
snapshot based), despite it's working for more than a year.
I'd give the unlock_entity.sh script a chance, but it reports:
CAUTION, this operation may
Hello and thanks for assisting!
I think I may have found the problem :)
/etc/ovirt-hosted-engine/hosted-engine.conf
is blank.
But I do have hosted-engine.conf~
Can I cp this to restore the original?
Anything else I need to do?
Appreciated
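If the `~` file turns out to be a good copy, restoring it could look like the sketch below. This is a hedged illustration, demonstrated against a temporary directory so it can be run safely anywhere; on a real host the directory would be /etc/ovirt-hosted-engine, and you should inspect the backup's contents before copying it over.

```shell
# Sketch: restore hosted-engine.conf from the '~' backup left by an editor.
# Demonstrated against a temp dir; adjust paths for a real host.
set -eu
conf_dir=$(mktemp -d)

# Simulate the situation from the thread: an empty conf plus a '~' backup.
: > "$conf_dir/hosted-engine.conf"
printf 'host_id=1\nvm_disk_id=abc\n' > "$conf_dir/hosted-engine.conf~"

# Keep the empty file around just in case, then restore the backup.
# 'cp -a' preserves ownership, mode and timestamps.
cp -a "$conf_dir/hosted-engine.conf" "$conf_dir/hosted-engine.conf.empty.bak"
cp -a "$conf_dir/hosted-engine.conf~" "$conf_dir/hosted-engine.conf"

grep '^host_id=' "$conf_dir/hosted-engine.conf"
```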
On 2021/02/02 11:37, Strahil Nikolov wrote:
Hi all,
I have Node NG 4.4.4 installed and want to know the best way to persist
custom vdsm hooks and some firmware binaries across updates.
I tried updating to 4.4.5-pre and lost my hooks and firmware.
Thanks,
Shantur
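One workaround (an assumption on my part, not an official Node NG mechanism) is to keep the hook scripts in a directory that survives image updates and copy them back after each upgrade; vdsm hooks normally live under /usr/libexec/vdsm/hooks/<event>/. The sketch below uses temp dirs so it runs anywhere; the real persisted path is a placeholder you would choose yourself.

```shell
# Sketch (assumption, not an official mechanism): re-install custom vdsm
# hooks from a persisted copy after a Node NG image update.
set -eu
# Real paths would be something like:
#   persist=/var/local/vdsm-hooks   (a directory that survives updates)
#   hooks=/usr/libexec/vdsm/hooks   (the standard vdsm hook tree)
# Demonstrated with temp dirs so the sketch is runnable anywhere:
persist=$(mktemp -d)
hooks=$(mktemp -d)

mkdir -p "$persist/before_vm_start"
printf '#!/bin/sh\nexit 0\n' > "$persist/before_vm_start/50_custom"
chmod +x "$persist/before_vm_start/50_custom"

# The restore step you would re-run after every update:
cp -a "$persist/." "$hooks/"
ls -l "$hooks/before_vm_start/50_custom"
```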
Hi,
I have a node 4.4.3 install and it works perfectly for PCI passthrough.
After I upgraded to Node NG 4.4.4, PCI passthrough leads to server
resets. There are no kernel log messages just before the reset apart
from vfio-pci enabling the device.
So, I assume there is some change related
Hi Giulio,
Before running unlock_entity.sh, let's try to find if there's any task in
progress.
Is there any hint on the events in the UI?
Or try to run [1]:
./taskcleaner.sh -o
Also, you can verify what entities are locked [2]:
./unlock_entity.sh -q -t all -c
[1]
Hi Shani,
no tasks are listed in the UI, and "taskcleaner.sh -o" now reports no
tasks (just before, I ran "taskcleaner.sh -r").
But the disks are still locked, and "unlock_entity.sh -q -t all -c"
(accordingly) reports only the two disks' UUIDs (with their VMs' UUIDs).
Time to give unlock_entity.sh a chance?
Thank you.
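For reference, the invocations discussed above could be assembled like this. The dbutils path is the usual location on an engine host, but the per-disk unlock syntax (a positional UUID after -t disk) is an assumption; check the script's -h output on your engine before running anything. The sketch only builds and prints the command strings, it does not execute them.

```shell
# Sketch (hedged): on the engine host the db utility scripts usually live
# under /usr/share/ovirt-engine/setup/dbutils. The -q/-t/-c flags are the
# ones quoted in this thread; the per-disk invocation and its positional
# UUID argument are an assumption - check ./unlock_entity.sh -h first.
set -eu
dbutils=/usr/share/ovirt-engine/setup/dbutils
disk_uuid=00000000-0000-0000-0000-000000000000   # placeholder UUID

# 1) query mode: list locked entities without changing anything
query_cmd="$dbutils/unlock_entity.sh -q -t all -c"

# 2) unlock one disk by UUID (assumed syntax)
unlock_cmd="$dbutils/unlock_entity.sh -t disk $disk_uuid"

echo "$query_cmd"
echo "$unlock_cmd"
```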
I got so buried in the mechanics that I lost sight of the purpose of the
tagging. The tagged network should not be able to ping the untagged - that
was the whole purpose of the exercise.
The real problem is that the untagged network is unable to see its gateway
to the internet, which
Hi,
Any idea how this happened?
Somehow related to the power being "pulled" at the wrong time?
Perhaps this is a backup done by emacs?
Not sure what did it, but I'm glad it did ;)
Please compare it to your other hosts. It should be (mostly?)
identical, but make sure that host_id= is
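Comparing the file across hosts could be sketched as below, filtering out host_id= since that value is per-host and expected to differ. The sample files stand in for the real configs so the sketch is runnable; on real hosts you would fetch each host's copy first (e.g. via ssh).

```shell
# Sketch: compare hosted-engine.conf between hosts, ignoring host_id=
# (which is per-host and expected to differ). Shown with two local sample
# files; on real hosts you would fetch each file first, e.g.
#   ssh host1 cat /etc/ovirt-hosted-engine/hosted-engine.conf > a.conf
set -eu
a=$(mktemp); b=$(mktemp)
printf 'host_id=1\nvm_disk_id=abc\n' > "$a"
printf 'host_id=2\nvm_disk_id=abc\n' > "$b"

grep -v '^host_id=' "$a" > "$a.filtered"
grep -v '^host_id=' "$b" > "$b.filtered"

# diff exits 0 when the remaining settings are identical:
diff "$a.filtered" "$b.filtered" && echo "configs match apart from host_id"
```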
On Wed, Feb 3, 2021 at 4:21 PM Roderick Mooi wrote:
>
> Hi,
>
> > Any idea how this happened?
>
> Somehow related to the power being "pulled" at the wrong time?
>
> > Perhaps this is a backup done by emacs?
>
> Not sure what did it, but I'm glad it did ;)
>
> > Please compare it to your other
Thanks,
I didn't check, but am pretty certain that it's not related to the
engine db. Do you see such duplicates there as well (using the web ui
or sql against it)? If so, fix these first. If no other means, put the
host to maintenance and reinstall with the correct name.
Not seeing
On Wed, Feb 3, 2021 at 4:52 PM Roderick Mooi wrote:
>
> Thanks,
>
> > I didn't check, but am pretty certain that it's not related to the
> > engine db. Do you see such duplicates there as well (using the web ui
> > or sql against it)? If so, fix these first. If no other means, put the
> > host to
I have read through many posts and I think this process seems fairly simple.
https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
But I just wanted to see if anyone had any gotchas. I am thinking of
either using
- RHEL8 (using the developer program, probably the best bet atm)
A bit more on this.
I figured out that 4.4.3 is based on CentOS 8.2 and 4.4.4 on CentOS 8.3.
It looks like the newer kernel has the
https://bugzilla.kernel.org/show_bug.cgi?id=207489 issue.
Not sure if it's updated in the latest 4.4.5-Pre release.
Regards
Shantur
On Wed, Feb 3, 2021 at
Hi,
I tried unlock_entity.sh, and it solved the issue. So far so good.
But it's still unclear why disks were locked.
Let me make a hypothesis: in oVirt 4.3, a failure in snapshot removal
would leave a snapshot in illegal status. No problem: you can remove it
again and the situation is fixed.
In