[ovirt-users] Installing oVirt as a self-hosted engine - big big problem :()

2023-07-07 Thread Jorge Visentini
Hi guys, I'm starting the weekend with a real "cucumber" (a nasty problem) on my hands.

I've been racking my brains for about 4 days trying to deploy a new engine.
It turns out I've already tested *4.4.10*, *4.5.4.x*, and *4.5.5 (master)* (el8
and el9), and none of them works.

It seems to me to be an Ansible or Python problem, but I'm not sure.

I've read several oVirt Reddit and GitHub threads, but the workarounds in them
no longer seem to have any effect. I believe the culprit is some package in the
CentOS Stream repositories, *but unfortunately I don't have the repositories
frozen locally here*.

The deployment hangs at *[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait
for the host to be up]*.

I already tried updating *python netaddr*, as I read on GitHub, and it still
didn't work.
I also tried to *freeze the ansible update* on the engine, and it didn't work.
I updated *ovirt-ansible-collection* to
*ovirt-ansible-collection-3.1.3-0.1.master.20230420113738.el8.noarch.rpm*
and it didn't work either...
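For anyone who wants to try the same "freeze" approach, this is roughly the
idea, a minimal sketch assuming the dnf versionlock plugin is available; the
package names are only illustrative, adjust them to what is installed on your
host:

  # Install the versionlock plugin (EL8/EL9 package name)
  dnf install -y python3-dnf-plugin-versionlock

  # Pin the currently installed versions so "dnf update" keeps them
  dnf versionlock add ansible-core ovirt-ansible-collection

  # Check or undo the pins later
  dnf versionlock list
  dnf versionlock delete ansible-core ovirt-ansible-collection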

*The error appears on every oVirt build I try*, and I no longer know what I'm
doing wrong because I can't pinpoint where the failure actually happens.
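If anyone wants to dig along with me, these are the places on the host where
the failure can usually be narrowed down (assuming the default oVirt log
locations; the grep pattern is just an illustration):

  # Follow the current hosted-engine deployment log
  tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log

  # Search the ansible logs for the failing task and any fatal results
  grep -iE 'wait for the host to be up|fatal|failed' \
      /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-*.log

  # Host-side daemons involved in bringing the host up
  tail -f /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log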

I appreciate any tips

*Below are some log outputs:*

[root@ksmmi1r02ovirt36 ~]# tail -f /var/log/vdsm/vdsm.log
2023-07-07 22:23:30,144-0300 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=6df1f5ed-0f41-4001-bb2e-e50fb0214ac7 (api:37)
2023-07-07 22:23:30,144-0300 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:35,146-0300 INFO  (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=bd2a755d-3488-4b43-8ca4-44717dd6b017 (api:31)
2023-07-07 22:23:35,146-0300 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=bd2a755d-3488-4b43-8ca4-44717dd6b017 (api:37)
2023-07-07 22:23:35,146-0300 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:39,320-0300 INFO  (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=68567ce3-b579-469d-a46d-7bafc7b3e6bd (api:31)
2023-07-07 22:23:39,320-0300 INFO  (periodic/3) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=68567ce3-b579-469d-a46d-7bafc7b3e6bd
(api:37)
2023-07-07 22:23:40,151-0300 INFO  (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=fadcf734-9f7e-4681-8764-9d3863718644 (api:31)
2023-07-07 22:23:40,151-0300 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=fadcf734-9f7e-4681-8764-9d3863718644 (api:37)
2023-07-07 22:23:40,151-0300 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:44,183-0300 INFO  (jsonrpc/1) [api.host] START
getAllVmStats() from=::1,49920 (api:31)
2023-07-07 22:23:44,184-0300 INFO  (jsonrpc/1) [api.host] FINISH
getAllVmStats return={'status': {'code': 0, 'message': 'Done'},
'statsList': (suppressed)} from=::1,49920 (api:37)
2023-07-07 22:23:45,157-0300 INFO  (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=504d8028-35be-45a3-b24d-4ec7cbc82f7e (api:31)
2023-07-07 22:23:45,157-0300 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=504d8028-35be-45a3-b24d-4ec7cbc82f7e (api:37)
2023-07-07 22:23:45,157-0300 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)
2023-07-07 22:23:50,162-0300 INFO  (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList() from=internal,
task_id=297ad1df-c855-4fbb-a89f-dfbe7a1b60a2 (api:31)
2023-07-07 22:23:50,162-0300 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=297ad1df-c855-4fbb-a89f-dfbe7a1b60a2 (api:37)
2023-07-07 22:23:50,162-0300 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:723)


[root@ksmmi1r02ovirt36 ~]# journalctl -f
-- Logs begin at Fri 2023-07-07 21:57:13 -03. --
Jul 07 22:24:46 ksmmi1r02ovirt36.kosmo.cloud
ansible-async_wrapper.py[13790]: 13791 still running (86045)
Jul 07 22:24:50 ksmmi1r02ovirt36.kosmo.cloud platform-python[22812]:
ansible-ovirt_host_info Invoked with
pattern=name=ksmmi1r02ovirt36.kosmo.cloud auth={'token':

[ovirt-users] Re: Suggestion to switch to nightly

2023-07-07 Thread Levi Wilbert
Should the nightly oVirt snapshot be installed on the host or on the hosted
engine? I'm confused about exactly where in the process I need to make this
change.


[ovirt-users] Re: GPU Passthrough issues with oVirt 4.5

2023-07-07 Thread Thomas Hoberg
There is little chance you'll get much response here, because it's probably not 
considered an oVirt issue.

It's somewhere between your BIOS, the host kernel and KVM, and I'd start by
breaking it down and passing each GPU through separately.
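As a rough sketch of that first step (assuming NVIDIA cards, PCI vendor ID
10de; adjust for your hardware), check that the IOMMU is on and see which
group each GPU landed in:

  # Confirm the IOMMU is enabled on the host
  dmesg | grep -iE 'dmar|iommu'

  # List the NVIDIA devices with their PCI addresses
  lspci -nn -d 10de:

  # Print the IOMMU group of every PCI device
  for d in /sys/kernel/iommu_groups/*/devices/*; do
      echo "group $(echo "$d" | cut -d/ -f5): $(basename "$d")"
  done | sort -V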

From the PCI ID these seem to be V100 SXM2 variants, which would require a host
that very likely has a capable and compatible BIOS anyway. I've only ever tried
dual PCIe V100s in a single VM, and that works without any issues on Oracle's
RHV 4.4 variant of oVirt.

So you need to check your BIOS and ensure that the host kernel isn't grabbing
any of the GPUs, e.g. via Nouveau; perhaps try running a manual KVM VM to
validate that.
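A minimal sketch of that check (the blacklist file name is just a convention,
and the virt-install line is only an illustration of a throwaway test VM;
replace the PCI address and ISO path with your own):

  # See which kernel driver currently claims each NVIDIA device;
  # for passthrough it must be vfio-pci, not nouveau or nvidia
  lspci -nnk -d 10de:

  # If nouveau has grabbed the cards, blacklist it and rebuild the initramfs
  echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
  dracut -f && reboot

  # Throwaway KVM guest with one GPU passed through, outside of oVirt
  virt-install --name gputest --memory 16384 --vcpus 8 \
      --disk size=20 --os-variant centos-stream8 \
      --cdrom /path/to/installer.iso \
      --hostdev 0000:3b:00.0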

But if you've already solved the problem, it's nice to let people know here.


[ovirt-users] Re: No bootable disk OVA

2023-07-07 Thread Thomas Hoberg
In my experience OVA exports and imports saw very little QA, even within oVirt 
itself, right up to OVA exports full of zeros on the last 4.3 release (in 
preparation for a migration to 4.4).

The OVA format also shows very little practical interoperability; I've tried
and failed in pretty much every direction between VMware, oVirt/RHV,
VirtualBox, XCP-ng and Proxmox.

So transporting the disk images and recreating the machine is certainly the
more promising option, and something I've used myself where the guest OS was
ready to accept that.
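A minimal sketch of that route with qemu-img (file names are placeholders; the
converted image can then be uploaded as a new disk via the Administration
Portal or the imageio tools):

  # Inspect the source image: format, virtual size, allocated size
  qemu-img info source-disk.vmdk

  # Convert to qcow2 for oVirt, keeping the image sparse
  qemu-img convert -p -O qcow2 source-disk.vmdk target-disk.qcow2

  # Sanity-check the result before uploading
  qemu-img info target-disk.qcow2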

Sparse images are another issue, as you have noticed, especially since the
normal upload/import interfaces love to expand them. So if there is any
resizing to be done, it's best to do it once the disk image is inside oVirt,
not before.
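To see whether an image actually stayed sparse after a transfer (the file name
is again a placeholder), compare its virtual size with what is really
allocated:

  # "virtual size" is what the guest sees, "disk size" is what is allocated
  qemu-img info disk.qcow2

  # The apparent file size vs. the blocks it really occupies
  ls -lh disk.qcow2
  du -h disk.qcow2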

Migrating (handcrafted) images has been regarded as so inferior to automated
builds that QA seems to have completely abandoned any effort there, long before
oVirt itself was abandoned.