[ovirt-users] Re: Update Issue

2021-11-21 Thread Gilboa Davara
On Fri, Nov 19, 2021 at 10:26 PM Darrell Budic 
wrote:

> Ah, sounds like the issue I was having with a new install/upgrade as well (
> https://bugzilla.redhat.com/show_bug.cgi?id=2023919)
>
> It’s definitely affecting CentOS Stream users and pretty much any new install
> at the moment.
>

I can confirm that downgrading the qemu packages to 6.0 solves the problem.
I managed to deploy the ME on Gluster and migrate VMs between the hosts.

- Gilboa



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GDI5QN3ITUL2K3OTAHRXITG52G5KTVRV/


[ovirt-users] Re: Update Issue

2021-11-19 Thread Darrell Budic
Ah, sounds like the issue I was having with a new install/upgrade as well
(https://bugzilla.redhat.com/show_bug.cgi?id=2023919).

It’s definitely affecting CentOS Stream users and pretty much any new install
at the moment.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NELQ5MU6I6WYLWD4DG5H3AIHWETITVGR/


[ovirt-users] Re: Update Issue

2021-11-18 Thread Yedidyah Bar David
On Thu, Nov 18, 2021 at 2:14 PM Christoph Timm  wrote:
>
> looks like they are already aware of it:

Indeed.

> https://lists.ovirt.org/archives/list/de...@ovirt.org/thread/BDYP62MAJL2QVQZ7RHM2USZD4HXBGUA6/

Now replied there, and created this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=2024605

Also posted now to centos-devel, "qemu-kvm 6.1.0 with 16 PCIE root
ports is broken".

For the time being, we know of two workarounds:

1. Use qemu-kvm 6.0.0, available from the Advanced Virtualization SIG
repo, which should be enabled automatically by the ovirt-release package.
For example:

Per host:
- Move to maintenance
- dnf downgrade qemu-kvm-core-6.0.0
- Activate
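
The per-host steps above can be sketched as follows. Note that the
maintenance/activate steps are normally done from the Admin Portal (or REST
API) rather than on the host shell, and the versionlock pin is an optional
extra of mine that assumes the dnf versionlock plugin is installed:

```shell
# Sketch of workaround 1, per host. The Admin Portal steps are shown as
# comments; only the dnf commands run on the host itself.

# 1. Admin Portal: Compute > Hosts > <host> > Management > Maintenance

# 2. On the host, downgrade to the 6.0.0 build from the Advanced
#    Virtualization SIG repo:
sudo dnf downgrade -y qemu-kvm-core-6.0.0

# Optional: pin the version so a later 'dnf update' does not pull
# 6.1.0 back in (needs python3-dnf-plugin-versionlock installed):
sudo dnf versionlock add 'qemu-kvm-core-6.0.0*'

# 3. Admin Portal: Compute > Hosts > <host> > Management > Activate
```

Afterwards, `rpm -q qemu-kvm-core` on the host should report a 6.0.0 build.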

2. Configure your engine to use fewer than 16 PCIe root ports, e.g. 12, as done here:

https://gerrit.ovirt.org/c/ovirt-system-tests/+/117689

This might be problematic, though, if you need to add many devices to your VMs.
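
On the engine side, the gerrit change above adjusts OST's configuration; for a
regular deployment the equivalent knob would be an engine-config option. The
option name `NumOfPciExpressPorts` is my assumption here, so verify it with
`engine-config --list` before relying on it:

```shell
# Sketch of workaround 2, run on the engine machine. The option name is
# an assumption -- confirm it exists before setting it:
sudo engine-config --list | grep -i pci

# Set 12 PCIe root ports instead of the default 16, then restart:
sudo engine-config -s NumOfPciExpressPorts=12
sudo systemctl restart ovirt-engine
```

As noted above, fewer root ports means fewer devices can later be added to
each Q35 VM.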

Best regards,

--
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: Update Issue

2021-11-18 Thread Christoph Timm

looks like they are already aware of it:
https://lists.ovirt.org/archives/list/de...@ovirt.org/thread/BDYP62MAJL2QVQZ7RHM2USZD4HXBGUA6/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BRMRQS2D33ZSY5M2ELUMGPVZ77Q2OPVD/


[ovirt-users] Re: Update Issue

2021-11-18 Thread Yoann Laissus
It's also working with 6.0.0, so the problem definitely appears in 6.1.0.
The latest build https://cbs.centos.org/koji/taskinfo?taskID=2604860 is also
impacted.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IZZTSG6Y3TBVUPG7O2L6JWH4Z4CRRRQ/


[ovirt-users] Re: Update Issue

2021-11-18 Thread Yoann Laissus
We have exactly the same issue: qemu 6.1 is totally broken, or incompatible with
libvirt/vdsm. The qemu process launches properly (no libvirt/vdsm error, and no
error in the qemu log file) but the VM doesn't boot at all: no SPICE, no
network, nothing.

Downgrading to 5.2.0 (from 6.1.0) has fixed the problem for us.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2UTKEC2OQ2C5YV4MCYSGG7Y36NLZ4P4D/


[ovirt-users] Re: Update Issue

2021-11-17 Thread Gary Pedretty
Update:

Downgrading libvirt and kvm seems to have resolved the issue. Is this a known
bug with those updates?


Gary



> On Nov 17, 2021, at 9:59 PM, Gary Pedretty  wrote:
> 
> After doing the latest updates on one of my CentOS 8 Stream hosts, it will
> no longer allow VMs to be migrated to it. VMs can be started on the host,
> but then have no network access. The hosted engine can be started on the
> host, but again with no network access. I have tried removing the host,
> rebuilding the OS from scratch, and re-adding it, with no change. The host
> shows up, and as far as I can tell there are no error messages when a
> migration fails; it just shows that it failed.
> 
> The updated host has the following versions; only KVM and libvirt appear to
> be different. I tried downgrading KVM and libvirt, but the migration still
> fails. I have avoided updating the other hosts, since I cannot migrate
> any of my VMs to the one host that I already tried to update.
> 
> 
> RHEL - 8.6 - 1.el8
> OS Description: CentOS Stream 8
> Kernel Version: 4.18.0 - 348.el8.x86_64
> KVM Version: 6.1.0 - 4.module_el8.6.0+983+a7505f3f
> LIBVIRT Version: libvirt-7.9.0-1.module_el8.6.0+983+a7505f3f
> VDSM Version: vdsm-4.40.90.4-1.el8
> 
> The other two hosts which have not been updated and still work normally have
> 
> 
> RHEL - 8.6 - 1.el8
> OS Description: CentOS Stream 8
> Kernel Version: 4.18.0 - 348.el8.x86_64
> KVM Version: 6.0.0 - 33.el8s
> LIBVIRT Version: libvirt-7.6.0-4.el8s
> VDSM Version: vdsm-4.40.90.4-1.el8
> 
> The engine log from an attempted migration is as follows:
> 
> 2021-11-17 21:12:46,099-09 INFO  
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-21) 
> [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] Running command: 
> MigrateVmToServerCommand internal: false. Entities affected :  ID: 
> fa1a2d6b-99cb-42bd-a343-91f314d5f47b Type: VMAction group MIGRATE_VM with 
> role type USER
> 2021-11-17 21:12:46,134-09 INFO  
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-21) 
> [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] START, MigrateVDSCommand( 
> MigrateVDSCommandParameters:{hostId='6ee16602-c686-471a-9f65-e5952b813672', 
> vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b', 
> srcHost='ravn-kvm-8.ravnalaska.net', 
> dstVdsId='10491335-e2e1-49f2-96c3-79331535542b', 
> dstHost='ravn-kvm-9.ravnalaska.net:54321', migrationMethod='ONLINE', 
> tunnelMigration='false', migrationDowntime='0', autoConverge='true', 
> migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', 
> maxBandwidth='1250', enableGuestEvents='true', maxIncomingMigrations='2', 
> maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
> params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
> {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
> action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
> params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, 
> {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.9.24.79'}), log 
> id: 3522dfb9
> 2021-11-17 21:12:46,134-09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default 
> task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] START, 
> MigrateBrokerVDSCommand(HostName = ravn-kvm-8.ravnalaska.net, 
> MigrateVDSCommandParameters:{hostId='6ee16602-c686-471a-9f65-e5952b813672', 
> vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b', 
> srcHost='ravn-kvm-8.ravnalaska.net', 
> dstVdsId='10491335-e2e1-49f2-96c3-79331535542b', 
> dstHost='ravn-kvm-9.ravnalaska.net:54321', migrationMethod='ONLINE', 
> tunnelMigration='false', migrationDowntime='0', autoConverge='true', 
> migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', 
> maxBandwidth='1250', enableGuestEvents='true', maxIncomingMigrations='2', 
> maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
> params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
> {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
> action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
> params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, 
> {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.9.24.79'}), log 
> id: 23f98865
> 2021-11-17 21:12:46,141-09 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default 
> task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] FINISH, 
> MigrateBrokerVDSCommand, return: , log id: 23f98865
> 2021-11-17 21:12:46,143-09 INFO  
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-21) 
> [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] FINISH, MigrateVDSCommand, return: 
> MigratingFrom, log id: 3522dfb9
> 2021-11-17 21:12:46,149-09 INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] EVENT_ID: 
>