[ovirt-users] Re: dnf update fails with oVirt 4.4 on centos 8 stream due to ansible package conflicts.

2022-03-23 Thread Jillian Morgan
I had to add --allowerasing to my dnf update command to get it to go
through (which it did, cleanly removing the old ansible package and
replacing it with ansible-core). I suspect that the new ansible-core
package doesn't properly obsolete the older ansible package. Trying to
Upgrade hosts from the Admin portal would fail because of this requirement.

As a side note, my systems hadn't been updated since before the mirrors for
CentOS 8 packages went away, so all updates failed because the mirrorlists
could not be downloaded. I had to do this first, to get the updated repo
files pointing at CentOS 8 Stream packages:

dnf update --disablerepo 'ovirt-4.4-*' ovirt-release44
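Put together, the recovery sequence looked like this (a sketch of what worked here; the ovirt-release44 package name and the repo ids are from a stock oVirt 4.4 install, so adjust them to match your system):

```shell
# 1) First refresh the oVirt release/repo files themselves, with the broken
#    ovirt-4.4 mirrorlist repos disabled (quote the glob so dnf, not the
#    shell, expands it):
dnf update --disablerepo 'ovirt-4.4-*' ovirt-release44

# 2) Then run the full update, allowing dnf to erase the old ansible
#    package that ansible-core conflicts with:
dnf update --allowerasing
```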

-- 
Jillian Morgan (she/her) 🏳️‍⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca


On Wed, Mar 23, 2022 at 3:53 PM Christoph Timm  wrote:

> Hi List,
>
> I see the same issue on my CentOS Stream 8 hosts and engine.
> I'm running 4.4.10.
> My systems are all migrated from CentOS 8 to CentOS Stream 8. Might this
> be caused by that?
>
> Best regards
> Christoph
>
>
> Am 20.02.22 um 19:58 schrieb Gilboa Davara:
>
> I managed to upgrade a couple of 8-stream-based clusters w/ --nobest, and
> thus far, I've yet to experience any issues (knocks wood feverishly).
>
> - Gilboa
>
> On Sat, Feb 19, 2022 at 3:21 PM Daniel McCoshen 
> wrote:
>
>> Hey all,
>> I'm running ovirt 4.4 in production (4.4.5-11-1.el8), and I'm attempting
>> to update the OS on my hosts. The hosts are all centos 8 stream, and dnf
>> update is failing on all of them with the following output:
>>
>> [root@ovirthost ~]# dnf update
>> Last metadata expiration check: 1:36:32 ago on Thu 17 Feb 2022 12:01:25
>> PM CST.
>> Error:
>>  Problem: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch requires
>> ansible, but none of the providers can be installed
>>   - package ansible-2.9.27-2.el8.noarch conflicts with ansible-core >
>> 2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
>>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
>> provided by ansible-2.9.27-2.el8.noarch
>>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
>> provided by ansible-2.9.27-1.el8.noarch
>>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
>> provided by ansible-2.9.17-1.el8.noarch
>>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
>> provided by ansible-2.9.18-2.el8.noarch
>>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
>> provided by ansible-2.9.20-2.el8.noarch
>>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
>> provided by ansible-2.9.21-2.el8.noarch
>>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
>> provided by ansible-2.9.23-2.el8.noarch
>>   - package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
>> provided by ansible-2.9.24-2.el8.noarch
>>   - cannot install the best update candidate for package
>> cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
>>   - cannot install the best update candidate for package
>> ansible-2.9.27-2.el8.noarch
>>   - package ansible-2.9.20-1.el8.noarch is filtered out by exclude
>> filtering
>>   - package ansible-2.9.16-1.el8.noarch is filtered out by exclude
>> filtering
>>   - package ansible-2.9.19-1.el8.noarch is filtered out by exclude
>> filtering
>>   - package ansible-2.9.23-1.el8.noarch is filtered out by exclude
>> filtering
>> (try to add '--allowerasing' to command line to replace conflicting
>> packages or '--skip-broken' to skip uninstallable packages or '--nobest' to
>> use not only best candidate packages)
>>
>> cockpit-ovirt-dashboard.noarch is at 0.15.1-1.el8, and it looks like that
>> conflicting ansible-core package was added to the 8-stream repo two days
>> ago. That's when I first noticed the issue, but it might be older. When
>> the earlier issues with the CentOS 8 deprecation happened, I had swapped
>> out the repos on some of these hosts for the new ones, and have since added
>> new hosts as well, using the updated repos. Both the hosts that were moved
>> from the old repos and the ones created with the new repos are experiencing
>> this issue.
>>
>> ansible-core is being pulled from the CentOS 8 Stream AppStream repo, and
>> the ansible package that cockpit-ovirt-dashboard.noarch is trying to use as
>> a dependency is coming from ovirt-4.4-centos-ovirt44.
>>
>> I'm tempted to blacklist ansible-core i

[ovirt-users] 4.4.9 -> 4.4.10 Cannot start or migrate any VM (hotpluggable cpus requested exceeds the maximum cpus supported by KVM)

2022-03-23 Thread Jillian Morgan
After upgrading the engine from 4.4.9 to 4.4.10, and then upgrading one
host, any attempt to migrate a VM to that host or start a VM on that host
results in the following error:

Number of hotpluggable cpus requested (16) exceeds the maximum cpus
supported by KVM (8)

While the version of qemu is the same across hosts
(qemu-kvm-6.0.0-33.el8s.x86_64), I traced the difference to the upgraded
kernel on the new host. I have always run elrepo's kernel-ml on these hosts
to support bcache which RHEL's kernel doesn't support. The working hosts
still run kernel-ml-5.15.12. The upgraded host ran kernel-ml-5.17.0.

In case anyone else runs kernel-ml, have you run into this issue?
Does anyone know why KVM's KVM_CAP_MAX_VCPUS value is lowered on the new
kernel?
Does anyone know how to query the KVM capabilities from userspace without
writing a program leveraging kvm_ioctl()'s?
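On the last question, a small Python sketch can issue the same ioctl without a compiled program. The constants below are transcribed from linux/kvm.h (KVM_CHECK_EXTENSION = 0xAE03, KVM_CAP_NR_VCPUS = 9, KVM_CAP_MAX_VCPUS = 66); double-check them against your kernel headers before trusting the output:

```python
import fcntl
import os

# Constants transcribed from <linux/kvm.h> (verify against your headers)
KVM_CHECK_EXTENSION = 0xAE03   # _IO(KVMIO, 0x03)
KVM_CAP_NR_VCPUS = 9           # recommended max vcpus per guest
KVM_CAP_MAX_VCPUS = 66         # absolute max vcpus per guest

def query_kvm_cap(cap):
    """Return the value of a KVM capability, or -1 if /dev/kvm is unavailable."""
    try:
        fd = os.open("/dev/kvm", os.O_RDONLY)
    except OSError:
        return -1
    try:
        # KVM_CHECK_EXTENSION takes the capability id as its argument and
        # returns the capability value (0 if unsupported)
        return fcntl.ioctl(fd, KVM_CHECK_EXTENSION, cap)
    finally:
        os.close(fd)

if __name__ == "__main__":
    print("KVM_CAP_NR_VCPUS (recommended):", query_kvm_cap(KVM_CAP_NR_VCPUS))
    print("KVM_CAP_MAX_VCPUS (absolute):", query_kvm_cap(KVM_CAP_MAX_VCPUS))
```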

Related to this, it seems that oVirt and/or libvirtd always runs qemu-kvm
with an -smp argument of "maxcpus=16". This causes qemu's built-in check to
fail on the new kernel, which reports a max_vcpus value of 8.

Why does ovirt always request maxcpus=16?
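To see what qemu was actually launched with on a given host, something like this should work (the VM name in the virsh line is hypothetical):

```shell
# Show the -smp argument of each running qemu-kvm process
ps -o args= -C qemu-kvm | tr ' ' '\n' | grep -A1 -- '^-smp'

# Cross-check the vCPU topology libvirt requested for a guest
virsh dumpxml myvm | grep -i '<vcpu'
```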

And yes, before you say it, I know you're going to say that running
kernel-ml isn't supported.

-- 
Jillian Morgan (she/her) 🏳️‍⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YSOGCHERQIE53O2YRUTF4SUHKJ2WL7HN/


[ovirt-users] Re: OVIRT mirrors

2022-01-02 Thread Jillian Morgan
I ran into the same problem this evening. It looks like the Let's Encrypt
certificate for resources.ovirt.org expired early this morning. Who knows
why their auto-renewal automation has failed. This might also be the reason
why the main mirrorlist site (used by default in the dnf repo files) is
failing.

NON-RECOMMENDED TEMPORARY WORKAROUND: If you wish to be _unsafe_ and accept
that their cert just expired, you can add "sslverify=0" to the
ovirt-4.4.repo file, comment out the mirrorlist, enable the baseurl, and
then a retry will succeed.
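For reference, the edited stanza would look roughly like this (the repo id and baseurl are assumed from a typical ovirt-4.4.repo; keep your file's own baseurl line rather than copying this one):

```ini
# /etc/yum.repos.d/ovirt-4.4.repo -- TEMPORARY, UNSAFE workaround only;
# revert once the certificate is renewed.
[ovirt-4.4]
name=Latest oVirt 4.4 Release
#mirrorlist=https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.4-el8
baseurl=https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/
enabled=1
gpgcheck=1
sslverify=0
```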

-- 
Jillian Morgan (she/her)
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca


On Sun, 2 Jan 2022 at 14:19, Andy via Users  wrote:

> Are the oVirt mirrors down?  I am receiving the following error trying to
> update some of my systems.
>
> Errors during downloading metadata for repository 'ovirt-4.4':
>   - Status code: 500 for
> https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.4-el8 (IP: 8.43.85.224)
> Error: Failed to download metadata for repo 'ovirt-4.4': Cannot prepare
> internal mirrorlist: Status code: 500 for
> https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.4-el8 (IP: 8.43.85.224)
>
>
> Going to the link:
>
> https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.4-el8
>
> Proxy Error The proxy server could not handle the request *GET 
> /mirrorlist-ovirt-4.4-el8
> <https://mirrorlist.ovirt.org/mirrorlist-ovirt-4.4-el8>*.
>
> Reason: *Error during SSL Handshake with remote server*
>
> thanks
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JJR3YUDNDDZIHBIB2CETO77ELRA5B3NM/


[ovirt-users] Re: Ovirt 4.4 HC gluster issues on new CentOS 8 node (cluster still in 4.3 compatibility level)

2020-06-03 Thread Jillian Morgan
Thank you, Strahil.

That was exactly the problem. I had already figured out other missing
gluster-related packages (ansible roles, gluster-server, etc), but
didn't think about the VDSM sub-package. Sigh.
So that problem's solved. Now I have another, but I'll debug that a
while before bugging the list again.

-- 
Jillian Morgan (she/her)
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca

On Tue, 2 Jun 2020 at 13:02, Strahil Nikolov  wrote:
>
> Maybe you're missing an rpm.
> Do you have the vdsm-gluster package installed?
>
> Best  Regards,
> Strahil Nikolov
>
> On 2 June 2020 at 19:18:43 GMT+03:00, jillian.mor...@primordial.ca wrote:
> >I've successfully migrated to a new 4.4 engine, now managing the older
> >4.3 (CentOS 7) nodes. So far so good there.
> >
> >I installed a new CentOS 8 node w/ 4.4, joined it to the Gluster peer
> >group, and it can see all of the volumes, but the node won't go into
> >Online state in the engine because of apparent gluster-related VDSM
> >errors:
> >
> >Status of host butter was set to NonOperational.
> >Gluster command [] failed on server .
> >VDSM butter command ManageGlusterServiceVDS failed: The method does not
> >exist or is not available: {'method': 'GlusterService.action'}
> >
> >I haven't been able to find anything in the VDSM or Engine logs that
> >give me any hint as to what's going on (besides just repeating that the
> >"GlusterService.action" method doesn't exist). Anybody know what I'm
> >missing, or have hints on where to dig to debug further?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQ5UU6M7T2M2LM7QMDEZYLOWRT3ADNXY/


[ovirt-users] Ovirt 4.4 HC gluster issues on new CentOS 8 node (cluster still in 4.3 compatibility level)

2020-06-02 Thread jillian . morgan
I've successfully migrated to a new 4.4 engine, now managing the older 4.3 
(CentOS 7) nodes. So far so good there.

I installed a new CentOS 8 node w/ 4.4, joined it to the Gluster peer group, 
and it can see all of the volumes, but the node won't go into Online state in 
the engine because of apparent gluster-related VDSM errors:

Status of host butter was set to NonOperational.
Gluster command [] failed on server .
VDSM butter command ManageGlusterServiceVDS failed: The method does not exist 
or is not available: {'method': 'GlusterService.action'}

I haven't been able to find anything in the VDSM or Engine logs that give me 
any hint as to what's going on (besides just repeating that the 
"GlusterService.action" method doesn't exist). Anybody know what I'm missing, or
have hints on where to dig to debug further?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KERTWSZE5I5WMIWYXRXG4UVZZ2X4HZBE/