[ovirt-users] Re: can hosted engine deploy use local repository mirrors instead of internet ones?

2024-02-02 Thread Sketch

On Mon, 22 Jan 2024, iuco...@gmail.com wrote:


Opening up access to the internet is a bureaucratic procedure for us, as 
it would be for adding all the URLs to the proxy. We have a lot of repos 
mirrored locally - is it possible to get hosted-engine to use the local 
ones? Is there a list? I had a search for files that might contain these 
repos in various places, but to no avail.


It is possible; I've done it before.  But there are a lot of repos, and 
some of them tend to change every release, so it becomes a bit of a 
cat-and-mouse game keeping the list updated.  It may not be so bad now 
that oVirt is basically in maintenance mode and new releases are few and 
far between.


In short: install the ovirt-release45 package and take note of all of the 
dependent packages.  Go through all of the .repo files installed by these 
packages in /etc/yum.repos.d and, for each entry with enabled=1, create a 
local equivalent and put it into your own local-ovirt.repo file. 
I see 9 such repos on my hosts currently running oVirt 4.5.4, so things 
aren't quite as bad as they were in the 4.3 days when I originally did 
this. Put this repo file on your host before you run the hosted engine 
deploy and it will find all of the packages it needs in your local repos.
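
For illustration, the local-ovirt.repo would look along these lines (a 
sketch only: the repo IDs and mirror URLs are placeholders for your local 
mirror, and the paths just mimic the upstream layout):

# Hypothetical local mirror entries -- one stanza per enabled upstream repo
[local-centos-ovirt45]
name=Local mirror of the CentOS Stream oVirt 4.5 repo
baseurl=http://mirror.example.com/centos/8-stream/virt/x86_64/ovirt-45/
enabled=1
gpgcheck=0

[local-ovirt-45-upstream]
name=Local mirror of the oVirt 4.5 upstream repo
baseurl=http://mirror.example.com/ovirt/ovirt-4.5/rpm/el8/
enabled=1
gpgcheck=0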


What I ended up doing instead was just configuring dnf to use an HTTP 
proxy server that has access to the CentOS repo mirrors.  Again, this is 
simpler than it used to be now that everything is consolidated on the 
CentOS repo mirrors instead of spread out across ovirt.org and various 
copr repos and such.
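
For reference, pointing dnf at a proxy is just a couple of lines in 
/etc/dnf/dnf.conf (the proxy URL here is a placeholder):

[main]
proxy=http://proxy.example.com:3128
# if the proxy requires authentication:
#proxy_username=someuser
#proxy_password=somepass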

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2WPL4YZF4I3MOPXJYGV3DDNMQRM6X7R7/


[ovirt-users] Re: oVirt 4.6 OS versions

2023-12-14 Thread Sketch
Doesn't a vote for "support Rocky 9" basically mean "continue supporting 
RHEL 9 and derivatives"?  So effectively no change from 4.5, except for 
potentially dropping EL8.

On Wed, 13 Dec 2023, Jean-Louis Dupond via Users wrote:

The ONLY way Rocky Linux 9 will be supported is by somebody doing a 
complete check of everything (ovirt-engine, vdsm, etc.), making it 
compatible with RL9, and getting all the changes/fixes merged.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PD4VEN55LPT5HGKLXFNEE4M4A2HLKJND/


[ovirt-users] Re: ovirt with rocky linux kvm

2023-07-20 Thread Sketch

You can't use Rocky with oVirt 4.3, you'll need to use 4.4 or 4.5.

oVirt 4.3 only supports EL7 releases: RHEL 7, CentOS 7, etc.

oVirt 4.4+ supports EL8 releases: RHEL 8, Rocky 8, CentOS Stream 8, etc.

As of 4.5.4 there is also at least some support for EL9 as well.

On Wed, 28 Jun 2023, cynthiaberb...@outlook.com wrote:


Hello,

After installing multiple machines with Rocky Linux, oVirt was chosen to 
manage this infra.
But adding these machines as "hosts" on oVirt always gives:
Error while executing action: Cannot add Host. Connecting to host via SSH has 
failed, verify that the host is reachable (IP address, routable address etc.) 
You may refer to the engine.log file for further details.
Is Rocky KVM supported on oVirt 4.3.10.4-1.el7?
Do you have any contact with a supporting company/team for oVirt to check 
their support plan?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6TVCCAUNMRMHBEQMS3DARJ5BFH7PLTEQ/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OAKG64R3NIA7ZK2ZG2M23KMQIGEFGB3N/


[ovirt-users] Re: Dead agent

2022-06-15 Thread Sketch

On Wed, 15 Jun 2022, Valerio Luccio wrote:


I have an oVirt 4.4 installation with a self-hosted engine where the agent
seems to have died. The VMs are still running, so I assume that the engine
itself is still running (is this a wrong assumption?). Can I restart the
agent without affecting the running VMs; that is, how will restarting the
agent affect the running VMs? If I can restart the agent, what's the
correct way of doing it?


If the engine is down, the VMs will continue to run.  You just won't be 
able to start/migrate/configure/etc them.


If the engine VM is still running, you may want to SSH into it and look at 
the state of the system to see if you can see what went wrong.


systemctl status 'ovirt*' may tell you if just a single service is down 
(such as ovirt-engine).  You might also check the logs in 
/var/log/ovirt-engine
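
For example (illustrative commands, adjust to taste):

# systemctl --failed 'ovirt*'
# journalctl -u ovirt-engine --since today
# less /var/log/ovirt-engine/engine.log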


If the VM is down or inaccessible, SSH into one of the hosts capable of 
running the engine and run the following to check VM status:


hosted-engine --vm-status

This should tell you if and where it's running.  If it isn't dead, you can 
stop it with:


hosted-engine --vm-shutdown

Check the status and wait until it's actually down, then you can start it 
up again:


hosted-engine --vm-start
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KWGZQ4CETYECKR5MLCVNDHNHIERSY2CQ/


[ovirt-users] Re: Install Ovirt 4.4.10 to standalone system from iso fails

2022-06-13 Thread Sketch

On Tue, 14 Jun 2022, Guillaume Pavese wrote:


Not sure about recovering your cluster on a 4.5 install with a 4.4 backup. I
would also like to know if that is possible.


It's definitely possible.  I had an issue with my 4.4->4.5 upgrade (always 
make a backup first) and wanted to switch my engine from CentOS Stream to 
Rocky anyway, so I built a new host and installed 4.5 using my backup from 
4.4.10.


With 4.3->4.4 this was the only way to upgrade due to the OS version 
change.  4.3 requires el7, 4.4 requires el8, and there is no in-place 
el7->el8 upgrade (except maybe for very specific versions of RHEL).  So 
you had to make a backup on 4.3, reinstall your engine host with el8, then 
restore the backup on 4.4.  Plus a few extra steps if using 
the self-hosted engine...
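
The backup and restore themselves are done with engine-backup; roughly 
like this (a sketch, so check the upgrade guide for the exact flags for 
your version):

# On the old engine:
engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
# On the freshly installed engine host:
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
    --provision-all-databases --restore-permissions
engine-setup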

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5RGQ2IANC7ZUK7MECFIZZBEGPLH2UURR/


[ovirt-users] Re: Install Ovirt 4.4.10 to standalone system from iso fails

2022-06-12 Thread Sketch

On Sun, 12 Jun 2022, David Johnson wrote:


There's a version mismatch between the libpq and the postgresql version:
[root@ovirt1 ~]# dnf info --installed libpq* postgresql*
Installed Packages
Name         : libpq5
Version      : 14.3
Release      : 42PGDG.rhel8
Architecture : x86_64
Size         : 793 k
Source       : libpq5-14.3-42PGDG.rhel8.src.rpm
Repository   : @System
From repo    : pgdg-common


(note the 14.x version and the pgdg-common repo above)

Looks like this is not a clean system.  You should disable any 3rd party 
repos, including EPEL, and run 'dnf distro-sync --nobest' before installing. 
They are likely the source of all of your dependency and missing package 
issues.  oVirt has a lot of dependencies from a lot of repos, and 3rd 
party repos can mess up their delicate balance.
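
Concretely, something like this (the repo IDs are examples; check 'dnf 
repolist' for what is actually enabled on your system):

# dnf repolist
# dnf config-manager --set-disabled epel pgdg-common pgdg14
# dnf distro-sync --nobest

___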
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JFOWCLBQSX36ZDQ6DB3CKFENI5YVLRQA/


[ovirt-users] Re: 4.4.10 -> 4.5.0 upgrade fails

2022-06-01 Thread Sketch

On Wed, 1 Jun 2022, Yedidyah Bar David wrote:


Might be a case of:

https://bugzilla.redhat.com/2077387


That was it.  The SQL query affected 2 rows, so I guess my install wasn't 
quite as ancient as the bug filer's.  I think this system was built in 
either late 4.2 or early 4.3 timeframe.
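
For anyone hitting this before updated packages land: the fix amounts to 
filling in the NULL default_value rows in vdc_options so the NOT NULL 
constraint can be applied.  The authoritative query is in the bug above; 
an illustrative guess at its shape (not the exact query) would be:

sudo -u postgres psql engine -c \
    "UPDATE vdc_options SET default_value = option_value WHERE default_value IS NULL;"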


Thanks!
Sketch
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T6ACRGU2PO5G5O4WZCICN2HB4KQNWLQF/


[ovirt-users] 4.4.10 -> 4.5.0 upgrade fails

2022-06-01 Thread Sketch
Just tried to upgrade my engine from 4.4.10.6 -> 4.5.0 on CentOS Stream 8. 
I realized I had missed the step of updating to the very latest version of 
4.4.10 around the same time the upgrade failed and left things in somewhat 
of a bad state, so I just built a new Rocky 8.6 host and restored my 
backup there.


The update from 4.4.10.6 to 4.4.10.7 went fine there.  However, 4.4.10.7 
-> 4.5.0 fails with the same error:


[ ERROR ] schema.sh: FATAL: Cannot execute sql command: 
--file=/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/_config.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema refresh 
failed

This appears to be the relevant section of the log:

psql:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/_config.sql:22: NOTICE:  column 
"default_value" of relation "vdc_options" already exists, skipping
psql:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/_config.sql:1454: ERROR:  
column "default_value" contains null values
CONTEXT:  SQL statement "ALTER TABLE vdc_options ALTER COLUMN default_value SET NOT 
NULL"
PL/pgSQL function fn_db_change_column_null(character varying,character 
varying,boolean) line 10 at EXECUTE
FATAL: Cannot execute sql command: 
--file=/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/_config.sql

2022-06-01 01:10:03,015-0700 ERROR 
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:530 
schema.sh: FATAL: Cannot execute sql command: 
--file=/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/_config.sql
2022-06-01 01:10:03,016-0700 DEBUG otopi.context context._executeMethod:145 
method exception
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in 
_executeMethod
  method['method']()
File 
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py",
 line 532, in _misc
raise RuntimeError(_('Engine schema refresh failed'))
RuntimeError: Engine schema refresh failed
2022-06-01 01:10:03,017-0700 ERROR otopi.context 
context._executeMethod:154 Failed to execute stage 'Misc configuration': Engine 
schema refresh failed

Any ideas?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FENABEXHA4P3PHHLT6WMK6TMWEPJSRLO/


[ovirt-users] Re: Updating oVirt engine and nodes

2022-05-22 Thread Sketch

On Fri, 20 May 2022, Matheus wrote:


I use oVirt at home for small tasks and some server VM for my needs. I am
planning on updating from 4.4 to 4.5 and I found through the list I don't
need to install all from ground every time I have to update. I did that
when I got 4.4 from 4.3.


That was only required from 4.3 to 4.4, because it's not possible to do 
an in-place upgrade of the underlying OS from EL7 -> EL8.



I tried to follow the update info from ovirt site
(https://www.ovirt.org/documentation/upgrade_guide/index.html) and I got
stuck on:

# dnf install -y centos-release-ovirt45

I use Rocky Linux 8.6 for the engine and that command line doesn't work here:

[root@rocky-engine ~]# dnf install -y rocky-release-ovirt45
Last metadata expiration check: 3:28:13 ago on Fri 20 May 2022 11:02:26 AM
-03.
No match for argument: rocky-release-ovirt45
Error: Unable to find a match: rocky-release-ovirt45
[root@rocky-engine ~]#


It doesn't work because there is no rocky-release-ovirt45 package; the 
command in the documentation is correct for all Enterprise Linux variants:


# dnf install -y centos-release-ovirt45

If that doesn't work, then you missed the "If you are going to install on 
RHEL or derivatives please follow Installing on RHEL or derivatives 
first." warning which links you to 
https://www.ovirt.org/download/install_on_rhel.html


In that step, you add a couple of repos from CentOS Stream.  This is where 
the centos-release-ovirt45 package comes from, as well as some other repos 
containing oVirt dependencies.
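
From memory, that page boils down to pointing dnf at the CentOS Stream 
repo that carries the release package, something like the following (the 
baseurl is my recollection of the Stream mirror layout, so verify it 
against the linked page):

# /etc/yum.repos.d/CentOS-Stream-extras-common.repo (sketch)
[cs8-extras-common]
name=CentOS Stream 8 - Extras common packages
baseurl=http://mirror.stream.centos.org/8-stream/extras/x86_64/extras-common/
enabled=1
gpgcheck=0

# then:
# dnf install -y centos-release-ovirt45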

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PCX6JFNXRAPWBUKJUVZYS3HID22WCBEC/


[ovirt-users] Re: ovirt hosted engine restore backup fails: remot host identifaction changed

2022-04-04 Thread Sketch
It sounds like your machine is part of an IPA domain and getting the host 
key from IPA if it's in /var/lib/sss/pubconf, in which case it will keep 
re-adding the host key to that file every time you attempt to connect to 
it.  You need to either remove the old host keys from IPA (via webui or 
ipa commands) so they don't get re-added to the pubconf file, or remove 
the entire host from IPA and then re-join it to the IPA domain so that IPA 
has the correct keys.
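
Assuming the host is enrolled in IPA, the cleanup looks roughly like this 
(hostnames are placeholders; run where you have IPA admin credentials):

# Clear the stale SSH public keys stored for the host entry:
ipa host-mod engine.example.com --sshpubkey=
# ...or drop the host entry entirely and re-enroll the machine:
ipa host-del engine.example.com
# (then re-join on the host itself with ipa-client-install)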


On Sun, 3 Apr 2022, jeroen@telenet.be wrote:


I have a backup file from our ovirt hosted engine. When I try to run "hosted-engine 
--deploy --restore-from-file=backup.bck" on the same machine with a fresh install of 
ovirt node 4.3 I get this error after some minutes:


[ ERROR ] fatal: [localhost -> ovirt.*mydomain.com*]: FAILED! => {"changed": false, "elapsed": 
185, "msg": "timed out waiting for ping module test success: Failed to connect to the host via ssh: 
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:aer7BMZyKHhfzMXX4pzVULHN7OwSSNDrCuOyvdmG8sQ.
Please contact your system administrator.
Add correct host key in /dev/null to get rid of this message.
Offending ED25519 key in /var/lib/sss/pubconf/known_hosts:6
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)."}

I can't find anything in the docs about this problem. I already removed all the 
entries in /var/lib/sss/pubconf/known_hosts on my oVirt host machine, but that 
didn't change anything. Is there something wrong with the backup? At the moment 
I have 2 other hosts running my VMs but no oVirt manager.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CQYPBO5TDLUKSVS7WW3T6OXMGGOJVHFW/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CS5SMQH7SCHPFJ2DHCD53GVBZC3F5ICH/


[ovirt-users] Re: dnf update fails with oVirt 4.4 on centos 8 stream due to ansible package conflicts.

2022-03-23 Thread Sketch

Just for the record, I fixed this with:

# rpm -e ansible --nodeps
# dnf install ansible

Now dnf update works as expected without errors or removing any packages.

I haven't encountered this issue on my hosts because they are running 
Rocky, not Stream.


On Thu, 24 Mar 2022, Sketch wrote:

Just a warning, adding --allowerasing on my engine causes ovirt-engine and a 
bunch of other ovirt-* packages to be removed.  So make sure you check the 
yum output carefully before running the command.


On Wed, 23 Mar 2022, Jillian Morgan wrote:


 I had to add --allowerasing to my dnf update command to get it to go
 through
 (which it did, cleanly removing the old ansible package and replacing it
 with ansible-core). I suspect that the new ansible-core package doesn't
 properly obsolete the older ansible package. Trying to Upgrade hosts from
 the Admin portal would fail because of this requirement.

 As a side note, my systems hadn't been updated since before the mirrors
 for
 Centos 8 packages went away, so all updates failed due to failure
 downloading mirrorlists. I had to do this first, to get the updated repo
 files pointing to Centos 8 Stream packages:

 dnf update --disablerepo ovirt-4.4-* ovirt-release44

 --
 Jillian Morgan (she/her) ️‍⚧️
 Systems & Networking Specialist
 Primordial Software Group & I.T. Consultancy
 https://www.primordial.ca


 On Wed, Mar 23, 2022 at 3:53 PM Christoph Timm  wrote:
   Hi List,

   I see the same issue on my CentOS Stream 8 hosts and engine.
   I'm running 4.4.10.
   My systems are all migrated from CentOS 8 to CentOS Stream 8.
   Might this be caused by that?

   Best regards
   Christoph


   Am 20.02.22 um 19:58 schrieb Gilboa Davara:
   I managed to upgrade a couple of 8-streams based clusters
   w/ --nobest, and thus far, I've yet to experience any
    issues (knocks wood feverishly).

 - Gilboa

 On Sat, Feb 19, 2022 at 3:21 PM Daniel McCoshen
  wrote:
   Hey all,
   I'm running ovirt 4.4 in production
   (4.4.5-11-1.el8), and I'm attempting to update the
   OS on my hosts. The hosts are all centos 8 stream,
   and dnf update is failing on all of them with the
   following output:

   [root@ovirthost ~]# dnf update
   Last metadata expiration check: 1:36:32 ago on Thu
   17 Feb 2022 12:01:25 PM CST.
   Error:
    Problem: package
   cockpit-ovirt-dashboard-0.15.1-1.el8.noarch requires
   ansible, but none of the providers can be installed
     - package ansible-2.9.27-2.el8.noarch conflicts
   with ansible-core > 2.11.0 provided by
   ansible-core-2.12.2-2.el8.x86_64
     - package ansible-core-2.12.2-2.el8.x86_64
   obsoletes ansible < 2.10.0 provided by
   ansible-2.9.27-2.el8.noarch
     - package ansible-core-2.12.2-2.el8.x86_64
   obsoletes ansible < 2.10.0 provided by
   ansible-2.9.27-1.el8.noarch
     - package ansible-core-2.12.2-2.el8.x86_64
   obsoletes ansible < 2.10.0 provided by
   ansible-2.9.17-1.el8.noarch
     - package ansible-core-2.12.2-2.el8.x86_64
   obsoletes ansible < 2.10.0 provided by
   ansible-2.9.18-2.el8.noarch
     - package ansible-core-2.12.2-2.el8.x86_64
   obsoletes ansible < 2.10.0 provided by
   ansible-2.9.20-2.el8.noarch
     - package ansible-core-2.12.2-2.el8.x86_64
   obsoletes ansible < 2.10.0 provided by
   ansible-2.9.21-2.el8.noarch
     - package ansible-core-2.12.2-2.el8.x86_64
   obsoletes ansible < 2.10.0 provided by
   ansible-2.9.23-2.el8.noarch
     - package ansible-core-2.12.2-2.el8.x86_64
   obsoletes ansible < 2.10.0 provided by
   ansible-2.9.24-2.el8.noarch
     - cannot install the best update candidate for
   package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
     - cannot install the best update candidate for
   package ansible-2.9.27-2.el8.noarch
     - package ansible-2.9.20-1.el8.noarch is filtered
   out by exclude filtering
     - package ansible-2.9.16-1.el8.noarch is filtered
   out by exclude filtering
     - package ansible-2.9.19-1.el8.noarch is filtered
   out by exclude filtering
     - package ansible-2.9.23-1.el8.noarch is filtered
   out by exclude filtering
   (try to add '--allowerasing' to command line to
   replace conflicting packages or '--skip-broken' to
   skip uninstallable packages or '--nobest' to use not
   only best candidate packages)

    cockpit-ovirt-dashboard.noarch is at 0.15.1-1.el8,
    and it looks like that conflicting ansible-core
    package was added to the 8-stream repo two days ago.
    That's when I first noticed the issue, but it
    might be older. When the earlier issues with the
    centos 8 deprecation happened, I had swapped out the
    repos on some of these hosts for the new ones, and
    have since added new hosts as well, using the
    updated repos.

[ovirt-users] Re: dnf update fails with oVirt 4.4 on centos 8 stream due to ansible package conflicts.

2022-03-23 Thread Sketch
Just a warning, adding --allowerasing on my engine causes ovirt-engine and 
a bunch of other ovirt-* packages to be removed.  So make sure you check 
the yum output carefully before running the command.


On Wed, 23 Mar 2022, Jillian Morgan wrote:


I had to add --allowerasing to my dnf update command to get it to go through
(which it did, cleanly removing the old ansible package and replacing it
with ansible-core). I suspect that the new ansible-core package doesn't
properly obsolete the older ansible package. Trying to Upgrade hosts from
the Admin portal would fail because of this requirement.

As a side note, my systems hadn't been updated since before the mirrors for
Centos 8 packages went away, so all updates failed due to failure
downloading mirrorlists. I had to do this first, to get the updated repo
files pointing to Centos 8 Stream packages:

dnf update --disablerepo ovirt-4.4-* ovirt-release44

--
Jillian Morgan (she/her) ️‍⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca


On Wed, Mar 23, 2022 at 3:53 PM Christoph Timm  wrote:
  Hi List,

  I see the same issue on my CentOS Stream 8 hosts and engine.
  I'm running 4.4.10.
  My systems are all migrated from CentOS 8 to CentOS Stream 8.
  Might this be caused by that?

  Best regards
  Christoph


  Am 20.02.22 um 19:58 schrieb Gilboa Davara:
  I managed to upgrade a couple of 8-streams based clusters
  w/ --nobest, and thus far, I've yet to experience any
  issues (knocks wood feverishly).

- Gilboa

On Sat, Feb 19, 2022 at 3:21 PM Daniel McCoshen
 wrote:
  Hey all,
  I'm running ovirt 4.4 in production
  (4.4.5-11-1.el8), and I'm attempting to update the
  OS on my hosts. The hosts are all centos 8 stream,
  and dnf update is failing on all of them with the
  following output:

  [root@ovirthost ~]# dnf update
  Last metadata expiration check: 1:36:32 ago on Thu
  17 Feb 2022 12:01:25 PM CST.
  Error:
   Problem: package
  cockpit-ovirt-dashboard-0.15.1-1.el8.noarch requires
  ansible, but none of the providers can be installed
    - package ansible-2.9.27-2.el8.noarch conflicts
  with ansible-core > 2.11.0 provided by
  ansible-core-2.12.2-2.el8.x86_64
    - package ansible-core-2.12.2-2.el8.x86_64
  obsoletes ansible < 2.10.0 provided by
  ansible-2.9.27-2.el8.noarch
    - package ansible-core-2.12.2-2.el8.x86_64
  obsoletes ansible < 2.10.0 provided by
  ansible-2.9.27-1.el8.noarch
    - package ansible-core-2.12.2-2.el8.x86_64
  obsoletes ansible < 2.10.0 provided by
  ansible-2.9.17-1.el8.noarch
    - package ansible-core-2.12.2-2.el8.x86_64
  obsoletes ansible < 2.10.0 provided by
  ansible-2.9.18-2.el8.noarch
    - package ansible-core-2.12.2-2.el8.x86_64
  obsoletes ansible < 2.10.0 provided by
  ansible-2.9.20-2.el8.noarch
    - package ansible-core-2.12.2-2.el8.x86_64
  obsoletes ansible < 2.10.0 provided by
  ansible-2.9.21-2.el8.noarch
    - package ansible-core-2.12.2-2.el8.x86_64
  obsoletes ansible < 2.10.0 provided by
  ansible-2.9.23-2.el8.noarch
    - package ansible-core-2.12.2-2.el8.x86_64
  obsoletes ansible < 2.10.0 provided by
  ansible-2.9.24-2.el8.noarch
    - cannot install the best update candidate for
  package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
    - cannot install the best update candidate for
  package ansible-2.9.27-2.el8.noarch
    - package ansible-2.9.20-1.el8.noarch is filtered
  out by exclude filtering
    - package ansible-2.9.16-1.el8.noarch is filtered
  out by exclude filtering
    - package ansible-2.9.19-1.el8.noarch is filtered
  out by exclude filtering
    - package ansible-2.9.23-1.el8.noarch is filtered
  out by exclude filtering
  (try to add '--allowerasing' to command line to
  replace conflicting packages or '--skip-broken' to
  skip uninstallable packages or '--nobest' to use not
  only best candidate packages)

  cockpit-ovirt-dashboard.noarch is at 0.15.1-1.el8,
  and it looks like that conflicting ansible-core
  package was added to the 8-stream repo two days ago.
  That's when I first noticed the issue, but it
  might be older. When the earlier issues with the
  centos 8 deprecation happened, I had swapped out the
  repos on some of these hosts for the new ones, and
  have since added new hosts as well, using the
  updated repos. Both hosts that had been moved from
  the old repos, and ones created with the new repos
  are experiencing this issue.

  ansible-core is being pulled from the centos 8
  stream AppStream repo, and the ansible package that
  cockpit-ovirt-dashboard.noarch is trying to use as a
  dependency is coming from ovirt-4.4-centos-ovirt44

  I'm tempted to blacklist 

[ovirt-users] Re: Engine across Clusters

2022-03-11 Thread Sketch

On Fri, 11 Mar 2022, Abe E wrote:

Has anyone set up hyperconverged gluster (3 nodes) and then added more 
nodes afterward while maintaining access to the engine?


I have added additional self-hosted-engine hosts to a cluster in 4.3 and 
it worked fine.  I don't know if 4.4 is stricter about that or not, as I 
moved away from self-hosted for 4.4.


An oversight on my end was twofold: the engine gluster volume being on the 
engine nodes, and the new nodes requiring their own cluster due to a 
different CPU type.


You don't _have_ to use the default CPU type.  If they are just newer 
models of the same type, you can just add them to the existing cluster as 
the old CPU type.  I have a mix of Skylake and Cascade Lake CPUs in my 
Skylake cluster, and it works great.  I guess you could have issues if 
your original cluster is too old to have "Secure" CPU types, and your new 
CPUs only have "Secure" CPUs, or if your old cluster is Intel and new one 
is AMD.


So basically I am trying to see if I can set up a new cluster for the 
other nodes that require it, while giving them the ability to run the 
engine; and of course, because they aren't part of the engine cluster, we 
all know how that goes.  Has anyone dealt with this or worked around it? 
Any advice?


From a gluster standpoint, this should work fine, but I suspect oVirt 
won't like having self-hosted engine hosts in different clusters.


Unless you plan on entirely removing the original hosts in the future, I'm 
not sure this should be much of an issue anyway. While the idea of being 
able to run the engine on any host is nice, it's also a bit overkill. 
Three hosts should be sufficient for redundancy.


If you just want to have the engine running on the newer hosts because 
they may be around longer, I would just make the new hosts part of the 
existing cluster (assuming they're close enough), migrate everything to 
the new hosts, then remove the old hosts and rebuild them as 
non-self-hosted hosts and put them in a separate cluster.  Then once 
that's done, you can upgrade the cluster CPU type on the new machines.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EC5AUEOJKW2Q7GXWAXROROHALQIGQURF/


[ovirt-users] Re: Migrate Hosted-Engine from one NFS mount to another NFS mount

2022-02-24 Thread Sketch

On Wed, 23 Feb 2022, matthew.st...@fujitsu.com wrote:


I’ve always been told that migrating self-hosted-engine storage was a
backup, shutdown, and rebuild from backup procedure.

In my iscsi environment it has never worked.  (More due to the history of my
environment, than the procedure itself.)


This didn't work for me either, though it may have had to do with the many 
issues I had moving from 4.3 to 4.4.  What did work was backing up and 
restoring to a standalone engine.


I had initially planned to migrate it back to a self-hosted setup later, 
but found that this setup is a lot less temperamental than the self-hosted 
engine.  It also doesn't have to be a dedicated machine, it can be a VM 
hosted outside of oVirt, which may also be useful if you're hosting some 
stuff needed for bootstrapping the environment like DNS, NTP, etc. outside 
of oVirt.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QMVYPGXLEBCHHV7YBHHAUZ2QZK6D2FH5/


[ovirt-users] Re: how to convert centos 8 to centos 8 Stream

2022-02-23 Thread Sketch

On Wed, 23 Feb 2022, Adam Xu wrote:


How can we convert centos 8 to centos 8 stream? Thanks.


dnf install centos-release-stream
dnf swap centos-{linux,stream}-repos
dnf distro-sync

Note that the last command is effectively a yum update that syncs your 
packages with all of the installed repos, so make sure you install the 
latest ovirt-release44 package with the working mirror URLs before you run 
it, or you might end up with some (or all) oVirt packages removed.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JOHWT2LYMOEQW7IKUSIB7K6OS4T26V2P/


[ovirt-users] Re: Xcp-ng, first impressions as an oVirt HCI alternative

2022-02-15 Thread Sketch

On Mon, 14 Feb 2022, Thomas Hoberg wrote:

Xen nodes are much more autonomous than oVirt hosts.  They use whatever 
storage they might have locally, or attached via SAN/NAS/Gluster[!!!] 
and others.  They will operate without a management engine

[...]
Any shared storage added to any node is immediately visible to the pool 
and disks can be flipped between local and shared storage very easily


It may be worth pointing out that these are not limitations of libvirt 
(which can even live-migrate VMs between non-shared storage), but only of 
oVirt.  The impression I've got from this mailing list is they are 
intentional design decisions to enforce "correctness" of the cluster.


I do feel like it comes up often enough on the mailing list that it would 
be nice if there was a way to designate a multi-host cluster as having 
both local and shared storage.  I think the flexibility would help those 
running dev or small prod clusters.  Maybe someone just needs to step up 
and write a patch for it?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGWHHJ5HEYN5JLIS6Z6477CEBEYMO4MB/


[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread Sketch

On Sat, 5 Feb 2022, Alex McWhirter wrote:

ProxMox is probably the closest option, but has no multi-clustering support. 
The clusters are more or less isolated from each other, and would need 
another layer if you needed the ability to migrate between them.


It's also Debian-based, so if you're an EL shop, it may not play well with 
a lot of your existing infrastructure.  This was the main reason we went 
with oVirt in the first place.


OpenNebula, more like a DIY AWS than anything else, but was functional last i 
played with it.


I never tried it, but it sounds more like a lighter weight version of 
OpenStack.  I guess that's sort of the same thing...


Has anyone actually played with OpenShift virtualization (replaces RHV)? 
Wonder if OKD supports it with a similar model?


Not yet, but it looks like it's a plugin for OKD.

https://docs.okd.io/latest/virt/about-virt.html

One other possible replacement if you don't use the more advanced 
capabilities of oVirt which some may not think about is Foreman.  It 
allows you to provision VMs on libvirt, and post-provisioning it gives you 
the ability to launch a graphical console on them from a single web-based 
management interface.  You will need to log directly into the hosts and 
use virsh for things like migration, adjusting VM parameters, etc., so 
it's not a complete replacement, but it may work for those comfortable 
with CLI-based management tools.
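
For example, a live migration straight through libvirt is a one-liner (VM 
and host names are placeholders):

# virsh migrate --live --persistent myvm qemu+ssh://desthost.example.com/system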

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TTHKOX4HYE4XHESHY6TMUTCQTQYPCRZR/


[ovirt-users] Re: RHGS and RHV closing down: could you please put that on the home page?

2022-02-05 Thread Sketch
Interesting, I hadn't read about the planned migration from RHV to 
OpenShift.  Based on what I've read here, it seems like 4.5 development is 
well underway, so I doubt 4.4 will be the last release of oVirt.  That 
would mean August is probably not the end of the line.


However, the removal of gluster appears to be slated for 4.5, so it's 
possible it's intended as a final release to harmonize the feature set 
between RHV and OpenShift somewhat, to make migration to OpenShift easier?


On Sat, 5 Feb 2022, Thomas Hoberg wrote:


Please have a look here:
https://access.redhat.com/support/policy/updates/rhev/

Without a commercial product to pay the vast majority of the developers, there 
is just no chance oVirt can survive (unless you're ready to take over). RHV 4.4 
full support ends this August and that very likely means that oVirt won't 
receive updates past July (judging by how things happened with 4.3).

And those will be CI tested against the Stream Beta not EL8 including RHEL.

Only with a RHV support contract ($) you will receive service until 2024 and 
with extended support ($$$) until 2026.

oVirt is dead already. They have known since October. They should have told us 
last year.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZPQ7DO75CVINFKDWXTSH6D2KM67L5FI4/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q73NUBJZYMIWCJTFHICL5X4JIUCRL6ZR/


[ovirt-users] Re: RHGS and RHV closing down: could you please put that on the home page?

2022-02-04 Thread Sketch

On Fri, 4 Feb 2022, Leo David wrote:


Maybe its a perfect time to add ( again ) Ceph into discution.


Ceph already works pretty well in 4.4 in general.  It would be nice if the 
hosted engine supported ceph directly, but you can currently use an iSCSI 
or NFS export from ceph to host it.


Personally, I had issues migrating 4.3->4.4 on gluster, and ended up 
moving to a standalone engine for 4.4 instead.  I had planned to 
migrate back to self-hosted later, but standalone is so much simpler 
and less trouble-prone, I'm not sure I want to go back to self-hosted.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YZDSI25FUZQHN5DAL5OEERYWV34L6GK4/


[ovirt-users] Re: ovirt-4.4 morrors failing

2022-02-01 Thread Sketch
I can confirm that all of the repos work for Rocky now.  I was a little 
worried when I saw 98 packages to be updated, but then I looked at the 
versions and most were just changing the package tag from .el8 to .el8s. 
collectd was the only thing actually updated to a new version.


I do worry about the long term with Stream repos; it seems like there 
could be compatibility issues down the road when Stream is closer to 8.6 
than 8.5.


Just for the record, I haven't tried to do a clean install on Rocky yet, 
just converted existing hosts from CentOS.  I suppose if the installation 
is still broken, I can always install CentOS, install the host, and then 
convert it to Rocky.
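
For anyone wanting to do the same conversion, Rocky's migrate2rocky script 
is the usual route; roughly (double-check the URL against the rocky-tools 
repo before trusting it):

# curl -LO https://raw.githubusercontent.com/rocky-linux/rocky-tools/main/migrate2rocky/migrate2rocky.sh
# chmod +x migrate2rocky.sh
# ./migrate2rocky.sh -r   # -r performs the actual migration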


On Tue, 1 Feb 2022, Lev Veyde wrote:


Hi,
We just released a new version of the ovirt-release package that includes
this temporary vault fix (4.4.10.1).

Thanks in advance,

On Tue, Feb 1, 2022 at 9:31 AM Sandro Bonazzola  wrote:


Il giorno lun 31 gen 2022 alle ore 21:17 Thomas Hoberg
 ha scritto:
  > Hi Emilio,
  >
  > Yes, looks like the patch that should fix this issue is
  already here:
  > https://github.com/oVirt/ovirt-release/pull/93 , but
  indeed it still hasn't
  > been reviewed and merged yet.


Hi, the patch has not been merged yet because the OpsTools repo for
CentOS Stream has not been yet populated by the OpsTools SIG.
I contacted the chair of the SIG last week but he was on PTO and
returning only this week.
As a temporary solution you can redirect the repositories to the
vault: https://vault.centos.org/8.5.2111/
 
  >
  > I hope that we'll have a fixed version very soon, but
  meanwhile you can try
  > to simply apply the changes manually in your *testing*
  env.

  So I did, but I can't help wondering: how well will code
  tested against "stream" work on RHEL, Alma, Rocky,
  Liberty, VzLinux?
  How well will an engine evidently built on "stream" work
  with hosts based on RHEL etc.?
  Shouldn't you in fact switch the engine to RHEL etc., too?


  >
  > Thanks in advance,
  >



--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbona...@redhat.com   

Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.





--

Lev Veyde

Senior Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel

l...@redhat.com | lve...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S3SQG7AHGJRNB2RSAAJ5AOSQPSI2DM4P/


[ovirt-users] Re: Cannot to update hosts, nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by vdsm-4.40.90.4-1.el8.x86_64

2021-11-17 Thread Sketch

On Thu, 4 Nov 2021, Christoph Timm wrote:


Correct, the host of oVirt 4.4.9 does not support CentOS 8.4 anymore.


According to the release notes for 4.4.9, either RHEL 8.5 beta or Stream 
is required.  However, now that CentOS 8.5 is out, we still get the same 
error when attempting to update.


BTW, https://www.ovirt.org/download/ also says 8.4 is supported, but I'm 
guessing someone just forgot to update it since the release notes specify 
8.5.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EUGNAJQBFJWDE6IIQGO6T77BFUZOYO76/


[ovirt-users] Re: posix storage mount path error when creating volumes

2021-09-02 Thread Sketch

On Wed, 1 Sep 2021, Nir Soffer wrote:


   This is a pretty major issue since we can no longer create new
   VMs.  As a workaround, I could change the mount path of the volume
   to only reference a single IP, but oVirt won't let me edit the
   mount.  I wonder if I could manually edit in the database, then
   reboot the hosts one by one to make the change take effect without
   having to shut down hundreds of VMs at once?


This should work.


I tried editing the database to point to a single IP and it works, I'm 
able to mount it and create disks on it.


However, attempts to migrate any live VMs between a system with the old 
mount and one with the new mount fail, presumably because the mountpoint 
is now named differently.  I believe doing this would effectively split my 
cluster into hosts on two separate storage pools.  I wonder if there's 
some way to force the mount name to be the same?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/66P5IO5RFAHAFLJFDKSS5WJBSS3JIJ4C/


[ovirt-users] posix storage mount path error when creating volumes

2021-09-01 Thread Sketch
My cluster was originally built on 4.3, and things were working as long as 
my SPM was on 4.3.  I just killed off the last 4.3 host and rebuilt it as 
4.4, and upgraded my cluster and DC to compatibility level 4.6.


We had cephfs mounted as a posix FS which worked fine, but oddly in 4.3 we 
would end up with two mounts for the same volume.  The configuration had a 
comma separated list of IPs as that is how ceph was configured for 
redundancy, and this is the mount that shows up on both 4.3 and 4.4 hosts 
(/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/).  But 
the 4.3 hosts would also have a duplicate mount which had the FQDN of one 
of the servers instead of the comma separated list.


In 4.4, there's only a single mount and existing VMs start just fine, but 
you can't create new disks or migrate existing disks onto the posix 
storage volume.  My suspicion, based on the error I get on the SPM host 
when it tries to create a volume, is that the mount parser doesn't like 
the comma in the name of the mount (migration would also fail on the 
volume creation task):


2021-08-31 19:34:07,767-0700 INFO  (jsonrpc/6) [vdsm.api] START 
createVolume(sdUUID='e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', spUUID='2948c860-9bdf-11e8-a6b3-00163e0419f0', 
imgUUID='7d704b4d-1ebe-462f-b11e-b91039f43637', size='1073741824', volFormat=5, preallocate=1, diskType='DATA', 
volUUID='be6cb033-4e42-4bf5-a4a3-6ab5bf03edee', 
desc='{"DiskAlias":"test","DiskDescription":""}', 
srcImgUUID='----', srcVolUUID='----', initialSize=None, 
addBitmaps=False) from=:::10.1.2.37,43490, flow_id=bb137995-1ffa-429f-b6eb-5b9ca9f8dfd7, 
task_id=2ddfd1bc-d7e1-4a1e-877a-68e1c2a897ed (api:48)
2021-08-31 19:34:07,767-0700 INFO  (jsonrpc/6) [IOProcessClient] (Global) 
Starting client (__init__:340)
2021-08-31 19:34:07,782-0700 INFO  (ioprocess/3193398) [IOProcess] (Global) 
Starting ioprocess (__init__:465)
2021-08-31 19:34:07,803-0700 INFO  (jsonrpc/6) [vdsm.api] FINISH createVolume 
return=None from=:::10.1.2.37,43490, 
flow_id=bb137995-1ffa-429f-b6eb-5b9ca9f8dfd7, 
task_id=2ddfd1bc-d7e1-4a1e-877a-68e1c2a897ed (api:54)
2021-08-31 19:34:07,844-0700 INFO  (tasks/5) [storage.ThreadPool.WorkerThread] START task 
2ddfd1bc-d7e1-4a1e-877a-68e1c2a897ed (cmd=>, args=None) (threadPool:146)
2021-08-31 19:34:07,869-0700 INFO  (tasks/5) [storage.StorageDomain] Create 
placeholder 
/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/7d704b4d-1ebe-462f-b11e-b91039f43637
 for image's volumes (sd:1718)
2021-08-31 19:34:07,869-0700 ERROR (tasks/5) [storage.TaskManager.Task] 
(Task='2ddfd1bc-d7e1-4a1e-877a-68e1c2a897ed') Unexpected error (task:877)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 884, in 
_run
return fn(*args, **kargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 350, in run
return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/securable.py", line 79, 
in wrapper
return method(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 1945, in 
createVolume
initial_size=initialSize, add_bitmaps=addBitmaps)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 1216, in 
createVolume
initial_size=initial_size, add_bitmaps=add_bitmaps)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 1174, in 
create
imgPath = dom.create_image(imgUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 1721, in 
create_image
"create_image_rollback", [image_dir])
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 385, in 
__init__
self.params = ParamList(argslist)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 298, in 
__init__
raise ValueError("ParamsList: sep %s in %s" % (sep, i))
ValueError: ParamsList: sep , in 
/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/7d704b4d-1ebe-462f-b11e-b91039f43637
2021-08-31 19:34:07,964-0700 INFO  (tasks/5) [storage.ThreadPool.WorkerThread] 
FINISH task 2ddfd1bc-d7e1-4a1e-877a-68e1c2a897ed (threadPool:148)


This is a pretty major issue since we can no longer create new VMs.  As a 
workaround, I could change the mount path of the volume to only reference 
a single IP, but oVirt won't let me edit the mount.  I wonder if I could 
manually edit in the database, then reboot the hosts one by one to make 
the change take effect without having to shut down hundreds of VMs at 
once?
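
If it comes to that, the connection string lives in the engine database's 
storage_server_connections table, so the edit would presumably look 
something like this (a sketch only; back up the database first, and the 
paths are from my setup):

sudo -u postgres psql engine -c \
    "UPDATE storage_server_connections
        SET connection = '10.1.88.75:/vmstore'
      WHERE connection = '10.1.88.75,10.1.88.76,10.1.88.77:/vmstore';"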

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 

[ovirt-users] Cinderlib RBD ceph template issues

2021-09-01 Thread Sketch
This is on oVirt 4.4.8, engine on CS8, hosts on C8, cluster and DC are 
both set to 4.6.


With a newly configured cinderlib/ceph RBD setup, I can create new VM 
images and copy existing VM images, but I can't copy existing template 
images to RBD.  When I try, I get the error in cinderlib.log below, which 
sounds like the disk already exists there, but it definitely does not. 
This leaves me unable to create new VMs on RBD, only to migrate existing 
VM disks.


2021-09-01 04:31:05,881 - cinder.volume.driver - INFO - Driver hasn't 
implemented _init_vendor_properties()
2021-09-01 04:31:05,882 - cinderlib-client - INFO - Creating volume 
'0e8b9aca-1eb1-4837-ac9e-cb3d8f4c1676', with size '500' GB [5c5d0a6b]
2021-09-01 04:31:05,943 - cinderlib-client - ERROR - Failure occurred when trying to 
run command 'create_volume': Entity '' has no property 'glance_metadata' [5c5d0a6b]
2021-09-01 04:31:05,944 - cinder - CRITICAL - Unhandled error
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 455, in 
create
self._raise_with_resource()
  File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 222, in 
_raise_with_resource
six.reraise(*exc_info)
  File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
  File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 448, in 
create
model_update = self.backend.driver.create_volume(self._ovo)
  File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 
986, in create_volume
features=client.features)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in 
proxy_call
rv = execute(f, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in 
execute
six.reraise(c, e, tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
rv = meth(*args, **kwargs)
  File "rbd.pyx", line 629, in rbd.RBD.create
rbd.ImageExists: [errno 17] RBD image already exists (error creating image)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 399, 
in _entity_descriptor
return getattr(entity, key)
AttributeError: type object 'Volume' has no attribute 'glance_metadata'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./cinderlib-client.py", line 170, in main
args.command(args)
  File "./cinderlib-client.py", line 208, in create_volume
backend.create_volume(int(args.size), id=args.volume_id)
  File "/usr/lib/python3.6/site-packages/cinderlib/cinderlib.py", line 175, in 
create_volume
vol.create()
  File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 457, in 
create
self.save()
  File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 628, in 
save
self.persistence.set_volume(self)
  File "/usr/lib/python3.6/site-packages/cinderlib/persistence/dbms.py", line 
254, in set_volume
self.db.volume_update(objects.CONTEXT, volume.id, changed)
  File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 
236, in wrapper
return f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 
184, in wrapper
return f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 
2570, in volume_update
result = query.filter_by(id=volume_id).update(values)
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 3818, 
in update
update_op.exec_()
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 
1670, in exec_
self._do_pre_synchronize()
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 
1743, in _do_pre_synchronize
self._additional_evaluators(evaluator_compiler)
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 
1912, in _additional_evaluators
values = self._resolved_values_keys_as_propnames
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 
1831, in _resolved_values_keys_as_propnames
for k, v in self._resolved_values:
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 
1818, in _resolved_values
desc = _entity_descriptor(self.mapper, k)
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 402, 
in _entity_descriptor
"Entity '%s' has no property '%s'" % (description, key)
sqlalchemy.exc.InvalidRequestError: Entity '' has no property 'glance_metadata'

During handling of the above exception, another exception occurred:


[ovirt-users] Re: posix storage migration issue on 4.4 cluster

2021-08-04 Thread Sketch

On Wed, 4 Aug 2021, Sketch wrote:


What doesn't work is live migration of running VMs between hosts running 
4.4.7 (or 4.4.6 before I updated) when their disks are on ceph.  It appears 
that vdsm attempts to launch the VM on the destination host, and it either 
fails to start or dies right after starting (not entirely clear from the 
logs).  Then the running VM gets paused due to a storage error.


After further investigation, I've found the problem appears to be selinux 
related.  Setting the systems to permissive mode allows VMs to be live 
migrated.  I tailed the audit logs on both hosts and found a couple of 
denies which probably explains the lack of useful errors in the vdsm logs, 
though I'm not sure how to fix the problem.


Source host:

type=AVC msg=audit(1628052789.412:3381): avc:  denied  { read } for  pid=570656 
comm="live_migration" name="6f82b02d-8c22-4d50-a30e-53511776354c" dev="ceph" 
ino=1099511715125 scontext=system_u:system_r:svirt_t:s0:c752,c884 
tcontext=system_u:object_r:svirt_image_t:s0:c411,c583 tclass=file permissive=0
type=AVC msg=audit(1628052790.557:3382): avc:  denied  { read } for  pid=570656 comm="worker" 
path="/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c"
 dev="ceph" ino=1099511715125 scontext=system_u:system_r:svirt_t:s0:c752,c884 
tcontext=system_u:object_r:svirt_image_t:s0:c411,c583 tclass=file permissive=0

# ls -lidZ 
/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c
1099511715125 -rw-rw. 1 vdsm kvm 
system_u:object_r:svirt_image_t:s0:c344,c764 52031193088 Aug  3 23:51 
/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c

Destination host:

type=AVC msg=audit(1628052787.312:1789): avc:  denied  { getattr } for  pid=115062 comm="qemu-kvm" 
name="/" dev="ceph" ino=1099511636351 scontext=system_u:system_r:svirt_t:s0:c411,c583 
tcontext=system_u:object_r:cephfs_t:s0 tclass=filesystem permissive=0

# ls -lidZ /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore
1099511636351 drwxr-xr-x. 3 vdsm kvm unconfined_u:object_r:cephfs_t:s0 1 Aug  3 
23:14 /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore
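
As a stopgap until the underlying policy issue is sorted out, the standard 
workflow is to generate a local policy module from the denials (the module 
name here is arbitrary):

# ausearch -m avc -ts recent | audit2allow -M ovirt_cephfs
# semodule -i ovirt_cephfs.pp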
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ALFLUXTZ4ZTVGWMYLQKBABR7LSIG2QDG/


[ovirt-users] Re: posix storage migration issue on 4.4 cluster

2021-08-03 Thread Sketch

On Tue, 3 Aug 2021, Nir Soffer wrote:


On Tue, Aug 3, 2021 at 5:51 PM Sketch  wrote:


On the 4.3 cluster, migration works fine with any storage backend.  On
4.4, migration works against gluster or NFS, but fails when the VM is
hosted on POSIX cephfs.


What do you mean by "fails"?

What is the failing operation (move disk when vm is running or not?)
and how does it fail?


Sorry, I guess I didn't explain the issue well enough.  Moving disks 
between ceph and gluster works fine, even while the VM is running.



It appears that the VM fails to start on the new host, but it's not
obvious why from the logs.  Can anyone shed some light or suggest further
debugging?


What doesn't work is live migration of running VMs between hosts running 
4.4.7 (or 4.4.6 before I updated) when their disks are on ceph.  It 
appears that vdsm attempts to launch the VM on the destination host, and 
it either fails to start or dies right after starting (not entirely clear 
from the logs).  Then the running VM gets paused due to a storage error.
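
In case it helps anyone reproduce the debugging: while the migration is in 
flight, the destination host's vdsm and qemu logs are where the failure 
shows up first (standard log locations; adjust if yours differ):

# journalctl -u vdsmd -f
# tail -f /var/log/vdsm/vdsm.log
# tail -f /var/log/libvirt/qemu/my_vm_hostname.log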



[ovirt-users] posix storage migration issue on 4.4 cluster

2021-08-03 Thread Sketch
I currently have two clusters up and running under one engine: an old 
cluster on 4.3, and a new cluster on 4.4.  In addition to migrating from 
4.3 to 4.4, we are also migrating from glusterfs to cephfs mounted as 
POSIX storage (not cinderlib, though we may make that conversion after 
moving to 4.4).  I have run into a strange issue, though.


On the 4.3 cluster, migration works fine with any storage backend.  On 
4.4, migration works against gluster or NFS, but fails when the VM is 
hosted on POSIX cephfs.  Both hosts are running CentOS 8.4 and were fully 
updated to oVirt 4.4.7 today, as was the engine (everything was rebooted 
before this test).


It appears that the VM fails to start on the new host, but it's not 
obvious why from the logs.  Can anyone shed some light or suggest further 
debugging?


Related engine log:

2021-08-03 07:11:51,609-07 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-34) 
[1b2d4416-30f0-452d-b689-291f3b7f7482] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[1fd47e75-d708-43e4-ac0f-67bd28dceefd=VM]', 
sharedLocks=''}'
2021-08-03 07:11:51,679-07 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-34) 
[1b2d4416-30f0-452d-b689-291f3b7f7482] Running command: 
MigrateVmToServerCommand internal: false. Entities affected :  ID: 
1fd47e75-d708-43e4-ac0f-67bd28dceefd Type: VMAction group MIGRATE_VM with role 
type USER
2021-08-03 07:11:51,738-07 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-34) 
[1b2d4416-30f0-452d-b689-291f3b7f7482] START, MigrateVDSCommand( 
MigrateVDSCommandParameters:{hostId='6ec548e6-9a2a-4885-81da-74d0935b7ba5', 
vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', srcHost='ovirt_host1', 
dstVdsId='6c31c294-477d-4fa8-b6ff-12e189918f69', dstHost='ovirt_host2:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', migrateEncrypted='false', 
consoleAddress='null', maxBandwidth='2500', enableGuestEvents='true', 
maxIncomingMigrations='2', maxOutgoingMigrations='2', 
convergenceSchedule='[init=[{name=setDowntime, params=[100]}], 
stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, 
action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, 
params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, 
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.1.88.85'}), log id: 67f63342
2021-08-03 07:11:51,739-07 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] START, MigrateBrokerVDSCommand(HostName = ovirt_host1, MigrateVDSCommandParameters:{hostId='6ec548e6-9a2a-4885-81da-74d0935b7ba5', vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', srcHost='ovirt_host1', dstVdsId='6c31c294-477d-4fa8-b6ff-12e189918f69', dstHost='ovirt_host2:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='2500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, 
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.1.88.85'}), log id: 37ab0828

2021-08-03 07:11:51,741-07 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default 
task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] FINISH, 
MigrateBrokerVDSCommand, return: , log id: 37ab0828
2021-08-03 07:11:51,743-07 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-34) 
[1b2d4416-30f0-452d-b689-291f3b7f7482] FINISH, MigrateVDSCommand, return: 
MigratingFrom, log id: 67f63342
2021-08-03 07:11:51,750-07 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: my_vm_hostname, Source: ovirt_host1, Destination: ovirt_host2, User: ebyrne@FreeIPA). 
2021-08-03 07:11:55,736-07 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd' was reported as Down on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2)

2021-08-03 07:11:55,736-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-21) [28d98b26] VM 
'1fd47e75-d708-43e4-ac0f-67bd28dceefd'(my_vm_hostname) was unexpectedly 
detected as 'Down' on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2) 
(expected on '6ec548e6-9a2a-4885-81da-74d0935b7ba5')
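
(The VM UUID in the engine log is handy for pulling the matching entries 
on the destination host; e.g., assuming the default vdsm log location:

# grep 1fd47e75-d708-43e4-ac0f-67bd28dceefd /var/log/vdsm/vdsm.log

since that's usually where the actual libvirt/qemu error ends up.)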

[ovirt-users] Re: [ANN] oVirt 4.4.7 Fourth Release Candidate is now available for testing

2021-06-23 Thread Sketch

On Wed, 23 Jun 2021, Nir Soffer wrote:


On Wed, Jun 23, 2021 at 12:22 PM Sketch  wrote:


Installation fails on a CentOS Linux 8.4 host using yum update.

  Problem 1: cannot install the best update candidate for package 
vdsm-4.40.60.7-1.el8.x86_64
   - nothing provides python3-sanlock >= 3.8.3-3 needed by 
vdsm-4.40.70.4-1.el8.x86_64
   - nothing provides sanlock >= 3.8.3-3 needed by vdsm-4.40.70.4-1.el8.x86_64



This version was not released yet for CentOS.  You need to wait until this 
package is released on CentOS if you want to upgrade oVirt to 4.4.7.

If you want to use the latest oVirt version as soon as it is released, you 
need to use CentOS Stream.


I suspected that might have been the case, but figured I'd mention it 
since we're on RC4 now and the release notes say it's available for RHEL 
8.4 as well as Stream.  I checked the RHEL package browser and it doesn't 
have sanlock >= 3.8.3-3 yet either.  Is the oVirt 4.4.7 GA release waiting 
on this update to be pushed?
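
(For anyone else watching for it: a quick way to check whether the needed 
sanlock build has landed in the repos you have enabled is to list every 
version dnf can see, e.g.:

# dnf --showduplicates list sanlock python3-sanlock

which should show 3.8.3-3 once it's published.)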



[ovirt-users] Re: [ANN] oVirt 4.4.7 Fourth Release Candidate is now available for testing

2021-06-23 Thread Sketch

Installation fails on a CentOS Linux 8.4 host using yum update.

I had previously installed ceph and python3-os-brick from the 
centos-openstack-train and ceph-nautilus repos, but I removed the packages 
and the repos, ran dnf distro-sync, and used yum list installed just to be 
sure everything was in a clean state with no extra packages installed from 
nonstandard repos.
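
(If anyone wants to run the same sanity check: dnf prints the originating 
repo in the third column of "dnf list installed", so something like the 
following should flag anything from an unexpected repo.  The repo names in 
the pattern are just the ones I expect on my hosts; adjust to match yours:

# dnf list installed | awk '$3 ~ /^@/ && $3 !~ /@(anaconda|System|baseos|appstream|extras|powertools|ovirt)/'

)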


Here's the output from yum update:

Last metadata expiration check: 2:31:18 ago on Tue 22 Jun 2021 11:26:39 PM PDT.
Error:
 Problem 1: cannot install the best update candidate for package 
vdsm-4.40.60.7-1.el8.x86_64
  - nothing provides python3-sanlock >= 3.8.3-3 needed by 
vdsm-4.40.70.4-1.el8.x86_64
  - nothing provides sanlock >= 3.8.3-3 needed by vdsm-4.40.70.4-1.el8.x86_64
 Problem 2: package vdsm-hook-fcoe-4.40.70.4-1.el8.noarch requires vdsm = 
4.40.70.4-1.el8, but none of the providers can be installed
  - cannot install the best update candidate for package 
vdsm-hook-fcoe-4.40.60.7-1.el8.noarch
  - nothing provides python3-sanlock >= 3.8.3-3 needed by 
vdsm-4.40.70.4-1.el8.x86_64
  - nothing provides sanlock >= 3.8.3-3 needed by vdsm-4.40.70.4-1.el8.x86_64
 Problem 3: package vdsm-hook-ethtool-options-4.40.70.4-1.el8.noarch requires 
vdsm = 4.40.70.4-1.el8, but none of the providers can be installed
  - cannot install the best update candidate for package 
vdsm-hook-ethtool-options-4.40.60.7-1.el8.noarch
  - nothing provides python3-sanlock >= 3.8.3-3 needed by 
vdsm-4.40.70.4-1.el8.x86_64
  - nothing provides sanlock >= 3.8.3-3 needed by vdsm-4.40.70.4-1.el8.x86_64
 Problem 4: cannot install the best update candidate for package 
vdsm-hook-vmfex-dev-4.40.60.7-1.el8.noarch
  - package vdsm-hook-vmfex-dev-4.40.70.4-1.el8.noarch requires vdsm = 
4.40.70.4-1.el8, but none of the providers can be installed
  - nothing provides python3-sanlock >= 3.8.3-3 needed by 
vdsm-4.40.70.4-1.el8.x86_64
  - nothing provides sanlock >= 3.8.3-3 needed by vdsm-4.40.70.4-1.el8.x86_64
 Problem 5: package ovirt-provider-ovn-driver-1.2.33-1.el8.noarch requires 
vdsm, but none of the providers can be installed
  - package vdsm-4.40.60.7-1.el8.x86_64 requires vdsm-http = 4.40.60.7-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.70.1-1.el8.x86_64 requires vdsm-http = 4.40.70.1-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.70.2-1.el8.x86_64 requires vdsm-http = 4.40.70.2-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.17-1.el8.x86_64 requires vdsm-http = 4.40.17-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.18-1.el8.x86_64 requires vdsm-http = 4.40.18-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.19-1.el8.x86_64 requires vdsm-http = 4.40.19-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.20-1.el8.x86_64 requires vdsm-http = 4.40.20-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.21-1.el8.x86_64 requires vdsm-http = 4.40.21-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.22-1.el8.x86_64 requires vdsm-http = 4.40.22-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.26.3-1.el8.x86_64 requires vdsm-http = 4.40.26.3-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.30-1.el8.x86_64 requires vdsm-http = 4.40.30-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.31-1.el8.x86_64 requires vdsm-http = 4.40.31-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.32-1.el8.x86_64 requires vdsm-http = 4.40.32-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.33-1.el8.x86_64 requires vdsm-http = 4.40.33-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.34-1.el8.x86_64 requires vdsm-http = 4.40.34-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.35-1.el8.x86_64 requires vdsm-http = 4.40.35-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.35.1-1.el8.x86_64 requires vdsm-http = 4.40.35.1-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.36-1.el8.x86_64 requires vdsm-http = 4.40.36-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.37-1.el8.x86_64 requires vdsm-http = 4.40.37-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.38-1.el8.x86_64 requires vdsm-http = 4.40.38-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.39-1.el8.x86_64 requires vdsm-http = 4.40.39-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.40-1.el8.x86_64 requires vdsm-http = 4.40.40-1.el8, but 
none of the providers can be installed
  - package vdsm-4.40.50.8-1.el8.x86_64 requires vdsm-http = 4.40.50.8-1.el8, 
but none of the providers can be installed
  - package vdsm-4.40.50.9-1.el8.x86_64 requires vdsm-http = 4.40.50.9-1.el8, 
but none of the providers can be installed
  - package 

[ovirt-users] Problems provisioning 4.4.6 hosted engine

2021-05-13 Thread Sketch
This is a new system running CentOS 8.3, with the oVirt-4.4 repo and all 
updates applied.  When I try to install the hosted engine with my engine 
backup from 4.3.10, the installation fails with a "too many open files" 
error.  My 8.3 hosts already had 1M system max files, which is more than 
any of my CentOS 7/oVirt 4.3 hosts have.  I tried increasing it to 2M with 
no luck, so my suspicion is that the error is on the engine itself?
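
One thing I noticed while digging: the traceback below is os.pipe() 
failing inside ansible on the deploy host itself, and errno 24 (EMFILE) is 
the per-process open file limit (ulimit -n / RLIMIT_NOFILE), not the 
system-wide fs.file-max I had been raising.  So presumably the thing to 
check is the limit in the shell that runs the deploy, something like this 
(65536 is just a guess at a sane ceiling, and the backup file name is 
whatever yours is called):

# ulimit -n
# ulimit -n 65536
# hosted-engine --deploy --restore-from-file=engine-backup.tar.gz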


I tried provisioning a new engine just to test, and I get SSH key errors 
instead of this one.


Any suggestions?

2021-05-12 23:09:44,731-0700 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": 
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"exception": "Traceback (most recent call last):\n  File 
\"/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py\", line 665, in _execute\nresult = 
self._handler.run(task_vars=variables)\n  File \"/usr/lib/python3.6/site-packages/ansible/plugins/action/wait_for_connection.py\", line 
122, in run\nself._remove_tmp_path(self._connection._shell.tmpdir)\n  File 
\"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 417, in _remove_tmp_path\ntmp_rm_res = 
self._low_level_execute_command(cmd, sudoable=False)\n  File \"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 
1085, in _low_level_execute_command\nrc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)\n  File 
\"/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py\", line 1191, in exec_command\ncmd = self._build_command(*args)\n  
File \"/usr/lib/python3.6/site-packages/ansible/plugins/connection/s
sh.py\", line 562, in _build_command\nself.sshpass_pipe = os.pipe()\nOSError: [Errno 24] Too many open files\n\nDuring 
handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
\"/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py\", line 147, in run\nres = self._execute()\n  File 
\"/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py\", line 673, in _execute\nself._handler.cleanup()\n 
 File \"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 128, in cleanup\n
self._remove_tmp_path(self._connection._shell.tmpdir)\n  File 
\"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 417, in _remove_tmp_path\ntmp_rm_res = 
self._low_level_execute_command(cmd, sudoable=False)\n  File 
\"/usr/lib/python3.6/site-packages/ansible/plugins/action/__init__.py\", line 1085, in _low_level_execute_command\nrc, 
stdout, stderr = self._connection.exec_command
(cmd, in_data=in_data, sudoable=sudoable)\n  File 
\"/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py\", line 1191, in 
exec_command\ncmd = self._build_command(*args)\n  File 
\"/usr/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py\", line 562, in 
_build_command\nself.sshpass_pipe = os.pipe()\nOSError: [Errno 24] Too many open files\n",
"msg": "Unexpected failure during module execution.",
"stdout": ""
},
"ansible_task": "Wait for the local VM",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 3605
}


[ovirt-users] Gluster version upgrades

2021-03-06 Thread Sketch
Is the gluster version on an oVirt host tied to the oVirt version, or 
would it be safe to upgrade to newer versions of gluster?


I have noticed gluster is often updated to a new major version on oVirt 
point release upgrades.  We have some compute+storage hosts on 4.3.6 which 
can't be upgraded easily at the moment, but we are having some gluster 
issues that appear to be due to bugs, and I wonder if upgrading might 
help.  Would an in-place upgrade of gluster be a bad idea without also 
updating oVirt?
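
(Partly answering my own question on the compatibility side: gluster 
tracks a cluster-wide operating version separately from the package 
version, and assuming gluster 3.10 or newer, these show where the cluster 
currently sits and how far it could be bumped:

# gluster volume get all cluster.op-version
# gluster volume get all cluster.max-op-version

That still leaves the question of what oVirt 4.3 itself expects, though.)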
