[ovirt-users] Re: Assistance Needed with oVirt Engine Deployment 4.5 Error
Because CentOS Stream 8 is EOL, you need to update the URLs for the repos in /etc/yum.repos.d. For example, and to start, change /etc/yum.repos.d/CentOS-Ceph-Pacific.repo to use:

baseurl=http://vault.centos.org/$contentdir/$stream/storage/$basearch/ceph-pacific/

We used "--ansible-extra-vars=he_pause_before_engine_setup=True" in our "hosted-engine --deploy ..." command. The first time the install paused, we ssh'd into the engine VM and fixed the repos (scripted version below) before deleting the lock file to let the install proceed.

I hope this helps!
Devin
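In case it's useful, the repo edit can be scripted. This is only a sketch: it assumes the repo files still carry the stock mirror.centos.org baseurl lines (some files use mirrorlist entries instead), so check each file before and after.

---
cd /etc/yum.repos.d
# Comment out any mirrorlist lines and point baseurl at the vault,
# since mirror.centos.org no longer serves Stream 8 content:
sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
       -e 's|^#\?baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' \
       CentOS-*.repo
dnf clean all
---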
> On Sep 4, 2024, at 5:08 AM, weichao.j...@rayseconsult.com wrote:
>
> Dear friends:
> I am encountering an issue while attempting to deploy the oVirt Engine 4.5 on my local engine VM, and I'm seeking guidance on how to resolve this.
> Here's a summary of the error message I received during the deployment process:
>
> [ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
> [ ERROR ] fatal: [localhost -> 192.168.222.27]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'centos-ceph-pacific': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "rc": 1, "results": []}
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
> [ INFO ] changed: [localhost -> 192.168.222.27]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
> [ INFO ] changed: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Change ownership of copied engine logs]
> [ INFO ] changed: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
>
> I would appreciate any advice or suggestions on how to proceed with troubleshooting or resolving this error. If there are any specific logs or information you need to assist me, please let me know.
>
> Thank you in advance for your help.
> Best regards
[ovirt-users] Re: Moving hosted engine to iscsi storage domain
We've finally completed this migration. I'm not exactly sure, but I believe the missing step was to manually run "rescan-scsi-bus.sh --forcerescan" on each oVirt host (sketched below) to make sure they all saw the new LUN. That allowed the new hosted_engine storage domain to activate and the setup to complete.

Thanks,
Devin
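For anyone following along, the rescan amounted to something like the following. This is a sketch only: the host names are illustrative, and the "multipath -r" map reload is a convenience rather than a documented requirement.

---
# Force every host to rescan its SCSI buses and reload multipath maps
# so the newly exported LUN shows up (rescan-scsi-bus.sh is in sg3_utils):
for host in lnxvirt0{1..7}; do
    ssh root@$host 'rescan-scsi-bus.sh --forcerescan && multipath -r'
done
---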
> On Sep 11, 2024, at 9:57 AM, Devin A. Bougie wrote:
>
> Hello,
>
> We are attempting to move a hosted engine from an NFS to an iSCSI storage domain, following the normal backup / restore procedure.
>
> [snip]
[ovirt-users] Moving hosted engine to iscsi storage domain
Hello,

We are attempting to move a hosted engine from an NFS to an iSCSI storage domain, following the normal backup / restore procedure.

With a little intervention in the new hosted engine VM, everything seems to be working with the new engine, and the process gets to the point of trying to create the new iSCSI hosted_storage domain. At this point, it fails with "Storage domain cannot be reached. Please ensure it is accessible from the host(s)."

The host itself does see the storage, and everything looks fine using iscsiadm and multipath commands. However, if I look at the new hosted_storage domain in the new engine, it's stuck "unattached", and the "hosted-engine --deploy --restore-from-file ..." command loops at the "Please specify the storage you would like to use" step.

Please see below for an excerpt of the output and logs, and let me know what additional information I can provide.

Many thanks,
Devin
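For reference, "looks fine" above means checks along these lines all came back clean. This is a sketch of the kind of verification involved, not an exact transcript:

---
# Confirm the iSCSI sessions are logged in and the LUN has healthy paths:
iscsiadm -m session -P 1
multipath -ll
lsblk            # the new 1 TiB LUN should appear under its mpath device
---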
Here's the output from the "hosted-engine --deploy --restore-from-file ..." command.
———
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : iSCSI login]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get iSCSI LUNs]
[ INFO ] ok: [localhost]
The following luns have been found on the requested target:
[1] mpathm 1024.0GiB IFT DS 3000 Series
status: free, paths: 8 active
Please select the destination LUN (1) [1]:
[ INFO ] iSCSI discard after delete is disabled
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage interface to be up]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster version]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce cluster major version]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce cluster minor version]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set storage_format]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add iSCSI storage domain]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get storage domain details]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the appliance OVF]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get ovf data]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get disk size from ovf data]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Storage domain cannot be reached. Please ensure it is accessible from the host(s).]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Stora
[ovirt-users] Re: Problems with RHEL 9.4 hosts
Unfortunately I'm not exactly sure what the problem was, but I was able to get the fully-updated EL9.4 host back in the cluster after manually deleting all of the iSCSI nodes. Some of the iscsiadm commands (the same ones vdsm was logging) worked fine when run manually:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m iface
bond1 tcp,,,bond1,
default tcp
iser iser

[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=new
New iSCSI node [tcp:[hw=,ip=,net_if=bond1,iscsi_if=bond1] 192.168.56.54,3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012] added

[root@lnxvirt06 ~]# iscsiadm -m node
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.112
———
But others didn't, where the only difference is the portal:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=new
iscsiadm: Error while adding record: invalid parameter
———
Likewise, I could delete some nodes using iscsiadm but not others:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=delete
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=delete
iscsiadm: Could not execute operation on all records: invalid parameter
[root@lnxvirt06 ~]# iscsiadm -m node -p 192.168.56.50 -o delete
iscsiadm: Could not execute operation on all records: invalid parameter
———
At this point I wiped out /var/lib/iscsi/, rebooted, and everything just worked.
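Just in case it helps anyone searching later, the cleanup boiled down to roughly the following. This is a sketch from memory (stopping the services first may not be strictly necessary), and it is destructive: it throws away every persisted iSCSI record on the host, which oVirt then re-creates when the host reconnects to storage.

---
# Wipe the persisted open-iscsi state and start fresh:
systemctl stop iscsid iscsi
rm -rf /var/lib/iscsi/*
reboot
---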
Thanks so much for your time and help!

Sincerely,
Devin

> On Jun 7, 2024, at 10:26 AM, Jean-Louis Dupond wrote:
>
> 2024-06-07 09:59:16,720-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,751-0400 INFO (jsonrpc/0) [storage.iscsi] Adding iscsi node for target 192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012 iface bond1 (iscsi:192)
> 2024-06-07 09:59:16,751-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
> 2024-06-07 09:59:16,785-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,825-0400 ERROR (jsonrpc/0) [storage.storageServer] Could not configure connection to 192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012 and iface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: Error while adding record: invalid parameter\n') (storageServer:580)
>
> Can you try to run those commands manually on the host?
> And see what it gives :)
>
> On 7/06/2024 16:13, Devin A. Bougie wrote:
>> Thank you! I added a warning at the line you indicated, which produces the following output:
>>
>> ———
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,452-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,493-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm
[ovirt-users] Re: Problems with RHEL 9.4 hosts
Awesome, thanks again. Yes, the host is fixed by just downgrading the iscsi-initiator-utils and iscsi-initiator-utils-iscsiuio packages (sketch below) from:

6.2.1.9-1.gita65a472.el9.x86_64

to:

6.2.1.4-3.git2a8f9d8.el9.x86_64

Any additional pointers on where to look or how to debug the iscsiadm calls would be greatly appreciated.

Many thanks!
Devin
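For anyone hitting the same thing, the downgrade was along these lines. A sketch only, and it assumes the older builds are still available in your configured repos:

---
dnf downgrade \
    iscsi-initiator-utils-6.2.1.4-3.git2a8f9d8.el9 \
    iscsi-initiator-utils-iscsiuio-6.2.1.4-3.git2a8f9d8.el9
---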
> On Jun 6, 2024, at 2:04 PM, Jean-Louis Dupond wrote:
>
> 2024-06-06 13:28:10,478-0400 ERROR (jsonrpc/5) [storage.storageServer] Could not configure connection to 192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112 and iface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: Error while adding record: invalid parameter\n') (storageServer:580)
>
> Seems like some issue with iscsiadm calls.
> Might want to debug which calls it does or what version change there is for iscsiadm.
>
> "Devin A. Bougie" wrote on 6 June 2024 at 19:32:29 CEST:
>
>> Thanks so much! Yes, that patch fixed the "out of sync network" issue. However, we're still unable to join a fully updated 9.4 host to the cluster - now with "Failed to connect Host to Storage Servers". Downgrading all of the updated packages fixes the issue.
>>
>> Please see the attached vdsm.log and supervdsm.log from the host after updating it to EL 9.4 and then trying to activate it. Any more suggestions would be greatly appreciated.
>>
>> Thanks again,
>> Devin
>>
>>> On Jun 5, 2024, at 2:35 AM, Jean-Louis Dupond wrote:
>>>
>>> You most likely need the following patch:
>>> https://github.com/oVirt/vdsm/commit/49eaf70c5a14eb00e85eac5f91ac36f010a9a327
>>>
>>> Test with that, guess it's fixed then :)
>>>
>>> On 4/06/2024 22:33, Devin A. Bougie wrote:
>>>> [snip]
[ovirt-users] Problems with RHEL 9.4 hosts
Are there any known incompatibilities with RHEL 9.4 (and derivatives)?

We are running a 7-node oVirt 4.5.5-1.el8 self-hosted-engine cluster, with all of the hosts running AlmaLinux 9. After upgrading from 9.3 to 9.4, every node started flapping between "Up" and "NonOperational", with VMs in turn migrating between hosts.

I believe the underlying issue (or at least the point I got stuck at) was that two of our logical networks were stuck "out of sync" on all hosts. I was unable to synchronize networks or set up the networks using the UI. A reinstall of a host succeeded, but then the host immediately reverted to the same state with the same networks out of sync.

I eventually found that if I downgraded the host from 9.4 to 9.3, it immediately became stable and came back online.

Are there any known incompatibilities with RHEL 9.4 (and derivatives)? If not, I'm happy to upgrade a single node to test. Please just let me know what log files and details would be most helpful in debugging what goes wrong.

(And yes, I know we need to upgrade the hosted engine VM itself now that CentOS Stream 8 is EOL.)

Many thanks,
Devin
[ovirt-users] Re: Upgrade from oVirt 4.5.4 to oVirt 4.5.5 - nothing provides selinux-policy >= 38.1.27-1.el9
Hi Zuhaimi,

I was able to update the selinux-policy packages using the link Sandro sent (https://kojihub.stream.centos.org/koji/buildinfo?buildID=2). In our case, we created a new local yum repository for our oVirt cluster that contains the required packages specific to oVirt, but you should also be able to just download and update the packages manually, as sketched below the quoted message.

I hope this helps!
Devin

> On Jan 30, 2024, at 1:02 PM, zuhaimi.musa--- via Users wrote:
>
> Hi Devin,
>
> I'm facing the same issue. How did you manage to update the selinux-policy to 38.1.27-1.el9? Can you share the procedure you took?
>
> Thanks.
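The manual route looks roughly like this. It's a sketch: the exact RPM file names depend on what the koji build linked above actually ships, so treat these names as placeholders.

---
# Download the selinux-policy RPMs from the koji build, then:
dnf update ./selinux-policy-38.1.27-1.el9.noarch.rpm \
    ./selinux-policy-targeted-38.1.27-1.el9.noarch.rpm
---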
[ovirt-users] Upgrading EL9 host from 4.5.4 to 4.5.5
Hi, All. When upgrading an EL9 host from 4.5.4 to 4.5.5, I've found I need to exclude the following packages to avoid the errors shown below (the workaround command is sketched at the end of this message):

*openvswitch*,*ovn*,centos-release-nfv-common

Is that to be expected, or am I missing a required repo or other upgrade step? I just wanted to clarify, as the docs seem a little outdated, at least with respect to the comments about nmstate:
https://ovirt.org/download/install_on_rhel.html

Thanks,
Devin

--
[root@lnxvirt01 ~]# rpm -qa | grep -i openvswitch
openvswitch-selinux-extra-policy-1.0-31.el9s.noarch
ovirt-openvswitch-ovn-2.17-1.el9.noarch
openvswitch2.17-2.17.0-103.el9s.x86_64
python3-openvswitch2.17-2.17.0-103.el9s.x86_64
openvswitch2.17-ipsec-2.17.0-103.el9s.x86_64
ovirt-openvswitch-ovn-host-2.17-1.el9.noarch
ovirt-openvswitch-ipsec-2.17-1.el9.noarch
ovirt-python-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-2.17-1.el9.noarch
ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
centos-release-nfv-openvswitch-1-5.el9.noarch

[root@lnxvirt01 ~]# dnf update
173 files removed
CLASSE oVirt Packages - x86_64                   988 kB/s | 9.6 kB  00:00
CLASSE Packages - x86_64                          45 MB/s | 642 kB  00:00
CentOS-9-stream - Ceph Pacific                   561 kB/s | 557 kB  00:00
CentOS-9-stream - Gluster 10                     245 kB/s |  56 kB  00:00
CentOS-9 - RabbitMQ 38                           392 kB/s | 104 kB  00:00
CentOS Stream 9 - NFV OpenvSwitch                709 kB/s | 154 kB  00:00
CentOS-9 - OpenStack yoga                         11 MB/s | 3.0 MB  00:00
CentOS Stream 9 - OpsTools - collectd            175 kB/s |  51 kB  00:00
CentOS Stream 9 - Extras packages                 57 kB/s |  15 kB  00:00
CentOS Stream 9 - oVirt 4.5                      2.7 MB/s | 1.0 MB  00:00
oVirt upstream for CentOS Stream 9 - oVirt 4.5   932  B/s | 7.5 kB  00:08
AlmaLinux 9 - AppStream                           84 MB/s | 8.1 MB  00:00
AlmaLinux 9 - BaseOS                              75 MB/s | 3.5 MB  00:00
AlmaLinux 9 - BaseOS - Debug                      12 MB/s | 2.2 MB  00:00
AlmaLinux 9 - CRB                                 67 MB/s | 2.3 MB  00:00
AlmaLinux 9 - Extras                             1.5 MB/s |  17 kB  00:00
AlmaLinux 9 - HighAvailability
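For reference, the workaround command I'm running is essentially the following sketch (dnf accepts repeated --exclude flags; the globs match the package names above):

---
dnf update \
    --exclude='*openvswitch*' \
    --exclude='*ovn*' \
    --exclude=centos-release-nfv-common
---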
[ovirt-users] Upgrade from oVirt 4.5.4 to oVirt 4.5.5 - nothing provides selinux-policy >= 38.1.27-1.el9
Hi, All. We're having trouble updating our 4.5.4 cluster to 4.5.5. We're running a self-hosted engine on fully updated AlmaLinux 9 hosts, and get the following errors when trying to upgrade to 4.5.5. Any suggestions would be greatly appreciated.

Many thanks,
Devin

--
[root@lnxvirt01 ~]# dnf clean all
157 files removed
[root@lnxvirt01 ~]# dnf update
CLASSE Packages - x86_64                          36 MB/s | 569 kB  00:00
CentOS-9-stream - Ceph Pacific                   839 kB/s | 557 kB  00:00
CentOS-9-stream - Gluster 10                     240 kB/s |  56 kB  00:00
CentOS-9 - RabbitMQ 38                           354 kB/s | 104 kB  00:00
CentOS Stream 9 - NFV OpenvSwitch                923 kB/s | 154 kB  00:00
CentOS-9 - OpenStack yoga                        5.7 MB/s | 3.0 MB  00:00
CentOS Stream 9 - OpsTools - collectd            228 kB/s |  51 kB  00:00
CentOS Stream 9 - oVirt 4.5                      6.2 MB/s | 1.0 MB  00:00
oVirt upstream for CentOS Stream 9 - oVirt 4.5   1.0 kB/s | 7.5 kB  00:07
AlmaLinux 9 - AppStream                           87 MB/s | 7.7 MB  00:00
AlmaLinux 9 - BaseOS                              72 MB/s | 2.4 MB  00:00
AlmaLinux 9 - BaseOS - Debug                     9.9 MB/s | 1.9 MB  00:00
AlmaLinux 9 - CRB                                 67 MB/s | 2.3 MB  00:00
AlmaLinux 9 - Extras                             1.5 MB/s |  17 kB  00:00
AlmaLinux 9 - HighAvailability                    29 MB/s | 434 kB  00:00
AlmaLinux 9 - NFV                                 56 MB/s | 1.0 MB  00:00
AlmaLinux 9 - Plus                               2.5 MB/s |  22 kB  00:00
AlmaLinux 9 - ResilientStorage                    30 MB/s | 446 kB  00:00
AlmaLinux 9 - RT                                  53 MB/s | 1.0 MB  00:00
AlmaLinux 9 - SAP                                874 kB/s | 9.7 kB  00:00
AlmaLinux 9 - SAPHANA                            1.3 MB/s |  13 kB  00:00
Error:
 Problem 1: cannot install the best update candidate for package ovirt-vmconsole-1.0.9-1.el9.noarch
  - nothing provides selinux-policy >= 38.1.27-1.el9 needed by ovirt-vmconsole-1.0.9-3.el9.noarch from centos-ovirt45
  - nothing provides selinux-policy-base >= 38.1.27-1.el9 needed by ovirt-vmconsole-1.0.9-3.el9.noarch from centos-ovirt45
[ovirt-users] Re: Unable to start HostedEngine
I never resolved this, but was eventually able to restore our HostedEngine to a new NFS storage domain.

Thanks,
Devin

> On Nov 1, 2023, at 12:31 PM, Devin A. Bougie wrote:
>
> After a failed attempt at migrating our HostedEngine to a new iSCSI storage domain, we're unable to restart the original HostedEngine.
>
> [snip]
[ovirt-users] Re: Hosted-engine restore failing when migrating to new storage domain
This saga is finally over, at least for now. I never succeeded in restoring our hosted engine to a new iSCSI storage domain. For one reason or another, however, I was able to restore it to a new NFS storage domain. Any advice on the benefits or downsides of using an NFS storage domain versus iSCSI for the self-hosted engine would be greatly appreciated. At least for now, however, things seem to be stable.

Just in case it helps anyone else, here's a rough outline of the procedure I finally used (condensed into a copy/paste sketch below):

- set the cluster into global maintenance mode
- backup the engine using "engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log"
- shutdown the engine
- restore the engine using "hosted-engine --deploy --4 --restore-from-file=backup.bck"
- when the restore pauses waiting for the lock file to be removed, scp backup.bck to the new engine VM
- ssh into the new VM, and restore the DBs using:
  - engine-backup --mode=restore --file=backup.bck --provision-all-databases --scope=db
  - engine-backup --mode=restore --file=backup.bck --provision-all-databases --scope=dwhdb
  - engine-backup --mode=restore --file=backup.bck --provision-all-databases --scope=grafanadb
- delete the lock file, and proceed as usual

Without manual intervention, the Postgres DB on the new engine VM was never initialized or set up.

Thanks again for everyone's attention and advice.

Sincerely,
Devin
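Condensed, the manual engine-VM steps were along these lines. The engine address is whatever temporary IP/FQDN the paused deploy reports, shown here as a placeholder:

---
# From the host running the deploy, while it is paused:
scp backup.bck root@<new-engine-vm>:/root/
ssh root@<new-engine-vm> '
  engine-backup --mode=restore --file=/root/backup.bck --provision-all-databases --scope=db
  engine-backup --mode=restore --file=/root/backup.bck --provision-all-databases --scope=dwhdb
  engine-backup --mode=restore --file=/root/backup.bck --provision-all-databases --scope=grafanadb
'
---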
> On Nov 6, 2023, at 12:00 PM, Devin A. Bougie wrote:
>
> [snip]
[ovirt-users] Re: Hosted-engine restore failing when migrating to new storage domain
Does anyone know how to regenerate /etc/ovirt-hosted-engine/hosted-engine.conf? Or where exactly I'd find each field to create the file manually?

After trying to switch from an iSCSI storage domain to NFS for the new hosted engine, I finally have the engine back up and running. However, when trying to reinstall a host to move it to the new hosted engine domain, I get "Failed to fetch hosted engine configuration file."

/etc/ovirt-hosted-engine/hosted-engine.conf doesn't exist on the host I ran "hosted-engine --deploy" on. And on all other hosts, it still references the old hosted engine VM and storage domain.

Thanks,
Devin

> On Oct 25, 2023, at 3:55 PM, Devin A. Bougie wrote:
>
> [snip]
[ovirt-users] Unable to start HostedEngine
After a failed attempt at migrating our HostedEngine to a new iSCSI storage domain, we're unable to restart the original HostedEngine.

Please see below for some details, and let me know what more information I can provide. "lnxvirt07" was the host used to attempt the migration. Any help would be greatly appreciated.

Many thanks,
Devin

--
[root@lnxvirt01 ~]# tail -n 5 /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2023-11-01 12:29:53,514::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Global maintenance detected
MainThread::INFO::2023-11-01 12:29:54,151::ovf_store::117::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:05ef954f-d06d-401c-85ec-5992e2afbe7d, volUUID:d2860f1d-19cf-4084-8a7e-d97880c32431
MainThread::INFO::2023-11-01 12:29:54,530::ovf_store::117::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:a375a35b-7a87-4df4-8d29-a5ba371fee85, volUUID:ef8b3dae-bcae-4d58-bea8-cf1a34872267
MainThread::ERROR::2023-11-01 12:29:54,813::config_ovf::65::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store) Failed extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf
MainThread::INFO::2023-11-01 12:29:54,843::hosted_engine::531::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state GlobalMaintenance (score: 3400)

[root@lnxvirt01 ~]# hosted-engine --vm-start
Command VM.getStats with args {'vmID': 'e6370d8f-c083-4f28-83d0-a232d693e07a'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 'e6370d8f-c083-4f28-83d0-a232d693e07a'})
Command VM.create with args {'vmID': 'e6370d8f-c083-4f28-83d0-a232d693e07a', 'vmParams': {'vmId': 'e6370d8f-c083-4f28-83d0-a232d693e07a', 'memSize': '16384', 'display': 'vnc', 'vmName': 'HostedEngine', 'smp': '4', 'maxVCpus': '40', 'cpuType': 'Haswell-noTSX', 'emulatedMachine': 'pc', 'devices': [{'index': '2', 'iface': 'ide', 'address': {'controller': '0', 'target': '0', 'unit': '0', 'bus': '1', 'type': 'drive'}, 'specParams': {}, 'readonly': 'true', 'deviceId': 'b3e2f40a-e28d-493c-af50-c1193fb9dc97', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': '0', 'iface': 'virtio', 'format': 'raw', 'poolID': '----', 'volumeID': '6afa3b19-7a1a-4e5c-a681-eed756d316e9', 'imageID': '94628710-cf73-4589-bd84-e58f741a4d5f', 'specParams': {}, 'readonly': 'false', 'domainID': '555ad71c-1a4e-42b3-af8c-db39d9b9df67', 'optional': 'false', 'deviceId': '6afa3b19-7a1a-4e5c-a681-eed756d316e9', 'address': {'bus': '0x00', 'slot': '0x06', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive', 'propagateErrors': 'off', 'type': 'disk', 'bootOrder': '1'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:3b:3f:14', 'linkActive': 'true', 'network': 'ovirtmgmt', 'specParams': {}, 'deviceId': '002afd06-9649-4ac5-a5e8-1a4945c3c136', 'address': {'bus': '0x00', 'slot': '0x03', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'console', 'type': 'console'}, {'device': 'vga', 'alias': 'video0', 'type': 'video'}, {'device': 'vnc', 'type': 'graphics'}, {'device': 'virtio', 'specParams': {'source': 'urandom'}, 'model': 'virtio', 'type': 'rng'}]}} failed:
(code=100, message=General Exception: ("'xml'",))
VM failed to launch

[root@lnxvirt01 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=lnxvirt-engine.classe.cornell.edu
vm_disk_id=94628710-cf73-4589-bd84-e58f741a4d5f
vm_disk_vol_id=6afa3b19-7a1a-4e5c-a681-eed756d316e9
vmid=e6370d8f-c083-4f28-83d0-a232d693e07a
storage=192.168.56.50,192.168.56.51,192.168.56.52,192.168.56.53
nfs_version=
mnt_options=
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=8
console=vnc
domainType=iscsi
spUUID=----
sdUUID=555ad71c-1a4e-42b3-af8c-db39d9b9df67
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
vdsm_use_ssl=true
gateway=192.168.55.1
bridge=ovirtmgmt
network_test=dns
tcp_t_address=
tcp_t_port=
metadata_volume_UUID=2bf987a2-ab81-454c-9fc7-dc7ec8945fd9
metadata_image_UUID=35429b63-16ca-417a-b87a-d232463bf6a3
lockspace_volume_UUID=b0d09780-2047-433c-812d-10ba0beff788
lockspace_image_UUID=8ccb878d-9938-43c8-908b-e1b416fe991c
conf_volume_UUID=0b40ac60-499e-4ff1-83d0-fc578f1af3dc
conf_image_UUID=551d4fe5-a9f7-4ba1-9951-87418362b434
# The following are used only for iSCSI storage
iqn=iqn.2002-10.com.infortrend:raid.uid58207.001
portal=1
user=
password=
port=3260,3260,3260,3260

[root@lnxvirt01 ~]# hosted-engine --vm-status

!! Cluster is in GLOBAL MAINTENANCE mode !!

--== Host lnxvirt06.classe.cornell.edu (id: 1) status ==--

Host ID        : 1
Host timestamp : 3718817
Score          : 3400
Engine sta
[ovirt-users] Re: Hosted-engine restore failing when migrating to new storage domain
Thanks again, Gianluca.

I'm currently ssh'd into the new local engine VM, and Postgres is running. However, an engine DB doesn't exist? Should it at this point, and do you have any other suggestions of where I should look?

Devin

--
[root@lnxvirt-engine ~]# su - postgres
Last login: Wed Oct 25 15:47:18 EDT 2023 on pts/0
[postgres@lnxvirt-engine ~]$ psql
psql (12.12)
Type "help" for help.

postgres=# \l
                               List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

postgres=#
--

> On Oct 25, 2023, at 12:40 PM, Gianluca Cecchi wrote:
>
> On Wed, Oct 25, 2023 at 5:50 PM Devin A. Bougie wrote:
>> [snip]
>
> The key thing here is that for some reason it seems it is not able to connect to the database, and so when the "engine-config" command tries to get the second field of the output (the " | cut -d' ' -f2" part) it gets the "to" string here:
>
> Connection to the Database failed
>
> and anyway it returns an error, with failure of the overall playbook.
> It should be investigated whether there is a problem with the database itself on the new engine, or if for some reason the "engine-config" command is not able to implicitly connect to the database.
[ovirt-users] Re: Hosted-engine restore failing when migrating to new storage domain
I've had a chance to try this restore again, and this time logged in to the local (new) hosted-engine VM to verify that /root/DisableFenceAtStartupInSec.txt just contains:

to

And if I try "engine-config -g DisableFenceAtStartupInSec" from the new hosted-engine VM, my connection closes.

[root@lnxvirt-engine ~]# cat /root/DisableFenceAtStartupInSec.txt
to
[root@lnxvirt-engine ~]# set -euo pipefail && engine-config -g DisableFenceAtStartupInSec
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Connection to the Database failed. Please check that the hostname and port number are correct and that the Database service is up and running.
Connection to 192.168.222.25 closed.

Any new suggestions or more tests I can run would be greatly appreciated.

Thanks,
Devin

> On Oct 15, 2023, at 9:10 AM, Devin A. Bougie wrote:
>
> [snip]
[ovirt-users] Re: Hosted-engine restore failing when migrating to new storage domain
Hi Gianluca,

Thanks for taking another look. I'm not sure what to make of /var/log/ovirt-hosted-engine-setup/engine-logs-2023-10-14T14:30:39Z/log/ovirt-engine/setup/restore-backup-20231014150412.log, but here it is. Does that explain anything to you, or give an idea of where to look next?

Thanks again!
Devin

--
2023-10-14 11:04:12 7680: Start of engine-backup mode restore scope all file /root/engine_backup
2023-10-14 11:04:12 7680: OUTPUT: Start of engine-backup with mode 'restore'
2023-10-14 11:04:12 7680: OUTPUT: scope: all
2023-10-14 11:04:12 7680: OUTPUT: archive file: /root/engine_backup
2023-10-14 11:04:12 7680: OUTPUT: log file: /var/log/ovirt-engine/setup/restore-backup-20231014150412.log
2023-10-14 11:04:12 7680: OUTPUT: Preparing to restore:
2023-10-14 11:04:12 7680: OUTPUT: - Unpacking file '/root/engine_backup'
2023-10-14 11:04:12 7680: Opening tarball /root/engine_backup to /tmp/engine-backup.Onm6LsDR0g
2023-10-14 11:04:13 7680: Verifying hash
2023-10-14 11:04:13 7680: Verifying version
2023-10-14 11:04:13 7680: Reading config
2023-10-14 11:04:13 7680: Scope after checking backup content:
2023-10-14 11:04:13 7680: SCOPE_FILES:1
2023-10-14 11:04:13 7680: SCOPE_ENGINE_DB:1
2023-10-14 11:04:13 7680: SCOPE_DWH_DB:1
2023-10-14 11:04:13 7680: SCOPE_CINDERLIB_DB:
2023-10-14 11:04:13 7680: SCOPE_KEYCLOAK_DB:
2023-10-14 11:04:13 7680: SCOPE_GRAFANA_DB:1
2023-10-14 11:04:13 7680: OUTPUT: Restoring:
2023-10-14 11:04:13 7680: OUTPUT: - Files
2023-10-14 11:04:13 7680: Restoring files
tar: var/lib/grafana/plugins/performancecopilot-pcp-app: Cannot open: File exists
tar: Exiting with failure status due to previous errors
2023-10-14 11:04:13 7680: FATAL: Failed restoring /etc/ovirt-engine /etc/ovirt-engine-dwh /etc/ovirt-provider-ovn/conf.d /etc/ovirt-provider-ovn/logger.conf /etc/ovirt-vmconsole /etc/pki/ovirt-engine /etc/pki/ovirt-vmconsole /etc/ovirt-engine-setup.conf.d /etc/httpd/conf.d/internalsso-openidc.conf /etc/httpd/conf.d/ovirt-engine-grafana-proxy.conf /etc/httpd/conf.d/ovirt-engine-root-redirect.conf /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/z-ovirt-engine-proxy.conf /etc/httpd/conf.d/z-ovirt-engine-keycloak-proxy.conf /etc/httpd/http.keytab /etc/httpd/conf.d/ovirt-sso.conf /etc/yum/pluginconf.d/versionlock.list /etc/dnf/plugins/versionlock.list /etc/firewalld/services/ovirt-https.xml /etc/firewalld/services/ovirt-http.xml /etc/firewalld/services/ovirt-postgres.xml /etc/firewalld/services/ovirt-provider-ovn.xml /etc/firewalld/services/ovn-central-firewall-service.xml /var/lib/openvswitch /etc/grafana /var/lib/ovirt-engine/content /var/lib/ovirt-engine/setup /var/lib/grafana/plugins
--

> On Oct 15, 2023, at 5:59 AM, Gianluca Cecchi wrote:
>
> On Sat, Oct 14, 2023 at 7:05 PM Devin A. Bougie wrote:
>> [snip]
>> Any additional questions or suggestions would be greatly appreciated.
>>
>> Thanks again,
>> Devin
>
> There is another FATAL line regarding restore itself, before the message I pointed out in my previous message.
> Can you analyze and/or share the contents of /var/log/ovirt-engine/setup/restore-backup-20231014150412.log?
>
> Gianluca
[ovirt-users] Re: Hosted-engine restore failing when migrating to new storage domain
Thank you so much, Gianluca! Yes, the source and target environments are the same version.

I'm not able to find /root/DisableFenceAtStartupInSec.txt anywhere, but maybe that's because at this point I've reverted to the original hosted_engine? Here is the output of the commands you sent:

--
[root@lnxvirt-engine ~]# engine-config -g DisableFenceAtStartupInSec
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
DisableFenceAtStartupInSec: 300 version: general

[root@lnxvirt-engine ~]# set -euo pipefail && engine-config -g DisableFenceAtStartupInSec | cut -d' ' -f 2
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
300

engine=# select * from vdc_options where option_name='DisableFenceAtStartupInSec';
 option_id |        option_name         | option_value | version | default_value
-----------+----------------------------+--------------+---------+---------------
        45 | DisableFenceAtStartupInSec | 300          | general | 300
(1 row)
--

I also tried the following from the host I tried running the restore on.

--
[root@lnxvirt07 ~]# set -euo pipefail && echo "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
DisableFenceAtStartupInSec: 300 version: general" | cut -d' ' -f 2
up
300
--

Any additional questions or suggestions would be greatly appreciated.

Thanks again,
Devin

On Oct 14, 2023, at 12:30 PM, Gianluca Cecchi wrote:

On Sat, Oct 14, 2023 at 5:53 PM Devin A. Bougie <devin.bou...@cornell.edu> wrote:

Hello,

We have a functioning oVirt 4.5.4 cluster running on fully-updated EL9.2 hosts. We are trying to migrate the self-hosted engine to a new iSCSI storage domain using the existing hosts, following the documented procedure:

- set the cluster into global maintenance mode
- backup the engine using "engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log"
- shutdown the engine
- restore the engine using "hosted-engine --deploy --4 --restore-from-file=backup.bck"

This almost works, but fails with the attached log file. Any help or suggestions would be greatly appreciated, including alternate procedures for migrating a self-hosted engine from one domain to another.

Many thanks,
Devin

If I'm right, the starting error seems to be this one:

2023-10-14 11:06:16,529-0400 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost -> 192.168.1.25]: FAILED! => {"changed": true, "cmd": "set -euo pipefail && engine-config -g DisableFenceAtStartupInSec | cut -d' ' -f2 > /root/DisableFenceAtStartupInSec.txt", "delta": "0:00:01.495195", "end": "2023-10-14 11:06:16.184479", "msg": "non-zero return code", "rc": 1, "start": "2023-10-14 11:06:14.689284", "stderr": "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false", "stderr_lines": ["Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false"], "stdout": "", "stdout_lines": []}

As the return code is 1 ("rc": 1) and determines the failure of the playbook, possibly the old environment doesn't have the DisableFenceAtStartupInSec engine config property correctly set and/or the "cut" command fails... Or some other problem with that config parameter.

Can you verify what it put into /root/DisableFenceAtStartupInSec.txt?
I have only a 4.4.10 env at hand and on it:
--
[root@ovengine01 ~]# engine-config -g DisableFenceAtStartupInSec
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
DisableFenceAtStartupInSec: 300 version: general
[root@ovengine01 ~]#
[root@ovengine01 ~]# set -euo pipefail && engine-config -g DisableFenceAtStartupInSec | cut -d' ' -f 2
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
300
[root@ovengine01 ~]#
--
what is the output of this command on your old env: engine-config -g DisableFenceAtStartupInSec ? Are the source and target environments the same version? If you have access to your old env could you also run this query on the engine database: select * from vdc_options where option_name='DisableFenceAtStartupInSec'; e.g. this way
--
[root@ovengine01 ~]# su - postgres
[postgres@ovengine01 ~]$ psql engine
psql (12.9)
Type "help" for help.
engine=# select * from vdc_options where option_name='DisableFenceAtStartupInSec';
 option_id |        option_name         | option_value | version | default_value
-----------+----------------------------+--------------+---------+---------------
        40 | DisableFenceAtStartupInSec | 300          | general |           300
(1 row)
engine=#
--
Gianluca ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7FPSGAMPCTNYWQIA3MWFJM5QOBYM3VSC/
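[Editor's note] A note on the shell semantics in that failing task, since they are easy to misread (a sketch of the likely mechanics, not taken from the logs): with pipefail, the pipeline's return code reflects any stage that exits nonzero, and the JAVA_TOOL_OPTIONS banner goes to stderr, so it never reaches cut at all.
--
set -euo pipefail
# If engine-config itself exits nonzero (e.g. it cannot reach the engine, or
# the property lookup fails), the whole pipeline returns that code even
# though cut succeeds; "stdout": "" in the log suggests engine-config
# printed nothing to stdout on the failing run.
engine-config -g DisableFenceAtStartupInSec | cut -d' ' -f2 > /root/DisableFenceAtStartupInSec.txt
--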
[ovirt-users] Re: Hosted-engine restore failing
I was able to work around this by editing /usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins/2_ovirt_logger.py and changing: from collections import Callable to: from collections.abc import Callable Thanks again for taking a look, Devin > On Oct 11, 2023, at 12:17 PM, Devin A. Bougie > wrote: > > Hi Jorge, > > Please see below for a full package listing. We are running on fully updated > RHEL9.2 hosts, and are simply trying to migrate to a new hosted engine > storage domain (not trying to replace or upgrade any of the underlying hosts). > > If there's a way to accomplish this without the full "engine-backup" from the > hosted engine followed by "hosted-engine --deploy --restore..." from a host, > that would work too. > > Many thanks for taking a look, > Devin > > -- > [root@lnxvirt01 ~]# rpm -qa | grep -e python -e ovirt > python3-pip-wheel-21.2.3-6.el9.noarch > python3-dbus-1.2.18-2.el9.x86_64 > python3-six-1.15.0-9.el9.noarch > python3-dasbus-1.4-5.el9.noarch > python3-idna-2.10-7.el9.noarch > python3-argcomplete-1.12.0-5.el9.noarch > libpeas-loader-python3-1.30.0-4.el9.x86_64 > python3-distro-1.5.0-7.el9.noarch > python3-dateutil-2.8.1-6.el9.noarch > python3-libcomps-0.1.18-1.el9.x86_64 > python3-chardet-4.0.0-5.el9.noarch > python3-ptyprocess-0.6.0-12.el9.noarch > python3-pexpect-4.8.0-7.el9.noarch > python3-pysocks-1.7.1-12.el9.noarch > python3-urllib3-1.26.5-3.el9.noarch > python3-pyyaml-5.4.1-6.el9.x86_64 > python3-systemd-234-18.el9.x86_64 > libcap-ng-python3-0.8.2-7.el9.x86_64 > python3-cups-2.0.1-10.el9.x86_64 > python3-enchant-3.2.0-5.el9.noarch > python3-louis-3.16.1-4.el9.noarch > python3-ply-3.11-14.el9.noarch > python3-pycparser-2.20-6.el9.noarch > python3-cffi-1.14.5-5.el9.x86_64 > python3-pyxdg-0.27-3.el9.noarch > python3-pyatspi-2.38.1-3.el9.noarch > python3-gpg-1.15.1-6.el9.x86_64 > python3-libreport-2.15.2-6.el9.alma.x86_64 > python-srpm-macros-3.9-52.el9.noarch > python3-speechd-0.10.2-4.el9.x86_64 > python3-brlapi-0.8.2-4.el9.x86_64 > ibus-anthy-python-1.5.13-1.el9.noarch > python3-audit-3.0.7-103.el9.x86_64 > python-rpm-macros-3.9-52.el9.noarch > python3-rpm-macros-3.9-52.el9.noarch > python3-pip-21.2.3-6.el9.noarch > python3-pyparsing-2.4.7-9.el9.noarch > python3-packaging-20.9-5.el9.noarch > python3-rpm-generators-12-8.el9.noarch > python3-lxml-4.6.5-3.el9.x86_64 > python3-gobject-base-3.40.1-6.el9.x86_64 > python3-gobject-base-noarch-3.40.1-6.el9.noarch > python3-cairo-1.20.1-1.el9.x86_64 > python3-gobject-3.40.1-6.el9.x86_64 > python3-pyudev-0.22.0-6.el9.noarch > python3-cryptography-36.0.1-2.el9.x86_64 > python3-sanlock-3.8.4-4.el9.x86_64 > python3-docutils-0.16-6.el9.noarch > python3-decorator-4.4.2-6.el9.noarch > python3-pyasn1-0.4.8-6.el9.noarch > python3-pwquality-1.4.4-8.el9.x86_64 > python3-ovirt-setup-lib-1.3.3-1.el9.noarch > python3-augeas-0.5.0-25.el9.noarch > ovirt-vmconsole-1.0.9-1.el9.noarch > python3-otopi-1.10.3-1.el9.noarch > python3-ioprocess-1.4.2-1.202111071752.git53786ff.el9.x86_64 > python3-lockfile-0.12.2-2.el9s.noarch > python3-daemon-2.3.0-1.el9s.noarch > python3-ovirt-engine-lib-4.5.4-1.el9.noarch > python3-pynacl-1.4.0-2.el9s.x86_64 > python3-sortedcontainers-2.3.0-2.el9s.noarch > vdsm-python-4.50.3.4-1.el9.noarch > python3-bcrypt-3.1.7-7.el9s.x86_64 > python3-paramiko-2.7.2-4.el9s.noarch > ovirt-engine-setup-base-4.5.4-1.el9.noarch > python3-pbr-5.6.0-1.el9s.noarch > python3-pytz-2021.1-4.el9.noarch > python3-greenlet-1.1.2-3.el9.x86_64 > python3-qrcode-core-6.1-12.el9.noarch > python3-pyusb-1.0.2-13.el9.noarch > 
python3-pyasn1-modules-0.4.8-6.el9.noarch > python3-netifaces-0.10.6-15.el9.x86_64 > python3-gssapi-1.6.9-5.el9.x86_64 > python3-msgpack-1.0.3-2.el9s.x86_64 > python3-extras-1.0.0-15.el9s.noarch > python3-fixtures-3.0.0-27.el9s.noarch > python3-testtools-2.5.0-2.el9s.noarch > ovirt-hosted-engine-ha-2.5.0-1.el9.noarch > python3-yubico-1.3.3-7.el9.noarch > python3-babel-2.9.1-2.el9.noarch > python3-inotify-0.9.6-25.el9.noarch > python3-ethtool-0.15-2.el9.x86_64 > python3-resolvelib-0.5.4-5.el9.noarch > python3-prettytable-0.7.2-27.el9.noarch > python3-jwcrypto-0.8-4.el9.noarch > ovirt-vmconsole-host-1.0.9-1.el9.noarch > python3-yappi-1.3.1-2.el9s.x86_64 > python3-wrapt-1.13.3-2.el9s.x86_64 > python3-debtcollector-2.5.0-1.el9s.noarch > python3-oslo-context-4.1.0-1.el9s.noarch > python3-tenacity-6.3.1-1.el9s.noarch > python3-tempita-0.5.2-2.el9s.noarch > python3-stevedore-3.5.2-1.el9s.noarch > python3-rfc3986-1.5.0-1.el9s.noarch > python3-repoze-lru-0.7-10.el9s.noarch > python3-routes-2.5.1-1.el9s.noarch > python3-jmespath-0.10.0-1.el9s.noarch > py
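[Editor's note] For context on the fix above: Python 3.10 removed the old collections aliases for the abstract base classes (deprecated since Python 3.3), which is why the plugin fails under /usr/lib64/python3.11 while importing from collections.abc works on every supported Python 3. The manual edit as a one-liner (path taken from this thread; the sed pattern assumes the stock file matches exactly):
--
sed -i 's/^from collections import Callable/from collections.abc import Callable/' \
    /usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins/2_ovirt_logger.py
--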
[ovirt-users] Re: Hosted-engine restore failing
h ovirt-host-dependencies-4.5.0-3.el9.x86_64 ovirt-hosted-engine-setup-2.7.0-1.el9.noarch ovirt-host-4.5.0-3.el9.x86_64 python3-nftables-1.0.4-10.el9_1.x86_64 python3-setuptools-wheel-53.0.0-12.el9.noarch python3-libselinux-3.5-1.el9.x86_64 python3-dns-2.2.1-2.el9.noarch python3-ldap-3.4.3-2.el9.x86_64 python3-tdb-1.4.7-1.el9.x86_64 python3-ldb-2.6.1-1.el9.x86_64 python3-libsemanage-3.5-1.el9.x86_64 python3-setools-4.4.1-1.el9.x86_64 python3-firewall-1.2.1-1.el9.noarch python3-linux-procfs-0.7.1-1.el9.noarch python3-talloc-2.3.4-1.el9.x86_64 python3-tevent-0.13.0-1.el9.x86_64 nbdkit-python-plugin-1.32.5-4.el9.x86_64 python3-policycoreutils-3.5-1.el9.noarch policycoreutils-python-utils-3.5-1.el9.noarch openwsman-python3-2.6.8-23.el9.x86_64 python3-libdnf-0.69.0-3.el9_2.alma.x86_64 python3-hawkey-0.69.0-3.el9_2.alma.x86_64 python3-libstoragemgmt-1.9.5-1.el9.x86_64 python3-libvirt-9.0.0-1.el9.x86_64 python3-rpm-4.16.1.3-22.el9.x86_64 python3-dnf-4.14.0-5.el9_2.alma.noarch python3.11-setuptools-wheel-65.5.1-2.el9.noarch python3.11-pip-wheel-22.3.1-2.el9.noarch python3.11-ply-3.11-1.el9.noarch python3.11-pycparser-2.20-1.el9.noarch python3.11-cffi-1.15.1-1.el9.x86_64 python3.11-cryptography-37.0.2-5.el9.x86_64 python3.11-pyyaml-6.0-1.el9.x86_64 python3.11-six-1.16.0-1.el9.noarch python3-dmidecode-3.12.3-2.el9.x86_64 python3-sssdconfig-2.8.2-3.el9_2.alma.noarch python3-requests-2.25.1-7.el9_2.noarch python3-libipa_hbac-2.8.2-3.el9_2.alma.x86_64 python3-sss-2.8.2-3.el9_2.alma.x86_64 python3-sss-murmur-2.8.2-3.el9_2.alma.x86_64 python3-file-magic-5.39-12.1.el9_2.noarch python3-samba-4.17.5-103.el9_2.alma.x86_64 python3-libxml2-2.9.13-3.el9_2.1.x86_64 centos-release-ovirt45-9.2-1.el9.noarch python3-pyOpenSSL-21.0.0-1.el9.noarch python3-ipalib-4.10.1-9.el9_2.alma.1.noarch python3-ipaclient-4.10.1-9.el9_2.alma.1.noarch python3-perf-5.14.0-284.30.1.el9_2.x86_64 python3-rados-16.2.13-1.el9s.x86_64 ovirt-openvswitch-ovn-2.17-1.el9.noarch ovirt-imageio-common-2.5.0-1.el9.x86_64 python3-ceph-argparse-16.2.13-1.el9s.x86_64 python3-openvswitch2.17-2.17.0-103.el9s.x86_64 python3-cephfs-16.2.13-1.el9s.x86_64 python3-rbd-16.2.13-1.el9s.x86_64 python3-rgw-16.2.13-1.el9s.x86_64 python3-pycurl-7.45.2-2.1.el9.x86_64 python3-ovirt-engine-sdk4-4.6.2-1.el9.x86_64 python3-eventlet-0.33.3-1.el9s.noarch python3-setuptools-57.4.0-1.el9s.noarch python3-ceph-common-16.2.13-1.el9s.x86_64 python3.11-pycurl-7.45.2-2.1.el9.x86_64 python3.11-ovirt-engine-sdk4-4.6.2-1.el9.x86_64 python3.11-passlib-1.7.4-3.2.el9.noarch python3.11-ovirt-imageio-common-2.5.0-1.el9.x86_64 python3.11-ovirt-imageio-client-2.5.0-1.el9.x86_64 python3.11-jmespath-0.9.0-11.5.el9.noarch ovirt-ansible-collection-3.1.3-1.el9.noarch python3-netaddr-0.8.0-12.2.el9.noarch python3-os-brick-5.2.3-1.el9s.noarch ovirt-imageio-client-2.5.0-1.el9.x86_64 ovirt-openvswitch-ovn-host-2.17-1.el9.noarch ovirt-openvswitch-ipsec-2.17-1.el9.noarch ovirt-python-openvswitch-2.17-1.el9.noarch ovirt-openvswitch-2.17-1.el9.noarch ovirt-imageio-daemon-2.5.0-1.el9.x86_64 ovirt-openvswitch-ovn-common-2.17-1.el9.noarch python3-passlib-1.7.4-3.2.el9.noarch python3-libnmstate-2.2.15-2.el9_2.x86_64 python3.11-libs-3.11.2-2.el9_2.2.x86_64 python3.11-3.11.2-2.el9_2.2.x86_64 python3.11-tkinter-3.11.2-2.el9_2.2.x86_64 python3-tkinter-3.9.16-1.el9_2.2.x86_64 python3-libs-3.9.16-1.el9_2.2.x86_64 python3-3.9.16-1.el9_2.2.x86_64 python-unversioned-command-3.9.16-1.el9_2.2.noarch python3-dnf-plugins-core-4.3.0-5.el9_2.alma.1.noarch python3-devel-3.9.16-1.el9_2.2.x86_64 -- > On Oct 11, 2023, at 
11:58 AM, Jorge Visentini > wrote: > > Hi. > > It looks like you have different versions of Python or another package. > > rpm -qa | grep python > rpm -qa | grep ovirt > > On both hosts (old and new) > > Em qua., 11 de out. de 2023 às 12:08, Devin A. Bougie > escreveu: > Hi, All. We are attempting to migrate to a new storage domain for our oVirt > 4.5.4 self-hosted engine setup, and are failing with "cannot import name > 'Callable' from 'collections'" > > Please see below for the errors on the console. > > Many thanks, > Devin > > -- > hosted-engine --deploy --restore-from-file=backup.bck --4 > ... > [ INFO ] Checking available network interfaces: > [ ERROR ] b'[WARNING]: Skipping plugin (/usr/share/ovirt-hosted-engine-\n' > [ ERROR ] b'setup/he_ansible/callback_plugins/2_ovirt_logger.py), cannot > load: cannot\n' > [ ERROR ] b"import name 'Callable' from 'collections'\n" > [ ERROR ] b'(/usr/lib64/python3.11/collections/__init__.py)\n' > [ ERROR ] b"ERROR! Unexpected Exception, this is probably a bug: cannot > import name 'Callable' from 'collections' > (/usr/lib64/python3.11/collections/__init__.py)\n" > [ ERROR ] Failed to execute stage 'Environment customization
[ovirt-users] Hosted-engine restore failing
Hi, All. We are attempting to migrate to a new storage domain for our oVirt 4.5.4 self-hosted engine setup, and are failing with "cannot import name 'Callable' from 'collections'" Please see below for the errors on the console. Many thanks, Devin -- hosted-engine --deploy --restore-from-file=backup.bck --4 ... [ INFO ] Checking available network interfaces: [ ERROR ] b'[WARNING]: Skipping plugin (/usr/share/ovirt-hosted-engine-\n' [ ERROR ] b'setup/he_ansible/callback_plugins/2_ovirt_logger.py), cannot load: cannot\n' [ ERROR ] b"import name 'Callable' from 'collections'\n" [ ERROR ] b'(/usr/lib64/python3.11/collections/__init__.py)\n' [ ERROR ] b"ERROR! Unexpected Exception, this is probably a bug: cannot import name 'Callable' from 'collections' (/usr/lib64/python3.11/collections/__init__.py)\n" [ ERROR ] Failed to execute stage 'Environment customization': Failed executing ansible-playbook [ INFO ] Stage: Clean up [ INFO ] Cleaning temporary resources [ ERROR ] b'[WARNING]: Skipping plugin (/usr/share/ovirt-hosted-engine-\n' [ ERROR ] b'setup/he_ansible/callback_plugins/2_ovirt_logger.py), cannot load: cannot\n' [ ERROR ] b"import name 'Callable' from 'collections'\n" [ ERROR ] b'(/usr/lib64/python3.11/collections/__init__.py)\n' [ ERROR ] b"ERROR! Unexpected Exception, this is probably a bug: cannot import name 'Callable' from 'collections' (/usr/lib64/python3.11/collections/__init__.py)\n" [ ERROR ] Failed to execute stage 'Clean up': Failed executing ansible-playbook [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-2023100358.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine deployment failed Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-2023100352-raupj9.log ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/A7DPYZ4DG5EXB2YTYDERHVN5A2ZVKUPR/
[ovirt-users] Re: Installing Server 2022 VM
This ended up being a problem with an old ISO from Microsoft. The ISO labeled Server2022-December2021 doesn't work, but if we download a new one it's called "Server2022-October2022" and does work. Devin > On Apr 20, 2023, at 10:19 AM, Devin A. Bougie > wrote: > > We've been struggling to install Windows Server 2022 on oVirt. We recently > upgraded to the latest oVirt 4.5 on EL9 hosts, but it didn't help. > > In the past, we could boot a VM from the install CD, add the mass storage > drivers from the virt-io CD, and proceed from there. However, oVirt 4.3 > didn't have a Server 2022 selection. > > After upgrading to oVirt 4.5, we couldn't get it to boot from the CD. It just > gave me Windows boot errors if I do q35 EFI. If I switch back to i440fx I get > the same behavior as in oVirt 4.3 - it'll boot the DVD but doesn't find a > disk with the virtio-scsi drivers. > > We'd greatly appreciate if anyone has successfully installed Server 2022 in > oVirt. > > Many thanks, > Devin > ___ > Users mailing list -- users@ovirt.org > To unsubscribe send an email to users-le...@ovirt.org > Privacy Statement: https://www.ovirt.org/privacy-policy.html > oVirt Code of Conduct: > https://www.ovirt.org/community/about/community-guidelines/ > List Archives: > https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSYQCABC4THLO4RZBO4URCQ6SBDW4UQ6/ ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJAM5IDTUSRWPCCO4E25TRIGUBG4BB6P/
[ovirt-users] Installing Server 2022 VM
We've been struggling to install Windows Server 2022 on oVirt. We recently upgraded to the latest oVirt 4.5 on EL9 hosts, but it didn't help. In the past, we could boot a VM from the install CD, add the mass storage drivers from the virt-io CD, and proceed from there. However, oVirt 4.3 didn't have a Server 2022 selection. After upgrading to oVirt 4.5, we couldn't get it to boot from the CD. It just gave me Windows boot errors if I do q35 EFI. If I switch back to i440fx I get the same behavior as in oVirt 4.3 - it'll boot the DVD but doesn't find a disk with the virtio-scsi drivers. We'd greatly appreciate if anyone has successfully installed Server 2022 in oVirt. Many thanks, Devin ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSYQCABC4THLO4RZBO4URCQ6SBDW4UQ6/
[ovirt-users] Re: unable to connect to the graphic server with VNC graphics protocol
Just to follow up on this, we were able to use noVNC after installing the CA as outlined here: https://access.redhat.com/solutions/6961618 On Windows, however, we found that we needed to import the certificate as an Admin using mmc.exe rather than using the browser's settings. Many thanks to Murilo for pointing us in the right direction. Sincerely, Devin > On Apr 10, 2023, at 11:25 AM, Devin A. Bougie > wrote: > > Hi Murilo, > > Yes, when trying with noVNC we get: > Something went wrong, connection is closed > > This is getting pretty urgent for us, so any help connecting to the console > of VMs running on EL9 hosts would be greatly appreciated. > > Thanks for following up, > Devin > >> On Apr 8, 2023, at 4:17 AM, Murilo Morais wrote: >> >> Devin, good morning! All good? I hope so. >> >> Have you tried accessing VNC through noVNC? >> >> On Fri, Apr 7, 2023 at 16:22, Devin A. Bougie >> wrote: >> Hello, >> >> After upgrading our cluster to oVirt 4.5 on EL9 hosts, we had to switch from >> QXL and SPICE to VGA and VNC as EL9 dropped support for SPICE. However, we >> are now unable to view the console using Windows, Linux, or MacOS. >> >> For example, after downloading and opening console.vv file with Remote >> Viewer (virt-viewer) on Linux, we see: >> Unable to connect to the graphic server. >> >> Any help connecting to a VM using a VGA/VNC console from Windows, Mac, or >> Linux would be greatly appreciated. >> >> Many thanks, >> Devin >> ___ >> Users mailing list -- users@ovirt.org >> To unsubscribe send an email to users-le...@ovirt.org >> Privacy Statement: https://www.ovirt.org/privacy-policy.html >> oVirt Code of Conduct: >> https://www.ovirt.org/community/about/community-guidelines/ >> List Archives: >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XWSCMSYHKEQVP64KBP7SQHAIJT3QKJ5O/ > ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/S3AQT6GTB4SPWOCCYFMV7ZOESBBI4QIW/
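[Editor's note] For reference, the CA that noVNC needs can be fetched straight from the engine via the standard oVirt PKI resource URL (the FQDN below is a placeholder for your engine):
--
# Download the engine CA certificate (substitute your engine FQDN):
curl -k -o ovirt-ca.pem \
  'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
# Import ovirt-ca.pem into the client trust store; per this thread, on
# Windows that meant the machine certificate store via mmc.exe rather
# than the browser's own settings.
--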
[ovirt-users] Re: unable to connect to the graphic server with VNC graphics protocol
Hi Murilo, Yes, when trying with noVNC we get: Something went wrong, connection is closed This is getting pretty urgent for us, so any help connecting to the console of VMs running on EL9 hosts would be greatly appreciated. Thanks for following up, Devin > On Apr 8, 2023, at 4:17 AM, Murilo Morais wrote: > > Devin, good morning! All good? I hope so. > > Have you tried accessing VNC through noVNC? > > Em sex., 7 de abr. de 2023 às 16:22, Devin A. Bougie > escreveu: > Hello, > > After upgrading our cluster to oVirt 4.5 on EL9 hosts, we had to switch from > QXL and SPICE to VGA and VNC as EL9 dropped support for SPICE. However, we > are now unable to view the console using Windows, Linux, or MacOS. > > For example, after downloading and opening console.vv file with Remote Viewer > (virt-viewer) on Linux, we see: > Unable to connect to the graphic server. > > Any help connecting to a VM using a VGA/VNC console from Windows, Mac, or > Linux would be greatly appreciated. > > Many thanks, > Devin > ___ > Users mailing list -- users@ovirt.org > To unsubscribe send an email to users-le...@ovirt.org > Privacy Statement: https://www.ovirt.org/privacy-policy.html > oVirt Code of Conduct: > https://www.ovirt.org/community/about/community-guidelines/ > List Archives: > https://lists.ovirt.org/archives/list/users@ovirt.org/message/XWSCMSYHKEQVP64KBP7SQHAIJT3QKJ5O/ ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/4XZQ7QWL7QLAUEONNQX4P7LJGPVVOPTQ/
[ovirt-users] unable to connect to the graphic server with VNC graphics protocol
Hello, After upgrading our cluster to oVirt 4.5 on EL9 hosts, we had to switch from QXL and SPICE to VGA and VNC as EL9 dropped support for SPICE. However, we are now unable to view the console using Windows, Linux, or MacOS. For example, after downloading and opening console.vv file with Remote Viewer (virt-viewer) on Linux, we see: Unable to connect to the graphic server. Any help connecting to a VM using a VGA/VNC console from Windows, Mac, or Linux would be greatly appreciated. Many thanks, Devin ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/XWSCMSYHKEQVP64KBP7SQHAIJT3QKJ5O/
[ovirt-users] setting iSCSI iface.net_ifacename (netIfaceName)
Where do I set the iSCSI iface to use when connecting to both the hosted_storage and VM Data Domain? I believe this is related to the difficulty I've had configuring iSCSI bonds within the oVirt engine as opposed to directly in the underlying OS. I've set "iscsi_default_ifaces = ovirtsan" in vdsm.conf, but vdsmd still insists on using the default iface and vdsm.log shows:
2017-04-03 11:17:21,109-0400 INFO (jsonrpc/5) [storage.ISCSI] iSCSI iface.net_ifacename not provided. Skipping. (iscsi:590)
Many thanks, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
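[Editor's note] For anyone reproducing this, an iface like the "ovirtsan" one above is normally created with standard iscsiadm commands along these lines (iface and NIC names here are placeholders; this sketch only sets up the open-iscsi side, it does not make vdsm use it):
--
iscsiadm -m iface -I ovirtsan -o new
iscsiadm -m iface -I ovirtsan -o update -n iface.net_ifacename -v ens1f1
iscsiadm -m iface -I ovirtsan        # verify iface.net_ifacename took effect
--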
Re: [ovirt-users] iSCSI Multipathing
Thanks for following up, Gianluca. At this point, my main question is why should I configure iSCSI Bonds within the oVirt engine instead of or in addition to configuring iSCSI initiators and multipathd directly in the host's OS. The multipath.conf created by VDSM works fine with our devices, as do the stock EL6/7 kernels and drivers. We've had great success using these devices for over a decade in various EL6/7 High-Availability server clusters, and when we configure everything manually they seem to work great with oVirt. We're just wondering exactly what the advantage is to taking the next step of configuring iSCSI Bonds within the oVirt engine. For what it's worth, these are Infortrend ESDS devices with redundant controllers and two 10GbE ports per controller. We connect each host and each controller to two separate switches, so we can simultaneously lose both a controller and a switch without impacting availability. Thanks again! Devin > On Apr 2, 2017, at 7:47 AM, Gianluca Cecchi wrote: > > > > Il 02 Apr 2017 05:20, "Devin A. Bougie" ha scritto: > We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI > hosted_storage and VM data domain (same target, different LUN's). Everything > works fine, and I can configure iscsid and multipathd outside of the oVirt > engine to ensure redundancy with our iSCSI device. However, if I try to > configure iSCSI Multipathing within the engine, all of the hosts get stuck in > the "Connecting" status and the Data Center and Storage Domains go down. The > hosted engine, however, continues to work just fine. > > Before I provide excerpts from our logs and more details on what we're > seeing, it would be helpful to understand better what the advantages are of > configuring iSCSI Bonds within the oVirt engine. Is this mainly a feature > for oVirt users that don't have experience configuring and managing iscsid > and multipathd directly? Or, is it important to actually setup iSCSI Bonds > within the engine instead of directly in the underlying OS? > > Any advice or links to documentation I've overlooked would be greatly > appreciated. > > Many thanks, > Devin > > What kind of iscsi storage stay are you using? ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] iSCSI Multipathing
We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI hosted_storage and VM data domain (same target, different LUN's). Everything works fine, and I can configure iscsid and multipathd outside of the oVirt engine to ensure redundancy with our iSCSI device. However, if I try to configure iSCSI Multipathing within the engine, all of the hosts get stuck in the "Connecting" status and the Data Center and Storage Domains go down. The hosted engine, however, continues to work just fine. Before I provide excerpts from our logs and more details on what we're seeing, it would be helpful to understand better what the advantages are of configuring iSCSI Bonds within the oVirt engine. Is this mainly a feature for oVirt users that don't have experience configuring and managing iscsid and multipathd directly? Or, is it important to actually setup iSCSI Bonds within the engine instead of directly in the underlying OS? Any advice or links to documentation I've overlooked would be greatly appreciated. Many thanks, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'
Just incase anyone else runs into this, you need to set "migration_ovs_hook_enabled=True" in vdsm.conf. It seems the vdsm.conf created by "hosted-engine --deploy" did not list all of the options, so I overlooked this one. Thanks for all the help, Devin On Mar 27, 2017, at 11:10 AM, Devin A. Bougie wrote: > Hi, All. We have a new oVirt 4.1.1 cluster up with the OVS switch type. > Everything seems to be working great, except for live migration. > > I believe the red flag in vdsm.log on the source is: > Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287) > > Which results from vdsm assigning an arbitrary bridge name to each ovs bridge. > > Please see below for more details on the bridges and excerpts from the logs. > Any help would be greatly appreciated. > > Many thanks, > Devin > > SOURCE OVS BRIDGES: > # ovs-vsctl show > 6d96d9a5-e30d-455b-90c7-9e9632574695 >Bridge "vdsmbr_QwORbsw2" >Port "vdsmbr_QwORbsw2" >Interface "vdsmbr_QwORbsw2" >type: internal >Port "vnet0" >Interface "vnet0" >Port classepublic >Interface classepublic >type: internal >Port "ens1f0" >Interface "ens1f0" >Bridge "vdsmbr_9P7ZYKWn" >Port ovirtmgmt >Interface ovirtmgmt >type: internal >Port "ens1f1" >Interface "ens1f1" >Port "vdsmbr_9P7ZYKWn" >Interface "vdsmbr_9P7ZYKWn" >type: internal >ovs_version: "2.7.0" > > DESTINATION OVS BRIDGES: > # ovs-vsctl show > f66d765d-712a-4c81-b18e-da1acc9cfdde >Bridge "vdsmbr_vdpp0dOd" >Port "vdsmbr_vdpp0dOd" >Interface "vdsmbr_vdpp0dOd" >type: internal >Port "ens1f0" >Interface "ens1f0" >Port classepublic >Interface classepublic >type: internal >Bridge "vdsmbr_3sEwEKd1" >Port "vdsmbr_3sEwEKd1" >Interface "vdsmbr_3sEwEKd1" >type: internal >Port "ens1f1" >Interface "ens1f1" >Port ovirtmgmt >Interface ovirtmgmt >type: internal >ovs_version: "2.7.0" > > > SOURCE VDSM LOG: > ... 
> 2017-03-27 10:57:02,567-0400 INFO (jsonrpc/1) [vdsm.api] START migrate > args=(, {u'incomingLimit': 2, u'src': > u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', > u'tunneled': u'false', u'enableGuestEvents': False, u'dst': > u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': > u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', > u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, > u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37) > 2017-03-27 10:57:02,570-0400 INFO (jsonrpc/1) [vdsm.api] FINISH migrate > return={'status': {'message': 'Migration in progress', 'code': 0}, > 'progress': 0} (api:43) > 2017-03-27 10:57:02,570-0400 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC > call VM.migrate succeeded in 0.01 seconds (__init__:515) > 2017-03-27 10:57:03,028-0400 INFO (migsrc/cf9c5dbf) [virt.vm] > (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM > took: 0 seconds (migration:455) > 2017-03-27 10:57:03,028-0400 INFO (migsrc/cf9c5dbf) [virt.vm] > (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to > qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri > tcp://192.168.55.81 (migration:480) > 2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] > (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on > 'vdsmbr_QwORbsw2': No such device (migration:287) > 2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] > (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate > (migration:429) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in > run >self._startUnderlyingMigration(time.time()) > File "/usr/lib/python2.7/site-package
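[Editor's note] To spell out the fix from the reply above, the vdsm.conf change amounts to roughly the following (the [vars] section name is an assumption based on vdsm's config layout; verify against the vdsm.conf shipped on your host):
--
# /etc/vdsm/vdsm.conf
[vars]
migration_ovs_hook_enabled = true
# then restart vdsmd so the hook is picked up:
#   systemctl restart vdsmd
--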
[ovirt-users] migration failures - libvirtError - listen attribute must match address attribute of first listen element
We have a new 4.1.1 cluster setup. Migration of VM's that have a console / graphics setup is failing. Migration of VM's that run headless succeeds. The red flag in vdsm.log on the source is:
libvirtError: unsupported configuration: graphics 'listen' attribute '192.168.55.82' must match 'address' attribute of first listen element (found '192.168.55.84')
This happens when the console is set to either VNC or SPICE. Please see below for larger excerpts from vdsm.log on the source and destination. Any help would be greatly appreciated. Many thanks, Devin
SOURCE:
--
2017-03-29 09:53:30,314-0400 INFO (jsonrpc/5) [vdsm.api] START migrate args=(, {u'incomingLimit': 2, u'src': u'192.168.55.82', u'dstqemu': u'192.168.55.84', u'autoConverge': u'false', u'tunneled': u'false', u'enableGuestEvents': False, u'dst': u'192.168.55.84:54321', u'vmId': u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37)
2017-03-29 09:53:30,315-0400 INFO (jsonrpc/5) [vdsm.api] FINISH migrate return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 0} (api:43)
2017-03-29 09:53:30,315-0400 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call VM.migrate succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,444-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:52494 (protocoldetector:72)
2017-03-29 09:53:30,450-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:52494 (protocoldetector:127)
2017-03-29 09:53:30,450-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:102)
2017-03-29 09:53:30,451-0400 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-29 09:53:30,628-0400 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,630-0400 INFO (jsonrpc/7) [dispatcher] Run and protect: repoStats(options=None) (logUtils:51)
2017-03-29 09:53:30,631-0400 INFO (jsonrpc/7) [dispatcher] Run and protect: repoStats, Return response: {u'016ceee8-9117-4e8a-b611-f58f6763a098': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000226545', 'lastCheck': '3.2', 'valid': True}, u'2438f819-e7f5-4bb1-ad0d-5349fa371e6e': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000232943', 'lastCheck': '3.1', 'valid': True}, u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000612878', 'lastCheck': '8.3', 'valid': True}} (logUtils:54)
2017-03-29 09:53:30,631-0400 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,701-0400 INFO (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM took: 0 seconds (migration:455)
2017-03-29 09:53:30,701-0400 INFO (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to qemu+tls://192.168.55.84/system with miguri tcp://192.168.55.84 (migration:480)
2017-03-29 09:53:31,120-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') unsupported configuration: graphics 'listen' attribute '192.168.55.82' must match 'address' attribute of first listen element (found '192.168.55.84') (migration:287)
2017-03-29 09:53:31,206-0400 ERROR
(migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in _startUnderlyingMigration
    self._perform_with_downtime_thread(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in _perform_with_downtime_thread
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: unsupported configuration: graphics 'listen' attribute '192.168.55.82' must match 'address' attribute of first list
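[Editor's note] What libvirt is objecting to is the migration XML, where the graphics element's listen attribute must agree with its first <listen> sub-element. Conceptually the rejected configuration looks like this (addresses taken from the log above; XML shape per libvirt's domain format, trimmed to the relevant bits):
--
<graphics type='vnc' listen='192.168.55.82'>
  <!-- libvirt requires this address to equal the listen attribute above;
       here the destination host's address was substituted instead -->
  <listen type='address' address='192.168.55.84'/>
</graphics>
--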
[ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'
Hi, All. We have a new oVirt 4.1.1 cluster up with the OVS switch type. Everything seems to be working great, except for live migration. I believe the red flag in vdsm.log on the source is: Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287) Which results from vdsm assigning an arbitrary bridge name to each ovs bridge. Please see below for more details on the bridges and excerpts from the logs. Any help would be greatly appreciated. Many thanks, Devin SOURCE OVS BRIDGES: # ovs-vsctl show 6d96d9a5-e30d-455b-90c7-9e9632574695 Bridge "vdsmbr_QwORbsw2" Port "vdsmbr_QwORbsw2" Interface "vdsmbr_QwORbsw2" type: internal Port "vnet0" Interface "vnet0" Port classepublic Interface classepublic type: internal Port "ens1f0" Interface "ens1f0" Bridge "vdsmbr_9P7ZYKWn" Port ovirtmgmt Interface ovirtmgmt type: internal Port "ens1f1" Interface "ens1f1" Port "vdsmbr_9P7ZYKWn" Interface "vdsmbr_9P7ZYKWn" type: internal ovs_version: "2.7.0" DESTINATION OVS BRIDGES: # ovs-vsctl show f66d765d-712a-4c81-b18e-da1acc9cfdde Bridge "vdsmbr_vdpp0dOd" Port "vdsmbr_vdpp0dOd" Interface "vdsmbr_vdpp0dOd" type: internal Port "ens1f0" Interface "ens1f0" Port classepublic Interface classepublic type: internal Bridge "vdsmbr_3sEwEKd1" Port "vdsmbr_3sEwEKd1" Interface "vdsmbr_3sEwEKd1" type: internal Port "ens1f1" Interface "ens1f1" Port ovirtmgmt Interface ovirtmgmt type: internal ovs_version: "2.7.0" SOURCE VDSM LOG: ... 2017-03-27 10:57:02,567-0400 INFO (jsonrpc/1) [vdsm.api] START migrate args=(, {u'incomingLimit': 2, u'src': u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', u'tunneled': u'false', u'enableGuestEvents': False, u'dst': u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37) 2017-03-27 10:57:02,570-0400 INFO (jsonrpc/1) [vdsm.api] FINISH migrate return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 0} (api:43) 2017-03-27 10:57:02,570-0400 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call VM.migrate succeeded in 0.01 seconds (__init__:515) 2017-03-27 10:57:03,028-0400 INFO (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM took: 0 seconds (migration:455) 2017-03-27 10:57:03,028-0400 INFO (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri tcp://192.168.55.81 (migration:480) 2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287) 2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate (migration:429) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in run self._startUnderlyingMigration(time.time()) File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in _startUnderlyingMigration self._perform_with_downtime_thread(duri, muri) File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in _perform_with_downtime_thread self._perform_migration(duri, muri) File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in _perform_migration self._vm._dom.migrateToURI3(duri, params, flags) File 
"/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f ret = attr(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper ret = f(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper return func(inst, *args, **kwargs) File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3 if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self) libvirtError: Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device 2017-03-27 10:57:03,435-0400 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:33716 (protocoldetector:72) 2017-03-27 10:57:03,452-0400 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:33716 (protocoldetector:127) 2017-03-27 10:57:03,452-0400 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT re
Re: [ovirt-users] NFS ISO domain from hosted-engine VM
On Mar 23, 2017, at 10:51 AM, Yedidyah Bar David wrote: > On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie > wrote: >> Hi, All. Are there any recommendations or best practices WRT whether or not >> to host an NFS ISO domain from the hosted-engine VM (running the oVirt >> Engine Appliance)? We have a hosted-engine 4.1.1 cluster up and running, >> and now just have to decide where to serve the NFS ISO domain from. > > NFS ISO domain on the engine machine is generally deprecated, and > specifically problematic for hosted-engine, see also: Thanks, Didi! I'll go ahead and setup the NFS ISO domain in a separate cluster. Sincerely, Devin > https://bugzilla.redhat.com/show_bug.cgi?id=1332813 > I recently pushed a patch to remove the question about it altogether: > https://gerrit.ovirt.org/74409 > > Best, > -- > Didi ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] NFS ISO domain from hosted-engine VM
Hi, All. Are there any recommendations or best practices WRT whether or not to host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine Appliance)? We have a hosted-engine 4.1.1 cluster up and running, and now just have to decide where to serve the NFS ISO domain from. Many thanks, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] OVS switch type for hosted-engine
Just to close this thread, we were able to manually convert our hosted-engine 4.1.1 cluster from the Legacy to OVS switch types. Very roughly:
- Install the first host and VM using "hosted-engine --deploy"
- In the Engine UI, change the Cluster switch type from Legacy to OVS
- Shutdown the engine VM and stop vdsmd on the host.
- Manually change the switch type to ovs in /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt (see the sketch after this message)
- Restart the host
After that, everything seems to be working and new hosts are correctly set up with the OVS switch type. Thanks, Devin > On Mar 16, 2017, at 4:06 PM, Devin A. Bougie wrote: > > Is it possible to setup a hosted engine using the OVS switch type instead of > Legacy? If it's not possible to start out as OVS, instructions for switching > from Legacy to OVS after the fact would be greatly appreciated. > > Many thanks, > Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
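[Editor's note] The persisted network definition is a small JSON file with a "switch" key, so the manual edit in the steps above amounts to something like the following (a sketch; it assumes the key is serialized exactly as shown, so check the file first):
--
grep '"switch"' /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
sed -i 's/"switch": "legacy"/"switch": "ovs"/' \
    /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
--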
Re: [ovirt-users] hosted-engine with iscsi storage domain
Hi Simone, On Mar 21, 2017, at 4:06 PM, Simone Tiraboschi wrote: > Did you already add your first storage domain for regular VMs? > If also that one is on iSCSI, it should be connected trough a different iSCSI > portal. Sure enough, once we added the data storage the hosted-storage imported and attached successfully. Both our hosted-storage and our VM data storage are from the same iSCSI target(s), but separate LUNs. Many thanks! Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] hosted-engine with iscsi storage domain
On Mar 20, 2017, at 12:54 PM, Simone Tiraboschi wrote: > The engine should import it by itself once you add your first storage domain > for regular VMs. > No manual import actions are required. It didn't seem to for us. I don't see it in the Storage tab (maybe I shouldn't?). I can install a new host from the engine web UI, but I don't see any hosted-engine options. If I put the new host in maintenance and reinstall, I can select DEPLOY under "Choose hosted engine deployment action." However, the web UI then complains that: Cannot edit Host. You are using an unmanaged hosted engine VM. Please upgrade the cluster level to 3.6 and wait for the hosted engine storage domain to be properly imported. This is on a new 4.1 cluster with the hosted-engine created using hosted-engine --deploy on the first host. > No, a separate network for the storage is even recommended. Glad to hear, thanks! Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] hosted-engine with iscsi storage domain
We have a hosted-engine running on 4.1 with an iSCSI hosted_storage domain, and are able to import the domain. However, we cannot attache the domain to the data center. Just to make sure I'm not missing something basic, does the engine VM need to be able to connect to the iSCSI target itself? In other words, does the iSCSI traffic need to go over the ovirtmgmt bridged network? Currently we have the iSCSI SAN on a separate subnet from ovirtmgmt, so the hosted-engine VM can't directly see the iSCSI storage. Thanks, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] OVS switch type for hosted-engine
Is it possible to setup a hosted engine using the OVS switch type instead of Legacy? If it's not possible to start out as OVS, instructions for switching from Legacy to OVS after the fact would be greatly appreciated. Many thanks, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] hosted engine without the appliance?
Hi, All. Is it still possible or supported to run a hosted engine without using the oVirt Engine Appliance? In other words, to install our own OS on a VM and have it act as a hosted engine? "hosted-engine --deploy" now seems to insist on using the oVirt Engine Appliance, but if it's possible and not a mistake we'd prefer to run and manage our own OS. Thanks! Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] iscsi data domain when engine is down
On Mar 11, 2017, at 10:59 AM, Chris Adams wrote: > Hosted engine runs fine on iSCSI since oVirt 3.5. It needs a separate > target from VM storage, but then that access is managed by the hosted > engine HA system. Thanks so much, Chris. It sounds like that is exactly what I was missing. It would be great to know how to add multiple paths to the hosted engine's iSCSI target, but hopefully I can figure that out once I have things up and running. Thanks again, Devin > > If all the engine hosts are shut down together, it will take a bit after > boot for the HA system to converge and try to bring the engine back > online (including logging in to the engine iSCSI LUN). You can force > this on one host by running "hosted-engine --vm-start". > > -- > Chris Adams ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] iscsi data domain when engine is down
On Mar 10, 2017, at 1:28 PM, Juan Pablo wrote: > Hi, what kind of setup you have? hosted engine just runs on nfs or gluster > afaik. Thanks for replying, Juan. I was under the impression that the hosted engine would run on an iSCSI data domain, based on http://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine-iscsi-support/ and the fact that "hosted-engine --deploy" does give you the option to choose iscsi storage (but only one path, as far as I can tell). I certainly could manage the iSCSI sessions outside of ovirt / vdsm, but wasn't sure if that would cause problems or if that was all that's needed to allow the hosted engine to boot automatically on an iSCSI data domain. Thanks again, Devin > 2017-03-10 15:22 GMT-03:00 Devin A. Bougie : > We have an ovirt 4.1 cluster with an iSCSI data domain. If I shut down the > entire cluster and just boot the hosts, none of the hosts login to their > iSCSI sessions until the engine comes up. Without logging into the sessions, > sanlock doesn't obtain any leases and obviously none of the VMs start. > > I'm sure there's something I'm missing, as it looks like it should be > possible to run a hosted engine on a cluster using an iSCSI data domain. > > Any pointers or suggestions would be greatly appreciated. > > Many thanks, > Devin > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] iscsi data domain when engine is down
We have an ovirt 4.1 cluster with an iSCSI data domain. If I shut down the entire cluster and just boot the hosts, none of the hosts login to their iSCSI sessions until the engine comes up. Without logging into the sessions, sanlock doesn't obtain any leases and obviously none of the VMs start. I'm sure there's something I'm missing, as it looks like it should be possible to run a hosted engine on a cluster using an iSCSI data domain. Any pointers or suggestions would be greatly appreciated. Many thanks, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
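[Editor's note] One plain open-iscsi workaround for the boot-ordering problem above is to make the recorded nodes log in at boot instead of waiting for the engine; the target IQN and portal below are placeholders, and how this interacts with vdsm's own session management is untested:
--
# Log in manually once the host is up:
iscsiadm -m node -T iqn.2017-03.example:data -p 192.168.55.10 --login
# Or mark the node to log in automatically at boot:
iscsiadm -m node -T iqn.2017-03.example:data -p 192.168.55.10 \
         -o update -n node.startup -v automatic
--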
[ovirt-users] migrate to hosted engine
Hi, All. We have an ovirt 4.1 cluster setup using multiple paths to a single iSCSI LUN for the data storage domain. I would now like to migrate to a hosted engine. I setup the new engine VM, shutdown and backed-up the old VM, and restored to the new VM using engine-backup. After updating DNS to change our engine's FQDN to point to the hosted engine, everything seems to work properly. However, when rebooting the entire cluster, the engine VM doesn't come up automatically. Is there anything that now needs to be done to tell the cluster that it's now using a hosted engine? I started with a standard engine setup, as I didn't see a way to specify multiple paths to a single iSCSI LUN when using "hosted-engine --deploy." Any tips would be greatly appreciated. Many thanks, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
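[Editor's note] For completeness, the backup/restore pair described above looks like the following (flags as documented for engine-backup; restoring onto a freshly installed engine VM may additionally need the database provisioning options, which are omitted here):
--
# On the old engine VM:
engine-backup --mode=backup --scope=all --file=engine.bck --log=backup.log
# On the new engine VM, before running engine-setup:
engine-backup --mode=restore --scope=all --file=engine.bck --log=restore.log
--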
Re: [ovirt-users] vdsm without sanlock
On Nov 7, 2015, at 2:10 AM, Nir Soffer wrote: >> Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.). > > There is no such dependency. > Sanlock is using either an lv on block device (iscsi, fop) Thanks, Nir! I was thinking sanlock required a disk_lease_dir, which all the documentation says to put on NFS or GFS2. However, as you say I now see that ovirt can use sanlock with block devices without requiring a disk_lease_dir. Thanks again, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] vdsm without sanlock
Hi Nir, On Nov 6, 2015, at 5:02 AM, Nir Soffer wrote: > On Thu, Nov 5, 2015 at 11:33 PM, Devin A. Bougie > wrote: >> Hi, All. Is it possible to run vdsm without sanlock? We'd prefer to run >> libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock >> overhead, but it looks like vdsmd / ovirt requires sanlock. > > True, we require sanlock. > What is "sanlock overhead"? Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.). I have no problem setting up the filesystem or configuring sanlock to use it, but then the vm's fail if the shared filesystem blocks or fails. We'd like to have our vm images use block devices and avoid any dependency on a remote or shared file system. My understanding is that virtlockd can lock a block device directly, while sanlock requires something like gfs2 or nfs. Perhaps it's my misunderstanding or misreading, but it seemed like things were moving in the direction of virtlockd. For example: http://lists.ovirt.org/pipermail/devel/2015-March/010127.html Thanks for following up! Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] vdsm without sanlock
Hi, All. Is it possible to run vdsm without sanlock? We'd prefer to run libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock overhead, but it looks like vdsmd / ovirt requires sanlock. Thanks, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
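[Editor's note] For anyone searching later: the virtlockd setup asked about here is stock libvirt configuration, shown below, but per this thread vdsm requires sanlock and will not use it (a sketch of the libvirt side only):
--
# /etc/libvirt/qemu.conf
lock_manager = "lockd"
# make sure the lock daemon is running, then restart libvirtd:
#   systemctl start virtlockd
#   systemctl restart libvirtd
--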
Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume
On Oct 26, 2015, at 2:27 PM, Maor Lipchuk wrote: > If you try to import (or add) the Storage Domain again, what do you get? If I try to add it again, it first says "The following LUNs are already in use: ...," but if I "Approve operation" I get the same "Cannot zero out volume" error. If I try to import, I can log into the target but it doesn't show any "Storage Name / Storage ID (VG Name)" to import. Thanks again, Devin > > Regards, > Maor > > > > - Original Message - >> From: "Devin A. Bougie" >> To: "Maor Lipchuk" >> Cc: Users@ovirt.org >> Sent: Monday, October 26, 2015 7:47:31 PM >> Subject: Re: [ovirt-users] Error while executing action New SAN Storage >> Domain: Cannot zero out volume >> >> Hi Maor, >> >> On Oct 26, 2015, at 1:50 AM, Maor Lipchuk wrote: >>> Looks like zeroing out the metadata volume with a dd operation was working. >>> Can u try to remove the Storage Domain and add it back again now >> >> The Storage Domain disappears from the GUI and isn't seen by ovirt-shell, so >> I'm not sure how to delete it. Is there a more low-level command I should >> run? >> >> Nothing changed after upgrading to 3.5.5, so I've now created a bug report. >> >> https://bugzilla.redhat.com/show_bug.cgi?id=1275381 >> >> Thanks again, >> Devin >> >> ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume
Hi Maor, On Oct 26, 2015, at 1:50 AM, Maor Lipchuk wrote: > Looks like zeroing out the metadata volume with a dd operation was working. > Can u try to remove the Storage Domain and add it back again now The Storage Domain disappears from the GUI and isn't seen by ovirt-shell, so I'm not sure how to delete it. Is there a more low-level command I should run? Nothing changed after upgrading to 3.5.5, so I've now created a bug report. https://bugzilla.redhat.com/show_bug.cgi?id=1275381 Thanks again, Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume
Hi Maor,
On Oct 25, 2015, at 12:03 PM, Maor Lipchuk wrote:
> few questions:
> Which RHEL version is installed on your Host?
7.1
> Can you please share the output of "ls -l
> /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/"
[root@lnx84 ~]# ls -l /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/
total 0
lrwxrwxrwx 1 root root 8 Oct 23 16:05 ids -> ../dm-23
lrwxrwxrwx 1 root root 8 Oct 23 16:05 inbox -> ../dm-24
lrwxrwxrwx 1 root root 8 Oct 23 16:05 leases -> ../dm-22
lrwxrwxrwx 1 root root 8 Oct 23 16:05 metadata -> ../dm-20
lrwxrwxrwx 1 root root 8 Oct 23 16:05 outbox -> ../dm-21
> What happens when you run this command from your Host:
> /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero
> of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0
> skip=0 conv=notrunc count=40 oflag=direct
[root@lnx84 ~]# /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0 skip=0 conv=notrunc count=40 oflag=direct
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.435552 s, 96.3 MB/s
> Also, please consider to open a bug at
> https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt, with all the logs
> and output so it can be resolved ASAP.
I'll open a bug report in the morning unless you have any other suggestions. Many thanks for following up! Devin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume
Hi Maor, On Oct 25, 2015, at 6:36 AM, Maor Lipchuk wrote: > Does your host is working with enabled selinux? No, selinux is disabled. Sorry, I should have mentioned that initially. Any other suggestions would be greatly appreciated. Many thanks! Devin > - Original Message - >> >> Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error >> while executing action New SAN Storage Domain: Cannot zero out volume" >> error. >> >> iscsid does login to the node, and the volumes appear to have been created. >> However, I cannot use it to create or import a Data / iSCSI storage domain. >> >> [root@lnx84 ~]# iscsiadm -m node >> #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 >> >> [root@lnx84 ~]# iscsiadm -m session >> tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash) >> >> [root@lnx84 ~]# pvscan >> PV /dev/mapper/1IET_00010001 VG f73c8720-77c3-42a6-8a29-9677db54bac6 >> lvm2 [547.62 GiB / 543.75 GiB free] >> ... >> [root@lnx84 ~]# lvscan >> inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata' >> [512.00 MiB] inherit >> inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox' >> [128.00 MiB] inherit >> inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases' [2.00 >> GiB] inherit >> inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00 >> MiB] inherit >> inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox' [128.00 >> MiB] inherit >> inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master' [1.00 >> GiB] inherit >> ... >> >> Any help would be greatly appreciated. >> >> Many thanks, >> Devin >> >> Here are the relevant lines from engine.log: >> -- >> 2015-10-23 16:04:56,925 INFO >> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] >> (ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84, >> HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id: >> 44a64578 >> 2015-10-23 16:04:57,681 INFO >> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] >> (ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs >> [id=1IET_00010001, physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn, >> volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu, >> serial=SIET_VIRTUAL-DISK, lunMapping=1, vendorId=IET, >> productId=VIRTUAL-DISK, _lunConnections=[{ id: null, connection: #.#.#.#, >> iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: null, mountOptions: >> null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };], >> deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI, >> status=Used, diskId=null, diskAlias=null, storageDomainId=null, >> storageDomainName=null]], log id: 44a64578 >> 2015-10-23 16:05:06,474 INFO >> [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] >> (ajp--127.0.0.1-8702-8) [53dd8c98] Running command: >> AddSANStorageDomainCommand internal: false. 
Entities affected : ID: >> aaa0----123456789aaa Type: SystemAction group >> CREATE_STORAGE_DOMAIN with role type ADMIN >> 2015-10-23 16:05:06,488 INFO >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] >> (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName = >> lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, >> storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19, >> deviceList=[1IET_00010001], force=true), log id: 12acc23b >> 2015-10-23 16:05:07,379 INFO >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] >> (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return: >> dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b >> 2015-10-23 16:05:07,384 INFO >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] >> (ajp--127.0.0.1-8702-8) [53dd8c98] START, >> CreateStorageDomainVDSCommand(HostName = lnx84, HostId = >> a650e161-75f6-4916-bc18-96044bf3fc26, >> storageDomain=StorageDomainStatic[lnx88, >> cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], >> args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P), log id: cc93ec6 >> 2015-10-23 16:05:10,356 ERROR >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] >> (ajp--127.0.0.1-8702-8) [53dd8c98] Failed in CreateStorageDomainVDS method >> 2015-10-23 16:05:10,360 INFO >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] >> (ajp--127.0.0.1-8702-8) [53dd8c98] Command >> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand >> return value >> StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=374, >> mMessage=Cannot zero out volume: >> ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)]] >> 2015-10-23 16:05:10,367 INFO >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] >> (ajp--127.0.0.1-8702-8) [53dd8c98] HostName = lnx84 >> 2015-10-23 16:05:10,370 ERROR >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] >> (ajp--127.0.0.1
[ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume
Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error while executing action New SAN Storage Domain: Cannot zero out volume" error.

iscsid does log in to the node, and the volumes appear to have been created. However, I cannot use it to create or import a Data / iSCSI storage domain.

[root@lnx84 ~]# iscsiadm -m node
#.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1

[root@lnx84 ~]# iscsiadm -m session
tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash)

[root@lnx84 ~]# pvscan
  PV /dev/mapper/1IET_00010001   VG f73c8720-77c3-42a6-8a29-9677db54bac6   lvm2 [547.62 GiB / 543.75 GiB free]
  ...
[root@lnx84 ~]# lvscan
  inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata' [512.00 MiB] inherit
  inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox' [128.00 MiB] inherit
  inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases' [2.00 GiB] inherit
  inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00 MiB] inherit
  inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox' [128.00 MiB] inherit
  inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master' [1.00 GiB] inherit
  ...

Any help would be greatly appreciated.

Many thanks,
Devin

Here are the relevant lines from engine.log:
--
2015-10-23 16:04:56,925 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id: 44a64578
2015-10-23 16:04:57,681 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs [id=1IET_00010001, physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn, volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu, serial=SIET_VIRTUAL-DISK, lunMapping=1, vendorId=IET, productId=VIRTUAL-DISK, _lunConnections=[{ id: null, connection: #.#.#.#, iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };], deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI, status=Used, diskId=null, diskAlias=null, storageDomainId=null, storageDomainName=null]], log id: 44a64578
2015-10-23 16:05:06,474 INFO [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Running command: AddSANStorageDomainCommand internal: false.
Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-10-23 16:05:06,488 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19, deviceList=[1IET_00010001], force=true), log id: 12acc23b
2015-10-23 16:05:07,379 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return: dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b
2015-10-23 16:05:07,384 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateStorageDomainVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomain=StorageDomainStatic[lnx88, cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P), log id: cc93ec6
2015-10-23 16:05:10,356 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Failed in CreateStorageDomainVDS method
2015-10-23 16:05:10,360 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return value StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=374, mMessage=Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)]]
2015-10-23 16:05:10,367 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] HostName = lnx84
2015-10-23 16:05:10,370 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command CreateStorageDomainVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomain=StorageDomainStatic[lnx88, cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',), code = 374
2015-10-23 16:05:10,381 INFO [org.ovirt.e
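A note on the listings above: pvscan shows the LUN already carrying an older volume group (f73c8720-...), and GetDeviceList reports the device as "Used", so stale LVM metadata or an unhealthy path is worth ruling out before retrying. The following is a sketch only; the device and VG names come from the outputs above, and the vgremove/pvremove step is destructive, so run it only if that old domain is truly abandoned.

# Confirm the iSCSI session is healthy and identify the backing sd device.
[root@lnx84 ~]# iscsiadm -m session -P 3

# Verify the LUN's multipath map has a live path and the device is writable (0 = writable).
[root@lnx84 ~]# multipath -ll
[root@lnx84 ~]# blockdev --getro /dev/mapper/1IET_00010001

# Destructive: only if the old domain is abandoned, clear the stale VG/PV and retry.
[root@lnx84 ~]# vgremove f73c8720-77c3-42a6-8a29-9677db54bac6
[root@lnx84 ~]# pvremove /dev/mapper/1IET_00010001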