There is a default timeout for the ansible-playbook runs that the engine executes, which is 30 minutes. You may refer to:

https://bugzilla.redhat.com/show_bug.cgi?id=1528960
https://github.com/oVirt/ovirt-engine/blob/master/packaging/services/ovirt-engine/ovirt-engine.conf.in
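If it helps, the whole change amounts to a single drop-in file. A minimal sketch — the path and variable name come from the conf.in linked above, while the 120-minute value is only an example, not a recommendation:

```ini
# /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
# Timeout is in minutes; 120 is an example value — pick something that
# comfortably exceeds the longest expected OVA import.
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=120
```

After creating the file, a restart of the ovirt-engine service is most likely required for the new value to take effect.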
"
# Specify the ansible-playbook command execution timeout in minutes. It's used
# for any task that executes the AnsibleExecutor class. To change the value
# permanently, create a conf file 99-ansible-playbook-timeout.conf in
# /etc/ovirt-engine/engine.conf.d/
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=30
"

Please try to create 99-ansible-playbook-timeout.conf in /etc/ovirt-engine/engine.conf.d/ and set ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=<Timeout>.

Regards,
Liran.

On Thu, Aug 22, 2019 at 4:06 PM <jason.l....@l3harris.com> wrote:
>
> I have a fairly large OVA (~200 GB) that was exported from oVirt 4.3.5. I'm
> trying to import it into a new cluster, also running oVirt 4.3.5. The import
> starts fine but fails again and again. Everything I can find online appears
> to be outdated, mentioning incorrect log file locations and saying that
> virt-v2v performs the import.
>
> On the engine, in /var/log/ovirt-engine/engine.log, I can see where it runs
> CreateImageVDSCommand, then a few messages about adding the disk, which end
> with USER_ADD_DISK_TO_VM_FINISHED_SUCCESS, then the ansible command:
>
> 2019-08-20 15:40:38,653-04
> Executing Ansible command: /usr/bin/ansible-playbook
> --ssh-common-args=-F /var/lib/ovirt-engine/.ssh/config -v
> --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
> --inventory=/tmp/ansible-inventory8416464991088315694
> --extra-vars=ovirt_import_ova_path="/mnt/vm_backups/myvm.ova"
> --extra-vars=ovirt_import_ova_disks="['/rhev/data-center/mnt/glusterSD/myhost.mydomain.com:_vmstore/59502c8b-fd1e-482b-bff7-39c699c196b3/images/886a3313-19a9-435d-aeac-64c2d507bb54/465ce2ba-8883-4378-bae7-e231047ea09d']"
> --extra-vars=ovirt_import_ova_image_mappings="{}"
> /usr/share/ovirt-engine/playbooks/ovirt-ova-import.yml
> [Logfile: /var/log/ovirt-engine/ova/ovirt-import-ova-ansible-20190820154038-myhost.mydomain.com-25f6ac6f-9bdc-4301-b896-d357712dbf01.log]
>
> Then nothing about the import until:
>
> 2019-08-20 16:11:08,859-04 INFO
> [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-88) [3321d4f6] Lock freed to
> object 'EngineLock:{exclusiveLocks='[myvm=VM_NAME,
> 464a25ba-8f0a-421d-a6ab-13eff67b4c96=VM]', sharedLocks=''}'
> 2019-08-20 16:11:08,894-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-88) [3321d4f6] EVENT_ID:
> IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm myvm to Data Center
> Default, Cluster Default
>
> I've found the import logs on the engine, in /var/log/ovirt-engine/ova, but
> the ovirt-import-ova-ansible*.logs for the imports of concern only contain:
>
> 2019-08-20 19:59:48,799 p=44701 u=ovirt | Using
> /usr/share/ovirt-engine/playbooks/ansible.cfg as config file
> 2019-08-20 19:59:49,271 p=44701 u=ovirt | PLAY [all]
> *********************************************************************
> 2019-08-20 19:59:49,280 p=44701 u=ovirt | TASK [ovirt-ova-extract : Run
> extraction script] *******************************
>
> Watching the host selected for the import, I can see the qemu-img convert
> process running, but then the engine frees the lock on the VM and reports the
> import as having failed. However, the qemu-img process continues to run on
> the host. I don't know where else to look to try and find out what's going on,
> and I cannot see anything that says why the import failed.
> Since the qemu-img process on the host is still running after the engine log
> shows the lock has been freed and the import failed, I'm guessing what's
> happening is on the engine side.
>
> Looking at the time between the start of the ansible command and when the
> lock is freed, it is consistently around 30 minutes.
>
> # first try
> 2019-08-20 15:40:38,653-04  ansible command start
> 2019-08-20 16:11:08,859-04  lock freed
> 30 minutes, 30 seconds
>
> # second try
> 2019-08-20 19:59:48,463-04  ansible command start
> 2019-08-20 20:30:21,697-04  lock freed
> 30 minutes, 33 seconds
>
> # third try
> 2019-08-21 09:16:42,706-04  ansible command start
> 2019-08-21 09:46:47,103-04  lock freed
> 30 minutes, 5 seconds
>
> With that in mind, I took a look at the available configuration keys from
> engine-config --list. After checking each one, the only key set to ~30
> minutes that looked like it could be the problem was
> SSHInactivityHardTimeoutSeconds (set to 1800 seconds). I set it to 3600 and
> tried the import again, but it still failed at ~30 minutes, so that's
> apparently not the correct key.
>
> Also, just FYI, I tried to import the OVA using virt-v2v, but that fails
> immediately:
>
> virt-v2v: error: expecting XML expression to return an integer (expression:
> rasd:Parent/text(), matching string: 00000000-0000-0000-0000-000000000000)
>
> If reporting bugs, run virt-v2v with debugging enabled and include the
> complete output:
>
>   virt-v2v -v -x [...]
>
> Does virt-v2v not support OVAs created by the oVirt 'export to ova' option?
>
> So my main question is: is there a timeout for VM imports through the engine
> web UI? And if so, is it configurable?
>
> Thanks in advance.
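As a sanity check, the gaps quoted above can be computed directly from the engine.log timestamps. A small Python sketch — the timestamp format is taken from the log excerpts, and the milliseconds/offset suffix is simply sliced off, since both stamps in each pair share the same zone:

```python
from datetime import datetime

def elapsed(start: str, end: str):
    """Delta between engine.log stamps like '2019-08-20 15:40:38,653-04'.

    Only the first 19 characters (date + time to the second) are parsed;
    the ,ms and UTC-offset suffix cancels out within one log file.
    """
    fmt = "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(end[:19], fmt) - datetime.strptime(start[:19], fmt)

# First attempt from the log excerpts above:
print(elapsed("2019-08-20 15:40:38,653-04", "2019-08-20 16:11:08,859-04"))  # 0:30:30
```

All three attempts come out at just over 30 minutes, which fits a 30-minute default timeout on the engine side.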
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7RSIQVAM3FQHCM6AS5MGAMZDBDQBWYIJ/

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TUOGJFDT7N2K2IRB7YIBY6GX6RDZHTA2/