[ovirt-users] Re: Weird problem starting VMs in oVirt-4.4

2020-06-08 Thread Stephen Panicho
I ended up making a BZ about this same issue a few weeks ago, but
misdiagnosed the root cause. Maybe we could add to that?

https://bugzilla.redhat.com/show_bug.cgi?id=1839598

On Mon, Jun 8, 2020, 11:54 AM Strahil Nikolov via Users wrote:

> Are you using ECC RAM?
>
> Best Regards,
> Strahil Nikolov
>
> On 8 June 2020 at 15:06:22 GMT+03:00, Joop wrote:
> >On 3-6-2020 14:58, Joop wrote:
> >> Hi All,
> >>
> >> Just had a rather new experience in that starting a VM worked, but the
> >> kernel entered the grub2 rescue console because something was
> >> wrong with its virtio-scsi disk.
> >> The message is: Booting from Hard Disk
> >> error: ../../grub-core/kern/dl.c:266: invalid arch-independent ELF magic.
> >> entering rescue mode...
> >>
> >> Doing a CTRL-ALT-Del through the Spice console let the VM boot
> >> correctly. Shutting it down and repeating the procedure, I get a disk
> >> problem every time. The weird thing is that if I activate the boot menu
> >> and then straight away start the VM, all is OK.
> >> I don't see any ERROR messages in either vdsm.log or engine.log.
> >>
> >> If I had to guess, it looks like the disk image isn't connected
> >> yet when the VM boots, but that's weird, isn't it?
> >>
> >>
> >As an update to this:
> >Just had the same problem with a Windows VM but more importantly also
> >with HostedEngine itself.
> >On the host I did:
> >hosted-engine --set-maintenance --mode=global
> >hosted-engine --vm-shutdown
> >
> >Stopped all oVirt-related services, cleared all oVirt-related logs from
> >/var/log/..., restarted the host, and ran hosted-engine --set-maintenance
> >--mode=none
> >Watched /var/spool/mail/root to see the engine coming up. It went to
> >"starting" but never reached the "Up" status.
> >Set a password and used vncviewer to see the console; see the attached
> >screenshot.
> >hosted-engine --vm-poweroff, and tried again, same result
> >hosted-engine --vm-start, works
> >Let it start up and then shut it down after enabling maintenance mode.
> >Copied, hopefully, all relevant logs and attached them.
> >
> >A sosreport is also available (12 MB). I can provide a download link
> >if needed.
> >
> >Hopefully someone is able to spot what is going wrong.
> >
> >Regards,
> >
> >Joop
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XASNIEZTZIMWAUIANSOPCX4ZBK6T7TZT/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T67XIXGE35N7PPJIIO7CMMLO6NHKL73K/
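
For quick reference, the recovery sequence Joop describes in the message above,
collected as a shell sketch. The service list is an assumption (adjust it to
what is actually installed on the host); the hosted-engine commands and the
mail-spool check are taken from the message itself.

    # enter global maintenance and stop the engine VM
    hosted-engine --set-maintenance --mode=global
    hosted-engine --vm-shutdown
    # stop oVirt-related services (assumed set, adjust as needed)
    systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd
    # clear /var/log/... as desired, then reboot the host
    reboot
    # after the reboot, leave maintenance and watch the HA agent's
    # state-change mails until the engine VM reports Up
    hosted-engine --set-maintenance --mode=none
    tail -f /var/spool/mail/root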


[ovirt-users] Re: 4.4 regression: engine-setup fails if admin password in answerfile contains a "%"

2020-05-24 Thread Stephen Panicho
Created a bug report at https://bugzilla.redhat.com/show_bug.cgi?id=1839533
(forgive me if this is improperly categorized...)

On Sun, May 24, 2020 at 5:47 AM Yedidyah Bar David  wrote:

> On Sun, May 24, 2020 at 12:08 PM Yedidyah Bar David wrote:
> >
> > On Sun, May 24, 2020 at 9:51 AM Strahil Nikolov via Users wrote:
> > >
> > > Hi Stephen,
> > >
> > > I think it's a regression. Could you open an issue/bug?
> >
> > It is. Thanks for the report. Here is a fix:
> >
> > https://gerrit.ovirt.org/109244
> >
> > Did any of you open a bug? If not, I'll open one. Thanks.
> >
> > Best regards,
>
> Hi, Gianluca. Replying to your email on "4.4 HCI Install Failure -
> Missing /etc/pki/CA/cacert.pem":
>
> On Sun, May 24, 2020 at 12:28 PM Gianluca Cecchi
>  wrote:
> >
> > If I remember correctly, it happened to me during the beta cycle, and the
> only "strange" character I used for the admin password was the @.
> > Dunno if it's related to what you reported for the % character.
>
> Did you open a bug?
>
> In any case, my above patch is not supposed to fix '@', only '%' (I think).
>
> Thanks and best regards,
>
> >
> > >
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > > On 24 May 2020 at 2:28:41 GMT+03:00, Stephen Panicho <s.pani...@gmail.com> wrote:
> > >>
> > >> I encountered this error when deploying the Hosted Engine via Cockpit:
> > >>
> > >> [ INFO ] TASK [ovirt.engine-setup : Run engine-setup with answerfile]
> > >> [ ERROR ] fatal: [localhost -> engine.ovirt.trashnet.xyz]: FAILED!
> => {"changed": true, "cmd": ["engine-setup", "--accept-defaults",
> "--config-append=/root/ovirt-engine-answers"], "delta": "0:00:01.396490",
> "end": "2020-05-22 18:32:41.965984", "msg": "non-zero return code", "rc":
> 1, "start": "2020-05-22 18:32:40.569494", "stderr": "", "stderr_lines": [],
> "stdout": "[ INFO ] Stage: Initializing\n[ ERROR ] Failed to execute stage
> 'Initializing': '%' must be followed by '%' or '(', found: '%JUUj'\n[ INFO
> ] Stage: Clean up\n Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20200522183241-c7d1kh.log\n[
> ERROR ] Failed to execute stage 'Clean up': 'NoneType' object has no
> attribute 'cleanup'\n[ INFO ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20200522183241-setup.conf'\n[ INFO ]
> Stage: Pre-termination\n[ INFO ] Stage: Termination\n[ ERROR ] Execution of
> setup failed", "stdout_lines": ["[ INFO ] Stage: Initializing", "[ ERROR ]
> Failed to execute stage 'Initializing': '%' must be followed by '%' or '(',
> found: '%JUUj'", "[ INFO ] Stage: Clean up", " Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20200522183241-c7d1kh.log",
> "[ ERROR ] Failed to execute stage 'Clean up': 'NoneType' object has no
> attribute 'cleanup'", "[ INFO ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20200522183241-setup.conf'", "[ INFO ]
> Stage: Pre-termination", "[ INFO ] Stage: Termination", "[ ERROR ]
> Execution of setup failed"]}
> > >>
> > >> The important bit is this: Failed to execute stage 'Initializing':
> '%' must be followed by '%' or '(', found: '%JUUj'"
> > >>
> > >> Hey! Those are the last few characters of the admin password. Note
> that I don't mean the root password to the VM, but the one for the "admin"
> user of the web interface. I added some debug lines to the Ansible play to
> see the answerfile that was being generated.
> > >>
> > >> OVESETUP_CONFIG/adminPassword=str:&6#b%JUUj
> > >>
> > >> Apparently engine-setup can no longer handle an answerfile with a "%"
> character in it. This same password worked in 4.3.
> > >>
> > >> Once I changed the admin password, installation progressed normally.
> > >
> > >
> > > --
> > > Sent from my Android device via K-9 Mail. Please excuse my brevity.
> > > ___
> > > Users mailing list -- users@ovirt.org
> > > To unsubscribe send an email to users-le...@ovirt.org
> > > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OHDBPINYMOJVS3PYBNF2IVW72QZOSRTV/
> >
> >
> >
> > --
> > Didi
>
>
>
> --
> Didi
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WOY36ZACG2FGEVDMQVJOTJCMXS3URQXL/


[ovirt-users] Re: 4.4 HCI Install Failure - Missing /etc/pki/CA/cacert.pem

2020-05-23 Thread Stephen Panicho
Fixed the above in
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/6QODLB6J5Z74YCVF6C3TLQPF4KK7RKB5/

On Fri, May 22, 2020 at 2:43 PM Stephen Panicho  wrote:

> Looks like I spoke too soon. I can get to the point where
> HostedEngineLocal comes up, but it fails to run engine-setup.
>
> The ansible output:
> [ INFO ] TASK [ovirt.engine-setup : Run engine-setup with answerfile]
> [ ERROR ] fatal: [localhost -> engine.ovirt.trashnet.xyz]: FAILED! =>
> {"changed": true, "cmd": ["engine-setup", "--accept-defaults",
> "--config-append=/root/ovirt-engine-answers"], "delta": "0:00:01.396490",
> "end": "2020-05-22 18:32:41.965984", "msg": "non-zero return code", "rc":
> 1, "start": "2020-05-22 18:32:40.569494", "stderr": "", "stderr_lines": [],
> "stdout": "[ INFO ] Stage: Initializing\n[ ERROR ] Failed to execute stage
> 'Initializing': '%' must be followed by '%' or '(', found: '%JUUj'\n[ INFO
> ] Stage: Clean up\n Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20200522183241-c7d1kh.log\n[
> ERROR ] Failed to execute stage 'Clean up': 'NoneType' object has no
> attribute 'cleanup'\n[ INFO ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20200522183241-setup.conf'\n[ INFO ]
> Stage: Pre-termination\n[ INFO ] Stage: Termination\n[ ERROR ] Execution of
> setup failed", "stdout_lines": ["[ INFO ] Stage: Initializing", "[ ERROR ]
> Failed to execute stage 'Initializing': '%' must be followed by '%' or '(',
> found: '%JUUj'", "[ INFO ] Stage: Clean up", " Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20200522183241-c7d1kh.log",
> "[ ERROR ] Failed to execute stage 'Clean up': 'NoneType' object has no
> attribute 'cleanup'", "[ INFO ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20200522183241-setup.conf'", "[ INFO ]
> Stage: Pre-termination", "[ INFO ] Stage: Termination", "[ ERROR ]
> Execution of setup failed"]}
>
> SSHing to the HostedEngineLocal to get the logs...
>
> /var/lib/ovirt-engine/setup/answers/20200522183241-setup.conf:
> # OTOPI answer file, generated by human dialog
> [environment:default]
>
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20200522183241-c7d1kh.log:
> I'll attach this one because it's huge.
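
One way to reach HostedEngineLocal during the deploy (an assumption on my
part, not something stated in this thread): the provisional engine VM sits on
libvirt's default NAT network, so its address can be read from the host.

    # read-only query of DHCP leases on the default libvirt network
    virsh -r net-dhcp-leases default
    # then ssh in as root with the password set in the wizard
    ssh root@192.168.122.57   # illustrative address; use the leased one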
>
> On Fri, May 22, 2020 at 2:11 PM Stephen Panicho 
> wrote:
>
>> That fixed it! Thanks so much for the help, Joop.
>>
>> On Fri, May 22, 2020 at 1:07 PM Joop  wrote:
>>
>>> On 22-5-2020 17:59, Stephen Panicho wrote:
>>>
>>> Hey Marcin. There aren't any logs for those services as they haven't
>>> been started yet. This failure happens very early in the deploy, just after
>>> the page where you configure the engine VM settings.
>>>
>>> Unfortunately, I can't try a redeploy on the same node because libvirtd
>>> is now in a bad state and can't come up at all. I now get the following
>>> error once we get past the Gluster Wizard and move on to the Hosted Engine
>>> Deploy:
>>> "libvirt is not running! Please ensure it is running before starting the
>>> wizard, so system capabilities can be queried."
>>>
>>> I'll sift through the ansible to see what it changed and report back.
>>> But I'd still like to get past this /etc/pki/CA/cacert.pem issue.
>>>
>>> On Fri, May 22, 2020 at 4:45 AM Marcin Sobczyk 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> On 5/22/20 7:06 AM, Stephen Panicho wrote:
>>>>
>>>> Hi all! I'm using Cockpit to perform an HCI install, and it fails at
>>>> the hosted engine deploy. Libvirtd can't restart because of a missing
>>>> /etc/pki/CA/cacert.pem file.
>>>>
>>>> The log (tasks seemingly from
>>>> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml):
>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Stop libvirt service]
>>>> [ INFO ] changed: [localhost]
>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Drop vdsm config statements]
>>>> [ INFO ] changed: [localhost]
>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial abrt config
>>>> files]
>>>> [ INFO ] changed: [localhost]
>>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Restart abrtd service]
>>>> [ INFO ] changed: [localhost]
>>>> [ INFO ] TASK [ovirt.hosted_eng

[ovirt-users] 4.4 regression: engine-setup fails if admin password in answerfile contains a "%"

2020-05-23 Thread Stephen Panicho
I encountered this error when deploying the Hosted Engine via Cockpit:

[ INFO ] TASK [ovirt.engine-setup : Run engine-setup with answerfile]
[ ERROR ] fatal: [localhost -> engine.ovirt.trashnet.xyz]: FAILED! =>
{"changed": true, "cmd": ["engine-setup", "--accept-defaults",
"--config-append=/root/ovirt-engine-answers"], "delta": "0:00:01.396490",
"end": "2020-05-22 18:32:41.965984", "msg": "non-zero return code", "rc":
1, "start": "2020-05-22 18:32:40.569494", "stderr": "", "stderr_lines": [],
"stdout": "[ INFO ] Stage: Initializing\n[ ERROR ] Failed to execute stage
'Initializing': '%' must be followed by '%' or '(', found: '%JUUj'\n[ INFO
] Stage: Clean up\n Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20200522183241-c7d1kh.log\n[
ERROR ] Failed to execute stage 'Clean up': 'NoneType' object has no
attribute 'cleanup'\n[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20200522183241-setup.conf'\n[ INFO ]
Stage: Pre-termination\n[ INFO ] Stage: Termination\n[ ERROR ] Execution of
setup failed", "stdout_lines": ["[ INFO ] Stage: Initializing", "[ ERROR ]
Failed to execute stage 'Initializing': '%' must be followed by '%' or '(',
found: '%JUUj'", "[ INFO ] Stage: Clean up", " Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20200522183241-c7d1kh.log",
"[ ERROR ] Failed to execute stage 'Clean up': 'NoneType' object has no
attribute 'cleanup'", "[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20200522183241-setup.conf'", "[ INFO ]
Stage: Pre-termination", "[ INFO ] Stage: Termination", "[ ERROR ]
Execution of setup failed"]}

The important bit is this: Failed to execute stage 'Initializing': '%' must
be followed by '%' or '(', found: '%JUUj'"

Hey! Those are the last few characters of the admin password. Note that I
don't mean the root password to the VM, but the one for the "admin" user of
the web interface. I added some debug lines to the Ansible play to see the
answerfile that was being generated.

OVESETUP_CONFIG/adminPassword=str:&6#b%JUUj

Apparently engine-setup can no longer handle an answerfile with a "%"
character in it. This same password worked in 4.3.

Once I changed the admin password, installation progressed normally.
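
That error text matches Python's configparser-style "%" interpolation, which
the otopi answer-file parser appears to trip over. Until a fix lands, a
plausible workaround (an untested assumption based on that interpolation
syntax, not something confirmed in this thread) is to double the "%" in the
answer file so it is read as a literal percent sign:

    # hypothetical answer-file entry: '%%' interpolates to a literal '%'
    OVESETUP_CONFIG/adminPassword=str:&6#b%%JUUj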
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6QODLB6J5Z74YCVF6C3TLQPF4KK7RKB5/


[ovirt-users] Re: 4.4 HCI Install Failure - Missing /etc/pki/CA/cacert.pem

2020-05-22 Thread Stephen Panicho
That fixed it! Thanks so much for the help, Joop.

On Fri, May 22, 2020 at 1:07 PM Joop  wrote:

> On 22-5-2020 17:59, Stephen Panicho wrote:
>
> Hey Marcin. There aren't any logs for those services as they haven't been
> started yet. This failure happens very early in the deploy, just after the
> page where you configure the engine VM settings.
>
> Unfortunately, I can't try a redeploy on the same node because libvirtd is
> now in a bad state and can't come up at all. I now get the following error
> once we get past the Gluster Wizard and move on to the Hosted Engine
> Deploy:
> "libvirt is not running! Please ensure it is running before starting the
> wizard, so system capabilities can be queried."
>
> I'll sift through the ansible to see what it changed and report back. But
> I'd still like to get past this /etc/pki/CA/cacert.pem issue.
>
> On Fri, May 22, 2020 at 4:45 AM Marcin Sobczyk 
> wrote:
>
>> Hi,
>>
>> On 5/22/20 7:06 AM, Stephen Panicho wrote:
>>
>> Hi all! I'm using Cockpit to perform an HCI install, and it fails at the
>> hosted engine deploy. Libvirtd can't restart because of a missing
>> /etc/pki/CA/cacert.pem file.
>>
>> The log (tasks seemingly from
>> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml):
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Stop libvirt service]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Drop vdsm config statements]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial abrt config
>> files]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Restart abrtd service]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Drop libvirt sasl2
>> configuration by vdsm]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Stop and disable services]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial libvirt
>> default network configuration]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Start libvirt]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
>> "Unable to start service libvirtd: Job for libvirtd.service failed because
>> the control process exited with error code.\nSee \"systemctl status
>> libvirtd.service\" and \"journalctl -xe\" for details.\n"}
>>
>> journalctl -u libvirtd:
>> May 22 04:33:25 node1 libvirtd[26392]: libvirt version: 5.6.0, package:
>> 10.el8 (CBS <c...@centos.org>, 2020-02-27-01:09:46, )
>> May 22 04:33:25 node1 libvirtd[26392]: hostname: node1
>> May 22 04:33:25 node1 libvirtd[26392]: Cannot read CA certificate
>> '/etc/pki/CA/cacert.pem': No such file or directory
>> May 22 04:33:25 node1 systemd[1]: libvirtd.service: Main process exited,
>> code=exited, status=6/NOTCONFIGURED
>> May 22 04:33:25 node1 systemd[1]: libvirtd.service: Failed with result
>> 'exit-code'.
>> May 22 04:33:25 node1 systemd[1]: Failed to start Virtualization daemon.
>>
>> Can you please share journalctl logs for vdsmd and supervdsmd?
>>
> I hate it when I have to say: me too.
>
> BUT during test week I think Simone had the same problem and did a
> /usr/sbin/ovirt-hosted-engine-cleanup, and then you can retry the deploy
> from the wizard.
> To recapitulate: follow the HCI cockpit wizard until you get the error,
> then open a terminal, run the cleanup, and then retry the deployment. It
> will succeed. Did this yesterday and it worked.
> Even tried to run the cleanup before starting the wizard, but that was no
> success.
>
> Greetings
>
> Joop
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4PUIES2JGAFHJY6BJG5MQY4URRE3STKS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2TPBUOVBISYB2NMU2VY4GCCJTSUB6GRW/
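
For anyone skimming the thread: Joop's workaround above boils down to the
following, run from a terminal on the host (a sketch; the cleanup script
ships with the hosted-engine setup packages):

    # after the Hosted Engine deploy fails in the Cockpit wizard
    /usr/sbin/ovirt-hosted-engine-cleanup
    # then retry the deployment from the wizard itself; per Joop, running
    # the cleanup *before* first starting the wizard does not help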


[ovirt-users] Re: 4.4 HCI Install Failure - Missing /etc/pki/CA/cacert.pem

2020-05-22 Thread Stephen Panicho
The issue is the "Drop vdsm config statements" task from
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml

I'm not sure how those config statements got there in the first place...
maybe a scriptlet from a vdsm rpm install? Either way, the task removes the
following section from the bottom of /etc/libvirt/libvirtd.conf, causing
libvirtd to fall back to the default ca_file, /etc/pki/CA/cacert.pem.

## beginning of configuration section by vdsm-4.40.0
auth_unix_rw="sasl"
ca_file="/etc/pki/vdsm/certs/cacert.pem"
cert_file="/etc/pki/vdsm/certs/vdsmcert.pem"
host_uuid="9def7285-9ed9-4a94-8a7d-ed1f05a9a224"
keepalive_interval=-1
key_file="/etc/pki/vdsm/keys/vdsmkey.pem"
## end of configuration section by vdsm-4.40.0

If I re-add this section to my bootstrap node's libvirtd.conf, I can start
the libvirtd service again. I'll try to comment out the "Drop vdsm config
statements" task from the playbook and see if I can proceed.

On Fri, May 22, 2020 at 11:59 AM Stephen Panicho 
wrote:

> Hey Marcin. There aren't any logs for those services as they haven't been
> started yet. This failure happens very early in the deploy, just after the
> page where you configure the engine VM settings.
>
> Unfortunately, I can't try a redeploy on the same node because libvirtd is
> now in a bad state and can't come up at all. I now get the following error
> once we get past the Gluster Wizard and move on to the Hosted Engine
> Deploy:
> "libvirt is not running! Please ensure it is running before starting the
> wizard, so system capabilities can be queried."
>
> I'll sift through the ansible to see what it changed and report back. But
> I'd still like to get past this /etc/pki/CA/cacert.pem issue.
>
> On Fri, May 22, 2020 at 4:45 AM Marcin Sobczyk 
> wrote:
>
>> Hi,
>>
>> On 5/22/20 7:06 AM, Stephen Panicho wrote:
>>
>> Hi all! I'm using Cockpit to perform an HCI install, and it fails at the
>> hosted engine deploy. Libvirtd can't restart because of a missing
>> /etc/pki/CA/cacert.pem file.
>>
>> The log (tasks seemingly from
>> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml):
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Stop libvirt service]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Drop vdsm config statements]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial abrt config
>> files]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Restart abrtd service]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Drop libvirt sasl2
>> configuration by vdsm]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Stop and disable services]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial libvirt
>> default network configuration]
>> [ INFO ] changed: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Start libvirt]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
>> "Unable to start service libvirtd: Job for libvirtd.service failed because
>> the control process exited with error code.\nSee \"systemctl status
>> libvirtd.service\" and \"journalctl -xe\" for details.\n"}
>>
>> journalctl -u libvirtd:
>> May 22 04:33:25 node1 libvirtd[26392]: libvirt version: 5.6.0, package:
>> 10.el8 (CBS , 2020-02-27-01:09:46, )
>> May 22 04:33:25 node1 libvirtd[26392]: hostname: node1
>> May 22 04:33:25 node1 libvirtd[26392]: Cannot read CA certificate
>> '/etc/pki/CA/cacert.pem': No such file or directory
>> May 22 04:33:25 node1 systemd[1]: libvirtd.service: Main process exited,
>> code=exited, status=6/NOTCONFIGURED
>> May 22 04:33:25 node1 systemd[1]: libvirtd.service: Failed with result
>> 'exit-code'.
>> May 22 04:33:25 node1 systemd[1]: Failed to start Virtualization daemon.
>>
>> Can you please share journalctl logs for vdsmd and supervdsmd?
>>
>> Regards, Marcin
>>
>>
>> From a fresh CentOS 8.1 minimal install, I've installed the following:
>> - The 4.4 repo
>> - cockpit
>> - ovirt-cockpit-dashboard
>> - vdsm-gluster (providing glusterfs-server and allowing the Gluster
>> Wizard to complete)
>> - gluster-ansible-roles (only on the bootstrap host)
>>
>> I'm not exactly sure what that initial bit of the playbook does.
>> Comparing the bootstrap node with another that has yet to be touched, both
>> /etc/libvirt/libvirtd.conf and /etc/sysconf

[ovirt-users] Re: 4.4 HCI Install Failure - Missing /etc/pki/CA/cacert.pem

2020-05-22 Thread Stephen Panicho
Hey Marcin. There aren't any logs for those services as they haven't been
started yet. This failure happens very early in the deploy, just after the
page where you configure the engine VM settings.

Unfortunately, I can't try a redeploy on the same node because libvirtd is
now in a bad state and can't come up at all. I now get the following error
once we get past the Gluster Wizard and move on to the Hosted Engine
Deploy:
"libvirt is not running! Please ensure it is running before starting the
wizard, so system capabilities can be queried."

I'll sift through the ansible to see what it changed and report back. But
I'd still like to get past this /etc/pki/CA/cacert.pem issue.

On Fri, May 22, 2020 at 4:45 AM Marcin Sobczyk  wrote:

> Hi,
>
> On 5/22/20 7:06 AM, Stephen Panicho wrote:
>
> Hi all! I'm using Cockpit to perform an HCI install, and it fails at the
> hosted engine deploy. Libvirtd can't restart because of a missing
> /etc/pki/CA/cacert.pem file.
>
> The log (tasks seemingly from
> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml):
> [ INFO ] TASK [ovirt.hosted_engine_setup : Stop libvirt service]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Drop vdsm config statements]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial abrt config
> files]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Restart abrtd service]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Drop libvirt sasl2
> configuration by vdsm]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Stop and disable services]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial libvirt default
> network configuration]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Start libvirt]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable
> to start service libvirtd: Job for libvirtd.service failed because the
> control process exited with error code.\nSee \"systemctl status
> libvirtd.service\" and \"journalctl -xe\" for details.\n"}
>
> journalctl -u libvirtd:
> May 22 04:33:25 node1 libvirtd[26392]: libvirt version: 5.6.0, package:
> 10.el8 (CBS , 2020-02-27-01:09:46, )
> May 22 04:33:25 node1 libvirtd[26392]: hostname: node1
> May 22 04:33:25 node1 libvirtd[26392]: Cannot read CA certificate
> '/etc/pki/CA/cacert.pem': No such file or directory
> May 22 04:33:25 node1 systemd[1]: libvirtd.service: Main process exited,
> code=exited, status=6/NOTCONFIGURED
> May 22 04:33:25 node1 systemd[1]: libvirtd.service: Failed with result
> 'exit-code'.
> May 22 04:33:25 node1 systemd[1]: Failed to start Virtualization daemon.
>
> Can you please share journalctl logs for vdsmd and supervdsmd?
>
> Regards, Marcin
>
>
> From a fresh CentOS 8.1 minimal install, I've installed the following:
> - The 4.4 repo
> - cockpit
> - ovirt-cockpit-dashboard
> - vdsm-gluster (providing glusterfs-server and allowing the Gluster Wizard
> to complete)
> - gluster-ansible-roles (only on the bootstrap host)
>
> I'm not exactly sure what that initial bit of the playbook does. Comparing
> the bootstrap node with another that has yet to be touched, both
> /etc/libvirt/libvirtd.conf and /etc/sysconfig/libvirtd are the same on both
> hosts. Yet the bootstrap host can no longer start libvirtd while the other
> host can. Neither host has the /etc/pki/CA/cacert.pem file.
>
> Please let me know if I can provide any more information. Thanks!
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XNW4HWUQUTN44VMATT4B6ARSEYVURDP7/
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SXQJPB4E2W5LEOKGBDPJC7BLKNYJA53D/


[ovirt-users] 4.4 HCI Install Failure - Missing /etc/pki/CA/cacert.pem

2020-05-21 Thread Stephen Panicho
Hi all! I'm using Cockpit to perform an HCI install, and it fails at the
hosted engine deploy. Libvirtd can't restart because of a missing
/etc/pki/CA/cacert.pem file.

The log (tasks seemingly from
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml):
[ INFO ] TASK [ovirt.hosted_engine_setup : Stop libvirt service]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Drop vdsm config statements]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial abrt config
files]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restart abrtd service]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Drop libvirt sasl2 configuration
by vdsm]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Stop and disable services]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial libvirt default
network configuration]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Start libvirt]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable
to start service libvirtd: Job for libvirtd.service failed because the
control process exited with error code.\nSee \"systemctl status
libvirtd.service\" and \"journalctl -xe\" for details.\n"}

journalctl -u libvirtd:
May 22 04:33:25 node1 libvirtd[26392]: libvirt version: 5.6.0, package:
10.el8 (CBS , 2020-02-27-01:09:46, )
May 22 04:33:25 node1 libvirtd[26392]: hostname: node1
May 22 04:33:25 node1 libvirtd[26392]: Cannot read CA certificate
'/etc/pki/CA/cacert.pem': No such file or directory
May 22 04:33:25 node1 systemd[1]: libvirtd.service: Main process exited,
code=exited, status=6/NOTCONFIGURED
May 22 04:33:25 node1 systemd[1]: libvirtd.service: Failed with result
'exit-code'.
May 22 04:33:25 node1 systemd[1]: Failed to start Virtualization daemon.

From a fresh CentOS 8.1 minimal install, I've installed the following:
- The 4.4 repo
- cockpit
- ovirt-cockpit-dashboard
- vdsm-gluster (providing glusterfs-server and allowing the Gluster Wizard
to complete)
- gluster-ansible-roles (only on the bootstrap host)

I'm not exactly sure what that initial bit of the playbook does. Comparing
the bootstrap node with another that has yet to be touched, both
/etc/libvirt/libvirtd.conf and /etc/sysconfig/libvirtd are the same on both
hosts. Yet the bootstrap host can no longer start libvirtd while the other
host can. Neither host has the /etc/pki/CA/cacert.pem file.

Please let me know if I can provide any more information. Thanks!
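
A quick way to check whether a host still carries the vdsm-managed TLS
settings (see the "Drop vdsm config statements" analysis earlier in this
digest); if none of these keys are set, libvirtd falls back to the default
CA path /etc/pki/CA/cacert.pem from the error above:

    grep -E '^(auth_unix_rw|ca_file|cert_file|key_file)' /etc/libvirt/libvirtd.conf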
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XNW4HWUQUTN44VMATT4B6ARSEYVURDP7/


[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-13 Thread Stephen Panicho
Darrell, would you care to elaborate on your HA workaround?

As far as I understand, only the primary Gluster host is visible to libvirt
when using gfapi, so if that host goes down, all VMs break. I imagine
you're using a round-robin DNS entry for the primary Gluster host, but I'd
like to be sure.
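
The "backup server entries" Darrell mentions in his reply below presumably go
in the storage domain's Mount Options field; a hedged example with made-up
host names, using the standard GlusterFS mount option:

    backup-volfile-servers=node2.example.com:node3.example.com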

On Wed, Feb 12, 2020 at 11:01 AM Darrell Budic 
wrote:

> Yes. I’m using libgfapi access on gluster 6.7 with oVirt 4.3.8 just fine,
> but I don’t use snapshots. You can work around the HA issue with DNS and
> backup server entries on the storage domain as well. Worth it to me for the
> performance, YMMV.
>
> On Feb 12, 2020, at 8:04 AM, Jayme  wrote:
>
> From my understanding it's not a default option but many users are still
> using libgfapi successfully. I'm not sure about its status in the latest
> 4.3.8 release but I know it is/was working for people in previous versions.
> The libgfapi bugs affect HA and snapshots (on 3 way replica HCI) but it
> should still be working otherwise, unless like I said something changed in
> more recent releases of oVirt.
>
> On Wed, Feb 12, 2020 at 9:43 AM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> Libgfapi is not supported because of an old bug in qemu. That qemu bug is
>> slowly getting fixed, but the bugs about Libgfapi support in ovirt have
>> since been closed as WONTFIX and DEFERRED
>>
>> See:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1465810
>> https://bugzilla.redhat.com/show_bug.cgi?id=1484660
>> https://bugzilla.redhat.com/show_bug.cgi?id=1484227 : "No plans to
>> enable libgfapi in RHHI-V for now. Closing this bug"
>> https://bugzilla.redhat.com/show_bug.cgi?id=1633642 : "Closing this as
>> no action taken from long back. Please reopen if required."
>>
>> Would be nice if someone could reopen the closed bugs so this feature
>> doesn't get forgotten
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>>
>> On Tue, Feb 11, 2020 at 9:58 AM Stephen Panicho 
>> wrote:
>>
>>> I used the cockpit-based hc setup and "option rpc-auth-allow-insecure"
>>> is absent from /etc/glusterfs/glusterd.vol.
>>>
>>> I'm going to redo the cluster this week and report back. Thanks for the
>>> tip!
>>>
>>> On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic 
>>> wrote:
>>>
>>>> The hosts will still mount the volume via FUSE, but you might double
>>>> check you set the storage up as Gluster and not NFS.
>>>>
>>>> Then gluster used to need some config in glusterd.vol to set
>>>>
>>>> option rpc-auth-allow-insecure on
>>>>
>>>> I’m not sure if that got added to a hyper converged setup or not, but
>>>> I’d check it.
>>>>
>>>> On Feb 10, 2020, at 4:41 PM, Stephen Panicho 
>>>> wrote:
>>>>
>>>> No, this was a relatively new cluster-- only a couple days old. Just a
>>>> handful of VMs including the engine.
>>>>
>>>> On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:
>>>>
>>>>> Curious do the vms have active snapshots?
>>>>>
>>>>> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>>>>>
>>>>>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster
>>>>>> running on CentOS 7.7 hosts. I was investigating poor Gluster performance
>>>>>> and heard about libgfapi, so I thought I'd give it a shot. Looking 
>>>>>> through
>>>>>> the documentation, followed by lots of threads and BZ reports, I've done
>>>>>> the following to enable it:
>>>>>>
>>>>>> First, I shut down all VMs except the engine. Then...
>>>>>>
>>>>>> On the hosts:
>>>>>> 1. setsebool -P virt_use_glusterfs on
>>>>>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>>>>>
>>>>>> On the engine VM:
>>>>>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>>>>>> 2. systemctl restart ovirt-engine
>>>>>>
>>>>>> VMs now fail to launch. Am I doing this correctly? I should also note
>>>>>> that the hosts still have the Gluster domain mounted via FUSE.
>>>>>>
>>>>>> Here's a relevant bit from 

[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-10 Thread Stephen Panicho
I used the cockpit-based hc setup and "option rpc-auth-allow-insecure" is
absent from /etc/glusterfs/glusterd.vol.

I'm going to redo the cluster this week and report back. Thanks for the tip!
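
For other readers following the tip quoted below: the option would go in the
management volume block of /etc/glusterfs/glusterd.vol on each host (a
sketch; existing options are elided, and the companion volume-level setting
is an assumption from general Gluster practice, not something from this
thread):

    # /etc/glusterfs/glusterd.vol
    volume management
        type mgmt/glusterd
        # ...existing options left unchanged...
        option rpc-auth-allow-insecure on
    end-volume

    # then restart glusterd on each host:
    systemctl restart glusterd
    # companion setting often paired with it ("vmstore" is the volume
    # name appearing in the logs later in this thread):
    gluster volume set vmstore server.allow-insecure on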

On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic 
wrote:

> The hosts will still mount the volume via FUSE, but you might double check
> you set the storage up as Gluster and not NFS.
>
> Then gluster used to need some config in glusterd.vol to set
>
> option rpc-auth-allow-insecure on
>
> I’m not sure if that got added to a hyper converged setup or not, but I’d
> check it.
>
> On Feb 10, 2020, at 4:41 PM, Stephen Panicho  wrote:
>
> No, this was a relatively new cluster-- only a couple days old. Just a
> handful of VMs including the engine.
>
> On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:
>
>> Curious do the vms have active snapshots?
>>
>> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>>
>>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running
>>> on CentOS 7.7 hosts. I was investigating poor Gluster performance and heard
>>> about libgfapi, so I thought I'd give it a shot. Looking through the
>>> documentation, followed by lots of threads and BZ reports, I've done the
>>> following to enable it:
>>>
>>> First, I shut down all VMs except the engine. Then...
>>>
>>> On the hosts:
>>> 1. setsebool -P virt_use_glusterfs on
>>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>>
>>> On the engine VM:
>>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>>> 2. systemctl restart ovirt-engine
>>>
>>> VMs now fail to launch. Am I doing this correctly? I should also note
>>> that the hosts still have the Gluster domain mounted via FUSE.
>>>
>>> Here's a relevant bit from engine.log:
>>>
>>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
>>> node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
>>> Could not read qcow2 header: Invalid argument.
>>>
>>> The full engine.log from one of the attempts:
>>>
>>> 2020-02-06 16:38:24,909Z INFO
>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (ForkJoinPool-1-worker-12) [] add VM
>>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
>>> 2020-02-06 16:38:25,010Z ERROR
>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
>>> (ForkJoinPool-1-worker-12) [] Rerun VM
>>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS '
>>> node2.ovirt.trashnet.xyz'
>>> 2020-02-06 16:38:25,091Z WARN
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID:
>>> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host
>>> node2.ovirt.trashnet.xyz.
>>> 2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object
>>> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]',
>>> sharedLocks=''}'
>>> 2020-02-06 16:38:25,179Z INFO
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>>> IsVmDuringInitiatingVDSCommand(
>>> IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
>>> log id: 2107f52a
>>> 2020-02-06 16:38:25,181Z INFO
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
>>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
>>> 2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] Running command:
>>> RunVmCommand internal: false. Entities affected :  ID:
>>> df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role
>>> type USER
>>> 2020-02-06 16:38:25,313Z INFO
>>> [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine
>>> 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for
>>> 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959

[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-10 Thread Stephen Panicho
No, this was a relatively new cluster-- only a couple days old. Just a
handful of VMs including the engine.

On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:

> Curious do the vms have active snapshots?
>
> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>
>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running on
>> CentOS 7.7 hosts. I was investigating poor Gluster performance and heard
>> about libgfapi, so I thought I'd give it a shot. Looking through the
>> documentation, followed by lots of threads and BZ reports, I've done the
>> following to enable it:
>>
>> First, I shut down all VMs except the engine. Then...
>>
>> On the hosts:
>> 1. setsebool -P virt_use_glusterfs on
>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>
>> On the engine VM:
>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>> 2. systemctl restart ovirt-engine
>>
>> VMs now fail to launch. Am I doing this correctly? I should also note
>> that the hosts still have the Gluster domain mounted via FUSE.
>>
>> Here's a relevant bit from engine.log:
>>
>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
>> node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
>> Could not read qcow2 header: Invalid argument.
>>
>> The full engine.log from one of the attempts:
>>
>> 2020-02-06 16:38:24,909Z INFO
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>> (ForkJoinPool-1-worker-12) [] add VM
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
>> 2020-02-06 16:38:25,010Z ERROR
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
>> (ForkJoinPool-1-worker-12) [] Rerun VM
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS '
>> node2.ovirt.trashnet.xyz'
>> 2020-02-06 16:38:25,091Z WARN
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID:
>> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host
>> node2.ovirt.trashnet.xyz.
>> 2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object
>> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]',
>> sharedLocks=''}'
>> 2020-02-06 16:38:25,179Z INFO
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>> IsVmDuringInitiatingVDSCommand(
>> IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
>> log id: 2107f52a
>> 2020-02-06 16:38:25,181Z INFO
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
>> 2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Running command:
>> RunVmCommand internal: false. Entities affected :  ID:
>> df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role
>> type USER
>> 2020-02-06 16:38:25,313Z INFO
>> [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine
>> 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for
>> 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
>> 2020-02-06 16:38:25,382Z INFO
>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>> UpdateVmDynamicDataVDSCommand(
>> UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b',
>> vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}),
>> log id: 4a83911f
>> 2020-02-06 16:38:25,417Z INFO
>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
>> UpdateVmDynamicDataVDSCommand, return: , log id: 4a83911f
>> 2020-02-06 16:38:25,418Z INFO
>> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START, CreateVDSCommand(
>> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id:
>> 5e07ba66
>> 2020-02-06 16:38:25,420Z INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>> CreateBrokerVDSCommand(HostName = node1.ovirt.trashnet.xyz,
>> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id:
>> 1bfa03c4
>> 2020-02-06 16:38:25,424Z INFO
>>