[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-16 Thread Yedidyah Bar David
Hi all,

On Fri, Feb 14, 2020 at 6:45 PM Florian Nolden  wrote:

> Thanks, Fredy for your great help. Setting the Banner and PrintMotd
> options on all 3 nodes helped me to succeed with the installation.
>

Thanks a lot for the report!


> On Fri, Feb 14, 2020 at 4:23 PM Fredy Sanchez <fredy.sanc...@modmed.com> wrote:
>
>> Banner none
>> PrintMotd no
>>
>> # systemctl restart sshd
>>
>
> That should be fixed in the ovirt-node images.
>

I think I agree. Would you like to open a bug about this?

I wonder what we can/should do with EL7 hosts (non-ovirt-node).

Also need to check how 4.4 behaves - there, host-deploy was fully rewritten
using ansible. No idea how sensitive ansible is to these banners (compared
with otopi, which is very sensitive). Adding Dana.
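
(For anyone checking a host by hand: sshd's Banner is sent before
authentication and shows up even for non-interactive commands, so a quick
test - just a sketch, assuming key-based root ssh and "host" as a
placeholder - is:

# ssh -o BatchMode=yes root@host true

A clean host prints nothing here; any output is banner text that an
ssh-driven deploy has to parse around.)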

Best regards,


>
>
>> If gluster installed successfully, you don't have to reinstall it.
>> Just run the hyperconverged install again from cockpit, and it will
>> detect the existing gluster install, and ask you if you want to re-use it;
>> re-using worked for me. The only thing I'd point out here is that gluster
>> didn't enable on my servers automagically; I had to enable and start it
>> by hand before cockpit picked it up.
>> # systemctl enable glusterd --now
>> # systemctl status glusterd
>>
> Gluster was already running fine for me, so that was not needed.
>
>> Also,
>> # tail -f /var/log/secure
>> while the install is running will help you see if there's a problem with
>> ssh other than the banners.
>>
>> --
>> Fredy
>>
>> On Fri, Feb 14, 2020 at 9:32 AM Florian Nolden 
>> wrote:
>>
>>>
>>> On Fri, Feb 14, 2020 at 12:21 PM Fredy Sanchez <fredy.sanc...@modmed.com> wrote:
>>>
 Hi Florian,

>>>
 In my case, Didi's suggestions got me thinking, and I ultimately traced
 this to the ssh banners; they must be disabled. You can do this in
 sshd_config. I do think that logging could be better for this issue, and
 that the host up check should incorporate things other than ssh, even if
 just a ping. Good luck.

>>> Hi Fredy,
>>>
>>> thanks for the reply.
>>>
>>> I just have to uncomment "Banner none" in the /etc/ssh/sshd_config on
>>> all 3 nodes, and run redeploy in the cockpit?
>>> Or have you also reinstalled the nodes and the gluster storage?
>>>
 --
 Fredy

 On Fri, Feb 14, 2020, 4:55 AM Florian Nolden 
 wrote:

> I'm also stuck with that issue.
>
> I have
> 3x  HP ProLiant DL360 G7
>
> 1x 1gbit => as control network
> 3x 1gbit => bond0 as Lan
> 2x 10gbit => bond1 as gluster network
>
> I installed oVirt Node 4.3.8 on all 3 servers,
> configured the networks using cockpit, and
> followed this guide for the gluster setup with cockpit:
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>
> then installed the hosted engine with cockpit:
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": 
> {"ovirt_hosts": [{"address": "x-c01-n01.lan.xilloc.com", 
> "affinity_labels": [], "auto_numa_status": "unknown", "certificate": 
> {"organization": "lan.xilloc.com", "subject": 
> "O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href": 
> "/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe", "id": 
> "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu": {"speed": 
> 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, 
> "devices": [], "external_network_provider_configurations": [], 
> "external_status": "ok", "hardware_information": 
> {"supported_rng_sources": []}, "hooks": [], "href": 
> "/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "id": 
> "ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [], 
> "kdump_status": "unknown", "ksm": {"enabled": false}, 
> "max_scheduling_memory": 0, "memory": 0, "name": 
> "x-c01-n01.lan.xilloc.com", "network_attachments": [], "nics": [], 
> "numa_nodes": [], "numa_supported": false, "os": 
> {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, 
> "power_management": {"automatic_pm_enabled": true, "enabled": false, 
> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", 
> "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": 
> {"fingerprint": "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4", 
> "port": 22}, "statistics": [], "status": "non_responsive", 
> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
> "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", 
> "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
> "consolidated"}]}, "attempts": 120, "changed": false, "deprecations": 
> [{"msg": "The 'ovirt_host_facts' module has been renamed to 
> 'ovirt_host_info', and the renamed one no longer returns 

[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-14 Thread Florian Nolden
Thanks, Fredy for your great help. Setting the Banner and PrintMotd options
on all 3 nodes helped me to succeed with the installation.
On Fri, Feb 14, 2020 at 4:23 PM Fredy Sanchez <fredy.sanc...@modmed.com> wrote:

> Banner none
> PrintMotd no
>
> # systemctl restart sshd
>

That should be fixed in the ovirt-node images.


> If gluster installed successfully, you don't have to reinstall it.
> Just run the hyperconverged install again from cockpit, and it will detect
> the existing gluster install, and ask you if you want to re-use it;
> re-using worked for me. The only thing I'd point out here is that gluster
> didn't enable on my servers automagically; I had to enable and start it
> by hand before cockpit picked it up.
> # systemctl enable glusterd --now
> # systemctl status glusterd
>
Gluster was already running fine for me, so that was not needed.

> Also,
> # tail -f /var/log/secure
> while the install is running will help you see if there's a problem with
> ssh other than the banners.
>
> --
> Fredy
>
> On Fri, Feb 14, 2020 at 9:32 AM Florian Nolden 
> wrote:
>
>>
>> On Fri, Feb 14, 2020 at 12:21 PM Fredy Sanchez <fredy.sanc...@modmed.com> wrote:
>>
>>> Hi Florian,
>>>
>>
>>> In my case, Didi's suggestions got me thinking, and I ultimately traced
>>> this to the ssh banners; they must be disabled. You can do this in
>>> sshd_config. I do think that logging could be better for this issue, and
>>> that the host up check should incorporate things other than ssh, even if
>>> just a ping. Good luck.
>>>
>> Hi Fredy,
>>
>> thanks for the reply.
>>
>> I just have to uncomment "Banner none" in the /etc/ssh/sshd_config on
>> all 3 nodes, and run redeploy in the cockpit?
>> Or have you also reinstalled the nodes and the gluster storage?
>>
>>> --
>>> Fredy
>>>
>>> On Fri, Feb 14, 2020, 4:55 AM Florian Nolden 
>>> wrote:
>>>
 I'm also stuck with that issue.

 I have
 3x  HP ProLiant DL360 G7

 1x 1gbit => as control network
 3x 1gbit => bond0 as Lan
 2x 10gbit => bond1 as gluster network

 I installed oVirt Node 4.3.8 on all 3 servers,
 configured the networks using cockpit, and
 followed this guide for the gluster setup with cockpit:
 https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html

 then installed the hosted engine with cockpit:

 [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
 [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
 [{"address": "x-c01-n01.lan.xilloc.com", "affinity_labels": [], 
 "auto_numa_status": "unknown", "certificate": {"organization": 
 "lan.xilloc.com", "subject": 
 "O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href": 
 "/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe", "id": 
 "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu": {"speed": 
 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": 
 [], "external_network_provider_configurations": [], "external_status": 
 "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], 
 "href": "/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", 
 "id": "ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [], 
 "kdump_status": "unknown", "ksm": {"enabled": false}, 
 "max_scheduling_memory": 0, "memory": 0, "name": 
 "x-c01-n01.lan.xilloc.com", "network_attachments": [], "nics": [], 
 "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": 
 ""}, "permissions": [], "port": 54321, "power_management": 
 {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, 
 "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": 
 {"priority": 5, "status": "none"}, "ssh": {"fingerprint": 
 "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4", "port": 22}, 
 "statistics": [], "status": "non_responsive", 
 "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
 "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", 
 "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
 "consolidated"}]}, "attempts": 120, "changed": false, "deprecations": 
 [{"msg": "The 'ovirt_host_facts' module has been renamed to 
 'ovirt_host_info', and the renamed one no longer returns ansible_facts", 
 "version": "2.13"}]}



 What is the best approach now to install an oVirt Hosted Engine?


 Kind regards,

 Florian Nolden

 Head of IT at Xilloc Medical B.V.

 www.xilloc.com “Get aHead with patient specific implants”

 Xilloc Medical B.V., Urmonderbaan 22, Gate 2, Building 110, 6167 RD Sittard-Geleen

 —


[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-14 Thread Fredy Sanchez
Banner none
PrintMotd no

# systemctl restart sshd
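
Both options go in /etc/ssh/sshd_config on every node. To confirm the
running daemon actually picked them up, OpenSSH can dump its effective
configuration; a check like this should print the two lines shown:

# sshd -T | grep -Ei 'banner|printmotd'
banner none
printmotd no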

If gluster installed successfully, you don't have to reinstall it.
Just run the hyperconverged install again from cockpit, and it will detect
the existing gluster install, and ask you if you want to re-use it;
re-using worked for me. The only thing I'd point out here is that gluster
didn't enable on my servers automagically; I had to enable and start it
by hand before cockpit picked it up.
# systemctl enable glusterd --now
# systemctl status glusterd
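
To do that on all three nodes in one go, a sketch (node1..node3 are
placeholder hostnames, and root ssh between the nodes is assumed):

# for h in node1 node2 node3; do ssh root@$h 'systemctl enable --now glusterd'; done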

Also, running
# tail -f /var/log/secure
while the install is running will help you see if there's a problem with
ssh other than the banners.
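
To narrow that down to sshd only, something like:

# tail -f /var/log/secure | grep sshd

Accepted and failed logins then show up as the engine tries to add the host.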

--
Fredy

On Fri, Feb 14, 2020 at 9:32 AM Florian Nolden  wrote:

>
> On Fri, Feb 14, 2020 at 12:21 PM Fredy Sanchez <fredy.sanc...@modmed.com> wrote:
>
>> Hi Florian,
>>
>
>> In my case, Didi's suggestions got me thinking, and I ultimately traced
>> this to the ssh banners; they must be disabled. You can do this in
>> sshd_config. I do think that logging could be better for this issue, and
>> that the host up check should incorporate things other than ssh, even if
>> just a ping. Good luck.
>>
> Hi Fredy,
>
> thanks for the reply.
>
> I just have to uncomment "Banner none" in the /etc/ssh/sshd_config on all
> 3 nodes, and run redeploy in the cockpit?
> Or have you also reinstalled the nodes and the gluster storage?
>
>> --
>> Fredy
>>
>> On Fri, Feb 14, 2020, 4:55 AM Florian Nolden  wrote:
>>
>>> I'm also stuck with that issue.
>>>
>>> I have
>>> 3x  HP ProLiant DL360 G7
>>>
>>> 1x 1gbit => as control network
>>> 3x 1gbit => bond0 as Lan
>>> 2x 10gbit => bond1 as gluster network
>>>
>>> I installed oVirt Node 4.3.8 on all 3 servers,
>>> configured the networks using cockpit, and
>>> followed this guide for the gluster setup with cockpit:
>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>>>
>>> then installed the hosted engine with cockpit:
>>>
>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
>>> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
>>> [{"address": "x-c01-n01.lan.xilloc.com", "affinity_labels": [], 
>>> "auto_numa_status": "unknown", "certificate": {"organization": 
>>> "lan.xilloc.com", "subject": 
>>> "O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href": 
>>> "/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe", "id": 
>>> "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu": {"speed": 
>>> 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": 
>>> [], "external_network_provider_configurations": [], "external_status": 
>>> "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], 
>>> "href": "/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", 
>>> "id": "ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [], 
>>> "kdump_status": "unknown", "ksm": {"enabled": false}, 
>>> "max_scheduling_memory": 0, "memory": 0, "name": 
>>> "x-c01-n01.lan.xilloc.com", "network_attachments": [], "nics": [], 
>>> "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": 
>>> ""}, "permissions": [], "port": 54321, "power_management": 
>>> {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, 
>>> "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 
>>> 5, "status": "none"}, "ssh": {"fingerprint": 
>>> "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4", "port": 22}, 
>>> "statistics": [], "status": "non_responsive", 
>>> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
>>> "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", 
>>> "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
>>> "consolidated"}]}, "attempts": 120, "changed": false, "deprecations": 
>>> [{"msg": "The 'ovirt_host_facts' module has been renamed to 
>>> 'ovirt_host_info', and the renamed one no longer returns ansible_facts", 
>>> "version": "2.13"}]}
>>>
>>>
>>>
>>> What is the best approach now to install an oVirt Hosted Engine?
>>>
>>>
>>> Kind regards,
>>>
>>> Florian Nolden
>>>
>>> Head of IT at Xilloc Medical B.V.
>>>
>>> www.xilloc.com “Get aHead with patient specific implants”
>>>
>>> Xilloc Medical B.V., Urmonderbaan 22, Gate 2, Building 110, 6167 RD Sittard-Geleen
>>>
>>> —
>>>
>>> Disclaimer: The content of this e-mail, including any attachments, are
>>> confidential and are intended for the sole use of the individual or entity
>>> to which it is addressed. If you have received it by mistake please let us
>>> know by reply and then delete it from your system. Any distribution,
>>> copying or dissemination of this message is expected to conform to all
>>> legal stipulations governing the use of information.
>>>
>>>
>>> On Mon, Jan 27, 2020 at 7:56 AM Yedidyah Bar David <d...@redhat.com> wrote:

[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-14 Thread Florian Nolden
On Fri, Feb 14, 2020 at 12:21 PM Fredy Sanchez <fredy.sanc...@modmed.com> wrote:

> Hi Florian,
>

> In my case, Didi's suggestions got me thinking, and I ultimately traced
> this to the ssh banners; they must be disabled. You can do this in
> sshd_config. I do think that logging could be better for this issue, and
> that the host up check should incorporate things other than ssh, even if
> just a ping. Good luck.
>
Hi Fredy,

thanks for the reply.

I just have to uncomment "Banner none" in the /etc/ssh/sshd_config on all
3 nodes, and run redeploy in the cockpit?
Or have you also reinstalled the nodes and the gluster storage?

> --
> Fredy
>
> On Fri, Feb 14, 2020, 4:55 AM Florian Nolden  wrote:
>
>> I'm also stuck with that issue.
>>
>> I have
>> 3x  HP ProLiant DL360 G7
>>
>> 1x 1gbit => as control network
>> 3x 1gbit => bond0 as Lan
>> 2x 10gbit => bond1 as gluster network
>>
>> I installed oVirt Node 4.3.8 on all 3 servers,
>> configured the networks using cockpit, and
>> followed this guide for the gluster setup with cockpit:
>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>>
>> then installed the hosted engine with cockpit:
>>
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
>> [{"address": "x-c01-n01.lan.xilloc.com", "affinity_labels": [], 
>> "auto_numa_status": "unknown", "certificate": {"organization": 
>> "lan.xilloc.com", "subject": 
>> "O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href": 
>> "/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe", "id": 
>> "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu": {"speed": 
>> 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": 
>> [], "external_network_provider_configurations": [], "external_status": "ok", 
>> "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": 
>> "/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "id": 
>> "ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [], 
>> "kdump_status": "unknown", "ksm": {"enabled": false}, 
>> "max_scheduling_memory": 0, "memory": 0, "name": "x-c01-n01.lan.xilloc.com", 
>> "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": 
>> false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 
>> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, 
>> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": 
>> {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": 
>> "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4", "port": 22}, 
>> "statistics": [], "status": "non_responsive", 
>> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
>> "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", 
>> "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
>> "consolidated"}]}, "attempts": 120, "changed": false, "deprecations": 
>> [{"msg": "The 'ovirt_host_facts' module has been renamed to 
>> 'ovirt_host_info', and the renamed one no longer returns ansible_facts", 
>> "version": "2.13"}]}
>>
>>
>>
>> What is the best approach now to install an oVirt Hosted Engine?
>>
>>
>> Kind regards,
>>
>> Florian Nolden
>>
>> Head of IT at Xilloc Medical B.V.
>>
>> www.xilloc.com “Get aHead with patient specific implants”
>>
>> Xilloc Medical B.V., Urmonderbaan 22, Gate 2, Building 110, 6167 RD Sittard-Geleen
>>
>> —
>>
>>
>>
>> On Mon, Jan 27, 2020 at 7:56 AM Yedidyah Bar David <d...@redhat.com> wrote:
>>
>>> On Sun, Jan 26, 2020 at 8:45 PM Fredy Sanchez 
>>> wrote:
>>>
 Hi all,

 [root@bric-ovirt-1 ~]# cat /etc/*release*
 CentOS Linux release 7.7.1908 (Core)
 [root@bric-ovirt-1 ~]# yum info ovirt-engine-appliance
 Installed Packages
 Name: ovirt-engine-appliance
 Arch: x86_64
 Version : 4.3
 Release : 20191121.1.el7
 Size: 1.0 G
 Repo: installed
 From repo   : ovirt-4.3

 Same situation as https://bugzilla.redhat.com/show_bug.cgi?id=1787267. The
 error message almost everywhere is some red herring message about ansible

>>>
>>> You are right that it's misleading, but were the errors below the only
>>> ones you got from ansible?

[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-14 Thread Fredy Sanchez
Hi Florian,

In my case, Didi's suggestions got me thinking, and I ultimately traced
this to the ssh banners; they must be disabled. You can do this in
sshd_config. I do think that logging could be better for this issue, and
that the host up check should incorporate things other than ssh, even if
just a ping. Good luck.
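
A sketch of that kind of pre-check ("host" is a placeholder): confirm basic
reachability and that port 22 answers before digging into the deploy itself:

# ping -c1 -W2 host && nc -z -w2 host 22 && echo host reachable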

--
Fredy

On Fri, Feb 14, 2020, 4:55 AM Florian Nolden  wrote:

> I'm also stuck with that issue.
>
> I have
> 3x  HP ProLiant DL360 G7
>
> 1x 1gbit => as control network
> 3x 1gbit => bond0 as Lan
> 2x 10gbit => bond1 as gluster network
>
> I installed oVirt Node 4.3.8 on all 3 servers,
> configured the networks using cockpit, and
> followed this guide for the gluster setup with cockpit:
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>
> then installed the hosted engine with cockpit:
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
> [{"address": "x-c01-n01.lan.xilloc.com", "affinity_labels": [], 
> "auto_numa_status": "unknown", "certificate": {"organization": 
> "lan.xilloc.com", "subject": "O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, 
> "cluster": {"href": 
> "/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe", "id": 
> "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu": {"speed": 0.0, 
> "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], 
> "external_network_provider_configurations": [], "external_status": "ok", 
> "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": 
> "/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "id": 
> "ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [], "kdump_status": 
> "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 
> 0, "name": "x-c01-n01.lan.xilloc.com", "network_attachments": [], "nics": [], 
> "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": 
> ""}, "permissions": [], "port": 54321, "power_management": 
> {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, 
> "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 
> 5, "status": "none"}, "ssh": {"fingerprint": 
> "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4", "port": 22}, 
> "statistics": [], "status": "non_responsive", 
> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
> "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", 
> "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
> "consolidated"}]}, "attempts": 120, "changed": false, "deprecations": 
> [{"msg": "The 'ovirt_host_facts' module has been renamed to 
> 'ovirt_host_info', and the renamed one no longer returns ansible_facts", 
> "version": "2.13"}]}
>
>
>
> What is the best approach now to install an oVirt Hosted Engine?
>
>
> Kind regards,
>
> Florian Nolden
>
> Head of IT at Xilloc Medical B.V.
>
> www.xilloc.com “Get aHead with patient specific implants”
>
> Xilloc Medical B.V., Urmonderbaan 22, Gate 2, Building 110, 6167 RD Sittard-Geleen
>
> —
>
>
>
> On Mon, Jan 27, 2020 at 7:56 AM Yedidyah Bar David <d...@redhat.com> wrote:
>
>> On Sun, Jan 26, 2020 at 8:45 PM Fredy Sanchez 
>> wrote:
>>
>>> Hi all,
>>>
>>> [root@bric-ovirt-1 ~]# cat /etc/*release*
>>> CentOS Linux release 7.7.1908 (Core)
>>> [root@bric-ovirt-1 ~]# yum info ovirt-engine-appliance
>>> Installed Packages
>>> Name: ovirt-engine-appliance
>>> Arch: x86_64
>>> Version : 4.3
>>> Release : 20191121.1.el7
>>> Size: 1.0 G
>>> Repo: installed
>>> From repo   : ovirt-4.3
>>>
>>> Same situation as https://bugzilla.redhat.com/show_bug.cgi?id=1787267. The
>>> error message almost everywhere is some red herring message about ansible
>>>
>>
>> You are right that it's misleading, but were the errors below the only
>> ones you got from ansible?
>>
>>
>>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
>>> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts":
>>> {"ovirt_hosts": []}, "attempts": 120, "changed": false, "deprecations":
>>> [{"msg": "The 'ovirt_host_facts' module has been renamed to
>>> 'ovirt_host_info', and the renamed one no longer returns ansible_facts",
>>> "version": "2.1

[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-02-14 Thread Florian Nolden
I'm also stuck with that issue.

I have
3x  HP ProLiant DL360 G7

1x 1gbit => as control network
3x 1gbit => bond0 as Lan
2x 10gbit => bond1 as gluster network

I installed oVirt Node 4.3.8 on all 3 servers,
configured the networks using cockpit, and
followed this guide for the gluster setup with cockpit:
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html

then installed the hosted engine with cockpit:

[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts":
{"ovirt_hosts": [{"address": "x-c01-n01.lan.xilloc.com",
"affinity_labels": [], "auto_numa_status": "unknown", "certificate":
{"organization": "lan.xilloc.com", "subject":
"O=lan.xilloc.com,CN=x-c01-n01.lan.xilloc.com"}, "cluster": {"href":
"/ovirt-engine/api/clusters/3dff6890-4e7b-11ea-90cb-00163e6a7afe",
"id": "3dff6890-4e7b-11ea-90cb-00163e6a7afe"}, "comment": "", "cpu":
{"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled":
false}, "devices": [], "external_network_provider_configurations": [],
"external_status": "ok", "hardware_information":
{"supported_rng_sources": []}, "hooks": [], "href":
"/ovirt-engine/api/hosts/ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "id":
"ded7aa60-4a5e-456e-b899-dd7fc25cc7b3", "katello_errata": [],
"kdump_status": "unknown", "ksm": {"enabled": false},
"max_scheduling_memory": 0, "memory": 0, "name":
"x-c01-n01.lan.xilloc.com", "network_attachments": [], "nics": [],
"numa_nodes": [], "numa_supported": false, "os":
{"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321,
"power_management": {"automatic_pm_enabled": true, "enabled": false,
"kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
"se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh":
{"fingerprint": "SHA256:lWc/BuE5WukHd95WwfmFW2ee8VPJ2VugvJeI0puMlh4",
"port": 22}, "statistics": [], "status": "non_responsive",
"storage_connection_extensions": [], "summary": {"total": 0}, "tags":
[], "transparent_huge_pages": {"enabled": false}, "type":
"ovirt_node", "unmanaged_networks": [], "update_available": false,
"vgpu_placement": "consolidated"}]}, "attempts": 120, "changed":
false, "deprecations": [{"msg": "The 'ovirt_host_facts' module has
been renamed to 'ovirt_host_info', and the renamed one no longer
returns ansible_facts", "version": "2.13"}]}



What is the best approach now to install an oVirt Hosted Engine?


Kind regards,

Florian Nolden

Head of IT at Xilloc Medical B.V.

www.xilloc.com “Get aHead with patient specific implants”

Xilloc Medical B.V., Urmonderbaan 22, Gate 2, Building 110, 6167 RD Sittard-Geleen

—



On Mon, Jan 27, 2020 at 7:56 AM Yedidyah Bar David <d...@redhat.com> wrote:

> On Sun, Jan 26, 2020 at 8:45 PM Fredy Sanchez 
> wrote:
>
>> Hi all,
>>
>> [root@bric-ovirt-1 ~]# cat /etc/*release*
>> CentOS Linux release 7.7.1908 (Core)
>> [root@bric-ovirt-1 ~]# yum info ovirt-engine-appliance
>> Installed Packages
>> Name: ovirt-engine-appliance
>> Arch: x86_64
>> Version : 4.3
>> Release : 20191121.1.el7
>> Size: 1.0 G
>> Repo: installed
>> From repo   : ovirt-4.3
>>
>> Same situation as https://bugzilla.redhat.com/show_bug.cgi?id=1787267. The
>> error message almost everywhere is some red herring message about ansible
>>
>
> You are right that it's misleading, but were the errors below the only
> ones you got from ansible?
>
>
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts":
>> {"ovirt_hosts": []}, "attempts": 120, "changed": false, "deprecations":
>> [{"msg": "The 'ovirt_host_facts' module has been renamed to
>> 'ovirt_host_info', and the renamed one no longer returns ansible_facts",
>> "version": "2.13"}]}
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Notify the user about a
>> failure]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
>> system may not be provisioned according to the playbook results: please
>> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
>> ansible-playbook
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: please check the logs for the
>> issue, fix accordingly or re-deploy from scratch.
>>   Log file is located at
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200126170315-req4qb.log

[ovirt-users] Re: hosted-engine --deploy fails after "Wait for the host to be up" task

2020-01-26 Thread Yedidyah Bar David
On Sun, Jan 26, 2020 at 8:45 PM Fredy Sanchez 
wrote:

> Hi all,
>
> [root@bric-ovirt-1 ~]# cat /etc/*release*
> CentOS Linux release 7.7.1908 (Core)
> [root@bric-ovirt-1 ~]# yum info ovirt-engine-appliance
> Installed Packages
> Name: ovirt-engine-appliance
> Arch: x86_64
> Version : 4.3
> Release : 20191121.1.el7
> Size: 1.0 G
> Repo: installed
> From repo   : ovirt-4.3
>
> Same situation as https://bugzilla.redhat.com/show_bug.cgi?id=1787267. The
> error message almost everywhere is some red herring message about ansible
>

You are right that it's misleading, but were the errors below the only ones
you got from ansible?


> [ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts":
> []}, "attempts": 120, "changed": false, "deprecations": [{"msg": "The
> 'ovirt_host_facts' module has been renamed to 'ovirt_host_info', and the
> renamed one no longer returns ansible_facts", "version": "2.13"}]}
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Notify the user about a
> failure]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
> system may not be provisioned according to the playbook results: please
> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: please check the logs for the
> issue, fix accordingly or re-deploy from scratch.
>   Log file is located at
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200126170315-req4qb.log
>
> But the "real" problem seems to be SSH related, as you can see below
>

Indeed


> [root@bric-ovirt-1 ovirt-engine]# pwd
>
> /var/log/ovirt-hosted-engine-setup/engine-logs-2020-01-26T17:19:28Z/ovirt-engine
> [root@bric-ovirt-1 ovirt-engine]# grep -i error engine.log
> 2020-01-26 17:26:50,178Z ERROR
> [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-1)
> [2341fd23-f0c7-4f1c-ad48-88af20c2d04b] Failed to establish session with
> host 'bric-ovirt-1.corp.modmed.com': SSH session closed during connection
> 'r...@bric-ovirt-1.corp.modmed.com'
>

Please check/share the entire portion of engine.log, from where it starts
to try to ssh til it gives up.
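
Something along these lines should pull out the relevant part (path as
quoted below; 'ssh' is just a rough filter):

# grep -n -i 'ssh' engine.log | head -20

and then include a few dozen lines around the first hit.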


> 2020-01-26 17:26:50,205Z ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
> task-1) [] Operation Failed: [Cannot add Host. Connecting to host via SSH
> has failed, verify that the host is reachable (IP address, routable address
> etc.) You may refer to the engine.log file for further details.]
>
> The funny thing is that the engine can indeed ssh to bric-ovirt-1
> (physical host). See below
>
> [root@bric-ovirt-1 ovirt-hosted-engine-setup]# cat /etc/hosts
> 192.168.1.52 bric-ovirt-engine.corp.modmed.com # temporary entry added by
> hosted-engine-setup for the bootstrap VM
> 127.0.0.1   localhost localhost.localdomain localhost4
> localhost4.localdomain4
> #::1 localhost localhost.localdomain localhost6
> localhost6.localdomain6
> 10.130.0.50 bric-ovirt-engine bric-ovirt-engine.corp.modmed.com
> 10.130.0.51 bric-ovirt-1 bric-ovirt-1.corp.modmed.com
> 10.130.0.52 bric-ovirt-2 bric-ovirt-2.corp.modmed.com
> 10.130.0.53 bric-ovirt-3 bric-ovirt-3.corp.modmed.com
> 192.168.0.1 bric-ovirt-1gluster bric-ovirt-1gluster.corp.modmed.com
> 192.168.0.2 bric-ovirt-2gluster bric-ovirt-2gluster.corp.modmed.com
> 192.168.0.3 bric-ovirt-3gluster bric-ovirt-3gluster.corp.modmed.com
> [root@bric-ovirt-1 ovirt-hosted-engine-setup]#
>
> [root@bric-ovirt-1 ~]# ssh 192.168.1.52
> Last login: Sun Jan 26 17:55:20 2020 from 192.168.1.1
> [root@bric-ovirt-engine ~]#
> [root@bric-ovirt-engine ~]#
> [root@bric-ovirt-engine ~]# ssh bric-ovirt-1
> Password:
> Password:
> Last failed login: Sun Jan 26 18:17:16 UTC 2020 from 192.168.1.52 on
> ssh:notty
> There was 1 failed login attempt since the last successful login.
> Last login: Sun Jan 26 18:16:46 2020
> ###
> # UNAUTHORIZED ACCESS TO THIS SYSTEM IS PROHIBITED#
> # #
> # This system is the property of Modernizing Medicine, Inc.   #
> # It is for authorized Company business purposes only.#
> # All connections are monitored and recorded. #
> # Disconnect IMMEDIATELY if you are not an authorized user!   #
> ###
> [root@bric-ovirt-1 ~]#
> [root@bric-ovirt-1 ~]#
> [root@bric-ovirt-1 ~]# exit
> logout
> Connection to bric-ovirt-1 closed.
> [root@bric-ovirt-engine ~]#
> [root@bric-ovirt-engine ~]#
> [root@bric-ovirt-engine ~]# ssh bric-ovirt-1.corp.modmed.com
>