[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2022-03-10 Thread Shaked
Hi!
Did you ever resolve this error?


[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-07-02 Thread charuraks
I just aborted the experiment on a virtual machine and am trying on a physical
machine instead. There is some issue on the VM: it gets stuck at the "Get local
VM IP" step, maybe because the virtual machine doesn't support the bridge driver.
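
More generally, when a deployment hangs at "Get local VM IP", a first check on
the host is whether libvirt's default NAT network (virbr0) is up and actually
handing out DHCP leases. A rough sketch, assuming a standard libvirt setup:

# is the 'default' libvirt network defined and active?
virsh -r net-list --all
# its address range (normally 192.168.122.0/24 on virbr0)
virsh -r net-dumpxml default
# the deploy task polls this table; an empty result means the bootstrap VM never got a lease
virsh -r net-dhcp-leases default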


[ovirt-users] Re: Re : [ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-29 Thread fsoyer

At this time the engine (and the cluster) is up. No problem after activating
gluster and creating the volumes, then finishing the install in the screen session.
So...

 
On Friday, June 29, 2018 12:32 CEST, "fsoyer" wrote:

Hi,
I must say it: I'm -totally- lost.
To try to find a reason for this error, I've re-installed the first host from
scratch - CentOS 7.5-1804, oVirt 4.2.3-1, gluster 3.12.9.
The first attempt was made with only em1 declared. Result = SUCCESS: the install
passed "Get local VM IP", then went through "Wait for the host to be up" without
difficulty and waited at "Please specify the storage...".
At that point I even noticed that I had forgotten to stop/disable NetworkManager,
which had no impact!
So: I re-installed the host from scratch (yes, sometimes I'm a fool) to be
absolutely sure that there was no problem coming from the preceding install. Now
I declared em1 (10.0.0.230) and em2 (10.0.0.229, without gateway or DNS, for a
future vmnetwork). NetworkManager off and disabled. Result = SUCCESS... Oo
OK: re-install the host!! Now I declared, as I did some days ago, em1, em2 and
bond0 (em3+em4 with IP 192.168.0.30). Result: SUCCESS!!! Oo

So I'm unable to say what happened on Tuesday. Actually I see only two differences:
- gluster is not active (I didn't configure it, to go faster)
- the version of oVirt (ovirt-release, ovirt-host, appliance...) has slightly
changed.

I've no more time for another attempt re-installing the host(s) with gluster
activated; I must now move on, as I need an operational system for other tasks
with VMs this afternoon. So I leave the first host waiting for the end of the
install in a screen session, I re-install the 2 other hosts and activate gluster
and the volumes on the 3 nodes. Then I'll finish the install on the gluster volume.
I'll tell you if this finally works, but I hope so!
However, I'm in doubt about this problem. I have no explanation of what happened
on Tuesday, which is really annoying... Maybe you have the ability to test on the
same configuration (3 hosts with 2 NICs on the same network for ovirtmgmt and a
future vmnetwork, and gluster on a separate network) to try to understand?

Thank you for the time spent.
Frank

PS: to answer your question: yes, on Tuesday I ran
ovirt-hosted-engine-cleanup between each attempt.


On Thursday, June 28, 2018 16:26 CEST, Simone Tiraboschi wrote:

On Wed, Jun 27, 2018 at 5:48 PM fso...@systea.fr wrote:
Hi again,
In fact, the hour in the file is exactly 2 hours earlier; I guess a timezone
problem (in the install process?), as the file itself is correctly timestamped
at 11:17am (the correct hour here in France). So the messages are in sync.

Yes, sorry, my fault. From the logs I don't see anything strange. Can you please
try again on your environment and connect to the bootstrap VM via virsh console
or VNC to check what's happening there? Did you also run
ovirt-hosted-engine-cleanup between one attempt and the next?

-------- Original message --------
Subject: Re: [ovirt-users] Re: Install hosted-engine - Task Get local VM IP
failed
From: Simone Tiraboschi
To: fso...@systea.fr
Cc: users

Hi,
HostedEngineLocal was started at 2018-06-26 09:17:26 but /var/log/messages
starts only at Jun 26 11:02:32. Can you please reattach it for the relevant time
frame?
On Wed, Jun 27, 2018 at 10:54 AM fsoyer wrote:
Hi Simone,
here are the relevant parts of messages and the engine install log (there was
only this file in /var/log/libvirt/qemu).

Thanks for your time.

Frank

On Tuesday, June 26, 2018 11:43 CEST, Simone Tiraboschi wrote:

On Tue, Jun 26, 2018 at 11:39 AM fsoyer wrote:
Well,
unfortunately, it was a "false positive". This morning I tried again, with the
idea that at the moment the deploy asks for the final destination for the
engine, I would restart bond0 + gluster + the engine volume at that moment.
Re-launching the deploy on the second "fresh" host (the first one, with all the
errors yesterday, was left in a doubtful state) with em2 and gluster+bond0 off:
# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group defa

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-29 Thread Simone Tiraboschi
On Fri, Jun 29, 2018 at 12:32 PM fsoyer  wrote:

> Hi,
>

hi Frank,
thanks for the report.


> I must say it: I'm -totally- lost.
> To try to find a reason for this error, I've re-installed the first host
> from scratch - CentOS 7.5-1804, oVirt 4.2.3-1, gluster 3.12.9.
> The first attempt was made with only em1 declared. Result = SUCCESS: the
> install passed "Get local VM IP", then went through "Wait for the host to be up"
> without difficulty and waited at "Please specify the storage...".
> At that point I even noticed that I had forgotten to stop/disable
> NetworkManager, which had no impact!
> So: I re-installed the host from scratch (yes, sometimes I'm a fool) to be
> absolutely sure that there was no problem coming from the preceding install.
> Now I declared em1 (10.0.0.230) and em2 (10.0.0.229, without gateway or
> DNS, for a future vmnetwork). NetworkManager off and disabled. Result =
> SUCCESS... Oo
> OK: re-install the host!! Now I declared, as I did some days ago, em1, em2
> and bond0 (em3+em4 with IP 192.168.0.30). Result: SUCCESS!!! Oo
>
> So I'm unable to say what happened on Tuesday. Actually I see only two
> differences:
> - gluster is not active (I didn't configure it, to go faster)
>

Did you try with gdeploy from the Cockpit web UI?


> - the version of oVirt (ovirt-release, ovirt-host, appliance...) has
> slightly changed.
>

AFAIK we didn't have any specific fix for that kind of issue in recent
weeks.


>
> I've no more time for another attempt re-installing the host(s) with
> gluster activated; I must now move on, as I need an operational system for
> other tasks with VMs this afternoon. So I leave the first host waiting for
> the end of the install in a screen session, I re-install the 2 other hosts
> and activate gluster and the volumes on the 3 nodes. Then I'll finish the
> install on the gluster volume.
> I'll tell you if this finally works, but I hope so!
> However, I'm in doubt about this problem. I have no explanation of what
> happened on Tuesday, which is really annoying...


Yes, the same for me.


> Maybe you have the ability to test on the same configuration (3 hosts with 2
> NICs on the same network for ovirtmgmt and a future vmnetwork, and gluster
> on a separate network) to try to understand?
>

In the past months we had a lot of successful tests also on complex network
environments; I'll try to reproduce on something really close to your env.


>
> Thank you for the time spent.
> Frank
>
> PS: to answer your question: yes, on Tuesday I
> ran ovirt-hosted-engine-cleanup between each attempt.
>
>
> On Thursday, June 28, 2018 16:26 CEST, Simone Tiraboschi
> wrote:
>
>
>
> On Wed, Jun 27, 2018 at 5:48 PM fso...@systea.fr  wrote:
>
>> Hi again,
>> In fact, the hour in the file is exactly 2 hours earlier; I guess a timezone
>> problem (in the install process?), as the file itself is correctly
>> timestamped at 11:17am (the correct hour here in France). So the messages
>> are in sync.
>>
>>
>
> Yes, sorry, my fault.
> From the logs I don't see anything strange.
>
> Can you please try again on your environment and connect to the bootstrap
> VM via virsh console or VNC to check what's happening there?
>
> Did you also run ovirt-hosted-engine-cleanup between one attempt and the
> next?
>
>
>>
>>  Original message 
>> Subject: Re: [ovirt-users] Re: Install hosted-engine - Task Get local VM
>> IP failed
>> From: Simone Tiraboschi
>> To: fso...@systea.fr
>> Cc: users
>>
>>
>>
>> Hi,
>> HostedEngineLocal was started at 2018-06-26 09:17:26 but
>> /var/log/messages starts only at Jun 26 11:02:32.
>> Can you please reattach it for the relevant time frame?
>>
>> On Wed, Jun 27, 2018 at 10:54 AM fsoyer  wrote:
>>
>>> Hi Simone,
>>> here are the relevant parts of messages and the engine install log (there
>>> was only this file in /var/log/libvirt/qemu).
>>>
>>> Thanks for your time.
>>>
>>> Frank
>>>
>>>
>>> On Tuesday, June 26, 2018 11:43 CEST, Simone Tiraboschi <
>>> stira...@redhat.com> wrote:
>>>
>>>
>>>
>>> On Tue, Jun 26, 2018 at 11:39 AM fsoyer  wrote:
>>>
>>>> Well,
>>>> unfortunately, it was a "false positive". This morning I tried again,
>>>> with the idea that at the moment the deploy asks for the final
>>>> destination for the engine, I would restart bond0 + gluster + the engine
>>>> volume at that moment.
>>>> Re-launching the deploy on the second "fresh"

[ovirt-users] Re: Re : [ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-29 Thread fsoyer

Hi,
I must say it: I'm -totally- lost.
To try to find a reason for this error, I've re-installed the first host from
scratch - CentOS 7.5-1804, oVirt 4.2.3-1, gluster 3.12.9.
The first attempt was made with only em1 declared. Result = SUCCESS: the install
passed "Get local VM IP", then went through "Wait for the host to be up" without
difficulty and waited at "Please specify the storage...".
At that point I even noticed that I had forgotten to stop/disable NetworkManager,
which had no impact!
So: I re-installed the host from scratch (yes, sometimes I'm a fool) to be
absolutely sure that there was no problem coming from the preceding install. Now
I declared em1 (10.0.0.230) and em2 (10.0.0.229, without gateway or DNS, for a
future vmnetwork). NetworkManager off and disabled. Result = SUCCESS... Oo
OK: re-install the host!! Now I declared, as I did some days ago, em1, em2 and
bond0 (em3+em4 with IP 192.168.0.30). Result: SUCCESS!!! Oo

So I'm unable to say what happened on Tuesday. Actually I see only two differences:
- gluster is not active (I didn't configure it, to go faster)
- the version of oVirt (ovirt-release, ovirt-host, appliance...) has slightly
changed.

I've no more time for another attempt re-installing the host(s) with gluster
activated; I must now move on, as I need an operational system for other tasks
with VMs this afternoon. So I leave the first host waiting for the end of the
install in a screen session, I re-install the 2 other hosts and activate gluster
and the volumes on the 3 nodes. Then I'll finish the install on the gluster volume.
I'll tell you if this finally works, but I hope so!
However, I'm in doubt about this problem. I have no explanation of what happened
on Tuesday, which is really annoying... Maybe you have the ability to test on the
same configuration (3 hosts with 2 NICs on the same network for ovirtmgmt and a
future vmnetwork, and gluster on a separate network) to try to understand?

Thank you for the time spent.
Frank

PS: to answer your question: yes, on Tuesday I ran
ovirt-hosted-engine-cleanup between each attempt.
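
For anyone retrying the same way, the clean-retry sequence used here looks
roughly like this (a sketch; the screen session name is just an example):

# tear down the leftovers of the failed attempt
/usr/sbin/ovirt-hosted-engine-cleanup
# re-run the deployment inside a named screen session so it survives a lost SSH connection
screen -S he-deploy
hosted-engine --deploy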


On Thursday, June 28, 2018 16:26 CEST, Simone Tiraboschi wrote:

On Wed, Jun 27, 2018 at 5:48 PM fso...@systea.fr wrote:
Hi again,
In fact, the hour in the file is exactly 2 hours earlier; I guess a timezone
problem (in the install process?), as the file itself is correctly timestamped
at 11:17am (the correct hour here in France). So the messages are in sync.

Yes, sorry, my fault. From the logs I don't see anything strange. Can you please
try again on your environment and connect to the bootstrap VM via virsh console
or VNC to check what's happening there? Did you also run
ovirt-hosted-engine-cleanup between one attempt and the next?

-------- Original message --------
Subject: Re: [ovirt-users] Re: Install hosted-engine - Task Get local VM IP
failed
From: Simone Tiraboschi
To: fso...@systea.fr
Cc: users

Hi,
HostedEngineLocal was started at 2018-06-26 09:17:26 but /var/log/messages
starts only at Jun 26 11:02:32. Can you please reattach it for the relevant time
frame?
On Wed, Jun 27, 2018 at 10:54 AM fsoyer wrote:
Hi Simone,
here are the relevant parts of messages and the engine install log (there was
only this file in /var/log/libvirt/qemu).

Thanks for your time.

Frank

On Tuesday, June 26, 2018 11:43 CEST, Simone Tiraboschi wrote:

On Tue, Jun 26, 2018 at 11:39 AM fsoyer wrote:
Well,
unfortunately, it was a "false positive". This morning I tried again, with the
idea that at the moment the deploy asks for the final destination for the
engine, I would restart bond0 + gluster + the engine volume at that moment.
Re-launching the deploy on the second "fresh" host (the first one, with all the
errors yesterday, was left in a doubtful state) with em2 and gluster+bond0 off:
# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state DOWN group 
default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff

# ip r
default via 10.0.1.254 dev em1 
10.

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-28 Thread Simone Tiraboschi
On Wed, Jun 27, 2018 at 5:48 PM fso...@systea.fr  wrote:

> Hi again,
> In fact, the hour in the file is exactly 2 hours earlier; I guess a timezone
> problem (in the install process?), as the file itself is correctly
> timestamped at 11:17am (the correct hour here in France). So the messages
> are in sync.
>
>
Yes, sorry, my fault.
From the logs I don't see anything strange.

Can you please try again on your environment and connect to the bootstrap
VM via virsh console or VNC to check what's happening there?

Did you also run ovirt-hosted-engine-cleanup between one attempt and the
next?
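
For reference, while the deployment is waiting on "Get local VM IP" the
bootstrap VM can usually be inspected from the host like this (a sketch,
assuming the local domain is still listed as HostedEngineLocal; on an oVirt
host virsh may prompt for libvirt SASL credentials):

# the bootstrap VM shows up as HostedEngineLocal while the deploy is running
virsh -r list --all
# attach to its serial console (exit with Ctrl+])
virsh console HostedEngineLocal
# or find the VNC display to point a VNC client at
virsh -r vncdisplay HostedEngineLocal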


>
>  Original message ----
> Subject: Re: [ovirt-users] Re: Install hosted-engine - Task Get local VM IP
> failed
> From: Simone Tiraboschi
> To: fso...@systea.fr
> Cc: users
>
>
> Hi,
> HostedEngineLocal was started at 2018-06-26 09:17:26 but /var/log/messages
> starts only at Jun 26 11:02:32.
> Can you please reattach it for the relevant time frame?
>
> On Wed, Jun 27, 2018 at 10:54 AM fsoyer  wrote:
>
>> Hi Simone,
>> here are the relevant parts of messages and the engine install log (there
>> was only this file in /var/log/libvirt/qemu).
>>
>> Thanks for your time.
>>
>> Frank
>>
>>
>> On Tuesday, June 26, 2018 11:43 CEST, Simone Tiraboschi <
>> stira...@redhat.com> wrote:
>>
>>
>>
>> On Tue, Jun 26, 2018 at 11:39 AM fsoyer  wrote:
>>
>>> Well,
>>> unfortunately, it was a "false positive". This morning I tried again,
>>> with the idea that at the moment the deploy asks for the final
>>> destination for the engine, I would restart bond0 + gluster + the engine
>>> volume at that moment.
>>> Re-launching the deploy on the second "fresh" host (the first one, with
>>> all the errors yesterday, was left in a doubtful state) with em2 and
>>> gluster+bond0 off:
>>>
>>> # ip a
>>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>>> group default qlen 1000
>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>> inet 127.0.0.1/8 scope host lo
>>>valid_lft forever preferred_lft forever
>>> inet6 ::1/128 scope host
>>>valid_lft forever preferred_lft forever
>>> 2: em1:  mtu 1500 qdisc mq state UP
>>> group default qlen 1000
>>> link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
>>> inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
>>>valid_lft forever preferred_lft forever
>>> inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
>>>valid_lft forever preferred_lft forever
>>> 3: em2:  mtu 1500 qdisc mq state DOWN group default
>>> qlen 1000
>>> link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
>>> 4: em3:  mtu 1500 qdisc mq state DOWN group default
>>> qlen 1000
>>> link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
>>> 5: em4:  mtu 1500 qdisc mq state DOWN group default
>>> qlen 1000
>>> link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
>>> 6: bond0:  mtu 9000 qdisc noqueue state DOWN
>>> group default qlen 1000
>>> link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
>>>
>>> # ip r
>>> default via 10.0.1.254 dev em1
>>> 10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
>>> 169.254.0.0/16 dev em1 scope link metric 1002
>>>
>>> ... does NOT work this morning
>>>
>>> [ INFO  ] TASK [Get local VM IP]
>>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
>>> true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32
>>> | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end":
>>> "2018-06-26 11:26:07.581706", "rc": 0, "start": "2018-06-26
>>> 11:26:07.498119", "stderr": "", "stderr_lines": [], "stdout": "",
>>> "stdout_lines": []}
>>>
>>> I'm sure that the network was the same yesterday when my attempt finally
>>> passed the "get local vm ip". Why not today?
>>> After the error, the network was :
>>>
>>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>>> group default qlen 1000
>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>> inet 127.0.0.1/8 scope host lo
>>>valid_lft forever preferred_lft forever
>>> inet6 ::1/128 scope host

[ovirt-users]Re: Install hosted-engine - Task Get local VM IP failed

2018-06-27 Thread fso...@systea.fr
Hi again,
In fact, the hour in the file is exactly 2 hours earlier; I guess a timezone
problem (in the install process?), as the file itself is correctly timestamped
at 11:17am (the correct hour here in France). So the messages are in sync.

-------- Original message --------
Subject: Re: [ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed
From: Simone Tiraboschi
To: fso...@systea.fr
Cc: users

Hi,
HostedEngineLocal was started at 2018-06-26 09:17:26 but /var/log/messages
starts only at Jun 26 11:02:32. Can you please reattach it for the relevant
time frame?

On Wed, Jun 27, 2018 at 10:54 AM fsoyer wrote:
Hi Simone,
here are the relevant parts of messages and the engine install log (there was
only this file in /var/log/libvirt/qemu).
Thanks for your time.
Frank

On Tuesday, June 26, 2018 11:43 CEST, Simone Tiraboschi wrote:

On Tue, Jun 26, 2018 at 11:39 AM fsoyer wrote:
Well,
unfortunately, it was a "false positive". This morning I tried again, with the
idea that at the moment the deploy asks for the final destination for the
engine, I would restart bond0 + gluster + the engine volume at that moment.
Re-launching the deploy on the second "fresh" host (the first one, with all the
errors yesterday, was left in a doubtful state) with em2 and gluster+bond0 off:

# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
       valid_lft forever preferred_lft forever
3: em2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0: mtu 9000 qdisc noqueue state DOWN group default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff

# ip r
default via 10.0.1.254 dev em1
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
169.254.0.0/16 dev em1 scope link metric 1002

... does NOT work this morning

[ INFO  ] TASK [Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end": "2018-06-26 11:26:07.581706", "rc": 0, "start": "2018-06-26 11:26:07.498119", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

I'm sure that the network was the same yesterday when my attempt finally passed
the "get local vm ip". Why not today?
After the error, the network was:

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
       valid_lft forever preferred_lft forever
3: em2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0: mtu 9000 qdisc noqueue state DOWN group default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
7: virbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
9: vnet0: mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:01:c6:32 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe01:c632/64 scope link
       valid_lft forever preferred_lft forever

# ip r
default via 10.0.1.254 dev em1
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
169.254.0.0/16 dev em1 scope link metric 1002
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

So, finally, I hav

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-27 Thread Simone Tiraboschi
Hi,
HostedEngineLocal was started at 2018-06-26 09:17:26 but /var/log/messages
starts only at Jun 26 11:02:32.
Can you please reattach it for the relevant time frame?
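
On the 2-hour offset: comparing the host clock/timezone and pulling the journal
for the exact window is usually enough (a sketch; adjust the dates to the
failing run):

# time zone and NTP status of the host
timedatectl
# system log restricted to the window around the HostedEngineLocal start
journalctl --since "2018-06-26 09:00:00" --until "2018-06-26 12:00:00" > messages-20180626.txt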

On Wed, Jun 27, 2018 at 10:54 AM fsoyer  wrote:

> Hi Simone,
> here are the relevant parts of messages and the engine install log (there
> was only this file in /var/log/libvirt/qemu).
>
> Thanks for your time.
>
> Frank
>
>
> On Tuesday, June 26, 2018 11:43 CEST, Simone Tiraboschi
> wrote:
>
>
>
> On Tue, Jun 26, 2018 at 11:39 AM fsoyer  wrote:
>
>> Well,
>> unfortunately, it was a "false positive". This morning I tried again, with
>> the idea that at the moment the deploy asks for the final destination
>> for the engine, I would restart bond0 + gluster + the engine volume at that
>> moment.
>> Re-launching the deploy on the second "fresh" host (the first one, with
>> all the errors yesterday, was left in a doubtful state) with em2 and
>> gluster+bond0 off:
>>
>> # ip a
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
>> default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: em1:  mtu 1500 qdisc mq state UP
>> group default qlen 1000
>> link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
>> inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
>>valid_lft forever preferred_lft forever
>> inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
>>valid_lft forever preferred_lft forever
>> 3: em2:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
>> 4: em3:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
>> 5: em4:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
>> 6: bond0:  mtu 9000 qdisc noqueue state DOWN
>> group default qlen 1000
>> link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
>>
>> # ip r
>> default via 10.0.1.254 dev em1
>> 10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
>> 169.254.0.0/16 dev em1 scope link metric 1002
>>
>> ... does NOT work this morning
>>
>> [ INFO  ] TASK [Get local VM IP]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
>> true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32
>> | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end":
>> "2018-06-26 11:26:07.581706", "rc": 0, "start": "2018-06-26
>> 11:26:07.498119", "stderr": "", "stderr_lines": [], "stdout": "",
>> "stdout_lines": []}
>>
>> I'm sure that the network was the same yesterday when my attempt finally
>> passed the "get local vm ip". Why not today?
>> After the error, the network was :
>>
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
>> default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: em1:  mtu 1500 qdisc mq state UP
>> group default qlen 1000
>> link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
>> inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
>>valid_lft forever preferred_lft forever
>> inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
>>valid_lft forever preferred_lft forever
>> 3: em2:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
>> 4: em3:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
>> 5: em4:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
>> 6: bond0:  mtu 9000 qdisc noqueue state DOWN
>> group default qlen 1000
>> link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
>> 7: virbr0:  mtu 1500 qdisc noqueue state
>> UP group default qlen 1000
>> link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>>valid_lft forever preferred_lft forever
>> 8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master
>> virbr0 state DOWN group default qlen 1000
>> link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
>> 9: vnet0:  mtu 1500 qdisc pfifo_fast
>> master virbr0 state UNKNOWN group default qlen 1000
>> link/ether fe:16:3e:01:c6:32 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc16:3eff:fe01:c632/64 scope link
>>valid_lft forever preferred_lft forever
>>
>> # ip r
>> default via 10.0.1.254 dev em1
>> 10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
>> 169.254.0.0/16 dev em1 scope link metric 1002
>> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
>>
>>
>>
>> So, finally, I have no idea why this happens :(((
>
>
> Can you pl

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-27 Thread fsoyer

Hi Simone,
here are the relevant parts of messages and the engine install log (there was
only this file in /var/log/libvirt/qemu).

Thanks for your time.

Frank
On Tuesday, June 26, 2018 11:43 CEST, Simone Tiraboschi wrote:

On Tue, Jun 26, 2018 at 11:39 AM fsoyer wrote:
Well,
unfortunately, it was a "false positive". This morning I tried again, with the
idea that at the moment the deploy asks for the final destination for the
engine, I would restart bond0 + gluster + the engine volume at that moment.
Re-launching the deploy on the second "fresh" host (the first one, with all the
errors yesterday, was left in a doubtful state) with em2 and gluster+bond0 off:
# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state DOWN group 
default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff

# ip r
default via 10.0.1.254 dev em1 
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227 
169.254.0.0/16 dev em1 scope link metric 1002

... does NOT work this morning
[ INFO  ] TASK [Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, 
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32 | awk '{ 
print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end": "2018-06-26 
11:26:07.581706", "rc": 0, "start": "2018-06-26 11:26:07.498119", "stderr": "", 
"stderr_lines": [], "stdout": "", "stdout_lines": []}I'm sure that the network 
was the same yesterday when my attempt finally pass the "get local vm ip". Why 
not today ?
After the error, the network was :
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state DOWN group 
default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
7: virbr0:  mtu 1500 qdisc noqueue state UP 
group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 
state DOWN group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
9: vnet0:  mtu 1500 qdisc pfifo_fast master 
virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:01:c6:32 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe01:c632/64 scope link 
       valid_lft forever preferred_lft forever

# ip r
default via 10.0.1.254 dev em1 
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227 
169.254.0.0/16 dev em1 scope link metric 1002 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 
 
So, finally, I have no idea why this happens :(((

Can you please attach /var/log/messages and /var/log/libvirt/qemu/* ?

On Tuesday, June 26, 2018 09:21 CEST, Simone Tiraboschi wrote:
On Mon, Jun 25, 2018 at 6:32 PM fsoyer wrote:
Well, answering to myself with more information.
Thinking that the network was part of the problem, I tried to stop the gluster
volumes, stop gluster on the host, and stop bond0.
So, the host now had just em1 with one IP.
And... The winner is... Yes: the install passed the "[Get local VM IP]" task and
continued!!

I hit ctrl-c, restart the bond0, res

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-26 Thread Simone Tiraboschi
On Tue, Jun 26, 2018 at 11:39 AM fsoyer  wrote:

> Well,
> unfortunately, it was a "false positive". This morning I tried again, with
> the idea that at the moment the deploy asks for the final destination
> for the engine, I would restart bond0 + gluster + the engine volume at that moment.
> Re-launching the deploy on the second "fresh" host (the first one, with all
> the errors yesterday, was left in a doubtful state) with em2 and gluster+bond0 off:
>
> # ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
> default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: em1:  mtu 1500 qdisc mq state UP group
> default qlen 1000
> link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
> inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
>valid_lft forever preferred_lft forever
> inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
>valid_lft forever preferred_lft forever
> 3: em2:  mtu 1500 qdisc mq state DOWN group default
> qlen 1000
> link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
> 4: em3:  mtu 1500 qdisc mq state DOWN group default
> qlen 1000
> link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
> 5: em4:  mtu 1500 qdisc mq state DOWN group default
> qlen 1000
> link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
> 6: bond0:  mtu 9000 qdisc noqueue state DOWN
> group default qlen 1000
> link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
>
> # ip r
> default via 10.0.1.254 dev em1
> 10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
> 169.254.0.0/16 dev em1 scope link metric 1002
>
> ... does NOT work this morning
>
> [ INFO  ] TASK [Get local VM IP]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true,
> "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32 | awk
> '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end":
> "2018-06-26 11:26:07.581706", "rc": 0, "start": "2018-06-26
> 11:26:07.498119", "stderr": "", "stderr_lines": [], "stdout": "",
> "stdout_lines": []}
>
> I'm sure that the network was the same yesterday when my attempt finally
> passed the "get local vm ip". Why not today?
> After the error, the network was :
>
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
> default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: em1:  mtu 1500 qdisc mq state UP group
> default qlen 1000
> link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
> inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
>valid_lft forever preferred_lft forever
> inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
>valid_lft forever preferred_lft forever
> 3: em2:  mtu 1500 qdisc mq state DOWN group default
> qlen 1000
> link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
> 4: em3:  mtu 1500 qdisc mq state DOWN group default
> qlen 1000
> link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
> 5: em4:  mtu 1500 qdisc mq state DOWN group default
> qlen 1000
> link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
> 6: bond0:  mtu 9000 qdisc noqueue state DOWN
> group default qlen 1000
> link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
> 7: virbr0:  mtu 1500 qdisc noqueue state
> UP group default qlen 1000
> link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
> inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>valid_lft forever preferred_lft forever
> 8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master
> virbr0 state DOWN group default qlen 1000
> link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
> 9: vnet0:  mtu 1500 qdisc pfifo_fast
> master virbr0 state UNKNOWN group default qlen 1000
> link/ether fe:16:3e:01:c6:32 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc16:3eff:fe01:c632/64 scope link
>valid_lft forever preferred_lft forever
>
> # ip r
> default via 10.0.1.254 dev em1
> 10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
> 169.254.0.0/16 dev em1 scope link metric 1002
> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
>
>
>
> So, finally, I have no idea why this happens :(((
>

Can you please attach /var/log/messages and /var/log/libvirt/qemu/* ?
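
A simple way to bundle exactly those files for the list (a sketch; the archive
name is arbitrary):

# collect the host-side logs covering the failed deployment
tar czf hosted-engine-debug-logs.tar.gz /var/log/messages /var/log/libvirt/qemu /var/log/ovirt-hosted-engine-setup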



>
>
> On Tuesday, June 26, 2018 09:21 CEST, Simone Tiraboschi
> wrote:
>
>
>
> On Mon, Jun 25, 2018 at 6:32 PM fsoyer  wrote:
>
>> Well, answering to myself with more information.
>> Thinking that the network was part of the problem, I tried to stop the
>> gluster volumes, stop gluster on the host, and stop bond0.
>> So, the host now had just em1 with one IP.
>> And... The winner is... Yes: the install passed the "[Get local VM IP]" task
>> and continued!!
>>
>> I hit ctrl-c, restarted the bond0, restarted the deploy: it crashed.

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-26 Thread fsoyer

Well,
unfortunately, it was a "false positive". This morning I tried again, with the
idea that at the moment the deploy asks for the final destination for the
engine, I would restart bond0 + gluster + the engine volume at that moment.
Re-launching the deploy on the second "fresh" host (the first one, with all the
errors yesterday, was left in a doubtful state) with em2 and gluster+bond0 off:
# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state DOWN group 
default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff

# ip r
default via 10.0.1.254 dev em1 
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227 
169.254.0.0/16 dev em1 scope link metric 1002

... does NOT work this morning
[ INFO  ] TASK [Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, 
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32 | awk '{ 
print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end": "2018-06-26 
11:26:07.581706", "rc": 0, "start": "2018-06-26 11:26:07.498119", "stderr": "", 
"stderr_lines": [], "stdout": "", "stdout_lines": []}I'm sure that the network 
was the same yesterday when my attempt finally pass the "get local vm ip". Why 
not today ?
After the error, the network was :
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state DOWN group 
default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
7: virbr0:  mtu 1500 qdisc noqueue state UP 
group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 
state DOWN group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
9: vnet0:  mtu 1500 qdisc pfifo_fast master 
virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:01:c6:32 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe01:c632/64 scope link 
       valid_lft forever preferred_lft forever

# ip r
default via 10.0.1.254 dev em1 
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227 
169.254.0.0/16 dev em1 scope link metric 1002 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 
 
So, finally, I have no idea why this happens :(((


On Tuesday, June 26, 2018 09:21 CEST, Simone Tiraboschi wrote:
On Mon, Jun 25, 2018 at 6:32 PM fsoyer wrote:
Well, answering to myself with more information.
Thinking that the network was part of the problem, I tried to stop the gluster
volumes, stop gluster on the host, and stop bond0.
So, the host now had just em1 with one IP.
And... The winner is... Yes: the install passed the "[Get local VM IP]" task and
continued!!

I hit ctrl-c, restarted the bond0, restarted the deploy: it crashed. So it seems
that more than one network is the problem. But! How do I install the engine on
gluster on a separate - bonded - jumbo-frame network in this case???

Can you reproduce this on your side?

Can you please attach the output of 'ip a' in both cases?
Frank
 

On Monday, June 25, 2018 16:50 CEST, "fsoyer" wrote:
  H

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-26 Thread Simone Tiraboschi
On Mon, Jun 25, 2018 at 6:32 PM fsoyer  wrote:

> Well, answering to myself with more information.
> Thinking that the network was part of the problem, I tried to stop the
> gluster volumes, stop gluster on the host, and stop bond0.
> So, the host now had just em1 with one IP.
> And... The winner is... Yes: the install passed the "[Get local VM IP]" task
> and continued!!
>
> I hit ctrl-c, restarted the bond0, restarted the deploy: it crashed. So it
> seems that more than one network is the problem. But! How do I install the
> engine on gluster on a separate - bonded - jumbo-frame network in this case???
>
> Can you reproduce this on your side?
>

Can you please attach the output of 'ip a' in both cases?
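
A quick way to capture comparable snapshots of the working and failing
configurations (a sketch):

# before launching the deployment
ip a > ip-a-before.txt; ip r > ip-r-before.txt
# after the failure (or the success)
ip a > ip-a-after.txt; ip r > ip-r-after.txt
# virbr0/vnet0 should be the only additions once the bootstrap VM is up
diff ip-a-before.txt ip-a-after.txt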


>
> Frank
>
>
>
>
> On Monday, June 25, 2018 16:50 CEST, "fsoyer" wrote:
>
>
>
>
> Hi staff,
> Installing a fresh ovirt - CentOS 7.5.1804 up to date, ovirt version :
> # rpm -qa | grep ovirt
> ovirt-hosted-engine-ha-2.2.11-1.el7.centos.noarch
> ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
> ovirt-host-dependencies-4.2.2-2.el7.centos.x86_64
> ovirt-vmconsole-1.0.5-4.el7.centos.noarch
> ovirt-provider-ovn-driver-1.2.10-1.el7.centos.noarch
> ovirt-hosted-engine-setup-2.2.20-1.el7.centos.noarch
> ovirt-engine-appliance-4.2-20180504.1.el7.centos.noarch
> python-ovirt-engine-sdk4-4.2.6-2.el7.centos.x86_64
> ovirt-host-deploy-1.7.3-1.el7.centos.noarch
> ovirt-release42-4.2.3.1-1.el7.noarch
> ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
> cockpit-ovirt-dashboard-0.11.24-1.el7.centos.noarch
> ovirt-setup-lib-1.1.4-1.el7.centos.noarch
> ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
> ovirt-host-4.2.2-2.el7.centos.x86_64
> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
>
> ON PHYSICAL SERVERS (not on VMware, why should I be ?? ;) I got exactly
> the same error :
> [ INFO  ] TASK [Get local VM IP]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true,
> "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:69:3a:c6 | awk
> '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.073313", "end":
> "2018-06-25 16:11:36.025277", "rc": 0, "start": "2018-06-25
> 16:11:35.951964", "stderr": "", "stderr_lines": [], "stdout": "",
> "stdout_lines": []}
> [ INFO  ] TASK [include_tasks]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Remove local vm dir]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [Notify the user about a failure]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
> system may not be provisioned according to the playbook results: please
> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
> [ INFO  ] Stage: Clean up
>
>
> I have 4 NICs:
> em1 10.0.0.230/8 is for ovirtmgmt, it has the gateway
> em2 10.0.0.229/8 is for a vmnetwork
> em3+em4 in bond0 192.168.0.30 are for gluster with jumbo frames; the volumes
> (ENGINE, ISO, EXPORT, DATA) are up and operational.
>
> I tried to stop em2 (ONBOOT=No and restarted the network), so the network is
> now:
> # ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
> default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: em1:  mtu 1500 qdisc mq state UP group
> default qlen 1000
> link/ether e0:db:55:15:eb:70 brd ff:ff:ff:ff:ff:ff
> inet 10.0.0.230/8 brd 10.255.255.255 scope global em1
>valid_lft forever preferred_lft forever
> inet6 fe80::e2db:55ff:fe15:eb70/64 scope link
>valid_lft forever preferred_lft forever
> 3: em2:  mtu 1500 qdisc mq state DOWN group default
> qlen 1000
> link/ether e0:db:55:15:eb:71 brd ff:ff:ff:ff:ff:ff
> 4: em3:  mtu 9000 qdisc mq master
> bond0 state UP group default qlen 1000
> link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
> 5: em4:  mtu 9000 qdisc mq master
> bond0 state UP group default qlen 1000
> link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
> 6: bond0:  mtu 9000 qdisc noqueue
> state UP group default qlen 1000
> link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
> inet 192.168.0.30/24 brd 192.168.0.255 scope global bond0
>valid_lft forever preferred_lft forever
> inet6 fe80::e2db:55ff:fe15:eb72/64 scope link
>valid_lft forever preferred_lft forever
>
> # ip r
> default via 10.0.1.254 dev em1
> 10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.230
> 169.254.0.0/16 dev em1 scope link metric 1002
> 169.254.0.0/16 dev bond0 scope link metric 1006
> 192.168.0.0/24 dev bond0 proto kernel scope link src 192.168.0.30
>
> but same issue, after "/usr/sbin/ovirt-hosted-engine-cleanup" and
> restarting the deployment.
>
> NetworkManager was stopped and disabled at the node install, and it is
> still stopped.
> After the error, the network shows this after device 6 (bond0) :
> 7: virbr0:  mtu 1500 qdisc noqueue state
> UP group defaul

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-25 Thread fsoyer

Well, answering to myself with more information.
Thinking that the network was part of the problem, I tried to stop the gluster
volumes, stop gluster on the host, and stop bond0.
So, the host now had just em1 with one IP.
And... The winner is... Yes: the install passed the "[Get local VM IP]" task and
continued!!

I hit ctrl-c, restarted the bond0, restarted the deploy: it crashed. So it seems
that more than one network is the problem. But! How do I install the engine on
gluster on a separate - bonded - jumbo-frame network in this case???

Can you reproduce this on your side ?

Frank
 

On Monday, June 25, 2018 16:50 CEST, "fsoyer" wrote:
  Hi staff,
Installing a fresh ovirt - CentOS 7.5.1804 up to date, ovirt version :
# rpm -qa | grep ovirt
ovirt-hosted-engine-ha-2.2.11-1.el7.centos.noarch
ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
ovirt-host-dependencies-4.2.2-2.el7.centos.x86_64
ovirt-vmconsole-1.0.5-4.el7.centos.noarch
ovirt-provider-ovn-driver-1.2.10-1.el7.centos.noarch
ovirt-hosted-engine-setup-2.2.20-1.el7.centos.noarch
ovirt-engine-appliance-4.2-20180504.1.el7.centos.noarch
python-ovirt-engine-sdk4-4.2.6-2.el7.centos.x86_64
ovirt-host-deploy-1.7.3-1.el7.centos.noarch
ovirt-release42-4.2.3.1-1.el7.noarch
ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
cockpit-ovirt-dashboard-0.11.24-1.el7.centos.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
ovirt-host-4.2.2-2.el7.centos.x86_64
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch

ON PHYSICAL SERVERS (not on VMware, why should I be ?? ;) I got exactly the 
same error :
[ INFO  ] TASK [Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, 
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:69:3a:c6 | awk '{ 
print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.073313", "end": "2018-06-25 
16:11:36.025277", "rc": 0, "start": "2018-06-25 16:11:35.951964", "stderr": "", 
"stderr_lines": [], "stdout": "", "stdout_lines": []}
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system 
may not be provisioned according to the playbook results: please check the logs 
for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook
[ INFO  ] Stage: Clean up


I have 4 NICs:
em1 10.0.0.230/8 is for ovirtmgmt, it has the gateway
em2 10.0.0.229/8 is for a vmnetwork
em3+em4 in bond0 192.168.0.30 are for gluster with jumbo frames; the volumes
(ENGINE, ISO, EXPORT, DATA) are up and operational.

I tried to stop em2 (ONBOOT=No and restarted the network), so the network is
now:
# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:eb:70 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.230/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:eb70/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:eb:71 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 9000 qdisc mq master bond0 
state UP group default qlen 1000
    link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 9000 qdisc mq master bond0 
state UP group default qlen 1000
    link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state 
UP group default qlen 1000
    link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.30/24 brd 192.168.0.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:eb72/64 scope link 
       valid_lft forever preferred_lft forever

# ip r
default via 10.0.1.254 dev em1 
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.230 
169.254.0.0/16 dev em1 scope link metric 1002 
169.254.0.0/16 dev bond0 scope link metric 1006 
192.168.0.0/24 dev bond0 proto kernel scope link src 192.168.0.30 

but same issue, after "/usr/sbin/ovirt-hosted-engine-cleanup" and restarting
the deployment.
NetworkManager was stopped and disabled at the node install, and it is still
stopped.

After the error, the network shows this after device 6 (bond0):
7: virbr0:  mtu 1500 qdisc noqueue state UP 
group default qlen 1000
    link/ether 52:54:00:38:e0:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 
state DOWN group default qlen 1000
    link/ether 52:54:00:38:e0:5a brd 

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-25 Thread fsoyer

Hi staff,
Installing a fresh ovirt - CentOS 7.5.1804 up to date, ovirt version :
# rpm -qa | grep ovirt
ovirt-hosted-engine-ha-2.2.11-1.el7.centos.noarch
ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
ovirt-host-dependencies-4.2.2-2.el7.centos.x86_64
ovirt-vmconsole-1.0.5-4.el7.centos.noarch
ovirt-provider-ovn-driver-1.2.10-1.el7.centos.noarch
ovirt-hosted-engine-setup-2.2.20-1.el7.centos.noarch
ovirt-engine-appliance-4.2-20180504.1.el7.centos.noarch
python-ovirt-engine-sdk4-4.2.6-2.el7.centos.x86_64
ovirt-host-deploy-1.7.3-1.el7.centos.noarch
ovirt-release42-4.2.3.1-1.el7.noarch
ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
cockpit-ovirt-dashboard-0.11.24-1.el7.centos.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
ovirt-host-4.2.2-2.el7.centos.x86_64
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch

ON PHYSICAL SERVERS (not on VMware, why should I be ?? ;) I got exactly the 
same error :
[ INFO  ] TASK [Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, 
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:69:3a:c6 | awk '{ 
print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.073313", "end": "2018-06-25 
16:11:36.025277", "rc": 0, "start": "2018-06-25 16:11:35.951964", "stderr": "", 
"stderr_lines": [], "stdout": "", "stdout_lines": []}
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system 
may not be provisioned according to the playbook results: please check the logs 
for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook
[ INFO  ] Stage: Clean up
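
The failing task is only polling libvirt's DHCP leases for the bootstrap VM's
MAC address, so it can be re-run by hand on the host to see whether a lease
ever appears (a rough sketch; the exact file names under
/var/lib/libvirt/dnsmasq/ vary with the libvirt version):

# full lease table of the 'default' network (the playbook greps this for the VM's MAC)
virsh -r net-dhcp-leases default
# is a dnsmasq instance serving the default network at all?
ps -ef | grep [d]nsmasq
# dnsmasq's own lease/status files for the default network
ls -l /var/lib/libvirt/dnsmasq/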


I have 4 NICs:
em1 10.0.0.230/8 is for ovirtmgmt, it has the gateway
em2 10.0.0.229/8 is for a vmnetwork
em3+em4 in bond0 192.168.0.30 are for gluster with jumbo frames; the volumes
(ENGINE, ISO, EXPORT, DATA) are up and operational (a quick check of the bond is
sketched just below).

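For completeness, a bond carrying jumbo frames on CentOS 7 is usually declared 
with ifcfg files roughly like the following. This is only a sketch; the bond 
mode and options are assumptions, not values taken from this host:

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=active-backup miimon=100"   # assumed mode
  BOOTPROTO=none
  IPADDR=192.168.0.30
  PREFIX=24
  MTU=9000
  ONBOOT=yes
  NM_CONTROLLED=no

  # /etc/sysconfig/network-scripts/ifcfg-em3 (ifcfg-em4 identical apart from DEVICE)
  DEVICE=em3
  TYPE=Ethernet
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  MTU=9000
  ONBOOT=yes
  NM_CONTROLLED=no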
I tried to stop em2 (ONBOOT=no, then restarted the network), so the network 
currently looks like this:
# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:eb:70 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.230/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:eb70/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:eb:71 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 9000 qdisc mq master bond0 
state UP group default qlen 1000
    link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 9000 qdisc mq master bond0 
state UP group default qlen 1000
    link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state 
UP group default qlen 1000
    link/ether e0:db:55:15:eb:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.30/24 brd 192.168.0.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:eb72/64 scope link 
       valid_lft forever preferred_lft forever

# ip r
default via 10.0.1.254 dev em1 
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.230 
169.254.0.0/16 dev em1 scope link metric 1002 
169.254.0.0/16 dev bond0 scope link metric 1006 
192.168.0.0/24 dev bond0 proto kernel scope link src 192.168.0.30 

but same issue, after "/usr/sbin/ovirt-hosted-engine-cleanup" and restarting 
the deployment.
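The retry cycle between attempts is assumed to be the usual cleanup-then-deploy 
sequence, sketched here only for reference:

  # wipe the leftovers of the failed attempt, then start the deployment again
  /usr/sbin/ovirt-hosted-engine-cleanup
  hosted-engine --deploy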
NetworkManager was stopped and disabled at the node install, and it is still 
stopped. After the error, the network shows this after device 6 (bond0):
7: virbr0:  mtu 1500 qdisc noqueue state UP 
group default qlen 1000
    link/ether 52:54:00:38:e0:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 
state DOWN group default qlen 1000
    link/ether 52:54:00:38:e0:5a brd ff:ff:ff:ff:ff:ff
11: vnet0:  mtu 1500 qdisc pfifo_fast master 
virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:69:3a:c6 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe69:3ac6/64 scope link 
       valid_lft forever preferred_lft forever

I do not see ovirtmgmt... And I don't know if I can access the engine VM, as I 
don't have its IP :(
I tried pinging addresses after 192.168.122.1, but none of them answered, so I 
stopped at 122.10. The VM seems to be up (kvm process); the qemu-kvm process is 
taking 150% of CPU in "top"...

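Rather than pinging addresses one by one, the local VM's address, if it got one 
at all, can usually be read from libvirt or found with a quick sweep of the NAT 
subnet. A sketch, where the VM name is whatever "virsh -r list" reports:

  # list running domains, then ask libvirt for the addresses it knows about
  virsh -r list --all
  virsh domifaddr <vm-name>
  # neighbours seen on the NAT bridge
  ip neigh show dev virbr0
  # or ping-sweep the libvirt NAT range (requires nmap)
  nmap -sn 192.168.122.0/24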
I pasted the log here: https://pastebin.com/Ebzh1uEh

PLEASE! This issue seems to be recurrent

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-07 Thread florentl

Here are the logs:

[root@ovirt-host1 ~]# tail 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180601110217-4bsolz.log 
-n 30
2018-06-01 11:17:46,926+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/1/DIALOGOVEHOSTED_NOTIF/smtpServer=str:'localhost'
2018-06-01 11:17:46,926+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/1/DIALOGOVEHOSTED_NOTIF/sourceEmail=str:'root@localhost'
2018-06-01 11:17:46,927+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/1/ENGINE_ADMIN_PASSWORD=str:'**FILTERED**'
2018-06-01 11:17:46,927+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/1/OVEHOSTED_GATEWAY=str:'192.168.176.1'
2018-06-01 11:17:46,927+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/1/OVEHOSTED_VMENV_OVF_ANSIBLE=str:''
2018-06-01 11:17:46,927+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_bridge_if=str:'ens192'
2018-06-01 11:17:46,927+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_cpu=str:'4'
2018-06-01 11:17:46,928+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/1/ovehosted_vmenv_mac=str:'00:16:3e:6c:5a:91'
2018-06-01 11:17:46,928+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV QUESTION/1/ovehosted_vmenv_mem=str:'12000'
2018-06-01 11:17:46,928+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/2/CI_INSTANCE_HOSTNAME=str:'ovirt-vengine.local.univ-savoie.fr'
2018-06-01 11:17:46,928+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/2/CI_ROOT_PASSWORD=str:'**FILTERED**'
2018-06-01 11:17:46,928+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
QUESTION/2/ENGINE_ADMIN_PASSWORD=str:'**FILTERED**'
2018-06-01 11:17:46,928+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV SYSTEM/clockMaxGap=int:'5'
2018-06-01 11:17:46,929+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV SYSTEM/clockSet=bool:'False'
2018-06-01 11:17:46,929+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV 
SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin'
2018-06-01 11:17:46,929+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV SYSTEM/reboot=bool:'False'
2018-06-01 11:17:46,929+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV SYSTEM/rebootAllow=bool:'True'
2018-06-01 11:17:46,929+0200 DEBUG otopi.context 
context.dumpEnvironment:869 ENV SYSTEM/rebootDeferTime=int:'10'
2018-06-01 11:17:46,930+0200 DEBUG otopi.context 
context.dumpEnvironment:873 ENVIRONMENT DUMP - END
2018-06-01 11:17:46,932+0200 DEBUG otopi.context 
context._executeMethod:128 Stage pre-terminate METHOD 
otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2018-06-01 11:17:46,932+0200 DEBUG otopi.context 
context._executeMethod:135 condition False
2018-06-01 11:17:46,933+0200 INFO otopi.context context.runSequence:741 
Stage: Termination
2018-06-01 11:17:46,934+0200 DEBUG otopi.context context.runSequence:745 
STAGE terminate
2018-06-01 11:17:46,935+0200 DEBUG otopi.context 
context._executeMethod:128 Stage terminate METHOD 
otopi.plugins.gr_he_common.core.misc.Plugin._terminate
2018-06-01 11:17:46,935+0200 ERROR otopi.plugins.gr_he_common.core.misc 
misc._terminate:240 Hosted Engine deployment failed: please check the 
logs for the issue, fix accordingly or re-deploy from scratch.
2018-06-01 11:17:46,936+0200 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND Log file is located 
at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180601110217-4bsolz.log
2018-06-01 11:17:46,938+0200 DEBUG otopi.context 
context._executeMethod:128 Stage terminate METHOD 
otopi.plugins.otopi.dialog.human.Plugin._terminate
2018-06-01 11:17:46,945+0200 DEBUG otopi.context 
context._executeMethod:128 Stage terminate METHOD 
otopi.plugins.otopi.dialog.machine.Plugin._terminate
2018-06-01 11:17:46,945+0200 DEBUG otopi.context 
context._executeMethod:135 condition False
2018-06-01 11:17:46,947+0200 DEBUG otopi.context 
context._executeMethod:128 Stage terminate METHOD 
otopi.plugins.otopi.core.log.Plugin._terminate




File : 
ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20180601110459-yue15n.log


2018-06-01 11:17:39,632+0200 DEBUG var changed: host "localhost" var 
"ansible_play_batch" type "" value: "[]"
2018-06-01 11:17:39,634+0200 ERROR ansible failed {'status': 'FAILED', 
'ansible_type': 'task', 'ansible_task': u'Notify the user about a 
failure', 'ansible_result': u'type: \nstr: {\'msg\': 
u\'The system may not be provisioned according to the playbook results: 
please check the logs for the issue, fix accordingly or re-deploy from 
scratch.\\n\', \'changed\': False, \'_ansible_no_log\': False}\nrepr: 
{\'msg\': u\'The system may not be provisioned according to the playbook 
results: please check the logs for the issue, fix accordingly or 
re-deploy from scratch.\\n\', \'changed\': False, \'_ansible_no_log\': 
False}\ndir

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-04 Thread Simone Tiraboschi
On Mon, Jun 4, 2018 at 2:20 PM, Phillip Bailey  wrote:

> Hi Florent,
>
> Could you please provide the log for the stage in which the wizard is
> failing? Logs can be found in /var/log/ovirt-hosted-engine-setup.
>
> Thanks!
>
> -Phillip Bailey
>
> On Fri, Jun 1, 2018 at 7:57 AM, florentl 
> wrote:
>
>> Hi all,
>> I am trying to install hosted-engine on node ovirt-node-ng-4.2.3-0.20180518.
>> Every time I get stuck on:
>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
>> true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:5a:91
>> | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.108872", "end":
>> "2018-06-01 11:17:34.421769", "rc": 0, "start": "2018-06-01
>> 11:17:34.312897", "stderr": "", "stderr_lines": [], "stdout": "",
>> "stdout_lines": []}
>> I tried with a static IP address and with DHCP, but both failed.
>>
>> To be more specific, I installed three nodes and deployed glusterfs with the
>> wizard. I'm in a nested virtualization environment for this lab (VMware
>> ESXi hypervisor).
>>
>
Unfortunately I think that the issue is trying to run a nested env over
ESXi.
AFAIK nesting KVM VMs over ESXi is still problematic.

I'd suggest repeating the experiment nesting over KVM on L0.
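A quick way to verify whether nested KVM is even usable on such a node is the 
following sketch, to be run on the L1 guest that acts as the oVirt host; on 
ESXi this generally requires the VM option that exposes hardware-assisted 
virtualization to the guest OS:

  # the outer hypervisor must pass the vmx/svm CPU flag through to this node
  egrep -c '(vmx|svm)' /proc/cpuinfo
  # /dev/kvm must exist for the local engine VM to run accelerated
  ls -l /dev/kvm
  # libvirt's own sanity check of the virtualization stack
  virt-host-validate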



>> My node IP is 192.168.176.40, and I want the hosted-engine VM to have
>> 192.168.176.43.
>>
>> Thanks,
>> Florent
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/communit
>> y/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archiv
>> es/list/users@ovirt.org/message/F3BNUQ2T434EASIX56F7KQQJVF7OCDUM/
>>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/RU34XDM2W6GPDCRRGWORBTPH2BUN3CJR/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KOBKROJ2CTHA7QUIOJUECBADGXQUMEJQ/


[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-04 Thread Phillip Bailey
Hi Florent,

Could you please provide the log for the stage in which the wizard is
failing? Logs can be found in /var/log/ovirt-hosted-engine-setup.

Thanks!

-Phillip Bailey

On Fri, Jun 1, 2018 at 7:57 AM, florentl  wrote:

> Hi all,
> I am trying to install hosted-engine on node ovirt-node-ng-4.2.3-0.20180518.
> Every time I get stuck on:
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true,
> "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:5a:91 | awk
> '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.108872", "end":
> "2018-06-01 11:17:34.421769", "rc": 0, "start": "2018-06-01
> 11:17:34.312897", "stderr": "", "stderr_lines": [], "stdout": "",
> "stdout_lines": []}
> I tried with a static IP address and with DHCP, but both failed.
>
> To be more specific, I installed three nodes and deployed glusterfs with the
> wizard. I'm in a nested virtualization environment for this lab (VMware
> ESXi hypervisor).
> My node IP is 192.168.176.40, and I want the hosted-engine VM to have
> 192.168.176.43.
>
> Thanks,
> Florent
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/communit
> y/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archiv
> es/list/users@ovirt.org/message/F3BNUQ2T434EASIX56F7KQQJVF7OCDUM/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RU34XDM2W6GPDCRRGWORBTPH2BUN3CJR/