[ovirt-users] Re: Info about soft fencing mechanism

2019-06-17 Thread Martin Perina
On Fri, Jun 14, 2019 at 3:02 PM Strahil  wrote:

>
> On Jun 13, 2019 16:14, Gianluca Cecchi  wrote:
> >
> > Hello,
> > I would like to know in better detail how soft fencing works in 4.3.
> > In particular, with "soft fencing" we "only" mean vdsmd restart attempt,
> correct?
>

Yes, it just restarts the vdsmd service using an SSH connection. In the past we
had several cases where VDSM was non-responsive but the VMs were running
fine; that's why we added this as the 1st step in the non-responding treatment
flow.
We try to connect to the host using SSH, restart VDSM and wait to see whether the
host starts communicating again. If there is an error during the SSH connection or
the service restart, we immediately continue to the next phase of the treatment.
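
For anyone curious, the effect of this step is roughly the following (a sketch only;
the hostname is a placeholder and the engine does this over its own SSH session,
not via a shell script):

    # restart VDSM on the non-responsive host over SSH
    ssh root@host1.example.com 'systemctl restart vdsmd'
    # the engine then simply waits to see whether the host starts answering its
    # normal monitoring again; any SSH or restart error moves the flow on to the
    # next step of the non-responding treatment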

> Who is responsible for issuing the command? Manager or host itself?
>
> The manager should take the decision, but the actual command should be
> done by another  host.
>

The manager. This flow is started from host monitoring when there is a network
error or a connection timeout ...

> > Because in case of Manager, if the host has already lost connection, how
> could the manager be able to do it?
>
> Soft fencing is used when SSH is available. In all other cases it doesn't
> work.
>

So if the engine cannot communicate with a host, we don't know the reason, and
there are several steps in the non-responding treatment:

1. SSH Soft Fencing
2. Kdump detection (if it's configured for the host and we detect the host is
dumping, we can restart HA VMs on a different host)
3. Power Management restart
- according to the cluster fencing policy we can skip restarting the host if,
for example, the host is renewing its storage lease or the gluster cluster is healing
- this part is executed on a different host in the same cluster/data
center

If you want to know more about fencing in oVirt, please take a look at the
links below:

Host fencing in oVirt - Fixing the unknown and allowing VMs to be highly
available
https://www.youtube.com/watch?v=V1JQtmdleaM

Integrating kdump into oVirt
https://www.youtube.com/watch?v=RAGV_za_Qvw

Automatic fencing in oVirt
https://www.ovirt.org/develop/developer-guide/engine/automatic-fencing.html

Fence-kdump integration in oVirt
https://www.ovirt.org/develop/release-management/features/infra/fence-kdump.html


And of course feel free to ask questions.

Martin

> Thanks in advance for clarifications and eventually documentation pointers
>
> oVirt DOCs need a lot of updates, but I never found a way to add or edit a
> page.
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OQIENJDAWQNHORWFLSUYWJKH7SS7E5JE/
>


-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SRBPLSHWUZNILFG4KJRVFO4LBB37OODF/


[ovirt-users] Re: Hosted engine setup: "Failed to configure management network on host Local due to setup networks failure"

2019-06-17 Thread me
Googling the pertinent text from the long error above:
duplicate key value violates unique constraint "name_server_pkey"
led me to this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1530944
and to the discovery that I had a duplicate DNS IP address in /etc/resolv.conf.
Removing it and adding the host again worked :-)
But it shouldn't have been this hard to install oVirt.  May I suggest tolerance
of duplicate DNS IPs be added?
In the above bug report, Yaniv Kaul says won't fix because it's user error.
Perhaps, but the oVirt installer should do a modicum of hand-holding IMO.
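
For anyone else hitting this, a quick way to spot the duplicate (just a sketch,
not part of the installer):

    # print any nameserver IP that appears more than once in resolv.conf
    awk '/^nameserver/ {print $2}' /etc/resolv.conf | sort | uniq -d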
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FAIHQ5CTMBNE2MHVOC6IAQBXCNJ2UINZ/


[ovirt-users] Re: Cannot access dashboard after upgrading to 4.3.4

2019-06-17 Thread Shirly Radco
Hi,

Please open a bug with the details you added here.
Please attach the setup logs (all of them since 4.2, inclusive) and the
ovirt-engine-dwh log so we can investigate this issue.

If you stop the ovirt-engine-dwhd process, does PostgreSQL calm down?
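
Something like the following should be enough to check that (a sketch; the
database name below is the default for DWH, adjust if yours differs):

    systemctl stop ovirt-engine-dwhd
    # then watch whether the long-running queries disappear from PostgreSQL
    su - postgres -c "psql -d ovirt_engine_history -c 'SELECT pid, state, query_start FROM pg_stat_activity;'"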

Thank you

--

Shirly Radco

BI Senior Software Engineer

Red Hat 




On Sun, Jun 16, 2019 at 12:07 PM Albl, Oliver 
wrote:

> Hi,
>
>
>
> ovirt_engine_history=# SELECT id, version, script, checksum, installed_by,
> started_at, ended_at, state, current, comment
>
> FROM schema_version
>
> ORDER BY version DESC limit 10  ;
>
> id | version  | script | checksum | installed_by | started_at | ended_at | state | current | comment
> ---+----------+--------+----------+--------------+------------+----------+-------+---------+--------
> 50 | 04030020 | upgrade/04_03_0020_update_rx_tx_rate_percition.sql | c480b070cc69681cf62fb853f3d7139f | ovirt_engine_history | 2019-06-14 18:14:44.960503 | 2019-06-14 18:14:44.991593 | SKIPPED   | t | Installed already by 04020040
> 49 | 04030010 | upgrade/04_03_0010_update_network_name_length.sql | a1a0d75560575cdc60c0bbaad2cda773 | ovirt_engine_history | 2019-06-14 18:14:44.904426 | 2019-06-14 18:14:44.935725 | SKIPPED   | f | Installed already by 04020030
> 48 | 04020040 | upgrade/04_02_0040_update_rx_tx_rate_percition.sql | c480b070cc69681cf62fb853f3d7139f | ovirt_engine_history | 2018-07-07 14:34:42.505446 | 2018-07-07 14:36:31.662577 | INSTALLED | f |
> 47 | 04020030 | upgrade/04_02_0030_update_network_name_length.sql | a1a0d75560575cdc60c0bbaad2cda773 | ovirt_engine_history | 2018-07-07 14:34:42.438056 | 2018-07-07 14:34:42.482705 | INSTALLED | f |
> 46 | 04020020 | upgrade/04_02_0020__updated_vm_interface_history_id_to_bigint.sql | 58a8afa29fc720dc87f37b7f9c9e0151 | ovirt_engine_history | 2018-04-18 17:17:04.908667 | 2018-04-18 17:17:39.111339 | INSTALLED | f |
> 45 | 04020010 | upgrade/04_02_0010_updated_vm_template_name_length.sql | 4b5391f40e8787e3b1033635aafe18a1 | ovirt_engine_history | 2018-01-05 09:56:39.213757 | 2018-01-05 09:56:39.238775 | SKIPPED   | f | Installed already by 04010020
> 44 | 04010020 | upgrade/04_01_0020_updated_vm_template_name_lentgh.sql | 4b5391f40e8787e3b1033635aafe18a1 | ovirt_engine_history | 2017-10-05 13:53:04.225474 | 2017-10-05 13:53:04.269508 | INSTALLED | f |
> 43 | 04010010 | upgrade/04_01_0010_added_seconds_in_status_to_sample_tables.sql | be7a1b2fc7f03d263b45a613d5bced03 | ovirt_engine_history | 2017-02-03 13:16:18.29672 | 2017-02-03 13:16:18.320728 | SKIPPED   | f | Installed already by 0450
> 42 | 0450 | upgrade/04_00_0050_added_seconds_in_status_to_sample_tables.sql | be7a1b2fc7f03d263b45a613d5bced03 | ovirt_engine_history | 2016-10-03 15:13:33.856501 | 2016-10-03 15:13:34.010135 | INSTALLED | f |
> 41 | 0440 | upgrade/04_00_0040_drop_all_history_db_foreign_keys.sql | ed8b2c02bea97d0ee21f737614a2d5e3 | ovirt_engine_history | 2016-10-03 15:13:33.763905 | 2016-10-03 15:13:33.839532 | INSTALLED | f |
>
> (10 rows)
>
>
>
> All the best,
>
> Oliver
>
>
>
> *From:* Shirly Radco 
> *Sent:* Sunday, June 16, 2019 11:04
> *To:* Albl, Oliver 
> *Cc:* slev...@redhat.com; users@ovirt.org; Eli Mesika ;
> Yedidyah Bar David 
> *Subject:* Re: [ovirt-users] Re: Cannot access dashboard after upgrading
> to 4.3.4
>
>
>
> Hi,
>
>
>
> Please attach here the result of the following query from the
> ovirt_engine_history db:
>
>
>
> SELECT id, version, script, checksum, installed_by, started_at, ended_at,
> state, current, comment
>
> FROM schema_version
>
> ORDER BY version DESC limit 10  ;
>
>
>
> Best regards,
>
> --
>
> *Shirly Radco*
>
> BI Senior Software Engineer
>
> Red Hat 
>
>
>
>
> -- Forwarded message -
> From: *Albl, Oliver* 
> Date: Sun, Jun 16, 2019 at 11:42 AM
> Subject: AW: [ovirt-users] Re: Cannot access dashboard after upgrading to
> 4.3.4
> To: sra...@redhat.com 
> Cc: slev...@redhat.com , users@ovirt.org <
> users@ovirt.org>
>
>
>
> Hi,
>
>
>
>   Rebooted the oVirt engine VM (no hosted engine), same result. DWH packages
> are:
>
>
>
> ovirt-engine-dwh.noarch   4.3.0-1.el7@ovirt-4.3
>
> ovirt-engine-dwh-setup.noarch 4.3.0-1.el7@ovirt-4.3
>
>
>
> After reboot the following two queries start running again (they were
> running for more than 24 hours before this reboot):
>
>
>

[ovirt-users] HE Fails to install on oVirt 4.3.3

2019-06-17 Thread nico . kruger
Hi Guys,

I have tried installing oVirt 4.3.4 using 
ovirt-node-ng-installer-4.3.4-2019061016.el7.iso and 
ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm
Gluster install works fine, but HE deployment fails every time at the last "waiting 
for host to be up" step. I have tried the deployment on multiple different hardware 
types, and also tried single vs 3-node deployments; all fail at the same point. 

I suspect that the IP in the HE is not being correctly configured, as I see qemu 
running, but ansible times out and cleans up the failed install.

Any ideas on why this is happening? I will try to add the logs.

I am going to try using an older HE appliance rpm to see if that fixes the issue.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZCRUMMGV3UIIUNLHQBNQ7FXKLPP6M7O2/


[ovirt-users] Re: New to oVirt - Cluster questions

2019-06-17 Thread Strahil
Hi Adam,

The arbiter holds only metadata, so that gluster is able to resolve any potential 
split brains. It can run on another host (not one of the 2 replicas) and the only 
requirements are:
1. Low network latency - you don't want to put it in China, unless your 
replica servers are also there :)
With a remote arbiter, the latency will affect you only when 1 of the replicas 
is down.
2. Fast storage that can easily create the files' inodes. I would prefer an SSD, 
but even high-speed rotational disks are OK.

In other words, you should create gluster bricks like this one:
server1:/gluster_bricks/myvolume/brick1
server2:/gluster_bricks/myvolume/brick1
arbiter:/gluster_bricks/myvolume/arbiter_brick

In this case, if a node dies, the other node + the arbiter will have quorum and 
will serve any I/O for oVirt.
The arbiter brick does not require much space, as only metadata is written to it.
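
If you ever create such a volume by hand rather than through the wizard, the layout 
above maps to something like this (a sketch using the example brick paths from above):

    gluster volume create myvolume replica 3 arbiter 1 \
        server1:/gluster_bricks/myvolume/brick1 \
        server2:/gluster_bricks/myvolume/brick1 \
        arbiter:/gluster_bricks/myvolume/arbiter_brick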

Actually, with a hyperconverged setup you have 2 clusters on the same nodes - 
storage (gluster) & oVirt (virtualization).

Best Regards,
Strahil Nikolov

On Jun 17, 2019 15:06, adam.fasna...@gmail.com wrote:
>
> Hello Strahil, 
>
> Thank you for getting back to me.  The arbiter machine is similar to a 
> Hyper-V quorum, is that correct?  I come from a Hyper-V world, and am 
> getting familiar with the terminology here.  If that is true, can the arbiter 
> be a VM hosted on another separate host?  In other words, I have 2 hosts that 
> I would use for oVirt.  I have another host that is only used as a backup 
> device.  Could a VM on the backup host be used as an arbiter for the 
> cluster? 
>
> Adam
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YHRLXTHSL3JWWWOYGWN4OHWTRZ4PGEQB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L4WEC6GJRUJR5E7Q2OO37MHJJGHJWAAA/


[ovirt-users] Re: New to oVirt - Cluster questions

2019-06-17 Thread Strahil
In my setup, I have 2 AMD hosts + an Intel-based arbiter.
Actually, if you separate them into 2 different clusters (1 for Intel + 1 for 
AMD) - you can run VMs on any of the nodes.

So you can host the arbiter as a VM or get a 3rd machine for it. I think a VM 
(with decent storage) is enough for the arbiter, but you should test prior to 
putting the oVirt setup into production.

Best Regards,
Strahil Nikolov

On Jun 17, 2019 15:09, adam.fasna...@gmail.com wrote:
>
> Jayme, 
>
> Could the arbiter be a VM hosted on another host (Host 3)?  I do have another 
> server that I could use, but it does not match the 2 main hosts. (Host 3 
> would not actually be used for VM storage/running VMs.)  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/N5KOFY2EKFYOKUAC4PAEBBEFR36NJBFJ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JUKDRFD7XCAAD46FUU2M4NCWTZGBUHG2/


[ovirt-users] Re: New to oVirt - Cluster questions

2019-06-17 Thread adam . fasnacht
Jayme,

Could the arbiter be a VM hosted on another host (Host 3)?  I do have another 
server that I could use, but it does not match the 2 main hosts. (Host 3 would 
not actually be used for VM storage/running VMs.)  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N5KOFY2EKFYOKUAC4PAEBBEFR36NJBFJ/


[ovirt-users] Re: New to oVirt - Cluster questions

2019-06-17 Thread adam . fasnacht
Hello Strahil,

Thank you for getting back to me.  The arbiter machine is similar to a Hyper-V 
quorum, is that correct?  I come from a Hyper-V world, and am getting familiar 
with the terminology here.  If that is true, can the arbiter be a VM hosted on 
another separate host?  In other words, I have 2 hosts that I would use for 
oVirt.  I have another host that is only used as a backup device.  Could a VM 
on the backup host be used as an arbiter for the cluster?

Adam
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YHRLXTHSL3JWWWOYGWN4OHWTRZ4PGEQB/


[ovirt-users] Re: Ovirt 4.3.3 on Centos7 hosted-engine deployment failure

2019-06-17 Thread me
Fixed my problem.  I had a duplicate DNS IP address in my /etc/resolv.conf.  
Removing the duplicate and adding the host again worked :-)  See further 
details on how I deduced this in the thread here:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/J65PSDSA2HR4KOCKR4J6GXKOSPNSU54H/

Please report back if this also works for you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IVXDXQW3ZYDFLADDPGO3JCO57G7XE7H2/


[ovirt-users] Re: 4.3.4 caching disk error during hyperconverged deployment

2019-06-17 Thread Sachidananda URS
On Thu, Jun 13, 2019 at 7:11 AM  wrote:

> While trying to do a hyperconverged setup and trying to use "configure LV
> Cache" on /dev/sdf, the deployment fails. If I don't use the LV cache SSD disk
> the setup succeeds. Thought you might want to know; for now I retested with
> 4.3.3 and all worked fine, so reverting to 4.3.3 unless you know of a
> workaround?
>
> Error:
> TASK [gluster.infra/roles/backend_setup : Extend volume group]
> *
> failed: [vmm11.mydomain.com] (item={u'vgname': u'gluster_vg_sdb',
> u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname':
> u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf',
> u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode':
> u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) =>
> {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume
> \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf",
> "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize":
> "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb",
> "cachemetalvsize": "0.1G", "cachemode": "writethrough",
> "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname":


The variable file does not seem to be right.
You have mentioned cachethinpoolname: gluster_thinpool_gluster_vg_sdb, but
you are not creating it anywhere.
So, the Ansible module is trying to shrink the volume group.

Also, why are cachelvsize 0.9G and cachemetalvsize 0.1G? Isn't that too small?

Please refer to:
https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/gluster_inventory.yml
for an example.
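
For reference, the cache setup being attempted corresponds roughly to the
following LVM sequence (just a sketch using the values from your output; the
exact LV names and steps the role performs may differ):

    pvcreate /dev/sdf
    vgextend gluster_vg_sdb /dev/sdf
    # data and metadata LVs for the cache, placed on the SSD
    lvcreate -L 270G -n cachelv_gluster_thinpool_gluster_vg_sdb gluster_vg_sdb /dev/sdf
    lvcreate -L 30G  -n cache_gluster_thinpool_gluster_vg_sdb   gluster_vg_sdb /dev/sdf
    lvconvert --yes --type cache-pool --cachemode writethrough \
        --poolmetadata gluster_vg_sdb/cache_gluster_thinpool_gluster_vg_sdb \
        gluster_vg_sdb/cachelv_gluster_thinpool_gluster_vg_sdb
    # attach the cache pool to the thin pool holding the bricks
    lvconvert --yes --type cache \
        --cachepool gluster_vg_sdb/cachelv_gluster_thinpool_gluster_vg_sdb \
        gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb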

-sac

>
> "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.",
> "rc": 5}
>
> failed: [vmm12.mydomain.com] (item={u'vgname': u'gluster_vg_sdb',
> u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname':
> u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf',
> u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode':
> u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) =>
> {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume
> \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf",
> "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize":
> "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb",
> "cachemetalvsize": "0.1G", "cachemode": "writethrough",
> "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname":
> "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.",
> "rc": 5}
>
> failed: [vmm10.mydomain.com] (item={u'vgname': u'gluster_vg_sdb',
> u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname':
> u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf',
> u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode':
> u'writethrough', u'cachemetalvsize': u'30G', u'cachelvsize': u'270G'}) =>
> {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume
> \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf",
> "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize":
> "270G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb",
> "cachemetalvsize": "30G", "cachemode": "writethrough", "cachethinpoolname":
> "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg":
> "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
>
> PLAY RECAP
> *
> vmm10.mydomain.com   : ok=13   changed=4unreachable=0
> failed=1skipped=10   rescued=0ignored=0
> vmm11.mydomain.com   : ok=13   changed=4unreachable=0
> failed=1skipped=10   rescued=0ignored=0
> vmm12.mydomain.com   : ok=13   changed=4unreachable=0
> failed=1skipped=10   rescued=0ignored=0
>
>
>
>
> -
> #cat /etc/ansible/hc_wizard_inventory.yml
>
> -
> hc_nodes:
>   hosts:
> vmm10.mydomain.com:
>   gluster_infra_volume_groups:
> - vgname: gluster_vg_sdb
>   pvname: /dev/sdb
> - vgname: gluster_vg_sdc
>   pvname: /dev/sdc
> - vgname: gluster_vg_sdd
>   pvname: /dev/sdd
> - vgname: gluster_vg_sde
>   pvname: /dev/sde
>   gluster_infra_mount_devices:
> - path: /gluster_bricks/engine
>   lvname: gluster_lv_engine
>   vgname: gluster_vg_sdb
> - path: /gluster_bricks/vmstore1
>   lvname: gluster_lv_vmstore1
>   vgname: gluster_vg_sdc
> - path: /gluster_bricks/data1
>   lvname: gluster_lv_data1
>   vgname: gluster_vg_sdd
> - path: 

[ovirt-users] Re: Metrics store install failed

2019-06-17 Thread Shirly Radco
Hi,

Please see here for the fix that caused the NICs issue:
https://gerrit.ovirt.org/#/c/100865/

The ignored_nics setting in /etc/ovirt-guest-agent.conf should be updated to
ignored_nics = docker0 tun0
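
A minimal way to apply that on an existing VM (a sketch; it assumes the stock
config already contains an ignored_nics line):

    # make ovirt-guest-agent ignore the docker/OpenShift interfaces
    sed -i 's/^ignored_nics.*/ignored_nics = docker0 tun0/' /etc/ovirt-guest-agent.conf
    systemctl restart ovirt-guest-agent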

Best,

--

Shirly Radco

BI Senior Software Engineer

Red Hat 




On Mon, Jun 3, 2019 at 7:44 PM  wrote:

> Shirly,
>
> I updated the IP of the metrics-store-installer VM once it was created.
> The script continued to run after I updated the IP from the manager VM and
> completed successfully.
>
> I also updated the master0 VM when the script got to the point of trying
> to contact it and once I assigned the IP, it continued successfully.
>
> -
> Okay, so here is the issue so far. I've narrowed it down to DNS resolution
> wanting to go through the tun0 adapter rather than eth0, which happens at the
> end of the playbook. It all works fine and routes correctly while running
> the playbook, up until something causes the routes to change and DNS tries to go
> through tun0.
>
> I figured this out by opening two terminal windows and running a ping
> command inside master0 VM towards redhat.com and in the other terminal
> window on master0 VM I ran #ip monitor
>
> The result is as mentioned: DNS queries are attempting to route through
> tun0 rather than eth0.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/E4IK6N3HDCIG6UJKVFUTPVX3N2DLBYP4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QFA6DUXPUE334BDJKLYPCC37DGBKPYLQ/


[ovirt-users] Re: Hosted engine setup: "Failed to configure management network on host Local due to setup networks failure"

2019-06-17 Thread Yuval Turgeman
Hi Edward, you're hitting [1] - it will be included in the next appliance

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1718399


On Monday, June 17, 2019, Edward Berger  wrote:

> The hosted engine is created in two steps: first it runs as a local VM on the
> host with a 192.168.x.x address, then it gets copied over to shared storage and
> gets the real IP address you assigned in the setup wizard.  So that part
> is normal behavior.
>
> I had a recent hosted engine installation failure with oVirt node 4.3.4,
> where the local VM was stuck trying to yum install yum-utils, but couldn't
> because it is behind a firewall, so I ssh'd into the local VM, added a
> proxy line to /etc/yum.conf, kill -HUP'd the bad process and manually
> re-ran the yum install command and it was able to complete the hosted
> engine installation.
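>
> Roughly what that workaround looked like (a sketch; the local VM address and
> the proxy URL below are just placeholders):
>
>     ssh root@192.168.122.x          # temporary address of the local engine VM
>     echo 'proxy=http://proxy.example.com:3128' >> /etc/yum.conf
>     yum install -y yum-utils        # re-run the step that was stuck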
>
> If that's not the issue, maybe your node's network config is not something
> the installer expects, like a preconfigured bridge when it wants to do the
> bridge configuration itself, or an unsupported bond type...
>
>
> On Sun, Jun 16, 2019 at 12:12 PM  wrote:
>
> Hi,
>
> I've been failing to install hosted-engine on oVirt Node for a long time.
> I'm now trying on a Coffee Lake Xeon-based system, having previously tried
> on Broadwell-E.
>
> Trying via the webui or hosted-engine --deploy has a similar result.
> The error in the title occurs when using the webui.  Using hosted-engine
> --deploy shows:
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Check host status]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
> host has been set in non_operational status, please check engine logs, fix
> accordingly and re-deploy.\n"}
>
> Despite the failure, the oVirt webui can be browsed on https://:6900,
> but the host has status "unassigned".  The Node webui (https://:9090)
> shows the engine VM running, but when I log in to its console, I see its IP is
> 192.168.122.123, not the DHCP-reserved IP address (on our 10.0.8.x
> network), which doesn't seem right.  I suspect some problem with DHCP, but
> I don't know how to fix it.  Any ideas?
>
> vdsm.log shows:
> 2019-06-16 15:06:39,117+ INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> 2019-06-16 15:06:44,122+ INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=7f984b0d-9765-457e-ac8e-c5cd0bdf73d2 (api:48)
> 2019-06-16 15:06:44,122+ INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=7f984b0d-9765-457e-ac8e-c5cd0bdf73d2 (api:54)
> 2019-06-16 15:06:44,122+ INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> 2019-06-16 15:06:48,258+ INFO  (periodic/1) [vdsm.api] START
> repoStats(domains=()) from=internal, 
> task_id=0526307b-bb37-4eff-94d6-910ac0d64933
> (api:48)
> 2019-06-16 15:06:48,258+ INFO  (periodic/1) [vdsm.api] FINISH
> repoStats return={} from=internal, 
> task_id=0526307b-bb37-4eff-94d6-910ac0d64933
> (api:54)
> 2019-06-16 15:06:49,126+ INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=0d5b359e-1a4c-4cc0-87a1-4a41e91ba356 (api:48)
> 2019-06-16 15:06:49,126+ INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=0d5b359e-1a4c-4cc0-87a1-4a41e91ba356 (api:54)
> 2019-06-16 15:06:49,126+ INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> 2019-06-16 15:06:53,040+ INFO  (jsonrpc/5) [api.host] START
> getAllVmStats() from=::1,50104 (api:48)
> 2019-06-16 15:06:53,041+ INFO  (jsonrpc/5) [api.host] FINISH
> getAllVmStats return={'status': {'message': 'Done', 'code': 0},
> 'statsList': (suppressed)} from=::1,50104 (api:54)
> 2019-06-16 15:06:53,041+ INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
> call Host.getAllVmStats succeeded in 0.01 seconds (__init__:312)
> 2019-06-16 15:06:54,132+ INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=99c33317-7753-4d24-a10b-b716adcdaf76 (api:48)
> 2019-06-16 15:06:54,132+ INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=99c33317-7753-4d24-a10b-b716adcdaf76 (api:54)
> 2019-06-16 15:06:54,132+ INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> 2019-06-16 15:06:59,134+ INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=8f5679a1-8734-491d-b925-7387effe4726 (api:48)
> 2019-06-16 15:06:59,134+ INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=8f5679a1-8734-491d-b925-7387effe4726 (api:54)
> 2019-06-16 15:06:59,134+