[ovirt-users] Re: Grafana - Origin Not Allowed

2022-06-13 Thread Nardus Geldenhuys
This worked for us:

Edit /etc/httpd/conf.d/ovirt-engine-grafana-proxy.conf and add "ProxyPreserveHost On".
The file should look like this now:


<IfModule !proxy_module>
LoadModule proxy_module modules/mod_proxy.so
</IfModule>

<Location /ovirt-engine-grafana>
ProxyPreserveHost On
ProxyPass http://127.0.0.1:3000 retry=0 disablereuse=On
ProxyPassReverse http://127.0.0.1:3000/ovirt-engine-grafana
</Location>

systemctl restart httpd
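
A quick way to sanity-check the proxy after the restart (the engine FQDN is a
placeholder for your own):

# expect a normal Grafana response rather than the "origin not allowed" error
curl -kIL https://engine.example.com/ovirt-engine-grafana/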
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TZFFIN4ESDYTJLNHYW7WDXYKECEZ57K6/


[ovirt-users] Re: oVirt 4.3 DWH with Grafana

2021-04-30 Thread Nardus Geldenhuys
Hi

Thank you. I did figure it out after a while.

Another question: how can I see LUN utilization? We had an issue where one LUN
was over-utilized, meaning it was hammered by the VMs. We only discovered this
after the SAN team told us; we moved the VMs' storage around and that resolved
the issue. Is it possible to get a Grafana graph with LUN utilization per VM?
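
One possible starting point would be a Grafana panel backed by the DWH's
per-disk I/O history; the table and column names below are assumptions and
must be checked against the ovirt_engine_history schema before use:

# sketch only - verify the table and column names against your DWH schema first
sudo -u postgres psql ovirt_engine_history -c "
  SELECT history_datetime, vm_disk_id, read_rate, write_rate
  FROM vm_disk_hourly_history
  ORDER BY history_datetime DESC
  LIMIT 24;"

Mapping disks to LUNs/storage domains would still need a join against the
configuration tables, so treat this only as a sketch.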

Thanks

Nardus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NJXX3UBTJE54L7XWKRMS27SOHFW6K4Q2/


[ovirt-users] Re: oVirt 4.3 DWH with Grafana

2021-04-13 Thread Nardus Geldenhuys
Hey Michal

Hope you are well. Thank you so much for this write-up and all the work you
put into it.

Do you have an easy way of using different data sources, i.e. more than one
oVirt engine DWH? Or will I have to adapt all the dashboards to the different
data sources?
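
One pattern that might avoid duplicating dashboards (a sketch, not something
from the write-up): register each engine's DWH as its own PostgreSQL data
source and put a data-source template variable on the dashboards. Adding a
second source through Grafana's HTTP API could look roughly like this; the
host, credentials, database user and names are placeholders, and field names
can differ between Grafana versions:

# placeholders throughout - adjust host, credentials and names for your setup
curl -s -u admin:ADMIN_PASSWORD \
     -H 'Content-Type: application/json' \
     -X POST https://grafana.example.com/api/datasources \
     -d '{
           "name": "ovirt-dwh-engine2",
           "type": "postgres",
           "access": "proxy",
           "url": "engine2.example.com:5432",
           "database": "ovirt_engine_history",
           "user": "grafana_readonly",
           "secureJsonData": { "password": "DB_PASSWORD" }
         }'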

Thanks again

Nardus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WVH2NIPY56YCTCBZRPAWKFGS54JTOJLP/


[ovirt-users] self hosted-engine deploy fails on network

2021-02-03 Thread Nardus Geldenhuys
Hi oVirt land

Hope you are well. I am running into this issue and hope you can help.

CentOS 7, fully updated.
oVirt 4.3, latest packages.

My network config:

[root@mob-r1-d-ovirt-aa-1-01 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
  valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host
  valid_lft forever preferred_lft forever
2: ens1f0:  mtu 1500 qdisc mq master
bond0 state UP group default qlen 1000
   link/ether 00:90:fa:c2:d2:48 brd ff:ff:ff:ff:ff:ff
3: ens1f1:  mtu 1500 qdisc mq master
bond0 state UP group default qlen 1000
   link/ether 00:90:fa:c2:d2:48 brd ff:ff:ff:ff:ff:ff
4: enp11s0f0:  mtu 1500 qdisc mq state
DOWN group default qlen 1000
   link/ether 00:90:fa:c2:d2:50 brd ff:ff:ff:ff:ff:ff
5: enp11s0f1:  mtu 1500 qdisc mq state
DOWN group default qlen 1000
   link/ether 00:90:fa:c2:d2:54 brd ff:ff:ff:ff:ff:ff
21: bond0:  mtu 1500 qdisc noqueue
state UP group default qlen 1000
   link/ether 00:90:fa:c2:d2:48 brd ff:ff:ff:ff:ff:ff
   inet6 fe80::290:faff:fec2:d248/64 scope link
  valid_lft forever preferred_lft forever
22: bond0.1131@bond0:  mtu 1500 qdisc
noqueue state UP group default qlen 1000
   link/ether 00:90:fa:c2:d2:48 brd ff:ff:ff:ff:ff:ff
   inet 172.18.206.184/23 brd 172.18.207.255 scope global bond0.1131
  valid_lft forever preferred_lft forever
   inet6 fe80::290:faff:fec2:d248/64 scope link
  valid_lft forever preferred_lft forever

[root@mob-r1-d-ovirt-aa-1-01 network-scripts]# cat ifcfg-bond0
BONDING_OPTS='mode=1 miimon=100'
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
IPV6INIT=no
NAME=bond0
UUID=c11ef6ef-794f-4683-a068-d6338e5c19b6
DEVICE=bond0
ONBOOT=yes
[root@mob-r1-d-ovirt-aa-1-01 network-scripts]# cat ifcfg-bond0.1131
DEVICE=bond0.1131
VLAN=yes
ONBOOT=yes
MTU=1500
IPADDR=172.18.206.184
NETMASK=255.255.254.0
GATEWAY=172.18.206.1
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=yes
DNS1=172.20.150.10
DNS2=172.20.150.11

I get the following error:

[ INFO  ] TASK [ovirt.hosted_engine_setup : Generate output list]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Validate selected bridge
interface if management bridge does not exists]
[ INFO  ] skipping: [localhost]
 Please indicate a nic to set ovirtmgmt bridge on: (bond0,
bond0.1131) [bond0.1131]:
 Please specify which way the network connectivity should be
checked (ping, dns, tcp, none) [dns]:
..
..
..
..
..
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Validate selected bridge
interface if management bridge does not exists]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
selected network interface is not valid"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO  ] Stage: Clean up

And if I create ifcfg-ovirtmgmt as a bridge myself, it fails later.

What is the correct network setup for my bond configuration to do a
self-hosted-engine setup?
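
One thing that sometimes trips this check is a pre-created or half-created
ovirtmgmt bridge, since the deploy script wants to create the management
bridge itself on the selected interface. A sketch of resetting the host before
retrying, assuming the cleanup tool from ovirt-hosted-engine-setup is
installed:

# reset leftovers from a failed deploy attempt, then retry
ovirt-hosted-engine-cleanup                            # ships with ovirt-hosted-engine-setup
rm -f /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt   # only if you created it by hand
systemctl restart network
hosted-engine --deploy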

Regards

Nar
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NRU4CI6GLGL77RDY7O7EGZOD67TJOPJ3/


[ovirt-users] Re: Unassigned hosts

2020-08-07 Thread Nardus Geldenhuys
Hi Artur

Hope you are well. Please see below; this is after I restarted the engine:

host:
[root@ovirt-aa-1-21:~]↥ # tcpdump -i ovirtmgmt -c 1000 -nnvvS dst
ovirt-engine-aa-1-01
tcpdump: listening on ovirtmgmt, link-type EN10MB (Ethernet), capture size
262144 bytes
2020-08-07 12:09:32.553543 ARP, Ethernet (len 6), IPv4 (len 4), Reply
172.140.220.111 is-at 00:25:b5:04:00:25, length 28
2020-08-07 12:10:05.584594 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF],
proto TCP (6), length 60)
172.140.220.111.54321 > 172.140.220.23.56202: Flags [S.], cksum 0x5cd5
(incorrect -> 0xc8ca), seq 4036072905, ack 3265413231, win 28960, options
[mss 1460,sackOK,TS val 3039504636 ecr 341411251,nop,wscale 7], length 0
2020-08-07 12:10:10.589276 ARP, Ethernet (len 6), IPv4 (len 4), Reply
172.140.220.111 is-at 00:25:b5:04:00:25, length 28
2020-08-07 12:10:15.596230 IP (tos 0x0, ttl 64, id 48438, offset 0, flags
[DF], proto TCP (6), length 52)
172.140.220.111.54321 > 172.140.220.23.56202: Flags [F.], cksum 0x5ccd
(incorrect -> 0x40b8), seq 4036072906, ack 3265413231, win 227, options
[nop,nop,TS val 3039514647 ecr 341411251], length 0
2020-08-07 12:10:20.596429 ARP, Ethernet (len 6), IPv4 (len 4), Request
who-has 172.140.220.23 tell 172.140.220.111, length 28
2020-08-07 12:10:20.663699 IP (tos 0x0, ttl 64, id 64726, offset 0, flags
[DF], proto TCP (6), length 40)
172.140.220.111.54321 > 172.140.220.23.56202: Flags [R], cksum 0x1d20
(correct), seq 4036072907, win 0, length 0

engine
[root@ovirt-engine-aa-1-01 ~]# tcpdump -i eth0 -c 1000 -nnvvS src
ovirt-aa-1-21
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size
262144 bytes
2020-08-07 12:09:31.891242 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF],
proto TCP (6), length 60)
172.140.220.111.54321 > 172.140.220.23.56202: Flags [S.], cksum 0xc8ca
(correct), seq 4036072905, ack 3265413231, win 28960, options [mss
1460,sackOK,TS val 3039504636 ecr 341411251,nop,wscale 7], length 0
2020-08-07 12:09:36.895502 ARP, Ethernet (len 6), IPv4 (len 4), Reply
172.140.220.111 is-at 00:25:b5:04:00:25, length 42
2020-08-07 12:09:41.901981 IP (tos 0x0, ttl 64, id 48438, offset 0, flags
[DF], proto TCP (6), length 52)
172.140.220.111.54321 > 172.140.220.23.56202: Flags [F.], cksum 0x40b8
(correct), seq 4036072906, ack 3265413231, win 227, options [nop,nop,TS val
3039514647 ecr 341411251], length 0
2020-08-07 12:09:46.901681 ARP, Ethernet (len 6), IPv4 (len 4), Request
who-has 172.140.220.23 tell 172.140.220.111, length 42
2020-08-07 12:09:46.968911 IP (tos 0x0, ttl 64, id 64726, offset 0, flags
[DF], proto TCP (6), length 40)
172.140.220.111.54321 > 172.140.220.23.56202: Flags [R], cksum 0x1d20
(correct), seq 4036072907, win 0, length 0

Regards

Nar

On Fri, 7 Aug 2020 at 11:54, Artur Socha  wrote:

> Hi Nardus,
> There is one more thing to be checked.
>
> 1) could you check if there are any packets sent from the affected host to
> the engine?
> on host:
> # outgoing traffic
>  sudo  tcpdump -i  -c 1000 -nnvvS dst
> 
>
> 2) same the other way round. Check if there are packets received on engine
> side from affected host
> on engine:
> # incoming traffic
> sudo  tcpdump -i  -c 1000 -nnvvS src
> 
>
> Artur
>
>
> On Thu, Aug 6, 2020 at 4:51 PM Artur Socha  wrote:
>
>> Thanks Nardus,
>> After a quick look I found what I was suspecting - there are way too many
>> threads in Blocked state. I don't know yet the reason but this is very
>> helpful. I'll let you know about the findings/investigation. Meanwhile, you
>> may try restarting the engine as (a very brute and ugly) workaround).
>> You may try to setup slightly bigger thread pool - may save you some time
>> until the next hiccup. However, please be aware that this may come with the
>> cost in memory usage and higher cpu usage (due to increased context
>> switching)
>> Here are some docs:
>>
>> # Specify the thread pool size for jboss managed scheduled executor service 
>> used by commands to periodically execute
>> # methods. It is generally not necessary to increase the number of threads 
>> in this thread pool. To change the value
>> # permanently create a conf file 99-engine-scheduled-thread-pool.conf in 
>> /etc/ovirt-engine/engine.conf.d/
>> ENGINE_SCHEDULED_THREAD_POOL_SIZE=100
>>
>>
>> A.
>>
>>
>> On Thu, Aug 6, 2020 at 4:19 PM Nardus Geldenhuys 
>> wrote:
>>
>>> Hi Artur
>>>
>>> Please find attached, also let me know if I need to rerun. They 5 min
>>> apart
>>>
>>> [root@engine-aa-1-01 ovirt-engine]#  ps -ef | grep jboss | grep -v grep
>>> | awk '{ print $2 }'
>>> 27390
>>> [root@engine-aa-1-01 ovirt-engine]# jstack -F 27390 >
>>> your_engine_thread_dump
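
For readers of the archive: a minimal sketch of applying the thread-pool
override quoted above; the value 500 is an arbitrary example, not a
recommendation:

# on the engine host; pick a value appropriate for your deployment
cat > /etc/ovirt-engine/engine.conf.d/99-engine-scheduled-thread-pool.conf <<'EOF'
ENGINE_SCHEDULED_THREAD_POOL_SIZE=500
EOF
systemctl restart ovirt-engine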

[ovirt-users] Re: Unassigned hosts

2020-08-06 Thread Nardus Geldenhuys
Hi

I can create a thread dump; please send details on how to.

Regards

Nardus

On Thu, 6 Aug 2020 at 14:17, Artur Socha  wrote:

> Hi Nardus,
> You might have hit an issue I have been hunting for some time ( [1] and
> [2] ).
> [1] could not be properly resolved because at a time was not able to
> recreate an issue on dev setup.
> I suspect [2] is related.
>
> Would you be able to prepare a thread dump from your engine instance?
> Additionally, please check for potential libvirt errors/warnings.
> Can you also paste the output of:
> sudo yum list installed | grep vdsm
> sudo yum list installed | grep ovirt-engine
> sudo yum list installed | grep libvirt
>
> Usually, according to previous reports, restarting the engine helps to
> restore connectivity with hosts ... at least for some time.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1845152
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1846338
>
> regards,
> Artur
>
>
>
> On Thu, Aug 6, 2020 at 8:01 AM Nardus Geldenhuys 
> wrote:
>
>> Also see this in engine:
>>
>> Aug 6, 2020, 7:37:17 AM
>> VDSM someserver command Get Host Capabilities failed: Message timeout
>> which can be caused by communication issues
>>
>> On Thu, 6 Aug 2020 at 07:09, Strahil Nikolov 
>> wrote:
>>
>>> Can you fheck for errors on the affected host. Most probably you need
>>> the vdsm logs.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> На 6 август 2020 г. 7:40:23 GMT+03:00, Nardus Geldenhuys <
>>> nard...@gmail.com> написа:
>>> >Hi Strahil
>>> >
>>> >Hope you are well. I get the following error when I tried to confirm
>>> >reboot:
>>> >
>>> >Error while executing action: Cannot confirm 'Host has been rebooted'
>>> >Host.
>>> >Valid Host statuses are "Non operational", "Maintenance" or
>>> >"Connecting".
>>> >
>>> >And I can't put it in maintenance, only option is "restart" or "stop".
>>> >
>>> >Regards
>>> >
>>> >Nar
>>> >
>>> >On Thu, 6 Aug 2020 at 06:16, Strahil Nikolov 
>>> >wrote:
>>> >
>>> >> After rebooting the node, have you "marked" it that it was rebooted ?
>>> >>
>>> >> Best Regards,
>>> >> Strahil Nikolov
>>> >>
>>> >> На 5 август 2020 г. 21:29:04 GMT+03:00, Nardus Geldenhuys <
>>> >> nard...@gmail.com> написа:
>>> >> >Hi oVirt land
>>> >> >
>>> >> >Hope you are well. Got a bit of an issue, actually a big issue. We
>>> >had
>>> >> >some
>>> >> >sort of dip of some sort. All the VM's is still running, but some of
>>> >> >the
>>> >> >hosts is show "Unassigned" or "NonResponsive". So all the hosts was
>>> >> >showing
>>> >> >UP and was fine before our dip. So I did increase
>>> >vdsHeartbeatInSecond
>>> >> >to
>>> >> >240, no luck.
>>> >> >
>>> >> >I still get a timeout on the engine lock even thou I can connect to
>>> >> >that
>>> >> >host from the engine using nc to test to port 54321. I also did
>>> >restart
>>> >> >vdsmd and also rebooted the host with no luck.
>>> >> >
>>> >> > nc -v someserver 54321
>>> >> >Ncat: Version 7.50 ( https://nmap.org/ncat )
>>> >> >Ncat: Connected to 172.40.2.172:54321.
>>> >> >
>>> >> >2020-08-05 20:20:34,256+02 ERROR
>>> >>
>>> >>[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> >> >(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] EVENT_ID:
>>> >> >VDS_BROKER_COMMAND_FAILURE(10,802), VDSM someserver command Get Host
>>> >> >Capabilities failed: Message timeout which can be caused by
>>> >> >communication
>>> >> >issues
>>> >> >
>>> >> >Any troubleshoot ideas will be gladly appreciated.
>>> >> >
>>> >> >Regards
>>> >> >
>>> >> >Nar
>>> >>
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4HB2J3MH76FI2325Z4AV4VCCEKH4M3S/
>>
>
>
> --
> Artur Socha
> Senior Software Engineer, RHV
> Red Hat
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YYUSMPZHIKV57X3L44ODZV47IMFQEVZE/


[ovirt-users] Re: Unassigned hosts

2020-08-06 Thread Nardus Geldenhuys
Hi

[root@engine-aa-1-01 ovirt-engine]# sudo yum list installed | grep vdsm
vdsm-jsonrpc-java.noarch   1.4.18-1.el7
@ovirt-4.3
[root@engine-aa-1-01 ovirt-engine]# sudo yum list installed | grep vdsm
vdsm-jsonrpc-java.noarch   1.4.18-1.el7
@ovirt-4.3
[root@engine-aa-1-01 ovirt-engine]# sudo yum list installed | grep
ovirt-engine
ovirt-engine.noarch4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-api-explorer.noarch   0.0.5-1.el7
 @ovirt-4.3
ovirt-engine-backend.noarch4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-dbscripts.noarch  4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-dwh.noarch4.3.6-1.el7
 @ovirt-4.3
ovirt-engine-dwh-setup.noarch  4.3.6-1.el7
 @ovirt-4.3
ovirt-engine-extension-aaa-jdbc.noarch 1.1.10-1.el7
@ovirt-4.3
ovirt-engine-extension-aaa-ldap.noarch 1.3.10-1.el7
@ovirt-4.3
ovirt-engine-extension-aaa-ldap-setup.noarch
ovirt-engine-extensions-api-impl.noarch4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-metrics.noarch1.3.4.1-1.el7
 @ovirt-4.3
ovirt-engine-restapi.noarch4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-setup.noarch  4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-setup-base.noarch 4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-setup-plugin-cinderlib.noarch 4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-setup-plugin-ovirt-engine.noarch
ovirt-engine-setup-plugin-ovirt-engine-common.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
ovirt-engine-setup-plugin-websocket-proxy.noarch
ovirt-engine-tools.noarch  4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-tools-backup.noarch   4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-ui-extensions.noarch  1.0.10-1.el7
@ovirt-4.3
ovirt-engine-vmconsole-proxy-helper.noarch 4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-webadmin-portal.noarch4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-websocket-proxy.noarch4.3.6.7-1.el7
 @ovirt-4.3
ovirt-engine-wildfly.x86_6417.0.1-1.el7
@ovirt-4.3
ovirt-engine-wildfly-overlay.noarch17.0.1-1.el7
@ovirt-4.3
python-ovirt-engine-sdk4.x86_644.3.2-2.el7
 @ovirt-4.3
python2-ovirt-engine-lib.noarch4.3.6.7-1.el7
 @ovirt-4.3
[root@engine-aa-1-01 ovirt-engine]# sudo yum list installed | grep libvirt
[root@engine-aa-1-01 ovirt-engine]#

I can send more info if needed. And yes, it looks like sometimes it helps
if you restart the engine.

Regards

Nardus

On Thu, 6 Aug 2020 at 14:17, Artur Socha  wrote:

> Hi Nardus,
> You might have hit an issue I have been hunting for some time ( [1] and
> [2] ).
> [1] could not be properly resolved because at a time was not able to
> recreate an issue on dev setup.
> I suspect [2] is related.
>
> Would you be able to prepare a thread dump from your engine instance?
> Additionally, please check for potential libvirt errors/warnings.
> Can you also paste the output of:
> sudo yum list installed | grep vdsm
> sudo yum list installed | grep ovirt-engine
> sudo yum list installed | grep libvirt
>
> Usually, according to previous reports, restarting the engine helps to
> restore connectivity with hosts ... at least for some time.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1845152
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1846338
>
> regards,
> Artur
>
>
>
> On Thu, Aug 6, 2020 at 8:01 AM Nardus Geldenhuys 
> wrote:
>
>> Also see this in engine:
>>
>> Aug 6, 2020, 7:37:17 AM
>> VDSM someserver command Get Host Capabilities failed: Message timeout
>> which can be caused by communication issues
>>
>> On Thu, 6 Aug 2020 at 07:09, Strahil Nikolov 
>> wrote:
>>
>>> Can you fheck for errors on the affected host. Most probably you need
>>> the vdsm logs.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> На 6 август 2020 г. 7:40:23 GMT+03:00, Nardus Geldenhuys <
>>> nard...@gmail.com> написа:
>>> >Hi Strahil
>>> >
>>> >Hope you are well. I get the following error when I tried to confirm
>>> >reboot:
>>> >
>>> >Error while executing action: Cannot confirm 'Host has been rebooted'
>>> >Host.
>>> >Valid Host statuses are "Non operational", "Maintenance" or
>>> >"Connecting".
>>> >
>>> >And I can't put it in maintenance, only option is "restart" or "stop".
>>> >
>>> >Regards
>>> >
>>> >Nar
>>> >
>>> >On Thu, 6 Aug 2020 at 06:16, Strahil Nikolov 
>>> >wrote:
>>> >
>>> >> After rebooting the node, have you "marked" it that it was rebooted ?
>>> >>

[ovirt-users] Re: VDSM can't see StoragePool

2020-08-06 Thread Nardus Geldenhuys
Hi

Hope you are well. Did you find a solution for this? I think we have the same
type of issue.

Regards

Nar
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KAUAL3OSRCALWXNWVPNL3YEKGC6P42O3/


[ovirt-users] Re: Unassigned hosts

2020-08-06 Thread Nardus Geldenhuys
Also see this in engine:

Aug 6, 2020, 7:37:17 AM
VDSM someserver command Get Host Capabilities failed: Message timeout which
can be caused by communication issues

On Thu, 6 Aug 2020 at 07:09, Strahil Nikolov  wrote:

> Can you fheck for errors on the affected host. Most probably you need the
> vdsm logs.
>
> Best Regards,
> Strahil Nikolov
>
> На 6 август 2020 г. 7:40:23 GMT+03:00, Nardus Geldenhuys <
> nard...@gmail.com> написа:
> >Hi Strahil
> >
> >Hope you are well. I get the following error when I tried to confirm
> >reboot:
> >
> >Error while executing action: Cannot confirm 'Host has been rebooted'
> >Host.
> >Valid Host statuses are "Non operational", "Maintenance" or
> >"Connecting".
> >
> >And I can't put it in maintenance, only option is "restart" or "stop".
> >
> >Regards
> >
> >Nar
> >
> >On Thu, 6 Aug 2020 at 06:16, Strahil Nikolov 
> >wrote:
> >
> >> After rebooting the node, have you "marked" it that it was rebooted ?
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >> На 5 август 2020 г. 21:29:04 GMT+03:00, Nardus Geldenhuys <
> >> nard...@gmail.com> написа:
> >> >Hi oVirt land
> >> >
> >> >Hope you are well. Got a bit of an issue, actually a big issue. We
> >had
> >> >some
> >> >sort of dip of some sort. All the VM's is still running, but some of
> >> >the
> >> >hosts is show "Unassigned" or "NonResponsive". So all the hosts was
> >> >showing
> >> >UP and was fine before our dip. So I did increase
> >vdsHeartbeatInSecond
> >> >to
> >> >240, no luck.
> >> >
> >> >I still get a timeout on the engine lock even thou I can connect to
> >> >that
> >> >host from the engine using nc to test to port 54321. I also did
> >restart
> >> >vdsmd and also rebooted the host with no luck.
> >> >
> >> > nc -v someserver 54321
> >> >Ncat: Version 7.50 ( https://nmap.org/ncat )
> >> >Ncat: Connected to 172.40.2.172:54321.
> >> >
> >> >2020-08-05 20:20:34,256+02 ERROR
> >>
> >>[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> >(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] EVENT_ID:
> >> >VDS_BROKER_COMMAND_FAILURE(10,802), VDSM someserver command Get Host
> >> >Capabilities failed: Message timeout which can be caused by
> >> >communication
> >> >issues
> >> >
> >> >Any troubleshoot ideas will be gladly appreciated.
> >> >
> >> >Regards
> >> >
> >> >Nar
> >>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4HB2J3MH76FI2325Z4AV4VCCEKH4M3S/


[ovirt-users] Re: Unassigned hosts

2020-08-05 Thread Nardus Geldenhuys
p (clientIF:723)

I see "[vds] recovery: waiting for storage pool to go up (clientIF:723)" a
lot.

Regards

Nardus

On Thu, 6 Aug 2020 at 07:09, Strahil Nikolov  wrote:

> Can you fheck for errors on the affected host. Most probably you need the
> vdsm logs.
>
> Best Regards,
> Strahil Nikolov
>
> На 6 август 2020 г. 7:40:23 GMT+03:00, Nardus Geldenhuys <
> nard...@gmail.com> написа:
> >Hi Strahil
> >
> >Hope you are well. I get the following error when I tried to confirm
> >reboot:
> >
> >Error while executing action: Cannot confirm 'Host has been rebooted'
> >Host.
> >Valid Host statuses are "Non operational", "Maintenance" or
> >"Connecting".
> >
> >And I can't put it in maintenance, only option is "restart" or "stop".
> >
> >Regards
> >
> >Nar
> >
> >On Thu, 6 Aug 2020 at 06:16, Strahil Nikolov 
> >wrote:
> >
> >> After rebooting the node, have you "marked" it that it was rebooted ?
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >> На 5 август 2020 г. 21:29:04 GMT+03:00, Nardus Geldenhuys <
> >> nard...@gmail.com> написа:
> >> >Hi oVirt land
> >> >
> >> >Hope you are well. Got a bit of an issue, actually a big issue. We
> >had
> >> >some
> >> >sort of dip of some sort. All the VM's is still running, but some of
> >> >the
> >> >hosts is show "Unassigned" or "NonResponsive". So all the hosts was
> >> >showing
> >> >UP and was fine before our dip. So I did increase
> >vdsHeartbeatInSecond
> >> >to
> >> >240, no luck.
> >> >
> >> >I still get a timeout on the engine lock even thou I can connect to
> >> >that
> >> >host from the engine using nc to test to port 54321. I also did
> >restart
> >> >vdsmd and also rebooted the host with no luck.
> >> >
> >> > nc -v someserver 54321
> >> >Ncat: Version 7.50 ( https://nmap.org/ncat )
> >> >Ncat: Connected to 172.40.2.172:54321.
> >> >
> >> >2020-08-05 20:20:34,256+02 ERROR
> >>
> >>[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> >(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] EVENT_ID:
> >> >VDS_BROKER_COMMAND_FAILURE(10,802), VDSM someserver command Get Host
> >> >Capabilities failed: Message timeout which can be caused by
> >> >communication
> >> >issues
> >> >
> >> >Any troubleshoot ideas will be gladly appreciated.
> >> >
> >> >Regards
> >> >
> >> >Nar
> >>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KTMYF4A6PGZJWJM4FUJG5HXSU6F2AKYY/


[ovirt-users] Re: Unassigned hosts

2020-08-05 Thread Nardus Geldenhuys
Hi Strahil

Hope you are well. I get the following error when I tried to confirm the reboot:

Error while executing action: Cannot confirm 'Host has been rebooted' Host.
Valid Host statuses are "Non operational", "Maintenance" or "Connecting".

And I can't put it in maintenance; the only options are "restart" or "stop".

Regards

Nar

On Thu, 6 Aug 2020 at 06:16, Strahil Nikolov  wrote:

> After rebooting the node, have you "marked" it that it was rebooted ?
>
> Best Regards,
> Strahil Nikolov
>
> На 5 август 2020 г. 21:29:04 GMT+03:00, Nardus Geldenhuys <
> nard...@gmail.com> написа:
> >Hi oVirt land
> >
> >Hope you are well. Got a bit of an issue, actually a big issue. We had
> >some
> >sort of dip of some sort. All the VM's is still running, but some of
> >the
> >hosts is show "Unassigned" or "NonResponsive". So all the hosts was
> >showing
> >UP and was fine before our dip. So I did increase  vdsHeartbeatInSecond
> >to
> >240, no luck.
> >
> >I still get a timeout on the engine lock even thou I can connect to
> >that
> >host from the engine using nc to test to port 54321. I also did restart
> >vdsmd and also rebooted the host with no luck.
> >
> > nc -v someserver 54321
> >Ncat: Version 7.50 ( https://nmap.org/ncat )
> >Ncat: Connected to 172.40.2.172:54321.
> >
> >2020-08-05 20:20:34,256+02 ERROR
> >[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] EVENT_ID:
> >VDS_BROKER_COMMAND_FAILURE(10,802), VDSM someserver command Get Host
> >Capabilities failed: Message timeout which can be caused by
> >communication
> >issues
> >
> >Any troubleshoot ideas will be gladly appreciated.
> >
> >Regards
> >
> >Nar
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NMQDDIGOIV7EDXO3EFHVU3ROKU44Y6ZY/


[ovirt-users] Unassigned hosts

2020-08-05 Thread Nardus Geldenhuys
Hi oVirt land

Hope you are well. We have got a bit of an issue, actually a big issue. We
had some sort of dip. All the VMs are still running, but some of the hosts
show "Unassigned" or "NonResponsive". All the hosts were showing UP and were
fine before our dip. I increased vdsHeartbeatInSecond to 240, with no luck.

I still get a timeout on the engine side even though I can connect to that
host from the engine using nc to test port 54321. I also restarted vdsmd and
rebooted the host, with no luck.

 nc -v someserver 54321
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 172.40.2.172:54321.

2020-08-05 20:20:34,256+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-70) [] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM someserver command Get Host
Capabilities failed: Message timeout which can be caused by communication
issues

Any troubleshooting ideas will be gladly appreciated.
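
For reference, a sketch of how the heartbeat is typically changed on the
engine host; confirm the exact key name on your version first:

engine-config --list | grep -i heartbeat    # confirm the key name on your version
engine-config -s vdsHeartbeatInSeconds=240
systemctl restart ovirt-engine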

Regards

Nar
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BWQ5G2PBJSXVXOQ6KDLZXCCX6SKEWZAT/


[ovirt-users] Re: affinity/pool/fencing ?

2020-07-28 Thread Nardus Geldenhuys
Hi Paul

Thanks for the response. Is there an easy way to add 100 VMs in your second
step?
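
One possible way to do it in bulk (a sketch against the REST API, not
something from Paul's reply): each affinity group exposes a vms
sub-collection, so the group can be filled from a list of VM IDs in a loop.
The engine URL, credentials, IDs and vm_ids.txt are all placeholders:

# placeholders: engine URL, password, cluster/group IDs, vm_ids.txt (one VM ID per line)
ENGINE=https://engine.example.com/ovirt-engine/api
GROUP=clusters/CLUSTER_ID/affinitygroups/GROUP_ID
while read -r vm_id; do
  curl -s -k -u admin@internal:PASSWORD \
       -H 'Content-Type: application/xml' \
       -X POST -d "<vm id=\"${vm_id}\"/>" \
       "${ENGINE}/${GROUP}/vms"
done < vm_ids.txt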

Regards

Nar

On Tue, 28 Jul 2020 at 13:17, Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Hello Nar,
>you can achieve this with 2 affinity rules, a positive
> and negative one.
>
> In the cluster create the first affinity group VM affinity rule disabled
> HOST affinity rule positive  set enforcing mode. Then add the VMs and 2
> Hosts to force them to run on these hosts.
>
> In the cluster create the second affinity group VM affinity rule disabled
> HOST affinity rule negative set enforcing mode. Then add the rest of the
> VMs and the 2 Hosts to force them not to run on these hosts.
>
>
> https://www.ovirt.org/documentation/vmm-guide/chap-Administrative_Tasks.html
>
>
> Regards,
>
>      Paul S.
>
>
> --
> *From:* Nardus Geldenhuys 
> *Sent:* 28 July 2020 11:22
> *To:* users@ovirt.org 
> *Subject:* [ovirt-users] affinity/pool/fencing ?
>
>
> *Caution External Mail:* Do not click any links or open any attachments
> unless you trust the sender and know that the content is safe.
> Hi oVirt land
>
> Hope you are well. Don't even know what to call this. But let me describe
> what I want to achieve.
>
> We have a cluster with say 100 vm's. But we want two use two hosts in the
> cluster two run only certain VM's. I think you can do that with affinity
> rules. But how can I restrict those two hosts to only run the VM's, meaning
> that no other VM's will run on them.
>
> I don't want to go and edit the 100 other VM's to not run on 2 hosts. Is
> there an easy way for doing this?
>
> Oracle VM uses pools, I dont know what vmware uses.
>
> Any advice will help.
>
> Regards
>
> Nar
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4V4DW7HI375CW4G75NJW3TIMNAZCP4NR/


[ovirt-users] affinity/pool/fencing ?

2020-07-28 Thread Nardus Geldenhuys
Hi oVirt land

Hope you are well. I don't even know what to call this, but let me describe
what I want to achieve.

We have a cluster with, say, 100 VMs, but we want to use two hosts in the
cluster to run only certain VMs. I think you can do that with affinity rules.
But how can I restrict those two hosts to run only those VMs, meaning that no
other VMs will run on them?

I don't want to go and edit the 100 other VMs to not run on those 2 hosts. Is
there an easy way of doing this?

Oracle VM uses pools; I don't know what VMware uses.

Any advice will help.

Regards

Nar
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6A6ZQKI67UYKG5A5NBMBE7GYQFDWQYI/


[ovirt-users] import from qemu+tcp still working ?

2020-06-24 Thread Nardus Geldenhuys
Hi oVirt Land

Hope you are all well and that you can help me.

I built a CentOS 8.2 VM on my local Fedora 32 qemu/kvm laptop. Then I ssh to
my oVirt host in the following manner: ssh ovirthost -R
1:localhost:16509. I can then view the VMs on my local laptop with the
following configuration string: qemu+tcp://localhost:1/system.
Everything goes according to plan until I hit the import button. It fails
almost immediately. I even tried it on the command line and this is what I
get; it also errors immediately:

virt-v2v -ic qemu+tcp://localhost:1/system -o libvirt -os SOME_STORAGE
centos8_ovirt
virt-v2v: warning: no support for remote libvirt connections to '-ic
qemu+tcp://localhost:1/system'.  The conversion may fail when it tries
to read the source disks.
[   0.0] Opening the source -i libvirt -ic
qemu+tcp://localhost:1/system centos8_ovirt
[   0.2] Creating an overlay to protect the source from being modified
qemu-img: /tmp/v2vovldd7ebf.qcow2: Could not open
'/home/libvirt/disks/centos8_ovirt.qcow2': No such file or directory
Could not open backing image to determine size.
virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the
complete output:

  virt-v2v -v -x [...]

This solution worked for me last year, but I can't get it going now. Is there
something obvious I am missing, or is this just not working anymore?
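
One workaround that might be worth trying (a sketch, not confirmed here):
since virt-v2v warns that remote libvirt connections are unsupported and then
fails to open the backing file that only exists on the laptop, copying the
disk image to the conversion host and importing it directly sidesteps the
remote access. Paths and the output name are placeholders:

# copy the disk from the laptop, then convert it locally on the oVirt host
scp laptop:/home/libvirt/disks/centos8_ovirt.qcow2 /var/tmp/
virt-v2v -i disk /var/tmp/centos8_ovirt.qcow2 -o libvirt -os SOME_STORAGE -on centos8_ovirt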

Version information:

Fedora 32 with libvirtd:
libvirt-daemon-kvm-6.1.0-4.fc32.x86_64
qemu-kvm-4.2.0-7.fc32.x86_64
qemu-kvm-core-4.2.0-7.fc32.x86_64

Ovirt 4.3.8.2-1.el7
virt-v2v-1.40.2-9.el7.x86_64

Thanks

Nardus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HFAVSKWS2S4FSAKHRALYTCR76QVWD32C/


[ovirt-users] Re: NFS ISO Domain - Outage ?

2020-03-11 Thread Nardus Geldenhuys
Hi

Thanks for the response. It seems that they got paused; everything came right
after the NFS server came back up.

Regards

Nardus

On Wed, 11 Mar 2020 at 11:34, Strahil Nikolov  wrote:

> On March 11, 2020 9:45:30 AM GMT+02:00, Nardus Geldenhuys <
> nard...@gmail.com> wrote:
> >Hi Ovirt Mailing-list
> >
> >Hope you are well. We had an outage over several ovirt clusters. It
> >looks
> >like we had the same ISO NFS domain shared to all off them, many of the
> >VM's had a CD attached to it. The NFS server went down for an hour, all
> >hell broke lose when the NFS server went down. Some of the ovirt nodes
> >became "red/unresponssive" on the ovirt dashboard. We learned now to
> >spilt
> >the NFS server for ISO's and/or remove the ISO's when done.
> >
> >Has anyone seen similar issues with an NFS ISO Domain? Is there special
> >options we need to pass to the mount to get around this? Can we put the
> >ISO's somewhere else?
> >
> >Regards
> >
> >Nardus
>
> Hi Nardus,
>
> The iso domain is deprecated, but there  are some issues  when uploading
> ISOs to block-based data domains.Using  ISOs uploaded  to gluster-based
> data domain is working pretty fine (I'm using it in my LAB),  but you need
> to properly test prior implementing on Prod.
>
> As far as I know, oVirt is making I/O checks frequently , so even the hard
> mount option won't help.
>
> Is your NFS  clusterized ? If not ,you may consider clusterizing it.
>
> Actually, did your VMs got  paused  or completely crashed ?
>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7HJAYGM7W7KSVFYSSURF5TLFWLWY4CQV/


[ovirt-users] NFS ISO Domain - Outage ?

2020-03-11 Thread Nardus Geldenhuys
Hi Ovirt Mailing-list

Hope you are well. We had an outage across several oVirt clusters. It looks
like we had the same ISO NFS domain shared to all of them, and many of the
VMs had a CD attached to it. When the NFS server went down for an hour, all
hell broke loose. Some of the oVirt nodes became "red/unresponsive" on the
oVirt dashboard. We have now learned to split off the NFS server for ISOs
and/or remove the ISOs when done.

Has anyone seen similar issues with an NFS ISO domain? Are there special
options we need to pass to the mount to get around this? Can we put the ISOs
somewhere else?
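
As a starting point, it may help to first see which options VDSM actually
used for the ISO mount; a sketch, run on one of the hosts:

# show the mounts VDSM created for storage domains, including their options
mount | grep /rhev/data-center/mnt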

Regards

Nardus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/77UVMNQLP2TNXPTDU7A554MUWQHVWYET/


[ovirt-users] disable selinux ?

2019-09-19 Thread Nardus Geldenhuys
Hi

Hope you are well. Quick question.

Can I disable SELinux on the oVirt nodes? Will there be any issues? I know
that you can't migrate from SELinux-enabled to SELinux-disabled nodes.
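
A common middle ground while testing, rather than disabling SELinux outright,
is permissive mode; a sketch:

setenforce 0                                                     # runtime only, reverts on reboot
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config   # persists across reboots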

Regards

Nar
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWA4ISEYHSHRV36MWXAIVQ7ADODLLBE3/


[ovirt-users] Re: Ovirt engine setup Error

2019-06-19 Thread Nardus Geldenhuys
Hi

A stab in the dark: are you using DHCP for the engine and it is not getting
an address?

Ciao

Nar
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5W4EAVBBMMZ5Y5MRO4KDNNIS6FP67T5L/


[ovirt-users] vm_dynamic table very big in ovirt engine db

2019-05-07 Thread Nardus Geldenhuys
Hi There

Hope you are well. We have two clusters with two ovirt-engines. On one
cluster's ovirt-engine the vm_dynamic table is almost 3.5 GB; is that normal?
We are on the latest engine software.
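
A sketch of checking the table on the engine DB host and reclaiming
dead-tuple space; the database name is the default one, and VACUUM FULL locks
the table, so it belongs in a maintenance window:

# measure the table, then reclaim space held by dead tuples (locks the table while it runs)
sudo -u postgres psql engine -c "SELECT pg_size_pretty(pg_total_relation_size('vm_dynamic'));"
sudo -u postgres psql engine -c "VACUUM FULL ANALYZE vm_dynamic;"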

Regards

Nar
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ARSXJQO7UHJYPJ7NNGDRZE7AY6NZXJQX/


[ovirt-users] Re: oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-29 Thread Nardus Geldenhuys
Hey

Do you use Open vSwitch? We don't, and I did notice that the messages
disappear after a while.

Regards

Nardus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P3HHAW2GBAXYWC43GFPN2ZBJRDLP3SR7/


[ovirt-users] Re: oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-15 Thread Nardus Geldenhuys
Hi

Please find attached. Yes they can talk to each other on those ports.

Ncat: UDP packet sent successfully

Regards

Nardus


On Mon, 15 Apr 2019 at 15:11, Dominik Holler  wrote:

> Thanks, looks like message about too many open files is valid.
> Next question is why the geneve tunnel is failing.
> Would you share the output of
> ovs-vsctl list Interface
> ?
>
> Can the affected host can reach the other OVN hosts via their
> remote_ip on udp port 6081?
> A check by netcat might give a hint:
> ovs-vsctl list Interface | grep remote_ip
> nc -zuv  6081
>
>
>
>
>
> On Mon, 15 Apr 2019 12:01:54 +0200
> Nardus Geldenhuys  wrote:
>
> > ss output attached
> >
> > On Mon, 15 Apr 2019 at 10:10, Dominik Holler  wrote:
> >
> > > Would you please share the last lines of all files
> > > in /var/log/openvswitch, the relevant lines of /var/log/message, and
> > > the ouput of
> > > ss -lap
> > > lsof
> > > ?
> > > Thanks
> > >
> > >
> > > On Sun, 14 Apr 2019 14:17:53 -
> > > "Nardus Geldenhuys"  wrote:
> > >
> > > > Also get this after install new ovirt node. It stops after about 20
> > > minutes.
> > > > ___
> > > > Users mailing list -- users@ovirt.org
> > > > To unsubscribe send an email to users-le...@ovirt.org
> > > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/
> > > > List Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IGJUPKTPCVJ32NXIA5ULP6BMBGOWICIS/
> > >
> > >
>
>
_uuid   : 46197380-0859-4c52-bc22-6485321da38b
admin_state : down
bfd : {}
bfd_status  : {}
cfm_fault   : []
cfm_fault_status: []
cfm_flap_count  : []
cfm_health  : []
cfm_mpid: []
cfm_remote_mpids: []
cfm_remote_opstate  : []
duplex  : []
error   : []
external_ids: {}
ifindex : 7
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current: []
link_resets : 0
link_speed  : []
link_state  : down
lldp: {}
mac : []
mac_in_use  : "9a:4f:3f:e3:89:4c"
mtu : 1500
mtu_request : []
name: br-int
ofport  : 65534
ofport_request  : []
options : {}
other_config: {}
statistics  : {collisions=0, rx_bytes=0, rx_crc_err=0, rx_dropped=0, 
rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0, tx_bytes=0, 
tx_dropped=0, tx_errors=0, tx_packets=0}
status  : {driver_name=openvswitch}
type: internal

_uuid   : d24db938-83c7-425e-9ebe-09689af637a7
admin_state : up
bfd : {enable="false"}
bfd_status  : {}
cfm_fault   : []
cfm_fault_status: []
cfm_flap_count  : []
cfm_health  : []
cfm_mpid: []
cfm_remote_mpids: []
cfm_remote_opstate  : []
duplex  : []
error   : []
external_ids: {}
ifindex : 65503
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current: []
link_resets : 0
link_speed  : []
link_state  : up
lldp: {}
mac : []
mac_in_use  : "1e:93:d3:a5:06:20"
mtu : []
mtu_request : []
name: "ovn-87d2a7-0"
ofport  : 2
ofport_request  : []
options : {csum="true", key=flow, remote_ip="172.18.207.19"}
other_config: {}
statistics  : {rx_bytes=0, rx_packets=0, tx_bytes=0, tx_packets=0}
status  : {}
type: geneve

_uuid   : ca9129e1-bc29-4952-9314-dc3551e54eb8
admin_state : up
bfd : {enable="false"}
bfd_status  : {}
cfm_fault   : []
cfm_fault_status: []
cfm_flap_count  : []
cfm_health  : []
cfm_mpid: []
cfm_remote_mpids: []
cfm_remote_opstate  : []
duplex  : []
error   : []
external_ids: {}
ifindex : 65503
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current: []
link_resets : 0
link_speed  : []
link_state  : up
lldp: {}
mac : []
mac_in_use  : "aa:f7:8e:fd:f7:64"
mtu : []
mtu_request : []
name: "ovn-b1a927-0"
ofport  : 1
ofport_request  : []
options : {csum="t

[ovirt-users] Re: oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-15 Thread Nardus Geldenhuys
Please find attached

On Mon, 15 Apr 2019 at 10:10, Dominik Holler  wrote:

> Would you please share the last lines of all files
> in /var/log/openvswitch, the relevant lines of /var/log/message, and
> the ouput of
> ss -lap
> lsof
> ?
> Thanks
>
>
> On Sun, 14 Apr 2019 14:17:53 -
> "Nardus Geldenhuys"  wrote:
>
> > Also get this after install new ovirt node. It stops after about 20
> minutes.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IGJUPKTPCVJ32NXIA5ULP6BMBGOWICIS/
>
>
2019-04-15T01:06:01.714Z|00330|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2019-04-15T08:29:54.343Z|1|vlog|INFO|opened log file /var/log/openvswitch/ovsdb-server.log
2019-04-15T08:29:54.353Z|2|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.10.1
2019-04-15T08:29:54.560Z|3|jsonrpc|WARN|unix#2: receive error: Connection reset by peer
2019-04-15T08:29:54.560Z|4|reconnect|WARN|unix#2: connection dropped (Connection reset by peer)
2019-04-15T08:29:54.634Z|5|jsonrpc|WARN|unix#4: receive error: Connection reset by peer
2019-04-15T08:29:54.634Z|6|reconnect|WARN|unix#4: connection dropped (Connection reset by peer)
2019-04-15T08:29:55.109Z|7|jsonrpc|WARN|unix#7: receive error: Connection reset by peer
2019-04-15T08:29:55.109Z|8|reconnect|WARN|unix#7: connection dropped (Connection reset by peer)
2019-04-15T08:30:04.362Z|9|memory|INFO|3508 kB peak resident set size after 10.0 seconds
2019-04-15T08:30:04.362Z|00010|memory|INFO|cells:127 json-caches:2 monitors:3 sessions:2
2019-04-15T08:30:22.454Z|00011|jsonrpc|WARN|unix#10: receive error: Connection reset by peer
2019-04-15T08:30:22.454Z|00012|reconnect|WARN|unix#10: connection dropped (Connection reset by peer)
2019-04-15T08:32:25.586Z|00013|jsonrpc|WARN|unix#14: receive error: Connection reset by peer
2019-04-15T08:32:25.586Z|00014|reconnect|WARN|unix#14: connection dropped (Connection reset by peer)
2019-04-15T08:32:25.677Z|00015|jsonrpc|WARN|unix#15: receive error: Connection reset by peer
2019-04-15T08:32:25.677Z|00016|reconnect|WARN|unix#15: connection dropped (Connection reset by peer)
2019-04-15T08:32:26.063Z|00017|jsonrpc|WARN|unix#16: receive error: Connection reset by peer
2019-04-15T08:32:26.063Z|00018|reconnect|WARN|unix#16: connection dropped (Connection reset by peer)
2019-04-15T08:32:42.662Z|00019|jsonrpc|WARN|unix#18: receive error: Connection reset by peer
2019-04-15T08:32:42.662Z|00020|reconnect|WARN|unix#18: connection dropped (Connection reset by peer)
2019-04-15T08:32:42.754Z|00021|jsonrpc|WARN|unix#19: receive error: Connection reset by peer
2019-04-15T08:32:42.754Z|00022|reconnect|WARN|unix#19: connection dropped (Connection reset by peer)
2019-04-15T08:32:43.148Z|00023|jsonrpc|WARN|unix#20: receive error: Connection reset by peer
2019-04-15T08:32:43.148Z|00024|reconnect|WARN|unix#20: connection dropped (Connection reset by peer)
2019-04-15T08:33:17.216Z|00025|jsonrpc|WARN|unix#22: receive error: Connection reset by peer
2019-04-15T08:33:17.217Z|00026|reconnect|WARN|unix#22: connection dropped (Connection reset by peer)
2019-04-15T08:33:17.306Z|00027|jsonrpc|WARN|unix#23: receive error: Connection reset by peer
2019-04-15T08:33:17.306Z|00028|reconnect|WARN|unix#23: connection dropped (Connection reset by peer)
2019-04-15T08:33:17.702Z|00029|jsonrpc|WARN|unix#24: receive error: Connection reset by peer
2019-04-15T08:33:17.702Z|00030|reconnect|WARN|unix#24: connection dropped (Connection reset by peer)
2019-04-15T08:33:48.757Z|00031|jsonrpc|WARN|unix#26: receive error: Connection reset by peer
2019-04-15T08:33:48.757Z|00032|reconnect|WARN|unix#26: connection dropped (Connection reset by peer)
2019-04-15T08:33:48.850Z|00033|jsonrpc|WARN|unix#27: receive error: Connection reset by peer
2019-04-15T08:33:48.850Z|00034|reconnect|WARN|unix#27: connection dropped (Connection reset by peer)
2019-04-15T08:33:49.256Z|00035|jsonrpc|WARN|unix#28: receive error: Connection reset by peer
2019-04-15T08:33:49.256Z|00036|reconnect|WARN|unix#28: connection dropped (Connection reset by peer)
2019-04-15T08:33:54.741Z|00037|jsonrpc|WARN|unix#30: receive error: Connection reset by peer
2019-04-15T08:33:54.741Z|00038|reconnect|WARN|unix#30: connection dropped (Connection reset by peer)
2019-04-15T08:33:54.832Z|00039|reconnect|WARN|unix#31: connection dropped (Connection reset by peer)
2019-04-15T08:33:55.235Z|00040|reconnect|WARN|unix#32: connection dropped (Connection reset by peer)
2019-04-15T09:20:46.083Z|131838|poll_loop|INFO|Dropped 205210 log messages in last 6 seconds (most recently, 0 

[ovirt-users] Re: oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-14 Thread Nardus Geldenhuys
I also get this after installing a new oVirt node. It stops after about 20 minutes.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IGJUPKTPCVJ32NXIA5ULP6BMBGOWICIS/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-14 Thread Nardus Geldenhuys
This is fixed. It was a table in the DB that was truncated; we fixed it by
restoring a backup.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KFDWSRJGBRJGXKY6RT2HEHTCZWMOTYYC/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-10 Thread Nardus Geldenhuys
Hi Milan

Nothing special. We did the upgrade on two clusters. One is fine and this one
is broken. Is there a way to rescan the cluster with all its VMs to pull the
information?

I also noticed that there is no NIC showing under the VM's network. When you
try to add one, it complains that it exists, but it is not showing.

Thanks

Nardus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PDCCB5C2YXSEBXDFJNFD6EKUVJATXUOT/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-10 Thread Nardus Geldenhuys
It seems that ovirt-engine thinks that the storage is attached to a running
VM, but it is not. Is there a way to refresh these stats?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RAD6THJISCDZBQHIBCVBUHYACXRJB7LS/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-10 Thread Nardus Geldenhuys
I can't find any logs containing the VM name on the host it was supposed to
start on. It seems that it does not even get to the host and that it fails in
the oVirt engine.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TYTJ7SLSJTZH54AERIWEMYKUDM7PLW2F/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-10 Thread Nardus Geldenhuys
Can a moderator delete this post, please? I can't find the option to delete it.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SOLZQMJPFU6G3QY3BVLCCYREB3O3F2QQ/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-10 Thread Nardus Geldenhuys
attached is the engine.log

On Wed, 10 Apr 2019 at 10:39, Milan Zamazal  wrote:

> nard...@gmail.com writes:
>
> > Wonder if this issue is related to our problem and if there is a way
> > around it. We upgraded from 4.2.8. to 4.3.2. Now when we start some of
> > the VM's fail to start. You need to deattach the disks, create new VM,
> > reattach the disks to the new VM and then the new VM starts.
>
> Hi, were those VMs previously migrated from a 4.2.8 to a 4.3.2 host or
> to a 4.3.[01] host (which have the given bug)?
>
> Would it be possible to provide Vdsm logs from some of the failed and
> successful (with the new VM) starts with the same storage and also from
> the destination host of the preceding migration of the VM to a 4.3 host
> (if the VM was migrated)?
>
> Thanks,
> Milan
>
2019-04-10 10:09:46,786+02 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[81a82c39-4786-46db-b719-2808f736e359=VM]', sharedLocks=''}'
2019-04-10 10:09:46,831+02 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='81a82c39-4786-46db-b719-2808f736e359'}), log id: 2cbbd903
2019-04-10 10:09:46,832+02 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2cbbd903
2019-04-10 10:09:46,922+02 INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Candidate host 's4-cluster-05.abc.co.za' ('e2e277d0-1fcb-4c39-8b43-f7931c18abf5') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)
2019-04-10 10:09:47,125+02 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Running command: RunVmCommand internal: false. Entities affected :  ID: 81a82c39-4786-46db-b719-2808f736e359 Type: VMAction group RUN_VM with role type USER
2019-04-10 10:09:47,196+02 INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Candidate host 's4-cluster-05.abc.co.za' ('e2e277d0-1fcb-4c39-8b43-f7931c18abf5') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: bcc741ae-2fc3-4c21-8fcd-e5d69c48946b)
2019-04-10 10:09:47,366+02 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='81a82c39-4786-46db-b719-2808f736e359', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@7b441365'}), log id: 1387e426
2019-04-10 10:09:47,394+02 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 1387e426
2019-04-10 10:09:47,397+02 INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='69e69c67-49a5-4af6-a754-ddbed3e11f03', vmId='81a82c39-4786-46db-b719-2808f736e359', vm='VM [s4-grandmama-db-01]'}), log id: 1935c97c
2019-04-10 10:09:47,398+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] START, CreateBrokerVDSCommand(HostName = s4-cluster-06.abc.co.za, CreateVDSCommandParameters:{hostId='69e69c67-49a5-4af6-a754-ddbed3e11f03', vmId='81a82c39-4786-46db-b719-2808f736e359', vm='VM [s4-grandmama-db-01]'}), log id: 18e44d16
2019-04-10 10:09:47,415+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Failed in 'CreateBrokerVDS' method, for vds: 's4-cluster-06.abc.co.za'; host: 's4-cluster-06.abc.co.za': null
2019-04-10 10:09:47,416+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Command 'CreateBrokerVDSCommand(HostName = s4-cluster-06.abc.co.za, CreateVDSCommandParameters:{hostId='69e69c67-49a5-4af6-a754-ddbed3e11f03', vmId='81a82c39-4786-46db-b719-2808f736e359', vm='VM [s4-grandmama-db-01]'})' execution failed: null
2019-04-10 10:09:47,416+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] FINISH, CreateBrokerVDSCommand, return: , log id: 18e44d16
2019-04-10 10:09:47,416+02 

[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-10 Thread Nardus Geldenhuys
attaching ovirt engine.log
2019-04-10 10:09:46,786+02 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[81a82c39-4786-46db-b719-2808f736e359=VM]', sharedLocks=''}'
2019-04-10 10:09:46,831+02 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='81a82c39-4786-46db-b719-2808f736e359'}), log id: 2cbbd903
2019-04-10 10:09:46,832+02 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2cbbd903
2019-04-10 10:09:46,922+02 INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-331) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Candidate host 's4-cluster-05.abc.co.za' ('e2e277d0-1fcb-4c39-8b43-f7931c18abf5') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)
2019-04-10 10:09:47,125+02 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Running command: RunVmCommand internal: false. Entities affected :  ID: 81a82c39-4786-46db-b719-2808f736e359 Type: VMAction group RUN_VM with role type USER
2019-04-10 10:09:47,196+02 INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Candidate host 's4-cluster-05.abc.co.za' ('e2e277d0-1fcb-4c39-8b43-f7931c18abf5') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: bcc741ae-2fc3-4c21-8fcd-e5d69c48946b)
2019-04-10 10:09:47,366+02 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='81a82c39-4786-46db-b719-2808f736e359', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@7b441365'}), log id: 1387e426
2019-04-10 10:09:47,394+02 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 1387e426
2019-04-10 10:09:47,397+02 INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='69e69c67-49a5-4af6-a754-ddbed3e11f03', vmId='81a82c39-4786-46db-b719-2808f736e359', vm='VM [s4-grandmama-db-01]'}), log id: 1935c97c
2019-04-10 10:09:47,398+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] START, CreateBrokerVDSCommand(HostName = s4-cluster-06.abc.co.za, CreateVDSCommandParameters:{hostId='69e69c67-49a5-4af6-a754-ddbed3e11f03', vmId='81a82c39-4786-46db-b719-2808f736e359', vm='VM [s4-grandmama-db-01]'}), log id: 18e44d16
2019-04-10 10:09:47,415+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Failed in 'CreateBrokerVDS' method, for vds: 's4-cluster-06.abc.co.za'; host: 's4-cluster-06.abc.co.za': null
2019-04-10 10:09:47,416+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Command 'CreateBrokerVDSCommand(HostName = s4-cluster-06.abc.co.za, CreateVDSCommandParameters:{hostId='69e69c67-49a5-4af6-a754-ddbed3e11f03', vmId='81a82c39-4786-46db-b719-2808f736e359', vm='VM [s4-grandmama-db-01]'})' execution failed: null
2019-04-10 10:09:47,416+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] FINISH, CreateBrokerVDSCommand, return: , log id: 18e44d16
2019-04-10 10:09:47,416+02 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-47) [bcc741ae-2fc3-4c21-8fcd-e5d69c48946b] Failed to create VM: java.lang.NullPointerException
at org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$30(LibvirtVmXmlBuilder.java:1250) [vdsbroker.jar:]
at java.util.Comparator.lambda$comparing$77a9974f$1(Comparator.java:469) [rt.jar:1.8.0_201]
at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355) [rt.jar:1.8.0_201]
at java.util.TimSort.sort(TimSort.java:220) [rt.jar:1.8.0_201]
at java.util.Arrays.sort(Arrays.java:1512) [rt.jar:1.8.0_201]
at java.util.stream.SortedOps$SizedRefSortingSink.end(SortedOps.java:348)