Re: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS..

2018-03-15 Thread Thomas Davis

Alrighty, I figured it out.

0) To set up a node in a cluster, make sure the cluster is in OVS mode, 
not legacy.


1) Make sure you have an OVN controller set up somewhere.  The default 
appears to be the ovirt-hosted-engine.
   a) You should also have the external network provider for OVN 
configured; see the web interface.
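
If you want to sanity-check the central side from a shell, something 
like this should work on the OVN controller host (a minimal sketch, 
assuming the standard OVN tools are installed there):

    # on the OVN central (here, the hosted-engine VM): list the
    # chassis registered in the southbound DB - every node configured
    # in step 3 should eventually show up here with its geneve encap IP
    ovn-sbctl show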


2) When you install the node, make sure it has openvswitch installed 
and running, i.e.:
   a) 'systemctl status openvswitch' says it's up and running (be sure 
it's enabled, too).

   b) 'ovs-vsctl show' has the vdsm bridges listed, and possibly a br-int
  bridge.
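
A quick recap of those checks from the shell (plain systemctl and 
ovs-vsctl usage, nothing oVirt-specific):

    systemctl enable openvswitch    # make it start at boot
    systemctl start openvswitch
    systemctl status openvswitch    # should say active (running)
    ovs-vsctl show                  # should list the vdsm bridges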

3) If there is no br-int bridge, run 'vdsm-tool ovn-config 
<ovn-controller-ip> <host-ip>'.
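
For example (a hypothetical invocation - it reuses the engine address 
192.168.85.24 and host address 192.168.85.49 that appear in the logs 
further down; the first argument is the OVN central's IP, the second 
is the local IP to build the tunnels from):

    vdsm-tool ovn-config 192.168.85.24 192.168.85.49
    ovs-vsctl show    # br-int should now be listed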


4) When you have configured several nodes in the OVN, you should see 
them listed as geneve devices in 'ovs-vsctl show', i.e.:

This is a 4-node cluster, so the other 3 nodes are expected:

[root@d8-r12-c1-n3 ~]# ovs-vsctl show
42df28ba-ffd6-4e61-b7b2-219576da51ab
    Bridge br-int
        fail_mode: secure
        Port "ovn-27461b-0"
            Interface "ovn-27461b-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.85.91"}
        Port "vnet1"
            Interface "vnet1"
        Port "ovn-a1c08f-0"
            Interface "ovn-a1c08f-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.85.87"}
        Port "patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"
            Interface "patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"
                type: patch
                options: {peer="patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"}
        Port "vnet0"
            Interface "vnet0"
        Port "patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"
            Interface "patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"
                type: patch
                options: {peer="patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"}
        Port "ovn-8da92c-0"
            Interface "ovn-8da92c-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.85.95"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "vdsmbr_LZmj3uJ1"
        Port "vdsmbr_LZmj3uJ1"
            Interface "vdsmbr_LZmj3uJ1"
                type: internal
        Port "net211"
            tag: 211
            Interface "net211"
                type: internal
        Port "eno2"
            Interface "eno2"
    Bridge "vdsmbr_e7rcnufp"
        Port "vdsmbr_e7rcnufp"
            Interface "vdsmbr_e7rcnufp"
                type: internal
        Port ipmi
            tag: 20
            Interface ipmi
                type: internal
        Port ovirtmgmt
            tag: 50
            Interface ovirtmgmt
                type: internal
        Port "patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"
            Interface "patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"
                type: patch
                options: {peer="patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"}
        Port "eno1"
            Interface "eno1"
        Port "patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"
            Interface "patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"
                type: patch
                options: {peer="patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"}
    ovs_version: "2.7.3"
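
A quick way to confirm the tunnel mesh is complete (this just counts 
ports, assuming the 'ovn-' name prefix that ovn-controller gives its 
tunnel ports, as seen above):

    # each peer host gets one geneve port on br-int, so on a
    # 4-node cluster every host should report 3 here:
    ovs-vsctl list-ports br-int | grep -c '^ovn-'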

5) Create the legacy-style bridge networks in the cluster - i.e., 
ovirtmgmt, etc.  Do this just like you were creating them for the 
legacy network: define the VLAN #, the MTU, etc.


6) Now create the OVN networks in the network config - e.g., 
ovn-ovirtmgmt goes on an external provider (select OVN); make sure 
'connect to physical network' is checked and the correct network from 
step 5 is picked.  Save this off.


This will connect the two networks together in a bridge, and all 
services are visible to both, i.e. DHCP, DNS, etc.
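
You can check this from the OVN central as well - a sketch, assuming 
the network name from step 6; when 'connect to physical network' is 
set, the provider adds a localnet-type port to the logical switch:

    # the logical switch for ovn-ovirtmgmt should be listed with a
    # port of type localnet pointing at the step-5 network
    ovn-nbctl show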


7) When you create the VM, select the OVN network interface, not the 
legacy bridge interface (this is why I decided to prefix with 'ovn-').


8) Create the VM; start it, migrate it, stop it, re-start it, etc. - it 
should all work now.


Lots of reading.. lots of interesting stuff found..  I finally figured 
this out after reading a bunch of bug fixes for the latest RC (released 
today).


thomas

On 03/15/2018 03:21 AM, Dan Kenigsberg wrote:

On Thu, Mar 15, 2018 at 1:50 AM, Thomas Davis <tada...@lbl.gov> wrote:

Well, I just hit

https://bugzilla.redhat.com/show_bug.cgi?id=1513991

And it's been closed, which means with vdsm-4.20.17-1.el7.centos.x86_64 
OVS networking is totally borked..

Re: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS..

2018-03-14 Thread Thomas Davis

Well, I just hit

https://bugzilla.redhat.com/show_bug.cgi?id=1513991

And it's been closed, which means with vdsm-4.20.17-1.el7.centos.x86_64
 OVS networking is totally borked..

I know OVS is experimental, but it worked in 4.1.x, and now we have to 
step back to the legacy bridge just to use 4.2.x, which in a VLAN 
environment just wreaks havoc (every VLAN needs a unique MAC assigned 
to the bridge, which vdsm does not do, so suddenly you get the kernel 
complaining about seeing its own MAC address several times.)
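
You can see the clash with plain iproute2 (a minimal check, nothing 
oVirt-specific):

    # list every bridge with its MAC; in the legacy setup the
    # per-VLAN bridges can end up sharing the underlying NIC's MAC
    ip -br link show type bridge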


There is zero documentation on how to use OVN instead of OVS.

thomas

On 03/13/2018 09:22 AM, Thomas Davis wrote:
I'll work on it some more.  I have 2 different clusters in the data 
center (1 is the Hosted Engine systems, another is not..)  I had trouble 
with both.  I'll try again on the non-hosted engine cluster to see what 
it is doing.  I have it working in 4.1, but we are trying to do a clean 
wipe since the 4.1 engine has been upgraded so many times from v3.5; plus, 
we want to move to hosted-engine-ha from a single engine node and the 
ansible modules/roles (which also have problems..)


thomas

On Tue, Mar 13, 2018 at 6:27 AM, Edward Haas <eh...@redhat.com> wrote:



OVS switch support is experimental at this stage and in some cases
when trying to change from one switch to the other, it fails.
It was also not checked against a hosted engine setup, which handles
networking a bit differently for the management network (ovirtmgmt).
Nevertheless, we are interested in understanding all the problems
that exist today, so if you can, please share the supervdsm log; it
has the interesting networking traces.

We plan to block cluster switch editing until these problems are
resolved. It will only be allowed to define a new cluster as OVS,
not to convert an existing one from Linux Bridge to OVS.

On Fri, Mar 9, 2018 at 9:54 AM, Thomas Davis <tada...@lbl.gov> wrote:

I'm getting further along with 4.2.2rc3 than the 4.2.1 when it
comes to hosted engine and vlans..  it actually does install
under 4.2.2rc3.

But it's a complete failure when I switch the cluster from Linux
Bridge/Legacy to OVS.  The first time I try, vdsm does
not properly configure the node; it's all messed up.

I'm getting this in vdsmd logs:

2018-03-08 23:12:46,610-0800 INFO  (jsonrpc/7) [api.network]
START setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf':
True, u'nic': u'eno1', u'vlan': u'50', u'ipaddr':
u'192.168.85.49', u'switch': u'ovs', u'mtu': 1500, u'netmask':
u'255.255.252.0', u'dhcpv6': False, u'STP': u'no', u'bridged':
u'true', u'gateway': u'192.168.85.254', u'defaultRoute': True}},
bondings={}, options={u'connectivityCheck': u'true',
u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806,
flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46)

2018-03-08 23:12:52,449-0800 INFO  (jsonrpc/2)
[jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00
seconds (__init__:573)

2018-03-08 23:12:52,511-0800 INFO  (jsonrpc/7) [api.network]
FINISH setupNetworks error=[Errno 19] ovirtmgmt is not present
in the system from=::ffff:192.168.85.24,56806,
flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:50)
2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7)
[jsonrpc.JsonRpcServer] Internal server error (__init__:611)
Traceback (most recent call last):
   File
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
606, in _handle_request
     res = method(**params)
   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py",
line 201, in _dynamicMethod
     result = fn(*methodArgs)
   File "", line 2, in setupNetworks
   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py",
line 48, in method
     ret = func(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line
1527, in setupNetworks
     supervdsm.getProxy().setupNetworks(networks, bondings, options)
   File
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py",
line 55, in __call__
     return callMethod()
   File
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py",
line 53, in <lambda>
     **kwargs)
   File "", line 2, in setupNetworks
   File "/usr/lib64/python2.7/multiprocessing/managers.py", line
773, in _callmethod
     raise convert_to_error(kind, result)
IOError: [Errno 19] ovirtmgmt is not present in the system
2018-03-08 23:12:52,512-0800 INFO  (jsonrpc/7)
[jsonrpc.JsonRpcServer] RPC call Host.setupNetworks failed
(error -32603) in 5.90 seconds (__init__:573)

Re: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS..

2018-03-13 Thread Thomas Davis
I'll work on it some more.  I have 2 different clusters in the data center
(1 is the Hosted Engine systems, another is not..)  I had trouble with
both.  I'll try again on the non-hosted engine cluster to see what it is
doing.  I have it working in 4.1, but we are trying to do a clean wipe
since the 4.1 engine has been upgraded so many times from v3.5; plus, we want
to move to hosted-engine-ha from a single engine node and the ansible
modules/roles (which also have problems..)

thomas

On Tue, Mar 13, 2018 at 6:27 AM, Edward Haas <eh...@redhat.com> wrote:

>
> OVS switch support is experimental at this stage and in some cases when
> trying to change from one switch to the other, it fails.
> It was also not checked against a hosted engine setup, which handles
> networking a bit differently for the management network (ovirtmgmt).
> Nevertheless, we are interested in understanding all the problems that
> exist today, so if you can, please share the supervdsm log; it has the
> interesting networking traces.
>
> We plan to block cluster switch editing until these problems are resolved.
> It will only be allowed to define a new cluster as OVS, not to convert an
> existing one from Linux Bridge to OVS.
>
> On Fri, Mar 9, 2018 at 9:54 AM, Thomas Davis <tada...@lbl.gov> wrote:
>
>> I'm getting further along with 4.2.2rc3 than the 4.2.1 when it comes to
>> hosted engine and vlans..  it actually does install
>> under 4.2.2rc3.
>>
>> But it's a complete failure when I switch the cluster from Linux
>> Bridge/Legacy to OVS.  The first time I try, vdsm does
>> not properly configure the node; it's all messed up.
>>
>> I'm getting this in vdsmd logs:
>>
>> 2018-03-08 23:12:46,610-0800 INFO  (jsonrpc/7) [api.network] START
>> setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': True, u'nic':
>> u'eno1', u'vlan': u'50', u'ipaddr': u'192.168.85.49', u'switch': u'ovs',
>> u'mtu': 1500, u'netmask': u'255.255.252.0', u'dhcpv6': False, u'STP':
>> u'no', u'bridged': u'true', u'gateway': u'192.168.85.254', u'defaultRoute':
>> True}}, bondings={}, options={u'connectivityCheck': u'true',
>> u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806,
>> flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46)
>>
>> 2018-03-08 23:12:52,449-0800 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer]
>> RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
>>
>> 2018-03-08 23:12:52,511-0800 INFO  (jsonrpc/7) [api.network] FINISH
>> setupNetworks error=[Errno 19] ovirtmgmt is not present in the system
>> from=::ffff:192.168.85.24,56806, flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11
>> (api:50)
>> 2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> Internal server error (__init__:611)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
>> 606, in _handle_request
>> res = method(**params)
>>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201,
>> in _dynamicMethod
>> result = fn(*methodArgs)
>>   File "", line 2, in setupNetworks
>>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48,
>> in method
>> ret = func(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1527, in
>> setupNetworks
>> supervdsm.getProxy().setupNetworks(networks, bondings, options)
>>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
>> 55, in __call__
>> return callMethod()
>>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
>> 53, in <lambda>
>> **kwargs)
>>   File "", line 2, in setupNetworks
>>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
>> _callmethod
>> raise convert_to_error(kind, result)
>> IOError: [Errno 19] ovirtmgmt is not present in the system
>> 2018-03-08 23:12:52,512-0800 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> RPC call Host.setupNetworks failed (error -32603) in 5.90 seconds
>> (__init__:573)
>> 2018-03-08 23:12:54,769-0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer]
>> RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
>> 2018-03-08 23:12:54,772-0800 INFO  (jsonrpc/5) [api.host] START
>> getCapabilities() from=::1,45562 (api:46)
>> 2018-03-08 23:12:54,906-0800 INFO  (jsonrpc/5) [api.host] FINISH
>> getCapabilities error=[Errno 19] ovirtmgmt is not present in the system
>> from=::1,45562 (api:50)
>> 2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer]
>> Internal server error (__init__:611)

[ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS..

2018-03-08 Thread Thomas Davis

I'm getting further along with 4.2.2rc3 than the 4.2.1 when it comes to hosted 
engine and vlans..  it actually does install
under 4.2.2rc3.

But it's a complete failure when I switch the cluster from Linux Bridge/Legacy 
to OVS.  The first time I try, vdsm does
not properly configure the node; it's all messed up.

I'm getting this in vdsmd logs:

2018-03-08 23:12:46,610-0800 INFO  (jsonrpc/7) [api.network] START setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': True, 
u'nic': u'eno1', u'vlan': u'50', u'ipaddr': u'192.168.85.49', u'switch': u'ovs', u'mtu': 1500, u'netmask': u'255.255.252.0', 
u'dhcpv6': False, u'STP': u'no', u'bridged': u'true', u'gateway': u'192.168.85.254', u'defaultRoute': True}}, bondings={}, 
options={u'connectivityCheck': u'true', u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806, 
flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46)


2018-03-08 23:12:52,449-0800 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
Host.ping2 succeeded in 0.00 seconds (__init__:573)

2018-03-08 23:12:52,511-0800 INFO  (jsonrpc/7) [api.network] FINISH setupNetworks error=[Errno 19] ovirtmgmt is not present in the 
system from=::ffff:192.168.85.24,56806, flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:50)

2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal 
server error (__init__:611)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in 
_handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in 
_dynamicMethod
result = fn(*methodArgs)
  File "", line 2, in setupNetworks
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1527, in 
setupNetworks
supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in 

**kwargs)
  File "", line 2, in setupNetworks
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod
raise convert_to_error(kind, result)
IOError: [Errno 19] ovirtmgmt is not present in the system
2018-03-08 23:12:52,512-0800 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.setupNetworks failed (error -32603) in 5.90 
seconds (__init__:573)

2018-03-08 23:12:54,769-0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-03-08 23:12:54,772-0800 INFO  (jsonrpc/5) [api.host] START 
getCapabilities() from=::1,45562 (api:46)
2018-03-08 23:12:54,906-0800 INFO  (jsonrpc/5) [api.host] FINISH getCapabilities error=[Errno 19] ovirtmgmt is not present in the 
system from=::1,45562 (api:50)

2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] Internal 
server error (__init__:611)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in 
_handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in 
_dynamicMethod
result = fn(*methodArgs)
  File "", line 2, in getCapabilities
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1339, in 
getCapabilities
c = caps.get()
  File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 168, in get
net_caps = supervdsm.getProxy().network_caps()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in 

**kwargs)
  File "", line 2, in network_caps
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod
raise convert_to_error(kind, result)
IOError: [Errno 19] ovirtmgmt is not present in the system

So something is dreadfully wrong with the bridge to ovs conversion in 4.2.2rc3.

thomas


Re: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node..

2018-02-15 Thread Thomas Davis
In playing with this, I found that 4.2.1 hosted-engine will not install 
on a node whose ovirtmgmt interface is a VLAN.


Is this still a supported config?  I see that access-port, bonded, and 
VLAN-tagged setups are supported by older versions..


thomas

On 02/05/2018 08:16 AM, Simone Tiraboschi wrote:



On Fri, Feb 2, 2018 at 9:10 PM, Thomas Davis <tada...@lbl.gov> wrote:


Is this supported?

I have a node with CentOS 7.4 minimal installed, and an
interface set up with an IP address.

I've yum installed nothing else except the ovirt-4.2.1-pre rpm, run
screen, and then run the 'hosted-engine --deploy' command.


Fine, nothing else is required.


It hangs on:

[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Get ovirtmgmt route table id]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
true, "cmd": "ip rule list | grep ovirtmgmt | sed s/[.*]\\
//g | awk '{ print $9 }'", "delta": "0:00:00.004845", "end":
"2018-02-02 12:03:30.794860", "rc": 0, "start": "2018-02-02
12:03:30.790015", "stderr": "", "stderr_lines": [], "stdout": "",
"stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20180202120333.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for
the issue, fix accordingly or re-deploy from scratch.
           Log file is located at

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180202115038-r11nh1.log

but the VM is up and running, just attached to the 192.168.122.0/24 subnet

[root@d8-r13-c2-n1 ~]# ssh root@192.168.122.37
root@192.168.122.37's password:
Last login: Fri Feb  2 11:54:47 2018 from 192.168.122.1
[root@ovirt ~]# systemctl status ovirt-engine
● ovirt-engine.service - oVirt Engine
    Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service;
enabled; vendor preset: disabled)
    Active: active (running) since Fri 2018-02-02 11:54:42 PST;
11min ago
  Main PID: 24724 (ovirt-engine.py)
    CGroup: /system.slice/ovirt-engine.service
            ├─24724 /usr/bin/python
/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py
--redirect-output --systemd=notify start
            └─24856 ovirt-engine -server -XX:+TieredCompilation
-Xms3971M -Xmx3971M -Djava.awt.headless=true
-Dsun.rmi.dgc.client.gcInterval=360
-Dsun.rmi.dgc.server.gcInterval=360 -Djsse...

Feb 02 11:54:41 ovirt.crt.nersc.gov systemd[1]: Starting oVirt Engine...
Feb 02 11:54:41 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02
11:54:41,767-0800 ovirt-engine: INFO _detectJBossVersion:187 Detecting
JBoss version. Running: /usr/lib/jvm/jre/...60', '-
Feb 02 11:54:42 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02
11:54:42,394-0800 ovirt-engine: INFO _detectJBossVersion:207 Return
code: 0,  | stdout: '[u'WildFly Full 11.0.0tderr: '[]'
Feb 02 11:54:42 ovirt.crt.nersc.gov systemd[1]: Started oVirt Engine.
Feb 02 11:55:25 ovirt.crt.nersc.gov python2[25640]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:29 ovirt.crt.nersc.gov python2[25698]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25741]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25767]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:31 ovirt.crt.nersc.gov python

[ovirt-users] hosted-engine 4.2.1-pre setup on a clean node..

2018-02-02 Thread Thomas Davis
Is this supported?

I have a node with CentOS 7.4 minimal installed, and an interface
set up with an IP address.

I've yum installed nothing else except the ovirt-4.2.1-pre rpm, run screen,
and then run the 'hosted-engine --deploy' command.

It hangs on:

[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Get ovirtmgmt route table id]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true,
"cmd": "ip rule list | grep ovirtmgmt | sed s/[.*]\\ //g | awk '{
print $9 }'", "delta": "0:00:00.004845", "end": "2018-02-02
12:03:30.794860", "rc": 0, "start": "2018-02-02 12:03:30.790015", "stderr":
"", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20180202120333.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix accordingly or re-deploy from scratch.
  Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180202115038-r11nh1.log

but the VM is up and running, just attached to the 192.168.122.0/24 subnet

[root@d8-r13-c2-n1 ~]# ssh root@192.168.122.37
root@192.168.122.37's password:
Last login: Fri Feb  2 11:54:47 2018 from 192.168.122.1
[root@ovirt ~]# systemctl status ovirt-engine
● ovirt-engine.service - oVirt Engine
   Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled;
vendor preset: disabled)
   Active: active (running) since Fri 2018-02-02 11:54:42 PST; 11min ago
 Main PID: 24724 (ovirt-engine.py)
   CGroup: /system.slice/ovirt-engine.service
   ├─24724 /usr/bin/python
/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py
--redirect-output --systemd=notify start
   └─24856 ovirt-engine -server -XX:+TieredCompilation -Xms3971M
-Xmx3971M -Djava.awt.headless=true -Dsun.rmi.dgc.client.gcInterval=360
-Dsun.rmi.dgc.server.gcInterval=360 -Djsse...

Feb 02 11:54:41 ovirt.crt.nersc.gov systemd[1]: Starting oVirt Engine...
Feb 02 11:54:41 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02
11:54:41,767-0800 ovirt-engine: INFO _detectJBossVersion:187 Detecting
JBoss version. Running: /usr/lib/jvm/jre/...60', '-
Feb 02 11:54:42 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02
11:54:42,394-0800 ovirt-engine: INFO _detectJBossVersion:207 Return code:
0,  | stdout: '[u'WildFly Full 11.0.0tderr: '[]'
Feb 02 11:54:42 ovirt.crt.nersc.gov systemd[1]: Started oVirt Engine.
Feb 02 11:55:25 ovirt.crt.nersc.gov python2[25640]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:29 ovirt.crt.nersc.gov python2[25698]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25741]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25767]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True
Feb 02 11:55:31 ovirt.crt.nersc.gov python2[25795]: ansible-stat Invoked
with checksum_algorithm=sha1 get_checksum=True follow=False
path=/etc/ovirt-engine-metrics/config.yml get_md5...ributes=True

The 'ip rule list' never has an ovirtmgmt rule/table in it.. which means
the ansible script loops then dies; vdsmd has never configured the network
on the node.
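
If you want to watch for it by hand while the deploy loops, here is 
the same check the playbook does, with plain iproute2 (the rule only 
appears once vdsm has taken ownership of ovirtmgmt):

    ip rule list | grep ovirtmgmt             # empty output = the task keeps looping
    ip route show table all | grep ovirtmgmt  # the ovirtmgmt routes should appear too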

[root@d8-r13-c2-n1 ~]# systemctl status vdsmd -l
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
   Active: active (running) since Fri 2018-02-02 11:55:11 PST; 14min ago
 Main PID: 7654 (vdsmd)
   CGroup: /system.slice/vdsmd.service
   └─7654 /usr/bin/python2 /usr/share/vdsm/vdsmd

Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running
dummybr
Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running
tune_system
Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running
test_space
Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running
test_lo
Feb 02 11:55:11 d8-r13-c2-n1 systemd[1]: Started Virtual Desktop Server
Manager.
Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN File:
/var/run/vdsm/trackedInterfaces/vnet0 already removed
Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN Not ready yet, ignoring event
'|virt|VM_status|ba56a114-efb0-45e0-b2ad-808805ae93e0'

Re: [ovirt-users] ovirt 4.1.0.4 error..

2017-02-23 Thread Thomas Davis

On 02/23/2017 12:53 AM, Yedidyah Bar David wrote:

On Thu, Feb 23, 2017 at 6:17 AM, Thomas Davis <tada...@lbl.gov> wrote:

I am getting this error message:

Unexpected character ('<' (code 60)): expected a valid value (number,
String, array, object, 'true', 'false' or 'null') at [Source:
java.io.StringReader@4c65dbb1; line: 1, column: 2]

in both screen after login on ovirt, and in the engine.log.


Can you share more of engine.log? Thanks.



I'll send them as a separate email to just you.

thomas