Alrighty, I figured it out.

0) To set up a node in a cluster, first make sure the cluster's switch type is OVS, not Legacy.

1) Make sure you have an OVN controller set up somewhere; the default appears to be the ovirt-hosted-engine. a) You should also have the external network provider for OVN configured; see the web interface.
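
If you'd rather check the central side from a shell than the web UI,
this is a rough sanity check (the service names are my assumption for
an oVirt 4.2 / CentOS 7 engine host):

    # on the engine (OVN central) host: the OVN northbound daemon and
    # the oVirt external network provider should both be active
    systemctl status ovn-northd
    systemctl status ovirt-provider-ovn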

2) When you install the node, make sure it has openvswitch installed and running, i.e.:
   a) 'systemctl status openvswitch' says it's up and running (be sure it's enabled, too).
   b) 'ovs-vsctl show' has vdsm bridges listed, and possibly a br-int bridge.
   A quick check sequence is sketched just below.
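
Something like this (a sketch; openvswitch is the service name on my
CentOS 7 nodes, which may differ elsewhere):

    # enable and start Open vSwitch, then confirm it is healthy
    systemctl enable openvswitch
    systemctl start openvswitch
    systemctl status openvswitch
    # expect vdsmbr_* bridges and, once OVN is configured, a br-int bridge
    ovs-vsctl show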

3) If there is no br-int bridge, run 'vdsm-tool ovn-config <ovn-central-ip> <local-tunnel-ip>', where the first argument is the IP of the OVN controller host and the second is this node's own tunnel-endpoint IP.
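
For example, with hypothetical addresses (here using the engine as the
OVN central and the node's ovirtmgmt address as the tunnel endpoint):

    # tell the local ovn-controller where the OVN central is, and which
    # local IP to use as the geneve tunnel endpoint (both IPs are examples)
    vdsm-tool ovn-config 192.168.85.24 192.168.85.49

After that, 'ovs-vsctl show' should show a br-int bridge once
ovn-controller connects.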

4) When you have configured several nodes in the OVN, you should see the other nodes listed as geneve tunnel devices in 'ovs-vsctl show', e.g.:

This is a 4-node cluster, so tunnels to the other 3 nodes are expected:

[root@d8-r12-c1-n3 ~]# ovs-vsctl show
42df28ba-ffd6-4e61-b7b2-219576da51ab
    Bridge br-int
        fail_mode: secure
        Port "ovn-27461b-0"
            Interface "ovn-27461b-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.85.91"}
        Port "vnet1"
            Interface "vnet1"
        Port "ovn-a1c08f-0"
            Interface "ovn-a1c08f-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.85.87"}
        Port "patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"
Interface "patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"
                type: patch
options: {peer="patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"}
        Port "vnet0"
            Interface "vnet0"
        Port "patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"
Interface "patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"
                type: patch
options: {peer="patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"}
        Port "ovn-8da92c-0"
            Interface "ovn-8da92c-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.85.95"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "vdsmbr_LZmj3uJ1"
        Port "vdsmbr_LZmj3uJ1"
            Interface "vdsmbr_LZmj3uJ1"
                type: internal
        Port "net211"
            tag: 211
            Interface "net211"
                type: internal
        Port "eno2"
            Interface "eno2"
    Bridge "vdsmbr_e7rcnufp"
        Port "vdsmbr_e7rcnufp"
            Interface "vdsmbr_e7rcnufp"
                type: internal
        Port ipmi
            tag: 20
            Interface ipmi
                type: internal
        Port ovirtmgmt
            tag: 50
            Interface ovirtmgmt
                type: internal
        Port "patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"
Interface "patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"
                type: patch
options: {peer="patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"}
        Port "eno1"
            Interface "eno1"
        Port "patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"
Interface "patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"
                type: patch
options: {peer="patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"}
    ovs_version: "2.7.3"
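
If you only want the tunnel endpoints without the rest of the noise,
something like this should do it (a sketch; I usually just eyeball the
full output):

    # list just the geneve interfaces and their options (remote_ip is
    # the peer node's tunnel endpoint)
    ovs-vsctl --columns=name,options find interface type=geneve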

5) Create in the cluster the legacy-style bridge networks - i.e., ovirtmgmt, etc. Do this just like you were creating them for the legacy network: define the VLAN #, the MTU, etc.

6) Now, in the network config, create the OVN networks - e.g., ovn-ovirtmgmt is on an external provider (select OVN); make sure 'Connect to physical network' is checked and the correct network from step 5 is picked. Save this off.

This will connect the two networks together in a bridge, and all services are visible to both, i.e., DHCP, DNS, etc.
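
To double-check from the OVN central host, 'ovn-nbctl show' should list
a logical switch for each OVN network (how the provider names them is
my assumption; mine show up named after the network):

    # on the OVN central: expect one logical switch per OVN network,
    # with a port tying it to the physical network from step 5
    ovn-nbctl show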

7) When you create the VM, select the OVN network interface, not the legacy bridge interface (this is why I decided to prefix the OVN networks with 'ovn-').

8) Create the VM; start it, migrate it, stop it, restart it, etc.; it should all work now.

Lots of reading, lots of interesting stuff found; I finally figured this out after reading a bunch of bug fixes for the latest RC (released today).

thomas

On 03/15/2018 03:21 AM, Dan Kenigsberg wrote:
On Thu, Mar 15, 2018 at 1:50 AM, Thomas Davis <tada...@lbl.gov> wrote:
Well, I just hit

https://bugzilla.redhat.com/show_bug.cgi?id=1513991

And it's been closed, which means that with vdsm-4.20.17-1.el7.centos.x86_64
OVS networking is totally borked.

You are welcome to reopen that bug, specifying your use case for OvS.
I cannot promise fixing this bug, as our resources are limited, and
that bug, which was introduced in 4.2, was not deemed urgently
needed. https://gerrit.ovirt.org/#/c/86932/ attempts to fix the bug,
but it still needs a lot of work.


I know OVS is experimental, but it worked in 4.1.x, and now we have to step
back to the legacy bridge just to use 4.2.x, which in a VLAN environment
wreaks havoc (every VLAN needs a unique MAC assigned to the bridge,
which vdsm does not do, so suddenly the kernel complains about
seeing its own MAC address several times).

Could you elaborate on this issue? What is wrong with a bridge that
learns its MAC from its underlying device? What would you like Vdsm to do,
in your opinion? You can file a bug (or even send a patch) if there is
a functionality that you'd like to fix.


There is zero documentation on how to use OVN instead of OVS.

I hope that 
https://ovirt.org/develop/release-management/features/network/provider-physical-network/
can help.

thomas

On 03/13/2018 09:22 AM, Thomas Davis wrote:

I'll work on it some more.  I have 2 different clusters in the data center
(one is the Hosted Engine cluster, the other is not), and I had trouble with
both. I'll try again on the non-hosted-engine cluster to see what it is doing.
I have it working in 4.1, but we are trying to do a clean wipe, since the 4.1
engine has been upgraded so many times from v3.5, plus we want to move from a
single engine node to hosted-engine-ha and the ansible modules/roles
(which also have problems).

thomas

On Tue, Mar 13, 2018 at 6:27 AM, Edward Haas <eh...@redhat.com> wrote:


     OVS switch support is experimental at this stage, and in some cases,
     when trying to change from one switch type to the other, it fails.
     It was also not checked against a hosted engine setup, which handles
     networking a bit differently for the management network (ovirtmgmt).
     Nevertheless, we are interested in understanding all the problems
     that exist today, so if you can, please share the supervdsm log; it
     has the interesting networking traces.

     We plan to block cluster switch editing until these problems are
     resolved. It will only be allowed to define a new cluster as OVS,
     not to convert an existing one from Linux Bridge to OVS.

     On Fri, Mar 9, 2018 at 9:54 AM, Thomas Davis <tada...@lbl.gov> wrote:

         I'm getting further along with 4.2.2rc3 than the 4.2.1 when it
         comes to hosted engine and vlans..  it actually does install
         under 4.2.2rc3.

          But it's a complete failure when I switch the cluster from Linux
          Bridge/Legacy to OVS. The first time I try, vdsm does not properly
          configure the node; it's all messed up.

         I'm getting this in vdsmd logs:

         2018-03-08 23:12:46,610-0800 INFO  (jsonrpc/7) [api.network]
         START setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf':
         True, u'nic': u'eno1', u'vlan': u'50', u'ipaddr':
         u'192.168.85.49', u'switch': u'ovs', u'mtu': 1500, u'netmask':
         u'255.255.252.0', u'dhcpv6': False, u'STP': u'no', u'bridged':
         u'true', u'gateway': u'192.168.85.254', u'defaultRoute': True}},
         bondings={}, options={u'connectivityCheck': u'true',
         u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806,
         flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46)

         2018-03-08 23:12:52,449-0800 INFO  (jsonrpc/2)
         [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00
         seconds (__init__:573)

         2018-03-08 23:12:52,511-0800 INFO  (jsonrpc/7) [api.network]
         FINISH setupNetworks error=[Errno 19] ovirtmgmt is not present
         in the system from=::ffff:192.168.85.24,56806,
         flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:50)
         2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7)
         [jsonrpc.JsonRpcServer] Internal server error (__init__:611)
         Traceback (most recent call last):
            File
         "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
         606, in _handle_request
              res = method(**params)
            File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py",
         line 201, in _dynamicMethod
              result = fn(*methodArgs)
            File "<string>", line 2, in setupNetworks
            File "/usr/lib/python2.7/site-packages/vdsm/common/api.py",
         line 48, in method
              ret = func(*args, **kwargs)
            File "/usr/lib/python2.7/site-packages/vdsm/API.py", line
         1527, in setupNetworks
               supervdsm.getProxy().setupNetworks(networks, bondings,
                                                  options)
            File
         "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py",
         line 55, in __call__
              return callMethod()
            File
         "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py",
         line 53, in <lambda>
              **kwargs)
            File "<string>", line 2, in setupNetworks
            File "/usr/lib64/python2.7/multiprocessing/managers.py", line
         773, in _callmethod
              raise convert_to_error(kind, result)
         IOError: [Errno 19] ovirtmgmt is not present in the system
         2018-03-08 23:12:52,512-0800 INFO  (jsonrpc/7)
         [jsonrpc.JsonRpcServer] RPC call Host.setupNetworks failed
         (error -32603) in 5.90 seconds (__init__:573)
         2018-03-08 23:12:54,769-0800 INFO  (jsonrpc/1)
         [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00
         seconds (__init__:573)
         2018-03-08 23:12:54,772-0800 INFO  (jsonrpc/5) [api.host] START
         getCapabilities() from=::1,45562 (api:46)
         2018-03-08 23:12:54,906-0800 INFO  (jsonrpc/5) [api.host] FINISH
         getCapabilities error=[Errno 19] ovirtmgmt is not present in the
         system from=::1,45562 (api:50)
         2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5)
         [jsonrpc.JsonRpcServer] Internal server error (__init__:611)
         Traceback (most recent call last):
            File
         "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
         606, in _handle_request
              res = method(**params)
            File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py",
         line 201, in _dynamicMethod
              result = fn(*methodArgs)
            File "<string>", line 2, in getCapabilities
            File "/usr/lib/python2.7/site-packages/vdsm/common/api.py",
         line 48, in method
              ret = func(*args, **kwargs)
            File "/usr/lib/python2.7/site-packages/vdsm/API.py", line
         1339, in getCapabilities
              c = caps.get()
            File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py",
         line 168, in get
              net_caps = supervdsm.getProxy().network_caps()
            File
         "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py",
         line 55, in __call__
              return callMethod()
            File
         "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py",
         line 53, in <lambda>
              **kwargs)
            File "<string>", line 2, in network_caps
            File "/usr/lib64/python2.7/multiprocessing/managers.py", line
         773, in _callmethod
              raise convert_to_error(kind, result)
         IOError: [Errno 19] ovirtmgmt is not present in the system

          So something is dreadfully wrong with the bridge-to-OVS
          conversion in 4.2.2rc3.

         thomas