Re: [ovs-discuss] Openvswitch and LXC integration on Ubuntu 18.04

2018-05-14 Thread densha
Paul

Thanks for that command.  I tried it and found that my br-int was not up.

After "sudo ip link set br-int up" and "sudo ip addr add 192.168.1.1/24
dev br-int" it worked and I could ping as expected.

For Ubuntu 18.04 I have added the following to /etc/network/interfaces

allow-ovs br-int
iface br-int inet static
address 192.168.1.1
netmask 255.255.255.0
ovs_type OVSBridge

But br-int is not coming up correctly after a reboot.

5: br-int:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether c6:8e:e2:7b:0f:4f brd ff:ff:ff:ff:ff:ff

Is this the correct way to define an Openvswitch bridge with an IP on Ubuntu?
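
For comparison, here is a fuller ifupdown stanza of the kind the
openvswitch-switch package's ifupdown integration documents (a hedged sketch
only; the 192.168.1.1 address is kept from above, and enp2s0 is shown as a
member port because it was added to br-int earlier in the thread):

allow-ovs br-int
iface br-int inet static
  address 192.168.1.1
  netmask 255.255.255.0
  ovs_type OVSBridge
  ovs_ports enp2s0

allow-br-int enp2s0
iface enp2s0 inet manual
  ovs_bridge br-int
  ovs_type OVSPort

Also worth noting: Ubuntu 18.04 uses netplan by default, so
/etc/network/interfaces is only processed at boot if the ifupdown package is
installed; without it the stanza above never runs, which would also leave
br-int DOWN after a reboot.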


Thanks

Densha

> Before you rebuild, I suggest you ping at an interval of 0.01 s, then take
> "ovs-ofctl dump-flows br-int" and save it to a file. The relevant columns
> are table and n_packets. Wait a couple of seconds, then take the dump
> again. Compare and find the entries that increment at the rate of your
> ping.
>
> If you don't see the hits in the tables - check iptables, kmod, etc.
>
> If you see them, use trace to figure out why your traffic is being
> dropped.
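
As a concrete illustration of the workflow described above (a sketch only;
the 192.168.10.x addresses and the veth4U4B0B port name are taken from
earlier in the thread, and the trace fields are illustrative):

sudo ovs-ofctl dump-flows br-int > /tmp/flows-before.txt
# meanwhile, inside the container: sudo ping -i 0.01 192.168.10.1
sleep 2
sudo ovs-ofctl dump-flows br-int > /tmp/flows-after.txt
diff /tmp/flows-before.txt /tmp/flows-after.txt   # look at the n_packets deltas

# if the counters do increment, trace a sample packet to see where it is dropped:
sudo ovs-appctl ofproto/trace br-int in_port=veth4U4B0B,icmp,nw_src=192.168.10.2,nw_dst=192.168.10.1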
>
> Regards,
> Paul
>
>
> Get Outlook for iOS<https://aka.ms/o0ukef>
> 
> From: ovs-discuss-boun...@openvswitch.org
>  on behalf of den...@exemail.com.au
> 
> Sent: Saturday, May 12, 2018 11:45:57 PM
> To: Orabuntu-LXC
> Cc: ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] Openvswitch and LXC integration on Ubuntu 18.04
>
> Thanks.  I tried that and am still unable to ping from the LXC container to
> the IP address set on the bridge.
>
> I will rebuild everything from scratch and retry.
>
>> Check sysctl settings.  Check/set these on the LXC host machine in the
>> /etc/sysctl.conf (or in a new file in the /etc/sysctl.d directory, e.g.
>> you could call it /etc/sysctl.d/60-lxc.conf):
>>
>> net.ipv4.conf.default.rp_filter=0
>> net.ipv4.conf.all.rp_filter=0
>> net.ipv4.ip_forward=1
>>
>> Reference:
>> https://thenewstack.io/solving-a-common-beginners-problem-when-pinging-from-an-openstack-instance/
>>
>>
>>
>> On Sat, May 12, 2018 at 7:09 AM,  wrote:
>>
>>> Thanks for the response and links.  I will watch the OvS Con videos.
>>>
>>> I have now successfully started the container, but unable to ping out
>>> or
>>> into the container.
>>>
>>> I have modified my /var/lib/vm1/conf to be
>>>
>>> # Network configuration
>>> lxc.net.0.type = veth
>>> lxc.net.0.link = br-int <- Name of my internal container bridge
>>> lxc.net.0.flags = up
>>> lxc.net.0.name=eth0
>>> lxc.net.0.hwaddr = 00:16:3e:d2:23:a8 <- This was in the conf when
>>> created.
>>>
>>>
>>> When I start the container, I can see the port being added to the bridge
>>> on the host system
>>>
>>> # sudo lxc-start -n vm1
>>> # sudo ovs-vsctl show
>>> c3d9247e-68f1-4ae1-be0e-4bb86fd2c541
>>> Bridge br-dmz
>>> Port br-dmz
>>> Interface br-dmz
>>> type: internal
>>> Bridge br-int
>>> Port "veth4U4B0B"  <- New port added when
>>> container starts
>>> Interface "veth4U4B0B"
>>> Port br-int
>>> Interface br-int
>>> type: internal
>>> Port "enp2s0"
>>> Interface "enp2s0"
>>> ovs_version: "2.9.0"
>>>
>>> The bridge br-int has self IP 192.168.10.1/24, which I added after the
>>> reboot using
>>>
>>> # sudo ip addr add 192.168.10.1/24 dev br-int
>>>
>>> 5: br-int:  mtu 1500 qdisc noop state DOWN group
>>> default qlen 1000
>>> link/ether 00:01:80:82:f8:59 brd ff:ff:ff:ff:ff:ff
>>> inet 192.168.10.1/24 scope global br-int
>>>valid_lft forever preferred_lft forever
>>>
>>> and the new port
>>>
>>> 8: veth4U4B0B@if7:  mtu 1500 qdisc
>>> noqueue master ovs-system state UP group default qlen 1000
>>> link/ether fe:b8:87:1b:1e:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
>>> inet6 fe80::fcb8:87ff:fe1b:1e5e/64 scope link
>>>valid_lft forever preferred_lft forever
>>>
>>> Inside the container I set the IP of eth0 device using
>>>
>>> ubuntu@vm1:~$ sudo ip addr add 192.168.10.2/24 dev eth0
>>>

Re: [ovs-discuss] Openvswitch and LXC integration on Ubuntu 18.04

2018-05-12 Thread densha
Thanks.  I tried that and am still unable to ping from the LXC container to
the IP address set on the bridge.

I will rebuild everything from scratch and retry.

> Check sysctl settings.  Check/set these on the LXC host machine in the
> /etc/sysctl.conf (or in a new file in the /etc/sysctl.d directory, e.g. you
> could call it /etc/sysctl.d/60-lxc.conf):
>
> net.ipv4.conf.default.rp_filter=0
> net.ipv4.conf.all.rp_filter=0
> net.ipv4.ip_forward=1
>
> Reference:
> https://thenewstack.io/solving-a-common-beginners-problem-when-pinging-from-an-openstack-instance/
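
A small aside on applying these: if the values go into
/etc/sysctl.d/60-lxc.conf they can be loaded without a reboot, for example:

sudo sysctl --system                # reload all sysctl configuration files
sudo sysctl net.ipv4.ip_forward     # verify the value is now 1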
>
>
>
> On Sat, May 12, 2018 at 7:09 AM,  wrote:
>
>> Thanks for the response and links.  I will watch the OvS Con videos.
>>
>> I have now successfully started the container, but unable to ping out or
>> into the container.
>>
>> I have modified my /var/lib/vm1/conf to be
>>
>> # Network configuration
>> lxc.net.0.type = veth
>> lxc.net.0.link = br-int <- Name of my internal container bridge
>> lxc.net.0.flags = up
>> lxc.net.0.name=eth0
>> lxc.net.0.hwaddr = 00:16:3e:d2:23:a8 <- This was in the conf when
>> created.
>>
>>
>> When I start the container, I can see the port being added to the bridge
>> on the host system
>>
>> # sudo lxc-start -n vm1
>> # sudo ovs-vsctl show
>> c3d9247e-68f1-4ae1-be0e-4bb86fd2c541
>> Bridge br-dmz
>> Port br-dmz
>> Interface br-dmz
>> type: internal
>> Bridge br-int
>> Port "veth4U4B0B"  <- New port added when
>> container starts
>> Interface "veth4U4B0B"
>> Port br-int
>> Interface br-int
>> type: internal
>> Port "enp2s0"
>> Interface "enp2s0"
>> ovs_version: "2.9.0"
>>
>> The bridge br-int has self IP 192.168.10.1/24, which I added after the
>> reboot using
>>
>> # sudo ip addr add 192.168.10.1/24 dev br-int
>>
>> 5: br-int:  mtu 1500 qdisc noop state DOWN group
>> default qlen 1000
>> link/ether 00:01:80:82:f8:59 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.10.1/24 scope global br-int
>>valid_lft forever preferred_lft forever
>>
>> and the new port
>>
>> 8: veth4U4B0B@if7:  mtu 1500 qdisc
>> noqueue master ovs-system state UP group default qlen 1000
>> link/ether fe:b8:87:1b:1e:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> inet6 fe80::fcb8:87ff:fe1b:1e5e/64 scope link
>>valid_lft forever preferred_lft forever
>>
>> Inside the container I set the IP of eth0 device using
>>
>> ubuntu@vm1:~$ sudo ip addr add 192.168.10.2/24 dev eth0
>>
>> ubuntu@vm1:~$ ip a
>> 7: eth0@if8:  mtu 1500 qdisc noqueue
>> state UP group default qlen 1000
>> link/ether 00:16:3e:d2:23:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> inet 192.168.10.2/24 scope global eth0
>>valid_lft forever preferred_lft forever
>> inet6 fe80::216:3eff:fed2:23a8/64 scope link
>>valid_lft forever preferred_lft forever
>>
>> However I still cannot ping the self IP of the bridge.
>>
>> Is there anything obvious that I have configured wrong?
>>
>> Thanks
>>
>> Densha
>>
>>
>> > These materials might help:
>> >
>> > 1.  Presentation on running LXC on OpenvSwitch at OvS Con:
>> >
>> > https://www.youtube.com/watch?v=MXewSiDvQl4&t=221s (presentation I
>> gave
>> at
>> > OvS Con).
>> >
>> > I discuss in the preso that for LXC 2.1+, you now have the option to
>> > configure OpenvSwitch for LXC in two different ways.  You can configure
>> > it using, as you mentioned, the scripts (and this was the way we had to
>> > do it in LXC 1.0.x and 2.0.x).  This method has the advantage that VLANs
>> > can also be configured pretty easily in these scripts.
>> >
>> > lxc.net.0.script.up
>> > lxc.net.0.script.down
>> >
>> > Or, starting from 2.1.x you can also configure it directly in the LXC
>> > config using for example these parameters:
>> >
>> >   lxc.net.0.type = veth
>> >   lxc.net.0.link = ovsbr0
>> >   lxc.net.0.flags = up
>> >   lxc.net.0.name = eth0
>> >
>> > which is also discussed here:
>> > https://discuss.linuxcontainers.org/t/lxc-2-1-has-been-released/487
>> >
>> > 2.  Also, 

Re: [ovs-discuss] Openvswitch and LXC integration on Ubuntu 18.04

2018-05-12 Thread densha
Thanks for the response and links.  I will watch the OvS Con videos.

I have now successfully started the container, but unable to ping out or
into the container.

I have modified my /var/lib/vm1/conf to be

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br-int <- Name of my internal container bridge
lxc.net.0.flags = up
lxc.net.0.name=eth0
lxc.net.0.hwaddr = 00:16:3e:d2:23:a8 <- This was in the conf when
created.


When I start the container, I can see the port being added to the bridge on
the host system

# sudo lxc-start -n vm1
# sudo ovs-vsctl show
c3d9247e-68f1-4ae1-be0e-4bb86fd2c541
Bridge br-dmz
Port br-dmz
Interface br-dmz
type: internal
Bridge br-int
Port "veth4U4B0B"  <- New port added when
container starts
Interface "veth4U4B0B"
Port br-int
Interface br-int
type: internal
Port "enp2s0"
Interface "enp2s0"
ovs_version: "2.9.0"

The bridge br-int has self IP 192.168.10.1/24, which I added after the
reboot using

# sudo ip addr add 192.168.10.1/24 dev br-int

5: br-int:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 00:01:80:82:f8:59 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.1/24 scope global br-int
   valid_lft forever preferred_lft forever

and the new port

8: veth4U4B0B@if7:  mtu 1500 qdisc
noqueue master ovs-system state UP group default qlen 1000
link/ether fe:b8:87:1b:1e:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::fcb8:87ff:fe1b:1e5e/64 scope link
   valid_lft forever preferred_lft forever

Inside the container I set the IP of eth0 device using

ubuntu@vm1:~$ sudo ip addr add 192.168.10.2/24 dev eth0

ubuntu@vm1:~$ ip a
7: eth0@if8:  mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether 00:16:3e:d2:23:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.10.2/24 scope global eth0
   valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fed2:23a8/64 scope link
   valid_lft forever preferred_lft forever

However I still cannot ping the self IP of the bridge.

Is there anything obvious that I have configured wrong?
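
One thing visible in the host output above is that br-int is reported as
"state DOWN". As the 2018-05-14 follow-up at the top of this thread confirms,
an OVS internal interface still has to be brought up administratively before
its self IP answers pings, i.e. on the host:

sudo ip link set br-int up

after which the container should be able to ping 192.168.10.1.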

Thanks

Densha


> These materials might help:
>
> 1.  Presentation on running LXC on OpenvSwitch at OvS Con:
>
> https://www.youtube.com/watch?v=MXewSiDvQl4&t=221s (presentation I gave at
> OvS Con).
>
> I discuss in the preso that for LXC 2.1+, you now have the option to
> configure OpenvSwitch for LXC in two different ways.  You can configure it
> using, as you mentioned, the scripts (and this was the way we had to do it
> in LXC 1.0.x and 2.0.x).  This method has the advantage that VLANs can also
> be configured pretty easily in these scripts.
>
> lxc.net.0.script.up
> lxc.net.0.script.down
>
> Or, starting from 2.1.x you can also configure it directly in the LXC
> config using for example these parameters:
>
>   lxc.net.0.type = veth
>   lxc.net.0.link = ovsbr0
>   lxc.net.0.flags = up
>   lxc.net.0.name = eth0
>
> which is also discussed here:
> https://discuss.linuxcontainers.org/t/lxc-2-1-has-been-released/487
>
> 2.  Also, my Orabuntu-LXC software project is specifically designed for
> deploying an entire LXC VLAN-tagged infrastructure on OpenvSwitch with
> just
> a single command:
>
> https://github.com/gstanden/orabuntu-lxc
>
> See if these references above help you set it up, and if not, let me know.
>
> HTH, Gilbert
>
>
>
> On Sat, May 12, 2018 at 2:32 AM,  wrote:
>
>>
>> I am attempting to use LXC containers with OpenVswitch on Ubuntu 18.04
>> LTS
>> server.  However, I am unable to work out the syntax for the container
>> settings.  The container is failing to start because it is unable to create
>> the network.
>>
>> I did a vanilla install onto a media player with two NIC cards - enp1s0
>> and enp2s0.
>>
>> I installed, created, tested and then destroyed a container using lxc to
>> confirm that lxc was functioning correctly on the server.
>>
>> #sudo apt-get install lxc lxc-templates wget bridge-utils
>> #sudo lxc-checkconfig
>> #sudo lxc-create -n vm1 -t ubuntu
>> #sudo lxc-start -n vm1
>> #sudo lxc-console -n vm1
>> #sudo lxc-stop -n vm1
>> #sudo lxc-destroy -n vm1
>>
>> I then removed the lxc bridge lxcbr0 by setting USE_LXC_BRIDGE to false in
>> /etc/default/lxc-net, removed the lxcbr0 device, and rebooted.
>>
>> # sudo ip link set lxcbr0 down
>> # sudo brctl delbr lxcbr0
>>
>> I then installed openvswitch and created two bridges br-dmz (dmz
>> containers - 172.18.0.0/24) and br-int (internal containers -
>> 192.168.0.0/24).  I have added physical NI

[ovs-discuss] Openvswitch and LXC integration on Ubuntu 18.04

2018-05-12 Thread densha

I am attempting to use LXC containers with OpenVswitch on Ubuntu 18.04 LTS
server.  However, I am unable to work out the syntax for the container
settings.  The container is failing to start because it is unable to create
the network.

I did a vanilla install onto a media player with two NIC cards - enp1s0 and
enp2s0.

I installed, created, tested and then destroyed a container using lxc to
confirm that lxc was functioning correctly on the server.

#sudo apt-get install lxc lxc-templates wget bridge-utils
#sudo lxc-checkconfig
#sudo lxc-create -n vm1 -t ubuntu
#sudo lxc-start -n vm1
#sudo lxc-console -n vm1
#sudo lxc-stop -n vm1
#sudo lxc-destroy -n vm1

I then removed the lxc bridge lxcbr0 by setting USE_LXC_BRIDGE to false in
/etc/default/lxc-net, removed the lxcbr0 device, and rebooted.

# sudo ip link set lxcbr0 down
# sudo brctl delbr lxcbr0

I then installed openvswitch and created two bridges br-dmz (dmz
containers - 172.18.0.0/24) and br-int (internal containers -
192.168.0.0/24).  I have added physical NIC port enp2s0 to br-int as I
have a local WAP installed on that interface.

#sudo apt-get install openvswitch-switch
#sudo ovs-vsctl add-br br-dmz
#sudo ovs-vsctl add-br br-int
#sudo ovs-vsctl add-port br-int enp2s0

#sudo ip addr add 172.18.0.1/24 dev br-dmz
#sudo ip addr add 192.168.10.1/24 dev br-int

#sudo ovs-vsctl show
c3d9247e-68f1-4ae1-be0e-4bb86fd2c541
Bridge br-dmz
Port br-dmz
Interface br-dmz
type: internal
Bridge br-int
Port br-int
Interface br-int
type: internal
Port "enp2s0"
Interface "enp2s0"
ovs_version: "2.9.0"

#ip a

5: br-dmz:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 7e:86:2a:79:24:4e brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/24 scope global br-dmz
   valid_lft forever preferred_lft forever
6: br-int:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 00:01:80:82:f8:59 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.1/24 scope global br-int
   valid_lft forever preferred_lft forever


I created a LXC container VM1 and I would like to attach to br-int

sudo lxc-create -n vm1 -t ubuntu

Edit the VM's config: vi /var/lib/lxc/vm1/config

lxc.net.0.link = br-int<- from lxcbr0
lxc.net.0.script.up=/etc/lxc/ifup   <- added
lxc.net.0.script.down=/etc/lxc/ifdown   <- added

Created scripts to ifup / ifdown interface

vi /etc/lxc/ifup
#!/bin/bash
BRIDGE=br-int
ovs-vsctl --may-exist add-br $BRIDGE
ovs-vsctl --if-exists del-port $BRIDGE $5
ovs-vsctl --may-exist add-port $BRIDGE $5

vi /etc/lxc/ifdown
#!/bin/bash
ovsBr=br-int
ovs-vsctl --if-exists del-port ${ovsBr} $5

chmod +x /etc/lxc/if*

When I try to start the container using openvswitch I get the following
error.

sudo lxc-start -n vm1 --logfile log.txt

lxc-start vm1 20180512072653.582 ERROR lxc_conf - conf.c:run_buffer:347
- Script exited with status 1
lxc-start vm1 20180512072653.610 ERROR lxc_network -
network.c:lxc_create_network_priv:2436 - Failed to create network device
lxc-start vm1 20180512072653.610 ERROR lxc_start -
start.c:lxc_spawn:1545 - Failed to create the network
lxc-start vm1 20180512072653.610 ERROR lxc_start -
start.c:__lxc_start:1866 - Failed to spawn container "vm1"
lxc-start vm1 20180512072653.610 ERROR lxc_container -
lxccontainer.c:wait_on_daemonized_start:824 - Received container state
"STOPPING" instead of "RUNNING"


Any idea what I have missed that is causing the container network not to be
created?
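
One way to see why the hook exits with status 1 is to make the up script log
its arguments and any ovs-vsctl errors (a hedged debugging variant of the
/etc/lxc/ifup script above; the log path is arbitrary):

#!/bin/bash
# temporary debugging version of /etc/lxc/ifup: capture the arguments LXC
# passes and any ovs-vsctl output so the cause of the failure is visible
exec >>/tmp/lxc-ovs-ifup.log 2>&1
echo "called with: $*"
BRIDGE=br-int
ovs-vsctl --may-exist add-br "$BRIDGE"
ovs-vsctl --if-exists del-port "$BRIDGE" "$5"
ovs-vsctl --may-exist add-port "$BRIDGE" "$5"

After another lxc-start attempt, /tmp/lxc-ovs-ifup.log shows which ovs-vsctl
call failed, or whether the script was reached at all.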

Thanks

Densha












[ovs-discuss] Ubuntu 16.04 Openvswitch bridge on bond

2017-12-09 Thread densha
Hi Forum,

I am attempting to set up a linux bond and create an openvswitch bridge that
uses this bond0 on Ubuntu 16.04.3, using a Cisco 2960 switch, with
openvswitch version 2.5.2.

On the Ubuntu server I have configured

# /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=4 miimon=100 lacp_rate=1

# /etc/network/interfaces
auto eno1
iface eno1 inet manual
  bond-master bond0

auto eno2
iface eno2 inet manual
  bond-master bond0

auto bond0
allow-br0 bond0
iface bond0 inet manual
  bond-slaves eno1 eno2
  ovs_bridge br0
  ovs_type OVSPort

auto br0
allow-ovs br0
iface br0 inet static
  address 192.168.0.8
  netmask 255.255.255.0
  gateway 192.168.0.1
  dns-nameservers 192.168.0.1
  ovs_type OVSBridge
  ovs_ports br0

On the Cisco switch I have my LAN gateway 192.168.0.1 connected to port G0/1
as an access port, and I created an EtherChannel on ports G0/7 and G0/8
connected to my Ubuntu server.  Everything is on VLAN 1 to keep things simple.

# show run
interface Port-channel1
 switchport mode access

interface GigabitEthernet0/7
 switchport mode access
 channel-protocol lacp
 channel-group 1 mode active
!
interface GigabitEthernet0/8
 switchport mode access
 channel-protocol lacp
 channel-group 1 mode active

From the switch I can see the LACP neighbours

Switch#show lacp neighbor
  LACP portAdmin  Oper   PortPort
Port  Flags   Priority  Dev ID  AgekeyKeyNumber 
State
Gi0/7 SA  255   a01d.48c7.7618  26s0x00x90x2 0x3D
Gi0/8 SA  255   a01d.48c7.7618  25s0x00x90x1 0x3D

However, I am unable to get my Ubuntu server to ping any devices on my
local network.  From my switch I can ping other devices on my network
apart from the Ubuntu server.

Could someone explain to me what I have missed with regards to this setup?
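
Two things that may be worth checking, offered as hedged suggestions rather
than a diagnosis: the openvswitch ifupdown examples list the bridge's member
ports in ovs_ports (here that would be bond0, not br0 itself), and the bond
can alternatively be created inside Open vSwitch instead of as a kernel
bond0, for example:

sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-bond br0 bond0 eno1 eno2 lacp=active
sudo ovs-vsctl set port bond0 bond_mode=balance-tcp
sudo ovs-vsctl set port bond0 other_config:lacp-time=fast
sudo ovs-appctl bond/show bond0     # check LACP negotiation state

This replaces the bonding-module/ifenslave configuration rather than reusing
the existing bond0 device, and keeps LACP negotiation inside OVS instead of
layering an OVS bridge on top of a kernel bond.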

Thanks

Densha



[ovs-discuss] Replace Linux Bridge with Openvswitch on Ubuntu

2017-11-13 Thread densha
Hi Forum

I would like to replace the linux bridge installed with libvirt with an
openvswitch bridge, so I can learn about openvswitch.

I cannot get the VM to communicate with the bridge I create with openvswitch.

Here is what I did on a vanilla Ubuntu 17.10 Server x86_64 installation,
installing libvirt and openvswitch.

PACKAGES="qemu-kvm openvswitch-switch libvirt-bin virtinst virt-manager"
sudo apt-get update
sudo apt-get dist-upgrade -qy

sudo apt-get install -qy ${PACKAGES}

sudo adduser `id -un` libvirtd
sudo adduser `id -un` kvm

I deleted the current linux bridge using

virsh net-destroy default
virsh net-undefine default
systemctl restart libvirtd

Created a openvswitch bridge and assigned a IP address.

ovs-vsctl add-br virbr0
ip addr add 192.168.122.1/24 dev virbr0

I can see the bridge and ip address.

5: virbr0:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 26:68:17:50:25:40 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 scope global virbr0
   valid_lft forever preferred_lft forever
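
Note that virbr0 shows "state DOWN" here; like any interface, the OVS
internal port has to be brought up before its address is reachable:

sudo ip link set virbr0 up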

I create a VM using virt-install

sudo qemu-img create -f qcow2 -o preallocation=metadata
/var/lib/libvirt/images/kvm01.qcow2 10G

sudo virt-install -n kvm01 \
--connect qemu:///system \
--vcpus=2 \
 -r 4096 \
--os-type linux \
--os-variant ubuntu16.04 \
--network=bridge:virbr0,virtualport_type='openvswitch' \
--vnc --noautoconsole \
--keymap=en-us \
--console pty,target_type=serial \
-f /var/lib/libvirt/images/kvm01.qcow2 \
--location /var/lib/libvirt/images/ubuntu-16.04.3-server-amd64.iso \
get_hostname=kvm01 vga=788"


When the VM boots I assign a static IP address of 192.168.122.2/24.
Inside the VM the NIC reports as

2: ens3:  mtu 1500 qdisc pfifo_fast qlen
1000
link/ether 52:54:00:a5:53:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.2/24 brd 192.168.122.255 scope global ens3
   valid_lft forever preferred_lft forever
inet6 fe80::5254:ff:faa5:5336/64 scope link
   valid_lft forever preferred_lft forever


When I virsh dumpxml on the VM domain I see the network setup as

[interface XML stripped by the list archive; it showed a bridge-type
interface with source bridge 'virbr0', target device 'vnet0' and
virtualport type 'openvswitch']

From the host machine openvswitch reports the vnet0 port on the bridge:

sudo ovs-vsctl show
30fdaeff-9867-4651-85a6-5b0bf53f5130
Bridge "virbr0"
Port "vnet0"
Interface "vnet0"
Port "virbr0"
Interface "virbr0"
type: internal
ovs_version: "2.8.0"

From the VM I am unable to ping the IP of the bridge.
#ping 192.168.122.1

From the host I am also unable to ping the VM
#ping 192.168.122.2

I am at the limit of my understanding of linux networking; could
someone point out what I have done wrong here?
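
For reference, libvirt can also be pointed at an existing Open vSwitch bridge
through a network definition (a hedged sketch; the network name ovs-virbr0 is
hypothetical):

cat > ovs-virbr0.xml <<'EOF'
<network>
  <name>ovs-virbr0</name>
  <forward mode='bridge'/>
  <bridge name='virbr0'/>
  <virtualport type='openvswitch'/>
</network>
EOF
virsh net-define ovs-virbr0.xml
virsh net-start ovs-virbr0
virsh net-autostart ovs-virbr0

The guest can then use --network network=ovs-virbr0 in virt-install instead
of --network=bridge:virbr0,virtualport_type='openvswitch'.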

Thanks

Densha

