Re: [ovs-discuss] OVS DPDK performance with SELECT group

2019-11-26 Thread Gregory Rose


On 11/26/2019 7:41 AM, Rami Neiman wrote:


Hello,

I am using OVS DPDK 2.9.2 with the TRex traffic generator to simply 
forward the received traffic back to the traffic generator (i.e. 
ingress0->egress0, egress0->ingress0) over a 2-port 10G NIC.


The OVS throughput with this setup matches the traffic generator (all 
packets sent by the TG are received), and around 2.5 Mpps of traffic is 
forwarded without problems (we can probably go even higher, so that is 
not a limit).


Our next goal is to also mirror the TG traffic over two additional 
10G ports to a monitoring device, and we use a SELECT group to 
load-balance the mirrored traffic. We add the group as follows:




Putting all that on a single NIC might be overwhelming the PCIe 
bandwidth.  Something to check.


- Greg
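
For reference, a quick way to check the PCIe link the NIC actually
negotiated (a sketch; 0000:03:00.0 is a made-up PCI address, substitute
the one shown by "lspci | grep Ethernet" for your card):

    # Show the maximum and the currently negotiated PCIe speed/width.
    lspci -s 0000:03:00.0 -vv | grep -E 'LnkCap|LnkSta'

As rough arithmetic, four 10G ports at line rate need about 40 Gbit/s in
each direction; a PCIe 2.0 x8 link gives roughly 32 Gbit/s usable, while
PCIe 3.0 x8 gives roughly 63 Gbit/s, so the negotiated speed and width
matter.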

ovs-ofctl -O OpenFlow13 add-group br0 
group_id=5,type=select,bucket=output:mirror0,bucket=output:mirror1


ovs-ofctl -O OpenFlow13 add-flow br0 "table=5, 
metadata=0,in_port=egress0,actions=group:5,output:ingress0"


ovs-ofctl -O OpenFlow13 add-flow br0 "table=5, 
metadata=0,in_port=ingress0,actions=group:5,output:egress0"


mirror0 and mirror1 are our mirror ports. The mirroring works as 
expected; however, the OVS throughput drops to less than 500 Kpps (as 
reported by the traffic generator).


The ingress0 and egress0 ports (i.e. the ports that receive traffic) 
show packets being dropped in large numbers. Adding more PMD cores and 
distributing Rx queues among them has no effect. Changing the hash 
fields of the SELECT group has no effect either.


My question is: is there a way to give more cores/memory or otherwise 
influence the hash calculation and the SELECT group action to make it 
more performant? Less than 500 Kpps seems like a very low number.


Just in case, here’s the output of the most important statistics commands:

ovs-vsctl --column statistics list interface egress0

statistics : {flow_director_filter_add_errors=0, 
flow_director_filter_remove_errors=0, mac_local_errors=17, 
mac_remote_errors=1, "rx_128_to_255_packets"=3936120, 
"rx_1_to_64_packets"=14561687, "rx_256_to_511_packets"=1624884, 
"rx_512_to_1023_packets"=2180436, "rx_65_to_127_packets"=21519189, 
rx_broadcast_packets=17, rx_bytes=23487692367, rx_crc_errors=0, 
rx_dropped=23759559, rx_errors=0, rx_fcoe_crc_errors=0, 
rx_fcoe_dropped=0, rx_fcoe_mbuf_allocation_errors=0, 
rx_fragment_errors=0, rx_illegal_byte_errors=0, rx_jabber_errors=0, 
rx_length_errors=0, rx_mac_short_packet_dropped=0, 
rx_management_dropped=0, rx_management_packets=0, 
rx_mbuf_allocation_errors=0, rx_missed_errors=23759559, 
rx_oversize_errors=0, rx_packets=39363905, 
"rx_priority0_dropped"=23759559, 
"rx_priority0_mbuf_allocation_errors"=0, "rx_priority1_dropped"=0, 
"rx_priority1_mbuf_allocation_errors"=0, "rx_priority2_dropped"=0, 
"rx_priority2_mbuf_allocation_errors"=0, "rx_priority3_dropped"=0, 
"rx_priority3_mbuf_allocation_errors"=0, "rx_priority4_dropped"=0, 
"rx_priority4_mbuf_allocation_errors"=0, "rx_priority5_dropped"=0, 
"rx_priority5_mbuf_allocation_errors"=0, "rx_priority6_dropped"=0, 
"rx_priority6_mbuf_allocation_errors"=0, "rx_priority7_dropped"=0, 
"rx_priority7_mbuf_allocation_errors"=0, rx_undersize_errors=0, 
"tx_128_to_255_packets"=1549647, "tx_1_to_64_packets"=10995089, 
"tx_256_to_511_packets"=7309468, "tx_512_to_1023_packets"=739062, 
"tx_65_to_127_packets"=7837579, tx_broadcast_packets=6, 
tx_bytes=28481732482, tx_dropped=0, tx_errors=0, 
tx_management_packets=0, tx_multicast_packets=0, tx_packets=43936201}


ovs-vsctl --column statistics list interface ingress0

statistics : {flow_director_filter_add_errors=0, 
flow_director_filter_remove_errors=0, mac_local_errors=37, 
mac_remote_errors=1, "rx_128_to_255_packets"=2778420, 
"rx_1_to_64_packets"=18198197, "rx_256_to_511_packets"=13168041, 
"rx_512_to_1023_packets"=886524, "rx_65_to_127_packets"=14853438, 
rx_broadcast_packets=17, rx_bytes=28481734408, rx_crc_errors=0, 
rx_dropped=22718779, rx_errors=0, rx_fcoe_crc_errors=0, 
rx_fcoe_dropped=0, rx_fcoe_mbuf_allocation_errors=0, 
rx_fragment_errors=0, rx_illegal_byte_errors=0, rx_jabber_errors=0, 
rx_length_errors=0, rx_mac_short_packet_dropped=0, 
rx_management_dropped=0, rx_management_packets=0, 
rx_mbuf_allocation_errors=0, rx_missed_errors=22718779, 
rx_oversize_errors=0, rx_packets=43936225, 
"rx_priority0_dropped"=22718779, 
"rx_priority0_mbuf_allocation_errors"=0, "rx_priority1_dropped"=0, 
"rx_priority1_mbuf_allocation_errors"=0, "rx_priority2_dropped"=0, 
"rx_priority2_mbuf_allocation_errors"=0, "rx_priority3_dropped"=0, 
"rx_priority3_mbuf_allocation_errors"=0, "rx_priority4_dropped"=0, 
"rx_priority4_mbuf_allocation_errors"=0, "rx_priority5_dropped"=0, 
"rx_priority5_mbuf_allocation_errors"=0, "rx_priority6_dropped"=0, 
"rx_priority6_mbuf_allocation_errors"=0, "rx_priority7_dropped"=0, 
"rx_priority7_mbuf_allocation_errors"=0, rx_undersize_errors=0, 
"tx_128_to_255_packets"=1793095, "tx_1_to_64_packets"=7027091, 
"tx_256_to_511_packets"=783763, "tx_512

Re: [ovs-discuss] OVS-IPsec - Network going down.

2019-11-26 Thread Ansis
On Tue, 26 Nov 2019 at 04:24, Rajak, Vishal  wrote:
>
> Hi,
>
>
>
> We are trying to bring up IPsec over vxlan between two nodes of openstack 
> cluster in our lab environment.

Since what you describe below does *much more* than set up a trivial
IPsec VXLAN tunnel:
1. Have you tried to set up the same thing with a plain VXLAN tunnel,
and did that succeed? I think you are jumping to the conclusion that
you have an IPsec problem here too early.
2. Have you tried to set up a simple IPsec VXLAN tunnel without the
OpenFlow controller, fail-mode=secure and patch ports, and did that
succeed? A simple IPsec setup really does not need those (see the
sketch below).
3. Also, did you check ovs-monitor-ipsec.log and the libreswan logs
for errors?
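
For reference, the plain-VXLAN variant of your tunnel needs nothing more
than something like this on each side (a sketch using the bridge and
addresses from your mail):

    # On the controller (10.2.2.1):
    ovs-vsctl add-port br-ex vxlan0 -- set interface vxlan0 \
        type=vxlan options:remote_ip=10.2.2.2
    # On the compute node (10.2.2.2):
    ovs-vsctl add-port br-ex vxlan0 -- set interface vxlan0 \
        type=vxlan options:remote_ip=10.2.2.1

If traffic does not pass over that, the problem is not IPsec.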

>
>
>
> Note: There are only 2 nodes in the cluster (compute and controller).
>
> Following are the steps followed to bring up OVS with IPsec:
>
> Link:  http://docs.openvswitch.org/en/latest/tutorials/ipsec/
>
> Commands used on Controller node: (IP – 10.2.2.1)
>
> a. dnf install python2-openvswitch libreswan \
>   "kernel-devel-uname-r == $(uname -r)"
>
> b. yum install python-openvswitch   - to install
> python-openvswitch-2.11.0-4.el7.x86_64, as it has support for IPsec.
>
> c. Download the openvswitch 2.11 rpms and put them on the server
> d. Install the openvswitch rpms on the server
>
>  ex- rpm -ivh openvswitch-ipsec-2.11.0-4.el7.x86_64.rpm  -- to install 
> ovs-ipsec rpm
>
> e. iptables -A INPUT -p esp -j ACCEPT
> f.  iptables -A INPUT -p udp --dport 500 -j ACCEPT

Do you have firewalld running? If so, then using iptables directly is
not the right way to accept this traffic; you have to use firewall-cmd.
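
For example, the firewalld equivalent of those two iptables rules would
be roughly the following (a sketch; add UDP 4500 as well if NAT
traversal is involved):

    firewall-cmd --permanent --add-rich-rule='rule protocol value="esp" accept'
    firewall-cmd --permanent --add-port=500/udp
    firewall-cmd --reload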

>
> g. cp -r /usr/share/openvswitch/ /usr/local/share/
I don't understand why you have to run the command above. Are you
mixing self-built packages with official packages that have different
path prefixes?


> h. systemctl start openvswitch-ipsec.service
>
> I. ovs-vsctl add-port br-ex ipsec_vxlan -- set interface ipsec_vxlan 
> type=vxlan options:remote_ip=10.2.2.2 options:psk=swordfish
>
>
>
> Commands used on compute node: (IP -10.2.2.2)
>
> Link:  https://devinpractice.com/2016/10/18/open-vswitch-introduction-part-1/
>
> a. ovs-vsctl add-br br-ex
>
> b. ip link set br-ex up
>
> c. ovs-vsctl add-port br-ex enp1s0f1
>
> d. ip addr del 10.2.2.2/24 dev enp1s0f1
>
> e. ip addr add 10.2.2.2/24 dev br-ex
>
> f. ip route add default via 10.2.2.254 dev br-ex
>
> g. Same step as done for controller node above for ipsec configuration.
>
>  After bringing up IPsec on the compute node, the connectivity for the whole
> 10.2.2.0 network went down.

Commands a-g don't have anything to do with IPsec. I can see how
network connectivity could go down there, at the moment you move the
IP address from enp1s0f1 to br-ex.
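
If that change is made over an SSH session, it is safer to chain the
steps so the address and the default route move in one shot (a sketch
with the addresses from your mail; run it from the console if possible):

    ip addr del 10.2.2.2/24 dev enp1s0f1 && \
        ip addr add 10.2.2.2/24 dev br-ex && \
        ip link set br-ex up && \
        ip route add default via 10.2.2.254 dev br-ex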

>
> Following are the steps followed to resolve the 10.2.2.0 network outage
> caused by the creation of the bridge on the compute node:
>
> Replicated the output of ovs-vsctl show on the compute node by comparing it
> with the output on the controller node, executing the following commands:
>
>  1. ovs-vsctl set-controller br-ex tcp:127.0.0.1:6633
>
>  2. ovs-vsctl -- set Bridge br-ex fail-mode=secure
>
> After running the above 2 commands, the network connectivity from the compute
> node to the outside network went down and the other servers came up.
>
> 3. ovs-vsctl add-port br-ex phy-br-ex -- set interface phy-br-ex type=patch
> options:peer=int-br-ex
>
>  4. ovs-vsctl add-port br-int int-br-ex -- set interface int-br-ex
> type=patch options:peer=phy-br-ex

Is phy-br-ex the physical bridge and int-br-ex the integration bridge?
If so, it seems odd that you are trying to connect them with a patch
port and also use IPsec (or, for that matter, even plain) tunneling.
>
> After running the above 2 commands as well, the compute node still could not
> reach the outside network.
>
> Compared the files in network-scripts on both the compute node and the
> controller node and found some differences.
>
> The compute node didn't have an ifcfg-br-ex file, so added one on the
> compute node. Made some changes in the ifcfg-enp1s0f1 file after comparing
> it with the same file present on the controller node.
>
>  d. Restarted the network service.
>
>  e. After restarting the network service, the changes which were made via
> ovs-vsctl were removed and only the bridge br-ex which was created on the
> physical interface remained.
>
>  f. The compute node started pinging the outer network as well.
>
>  g. Ran the command to establish the ipsec-vxlan tunnel.
>
>   ovs-vsctl add-port br-ex ipsec_vxlan -- set interface ipsec_vxlan 
> type=vxlan options:remote_ip=10.2.2.1 options:psk=swordfish
>
>  h. After the port was added for ipsec-vxlan, the network went down again.
>
>  I. Removed the ipsec-vxlan port.
>
>  j. Now the compute node has a bridge over the physical interface and it is
> pinging the outside network as well.
>
> k. Tried pinging from a VM on the compute node to a VM on the controller
> node. The ping didn't work.
>
> 1. Removed the VM from the compute node and tried creating ano

Re: [ovs-discuss] OVS virtual devices and network namespaces

2019-11-26 Thread Maksym Planeta
Thanks. I switched from GRE tunneling to VxLAN tunneling, which does not 
create these devices.


On 26/11/2019 19:26, Ben Pfaff wrote:

On Tue, Nov 26, 2019 at 09:23:58AM +0100, Maksym Planeta wrote:

I want to configure OVS to communicate between containers. I configure OVS
and see it creating some additional interfaces, like gre, or system. But
when I create a container with an isolated network namespace these devices
are still visible.

How do I make OVS devices invisible inside the container unless I explicitly
say so?

...

I would not expect gre0, gretap0, and erspan0 to be present.


These aren't OVS devices.  As I understand it, they're global system
devices that are always present in every namespace if their drivers are
loaded.  A Google search for gretap0 turns this up as the first hit, for
example: https://github.com/lxc/lxd/issues/3338



--
Regards,
Maksym Planeta


Re: [ovs-discuss] OVS virtual devices and network namespaces

2019-11-26 Thread Ben Pfaff
On Tue, Nov 26, 2019 at 09:23:58AM +0100, Maksym Planeta wrote:
> I want to configure OVS to communicate between containers. I configure OVS
> and see it creating some additional interfaces, like gre, or system. But
> when I create a container with an isolated network namespace these devices
> are still visible.
> 
> How do I make OVS devices invisible inside the container unless I explicitly
> say so?
...
> I would not expect gre0, gretap0, and erspan0 to be present.

These aren't OVS devices.  As I understand it, they're global system
devices that are always present in every namespace if their drivers are
loaded.  A Google search for gretap0 turns this up as the first hit, for
example: https://github.com/lxc/lxd/issues/3338
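
A quick way to confirm that (a sketch, assuming the GRE modules are
loaded on the host):

    # The fallback devices come from the GRE kernel modules:
    lsmod | grep -E '^(ip_gre|ip6_gre|gre)'
    # They also appear in a brand-new, empty namespace with no OVS involved:
    ip netns add demo
    ip netns exec demo ip -brief link show
    ip netns del demo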


[ovs-discuss] OVS DPDK performance with SELECT group

2019-11-26 Thread Rami Neiman
Hello,

I am using OVS DPDK 2.9.2 with the TRex traffic generator to simply forward the
received traffic back to the traffic generator (i.e. ingress0->egress0,
egress0->ingress0) over a 2-port 10G NIC.

The OVS throughput with this setup matches the traffic generator (all
packets sent by the TG are received), and around 2.5 Mpps of traffic is
forwarded without problems (we can probably go even higher, so that is not a
limit).

Our next goal is to also mirror the TG traffic over two additional 10G
ports to a monitoring device, and we use a SELECT group to load-balance the
mirrored traffic. We add the group as follows:



ovs-ofctl -O OpenFlow13 add-group br0
group_id=5,type=select,bucket=output:mirror0,bucket=output:mirror1



ovs-ofctl -O OpenFlow13 add-flow br0 "table=5,
metadata=0,in_port=egress0,actions=group:5,output:ingress0"

ovs-ofctl -O OpenFlow13 add-flow br0 "table=5,
metadata=0,in_port=ingress0,actions=group:5,output:egress0"



mirror0 and mirror1 are our mirror ports. The mirroring works as
expected; however, the OVS throughput drops to less than 500 Kpps (as
reported by the traffic generator).



The ingress0 and egress0 ports (i.e. the ports that receive traffic) show
packets being dropped in large numbers. Adding more PMD cores and distributing
Rx queues among them has no effect. Changing the hash fields of the SELECT
group has no effect either.
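
For reference, the PMD and Rx queue distribution was configured roughly
as follows (a sketch; the core mask and queue counts shown here are
illustrative, not our exact values):

    # Pin PMD threads to a set of dedicated cores (example mask = cores 2-5).
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3c
    # Give each DPDK port multiple Rx queues so they can be spread over PMDs.
    ovs-vsctl set Interface egress0  options:n_rxq=4
    ovs-vsctl set Interface ingress0 options:n_rxq=4
    # Verify the queue-to-PMD assignment and how busy each PMD is.
    ovs-appctl dpif-netdev/pmd-rxq-show
    ovs-appctl dpif-netdev/pmd-stats-show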

My question is: is there a way to give more cores/memory or otherwise
influence the hash calculation and the SELECT group action to make it more
performant? Less than 500 Kpps seems like a very low number.
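
One knob that may be worth experimenting with is the group's
selection_method property (check ovs-ofctl(8) for what your build
supports): selection_method=dp_hash asks OVS to pick the bucket based on
a datapath-computed hash. A sketch, assuming OpenFlow 1.5 needs to be
enabled on the bridge first:

    ovs-vsctl set bridge br0 protocols=OpenFlow13,OpenFlow15
    # Deleting a group also removes the flows that reference it, so re-add
    # the two table=5 flows afterwards.
    ovs-ofctl -O OpenFlow13 del-groups br0 group_id=5
    ovs-ofctl -O OpenFlow15 add-group br0 \
        'group_id=5,type=select,selection_method=dp_hash,bucket=output:mirror0,bucket=output:mirror1'

Note that the dp_hash handling in the userspace (DPDK) datapath was
reworked in releases after 2.9, so upgrading may turn out to be the more
effective fix; the above only shows the syntax to experiment with.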



Just in case, here’s the output of the most important statistics commands:



ovs-vsctl --column statistics list interface egress0

statistics  : {flow_director_filter_add_errors=0,
flow_director_filter_remove_errors=0, mac_local_errors=17,
mac_remote_errors=1, "rx_128_to_255_packets"=3936120,
"rx_1_to_64_packets"=14561687, "rx_256_to_511_packets"=1624884,
"rx_512_to_1023_packets"=2180436, "rx_65_to_127_packets"=21519189,
rx_broadcast_packets=17, rx_bytes=23487692367, rx_crc_errors=0,
rx_dropped=23759559, rx_errors=0, rx_fcoe_crc_errors=0, rx_fcoe_dropped=0,
rx_fcoe_mbuf_allocation_errors=0, rx_fragment_errors=0,
rx_illegal_byte_errors=0, rx_jabber_errors=0, rx_length_errors=0,
rx_mac_short_packet_dropped=0, rx_management_dropped=0,
rx_management_packets=0, rx_mbuf_allocation_errors=0,
rx_missed_errors=23759559, rx_oversize_errors=0, rx_packets=39363905,
"rx_priority0_dropped"=23759559, "rx_priority0_mbuf_allocation_errors"=0,
"rx_priority1_dropped"=0, "rx_priority1_mbuf_allocation_errors"=0,
"rx_priority2_dropped"=0, "rx_priority2_mbuf_allocation_errors"=0,
"rx_priority3_dropped"=0, "rx_priority3_mbuf_allocation_errors"=0,
"rx_priority4_dropped"=0, "rx_priority4_mbuf_allocation_errors"=0,
"rx_priority5_dropped"=0, "rx_priority5_mbuf_allocation_errors"=0,
"rx_priority6_dropped"=0, "rx_priority6_mbuf_allocation_errors"=0,
"rx_priority7_dropped"=0, "rx_priority7_mbuf_allocation_errors"=0,
rx_undersize_errors=0, "tx_128_to_255_packets"=1549647,
"tx_1_to_64_packets"=10995089, "tx_256_to_511_packets"=7309468,
"tx_512_to_1023_packets"=739062, "tx_65_to_127_packets"=7837579,
tx_broadcast_packets=6, tx_bytes=28481732482, tx_dropped=0, tx_errors=0,
tx_management_packets=0, tx_multicast_packets=0, tx_packets=43936201}



ovs-vsctl --column statistics list interface ingress0

statistics  : {flow_director_filter_add_errors=0,
flow_director_filter_remove_errors=0, mac_local_errors=37,
mac_remote_errors=1, "rx_128_to_255_packets"=2778420,
"rx_1_to_64_packets"=18198197, "rx_256_to_511_packets"=13168041,
"rx_512_to_1023_packets"=886524, "rx_65_to_127_packets"=14853438,
rx_broadcast_packets=17, rx_bytes=28481734408, rx_crc_errors=0,
rx_dropped=22718779, rx_errors=0, rx_fcoe_crc_errors=0, rx_fcoe_dropped=0,
rx_fcoe_mbuf_allocation_errors=0, rx_fragment_errors=0,
rx_illegal_byte_errors=0, rx_jabber_errors=0, rx_length_errors=0,
rx_mac_short_packet_dropped=0, rx_management_dropped=0,
rx_management_packets=0, rx_mbuf_allocation_errors=0,
rx_missed_errors=22718779, rx_oversize_errors=0, rx_packets=43936225,
"rx_priority0_dropped"=22718779, "rx_priority0_mbuf_allocation_errors"=0,
"rx_priority1_dropped"=0, "rx_priority1_mbuf_allocation_errors"=0,
"rx_priority2_dropped"=0, "rx_priority2_mbuf_allocation_errors"=0,
"rx_priority3_dropped"=0, "rx_priority3_mbuf_allocation_errors"=0,
"rx_priority4_dropped"=0, "rx_priority4_mbuf_allocation_errors"=0,
"rx_priority5_dropped"=0, "rx_priority5_mbuf_allocation_errors"=0,
"rx_priority6_dropped"=0, "rx_priority6_mbuf_allocation_errors"=0,
"rx_priority7_dropped"=0, "rx_priority7_mbuf_allocation_errors"=0,
rx_undersize_errors=0, "tx_128_to_255_packets"=1793095,
"tx_1_to_64_packets"=7027091, "tx_256_to_511_packets"=783763,
"tx_512_to_1023_packets"=1133960, "tx_65_to_127_packets"=14219400,
tx_broadcast_packets=6, tx_bytes=23487691707, tx_dropped=0, tx_errors=0,
tx_management_packets=0, tx_multicast_packets=0, tx_packets=393638

[ovs-discuss] OVS-IPsec - Network going down.

2019-11-26 Thread Rajak, Vishal
Hi,

We are trying to bring up IPsec over VXLAN between two nodes of an OpenStack 
cluster in our lab environment.

Note: There are only 2 nodes in the cluster (compute and controller).

Following are the steps followed to bring up OVS with IPsec:

Link:  http://docs.openvswitch.org/en/latest/tutorials/ipsec/

Commands used on Controller node: (IP - 10.2.2.1)

a. dnf install python2-openvswitch libreswan \
  "kernel-devel-uname-r == $(uname -r)"

b. yum install python-openvswitch   - to install 
python-openvswitch-2.11.0-4.el7.x86_64, as it has support for IPsec.

c. Download the openvswitch 2.11 rpms and put them on the server
d. Install the openvswitch rpms on the server

 ex- rpm -ivh openvswitch-ipsec-2.11.0-4.el7.x86_64.rpm  -- to install 
ovs-ipsec rpm

e. iptables -A INPUT -p esp -j ACCEPT
f.  iptables -A INPUT -p udp --dport 500 -j ACCEPT

g. cp -r /usr/share/openvswitch/ /usr/local/share/
h. systemctl start openvswitch-ipsec.service

I. ovs-vsctl add-port br-ex ipsec_vxlan -- set interface ipsec_vxlan type=vxlan 
options:remote_ip=10.2.2.2 options:psk=swordfish



Commands used on compute node: (IP -10.2.2.2)

Link:  https://devinpractice.com/2016/10/18/open-vswitch-introduction-part-1/

a. ovs-vsctl add-br br-ex

b. ip link set br-ex up

c. ovs-vsctl add-port br-ex enp1s0f1

d. ip addr del 10.2.2.2/24 dev enp1s0f1

e. ip addr add 10.2.2.2/24 dev br-ex

f. ip route add default via 10.2.2.254 dev br-ex

g. Same step as done for controller node above for ipsec configuration.

 After bringing up IPsec on the compute node, the connectivity for the whole 
10.2.2.0 network went down.

Following are the steps followed to resolve the 10.2.2.0 network outage caused 
by the creation of the bridge on the compute node:

  1.  Replicated the output of ovs-vsctl show on the compute node by comparing 
it with the output on the controller node, executing the following commands:

 1. ovs-vsctl set-controller br-ex tcp:127.0.0.1:6633

 2. ovs-vsctl -- set Bridge br-ex fail-mode=secure

After running the above 2 commands, the network connectivity from the compute 
node to the outside network went down and the other servers came up.

3. ovs-vsctl add-port br-ex phy-br-ex -- set interface phy-br-ex type=patch 
options:peer=int-br-ex

 4. ovs-vsctl add-port br-int int-br-ex -- set interface int-br-ex type=patch 
options:peer=phy-br-ex

After running the above 2 commands as well, the compute node still could not 
reach the outside network.

  1.  Compared the files in network-scripts on both the compute node and the 
controller node and found some differences.

The compute node didn't have an ifcfg-br-ex file, so added one on the compute 
node. Made some changes in the ifcfg-enp1s0f1 file after comparing it with the 
same file present on the controller node.

 d. Restarted the network service.

 e. After restarting the network service, the changes which were made via 
ovs-vsctl were removed and only the bridge br-ex which was created on the 
physical interface remained.

 f. The compute node started pinging the outer network as well.

 g. Ran the command to establish the ipsec-vxlan tunnel.

  ovs-vsctl add-port br-ex ipsec_vxlan -- set interface ipsec_vxlan 
type=vxlan options:remote_ip=10.2.2.1 options:psk=swordfish

 h. After the port was added for ipsec-vxlan, the network went down again.

 I. Removed the ipsec-vxlan port.

 j. Now the compute node has a bridge over the physical interface and it is 
pinging the outside network as well.

k. Tried pinging from a VM on the compute node to a VM on the controller node. 
The ping didn't work.

   1. Removed the VM from the compute node and tried creating another 
instance. Creation of another instance failed.

   2. Debugged the issue and found out that neutron-openvswitch-agent was not 
running.

   3. Started neutron-openvswitch-agent again. After starting 
neutron-openvswitch-agent, the creation of the VM was successful.

   4. Still the VMs are not pinging.

l. Compared /etc/neutron/plugins/ml2/openvswitch_agent.ini on both the 
controller and compute node and found some differences. After resolving those 
differences and restarting neutron-openvswitch-agent, the phy-br-ex port, the 
controller tcp:127.0.0.1:6633 and fail-mode=secure were automatically added to 
Bridge br-ex.



Still the VMs are not pinging: IPsec is not established.
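
The next things to check, per the OVS IPsec tutorial's troubleshooting 
section (a sketch; log locations may differ per distribution):

    # Tunnel state as seen by ovs-monitor-ipsec:
    ovs-appctl -t ovs-monitor-ipsec tunnels/show
    # Security associations as seen by libreswan:
    ipsec status
    ipsec whack --trafficstatus
    # Logs:
    tail /var/log/openvswitch/ovs-monitor-ipsec.log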

Regards,
Vishal.


[ovs-discuss] OVS virtual devices and network namespaces

2019-11-26 Thread Maksym Planeta

Hello,

I want to configure OVS to communicate between containers. I configure 
OVS and see it creating some additional interfaces, like gre, or system. 
But when I create a container with an isolated network namespace these 
devices are still visible.


How do I make OVS devices invisible inside the container unless I 
explicitly say so?


Here is what I have:

# sudo ovs-vsctl show
a3a830a0-0634-4ee3-9424-ad4efc709dc1
Bridge "ovsbr0"
Port "ovsbr0"
Interface "ovsbr0"
type: internal
Port "ovsgre0"
Interface "ovsgre0"
type: gre
options: {remote_ip="192.168.1.130"}
ovs_version: "2.11.2"

ip a outside the container (some devices are omitted for brevity):

...
3: docker_gwbridge:  mtu 1500 qdisc 
noqueue state UP group default

link/ether 02:42:50:d2:d7:25 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 brd 172.19.255.255 scope global docker_gwbridge
   valid_lft forever preferred_lft forever
inet6 fe80::42:50ff:fed2:d725/64 scope link
   valid_lft forever preferred_lft forever
4: docker0:  mtu 1500 qdisc noqueue 
state UP group default

link/ether 02:42:d7:49:21:2b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
   valid_lft forever preferred_lft forever
inet6 fe80::42:d7ff:fe49:212b/64 scope link
   valid_lft forever preferred_lft forever
...
17: gre0@NONE:  mtu 1476 qdisc noop state DOWN group default qlen 
1000

link/gre 0.0.0.0 brd 0.0.0.0
18: gretap0@NONE:  mtu 1462 qdisc noop state DOWN 
group default qlen 1000

link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
19: erspan0@NONE:  mtu 1450 qdisc noop state DOWN 
group default qlen 1000

link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
...
30: ovs-system:  mtu 1500 qdisc noop state DOWN 
group default qlen 1000

link/ether 0a:72:e7:17:43:71 brd ff:ff:ff:ff:ff:ff
31: ovsbr0:  mtu 1500 qdisc noop state DOWN group 
default qlen 1000

link/ether 4e:94:c0:62:75:4e brd ff:ff:ff:ff:ff:ff
32: gre_sys@NONE:  mtu 65000 qdisc 
pfifo_fast master ovs-system state UNKNOWN group default qlen 1000

link/ether b2:8a:d6:e9:fa:67 brd ff:ff:ff:ff:ff:ff
inet6 fe80::6ca9:39ff:fecd:927a/64 scope link
   valid_lft forever preferred_lft forever

And here is the same from inside the container:
sudo docker run --rm -it --name test alpine ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 
1000

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1000
link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN 
qlen 1000

link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
4: erspan0@NONE:  mtu 1450 qdisc noop state DOWN 
qlen 1000

link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
41: eth0@if42:  mtu 1500 qdisc 
noqueue state UP

link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
   valid_lft forever preferred_lft forever

I would not expect gre0, gretap0, and erspan0 to be present.


--
Regards,
Maksym Planeta