On 9/18/2019 5:22 AM, Anish M wrote:
Hello,
I'm trying to forward/mirror VM packets between two OVS-DPDK compute nodes. With the help of http://docs.openvswitch.org/en/latest/howto/userspace-tunneling/ I was able to set up an additional NIC for forwarding packets from the source OVS-DPDK compute node to the destination compute node. I can see all the VXLAN-forwarded packets at the destination compute node's additional NIC, but the same packets are not visible at the OVS bridge to which that additional NIC is attached.

Hi, can you please clarify the sentence above? Do you mean that you can see packets arriving at the NIC, but they are not seen passing through the bridge?


At both compute nodes I have the same type of 2-port 10G NIC: ens1f0 and ens1f1.

ens1f0 acts as the DPDK port, and I am using it inside OpenStack for DPDK VMs on both compute nodes.

Just to clarify: ens1f0 is a DPDK port type on each node and ens1f1 is not? I.e. is ens1f1 a Linux netdev device?


To mirror DPDK VM traffic from one compute node to the other, I followed the userspace-tunneling guide above and was able to forward VM traffic from the source compute node towards ens1f1 (172.28.41.101) on the destination compute node.

Can you list the mirroring commands you used to set this up?
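For reference, a typical OVS port-mirror towards a tunnel looks something like the following sketch. The bridge name br-int and the output port vxlan0 here are placeholders for illustration, not taken from your setup; only the remote IP is from this thread:

```shell
# Create a VXLAN tunnel port towards the remote compute node
# (br-int and vxlan0 are assumed names, not from the thread)
ovs-vsctl add-port br-int vxlan0 -- \
    set interface vxlan0 type=vxlan options:remote_ip=172.28.41.101

# Mirror all traffic seen on br-int to that tunnel port
ovs-vsctl -- --id=@p get port vxlan0 \
          -- --id=@m create mirror name=m0 select-all=true output-port=@p \
          -- set bridge br-int mirrors=@m
```

If your commands differ from this shape (e.g. select-src-port/select-dst-port instead of select-all), that detail would be useful to know.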


ovs-vsctl --may-exist add-br br-phy \
     -- set Bridge br-phy datapath_type=netdev \
     -- br-set-external-id br-phy bridge-id br-phy \
     -- set bridge br-phy fail-mode=standalone \
          other_config:hwaddr=48:df:37:7e:c2:08

ovs-vsctl --timeout 10 add-port br-phy ens1f1
ip addr add 172.28.41.101/24 dev br-phy
ip link set br-phy up
ip addr flush dev ens1f1 2>/dev/null
ip link set ens1f1 up

Even though I am receiving all the mirrored VXLAN packets at the ens1f1 port (checked using tcpdump), the same number of packets is not visible inside the OVS br-phy bridge (only ~10% of the mirrored traffic shows up in br-phy).

Just to be aware: low volumes of traffic are fine for mirroring, but at high volumes you would not expect to see the same amount of traffic mirrored, due to the overhead associated with mirroring. In this case are you sending high volumes of traffic? Have you tested with smaller bursts of traffic?
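One way to compare against a small burst, sketched below: send a fixed, rate-limited burst and compare the NIC's rx counter with the bridge counters before and after. iperf3 on the VMs is an assumption here, not something mentioned in the thread; the interface and bridge names are from your commands above:

```shell
# On the source VM: send a small, rate-limited UDP burst
# (iperf3 assumed available; <dest-vm-ip> is a placeholder)
iperf3 -c <dest-vm-ip> -u -b 10M -t 5

# On the destination compute node: compare NIC rx counters
# with what the bridge ports report
ovs-vsctl get Interface ens1f1 statistics:rx_packets
ovs-ofctl dump-ports br-phy
```

If the two counts match at low rates but diverge as the rate rises, that points at mirroring/forwarding overhead rather than a configuration problem.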


[root@overcloud-hpcomputeovsdpdk-0 ~]# ovs-ofctl dump-flows br-phy
 cookie=0x3435, duration=46778.969s, table=0, n_packets=30807, n_bytes=3889444, priority=0 actions=NORMAL

[root@overcloud-hpcomputeovsdpdk-0 ~]# ovs-ofctl dump-ports br-phy
OFPST_PORT reply (xid=0x2): 3 ports
  port LOCAL: rx pkts=77, bytes=5710, drop=0, errs=0, frame=0, over=0, crc=0
            tx pkts=30107, bytes=3604094, drop=28077, errs=0, coll=0
  port  ens1f1: rx pkts=4385104, bytes=596066321, drop=0, errs=0, frame=0, over=0, crc=0
            tx pkts=33606, bytes=4021526, drop=0, errs=0, coll=0
  port  "patch-tap-bint": rx pkts=30056, bytes=3591859, drop=?, errs=?, frame=?, over=?, crc=?
            tx pkts=739, bytes=294432, drop=?, errs=?, coll=?

On ens1f1 I can see a lot of rx packets, but the br-phy flow shows only a few packets (and the dump-ports output above also shows ~28k tx drops on the LOCAL port).

Please provide any advice on how I can mirror/forward packets between two OVS-DPDK compute nodes.

A diagram of your setup would be useful to help debug/understand the use case. I'm slightly confused with the mix of DPDK/non-DPDK ports you have vs what you want to achieve with mirroring. If you could provide more info on this as well as the expected flow of a packet as it is mirrored/forwarded it would be helpful.
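In the meantime, a few standard OVS commands can help narrow down where packets are being dropped (a sketch; the interface and bridge names are taken from your output above):

```shell
# Full per-interface statistics, including drop/error counters
ovs-vsctl get Interface ens1f1 statistics

# Flows actually installed in the userspace datapath
ovs-appctl dpctl/dump-flows

# OVS coverage counters, useful for spotting drop events
ovs-appctl coverage/show
```

Comparing the datapath flow stats with the OpenFlow flow stats from dump-flows should show whether the VXLAN packets are being handled by a different flow (or dropped) before reaching the NORMAL rule.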

Thanks
Ian

Best Regards,
Anish

_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
