Great, thanks for the update and quick response!

On Thu, 21 Apr, 2022, 5:41 pm Aaron Conole, <[email protected]> wrote:

> Eelco Chaudron <[email protected]> writes:
>
> > I guess this is a known issue; please check out this patch:
> >
> > https://patchwork.ozlabs.org/project/openvswitch/patch/[email protected]/
> >
> > Aaron, any update on the v5 patch?
>
> Yes, I'm respinning it now and will post by next week.
>
> > //Eelco
> >
> >
> > On 20 Apr 2022, at 16:21, Reshma Sreekumar wrote:
> >
> >> Hi All,
> >>
> >> We encountered a dramatic drop of bandwidth on some OVS ports, and
> >> discovered that it was caused by an overly broad slow_path flow that
> >> does not disappear as long as there is any traffic on the port. The
> >> flow looks like this:
> >>
> >> recirc_id(0),in_port(3),eth(),eth_type(0x0800),ipv4(frag=no),
> >> packets:1, bytes:54, used:0.928s,
> >> actions:userspace(pid=4294967295,slow_path(match))
> >>
> >> Note that it intercepts _all_ IP traffic from a port.
> >>
> >> It looks like this situation is triggered when an IGMP packet arrives
> >> on the port in the presence of user-installed OpenFlow rules that
> >> prevent it from being processed by the flow "priority=0 table=0
> >> actions=NORMAL" that is created in every bridge by default.
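
A quick way to confirm which OpenFlow rule the IGMP packet actually hits
is an ofproto/trace run on the bridge. The following is only a sketch,
assuming the bridge and port names from the reproduction script below; if
your OVS version does not accept a port name for in_port, substitute the
OpenFlow port number:

```
# Trace a synthetic IGMP packet (ip with nw_proto=2) arriving on veth0;
# the output shows which rule matches and the resulting datapath actions,
# including any slow_path userspace action.
ovs-appctl ofproto/trace ovsbr "in_port=veth0,ip,nw_proto=2"
```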
> >>
> >> The following script reproduces this situation by installing a flow
> >> that redirects traffic, but any other user-defined flow will do, as
> >> long as it diverts or drops the packet before it can reach the default
> >> flow.
> >>
> >> ```
> >> #!/usr/bin/sudo /bin/sh
> >>
> >> ip link del veth0 2>/dev/null
> >> ip link del lnxbr 2>/dev/null
> >> ovs-vsctl del-br ovsbr 2>/dev/null
> >>
> >> # cleanup done
> >>
> >> ovs-vsctl add-br ovsbr
> >> ip link add lnxbr type bridge
> >>
> >> ip link add veth0 type veth peer veth1
> >> ip link set veth1 master lnxbr
> >> ovs-vsctl add-port ovsbr veth0
> >>
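> >> # Look up veth0's port number in the datapath so it can be used as the
> >> # output port of the redirect flow installed below.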
> >> ofport=`ovs-dpctl show | awk '/veth0/{gsub(":","",$2); print $2}'`
> >> ovs-ofctl add-flow ovsbr cookie=0x1,priority=10,actions=${ofport}
> >> ip link set veth0 up
> >> ip link set veth1 up
> >> ip link set lnxbr up  # This triggers some multicast packets
> >>
> >> sleep 1  # Give OVS time to install kernel flows
> >> echo ==== userspace flows:
> >> ovs-ofctl dump-flows ovsbr
> >> echo ==== kernel flows:
> >> ovs-appctl dpctl/dump-flows
> >> ```
> >>
> >> (In the above config, one end of a veth pair is enslaved to the OVS
> >> bridge while the other is part of a Linux bridge; this makes it easy
> >> to generate IGMP traffic.)
> >>
> >> After running this, there is a kernel flow similar to the one shown
> >> above. The exact kernel flow depends on the contents of the
> >> user-defined flow; if the user-defined flow has an "output" or
> >> "resubmit" action, the kernel flow will be slow_path.
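
For anyone reproducing this, a verbose datapath dump makes it easier to
see how broad the installed match really is (for example, the empty eth()
match above); a minimal sketch, assuming the default datapath:

```
# -m / --more prints the full match, mask, and UFID of each datapath flow,
# which makes over-wildcarded fields easy to spot.
ovs-appctl dpctl/dump-flows -m
```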
> >>
> >> This looks like a problem in the way OVS handles IGMP traffic. Is
> >> there some way to mitigate it?
> >>
> >> Something worth noting: if the IGMP traffic hits the rule with
> >> "actions=NORMAL" (in the absence of any other matching rule of higher
> >> priority), then the kernel flow is installed with the filter
> >> eth(src=xx:xx:xx:xx:xx:xx,dst=01:00:5e:00:00:16), which looks like the
> >> right approach.
> >>
> >> It would be great if this could be fixed to behave the same way even
> >> in the presence of other user-defined OpenFlow rules matching the
> >> traffic, or if you could share a workaround. Thanks in advance!
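
One possible mitigation, offered purely as an untested sketch: give IGMP
its own higher-priority rule so the slow-path handling stays confined to
IGMP instead of widening to all IP traffic from the port. The cookie and
priority values below are arbitrary examples:

```
# Untested workaround sketch: handle IGMP (IP protocol 2) at a higher
# priority than the generic redirect rule, so the datapath flow that gets
# installed matches only IGMP instead of all IPv4 traffic on the port.
ovs-ofctl add-flow ovsbr "cookie=0x2,priority=20,ip,nw_proto=2,actions=NORMAL"
```

Whether NORMAL, a drop, or an explicit output is the right action for IGMP
here depends on what the deployment expects, so please treat this only as
a starting point.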
> >>
> >> Thanks,
> >> Reshma
>
>
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
