On Wed, Oct 25, 2023 at 1:18 PM Ilya Maximets <[email protected]> wrote:
> On 10/25/23 12:09, David Marchand wrote:
> >>> 2023-10-23T15:02:13.756Z|00082|bridge|INFO|bridge br10: deleted interface 
> >>> dpdkvhostuserclient0 on port 1
> >>> 2023-10-23T15:02:13.756Z|00083|dpif_netdev|INFO|PMD thread on numa_id: 1, 
> >>> core id: 88 destroyed.
> >>> 2023-10-23T15:02:13.772Z|00002|dpdk(pmd-c88/id:103)|INFO|PMD thread 
> >>> released DPDK lcore 2.
> >>> 2023-10-23T15:02:13.778Z|00084|dpif_netdev|INFO|PMD thread on numa_id: 0, 
> >>> core id: 21 destroyed.
> >>> 2023-10-23T15:02:13.778Z|00002|ofproto_dpif_xlate(pmd-c21/id:102)|WARN|received
> >>>  packet on unknown port 1 on bridge br10 while processing 
> >>> icmp6,in_port=1,vlan_tci=0x0000,dl_src=ca:76:e9:ff:a2:09,dl_dst=33:33:00:00:00:02,ipv6_src=fe80::c876:e9ff:feff:a209,ipv6_dst=ff02::2,ipv6_label=0x00000,nw_tos=0,nw_ecn=0,nw_ttl=255,nw_frag=no,icmp_type=133,icmp_code=0
> >>> 2023-10-23T15:02:13.791Z|00003|dpdk(pmd-c21/id:102)|INFO|PMD thread 
> >>> released DPDK lcore 1.
> >>> 2023-10-23T15:02:13.801Z|00085|dpdk|INFO|VHOST_CONFIG: 
> >>> (/root/ovs-dev/tests/system-dpdk-testsuite.dir/017/dpdkvhostclient0) free 
> >>> connfd 95
> >>> 2023-10-23T15:02:13.801Z|00086|netdev_dpdk|INFO|vHost Device 
> >>> '/root/ovs-dev/tests/system-dpdk-testsuite.dir/017/dpdkvhostclient0' not 
> >>> found
> >
> > I am a bit puzzled at this report.
> > It is similar to
> > https://mail.openvswitch.org/pipermail/ovs-dev/2022-July/396325.html.
> >
> > I understand this shows a race in OVS cleaning up sequence, with some
> > packet (triggering an upcall) received by a pmd on a port that is not
> > referenced in the ofproto bridge anymore.
> > Why did it show up again? This is probably due to my patch 7 in the v7
> > series, which lets testpmd send packets while the vhu port is being deleted.
> >
> > The easiest (laziest?) option for me is probably to drop this patch 7 and
> > instead waive warnings about a vhu socket reconnection...
>
> The packets are coming from the kernel interface on the other side
> of testpmd, right?  In that case, can we just bring that interface
> down before removing OVS port to prevent random ipv6 traffic from
> flowing around?  Another similar option might be to set admin state
> DOWN on the OVS side for the vhost-user port.

Putting the tap iface down should do the job, yes.
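For the test, something along these lines should quiesce both sides before
the port is deleted (a sketch only; `tap0` is a placeholder for the tap
iface name used in the test, and the bridge/port names are taken from the
log above):

```shell
# Bring the kernel-side tap down first, so the kernel stops emitting
# spontaneous IPv6 traffic (router/neighbor solicitations) into testpmd.
ip link set tap0 down

# Optionally also set the port administratively down on the OVS side,
# as suggested above, before removing it.
ovs-ofctl mod-port br10 dpdkvhostuserclient0 down

# Only then delete the port, so no packet can race the teardown.
ovs-vsctl del-port br10 dpdkvhostuserclient0
```

Either of the first two steps alone may be enough to avoid the
"unknown port" warning; doing both just closes the window from both ends.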

But now I wonder why we need such a setup with testpmd + a tap in the
MTU unit tests: no packet is actively injected by the unit tests
themselves.

I get that testpmd will make sure that the vhost-user client port is
running in a "nominal" situation when changing the MTU, so it's OK to
keep it.
But can we remove those tap ifaces from testpmd (for those MTU tests)?


>
> > But I find it strange that there is a window in which OVS pmd threads
> > still poll packets (and complain) while the ports are being removed.
>
> OpenFlow ports are getting removed before their backing datapath ports,
> so there is always a small window where packets can arrive on datapath
> ports that do not have associated OpenFlow port numbers anymore.
> Reversing this might not be an option due to reference counting, but I
> don't remember exactly.
>
> The same applies to upcalls in the kernel datapath, because packets can
> be queued for upcall while the port is getting removed.  And it's even
> trickier to fix in the kernel, because it's done fully asynchronously.

Ok, thanks for the context / explanations.


-- 
David Marchand

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
