On 10/25/23 12:09, David Marchand wrote:
> Forwarding to dev@
> 
> On Mon, Oct 23, 2023 at 6:05 PM <[email protected]> wrote:
>>> 2023-10-23T15:02:12.622Z|00063|dpdk|INFO|VHOST_CONFIG: 
>>> (/root/ovs-dev/tests/system-dpdk-testsuite.dir/017/dpdkvhostclient0) virtio 
>>> is now ready for processing.
>>> 2023-10-23T15:02:12.622Z|00064|netdev_dpdk|INFO|vHost Device 
>>> '/root/ovs-dev/tests/system-dpdk-testsuite.dir/017/dpdkvhostclient0' has 
>>> been added on numa node 0
>>> 2023-10-23T15:02:13.592Z|00074|dpif_netdev|INFO|Performing pmd to rx queue 
>>> assignment using cycles algorithm.
>>> 2023-10-23T15:02:13.592Z|00075|dpif_netdev|INFO|Core 21 on numa node 0 
>>> assigned port 'dpdkvhostuserclient0' rx queue 0 (measured processing cycles 
>>> 0).
>>> 2023-10-23T15:02:13.592Z|00001|netdev_dpdk(ovs_vhost2)|INFO|State of queue 
>>> 0 ( tx_qid 0 ) of vhost device 
>>> '/root/ovs-dev/tests/system-dpdk-testsuite.dir/017/dpdkvhostclient0' 
>>> changed to 'enabled'
>>> 2023-10-23T15:02:13.592Z|00002|netdev_dpdk(ovs_vhost2)|INFO|State of queue 
>>> 1 ( rx_qid 0 ) of vhost device 
>>> '/root/ovs-dev/tests/system-dpdk-testsuite.dir/017/dpdkvhostclient0' 
>>> changed to 'enabled'
>>> 2023-10-23T15:02:13.595Z|00076|unixctl|DBG|received request dpctl/show[], 
>>> id=0
>>> 2023-10-23T15:02:13.596Z|00077|unixctl|DBG|replying with success, id=0: 
>>> "netdev@ovs-netdev:
>>>   lookups: hit:0 missed:2 lost:0
>>>   flows: 2
>>>   port 0: ovs-netdev (tap)
>>>   port 1: br10 (tap)
>>>   port 2: dpdkvhostuserclient0 (dpdkvhostuserclient: 
>>> configured_rx_queues=1, configured_tx_queues=1, mtu=9000, 
>>> requested_rx_queues=1, requested_tx_queues=1)
>>> "
>>> 2023-10-23T15:02:13.715Z|00078|dpif_netdev|INFO|Performing pmd to rx queue 
>>> assignment using cycles algorithm.
>>> 2023-10-23T15:02:13.715Z|00079|dpif_netdev|INFO|Core 21 on numa node 0 
>>> assigned port 'dpdkvhostuserclient0' rx queue 0 (measured processing cycles 
>>> 0).
>>> 2023-10-23T15:02:13.728Z|00080|unixctl|DBG|received request dpctl/show[], 
>>> id=0
>>> 2023-10-23T15:02:13.728Z|00081|unixctl|DBG|replying with success, id=0: 
>>> "netdev@ovs-netdev:
>>>   lookups: hit:0 missed:2 lost:0
>>>   flows: 2
>>>   port 0: ovs-netdev (tap)
>>>   port 1: br10 (tap)
>>>   port 2: dpdkvhostuserclient0 (dpdkvhostuserclient: 
>>> configured_rx_queues=1, configured_tx_queues=1, mtu=2000, 
>>> requested_rx_queues=1, requested_tx_queues=1)
>>> "
>>> 2023-10-23T15:02:13.756Z|00082|bridge|INFO|bridge br10: deleted interface 
>>> dpdkvhostuserclient0 on port 1
>>> 2023-10-23T15:02:13.756Z|00083|dpif_netdev|INFO|PMD thread on numa_id: 1, 
>>> core id: 88 destroyed.
>>> 2023-10-23T15:02:13.772Z|00002|dpdk(pmd-c88/id:103)|INFO|PMD thread 
>>> released DPDK lcore 2.
>>> 2023-10-23T15:02:13.778Z|00084|dpif_netdev|INFO|PMD thread on numa_id: 0, 
>>> core id: 21 destroyed.
>>> 2023-10-23T15:02:13.778Z|00002|ofproto_dpif_xlate(pmd-c21/id:102)|WARN|received
>>>  packet on unknown port 1 on bridge br10 while processing 
>>> icmp6,in_port=1,vlan_tci=0x0000,dl_src=ca:76:e9:ff:a2:09,dl_dst=33:33:00:00:00:02,ipv6_src=fe80::c876:e9ff:feff:a209,ipv6_dst=ff02::2,ipv6_label=0x00000,nw_tos=0,nw_ecn=0,nw_ttl=255,nw_frag=no,icmp_type=133,icmp_code=0
>>> 2023-10-23T15:02:13.791Z|00003|dpdk(pmd-c21/id:102)|INFO|PMD thread 
>>> released DPDK lcore 1.
>>> 2023-10-23T15:02:13.801Z|00085|dpdk|INFO|VHOST_CONFIG: 
>>> (/root/ovs-dev/tests/system-dpdk-testsuite.dir/017/dpdkvhostclient0) free 
>>> connfd 95
>>> 2023-10-23T15:02:13.801Z|00086|netdev_dpdk|INFO|vHost Device 
>>> '/root/ovs-dev/tests/system-dpdk-testsuite.dir/017/dpdkvhostclient0' not 
>>> found
> 
> I am a bit puzzled by this report.
> It is similar to
> https://mail.openvswitch.org/pipermail/ovs-dev/2022-July/396325.html.
> 
> I understand this shows a race in the OVS cleanup sequence, with a
> packet (triggering an upcall) received by a pmd on a port that is not
> referenced in the ofproto bridge anymore.
> Why did it show up again? This is probably due to my patch 7 in the v7
> series, which lets testpmd send packets while the vhu port is being
> deleted.
> 
> The easiest (laziest?) option for me is probably to drop this patch 7
> and instead waive the warnings about a vhu socket reconnection...
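For reference, waiving a warning in the testsuite is usually a sed
delete pattern added to the test's OVS_VSWITCHD_STOP allowlist.  A
minimal sketch, using the "unknown port" warning from the log above as
the pattern (the actual vhu reconnection warning would need its own
pattern):

  OVS_VSWITCHD_STOP(["/received packet on unknown port/d"])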

The packets are coming from the kernel interface on the other side
of testpmd, right?  In that case, can we just bring that interface
down before removing the OVS port, to prevent random IPv6 traffic
from flowing around?  Another similar option might be to set the
admin state DOWN on the OVS side for the vhost-user port.
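
Roughly, with 'tap0' as a placeholder for the kernel-side interface
name, something like:

  # Take the kernel interface down before deleting the OVS port, so
  # that no stray IPv6 traffic (e.g. the router solicitation seen in
  # the log above) is generated during teardown.
  ip link set tap0 down

  # Or, alternatively, set the OpenFlow admin state down first on the
  # OVS side (bridge and port names taken from the log above):
  ovs-ofctl mod-port br10 dpdkvhostuserclient0 down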

> But I find it strange that there is a window in which OVS pmd threads
> still poll packets (and complain) while the ports are being removed.

OpenFlow ports are removed before their backing datapath ports, so
there is always a small window where packets can arrive on datapath
ports that no longer have associated OpenFlow port numbers.
Reversing this order might not be an option due to reference
counting, but I don't remember the details exactly.
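
In principle the two views can be compared during teardown (both are
standard OVS CLI commands), although the window is very short:

  ovs-ofctl show br10      # OpenFlow view: the port is already gone
  ovs-appctl dpctl/show    # datapath view: the port may still linger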

The same applies to upcalls in the kernel datapath, because packets
can be queued for upcall while the port is being removed.  And it's
even trickier to fix for the kernel, because that part is fully
asynchronous.

Best regards, Ilya Maximets.
