Hi, I'm seeing some unexpected(?) behavior after calling rte_event_port_unlink() with the SW eventdev driver (DPDK 17.11.2/18.02.1, RTE_EVENT_MAX_QUEUES_PER_DEV=255).
Scenario:
- Run the SW eventdev on a service core.
- Start the eventdev with e.g. 16 ports. Each core has a dedicated port.
- Create one atomic queue and link all active ports to it (some ports may remain unlinked).
- Allocate N events and enqueue them to the eventdev.
- Next, each worker core does a number of scheduling rounds concurrently, e.g.:

      uint64_t rx_events = 0;

      while (rx_events < SCHED_ROUNDS) {
              num_deq = rte_event_dequeue_burst(dev_id, port_id, ev, 1, 0);
              if (num_deq) {
                      rx_events++;
                      rte_event_enqueue_burst(dev_id, port_id, ev, 1);
              }
      }

- This works fine, but problems occur when doing cleanup after the first loop finishes on some core, e.g.:

      rte_event_port_unlink(dev_id, port_id, NULL, 0);

      while (1) {
              num_deq = rte_event_dequeue_burst(dev_id, port_id, ev, 1, 0);
              if (num_deq == 0)
                      break;
              rte_event_enqueue_burst(dev_id, port_id, ev, 1);
      }

- The events enqueued in the cleanup loop randomly end up either back on the same port (which has already been unlinked) or on port 0, which is not used (rte_lcore_id() is mapped to port_id, so port 0 has no worker).

As far as I understand the eventdev API, an eventdev port shouldn't have to be linked to the target queue for enqueue to work. Have I understood something incorrectly, or is there a bug in the SW scheduler? I can provide a simple test application for reproducing this issue.

Below is example rte_event_dev_dump() output when processing events with two cores (ports 2 and 3). The rest of the ports are not linked at all, but somehow an event ends up on port 0, stalling the system.
Regards,
Matias

EventDev todo-fix-name: ports 16, qids 1
	rx 908342	drop 0	tx 908342
	sched calls: 42577156
	sched cq/qid call: 43120490
	sched no IQ enq: 42122057
	sched no CQ enq: 42122064
	inflight 32, credits: 4064
  Port 0
	rx 0	drop 0	tx 2	inflight 2
	Max New: 1024	Avg cycles PP: 0	Credits: 0
	Receive burst distribution: 0:-nan%
	rx ring used: 0	free: 4096
	cq ring used: 2	free: 14
  Port 1
	rx 0	drop 0	tx 0	inflight 0
	Max New: 1024	Avg cycles PP: 0	Credits: 0
	Receive burst distribution: 0:-nan%
	rx ring used: 0	free: 4096
	cq ring used: 0	free: 16
  Port 2
	rx 524292	drop 0	tx 524290	inflight 0
	Max New: 1024	Avg cycles PP: 190	Credits: 30
	Receive burst distribution: 0:98% 1-4:1.82%
	rx ring used: 0	free: 4096
	cq ring used: 0	free: 16
  Port 3
	rx 384050	drop 0	tx 384050	inflight 0
	Max New: 1024	Avg cycles PP: 191	Credits: 0
	Receive burst distribution: 0:100% 1-4:0.04%
	rx ring used: 0	free: 4096
	cq ring used: 0	free: 16
  ...
  Port 15
	rx 0	drop 0	tx 0	inflight 0
	Max New: 1024	Avg cycles PP: 0	Credits: 0
	Receive burst distribution: 0:-nan%
	rx ring used: 0	free: 4096
	cq ring used: 0	free: 16
  Queue 0 (Atomic)
	rx 908342	drop 0	tx 908342
	Per Port Stats:
	  Port 0: Pkts: 2	Flows: 1
	  Port 1: Pkts: 0	Flows: 0
	  Port 2: Pkts: 524290	Flows: 0
	  Port 3: Pkts: 384050	Flows: 0
	  Port 4: Pkts: 0	Flows: 0
	  Port 5: Pkts: 0	Flows: 0
	  Port 6: Pkts: 0	Flows: 0
	  Port 7: Pkts: 0	Flows: 0
	  Port 8: Pkts: 0	Flows: 0
	  Port 9: Pkts: 0	Flows: 0
	  Port 10: Pkts: 0	Flows: 0
	  Port 11: Pkts: 0	Flows: 0
	  Port 12: Pkts: 0	Flows: 0
	  Port 13: Pkts: 0	Flows: 0
	  Port 14: Pkts: 0	Flows: 0
	  Port 15: Pkts: 0	Flows: 0
	-- iqs empty --