> From: Hari Haran <info2hariha...@gmail.com> 
> Sent: Wednesday, July 19, 2023 4:30 PM
> To: Van Haaren, Harry <harry.van.haa...@intel.com>
> Cc: users@dpdk.org
> Subject: Re: Inflight value shown invalid in Event Dev Queue
> 
> Hi Harry Haaren (Yes :) )
> 
> I have given more details below; please check it.

Please reply "in-line"; it makes the conversation easier to follow for future 
readers, and gives context to your replies.

> Device Configuration:
> Event Dev Queue : 1
> Number of ports : 3
> 
> Queue 0 depth - 32k
> Ports 0, 1 and 2: Enqueue depth 4096, Dequeue depth 128
> 
> Cores: 
> Rx core - 1
> Worker cores - 2
> 
> Port 2:
> Used in the Rx core to post packets from the Rx core to the worker cores via 
> the event dev queue, so port 2 is used to post packets only.
> API used: rte_event_enqueue_burst()
> 
> Ports 0 and 1 are linked with event dev queue 0 and are used to dequeue 
> packets only.
> Port 0 is used in worker core 1, only to receive packets from the Rx core via 
> the event dev queue.
> Port 1 is used in worker core 2, only to receive packets from the Rx core via 
> the event dev queue.
> API used: rte_event_dequeue_burst()
> 
> Expected behaviour:
> 
> Port 2 enqueues packets to the event dev queue in the Rx core.
> Ports 0 and 1 dequeue packets from the event dev queue in the two workers.
> 
> The event dev scheduler for queue 0 will schedule packets received on port 2 
> to ports 0 and 1.
> 
> 
> Problem Description:
> 
> Port 0 - only received 4096 packets through the event dev queue; after that, 
> no packets are available for it.
> API used: rte_event_dequeue_burst()
> 
> Port 2 - successfully enqueued 32k packets through the event dev queue; after 
> that, enqueue failures are observed.
> API used: rte_event_enqueue_burst()
> It looks like the event dev queue is stalled at this point.
> 
> Also, why do the port 0 stats show inflight as 4096?

This seems to be the problem - are you returning the events to the Eventdev, 
or calling the rte_event_dequeue_burst() API again? (With the "implicit 
releases" default, the next dequeue() call will automatically "complete" the 
previously dequeued events, making the "inflights" go down and allowing the 
Eventdev to make forward progress.)
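
For reference, a minimal worker-loop sketch (dev_id, port_id, BURST_SIZE and 
process_event() are placeholder names, not taken from your code):

    struct rte_event evs[BURST_SIZE];
    while (!done) {
            /* With implicit releases (the default), this dequeue call
             * "completes" the events returned by the previous call,
             * so the port's inflight count decreases. */
            uint16_t nb = rte_event_dequeue_burst(dev_id, port_id,
                            evs, BURST_SIZE, 0);
            for (uint16_t i = 0; i < nb; i++)
                    process_event(&evs[i]);
    }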

Please ensure that new events are enqueued with the "NEW" op type, and that 
the worker cores forward events with the "FWD" op type.

This ensures that the RX/producer core is back-pressured first, and that the 
worker cores (which enqueue FWD-type events) can make progress while there is 
still space in the device.
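
Roughly like this (an untested sketch; the identifiers are illustrative only):

    /* Producer (Rx core): inject packets as NEW events. */
    struct rte_event ev = {0};
    ev.op = RTE_EVENT_OP_NEW;
    ev.queue_id = 0;
    ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
    ev.event_type = RTE_EVENT_TYPE_ETHDEV;
    ev.flow_id = flow_id;        /* per-flow id for atomic scheduling */
    ev.mbuf = m;
    rte_event_enqueue_burst(dev_id, producer_port_id, &ev, 1);

    /* Worker: an event sent onwards to another queue is a FWD. */
    ev.op = RTE_EVENT_OP_FORWARD;
    ev.queue_id = next_queue_id;
    rte_event_enqueue_burst(dev_id, worker_port_id, &ev, 1);
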
Typically, setting a "new_event_threshold" on the producer port 
(https://doc.dpdk.org/api/structrte__event__port__conf.html#a70bebdfb5211f97b81b46ff08594ddda)
to 50% of the total capacity is a good starting point. The ideal NEW percentage 
depends on the workload itself, and on how often one NEW event turns into N new 
events.
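
Using the numbers from your configuration, something like this (sketch only; 
dev_id and the error handling are placeholders):

    struct rte_event_port_conf pconf = {
            /* 50% of the 32k total event limit you configured. */
            .new_event_threshold = 16 * 1024,
            .enqueue_depth = 4096,
            .dequeue_depth = 128,
    };
    if (rte_event_port_setup(dev_id, 2 /* producer port */, &pconf) < 0)
            rte_exit(EXIT_FAILURE, "producer port setup failed\n");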

> Port 0 Stats:
>   rx   0  drop 0  tx   4096   inflight 4096
> 
> All Stats:
> Dev=0 Port=1
> EventDev todo-fix-name: ports 3, qids 1
> rx   32768
> drop 0
> tx   4096
> sched calls: 628945658
> sched cq/qid call: 628964843
> sched no IQ enq: 628926401
> sched no CQ enq: 628942982
> inflight 32768, credits: 0
> 
> 
> Port 0
>   rx   0  drop 0  tx   4096   inflight 4096
>   Max New: 32768  Avg cycles PP: 0    Credits: 0
>   Receive burst distribution:
>       0:100% 1-4:0.00% 5-8:0.00% 9-12:0.00%
>   rx ring used:    0 free: 4096
>   cq ring used:    0 free:  128
> Port 1
>   rx   0  drop 0  tx   0  inflight 0
>   Max New: 32768  Avg cycles PP: 0    Credits: 0
>   Receive burst distribution:
>       0:100%
>   rx ring used:    0 free: 4096
>   cq ring used:    0 free:  128
> Port 2
>   rx   32768  drop 0  tx   0  inflight 0
>   Max New: 32768  Avg cycles PP: 0    Credits: 0
>   Receive burst distribution:
>       0:-nan%
>   rx ring used:    0 free: 4096
>   cq ring used:    0 free:  128
> 
> Queue 0 (Atomic)
>   rx   32768  drop 0  tx   4096
>   Per Port Stats:
>     Port 0: Pkts: 4096    Flows: 1
>     Port 1: Pkts: 0   Flows: 0
>     Port 2: Pkts: 0   Flows: 0
>     Port 3: Pkts: 0   Flows: 0
>   iq 0: Used 28672
> 
> Regards,
> Hariharan

Regards, -Harry

<snip: older parts of conversation below>
