Hi Ilya,
>>
>> +/* Flush tx queues.
>> + * This is done periodically to empty the intermediate queue in case of
>> + * fewer packets (< INTERIM_QUEUE_BURST_THRESHOLD) buffered in the queue.
>> + */
>> +static int
>> +netdev_dpdk_txq_flush(struct netdev *netdev, int qid, bool concurrent_txq)
>> +{
>> +    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>> +    struct dpdk_tx_queue *txq = &dev->tx_q[qid];
>> +
>> +    if (OVS_LIKELY(txq->dpdk_pkt_cnt)) {
>> +        if (OVS_UNLIKELY(concurrent_txq)) {
>> +            qid = qid % dev->up.n_txq;
>> +            rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
>> +        }
>> +
>> +        netdev_dpdk_eth_tx_burst(dev, qid, txq->dpdk_burst_pkts,
>> +                                 txq->dpdk_pkt_cnt);
>
>The queue used for send and the locked one are different because you're
>remapping the qid before taking the spinlock.

>I suspect that we're always using right queue numbers in current
>implementation of dpif-netdev, but I need to recheck to be sure.

I believe the case you are referring to here is the XPS case ('dynamic_txqs' is true).
When we have to flush the packets, we retrieve the qid from
'cached_tx_port->last_used_qid', which was initialized earlier by
'dpif_netdev_xps_get_tx_qid()'. The logic of remapping the qid and acquiring the
spinlock in the above function is no different from the current logic on master.
Can you elaborate on the specific case where this would break the functionality?

Please note that in 'dpif_netdev_xps_get_tx_qid' the qid can change, and in that
case we do flush the queue.

- Bhanuprakash. 

>Anyway, logic of this function completely broken.
>
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
