Hi,

We're using multiqueue, and RSS doesn't always balance the load very well.  I 
had what I thought was a clever idea: periodically measure the load 
distribution (CPU load on the IO cores) in a background pthread and, if the 
imbalance exceeds a given threshold, call rte_eth_dev_rss_reta_update() to 
adjust the redirection table dynamically.  In practice it seems to work 
nicely.  But I'm concerned about:
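For reference, the rebalance step is roughly this (a simplified, standalone 
sketch; the real code derives the load metric from per-core busy cycles, and 
the resulting table gets packed and passed to rte_eth_dev_rss_reta_update()):

```c
#include <stdint.h>

/* Sketch of the rebalance step: given a per-queue load estimate (any
 * monotone metric works), reassign half of the hottest queue's RETA
 * entries to the coldest queue, but only if the gap exceeds the
 * threshold.  'reta' is a flat view of the port's redirection table. */
static void
rebalance_reta(const double *load, uint16_t nb_queues,
               uint16_t *reta, uint16_t reta_size, double threshold)
{
    uint16_t hot = 0, cold = 0;

    for (uint16_t q = 1; q < nb_queues; q++) {
        if (load[q] > load[hot])
            hot = q;
        if (load[q] < load[cold])
            cold = q;
    }
    if (load[hot] - load[cold] < threshold)
        return; /* imbalance below threshold: leave the table alone */

    uint16_t owned = 0, moved = 0;
    for (uint16_t i = 0; i < reta_size; i++)
        if (reta[i] == hot)
            owned++;
    for (uint16_t i = 0; i < reta_size && moved < owned / 2; i++) {
        if (reta[i] == hot) {
            reta[i] = cold;
            moved++;
        }
    }
}
```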

https://doc.dpdk.org/api/rte__ethdev_8h.html#a3c1540852c9cf1e576a883902c2e310d

Which states:

By default, all the functions of the Ethernet Device API exported by a PMD are 
lock-free functions which assume to not be invoked in parallel on different 
logical cores to work on the same target object. For instance, the receive 
function of a PMD cannot be invoked in parallel on two logical cores to poll 
the same Rx queue [of the same port]. Of course, this function can be invoked 
in parallel by different logical cores on different Rx queues. It is the 
responsibility of the upper level application to enforce this rule.

In this context, what is the "target object"?  A queue_id of the port, or the 
port itself?  Would I need to add port-level spinlocks around every invocation 
of rte_eth_dev_*()?  That's a hard no; it would destroy performance.

Alternatively, if I were to periodically call rte_eth_dev_rss_reta_update() 
from the IO cores instead of the background core, as the above paragraph 
suggests, that doesn't seem correct either.  The function takes a reta_conf[] 
array covering all RETA entries for the port, each mapping an entry to a 
queue_id.  Is it safe to remap RETA entries for a given port on one IO core 
while another IO core is potentially reading from its Rx queue on that same 
port?  That doesn't seem much different from doing the remap on the background 
core, as I do now.
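For concreteness, here's roughly how I build that reta_conf[] argument (a 
standalone sketch with a local mirror of rte_eth_rss_reta_entry64 so it 
compiles on its own; the real code includes <rte_ethdev.h>, uses 
RTE_ETH_RETA_GROUP_SIZE, and gets reta_size from rte_eth_dev_info_get()):

```c
#include <stdint.h>
#include <string.h>

/* Local mirror of DPDK's struct rte_eth_rss_reta_entry64, for a
 * standalone sketch; real code takes it from <rte_ethdev.h>. */
#define RETA_GROUP_SIZE 64
struct reta_entry64 {
    uint64_t mask;                  /* which of these 64 entries to update */
    uint16_t reta[RETA_GROUP_SIZE]; /* destination queue id per entry */
};

/* Pack a flat table of reta_size queue ids into the reta_conf[] layout
 * that rte_eth_dev_rss_reta_update() expects, setting every mask bit so
 * the whole table is rewritten in one call.  Assumes reta_size is a
 * multiple of 64, as typical NICs report. */
static void
pack_reta_conf(const uint16_t *table, uint16_t reta_size,
               struct reta_entry64 *conf)
{
    memset(conf, 0, (reta_size / RETA_GROUP_SIZE) * sizeof(*conf));
    for (uint16_t i = 0; i < reta_size; i++) {
        conf[i / RETA_GROUP_SIZE].mask |= 1ULL << (i % RETA_GROUP_SIZE);
        conf[i / RETA_GROUP_SIZE].reta[i % RETA_GROUP_SIZE] = table[i];
    }
}
```

Since every mask bit is set, a single call replaces the entire table, which 
is exactly why a concurrent update from any core looks racy to me.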

I'm starting to suspect this function was intended to be called once at 
startup, before rte_eth_dev_start(), and/or that the port must be stopped 
before calling it.  If that's the case, then I'll call this idea too clever by 
half and give it up now.

Thanks in advance for your help!

-Scott
