On Fri, Oct 14, 2016 at 8:47 AM, Tal Abudi <talab...@gmail.com> wrote:
> Hi All
> Maybe a silly question.. But...
> When testing high performance (packet rate) on a system with igb/ixgbe
> NICs, is there a difference between turning multi-queue off and setting
> each queue's affinity to the same CPU?
> My hunch says that MQ off (single queue) is better, since the controller
> doesn't need to select the right queue for each Rx/Tx packet, but maybe
> it's negligible...
> Thanks,
> Tal Abudi

The hardware itself doesn't really care how many queues are being
used; however, the queue count can affect PCIe bus efficiency.

For igb the impact would be negligible on everything 82580 and newer;
on 82576 there is some impact with small packets due to descriptor
caching and limited PCIe throughput.  For ixgbe you might notice a
difference if you are handling small packets in an accelerated
environment such as DPDK.  Specifically, you can achieve 64B packet
line rate with a single queue on 82599 and newer adapters supported by
ixgbe.  However, as you add queues the throughput will drop: the Rx
descriptor-fetch overhead increases because blocks of packets become
interleaved between queues, so fewer descriptors are fetched per read
request.
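For reference, here is a rough sketch of the two setups being compared.
The interface name (eth0), the "TxRx" IRQ naming, and the CPU mask are
examples for an igb/ixgbe-style driver; check /proc/interrupts on your
own box for the actual names and IRQ numbers.

```shell
# Option 1: turn multi-queue off by reducing the NIC to a single
# combined Rx/Tx queue (requires a driver that supports ethtool -L):
ethtool -L eth0 combined 1

# Option 2: keep multiple queues but pin every queue's IRQ to the same
# CPU.  Find the per-queue IRQs in /proc/interrupts, then write the
# CPU mask (0x1 = CPU0) to each one:
for irq in $(awk -F: '/eth0-TxRx/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
    echo 1 > /proc/irq/$irq/smp_affinity
done
```

Note that with option 2 the NIC is still spreading packets across
queues in hardware, so the descriptor-fetch interleaving described
above still applies even though all interrupts land on one CPU.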

Hope that helps.

- Alex
