Re: [E1000-devel] MQ disabled VS same core affinity

2016-10-19 Thread Tal Abudi
Thanks!
I appreciate your answer.


On Fri, Oct 14, 2016 at 11:59 PM, Alexander Duyck  wrote:

> On Fri, Oct 14, 2016 at 8:47 AM, Tal Abudi  wrote:
> > Hi All
> > Maybe a silly question, but...
> > When testing high performance (packet rate) on a system with igb/ixgbe
> > NICs, is there a difference between turning multi-queue off and setting
> > each queue's affinity to the same CPU?
> >
> > My hunch says that MQ off (single queue) is better since the controller
> > doesn't need to select the right queue for each RX/TX packet, but maybe
> > it's negligible...
> >
> > Thanks,
> > Tal Abudi
> >
>
> The hardware itself doesn't really care how many queues are being
> used; however, the queue count can impact PCIe bus efficiency.
>
> For igb the impact would be negligible on everything 82580 and newer;
> on the 82576 there is some impact with small packets due to descriptor
> caching and limited PCIe throughput.  For ixgbe you might notice a
> difference if you are handling small packets in an accelerated
> environment with DPDK.  Specifically, you can achieve 64B packet line
> rate with 1 queue on 82599 and newer adapters supported by ixgbe.
> However, as you add queues the throughput will drop: the Rx descriptor
> fetch overhead increases because blocks of packets become interleaved
> between queues, so fewer descriptors are fetched per read request.
>
> Hope that helps.
>
> - Alex
>



-- 
Best regards,
Tal Abudi


Re: [E1000-devel] MQ disabled VS same core affinity

2016-10-14 Thread Alexander Duyck
On Fri, Oct 14, 2016 at 8:47 AM, Tal Abudi  wrote:
> Hi All
> Maybe a silly question, but...
> When testing high performance (packet rate) on a system with igb/ixgbe
> NICs, is there a difference between turning multi-queue off and setting
> each queue's affinity to the same CPU?
>
> My hunch says that MQ off (single queue) is better since the controller
> doesn't need to select the right queue for each RX/TX packet, but maybe
> it's negligible...
>
> Thanks,
> Tal Abudi
>

The hardware itself doesn't really care how many queues are being
used; however, the queue count can impact PCIe bus efficiency.

For igb the impact would be negligible on everything 82580 and newer;
on the 82576 there is some impact with small packets due to descriptor
caching and limited PCIe throughput.  For ixgbe you might notice a
difference if you are handling small packets in an accelerated
environment with DPDK.  Specifically, you can achieve 64B packet line
rate with 1 queue on 82599 and newer adapters supported by ixgbe.
However, as you add queues the throughput will drop: the Rx descriptor
fetch overhead increases because blocks of packets become interleaved
between queues, so fewer descriptors are fetched per read request.
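
(For reference, the 64B line-rate figure works out as follows on 10GbE,
assuming the usual 20 bytes of preamble plus inter-frame gap per frame:

    10 Gb/s / ((64 + 20) bytes * 8 bits/byte) = 10,000,000,000 / 672 ~= 14.88 Mpps

so a single 82599 queue is sustaining roughly 14.88 million packets per
second in that test.)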

Hope that helps.

- Alex
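
For anyone reproducing the comparison, below is a minimal sketch (not a
definitive recipe) that pins every queue interrupt of a NIC to one CPU,
assuming a Linux host, root privileges, and igb/ixgbe-style vector names
such as "eth0-TxRx-0" in /proc/interrupts; the interface name and CPU
number are placeholders. The single-queue side of the comparison would
instead be configured with something like "ethtool -L eth0 combined 1".

#!/usr/bin/env python3
# Sketch: pin every queue interrupt of a NIC to a single CPU, so the
# "multi-queue, same-core affinity" case can be compared against the
# single-queue case (e.g. "ethtool -L eth0 combined 1").
# Assumes Linux, root privileges, and per-queue vector names such as
# "eth0-TxRx-0" in /proc/interrupts; iface and cpu are placeholders.
import sys

def pin_queue_irqs(iface: str, cpu: int) -> None:
    with open("/proc/interrupts") as f:
        for line in f:
            fields = line.split()
            # Interrupt lines start with "<irq>:"; skip headers and totals.
            if not fields or not fields[0].rstrip(":").isdigit():
                continue
            irq = fields[0].rstrip(":")
            # Match per-queue vectors ("eth0-TxRx-0"), not the bare link interrupt.
            if any(tok.startswith(iface + "-") for tok in fields):
                # smp_affinity_list takes a CPU list, e.g. "0".
                with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as aff:
                    aff.write(str(cpu))
                print(f"IRQ {irq} ({iface} queue vector) -> CPU {cpu}")

if __name__ == "__main__":
    pin_queue_irqs(sys.argv[1] if len(sys.argv) > 1 else "eth0", 0)

Whichever way the comparison is run, "ethtool -l <iface>" can be used
afterwards to confirm how many channels are actually active.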
