Hi Don,
We generated the packets with a test device, "Spirent_TestCenter_C100", using
enough distinct source addresses and ports on the client side, so the hash should
be spread evenly. In fact, according to /proc/interrupts, the interrupts hashed to
only about 8 CPUs (out of 20 total).
1. Is there anything that can be done about "RSS is limited to 16 queues"? We
would like the interrupts distributed across all 20 CPU cores.
2. Does the 82599 ATR take effect before or after RSS? Is the ATR function
implemented in the chip itself or in the kernel driver?
3. Can "/proc/irq/<rxtx_irq_id>/smp_affinity" be set to more than one CPU
core per IRQ? For example, "echo 3 > /proc/irq/159/smp_affinity" to map IRQ 159
to both CPU 0 and CPU 1? In my test, IRQ 159 still went only to CPU 0, never to
CPU 1. Is there a limit here? I checked the manual but found nothing about the
maximum number of CPUs per IRQ. Is there any documentation on this?
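
For reference, here is roughly how we set and check the mask (IRQ 159 is just
the example number from above; IRQ numbers and the interface name will differ
on another system):

    # allow IRQ 159 on cpu0 and cpu1 (mask 0x3 = bit 0 + bit 1)
    echo 3 > /proc/irq/159/smp_affinity
    cat /proc/irq/159/smp_affinity
    # then watch which CPU column actually increments for that IRQ
    watch -n1 'grep "159:" /proc/interrupts'
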
Thank you for your help.
Best,
Liu Zh
At 2014-12-19 01:11:28, "Skidmore, Donald C" <donald.c.skidm...@intel.com>
wrote:
>Hey Liu,
>
>Couple of things you're probably seeing. First off, just because you have 20
>queues doesn't mean your flows will hash onto all of those queues. In fact,
>if you are testing this by sending traffic from just one system, I would expect
>you to be using only one queue, since you would only have one flow. We gain
>the advantage of multiple queues by keeping all of the packets of a flow on the
>same CPU. Likewise, as was mentioned in another reply, RSS is limited to 16
>queues on 82599.
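>
>One way to see how many RX queues RSS is really spreading over is ethtool.
>This is just a sketch, assuming an interface named eth0 and an ethtool/driver
>combination new enough to support these options:
>
>    ethtool -l eth0              # show how many channels (queues) are configured
>    ethtool -x eth0              # dump the RSS indirection table and hash key
>    ethtool -L eth0 combined 16  # optionally cap the queue count at the RSS limit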
>
>Also, 82599 has ATR on by default, and ATR takes precedence over RSS for flow
>routing. It can support up to 63 queues, but it selects queues in a different
>way. It creates a hash entry when it notices a SYN packet being transmitted,
>and received packets of that flow are then placed on the same queue the flow
>was transmitted on. If you have a 1-to-1 mapping between CPUs and queues, this
>keeps the flow on the same CPU as the application consuming it. But once again,
>if you have just one application creating packets, you will have just one flow
>and thus use just one queue.
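>
>If you would rather have plain RSS spreading than ATR, one knob to try (a
>sketch, assuming eth0 and that your ethtool/driver expose it) is ntuple
>filtering; on ixgbe, enabling ntuple filters takes Flow Director out of ATR
>mode, so the RSS hash then decides the queue:
>
>    ethtool -k eth0 | grep ntuple   # check whether ntuple filtering is on
>    ethtool -K eth0 ntuple on       # turns off ATR; RSS hashing applies again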
>
>Thanks,
>-Don Skidmore <donald.c.skidm...@intel.com>
>
>> -----Original Message-----
>> From: lzh [mailto:lhqlzh...@163.com]
>> Sent: Thursday, December 18, 2014 12:33 AM
>> To: lzh; e1000-devel@lists.sourceforge.net
>> Cc: e1000-devel@lists.sourceforge.net
>> Subject: Re: [E1000-devel] RSS related questions
>>
>>
>>
>>
>> There is another problem. When we set all 20 IRQs to both cpu0 and
>> cpu1 (echo 3 > /proc/irq/<rxtx_irq_id>/smp_affinity), all the interrupts still
>> go to cpu0.
>> We are on 64-bit x86, so the IRQ balancing should be handled by the APIC
>> hardware ("msi_set_affinity" in linux/arch/x86/kernel/apic/io_apic.c). Why
>> doesn't it take effect?
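>>
>> A sketch of what we could try instead (the IRQ range 140-159 is made up for
>> illustration, and irqbalance may also be rewriting the masks, so we stop it
>> first) is to pin each queue IRQ to a single core rather than giving every IRQ
>> a multi-CPU mask:
>>
>>     service irqbalance stop
>>     cpu=0
>>     for irq in $(seq 140 159); do
>>         printf '%x\n' $((1 << cpu)) > /proc/irq/$irq/smp_affinity
>>         cpu=$((cpu + 1))
>>     done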
>>
>>
>> At 2014-12-18 11:12:40, "lzh" <lhqlzh...@163.com> wrote:
>>
>> Hi,
>> We are using an Intel 82599 NIC configured with 20 rx/tx queues. We test it
>> on an E5-2670 system that has 20 CPU cores. We bind each queue to its own CPU
>> via "/proc/irq/<rxtx_irq_id>/smp_affinity".
>> First, we tested throughput and found that the RX interrupts are not
>> distributed across all CPU cores. We don't know why.
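>>
>> One way we look at the per-queue spread (assuming the interface is eth0; the
>> per-queue counter names are what ixgbe exposes on our kernel and may differ):
>>
>>     grep eth0 /proc/interrupts                    # per-CPU interrupt counts per queue
>>     ethtool -S eth0 | grep 'rx_queue_.*_packets'  # per-RX-queue packet counters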
>>
>> Second, we checked the ixgbe driver and found these three macros:
>> #define IXGBE_DCA_RXCTRL_CPUID_MASK        0x0000001F /* Rx CPUID Mask */
>> #define IXGBE_DCA_RXCTRL_CPUID_MASK_82599  0xFF000000 /* Rx CPUID Mask */
>> #define IXGBE_DCA_RXCTRL_CPUID_SHIFT_82599 24         /* Rx CPUID Shift */
>> Does this mean that 82599 RSS can direct interrupts to at most 8 queues?
>>
>> Can anybody help?
>> Thank you.
>>
>> Best,
>> Liu.
>>
>>
>