On 2015-04-03 01:52, Tantilov, Emil S wrote:
>> >
>> ># numactl --hardware
>> >available: 4 nodes (0-3)
>> >node 0 cpus: 0 1 2 3 4 20 21 22 23 24
>> >node 0 size: 24466 MB
>> >node 0 free: 22444 MB
>> >node 1 cpus: 5 6 7 8 9 25 26 27 28 29
>> >node 1 size: 16384 MB
>> >node 1 free: 15831 MB
>> >node 2 cpus: 10 11 12 13 14 30 31 32 33 34
>> >node 2 size: 16384 MB
>> >node 2 free: 15791 MB
>> >node 3 cpus: 15 16 17 18 19 35 36 37 38 39
>> >node 3 size: 24576 MB
>> >node 3 free: 22508 MB
>> >node distances:
>> >node 0 1 2 3
>> >0: 10 21 31 31
>> >1: 21 10 31 31
>> >2: 31 31 10 21
>> >3: 31 31 21 10
> Since you have 4 nodes you may want to check your board layout and try to pin
> the queues and iperf to the same node as the network interface. See if that
> helps.
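The pinning suggested above can be sketched roughly as follows. This is only a sketch: "eth0" and the IRQ naming are placeholders, and the real interface name, IRQ lines, and NUMA node should be read from /proc/interrupts and /sys/class/net/&lt;if&gt;/device/numa_node on the actual box.

```shell
# Sketch: pin the NIC's queue IRQs and iperf to the NIC's NUMA node.
# "eth0" is a placeholder; stop irqbalance first so it does not
# rewrite the affinities behind your back.
IFACE=eth0
NODE=$(cat /sys/class/net/$IFACE/device/numa_node)
CPUS=$(cat /sys/devices/system/node/node$NODE/cpulist)

# Route every IRQ whose name mentions the interface to that node's CPUs.
for irq in $(grep "$IFACE" /proc/interrupts | cut -d: -f1); do
    echo "$CPUS" > /proc/irq/$irq/smp_affinity_list
done

# Bind the receiver's CPUs and memory to the same node.
numactl --cpunodebind=$NODE --membind=$NODE iperf -s -u
```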

Thanks for the hints.

Since this is UDP, it shouldn't actually matter whether the iperf server is
pinned to the core receiving the flow. In fact I didn't launch an iperf server
at all; I'm only running the iperf client to send *fragmented* UDP packets.
On the server side, ifconfig still shows the dropped-packet count increasing,
and ethtool -S confirms it: rx_missed_errors is climbing as well.
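(For reference, this is how I watch the counter; "eth0" stands in for the actual receiving interface:)

```shell
# Print a timestamped rx_missed_errors sample once a second.
# "eth0" is a placeholder for the receiving interface.
while sleep 1; do
    echo "$(date +%T) $(ethtool -S eth0 | awk '/rx_missed_errors/ {print $2}')"
done
```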

Client: iperf -c SERVER_IP -u -b 10G -i 1 -t 100000 -P 12 -l 30k

Server:
kernel 4.0.0-rc4:  buffer size (-l) >= 30k -> no rx_missed_errors
                   buffer size (-l) <  30k -> rx_missed_errors confirmed

Server:
kernel 2.6.32-358: buffer size (-l) >= 10k -> no rx_missed_errors
                   buffer size (-l) <  10k -> rx_missed_errors confirmed
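One way to read these numbers: at a fixed bit rate, shrinking -l raises the datagram rate, and after IP fragmentation the per-packet rate the NIC must absorb grows even faster. A rough estimate (assumptions: -l 30k means 30*1024 bytes, 1500-byte MTU, so about 1480 bytes of IP payload per fragment):

```shell
# Rough packet-rate estimate for: iperf -u -b 10G -l 30k
# Assumptions: 30k = 30*1024 bytes, 1480 payload bytes per fragment.
BITRATE=10000000000                           # -b 10G, in bits/s
BUF=30720                                     # -l 30k, in bytes
FRAG=1480                                     # IP payload per fragment
DGRAMS=$(( BITRATE / (BUF * 8) ))             # datagrams per second
FRAGS=$(( (BUF + FRAG - 1) / FRAG ))          # fragments per datagram
echo "$DGRAMS datagrams/s x $FRAGS fragments = $(( DGRAMS * FRAGS )) packets/s"
# -> 40690 datagrams/s x 21 fragments = 854490 packets/s
```

Halving -l roughly doubles the datagram rate while each datagram still fragments, so the wire packet rate climbs and rx_missed_errors (packets dropped because the adapter's receive FIFO overflowed) starts showing up sooner.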

Any suggestions?

> If you want to debug your numa allocations in more detail, check out this 
> tool:
> http://www.intel.com/software/pcm

-- 
Heroes of this world rise from among us; once in the jianghu, the years press on.
Grand ambitions pass amid talk and laughter; life is no more than one long bout of drunkenness.

_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit
http://communities.intel.com/community/wired
