I have a Dell 710 server with two Intel 82575GB quad-port cards.  I am using
two interfaces on each card, bonded with the Linux bonding driver.  I have run
into a problem where, once one of the interfaces in a bonded pair hits about
116,000 packets/s, it seems unable to handle any more traffic.  The odd thing
is that I don't see any interface errors via ethtool etc.
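For reference, this is roughly how I have been checking for drops and errors
(eth6 is one of the slave ports, as in the dmesg output further down; the
exact counter names depend on the igb version):

# NIC-level statistics: look for anything that resembles a drop or overrun
ethtool -S eth6 | egrep -i 'err|drop|miss|fifo'
# standard interface counters for the same port
ip -s link show dev eth6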

What I do notice is that when this happens the system load average goes above
1, and I have seen one of the ksoftirqd processes sitting at about 70% CPU.
Sometimes restarting irqbalance will move the load to another CPU or clear the
issue.  I am not familiar enough with the IRQ affinity settings to play with
them myself yet.
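If pinning the queue IRQs by hand is the way to go, I assume it would look
something like the sketch below (the IRQ numbers are just the ones from the
dmesg output further down, and the masks are only an example of spreading four
queues across CPUs 0-3):

# stop irqbalance first so it does not rewrite the affinity behind my back
service irqbalance stop

# find the IRQs belonging to the interface's rx/tx queue vectors
grep eth6 /proc/interrupts

# pin each queue IRQ to one CPU via a hex CPU bitmask (CPU0=1, CPU1=2, CPU2=4, CPU3=8)
echo 1 > /proc/irq/155/smp_affinity
echo 2 > /proc/irq/156/smp_affinity
echo 4 > /proc/irq/157/smp_affinity
echo 8 > /proc/irq/158/smp_affinity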

I would expect to be able to get a lot more out of these cards, so I am
assuming something is wrong with my setup.  I am using the default settings
for the igb driver and have made some adjustments to the kernel itself.
Generally, most of the packets passing through the system are being routed.
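The only igb-related knobs I know of beyond the defaults are the ring sizes
and interrupt coalescing, which I assume I would check and bump roughly like
this (4096 is the usual igb maximum, but I have not confirmed it on this card):

# current and maximum ring buffer sizes for one of the slaves
ethtool -g eth6
# raise rx/tx rings toward the hardware maximum (value is an assumption)
ethtool -G eth6 rx 4096 tx 4096
# current interrupt coalescing / throttle settings
ethtool -c eth6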

Settings I changed via /proc (sysctl):
net.core.optmem_max = 20480
# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 4000
net.core.dev_weight = 64
# Bump up default r/wmem to max
net.core.rmem_default = 262141
net.core.wmem_default = 262141
# Bump up max r/wmem
net.core.rmem_max = 262141
net.core.wmem_max = 262141
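These live in /etc/sysctl.conf (or at least that is where I put them) and I
reload and verify them with:

# reload, then check that one of the values actually took effect
sysctl -p /etc/sysctl.conf
sysctl net.core.netdev_max_backlog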

This is a dual quad-core 2.4 GHz Intel system.


Kernel messages for one of the interfaces:
[    5.404672] igb 0000:0d:00.0: Intel(R) Gigabit Ethernet Network Connection
[    5.404677] igb 0000:0d:00.0: eth6: (PCIe:2.5Gb/s:Width x4) 00:1b:21:14:34:54
[    5.404762] igb 0000:0d:00.0: eth6: PBA No: E34573-001
[    5.404765] igb 0000:0d:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[    5.404796] igb 0000:0d:00.1: PCI INT B -> GSI 50 (level, low) -> IRQ 50
[    5.404807] igb 0000:0d:00.1: setting latency timer to 64
[    5.405636] igb 0000:0d:00.1: irq 155 for MSI/MSI-X
[    5.405645] igb 0000:0d:00.1: irq 156 for MSI/MSI-X
[    5.405655] igb 0000:0d:00.1: irq 157 for MSI/MSI-X
[    5.405665] igb 0000:0d:00.1: irq 158 for MSI/MSI-X
[    5.405675] igb 0000:0d:00.1: irq 159 for MSI/MSI-X
[    5.405684] igb 0000:0d:00.1: irq 160 for MSI/MSI-X
[    5.405694] igb 0000:0d:00.1: irq 161 for MSI/MSI-X
[    5.405703] igb 0000:0d:00.1: irq 162 for MSI/MSI-X
[    5.405720] igb 0000:0d:00.1: irq 163 for MSI/MSI-X
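Since each port gets its own block of MSI-X vectors, I assume the place to
check whether all the NET_RX work is landing on a single CPU (which would
match the one busy ksoftirqd thread) is something like:

# per-CPU interrupt counts for the queue vectors of one port
grep eth6 /proc/interrupts
# per-CPU softirq counts (NET_RX row) - available on 2.6.31+ kernels
cat /proc/softirqs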


Some vmstat output:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0    264 2018016 474692 4344560    0    0     0     0 180402 1209  0  4 95  0
 1  0    264 2018296 474692 4344616    0    0     0     0 189968 1186  0  3 97  0
 0  0    264 2017896 474692 4344668    0    0     0     0 183079 1345  0  3 97  0
 0  0    264 2017892 474692 4344728    0    0     0    16 189246 1161  0  7 93  0
 1  0    264 2018076 474692 4344772    0    0     0     0 177470 1013  0  6 94  0
 0  0    264 2017972 474692 4344820    0    0     0     0 185231 1145  0  3 97  0
 1  0    264 2018040 474692 4344864    0    0     0    16 165925 1577  0  7 93  0
 1  0    264 2017820 474692 4344932    0    0     0     0 173961 1245  0  7 93  0
 1  0    264 2017812 474692 4344988    0    0     0    20 184350 1282  0  3 97  0
 0  0    264 2018040 474692 4345048    0    0     0     0 177809 1185  0  3 97  0
 0  0    264 2017864 474692 4345108    0    0     0     0 181536 1217  0  3 97  0
 0  0    264 2017860 474692 4345164    0    0     0     0 181901 1143  0  4 96  0
 0  0    264 2017704 474692 4345216    0    0     0     0 170947 1233  0  9 90  0
 1  0    264 2017660 474692 4345272    0    0     0    20 173678 1087  0  6 94  0
 1  0    264 2016904 474692 4345928    0    0     0  1036 184045 1338  0  5 95  0
 1  0    264 2010440 474692 4351944    0    0     0     0 179177 1356  0  7 92  0
 1  0    264 2010300 474692 4352624    0    0     0   904 167765 1321  0  9 91  0
 1  0    264 2010460 474692 4352068    0    0     0   148 195913 1262  0  4 96  0
 0  0    264 2010464 474692 4352136    0    0     0     8 192341 1943  0  4 96  0
 0  0    264 2010412 474692 4352220    0    0     0     0 179244 1673  0  4 95  0
 0  0    264 2010516 474692 4352292    0    0     0     0 168384 1335  0  5 95  0
 0  0    264 2010020 474692 4352364    0    0     0    16 188664 1350  0  3 97  0
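The vmstat output only shows aggregate CPU usage; I expect per-CPU softirq
time from the sysstat package would show more clearly whether a single core
is saturated, e.g.:

# per-CPU utilisation including %soft (softirq time), one-second samples
mpstat -P ALL 1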


If you think graphs of the interface traffic and the load on the server would
be helpful, let me know and I will figure out how to make them available.

Let me know what other information I can provide here.

I did see the post below and it looks like it may be related to what I am
seeing, but the thread does not seem to have reached a conclusion.

http://comments.gmane.org/gmane.linux.drivers.e1000.devel/7221

Thank you.


