Greg,

We are not using SR-IOV mode, so that shouldn't be an issue.
I understand that this would be non-standard, but our application is currently point-to-point: an FPGA streaming video over UDP to the Intel NIC. The issue we are having is that once we get above around 5.5 Gbps with 9K UDP frames, the softirq CPU usage on the CPU where our IRQ is assigned goes to 100%. Our packets are not "distributable" because the source/destination IP addresses and ports never change, which makes the 4-tuple hash the same for every packet.

We changed our FPGA firmware to send packets over a range of source UDP ports instead of just one and used the flow director to spread the packets across all the RX queues. We used the set_irq_affinity.sh script and then set the affinity of our packet-processing threads to match, to get cache hits. This got us to 8.5 Gbps, but there were some image artifacts and the softirq CPU usage on every CPU was still pretty high. We suspect the image artifacts were due to dropped packets; our application can handle out-of-order packets.

We have determined that the best way for us to get close to 10 Gbps is to increase the MTU and stick with a single-thread / single-RX-queue solution. The issue we keep running into is that the scaling features like RSC and RSS are not geared toward UDP traffic on a single connection. I have the source code for the driver, so I will make a quick modification to change the limit back to 16110 and try it out.

Todd, I have revision 2.87 of the specification update, which does have the two errata you mention. We don't use the QBRC or VFGORC counters or ETS, so that shouldn't be a factor for us.
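To illustrate why the single flow pins one CPU: RSS selects the RX queue from a hash of the 4-tuple, so a fixed flow always lands on the same queue, while spraying source ports spreads the load. The sketch below is only a stand-in (CRC32 instead of the Toeplitz hash the 82599 actually uses, and an assumed queue count), but it shows the effect:

```python
# Illustrative sketch only -- NOT the real Toeplitz hash the 82599 uses.
# RSS picks an RX queue from a hash of the 4-tuple, so one fixed flow
# always lands on the same queue (and therefore the same softirq CPU).
import zlib

NUM_RX_QUEUES = 16  # assumed queue count for illustration

def rx_queue(src_ip, dst_ip, src_port, dst_port):
    """Map a UDP 4-tuple to an RX queue index via a stand-in hash."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_RX_QUEUES

# One fixed flow: every packet hashes to the same queue.
fixed = {rx_queue("10.0.0.2", "10.0.0.1", 5000, 5000) for _ in range(1000)}
print(len(fixed))  # 1 -- all softirq work lands on a single CPU

# Spraying a range of source ports (what the FPGA firmware change did):
spread = {rx_queue("10.0.0.2", "10.0.0.1", 5000 + p, 5000) for p in range(64)}
print(len(spread))  # close to NUM_RX_QUEUES -- work spreads across queues
```

The same reasoning explains why no software tuning helps the single-port case: with an unchanging 4-tuple there is simply nothing for the hash to distribute.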
Thanks for the excellent answers and your time,

Kyle Brooks
Member of Technical Staff
L-3 Communications Cincinnati Electronics

-----Original Message-----
From: Greg Rose [mailto:gregory.v.r...@intel.com]
Sent: Wednesday, February 27, 2013 4:38 PM
To: Brooks, Kyle @ SSG - ISS - CIN
Cc: e1000-de...@lists.sf.net; Noe, Jeff @ SSG - ISS - CIN; alexander.h.du...@intel.com
Subject: Re: [E1000-devel] ixgbe MTU change

On Wed, 27 Feb 2013 14:43:58 -0500 <kyle.bro...@l-3com.com> wrote:
> Hello,
>
> I was wondering if there was a particular reason for changing the
> maximum supported MTU on the ixgbe driver from 16110 to 9706? We have
> a streaming video application that we would like to use 16 kbyte
> frames with. I have version 3.9.15, which supports a max MTU of
> 16110, and 3.12.6, which only supports up to 9706. I tried to find an
> answer in the 82599 spec update and on this mailing list and was
> unsuccessful; I apologize if this has already been answered.

It was a software change to the driver intended to streamline the configuration code and to enforce the networking-standard jumbo frame size of 9K. When the controller is in SR-IOV mode, the 82599 Virtual Functions only support 9K jumbo frame sizes, so it was determined that it would simplify the configuration code path to just use 9K as the limit at all times.

I guess we were unaware that anyone actually used the controller-supported maximum (while not in SR-IOV mode) of 16K, since it is not an industry standard. Generally, frames above 9K are not routable, and in many cases switches don't support frame sizes above that limit. You can modify the driver to support the 16K frame size fairly easily if you wish. I'm also copying the originator of the driver change so we can get his additional input, if any.

Regards,

- Greg

Networking Division
Intel Corp.
> Thanks very much,
>
> Kyle Brooks
> Member of Technical Staff
> L-3 Communications Cincinnati Electronics