> -----Original Message-----
> From: Skidmore, Donald C [mailto:donald.c.skidm...@intel.com]
> Sent: Tuesday, February 18, 2014 5:46 PM
> To: John-Paul Robinson; Brandeburg, Jesse
> Cc: e1000-devel@lists.sourceforge.net
> Subject: Re: [E1000-devel] questions on ixgbe and 10G performance
> expectations
> 
> > -----Original Message-----
> > From: John-Paul Robinson [mailto:j...@uab.edu]
> > Sent: Tuesday, February 18, 2014 2:59 PM
> > To: Brandeburg, Jesse
> > Cc: e1000-devel@lists.sourceforge.net
> > Subject: Re: [E1000-devel] questions on ixgbe and 10G performance
> > expectations
> >
> > On 02/17/2014 08:19 PM, Brandeburg, Jesse wrote:
> > > Forgive my top post.
> > >
> > > With the new kernel you may be running into needing faster cleanup to increase tx speed. Try increasing the interrupt rate via ethtool -C ethX rx-usecs 10; yes, I said rx, because there is only one rate control for the interrupt.
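> > > For example, assuming eth4 is the interface under test (as below), something like:
> > >
> > >     # show the current interrupt coalescing settings
> > >     ethtool -c eth4
> > >     # hold interrupts for up to 10 usecs of rx traffic
> > >     ethtool -C eth4 rx-usecs 10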
> > >
> > > You can easily do line rate tx with 82599. The biggest limiter in tx-only tests is the amount of data in flight and the time it takes to get acks back.
> > >
> >
> > Thanks for the suggestions.
> >
> > So I tried upping rx-usecs on the server install instance to 10 (originally 1) and saw a clear bump up to what I would consider line rate, ~9.36Gb/s.
> > Switching to 10 usecs sounds like it's a decrease in interrupt rate, though.
> >
> > Interestingly, I tested my live ISO version of Ubuntu 12.04.4 desktop again and see a ~9.39Gb/s line rate without any tuning (default ixgbe driver 3.13.10, default rx-usecs=1, same upstream iperf server).
> > Switching to rx-usecs=10 on this platform degraded the performance, to 8.69Gb/s.
> >
> > Ubuntu no longer maintains separate desktop and server kernels, so I'm trusting that all the core-kernel operation would be identical.  Thus the live ISO test is likely as pristine an experience as can be had, wrt stock performance.
> > I'd take it if I could get it. ;)
> >
> > I take from this that some functionality or setting in an actual system install is introducing a hit on performance.
> > Any thoughts?
> >
> > > Also, please make sure you have run the set_irq_affinity script to bind interrupts to CPUs.
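> > > For reference, the script ships with the ixgbe driver source; a typical run looks something like the following (the interface name and the <irq> placeholder are illustrative):
> > >
> > >     # pin each queue's interrupt to a separate core
> > >     ./set_irq_affinity eth4
> > >     # then confirm the per-queue IRQ numbers and their affinity masks
> > >     grep eth4 /proc/interrupts
> > >     cat /proc/irq/<irq>/smp_affinity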
> > >
> >
> > I tried running `set_irq_affinity eth4` but it didn't appear to have any impact on performance.  If anything, it degraded.
> >
> > > --
> > > Jesse Brandeburg
> > >
> > >
> > >> On Feb 17, 2014, at 5:42 PM, "Ben Greear" <gree...@candelatech.com> wrote:
> > >>
> > >>> On 02/17/2014 02:19 PM, John-Paul Robinson wrote:
> > >>> Hi,
> > >>>
> > >>> I don't know if this topic is appropriate here, please direct me to
> > >>> a better place if not.
> > >>>
> > >>> I've been spending considerable time trying to measure the performance of our 10G fabric that uses Intel X520 cards.  The primary test machine has dual Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz 8-core chips and 96GB RAM.
> > >>>
> > >>> The test machine is now running Ubuntu 12.04.4 server with kernel 3.11.0 and the latest ixgbe driver, 3.19.1.
> > >>>
> > >>> Using iperf (2.0.5) I see about 9.39Gb/s steady inbound transfers (there are a few glitches where I've seen it drop to 7Gb/s, but it recovers).  My outbound transfers, however, are about 8.83Gb/s steady and tend to be more variable.
> > >>>
> > >>> This is the best performance I can get on the server.
> > >>>
> > >>> Interestingly, when I boot the machine off the live CDROM image for Ubuntu 12.04.4 desktop, I see a nice steady 9.39Gb/s in both directions.
> > >>> This is the best performance I have seen with this card to date.
> > >>>
> > >>> I've spent a lot of time with these cards and in general they have been very finicky, delivering inconsistent results from test to test and being very sensitive to driver and kernel versions.
> > >>>
> > >>> I've taken them from extremely erratic performance on Ubuntu 12.04.1 with the stock ixgbe 3.6.7 driver to much higher, more stable performance simply by updating to ixgbe 3.11.33.  It would be nice to see stable, flatline performance at line speed on kernel 3.11 with the 3.19.1 driver.
> > >>>
> > >>> I'm wondering if there is a known configuration profile that allows
> > >>> these cards to perform at line speeds or if there are known issues
> > >>> or hardware incompatibilities.
> > >>>
> > >>> I know there are a lot of subtleties to performance tuning, but other cards in our fabric (from Brocade, btw) deliver very consistent, stable, high-performance line-speed results over many tests.
> > >>>
> > >>> I've been scratching my head for a while and am looking for a fresh
> > >>> perspective or deeper understanding.
> > >>
> > >> First, check 'dmesg' and make sure your NICs are using at least x8 PCIe at 5GT/s.
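> > >> For example (device names are just illustrative):
> > >>
> > >>     # look for the link width/speed the driver reported at probe time
> > >>     dmesg | grep -i ixgbe
> > >>     # or query the slot directly (substitute the card's PCI address)
> > >>     lspci -vv -s <bus:dev.fn> | grep -i LnkSta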
> > >>
> > >> Check BIOS and disable 'VT-d' if it is on...it hurts performance by
> > >> 50% or so.
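> > >> If you're not sure whether VT-d/the IOMMU is currently active, a rough check of the boot log is:
> > >>
> > >>     dmesg | grep -i -e DMAR -e IOMMU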
> > >>
> > >> Try using several (5-10) flows in iperf; maybe just use 5-10 instances of iperf so you get good usage of your cores.
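> > >> For example, with the iperf 2.x client mentioned earlier (server address is a placeholder):
> > >>
> > >>     # 8 parallel TCP streams for 60 seconds
> > >>     iperf -c <server-ip> -P 8 -t 60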
> > >>
> > >> Thanks,
> > >> Ben
> > >>
> > >>
> > >> --
> > >> Ben Greear <gree...@candelatech.com>
> > >> Candela Technologies Inc  http://www.candelatech.com
> 
> 
> I would be interested in seeing how your interrupts are being spread out during your test.  Could you provide the delta from /proc/interrupts before and after your test, or just the results after a reboot for the port seeing the traffic?
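> A simple way to capture that delta (eth4 assumed, per the earlier mail):
> 
>     grep eth4 /proc/interrupts > irq.before
>     # ... run the iperf test ...
>     grep eth4 /proc/interrupts > irq.after
>     diff irq.before irq.after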
> 
> Likewise, it would be good to see the delta of the ethtool -S statistics from before and after your test.
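> Something like the following would capture it:
> 
>     ethtool -S eth4 > stats.before
>     # ... run the iperf test ...
>     ethtool -S eth4 > stats.after
>     diff stats.before stats.after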
> 
> Thanks,
> -Don Skidmore <donald.c.skidm...@intel.com>
> 
> 

Another thing to check is to ensure that the irqbalance service is disabled, as this service interferes with set_irq_affinity.

It's possible the live CD does not run this service, whereas the installed 
system does.
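On Ubuntu 12.04 you can check and stop it for the duration of a test with something like the following (upstart service name assumed to be "irqbalance"):

    service irqbalance status
    service irqbalance stop
    # then re-run set_irq_affinity eth4 and repeat the iperf run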

Regards,
Jake

