Hi all,

Unfortunately, this issue persists. Attached is a log from an instrumented
TCP server (the sender), recording CWND values and retransmits. This was
run on two identical servers on the same switch, one connected at 100Mbit
and the other at 1Gbit. You can see that a small number of losses occurs
after 1-2 seconds with the 1Gbit setup, limiting the congestion window to
~200 MSS. The 100Mbit server reaches a stable CWND of 1092. These results
are highly repeatable.

TSO/GSO/GRO are disabled on all hosts. Packet captures from both ends are
available upon request.
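In case it's useful to anyone reproducing this, CWND can be sampled
in-process on Linux via the TCP_INFO socket option. Below is a minimal
sketch, not the actual test harness: the 104-byte read and the field index
assume the classic struct tcp_info layout (8 leading u8 fields followed by
u32s, with tcpi_snd_cwnd as the 19th u32).

```python
import socket
import struct

def snd_cwnd(sock):
    """Return snd_cwnd (in MSS units) for a connected TCP socket.

    Assumes the classic Linux struct tcp_info layout: 8 leading u8
    fields, then u32 fields, with tcpi_snd_cwnd as the 19th u32
    (byte offset 80); the first 104 bytes cover the classic struct.
    """
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    fields = struct.unpack("8B24I", info)
    return fields[8 + 18]  # tcpi_snd_cwnd

if __name__ == "__main__":
    # Loopback demo: connect a socket pair and print the initial cwnd.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.socket()
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    print("initial snd_cwnd:", snd_cwnd(cli))
    cli.close()
    conn.close()
    srv.close()
```

Sampling this in a loop while sending (alongside tcpi_retrans at u32 index
7) is enough to produce a log like the one attached.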

Any suggestions gratefully received!

Sam



On 21 May 2013 20:49, Sam Crawford <samcrawf...@gmail.com> wrote:

> Thanks for your reply Jesse.
>
> I've already tried disabling TSO, GSO and GRO - no joy, I'm afraid.
>
> The qdisc queuing idea was new to me. I tried dropping it down to 100 and
> removing it completely, but there was no discernible effect.
>
> Thanks,
>
> Sam
>
>
> On 21 May 2013 20:22, Jesse Brandeburg <jesse.brandeb...@intel.com> wrote:
>
>> On Tue, 21 May 2013 19:24:24 +0100
>> Sam Crawford <samcrawf...@gmail.com> wrote:
>> > To be clear, this doesn't just affect this one hosting provider - it
>> > seems to be common to all of our boxes. The issue only occurs when the
>> > sender is connected at 1Gbps, the RTT is reasonably high (> ~60ms),
>> > and we use TCP.
>> >
>> > By posting here I'm certainly not trying to suggest that the e1000e
>> > driver is at fault... I'm just running out of ideas and could really
>> > use some expert suggestions on where to look next!
>>
>> I think you're overwhelming some intermediate buffers with send data
>> before they can drain, due to the burst send nature of TCP when
>> combined with TSO.  This is akin to bufferbloat.
>>
>> Try turning off TSO using ethtool.  This will restore the native
>> feedback mechanisms of TCP.  You may also want to reduce or eliminate
>> the send-side qdisc queueing (the default txqueuelen is 1000, but you
>> probably need a lot less), but I don't think it will help as much.
>>
>> ethtool -K ethx tso off gso off
>>
>> You may even want to turn GRO off at both ends, as GRO will be
>> interfering with your feedback as well.
>>
>> ethtool -K ethx gro off
>>
>> I'm a bit surprised that this issue isn't being handled natively by
>> the Linux stack.  That said, GRO and TSO are really focused on LAN
>> traffic, not WAN.
>>
>> Jesse
>>
>
>
_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit
http://communities.intel.com/community/wired
