Hello all,

I am trying to figure out what is causing a change in the behavior of the
TCP stack on Linux.  I have a very simple test setup:

    1)  Windows machine running a test app to request data from the server
    2)  Linux (2.6.10 - yeah, I know... upgrade...) machine running test server
    3)  Gigabit ethernet between the two machines via a Cisco switch

The Windows machine sends a request asking the Linux machine to send back a
block of 502,132 bytes of data.  The server on Linux makes a single send()
call with the entire buffer (this is to reduce the user-to-kernel mode
overhead of multiple calls).
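
In case it helps, this is roughly what the server's send path looks like
(a sketch, not the actual code; send_all() and its error handling are
illustrative).  Even a blocking send() can return a short count, so the
loop is what guarantees the whole buffer goes out:

    /* Sketch of a "send the whole buffer" helper (hypothetical; the
     * real server code isn't shown here).  Each iteration is one
     * user-to-kernel transition; normally the first send() takes
     * everything. */
    #include <sys/types.h>
    #include <sys/socket.h>

    static int send_all(int sock, const char *buf, size_t len)
    {
        size_t off = 0;

        while (off < len) {
            ssize_t n = send(sock, buf + off, len - off, 0);
            if (n < 0)
                return -1;      /* caller checks errno */
            off += (size_t)n;
        }
        return 0;
    }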

If the Linux machine has just recently been booted, the transfer takes around
8 or 9 milliseconds.  If the Linux machine has been up for a while (but still
primarily idle), the transfer starts to take anywhere from 32 to 70
milliseconds.  Both the Windows machine and the Linux machine are for all
practical purposes idle and dedicated to this test.  It seems the Linux TCP
stack is getting into a state where it decides to slow down the pace of the
transfer to the Windows machine?!?

When the transfer is fast, the time between frame sends is usually about
8 to 40 microseconds (with some variation).

When the transfer is slow, the time between frame sends starts off high at
130 microseconds, then tapers down to 1/2 and/or 1/4 of that in a pattern
that looks too consistent to be random.  Here's the basic pattern:

    (time between packet sends in microseconds)
    130, 130, 130, 130, 130, 130, 130, 130, 68, 32, 32, 32, 32, 32

[   it's this pattern I'm hoping someone recognizes!  :o)  ]

After a group of packets is sent, the pattern starts again with a large
delta and tapers down, repeating until the entire transfer is done.
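
For reference, the deltas above came from a packet capture; a rough
equivalent of the measurement using libpcap would be (the filename is
just a placeholder):

    /* Sketch: print microsecond deltas between successive frames in a
     * saved capture.  "dump.pcap" is a placeholder name. */
    #include <pcap.h>
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        struct pcap_pkthdr *hdr;
        const u_char *data;
        struct timeval prev = { 0, 0 };
        pcap_t *p = pcap_open_offline("dump.pcap", errbuf);

        if (!p) {
            fprintf(stderr, "%s\n", errbuf);
            return 1;
        }
        while (pcap_next_ex(p, &hdr, &data) == 1) {
            if (prev.tv_sec || prev.tv_usec)
                printf("%ld\n",
                       (long)(hdr->ts.tv_sec - prev.tv_sec) * 1000000L +
                       (hdr->ts.tv_usec - prev.tv_usec));
            prev = hdr->ts;
        }
        pcap_close(p);
        return 0;
    }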

Questions going through my head:

    1)  Is some metric on the interface being used to determine the initial
        TCP transfer rate?
    2)  Is this some form of "slow start"?  (It doesn't sound like it to
        me, but who knows?)  If so, can I verify that (see the sketch
        after this list)?  And then turn it off (or stop doing whatever
        is triggering it)?
    3)  What mechanism of TCP might account for such a pattern of behavior?
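
One way to check question 2 (assuming TCP_INFO is available on this
kernel) would be to snapshot the connection state from the server while
the transfer is running:

    /* Sketch: ask the kernel for its view of the connection.  Whether
     * all of struct tcp_info is filled in on a 2.6.10 kernel is an
     * assumption worth verifying. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>

    static void dump_tcp_info(int sock)
    {
        struct tcp_info ti;
        socklen_t len = sizeof(ti);

        memset(&ti, 0, sizeof(ti));
        if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
            printf("cwnd=%u ssthresh=%u rtt=%u us\n",
                   ti.tcpi_snd_cwnd, ti.tcpi_snd_ssthresh, ti.tcpi_rtt);
    }

If the kernel is reusing cached per-destination metrics (ssthresh, cwnd,
rtt) from earlier connections, a small initial cwnd should show up here;
flushing the route cache ("ip route flush cache") before a run would be
another way to test that theory.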

I have a dump of the delta times between packets for the fast and slow case
with some packet information (frame size, TCP flags, start of TCP data).
Rather than take up space on this mailing list, I've put the verbose
information on the following web page:

    http://www.klos.com/~patrick/TCPQuestion.html

Thanks for looking!

Patrick
========= For LAN/WAN Protocol Analysis, check out PacketView Pro! =========
    Patrick Klos                           Email: [EMAIL PROTECTED]
    Network/Embedded Software Engineer     Web:   http://www.klos.com/
    Klos Technologies, Inc.                Phone: 603-471-2547
==================== http://www.loving-long-island.com/ ====================