I am playing with a simulated high-latency connection and seeing an unexpected delay.
On a Linux client, I add the delay with:

  tc qdisc add dev eth0 root netem delay 200ms

I then use wget to download the same 30k file 8 times on the command line (wget uses keepalive for this, which makes sure the receive window is warmed up).

In a packet capture, httpd's host OS always waits for one ACK before sending the very final bit of data (the last, non-full frame). httpd is not aware of this time passing in %D, so I think the data is sitting in the send buffer. This happens even with a very large initial congestion window.

I don't understand why the final write doesn't make it out of the send buffer until the first ACK finally comes back. Does anyone understand the phenomenon here? I have also printed out TCP_NODELAY on the socket, and it is set as expected.

Capture: http://people.apache.org/~covener/80.cap

Thanks,

-- 
Eric Covener
[email protected]
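For reference, here is a minimal sketch of the kind of TCP_NODELAY check described above, on a throwaway socket rather than httpd's own (the socket and its options here are illustrative, not httpd's actual code):

```python
import socket

# Create a TCP socket and disable Nagle's algorithm, as httpd does
# for its client connections.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it stuck; a nonzero value means
# Nagle is disabled on this socket.
val = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY =", val)

s.close()
```

With TCP_NODELAY verified this way, a small final segment still being held back would point at something other than Nagle (e.g. congestion-window accounting in the kernel), which is the puzzle here.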
