Over the last few days, I have been battling a performance issue. It appears
to be something tricky about the lwIP configuration and use.

 

The system of concern is lwIP (tried upgrades to no avail:
1.3.2 -> 1.4.0 -> 1.4.1) on an ARM as an HTTP server

transmitting LARGE files (50 MB+) to a PC client.

 

The symptom is that normally I'll see transfer rates of 900 KB/sec to 1
MB/sec; but otherwise more in the 20 KB/sec ballpark,

with NOTHING in the middle ground, depending on configuration.

 

The signature of the problem has to do with what lwIP decides to send
for the next packet: it appears to jump ONE ADDITIONAL packet's worth of
buffered data beyond the window, which then leads to the client having to
repeat an ACK,

and then a BIG pregnant pause (typically 1.2 seconds, which looks consistent
with lwIP waiting out its retransmission timeout) until lwIP transmits the
missing packet (because it jumped the gun earlier),

and then the transfer proceeds normally for a short period before the cycle
repeats.

 

Here is some data illustrating the issue (I've normalized the sequence
numbers for clarity):

 

PC Client                                lwip Server

                                         Transmits packet 0
                                         Transmits packet 1
ACKs thru 1
                                         Transmits packet 2
ACKs thru 2
                                         Transmits packet 3
                                         TRANSMITS PACKET 5!!!
ACKs thru 3
                                         Transmits packet 6
ACKs thru 3
                  <1.2 second delay>
                                         Transmits packet 4

 

The above happens with a WND of 2 packets (or 1; 1 seems to behave the same
as 2?), an SNDBUF of 4 packets, an SND_QLEN of 4 packets,

and tcp_write() called with 4 packets' worth of data at a time.
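For concreteness, here is how I read that configuration in lwipopts.h terms, assuming my WND/SNDBUF/SND_QLEN shorthand maps to TCP_WND, TCP_SND_BUF, and TCP_SND_QUEUELEN, and that a "packet" means one MSS-sized segment (the TCP_MSS value of 1460 is just an example; substitute yours):

```c
/* lwipopts.h -- sketch of the FAILING option set (my mapping of the
 * shorthand above onto lwIP's option names; TCP_MSS is assumed). */
#define TCP_MSS           1460
#define TCP_WND           (2 * TCP_MSS)   /* 2-packet window        */
#define TCP_SND_BUF       (4 * TCP_MSS)   /* 4-packet send buffer   */
#define TCP_SND_QUEUELEN  4               /* 4 queue entries        */
```

Note that the comment in opt.h asks for TCP_SND_QUEUELEN of at least (2 * TCP_SND_BUF / TCP_MSS), which would be 8 here, so the queue length of 4 may itself be worth ruling out.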

 

If I expand WND to 4 and supply tcp_write with 2 packets of data at a time,
the above oddity does NOT occur.
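Again in lwipopts.h terms (same caveats: this is my mapping of WND/SNDBUF/SND_QLEN onto TCP_WND/TCP_SND_BUF/TCP_SND_QUEUELEN, with TCP_MSS as an example value), the option set that behaves:

```c
/* lwipopts.h -- sketch of the WORKING option set (my mapping).  The
 * only option changed is TCP_WND; the other change is that tcp_write()
 * is now handed 2 * TCP_MSS bytes per call instead of 4 * TCP_MSS. */
#define TCP_MSS           1460
#define TCP_WND           (4 * TCP_MSS)   /* widened to 4 packets   */
#define TCP_SND_BUF       (4 * TCP_MSS)
#define TCP_SND_QUEUELEN  4
```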

 

So, my question is: what's going on with a packet seemingly being sent one
packet's worth beyond the window?

 
_______________________________________________
lwip-users mailing list
[email protected]
https://lists.nongnu.org/mailman/listinfo/lwip-users
