Hi all,

There is probably a logical explanation for the behavior I see with different TCP_MSS settings. I hope someone can explain it to me.
Our device communicates with our server over TCP in a classic client/server setup: the server sends a request and waits for a reply. The protocol data sent by the server and the client is 14 bytes. In some cases the server will send a big chunk of data; in that case, the server sends the data in 128-byte chunks.

With a TCP_MSS higher than 128 (256, 1024, 1460, etc.), the client (lwIP) takes longer to receive the complete data transfer. Sometimes it takes the client 250 ms to receive the data. With TCP_MSS set to the 128-byte chunk size, it never takes more than 14-16 ms for the same data! The Nagle algorithm is disabled on both ends.

I'm using netbuf_copy_partial() to get the data chunks, with the len parameter set to 128 (a sketch of the receive loop is at the end of this mail).

My own guess is that when TCP_MSS is set higher than 128, either netbuf_copy_partial() waits a bit longer to see if it receives more data before it returns (even though I have specified a 128-byte length in the function call), or the TCP/IP stack delays the netconn notification in case more data is coming, due to the bigger TCP_MSS setting?

Thomas
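PS: For reference, here is roughly what the receive path on the client looks like. This is only a sketch of the netconn usage described above, not our exact code: handle_chunk() is a placeholder for our protocol handler, and netconn_recv() is the lwIP 1.4-style two-argument version.

#include "lwip/api.h"

#define CHUNK_LEN 128

/* Placeholder for the application's protocol handling of one 128-byte chunk. */
static void handle_chunk(const u8_t *data, u16_t len);

static void recv_loop(struct netconn *conn)
{
  struct netbuf *buf;
  u8_t chunk[CHUNK_LEN];

  /* Block until the stack hands us received data in a netbuf. */
  while (netconn_recv(conn, &buf) == ERR_OK) {
    u16_t total  = netbuf_len(buf);  /* one netbuf may hold several 128-byte chunks */
    u16_t offset = 0;

    /* Drain the netbuf in 128-byte slices. */
    while (offset < total) {
      u16_t copied = netbuf_copy_partial(buf, chunk, CHUNK_LEN, offset);
      handle_chunk(chunk, copied);
      offset += copied;
    }
    netbuf_delete(buf);
  }
}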
