Joel Cunningham wrote:
> Is the intent that an application would use the refused_data feature as part
> of its normal workflow? Or is it expected that once this condition happens,
> the developer becomes aware of it and either increases resources in the mbox
> receive buffer implementation (to match the configured window size) or
> reduces the configured window size, since the system can't handle the data
> segment pattern?
Well, you can't tell at runtime that "refused_data" is being used, so how would the developer become aware of it? You'd have to dig into the code after observing bad performance or discovering something strange in a Wireshark log, for example...

That said, you can see it as a speciality of a resource-constrained TCP implementation: instead of dropping a connection instantly when resources run out, the application is given some time to make room in its mbox (or whatever buffer was full, depending on the API) before new segments are accepted. During that time, since new segments cannot be buffered, we don't accept them. And since we don't want to create even more traffic in an overload situation, we don't even send an ACK (with an old ACKNO), because there's nothing the remote host could do to improve the situation: we can't accept new segments anyway! By doing that, we give the application time to overcome the resource shortage and can only hope the connection survives it.

The problem here is that the OP has created this resource shortage by design, and that doesn't fit well with the intention of "refused_data". Plus, it clearly behaves wrongly in the situation where we would announce a zero window, which should be fixed.
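To make that a bit more concrete, here is roughly what the interaction looks like from the application side when using the raw API. This is a minimal sketch only; app_queue_try_enqueue() and app_drain_one() are made-up application helpers standing in for the mbox, not part of lwIP:

#include "lwip/tcp.h"
#include "lwip/pbuf.h"

/* Hypothetical application-side queue, standing in for the mbox discussed
 * above. Returns 0 when the queue is full. */
extern int app_queue_try_enqueue(struct pbuf *p);

static err_t
app_recv_cb(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
{
  LWIP_UNUSED_ARG(arg);

  if (p == NULL) {
    /* Remote host closed the connection. */
    tcp_close(tpcb);
    return ERR_OK;
  }
  if (err != ERR_OK) {
    /* Not the case we're interested in here; just drop the pbuf. */
    pbuf_free(p);
    return err;
  }

  if (!app_queue_try_enqueue(p)) {
    /* Queue is full: refuse the data. Do NOT free the pbuf and do NOT call
     * tcp_recved(). lwIP then keeps the segment as pcb->refused_data, sends
     * no ACK for it, and offers it to this callback again later. */
    return ERR_MEM;
  }

  /* Data accepted: the application now owns the pbuf. */
  return ERR_OK;
}

/* Called from the application's main loop for each queue entry it drains. */
static void
app_drain_one(struct tcp_pcb *tpcb, struct pbuf *p)
{
  /* ... actually consume p->payload here ... */
  tcp_recved(tpcb, p->tot_len);   /* re-open the receive window */
  pbuf_free(p);
}

The callback would be registered with tcp_recv(pcb, app_recv_cb). The point is simply that as long as ERR_MEM is returned, the segment stays in refused_data and no new ACK goes out, so the remote host backs off until the application has drained its queue.

Simon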
