Hi Ben, Thanks for the detailed reply.
Ultimately what I'm trying to accomplish is to minimise the occurrence of TIME_WAIT socket states on the server, so I need the client to be the one that calls close(). Currently this is achieved by the server sending a disconnect application message to the client, which reacts by calling close(). However, there is always the uncooperative client to consider, i.e. one which never calls close() and instead keeps the connection open. In that case the server, after some delay, calls close() itself, and it then ends up with the TIME_WAIT.

So I'm trying to find a way around that TIME_WAIT case. My thought was a server-side reset: perform the reset only when a client has failed to call close(). It does seem to work on my Linux box, but perhaps it relies too much on variable features of the TCP stack.

Darren
