Afternoon all,

A few months back we had an issue with handling half-closed TCP connections with HttpClient, and at the time I was advised to include something akin to the IdleConnectionEvictor - which we did, and it's working very nicely in nearly all scenarios.
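For context, our evictor is essentially the standard background-sweep pattern. Here's a stripped-down stdlib sketch of what it does (illustrative names only, not the real HttpClient classes): a periodic sweep drops any pooled connection whose last-use timestamp is older than some maximum idle time.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic sketch of the idle-eviction pattern (names are illustrative,
// not the actual HttpClient API): a background sweep evicts any pooled
// connection that has been idle longer than maxIdleMs.
class IdleEvictor {
    private final Map<String, Long> lastUsed = new ConcurrentHashMap<>();
    private final long maxIdleMs;

    IdleEvictor(long maxIdleMs) {
        this.maxIdleMs = maxIdleMs;
    }

    // Record that a connection was just used.
    void touch(String connId, long nowMs) {
        lastUsed.put(connId, nowMs);
    }

    // One sweep pass; returns the number of connections evicted.
    int sweep(long nowMs) {
        int evicted = 0;
        Iterator<Map.Entry<String, Long>> it = lastUsed.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (nowMs - e.getValue() > maxIdleMs) {
                it.remove(); // in a real pool this is where the socket is closed
                evicted++;
            }
        }
        return evicted;
    }
}
```

The catch, as described below, is that the sweep interval bounds how quickly we notice the server-side close - anything closed between sweeps stays in the pool as a dead connection.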
However, in the past few days we've encountered a few WebLogic-based hosts that aren't playing fair. The following is one (extreme) example of the issue we're encountering:

  Time (ms)   TCP action
    0.0000    Client > Server [SYN]
    0.5634    Server > Client [SYN,ACK]
    1.2092    Client > Server [ACK]   <-- TCP session established
  312.5276    Server > Client [FIN,ACK]
  313.1309    Client > Server [ACK]
  401.5089    Client > Server [HTTP POST /blah]
  403.2986    Server > Client [RST]

In the above example, the server closes its side of the connection (by sending the FIN) only ~300ms after establishment. (As an aside, I'm curious why HttpClient takes ~400ms after the TCP connection is established to send the request - any suggestions there are also much appreciated, though this doesn't happen often.)

But the above is an extreme example. We also see cases where the WebLogic server closes a keep-alive connection around 10-15 seconds after the last request. Our IdleConnectionEvictor doesn't run that often, so we end up with unusable connections in the pool. We could run the IdleConnectionEvictor more often, but that's not really desirable.

I'm going to dig into the WebLogic side of things this afternoon (to see whether there are any limits we can modify there), but it does seem as though there should be a nice way for HttpClient to detect such cases. I've got stale connection checking enabled already, by the way.

I'm interested in any feedback/ideas here! I can include a wire capture as an example if it would be helpful.

Thanks again,
Sam
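P.S. For anyone following along, my understanding is that the stale connection check boils down to something like the following (a simplified stdlib sketch, not HttpClient's actual code - a real implementation would buffer rather than discard any byte it happens to read): peek at the socket with a tiny read timeout, and treat EOF as "the peer already sent FIN".

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Simplified sketch of a pre-reuse stale check (not HttpClient's real
// implementation): peek at the socket with a ~1ms timeout; EOF means the
// peer has already closed its side (FIN received), so discard the
// connection instead of reusing it.
class StaleCheck {
    static boolean isStale(Socket socket) {
        try {
            int oldTimeout = socket.getSoTimeout();
            try {
                socket.setSoTimeout(1); // ~1ms peek
                InputStream in = socket.getInputStream();
                // -1 => peer closed; a real check must push back any byte read
                return in.read() == -1;
            } finally {
                socket.setSoTimeout(oldTimeout);
            }
        } catch (SocketTimeoutException e) {
            return false; // no data and no FIN yet => connection looks usable
        } catch (IOException e) {
            return true;  // any other I/O error => treat as stale
        }
    }
}
```

Note the inherent race, which may be what's biting us in the trace above: even with this check enabled, the FIN can arrive in the window between the check passing and the request bytes going out, and the server then answers with an RST.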
