Anyone?

Thanks,
Sam
2009/4/23 Sam Crawford <[email protected]>

> Oleg and all,
>
> I think I'm getting closer to tracking down the root cause of this
> bizarre issue. We've been having a few occurrences every day, and it's
> getting worse as load increases.
>
> I've attached a screenshot of Wireshark, an HttpClient wire trace, and
> a code snippet.
>
> The Wireshark screenshot shows the HttpClient host (10.69.13.28)
> connecting to the server (10.96.109.6) and establishing a TCP
> connection (frames 1-3). Then nearly 10 seconds later, without any
> traffic being sent, the server sends a FIN. Nearly 20 seconds after
> that the client gives up, PSHes the last of its data, and FINs its side
> of the connection too.
>
> The HttpClient wire trace shows the request starting at 09:08:33 (the
> same time the Wireshark capture starts), and everything seems to
> progress normally at first (the connection is established, headers are
> sent, etc.). However, although the wire trace shows the headers being
> sent, the Wireshark capture does not reflect this. I'm not blaming
> HttpClient for this, because frame 30397 (the PSH) in the packet
> capture shows the headers being sent but with no POST body. It looks to
> me like the InputStream that's being given to HttpClient is somehow
> causing the issue.
>
> Now, the actual application of HttpClient here is a reverse proxy. It
> runs on a GlassFish v2u2 J2EE container. I'm beginning to suspect that
> GlassFish itself may be causing the issue. The attached code snippet
> shows how I'm reading the input stream from the incoming
> HttpServletRequest and passing it to HttpClient (a sketch of the same
> pattern follows this message).
>
> I'm performing some additional packet captures now that will hopefully
> help determine whether:
> (1) the originating client is somehow malforming its POST data, which
> causes the InputStream never to be fully read, or
> (2) the originating client's request is fine and there's some issue
> with our J2EE container.
>
> Does this make sense to you? Any additional thoughts/input would be
> appreciated. I don't believe this is an issue with HttpClient, but
> thought you might have some useful insights into the matter.
>
> Thanks,
>
> Sam
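For reference, the forwarding pattern Sam describes typically looks
something like the sketch below. This is not the actual attached snippet
(which isn't reproduced here); it is a minimal, hypothetical sketch
assuming HttpClient 4.0 and the servlet API, with illustrative class and
variable names.

    import java.io.IOException;
    import javax.servlet.http.HttpServletRequest;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.entity.InputStreamEntity;

    // Hypothetical sketch of a reverse proxy forwarding the incoming
    // request body to HttpClient (not the actual attached code).
    public class ProxyForwarder {
        private final HttpClient httpclient;

        public ProxyForwarder(HttpClient httpclient) {
            this.httpclient = httpclient;
        }

        public HttpResponse forward(HttpServletRequest req, String targetUri)
                throws IOException {
            HttpPost post = new HttpPost(targetUri);
            // Stream the servlet body straight through, using the
            // declared content length from the incoming request.
            InputStreamEntity entity = new InputStreamEntity(
                    req.getInputStream(), req.getContentLength());
            entity.setContentType(req.getContentType());
            post.setEntity(entity);
            // If the container never delivers the body bytes, this call
            // stalls after the headers have been written, which is
            // consistent with the capture above (headers, no POST body).
            return httpclient.execute(post);
        }
    }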
> 2009/4/14 Oleg Kalnichevski <[email protected]>
>
>> Sam Crawford wrote:
>>
>>> Afternoon all,
>>>
>>> A few months back we had an issue with handling half-closed TCP
>>> connections with HttpClient, and at the time I was advised to include
>>> something akin to the IdleConnectionEvictor - which we did, and it's
>>> working very nicely in nearly all scenarios.
>>>
>>> However, in the past few days we've encountered a few WebLogic-based
>>> hosts that aren't playing fair.
>>>
>>> The following is one (extreme) example of the issue we're
>>> encountering:
>>>
>>> Time (ms)    TCP action
>>>   0.0000     Client > Server  [SYN]
>>>   0.5634     Server > Client  [SYN,ACK]
>>>   1.2092     Client > Server  [ACK]   <-- TCP session established
>>> 312.5276     Server > Client  [FIN,ACK]
>>> 313.1309     Client > Server  [ACK]
>>> 401.5089     Client > Server  [HTTP POST /blah]
>>> 403.2986     Server > Client  [RST]
>>>
>>> In the above example, the server closes its side of the connection
>>> only 300ms after establishment (by sending the FIN). (As an aside,
>>> I'm curious as to why HttpClient takes a further 400ms after the TCP
>>> connection has been established to send the request - any suggestions
>>> are much appreciated, but this doesn't happen often.)
>>
>> This does not sound right. The stale connection check may cause a 20
>> to 30 millisecond delay (and generally should be avoided), but this is
>> a bit too much. Can you produce a wire/context log of the session?
>>
>>> But the above is an extreme example. We see other cases where the
>>> WebLogic server closes a keep-alive connection around 10-15 seconds
>>> after the last request.
>>
>> Does the server send a 'Keep-Alive' header with the response?
>>
>>> Our IdleConnectionEvictor doesn't run that often, so we end up with
>>> unusable connections. We could just run IdleConnectionEvictor more
>>> often, but that's not really desirable.
>>>
>>> I'm going to be digging into the WebLogic side of things this
>>> afternoon (to see if there are any limits we can modify there), but
>>> it does seem as though there should be a nice way for HttpClient to
>>> detect such cases. I've got stale connection checking enabled
>>> already, by the way.
>>
>> Stale connection checking is (in most cases) evil and should be
>> avoided.
>>
>>> I'm interested in any feedback/ideas here! I can include a wire
>>> capture as an example if it would be helpful.
>>
>> A wire/context log that correlates with the TCP dump would be great.
>>
>> Oleg
>>
>>> Thanks again,
>>>
>>> Sam
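The IdleConnectionEvictor pattern Sam refers to is typically a small
background thread along the following lines. This is a minimal sketch
against the HttpClient 4.0 ClientConnectionManager API; the 5-second
wake-up interval and 10-second idle timeout are illustrative values, not
taken from the thread above.

    import java.util.concurrent.TimeUnit;
    import org.apache.http.conn.ClientConnectionManager;

    // Minimal sketch of an idle-connection eviction thread
    // (HttpClient 4.0; interval and timeout values are illustrative).
    public class IdleConnectionMonitor extends Thread {
        private final ClientConnectionManager connMgr;
        private volatile boolean shutdown;

        public IdleConnectionMonitor(ClientConnectionManager connMgr) {
            this.connMgr = connMgr;
            setDaemon(true);
        }

        @Override
        public void run() {
            try {
                while (!shutdown) {
                    synchronized (this) {
                        wait(5000);
                        // Close connections the server has already
                        // dropped, then anything idle longer than the
                        // server's keep-alive window (10-15s for the
                        // WebLogic hosts described above).
                        connMgr.closeExpiredConnections();
                        connMgr.closeIdleConnections(10, TimeUnit.SECONDS);
                    }
                }
            } catch (InterruptedException ex) {
                // terminate the monitor
            }
        }

        public void shutdown() {
            shutdown = true;
            synchronized (this) {
                notifyAll();
            }
        }
    }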
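On Oleg's 'Keep-Alive' question: if the server does advertise a timeout,
HttpClient 4.0 can be told to honour it with a custom
ConnectionKeepAliveStrategy, so pooled connections are not reused after
the server's window has passed. A sketch, assuming DefaultHttpClient;
the 10-second fallback is an illustrative guess chosen to stay inside
the 10-15 second WebLogic window described above.

    import org.apache.http.HeaderElement;
    import org.apache.http.HeaderElementIterator;
    import org.apache.http.HttpResponse;
    import org.apache.http.conn.ConnectionKeepAliveStrategy;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.message.BasicHeaderElementIterator;
    import org.apache.http.protocol.HTTP;
    import org.apache.http.protocol.HttpContext;

    public class KeepAliveConfig {
        public static DefaultHttpClient createClient() {
            DefaultHttpClient httpclient = new DefaultHttpClient();
            httpclient.setKeepAliveStrategy(new ConnectionKeepAliveStrategy() {
                public long getKeepAliveDuration(HttpResponse response,
                                                 HttpContext context) {
                    // Use the timeout the server sends in its Keep-Alive
                    // header, e.g. "Keep-Alive: timeout=15, max=100".
                    HeaderElementIterator it = new BasicHeaderElementIterator(
                            response.headerIterator(HTTP.CONN_KEEP_ALIVE));
                    while (it.hasNext()) {
                        HeaderElement he = it.nextElement();
                        if ("timeout".equalsIgnoreCase(he.getName())
                                && he.getValue() != null) {
                            try {
                                return Long.parseLong(he.getValue()) * 1000;
                            } catch (NumberFormatException ignore) {
                            }
                        }
                    }
                    // No header: fall back to 10 seconds (illustrative).
                    return 10 * 1000;
                }
            });
            return httpclient;
        }
    }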
