Send the "Connection: close" header on your request. That way, when the connection is released via releaseConnection(), the underlying socket will simply be closed directly, rather than HttpClient attempting to consume the rest of the response so that the connection can be treated as a "persistent" connection and reused for the next request. Actually, if the server is sending you an "unbounded" response, it really should be returning "Connection: close" on its responses itself. If we took your suggestion and simply stopped reading the stream sent by the server, we'd run into lots of problems when processing the _next_ request on the same socket.
Indicate HTTP 1.0 level support. This should change the behavior of the server, again triggering behavior that closes the connection rather than treating it as persistent.
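Both workarounds amount to shaping the request so the connection is closed after the response instead of kept alive. A minimal sketch of what such a request looks like on the wire (plain Java, not the HttpClient API; the host and path are made-up placeholders):

```java
// Sketch: a raw HTTP/1.0 request that also sends "Connection: close".
// Either signal on its own should make a well-behaved server close the
// socket after the response instead of holding it open as persistent.
public class CloseRequest {

    // Build the request head for a GET of `path` on `host`.
    static String buildRequest(String host, String path) {
        return "GET " + path + " HTTP/1.0\r\n"  // HTTP 1.0: no persistent connections by default
             + "Host: " + host + "\r\n"
             + "Connection: close\r\n"          // explicit close; also valid under HTTP/1.1
             + "\r\n";                          // blank line ends the header section
    }

    public static void main(String[] args) {
        System.out.print(buildRequest("example.com", "/"));
    }
}
```

With either signal present, the client can stop reading at socket close rather than having to drain the body to keep the connection reusable.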
getResponseBody() and getResponseBodyAsString() function as designed: if you are working with extremely large or unbounded data, don't use them. You could file an enhancement request for _additional_ functions that take limits, but such an API would not be perfectly simple, insofar as there would need to be some way to indicate that the entire response was not read; simply returning byte[] and String would not be sufficient. This wouldn't change the underlying need to consume the entire response sent by the server, though - we still want to support persistent connections.
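Until such limit-taking variants exist, a caller can impose the cap itself by reading from the response stream (getResponseBodyAsStream() in HttpClient). A sketch of the idea with a plain InputStream - the helper name and the 64 KB limit are made up, and note how a bare byte[] return value cannot tell the caller whether truncation occurred, which is exactly the API wrinkle described above:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BoundedRead {

    // Read at most `limit` bytes from `in`. A byte[] alone cannot signal
    // "the response was truncated" -- a real API would need an extra flag.
    static byte[] readAtMost(InputStream in, int limit) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        while (buf.size() < limit) {
            int want = Math.min(chunk.length, limit - buf.size());
            int n = in.read(chunk, 0, want);
            if (n < 0) break;          // server finished before the limit
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a huge response with a large in-memory stream.
        byte[] huge = new byte[1000000];
        byte[] got = readAtMost(new ByteArrayInputStream(huge), 64 * 1024);
        System.out.println(got.length); // 65536
    }
}
```

Memory stays bounded regardless of how much the server sends, but the remainder of the response is left unread on the socket, so the connection cannot safely be reused afterwards.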
If you think there is a genuine bug here, by all means point us at a server that causes you problems, or perhaps code for a servlet, along with client-side sample code that fails in the way you indicate. I think I speak for most of the developers on this list when I say that we are genuinely concerned about the quality of the code but, absent a real-world scenario, reasonable test cases, or wire logs, are very hesitant to make API changes of the magnitude you're suggesting.
-Eric.
Christian Kohlschuetter wrote:
On Friday, 10 October 2003 at 18:02, Kalnichevski, Oleg wrote:
I can easily provide test cases which will cause HttpClient to eat up all available memory and throw an OutOfMemoryError, because it keeps reading from a never-ending HTTP response.
I would regard this behaviour as a bug.
Sure. The bug is in the software on the server side, and the fix should be applied where it is due.
Oleg
Sorry, I can't follow your argument.
I thought HttpClient was a client for _real-world_ HTTP servers, just as the HTTP clients of modern web browsers are/should be (I guess that's why there is a switch to enable/disable "strict" mode in HttpClient).
However, it may not even be a bug on the server side to generate output with no end at all. And it would be no problem at all if HttpClient handled endless streams in a better way.
But for now, it _will_ loop endlessly when a) ChunkedInputStream.close() or exhaustInputStream() is called, or b) getResponseBody() or getResponseBodyAsString() is called.
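For what it's worth, a caller can already protect itself today by wrapping the response stream before handing it to anything that tries to exhaust it. A hypothetical sketch (the class name and the cap are made up; this is not part of the HttpClient API):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Wrapper that reports end-of-stream after `limit` bytes, so code that
// tries to drain the stream terminates even if the server never stops.
public class CappedInputStream extends FilterInputStream {
    private long remaining;

    public CappedInputStream(InputStream in, long limit) {
        super(in);
        this.remaining = limit;
    }

    @Override public int read() throws IOException {
        if (remaining <= 0) return -1;  // pretend EOF at the cap
        int b = in.read();
        if (b >= 0) remaining--;
        return b;
    }

    @Override public int read(byte[] b, int off, int len) throws IOException {
        if (remaining <= 0) return -1;
        int n = in.read(b, off, (int) Math.min(len, remaining));
        if (n > 0) remaining -= n;
        return n;
    }

    public static void main(String[] args) throws IOException {
        // A 1000-byte "endless" stand-in, capped at 10 bytes.
        InputStream capped =
            new CappedInputStream(new ByteArrayInputStream(new byte[1000]), 10);
        int count = 0;
        while (capped.read() != -1) count++; // terminates at the cap
        System.out.println(count); // 10
    }
}
```

A connection read through such a cap cannot be reused, since the rest of the response is never consumed, so it would have to be combined with "Connection: close" as suggested earlier in the thread.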
So, can you please provide a bugfix for that, at least for non-strict mode? It would help a lot.
Christian
--------------------------------------------------------------------- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]