Quintin Beukes wrote:
Hey,

So.. keepalive is sort of useless then?

Why? You just have to bear in mind that HTTP connections may be closed at _any_ point in time by either side. Your code must be prepared to handle such cases.
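For illustration, here is a minimal sketch of what "being prepared" can look like at the application level, assuming HttpClient 4.x; the URL and retry count are placeholders, not values from this thread:

import java.io.IOException;

import org.apache.http.HttpResponse;
import org.apache.http.NoHttpResponseException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class RetryingGet {

    public static void main(String[] args) throws IOException {
        HttpClient httpclient = new DefaultHttpClient();
        int maxAttempts = 3; // placeholder value
        IOException lastFailure = null;

        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            HttpGet get = new HttpGet("http://localhost/"); // placeholder URL
            try {
                HttpResponse response = httpclient.execute(get);
                // Consume the entity so the connection can go back to the pool.
                System.out.println(EntityUtils.toString(response.getEntity()));
                return;
            } catch (NoHttpResponseException ex) {
                // The server dropped a kept-alive connection; try again,
                // which will go out on a fresh connection.
                lastFailure = ex;
                get.abort();
            }
        }
        throw lastFailure;
    }
}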


 Because I realised what my problem
is. It wasn't synchronization. It was that connections were being closed by the
server, and then it fails when I try to re-use the connection.

Basically what I have is this:
I try once; if I receive a NoResponse exception, I remove the connection from
the pool and try again.
If the second try fails as well, it gets removed and I try again, but with an
isStale() check first.

My problem is that these keep failing until I have filtered through all the
connections in the pool, at which point I start making new ones.

How does HttpClient handle this? Or what can I do to make this more reliable
(in the sense of reducing failures to a minimum)?


HttpClient can optionally test connections for being non-stale and re-open them if needed, but the stale check cannot be 100% reliable. Basically, well-behaved HTTP agents must be prepared to retry the request in case of an I/O failure.
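For example, roughly along these lines, assuming HttpClient 4.x (the stale check and a retry handler are configured explicitly; the retry limit is an arbitrary placeholder):

import java.io.IOException;

import org.apache.http.client.HttpRequestRetryHandler;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.params.HttpConnectionParams;
import org.apache.http.protocol.HttpContext;

public class ClientSetup {

    public static DefaultHttpClient createClient() {
        DefaultHttpClient httpclient = new DefaultHttpClient();

        // Check pooled connections for staleness before re-using them.
        // This adds a small per-request overhead and still cannot be
        // 100% reliable.
        HttpConnectionParams.setStaleCheckingEnabled(httpclient.getParams(), true);

        // Retry requests a few times on I/O failures such as a server
        // having silently dropped a kept-alive connection.
        httpclient.setHttpRequestRetryHandler(new HttpRequestRetryHandler() {
            public boolean retryRequest(IOException exception, int executionCount,
                    HttpContext context) {
                return executionCount <= 3; // placeholder retry limit
            }
        });
        return httpclient;
    }
}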

Oleg


Q

On 6/21/08, Oleg Kalnichevski <[EMAIL PROTECTED]> wrote:
Quintin Beukes wrote:

Hey,

Isn't isStale() supposed to be able to tell me whether the server is
accepting responses?


No, it is intended to test whether the connection is still valid on the client
side. Blocking I/O provides no good means of telling whether the socket has been
closed by the peer. #isStale is a workaround for that problem.
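A rough sketch of the idea behind such a check (not HttpClient's actual implementation, just the workaround it relies on): set a very short read timeout and attempt a read; end-of-stream means the peer has closed the connection, a timeout means it probably has not:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public final class StaleCheck {

    // Best-effort check whether the peer has closed the connection.
    // A false result is no guarantee the next write will succeed.
    public static boolean isStale(Socket socket) {
        if (socket.isClosed()) {
            return true;
        }
        try {
            int oldTimeout = socket.getSoTimeout();
            try {
                socket.setSoTimeout(1); // peek with a ~1 ms read timeout
                InputStream in = socket.getInputStream();
                // read() returning -1 means the peer closed the connection.
                // (A real implementation would buffer any byte read here.)
                return in.read() == -1;
            } finally {
                socket.setSoTimeout(oldTimeout);
            }
        } catch (SocketTimeoutException ex) {
            // Nothing to read within the timeout: connection looks alive.
            return false;
        } catch (IOException ex) {
            return true;
        }
    }
}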

Oleg
