2.4.5 Idle Timeout Of A Connection

"To avoid spurious timeouts, the value in idle-time-out SHOULD be half the 
peer's actual timeout threshold" 

So, to me, this means that on the @open performative the client should flow, 
for example, 30000 as the idleTimeOut it would like to negotiate, but should 
only actually enforce that data is received from the other end within 60000 
milliseconds before it closes the session and connection.
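
To make that reading concrete, here's a tiny sketch in plain Java (the names 
are mine, this is not proton code) of how the advertised value would relate to 
the threshold actually enforced locally:

// Sketch only: the "advertise half of what you enforce" reading of 2.4.5.
public class IdleTimeoutAdvertisementSketch {
    public static void main(String[] args) {
        long actualLocalThresholdMillis = 60_000;   // really close after this much silence
        long advertisedIdleTimeoutMillis =
                actualLocalThresholdMillis / 2;     // value flowed in @open

        System.out.println("idle-time-out in @open: " + advertisedIdleTimeoutMillis + " ms");
        System.out.println("close connection after: " + actualLocalThresholdMillis + " ms without data");
    }
}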

However, if that is the case, then the code in proton-c (pn_tick_amqp in 
transport.c) and proton-j (#tick() in TransportImpl.java) would appear to be 
doing the wrong thing?
Currently it *halves* the advertised remote_idle_timeout of the peer in order 
to determine the deadline by which it must send empty keepalive frames to the 
remote end. Similarly, it uses its local_idle_timeout as-is to determine 
whether the remote end hasn't sent data recently enough (closing the 
connection with resource-limit-exceeded when the deadline elapses). This would 
seem to mean that empty frames are being sent twice as often as they need to 
be, and resource-limit-exceeded is being fired too soon.
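
For clarity, the deadline arithmetic I'm describing is roughly the following; 
this is a simplified sketch in plain Java with made-up names, not the actual 
pn_tick_amqp / TransportImpl#tick code:

// Simplified sketch of the current behaviour as I read it; not proton source.
public class CurrentTickSketch {
    // Keepalive deadline: the peer's advertised remote_idle_timeout is halved.
    static long keepaliveDeadline(long lastBytesOutput, long remoteIdleTimeout) {
        return lastBytesOutput + (remoteIdleTimeout / 2);
    }

    // Close deadline: the user-supplied local_idle_timeout is used as-is, and
    // resource-limit-exceeded is raised when it elapses.
    static long closeDeadline(long lastBytesInput, long localIdleTimeout) {
        return lastBytesInput + localIdleTimeout;
    }

    public static void main(String[] args) {
        // Peer advertised 30000, so (by the spec reading above) it really
        // enforces 60000, yet keepalives get scheduled every 15000.
        System.out.println("keepalive due at: " + keepaliveDeadline(0, 30_000));
        // User asked for 60000; we advertise 60000 and also close at 60000.
        System.out.println("close due at:     " + closeDeadline(0, 60_000));
    }
}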

It seems to me that instead it should use remote_idle_timeout as-is when 
determining the deadline for sending data, and the local_idle_timeout 
specified by the client user should either be doubled when determining the 
close deadline or halved before sending it in the @open frame.
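
In sketch form (again hypothetical names, not a patch), that would look 
something like this:

// Sketch of the suggested alternative; either option keeps the advertised
// value at half of what is actually enforced, per the spec wording above.
public class ProposedTickSketch {
    // Send side: use the peer's advertised remote_idle_timeout as-is.
    static long keepaliveDeadline(long lastBytesOutput, long remoteIdleTimeout) {
        return lastBytesOutput + remoteIdleTimeout;
    }

    // Option (a): advertise the user's local_idle_timeout unchanged in @open,
    // but double it when computing the close deadline.
    static long closeDeadlineOptionA(long lastBytesInput, long localIdleTimeout) {
        return lastBytesInput + (2 * localIdleTimeout);
    }

    // Option (b): halve the user's local_idle_timeout before it goes into
    // @open, and enforce the full value locally.
    static long advertisedIdleTimeoutOptionB(long localIdleTimeout) {
        return localIdleTimeout / 2;
    }

    static long closeDeadlineOptionB(long lastBytesInput, long localIdleTimeout) {
        return lastBytesInput + localIdleTimeout;
    }

    public static void main(String[] args) {
        System.out.println("keepalive due at: " + keepaliveDeadline(0, 30_000));        // 30000, not 15000
        System.out.println("close (a) due at: " + closeDeadlineOptionA(0, 60_000));     // 120000
        System.out.println("advertise (b):    " + advertisedIdleTimeoutOptionB(60_000)); // 30000
        System.out.println("close (b) due at: " + closeDeadlineOptionB(0, 60_000));     // 60000
    }
}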

Thoughts?

Cheers,
Dom