shinrich opened a new issue #8200:
URL: https://github.com/apache/trafficserver/issues/8200


   This is an issue I saw while debugging the HTTP/2-to-origin branch between
two ATS servers, but in theory it could arise between any busy H2 client and
ATS acting as the H2 server.
   
   For ATS, Http2ConnectionState keeps client_streams_in_count to track how
many streams the session has active. It checks this value against the
session's max_concurrent_stream setting to make sure creating a new stream
does not violate the session's max_concurrent_stream policy. Similarly, the
H2 client (ATS on my branch) must keep a similar count to track the peer's
current stream count, and it should not send a new HEADERS frame to the peer
if that would push the peer over its max_concurrent_stream limit. Therefore,
the stream counts on the two machines must be kept consistent.
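
   To make the accounting concrete, here is a minimal, self-contained sketch
of the check described above. The names are simplified stand-ins for the real
ATS members (Http2ConnectionState, client_streams_in_count,
SETTINGS_MAX_CONCURRENT_STREAMS), not the actual implementation:

```cpp
#include <cstdint>

// Simplified stand-in for the per-session accounting in
// Http2ConnectionState; not the actual ATS code.
struct ConnectionState {
  uint32_t client_streams_in_count = 0; // streams the peer has open
  uint32_t max_concurrent_streams  = 100;

  // Returns false when accepting one more HEADERS frame would violate
  // the advertised MAX_CONCURRENT_STREAMS limit for this session.
  bool try_open_stream() {
    if (client_streams_in_count >= max_concurrent_streams) {
      return false; // refuse the stream
    }
    ++client_streams_in_count;
    return true;
  }

  // Must run exactly once per stream, as close as possible to the moment
  // the stream actually closes on the wire, so both peers stay in sync.
  void close_stream() { --client_streams_in_count; }
};
```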
   
   This consistency was not always maintained, because client_streams_in_count
was not being decremented until the HttpSM was destroyed. Depending on caching
and other operations, that may be some time after the frame carrying the EOS
(END_STREAM) flag was sent or received.
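
   A tiny standalone sketch of that problematic ordering, with made-up
stand-in types (the HttpSM here is only an illustration of the lifetime
issue, not the real state machine):

```cpp
// Timeline of the bug described above:
//   t0: frame with END_STREAM sent      (peer decrements its count)
//   t1: cache write, logging, etc.      (our count is still unchanged)
//   t2: HttpSM destroyed                (we finally decrement)
// Between t0 and t2 the peer believes a slot is free and may open a new
// stream that we then count against max_concurrent_stream.
struct ConnectionState { unsigned client_streams_in_count = 0; };

struct HttpSM {
  ConnectionState &cs;
  explicit HttpSM(ConnectionState &c) : cs(c) { ++cs.client_streams_in_count; }
  ~HttpSM() { --cs.client_streams_in_count; } // too late: EOS already sent
};
```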
   
   So the client could dutifully decrement its stream count on receiving the
EOS frame, end up with a lower stream count than the peer, and send too many
HEADERS frames. While that is only a stream error, a DATA frame would likely
follow soon after, producing a bad-stream-ID error, which is a connection
error. That would take down the session and all of its active streams (which
could be quite a few on a busy session).
   
   In my test branch I pulled the client_streams_in_count decrement into the
Http2Stream::change_state method, and into Http2Stream::do_io_close and
Http2Stream::initiating_close for streams that get shut down from an odd
state.
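
   To show the shape of that fix, here is a hedged, self-contained sketch.
The method names mirror the ATS ones, but the types and the counted_closed
guard are simplifications for illustration, not the actual patch:

```cpp
#include <cstdint>

enum class StreamState { IDLE, OPEN, HALF_CLOSED, CLOSED };

struct ConnectionState {
  uint32_t client_streams_in_count = 0;
};

// Simplified stand-in for Http2Stream. The decrement moves to the moment
// the stream reaches CLOSED, and a flag guards against double-decrementing
// when a stream is torn down from an odd state via more than one path.
struct Stream {
  ConnectionState &cs;
  StreamState state   = StreamState::IDLE;
  bool counted_closed = false; // ensures exactly one decrement per stream

  explicit Stream(ConnectionState &c) : cs(c) { ++cs.client_streams_in_count; }

  void release_count() {
    if (!counted_closed) {
      counted_closed = true;
      --cs.client_streams_in_count; // in sync with the wire, not the HttpSM
    }
  }

  // Decrement as soon as the stream is seen to close...
  void change_state(StreamState next) {
    state = next;
    if (next == StreamState::CLOSED) release_count();
  }

  // ...and also on the shutdown paths a stream can take from an odd state.
  void do_io_close()      { release_count(); }
  void initiating_close() { release_count(); }
};
```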

