Interesting - that sounds like a bug in identifying concurrent requests for 
refresh of stale objects. 
The cache miss scenario is more straightforward to detect and is well tested in 
production, but I do recall that refresh of stale objects doesn't quite work as 
expected. It was never a huge problem for us because we were more concerned 
with the heavier media segment files (which are typically cache misses); 
manifest files are fewer in number and tiny in comparison. I'll take a 
look at why the refresh path doesn't collapse as well as the miss path.
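To illustrate what "collapsing" means here, a toy model (this is not the ATS implementation, just a sketch of the intended semantics): concurrent requests for the same object should share a single origin fetch rather than each leaking to the origin.

```python
import threading

class CollapsingCache:
    """Toy model of collapsed forwarding: concurrent requests for the same
    key share one origin fetch instead of each going to the origin."""

    def __init__(self, origin_fetch):
        self._origin_fetch = origin_fetch  # callable: key -> response body
        self._lock = threading.Lock()
        self._inflight = {}                # key -> Event set when fetch is done
        self._store = {}                   # key -> cached response

    def get(self, key):
        with self._lock:
            if key in self._store:         # cache hit
                return self._store[key]
            event = self._inflight.get(key)
            if event is None:              # first requester becomes the leader
                event = threading.Event()
                self._inflight[key] = event
                leader = True
            else:                          # followers wait for the leader
                leader = False
        if leader:
            value = self._origin_fetch(key)  # single trip to the origin
            with self._lock:
                self._store[key] = value
                del self._inflight[key]
            event.set()                      # wake all waiting followers
            return value
        event.wait()
        return self._store[key]
```

With thousands of threads calling `get()` for the same manifest, the origin fetch should run exactly once; the stale-refresh bug discussed above is the case where this sharing fails and every waiter goes to the origin.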
To your question, the expectation is to follow the configured behavior. If the 
config says to return a 502, requests should not be leaked. Can you confirm 
whether you are seeing any leaks at all for segment files (cache misses)?
This also brings up a good point: the feature newly built into the ATS core 
does not support a config option to entirely prevent requests from leaking. 
As it stands, setting the config to "5" falls back to proxying (leaking) to 
the origin. We can enhance that to support an option to short-circuit in the 
event the origin is slower.
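For reference, the core knob being discussed is set in records.config (name and value semantics from memory; please verify against the docs for your ATS version):

```
# records.config -- action on cache open-write lock failure (i.e. another
# request is already writing this object). A 502-returning mode avoids
# leaking to the origin; "5" currently falls back to proxying when the
# origin is slow. Check your version's docs for the exact value meanings.
CONFIG proxy.config.http.cache.open_write_fail_action INT 3
```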
    On Friday, June 19, 2020, 03:05:02 AM PDT, ezko 
<erez.ko...@harmonicinc.com> wrote:  
 
 Thanks for the feedback.

We will definitely check out 8.1.x (any schedule for an official release?)

When the leakage happens, all client requests are leaked (not sure if clients
are redirected to the origin or ATS is forward proxying). I should mention
that it usually happens on HLS manifest refresh (cache stale) in a testing
environment using benchmark tools like apache-bench (so there are thousands
of concurrent requests for the same object).

One question we have regarding ColFw (either as a plugin or built in) is:
what is the expected behavior when the origin response is delayed for a few
seconds (for example, due to momentary network congestion)? Will ColFw "give
up" waiting for the origin headers? What happens then? Will requests start
leaking?

BR,
Erez 

--
Sent from: http://apache-traffic-server.24303.n7.nabble.com/