Hi,
I believe background fill is enabled and tuned correctly for RWW.
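Specifically, we have the combination the RWW docs call for (the three lines below are the documented ones, not a full dump of our config):

    CONFIG proxy.config.cache.read_while_writer INT 1
    CONFIG proxy.config.http.background_fill_active_timeout INT 0
    CONFIG proxy.config.http.background_fill_completed_threshold FLOAT 0.000000

(active_timeout 0 and completed_threshold 0.0 make the background fill unconditional, which is what RWW needs.)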
We are trying to simulate the REFRESH MISS problem using a small local
setup.
We used 3 Docker containers running on a single 4-core Intel Xeon CPU
E3-1220 v5 @ 3.00GHz machine, as follows:
1) nginx origin using image nginx:
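In shell form, the setup is roughly the following sketch (it assumes the other two containers are ATS itself and a load generator; names, image tags, and the config path are illustrative, and the remap.config pointing at the origin is omitted):

    docker network create ats-test
    # 1) origin
    docker run -d --name origin --network ats-test nginx
    # 2) ATS 8.0.7 under test, with our records.config mounted in
    docker run -d --name ats --network ats-test -p 8080:8080 \
        -v $PWD/records.config:/usr/local/etc/trafficserver/records.config \
        apache/trafficserver
    # 3) many concurrent requests for the same segment
    docker run --rm --network ats-test --entrypoint sh curlimages/curl \
        -c 'for i in $(seq 1 50); do curl -s -o /dev/null http://ats:8080/seg1.ts & done; wait'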
Hmm, if you have background fill enabled and tuned correctly for RWW, a client
abort should not impact the Origin fetch.
https://docs.trafficserver.apache.org/en/7.1.x/admin-guide/configuration/cache-basics.en.html#admin-config-read-while-writer
However, if the Origin is completely broken or if
Hi Sudheer,
We ran some more tests and actually found 2 leaks, both for a ~100 KB
segment.
It's a strange scenario: for these segments we never saw a crc equal to a
cache miss; instead, the first crc was ERR_CLIENT_READ_ERROR.
So it seems like the first client to get the write lock aborted due
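For anyone reproducing this: we pulled the result codes straight out of the stock squid.log, where field 4 of the default squid format is crc/pssc and field 7 is the URL (the segment name below is a placeholder):

    grep 'seg1.ts' /var/log/trafficserver/squid.log | awk '{print $1, $4}'

which made the pattern of ERR_CLIENT_READ_ERROR showing up before any TCP_MISS easy to see.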
Interesting - that sounds like a bug in identifying concurrent requests for
refresh of stale objects.
The cache miss scenario is more straightforward to detect and is well tested in
production, but I do recall the stale object refresh doesn't quite work as
expected. It was never a huge problem
Thanks for the feedback.
We will definitely check out 8.1.x (any schedule for an official release?).
When the leakage happens, all client requests are leaked (not sure if clients
are redirected to the Origin or if ATS is forward proxying). I should mention
that it usually happens for HLS manifest refresh (c
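One way to tell the two apart (assuming proxy.config.http.insert_response_via_str is set non-zero, so ATS stamps a Via header on responses it serves; host and path below are placeholders):

    curl -sv -o /dev/null http://ats-host:8080/master.m3u8 2>&1 | grep -i '^< via'

If responses observed during a leak carry no ATS Via header, clients are reaching the Origin directly rather than being proxied.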
That’s odd.
Since you’ve set
>> proxy.config.http.cache.open_write_fail_action 4
it seems like that should be sufficient to ensure concurrent requests aren’t
leaked. How many requests are you seeing go up to the Origin per ATS server?
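A quick way to count that from the Origin side, if your nginx uses the default combined log format (field 7 is the request URL; the log path is assumed):

    awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head

Anything well beyond one or two fetches per segment URL over its lifetime would confirm the leak.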
The status code of 303 and crc=ERR_CONNECT_FAIL does seem
Can confirm that we are also seeing this, but only at times of extremely
high traffic (e.g. 20 Gbps).
On Thu, Jun 18, 2020 at 5:57 AM ezko wrote:
> Hi,
> We are evaluating ATS 8.0.7 as a reverse proxy for caching linear video
> (multiple instantaneous hits for the same content).
> We enabled RWW and