I'm going to try to create an integration test for it so that I can show 
the setup doing the unexpected 500ms locking for stale+1 requests. I can 
then set the log level to debug, and if I can't figure it out I will post 
back here with the integration tests.

The reason I posted on the devel list was that I wanted to make the 500ms 
lock / sleep period configurable, but seeing as I made a wrong assumption 
it turned into a user discussion, sorry!
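
For reference, the lock timeout itself is already configurable; what does 
not appear to be configurable is the 500ms polling interval that waiting 
requests use while the lock is held. An illustrative snippet of the 
configurable directives (the values shown are the documented defaults, 
not our production settings):

    proxy_cache_lock on;
    proxy_cache_lock_timeout 5s;  # how long a request waits for the lock
    proxy_cache_lock_age 5s;      # after this, one more request may be
                                  # passed on to the backend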

Greets,
Roy

> On 27 Mar 2023, at 02:25, u5h <u5.ho...@gmail.com> wrote:
> 
> Hi, I had the same issue a while ago.
> Did you try the proxy_cache_lock_timeout?
> 
> https://forum.nginx.org/read.php?2,276344,276349#msg-276349
> 
> But the article below says that simply reducing that busy-loop wait 
> time may not resolve the problem, because of how the nginx event 
> notification mechanism behaves when there are many concurrent requests 
> for the same content.
> 
> https://blog.lumen.com/pulling-back-the-curtain-development-and-testing-for-low-latency-dash-support/
> 
> By the way, we might have been better off using the nginx@ mailing list 
> for this kind of user-level discussion.
> 
> —
> Yugo Horie
> 
> On Fri, Mar 24, 2023 at 19:18 Maxim Dounin <mdou...@mdounin.ru> wrote:
>> Hello!
>> 
>> On Fri, Mar 24, 2023 at 09:24:25AM +0100, Roy Teeuwen wrote:
>> 
>> > You are absolutely right, I totally forgot about the cache lock. 
>> > I have listed our settings below.
>> > 
>> > The reason we are using the cache lock is to protect the backend 
>> > application from getting hundreds of requests when a stale item 
>> > becomes invalid. Even with "proxy_cache_use_stale updating", we 
>> > notice that only the first request uses the stale item; the 
>> > following requests make new upstream requests even though a 
>> > background request is already running to refresh the stale item. 
>> > (This does not happen if we set keepalive to 0, where new 
>> > connections are used, but that has the performance degradation 
>> > mentioned earlier.) This was the reasoning for the cache lock, 
>> > but that introduces the issue of the 500ms lock, while the item 
>> > might already be refreshed after 100ms.
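>> > 
>> > For illustration, a minimal sketch of the kind of configuration 
>> > described above (directive values are assumptions, not our exact 
>> > settings):
>> > 
>> >     # assumes a proxy_cache_path ... keys_zone=my_cache:10m in http{}
>> >     upstream backend {
>> >         server 127.0.0.1:8080;
>> >         keepalive 16;                  # keepalive connections cache
>> >     }
>> > 
>> >     location / {
>> >         proxy_pass http://backend;
>> >         proxy_http_version 1.1;        # required for upstream keepalive
>> >         proxy_set_header Connection "";
>> >         proxy_cache my_cache;
>> >         proxy_cache_use_stale updating;
>> >         proxy_cache_lock on;
>> >         proxy_cache_lock_timeout 5s;
>> >     }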
>> 
>> To re-iterate: proxy_cache_lock is not expected to affect requests 
>> if there is an existing cache item (and keepalive shouldn't affect 
>> proxy_cache_lock in any way; not to mention that the "keepalive" 
>> directive, which configures the cache of keepalive connections to 
>> upstream servers, does not accept a value of "0").
>> 
>> You may want to dig further into what actually happens in your 
>> configuration.  I would recommend starting with a debug log that 
>> shows the described behaviour, and then following the code to find 
>> out why the cache lock kicks in when it shouldn't.
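>> 
>> For example, assuming nginx was built with --with-debug, something 
>> like the following enables the debug log (the path is illustrative):
>> 
>>     error_log /var/log/nginx/debug.log debug;
>> 
>> or, to limit debug output to specific client addresses:
>> 
>>     events {
>>         debug_connection 192.0.2.1;
>>     }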
>> 
>> -- 
>> Maxim Dounin
>> http://mdounin.ru/
