Hello!

On Fri, Mar 24, 2023 at 09:24:25AM +0100, Roy Teeuwen wrote:
> You are absolutely right, I totally forgot about the cache_lock.
> I have listed our settings below.
>
> The reason we are using the cache_lock is to protect the backend
> application from getting hundreds of requests when a stale item
> becomes invalid. Even though we have use_stale updating, we
> notice that only the first request will use the stale item; the
> following requests will make a new request even though there is
> already a background request going on to refresh the stale item.
> (This does not happen if we set keepalive to 0, so that new
> connections are used, but that has the performance degradation
> mentioned earlier.) This was the reasoning for the cache_lock,
> but that raises the issue of the 500ms lock, while the item might
> already be refreshed after 100ms.

To re-iterate: proxy_cache_lock is not expected to affect requests
if there is an existing cache item (and keepalive shouldn't affect
proxy_cache_lock in any way; not to mention that the "keepalive"
directive, which configures the cache of keepalive connections to
upstream servers, does not accept the value "0").

You may want to dig further into what actually happens in your
configuration. I would recommend starting with a debug log which
shows the described behaviour, and then following the code to find
out why the cache lock kicks in when it shouldn't.
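To obtain a debug log, nginx needs to be built with the
"--with-debug" configure option; the debug level can then be
enabled with something along these lines (the log path and client
address below are just examples):

    error_log /var/log/nginx/error.log debug;

    # or, to keep the log manageable, enable debug logging only
    # for connections from a single test client:
    events {
        debug_connection 192.0.2.1;
    }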
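And just for reference, a minimal configuration along the lines
you describe might look like the following (untested; the upstream
address, cache path, zone name, and timings are made-up examples,
not your actual settings):

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;    # must be greater than 0
    }

    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

    server {
        listen 80;

        location / {
            proxy_pass http://backend;

            # needed for keepalive connections to the upstream
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            proxy_cache app_cache;
            proxy_cache_valid 200 10s;

            # serve the stale item while it is being refreshed
            proxy_cache_use_stale updating error timeout;
            proxy_cache_background_update on;

            # collapse concurrent requests for items not yet in
            # the cache; this is not expected to affect requests
            # for which a (stale) cache item already exists
            proxy_cache_lock on;
            proxy_cache_lock_timeout 500ms;
        }
    }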
--
Maxim Dounin
http://mdounin.ru/