Hey,

We are using NGINX as a proxy / caching layer in front of a backend 
application. The backend has a relatively slow response time, ranging from 
100 to 300 ms. We want the NGINX proxy to be as fast as possible, so we have 
implemented the following logic:


- Cache all responses for 5 minutes (based on cache control headers)
- Serve stale cache entries when the backend returns errors
- Update stale cache entries with a background request
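
For reference, the relevant part of our configuration looks roughly like this 
(a simplified sketch; the cache path, zone name, sizes, and upstream name are 
placeholders, not our real values):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m
                 max_size=1g inactive=10m;

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_cache app_cache;

        # Serve stale entries on backend errors and while revalidating
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;

        # Refresh expired entries with a background subrequest
        proxy_cache_background_update on;
    }
}
```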


The last part has an issue: when a first request reaches NGINX, it triggers a 
background update, but subsequent requests for the same resource are blocked 
until that background request finishes, instead of being served the stale 
cache entry that is still available. This is caused by the keepalive on the 
connection, which locks all subsequent requests until the background request 
has finished.


The problem for us is that this locking lasts very long: a hardcoded 500 ms. 
I think it is caused by this:
https://github.com/nginx/nginx/blob/master/src/core/ngx_connection.c#L703


This means that responses from our relatively slow backend of 100-200 ms 
actually get worse rather than better.


Would it be an option to make this 500 ms value a configurable setting? Are 
there any downsides to lowering it? I'd be willing to see if we can 
contribute this.


Another option I tried was setting the keepalive to 0, so that every request 
uses a new connection. With small numbers of requests this actually seemed to 
solve the issue, but the moment we moved to real-life traffic it degraded 
performance massively, so we had to revert it.
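
For completeness, disabling keepalive was done roughly like this (again a 
sketch; the listen port and `backend` upstream name are placeholders):

```nginx
server {
    listen 80;

    # Close the client connection after each request, so a pending
    # background update cannot block later requests arriving on the
    # same keepalive connection.
    keepalive_timeout 0;

    location / {
        proxy_pass http://backend;
    }
}
```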


Greets,
Roy


_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel
