> It would help in a use-case when there are 2 NGINX processes, both working with the same cache directory.

Why would you want 2 nginx processes to use the same cache directory? Explain your situation, what's your end-goal, etc.

If it's to minimize the number of origin requests, you can build multiple layers of cache (fast and slow storage if you want), use load-balancing mechanisms such as URI-based balancing to spread the cache across multiple servers, and maybe use some of the special balancing flags, so that even if a machine goes down it won't cause a full reshuffle of the data.
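As a rough sketch of what that could look like (all hostnames, paths, and zone names below are made up for illustration): a front-end nginx hashes the request URI so each URL consistently lands on the same cache node, and each node defines its own cache zones on fast and slow storage. A true layered setup would chain one caching nginx in front of another; here the two zones are simply shown side by side.

```nginx
# --- front-end load balancer (sketch; hostnames are hypothetical) ---

upstream cache_tier {
    # Map each URI to one backend. The "consistent" (ketama) flag
    # means that when a server drops out, only its share of the keys
    # is remapped to other servers, not the whole key space.
    hash $request_uri consistent;
    server cache1.example.com;
    server cache2.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://cache_tier;
    }
}

# --- each cache node (sketch) ---

# Two cache zones on different storage: a small fast one and a large
# slow one. proxy_cache_path ties an on-disk directory to a keys_zone.
proxy_cache_path /ssd/cache keys_zone=fast:10m  max_size=10g  inactive=1h;
proxy_cache_path /hdd/cache keys_zone=slow:100m max_size=500g inactive=7d;

server {
    listen 8080;
    location /hot/ {
        proxy_cache fast;
        proxy_pass  http://origin.example.com;
    }
    location / {
        proxy_cache slow;
        proxy_pass  http://origin.example.com;
    }
}
```

This is only a sketch, not a complete configuration; in practice you would also tune proxy_cache_key, proxy_cache_valid, and health checking for the upstream servers.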

I'm sure that regardless of what your goal is - someone here will be able to suggest a (better) and already supported solution.

rnmx18 wrote:
It would help in a use-case when there are 2 NGINX processes, both working
with the same cache directory.

NGINX-A runs with a proxy-cache-path /disk1/cache with zone name "cacheA".

NGINX-B runs with the same proxy-cache-path /disk1/cache with zone name
"cacheB".

When NGINX-B adds content to the cache (say for URL test/a.html), the file
gets added to the cache as /disk1/cache/test/a.html (again, avoiding md5 for
simplicity).

I think it would be nice if a subsequent request for this URL to NGINX-A
resulted in a hit, since the file is already on disk. However, today it
does not result in a HIT, because the in-memory metadata for this URL is
missing in NGINX-A. So it fetches from the origin, adds the file to the
cache again, and updates its in-memory metadata.

Otherwise, a restart of NGINX-A would build up the cache metadata for files
found in the cache directory.

Thanks
Rajesh

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276624,276627#msg-276627

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

