There are a couple of options described here that you could consider if you 
want to share your cache between NGINX instances:

https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/ 
describes a sharded cache approach, where you load-balance by URI across the 
NGINX cache servers.  You can combine your front-end load balancers and 
back-end caches onto one tier to reduce your footprint if you wish.
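As a rough illustration of the sharded pattern (all names and addresses below 
are hypothetical, and this is only an outline of the idea, not a drop-in 
config), the load-balancing tier hashes on the URI so that each resource lands 
on exactly one cache node:

    # --- Load-balancer tier ---
    upstream cache_tier {
        hash $request_uri consistent;  # consistent hashing limits cache
                                       # reshuffling when nodes come and go
        server 10.0.0.11;              # cache node 1 (hypothetical)
        server 10.0.0.12;              # cache node 2 (hypothetical)
    }

    server {
        listen 80;
        location / {
            proxy_pass http://cache_tier;
        }
    }

    # --- On each cache node: an ordinary caching proxy for the origin ---
    proxy_cache_path /var/cache/nginx keys_zone=shard:10m;

    server {
        listen 80;
        location / {
            proxy_cache shard;
            proxy_pass http://origin.example.com;  # hypothetical origin
        }
    }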

https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/ 
describes an alternative HA (shared) approach that replicates the cache, so that 
there's no increased load on the origin server if one cache server fails.
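One way to sketch that HA pair (again, names and addresses are hypothetical; 
see the article for the full details): the load balancer sends all traffic to 
the primary cache and fails over to the secondary, while the primary fetches 
its misses through the secondary, so both caches end up holding a copy of 
everything that has been served:

    # --- On the primary cache node ---
    proxy_cache_path /var/cache/nginx keys_zone=ha:10m;

    upstream next_hop {
        server 10.0.0.12;                  # secondary cache (hypothetical)
        server origin.example.com backup;  # used only if the secondary is down
    }

    server {
        listen 80;
        location / {
            proxy_cache ha;
            proxy_pass http://next_hop;
        }
    }

    # The secondary node runs the same config, but its upstream
    # points directly at the origin.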

It's not possible to share a cache across instances by using a shared 
filesystem (e.g. NFS): the files on disk are only half of the cache, and each 
instance keeps the other half (its index of cache entries) in local memory.
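For illustration, this is the directive that declares that in-memory index 
(the path and zone name here are arbitrary); a second instance pointed at the 
same directory cannot see or safely manage entries created by the first:

    # keys_zone is a shared-memory zone local to this instance; it holds
    # the cache index, so the on-disk files alone are not usable by
    # another nginx instance.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=10g inactive=60m;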

---
o...@nginx.com
Skype: owen.garrett
Cell: +44 7764 344779

> On 7 Jul 2017, at 14:39, Peter Booth <peter_bo...@me.com> wrote:
> 
> You could do that, but it would be bad. NGINX's great performance is based on 
> serving files from a local disk and the behavior of the Linux page cache. If 
> you serve from a shared (NFS) filesystem then every request is slower. You 
> shouldn't slow down the common case just to increase the cache hit rate.
> 
> On Jul 7, 2017, at 9:24 AM, Frank Dias <frank.d...@prodea.com> wrote:
> 
>> Have you thought about using a shared file system for the cache? That way 
>> all the nginx instances are looking at the same cached content.
>> 
>> On Jul 7, 2017 5:30 AM, Joan Tomàs i Buliart <joan.to...@marfeel.com> wrote:
>> Hi Lucas
>> 
>> On 07/07/17 12:12, Lucas Rolff wrote:
>> > Instead of doing round-robin load balancing, why not do URI-based load 
>> > balancing? Then you ensure your cached file is only present on a single 
>> > machine behind the load balancer.
>> 
>> Yes, we considered this option, but it forces us to deploy and maintain 
>> another layer (LB + NGINX + app server). All cloud providers have round-robin 
>> load balancers out of the box, but none of them provides a URI-based load 
>> balancer. Moreover, in our scenario, our webserver layer is quite dynamic 
>> because it scales up and down.
>> 
>> Best,
>> 
>> Joan

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
