configuring multiple nginx workers

2016-06-27 Thread Charles Orth

Hi Gurus,

I am looking to develop a discovery service feature on top of nginx.
I have read http://www.aosabook.org/en/nginx.html and I have a couple of
questions based on what I've seen in the code base.

If I have multiple workers configured using a single server as an endpoint,
I want to leverage the ngx_http_upstream_init_main_conf function to do
the connection pooling for my service.
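
For comparison, this is roughly the per-worker pooling the stock keepalive
directive already does; a sketch with a placeholder endpoint, not my real
config:

    upstream discovery_backend {
        server 10.0.0.5:8080;   # placeholder endpoint
        keepalive 16;           # each worker keeps its own cache of
                                # up to 16 idle upstream connections
    }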


I see there is some mutex locking commented out in the code. What I want
to confirm is: if I have 2 or more workers configured, does each worker have
its own memory allocation for ngx_http_upstream_main_conf_t *umcf = conf?
If so, we don't need any mutex locking between the separate worker
processes, and each worker will have its own pooled set of connections to
the same endpoint.
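
To make this concrete, here is a minimal sketch of the access pattern I have
in mind, assuming a custom module with an init_process hook (the handler name
is made up, not existing code):

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    /* Hypothetical init_process handler, registered in ngx_module_t.
     * It runs once in every worker after fork(), so umcf points at
     * that worker's own copy of the upstream main conf; only
     * explicitly created shared memory zones would need locking. */
    static ngx_int_t
    my_discovery_init_process(ngx_cycle_t *cycle)
    {
        ngx_http_upstream_main_conf_t  *umcf;

        umcf = ngx_http_cycle_get_module_main_conf(cycle,
                                                   ngx_http_upstream_module);
        if (umcf == NULL) {
            return NGX_OK;   /* no http {} block in the configuration */
        }

        /* umcf->upstreams is an ngx_array_t of
           ngx_http_upstream_srv_conf_t *, one per upstream{} block */

        return NGX_OK;
    }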


Is my understanding correct?

Any help is greatly appreciated.

Charles



Fwd: How can I have multiple nginx plus servers route to the same app servers with sticky sessions on?

2015-06-18 Thread Matt
I have multiple nginx instances behind an AWS elastic load balancer. In the
nginx config files, I am using ip_hash to force sticky sessions when
connecting upstream. Is there a way to sync the route tables between the
multiple nginx servers, so that no matter which nginx server handles the
request, the traffic is sent to the same backend application server?
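
For reference, the upstream block in question looks roughly like this
(addresses are placeholders, not my real config):

    upstream app_servers {
        ip_hash;                 # hash the client address (first three
                                 # octets of an IPv4 address) to pick a backend
        server 10.0.1.10:8080;   # placeholder backend A
        server 10.0.1.11:8080;   # placeholder backend B
    }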

When I first set this scenario up, I had no problems. But after heavy
testing with multiple clients from different parts of the world, I was able
to verify that the multiple nginx servers were not choosing the same
backend application servers to route to.

I attached a drawing that explains the architecture visually.

Matt

Re: Fwd: How can I have multiple nginx plus servers route to the same app servers with sticky sessions on?

2015-06-18 Thread Maxim Dounin
Hello!

On Thu, Jun 18, 2015 at 10:13:18AM -0700, Matt wrote:

 I have multiple nginx instances behind an AWS elastic load balancer. In the
 nginx config files, I am using ip_hash to force sticky sessions when
 connecting upstream. Is there a way to sync the route tables between the
 multiple nginx servers, so that no matter which nginx server handles the
 request, the traffic is sent to the same backend application server?
 
 When I first set this scenario up, I had no problems. But after heavy
 testing with multiple clients from different parts of the world, I was able
 to verify that the multiple nginx servers were not choosing the same
 backend application servers to route to.

First of all, as you use AWS, make sure all nginx instances
properly see client addresses (and not the addresses of the Amazon ELB).
If nginx sees ELB addresses instead, you have to configure the
realip module appropriately; see
http://nginx.org/en/docs/http/ngx_http_realip_module.html.
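
Something like the following, assuming the ELB connects from a private
range (the CIDR is a placeholder for your VPC subnet):

    set_real_ip_from 10.0.0.0/8;       # placeholder: addresses the ELB uses
    real_ip_header   X-Forwarded-For;  # ELB passes the client IP here

With this in place, ip_hash hashes the restored client address rather
than the ELB's.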

An additional problem which may hurt your case is upstream server 
errors.  See this message for a detailed explanation:

http://mailman.nginx.org/pipermail/nginx/2015-May/047590.html

-- 
Maxim Dounin
http://nginx.org/



Re: Multiple nginx instances share same proxy cache storage

2014-08-11 Thread itpp2012
Robert Paprocki wrote:
 like rsyncing the cache contents between nodes thus would not work);
 are there any recommendations to achieve such a solution?

I would imagine a proxy location directive and a location tag:

shared memory pool 1 = nginx-allocated and managed
shared memory pool 2 = a socket or TCP pool on a caching server elsewhere

The problem you have is speed and concurrency of requests: rsyncing a cache
requires a specific tag which needs to be respected by each instance using
it, or you will have a battle between the instances.

A better idea would be a database with a persistent connection, cached in
memory again to avoid duplicate queries, i.e., use the database as a central
repository of cached items and local memory to avoid hitting the database
more than once for each item. No disk I/O would be involved, so it should
also be non-blocking.

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,252275,252479#msg-252479



Re: Multiple nginx instances share same proxy cache storage

2014-08-11 Thread Maxim Dounin
Hello!

On Sun, Aug 10, 2014 at 05:24:04PM -0700, Robert Paprocki wrote:

 Are there any options, then, to support an architecture with multiple nginx
 nodes sharing or distributing a proxy cache between them? I.e.,
 a HAProxy machine load-balances to several nginx nodes (for
 failover reasons), and each of these nodes handles HTTP proxying +
 proxy caching for a remote origin. If nginx handles cache info in
 memory, it seems that multiple instances could not be used to
 maintain the same cache info (something like rsyncing the cache
 contents between nodes thus would not work); are there any
 recommendations to achieve such a solution?

Distinct caches will be best from a failover point of view.

To maximize cache efficiency, you may consider using URI-based
hashing to distribute requests between cache nodes.
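
If the balancing tier were itself nginx, a sketch of such hashing
(names and addresses are placeholders):

    upstream cache_nodes {
        hash $request_uri consistent;  # a given URI always maps to the
                                       # same cache node while it is up
        server 10.0.2.10;              # placeholder cache node A
        server 10.0.2.11;              # placeholder cache node B
    }

The consistent flag limits how many keys are remapped when a node is
added or removed.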

-- 
Maxim Dounin
http://nginx.org/



Re: Multiple nginx instances share same proxy cache storage

2014-08-10 Thread Robert Paprocki
Are there any options, then, to support an architecture with multiple nginx
nodes sharing or distributing a proxy cache between them? I.e., a HAProxy
machine load-balances to several nginx nodes (for failover reasons), and each
of these nodes handles HTTP proxying + proxy caching for a remote origin. If
nginx handles cache info in memory, it seems that multiple instances could
not be used to maintain the same cache info (something like rsyncing the
cache contents between nodes thus would not work); are there any
recommendations to achieve such a solution?

 On Aug 4, 2014, at 17:49, Maxim Dounin mdou...@mdounin.ru wrote:
 
 Hello!
 
 On Mon, Aug 04, 2014 at 07:42:20PM -0400, badtzhou wrote:
 
 I am thinking about setting up multiple nginx instances share single proxy
 cache storage using NAS, NFS or some kind of distributed file system. Cache
 key will be the same for all nginx instances.
 Will this theory work? What kind of problem will it cause(locking, cached
 corruption or missing metadata in the memory)?
 
 As soon as a cache is loaded, nginx relies on its in-memory data to
 manage the cache (keep it under the specified size, remove inactive
 items, and so on). As a result, it won't be happy if you try to run
 multiple nginx instances working with the same cache directory.
 It can tolerate multiple instances working with the same cache for
 a short period of time (e.g., during a binary upgrade), but running
 nginx this way intentionally is a bad idea.
 
 Besides, using NFS (as well as other NASes) for the nginx cache is a
 bad idea due to blocking file operations.
 
 -- 
 Maxim Dounin
 http://nginx.org/
 



Multiple nginx instances share same proxy cache storage

2014-08-04 Thread badtzhou
I am thinking about setting up multiple nginx instances sharing a single
proxy cache storage using NAS, NFS, or some kind of distributed file system.
The cache key will be the same for all nginx instances.
Will this work in theory? What kind of problems will it cause (locking,
cache corruption, or missing metadata in memory)?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,252275,252275#msg-252275



Re: Multiple nginx instances share same proxy cache storage

2014-08-04 Thread Maxim Dounin
Hello!

On Mon, Aug 04, 2014 at 07:42:20PM -0400, badtzhou wrote:

 I am thinking about setting up multiple nginx instances sharing a single
 proxy cache storage using NAS, NFS, or some kind of distributed file system.
 The cache key will be the same for all nginx instances.
 Will this work in theory? What kind of problems will it cause (locking,
 cache corruption, or missing metadata in memory)?

As soon as a cache is loaded, nginx relies on its in-memory data to
manage the cache (keep it under the specified size, remove inactive
items, and so on). As a result, it won't be happy if you try to run
multiple nginx instances working with the same cache directory.
It can tolerate multiple instances working with the same cache for
a short period of time (e.g., during a binary upgrade), but running
nginx this way intentionally is a bad idea.

Besides, using NFS (as well as other NASes) for the nginx cache is a
bad idea due to blocking file operations.
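
In other words, give each instance its own local cache, e.g. (path and
zone name are placeholders):

    proxy_cache_path /var/cache/nginx keys_zone=local_cache:10m
                     max_size=10g inactive=60m;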

-- 
Maxim Dounin
http://nginx.org/



multiple nginx

2013-08-18 Thread Edwin Lee
Hi,

Is it alright to have two installations of nginx on the same machine?
I have a running instance of nginx with PHP, installed from the distribution
package manager.
Instead of writing another config, I would like to compile and install nginx
from source code and run it as a second instance.
The second instance is to be optimized for load balancing, reverse proxying,
caching, and modsecurity.

My concern is: would this break the system on Debian Squeeze?

Thanks for answering.

Edwin Lee

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: multiple nginx

2013-08-18 Thread MCoder
You could specify the configuration file with the -c option, or even the
prefix with -p at runtime, and you can compile another nginx instance with
the --prefix configure option.
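
A sketch with placeholder paths; the second build lives entirely under its
own prefix, so it will not touch the packaged instance:

    # build and install a second nginx under its own prefix
    ./configure --prefix=/opt/nginx2
    make && make install

    # start it with its own configuration file; make sure its
    # listen ports differ from the packaged instance's
    /opt/nginx2/sbin/nginx -c /opt/nginx2/conf/nginx.conf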


2013/8/18 Edwin Lee edwin...@proxyy.biz

 Hi,

 Is it alright to have two installations of nginx on the same machine?
 I have a running instance of nginx with PHP, installed from the distribution
 package manager.
 Instead of writing another config, I would like to compile and install
 nginx from source code and run it as a second instance.
 The second instance is to be optimized for load balancing, reverse proxying,
 caching, and modsecurity.

 My concern is: would this break the system on Debian Squeeze?

 Thanks for answering.

 Edwin Lee

