Thanks for the response. I have the below configuration in nginx.conf:

worker_processes 8;
pid /var/run/nginx.pid;
worker_rlimit_nofile 196886;
worker_shutdown_timeout 10s;
include /etc/nginx/conf.d/main/*.conf;

events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}

stream {
    upstream tcp-9005-simple_tcp_echo_go {
        #zone myzone 5m;
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
        server 127.0.0.1:9000;
        server 127.0.0.1:9003;
    }

    server {
        #listen 9005;
        listen 9005 reuseport;
        proxy_pass tcp-9005-simple_tcp_echo_go;
        proxy_timeout 600s;
        proxy_next_upstream on;
        proxy_next_upstream_timeout 600s;
        proxy_next_upstream_tries 3;
    }
}

Each of the servers in the upstream is the same process, which responds after some delay. I am sending 10000 requests almost in parallel, using something like the below in a loop:

curl 127.0.0.1:9005 &
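To be concrete about the load generation, the loop is roughly along these lines (the script is only an illustration of what I run; the count of 10000 is as described):

#!/bin/sh
# Fire 10000 requests at the stream listener, nearly in parallel,
# by backgrounding each curl.
for i in $(seq 1 10000); do
    curl 127.0.0.1:9005 &
done
# Wait for all background curl processes to finish.
wait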
My expectation is that the 4 upstream servers should get requests in round-robin fashion, like:

req1 - upstream srv 1
req2 - upstream srv 2
req3 - upstream srv 3
req4 - upstream srv 4
...

But I do not see that behaviour; the requests are not being sent in strict round-robin order. I have tried the combinations below (reuseport on/off, multi_accept on/off) and round robin does not happen as described above in any of them.

Try 1 (with reuseport and multi_accept on):
127.0.0.1:9001 -- 2503
127.0.0.1:9002 -- 2501
127.0.0.1:9000 -- 2499
127.0.0.1:9003 -- 2497

Try 2 (without reuseport and multi_accept on):
127.0.0.1:9001 -- 2502
127.0.0.1:9002 -- 2501
127.0.0.1:9000 -- 2500
127.0.0.1:9003 -- 2497

Try 3 (with reuseport and multi_accept off):
127.0.0.1:9001 -- 2502
127.0.0.1:9002 -- 2502
127.0.0.1:9000 -- 2499
127.0.0.1:9003 -- 2497

Try 4 (without reuseport and multi_accept off):
127.0.0.1:9001 -- 2505
127.0.0.1:9002 -- 2499
127.0.0.1:9000 -- 2498
127.0.0.1:9003 -- 2498

It looks like round robin is happening with respect to each worker process. When I add the zone configuration, or when I set worker_processes to 1, it works and gives the expected result.
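For reference, this is the upstream block with which the expected distribution happens for me; the only change from the config above is uncommenting the zone line (the name "myzone" and the 5m size are simply the values I tried, not tuned numbers):

stream {
    upstream tcp-9005-simple_tcp_echo_go {
        # Shared memory zone; my understanding is that this makes the
        # group's runtime state shared across worker processes, which
        # would explain why the distribution changes.
        zone myzone 5m;
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
        server 127.0.0.1:9000;
        server 127.0.0.1:9003;
    }
    # server block unchanged from above
}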
I am using open source nginx 1.22.0 (http://nginx.org/download/nginx-1.22.0.tar.gz) and have built it from source.

Is my understanding of round robin correct? I feel something related to the zone is making it work properly.

I also see this description in the max_conns section of
http://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#server :

    max_conns=number
    limits the maximum number of simultaneous connections to the proxied
    server (1.11.5). Default value is zero, meaning there is no limit.
    If the server group does not reside in the shared memory, the
    limitation works per each worker process.

Thanks & Regards,
Vishwas

On Mon, Aug 8, 2022 at 8:19 AM Sergey A. Osokin <o...@freebsd.org.ru> wrote:
> Hi,
>
> On Fri, Aug 05, 2022 at 07:40:35PM +0530, Vishwas Bm wrote:
> >
> > What is the use of zone in stream upstream
> > http://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#zone
>
> Since this is the part of the commercial subscription, I'd recommend
> to contact NGINX Plus premium support team, please visit the following
> page to get details, https://www.nginx.com/support/
>
> > Does it have any impact on how loadbalancing happens when there are
> > multiple worker process?
>
> No impact.
>
> > Also how is the size needs to be calculated ?
> > Is 5m size sufficient for 10 worker process?
>
> That depends on the actual NGINX Plus configuration and other factors,
> usually 64k is enough, but that number can be revisited with an extensive
> testing in a lower environments.
>
> Thank you.
>
> --
> Sergey A. Osokin