Are upstream keepalive connections usable with websockets?
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,272808,275486#msg-275486
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Hi Guys,
we solved the problem and I wanted to give you feedback about the solution.
In the end it was a problem with our Linux IP routes.
After implementing source-based policy routing, this nginx configuration
worked.
Thank you for your support!
Kind Regards
Lars
Summary of Solution:
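A minimal sketch of what the source-based policy routing could look like, so that replies sourced from each local address leave via the interface owning it (interface names, gateway address, and table number are assumptions, not from the thread):

```shell
# create one routing table per interface/address pair
echo "100 t_eth0" >> /etc/iproute2/rt_tables

# routes for that table: on-link subnet plus a default gateway out eth0
ip route add 192.168.1.0/24 dev eth0 src 192.168.1.130 table t_eth0
ip route add default via 192.168.1.1 dev eth0 table t_eth0

# policy rule: packets sourced from this IP use that table
ip rule add from 192.168.1.130/32 table t_eth0
# ...repeat for 192.168.1.131-139, one table and rule per address
```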
On 09/03/2017 21:10, larsg wrote:
Hi Reinis,
yes, IPs exist:
ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.130 netmask 255.255.255.0 broadcast 192.168.1.255
ether fa:16:3e:1e:ad:da txqueuelen 1000 (Ethernet)
...
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
Hi everybody,
OK, I tracked down another Linux network problem, which I have now solved.
The situation is now as follows:
When I call my upstream address via curl (on the nginx host) while selecting
the corresponding local interface (eth0-9 = 192.168.1.130-139), everything is
fine.
curl
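The verification step described above can be done by pinning curl to one local source address (the backend URL here is a placeholder; `--interface` also accepts a NIC name):

```shell
# force the source address, so replies must route back via that IP
curl --interface 192.168.1.130 http://backend.internal:8080/

# or pin to an interface instead of an address
curl --interface eth0 http://backend.internal:8080/
```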
> When enabling the sysctl option "net.ipv4.ip_nonlocal_bind = 1" it is
> possible to use the local IP addresses (192.168.1.130-139) as proxy_bind
> addresses.
> But then, using such an address (other than 0.0.0.0), nginx will produce an
> error message.
Do the 192.168.1.130-139 IPs actually exist and are
Thanks for the advice.
I implemented this approach, unfortunately not with 100% success.
When enabling the sysctl option "net.ipv4.ip_nonlocal_bind = 1" it is
possible to use the local IP addresses (192.168.1.130-139) as proxy_bind
addresses.
But then, using such an address (other than 0.0.0.0), nginx will produce an
error message.
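For reference, the combination being tested looks roughly like this (the upstream name is a placeholder; the sysctl belongs in /etc/sysctl.conf or equivalent):

```nginx
# prerequisite on the host:
#   sysctl -w net.ipv4.ip_nonlocal_bind=1

location / {
    proxy_bind 192.168.1.130;    # one of the local source addresses
    proxy_pass http://backend;
}
```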
This is just a matter of the number of IP addresses you have in the
proxy_bind pool and a suitable hash function for the split_clients map.
Even if you add additional logic to the proxy_bind IP address selection, you
can still face the same problem.
On 3/8/17 9:45 PM, Tolga Ceylan wrote:
Is IP_BIND_ADDRESS_NO_PORT the best solution for the OP's case? Unlike the
blog post with two backends, the OP's case has one backend server. If any
of the hash slots exceeds the 65K port limit, there is no chance to
recover: despite there being enough total port capacity, the client will
receive an error if the
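As a rough sanity check of the capacity argument: with ten source IPs (192.168.1.130-139, from the thread) and a single backend IP:port, each (source IP, source port) pair can carry one connection. The usable port range per IP below is an assumption; check `net.ipv4.ip_local_port_range` on the actual host.

```shell
# total capacity = source IPs × usable source ports per IP
ports_per_ip=$(( 65535 - 1024 + 1 ))   # assumed usable range: 64512 ports
n_ips=10                               # 192.168.1.130-139
echo $(( n_ips * ports_per_ip ))       # prints 645120
```

So ten source addresses comfortably cover the 100k-500k target, but only if the hash spreads clients evenly enough that no single slot exhausts its 64K ports.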
On 3/7/17 10:50 PM, larsg wrote:
> Hi,
>
> we are operating native nginx 1.8.1 on RHEL as a reverse proxy.
> The nginx routes requests to a backend server that can be reached from the
> proxy via a single internal IP address.
> We have to support a large number of concurrent websocket connections
On 3/8/17 3:57 AM, Tolga Ceylan wrote:
Of course, with split_clients you are at the mercy of the hashing, and you
hope that the distribution will spread work evenly based on the incoming
client address space and the duration of these connections, so you might run
into the limits despite having enough port capacity. More importantly, in
case
Yes, the split_clients solution fits the described use case perfectly.
Also, nginx >= 1.11.4 supports the IP_BIND_ADDRESS_NO_PORT socket
option ([1], [2]) on supported systems (Linux kernel >= 4.2, glibc >= 2.23),
which may be helpful as well.
Quote from [1]:
[..]
Add IP_BIND_ADDRESS_NO_PORT
How about using
split_clients "${remote_addr}AAA" $proxy_ip {
    10%  192.168.1.10;
    10%  192.168.1.11;
    ...
    *    192.168.1.19;
}

proxy_bind $proxy_ip;
where $proxy_ip
> On 07.03.2017 at 22:12, Nelson Marcos wrote:
Do you really need to use different source IPs, or is that just a solution
that you picked?
Also, is it an option to set the keepalive option in your upstream
configuration section?
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
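Following that documentation link, a minimal upstream keepalive setup looks roughly like this (server address and upstream name are placeholders):

```nginx
upstream backend {
    server 10.0.0.5:8080;   # placeholder backend address
    keepalive 64;           # idle connections cached per worker process
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # drop the client's "close" header
    }
}
```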
Best regards,
NM
2017-03-07 16:50 GMT-03:00 larsg
Hi,
we are operating a native nginx 1.8.1 on RHEL as a reverse proxy.
nginx routes requests to a backend server that can be reached from the
proxy via a single internal IP address.
We have to support a large number of concurrent websocket connections - say
100k to 500k.
As we don't want to
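For context, a standard websocket pass-through on the proxy side looks roughly like this (the location path and upstream name are placeholders):

```nginx
location /ws/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;                      # required for Upgrade
    proxy_set_header Upgrade $http_upgrade;      # forward the handshake
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 1h;                       # websockets are long-lived
}
```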