Set multi_accept off, and the web service did not hang this time:
https://i.imgur.com/irbA5MO.png
But the connections in CLOSE_WAIT and LAST_ACK spiked.
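For reference, multi_accept is toggled in the events block of nginx.conf; a minimal sketch of the change described above (the worker_connections value is illustrative, not from this thread):

```nginx
# events context of nginx.conf -- values are illustrative
events {
    worker_connections 1024;  # illustrative limit per worker
    multi_accept off;         # accept one new connection per event, not all at once
}
```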
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,282613,282675#msg-282675
It's a shared server and I am unable to modify the domain's Cloudflare/DNS
settings.
As said, the question mostly is why Nginx freezes for the same setup and
traffic while Apache handles it just fine.
If it's an issue with an Nginx setting, what should I change? Or is it a bug
in Nginx?
Is your nginx/Apache site visible on the internet without any authentication?
If so, I recommend that you access your site directly, not through Cloudflare,
with redbot.org, which is the best HTTP debugger ever, for both the nginx and
Apache versions of the site, and see how they compare.
1. What does GET / return?
2. You said that nginx was configured as a reverse proxy. Is / proxied to a
back-end?
3. Does GET / return the same content to different users?
4. Is the user-agent identical for these suspicious requests?
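Questions 3 and 4 can be checked straight from the access log. A sketch, assuming a combined-format log; the log path and the sample lines below are invented stand-ins for the real file:

```shell
#!/bin/sh
# Sketch: count user-agents for requests whose path is exactly "/".
# The sample log below stands in for the real access log (assumption).
log=access.log
cat > "$log" <<'EOF'
162.158.88.4 - - [10/Jan/2019:16:55:01 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
162.158.89.99 - - [10/Jan/2019:16:55:02 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
162.158.90.176 - - [10/Jan/2019:16:55:03 +0000] "GET /style.css HTTP/1.1" 200 128 "-" "curl/7.61.0"
EOF
# Split on double quotes: $2 is the request line, $6 is the user-agent.
awk -F'"' '$2 ~ /^GET \/ HTTP/ {print $6}' "$log" | sort | uniq -c | sort -rn
```

If one user-agent dominates the suspicious requests, that answers question 4 without touching nginx itself.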
Sent from my iPhone
> On Jan 10, 2019, at 11:19 PM, gnusys wrote:
The TCP state graph for the situation is:
https://i.imgur.com/USECPtc.png
You can see that at 16:55, FIN_WAIT1, CLOSE_WAIT and ESTABLISHED take a steep
climb. At this point Nginx hangs, as the server has a script that checks stub
status and this doesn't finish. The server itself and all other services
The domain is proxied over cloudflare and the access log shows a large
number of requests to the website from the cloudflare servers
121115 162.158.88.4
121472 162.158.89.99
121697 162.158.90.176
122265 162.158.91.97
122969 162.158.93.113
125020 162.158.91.103
126132 162.158.90.194
128913
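Per-IP counts like the list above come out of a standard access-log pipeline. A sketch, with an invented sample log standing in for the real one:

```shell
#!/bin/sh
# Sketch: requests per client IP, ascending, like the list above.
# sample_access.log is an invented stand-in for the real access log.
log=sample_access.log
cat > "$log" <<'EOF'
162.158.88.4 - - [10/Jan/2019:16:55:01 +0000] "GET / HTTP/1.1" 200 512
162.158.89.99 - - [10/Jan/2019:16:55:02 +0000] "GET / HTTP/1.1" 200 512
162.158.89.99 - - [10/Jan/2019:16:55:03 +0000] "GET / HTTP/1.1" 200 512
EOF
# First field of each line is the client IP; count and sort ascending.
awk '{print $1}' "$log" | sort | uniq -c | sort -n
```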
How do you know that this is an attack and not “normal traffic”?
How are these requests different from regular requests?
What do the weblogs say about the “attack requests”?
> On 10 Jan 2019, at 10:30 PM, gnusys wrote:
>
> My current settings are higher, except worker_processes:
>
> worker_processes 1;
Can multi_accept being on cause this?
I have now set multi_accept to off and set up Nginx again as a reverse
proxy. The attack is not ongoing now, so I can't tell immediately whether that
setting helps or not.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,282613,282646#msg-282646
My current settings are higher, except worker_processes:
worker_processes 1;
worker_rlimit_nofile 69152;
worker_shutdown_timeout 10s;
thread_pool iopool threads=32 max_queue=65536;
I think the issue is that nginx accumulates connections in ESTABLISHED,
CLOSE_WAIT and FIN_WAIT1, judging from successive netstat -apn output.
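The per-state totals behind observations like that can be pulled from netstat output with a short pipeline. A sketch; the sample lines below are invented, and on the live box you would pipe `netstat -ant` (or `ss -ant`) straight into the awk stage instead:

```shell
#!/bin/sh
# Sketch: count TCP connections by state from netstat -ant style output.
# netstat_sample.txt is an invented stand-in for live netstat output.
cat > netstat_sample.txt <<'EOF'
tcp 0 0 10.0.0.1:80 162.158.88.4:4411 ESTABLISHED
tcp 0 0 10.0.0.1:80 162.158.89.99:3300 CLOSE_WAIT
tcp 0 0 10.0.0.1:80 162.158.90.176:1234 ESTABLISHED
tcp 0 0 10.0.0.1:80 162.158.91.97:5521 FIN_WAIT1
EOF
# Last field of each tcp line is the state; tally and sort by count.
awk '/^tcp/ {print $NF}' netstat_sample.txt | sort | uniq -c | sort -rn
```

Run repeatedly (e.g. under watch), this shows whether ESTABLISHED and CLOSE_WAIT are genuinely climbing or just churning.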
Your web server logs should have the key to solving this.
Do you know what URL was being requested? Do the URLs look valid?
Are the requests all for the same resource?
Are the requests coming from a single IP range?
Are the requests all coming with the same user-agent?
Does the time this started
Try this:
worker_processes 2;
worker_rlimit_nofile 32767;
thread_pool iopool threads=16 max_queue=32767;
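For context, all three of those directives live in the top-level (main) context of nginx.conf; a sketch of where they sit (the `auto` alternative is a common option, not something suggested in this thread):

```nginx
# main context of nginx.conf -- values as suggested above
worker_processes 2;            # or "auto" to match CPU core count
worker_rlimit_nofile 32767;    # open-file limit per worker process
thread_pool iopool threads=16 max_queue=32767;
```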
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,282613,282640#msg-282640
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org
I have more info on the system state at the time the CLOSE_WAIT connections
went skyrocketing.
Memory:
###
KiB Mem : 13174569+total, 8684164 free, 28138264 used, 94923264 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 86984112 avail Mem
This server is not using network drives, and the only thing I can think of
is the temp paths set to /dev/shm:
--http-client-body-temp-path=/dev/shm/client_temp
--http-proxy-temp-path=/dev/shm/proxy_temp
--http-fastcgi-temp-path=/dev/shm/fastcgi_temp
--http-uwsgi-temp-path=/dev/shm/uwsgi_temp
--http-
The issue was identified to be an enormous number of HTTP requests (an
attack) to one of the hosted domains that uses Cloudflare. The traffic is
coming in from Cloudflare, and this was exhausting nginx in terms of the TCP
stack.
#
# netstat -tn|
Hello!
On Thu, Jan 10, 2019 at 08:27:08AM +0530, Anoop Alias wrote:
> Have had a really strange issue on an Nginx server configured as a reverse
> proxy, wherein the server stops responding when the network connections in
> ESTABLISHED state and FIN_WAIT state are very high compared to normal
> working
The important question here is not the connections in FIN_WAIT. It’s “why do
you have so many sockets in ESTABLISHED state?”
First thing to do is to run netstat -ant | grep tcp and see where these
connections go.
Do you have a configuration that is causing an endless loop of requests?
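As a hypothetical illustration of the kind of loop being asked about (this is NOT the poster's config): a proxy_pass whose upstream name resolves back to the same machine makes nginx send each request to itself, so every client connection spawns more connections until the stack is exhausted:

```nginx
# Hypothetical request-loop example -- invented, not from this thread.
# If example.com resolves to this same server, each request to this
# vhost generates another request to the same vhost, and ESTABLISHED
# sockets pile up without bound.
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://example.com;  # points back at this very vhost
    }
}
```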
Hi,
Have had a really strange issue on an Nginx server configured as a reverse
proxy, wherein the server stops responding when the network connections in
ESTABLISHED state and FIN_WAIT state are very high compared to normal
working.
If you see the below network graph, at around 00:30 hours there is a