Hello, I have a strange issue with nginx workers. For some time after
starting nginx, I notice that some worker processes cause high CPU load
(mostly sys CPU).
First, I captured syscall traces from one such process:
futex(0x157d914, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x157d910,
P.S.: we are using Gentoo with a 4.4.1 kernel and an Intel X3330 CPU @
2.66GHz (GenuineIntel, GNU/Linux).
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,264764,264766#msg-264766
___
nginx mailing list
nginx@nginx.org
Thank you.
We don't modify or overwrite files while sendfile is processing them.
We only create a temp file and then mv it into place.
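The publish pattern above can be sketched in shell (file and directory names are illustrative): writing the new content to a temporary file on the same filesystem and then renaming it means a reader only ever sees the old file or the complete new one.

```shell
# Atomic publish: write to a temp file in the target directory, then
# rename over the destination. rename(2) is atomic on the same filesystem,
# so a concurrent sendfile() sees old or new content, never a partial file.
dir=$(mktemp -d)                      # stand-in for the real docroot
tmp=$(mktemp "$dir/upload.XXXXXX")
printf 'new content\n' > "$tmp"
mv -f "$tmp" "$dir/file.dat"
cat "$dir/file.dat"                   # prints: new content
rm -r "$dir"
```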
Maybe it is the same bug related to threads AIO as in this message,
https://forum.nginx.org/read.php?21,264701,265016#msg-265016
and it would be fixed in
Debug log for the hung PID 7479: http://dev.vizl.org/debug.log.txt
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,264764,265188#msg-265188
Thank you. Waiting for the 1.9.13 branch.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,264764,265536#msg-265536
We found that we are running 'truncate -s 0' on files before removing them.
Could that potentially cause the problems mentioned above?
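For anyone reproducing this, the difference between the two delete strategies is visible from the shell (file names illustrative): truncate shrinks the inode in place, which any process still reading the file will observe, while a plain unlink leaves already-open file descriptors untouched.

```shell
# 'truncate -s 0' before rm: a reader that still has the file open suddenly
# sees size 0 / EOF, unlike rm alone, where open descriptors keep the data.
f=$(mktemp)
printf '0123456789' > "$f"
truncate -s 0 "$f"        # in-place shrink, visible to concurrent readers
wc -c < "$f"              # prints: 0
rm -f "$f"
```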
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,264764,265516#msg-265516
Sorry for the long delay in answering, but we have been doing some tests
and noticed that the problem appears when thread_pool is enabled:
thread_pool default threads=128 max_queue=1024
We need to use thread_pool and unfortunately cannot disable it permanently.
user www;
worker_processes 16;
thread_pool default threads=128 max_queue=1024;
worker_rlimit_nofile 65536;

###timer_resolution 100ms;
#error_log /home/logs/error_log.nginx error;
error_log /home/logs/error_log.nginx.debug debug;

events {
    worker_connections 3;
    use epoll;
}

http {
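For context on how this pool is consumed (the exact placement in our config is not shown above, so this is an assumption): defining a thread_pool has no effect until some context references it with the aio directive, e.g.:

```
# Assumed wiring: the pool named "default" above is only used where
# "aio threads" refers to it, e.g. inside an http/server/location context.
aio threads=default;
sendfile on;
```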
And what about the resulting cache file?
Will there be only one object in the cache when two clients GET the same
file simultaneously, or two different objects in the nginx proxy_cache_path?
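If both requests use the same proxy_cache_key, they target one cache element; whether the upstream is fetched once or twice for it can be controlled with proxy_cache_lock. A minimal sketch (zone name, paths, and backend are assumptions):

```
# Sketch: two simultaneous GETs for the same URI share one cache key;
# proxy_cache_lock lets only the first request populate the element
# while the second waits for it instead of fetching a duplicate.
proxy_cache_path /var/cache/nginx keys_zone=cache_one:10m;
server {
    location / {
        proxy_pass http://backend;            # assumed upstream
        proxy_cache cache_one;
        proxy_cache_key $scheme$host$request_uri;
        proxy_cache_lock on;
    }
}
```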
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,281621,281686#msg-281686
Hello, I noticed strange behavior with keepalive and the upstream module.
I have a PHP-FPM backend with this configuration setting:
request_terminate_timeout = 5s
And nginx config:
upstream phpfpm {
server unix:/tmp/php-fpm-7.sock max_fails=0 fail_timeout=1s;
keepalive 8;
}
location ~ \.php$ {
include
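One detail worth checking with this setup (a sketch; the contents of fastcgi_params and the exact location block are assumed): the upstream keepalive directive only keeps FastCGI connections open if nginx is told not to close them after each request via fastcgi_keep_conn.

```
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass phpfpm;        # the upstream with "keepalive 8" above
    fastcgi_keep_conn on;       # required for the keepalive pool to be reused
}
```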