So, let’s say you have 8 cores and 1 RX and 1 TX queue:
Core 0: RX queue
Core 1: TX queue
Core 2 to 7: nginx processes
What tool or configuration file would I have to use to dedicate cores to processes?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Perhaps you should use pidstat to validate which processes are running on the
two busy cores?
Did that and can confirm that CPUs 5 and 6 are not used exclusively by networking, but also by nginx and php-fpm.
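In case it's useful for cross-checking: a quick way to list which processes were last scheduled on a given core (core 5 here; `psr` is the "last ran on" processor column) is:

```shell
# List PID, last-run CPU (psr) and command name for all processes,
# keeping only those whose last-run CPU is core 5.
ps -eo pid,psr,comm --no-headers | awk '$2 == 5'
```

pidstat from sysstat shows the same picture over time (e.g. `pidstat -u 1` includes a per-task CPU column).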
On Jan 11, 2018, at 6:25 AM, Vlad K. wrote:
On 2018-01-11 11:59, Lucas Rolff wrote:
Now, in your case with php-fpm in the mix as well, controlling that
can be hard ( not sure if you can pin php-fpm processes to cores ) –
but for nginx and RX/TX queues, it’s for sure possible.
Should be doable with cgroups / cpusets, or systemd's CPUAffinity setting?
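If php-fpm runs under systemd, a sketch of the CPUAffinity approach could be a drop-in like the following (the unit name, path, and core list are assumptions based on the 8-core layout discussed earlier):

```ini
# /etc/systemd/system/php7.0-fpm.service.d/cpuaffinity.conf (hypothetical path)
# Keep php-fpm off cores 0-1, which are reserved for the NIC queues.
[Service]
CPUAffinity=2 3 4 5 6 7
```

After `systemctl daemon-reload` and restarting the service, `taskset -pc <pid>` should show the new mask on the master and its children.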
In high-traffic environments it generally makes sense to “dedicate” a core to each RX and TX queue you have on the NIC – this way you lower the chances of a single core being overloaded by network handling and thus degrading performance.
And then, at the same time within nginx, map the individual worker processes to the remaining cores.
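That nginx-side mapping can be sketched with worker_processes and worker_cpu_affinity (the masks below assume the 8-core layout from earlier, with cores 2–7 left for nginx):

```nginx
# nginx.conf sketch: six workers, each pinned to one of cores 2-7.
# Each mask is a CPU bitmask with the least significant bit = core 0.
worker_processes 6;
worker_cpu_affinity 00000100 00001000 00010000 00100000 01000000 10000000;
```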
Or would it make sense (if possible at all) to assign two or three more
cores to networking interrupts?
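If you do want to move interrupts onto specific cores, each IRQ has a CPU bitmask under /proc/irq/&lt;N&gt;/. A sketch, where the IRQ number (24) and the mask are assumptions to adapt after checking /proc/interrupts (note that the irqbalance daemon, if running, may rewrite manual settings):

```shell
# Needs root; skipped harmlessly if IRQ 24 doesn't exist or isn't writable.
if [ -w /proc/irq/24/smp_affinity ]; then
    # Allow IRQ 24 to be serviced by cores 0 and 1 (hex bitmask 0x3).
    echo 3 > /proc/irq/24/smp_affinity
    # Equivalent, expressed as a core list instead of a mask:
    echo 0-1 > /proc/irq/24/smp_affinity_list
fi
```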
If it’s always the same two cores, it might be another process that uses those two cores and thus maxes them out.
One very likely possibility would be interrupts from e.g. networking. You can check /proc/interrupts to see which cores handle the network interrupts.
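For example (the interface name eth0 is a placeholder; the numeric columns are per-CPU interrupt counts, one column per core):

```shell
# The first line of /proc/interrupts is a header with one column per CPU;
# each following row is one interrupt source with its per-CPU counts.
head -n 1 /proc/interrupts
# Substitute your NIC's interface name for eth0.
grep eth0 /proc/interrupts || true
```

Running the grep under `watch -n1 -d` makes it easy to spot which CPU columns are climbing.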
Hello!
I have nginx with php-fpm running on a 16 core Ubuntu 16.04 instance. The
server is handling more than 10 million requests per hour.
https://imgur.com/a/iRZ7V
As you can see in the htop screenshot, cores 6 and 7 are maxed out, and that's the case constantly - even after restarting nginx.