On 29/04/2015 12:56 PM, Krishna Kumar (Engineering) wrote:
> Dear all,
> 
> Sorry, my lab systems were down for many days and I could not get back
> on this earlier. After new systems were allocated, I managed to get all
> the requested information with a fresh run (sorry, this is a long mail
> too!). There are now 4 physical servers, running Debian 3.2.0-4-amd64,
> connected directly to a common switch:
> 
>     server1: Run 'ab' in a container, no cpu/memory restriction.
>     server2: Run haproxy in a container, configured with 4 nginx's,
>              cpu/memory configured as shown below.
>     server3: Run 2 different nginx containers, no cpu/mem restriction.
>     server4: Run 2 different nginx containers, for a total of 4 nginx,
>              no cpu/mem restriction.
> 
> The servers have 2 sockets, each with 24 cores. Socket 0 has cores
> 0,2,4,..,46 and Socket 1 has
> cores 1,3,5,..,47. The NIC (ixgbe) is bound to CPU 0. 

It is considered bad practice to bind all the queues of a NIC to one
CPU, as it creates a major bottleneck: every network interrupt must be
serviced by that single, saturated CPU, and HAProxy has to wait for it.
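You can confirm this by looking at how the interrupt counts are spread
per CPU. A minimal sketch, assuming the usual /proc/interrupts layout
(`sum_nic_irqs` is a hypothetical helper name, not a standard tool):

```shell
# Sum the per-CPU interrupt counts of all eth* lines in /proc/interrupts.
# The first line of the file is the CPU header row; each following row
# is "IRQ:  count count ...  chip  name".
sum_nic_irqs() {
    awk '
        NR == 1 { for (i = 1; i <= NF; i++) cpu[i] = $i; ncpu = NF; next }
        /eth/   { for (i = 2; i <= ncpu + 1; i++) total[i-1] += $i }
        END     { for (i = 1; i <= ncpu; i++) print cpu[i], total[i] + 0 }
    ' "$1"
}

sum_nic_irqs /proc/interrupts
```

If one CPU carries nearly all the counts while the rest sit near zero,
that CPU is your bottleneck.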

> Haproxy is started
> on CPUs 2,4,6,8,10,12,14,16, so it is on the same socket (shared L3
> cache) as the NIC (nginx is run on different servers as explained
> above). No tuning on nginx servers as the comparison is between

How many worker processes do you run on Nginx?

> 'ab' -> 'nginx' and 'ab' -> 'haproxy' -> nginx(s). The CPUs are
> "Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz". The containers are all
> configured with 8GB, the server having 128GB memory.
> 
> mpstat and iostat were captured during the test, where the capture
> started after 'ab' started and
> capture ended just before 'ab' finished so as to get "warm" numbers.
> 
> ------------------------------------------------------------------------------------------------------------------------
> Request directly to 1 nginx backend server, size=256 bytes:
> 
> Command: ab -k -n 100000 -c 1000 <nginx>:80/256
>     Requests per second:    69749.02 [#/sec] (mean)
>     Transfer rate:          34600.18 [Kbytes/sec] received
> ------------------------------------------------------------------------------------------------------------------------
> Request to haproxy configured with 4 nginx backends (nbproc=4), size=256
> bytes:
> 
> Command: ab -k -n 100000 -c 1000 <haproxy>:80/256
>     Requests per second:    19071.55 [#/sec] (mean)
>     Transfer rate:          9461.28 [Kbytes/sec] received
> 
>         mpstat (first 4 processors only, rest are almost zero):
> Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
> Average:     all    0.44    0.00    1.59    0.00    0.00    2.96    0.00    0.00    0.00   95.01
> Average:       0    0.25    0.00    0.75    0.00    0.00   98.01    0.00    0.00    0.00    1.00

All network interrupts are processed by CPU 0, which is saturated (note
the 98.01% in %soft above). You need to spread the NIC's queues across
different CPUs. Either use irqbalance or the following 'ugly' script,
which you will need to modify a bit since I have 2 NICs and you have
only 1. You also need to adjust the number of queues; run
'grep eth /proc/interrupts' and you will find out how many you have.

#!/bin/sh
# Note: the bit operation lshift() requires GNU awk.

gawk '
    # Convert a comma-separated CPU list into an smp_affinity bitmask.
    function get_affinity(cpus) {
        split(cpus, list, /,/)
        mask = 0
        for (val in list) {
            mask += lshift(1, list[val])
        }
        return mask
    }
    BEGIN {
        # Interrupt -> CPU core(s) mapping
        map["eth0-q0"]="0"
        map["eth0-q1"]="1"
        map["eth0-q2"]="2"
        map["eth0-q3"]="3"
        map["eth0-q4"]="4"
        map["eth0-q5"]="5"
        map["eth0-q6"]="6"
        map["eth0-q7"]="7"
        map["eth1-q0"]="12"
        map["eth1-q1"]="13"
        map["eth1-q2"]="14"
        map["eth1-q3"]="15"
        map["eth1-q4"]="16"
        map["eth1-q5"]="17"
        map["eth1-q6"]="18"
        map["eth1-q7"]="19"
    }
    /eth/ {
        irq = substr($1, 1, length($1) - 1)  # strip the trailing ":" from the IRQ number
        queue = $NF
        printf "%s (%s) -> %s (%08X)\n", queue, irq, map[queue], get_affinity(map[queue])
        system(sprintf("echo %08X > /proc/irq/%s/smp_affinity", get_affinity(map[queue]), irq))
    }
' /proc/interrupts
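For what it's worth, the mask arithmetic the script performs can be
checked by hand in plain shell; `cpus_to_mask` below is a hypothetical
helper doing the same computation as get_affinity() above:

```shell
# Turn a comma-separated CPU list into the hex bitmask format that
# /proc/irq/<N>/smp_affinity expects (bit i set => CPU i allowed).
cpus_to_mask() {
    mask=0
    for cpu in $(printf '%s\n' "$1" | tr ',' ' '); do
        mask=$((mask | (1 << cpu)))
    done
    printf '%08X\n' "$mask"
}

cpus_to_mask 0       # CPU 0 only  -> 00000001
cpus_to_mask 2,4,6   # CPUs 2,4,6  -> 00000054
```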

> Average:       1    1.26    0.00    5.28    0.00    0.00    2.51    0.00    0.00    0.00   90.95
> Average:       2    2.76    0.00    8.79    0.00    0.00    5.78    0.00    0.00    0.00   82.66
> Average:       3    1.51    0.00    6.78    0.00    0.00    3.02    0.00    0.00    0.00   88.69
> 
>                 pidstat:
> Average:      UID       PID    %usr %system  %guest    %CPU   CPU  Command
> Average:      105       471    5.00   33.50    0.00   38.50     -  haproxy
> Average:      105       472    6.50   44.00    0.00   50.50     -  haproxy
> Average:      105       473    8.50   40.00    0.00   48.50     -  haproxy
> Average:      105       475    2.50   14.00    0.00   16.50     -  haproxy
> ------------------------------------------------------------------------------------------------------------------------
> Request directly to 1 nginx backend server, size=64K
> 

I would also like to see pidstat and mpstat output while you test nginx
directly.

Cheers,
Pavlos

