On Fri, Apr 20, 2018 at 07:22:59PM +0200, Willy Tarreau wrote:

> On Fri, Apr 20, 2018 at 07:23:37PM +0300, Slawa Olhovchenkov wrote:
> > On Fri, Apr 20, 2018 at 05:33:34PM +0300, Slawa Olhovchenkov wrote:
> > 
> > > On Fri, Apr 20, 2018 at 04:23:00PM +0200, Willy Tarreau wrote:
> > > 
> > > > > > Thus for you it's better to stick to a single listener, and if you 
> > > > > > want to
> > > > > > increase the fairness between the sockets, you can reduce 
> > > > > > tune.maxaccept in
> > > > > > the global section like below :
> > > > > > 
> > > > > >   global
> > > > > >      tune.maxaccept 8
> > > > > 
> > > > > No significant difference: average load rose, unequal CPU load still
> > > > > at the same level.
> > > > 
> > > > Then you can try setting it even lower (1) and decreasing maxpollevents:
> > > > 
> > > >    global
> > > >       tune.maxaccept 1
> > > >       tune.maxpollevents 20
> > > 
> > > No difference, the imbalance is about the same.
> > > 
> > 
> > Can I get per-thread stats like connection counts and bytes transferred?
> 
> Not by default, as such information is per-socket, per-frontend, etc.
> However, with one listening socket per thread and "option socket-stats",
> you would have this. But in your case it doesn't help, since a single socket
> receives all the traffic anyway.
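For illustration, a sketch of what such a per-thread listener setup could look like (HAProxy 1.8 syntax; the frontend name, port, and thread numbers here are assumptions, not from this thread):

   frontend fe_web
      # per-socket statistics, one listening socket pinned to each thread
      option socket-stats
      bind :8080 process 1/1
      bind :8080 process 1/2
      bind :8080 process 1/3
      bind :8080 process 1/4

Each "bind" line creates its own socket (via SO_REUSEPORT), so each one gets its own stats entry.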
> 
> The most detailed per-thread info you can get in your situation is by issuing
> "show activity" on the CLI; you will then get the information for the thread
> that processes the request:
> 
>   thread_id: 0
>   date_now: 1524244468.204596
>   loops: 754
>   wake_cache: 576
>   wake_tasks: 0
>   wake_applets: 14
>   wake_signal: 0
>   poll_exp: 590
>   poll_drop: 9
>   poll_dead: 0
>   poll_skip: 0
>   fd_skip: 0
>   fd_lock: 0
>   fd_del: 109905
>   conn_dead: 0
>   stream: 109922
>   empty_rq: 170
>   long_rq: 414
> 
> This information is quite internal, but the following counters will be
> useful:
>   - loops : number of calls to the event loop
>   - stream : cumulated number of streams processed
>     (~= connections for HTTP/1, ~= requests for HTTP/2)
>   - long_rq : number of passes via the scheduler where the runqueue had
>     more than 200 tasks active
>   - empty_rq : number of passes via the scheduler where the runqueue was
>     empty
> 
> The rest is probably irrelevant in your case. By issuing the command a few
> times you'll iterate over all threads. You can even have multiple stats
> sockets, but there's no need to do anything complex first.

OK, I got responses from thread 3 and thread 7. Can I assume these
threads are mapped to CPUs 3 and 7?
CPU 3 is busy at 0.42 and CPU 7 at 0.17, yet "show activity" is about the same:

thread_id: 3
date_now: 1524245903.896442
loops: 119914128 184812851 173923683 168285397 165260657 155576226 167823891 142903166
wake_cache: 94411535 148233540 144512131 124600418 127222467 121950507 127136520 112878590
wake_tasks: 58100 101310 22429 290349 154221 99918 207275 77221
wake_applets: 0 0 0 0 0 0 0 0
wake_signal: 0 0 0 0 0 0 0 0
poll_exp: 94469635 148334850 144534560 124890767 127376688 122050425 127343795 112955811
poll_drop: 1238561 1910768 636228 4940166 2745656 1892360 3545183 1547972
poll_dead: 0 0 0 0 0 0 0 0
poll_skip: 0 0 0 0 0 0 0 0
fd_skip: 299688784 399135394 429530804 248537214 327557232 340203944 300964717 331714706
fd_lock: 27928376 42948659 46096172 22460864 31946577 34536787 28610880 33200161
fd_del: 6494887 10577819 3337682 37986875 16346727 10378443 23322889 8187644
conn_dead: 0 0 0 0 0 0 0 0
stream: 18389406 28258935 9395022 73963729 40378554 27746726 52452679 22637467
empty_rq: 102497936 158696266 164717166 111903546 129615612 130001228 123814511 121700524
long_rq: 0 0 0 0 0 0 0 0

thread_id: 7
date_now: 1524245909.919941
loops: 119974591 184907272 174012444 168371911 165345936 155659245 167910558 142976547
wake_cache: 94459348 148309188 144585673 124664378 127288213 122015537 127202211 112936359
wake_tasks: 58123 101357 22438 290505 154304 99961 207379 77257
wake_applets: 0 0 0 0 0 0 0 1
wake_signal: 0 0 0 0 0 0 0 0
poll_exp: 94517471 148410545 144608111 124954883 127442517 122115498 127409590 113013617
poll_drop: 1239145 1911817 636564 4942753 2747081 1893376 3546921 1548730
poll_dead: 0 0 0 0 0 0 0 0
poll_skip: 0 0 0 0 0 0 0 0
fd_skip: 299841823 399338153 429745993 248661716 327726077 340384467 301118348 331883440
fd_lock: 27943327 42970960 46120113 22472636 31963565 34555176 28626056 33217139
fd_del: 6497739 10583157 3339476 38006597 16354809 10383821 23334628 8191852
conn_dead: 0 0 0 0 0 0 0 0
stream: 18397654 28273212 9399987 74002315 40398851 27761908 52479235 22649415
empty_rq: 102550508 158777515 164801058 111960753 129682899 130070277 123878819 121762731
long_rq: 0 0 0 0 0 0 0 0
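In these dumps each counter line carries one value per thread (column i belongs to thread i), so a small parser makes cross-thread comparison easier. A sketch, assuming the field layout shown above:

```python
def parse_show_activity(text):
    """Parse "show activity" output where each counter line holds one
    value per thread (the column layout shown in the dumps above)."""
    stats = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        key = key.strip()
        values = rest.split()
        if key == "date_now":        # a single float timestamp
            stats[key] = float(values[0])
        elif key == "thread_id":     # the thread that answered the CLI
            stats[key] = int(values[0])
        else:                        # one integer counter per thread
            stats[key] = [int(v) for v in values]
    return stats

# Shortened sample taken from the thread_id 3 dump above
sample = (
    "thread_id: 3\n"
    "date_now: 1524245903.896442\n"
    "stream: 18389406 28258935 9395022 73963729\n"
)
stats = parse_show_activity(sample)
print(stats["stream"][3])  # -> 73963729, the busiest thread in the dump
```

Comparing the "stream" column across threads this way shows the imbalance directly (e.g. thread 3 vs thread 2 in the dumps above).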
