On Fri, Apr 20, 2018 at 05:08:29PM +0300, Slawa Olhovchenkov wrote:
> On Fri, Apr 20, 2018 at 03:55:25PM +0200, Willy Tarreau wrote:
>
> > On Fri, Apr 20, 2018 at 03:50:52PM +0300, Slawa Olhovchenkov wrote:
> > > Also something strange: after restart I see 100% busy on CPU#1 (the other
> > > CPUs are as before -- from 0.05 to 0.4). This is a busy loop over kevent:
> > >
> > > kqfd 11 cl 0 nc 0 eventlist 813400000 nevent 200 timeout 0.2000000
> > > ret 11 errno 0
> > >
> > > ev_kqueue.c:128
> > >
> > > It looks like some events are not removed from the eventlist and are
> > > permanently re-activated.
> >
> > I'm just realizing that you're not on linux, sorry. The multi-bind trick
> > I proposed only works there as the system is the one doing the LB between
> > the sockets.
> >
> > In your case it's different, only one thread will likely take the traffic
> > (the last one bound) as it's the one whose socket replaces the previous
> > ones.
> >
> > Thus for you it's better to stick to a single listener, and if you want to
> > increase the fairness between the sockets, you can reduce tune.maxaccept in
> > the global section like below:
> >
> > global
> > tune.maxaccept 8
>
> No significant difference: the average load rose, and the unequal CPU load
> is still at the same level.
Then you can try setting it even lower (1) and also decreasing maxpollevents:

global
    tune.maxaccept 1
    tune.maxpollevents 20

This will force events to be smoothed over time, giving the various threads
more chances to catch them while the others are working. It will increase the
average load. If you see better behaviour, you can then play with these values
to see which one has the most significant effect.
I'm now thinking about something we could possibly do to perform some form
of internal thread load balancing when the OS cannot do it, and it should
not be difficult (a bit ugly but what is not ugly in load balancing anyway?).
Willy