On Thu, Apr 26, 2018 at 10:58:07AM +0300, Slawa Olhovchenkov wrote:
> > > I mean that with a dedicated listen socket the poller can also be
> > > dedicated, for load planning. For example:
> > >
> > > frontend tcp1
> > > bind x.x.x.206:443
> > > bind-process 1/9-1/16
> > >
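Spelled out a little further, the idea above might look like the sketch below (the backend name is made up; in haproxy 1.8 the per-thread form of this pinning is the `process` keyword on the `bind` line rather than `bind-process`):

```
frontend tcp1
    bind x.x.x.206:443 process 1/9-16   # socket served only by threads 9-16
    default_backend tcp1-pool           # hypothetical backend name
```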
On Thu, Apr 26, 2018 at 10:21:27AM +0300, Slawa Olhovchenkov wrote:
> > > Are pollers distinct from the frontend?
> > > Can I bind pollers to a CPU?
> >
> > Each thread has its own poller. Since you map threads to CPUs you indeed
> > have one poller per CPU.
>
> Does each poller poll all sockets or only
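Since each thread runs its own poller, pinning threads to CPUs effectively gives one poller per CPU. A minimal sketch of such a mapping in 1.8 syntax (the thread and CPU counts here are assumptions):

```
global
    nbproc 1
    nbthread 8
    cpu-map auto:1/1-8 0-7   # thread N runs on CPU N-1, so one poller per CPU
```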
On Wed, Apr 25, 2018 at 04:24:42PM +0300, Slawa Olhovchenkov wrote:
> > > TCP load raises CPU use on all cores (0-15); I expected CPU use to
> > > rise only on cores 8-15. What am I missing?
> >
> > It's unrelated to the frontend's bindings but to the way the socket's fd
> > is registered with pollers,
On Wed, Apr 25, 2018 at 01:12:25PM +0300, Slawa Olhovchenkov wrote:
> My issue doesn't originate at ordinary run time: if the issue doesn't
> exist now, it won't exist in the future until a reload.
Slawa, Olivier managed to reproduce something very similar to your
description on his box, and your
Slawa,
I have a question below:
backend default-pool
    balance roundrobin
    option httpchk GET /health-check
    timeout connect 1000ms
    timeout server 5000ms
    server elb1 x.eu-west-1.elb.amazonaws.com:80 maxconn 7000 id 1 check
    resolvers mydns
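The `resolvers mydns` reference implies a resolvers section that the preview does not show; a minimal sketch of one (the nameserver address and timings here are assumptions, not from the original mail):

```
resolvers mydns
    nameserver dns1 10.0.0.2:53   # assumed address
    resolve_retries 3
    timeout retry 1s
    hold valid 10s
```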
On Sat, Apr 21, 2018 at 01:49:37PM +0300, Slawa Olhovchenkov wrote:
> On Fri, Apr 20, 2018 at 09:01:24PM +0200, Willy Tarreau wrote:
> > So between these two the traffic is perfectly balanced. I don't see why you
> > wouldn't have other threads :-/
> >
> > Well, let's try something. In your
On Fri, Apr 20, 2018 at 09:01:24PM +0200, Willy Tarreau wrote:
> On Fri, Apr 20, 2018 at 08:59:32PM +0300, Slawa Olhovchenkov wrote:
> > OK, I got responses from thread 3 and thread 7.
>
> And never the other ones ? That's kind of strange!
I stopped after getting distinct threads with different CPU
On Fri, Apr 20, 2018 at 08:59:32PM +0300, Slawa Olhovchenkov wrote:
> OK, I got responses from thread 3 and thread 7.
And never the other ones ? That's kind of strange!
> Does this mean these threads are mapped to CPUs 3 and 7?
No, this is totally unrelated, unless of course you're using cpu-map to
map
On Fri, Apr 20, 2018 at 08:32:28PM +0300, Slawa Olhovchenkov wrote:
> > > Can I get per-thread stats like connection counts and bytes transferred?
> >
> > Not by default, as such information is per-socket, per-frontend, etc.
> > However, by having one listening socket per thread, and "option
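The quoted answer is cut off at "option". A sketch of the idea it describes, with one listening socket per thread, assuming the option in question is `option socket-stats` (the frontend keyword that reports stats per bind line; this name is my assumption, not confirmed by the quote):

```
frontend web
    option socket-stats        # assumed: one stats entry per bind line
    bind :443 process 1/1      # one bind line per thread
    bind :443 process 1/2
    bind :443 process 1/3
    bind :443 process 1/4
```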
On Fri, Apr 20, 2018 at 04:23:00PM +0200, Willy Tarreau wrote:
> > > Thus for you it's better to stick to a single listener, and if you want
> > > to increase the fairness between the sockets, you can reduce
> > > tune.maxaccept in the global section like below :
> > >
> > > global
> >
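The preview is cut at the `global` keyword; the suggestion presumably continues with a lowered `tune.maxaccept`. A sketch (the value 10 is an assumption, not the value from the original mail):

```
global
    tune.maxaccept 10   # accept fewer connections per wakeup so listeners share more fairly
```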
On Fri, Apr 20, 2018 at 03:50:52PM +0300, Slawa Olhovchenkov wrote:
> Also something strange: after restart I see 100% busy on CPU#1 (the other
> CPUs are as before -- from 0.05 to 0.4). This is a busy loop over kevent:
>
> kqfd 11 cl 0 nc 0 eventlist 81340 nevent 200 timeout 0.200
> ret 11 errno 0
On Fri, Apr 20, 2018 at 02:39:28PM +0300, Slawa Olhovchenkov wrote:
> On Fri, Apr 20, 2018 at 09:46:23AM +0200, Willy Tarreau wrote:
> > What you can do is to keep multiple listeners, each bound to a different
> > thread, exactly like you did with processes :
> >
> >     bind :80 ... process 1/1
>
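Extended to all threads, the per-thread bind approach above might look like the sketch below (four threads assumed; the "..." in the quote stands for the original's other bind options, omitted here):

```
frontend web
    bind :80 process 1/1
    bind :80 process 1/2
    bind :80 process 1/3
    bind :80 process 1/4
```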
On Fri, Apr 20, 2018 at 09:46:23AM +0200, Willy Tarreau wrote:
> > Hmm, maybe I was not clear.
> > In process mode all 8 CPUs have a load of 0.18. In thread mode the
> > average load is still about 0.18, but individual CPU loads now differ:
> >
> > 0: 0.13
> > 1: 0.15
> > 2: 0.07
> > 3: 0.40
> > 4:
On Fri, Apr 20, 2018 at 08:22:04AM +0200, Willy Tarreau wrote:
> On Fri, Apr 20, 2018 at 09:11:47AM +0300, Slawa Olhovchenkov wrote:
> > > Try 1.8.8, it contains the kqueue fix.
> >
> > It works (kqueue), nice!
>
> Excellent, thanks for your feedback!
Thanks for the fix!
> > Average load same as for
On Fri, Apr 20, 2018 at 09:11:47AM +0300, Slawa Olhovchenkov wrote:
> > Try 1.8.8, it contains the kqueue fix.
>
> It works (kqueue), nice!
Excellent, thanks for your feedback!
> The average load is the same as for multiprocess, but individual CPU loads
> range from 0.06 to 0.39. Is this normal or expected?
It can
Hello,
On 19 April 2018 at 14:31, Slawa Olhovchenkov wrote:
>> This is very useful, thank you. I'm seeing overall that when you're on
>> 1.7.10+kqueue and 1.8.5+poll the overall %user is the same. However
>> it's the system which makes a huge difference there (to be expected
>>
On Mon, Mar 26, 2018 at 05:33:47PM +0200, Willy Tarreau wrote:
> > red line is 'sys' cpu time
> > green line is 'user' cpu time
>
> This is very useful, thank you. I'm seeing overall that when you're on
> 1.7.10+kqueue and 1.8.5+poll the overall %user is the same. However
> it's the system which
On Mon, Mar 26, 2018 at 12:38:57PM +0300, Slawa Olhovchenkov wrote:
> > Could you check with your previous configuration (nbproc 2 and no
> > nbthread parameter) ? It could help to know if it is a threads-related
> > problem or not.
> >
> > Then, try to disable kqueue (nokqueue option in
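The truncated suggestion presumably refers to the `nokqueue` keyword in the global section, which forces haproxy to fall back to the poll-based poller; a sketch:

```
global
    nokqueue   # disable the kqueue poller (FreeBSD) and fall back to poll
```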
On Mon, Mar 26, 2018 at 11:03:42AM +0200, Christopher Faulet wrote:
> On 26/03/2018 at 03:10, Slawa Olhovchenkov wrote:
> > I am trying to use multithreading with haproxy 1.8.5 (on FreeBSD
> > 11-stable) and see high and saw-tooth CPU consumption.
> >
> > I am using the following config:
> >
> > global
> >
I am trying to use multithreading with haproxy 1.8.5 (on FreeBSD
11-stable) and see high and saw-tooth CPU consumption.
I am using the following config:
global
    nbproc 1
    nbthread 2
    cpu-map auto:1/1-2 0-1
For comparison, I am using haproxy 1.7.10 with the following config:
global
    nbproc 2
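For reference, the two configurations being compared, written out side by side (the cpu-map lines on the 1.7 side are assumptions, since the preview cuts off after `nbproc 2`):

```
# haproxy 1.8.5: one process, two threads
global
    nbproc 1
    nbthread 2
    cpu-map auto:1/1-2 0-1

# haproxy 1.7.10: two processes (assumed pinning)
global
    nbproc 2
    cpu-map 1 0
    cpu-map 2 1
```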