On Mon, Apr 23, 2018 at 09:41:18PM +0300, Slawa Olhovchenkov wrote:
> On Mon, Apr 23, 2018 at 08:32:39PM +0200, Willy Tarreau wrote:
>
> > On Mon, Apr 23, 2018 at 06:36:29PM +0300, Slawa Olhovchenkov wrote:
> > > On Sat, Apr 21, 2018 at 04:38:48PM +0300, Slawa Olhovchenkov wrote:
> > >
> > > > On Fri, Apr 20, 2018 at 03:55:25PM +0200, Willy Tarreau wrote:
> > > >
> > > > > Thus for you it's better to stick to a single listener, and if you
> > > > > want to increase the fairness between the sockets, you can reduce
> > > > > tune.maxaccept in the global section like below:
> > > > >
> > > > > global
> > > > >     tune.maxaccept 8
> > > > >
> > > > > The kqueue issue you report is still unclear to me however, I'm not
> > > > > much used to kqueue and always having a hard time decoding it.
> > > >
> > > > I tried to decode the first event on the looped thread.
> > > >
> > > > ev0 id 21 filt -2 flag 0 fflag 0 data 2400 udata 0
> > > >
> > > > This is EVFILT_WRITE (available 2400 bytes) on socket 21.
> > > >
> > > > This is DNS socket:
> > > >
> > > > 12651 haproxy 21 s - rw---n-- 9 0 UDP 185.38.13.221:28298 8.8.8.8:53
> > > >
> > > > Actually I have only one DNS request per 2 seconds.
> > >
> > > Can this (DNS use) cause 100% CPU use?
> >
> > It should not but there could be a bug. Olivier tried to reproduce here but
> > failed to get any such problem. We'll definitely need your configuration,
> > we've been guessing too much now and we're not making any progress on this
> > issue.
>
> I mean it seems to need some (timing) combination of HTTP requests and
> DNS requests/responses.
>
> global
>     nbproc 1
>     nbthread 8
>     cpu-map auto:1/1-8 0-7
>     log /dev/log local0
>     tune.ssl.default-dh-param 2048
>     tune.ssl.cachesize 1000000
>     tune.ssl.lifetime 600
>     tune.ssl.maxrecord 1460
>     tune.maxaccept 1
>     tune.maxpollevents 20
>     maxconn 140000
>     stats socket /var/run/haproxy.sock level admin
>     user www
>     group www
>     daemon
>
> defaults
>     log global
>     mode http
>     http-reuse always
>     option http-keep-alive
>     option httplog
>     option dontlognull
>     retries 3
>     maxconn 140000
>     backlog 4096
>     timeout connect 5000
>     timeout client 15000
>     timeout server 50000
>
> listen stats
>     bind :xxxx
>     mode http
>     log global
>
>     maxconn 10
>
>     timeout client 100s
>     timeout server 100s
>     timeout connect 100s
>     timeout queue 100s
>
>     stats enable
>     stats hide-version
>     stats refresh 30s
>     stats show-node
>     stats uri /haproxy?stats
>
> frontend balancer
>     bind *:80
>     bind *:443 ssl crt XXXX
>     # remove X-Forwarded-For header
>     http-request set-header X-Forwarded-Port %[dst_port]
>     http-request set-header X-Forwarded-Proto https if { ssl_fc }
>     http-request set-header X-Forwarded-Proto http if ! { ssl_fc }
>     reqidel ^X-Forwarded-For:.*
>     option dontlog-normal
>     option forwardfor
>     timeout client 50000ms
>     use_backend ssl-pool if { ssl_fc }
>     default_backend default-pool
>
> backend default-pool
>     balance roundrobin
>     option httpchk GET /health-check
>     timeout connect 1000ms
>     timeout server 5000ms
>     server elb1 x.eu-west-1.elb.amazonaws.com:80 maxconn 7000 id 1 check resolvers mydns resolve-prefer ipv4
>     server elb2 y.eu-west-1.elb.amazonaws.com:80 maxconn 7000 id 2 check resolvers mydns resolve-prefer ipv4
>
> backend ssl-pool
>     balance roundrobin
>     option httpchk GET /health-check
>     timeout connect 1000ms
>     timeout server 35s
>     server elb1 x.eu-west-1.elb.amazonaws.com:443 ssl verify none maxconn 7000 id 1 check resolvers mydns resolve-prefer ipv4
>     server elb2 y.eu-west-1.elb.amazonaws.com:443 ssl verify none maxconn 7000 id 2 check resolvers mydns resolve-prefer ipv4
>
> resolvers mydns
>     nameserver dns1 8.8.8.8:53
>     resolve_retries 3
>     timeout retry 1s
>     hold other 30s
>     hold refused 30s
>     hold nx 30s
>     hold timeout 30s
>     hold valid 10s
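
As a side note for anyone debugging the resolver behaviour: the runtime API can dump the per-nameserver counters (sent, valid, errors, timeouts) over the stats socket. A hypothetical session, assuming the socket path from the config above and that socat is installed; it is guarded so it degrades gracefully where haproxy is not running:

```shell
SOCK=/var/run/haproxy.sock
if command -v socat >/dev/null 2>&1 && [ -S "$SOCK" ]; then
    # "show resolvers" prints the counters for each configured nameserver
    echo "show resolvers mydns" | socat stdio "$SOCK"
else
    echo "stats socket not reachable: $SOCK"
fi
```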
Thank you, we'll retry with this.
Willy