haproxy-1.8.8 seamless reloads failing with abns@ sockets

2018-05-12 Thread Jarno Huuskonen
Hi,

I'm testing 1.8.8 (1.8.8-52ec357 snapshot) and seamless reloads
(expose-fd listeners).

I'm testing with this config (missing some default timeouts):
--8<
global
stats socket /tmp/stats level admin expose-fd listeners

defaults
mode http
log global
option httplog
retries 2
timeout connect 1500ms
timeout client  10s
timeout server  10s

listen testme
bind ipv4@127.0.0.1:8080
server test_abns_server abns@wpproc1 send-proxy-v2

frontend test_abns
bind abns@wpproc1 accept-proxy
http-request deny deny_status 200
--8<

Reloads (kill -USR2 $(cat /tmp/haproxy.pid)) are failing:
"Starting frontend test_abns: cannot listen to socket []"
(And requests to 127.0.0.1:8080 time out.)
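For reference, this is roughly how I start and reload (config path is a placeholder; master-worker mode matches the pidfile/USR2 usage above):
--8<
# start in master-worker mode
haproxy -W -f /etc/haproxy/haproxy.cfg -p /tmp/haproxy.pid

# seamless reload: USR2 makes the master re-execute itself; with
# "expose-fd listeners" on the stats socket the listening FDs are
# passed on to the new workers instead of being re-bound
kill -USR2 $(cat /tmp/haproxy.pid)
--8<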

I guess the problem is that on reload haproxy tries to bind the
abns socket again, because (proto_uxst.c) uxst_bind_listener /
uxst_find_compatible_fd doesn't find the existing file descriptor
(the one copied over from the old process) for this abns socket.

Is uxst_find_compatible_fd only looking for .X.tmp sockets
and ignoring abns sockets whose path starts with \0?

Using a unix socket instead of an abns socket makes the reload work.

-Jarno

-- 
Jarno Huuskonen



Re: Cannot handle more than 1,000 clients / s

2018-05-12 Thread Daniel
Hi,

maybe you need to increase the ulimit and max connections in the haproxy config.
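Something along these lines in the global/defaults sections (the numbers are illustrative only, not a recommendation):
--8<
global
    maxconn 20000    # total concurrent connections for the process
    # the fd limit (ulimit-n) is normally derived from maxconn,
    # but it can also be forced explicitly:
    # ulimit-n 40960

defaults
    maxconn 19500    # per-frontend ceiling, slightly below global
--8<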

On 12.05.18 at 15:54, "Jarno Huuskonen" wrote:

> Hi,
> 
> On Fri, May 11, Marco Colli wrote:
> > >
> > > Do you get better results if you'll use http instead of https ?
> > 
> > 
> > I already tested it yesterday and the results are pretty much the same
> > (only a very small improvement, which is expected, but not a substantial
> > change).
> 
> Couple of things to check:
> - first: can you test serving the response straight from haproxy,
>   something like:
> frontend www-frontend
>   ...
>   http-request deny deny_status 200
> 
> - second: from the stats screen captures you sent, it looks like
>   "backend www-backend" is limited to 500 sessions; try increasing
>   the backend fullconn
>   (https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-fullconn)
> 
> Are you running haproxy 1.6.3? It's pretty old (December 2015).
> 
> -Jarno
> 
> -- 
> Jarno Huuskonen


Re: Cannot handle more than 1,000 clients / s

2018-05-12 Thread Jarno Huuskonen
Hi,

On Fri, May 11, Marco Colli wrote:
> >
> > Do you get better results if you'll use http instead of https ?
> 
> 
> I already tested it yesterday and the results are pretty much the same
> (only a very small improvement, which is expected, but not a substantial
> change).

Couple of things to check:
- first: can you test serving the response straight from haproxy,
  something like:
frontend www-frontend
  ...
  http-request deny deny_status 200

- second: from the stats screen captures you sent, it looks like
  "backend www-backend" is limited to 500 sessions; try increasing
  the backend fullconn
  (https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-fullconn)
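If the servers use minconn, fullconn sets the backend load at which they reach their full maxconn; a sketch (server names, addresses and numbers are placeholders):
--8<
backend www-backend
    fullconn 2000    # servers reach their full maxconn at 2000 backend sessions
    server app1 10.0.0.1:8080 minconn 100 maxconn 1000 check
--8<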

Are you running haproxy 1.6.3? It's pretty old (December 2015).

-Jarno

-- 
Jarno Huuskonen



Re: stable-bot: WARNING: 13 bug fixes in queue for next release

2018-05-12 Thread Tim Düsterhus
Hi

On 07.05.2018 at 13:12, stable-...@haproxy.com wrote:
> Thus the computed ideal release date for 1.8.9 would be 2018/05/10, which is 
> in one week or less.
> 

May 10th has passed. Of what use is the computed ideal release date when
it just expires and neither a release nor a mail explaining why it needs
to be delayed arrives? As a side question: how is that date computed?

Willy said in his reply to the first notification:

> Overall the purpose of this bot is to remind us stable maintainers
> about the need to issue a release soon and at the same time to help
> everyone else synchronise with this.

IMO for synchronization the date needs to be reliable.

Best regards
Tim Düsterhus



Re: Show: h-app-proxy – Application server inside haproxy

2018-05-12 Thread Aleksandar Lazic
Hi Tim.

On 11.05.2018 at 20:57, Tim Düsterhus wrote:
> Hi list,
> 
> I recently experimented with the Lua API to check out its capabilities
> and wanted to show off the results:
> 
> I implemented a very simple short URL service entirely in haproxy with
> Redis as its backend. No backend service needed :-)

Cool stuff ;-)
Thanks for sharing.

> Thanks to Thierry for his Redis Connection Pool implementation:
> http://blog.arpalert.org/2018/02/haproxy-lua-redis-connection-pool.html
> 
> Thierry, note that you made a small typo in your pool: r.release(conn)
> in renew should read r:release(conn).
> 
> Blog post  : https://bl.duesterhus.eu/20180511/
> GitHub : https://github.com/TimWolla/h-app-roxy
> Live Demo  : https://bl.duesterhus.eu/20180511/demo/DWhxJf2Gpt
> Hacker News: https://news.ycombinator.com/item?id=17049715
>
> Best regards
> Tim Düsterhus
> 
> PS: Don't use this at home or at work even :-)

;-)

Best regards
Aleks



Re: Haproxy support for handling concurrent requests from different clients

2018-05-12 Thread Igor Cicimov
On Fri, 11 May 2018 8:01 pm Mihir Shirali  wrote:

> Thanks Aleksandar for the help!
> I did look up some examples for setting 503 - but all of them (as you've
> indicated) seem based on src ip or src header. I'm guessing this is more
> suitable for a DOS/DDOS  attack? In our deployment, the likelihood of
> getting one request from multiple clients is more than multiple requests
> from a single client.
> As an update the rate-limit directive has helped. However, the only
> problem is that the client does not know that the server is busy and
> *could* time out. It would be great if it were possible to somehow send a
> 503 out, so the clients could retry after a random time.
>

Or even better, a 429.
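For example, something along these lines (frontend/backend names, address and threshold are made up; deny_status support depends on the haproxy version):
--8<
frontend www
    bind :8080
    # turn "backend is saturated" into a fast 429 instead of a client timeout
    acl app_busy be_conn(app) ge 500
    http-request deny deny_status 429 if app_busy
    default_backend app

backend app
    server app1 10.0.0.1:8080 maxconn 500 check
--8<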