Re: SSL termination with HA proxy

2019-04-15 Thread Aleksandar Lazic
Hi.

On 15.04.2019 at 18:06, bhanu chandra suman wrote:
> Hi,
> 
> As per your mail, I understand I should create the certificates on that server
> (.pem key). Is that right?

Yes.

A TLS/SSL server requires at least a key and a certificate.

Do you already have a key and a certificate?

This post may help you create the certificates:
https://serversforhackers.com/c/using-ssl-certificates-with-haproxy
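As a minimal sketch, a self-signed test key and certificate can be generated and bundled into the single `.pem` file that HAProxy's `crt` option expects (the paths and the CN are placeholders; a real deployment would use a CA-signed certificate):

```shell
# Generate a self-signed RSA key + certificate (CN is a placeholder) and
# concatenate them into the single .pem bundle HAProxy's "crt" option reads.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/example.key -out /tmp/example.crt \
  -subj "/CN=example.com"
cat /tmp/example.crt /tmp/example.key > /tmp/example.pem

# Sanity check: the bundle contains a parseable certificate
openssl x509 -in /tmp/example.pem -noout -subject
```

The certificate goes first and the private key after it in the bundle; HAProxy reads both from the one file.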

Regards
Aleks

> On Mon, Apr 15, 2019 at 9:27 PM Aleksandar Lazic <al-hapr...@none.at> wrote:
> 
> Hi.
> 
> On 15.04.2019 at 17:55, bhanu chandra suman wrote:
> >
> > root@ip-172-31-80-163:~# uname -a
> > Linux ip-172-31-80-163 4.15.0-1035-aws #37-Ubuntu SMP Mon Mar 18 16:15:14 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
> > root@ip-172-31-80-163:~# haproxy -v
> > HA-Proxy version 1.8.8-1ubuntu0.4 2019/01/24
> > Copyright 2000-2018 Willy Tarreau <wi...@haproxy.org>
> 
> Well, I can only assume this version has TLS/SSL enabled, since you didn't use `-vv`!
> 
> Please take a look at this blog post, which describes how to add TLS/SSL
> termination to haproxy.
> 
> 
> https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
> 
> Regards
> Aleks
> 
> > On Mon, Apr 15, 2019 at 8:58 PM Aleksandar Lazic <al-hapr...@none.at> wrote:
> >
> >     Hi.
> >
> >     Please keep the mailing list in the loop.
> >
> >     On 15.04.2019 at 17:27, bhanu chandra suman wrote:
> >     > image.png
> >
> >     It's not easy to copy text from screenshots, so please paste the text
> >     into the mail.
> >
> >     Please use two v's:
> >
> >     haproxy -vv
> >
> >     Thanks.
> >
> >     > On Mon, Apr 15, 2019 at 8:53 PM Aleksandar Lazic <al-hapr...@none.at> wrote:
> >     >
> >     >     Hi.
> >     >
> >     >     On 15.04.2019 at 17:19, bhanu chandra suman wrote:
> >     >     > Hi Team,
> >     >     >
> >     >     > I installed haproxy on an Ubuntu machine and then edited the
> >     >     haproxy.cfg file.
> >     >
> >     >     Can you please tell us more about this?
> >     >
> >     >     haproxy -vv
> >     >     uname -a
> >     >
> >     >     > bind *:18083
> >     >     > mode http
> >     >     > default_backend backendnodes
> >     >     > backend backendnodes
> >     >     > balance roundrobin
> >     >     > option forwardfor
> >     >     > server node1 x.x.x.x:18083 check
> >     >     > server node2 x.x.x.x:18083 check
> >     >     > listen stats
> >     >     > bind :32700
> >     >     > stats enable
> >     >     > stats uri /
> >     >     > stats hide-version
> >     >     > stats auth user:password
> >     >     > It's working fine, but I need SSL termination with HAProxy.
> >     >     > Could you please help me with this issue?
> >     >
> >     >     Please take a look at this blog post, which describes how TLS/SSL
> >     >     termination works in haproxy.
> >     >
> >     >     https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
> >     >
> >     >     > --
> >     >     > S.B.C.Suman
> >     >
> >     >     Regards
> >     >     Aleks
> >     >
> >     >
> >     >
> >     > --
> >     > S.B.C.Suman
> >     > +91 9989894950.
> >
> >
> >
> > --
> > S.B.C.Suman
> > +91 9989894950.
> 
> 
> 
> -- 
> S.B.C.Suman
> +91 9989894950.




Re: SSL termination with HA proxy

2019-04-15 Thread Aleksandar Lazic
Hi.

On 15.04.2019 at 17:55, bhanu chandra suman wrote:
> 
> root@ip-172-31-80-163:~# uname -a
> Linux ip-172-31-80-163 4.15.0-1035-aws #37-Ubuntu SMP Mon Mar 18 16:15:14 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
> root@ip-172-31-80-163:~# haproxy -v
> HA-Proxy version 1.8.8-1ubuntu0.4 2019/01/24
> Copyright 2000-2018 Willy Tarreau <wi...@haproxy.org>

Well, I can only assume this version has TLS/SSL enabled, since you didn't use `-vv`!

Please take a look at this blog post, which describes how to add TLS/SSL
termination to haproxy.

https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/

Regards
Aleks

> On Mon, Apr 15, 2019 at 8:58 PM Aleksandar Lazic <al-hapr...@none.at> wrote:
> 
> Hi.
> 
> Please keep the mailing list in the loop.
> 
> On 15.04.2019 at 17:27, bhanu chandra suman wrote:
> > image.png
> 
> It's not easy to copy text from screenshots, so please paste the text into the
> mail.
> 
> Please use two v's:
> 
> haproxy -vv
> 
> Thanks.
> 
> > On Mon, Apr 15, 2019 at 8:53 PM Aleksandar Lazic <al-hapr...@none.at> wrote:
> >
> >     Hi.
> >
> >     On 15.04.2019 at 17:19, bhanu chandra suman wrote:
> >     > Hi Team,
> >     >
> >     > I installed haproxy on an Ubuntu machine and then edited the
> >     haproxy.cfg file.
> >
> >     Can you please tell us more about this?
> >
> >     haproxy -vv
> >     uname -a
> >
> >     > bind *:18083
> >     > mode http
> >     > default_backend backendnodes
> >     > backend backendnodes
> >     > balance roundrobin
> >     > option forwardfor
> >     > server node1 x.x.x.x:18083 check
> >     > server node2 x.x.x.x:18083 check
> >     > listen stats
> >     > bind :32700
> >     > stats enable
> >     > stats uri /
> >     > stats hide-version
> >     > stats auth user:password
> >     > It's working fine, but I need SSL termination with HAProxy.
> >     > Could you please help me with this issue?
> >
> >     Please take a look at this blog post, which describes how TLS/SSL
> >     termination works in haproxy.
> >
> >     https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
> >
> >     > --
> >     > S.B.C.Suman
> >
> >     Regards
> >     Aleks
> >
> >
> >
> > --
> > S.B.C.Suman
> > +91 9989894950.
> 
> 
> 
> -- 
> S.B.C.Suman
> +91 9989894950.




Re: SSL termination with HA proxy

2019-04-15 Thread Aleksandar Lazic
Hi.

Please keep the mailing list in the loop.

On 15.04.2019 at 17:27, bhanu chandra suman wrote:
> image.png

It's not easy to copy text from screenshots, so please paste the text into the mail.

Please use two v's:

haproxy -vv

Thanks.

> On Mon, Apr 15, 2019 at 8:53 PM Aleksandar Lazic <al-hapr...@none.at> wrote:
> 
> Hi.
> 
> On 15.04.2019 at 17:19, bhanu chandra suman wrote:
> > Hi Team,
> >
> > I installed haproxy on an Ubuntu machine and then edited the
> haproxy.cfg file.
> 
> Can you please tell us more about this?
> 
> haproxy -vv
> uname -a
> 
> > bind *:18083
> > mode http
> > default_backend backendnodes
> > backend backendnodes
> > balance roundrobin
> > option forwardfor
> > server node1 x.x.x.x:18083 check
> > server node2 x.x.x.x:18083 check
> > listen stats
> > bind :32700
> > stats enable
> > stats uri /
> > stats hide-version
> > stats auth user:password
> > It's working fine, but I need SSL termination with HAProxy.
> > Could you please help me with this issue?
> 
> Please take a look at this blog post, which describes how TLS/SSL termination
> works in haproxy.
> 
> 
> https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
> 
> > --
> > S.B.C.Suman
> 
> Regards
> Aleks
> 
> 
> 
> -- 
> S.B.C.Suman
> +91 9989894950.




Re: SSL termination with HA proxy

2019-04-15 Thread Aleksandar Lazic
Hi.

On 15.04.2019 at 17:19, bhanu chandra suman wrote:
> Hi Team,
> 
> I installed haproxy on an Ubuntu machine and then edited the
> haproxy.cfg file.

Can you please tell us more about this?

haproxy -vv
uname -a

> bind *:18083
> mode http
> default_backend backendnodes
> backend backendnodes
> balance roundrobin
> option forwardfor
> server node1 x.x.x.x:18083 check
> server node2 x.x.x.x:18083 check
> listen stats
> bind :32700
> stats enable
> stats uri /
> stats hide-version
> stats auth user:password
> It's working fine, but I need SSL termination with HAProxy.
> Could you please help me with this issue?

Please take a look at this blog post, which describes how TLS/SSL termination
works in haproxy.

https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
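Building on the configuration quoted above, a minimal TLS-termination sketch might look like this (untested; the certificate path is an example and assumes a combined certificate+key `.pem` file, and the listen port is the conventional 443):

```
frontend www-https
    # Terminate TLS here. /etc/haproxy/certs/site.pem is an example path to a
    # file containing the certificate chain followed by the private key.
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    mode http
    default_backend backendnodes

backend backendnodes
    mode http
    balance roundrobin
    option forwardfor
    # Tell the backend servers the original scheme was https
    http-request set-header X-Forwarded-Proto https
    server node1 x.x.x.x:18083 check
    server node2 x.x.x.x:18083 check
```

Traffic is decrypted at the frontend and forwarded to the backends in plain HTTP, which is the "SSL termination" setup the blog post describes.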

> -- 
> S.B.C.Suman

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.6

2019-04-09 Thread Aleksandar Lazic
On 09.04.2019 at 18:06, Emmanuel Hocdet wrote:
> 
>> On 9 Apr 2019 at 09:58, Aleksandar Lazic <al-hapr...@none.at> wrote:
>>
>> Hi Manu.
>>
>> On 05.04.2019 at 12:36, Emmanuel Hocdet wrote:
>>> Hi Aleks,
>>>
>>> Thank you for integrating BoringSSL!
>>>
>>>> On 29 Mar 2019 at 14:51, Aleksandar Lazic <al-hapr...@none.at> wrote:
>>>>
>>>> On 29.03.2019 at 14:25, Willy Tarreau wrote:
>>>>> Hi Aleks,
>>>>>
>>>>> On Fri, Mar 29, 2019 at 02:09:28PM +0100, Aleksandar Lazic wrote:
>>>>>> With openssl, 2 tests failed, but I'm not sure whether it's because of
>>>>>> the setup or a bug.
>>>>>> https://gitlab.com/aleks001/haproxy19-centos/-/jobs/186769272
>>>>>
>>>>> Thank you for the quick feedback. I remember about the first one being
>>>>> caused by a mismatch in the exact computed response size due to headers
>>>>> encoding causing some very faint variations, though I have no idea why
>>>>> I don't see it here, since I should as well, I'll have to check my regtest
>>>>> script. For the second one, it looks related to the reactivation of the
>>>>> HEAD method in this test which was broken in previous vtest. But I'm
>>>>> seeing in your trace that you're taking it from the git repo so that
>>>>> can't be that. I need to dig as well.
>>>>>
>>>>>> With boringssl, 3 tests failed, but I'm not sure whether it's because of
>>>>>> the setup or a bug.
>>>>>> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/186780822
>>>>>
>>>>> For this one I don't know, curl reports some unexpected EOFs. I don't
>>>>> see why it would fail only with boringssl. Did it use to work in the
>>>>> past ?
>>>>
>>>> No. The tests with BoringSSL have always failed in one way or another.
>>>>
>>>
>>> It’s strange. After a quick test, it works in my environments.
>>> I need to comment out "${no-htx} option http-use-htx"
>>> to test with varnishtest.
>>
>> I use `make reg-tests -- --use-htx` to test with htx.
>>
>> https://gitlab.com/aleks001/haproxy-19-boringssl/blob/master/Dockerfile#L65
>>
>> Did you test with or without htx?
>>
> 
> I use varnishtest; I will switch to vtest.
> I ran the test again with and without htx, with 1.9 and 2.0-dev, and it’s OK.

Then it is a problem with the gitlab environment, as I thought.

Thank you for your time and confirmation.

Best regards
Aleks

>>>> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157743825
>>>> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157730793
>>>>
>>>> I'm not sure if the docker setup on gitlab is the limitation or just a bug.
>>>> Sorry to be so unspecific.
>>>>
>>>>> Thanks,
>>>>> Willy
>>>>
>>>> Regards
>>>> Aleks
>>>
>>> ++
>>> Manu
>>
>> Best regards
>> Aleks
> 




Re: [ANNOUNCE] haproxy-1.9.6

2019-04-09 Thread Aleksandar Lazic
Hi Manu.

On 05.04.2019 at 12:36, Emmanuel Hocdet wrote:
> Hi Aleks,
> 
> Thank you for integrating BoringSSL!
> 
>> On 29 Mar 2019 at 14:51, Aleksandar Lazic <al-hapr...@none.at> wrote:
>>
>> On 29.03.2019 at 14:25, Willy Tarreau wrote:
>>> Hi Aleks,
>>>
>>> On Fri, Mar 29, 2019 at 02:09:28PM +0100, Aleksandar Lazic wrote:
>>>> With openssl, 2 tests failed, but I'm not sure whether it's because of the
>>>> setup or a bug.
>>>> https://gitlab.com/aleks001/haproxy19-centos/-/jobs/186769272
>>>
>>> Thank you for the quick feedback. I remember about the first one being
>>> caused by a mismatch in the exact computed response size due to headers
>>> encoding causing some very faint variations, though I have no idea why
>>> I don't see it here, since I should as well, I'll have to check my regtest
>>> script. For the second one, it looks related to the reactivation of the
>>> HEAD method in this test which was broken in previous vtest. But I'm
>>> seeing in your trace that you're taking it from the git repo so that
>>> can't be that. I need to dig as well.
>>>
>>>> With boringssl, 3 tests failed, but I'm not sure whether it's because of
>>>> the setup or a bug.
>>>> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/186780822
>>>
>>> For this one I don't know, curl reports some unexpected EOFs. I don't
>>> see why it would fail only with boringssl. Did it use to work in the
>>> past ?
>>
>> No. The tests with BoringSSL have always failed in one way or another.
>>
> 
> It’s strange. After a quick test, it works in my environments.
> I need to comment out "${no-htx} option http-use-htx"
> to test with varnishtest.

I use `make reg-tests -- --use-htx` to test with htx.

https://gitlab.com/aleks001/haproxy-19-boringssl/blob/master/Dockerfile#L65

Did you test with or without htx?

>> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157743825
>> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157730793
>>
>> I'm not sure if the docker setup on gitlab is the limitation or just a bug.
>> Sorry to be so unspecific.
>>
>>> Thanks,
>>> Willy
>>
>> Regards
>> Aleks
> 
> ++
> Manu

Best regards
Aleks




Look at "HTTP/3 and QUIC: the details" from curlup 2019

2019-04-01 Thread Aleksandar Lazic
Hi.

At the last curl up conference

https://daniel.haxx.se/blog/2019/04/01/curl-up-2019-is-over/

there were some interesting talks, like "QUIC: the details":

https://youtu.be/mDc2kHPtavE

The slides are linked from the curl up post and on YouTube.

Regards
Aleks



Re: Upcoming haproxy build fixes for Cygwin & AIX

2019-04-01 Thread Aleksandar Lazic
On 01.04.2019 at 15:15, Willy Tarreau wrote:
> On Mon, Apr 01, 2019 at 03:04:24PM +0200, Aleksandar Lazic wrote:
>>> I managed to build this version with openssl 1.0.2 support on a very
>>> old Power3/333 MHz running AIX 5.1 and to run an H2 test. This sounds
>>> a bit like an anachronism though :-)
>>
>> 8-O
>>
>> https://www-01.ibm.com/software/support/lifecycleapp/PLCDetail.wss?q45=D700454U98638U83
>>
>> EOS: 01-Apr-2006
>>
>> As of today, support ended *exactly* 12 years ago.
>> At that time H2 wasn't on any roadmap, and now with HAProxy this platform gets
>> the shiny new H2 technology ;-)))
>>
>> Maybe OS/2 is the next platform.
>>
>> https://www-01.ibm.com/software/support/lifecycleapp/PLCDetail.wss?q45=F827166B21756H28
>> EOS 31-Dec-2006
> 
> I have been wondering if it was worth emitting an april's fool around
> all this support for dead platforms but I feared it would revive some
> feature requests, and with users complaining that haproxy ate all their
> PDP-11's memory :-)

and expected 1000 r/s ;-)

> Willy
> 




Re: Upcoming haproxy build fixes for Cygwin & AIX

2019-04-01 Thread Aleksandar Lazic
On 01.04.2019 at 09:06, Willy Tarreau wrote:
> On Mon, Apr 01, 2019 at 09:04:06AM +0800, ??? wrote:
>> Many thanks Willy, I will wait and to try and study your patch.
> 
> You're welcome.

[good infos snipped]

> I managed to build this version with openssl 1.0.2 support on a very
> old Power3/333 MHz running AIX 5.1 and to run an H2 test. This sounds
> a bit like an anachronism though :-)

8-O

https://www-01.ibm.com/software/support/lifecycleapp/PLCDetail.wss?q45=D700454U98638U83

EOS: 01-Apr-2006

As of today, support ended *exactly* 12 years ago.
At that time H2 wasn't on any roadmap, and now with HAProxy this platform gets
the shiny new H2 technology ;-)))

Maybe OS/2 is the next platform.

https://www-01.ibm.com/software/support/lifecycleapp/PLCDetail.wss?q45=F827166B21756H28
EOS 31-Dec-2006

> Cheers,
> Willy

Cheers,
Aleks



Re: Using haproxy to have SSH in a HTTPS connection with HTX

2019-03-31 Thread Aleksandar Lazic
Hi Matthias.

On 31.03.2019 at 10:11, Matthias Fechner wrote:
> Dear all,
> 
> as HTTP2 is getting stable in haproxy 1.9.6 I decided to give it a try.
> Currently I have the following setup:
>     frontend www-https
>     mode tcp
>     option tcplog
>     bind 0.0.0.0:443 ssl crt /usr/local/etc/haproxy/certs/
> alpn h2,http/1.1
>     bind :::443 ssl crt /usr/local/etc/haproxy/certs/ alpn
> h2,http/1.1
> 
>     tcp-request inspect-delay 5s
>     tcp-request content accept if HTTP
> 
>     acl client_attempts_ssh payload(0,7) -m bin 5353482d322e30
>     use_backend ssh if client_attempts_ssh
> 
>     use_backend nginx-http2-backend if { ssl_fc_alpn -i h2 }
>     default_backend nginx-http-backend
> 
>     backend nginx-http-backend
>     mode tcp
>     server www-1 127.0.0.1:8082 check send-proxy

I would do the following (untested):

>     backend nginx-http2-backend
      mode http
      option http-use-htx

>     http-request add-header X-Forwarded-Proto https

>     server www-1 127.0.0.1:8083 check send-proxy
      and add `alpn h2` to the server line.
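Put together, the suggested backend might look like the sketch below (untested). As an aside, the `5353482d322e30` in the ACL above is the hex encoding of the literal `SSH-2.0` banner. Note that `alpn` on a server line is a TLS extension, so it assumes an `ssl` backend connection (the `ssl verify none` below is illustrative); for cleartext HTTP/2 to the backend, `proto h2` would be the alternative:

```
backend nginx-http2-backend
    mode http
    # HTX is required for end-to-end HTTP/2 in 1.9
    option http-use-htx
    http-request add-header X-Forwarded-Proto https
    # "alpn h2" negotiates HTTP/2 with the backend over TLS
    server www-1 127.0.0.1:8083 check send-proxy ssl verify none alpn h2
```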

Best regards
aleks

> 
>     backend ssh
>     mode tcp
>     option tcplog
>     source 0.0.0.0 usesrc clientip
>     server ssh 192.168.200.6:22
>     timeout server 8h
> 
> What I understood correctly from the documentation:
> https://www.haproxy.com/de/blog/haproxy-1-9-has-arrived/
> 
> I must have the mode on http instead of tcp.
> 
> Is it possible to keep this ssh switch in place and use HTX for http
> traffic?
> (currently switching to http is not possible, as the mode for backend
> and frontend must be equal, so I have to use tcp or http for both of them)
> But if I switch to http, I cannot use the ssh backend anymore.
> 
> What do you recommend to get this solved (another frontend that you
> forward the traffic to?).
> 
> Thanks.
> 
> Regards
> Matthias
> 




Re: [ANNOUNCE] haproxy-1.9.6

2019-03-29 Thread Aleksandar Lazic
On 29.03.2019 at 14:25, Willy Tarreau wrote:
> Hi Aleks,
> 
> On Fri, Mar 29, 2019 at 02:09:28PM +0100, Aleksandar Lazic wrote:
>> With openssl, 2 tests failed, but I'm not sure whether it's because of the
>> setup or a bug.
>> https://gitlab.com/aleks001/haproxy19-centos/-/jobs/186769272
> 
> Thank you for the quick feedback. I remember about the first one being
> caused by a mismatch in the exact computed response size due to headers
> encoding causing some very faint variations, though I have no idea why
> I don't see it here, since I should as well, I'll have to check my regtest
> script. For the second one, it looks related to the reactivation of the
> HEAD method in this test which was broken in previous vtest. But I'm
> seeing in your trace that you're taking it from the git repo so that
> can't be that. I need to dig as well.
> 
>> With boringssl, 3 tests failed, but I'm not sure whether it's because of the
>> setup or a bug.
>> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/186780822
> 
> For this one I don't know, curl reports some unexpected EOFs. I don't
> see why it would fail only with boringssl. Did it use to work in the
> past ?

No. The tests with BoringSSL have always failed in one way or another.

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157743825
https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157730793

I'm not sure whether the docker setup on gitlab is the limiting factor or it's just a bug.
Sorry to be so unspecific.

> Thanks,
> Willy

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.6

2019-03-29 Thread Aleksandar Lazic
On 29.03.2019 at 11:50, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9.6 was released on 2019/03/29. It added 34 new commits
> after version 1.9.5.
> 
> As mentioned in the 2.0-dev2 release, we've addressed quite a number
> of issues recently and these fixes have now been backported into this
> release.
> 
> Two issues affect checks and may occasionally cause crashes, one fixed
> by Olivier and the latest one by Ricardo Nabinger Sanchez. Christopher
> fixed two long standing problems, one affecting the way POST requests
> are processed by applets, which can sometimes leave data pending there
> unread forever, and another one related to the confusion created in
> 1.8's early H2 between an end of message and end of stream resulting
> in spurious aborts when option abortonclose is set. Olivier addressed
> a number of H2 stability issues, some related to connection error
> handling, other ones related to a lack of fairness between streams
> caused by the different stream processing flow in 1.9 vs 1.8 which can
> result in some streams facing a huge latency. Pierre Cheynier fixed
> the TLS 1.3 cipher suites, and William fixed a risk of crash in the
> master-worker code in the unlikely case where one of the embedded
> libraries would perform a fork() causing a waitpid() to succeed with
> an unregistered process. Radek Zajic fixed the IPv6 address hex format
> used in logs which seems to have been broken for a very long time, and
> Fred re-enabled the reg test we regularly disable when vtest breaks :-)
> 
> And this is one of the first releases in which I did almost nothing,
> which is awesome (it proves I'm no longer the bottleneck blocking the
> project's ability to scale), so keep up the good work guys!
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Sources  : http://www.haproxy.org/download/1.9/src/
>Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Container with openssl 1.1.1b and boringssl.
https://hub.docker.com/r/me2digital/haproxy19

With openssl, 2 tests failed, but I'm not sure whether it's because of the setup or a bug.
https://gitlab.com/aleks001/haproxy19-centos/-/jobs/186769272

With boringssl, 3 tests failed, but I'm not sure whether it's because of the
setup or a bug.
https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/186780822


### openssl
HA-Proxy version 1.9.6 2019/03/29 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
USE_THREAD=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1b  26 Feb 2019
Running on OpenSSL version : OpenSSL 1.1.1b  26 Feb 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
###

Boringssl

###
HA-Proxy version 1.9.6 2019/03/29 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers 

Re: FEATURE: Add range iterator item variable for server-template and zero-padding converter

2019-03-29 Thread Aleksandar Lazic
Hi.

On 29.03.2019 at 09:34, Matous Jan Fialka wrote:
> Hello,
> 
> please consider adding a range iterator item variable (say `rng.iteritem`) for
> the `server-template` directive so that it can be expanded in the
> `<address>:<port>` part of the statement, or anywhere else applicable (see the
> example snippet below).
> 
> Also, a general zero-padding converter (say `zeropad(<length>)`) to pad values
> with zeroes would be splendid for use with `server-template` or elsewhere
> (therefore I aggregated both things into a single feature request).


Can you please open an issue for that feature request? Thanks.
https://github.com/haproxy/haproxy/issues

Regards
Aleks

> -snip-
> 
>     zeropad(<length>)
>   Performs zero-padding of the preceding expression to the given <length>.
> 
>   Example:
>     server-template s 3 "svc-%[rng.iteritem,zeropad(3)].domain.tld:80" check
> 
>     # would be equivalent to:
>     server s1 svc-001.domain.tld:80 check
>     server s2 svc-002.domain.tld:80 check
>     server s3 svc-003.domain.tld:80 check
> 
> -snip-
> 
> I am not sure how hard it would be to implement, but it could be very helpful
> when you use many backend servers with consistent sequential naming, as shown
> in the example snippet.
> 
> Many thanks for providing us wich such an excellent piece of software which
> *HAProxy*
> truly is!
> 
> Sincerely,
> 




Re: segfault in eb32sc_lookup_ge (1.9.4)

2019-03-24 Thread Aleksandar Lazic
Hi.

Can you please try 1.9.5 and see if it still happens?

Best regards
Aleks


 Original message 
From: "Максим Куприянов" 
Sent: 24 March 2019 16:59:59 CET
To: HAProxy 
Subject: segfault in eb32sc_lookup_ge (1.9.4)

Hi!

I caught 2 segfaults on different machines. Both look the same:
haproxy[483437]: segfault at 8 ip 55d7185283fa sp 7f257955f5b8
error 4 in haproxy[55d7183b1000+1d7000]

Unfortunately I don't have core files and their configs are too big and
complex to share, but I figured out the point of the segfault, in case it helps:
pointer from segfault record: 0x55d7185283fa - 0x55d7183b1000 = 0x1773fa
(gdb) info line *0x1773fa
Line 121 of "ebtree/eb32sctree.h" starts at address 0x1773fa and ends at 0x1773fe.
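The offset arithmetic above can be checked directly in a shell, using the two addresses from the kernel segfault log (instruction pointer minus mapping base gives the offset to feed to gdb's `info line *0x...`):

```shell
# Recompute the faulting offset from the segfault log shown above:
# ip (55d7185283fa) minus the mapping base (55d7183b1000).
printf '0x%x\n' $(( 0x55d7185283fa - 0x55d7183b1000 ))   # -> 0x1773fa
```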

executable info:
haproxy -v
HA-Proxy version 1.9.4-2 2019/02/28 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4
-Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fno-strict-aliasing
-Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare
-Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers
-Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.1
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.31 2012-07-06
Running on PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (libpcre build without JIT?)
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace

--
Best regards,
Maksim Kupriianov


Re: Adding Configuration parts via File

2019-03-08 Thread Aleksandar Lazic
Hi.

In addition to Bruno's answer there was a thread on the ML which explains why 
such a "simple" directive like include isn't easy to implement.

https://www.mail-archive.com/haproxy@formilux.org/msg05215.html

While I agree that in some setups an include can make the main config shorter,
it's a nightmare to debug, IMHO.

nginx has a nice solution with its `-T` flag, which dumps the whole config with
all includes to stdout.

Best regards.

Aleks


 Original message 
From: Bruno Henc 
Sent: 8 March 2019 14:26:11 CET
To: haproxy@formilux.org
Subject: Re: Adding Configuration parts via File

Hello Philipp,


I don't think there is a capability to include a list of ACLs. However,
you can load the IP addresses once via the -f parameter:


acl is_admin src -f /etc/haproxy/admin_ip_list.txt


You would have to define an acl in each section, but the IP list would 
be the same for all rules.
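Bruno's workaround could be sketched like this (untested; the path and addresses come from the thread, while the `http-request deny` rule is an illustrative use of the ACL). The file holds one IP or CIDR per line:

```
# /etc/haproxy/admin_ip_list.txt contains e.g.:
#   10.0.0.1
#   10.0.0.2

listen service1
    bind 10.1.0.10:80
    # The ACL must be declared in each section, but all of them
    # share the same address file.
    acl is_admin src -f /etc/haproxy/admin_ip_list.txt
    http-request deny unless is_admin

listen service2
    bind 10.1.0.20:80
    acl is_admin src -f /etc/haproxy/admin_ip_list.txt
    http-request deny unless is_admin
```

Updating the list then means editing one file and reloading haproxy, instead of touching every section.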


For a more detailed overview of ACLs, check out this blog post:

https://www.haproxy.com/blog/introduction-to-haproxy-acls/


I do have to admit that including ACLs is a neat idea. Alternatively, 
global ACLs would be nice.


Does this workaround solve your use case?


Best regards,


Bruno Henc


On 3/8/19 2:17 PM, Philipp Kolmann wrote:
> Hi,
>
> I have ACLs for Source-IPs for Admins for several services. These ACLs 
> are identical for multiple listener-sections.
>
> Would it be possible to have a file with several acl snippets and
> source it at the proper section of the config file multiple times?
> I haven't found anything in the docs that would make this possible.
>
> My wished Setup:
>
> admin_acl.conf:
>
> acl is_admin src 10.0.0.1
> acl is_admin src 10.0.0.2
> acl is_admin src 10.0.0.3
> acl is_admin src 10.0.0.4
>
>
> haproxy.cfg:
>
> listen service1
>     bind 10.1.0.10:80
>     include admin_acl.conf
>
>      more parameters ...
>
>
> listen service2
>     bind 10.1.0.20:80
>     include admin_acl.conf
>
>      more parameters ...
>
>
> listen service3
>     bind 10.1.0.30:80
>     include admin_acl.conf
>
>      more parameters ...
>
>
> The admin_acl needs to be maintained only once and can be used 
> multiple times.
>
> Is this already possible? Could such an include option be made for the 
> config files?
>
> thanks
> Philipp
>




Re: Does anyone *really* use 51d or WURFL ?

2019-03-05 Thread Aleksandar Lazic
Hi.

On 05.03.2019 at 13:47, Willy Tarreau wrote:
> Hi all,
> 
> back to this old thread :
> 
> On Mon, Jan 21, 2019 at 03:36:22PM +0100, Willy Tarreau wrote:
>> I don't know if wurfl builds at all by the way since the last update to
>> the module is its introduction more than 2 years ago.
> 
> So now at least I've got the response. This code doesn't build and was
> broken no less than 9 months ago without anyone ever noticing. And it
> never cared to try to be thread-compatible over the last 28 months.
> This is not a feature but a useless heavy weight we're carrying behind
> us for no valid reason. Given that it hasn't ever been maintained since
> its introduction and obviously doesn't even have a user, I've removed
> it. I'd be tempted to backport the cleanup patch to 1.9 since it doesn't
> build there either, but I don't like to remove features in stable
> branches. One could argue that 1) not building, 2) not working, and 3)
> not being maintained doesn't exactly qualify as "stable". So maybe in the
> end I'll do it there as well. And I'm sure it will not affect distros!
> But I'm open to opinions on the subject.

From my point of view, if even the contributor does not care whether this feature
works well, why should we (the community) care?

The 51d people answered quite fast with a fix.

From HAProxy's point of view, I would add a dedicated note and a warning that this
feature does not work in the current version. Anyone who wants to fix it can step
forward and do so.

I would remove it in 2.x.

Jm2c

> Regards,
> Willy

Regards
Aleks



Re: http/2 server-push support

2019-02-27 Thread Aleksandar Lazic
Hi Patrick.

On 26.02.2019 at 20:13, Patrick Hemmer wrote:
> Now that we have h2 support on frontends, backends, trailers, etc, I'm hoping
> that server side server-push is somewhere on the roadmap. By "server side" I
> mean not this middleware based server-push methodology frequently used where a
> "Link" header is converted to a server push. But instead where the server 
> itself
> can generate the server-push responses.

What do you mean HAProxy should do as a proxy, and what's wrong with the
"Link" header?

As far as I understand, server push is a method by which the server side updates
the client, as in a chat application.

It would be nice to understand your idea and your use case, to see what HAProxy
can handle and how.
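
For reference, the "Link"-header approach Patrick refers to can be expressed in HAProxy today as a sketch like this (the asset path and names are illustrative); a push-capable middleware or the browser's preload scanner then acts on the header:

```
frontend fe_h2
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be_app

backend be_app
    # Advertise a related resource via the preload link relation
    http-response add-header Link "</static/app.css>; rel=preload; as=style"
    server app1 127.0.0.1:8080
```

What Patrick asks for goes beyond this: the origin server itself generating the pushed responses, rather than a header being converted into a push at the edge.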

> Is there any plan on this, or when it might be available?
> 
> If it helps for prioritization, our use case for this is reduce processing
> overhead. A single request might require the client to have related resources,
> and those related resources might all involve computation that is shared. If
> each request is handled separately (e.g. Link header to server-push 
> conversion),
> it would result in a lot of duplicated work. So instead we want to do the
> computation once, and push out the multiple responses separately.
> 
> -Patrick

Regards
Aleks



Re: RTMP and Seamless Reload

2019-02-21 Thread Aleksandar Lazic
Hi Erlangga.

On 20.02.2019 at 07:24, Erlangga Pradipta Suryanto wrote:
> Hi Aleksandar,
> 
> Very sorry for the late reply. I was out of the office.
>> Ah OBS (=Open Broadcaster Software ?) something like this?
> Yes, the open broadcaster software, that's the tool that we use in our
> development environment.
>
>> How is in general the error handling of the used SW?
> The software will try to reconnect whenever a network interruption occurs, we
> have two stream, primary and backup, so when one stream is down, we have other
> stream to server the request,
> but this will result in the playlist not being complete for one of the stream,
> we'd like to minimize that. 
>
>> * when you reload the backend, does you have also interruption on the stream?
> Yes, the stream will be disconnected and OBS will try to reconnect again.

That is what you want to avoid, as you wrote in the message above, right?

>> * which algo do you plan to use for the backends, `leastconn`?
> We're using maxconn, we want to limit the number of connection that the 
> backend
> rtmp server have to 1 only
>
>> * How long will a session (tcp/rtmp) normally be?
> We're planning to stream for tv stations, so in theory it will always be
> streaming daily until the tv station stops it
>
>> * How fast can/will be the reconnect from the clients?
> It actually depends on the streaming software, in the case of OBS, we can set 
> it
> to reconnect immediately after disconnection.
>
>> * Is it a option to use DSR (=Direct Server Return) for the stream from rtmp
> source?
> I am not sure if we can use DSR, I will need to consult with our networking 
> team.
>> * Which mode do you plan to use http or tcp?
> We're using TCP.
> 
> We have tried using the runtime API to maintain current stream without reload
> and creating new process.
> We tried having several backends in MAINT state, and when we need one, we will
> update the ip and port in the runtime configuration.
> It covers our current needs of not losing any of the existing stream when a 
> new
> stream arrives, and since they run on the same process, we are sure that the 
> new
> stream will be routed to the new backend.
> We plan on going forward using the runtime API for the time being.

Sounds like a solution.
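
The runtime-API workflow described above might be sketched as follows (the socket path, backend and server names are assumptions):

```
# haproxy.cfg: admin socket plus pre-declared placeholder servers in MAINT
global
    stats socket /var/run/haproxy.sock mode 600 level admin

backend rtmp_pool
    mode tcp
    server stream1 0.0.0.0:0 disabled
    server stream2 0.0.0.0:0 disabled

# At runtime, repoint a placeholder at a real RTMP server and bring it up
# without a reload:
#   echo "set server rtmp_pool/stream1 addr 10.0.0.5 port 1935" | socat stdio /var/run/haproxy.sock
#   echo "set server rtmp_pool/stream1 state ready" | socat stdio /var/run/haproxy.sock
```

Because no new process is spawned, existing streams stay on their connections while new ones are routed to the newly enabled server.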

> Thanks,
> 
> *Erlangga Pradipta Suryanto*

Regards
Aleks

> __
> 
> *T. *+62118898168| *BBM PIN. D8F39521*__
> 
> *E. esuryanto*@bbmtek.com <mailto:mtal...@bbmtek.com>
> 
> 
> 
> 
> On Thu, Jan 31, 2019 at 10:39 PM Aleksandar Lazic  <mailto:al-hapr...@none.at>> wrote:
> 
> Hi Erlangga.
> 
> On 31.01.2019 at 06:12, Erlangga Pradipta Suryanto wrote:
> > Hi Aleksandar,
> >
> > Thank you for your reply.
> > As much as possible, we would like the stream to be not interrupted.
> > Though at some time, the stream will be closed and restarted.
> > We're still at POC stage right now, so we only use one haproxy, 
> nginx-rtmp
> server, and OBS to do the streaming
> 
> Ah OBS (=Open Broadcaster Software ?) something like this?
> 
> 
> https://obsproject.com/forum/resources/how-to-set-up-your-own-private-rtmp-server-using-nginx.50/
> 
> > If the current version hasn't supported that yet, we will need to look 
> for
> other option other than to reload the configuration.
> > We stumbled upon this article about runtime
> API, 
> https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/
> > We are currently testing it.
> 
> The dynamic configuration works like a charm but never the less you will
> have some interrupts as this is the nature of all networks.
> How is in general the error handling of the used SW?
> 
> I have some questions which you are maybe willing to answer.
> 
> * when you reload the backend, does you have also interruption on the 
> stream?
> * which algo do you plan to use for the backends, `leastconn`?
>   https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4-balance
> * How long will a session (tcp/rtmp) normally be?
> * How fast can/will be the reconnect from the clients?
> * Is it a option to use DSR (=Direct Server Return) for the stream from 
> rtmp
> source?
> * Which mode do you plan to use http or tcp?
> 
> To get you right you wish to handover the client connected sockets
> (tcp/udp/unix) from the `old` process to the new process after a config
> reload, right?
> 
> I think this isn't a easy task nor I'm sure it's possible especially when
> you run the setup in HA setup with different "machines", but

Question about haproxy in front of coturn turnserver

2019-02-20 Thread Aleksandar Lazic
Hi.

I would like to run HAProxy in front of the https://github.com/coturn/coturn
turnserver for Nextcloud Talk.

Has anyone tried this, or successfully set up such a config?

I would like to disable the UDP part of coturn (`no-udp`), simply because HAProxy
currently has no option to proxy UDP.

https://github.com/coturn/coturn/blob/master/examples/etc/turnserver.conf#L375-L378
https://github.com/coturn/coturn/blob/master/examples/etc/turnserver.conf#L390-L398
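
A minimal TCP-mode pass-through for the TURN listener might look like this (addresses are assumptions; 3478 is coturn's default listening port, and a second frontend/backend pair on 5349 would cover TLS):

```
frontend fe_turn
    mode tcp
    bind :3478
    default_backend be_turn

backend be_turn
    mode tcp
    server turn1 192.168.1.10:3478 check
```

Since TURN relaying itself normally uses UDP, this only covers the TCP side, which matches the `no-udp` idea above.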

Any input is welcome ;-)

Regards
aleks



Re: Tune HAProxy in front of a large k8s cluster

2019-02-15 Thread Aleksandar Lazic
On 15.02.2019 at 22:11, Joao Morais wrote:
> 
>> On Feb 15, 2019, at 08:43, Aleksandar Lazic wrote:
>> 
>> Hi Joao.
>> 
>> On 15.02.2019 at 11:15, Joao Morais wrote:
>>> 
>>> Hi Aleks, sure. Regarding the config, it has currently about 4k lines
>>> only in the largest frontend because of the number of hostnames and paths
>>> being supported. About 98% is acl declarations, http-request, reqrep,
>>> redirect scheme, use_backend. Most of them I'll move to the backend and
>>> this will already improve performance. The question is: what about the
>>> 2200+ `use_backend` - is there anything else that could be done?
>> 
>> As I don't know the config, even a snippet could help, let me suggest you
>> to try to use a map for lookup for the backends.
>> 
>> https://www.haproxy.com/blog/introduction-to-haproxy-maps/
>> 
> Hey Aleks, this made my day. Thank you for remember me that map exist and a
> big thank you to The Author of map, map_beg and map_reg converters! Time to
> achieve a 5 digits rps.

Wow, you mean from 600 rps to 6k rps "just" by switching to a map.
That's amazing 8-O, especially since it was just a wild guess ;-)

Would you like to share your config now?

>> Do you use DNS resolving for the hostnames?
>> 
> Nops. My guess so far is the size of the frontend and my tests have confirmed
> this.
> 
>>> / # haproxy -vv HA-Proxy version 1.8.17 2019/01/08
>> 
>> Event it's not critical, it would be nice when you can try 1.8.19 or
>> better 1.9.4 ;-)
>> 
> Moving to 1.8.19 soon and waiting 1.9.5 =)

;-)

> ~jm

Regards
Aleks



Re: Tune HAProxy in front of a large k8s cluster

2019-02-15 Thread Aleksandar Lazic
Hi Joao.

On 15.02.2019 at 11:15, Joao Morais wrote:
> 
> 
>> On Feb 15, 2019, at 07:44, Aleksandar Lazic wrote:
>>
>> Hi Joao.
>>
>> On 15.02.2019 at 10:21, Joao Morais wrote:
>>>
>>> Hi list, I'm tuning some HAProxy instances in front of a large kubernetes
>>> cluster. The config has about 500 hostnames (a la apache/nginx virtual
>>> hosts), 3 frontends, 1500 backends and 4000 servers. The first frontend is 
>>> on
>>> tcp mode binding :443, inspecting sni and doing a triage; the second 
>>> frontend
>>> is binding a unix socket with ca-file (tls authentication); the last 
>>> frontend
>>> is binding another unix socket, doing ssl-offload but without ca-file. This
>>> last one has about 80% of the hostnames. There is also a ssl-passthrough
>>> config - from the triage frontend straight to a tcp backend.
>>
>> Please can you tell us which haproxy you use and show us the config, thanks.
> 
> Hi Aleks, sure. Regarding the config, it has currently about 4k lines only in 
> the largest frontend because of the number of hostnames and paths being 
> supported. About 98% is acl declarations, http-request, reqrep, redirect 
> scheme, use_backend. Most of them I'll move to the backend and this will 
> already improve performance. The question is: what about the 2200+ 
> `use_backend` - is there anything else that could be done?

As I don't know the config (even a snippet could help), let me suggest that you
try using a map to look up the backends.

https://www.haproxy.com/blog/introduction-to-haproxy-maps/
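
A sketch of what that could look like (the file path and backend names are illustrative) — one map lookup replaces thousands of `use_backend` rules, and the lookup is a hash/tree search instead of a linear rule scan:

```
# /etc/haproxy/hosts.map: "hostname backend" pairs, one per line
#   example.com      bk_example
#   api.example.com  bk_api

frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/
    use_backend %[req.hdr(host),lower,map(/etc/haproxy/hosts.map,bk_default)]
```

Requests whose Host header is not in the map fall through to the `bk_default` backend given as the map's second argument.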

Do you use DNS resolving for the hostnames?

> / # haproxy -vv
> HA-Proxy version 1.8.17 2019/01/08

Even though it's not critical, it would be nice if you could try 1.8.19 or,
better, 1.9.4 ;-)

Regards
Aleks



Re: Tune HAProxy in front of a large k8s cluster

2019-02-15 Thread Aleksandar Lazic
Hi Joao.

On 15.02.2019 at 10:21, Joao Morais wrote:
> 
> Hi list, I'm tuning some HAProxy instances in front of a large kubernetes
> cluster. The config has about 500 hostnames (a la apache/nginx virtual
> hosts), 3 frontends, 1500 backends and 4000 servers. The first frontend is on
> tcp mode binding :443, inspecting sni and doing a triage; the second frontend
> is binding a unix socket with ca-file (tls authentication); the last frontend
> is binding another unix socket, doing ssl-offload but without ca-file. This
> last one has about 80% of the hostnames. There is also a ssl-passthrough
> config - from the triage frontend straight to a tcp backend.

Can you please tell us which HAProxy version you use and show us the config? Thanks.

haproxy -vv

Regards
Aleks

> I'm observing some latency on moderate loads (200+ rps per instance) - on my
> tests, the p95 was about 25ms only in the proxy, and the major issue is that
> I cannot have a throughput above 600 rps. This latency moves easily from 25ms
> on p95 to 1s or more on p50 with 700+ rps. The problem is of course the big
> amount of rules in the frontend: haproxy need to check every single bit of
> configuration for every single host and every single path. Moving the testing
> hostname to a dedicated frontend with only its own rules give me with about
> 5ms of p95 latency and more than 5000 rps.
> 
> These are my ideas so far regarding tune such configuration:
> 
> * Move all possible rules to the backend. Some txn vars should be created in
> order to be inspected there. This will of course help but there is still a
> lot of `use_backend if  ` that cannot be removed, I
> think, which are being evaluated on every single request despite the hostname
> that I'm really interested. There are some hostnames without path acl, but
> there are also hostnames with 10+ different paths and its 10+ `use_backend`.
> 
> * Create some more frontends and unix sockets with at most 50 hostnames or
> so. Pros: after the triage, each frontend will have the `use_backend if` of
> only another 49 hostnames. Cons: if some client doesn't send the sni
> extension, the right frontend couldn't be found.
> 
> * Perhaps there is a hidden `if  do  done` that I'm
> missing which would improve performance, since I can help HAProxy to process
> only the keywords I'm really interested in that request.
> 
> * Nbthreads was already tested, I'm using 3 that has the best performance on
> a 8 cores VM. 4+ threads doesn’t scale. Nbprocs will also be used, I'm tuning
> a per process configuration now.
> 
> Is there any other approach I'm missing? Every single milisecond will help.
> 




Re: Early connection close, incomplete transfers

2019-02-15 Thread Aleksandar Lazic
On 15.02.2019 at 08:47, Veiko Kukk wrote:
> On 2019-02-14 18:29, Aleksandar Lazic wrote:
>>> Replaced HAproxy with Nginx for testing and with Nginx, not a single 
>>> connection
>>> was interrupted, did millions of requests.
>>
>> In 1.9.4 are a lot of fixed added.
>> please can you try your tests with 1.9.4, thanks.
> 
> Already did before writing my previous letter. No differencies.

Okay. Can you make some tcpdumps between HAProxy and lighttpd? We need to know
what happens on the wire.

> Veiko

Regards
Aleks



Re: Early connection close, incomplete transfers

2019-02-14 Thread Aleksandar Lazic
On 14.02.2019 at 15:31, Veiko Kukk wrote:
> 
> On 2019-02-01 13:30, Veiko Kukk wrote:
>> On 2019-02-01 12:34, Aleksandar Lazic wrote:
>>
>>> Do you have any errors in lighthttpds log?
>>
>> Yes, it has error messages about not being enable to write to socket.
>>
>> Unrecoverable error writing to socket! errno 32, retries 12, ppoll
>> return 1, send return -1
>> ERROR: Couldn't write header data to socket! desired: 4565 / actual: -1
>>
>> I've tested with several hundred thoused requests, but it never
>> happens when using "mode tcp".
> 
> Replaced HAproxy with Nginx for testing and with Nginx, not a single 
> connection
> was interrupted, did millions of requests.

A lot of fixes were added in 1.9.4.
Can you please run your tests with 1.9.4? Thanks.

> Veiko
Regards
aleks



Re: Compilation fails on OS-X

2019-02-14 Thread Aleksandar Lazic
Looks like Apple's LLVM is not based on the master branch.

https://news.ycombinator.com/item?id=16545037


 Original Message 
From: Frederic Lecaille 
Sent: 14 February 2019 16:13:01 CET
To: Patrick Hemmer 
CC: Olivier Houchard , Aleksandar Lazic 
, haproxy@formilux.org
Subject: Re: Compilation fails on OS-X

On 2/14/19 3:12 PM, Patrick Hemmer wrote:
> 
> 
> On 2019/2/14 08:20, Frederic Lecaille wrote:
>> On 2/14/19 1:32 PM, Frederic Lecaille wrote:
>>> On 2/13/19 7:30 PM, Patrick Hemmer wrote:
>>>>
>>>>
>>>> On 2019/2/13 10:29, Olivier Houchard wrote:
>>>>> Hi Patrick,
>>>>>
>>>>> On Wed, Feb 13, 2019 at 10:01:01AM -0500, Patrick Hemmer wrote:
>>>>>> On 2019/2/13 09:40, Aleksandar Lazic wrote:
>>>>>>> On 13.02.2019 at 14:45, Patrick Hemmer wrote:
>>>>>>>> Trying to compile haproxy on my local machine for testing 
>>>>>>>> purposes and am
>>>>>>>> running into the following:
>>>>>>> Which compiler do you use?
>>>>>> # gcc -v
>>>>>> Configured with: 
>>>>>> --prefix=/Applications/Xcode.app/Contents/Developer/usr 
>>>>>> --with-gxx-include-dir=/usr/include/c++/4.2.1
>>>>>> Apple LLVM version 9.0.0 (clang-900.0.39.2)
>>>>>> Target: x86_64-apple-darwin17.7.0
>>>>>> Thread model: posix
>>>>>> InstalledDir: 
>>>>>> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
>>>>>>  
>>>>>>
>>>>>>
>>>>>>>>  # make TARGET=osx
>>>>>>>>  src/proto_http.c:293:1: error: argument to 'section' 
>>>>>>>> attribute is not
>>>>>>>> valid for this target: mach-o section specifier requires a 
>>>>>>>> segment and section
>>>>>>>> separated by a comma
>>>>>>>>  DECLARE_POOL(pool_head_http_txn, "http_txn", 
>>>>>>>> sizeof(struct http_txn));
>>>>>>>>  ^
>>>>>>>>  include/common/memory.h:128:2: note: expanded from 
>>>>>>>> macro 'DECLARE_POOL'
>>>>>>>>  REGISTER_POOL(, name, size)
>>>>>>>>  ^
>>>>>>>>  include/common/memory.h:123:2: note: expanded from 
>>>>>>>> macro 'REGISTER_POOL'
>>>>>>>>  INITCALL3(STG_POOL, 
>>>>>>>> create_pool_callback, (ptr), (name),
>>>>>>>> (size))
>>>>>>>>  ^
>>>>>>>>  include/common/initcall.h:102:2: note: expanded from 
>>>>>>>> macro 'INITCALL3'
>>>>>>>>  _DECLARE_INITCALL(stage, __LINE__, 
>>>>>>>> function, arg1, arg2,
>>>>>>>> arg3)
>>>>>>>>  ^
>>>>>>>>  include/common/initcall.h:78:2: note: expanded from macro
>>>>>>>> '_DECLARE_INITCALL'
>>>>>>>> __DECLARE_INITCALL(__VA_ARGS__)
>>>>>>>>  ^
>>>>>>>>  include/common/initcall.h:65:42: note: expanded from macro
>>>>>>>> '__DECLARE_INITCALL'
>>>>>>>> __attribute__((__used__,__section__("init_"#stg))) = \
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Issue occurs on master, and the 1.9 branch
>>>>>>>>
>>>>>>>> -Patrick
>>>>> Does the (totally untested, because I have no Mac to test) patch 
>>>>> works for
>>>>> you ?
>>>>
>>>> Unfortunately not. Just introduces a lot of new errors:
>>>>
>>>>
>>>>  In file included from src/ev_poll.c:22:
>>>>  In file included from include/common/hathreads.h:26:
>>>>  include/common/initcall.h:134:22: error: expected ')'
>>>>  DECLARE_INIT_SECTION(STG_PREPARE);
>>>>                                           ^
>>>>  include/common/initcall.h:134:1: note: to match th

Re: [RFC PATCH] MEDIUM: compression: Add support for brotli compression

2019-02-14 Thread Aleksandar Lazic
Hi Tim.

On 13.02.2019 at 17:57, Tim Duesterhus wrote:
> Willy,
> Aleks,
> List,
> 
> this (absolutely non-ready-to-merge) patch adds support for brotli
> compression as suggested in issue #21: 
> https://github.com/haproxy/haproxy/issues/21

Cool ;-)

> It is tested on Ubuntu Xenial with libbrotli 1.0.3:
> 
>   [timwolla@~]apt-cache policy libbrotli-dev
>   libbrotli-dev:
>   Installed: 1.0.3-1ubuntu1~16.04.1
>   Candidate: 1.0.3-1ubuntu1~16.04.1
>   Version table:
>   *** 1.0.3-1ubuntu1~16.04.1 500
>   500 http://de.archive.ubuntu.com/ubuntu xenial-updates/main 
> amd64 Packages
>   100 /var/lib/dpkg/status
>   [timwolla@~]apt-cache policy libbrotli1
>   libbrotli1:
>   Installed: 1.0.3-1ubuntu1~16.04.1
>   Candidate: 1.0.3-1ubuntu1~16.04.1
>   Version table:
>   *** 1.0.3-1ubuntu1~16.04.1 500
>   500 http://de.archive.ubuntu.com/ubuntu xenial-updates/main 
> amd64 Packages
>   100 /var/lib/dpkg/status
> 
> I am successfully able access brotli compressed URLs with Google Chrome,
> this requires me to disable `gzip` though (because haproxy prefers to
> select gzip, I suspect because `br` is last in Chrome's `Accept-Encoding`
> header).

Does it change when you use `br` as the first entry in `compression algo ...`?

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-compression%20algo

> I also am able to sucessfully download and decompress URLs with `curl`
> and the `brotli` CLI utility. The server I use as the backend for these
> tests has about 45ms RTT to my machine. The HTML page I use is some random
> HTML page on the server, the noise file is 1 MiB of finest /dev/urandom.
> 
> You'll notice that brotli compressed requests are both faster as well as
> smaller compared to gzip with the hardcoded brotli compression quality
> of 3. The default is 11, which is *way* slower than gzip.

How much more or less CPU does brotli use compared to gzip?

I'm a little disappointed from the size point of view: it is only ~6K less than
gzip. Is it worth the amount of work for such a small gain in data reduction?
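
As an aside, the reason the 1 MiB /dev/urandom file shows essentially no gain for either algorithm can be demonstrated in a few lines of Python (stdlib zlib standing in for gzip here; the sample payloads are invented):

```python
import os
import zlib

# Random data is essentially incompressible, so neither gzip nor brotli can
# shrink it, while repetitive markup compresses massively.
text = b"<html><body>" + b"<p>hello world</p>" * 10000 + b"</body></html>"
noise = os.urandom(1024 * 1024)

text_gz = zlib.compress(text, 6)
noise_gz = zlib.compress(noise, 6)

print(len(text), "->", len(text_gz))    # markup shrinks dramatically
print(len(noise), "->", len(noise_gz))  # noise barely changes (may even grow)
```

So only the HTML numbers say anything about brotli vs. gzip; the noise file mainly measures per-request overhead.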

Regards
Aleks

>   + curl localhost:8080/*snip*.html -H 'Accept-Encoding: gzip'
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>   Dload  Upload   Total   SpentLeft  
> Speed
>   100 492800 492800 0   279k  0 --:--:-- --:--:-- 
> --:--:--  279k
>   + curl localhost:8080/*snip*.html -H 'Accept-Encoding: br'
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>   Dload  Upload   Total   SpentLeft  
> Speed
>   100 434010 434010 0   332k  0 --:--:-- --:--:-- 
> --:--:--  333k
>   + curl localhost:8080/*snip*.html -H 'Accept-Encoding: identity'
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>   Dload  Upload   Total   SpentLeft  
> Speed
>   100  127k  100  127k0 0   441k  0 --:--:-- --:--:-- 
> --:--:--  441k
>   + curl localhost:8080/noise -H 'Accept-Encoding: gzip'
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>   Dload  Upload   Total   SpentLeft  
> Speed
>   100 1025k0 1025k0 0  3330k  0 --:--:-- --:--:-- 
> --:--:-- 3338k
>   + curl localhost:8080/noise -H 'Accept-Encoding: br'
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>   Dload  Upload   Total   SpentLeft  
> Speed
>   100 1024k0 1024k0 0  3029k  0 --:--:-- --:--:-- 
> --:--:-- 3030k
>   + curl localhost:8080/noise -H 'Accept-Encoding: identity'
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>   Dload  Upload   Total   SpentLeft  
> Speed
>   100 1024k  100 1024k0 0  3003k  0 --:--:-- --:--:-- 
> --:--:-- 3002k
>   + ls -al
>   total 3384
>   drwxrwxr-x  2 timwolla timwolla4096 Feb 13 17:30 .
>   drwxrwxrwt 28 root root   69632 Feb 13 17:25 ..
>   -rw-rw-r--  1 timwolla timwolla 598 Feb 13 17:30 download
>   -rw-rw-r--  1 timwolla timwolla   43401 Feb 13 17:30 html-br
>   -rw-rw-r--  1 timwolla timwolla   49280 Feb 13 17:30 html-gz
>   -rw-rw-r--  1 timwolla timwolla  130334 Feb 13 17:30 html-id
>   -rw-rw-r--  1 timwolla timwolla 1048949 Feb 13 17:30 noise-br
>   -rw-rw-r--  1 timwolla timwolla 1049666 Feb 13 17:30 noise-gz
>   -rw-rw-r--  1 timwolla timwolla 1048576 Feb 13 17:30 noise-id
>   ++ zcat html-gz
>   + sha256sum html-id /dev/fd/63 /dev/fd/62
>   ++ brotli --decompress --stdout html-br
>   

Re: Compilation fails on OS-X

2019-02-13 Thread Aleksandar Lazic
On 13.02.2019 at 14:45, Patrick Hemmer wrote:
> Trying to compile haproxy on my local machine for testing purposes and am
> running into the following:

Which compiler do you use?

>         # make TARGET=osx
>     src/proto_http.c:293:1: error: argument to 'section' attribute is not
> valid for this target: mach-o section specifier requires a segment and section
> separated by a comma
>         DECLARE_POOL(pool_head_http_txn, "http_txn", sizeof(struct http_txn));
>         ^
>         include/common/memory.h:128:2: note: expanded from macro 
> 'DECLARE_POOL'
>                         REGISTER_POOL(, name, size)
>                         ^
>         include/common/memory.h:123:2: note: expanded from macro 
> 'REGISTER_POOL'
>                         INITCALL3(STG_POOL, create_pool_callback, (ptr), 
> (name),
> (size))
>                         ^
>         include/common/initcall.h:102:2: note: expanded from macro 'INITCALL3'
>                         _DECLARE_INITCALL(stage, __LINE__, function, arg1, 
> arg2,
> arg3)
>                         ^
>         include/common/initcall.h:78:2: note: expanded from macro
> '_DECLARE_INITCALL'
>                         __DECLARE_INITCALL(__VA_ARGS__)
>                         ^
>         include/common/initcall.h:65:42: note: expanded from macro
> '__DECLARE_INITCALL'
>                                
> __attribute__((__used__,__section__("init_"#stg))) =   \
> 
> 
> 
> Issue occurs on master, and the 1.9 branch
> 
> -Patrick




Re: HAProxy in front of Docker Enterprise problem

2019-02-13 Thread Aleksandar Lazic
Hi.

On 13.02.2019 at 00:21, Norman Branitsky wrote:
> I have an HAProxy 1.7 server sitting in front of a number of Docker Enterprise
> Manager nodes and Worker nodes.
> 
> The Worker nodes don’t appear to have any problem with HAProxy terminating the
> SSL and connecting to them via HTTP.
> 
> The Manager nodes are the problem.
> 
> They insist on installing their own certificates (either self-signed or CA 
> signed).
>
> They will only listen to HTTPS traffic.
> 
> So my generic frontend_main-ssl says:
> 
> bind :443  ssl crt /etc/CONFIG/haproxy-1.7/certs/cert.pem
> 
>  
> 
> The backend has the following server statement:
> 
> server xxx 10.240.12.248:443 ssl verify none
> 
>  
> 
> But apparently this doesn’t work – the client gets the SSL certificate 
> provided
> by the HAProxy server
>
> instead of the certificate provided by the Manager node. This causes the 
> Manager
> node to barf.

Have you added the manager certificates to the cert.pem?

> Do I have to make HAProxy listen on 8443 and just do a tcp frontend/backend 
> for
> the Manager nodes?

That's one possibility. It makes the setup easier, and I don't think you want to
intercept anything at the HTTP layer for the Docker registry anyway.

> Norman Branitsky

Regards
aleks




Re: haproxy segfault

2019-02-12 Thread Aleksandar Lazic
Hi.

On 12.02.2019 at 18:36, Mildis wrote:
> Hi list,
> 
> haproxy is segfaulting multiple times these days for no apparent reason.
> At first i thought is was a load issue but even few RPS made it crash.
> 
> Symptoms are always the same : segfault of a worker then spawn of a new.
> If load is very high, spawned worker segfault immediatly.
> 
> In the messages log, the offset is always the same (+1e2000).
> 
> I'm running 1.9.4 (from vincent bernat package) in Debian stretch.
> 
> In haproxy logs :
> Feb 12 11:36:54 ns3089939 haproxy[32688]: [ALERT] 042/113654 (32688) : 
> Current worker #1 (32689) exited with code 139 (Segmentation fault)
> Feb 12 11:36:54 ns3089939 haproxy[32688]: [ALERT] 042/113654 (32688) : 
> exit-on-failure: killing every workers with SIGTERM
> Feb 12 11:36:54 ns3089939 haproxy[32688]: [WARNING] 042/113654 (32688) : All 
> workers exited. Exiting... (139)
> 
> In /var/log/messages
> kernel: traps: haproxy[32689] general protection ip:561e5b799375 
> sp:7ffe6fd3f2f0 error:0 in haproxy[561e5b72d000+1e2000]
> 
> "show errors" is empty.
> 
> How could I diagnose further without impacting production too much ?

Can you activate coredumps?

ulimit -c unlimited

You should find the core file in

/tmp

or just search the filesystem for a file named "core".

As soon as you have a coredump, you can get a backtrace with the following
commands:

gdb /usr/sbin/haproxy #YOUR_CORE_FILE#
bt full

> Thanks,
> mildis

Regards
aleks




Re: Anyone heard about DPDK?

2019-02-12 Thread Aleksandar Lazic
Hi all.

Wow so much feedback, thanks all for the answers ;-)

On 12.02.2019 at 15:23, Alexandre Cassen wrote:
> There has been a lot of applications/stack built around DPDK last few years.
> Mostly because people found it easy to code stuff around DPDK and are so happy
> to display perf graph about their DPDK application vs plain Linux Kernel 
> stack.

Would you like to share such a comparison?

> My intention here would be to warn a little bit about this collective 
> enthusiasm
> around DPDK. Integrating DPDK is easy and mostly fun (even if you have to 
> learn
> and dig into their rte lib and mbuf related), but most of people are 
> completely
> blind about security ! Ok Linux kernel and netdev is slow in respect of NIC
> available nowadays (10G, 40G and multiple 100G on core-networks), but using
> Linux TCP/IP stack you will benefit the hardcore hacking task done during last
> 30years by Linux netdev core guys ! this long process mostly fix and solve
> hardcore issues and for some : security issues. And you will certainly not be
> protected by a 'super fast' self proclaimed performance soft. Mostly because
> these applications are mostly features oriented than security or protocol
> full-picture, and are using this 'super fast, best of ever' argument to 
> enforce
> people mind to adopt.

When I take a look at the docs, I see some security information.

https://doc.dpdk.org/guides/prog_guide/rte_security.html

How does such an application handle the security topic?

> The way DPDK is working in polling mode is certainly not the best at all. DPDK
> is PCI 'stealing' NIC from kernel to handle/manage itself in userspace by
> forcing active loop (100% CPU polling) to handle descriptors and convert to
> mbuf. latter you can 'forward' mbuf to Linux kernel by using KNI netdevice to
> use Linux Kernel machinery as a slow-path for complicated/not_focused
> packet-flow (most application are using KNI for ARP,DHCP,...). But most of the
> time application are implementing 'minimal' adjacent network features to make 
> it
> work in its networking environment : and here is the problem: you are focused 
> on
> perf and because of it you are making shortcut about considering potential
> threats... a prediction could be to see large number of network security holes
> opened, and specially an old bunch of security holes making a fun revival (a 
> lot
> of fun with TCP)

So this means that an application can be used with DPDK when it uses
KNI (= Kernel NIC Interface), right?

https://doc.dpdk.org/guides/prog_guide/kernel_nic_interface.html

How much "slower" is the path via KNI?

> In contrast recent Linux Kernel introduced XDP and eBPF machinery that are
> certainly much more future proof than DPDK. First consideration in XDP design 
> is
> : you only TAP in data/packet you are interested in and not making an hold-up 
> on
> whole traffic. So XDP is for fast path but only for protocol or workflow
> identified. You program and attach an eBPF program to a specific NIC, if there
> is no match then packet simply continue its journey into Linux Kernel stack.
> 
> XDP is a response from kernel netdev community to address DPDK users. The fact
> that DPDK introduced and extended PMP to support AF_XDP is certainly a sign 
> that
> XDP is going/doing into the right direction.

Sounds like an interesting future for the Linux kernel.

Looking at the container and cloud world, does DPDK make any sense there? I mean,
when I run a container on AWS/Google/Azure I'm normally so far from any hardware
that this high-traffic capability isn't available to the container, right?

To the list members:
Maybe this is off-topic for the HAProxy list, so please excuse all the noise.

> regs,
> Alexandre

Regards
Aleks

> On 12/02/2019 14:04, Federico Iezzi wrote:
>> Nowadays most VNF (virtual network function) in the telco operators are built
>> around DPDK. Not demos, most 5G will be like that. 4G is migrating as we 
>> speak
>> on this new architecture.
>> There isn't any TCP stack built-it but the libraries can be used to build 
>> one.
>> VPP has integrated DPDK in this way.
>>
>> Linux network stack is not designed to managed millions of packets per 
>> second,
>> DPDK bypass it completely offloading everything in userspace. The beauty is
>> that also the physical nic drivers are in userspace using specific DPDK
>> drivers. Linux networking stack works in interrupt mode, DPDK is in polling
>> mode, basically with a while true.
>>
>>  From F5 at the dpdk summit as a relevant reference to what HAProxy does.
>> https://dpdksummitnorthamerica2018.sched.com/event/IhiF/dpdk-on-f5-big-ip-virtual-adcs-brent-blood-f5-networks
>>
>> https://www.youtube.com/watch?v=6zu81p3oTeo
>>
>> Regards,
>> Federico
>>
>> On Tue, 12 Feb 2019 at 11:08, Julien Laffaye > > wrote:
>>
>>     Something like http://seastar.io/ or https://fd.io/ ? :)
>>
>>     On Mon, Feb 11, 2019 at 11:25 AM Baptiste >     

Re: [PATCH] CONTRIB: contrib/prometheus-exporter: Add a Prometheus exporter for HAProxy

2019-02-11 Thread Aleksandar Lazic
Am 11.02.2019 um 10:40 schrieb Christopher Faulet:
> Le 09/02/2019 à 10:47, Aleksandar Lazic a écrit :
>> Hi Christopher.
>>
>> Am 07-02-2019 22:09, schrieb Christopher Faulet:
>>> Hi,
>>>
>>> This patch adds a new component in contrib. It is a Prometheus
>>> exporter for HAProxy.
>>
>> [snipp]
>>
>>> More details in the README.
>>>
>>> I'm not especially a Prometheus expert. And I must admit I never used
>>> it. So if anyone has comments or suggestions, they are welcome.
>>
>> Just out of curiosity, what's wrong with haproxy_exporter, especially
>> given that haproxy_exporter uses the CSV format from haproxy?
>>
> 
> Hi Aleks,
> 
> Nothing wrong. haproxy_exporter works pretty well AFAIK. It is just an external
> component, and it may seem a bit annoying to deploy it instead of having such
> functionality built into HAProxy. Furthermore, haproxy_exporter "only" exports
> proxy and server statistics. Unlike the built-in exporter, haproxy_exporter
> is limited to what the stats applet exposes over HTTP. So, it cannot export
> global information. And with a bit more work, we can imagine exporting even
> more info from the built-in exporter.

You are absolutely right.

> However, to mitigate what I said, it is not an aim for HAProxy to support all
> monitoring and alerting tools (Prometheus, Graphite, InfluxDB, OpenTSDB...). So
> it was added under contrib and not officially integrated into HAProxy. It is a
> first step of a reflection on the output format for stats, to let all kinds of
> external tools retrieve them. But it is not a priority either. It was just a
> quick development, and now we will wait and see if there is a particular demand
> to go further and how we could address it, if any.

I will add this prometheus-exporter to the dev builds as soon as it's in the source.

Maybe we should think about CI with GitHub?

https://github.com/marketplace/category/continuous-integration
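[Editor's note] As background on the CSV question discussed above: haproxy_exporter essentially fetches the stats page with `;csv` appended and relabels each field into Prometheus' text exposition format. A minimal sketch, assuming a reduced three-column CSV sample (the real page has 80+ fields):

```python
import csv, io

# Reduced subset of haproxy's CSV stats format: the first two columns
# identify the proxy and the server/listener, the rest are metrics.
SAMPLE = """# pxname,svname,scur
lighty_load_balancer,lighty0,3
lighty_load_balancer,BACKEND,3
"""

def to_prometheus(csv_text, metric="haproxy_server_current_sessions"):
    """Convert CSV stat rows into Prometheus text exposition lines."""
    # The header line starts with "# ", strip that before parsing.
    rows = csv.DictReader(io.StringIO(csv_text.lstrip("# ")))
    lines = []
    for row in rows:
        lines.append('%s{proxy="%s",sv="%s"} %s'
                     % (metric, row["pxname"], row["svname"], row["scur"]))
    return "\n".join(lines)

print(to_prometheus(SAMPLE))
```

The built-in exporter skips this round trip entirely and can also reach global stats the CSV page never shows.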

Regards
Aleks



Re: Anyone heard about DPDK?

2019-02-10 Thread Aleksandar Lazic
Am 10.02.2019 um 12:06 schrieb Lukas Tribus:
> On Sun, 10 Feb 2019 at 10:48, Aleksandar Lazic  wrote:
>>
>> Hi.
>>
>> I have seen this in some Twitter posts and asked myself whether it's something
>> usable for a load balancer like HAProxy:
>>
>> https://www.dpdk.org/
>>
>> To be honest it looks like a virtual NIC, but I'm not sure.
> 
> See:
> https://www.mail-archive.com/haproxy@formilux.org/msg26748.html

8-O Sorry, I had forgotten that question.
Sorry for the noise, and thanks for your patience.

> lukas

Greetings
Aleks



Anyone heard about DPDK?

2019-02-10 Thread Aleksandar Lazic
Hi.

I have seen this in some Twitter posts and asked myself whether it's something
usable for a load balancer like HAProxy:
 
https://www.dpdk.org/

To be honest it looks like a virtual NIC, but I'm not sure.

Regards
Aleks



Re: [PATCH] CONTRIB: contrib/prometheus-exporter: Add a Prometheus exporter for HAProxy

2019-02-09 Thread Aleksandar Lazic

Hi Christopher.

Am 07-02-2019 22:09, schrieb Christopher Faulet:

Hi,

This patch adds a new component in contrib. It is a Prometheus
exporter for HAProxy.


[snipp]


More details in the README.

I'm not especially a Prometheus expert. And I must admit I never used
it. So if anyone has comments or suggestions, they are welcome.


Just out of curiosity, what's wrong with haproxy_exporter, especially
given that haproxy_exporter uses the CSV format from haproxy?


Thanks


Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.4

2019-02-07 Thread Aleksandar Lazic
Am 06.02.2019 um 17:19 schrieb Willy Tarreau:
> Hi Aleks,
> 
> On Wed, Feb 06, 2019 at 05:16:58PM +0100, Aleksandar Lazic wrote:
>> Maybe this patch was too late for 1.9.4; please can you consider adding it
>> to 2.0 and later 1.9.5, thanks.
>>
>> https://www.mail-archive.com/haproxy@formilux.org/msg32693.html
> 
> I wanted to check it with Christopher first but I know he's busy working
> on some extremely boring stuff, and don't want to risk trading his stuff
> for a review :-)

;-)

> I'll also have to correct a number of spelling mistakes so better be sure
> before doing this.

Ah cool. thanks.

BTW:

the OpenSSL reg-tests passed without errors:

https://gitlab.com/aleks001/haproxy19-centos/-/jobs/157330203

## Starting vtest ##
Testing with haproxy version: 1.9.4
0 tests failed, 0 tests skipped, 35 tests passed

the BoringSSL reg-tests finished with one error:

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157330626
## Starting vtest ##
Testing with haproxy version: 1.9.4
#top  TEST ./reg-tests/connection/b0.vtc FAILED (8.790) exit=2
1 tests failed, 0 tests skipped, 34 tests passed
## Gathering results ##



> Thanks!
> Willy

Regards
Aleks



Re: Weighted Backend's

2019-02-06 Thread Aleksandar Lazic
Hi James.

Am 06.02.2019 um 16:16 schrieb James Root:
> Hi All,
> 
> I am doing some research and have not really found a great way to configure
> HAProxy to get the desired results. The problem I face is that I have a service
> backed by two separate collections of servers. I would like to split traffic
> between these two clusters (either using percentages or weights). Normally, I
> would configure a single backend and calculate my weights to get the desired
> effect. However, for my use case, the list of servers can be updated
> dynamically through the API. To maintain correct weighting, I would then have
> to re-calculate the weights of every entry to maintain a correct balance.
>
> An alternative I found was to do the following in my configuration file:
>
> backend haproxy-test
> balance roundrobin
> server cluster1 u...@cluster1.sock weight 90
> server cluster2 u...@cluster2.sock weight 10
> 
> listen cluster1
>     bind u...@cluster1.sock
>     balance roundrobin
>     server s1 127.0.0.1:8081 
> 
> listen cluster2
>     bind u...@cluster2.sock
>     balance roundrobin
>     server s1 127.0.0.1:8082 
>     server s2 127.0.0.1:8083 
> 
> This works, but is a bit nasty because it has to take another round trip
> through the kernel. Ideally, there would be a way to accomplish this without
> having to open unix sockets, but I couldn't find any examples or any leads in
> the haproxy docs.
> 
> I was wondering if anyone on this list had any ideas to accomplish this
> without using extra unix sockets? Or an entirely different way to get the same
> effect?

Well, as we don't know which version of HAProxy you use, I will suggest a
solution based on 1.9.

I would try to use the set-priority-* feature

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-request%20set-priority-class
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-request%20set-priority-offset

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.2-prio_class
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.2-prio_offset

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.3-src

I would try the following; untested, but I think you get the idea.

frontend clusters

  bind u...@cluster1.sock
  bind u...@cluster2.sock

  balance roundrobin

  # I'm not sure if src works with unix sockets like this
  # maybe you need to remove the unix@ part.
  acl src-cl1 src u...@cluster1.sock
  acl src-cl2 src u...@cluster2.sock

  http-request set-priority-class int(-10) if src-cl1
  http-request set-priority-class int(10) if src-cl2

#  http-request set-priority-offset 5s if LOGO

  use_backend cluster1 if { prio_class lt 5 }
  use_backend cluster2 if { prio_class gt 5 }


backend cluster1
server s1 127.0.0.1:8081

backend cluster2
server s1 127.0.0.1:8082
server s2 127.0.0.1:8083

There are a lot of fetch functions, so maybe you'll find a better solution with
another fetch function, as I don't know your application.

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7

In case you haven't seen it, there is also a management interface for HAProxy:

https://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3
https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/
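[Editor's note] The weight recalculation James wants to avoid can also be scripted on top of this runtime API: recompute every server's weight whenever a cluster's member list changes and push `set weight` commands to the stats socket. A sketch with the example's 90/10 split; the backend and server names are hypothetical, and nothing here talks to a live socket:

```python
def renormalize(backend, clusters, scale=100):
    """clusters: {cluster: (share, [servers])}, shares summing to 1.0.

    All servers live in one haproxy backend; each gets an equal slice
    of its cluster's share, so adding a server to one cluster rescales
    only that cluster's members and the overall split is preserved.
    """
    cmds = []
    for cluster, (share, servers) in clusters.items():
        per_server = max(1, round(scale * share / len(servers)))
        for srv in servers:
            cmds.append("set weight %s/%s %d" % (backend, srv, per_server))
    return cmds

# 90/10 split as in the example; pipe each line into the stats socket,
# e.g.: echo "set weight haproxy-test/c1s1 90" | socat stdio /run/hap.sock
cmds = renormalize("haproxy-test", {
    "cluster1": (0.9, ["c1s1"]),
    "cluster2": (0.1, ["c2s1", "c2s2"]),
})
print("\n".join(cmds))
```

Weights within a backend are relative, so rerunning this after every API-driven membership change keeps the 90/10 ratio without touching the config file.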

> Thanks,
> James Root

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.4

2019-02-06 Thread Aleksandar Lazic
Hi Willy.

Am 06.02.2019 um 15:25 schrieb Willy Tarreau:
> Hi,
> 
> HAProxy 1.9.4 was released on 2019/02/06. It added 65 new commits
> after version 1.9.3.

Images are updated.

https://hub.docker.com/r/me2digital/haproxy-19-boringssl
https://hub.docker.com/r/me2digital/haproxy19

Maybe this patch was too late for 1.9.4; please can you consider adding it
to 2.0 and later 1.9.5, thanks.

https://www.mail-archive.com/haproxy@formilux.org/msg32693.html

Regards
Aleks

> The main focus in terms of time spent was clearly on end-to-end H2
> correctness, which involves both the H2 protocol itself and the idle
> connections management. It's difficult to enumerate in details all the
> issues that were addressed, but these generally range from not failing
> a connection when failing a stream can be sufficient to counting the
> number of pre-allocated streams on an idle idle outgoing connection to
> make sure it still has stream IDs left. Some server-side idle timeout
> errors could occasionally lead to the whole connection being closed.
> 
> One check was added to prevent an HTX frontend from dynamically branching
> to a non-HTX backend (and conversely), as only the static branches were
> addressed till now.
> 
> There were some improvements on memory allocation failures, a number of
> places were not tested anymore (or this was new code). Ah and a memory
> leak on the unique_id was addressed (it could happen with TCP instances
> when declared in a defaults section).
> 
> Etags are now rewritten from strong to weak by the compression. I had no
> idea this concept of weak vs strong existed at all :-)
> 
> And in addition to this, yesterday two other interesting problems were
> reported and addressed :
>   - the first one is about using certain L7 features at the load balancing
> layer (such as "balance hdr") in HTX mode which could crash haproxy.
> It was in fact caused by the loss of one patch during the multiple
> liftings of the code prior to the merge. That's now fixed. I'm still
> amazed we managed to lose only one patch in this ocean of code!
>  
>   - the other one is quite nasty and impacts all supported versions. Haproxy
> currently performs very deep compatibility tests on your rules, frontends
> and backends after parsing the configuration. But a corner case remained
> by which it was possible to have a frontend bound on, say, processes
> 1 and 2, tracking a key stored in a table present only in process 1 that
> would in turn rely on peers on process 1 as well. Here there is a problem,
> when the frontend receives connections on process 2, the resolved pointers
> for the table end up pointing to a completely different location in a
> parallel universe, then peers are activated to push the data while the
> section has been deallocated... So the relevant checks have been added
> to make sure that a process doesn't try to interact with a section that
> is not present for this process. This covers the track-sc* actions, the
> sc_* sample keywords, and SPOE filters. I was extremely cautious to cover
> the strict minimum so as not to impact any harmless config. It *is*
> possible that one of your config will refuse to load if it is already
> bogus. Please note that if this happens, it means this config is wrong
> and already presents the risk of random crashes. *Do not* rollback if
> this happens, please ask for help here instead. (I in fact expect that
> nobody will see these errors, meaning that the amount of complex and
> bogus configs in field is rather low).
> 
> The rest is pretty low impact and standard.
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Sources  : http://www.haproxy.org/download/1.9/src/
>Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
> 
> Willy
> ---
> Complete changelog :
> Christopher Faulet (2):
>   BUG/MEDIUM: mux-h1: Don't add "transfer-encoding" if message-body is forbidden
>   BUG/MAJOR: htx/backend: Make all tests on HTTP messages compatible with HTX
> 
> Jérôme Magnin (1):
>   DOC: add a missing space in the documentation for bc_http_major
> 
> Kevin Zhu (1):
>   BUG/MINOR: deinit: tcp_rep.inspect_rules not deinit, add to deinit
> 
> Olivier Houchard (11):
>   BUG/MEDIUM: connections: Don't forget to remove CO_FL_SESS_IDLE.
>   MINOR: xref: Add missing barriers.
>   BUG/MEDIUM: peers: Handle mux creation failure.
>   BUG/MEDIUM: checks: Check that conn_install_mux succeeded.
>   BUG/MEDIUM: servers: Only 

Re: info defaults maxconn

2019-02-06 Thread Aleksandar Lazic
Hi Federico.

Am 06.02.2019 um 15:33 schrieb Federico Iezzi:
> Hey there,
> 
> Maybe this is gonna be a very simple answer.
> In HAProxy 1.5.18 it seems that the defaults maxconn has a global influence
> and not a per-backend one.
> 
> In my case I have a global maxconn of 5120001, while defaults is at 256. What
> I'm trying to achieve is to set the same maxconn for all my backends without
> having the parameter everywhere.
> 
> Testing it, I basically saturated the 256 connections right away and
> everything was queued. But that happened globally and not on a per-backend
> basis.
> 
> Is that expected?

Yes, AFAIK.

Default/FE/Listen maxconn
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-maxconn

```
Fix the maximum number of concurrent connections on a frontend
...
```

Backend maxconn defaults to 0:
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#5.2-maxconn


```
...
The default value is "0" which means unlimited.
...
```
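[Editor's note] A toy model of that frontend-side behaviour, with purely illustrative numbers: a defaults maxconn of 256 applied to the frontend means a burst of 300 concurrent connections gets 256 accepted and the rest queued, regardless of how large the global maxconn is:

```python
def admit(arrivals, frontend_maxconn):
    """Return (accepted, queued) for a burst of simultaneous arrivals.

    Models a frontend 'maxconn': connections beyond the cap are not
    refused, they wait in the queue until a slot frees up.
    """
    accepted = min(arrivals, frontend_maxconn)
    return accepted, arrivals - accepted

# global maxconn 5120001 doesn't help here: the frontend cap of 256
# is hit first, so the 257th concurrent connection already queues.
print(admit(300, 256))  # (256, 44)
```

This matches the observation above: the saturation is per frontend (and since both backends are reached through the same frontend, it looks global), not per backend.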
> Thanks!
> Federico

Regards
Aleks



Re: Opinions about DoH (=DNS over HTTPS) as resolver for HAProxy

2019-02-04 Thread Aleksandar Lazic
Hi Lukas.
Am 04.02.2019 um 21:39 schrieb Lukas Tribus:
> Hello,
> 
> On Mon, 4 Feb 2019 at 12:14, Aleksandar Lazic  wrote:
>>
>> Hi.
>>
>> I have just opened a new Issue about DoH for resolving.
>>
>> https://github.com/haproxy/haproxy/issues/33
>>
>> As I know that this is a major change in the infrastructure, I would like to
>> hear what you think about this suggestion.
>>
>> At the beginning my opinion was against this change, as there were only some
>> big providers, but now that there are some tutorials and other providers for
>> DoH, I think it's a good idea.
> 
> Frankly I don't see a real use-case. DoH is interesting for clients
> roaming around networks that don't have a local DNS resolver or with a
> completely untrusted or compromised connectivity to their DNS server.
> A haproxy instance on the other hand is usually something installed in
> a stable datacenter, often with a local resolver, and it is resolving
> names you configured with destination IP's that are visible to an
> attacker anyway.

A possible use-case is:

Let's say you have a hybrid cloud setup (on-prem, AWS, Azure, ...) and the
networks are connected via an unsecured L2/L3 internet connectivity.

The networks are routed, and the HAProxy VM/container must resolve an
internal backend via DNS, but some regulations do not allow sending
plain DNS over the internet.

Internal APP <-> INTERNET <-> HAProxy Pub Cloud <-> Client
     |                            |
Internal DNS <-> DoH <---HTTPS----+

The solution is to use a DoH server on-prem which resolves the internal backend
via classic DNS internally and sends the answer back to HAProxy via HTTPS.

Such a setup helps to keep some VPN/IPsec setups out of the game.
I hope I have described the use-case in understandable words.

> The DNS implementation is still lacking an important feature (TCP
> mode), which Baptiste does not really have time to work on as far as I
> can tell, and which would actually address a problem for certain huge
> deployments. At the same time I'm not sure I can come up with a *real*
> use-case for DoH in haproxy - and there is always the possibility to
> install a local UDP-to-DoH resolver. Also a lot of setups nowadays are
> either systemd or docker managed, both of which ship their own
> resolver anyway (providing a local UDP/TCP service).

Ack. It's not a small part, IMHO.

This wiki lists some DoH tools which show how DoH could be implemented:

https://github.com/curl/curl/wiki/DNS-over-HTTPS
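[Editor's note] For a sense of what a DoH implementation involves: RFC 8484 simply POSTs a classic DNS wire-format query over HTTPS with Content-Type `application/dns-message`, so much of the work is plain RFC 1035 encoding. A minimal sketch of building such a query body; the hostname is just an example, and actually sending it to a resolver is left out:

```python
import struct

def build_query(hostname, qtype=1, qid=0):
    """Build a DNS wire-format query (RFC 1035) usable as a DoH body.

    qtype=1 asks for an A record; qid=0 is what RFC 8484 recommends
    for DoH so that responses are cache-friendly.
    """
    # Header: id, flags (RD=1), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS=IN
    return header + question

body = build_query("example.com")
# A DoH client would now POST `body` to the resolver with
# Content-Type: application/dns-message and parse the binary answer.
print(len(body))  # 29
```

The non-trivial part Lukas alludes to is not this encoding but doing the HTTPS exchange non-blocking inside haproxy's event loop.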

> I'm not sure what the complexity of DoH is. I assume it's non trivial
> to do in a non-blocking way, without question more complicated than
> TCP mode.

I don't agree on this, as I think they are more or less equally hard to
implement. But I must say I'm only a "sometimes" developer, so I'm sure
I miss the details which make the difference.

> So I'm not a fan of pushing DoH into haproxy. Especially if the
> use-case is unclear. But those are just my two cents.

Thank you.

> Also CC'ing Baptiste.
> 
> 
> cheers,
> lukas

Regards
aleks



Opinions about DoH (=DNS over HTTPS) as resolver for HAProxy

2019-02-04 Thread Aleksandar Lazic
Hi.

I have just opened a new Issue about DoH for resolving.

https://github.com/haproxy/haproxy/issues/33

As I know that this is a major change in the infrastructure, I would like to
hear what you think about this suggestion.

At the beginning my opinion was against this change, as there were only some
big providers, but now that there are some tutorials and other providers for
DoH, I think it's a good idea.

Best regards
Aleks



Re: [PATCH] DOC: Add HTX part in the documentation

2019-02-02 Thread Aleksandar Lazic
Sorry, I have forgotten to add:

This needs a backport to 1.9.

Regards
Aleks


 Ursprüngliche Nachricht 
Von: Aleksandar Lazic 
Gesendet: 2. Februar 2019 10:01:26 MEZ
An: haproxy@formilux.org
Betreff: [PATCH] DOC: Add HTX part in the documentation

Hi.

attached a doc update for the new features of HAProxy 1.9.

I hope the patch fulfills the CONTRIBUTING rules, as
I haven't sent patches to the list for a long time ;-)

Regards
Aleks



[PATCH] DOC: Add HTX part in the documentation

2019-02-02 Thread Aleksandar Lazic

Hi.

attached a doc update for the new features of HAProxy 1.9.

I hope the patch fulfills the CONTRIBUTING rules, as
I haven't sent patches to the list for a long time ;-)

Regards
Aleks

From c0e025e81b87a23f679aff80bddc02a96c4d43b0 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Sat, 2 Feb 2019 09:54:55 +0100
Subject: [PATCH] DOC: Add HTX part in the documentation

---
 doc/configuration.txt | 50 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index fe5eb250..38ed12ed 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -192,8 +192,7 @@ HAProxy supports 4 connection modes :
 For HTTP/2, the connection mode resembles more the "server close" mode : given
 the independence of all streams, there is currently no place to hook the idle
 server connection after a response, so it is closed after the response. HTTP/2
-is only supported for incoming connections, not on connections going to
-servers.
+now supports end-to-end mode and trailers, which is required for gRPC.
 
 
 1.2. HTTP request
@@ -384,6 +383,53 @@ Response headers work exactly like request headers, and as such, HAProxy uses
 the same parsing function for both. Please refer to paragraph 1.2.2 for more
 details.
 
+1.3.3. HAProxy HTX
+--
+In this version of HAProxy a new HTTP engine was developed. With this huge
+rewrite of the HTTP engine it is now possible to add other and new
+protocols more easily.
+
+It is required to use "option http-use-htx" to activate this new engine.
+
+With HTX it is now possible to handle the following protocols with HAProxy:
+
+TCP <> HTTP/X
+SSL/TLS <> TCP
+SSL/TLS <> HTTP/X
+HTTP/1.x <> HTTP/2
+HTTP/2 <> HTTP/1.x
+
+The diagram below was described in this post:
+https://www.mail-archive.com/haproxy@formilux.org/msg31727.html
+
+
+           +-------------------------+  stream
+           |   all HTTP processing   |  layer
+           +-------------------------+
+              ^         ^         ^
+          HTX |     HTX |     HTX |       normalised
+              v         v         v       interface
+        +--------+ +--------+ +--------+
+        | applet | | HTTP/1 | | HTTP/2 |  whatever layer (called mux for now
+        +--------+ +--------+ +--------+  but may change once we have others,
+         cache        |          |        could be presentation in OSI)
+         stats        | +-----+  |
+         Lua svc      | | TLS |  |        transport layer
+                      | +-----+  |
+                      |    |     |
+           +-------------------------+
+           |   TCP/Unix/socketpair   |    control layer
+           +-------------------------+
+                       |
+           +-------------------------+
+           |     file descriptor     |    socket layer
+           +-------------------------+
+                       |
+                 +-----------+
+                 | operating |
+                 |  system   |
+                 +-----------+
+
 
 2. Configuring HAProxy
 --
-- 
2.20.1



Re: Early connection close, incomplete transfers

2019-02-01 Thread Aleksandar Lazic
Hi.

Do you have any errors in lighttpd's log?

Regards
Aleks


 Ursprüngliche Nachricht 
Von: Veiko Kukk 
Gesendet: 1. Februar 2019 12:33:39 MEZ
An: Aleksandar Lazic 
CC: haproxy@formilux.org
Betreff: Re: Early connection close, incomplete transfers


On 2019-01-31 12:57, Aleksandar Lazic wrote:
> Willy have found some issues which are added in the code of 2.0 tree.
> Do you have a chance to test this branch or do you want to wait for
> the next 1.9 release?

I tested stable 1.9.3 and the 1.9 preview version Willy gave a link to here:
https://www.mail-archive.com/haproxy@formilux.org/msg32678.html
There is no difference in my tests.

> I'm not sure if it affects you as we haven't seen the config yet.
> Maybe you can share your config also so that we can see if your setup
> could be effected.

The commented timeouts are the original timeouts; I had increased those to make
sure I'm not hitting any timeouts when creating higher load with tests. The
maxconn values serve the same purpose.

global
   log /dev/log local0
   daemon
   nbproc 1
   nbthread 16
   maxconn 
   user haproxy
   spread-checks 5
   tune.ssl.default-dh-param 2048
   ssl-default-bind-options no-sslv3 no-tls-tickets
   ssl-default-bind-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:!DSS
   ssl-default-server-options no-sslv3 no-tls-tickets
   ssl-default-server-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:!DSS
   tune.ssl.cachesize 10
   tune.ssl.lifetime 1800
   stats socket /var/run/haproxy.sock.stats1 mode 640 group vault process 
1 level admin

defaults
   log global
   mode http
   option httplog
   option contstats
   option log-health-checks
   retries 5
   #timeout http-request 5s
   timeout http-request 99s
   #timeout http-keep-alive 20s
   timeout http-keep-alive 99s
   #timeout connect 10s
   timeout connect 99s
   #timeout client 30s
   timeout client 99s
   timeout server 120s
   #timeout client-fin 10s
   timeout client-fin 99s
   #timeout server-fin 10s
   timeout server-fin 99s

listen main_frontend
   bind *:443 ssl crt /etc/vault/cert.pem crt /etc/letsencrypt/certs/ 
maxconn 
   bind *:80 maxconn 
   option forwardfor
   acl local_lighty_down nbsrv(lighty_load_balancer) lt 1
   monitor-uri /load_balance_health
   monitor fail if local_lighty_down
   default_backend lighty_load_balancer

backend lighty_load_balancer
   stats enable
   stats realm statistics
   http-response set-header Access-Control-Allow-Origin *
   option httpchk HEAD /dl/index.html
   server lighty0 127.0.0.1:9000 check maxconn  fall 2 inter 15s rise 
5 id 1

Test results

httpress test output summary:

1 requests launched
thread 3: 1000 connect, 1000 requests, 983 success, 17 fail, 6212668130 
bytes, 449231 overhead
thread 9: 996 connect, 996 requests, 979 success, 17 fail, 6187387690 
bytes, 447403 overhead
thread 4: 998 connect, 998 requests, 980 success, 18 fail, 6193707800 
bytes, 447860 overhead
thread 1: 1007 connect, 1007 requests, 988 success, 19 fail, 6244268680 
bytes, 451516 overhead
thread 8: 998 connect, 998 requests, 977 success, 21 fail, 6174747470 
bytes, 446489 overhead
thread 7: 1001 connect, 1001 requests, 970 success, 31 fail, 6130506700 
bytes, 443290 overhead
thread 10: 997 connect, 997 requests, 983 success, 14 fail, 6212668130 
bytes, 449231 overhead
thread 6: 1004 connect, 1004 requests, 986 success, 18 fail, 6231628460 
bytes, 450602 overhead
thread 5: 999 connect, 999 requests, 982 success, 17 fail, 6206348020 
bytes, 448774 overhead
thread 2: 1000 connect, 1000 requests, 981 success, 19 fail, 6200027910 
bytes, 448317 overhead

TOTALS:  1 connect, 1 requests, 9809 success, 191 fail, 100 
(100) real concurrency
TRAFFIC: 6320110 avg bytes, 457 avg overhead, 61993958990 bytes, 4482713 
overhead
TIMING:  81.014 seconds, 121 rps, 747335 kbps, 825.9 ms avg req time


HAproxy log sections of incomplete transfers (6320535 bytes should be 
transferred with this test data set):
  127.0.0.1:33054 [01/Feb/2019:11:22:48.178] main_frontend 
lighty_load_balancer/lighty0 0/0/0/0/298 200 425 - - SD-- 
100/100/99/99/0 0/0 "
  127.0.0.1:32820 [01/Feb/2019:11:22:48.068] main_frontend 
lighty_load_balancer/lighty0 0/0/0/0/409 200 4990 - - SD-- 99/99/98/98/0 
0/0 "
  127.0.0.1:34330 [01/Feb/2019:11:22:49.199] main_frontend 
lighty_load_balancer/lighty0 0/0/0/0/90 200 425 - - SD-- 100/100/99/99/0 
0/0 "
  127.0.0.1:34344 [01/Feb/2019:11:22:49.201] main_frontend 
lighty_load_balancer/lighty0 0/0/0/0/88 200 425 - - SD-- 99/99/98/98/0 
0/0

Re: RTMP and Seamless Reload

2019-01-31 Thread Aleksandar Lazic
Hi Erlangga.

Am 31.01.2019 um 06:12 schrieb Erlangga Pradipta Suryanto:
> Hi Aleksandar,
> 
> Thank you for your reply.
> As much as possible, we would like the stream to be not interrupted.
> Though at some time, the stream will be closed and restarted.
> We're still at POC stage right now, so we only use one haproxy, nginx-rtmp 
> server, and OBS to do the streaming

Ah, OBS (= Open Broadcaster Software?), something like this?

https://obsproject.com/forum/resources/how-to-set-up-your-own-private-rtmp-server-using-nginx.50/

> If the current version hasn't supported that yet, we will need to look for 
> other option other than to reload the configuration.
> We stumbled upon this article about runtime API, 
> https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/
> We are currently testing it.

The dynamic configuration works like a charm, but nevertheless you will have
some interruptions, as this is the nature of all networks.
In general, how is the error handling of the software used?

I have some questions which you are maybe willing to answer.

* when you reload the backend, do you also have an interruption of the stream?
* which algo do you plan to use for the backends, `leastconn`?
  https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4-balance
* How long will a session (tcp/rtmp) normally be?
* How fast can/will the reconnect from the clients be?
* Is it an option to use DSR (=Direct Server Return) for the stream from the
rtmp source?
* Which mode do you plan to use, http or tcp?

To get you right: you wish to hand over the client-connected sockets
(tcp/udp/unix) from the `old` process to the new process after a config reload,
right?

I think this isn't an easy task, nor am I sure it's possible, especially when
you run the setup in HA with different "machines", but I'm not the expert
on this topic.

> *Erlangga Pradipta Suryanto* | Software Engineer, BBM

Regards
Aleks

> 
> On Wed, Jan 30, 2019 at 7:20 PM Aleksandar Lazic  <mailto:al-hapr...@none.at>> wrote:
> 
> Hi.
> 
> Am 30.01.2019 um 13:08 schrieb Erlangga Pradipta Suryanto:
> > Hi,
> >
> > I'm trying to use haproxy to proxy rtmp stream to an nginx rtmp backend.
> > what we want to achieve is, we will add more nginx rtmp servers on the 
> backend, and when we do we want to reload the haproxy config without closing 
> the current stream.
> > We tested this by configuring haproxy with one backend and start one 
> stream, then we update the configuration to include one more backend then 
> issue the reload command to haproxy.
> > The stream is still going but when checking the process and the network 
> using ps and netstat, the old process is still up and it is still serving the 
> stream.
> > What we had in thought was that the old process could pass the stream 
> to the new process.
> >
> > We tried this using haproxy 1.8.17 and 1.9.3 and this is the haproxy 
> configuration that we use
> >
> > global
> >         debug
> >         log /dev/log    local0
> >         log /dev/log    local1 notice
> >         chroot /var/lib/haproxy
> >         stats socket /run/haproxy/admin.sock mode 660 level admin 
> expose-fd listeners
> >         stats timeout 30s
> >         user haproxy
> >         group haproxy
> >         daemon
> >
> >         # Default SSL material locations
> >         ca-base /etc/ssl/certs
> >         crt-base /etc/ssl/private
> >
> >         # Default ciphers to use on SSL-enabled listening sockets.
> >         # For more information, see ciphers(1SSL). This list is from:
> >         #  
> https://hynek

Re: Early connection close, incomplete transfers

2019-01-31 Thread Aleksandar Lazic
Hi.

Am 31.01.2019 um 10:29 schrieb Veiko Kukk:
> HAproxy 1.9.3, but happens also with 1.7.10, 1.7.11.
> 
> Connections are getting closed during data transfer phase at random sizes on 
> backend. Sometimes just as little as 420 bytes get transferred, but usually 
> more is transferred before sudden end of connection. HAproxy logs have 
> connection closing status SD-- when this happens.

Willy has found some issues which are addressed in the code of the 2.0 tree.
Do you have a chance to test this branch, or do you want to wait for the next
1.9 release?

I'm not sure if it affects you, as we haven't seen the config yet.
Maybe you can also share your config so that we can see if your setup could be
affected.
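[Editor's note] When triaging reports like this, it helps to tally the termination-state flags (the `SD--` column) straight out of the logs. A sketch assuming the default `option httplog` field order, with made-up sample lines modeled on the log excerpts in this thread:

```python
from collections import Counter

def term_state(logline):
    """Extract the 4-char termination state from an httplog line.

    Default httplog field order (after stripping any syslog prefix):
    client [date] frontend backend/server times status bytes
    captured-cookie captured-cookie termination_state ...
    """
    return logline.split()[9]

lines = [
    '127.0.0.1:33054 [01/Feb/2019:11:22:48.178] main_frontend '
    'lighty_load_balancer/lighty0 0/0/0/0/298 200 425 - - SD-- '
    '100/100/99/99/0 0/0 "GET /dl/file HTTP/1.1"',
    '127.0.0.1:33055 [01/Feb/2019:11:22:49.001] main_frontend '
    'lighty_load_balancer/lighty0 0/0/0/0/120 200 6320535 - - ---- '
    '100/100/99/99/0 0/0 "GET /dl/file HTTP/1.1"',
]
print(Counter(term_state(l) for l in lines))
```

`SD--` means the server aborted while haproxy was still transferring data to the client, which matches the truncated byte counts (425 instead of 6320535) in the log above.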

Best regards
Aleks

> Basic components of system look like this:
> Client --> HAproxy --> HTTP server --> Caching Proxy --> Remote origin
> 
> Our HTTP server part is compiling data from chunks it gets from the local
> cache. When it receives a request from a client via HAProxy, it sends the
> response header, then fetches chunks, compiles those and sends the data to
> the client.
> 
> SD-- happens more frequently when the connection between the benchmarking tool
> and HAProxy is fast, e.g. when doing tests where the client side is not loaded
> much. It happens much more for http than for https.
> 
> For example:
> 
> httpress -t1 -c10 -n1000 URL (rarely or not at all)
> 250 requests launched
> 500 requests launched
> 750 requests launched
> 1000 requests launched
> 
> TOTALS:  1000 connect, 1000 requests, 1000 success, 0 fail, 10 (10) real 
> concurrency
> TRAFFIC: 667959622 avg bytes, 452 avg overhead, 667959622000 bytes, 452000 
> overhead
> TIMING:  241.023 seconds, 4 rps, 2706393 kbps, 2410.2 ms avg req time
> 
> httpress -t10 -c10 -n1000 URL (happens frequently)
> 
> 2019-01-31 08:44:15 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:15 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:16 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:16 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:17 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:18 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:18 [26361:0x7fdc91a23700]: body [0] read connection closed
> 1000 requests launched
> 2019-01-31 08:44:19 [26361:0x7fdc82ffd700]: body [0] read connection closed
> thread 6: 73 connect, 73 requests, 72 success, 1 fail, 48093092784 bytes, 
> 32544 overhead
> thread 10: 72 connect, 72 requests, 72 success, 0 fail, 48093092784 bytes, 
> 32544 overhead
> thread 7: 73 connect, 73 requests, 72 success, 1 fail, 48093092784 bytes, 
> 32544 overhead
> thread 4: 88 connect, 88 requests, 67 success, 21 fail, 44753294674 bytes, 
> 30284 overhead
> thread 9: 111 connect, 111 requests, 56 success, 55 fail, 37405738832 bytes, 
> 25312 overhead
> thread 5: 82 connect, 82 requests, 68 success, 14 fail, 45421254296 bytes, 
> 30736 overhead
> thread 1: 86 connect, 86 requests, 68 success, 18 fail, 45421254296 bytes, 
> 30736 overhead
> thread 8: 184 connect, 184 requests, 29 success, 155 fail, 19370829038 bytes, 
> 13108 overhead
> thread 3: 73 connect, 73 requests, 73 success, 0 fail, 48761052406 bytes, 
> 32996 overhead
> thread 2: 158 connect, 158 requests, 39 success, 119 fail, 26050425258 bytes, 
> 17628 overhead
> 
> TOTALS:  1000 connect, 1000 requests, 616 success, 384 fail, 10 (10) real 
> concurrency
> TRAFFIC: 667959622 avg bytes, 452 avg overhead, 411463127152 bytes, 278432 
> overhead
> TIMING:  170.990 seconds, 3 rps, 2349959 kbps, 2775.8 ms avg req time
> 
> Because of thread count differences, -t1 (one thread) test is much more 
> loaded on client side than it is with -t10 (ten threads).
> 
> Random samples from HAproxy log (proper size of the object in HAproxy logs is 
> 667960042 bytes for that test file).
> 0/0/0/0/903 200 270807819 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/375 200 101926854 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/725 200 243340623 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/574 200 183069594 - - SD-- 11/11/9/9/0 0/0
> 0/0/0/0/648 200 208194175 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/1130 200 270807819 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/349 200 90597175 - - SD-- 10/10/9/9/0 0/0
> 
> Our HTTP server logs contain hard unrecoverable errors about unable to write 
> to socket when HAproxy closes connection:
> Return Code: 32. Transferred 79389313 out of 667959622 Bytes in 809 msec
> Return Code: 32. Transferred 198965568 out of 667959622 Bytes in 986 msec
> Return Code: 32. Transferred 126690257 out of 667959622 Bytes in 825 msec
> Return Code: 32. Transferred 270807399 out of 667959622 Bytes in 1273 msec
> Return Code: 32. Transferred 171663764 out of 667959622 Bytes in 1075 msec
> Return Code: 32. Transferred 169362556 out of 667959622 Bytes in 1146 msec
> Return Code: 32. Transferred 167789692 out of 667959622 Bytes in 937 msec
> Return Code: 32. Transferred 199752000 out of 667959622 Bytes in 1110 msec
> 

Re: RTMP and Seamless Reload

2019-01-30 Thread Aleksandar Lazic
Hi.

Am 30.01.2019 um 13:08 schrieb Erlangga Pradipta Suryanto:
> Hi,
> 
> I'm trying to use haproxy to proxy rtmp stream to an nginx rtmp backend.
> what we want to achieve is, we will add more nginx rtmp servers on the 
> backend, and when we do we want to reload the haproxy config without closing 
> the current stream.
> We tested this by configuring haproxy with one backend and start one stream, 
> then we update the configuration to include one more backend then issue the 
> reload command to haproxy.
> The stream is still going but when checking the process and the network using 
> ps and netstat, the old process is still up and it is still serving the 
> stream.
> What we had in thought was that the old process could pass the stream to the 
> new process.
> 
> We tried this using haproxy 1.8.17 and 1.9.3 and this is the haproxy 
> configuration that we use
> 
> global
>         debug
>         log /dev/log    local0
>         log /dev/log    local1 notice
>         chroot /var/lib/haproxy
>         stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd 
> listeners
>         stats timeout 30s
>         user haproxy
>         group haproxy
>         daemon
> 
>         # Default SSL material locations
>         ca-base /etc/ssl/certs
>         crt-base /etc/ssl/private
> 
>         # Default ciphers to use on SSL-enabled listening sockets.
>         # For more information, see ciphers(1SSL). This list is from:
>         #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
>         # An alternative list with additional directives can be obtained from
>         #  
> https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
>         ssl-default-bind-ciphers 
> ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
>         ssl-default-bind-options no-sslv3
> 
> defaults
>         log     global
>         mode    tcp
>         option  tcplog
>         option  dontlognull
>         timeout connect 5000
>         timeout client  5
>         timeout server  5
>         errorfile 400 /etc/haproxy/errors/400.http
>         errorfile 403 /etc/haproxy/errors/403.http
>         errorfile 408 /etc/haproxy/errors/408.http
>         errorfile 500 /etc/haproxy/errors/500.http
>         errorfile 502 /etc/haproxy/errors/502.http
>         errorfile 503 /etc/haproxy/errors/503.http
>         errorfile 504 /etc/haproxy/errors/504.http
> 
> frontend ft_rtpm
>         bind *:1935 name rtmp
>         mode tcp
>         maxconn 600
>         default_backend bk_rtmp
> 
> backend bk_rtmp 
>         mode tcp
>         server media01 172.17.1.213:1935 check maxconn 1 weight 10
>         #uncomment the line below then reload
>         #server media02 172.17.1.217:1935 check maxconn 1 weight 10
> 
> Is there a way to pass the stream to the new process created by the reload?

Well, AFAIK this is not possible with the current versions.
Why is a reconnect to the new process not acceptable or possible?
Which software is in use on either side of haproxy?
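
For reference, a sketch of what a seamless reload usually looks like, assuming 
the stats socket path from the config above (paths are illustrative):

```shell
# Reload while passing listening sockets to the new process.
# Requires "stats socket ... expose-fd listeners" in the global section,
# as already configured above.
haproxy -f /etc/haproxy/haproxy.cfg \
        -x /run/haproxy/admin.sock \
        -sf $(cat /run/haproxy.pid)
```

Note that this hands over the *listening* sockets only: established TCP 
streams stay pinned to the old worker until they end (`-sf` is a soft stop), 
which matches the behaviour you observed with ps/netstat.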

> Thank you,
> 
> *Erlangga Pradipta Suryanto* | Software Engineer, BBM

Regards
Aleks





Re: HTTP connection is reset after each request

2019-01-30 Thread Aleksandar Lazic
Hi Luke.

Am 30.01.2019 um 12:58 schrieb Luke Seelenbinder:
> Hi Aleks,
> 
> You're correct for http/1.1, but unfortunately, nothing I found after a 
> pretty long search indicated 1.8.x supports an h2 frontend with reusable 
> backend connections (h1.1 or h2).

Looks like you are also right for the h2 case.
I hadn't seen h2 in Marco's configuration, therefore I didn't assume he uses h2.

Let's see what Marco's answer is ;-)

> I stuck with h/1.1 until 1.9 was released because of this.
> 
> Best,
> Luke

Regards
Aleks

> —
> Luke Seelenbinder
> Stadia Maps | Founder
> stadiamaps.com
> 
> ‐‐‐ Original Message ‐‐‐
> On Wednesday, January 30, 2019 12:02 PM, Aleksandar Lazic 
>  wrote:
> 
>> Hi.
>>
> 
>> Am 30.01.2019 um 11:53 schrieb Marco Corte:
>>
> 
>>> Il 2019-01-30 11:40 Luke Seelenbinder ha scritto:
>>>
> 
>>>> Are you on 1.9.x? 1.8.x does not support reuse of backend connections
>>>> when using an h2 frontend. 1.9.x does support this and it works quite
>>>> nicely.
>>>
> 
>>> Yes! I am on version 1.8.17.
>>> Thank you for the explanation!
>>
> 
>> Well somehow it supports
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-http-reuse
>>
> 
>> I would play with the timeouts
>>
> 
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout 
>> http-keep-alive
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout 
>> http-request
>>
> 
>> There are some more timeouts which starts in the doc at `timeout check` in 
>> this section.
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.1
>>
> 
>> never the less 700ms is "relatively" long so I would also add a check in the 
>> server line.
>>
> 
>>> .marcoc
>>
> 
>> Regards
>> Aleks
> 




Re: HTTP connection is reset after each request

2019-01-30 Thread Aleksandar Lazic
Hi.

Am 30.01.2019 um 11:53 schrieb Marco Corte:
> Il 2019-01-30 11:40 Luke Seelenbinder ha scritto:
> 
> 
>> Are you on 1.9.x? 1.8.x does not support reuse of backend connections
>> when using an h2 frontend. 1.9.x does support this and it works quite
>> nicely.
> 
> Yes! I am on version 1.8.17.
> Thank you for the explanation!

Well, it does support reuse to some extent:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-http-reuse

I would play with the timeouts

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20http-keep-alive
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20http-request

There are some more timeouts, starting at `timeout check` in this section of 
the doc.
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.1

Nevertheless, 700 ms is relatively long, so I would also add a check on the 
server line.
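
A hedged sketch of what such a 1.8 configuration could look like (timeout 
values and addresses are illustrative, not recommendations):

```haproxy
defaults
    mode http
    option http-keep-alive
    timeout http-keep-alive 10s   # how long an idle client connection is kept
    timeout http-request    10s   # max time to receive a complete request

backend app
    http-reuse safe               # allow reuse of idle server connections
    server s1 192.0.2.10:80 check # "check" as suggested above
```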

> .marcoc

Regards
Aleks



Cache question

2019-01-29 Thread Aleksandar Lazic
Hi.

I plan to use the HAProxy 1.9.x cache with ~50-100k objects, which could use 
1-2 GB of RAM.

Has anyone used the cache feature in production with such specs?

The idea is to run HAProxy in AUS as a cache for a web server in FR, since the 
latency delays delivery from FR to AUS clients.
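
A minimal sketch of such a setup, assuming hypothetical names and addresses 
(`static_cache`, `fr_webserver`, 203.0.113.10):

```haproxy
# Small-object cache introduced in HAProxy 1.8/1.9. total-max-size is
# capped (4095 MB in 1.9), so 1-2 GB of small objects fits the design.
cache static_cache
    total-max-size 2048   # MB of shared memory reserved for the cache
    max-age 240           # seconds before an object is considered stale

backend fr_webserver
    http-request  cache-use   static_cache
    http-response cache-store static_cache
    server origin 203.0.113.10:80 check
```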

Thank you for your answer.

Best regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.3

2019-01-29 Thread Aleksandar Lazic
Am 29.01.2019 um 06:52 schrieb Willy Tarreau:
> Hi,
> 
> HAProxy 1.9.3 was released on 2019/01/29. It added 35 new commits after
> version 1.9.2.
> 
> It mainly addresses a few stability issues affecting versions up to 1.9.2.
> Several of these issues are only reproducible when using H2 to connect to
> the servers and are caused by various incorrect or insufficient error
> handling when facing failures during connection reuse. Another issue was
> a side effect of the fixes on mailers (which still use the checks
> infrastructure) that resulted in a crash when using agent-check. A last
> minor fix for the checks was made to address a timeout issue, and checks
> are expected to be in a better shape now.
> 
> Another issue was reported on the way our SSL stack deals with KeyUpdate
> messages that are part of TLS 1.3. These were identified as renegotiation
> attempts and were dropped, causing some communication issues with Chrome
> when they attempted to make use of them. Apparently we were not the only
> ones so it's a side effect of reusing a feature which has long had to be
> disabled everywhere. Now the issue was addressed, and it's important that
> distros update their packages to get this part fixed when they use OpenSSL
> 1.1.1 so that we don't leave early bugs on the net which prevent security
> features from reliably being used. This patch was also backported into the
> 1.8 branch and will be present in the next 1.8 release.
> 
> On the less important issues, some better control for stream limits were
> enforced on outgoing H2 connections. We used to observe batches of errors
> when the server was refusing too high stream IDs after it sent a GOAWAY,
> now we can react faster. In addition, in order to avoid this situation at
> all (as Nginx wants to close by default after 1000 streams over the same
> connection), we've added a "max-reuse" server parameter indicating how
> many times a connection may be reused. For example setting this to 990
> is enough to always stop reusing a connection before nginx sends its
> GOAWAY.
> 
> The H2 mux was not respecting the reserve in HTX mode, leading to the
> impossibility to manipulate headers and to some request or response
> errors. Some other small issues affecting the reserve size in HTX were
> addressed, though some of them are now a bit foggy to me.
> 
> That's about all for this release. I still have some pending fixes that
> I preferred to delay a bit and that I'll backport for the next 1.9 :
>   - make outgoing connection reuse failure fail more gracefully and
> support a retry ; we have everything for this, it just required a
> few changes in the connection setup code that I didn't feel bold
> enough to integrate into this one.
> 
>   - H2 will check that the content-length header matches the amount of
> DATA (standards compliance)
> 
>   - H2 currently don't use the server's advertised MAX_CONCURRENT_STREAMS
> setting and only uses its global one, but it's not much complicated
> to address. I expect that we may face some of these sooner or later.
> 
>   - there's this ":authority" header field missing from H2 requests that
> we should apparently add when upgrading H1 to H2.
> 
>   - regarding the reported issue of some large objects transfers over H2
> from some specific clients being truncated during reloads, I brought
> the issue to the IETF HTTP working group. Some gave me examples showing
> my initial idea of watching WINDOW_UPDATE messages will not work. However
> I managed to design another solution that I will experiment with soon
> in 2.0-dev. If it ends up working fine enough, we'll backport it to 1.9.
> 
> Last, if you feel like you'd like to contribute but don't know where to
> start, please have a look at the issue tracker (see the URL below), have
> a look at the bugs and if you feel like you can work on one of them, just
> mention it in the issue and propose a patch.
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Sources  : http://www.haproxy.org/download/1.9/src/
>Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
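
For reference, the new `max-reuse` parameter mentioned above is a server 
keyword; a hedged sketch with illustrative values:

```haproxy
backend app
    http-reuse safe
    # Stop reusing a connection before nginx's default 1000-stream GOAWAY:
    server ngx1 192.0.2.20:443 ssl alpn h2 verify none max-reuse 990
```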

Docker Images are also updated:

https://hub.docker.com/r/me2digital/haproxy19
https://hub.docker.com/r/me2digital/haproxy-19-boringssl

Both show some errors with `make reg-tests`; I think this could be a problem 
with containerized testing.
Does anyone run haproxy in a container with h2?

###
https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/152750687

Testing with haproxy version: 1.9.3
#top  TEST ./reg-tests/connection/b0.vtc 

Re: V1.9 SSL engine and ssl-mode-async is unstable

2019-01-25 Thread Aleksandar Lazic

Hi.

Am 25-01-2019 08:55, schrieb Kevin Zhu:


Hi HAProxy Team,
I am trying to use Intel QAT with HAProxy 1.9.0, but it works very 
unstably. I also tried HAProxy 1.8.16 and it works well. How can 
I find out what is wrong?
1.8.16 and 1.9.0 run on the same hardware and system and were compiled the 
same way, and use the same config file; the attached file is the config file.


Can you please explain "very unstable" in a bit more detail?

Can you try 1.9.2/3 ?

Do you have any errors or warnings in the logs?
Maybe you can use loglevel debug?


Thanks of any help.
Best regards


Regards
Aleks

haproxy.conf
Description: Binary data


Re: h1-client to h2-server host header / authority conversion failure.?

2019-01-25 Thread Aleksandar Lazic

Hi List.

Am 25-01-2019 01:01, schrieb PiBa-NL:

Hi List,

Attached a regtest which i 'think' should pass.

**   s1    0.0 === expect tbl.dec[1].key == ":authority"
 s1    0.0 EXPECT tbl.dec[1].key (host) == ":authority" failed

It seems to me the Host <> Authority conversion isn't happening
properly.? But maybe i'm just making a mistake in the test case...

I was using HA-Proxy version 2.0-dev0-f7a259d 2019/01/24 with this 
test.


The test was inspired by the attempt to connect to mail.google.com ,
as discussed in the "haproxy 1.9.2 with boringssl" mail thread.. Not
sure if this is the main problem, but it seems suspicious to me..
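
For background, the conversion under test is the rule from RFC 7540, section 
8.1.2.3: an HTTP/1.1 request line plus Host header map onto HTTP/2 
pseudo-headers, with Host becoming `:authority`. A toy Python illustration of 
the mapping (not HAProxy source code):

```python
# Toy model of the HTTP/1.1 -> HTTP/2 request mapping (RFC 7540 s8.1.2.3):
# the request line is split into pseudo-headers and the Host header value
# moves into ":authority".
def h1_to_h2_pseudo_headers(method, path, host, scheme="https"):
    return {
        ":method": method,
        ":scheme": scheme,
        ":path": path,
        ":authority": host,  # this is the conversion the regtest checks
    }

print(h1_to_h2_pseudo_headers("GET", "/", "mail.google.com"))
```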


That's one of the reasons why I love this community ;-)

As I'm just one member of this community, I want to thank everyone on the list 
for being part of HAProxy ;-)



Regards,

PiBa-NL (Pieter)


Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-24 Thread Aleksandar Lazic
Am 24.01.2019 um 15:09 schrieb Aleksandar Lazic:
> Am 24.01.2019 um 03:49 schrieb Willy Tarreau:
>> On Wed, Jan 23, 2019 at 09:37:46PM +0100, Aleksandar Lazic wrote:
>>>
>>> Am 23.01.2019 um 21:27 schrieb Willy Tarreau:
>>>> On Wed, Jan 23, 2019 at 09:08:00PM +0100, Aleksandar Lazic wrote:
>>>>> Should it be possible to have fe with h1 and be server h2(alpn h2), as I
>>>>> expect this or similar return value when I go thru haproxy?
>>>>
>>>> Yes absolutely. That's even what I'm doing on my tests to try to fix
>>>> the issues reported by Luke.
>>>
>>> Okay, perfect.
>>>
>>> Would you like to share your config so that I can see what's wrong with my
>>> config, thanks.
>>
>> Sure, here's a copy-paste, hoping I don't mess with anything :-)
>>
>>   defaults
>> mode http
>> option http-use-htx
>> option httplog
>> log stdout format raw daemon
>> timeout connect 4s
>> timeout client 10s
>> timeout server 10s
>>
>>   frontend decrypt
>> bind :4445
>> bind :4446 proto h2
>> bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
>> default_backend trace
>>
>>   backend trace
>> stats uri /stat
>> server s1 127.0.0.1:443 ssl alpn h2 verify none
>> #server s2 127.0.0.1:80
>> #server s3 127.0.0.1:80 proto h2
>>
>> As you can see you just connect to port 4445.
> 
> Many thanks.
> Sorry for the long mail thread but I'm not able to get a proper answer from 
> the ssl backend.

Please ignore this mail.
There is a problem within the container, as curl inside the container has the 
same problem as haproxy, so it's related to the container runtime.

> I have made the setup more easier.
> 
> This setup does not return the stats page.
> curl => haproxy-19 with openssl => openssl s_server internal stats page
> 
> This setup does return the stats page.
> 
> ###
> curl -vk https://207.154.204.236:4443
> * About to connect() to 207.154.204.236 port 4443 (#0)
> *   Trying 207.154.204.236...
> * Connected to 207.154.204.236 (207.154.204.236) port 4443 (#0)
> * Initializing NSS with certpath: sql:/etc/pki/nssdb
> * skipping SSL peer certificate verification
> * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
> * Server certificate:
> *   subject: CN=h2test.livesystem.at
> *   start date: Jan 24 12:18:25 2019 GMT
> *   expire date: Apr 24 12:18:25 2019 GMT
> *   common name: h2test.livesystem.at
> *   issuer: CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US
>> GET / HTTP/1.1
>> User-Agent: curl/7.29.0
>> Host: 207.154.204.236:4443
>> Accept: */*
>>
> * HTTP 1.0, assume close after body
> < HTTP/1.0 200 ok
> < Content-type: text/html
> <
> 
> 
> 
> s_server -www -alpn h2 -cert 
> /root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt
>  -key 
> /root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key
>  -accept 4443 -debug -msg
> Secure Renegotiation IS supported
> Ciphers supported in s_server binary
> .
> ###
> 
> # openssl version
> OpenSSL 1.0.2k-fips  26 Jan 2017
> 
> # curl -V
> curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.34 zlib/1.2.7 
> libidn/1.28 libssh2/1.4.3
> Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 
> pop3s rtsp scp sftp smtp smtps telnet tftp
> Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz 
> unix-sockets
> 
> 
> defaults
> mode http
> option http-use-htx
> option httplog
> log stdout format raw daemon debug
> timeout connect 4s
> timeout client 10s
> timeout server 10s
> 
> frontend decrypt
> bind :4445
> bind :4446 proto h2
> #bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
> default_backend trace
> 
> backend trace
> stats uri /stat
> 
> # localhosts ip
> server s1 207.154.204.236:4443 ssl alpn h2 verify none
> 
> 
> 
> podman run --rm -it \
> -e SERVICE_DEST=mail.google.com \
> -e LOGLEVEL=debug \
> -e NUM_THREADS=8 \
> -e DNS_SRV001=1.1.1.1 \
> -e DNS_SRV002=8.8.8.8 \
> -e STATS_PORT=7411 \
> -e STATS_USER=test \
> -e STATS_PASSWORD=test \
> -e SERVICE_TCP_PORT=8443 \
> -e SERVICE_NAME=google-mail \
> -e SERVICE_DEST_IP=mail.google.com \
> -e 

Re: haproxy 1.9.2 with boringssl

2019-01-24 Thread Aleksandar Lazic
Am 24.01.2019 um 03:49 schrieb Willy Tarreau:
> On Wed, Jan 23, 2019 at 09:37:46PM +0100, Aleksandar Lazic wrote:
>>
>> Am 23.01.2019 um 21:27 schrieb Willy Tarreau:
>>> On Wed, Jan 23, 2019 at 09:08:00PM +0100, Aleksandar Lazic wrote:
>>>> Should it be possible to have fe with h1 and be server h2(alpn h2), as I
>>>> expect this or similar return value when I go thru haproxy?
>>>
>>> Yes absolutely. That's even what I'm doing on my tests to try to fix
>>> the issues reported by Luke.
>>
>> Okay, perfect.
>>
>> Would you like to share your config so that I can see what's wrong with my
>> config, thanks.
> 
> Sure, here's a copy-paste, hoping I don't mess with anything :-)
> 
>   defaults
> mode http
> option http-use-htx
> option httplog
> log stdout format raw daemon
> timeout connect 4s
> timeout client 10s
> timeout server 10s
> 
>   frontend decrypt
> bind :4445
> bind :4446 proto h2
> bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
> default_backend trace
> 
>   backend trace
> stats uri /stat
> server s1 127.0.0.1:443 ssl alpn h2 verify none
> #server s2 127.0.0.1:80
> #server s3 127.0.0.1:80 proto h2
> 
> As you can see you just connect to port 4445.

Many thanks.
Sorry for the long mail thread, but I'm not able to get a proper answer from the 
SSL backend.

I have simplified the setup.

This setup does not return the stats page:
curl => haproxy-19 with openssl => openssl s_server internal stats page

The setup below, where curl talks to s_server directly, does return the stats page:

###
curl -vk https://207.154.204.236:4443
* About to connect() to 207.154.204.236 port 4443 (#0)
*   Trying 207.154.204.236...
* Connected to 207.154.204.236 (207.154.204.236) port 4443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*   subject: CN=h2test.livesystem.at
*   start date: Jan 24 12:18:25 2019 GMT
*   expire date: Apr 24 12:18:25 2019 GMT
*   common name: h2test.livesystem.at
*   issuer: CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 207.154.204.236:4443
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 ok
< Content-type: text/html
<



s_server -www -alpn h2 -cert 
/root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt
 -key 
/root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key
 -accept 4443 -debug -msg
Secure Renegotiation IS supported
Ciphers supported in s_server binary
.
###

# openssl version
OpenSSL 1.0.2k-fips  26 Jan 2017

# curl -V
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.34 zlib/1.2.7 
libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 
pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz 
unix-sockets


defaults
mode http
option http-use-htx
option httplog
log stdout format raw daemon debug
timeout connect 4s
timeout client 10s
timeout server 10s

frontend decrypt
bind :4445
bind :4446 proto h2
#bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
default_backend trace

backend trace
stats uri /stat

# localhosts ip
server s1 207.154.204.236:4443 ssl alpn h2 verify none



podman run --rm -it \
-e SERVICE_DEST=mail.google.com \
-e LOGLEVEL=debug \
-e NUM_THREADS=8 \
-e DNS_SRV001=1.1.1.1 \
-e DNS_SRV002=8.8.8.8 \
-e STATS_PORT=7411 \
-e STATS_USER=test \
-e STATS_PASSWORD=test \
-e SERVICE_TCP_PORT=8443 \
-e SERVICE_NAME=google-mail \
-e SERVICE_DEST_IP=mail.google.com \
-e SERVICE_DEST_PORT=443 \
-e CONFIG_FILE=/mnt/haproxy2.cfg \
-e DEBUG=1 -v /tmp/:/mnt/ \
-p 4445 --expose 4445 \
--net host \
me2digital/haproxy19


###
openssl s_server -www -alpn h2 \
-cert 
~/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt
 \
-key 
~/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key
 \
-accept 4443 -debug -msg
###

###
[root@doh-001 ~]# curl -vk http://127.0.0.1:4445
* About to connect() to 127.0.0.1 port 4445 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 4445 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:4445
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
< cache-control: no-cache
< con

Re: haproxy 1.9.2 with boringssl

2019-01-23 Thread Aleksandar Lazic


Am 23.01.2019 um 21:27 schrieb Willy Tarreau:
> On Wed, Jan 23, 2019 at 09:08:00PM +0100, Aleksandar Lazic wrote:
>> Should it be possible to have fe with h1 and be server h2(alpn h2), as I
>> expect this or similar return value when I go thru haproxy?
> 
> Yes absolutely. That's even what I'm doing on my tests to try to fix
> the issues reported by Luke.

Okay, perfect.

Would you like to share your config so that I can see what's wrong with my 
config, thanks.

>> I haven't seen any log option to get the backend request method, I think this
>> should be a feature request ;-).
> 
> What do you mean with "backend request method" precisely ?

As the log variables below are frontend-oriented, it would be nice to get the 
same information for the backend as well, to see what was sent to the backend 
server.
The problem I see is that tcpdump/tshark does not help to see what's 
transferred on the wire when the backend talks TLS.

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#8.2.4

### current variables

  | H | %HM  | HTTP method (ex: POST)| string  |
  | H | %HP  | HTTP request URI without query string (path)  | string  |
  | H | %HQ  | HTTP request URI query string (ex: ?bar=baz)  | string  |
  | H | %HU  | HTTP request URI (ex: /foo?bar=baz)   | string  |
  | H | %HV  | HTTP version (ex: HTTP/1.0)   | string  |

Possible new
  | H | %bM  | Backend HTTP method (ex: POST)| string   
   |
  | H | %bP  | Backend HTTP request URI without query string (path)  | string   
   |
  | H | %bQ  | Backend HTTP request URI query string (ex: ?bar=baz)  | string   
   |
  | H | %bU  | Backend HTTP request URI (ex: /foo?bar=baz)   | string   
   |
  | H | %bV  | Backend HTTP version (ex: HTTP/1.0)   | string   
   |

###

> Willy

Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-23 Thread Aleksandar Lazic
Hi Willy.

Am 23.01.2019 um 19:50 schrieb Willy Tarreau:
> Hi Aleks,
> 
> On Wed, Jan 23, 2019 at 06:58:25PM +0100, Aleksandar Lazic wrote:
>> backend be_generic_tcp
>>   mode http
>>   balance source
>>   timeout check 5s
>>   option tcp-check
>>
>>   server "${SERVICE_NAME}" ${SERVICE_DEST_IP}:${SERVICE_DEST_PORT} check 
>> inter 5s proto h2 ssl ssl-min-ver TLSv1.3 verify none
> 
> You need to replace "proto h2" with "alpn h2", so that the application
> protocol is announced to the other host, otherwise it will stick to the
> default, very likely "http/1.1", while haproxy talks h2 there. This can
> explain the 502 when the other side rejected your request.

I have changed it but still no luck.
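
For reference, the change boils down to one keyword on the server line (a 
sketch using the service values from the config below, not the full config):

```haproxy
backend be_generic_tcp
    # "proto h2" forces HTTP/2 framing without announcing it to the peer;
    # "alpn h2" negotiates HTTP/2 during the TLS handshake instead:
    server google-mail mail.google.com:443 check inter 5s alpn h2 ssl verify none
```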

Should it be possible to have a frontend with h1 and a backend server with h2 
(alpn h2)? I expect this or a similar return value when I go through haproxy.

I haven't seen any log option to get the backend request method; I think this 
should be a feature request ;-).


curl -vo /dev/null https://mail.google.com:443
*   Trying 172.217.21.229...
* Connected to mail.google.com (172.217.21.229) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*   subject: CN=mail.google.com,O=Google LLC,L=Mountain 
View,ST=California,C=US
*   start date: Dec 19 08:16:00 2018 GMT
*   expire date: Mar 13 08:16:00 2019 GMT
*   common name: mail.google.com
*   issuer: CN=Google Internet Authority G3,O=Google Trust Services,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: mail.google.com
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Location: /mail/
< Expires: Wed, 23 Jan 2019 20:01:34 GMT
< Date: Wed, 23 Jan 2019 20:01:34 GMT
< Cache-Control: private, max-age=7776000
< Content-Type: text/html; charset=UTF-8
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< Server: GSE
< Alt-Svc: clear
< Accept-Ranges: none
< Vary: Accept-Encoding
< Transfer-Encoding: chunked
<
{ [data not shown]
* Connection #0 to host mail.google.com left intact


Config is now this.

###
cat /tmp/haproxy.cfg
# https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#3
global
  # nodaemon

  log stdout format rfc5424 daemon "${LOGLEVEL}"

  stats socket /tmp/sock1 mode 666 level admin
  stats timeout 1h
  tune.ssl.default-dh-param 2048
  ssl-server-verify none

  nbthread "${NUM_THREADS}"


defaults
  log global

# the format is described at
# https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4

# copied from
# 
https://github.com/haproxytech/haproxy-docker-arm64v8/blob/master/cfg_files/haproxy.cfg
  retries 3
  timeout http-request10s
  timeout queue   1m
  timeout connect 10s
  timeout client  1m
  timeout server  1m
  timeout http-keep-alive 10s
  timeout check   10s
  maxconn 3000

  default-server resolve-prefer ipv4 inter 5s resolvers mydns
  option http-use-htx
  option httplog

  log-format ">>> %ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS 
%tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %rt %sslv %sslc"

resolvers mydns
  nameserver dns1 "${DNS_SRV001}":53
  nameserver dns2 "${DNS_SRV002}":53
  resolve_retries   3
  timeout retry 1s
  hold valid   10s

listen stats
bind :"${STATS_PORT}"
mode http
# Health check monitoring uri.
monitor-uri /healthz

# Add your custom health check monitoring failure condition here.
# monitor fail if 
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth "${STATS_USER}":"${STATS_PASSWORD}"

frontend public_tcp
  bind :"${SERVICE_TCP_PORT}" alpn h2,http/1.1

  mode http
  log global

  default_backend be_generic_tcp


backend be_generic_tcp
  mode http
  balance source
  timeout check 5s
  option tcp-check

  server "${SERVICE_NAME}" ${SERVICE_DEST_IP}:${SERVICE_DEST_PORT} check inter 
5s alpn h2 ssl ssl-min-ver TLSv1.3 verify none
###

Log of haproxy

<29>1 2019-01-23T20:00:30+00:00 doh-001 haproxy 1 - - Proxy stats started.
<29>1 2019-01-23T20:00:30+00:00 doh-001 haproxy 1 - - Proxy public_tcp started.
<29>1 2019-01-23T20:00:30+00:00 doh-001 haproxy 1 - - Proxy be_generic_tcp 
started.
[WARNING] 022/200030 (1) : be_generic_tcp/google-mail changed its IP from 
172.217.21.229 to 172.217.18.165 by mydns/dns1.
<29>1 2019-01-23T20:00:30+00:00 doh-001 haproxy 1 - - 
be_generic_tcp/google-mail changed its IP from 172.217.21.229 to 172.217.18.165 
by mydns/dns1.

:public_tcp.accept(0006)=000c from [127.0.0.1:54

Re: haproxy 1.9.2 with boringssl

2019-01-23 Thread Aleksandar Lazic
haproxy-19-boringssl

using CONFIG_FILE   :/mnt/haproxy.cfg
<29>1 2019-01-23T17:50:45+00:00 doh-001 haproxy 1 - - Proxy stats started.
<29>1 2019-01-23T17:50:45+00:00 doh-001 haproxy 1 - - Proxy public_tcp started.
<29>1 2019-01-23T17:50:45+00:00 doh-001 haproxy 1 - - Proxy be_generic_tcp 
started.
[WARNING] 022/175045 (1) : be_generic_tcp/google-mail changed its IP from 
172.217.21.229 to 216.58.207.69 by mydns/dns1.
<29>1 2019-01-23T17:50:45+00:00 doh-001 haproxy 1 - - 
be_generic_tcp/google-mail changed its IP from 172.217.21.229 to 216.58.207.69 
by mydns/dns1.
<30>1 2019-01-23T17:50:50+00:00 doh-001 haproxy 1 - - 127.0.0.1:54178 
[23/Jan/2019:17:50:50.727] public_tcp public_tcp/ -1/-1/-1/-1/0 0 0 - - 
PR-- 1/1/0/0/0 0/0 ""
<30>1 2019-01-23T17:50:50+00:00 doh-001 haproxy 1 - - 127.0.0.1:54178 
[23/Jan/2019:17:50:50.715] public_tcp be_generic_tcp/google-mail 0/0/13/-1/13 
502 208 - - SH-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
####

I thought that haproxy translates the HTTP/1.1 call into an HTTP/2 call; is that a
proper assumption?
What's my mistake?

Thanks for the help.

Regards
Aleks

On 22.01.2019 at 19:38, Aleksandar Lazic wrote:
> Hi.
> 
> I have now build haproxy with boringssl and it looks quite good.
> 
> Is it the recommended way to simply make a git clone without any branch or 
> tag?
> Does anyone know how the KeyUpdate can be tested?
> 
> ###
> HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
> -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
> -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
> -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value
> -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
>   OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
> USE_THREAD=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_TFO=1
> 
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> Built with OpenSSL version : BoringSSL
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
> Built with Lua version : Lua 5.3.5
> Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
> IP_FREEBIND
> Built with zlib version : 1.2.11
> Running on zlib version : 1.2.11
> Compression algorithms supported : identity("identity"), deflate("deflate"),
> raw-deflate("deflate"), gzip("gzip")
> Built with PCRE2 version : 10.31 2018-02-12
> PCRE2 library supports JIT : yes
> Encrypted password support via crypt(3): yes
> Built with multi-threading support.
> 
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Available multiplexer protocols :
> (protocols marked as <default> cannot be specified using 'proto' keyword)
>   h2 : mode=HTX        side=FE|BE
>   h2 : mode=HTTP       side=FE
>   <default> : mode=HTX        side=FE|BE
>   <default> : mode=TCP|HTTP   side=FE|BE
> 
> Available filters :
>   [SPOE] spoe
>   [COMP] compression
>   [CACHE] cache
>   [TRACE] trace
> ###
> 
> I also wanted to run the reg-tests, but they fail.
> 
> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/149523589
> 
> -
> ...
> + cd /usr/src/haproxy
> + VTEST_PROGRAM=/usr/src/VTest/vtest HAPROXY_PROGRAM=/usr/local/sbin/haproxy
> make reg-tests
> ...
> ## Starting vtest ##
> Testing with haproxy version: 1.9.2
> #top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.856) exit=2
> #top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.742) exit=2
> #top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
> #top  TEST ./reg-tests/log/b0.vtc FAILED (10.008) signal=9
> #top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.745) exit=2
> 4 tests failed, 0 tests skipped, 29 tests passed
> ## Gathering results ##
> ## Test case: ./reg-tests/log/b0.vtc ##
> ## test results in: 
> "/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.357fd753"
> ## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
> ## test results in: 
> "/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.477fdc0b"
>  c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed

Re: H2 Server Connection Resets (1.9.2)

2019-01-23 Thread Aleksandar Lazic
Hi Lukas.

On 23.01.2019 at 10:24, Luke Seelenbinder wrote:
> Hi Willy,
> 
> Thanks for continuing to look into this. 
> 
>>
> 
>> I've placed an nginx instance after my local haproxy dev config, and
>> found something which might explain what you're observing : the process
>> apparently leaks FDs and fails once in a while, causing 500 to be returned :
> 
> That's fascinating. I would have thought nginx would have had a bit better 
> care given to things like that. . .

This can be fixed by increasing the ulimits ;-).
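For nginx specifically, one way to do that (an illustrative sketch, not from this thread; values are made up) is to raise the worker file-descriptor limit directly in nginx.conf rather than via the shell's ulimit:

```
# main (top-level) context of nginx.conf -- values are illustrative
worker_rlimit_nofile 65536;

events {
    # keep worker_connections well below the fd limit;
    # each proxied connection can use two fds (client + upstream)
    worker_connections 16384;
}
```

If nginx runs under systemd, raising `LimitNOFILE=` in the unit file achieves the same effect.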

> Oddly enough, I cannot find any log entries that approximate this. However, 
> it's possible since we're primarily (99+%) using nginx as a reverse-proxy 
> that the fd issues wouldn't appear for us.

What's the ulimit for your nginx process?

> My next thought is to try tcpdump to try to determine what's on the wire when 
> the CD-- and SD-- pairs appear, but since our stack is SSL e2e, that might 
> prove difficult. Any suggestions?

If you have enough log space, you can try to activate debug logging in nginx and
haproxy.

https://nginx.org/en/docs/debugging_log.html
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#log => debug

This will have some impact on performance, as every request creates a lot of
log lines!
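A sketch of what enabling both could look like (paths and log targets are illustrative; nginx needs a binary built with `--with-debug` for the `debug` level to take effect):

```
# nginx.conf
error_log /var/log/nginx/debug.log debug;

# haproxy.cfg
global
    # send everything up to the debug level to a local syslog
    log 127.0.0.1:514 local0 debug
```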

It would be interesting to see which error shows up in the nginx log when the CD/SD
events happen, as 'http2 flood detected' is not in the logs.

Which release of nginx do you use?
http://hg.nginx.org/nginx/tags

Maybe there are some errors in the log which can be found in this directory.
http://hg.nginx.org/nginx/file/release-1.15.8/src/http/v2/

> One more interesting piece of data: if we use htx without h2 on the backends, 
> we only see CD-- entries consistently (with a very, very few SD-- entries). 
> Thus, it would seem whatever is causing the issue is directly related to h2 
> backends. I further think we can safely say it is directly related to h2 
> streams breaking (due to client-side request cancellations) resulting in the 
> whole connection breaking in HAProxy or nginx (though determining which will 
> be the trick).
> 
> There's also a strong possibility we replace nginx with HAProxy entirely for 
> our SSL + H2 setup as we overhaul the backends, so this problem will probably 
> be resolved by removing the problematic interaction.

What was the main reason to put nginx between haproxy and the backends?
What are the backends?

Regards
Aleks

> I'm still working on running h2load against our nginx servers to see if that 
> turns anything up.
> 
>> And at this point the connection is closed and reopened for new requests.
>> There's never any GOAWAY sent.
> 
> If I'm understanding this correctly, that implies as long as nginx sends 
> GOAWAY properly, HAProxy will not attempt to reuse the connection?
> 
>> I managed to work around the problem by limiting the number of total
>> requests per connection. I find this extremely dirty but if it helps...
>> I just need to figure how to best do it, so that we can use it as well
>> for H2 as for H1.
> 
> We're pretty satisfied with our h2 fe <-> be h1.1 setup right now, so we will 
> probably stick with that for now, since we don't want to have any more 
> operational issues from bleeding-edge bugs. (Not a comment on HAProxy, per 
> se, just a business reality. :-) ) I'm more than happy to try out anything 
> you turn up on our staging setup!
> 
> Best,
> Luke
> 
> 
> —
> Luke Seelenbinder
> Stadia Maps | Founder
> stadiamaps.com
> 
> ‐‐‐ Original Message ‐‐‐
> On Wednesday, January 23, 2019 8:28 AM, Willy Tarreau  wrote:
> 
>> Hi Luke,
>>
> 
>> I've placed an nginx instance after my local haproxy dev config, and
>> found something which might explain what you're observing : the process
>> apparently leaks FDs and fails once in a while, causing 500 to be returned :
>>
> 
>> 2019/01/23 08:22:13 [crit] 25508#0: *36705 open() 
>> "/usr/local/nginx/html/index.html" failed (24: Too many open files), client: 
>> 1>
>> 2019/01/23 08:22:13 [crit] 25508#0: accept4() failed (24: Too many open 
>> files)
>>
> 
>> 127.0.0.1 - - [23/Jan/2019:08:22:13 +0100] "GET / HTTP/2.0" 500 579 "-" 
>> "Mozilla/4.0 (compatible; MSIE 7.01; Windows)"
>>
> 
>> The ones are seen by haproxy :
>>
> 
>> 127.0.0.1:47098 [23/Jan/2019:08:22:13.589] decrypt trace/ngx 0/0/0/0/0 500 
>> 701 - -  1/1/0/0/0 0/0 "GET / HTTP/1.1"
>>
> 
>> And at this point the connection is closed and reopened for new requests.
>> There's never any GOAWAY sent.
>>
> 
>> I managed to work around the problem by limiting the number of total
>> requests per connection. I find this extremely dirty but if it helps...
>> I just need to figure how to best do it, so that we can use it as well
>> for H2 as for H1.
>>
> 
>> Best regards,
>> Willy
> 




Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 21:45, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 12:13 PM Aleksandar Lazic  wrote:
>> Sorry for my dumb question, I just want to be safe and not break something.
>>
>> It would be nice to have the option '-key-update' in client.cc and server.cc
>> where can I put this feature request for boringssl?
>>
>> That would make the test easy with this command.
>>
>> `./tool/bssl s_client -key-update -connect $test-haproxy-instance `
> 
> bssl is just for human experimentation, it shouldn't be used in
> something like a test because we break the interface from
> time-to-time. (Also note that BoringSSL in general "is not intended
> for general use, as OpenSSL is. We don't recommend that third parties
> depend upon it." https://boringssl.googlesource.com/boringssl)

Yes, I have read that and was surprised, but it is what it is.

> You may well be better off using OpenSSL for a test like that, or
> perhaps writing a C/C++ program (which will probably work for either
> OpenSSL or BoringSSL).

Well, thanks.
Currently I have no time to look into this topic.

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Tim.

On 22.01.2019 at 20:57, Tim Düsterhus wrote:

> Aleks,
> 
> On 22.01.19 at 20:50, Aleksandar Lazic wrote:
>> This means that the function in haproxy works but the check should be
>> adapted to match both cases, right?
> 
> At least one should investigate what exactly is happening here (the
> differences between the libc is a guess) and possibly file a bug for
> either glibc or musl. I believe what musl is doing here is correct and
> thus glibc must be incorrect.
> 
> Consider filing a tracking bug in haproxy's issue tracker to verify
> where / who exactly is doing something wrong.

Done.
https://github.com/haproxy/haproxy/issues/23

>> Do you think that in general the alpine/musl is a good idea or should I stay 
>> on
>> centos as for my other images?
> 
> FWIW: There already is an Alpine image for haproxy in Docker Official
> Images:
> https://github.com/docker-library/haproxy/blob/master/1.9/alpine/Dockerfile

Yep, I know; that image uses OpenSSL. I was curious how difficult it is to run
haproxy with boringssl.

Nevertheless, this Dockerfile has "only" 2 failed tests.


## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.904) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.804) exit=2
2 tests failed, 0 tests skipped, 31 tests passed
## Gathering results ##
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-26-25.BmFdCB/vtc.1383.3d3a039a"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a::10:0) == 
"2001:db8:c001:c01a:0::10:0" failed
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-26-25.BmFdCB/vtc.1383.06fe4e21"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
make: *** [Makefile:1102: reg-tests] Error 1


This matches your assumption that musl and glibc handle IPv6 formatting differently.


> Personally I'm a Debian guy, for containers I prefer Debian based and
> CentOS / RHEL I don't use at all.

Interestingly, even the Debian-based image has failed tests:

https://github.com/docker-library/haproxy/tree/master/1.9

But this could be a known bug that is already fixed in the current git.

-
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.808) exit=2
1 tests failed, 0 tests skipped, 32 tests passed
## Gathering results ##
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
Makefile:1102: recipe for target 'reg-tests' failed
make: *** [reg-tests] Error 1
+ egrep -r ^ /tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log 
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log:## Test case: 
./reg-tests/mailers/k_healthcheckmail.vtc ##
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log:## test results in: 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1"
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log: c27.0 
EXPECT resp.http.mailsreceived (11) == "16" failed
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/INFO:Test case: 
./reg-tests/mailers/k_healthcheckmail.vtc
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:global
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:stats 
socket 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/stats.sock" 
level admin mode 600
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:stats 
socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:global
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
lua-load /usr/src/haproxy/./reg-tests/mailers/k_healthcheckmail.lua
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:defaults
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
frontend femail
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
mode tcp
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
bind "fd@${femail}"
/tmp/haregtests-2019-01-22_20-56-31.QMI0U

Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:54, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 11:45 AM Aleksandar Lazic  wrote:
>> Can it be reused to test a specific server, like:
>>
>> ssl/test/runner/runner -test "KeyUpdate-ToServer" 127.0.0.1:8443
> 
> Not easily: it drives the implementation under test by forking a
> process and has quite a complex interface via command-line arguments.
> (I.e. 
> https://boringssl.googlesource.com/boringssl/+/eadef4730e66f914d7b9cbb2f38ecf7989f992ed/ssl/test/test_config.h)
> 
>> or should a small C/Go program be used for that test?
> 
> You could easily tweak transport_common.cc to call SSL_key_update
> before each SSL_write or so.

Great.

To be on the safe side, I would like to add the following lines

###
if (!SSL_key_update(ssl, SSL_KEY_UPDATE_NOT_REQUESTED)) {
  fprintf(stderr, "SSL_key_update failed.\n");
  return false;
}
###

before this line.

https://boringssl.googlesource.com/boringssl/+/master/tool/transport_common.cc#706

Sorry for my dumb question, I just want to be safe and not break something.

It would be nice to have the option '-key-update' in client.cc and server.cc.
Where can I put this feature request for boringssl?

That would make the test easy with this command.

`./tool/bssl s_client -key-update -connect $test-haproxy-instance `

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Tim.

On 22.01.2019 at 20:26, Tim Düsterhus wrote:
> Aleks,
> 
> On 22.01.19 at 19:38, Aleksandar Lazic wrote:
>> ## test results in: 
>> "/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.76167f9e"
>>  s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a::10:0) ==
>> "2001:db8:c001:c01a:0::10:0" failed
> 
> The difference here is that the test expects an IPv6 address that's not
> maximally compressed, while you get a IPv6 address that *is* maximally
> compressed. I would guess that this is the difference in behaviour
> between glibc and musl (as you are using an Alpine container).

Ah that explains this error.

This means that the function in haproxy works but the check should be adapted to
match both cases, right?
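Both spellings denote the same address; the musl build just emits the maximally compressed (RFC 5952) form. A quick sanity check with Python's ipaddress module (not part of the reg-test, just an illustration):

```python
import ipaddress

# The form the reg-test expects (not maximally compressed)...
expected = "2001:db8:c001:c01a:0::10:0"
# ...and the maximally compressed form produced under musl.
got = "2001:db8:c001:c01a::10:0"

a = ipaddress.ip_address(expected)
b = ipaddress.ip_address(got)

assert a == b          # same address, only the textual form differs
print(b.compressed)    # -> 2001:db8:c001:c01a::10:0 (RFC 5952 canonical form)
```

So the test expectation pins one textual rendering while both are valid; comparing parsed addresses instead of strings would accept both libcs.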

Do you think that in general the alpine/musl is a good idea or should I stay on
centos as for my other images?

Any Idea for the other failed tests?

-
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.859) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.739) exit=2
#top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
#top  TEST ./reg-tests/log/b0.vtc FAILED (10.001) signal=9
#top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.752) exit=2
4 tests failed, 0 tests skipped, 29 tests passed
## Gathering results ##
## Test case: ./reg-tests/http-messaging/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.7739e83e"
 c1h2  0.0 Wrong frame type HEADERS (1) wanted WINDOW_UPDATE
## Test case: ./reg-tests/log/b0.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.2776263d"
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.0900be1e"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a::10:0) ==
"2001:db8:c001:c01a:0::10:0" failed
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.506e5b2b"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
-

> Best regards
> Tim Düsterhus

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:30, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 11:16 AM Aleksandar Lazic  wrote:
>> Agree that I get a 400 with this command.
>>
>> `echo 'K' | ./tool/bssl s_client -connect mail.google.com:443`
> 
> (Note that "K" on its own line does not send a KeyUpdate message with
> BoringSSL's bssl tool. It just sends "K\n".)
> 
>> How does boringssl test if the KeyUpdate on a server works?
> 
> If you're asking how BoringSSL's internal tests exercise KeyUpdates
> then we maintain a fork of Go's TLS stack that is extensively modified
> to be able to generate a large variety of TLS patterns. That is used
> to exercise KeyUpdates in a number of ways:
> https://boringssl.googlesource.com/boringssl/+/eadef4730e66f914d7b9cbb2f38ecf7989f992ed/ssl/test/runner/runner.go#2779

Thanks.

Can it be reused to test a specific server, like:

ssl/test/runner/runner -test "KeyUpdate-ToServer" 127.0.0.1:8443

or should a small C/Go program be used for that test?

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:04, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 10:54 AM Aleksandar Lazic  wrote:
>> Does boringssl have a tool similar to s_client?
> 
> BoringSSL builds tool/bssl (in the build directory), which is similar.
> However it doesn't have any magic inputs that can trigger a KeyUpdate
> message like OpenSSL's s_client.

Thanks.
The test was already running when I got your answer.

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/149540960

Agree that I get a 400 with this command.

`echo 'K' | ./tool/bssl s_client -connect mail.google.com:443`

How does boringssl test if the KeyUpdate on a server works?

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 19:54, Aleksandar Lazic wrote:
> Cool, thanks.
> 
> Does boringssl have a tool similar to s_client?
> 
> I'd rather not build openssl just for an s_client call :-)

Answer my own question.

bssl is the boringssl tool command.

The open question is why the tests fail in a container.

> Regards
> Aleks
> 
> 
>  Original Message 
> From: Janusz Dziemidowicz 
> Sent: 22 January 2019 19:49:15 CET
> To: Aleksandar Lazic 
> CC: HAProxy 
> Subject: Re: haproxy 1.9.2 with boringssl
> 
> On Tue, 22 Jan 2019 at 19:40, Aleksandar Lazic wrote:
>>
>> Hi.
>>
>> I have now build haproxy with boringssl and it looks quite good.
>>
>> Is it the recommended way to simply make a git clone without any branch or 
>> tag?
>> Does anyone know how the KeyUpdate can be tested?
> 
> openssl s_client -connect HOST:PORT (openssl >= 1.1.1)
> Just type 'K' and press enter. If the server is broken then connection
> will be aborted.
> 
> www.github.com:443, currently broken:
> read R BLOCK
> K
> KEYUPDATE
> read R BLOCK
> read:errno=0
> 
> mail.google.com:443, working:
> read R BLOCK
> K
> KEYUPDATE
> 
> 
> 




Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Cool, thanks.

Does boringssl have a tool similar to s_client?

I'd rather not build openssl just for an s_client call :-)

Regards
Aleks


 Original Message 
From: Janusz Dziemidowicz 
Sent: 22 January 2019 19:49:15 CET
To: Aleksandar Lazic 
CC: HAProxy 
Subject: Re: haproxy 1.9.2 with boringssl

On Tue, 22 Jan 2019 at 19:40, Aleksandar Lazic wrote:
>
> Hi.
>
> I have now build haproxy with boringssl and it looks quite good.
>
> Is it the recommended way to simply make a git clone without any branch or 
> tag?
> Does anyone know how the KeyUpdate can be tested?

openssl s_client -connect HOST:PORT (openssl >= 1.1.1)
Just type 'K' and press enter. If the server is broken then connection
will be aborted.

www.github.com:443, currently broken:
read R BLOCK
K
KEYUPDATE
read R BLOCK
read:errno=0

mail.google.com:443, working:
read R BLOCK
K
KEYUPDATE





haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Hi.

I have now build haproxy with boringssl and it looks quite good.

Is the recommended way simply to do a git clone without any branch or tag?
Does anyone know how the KeyUpdate can be tested?

###
HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
USE_THREAD=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : BoringSSL
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTX        side=FE|BE
  h2 : mode=HTTP       side=FE
  <default> : mode=HTX        side=FE|BE
  <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
###

I also wanted to run the reg-tests, but they fail.

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/149523589

-
...
+ cd /usr/src/haproxy
+ VTEST_PROGRAM=/usr/src/VTest/vtest HAPROXY_PROGRAM=/usr/local/sbin/haproxy
make reg-tests
...
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.856) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.742) exit=2
#top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
#top  TEST ./reg-tests/log/b0.vtc FAILED (10.008) signal=9
#top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.745) exit=2
4 tests failed, 0 tests skipped, 29 tests passed
## Gathering results ##
## Test case: ./reg-tests/log/b0.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.357fd753"
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.477fdc0b"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
## Test case: ./reg-tests/http-messaging/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.7aab2925"
 c1h2  0.0 Wrong frame type HEADERS (1) wanted WINDOW_UPDATE
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.76167f9e"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a::10:0) ==
"2001:db8:c001:c01a:0::10:0" failed
make: *** [Makefile:1102: reg-tests] Error 1
-
###

Has anyone tried to run the tests in a containerized environment?

Regards
Aleks



Re: Automatic Redirect transformations using regex?

2019-01-22 Thread Aleksandar Lazic
On 21.01.2019 at 23:40, Joao Guimaraes wrote:
> Hi Haproxy team!
> 
> I've been trying to figure out how to perform automatic redirects based on
> source URL transformations. 
> 
> *Basically I need the following redirect: *
> 
> mysite.*abc* redirected to *abc*.mysite.com .

Maybe you can reuse the solution from the reg-tests directory.

47 # redirect Host: example.org / subdomain.example.org
48 http-request redirect prefix
%[req.hdr(Host),lower,regsub(:\d+$,,),map_str(${testdir}/h3.map)] code 301
if { hdr(Host),lower,regsub(:\d+$,,),map_str(${testdir}/h3.map) -m found }

This solution uses a map for redirect.

http://git.haproxy.org/?p=haproxy-1.9.git;a=blob;f=reg-tests/http-rules/h3.vtc;h=55bb2687d3abe02ee74eca5283e50b039d6d162e;hb=HEAD#l47

> Note that mysite.abc is not fixed, must apply to whatever abc wants to be.
> 
> *Other examples:*
> *
> *
> 
> mysite.fr TO fr.mysite.com
> mysite.es TO es.mysite.com
> mysite.us TO us.mysite.com
> mysite.de TO de.mysite.com
> mysite.uk TO uk.mysite.com
> 
> 
> Thanks in advance!
> Joao Guimaraes
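Since the TLD part is not fixed, a map entry per domain may not scale. An untested sketch using the regsub converter instead (it assumes the Host header is exactly `mysite.<tld>`, optionally with a port):

```
# untested sketch: redirect mysite.<tld> to <tld>.mysite.com, keeping the URI
http-request redirect prefix http://%[req.hdr(host),lower,regsub(:\d+$,,),regsub(^mysite\.,,)].mysite.com code 301 if { hdr_beg(host) -i mysite. }
```

The first regsub strips an optional port, the second strips the leading "mysite.", so whatever remains becomes the subdomain; `redirect prefix` then re-appends the original URI.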

Best regards
Aleks



Re: H2 Server Connection Resets (1.9.2)

2019-01-21 Thread Aleksandar Lazic
Hi Luke.

On 21.01.2019 at 10:30, Luke Seelenbinder wrote:
> Hi all,
> 
> One more bug (or configuration hole) from our transition to 1.9.x using 
> end-to-end h2 connections.
> 
> After enabling h2 backends (technically `server … alpn h2,http/1.1`), we 
> began seeing a high number of backend /server/ connection resets. A 
> reasonable number of client-side connection resets due to timeouts, etc., is 
> normal, but the server connection resets were new.
> 
> I believe the root cause is that our backend servers are NGINX servers, which 
> by default have a 1000 request limit per h2 connection 
> (https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests). 
> As far as I can tell there's no way to set this to unlimited. That resulted 
> in NGINX resetting the HAProxy backend connections and thus resulted in user 
> requests being dropped or returning 404s (oddly enough; though this may be as 
> a result of the outstanding bug related to header manipulation and HTX mode).

Do you have such info in the nginx log?

"http2 flood detected"

It's the message from these lines:

https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2.c#L4517


> This wouldn't be a problem if one of the following were true:
> 
> - HAProxy could limit the number of times it reused a connection

Can you try to set some timeout values for `timeout http-keep-alive`
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#timeout%20http-keep-alive

I assume that this timeout could be helpful because of this block in the doc

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html

```
  - KAL : keep alive ("option http-keep-alive") which is the default mode : all
requests and responses are processed, and connections remain open but idle
between responses and new requests.
```

and this code part

https://github.com/haproxy/haproxy/blob/v1.9.0/src/backend.c#L1164
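For illustration, such a keep-alive cap could look like this (values are made up, not recommendations; the idea is to close idle server connections before nginx's own per-connection limits are hit):

```
defaults
    # close idle keep-alive connections after a short grace period
    timeout http-keep-alive 10s
    timeout client          1m
    timeout server          1m
```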

> - HAProxy could retry a failed request due to backend server connection reset 
> (possibly coming in 2.0 with L7 retries?)

Would you mind creating an issue for that if there isn't one already?

> - NGINX could set that limit to unlimited.

Isn't `unsigned int` enough?
How many idle connections do you have, and for how long?

> Our http-reuse is set to aggressive, but that doesn't make much difference, I 
> don't think, since safe would result in the same behavior (the connection is 
> reusable…but only for a limited number of requests).
> 
> We've worked around this by only using h/1.1 on the backends, which isn't a 
> big problem for us, but I thought I would raise the issue, since I'm sure a 
> lot of folks are using haproxy <-> nginx pairings, and this is a bit of a 
> subtle result of that in full h2 mode.

Can you try to increase the max-requests to 20 in nginx

The `max_requests` is defined as `ngx_uint_t` which is `unsigned int`

I have found this in the nginx source.

https://www.nginx.com/resources/wiki/extending/api/main/#ngx-uint-t
https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2_module.h#L27
https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2_module.c#L85
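So, since `max_requests` is an `ngx_uint_t`, the limit can in principle be raised very far even if it can't be disabled. A hypothetical nginx fragment (the value is illustrative):

```
# http context of nginx.conf -- raise the per-h2-connection request limit
http2_max_requests 1000000;
```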

> Thanks again for such great software—I've found it pretty fantastic to run in 
> production. :)

Just out of curiosity, have you seen any changes in your setup with the HTX/H2
end-to-end mode?

> Best,
> Luke

Best regards
Aleks

> —
> Luke Seelenbinder
> Stadia Maps | Founder
> stadiamaps.com
> 




Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-20 Thread Aleksandar Lazic
Thank you for the clarification.

Regards
Aleks



 Original Message 
From: Adam Langley 
Sent: 21 January 2019 00:12:59 CET
To: Aleksandar Lazic 
CC: haproxy@formilux.org, Willy Tarreau , eb...@haproxy.com
Subject: Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

On Sun, Jan 20, 2019 at 3:04 PM Aleksandar Lazic  wrote:
> which refers to 
> https://www.openssl.org/docs/manmaster/man3/SSL_key_update.html
>
> instead of the suggested patch?

The SSL_key_update function enqueues a KeyUpdate message to be sent.
The problem is that if a /client/ of HAProxy sends a KeyUpdate,
HAProxy thinks that it's a pre-TLS 1.3 renegotiation message and drops
the connection.

Thus the patch seeks to address that. HAProxy may also want to do
something like send a KeyUpdate for every x MBs of data sent, or y
minutes of time elapsed, but that would be a separate feature. (And
one needs to be a little cautious because OpenSSL 1.1.1 will only
accept 32 KeyUpdate messages per connection.)


Cheers

AGL




Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-20 Thread Aleksandar Lazic
Hi.

As far as I understand the KeyUpdate mechanism

https://tools.ietf.org/html/rfc8446 (section 4.6.3)

to which you refer, isn't it also an option to use

https://wiki.openssl.org/index.php/TLS1.3#Renegotiation

which refers to https://www.openssl.org/docs/manmaster/man3/SSL_key_update.html

instead of the suggested patch?

Best regards
Aleks


 Ursprüngliche Nachricht 
Von: Willy Tarreau 
Gesendet: 20. Jänner 2019 23:41:17 MEZ
An: Adam Langley 
CC: haproxy@formilux.org, eb...@haproxy.com
Betreff: Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

Hi Adam,

[ccing Emeric]

On Sun, Jan 20, 2019 at 01:12:44PM -0800, Adam Langley wrote:
> KeyUpdate messages are a feature of TLS 1.3 that allows the symmetric
> keys of a connection to be periodically rotated. It's
> mandatory-to-implement in TLS 1.3, but not mandatory to use. Google
> Chrome tried enabling KeyUpdate and promptly broke several sites, at
> least some of which are using HAProxy.
> 
> The cause is that HAProxy's code to disable TLS renegotiation[1] is
> triggering for TLS 1.3 post-handshake messages. But renegotiation has
> been removed in TLS 1.3 and post-handshake messages are no longer
> abnormal.

Interesting!

> Thus I'm attaching a patch to only enforce that check when
> the version of a TLS connection is <= 1.2.

I think that it makes sense. I'll wait for Emeric's check regarding any
possibly overlooked impact anywhere else if some other parts would assume
that this didn't happen anymore.

> Since sites that are using HAProxy with OpenSSL 1.1.1 will break when
> Chrome reenables KeyUpdate without this change, I'd like to suggest it
> as a candidate for backporting to stable branches.

Sure! OpenSSL 1.1.1 is supported on 1.9 and 1.8 so this should be backported
there.

Just out of curiosity, if such out-of-band messages are enabled again in
1.3, do you think this might have any particular impacts on something like
kTLS where the TLS stream is deciphered by the kernel? I don't know how
such messages can safely be delivered to userland in this case, nor if
they're needed there at all.

> [1] https://github.com/haproxy/haproxy/blob/master/src/ssl_sock.c#L1472
> 
> 
> Thank you
> 
> AGL
> 
> --
> Adam Langley a...@imperialviolet.org https://www.imperialviolet.org

Thanks!
Willy




Re: haproxy issue tracker discussion

2019-01-18 Thread Aleksandar Lazic
Cool, thanks :-)


 Ursprüngliche Nachricht 
Von: Lukas Tribus 
Gesendet: 18. Jänner 2019 14:14:06 MEZ
An: Aleksandar Lazic 
CC: haproxy , Willy Tarreau , "Tim 
Düsterhus" 
Betreff: Re: haproxy issue tracker discussion

Hello Aleksandar,


On Fri, 18 Jan 2019 at 12:54, Aleksandar Lazic  wrote:
>
> Hi.
>
> As there are now the github templates in the repo can / should we start to 
> create issues &  features on github?

Yes, you can go ahead and start filing bugs and features.

There's some minor tweaking yet to be done regarding the subsystem
labels, but that's not really a blocking issue. Once that is done, I
will send out a proper announcement on the list (tomorrow, probably).


Lukas




Re: haproxy issue tracker discussion

2019-01-18 Thread Aleksandar Lazic
Hi.

As there are now the github templates in the repo can / should we start to 
create issues &  features on github?

Regards
Aleks


 Ursprüngliche Nachricht 
Von: Willy Tarreau 
Gesendet: 14. Jänner 2019 04:11:17 MEZ
An: "Tim Düsterhus" 
CC: Lukas Tribus , haproxy 
Betreff: Re: haproxy issue tracker discussion

On Mon, Jan 14, 2019 at 03:06:54AM +0100, Tim Düsterhus wrote:
> May I suggest the following to move forward?
(...)
> That way we can test the process with a small, unimportant, test issue.
> The automated closing based on the labels can than be added a few days
> later. I don't expect huge numbers of issues right away, so they can be
> closed by hand.

Sounds good to me.

Thanks guys!
Willy




Re: [ANNOUNCE] haproxy-1.9.2

2019-01-18 Thread Aleksandar Lazic

Hi Willy,

Am 17-01-2019 15:41, schrieb Willy Tarreau:

Hi Aleks,

On Thu, Jan 17, 2019 at 01:02:56PM +0100, Aleksandar Lazic wrote:

> Very likely, yes. If you want to inspect the body you simply have to
> enable "option http-buffer-request" so that haproxy waits for the body
> before executing rules. From there, indeed you can pass whatever Lua
> code on req.body. I don't know if there would be any value in trying
> to implement some protobuf converters to decode certain things natively.
> What I don't know is if the contents can be deserialized even without
> compiling the proto files.

Agree. It would be interesting to hear a good use case and a solution for that;
at least haproxy has the possibility to do it ;-)


From what I've seen, gRPC stream is reasonably easy to decode, and protobuf
doesn't require the proto file, it will just emit indexes, types and values,
which is enough as long as the schema doesn't change. I've seen that Thrift
is pretty similar. So we could decide about routing or priorities based on
values passed in the protocol :-)


;-)


>> As we have now a separated protocol handling layer (htx) how difficult is it 
to
>> add `mode fast-cgi` like `mode http`?
>
> We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
> rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
> with an htx-to-fcgi one, because fast-cgi is another representation of
> HTTP. The "mode http" setting is what enables all HTTP processing
> (http-request rules, cookie parsing etc). Thus you definitely want to
> have it enabled.

Full Ack.

This means that I can use QUIC+HTTP/3 => php-fpm with haproxy, in the future ;-)


Yes.
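For readers of the archive: this did land later as native FastCGI support (the `fcgi-app` section) in haproxy 2.1. A minimal sketch, assuming haproxy >= 2.1 and php-fpm listening on 127.0.0.1:9000 (paths and backend names are placeholders):

```haproxy
frontend web
    mode http
    bind :80
    use_backend php if { path_end .php }
    default_backend static          # static-file backend omitted here

backend php
    mode http
    use-fcgi-app php-fpm
    server fpm 127.0.0.1:9000 proto fcgi

fcgi-app php-fpm
    docroot /var/www/html           # placeholder path
    index index.php
```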

FastCGI isn't a bad protocol (IMHO), but sadly it was not as widespread as
http(s), even though it has multiplexing and keep-alive features.


I remember that when we checked with Thierry, there were some issues to
implement multiplexing which resulted in nobody really implementing it
in practice. I *think* the problem was due to the framing or the huge
risk of head-of-line blocking making it impossible (or very hard) to
sacrifice a stream when the client doesn't read it, without damaging the
other ones. Thus it was mostly in-order delivery in the end.

(... links ...)
All of them look at the keep-alive flag but not the multiplex flag.


So this doesn't seem to have changed much :-)


Not as far as I know. From my point of view, the keep-alive feature is the one
that should be supported, and the multiplex feature not, but that's just my
opinion.



Python is different, as always; it mainly uses WSGI, AFAIK.
https://wsgi.readthedocs.io/en/latest/


OK.


I forgot Ruby, which also uses another protocol.
http://rack.github.io/

For Ruby we can use HTTP, as there are a lot of web servers which already
implement Rack ;-)

https://www.digitalocean.com/community/tutorials/a-comparison-of-rack-web-servers-for-ruby-web-applications


uwsgi also has its own protocol:
https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html


I remember having looked at this one many years ago when it was
presented as a replacement for fcgi, but I got contradictory feedback
depending on whom I talked to. I don't know how widespread it is
nowadays.


Well it's not as widespread as fcgi and wsgi, AFAIK.
Let's focus on fcgi and see what the feedback is.

I can open an issue on GitHub, as soon as it's ready, to track the feedback.



Cheers,
Willy


Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.2

2019-01-17 Thread Aleksandar Lazic
Hi Willy.

Am 17.01.2019 um 04:25 schrieb Willy Tarreau:
> Hi Aleks,
> 
> On Wed, Jan 16, 2019 at 11:52:12PM +0100, Aleksandar Lazic wrote:
>> For service routing are the standard haproxy content routing options possible
>> (path, header, ...) , right?
> 
> Yes absolutely.
> 
>> If someone want to route based on grpc content he can use lua with body 
>> content
>> right?
>>
>> For example this library https://github.com/Neopallium/lua-pb
> 
> Very likely, yes. If you want to inspect the body you simply have to
> enable "option http-buffer-request" so that haproxy waits for the body
> before executing rules. From there, indeed you can pass whatever Lua
> code on req.body. I don't know if there would be any value in trying
> to implement some protobuf converters to decode certain things natively.
> What I don't know is if the contents can be deserialized even without
> compiling the proto files.

Agree. It would be interesting to hear a good use case and a solution for that;
at least haproxy has the possibility to do it ;-)
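A minimal sketch of that pattern (buffer the body, then run rules on `req.body`); the marker string and backend names are placeholders, and this only does a substring match, not real protobuf decoding:

```haproxy
frontend grpc_in
    mode http
    bind :50052 proto h2
    option http-buffer-request      # wait for the body before rules run
    use_backend greeter if { req.body -m sub SayHello }   # hypothetical marker
    default_backend other
```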

>>> That's about all. With each major release we feel like version dot-2
>>> works pretty well. This one is no exception. We'll see in 6 months if
>>> it was wise :-)
>>
>> So you would say I can use it in production with htx ;-)
> 
> As long as you're still a bit careful, yes, definitely. haproxy.org has
> been running it in production since 1.9-dev9 or so. Since 1.9.0 was
> released, we've had one crash a few times (fixed in 1.9.1) and two
> massive slowdowns due to non-expiring connections reaching the frontend's
> maxconn limit (fixed in 1.9.2).

Yep, agreed. In prod it's always good to keep an eye on it.

>> and the docker image is also updated ;-)
>>
>> https://hub.docker.com/r/me2digital/haproxy19
> 
> Thanks.
> 
>> As we have now a separated protocol handling layer (htx) how difficult is it 
>> to
>> add `mode fast-cgi` like `mode http`?
> 
> We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
> rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
> with an htx-to-fcgi one, because fast-cgi is another representation of
> HTTP. The "mode http" setting is what enables all HTTP processing
> (http-request rules, cookie parsing etc). Thus you definitely want to
> have it enabled.

Full Ack.

This means that I can use QUIC+HTTP/3 => php-fpm with haproxy, in the future ;-)

FastCGI isn't a bad protocol (IMHO), but sadly it was not as widespread as
http(s), even though it has multiplexing and keep-alive features.

>> I ask because php have not a production ready http implementation but a 
>> robust
>> fast cgi process manager (php-fpm). There are several possible solution to 
>> add
>> http to php (nginx+php-fpm, uwsgi+php-fpm, uwsgi+embeded php) but all this
>> solutions requires a additional hop.
>>
>> My wish is to have such a flow.
>>
>> haproxy -> *.php  => php-fpm
>> -> *.static-files => nginx,h2o
> 
> It's *exactly* what I've been wanting for a long time as well. Mind you
> that Thierry implemented some experimental fast-cgi code many years ago
> in 1.3! By then we were facing some strong architectural limitations,
> but now I think we should have everything ready thanks to the muxes.

Oh wow 1.3. 8-O

In 2014 Baptiste wrote a blog post on how to make health checks for php-fpm,
so it looks like FastCGI has been on the table for a long time.

https://alohalb.wordpress.com/2014/06/06/binary-health-check-with-haproxy-1-5-php-fpmfastcgi-probe-example/

Just in case it's interesting, here are some links to receiver implementations
for popular servers.

https://github.com/php-src/php/blob/master/main/fastcgi.h
https://github.com/php-src/php/blob/master/main/fastcgi.c

https://github.com/unbit/uwsgi/blob/master/proto/fastcgi.c
https://github.com/unbit/uwsgi/blob/master/plugins/router_fcgi/router_fcgi.c

https://golang.org/src/net/http/fcgi/fcgi.go
https://golang.org/src/net/http/fcgi/child.go

https://docs.rs/crate/fastcgi/1.0.0/source/src/lib.rs

All of them look at the keep-alive flag but not the multiplex flag.

Python is different, as always; it mainly uses WSGI, AFAIK.
https://wsgi.readthedocs.io/en/latest/

uwsgi also has its own protocol:
https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html

>> I have taken a look at the fcgi protocol, but sadly I'm not a good enough
>> programmer for that task. I can offer tests for the implementation.
> 
> That's good to know, thanks!
> 
> Cheers,
> Willy

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.2

2019-01-16 Thread Aleksandar Lazic
Hi.

Am 16.01.2019 um 19:02 schrieb Willy Tarreau:
> Hi,
> 
> HAProxy 1.9.2 was released on 2019/01/16. It added 58 new commits
> after version 1.9.1.
> 
> It addresses a number of lower importance pending issues that were not
> yet merged into 1.9.1, one bug in the cache and fixes some long-standing
> limitations that were affecting H2.
> 
> The highest severity issue but the hardest to trigger as well is the
> one affecting the cache, as it's possible to corrupt the shared memory
> segment when using some asymmetric caching rules, and crash the process.
> There is a workaround though, which consists in always making sure an
> "http-request cache-use" action is always performed before an
> "http-response cache-store" action (i.e.  the conditions must match).
> This bug already affects 1.8 and nobody noticed so I'm not worried :-)
> 
> The rest is of lower importance but mostly annoyance. One issue was
> causing the mailers to spam the server in loops. Another one affected
> idle server connections (I don't remember the details after seeing
> several of them to be honest), apparently the stats page could crash
> when using HTX, and there were still a few cases where stale HTTP/1
> connections would never leave in HTX (after certain situations of client
> timeout). The 0-RTT feature was broken when openssl 1.1.1 was released
> due to the anti-replay protection being enabled by default there (which
> makes sense since not everyone uses it with HTTP and proper support),
> this is now fixed.
> 
> While we have been observing a slowly growing amount of orphaned connections
> on haproxy.org last week (several per hour), and since the recent fixes we
> could confirm that it's perfectly clean now.
> 
> There's a small improvement regarding the encryption of TLS tickets. We
> used to support 128 bits only and it looks like the default setting
> changed 2 years ago without us noticing. Some users were asking for 256
> bit support, so that was implemented and backported. It will work
> transparently as the key size is determined automatically. We don't
> think it would make sense at this point to backport this to 1.8, but if
> there is compelling demand for this Emeric knows how to do it.
> 
> Regarding the long-standing limitations affecting H2, some of you
> probably remember that haproxy used not to support CONTINUATION frames,
> which was causing an issue with one very old version of chromium, and
> that it didn't support trailers, making it incompatible with gRPC (which
> may also use CONTINUATION). This has constantly resulted in h2spec to
> return 6 failed tests. These limitations could be addressed in 2.0-dev
> relatively easily thanks to the much better new architecture, and I
> considered it was right to backport these patches so that we don't have
> to work around them anymore. I'd say that while from a developer's
> perspective these limitations were not bugs ("works as designed"), from
> the user's perspective they definitely were.
> 
> I could try this with the gRPC helloworld tests (which by the way support
> H2 in clear text) :
> 
>haproxy$ cat h2grpc.cfg
>defaults
> mode http
> timeout client 5s
> timeout server 5s
> timeout connect 1s
> 
>listen grpc
> log stdout format raw local0
> option httplog
> option http-use-htx
> bind :50052 proto h2
> server srv1 127.0.0.1:50051 proto h2
>haproxy$ ./haproxy -d -f h2grpc.cfg
> 
>grpc$ go run examples/helloworld/greeter_server/main.go &
>grpc$ go run examples/helloworld/greeter_client/main.go haproxy 
>2019/01/04 11:11:40 Received: haproxy
>2019/01/04 11:11:40 Greeting: Hello haproxy
> 
>(...)haproxy$ ./haproxy -d -f h2grpc.cfg
>:grpc.accept(0008)=000b from [127.0.0.1:37538] ALPN=  
>:grpc.clireq[000b:]: POST /helloworld.Greeter/SayHello 
> HTTP/2.0
>:grpc.clihdr[000b:]: content-type: application/grpc 
>:grpc.clihdr[000b:]: user-agent: grpc-go/1.18.0-dev   
>:grpc.clihdr[000b:]: te: trailers
>:grpc.clihdr[000b:]: grpc-timeout: 994982u
>:grpc.clihdr[000b:]: host: localhost:50052
>:grpc.srvrep[000b:000c]: HTTP/2.0 200
>:grpc.srvhdr[000b:000c]: content-type: application/grpc
>:grpc.srvcls[000b:000c]
>:grpc.clicls[000b:000c]
>:grpc.closed[000b:000c]
>127.0.0.1:37538 [04/Jan/2019:11:11:40.705] grpc grpc/srv1 0/0/0/1/1 200 
> 116 - -  1/1/0/0/0 0/0 "POST /helloworld.Greeter/SayHello HTTP/2.0"
> 
> In the past we'd get an error from the client saying that the response
> came without trailers. So now this limitation is expected to be just bad
> old memories.

That's great ;-) ;-)

For service routing, the standard haproxy content routing options
(path, header, ...) are possible, right?

If someone wants to route based on gRPC content, he can use Lua with body
content, right?

For example this library https://github.com/Neopallium/lua-pb

Re: How to replicate RedirectMatch (apache reverse proxy) in Haproxy

2019-01-16 Thread Aleksandar Lazic
Hi.

Am 16.01.2019 um 16:35 schrieb mirko stefanelli:
> Hi to all,
> 
> we are trying to move from Apache reverse proxy to Haproxy, you can see below 
> a
> part of del file Apache httpd.conf:
> 
> 
>  ServerName dipendenti.xxx.xxx.it
>  ErrorLog logs/intranet_ssl_error_log
>  TransferLog logs/intranet_ssl_access_log
>  LogLevel info
>  ProxyRequests Off
>  ProxyPreserveHost On
>  ProxyPass / http://intranet.xx.xxx/
>  ProxyPassReverse / http://intranet.xxx.xxx/
>  RedirectMatch ^/$ https://dipendenti.xxx.xxx.it  /
> 
>  SSLEngine on
>  SSLProxyEngine On
>  SSLProtocol all -SSLv2
>  SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
> 
>  SSLCertificateFile /etc/pki/tls/certs/STAR_xt.crt
>  SSLCertificateKeyFile /etc/pki/tls/private/.pem
>  SSLCertificateChainFile /etc/pki/tls/certs/STAR_xxx_ca-bundle.crt
>  BrowserMatch "MSIE [2-5]" \
>              nokeepalive ssl-unclean-shutdown \
>              downgrade-1.0 force-response-1.0
> 
> 
> As you can see here we use RedirectMatch to force respons in HTTPS.
> 
> Here part of conf on HAproxy:
> 
> in frontend part:
> 
> bind *:443 ssl crt /etc/haproxy/ssl/ #here are stored each certificates
> 
> acl acl_dipendenti hdr_dom(host) -i dipendenti.xxx.xxx.it
> 
> use_backend dipendenti if acl_dipendenti
> 
> in backend part:
> 
> backend dipendenti
>         log 127.0.0.1:514 local6 debug
>         stick-table type ip size 20k peers mypeers
>         server intranet 10.xxx.xxx.xxx:80 check
> 
> When we start service we connect to https://dipendenti.xxx.xxx.it, but
> during navigation seems that haproxy respons change from HTTPS to HTTP.
> 
> Can you suggests some idea in order to investigate on this behavior?

Maybe this blog post gives you a starting point.

https://www.haproxy.com/blog/howto-write-apache-proxypass-rules-in-haproxy/

> Regards,
> Mirko.

Regards
Aleks



Re: Get client IP

2019-01-16 Thread Aleksandar Lazic
Hi.

Am 16.01.2019 um 06:43 schrieb Vũ Xuân Học:
> Dear,
> 
> I fixed it. I use { src x.x.x.x ... } in use_backend and it worked.
> 
> Many thanks,

Great ;-).

What about the original issue with SSL, how is the solution now?

Best regards
Aleks
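For the archive, the working pattern from the exchange below, sketched with placeholder host names and IPs: header matching has to happen at the HTTP layer (`http-request` rules or `use_backend` conditions), because `tcp-request connection` rules run before any HTTP parsing.

```haproxy
frontend https_in
    mode http
    bind *:443 ssl crt /etc/haproxy/ssl/
    acl is_crm   hdr(host) -i crmone.example.com   # placeholder host
    acl trusted  src 203.0.113.10 203.0.113.11     # placeholder IPs
    http-request deny if is_crm !trusted
    use_backend crm if is_crm
```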

> -Original Message-
> From: Vũ Xuân Học  
> Sent: Wednesday, January 16, 2019 10:37 AM
> To: 'Aleksandar Lazic' ; 'haproxy@formilux.org' 
> ; 'PiBa-NL' 
> Subject: RE: Get client IP
> 
> Hi,
> 
> I have other problem. I want to only allow some ip access my website. Please 
> show me how to allow some IP by domain name.
> 
> I try with: tcp-request connection reject if { hdr(host) crmone.thaison.vn } 
> !{ src x.x.x.x x.x.x.y } but it’s not work. I get error message: 
>
>   keyword 'hdr' which is incompatible with 'frontend 
> tcp-request connection rule'
> 
> I try with some other keyword but not successful.
> 
> 
> 
> 
> 
> -Original Message-
> From: Aleksandar Lazic 
> Sent: Monday, January 14, 2019 5:20 PM
> To: Vũ Xuân Học ; haproxy@formilux.org; 'PiBa-NL' 
> 
> Subject: Re: Get client IP
> 
> Hi.
> 
> Am 14.01.2019 um 03:11 schrieb Vũ Xuân Học:
>> Hi,
>>
>>  
>>
>> I don’t know how to use ssl in http mode. I have many site with many 
>> certificate.
>>
>> As you see:
>>
>> …
>>
>> bind 192.168.0.4:443   (I NAT port 443 from firewall to HAProxy IP
>> 192.168.0.4)
>>
>> …
>>
>> # Define hosts
>>
>> acl host_1 req.ssl_sni -i ebh.vn
>>
>> acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
>>
>> … (many acl like above)
>>
>>
>> use_backend eBH if host_1
>>
>>use_backend einvoice443 if host_2
> 
> You can use maps for this.
> https://www.haproxy.com/blog/introduction-to-haproxy-maps/
> 
> The openshift router have a complex but usable solution. Don't get confused 
> with the golang template stuff in there.
> 
> https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L180
> 
> https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L198
> 
> Regards
> Aleks
> 
>> *From:* Aleksandar Lazic 
>> *Sent:* Monday, January 14, 2019 8:45 AM
>> *To:* haproxy@formilux.org; Vũ Xuân Học ; 'PiBa-NL'
>> 
>> *Subject:* RE: Get client IP
>>
>>  
>>
>> Hi.
>>
>> As you use IIS I strongly suggest to terminate the https on haproxy 
>> and use mode http instead of tcp.
>>
>> Here is a blog post about basic setup of haproxy with ssl
>>
>> https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-o
>> f-stunnel-stud-nginx-or-pound/
>>
>> I assume that haproxy have the client ip as the setup works in the http 
>> config.
>>
>> Best regards
>> Aleks
>>
>> --
>> --
>>
>> *Von:*"Vũ Xuân Học" mailto:ho...@thaison.vn>>
>> *Gesendet:* 14. Jänner 2019 02:17:23 MEZ
>> *An:* 'PiBa-NL' > <mailto:piba.nl@gmail.com>>, 'Aleksandar Lazic'
>> mailto:al-hapr...@none.at>>, haproxy@formilux.org 
>> <mailto:haproxy@formilux.org>
>> *Betreff:* RE: Get client IP
>>
>>  
>>
>> Thanks for your help
>>
>>  
>>
>> I try config HAProxy with accept-proxy like this:
>>
>> frontend ivan
>>
>>  
>>
>> bind 192.168.0.4:443 accept-proxy
>>
>> mode tcp
>>
>> option tcplog
>>
>>  
>>
>> #option forwardfor
>>
>>  
>>
>> reqadd X-Forwarded-Proto:\ https
>>
>>  
>>
>> then my website can not access.
>>
>> I use IIS as webserver and I don’t know how to accept proxy, I only 
>> know config X-Forwarded-For like this
>>
>> http://www.loadbalancer.org/blog/iis-and-x-forwarded-for-header/
>>
>>  
>>
>>  
>>
>> *From:* PiBa-NL mailto:piba.nl@gmail.com>>
>> *Sent:* Sunday, January 13, 2019 10:06 PM
>> *To:* Aleksandar Lazic > <mailto:al-hapr...@none.at>>; Vũ Xuân Học > <mailto:ho...@thaison.vn>>; haproxy@formilux.org 
>> <mailto:haproxy@formilux.org>
>> *Subject:* Re: Get client IP
>>
>>  
>>
>> Hi,
>>
>> Op 13-1-2019 om 13:11 schreef Aleksandar Lazic:
>>
>> Hi.
>>
>>  
>>

Re: Get client IP

2019-01-14 Thread Aleksandar Lazic
Hi.

Am 14.01.2019 um 03:11 schrieb Vũ Xuân Học:
> Hi,
> 
>  
> 
> I don’t know how to use ssl in http mode. I have many site with many 
> certificate.
> 
> As you see:
> 
> …
> 
> bind 192.168.0.4:443   (I NAT port 443 from firewall to HAProxy IP 
> 192.168.0.4)
> 
> …
> 
> # Define hosts
> 
>     acl host_1 req.ssl_sni -i ebh.vn
> 
>     acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
> 
>     … (many acl like above)
> 
> 
>     use_backend eBH if host_1
> 
>    use_backend einvoice443 if host_2

You can use maps for this.
https://www.haproxy.com/blog/introduction-to-haproxy-maps/

The OpenShift router has a complex but usable solution. Don't get confused by
the Go template stuff in there.

https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L180

https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L198

Regards
Aleks
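A sketch of the map approach for the SNI routing above; the map file path is a placeholder, and the backend names are taken from the quoted config:

```haproxy
frontend ivan
    bind 192.168.0.4:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # sni.map contains lines like:
    #   ebh.vn           eBH
    #   einvoice.com.vn  einvoice443
    use_backend %[req.ssl_sni,lower,map_dom(/etc/haproxy/sni.map,einvoice443)]
```

The second argument to `map_dom` is the fallback backend when no entry matches.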

> *From:* Aleksandar Lazic 
> *Sent:* Monday, January 14, 2019 8:45 AM
> *To:* haproxy@formilux.org; Vũ Xuân Học ; 'PiBa-NL'
> 
> *Subject:* RE: Get client IP
> 
>  
> 
> Hi.
> 
> As you use IIS I strongly suggest to terminate the https on haproxy and use 
> mode
> http instead of tcp.
> 
> Here is a blog post about basic setup of haproxy with ssl
> 
> https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
> 
> I assume that haproxy have the client ip as the setup works in the http 
> config.
> 
> Best regards
> Aleks
> 
> ----
> 
> *Von:*"Vũ Xuân Học" mailto:ho...@thaison.vn>>
> *Gesendet:* 14. Jänner 2019 02:17:23 MEZ
> *An:* 'PiBa-NL' mailto:piba.nl@gmail.com>>,
> 'Aleksandar Lazic' mailto:al-hapr...@none.at>>,
> haproxy@formilux.org <mailto:haproxy@formilux.org>
> *Betreff:* RE: Get client IP
> 
>  
> 
> Thanks for your help
> 
>  
> 
> I try config HAProxy with accept-proxy like this:
> 
> frontend ivan
> 
>  
> 
>     bind 192.168.0.4:443 accept-proxy
> 
>     mode tcp
> 
>     option tcplog
> 
>  
> 
> #option forwardfor
> 
>  
> 
>     reqadd X-Forwarded-Proto:\ https
> 
>  
> 
> then my website can not access.
> 
> I use IIS as webserver and I don’t know how to accept proxy, I only know 
> config
> X-Forwarded-For like this
> 
> http://www.loadbalancer.org/blog/iis-and-x-forwarded-for-header/
> 
>  
> 
>  
> 
> *From:* PiBa-NL mailto:piba.nl@gmail.com>>
> *Sent:* Sunday, January 13, 2019 10:06 PM
> *To:* Aleksandar Lazic mailto:al-hapr...@none.at>>; Vũ 
> Xuân
> Học mailto:ho...@thaison.vn>>; haproxy@formilux.org
> <mailto:haproxy@formilux.org>
> *Subject:* Re: Get client IP
> 
>  
> 
> Hi,
> 
> Op 13-1-2019 om 13:11 schreef Aleksandar Lazic:
> 
> Hi.
> 
>  
> 
> Am 13.01.2019 um 12:17 schrieb Vũ Xuân Học:
> 
> Hi,
> 
>  
> 
> Please help me to solve this problem.
> 
>  
> 
> I use HAProxy version 1.5.18, SSL transparent mode and I can not get 
> client IP
> 
> in my .net mvc website. With mode http, I can use option forwardfor 
> to catch
> 
> client ip but with tcp mode, my web read X_Forwarded_For is null.
> 
>  
> 
>  
> 
>  
> 
> My diagram:
> 
>  
> 
> Client => Firewall => HAProxy => Web
> 
>  
> 
>  
> 
>  
> 
> I read HAProxy document, try to use send-proxy. But when use 
> send-proxy, I can
> 
> access my web.
> 
>  
> 
> This is my config:
> 
>  
> 
> frontend test2233
> 
>  
> 
>     bind *:2233
> 
>  
> 
>     option forwardfor
> 
>  
> 
>  
> 
>  
> 
>     default_backend testecus
> 
>  
> 
> backend testecus
> 
>  
> 
>     mode http
> 
>  
> 
>     server web1 192.168.0.151:2233 check
> 
>  
> 
> Above config work, and I can get the client IP
> 
>  
> 
> That's good as it's `mode http` therefore haproxy can see the http 
> traffic.
> 
> Indeed it can insert the http forwardfor header with 'mode http'.
> 
>  
> 
>  
> 
> Config with SSL:
> 
>   

RE: Get client IP

2019-01-13 Thread Aleksandar Lazic
Hi.

As you use IIS, I strongly suggest terminating HTTPS on haproxy and using
mode http instead of tcp.

Here is a blog post about basic setup of haproxy with ssl

https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/

I assume that haproxy has the client IP, as the setup works in the `mode http` config.

Best regards
Aleks
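A minimal sketch of such a termination setup; the certificate path and addresses are placeholders:

```haproxy
frontend https_in
    mode http
    bind 192.168.0.4:443 ssl crt /etc/haproxy/ssl/site.pem   # placeholder cert
    option forwardfor                        # adds X-Forwarded-For for IIS
    http-request set-header X-Forwarded-Proto https
    default_backend iis

backend iis
    mode http
    server web1 192.168.0.151:80 check
```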


 Ursprüngliche Nachricht 
Von: "Vũ Xuân Học" 
Gesendet: 14. Jänner 2019 02:17:23 MEZ
An: 'PiBa-NL' , 'Aleksandar Lazic' , 
haproxy@formilux.org
Betreff: RE: Get client IP

Thanks for your help

 

I try config HAProxy with accept-proxy like this:

frontend ivan
 
bind 192.168.0.4:443 accept-proxy
mode tcp
option tcplog
 
#option forwardfor
 
reqadd X-Forwarded-Proto:\ https
 

then my website cannot be accessed.

I use IIS as webserver and I don't know how to make it accept the PROXY
protocol; I only know how to configure X-Forwarded-For, like this:

http://www.loadbalancer.org/blog/iis-and-x-forwarded-for-header/ 

 

 

From: PiBa-NL  
Sent: Sunday, January 13, 2019 10:06 PM
To: Aleksandar Lazic ; Vũ Xuân Học ; 
haproxy@formilux.org
Subject: Re: Get client IP

 

Hi,

Op 13-1-2019 om 13:11 schreef Aleksandar Lazic:

Hi.
 
Am 13.01.2019 um 12:17 schrieb Vũ Xuân Học:

Hi,
 
Please help me to solve this problem.
 
I use HAProxy version 1.5.18, SSL transparent mode and I can not get client IP
in my .net mvc website. With mode http, I can use option forwardfor to catch
client ip but with tcp mode, my web read X_Forwarded_For is null.
 
 
 
My diagram:
 
Client => Firewall => HAProxy => Web
 
 
 
I read HAProxy document, try to use send-proxy. But when use send-proxy, I can
access my web.
 
This is my config:
 
frontend test2233
 
bind *:2233
 
option forwardfor
 
 
 
default_backend testecus
 
backend testecus
 
mode http
 
server web1 192.168.0.151:2233 check
 
Above config work, and I can get the client IP

 
That's good as it's `mode http` therefore haproxy can see the http traffic.

Indeed it can insert the http forwardfor header with 'mode http'.



 
 

Config with SSL:
 
frontend ivan
 
bind 192.168.0.4:443
mode tcp
option tcplog
 
#option forwardfor
 
reqadd X-Forwarded-Proto:\ https

 
This can't work as you use `mode tcp` and therefore haproxy can't see the http
traffic.
 
From my point of view have you now 2 options.
 
* use https termination on haproxy. Then you can add this http header.

Thats one option indeed.



 
* use accept-proxy in the bind line. This option requires that the firewall is
able to send the PROXY PROTOCOL header to haproxy.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#5.1-accept-proxy

I don't expect a firewall to send such a header. And if I understand correctly,
the 'webserver' would need to be configured to accept proxy-protocol.
The modification to make in haproxy would be to configure send-proxy[-v2-ssl-cn]
http://cbonte.github.io/haproxy-dconv/1.9/snapshot/configuration.html#5.2-send-proxy
And how to configure it with for example nginx:
https://wakatime.com/blog/23-how-to-scale-ssl-with-haproxy-and-nginx

 
 
The different modes are described in the doc
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-mode
 
Here is a blog post about basic setup of haproxy with ssl
https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
 

acl tls req.ssl_hello_type 1
 
tcp-request inspect-delay 5s
 
tcp-request content accept if tls
 
 
 
# Define hosts
 
acl host_1 req.ssl_sni -i ebh.vn
 
acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
 

 
   use_backend eBH if host_1
 
   use_backend einvoice443 if host_2
 
 
 
backend eBH
 
mode tcp
 
balance roundrobin
 
option ssl-hello-chk
 
   server web1 192.168.0.153:443 maxconn 3 check #cookie web1
 
   server web1 192.168.0.154:443 maxconn 3 check #cookie web2
 
 
 
Above config doesn’t work, and I can not get the client ip. I try server web1
192.168.0.153:443 send-proxy and try server web1 192.168.0.153:443 send-proxy-v2
but I can’t access my web.

 
This is expected as the Firewall does not send the PROXY PROTOCOL header and the
bind line is not configured for that.

Firewalls by themselves will never use proxy-protocol at all. That it doesn't
work with send-proxy on the haproxy server line is likely because the
webservice that is receiving the traffic isn't configured to accept the proxy
protocol. How to configure a ".net mvc website" to accept it is something I
don't know; it may not even be possible at all.



 
 

Many thanks,

 
Best regards
Aleks
 

Thanks & Best Regards! 

* VU XUAN HOC
 

Regards,
PiBa-NL (Pieter)



Re: Get client IP

2019-01-13 Thread Aleksandar Lazic
Hi.

Am 13.01.2019 um 12:17 schrieb Vũ Xuân Học:
> Hi,
> 
> Please help me to solve this problem.
> 
> I use HAProxy version 1.5.18, SSL transparent mode and I can not get client IP
> in my .net mvc website. With mode http, I can use option forwardfor to catch
> client ip but with tcp mode, my web read X_Forwarded_For is null.
> 
>  
> 
> My diagram:
> 
> Client => Firewall => HAProxy => Web
> 
>  
> 
> I read HAProxy document, try to use send-proxy. But when use send-proxy, I can
> access my web.
> 
> This is my config:
> 
> frontend test2233
> 
>     bind *:2233
> 
>     option forwardfor
> 
>  
> 
>     default_backend testecus
> 
> backend testecus
> 
>     mode http
> 
>     server web1 192.168.0.151:2233 check
> 
> Above config work, and I can get the client IP

That's good: as it's `mode http`, haproxy can see the HTTP traffic.

> Config with SSL:
> 
> frontend ivan
> 
>     bind 192.168.0.4:443
>     mode tcp
>     option tcplog
> 
> #option forwardfor
> 
>     reqadd X-Forwarded-Proto:\ https

This can't work as you use `mode tcp` and therefore haproxy can't see the http
traffic.

From my point of view you now have two options.

* Use HTTPS termination on haproxy. Then you can add this HTTP header.
* Use accept-proxy on the bind line. This option requires that the firewall is
able to send the PROXY protocol header to haproxy.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#5.1-accept-proxy

The different modes are described in the doc
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-mode

Here is a blog post about basic setup of haproxy with ssl
https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
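Putting option 1 together with the config above, a minimal sketch could look like
the following. The certificate path and the `ssl verify none` on the server lines
are assumptions for illustration only, not a hardened setup:

```
frontend ivan
    # terminate TLS here; the .pem contains certificate + key
    bind 192.168.0.4:443 ssl crt /etc/haproxy/certs/site.pem
    mode http
    option forwardfor                      # haproxy now sees the HTTP traffic
    reqadd X-Forwarded-Proto:\ https
    default_backend eBH

backend eBH
    mode http
    balance roundrobin
    # re-encrypt towards the backends; certificate checks skipped for brevity
    server web1 192.168.0.153:443 ssl verify none check
    server web2 192.168.0.154:443 ssl verify none check
```

With HTTPS terminated on haproxy, `option forwardfor` can add the client IP in
X-Forwarded-For again; the trade-off is that the SNI-based `use_backend` routing
from the `mode tcp` setup no longer applies as-is.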

>     acl tls req.ssl_hello_type 1
> 
>     tcp-request inspect-delay 5s
> 
>     tcp-request content accept if tls
> 
>  
> 
>     # Define hosts
> 
>     acl host_1 req.ssl_sni -i ebh.vn
> 
>     acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
> 
> 
> 
>    use_backend eBH if host_1
> 
>    use_backend einvoice443 if host_2
> 
>  
> 
> backend eBH
> 
>     mode tcp
> 
>     balance roundrobin
> 
>     option ssl-hello-chk
> 
>    server web1 192.168.0.153:443 maxconn 3 check #cookie web1
> 
>    server web2 192.168.0.154:443 maxconn 3 check #cookie web2
> 
>  
> 
> The config above doesn't work, and I cannot get the client IP. I tried `server web1
> 192.168.0.153:443 send-proxy` and `server web1 192.168.0.153:443 send-proxy-v2`,
> but then I can't access my web site.

This is expected as the Firewall does not send the PROXY PROTOCOL header and the
bind line is not configured for that.
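In other words, both ends of every hop must agree on the PROXY protocol: the
sender needs `send-proxy` (or a firewall that emits the header), and the receiver
needs `accept-proxy` (or the equivalent setting on the web server). A
hypothetical sketch, assuming the firewall really can emit the PROXY protocol
header:

```
frontend ivan
    # only works if the device in front actually sends the PROXY protocol header
    bind 192.168.0.4:443 accept-proxy
    mode tcp

backend eBH
    mode tcp
    # only works if the receiving web server is configured to accept the header
    server web1 192.168.0.153:443 send-proxy check
```

If either side of a hop is missing its half of the pair, connections will fail,
which matches the behaviour described above.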

> Many thanks,

Best regards
Aleks

> Thanks & Best Regards! 
> 
> * VU XUAN HOC
>   Mobile: 0169.8081005
>   THAISON TECHNOLOGY DEVELOPMENT COMPANY
>   Add: 11 Dang Thuy Tram, Hoang Quoc Viet, Cau Giay, Ha Noi
>   Tel: +84.4.37545222
>   Fax: +84.4.37545223
>   Email: ho...@thaison.vn
>   Web: http://www.thaison.vn; http://www.einvoice.vn; http://www.etax.vn; http://www.ebh.vn
> 
>  
> 




Re: haproxy issue tracker discussion

2019-01-10 Thread Aleksandar Lazic
On 09.01.2019 at 15:22, Willy Tarreau wrote:
> Hi Tim,
> 
> On Wed, Jan 09, 2019 at 12:58:30PM +0100, Tim Düsterhus wrote:
>> On 09.01.19 at 05:31, Willy Tarreau wrote:
>>> Except that the "naturally" part here is manually performed by someone,
>>> and an issue tracker is nothing more than an organized todo list, which
>>> *is* useful to remind that you missed some backports. It regularly happens
>>> to us, like when the safety of some fixes is not certain and we prefer to
>>> let them run for a while in the most recent versions before backporting
>>> them to older branches. This is exactly where an issue tracker is needed,
>>> to remind us that these fixes are still needed in older branches.
>>
>> So the commits are not being cherry-picked in the original order? I
>> imagined that the process went like this:
>>
>> 1. List all the commits since the last cherry-picks
>> 2. Look in the commit message to see whether the commit should be
>> backported.
>> 3. Cherry-pick the commit.
> 
> It's what we *try* to do, but cherry-picking never is rocket science, for
> various reasons, some ranging from uncertainty regarding some fixes that
> need to cool down later, other because an add-on was made, requiring an
> extra patch that are much more convenient to deal with together (think
> about bisect for example). That's why I created the git-show-backport
> script which gives us significant help in comparing lists of commits from
> various branches.
> 
>>> If the issue tracker only tracks issues related to the most recent branch,
>>
>> I believe you misunderstood me. What I attempted to say is:
>>
>> The issue tracker tracks which branches the bug affects. But IMO it does
>> not need to track whether the backport already happened, because the
>> information that the backport needs to happen is in the commit itself
>> (see above).
> 
> For me it is important to have the info that the branch is still unfixed
> because as I explained, the presence of a given commit is not equivalent
> to the issue being fixed. A commit is for a branch. It will often backport
> as a 1-to-1 to the closest branches, but 10% of the time you need to
> backport extra stuff as well that is not part of the fix but which the
> fix uses, and sometimes you figure that the issue is still not completely
> fixed despite the backport being there because it's more subtle.
> 
>>> it will only solve the problem for this branch. For example, Veiko Kukk
>>> reported in November that compression in 1.7.11 was broken again. How do
>>> I know this ? Just because I've added an entry for this in my TODO file.
>>> This bug is apparently a failed backport, so it requires that the original
>>> bug is reopened and that any backport attempt to an older version is paused.
>>
>> Is the failed backport a new bug or is it not? I'd say it's a new bug,
>> because the situation changed. It's a new bug (someone messed up the
>> backport) that affects haproxy-1.7, but does not affect haproxy-dev. You
>> describe it as an old bug that needs to be re-opened.
> 
> For me it's not a new bug at all, it's the same description. Worse, often
> it will even be the one the reporter used! For example someone might report
> an issue with 1.7, that we diagnose covers 1.7 to 2.0-dev. We finally find
> the bug and if it in 2.0-dev then backport it. The backport stops working
> when reaching 1.7. It's hard to claim it's a new bug while it exactly is the
> bug the person reported! Doing otherwise would make issue lookups very
> cumbersome, even more than the mailing list archives where at least you
> can sort by threads. Thus for me it's only the status in the old branch
> which is not resolved. It's also more convenient for users looking for a
> solution to figure that the same bug is already fixed in 1.8 and that
> possibly an upgrade would be the path to least pain.
> 
>>> You'll note that for many of them the repository is only a mirror by
>>> the way, so that's another hint.
>>
>> I suspect the reason is simple: The project already had a working issue
>> tracker that predated GitHub. Many of these projects are way older than
>> GitHub.
> 
> It's possible.
> 
>> Here's some more recent projects that probably grew up with GitHub. I
>> can't comment how they do the backports, though:
>>
>> https://github.com/nodejs/node/issues (has LTS / Edge)
>> https://github.com/zfsonlinux/zfs/issues (has stable / dev)
>> https://github.com/antirez/redis/issues
>> https://github.com/moby/moby/issues (tons of automation based on an
>> issue template)
> 
> I only knew 3 of them by name and never used any ;-)
> 
> Node is interesting here. They have tags per affected version. E.g.
> https://github.com/nodejs/node/issues/25221

I like this, as then you can see all affected versions for an issue and PR.

> I tend to think that if labels already mark the relevance to a branch,
> then they override the status and probably we don't really care about
> the status. The "moby" project above does that by the way, 

Re: Question about Maglev algorithm

2018-12-29 Thread Aleksandar Lazic
On 29.12.2018 at 19:25, Valentin Vidic wrote:
> On Sat, Dec 29, 2018 at 06:03:51PM +0100, Aleksandar Lazic wrote:
>> I thought I had misunderstood the idea behind Maglev, thanks for
>> clarification.
> 
> Found another mention of Maglev [Eis16] for high-level load balancing (between
> datacenters):
> 
>   https://landing.google.com/sre/sre-book/chapters/load-balancing-frontend/

Thanks.

As explained by Willy, [Eis16] is about IP packets, I think.

```
.
.

Our current VIP load balancing solution [Eis16] uses packet encapsulation. A
network load balancer puts the forwarded packet into another IP packet with
Generic Routing Encapsulation (GRE) [Han94], and uses a backend’s address as the
destination. A backend receiving the packet strips off the outer IP+GRE layer
and processes the inner IP packet as if it were delivered directly to its
network interface. The network load balancer and the backend no longer need to
exist in the same broadcast domain; they can even be on separate continents as
long as a route between the two exists.
.
.

```

As more and more SDNs are in use in Kubernetes environments, I'm asking myself
if this algorithm could have some benefit. The network setup, and IT in general,
have a high rate of change, so let's keep it in mind and see what the future
brings or requires ;-)

QUIC is coming "quick" around the corner, and this will change a lot, especially
for reverse proxies like haproxy, IMHO.

Regards
Aleks



Re: Question about Maglev algorithm

2018-12-29 Thread Aleksandar Lazic
On 29.12.2018 at 07:41, Willy Tarreau wrote:
> On Fri, Dec 28, 2018 at 07:55:11PM +0100, Aleksandar Lazic wrote:
>> Well, as far as I understood the PDF, one of the biggest differences is that
>> Maglev is a distributed system whereas the consistent hash is for a local system.
> 
> No, not at all. The difference is that it's designed for packet processing
> so they have to take care of connection tracking and per-packet processing
> cost. From what I've read in the paper, it could be seen as a subset of
> what we already do :
>   - server weights are not supported in Maglev (and very likely not needed)
>   - slow start is not supported
>   - server insertion/removal can be extremely expensive (O(N^2)) due to the
> way they need to build the hash table for fast lookup
>   - no possibility for bounded load either
> 
> It's really important to understand the different focus of the algorithm,
> being packet-oriented instead of L7-oriented. This explains a number of
> differences and choices. I think Maglev is excellent for what it does and
> that our mechanism wouldn't be as fast if used on a per-packet basis. But
> conversely, we already do the same and even much more by default because
> we work at a different layer.
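For illustration only, the lookup-table population described in the paper (and
whose rebuild cost is discussed above) can be sketched in Python roughly as
follows; the hash construction is an arbitrary assumption here, not taken from
the paper or from haproxy:

```python
import hashlib
from collections import Counter

def _h(name: str, salt: str) -> int:
    # arbitrary stand-in for the paper's two hash functions (an assumption here)
    return int.from_bytes(hashlib.sha256((salt + name).encode()).digest()[:8], "big")

def maglev_table(backends, size=251):
    """Populate a Maglev lookup table; size should be prime."""
    # each backend derives a permutation of table slots from (offset, skip)
    perms = [(_h(b, "offset") % size, _h(b, "skip") % (size - 1) + 1)
             for b in backends]
    nxt = [0] * len(backends)      # next position in each preference list
    table = [None] * size
    filled = 0
    while True:
        for i, (offset, skip) in enumerate(perms):
            # walk backend i's preference list until a free slot is found
            while True:
                slot = (offset + nxt[i] * skip) % size
                nxt[i] += 1
                if table[slot] is None:
                    break
            table[slot] = backends[i]
            filled += 1
            if filled == size:
                return table

def lookup(table, conn_key: str):
    # O(1) mapping of a connection identifier to a backend
    return table[_h(conn_key, "conn") % len(table)]

table = maglev_table(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(Counter(table))  # near-even split across the three backends
```

Because the backends take turns claiming slots, the table fills almost evenly,
but any server addition or removal means rebuilding the whole table, which is
where the cost mentioned above comes from, while lookups stay O(1).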

I thought I had misunderstood the idea behind Maglev, thanks for the clarification.

> Willy

Cheers
Aleks



Re: Question about Maglev algorithm

2018-12-28 Thread Aleksandar Lazic
Well, as far as I understood the PDF, one of the biggest differences is that
Maglev is a distributed system whereas the consistent hash is for a local system.

What I think is that if the consistent hash used the peers table for balancing, it
could be similar to Maglev, but I'm not an algorithm expert; it's just an idea.

I don't know if it has any benefit for haproxy. I have seen this algorithm on the
Envoy site and wanted to know what the experts here think about it :-)

https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/load_balancing/load_balancers

Regards
Aleks


---- Original Message ----
From: Aaron West
Sent: 28 December 2018 19:36:03 CET
To: HAProxy
Subject: Re: Question about Maglev algorithm

I've not used it yet with IPVS because I have nothing with a new enough
Kernel (4.18+ I think), however, isn't this quite similar to HAProxy's
consistent hash options?

Aaron
Loadbalancer.org


Question about Maglev algorithm

2018-12-28 Thread Aleksandar Lazic
Hi.

Has anyone taken a look at the Maglev algorithm?

This paper looks very interesting 
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44824.pdf

Regards
Aleks



Tweet about Facebook's implementation and deployment of QUIC

2018-12-28 Thread Aleksandar Lazic
Hi.

I just have seen this tweet, maybe it's also interesting for you.

Subodh Iyengar (@__subodh) tweeted at 10:18 PM on Wed, Dec 26, 2018:

Slides for my presentation at ACM conext on Facebook's implementation and 
deployment of QUIC are now live 
https://conferences2.sigcomm.org/co-next/2018/slides/epiq-keynote.pdf . I 
presented some numbers on latency reductions we've seen so far as well as the 
new load balancing infrastructure we've built for QUIC.
(https://twitter.com/__subodh/status/1078037284908261377)

Regards
Aleks



RE: Http HealthCheck Issue

2018-12-27 Thread Aleksandar Lazic
Hi Praveen.

That's because the http health check is on a different network layer.

The server line defines the tcp layer and the http check the application layer.

Please take a look into the doc about check and http-check.

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.2-check

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-option%20httpchk

As I don't know how deep your knowledge of the different layers is, let me
suggest this article to refresh it.

https://en.m.wikipedia.org/wiki/TCP/IP_model

Regards
aleks


---- Original Message ----
From: "UPPALAPATI, PRAVEEN"
Sent: 27 December 2018 06:24:43 CET
To: Aleksandar Lazic, haproxy
Subject: RE: Http HealthCheck Issue

Hi Alex,

If I have one vhost representing all the Nexus hosts, how can haproxy identify
which server is down?

I guess the health check is there to determine which server is healthy, right? If a
vhost masks all the servers in the backend list, how would haproxy divert the traffic?

Please advise.

Thanks,
Praveen.

-----Original Message-----
From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
Sent: Thursday, December 20, 2018 2:34 AM
To: UPPALAPATI, PRAVEEN ; haproxy 
Subject: Re: Http HealthCheck Issue

Hi Praveen.

Please keep the list in the loop, thanks.

On 20.12.2018 at 07:00, UPPALAPATI, PRAVEEN wrote:
> Hi Alek,
> 
> Now I am totally confused:
> 
> When I say :
> 
> 
> backend bk_8093_read
> balance source
> http-response set-header X-Server %s
> option log-health-checks
> option httpchk GET 
> /nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt 
> HTTP/1.1\r\nHost:\ server1.com:8093\r\nAuthorization:\ Basic\ ...
> server primary8093r server1.com:8093 check verify none
> server backUp08093r server2.com:8093 check backup verify none 
> server backUp18093r server3.com:8093 check backup verify none
> 
> Here server1.com,server2.com and physical server hostnames.

That's the issue. You should have a vhost name which is the same for all
servers.

> When I define the HOST header, which server should I define? I was expecting
> that haproxy would formulate that to
> 
> https://urldefense.proofpoint.com/v2/url?u=http-3A__server1.com-3A8093_nexus_repository_rawcentral_com.att.swm.attpublic_healthcheck.txt=DwIFaQ=LFYZ-o9_HUMeMTSQicvjIg=V0kSKiLhQKpOQLIjj3-g9Q=OZ-O9xFFKCfaCy769FVtGWPRgXm2eydV92WYJPam8Yg=--6orxuqJAZSK_-qumJjuZEhy1Iru7mkgPPJvYpw4RQ=
> https://urldefense.proofpoint.com/v2/url?u=http-3A__server2.com-3A8093_nexus_repository_rawcentral_com.att.swm.attpublic_healthcheck.txt=DwIFaQ=LFYZ-o9_HUMeMTSQicvjIg=V0kSKiLhQKpOQLIjj3-g9Q=OZ-O9xFFKCfaCy769FVtGWPRgXm2eydV92WYJPam8Yg=eJkYBH7JAMAIU1ZIHXCFbOs3RLA0OtMHp1ky_rN-d7s=
> https://urldefense.proofpoint.com/v2/url?u=http-3A__server3.com-3A8093_nexus_repository_rawcentral_com.att.swm.attpublic_healthcheck.txt=DwIFaQ=LFYZ-o9_HUMeMTSQicvjIg=V0kSKiLhQKpOQLIjj3-g9Q=OZ-O9xFFKCfaCy769FVtGWPRgXm2eydV92WYJPam8Yg=sUUE7fa4pe0lN8iFaAuznj-m8sgwQS2X2xeeva-yCEM=
> 
> to monitor which servers are live, right? So how could I dynamically populate
> the HOST?

I have written the following:

> I assume that nexus has a general URL and not srv1,srv2, ...

This is also mentioned in the config example.

https://urldefense.proofpoint.com/v2/url?u=https-3A__help.sonatype.com_repomanager2_installing-2Dand-2Drunning_running-2Dbehind-2Da-2Dreverse-2Dproxy=DwIFaQ=LFYZ-o9_HUMeMTSQicvjIg=V0kSKiLhQKpOQLIjj3-g9Q=OZ-O9xFFKCfaCy769FVtGWPRgXm2eydV92WYJPam8Yg=vJtNj9Bd4EYr_kdknJiGcOOfWnV6Mgna31-JAbiKwq4=

This means there should be a generic hostname on the nexus which is the same for
all servers.

Maybe you can share the nexus config, nexus setup and the nexus version, as it's
normally not a big deal to set up a vhost.
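For example, a hypothetical check section with one shared vhost name could look
like this; `nexus.example.com` below is made up and must match whatever vhost is
actually configured on the Nexus side (the Authorization header is omitted for
brevity):

```
backend bk_8093_read
    balance source
    option log-health-checks
    option httpchk GET /nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt HTTP/1.1\r\nHost:\ nexus.example.com:8093
    server primary8093r server1.com:8093 check verify none
    server backUp08093r server2.com:8093 check backup verify none
    server backUp18093r server3.com:8093 check backup verify none
```

haproxy sends the check to each server's own address and port from the server
line, so it always knows which physical server is down; only the Host header in
the check request stays the same for all of them, so there is nothing to
populate dynamically.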

Regards
Aleks


PS: What's this urldefense.proofpoint.com crap 8-O

> Please advice.
> 
> Thanks,
> Praveen.
> 
> 
> 
> -----Original Message-----
> From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
> Sent: Wednesday, December 19, 2018 3:25 PM
> To: UPPALAPATI, PRAVEEN 
> Cc: Jonathan Matthews ; Cyril Bonté 
> ; haproxy@formilux.org
> Subject: Re: Http HealthCheck Issue
> 
> On 19.12.2018 at 21:04, UPPALAPATI, PRAVEEN wrote:
>> Ok then do I need to add the haproxy server?
> 
> I suggest using `curl -v
> /nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt`
> and seeing how curl makes the request.
> 
> I assume that nexus has a general URL and not srv1,srv2, ...
> For example.
> 
> ###
> curl -vo /dev/null 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.haproxy.org=DwIFaQ=LFYZ-o9_HUMeMTSQicvjIg=V0kSKiLhQKpOQLIjj3-g9Q=CaQ1GDp8D6XzObaEV3Ad9IQ3Q1TwhAAYhFQ24IgwP68=53v5RKBVFzzKyU7JGcd8i6eBlGyIfSavQBRkoYcXZm8=
> 
> * Rebuilt URL to: 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.haproxy.or

Re: haproxy AIX 7.1.0.0 compile issues

2018-12-26 Thread Aleksandar Lazic

Hi Patrick.

On 26-12-2018 22:26, Overbey, Patrick (Sioux Falls) wrote:


Hello,

First off, I want to say thank you for your hard work on haproxy. It is 
a very fine piece of software.
I recently ran into a bug compiling haproxy 1.9+ on AIX 7.1
7100-00-03-1115 using gmake 4.2 and gcc 8.1.0. I previously had 1.8
compiling with this same setup after a few minor code changes to
variable names in vars.c and connection.c. I made those same changes in
version 1.9, but have now run into a compile issue that I cannot get
around because initcall.h is new in 1.9.


Please can you tell us which `minor code changes` were necessary to be
able to compile on AIX 7.1?



Here are the compile errors I am seeing.

ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more 
information.


What do you get when you add `-bnoquiet` to the LDFLAGS?
Please can you share the compile output with `V=1`  activated.


ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_PREPARE

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_PREPARE

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_LOCK

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_LOCK

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_ALLOC

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_ALLOC

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_POOL

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_POOL

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_REGISTER

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_REGISTER

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_INIT

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_INIT

Would anyone have suggestions for how to fix this?

Also, as a note I am able to compile haproxy 1.5 out of the box, but 
starting with version 1.6 is where I run into compile errors. Is there 
support for these compile bugs or am I on my own?


Please can you be more precise about which compile errors you got? Thanks.


Thanks for any help you can offer.

PATRICK OVERBEY

Software Development Engineer Staff

Product Development/Bank Solutions

Office: 605-362-1260  x7290

FISERV
JOIN US @ FORUM 2019
Fiserv | Join Our Team | Twitter | LinkedIn | Facebook
FORTUNE Magazine WORLD'S MOST ADMIRED COMPANIES® 2014 | 2015 | 2016 | 
2017 | 2018
(c) 2018 Fiserv Inc. or its affiliates. Fiserv is a registered 
trademark of Fiserv Inc. Privacy Policy




Re: HA Proxy Load Balancer

2018-12-21 Thread Aleksandar Lazic
Hi Lance.

Please keep the list in the loop as there are several other persons which can
also help, thank you.

On 21.12.2018 at 14:49, Lance Melancon wrote:
> I hope this helps in what you are requesting. So this config works great but I
> need to redirect the server to a sub site as in myserver.net/site. We are
> looking for the exact syntax to add to the haproxy.cfg. I’m including my
> programmer that may understand your feedback better than myself. We did try
> several things referring to the documentation with no luck. Thanks!

A docx with embedded images is neither a very secure nor a common format on this
list, so let me copy the content of the docx here, comment it inline, and answer
below.

> Haproxy.cfg:
> global
>log /dev/log local0
>log /dev/log local1 notice
>chroot /var/lib/haproxy
>stats timeout 30s
>user haproxy
>group haproxy
>daemon
>maxconn 15000
> 
> defaults
>log global
>mode http
>option httplog
>option dontlognull
>timeout connect 5000
>timeout client 5
>timeout server 5
> 
> frontend myserver.net
>bind *:443
>mode tcp

Okay here is the problem.

As haproxy is only used here for TCP proxying, not for HTTP, you will not be able
to do what you want.

https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-mode

>maxconn 15000
>default_backend hac_cluster
> 
> backend hac_cluster
>mode tcp
>balance leastconn
>server myserver 192.1.1.1:443 check maxconn 5000
>server myserver 192.1.1.2:443 check maxconn 5000
> 
>listen statistics
>bind *:80

I would not recommend putting the statistics on port 80, but that's only my opinion.

>mode http
>stats enable
>stats hide-version
>stats refresh 30s
>stats show-node
>stats auth myserver:password   
>stats admin if TRUE
>stats uri /lbstats
> 
> 
> haproxy -vv
>> ## excerpt from image
> Version 1.7.8
> No compression libs, openssl, pcre nor lua support

On which platform is this haproxy running?
Is haproxy installed from the package management or was it build from sources?

To be able to do what you want you will need to do the following steps.

* Install haproxy with openssl support

* Get the certificates for the backend server and add them to haproxy:

https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
  - Pay attention to copy the certificates into the chroot dir
    `chroot /var/lib/haproxy`

* create a frontend acl for the path `acl my_site path_beg -i /site`

* create a use_backend line `use_backend my_site if my_site`

* create a backend with the name `my_site` with the server line like
  `server myserver myserver.net: ...`
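Put together, the steps above could look roughly like this; the certificate
path, port 443 and `ssl verify none` are assumptions for illustration, not a
tested configuration (and it requires a haproxy built with OpenSSL support):

```
frontend myserver.net
    # the certificate must live inside the chroot (/var/lib/haproxy)
    bind *:443 ssl crt /var/lib/haproxy/certs/myserver.pem
    mode http
    maxconn 15000
    acl my_site path_beg -i /site
    use_backend my_site if my_site
    default_backend hac_cluster

backend my_site
    mode http
    server myserver myserver.net:443 ssl verify none check

backend hac_cluster
    mode http
    balance leastconn
    server myserver1 192.1.1.1:443 ssl verify none check maxconn 5000
    server myserver2 192.1.1.2:443 ssl verify none check maxconn 5000
```

The two `server` lines in `hac_cluster` also get distinct names here, since the
original config reused the name `myserver` twice.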

As I mentioned before, it's not an easy task to dig into this topic, therefore I
strongly recommend that you and your programmer take some time to understand how
load balancing on layers 6 (TLS/SSL) and 7 (HTTP) works.

Here are some links which could help to get a better picture of HAProxy and LB
in general.
http://www.haproxy.org/download/1.7/doc/intro.txt
https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/
https://www.haproxy.com/blog/introduction-to-haproxy-acls/

In any case, please post logs, configs or anything else directly in the mail body
so that the people who read this list via a console are able to follow it without
having to open a Word document.

We are glad to help as long as we can read the mails ;-)

Very best regards
Aleks


> -----Original Message-----
> From: Aleksandar Lazic 
> Sent: Thursday, December 20, 2018 4:21 PM
> To: Lance Melancon 
> Cc: haproxy@formilux.org
> Subject: Re: HA Proxy Load Balancer
> 
>  
> 
> CAUTION: This email originated from outside Cypress-Fairbanks ISD. Do not 
> click
> links or open attachments unless you recognize the sender and know the content
> is safe.
> 
>  
> 
>  
> 
>  
> 
> Hi Lance.
> 
>  
> 
> On 20-12-2018 21:41, Lance Melancon wrote:
> 
>> Thanks for the info. Unfortunately I am not a programmer by a long
>> shot and syntax is a big problem for me. I tried a few things but no
>> luck and I can't find any examples of a redirect.
>> So do I need both the backend and acl statements?
>> I'm simply trying to use mysite.net to direct to mysite.net/website.
>> Any time I use a / the config fails.
> 
> I'm not sure if you have read and understood my last mail.
> 
> Do you have time to dig into this topic? It isn't a quick shot, AFAIK.
> 
> We need some more infos to be able to help you.
> 
>> haproxy -vv
>> anonymized config

Re: HA Proxy Load Balancer

2018-12-20 Thread Aleksandar Lazic

Hi Lance.

On 20-12-2018 21:41, Lance Melancon wrote:

Thanks for the info. Unfortunately I am not a programmer by a long
shot and syntax is a big problem for me. I tried a few things but no
luck and I can't find any examples of a redirect.
So do I need both the backend and acl statements?
I'm simply trying to use mysite.net to direct to mysite.net/website.
Any time I use a / the config fails.


I'm not sure if you have read and understood my last mail.
Do you have time to dig into this topic? It isn't a quick shot, AFAIK.


We need some more infos to be able to help you.


haproxy -vv
anonymized config


Regards
Aleks


-----Original Message-----
From: Aleksandar Lazic 
Sent: Thursday, December 20, 2018 2:00 PM
To: Lance Melancon 
Cc: haproxy@formilux.org
Subject: Re: HA Proxy Load Balancer

CAUTION: This email originated from outside Cypress-Fairbanks ISD. Do
not click links or open attachments unless you recognize the sender
and know the content is safe.



Hi Lance.

On 20-12-2018 18:20, Lance Melancon wrote:


We are testing the load balancer and it's working but I can't see how
to direct the server to a specific website such as server.net/site. Is
this possible? Syntax? Thanks!


Well yes. I think it is a good starting point to read and understand
this blog article.

https://www.haproxy.com/blog/using-haproxy-as-an-api-gateway-part-1/

What you want to do is "HTTP Routing"

For example a short snipplet
###

acl my_site path_beg -i /site

...
use_backend my_site if my_site

###

It would help a lot to have some more information from you, like:

haproxy -vv
anonymized config

As we don't know how much knowledge you have about HTTP, I want to point out
that the statement "server.net/site" has 2 parts.

Host: server.net
Path: /site

This is explained in detail in the doc.
http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#1

Hth
Aleks


CYPRESS-FAIRBANKS ISD CONFIDENTIALITY NOTICE: This email, including
any attachments, is for the sole use of the intended recipient(s) and
may contain confidential student and/or employee information.
Unauthorized use and/or disclosure is prohibited under federal and
state law. If you are not the intended recipient, you may not use,
disclose, copy or disseminate this information. Please call the sender
immediately or reply by email and destroy all copies of the original
message, including any attachments. Unless expressly stated in this
e-mail, nothing in this message should be construed as a digital or
electronic signature.





Re: [ANNOUNCE] haproxy-1.9.0

2018-12-20 Thread Aleksandar Lazic

Hi Willy.

On 20-12-2018 10:29, Willy Tarreau wrote:

On Thu, Dec 20, 2018 at 09:17:00AM +0100, Aleksandar Lazic wrote:
Runtime API Improvements: It would be nice if you added a section saying that
hanging or dead processes can also be debugged with this API now. Maybe I have
overlooked it.


It is already the case. It's not shown in the article, but the main benefit
that comes from this mechanism is that you have the list of current and old
processes, and that you can access them all. It is something I've been
missing for a long time: to have a CLI connection to an old process that
does not want to die, to see what's happening.


Yep. Exactly that info would be nice to have in the article.
It's a USP (unique selling proposition) IMHO ;-)

Server Queue Priority Control: It would be nice to have an example for a server
decision based on Server Queue Priority.


It's not a server decision, it's a dequeuing decision. To give you a concrete
example of what I'm using on my build farm, I'm using haproxy to load-balance
distcc traffic to a bunch of build nodes. Some files are large and slow to
compile, others are small and build fast. The time it takes to build the
largest file can be almost as long as the total build time. If these files
start to build after the other ones, the total build time increases because
I have to wait for such a large file to be built on one node at the end. So
instead what I'm doing is that I sort the queue by file size. Each time a
connection slot is available on a server, instead of picking the oldest
entry, haproxy picks the one with the largest file. This way large files
start to build before small ones and the build completes much faster.

But there is a caveat to doing this: if you have a very large number of
large files, you can leave small files starving in the queue till the
timeout strikes. That's what happened to me when building my kernels. So
I changed from set-priority-class to set-priority-offset to address this.
Now large files are built up to 10 seconds earlier and small files are
built up to 10 seconds later. By doing this I can respect both the size
ordering and bound the distance between the extremes, and I don't have
a timeout anymore.


As always well explained thanks.


This is what my config looks like (with the old set-priority-class
still present but commented out), I'm copy-pasting it here because I
think it's self-explanatory :


Cool thanks.

# wait for the payload to arrive before connecting to the server
tcp-request inspect-delay 1m
tcp-request content reject unless { distcc_param(DOTI) -m found }

# convert kilobytes to classes (negated)
# test: tcp-request content set-var(sess.size) str(2b95)
tcp-request content set-var(sess.size) distcc_param(DOTI)
tcp-request content set-var(sess.prio) var(sess.size),div(-1024),add(2047)
tcp-request content set-var(sess.prio) int(0)    if { var(sess.prio) -m int lt 0 }
tcp-request content set-var(sess.prio) int(2047) if { var(sess.prio) -m int gt 2047 }
#tcp-request content set-priority-class var(sess.prio)
# offset up to +10 seconds for small files
tcp-request content set-priority-offset var(sess.prio),mul(5)

balance leastconn
default-server on-marked-down shutdown-sessions

# mcbin: 4xA72: their 4 CPUs must be used before any of the miqi
server lg1    192.168.0.201:3632 check weight 5 maxconn 4
server lg2    192.168.0.202:3632 check weight 5 maxconn 4
server lg3    192.168.0.203:3632 check weight 5 maxconn 4
server lg4    192.168.0.204:3632 check weight 5 maxconn 4
server lg5    192.168.0.205:3632 check weight 5 maxconn 4
server lg6    192.168.0.206:3632 check weight 5 maxconn 4
server lg7    192.168.0.207:3632 check weight 5 maxconn 4

# miqi: 4xA17
server miqi-1 192.168.0.225:3632 check weight 1 maxconn 4
server miqi-2 192.168.0.226:3632 check weight 1 maxconn 4
server miqi-3 192.168.0.227:3632 check weight 1 maxconn 4
server miqi-4 192.168.0.228:3632 check weight 1 maxconn 4
server miqi-5 192.168.0.229:3632 check weight 1 maxconn 4
server miqi-6 192.168.0.230:3632 check weight 1 maxconn 4
server miqi-7 192.168.0.231:3632 check weight 1 maxconn 4
server miqi-8 192.168.0.232:3632 check weight 1 maxconn 4
server miqi-9 192.168.0.233:3632 check weight 1 maxconn 4
server miqi-a 192.168.0.234:3632 check weight 1 maxconn 4
Willy


Regards
Aleks



Re: HA Proxy Load Balancer

2018-12-20 Thread Aleksandar Lazic

Hi Lance.

Am 20-12-2018 18:20, schrieb Lance Melancon:

We are testing the load balancer and it's working but I can't see how 
to direct the server to a specific website such as server.net/site. Is 
this possible? Syntax? Thanks!


Well yes. I think it is a good starting point to read and understand 
this blog article.


https://www.haproxy.com/blog/using-haproxy-as-an-api-gateway-part-1/

What you want to do is "HTTP Routing"

For example, a short snippet:
###

acl my_site path_beg -i /site

...
use_backend my_site if my_site

###

It would help a lot to have some more information from you, like:

haproxy -vv
anonymized config

As we don't know how much knowledge you have about HTTP, I want to point out 
that the statement "server.net/site" has 2 parts:


Host: server.net
Path: /site

This is explained in detail in the doc.
http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#1
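Putting both parts together, a minimal routing sketch could look as follows. The bind address, backend names and server addresses are illustrative assumptions, not taken from the original setup:

```
frontend fe_web
    bind *:80
    mode http

    # route requests whose path starts with /site to a dedicated backend
    acl my_site path_beg -i /site
    use_backend bk_site if my_site
    default_backend bk_default

backend bk_site
    mode http
    server site1 192.0.2.10:8080 check

backend bk_default
    mode http
    server web1 192.0.2.20:8080 check
```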

Hth
Aleks





Re: Http HealthCheck Issue

2018-12-20 Thread Aleksandar Lazic
Hi Praveen.

Please keep the list in the loop, thanks.

Am 20.12.2018 um 07:00 schrieb UPPALAPATI, PRAVEEN:
> Hi Alek,
> 
> Now I am totally confused:
> 
> When I say :
> 
> 
> backend bk_8093_read
> balancesource
> http-response set-header X-Server %s
> option log-health-checks
> option httpchk GET 
> /nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt 
> HTTP/1.1\r\nHost:\ server1.com:8093\r\nAuthorization:\ Basic\ ...
> server primary8093r server1.com:8093 check verify none
> server backUp08093r server2.com:8093 check backup verify none 
> server backUp18093r server3.com:8093 check backup verify none
> 
> Here server1.com,server2.com and physical server hostnames.

That's the issue. You should have a vhost name which is the same for all 
servers.

> When I define HOST header which server should I define I was expecting 
> haproxy will formulate that to 
> 
> http://server1.com:8093/nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt
> http://server2.com:8093/nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt
> http://server3.com:8093/nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt
> 
> to monitor which servers are live right , so how could I dynamically populate 
> the HOST?

I have written the following:

> I assume that nexus have a general URL and not srv1,srv2, ...

This is also mentioned in the config example.

https://help.sonatype.com/repomanager2/installing-and-running/running-behind-a-reverse-proxy

This means there should be a generic hostname on the Nexus which is the same
for all servers.
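A sketch of how the check could then look, assuming a shared vhost name `nexus.example.com` (a placeholder, not the real hostname) that every Nexus instance answers to:

```
backend bk_8093_read
    balance source
    option log-health-checks
    # one generic vhost name in the Host header, valid for all three servers
    option httpchk GET /nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt HTTP/1.1\r\nHost:\ nexus.example.com
    server primary8093r server1.com:8093 check verify none
    server backUp08093r server2.com:8093 check backup verify none
    server backUp18093r server3.com:8093 check backup verify none
```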

Maybe you can share the nexus config, nexus setup and the nexus version, as it's
normally not a big deal to setup a vhost.

Regards
Aleks


PS: What's this urldefense.proofpoint.com crap 8-O

> Please advice.
> 
> Thanks,
> Praveen.
> 
> 
> 
> -Original Message-
> From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
> Sent: Wednesday, December 19, 2018 3:25 PM
> To: UPPALAPATI, PRAVEEN 
> Cc: Jonathan Matthews ; Cyril Bonté 
> ; haproxy@formilux.org
> Subject: Re: Http HealthCheck Issue
> 
> Am 19.12.2018 um 21:04 schrieb UPPALAPATI, PRAVEEN:
>> Ok then do I need to add the haproxy server?
> 
> I suggest to use a `curl -v
> /nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt`
> and see how curl make the request.
> 
> I assume that nexus have a general URL and not srv1,srv2, ...
> For example.
> 
> ###
> curl -vo /dev/null https://www.haproxy.org
> 
> * Rebuilt URL to: https://www.haproxy.org/
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0*
>   Trying 51.15.8.218...
> * Connected to www.haproxy.org (51.15.8.218) port 443 (#0)
> * found 148 certificates in /etc/ssl/certs/ca-certificates.crt
> * found 599 certificates in /etc/ssl/certs
> * ALPN, offering http/1.1
> * SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
> *server certificate verification OK
> *server certificate status verification SKIPPED
> *common name: *.haproxy.org (matched)
> *server certificate expiration date OK
> *server certificate activation date OK
> *certificate public key: RSA
> *certificate version: #3
> *subject: OU=Domain Control Validated,OU=EssentialSSL
> Wildcard,CN=*.haproxy.org
> *start date: Fri, 21 Apr 2017 00:00:00 GMT
> *expire date: Mon, 20 Apr 2020 23:59:59 GMT
> *issuer: C=GB,ST=Greater Manchester,L=Salford,O=COMODO CA
> Limited,CN=COMODO RSA Domain Validation Secure Server CA
> *compression: NULL
> * ALPN, server accepted to use http/1.1
>> GET / HTTP/1.1
>> Host: www.haproxy.org # <<<< That's the host header which is missing in your
>> check line

Re: [ANNOUNCE] haproxy-1.9.0

2018-12-20 Thread Aleksandar Lazic
Am 20.12.2018 um 06:48 schrieb Willy Tarreau:
> On Wed, Dec 19, 2018 at 11:31:33PM +0100, Aleksandar Lazic wrote:
>>> Well, I know that so quick a summary doesn't do justice to the developers
>>> having done all this amazing work, but I've seen that some of my coworkers
>>> have started to write an article detailing all these new features, so I
>>> won't waste my time paraphrasing them. I'll pass the URL here once this
>>> article becomes public. No, I'm not lazy, I'm tired and hungry ;-)
> 
> And here comes the link, it's more detailed than above :
> 
> https://www.haproxy.com/blog/haproxy-1-9-has-arrived/

good written ;-).

2 suggestions.

Runtime API Improvements: It would be nice if you added a paragraph noting that
hanging or dead processes can now also be debugged with this API. Maybe I have
overlooked it.

Server Queue Priority Control: It would be nice to have an example of server
decisions based on Server Queue Priority.

> I still have to catch up with a large number of unresponded e-mails, and
> once done, I'll send another update explaining how I hope we can organizer
> our work better for the next steps.
> 
>> Amazing work to the whole team.
>>
>> Enjoy some good food, it's easy in France ;-), and a deep sleep to recharge your
>> batteries; you and your team have more than deserved it.
> 
> Thanks Aleks, now both done. Very good cassoulet less than 2km away from
> the office ;-)

;-)

> Cheers,
> Willy

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.0

2018-12-19 Thread Aleksandar Lazic
Hi.

Am 19.12.2018 um 19:33 schrieb Willy Tarreau:
> Hi,
> 
> HAProxy 1.9.0 was released on 2018/12/19. It added 45 new commits
> after version 1.9-dev11.
> 
> We still had a number of small issues causing the various artefacts that
> have been visible on haproxy.org since this week-end, but now everything
> looks OK. So it's better to release before we discover new issues :-)

Good Idea ;-)

The image is available.

###
docker run --rm --entrypoint /usr/local/sbin/haproxy me2digital/haproxy19 -vv
HA-Proxy version 1.9.0 2018/12/19 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1a  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.1a  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTX        side=FE|BE
  h2 : mode=HTTP       side=FE
  <default> : mode=HTX        side=FE|BE
  <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
###

> Speaking more seriously, in the end, what we expected to be just a technical
> release looks pretty nice on the features perspective. The features by
> themselves are not high-level but address a wide number of integration
> cases that overall make this version really appealing.
> 
> In the end 1.9 brings to end users (as a quick summary) :
>   - end-to-end HTTP/2
>   - advanced master process management with its own CLI
>   - much more scalable multi-threading
>   - regression test suite
>   - priority-based dequeueing
>   - better cache supporting larger objects
>   - early hints (HTTP 103)
>   - cipher suites for TLS 1.3
>   - random balance algorithm
>   - fine-grained timers for better observability
>   - stdout logging for containers and systemd
> 
> And the rest which has kept us very busy was needed to achieve this and
> to pave the way to future developments and more contributions from people
> who won't have to know the internals as deeply as it used to be needed.
> It's expected that the road to 2.0 will be calmer now.
> 
> Well, I know that so quick a summary doesn't do justice to the developers
> having done all this amazing work, but I've seen that some of my coworkers
> have started to write an article detailing all these new features, so I
> won't waste my time paraphrasing them. I'll pass the URL here once this
> article becomes public. No, I'm not lazy, I'm tired and hungry ;-)

Amazing work to the whole team.

Enjoy some good food, it's easy in France ;-), and a deep sleep to recharge your
batteries; you and your team have more than deserved it.

> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Sources  : http://www.haproxy.org/download/1.9/src/
>Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
> 
> Willy

Very best regards
aleks

> ---
> Complete changelog :
> Christopher Faulet (7):
>   BUG/MEDIUM: compression: Use the right buffer pointers to compress 
> input data
>   BUG/MINOR: mux_pt: Set CS_FL_WANT_ROOM when count is zero in rcv_buf() 
> callback
>   BUG/MEDIUM: stream: Forward the right amount of data before infinite 
> forwarding
>   BUG/MINOR: proto_htx: Call the HTX version of the function managing 
> client 

Re: Http HealthCheck Issue

2018-12-19 Thread Aleksandar Lazic
Am 19.12.2018 um 21:04 schrieb UPPALAPATI, PRAVEEN:
> Ok then do I need to add the haproxy server?

I suggest to use a `curl -v
/nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt`
and see how curl make the request.

I assume that nexus have a general URL and not srv1,srv2, ...
For example.

###
curl -vo /dev/null https://www.haproxy.org

* Rebuilt URL to: https://www.haproxy.org/
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0*
  Trying 51.15.8.218...
* Connected to www.haproxy.org (51.15.8.218) port 443 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 599 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
*server certificate verification OK
*server certificate status verification SKIPPED
*common name: *.haproxy.org (matched)
*server certificate expiration date OK
*server certificate activation date OK
*certificate public key: RSA
*certificate version: #3
*subject: OU=Domain Control Validated,OU=EssentialSSL
Wildcard,CN=*.haproxy.org
*start date: Fri, 21 Apr 2017 00:00:00 GMT
*expire date: Mon, 20 Apr 2020 23:59:59 GMT
*issuer: C=GB,ST=Greater Manchester,L=Salford,O=COMODO CA
Limited,CN=COMODO RSA Domain Validation Secure Server CA
*compression: NULL
* ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: www.haproxy.org #  That's the host header which is missing in your
check line
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< date: Wed, 19 Dec 2018 21:09:30 GMT
< server: Apache
< last-modified: Wed, 19 Dec 2018 18:32:39 GMT
< etag: "504ff5-148d4-57d643d22eab7"
< accept-ranges: bytes
< content-length: 84180
< content-type: text/html
< age: 511
<
{ [16150 bytes data]

###

Btw.: This is also shown in the manual in the Example of the option.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-option%20httpchk

`option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www `

The manual is good, I suggest to read it several times, what I do always ;-)

You should also avoid `X-` in the response header in your config

`http-response set-header X-Server %s`

As Norman mentioned on the list a couple of days ago.

https://www.mail-archive.com/haproxy@formilux.org/msg32110.html
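For illustration, the header could simply be renamed to drop the deprecated `X-` prefix (deprecated by RFC 6648); `Served-By` is an arbitrary example name, not anything from the poster's config:

```
# avoid the deprecated X- prefix (RFC 6648) in custom headers
http-response set-header Served-By %s
```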

Best regards
Aleks

> -Original Message-
> From: Jonathan Matthews [mailto:cont...@jpluscplusm.com] 
> Sent: Wednesday, December 19, 2018 1:32 PM
> To: UPPALAPATI, PRAVEEN 
> Cc: Cyril Bonté ; haproxy@formilux.org
> Subject: Re: Http HealthCheck Issue
> 
> On Wed, 19 Dec 2018 at 19:23, UPPALAPATI, PRAVEEN  wrote:
>>
>> Hmm. Wondering why do we need host header? I was able to do curl without the 
>> header. I did not find anything in the doc.
> 
> "curl" automatically adds a Host header unless you are directly
> hitting an IP address.
> 




Re: MQTT CONNECT parsing in Lua

2018-12-11 Thread Aleksandar Lazic
Hi Baptiste.

Am 11.12.2018 um 03:29 schrieb Baptiste:
> Hi guys,
> 
> At last AWS conference, I met with a engineer who was using HAProxy to
> load-balance IoT devices through HAProxy using MQTT protocol and he was
> complaining about the poor performance of the server with 10k of devices just
> get reconnecting.

Have you any chance to aks the engineer if your solution have better performance
then his?

> He pointed SSL performance but also authentication (validation of username /
> password).

Do you have some more details about his SSL/TLS performance problem?

> So I wrote a small MQTT library for HAProxy which allows parsing the MQTT
> CONNECT message, the very first one being sent by a client.
> The library allows the following:
> * validation of the message (through a converter)
> * fetch any field from the connect message (client id, username, password,
> etc...) for fun and profit (routing, persistence, rate or concurrent 
> connection
> enforcement, etc...)
> * write your own authentication validation module on top of HAProxy
> 
> The code is there, including some HAProxy configuration examples:
> https://github.com/bedis/haproxy_mqtt_lua
> 
> I hope this will be useful to some of you.
> I am planning to write in native C the converter and the fetch above.

In general , cool ;-)

> Baptiste

Regards
Aleks



Re: sample fetch: add bc_http_major

2018-12-07 Thread Aleksandar Lazic
Hi Jerome.

Am 07.12.2018 um 15:37 schrieb Jerome Magnin:
> Hi Aleks,
> 
> On Fri, Dec 07, 2018 at 01:46:53PM +0100, Aleksandar Lazic wrote:
>> Hi Jerome.
>> [...] 
>> I suggest to use a dedicated function for that, jm2c.
>>
>> { "bc_http_major", smp_fetch_bc_http_major, 0, NULL, SMP_T_SINT, 
>> SMP_USE_L4SRV },
>>
> 
> If you look at src/ssl_sock.c there are several fetches applying to both
> frontend and backend connection, and each pair uses the same function. I
> shamelessly copied^W^Wtook example from them.

Got it. Thanks for answer.

> Jérôme

Regards
Aleks



Re: Simply adding a filter causes read error

2018-12-07 Thread Aleksandar Lazic
Hi.

Am 07.12.2018 um 08:37 schrieb flamese...@yahoo.co.jp:
> Hi
> 
> I tested more, and found that even with option http-pretend-keepalive enabled,
> 
> if I increase the test duration , the read error still appear.

Please can you show us some logs from when the error appears?
Can you also tell us some details about the server(s) on which haproxy, wrk and
nginx are running, and what the network setup looks like?

Maybe you are reaching some system limits, as compression requires more OS/HW
resources.

Regards
Aleks
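For reference, enabling compression in a 1.8 backend is usually done with directives like the following; this is a generic sketch based on the poster's quoted config below, with assumed content types, not their exact setup:

```
backend app
    mode http
    # compress responses of common text types with gzip
    compression algo gzip
    compression type text/html text/plain application/json
    server nginx01 10.0.3.15:8080
```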

> Running 3m test @ http://10.0.3.15:8000
>   10 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    19.84ms   56.36ms   1.34s    92.83%
>     Req/Sec    23.11k     2.55k   50.64k    87.10%
>   45986426 requests in 3.33m, 36.40GB read
>   Socket errors: connect 0, read 7046, write 0, timeout 0
> Requests/sec: 229817.63
> Transfer/sec:    186.30MB
> 
> thanks
> 
> - Original Message -
> *From:* "flamese...@yahoo.co.jp" 
> *To:* Aleksandar Lazic ; "haproxy@formilux.org"
> 
> *Date:* 2018/12/7, Fri 09:06
> *Subject:* Re: Simply adding a filter causes read error
> 
> Hi,
> 
> Thanks for the reply, I thought the mail format is corrupted..
> 
> I tried option http-pretend-keepalive, seems read error is gone, but 
> timeout
> error raised(maybe its because the 1000 connections of wrk)
> 
> Thanks
> 
> - Original Message -
> *From:* Aleksandar Lazic 
> *To:* flamese...@yahoo.co.jp; "haproxy@formilux.org" 
> 
> *Date:* 2018/12/6, Thu 23:53
> *Subject:* Re: Simply adding a filter causes read error
> 
> Hi.
> 
> Am 06.12.2018 um 15:20 schrieb flamese...@yahoo.co.jp
> <mailto:flamese...@yahoo.co.jp>:
> > Hi,
> >
> > I have a haproxy(v1.8.14) in front of several nginx backends,
> everything works
> > fine until I add compression in haproxy.
> 
> There is a similar thread about this topic.
> 
> https://www.mail-archive.com/haproxy@formilux.org/msg31897.html
> 
> Can you try to add this option in your config and see if the problem 
> is
> gone.
> 
> option http-pretend-keepalive
> 
> Regards
> Aleks
> 
> > My config looks like this:
> >
> > ### Config start #
> > global
> >     maxconn         100
> >     daemon
> >     nbproc 2
> >
> > defaults
> >     retries 3
> >     option redispatch
> >     timeout client  60s
> >     timeout connect 60s
> >     timeout server  60s
> >     timeout http-request 60s
> >     timeout http-keep-alive 60s
> >
> > frontend web
> >     bind *:8000
> >
> >     mode http
> >     default_backend app
> > backend app
> >     mode http
> >     #filter compression
> >     #filter trace 
> >     server nginx01 10.0.3.15:8080
> > ### Config end #
> >
> >
> > Lua script used in wrk:
> > a.lua:
> >
> > local count = 0
> >
> > request = function()
> >     local url = "/?count=" .. count
> >     count = count + 1
> >     return wrk.format(
> >     'GET',
> >     url
> >     )
> > end
> >
> >
> > 01. wrk test against nginx: everything if OK
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8080 <http://10.0.3.15:8080/>
> > Running 10s test @ http://10.0.3.15:8080 <http://10.0.3.15:8080/>
> >   2 threads and 1000 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency    34.83ms   17.50ms 260.52ms   76.48%
> >     Req/Sec    12.85k     2.12k   17.20k    62.63%
> >   255603 requests in 10.03s, 1.23GB read
> > Requests/sec:  25476.45
> > Transfer/sec:    125.49MB
> >
> >
> > 02. Wrk test against haproxy, no filters: everything is OK
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8000 <http://10.0.3.15:8000/>
> > Running 10s test @ http://10.0.3.15:8000 <http://10.0.3.15:8000/>
>  
