Re: load-server-state-from-file "automatic" transfer?

2019-07-29 Thread Daniel Schneller
Hi!

Thanks for taking a look and explaining. Should I create a ticket on GitHub for 
this?

Daniel


> On 25. Jul 2019, at 10:44, William Lallemand  wrote:
> 
> On Thu, Jul 25, 2019 at 10:23:24AM +0200, Aleksandar Lazic wrote:
>> Hi.
>> 
>> Am 25.07.2019 um 10:06 schrieb William Lallemand:
>>> On Thu, Jul 25, 2019 at 08:07:45AM +0200, Baptiste wrote:
 Hi Daniel,
 
 You're making a good point. Using the file system was the simplest and
 fastest way to go when we first designed this feature 4 or 5 years ago.
 I do agree that now, with the master/worker and threaded models being pushed,
 using the runtime API may make sense and would be even more "cloud native".
 
 Maybe @William would have an advice on this one.
 
 Baptiste
>>> 
>>> Hi,
>>> 
>>> The simplest way to do that with the current architecture would be to do the
>>> same thing as the "seamless reload" feature (-x).
>>> 
>>> The new process will need to connect to the old one, send the `show servers
>>> state` command, and then parse it using the server state file parser.
>>> 
>>> However, what I don't like with this, is that we still need to configure a
>>> "stats socket" manually in the configuration, it is not doable yet using the
>>> internal socketpair of the master-worker model.
>> 
>> How about picking up Daniel's idea of using an *internal* peers setup for
>> such state?
>> 
> 
> I don't think it makes sense to use peers for that.
> 
> The idea to do it with the stats socket is good, we only need to improve the
> master-worker so a worker could use the socketpair to connect to another
> worker. The only drawback is that it needs a configured stats socket in the
> current model, but we already have this limitation with the seamless reload.
> 
> --
> William Lallemand
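
For context, a minimal sketch of the seamless-reload wiring referred to above
(socket path and unit line are illustrative assumptions, not from this thread):

    global
        master-worker
        # "expose-fd listeners" is what lets a new process fetch the
        # listening sockets from the old one over this socket
        stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners

    # systemd unit sketch: -x points the new process at the old one's socket
    ExecStart=/usr/sbin/haproxy -W -f /etc/haproxy/haproxy.cfg -x /var/run/haproxy.sock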





load-server-state-from-file "automatic" transfer?

2019-07-24 Thread Daniel Schneller
Hi!

I have been looking into load-server-state-from-file to prevent 500 errors being
reported after a service reload. Currently we are seeing these, because the new
instance comes up and first wants to see the minimum configured number of health
checks succeed for a backend server before it hands requests to it.

From what I can tell, the state file needs to be saved manually before a service
reload, so that the new process coming up can read it back. I can do that, of
course, but I was wondering about the reasoning for not transferring this data to
a new process in a similar fashion as file handles or stick-tables (via peers)?
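
For reference, a sketch of the manual save-before-reload approach described
above (paths are illustrative):

    global
        stats socket /var/run/haproxy.sock mode 600 level admin
        server-state-file /var/lib/haproxy/server-state

    defaults
        load-server-state-from-file global

    # Before each reload, dump the current state where the new process will
    # find it:
    #   echo "show servers state" | socat stdio /var/run/haproxy.sock > /var/lib/haproxy/server-state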

Thanks a lot!

Daniel



--
Daniel Schneller
Principal Cloud Engineer
GPG key at https://keybase.io/dschneller

CenterDevice GmbH
Rheinwerkallee 3
53227 Bonn
www.centerdevice.com
__
Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431







Re: DNS Resolver Issues

2019-03-24 Thread Daniel Schneller
Hello!

I am currently on vacation for two weeks, but I'll see to it when I get back.
There is no particular reason for the specific check address here, as you 
correctly figured. It is just an artefact of the template used to create the 
configuration. I can remove that, but there might be cases where it matters 
(though I don't think we have any ATM, AFAIR). I would not have guessed there 
would be different resolution paths; if this is intentional, a note in the 
documentation would be helpful. I can provide that when I am back and when 
there is clarity on why it works like this.

Thank you very much for your help!

Cheers,
Daniel


> On 23. Mar 2019, at 14:53, PiBa-NL  wrote:
> 
> Hi Daniel, Baptiste,
> 
> @Daniel, can you remove the 'addr loadbalancer-internal.xxx.yyy' from the 
> server check? It seems to me that that name is not being resolved by the 
> 'resolvers'. And even if it were, it would be kinda redundant, as in the 
> example it is the same as the server name. Not sure how far the scenarios 
> below are all explained by this, though..
> 
> @Baptiste, is it intentional that a wrong 'addr' dns name makes haproxy fail 
> to start despite having the supposedly never-failing 'default-server 
> init-addr last,libc,none'? Is it possibly a good feature request to support 
> re-resolving a dns name for the addr setting as well?
> 
> Regards,
> PiBa-NL (Pieter)
> 
> Op 21-3-2019 om 20:37 schreef Daniel Schneller:
>> Hi!
>> 
>> Thanks for the response. I had looked at the "hold" directives, but since 
>> they all seem to have reasonable defaults, I did not touch them.
>> I specified 10s explicitly, but it did not make a difference.
>> 
>> I did some more tests, however, and it seems to have more to do with the 
>> number of responses for the initial(?) DNS queries.
>> Hopefully these three tables make sense and don't get mangled in the mail. 
>> The "templated"
>> proxy is defined via "server-template" with 3 "slots". The "regular" one 
>> just as "server".
>> 
>> 
>> Test 1: Start out with both "valid" and "broken" DNS entries. Then comment 
>> out/add back one at a time as described in (1)-(5).
>> Each time after changing /etc/hosts, restart dnsmasq and check haproxy via 
>> hatop.
>> Haproxy was started fresh once dnsmasq was set up to (1).
>> 
>>            |  state        state
>> /etc/hosts |  regular      templated
>> -----------|---------------------------------
>> (1) BRK    |  UP/L7OK      DOWN/L4TOUT
>>     VALID  |               MAINT/resolution
>>            |               UP/L7OK
>>            |
>> (2) BRK    |  DOWN/L4TOUT  DOWN/L4TOUT
>>     #VALID |               MAINT/resolution
>>            |               MAINT/resolution
>>            |
>> (3) #BRK   |  UP/L7OK      UP/L7OK
>>     VALID  |               MAINT/resolution
>>            |               MAINT/resolution
>>            |
>> (4) BRK    |  UP/L7OK      UP/L7OK
>>     VALID  |               DOWN/L4TOUT
>>            |               MAINT/resolution
>>            |
>> (5) BRK    |  DOWN/L4TOUT  DOWN/L4TOUT
>>     #VALID |               MAINT/resolution
>>            |               MAINT/resolution
>> 
>> This all looks normal and as expected. As soon as the "VALID" DNS entry is 
>> present, the UP state follows within a few seconds.
>> 
>> Test 2: Start out "valid only" (1) and proceed as described in (2)-(5), 
>> again restarting dnsmasq each time; haproxy was reloaded after dnsmasq was 
>> set up to (1).
>> 
>>            |  state        state
>> /etc/hosts |  regular      templated
>> -----------|---------------------------------
>> (1) #BRK   |  UP/L7OK      MAINT/resolution
>>     VALID  |               MAINT/resolution
>>            |               UP/L7OK
>>            |
>> (2) BRK    |  UP/L7OK      DOWN/L4TOUT
>>     VALID  |               MAINT/resolution

Re: DNS Resolver Issues

2019-03-21 Thread Daniel Schneller
T
    VALID  |               UP/L7OK
           |               MAINT/resolution
           |
(5) BRK    |  DOWN/L4TOUT  DOWN/L4TOUT
    #VALID |               MAINT/resolution
           |               MAINT/resolution


Here it becomes interesting. In (1) both regular and templated proxies are 
DOWN, of course.
However, adding in a second DNS response in (2) brings the templated proxy UP, 
but the regular
one stays DOWN. Only when in (3) the valid response is the only one presented, 
does it go 
UP as well. Adding the broken one back (4) is of no consequence then. And 
again, after
leaving just the broken response (5), both correctly go DOWN.

So it would appear that if haproxy starts with just a single "broken" DNS 
response, adding a healthy one later is not recognized. Instead, it stays DOWN. 
"Replacing" the single broken response with a single "valid" response, however, 
brings it to life, and it won't be discouraged by bringing the broken one back 
in.

Tests 1 and 2 make sense to me, but test 3 I don't understand. For now, I have 
worked
around the issue by defining all my relevant backends with server-template and 
at least
2 slots, but I would still like to understand it. And maybe it is a bug, after 
all ;)

Kind regards, and thanks for a great piece of software!

Daniel





> On 21. Mar 2019, at 14:28, Bruno Henc  wrote:
> 
> Hello Daniel,
> 
> 
> You might be missing the "hold valid" directive in your resolvers section: 
> https://www.haproxy.com/documentation/hapee/1-9r1/onepage/#5.3.2-timeout
> 
> This should force HAProxy to fetch the DNS record values from the resolver.
> 
> A reload of the HAProxy instance also forces the instances to query all 
> records from the resolver.
> 
> Can you please retest with the updated configuration and report back the 
> results?
> 
> 
> Best regards,
> 
> Bruno Henc
> 
> ‐‐‐ Original Message ‐‐‐
> On Thursday, March 21, 2019 12:09 PM, Daniel Schneller 
>  wrote:
> 
>> Hello!
>> 
>> Friendly bump :)
>> I'd be willing to amend the documentation once I understand what's going on 
>> :D
>> 
>> Cheers,
>> Daniel
>> 
>>> On 18. Mar 2019, at 20:28, Daniel Schneller 
>>> daniel.schnel...@centerdevice.com wrote:
>>> Hi everyone!
>>> I assume I am misunderstanding something, but I cannot figure out what it 
>>> is.
>>> We are using haproxy in AWS, in this case as sidecars to applications so 
>>> they need not
>>> know about changing backend addresses at all, but can always talk to 
>>> localhost.
>>> Haproxy listens on localhost and then forwards traffic to an ELB instance.
>>> This works great, but there have been two occasions now, where due to a 
>>> change in the
>>> ELB's IP addresses, our services went down, because the backends could not 
>>> be reached
>>> anymore. I don't understand why haproxy sticks to the old IP address 
>>> instead of going
>>> to one of the updated ones.
>>> There is a resolvers section which points to the local dnsmasq instance 
>>> (there to send
>>> some requests to consul, but that's not used here). All other traffic is 
>>> forwarded on
>>> to the AWS DNS server set via DHCP.
>>> I managed to get timely updates and updated backend servers when using 
>>> server-template,
>>> but from what I understand this should not really be necessary for this.
>>> This is the trimmed down sidecar config. I have not made any changes to dns 
>>> timeouts etc.
>>> 
>>> resolvers default
>>>   # dnsmasq
>>>   nameserver local 127.0.0.1:53
>>> 
>>> listen regular
>>>   bind 127.0.0.1:9300
>>>   option dontlog-normal
>>>   server lb-internal loadbalancer-internal.xxx.yyy:9300 resolvers default check addr loadbalancer-internal.xxx.yyy port 9300
>>> 
>>> listen templated
>>>   bind 127.0.0.1:9200
>>>   option dontlog-normal
>>>   option httpchk /haproxy-simple-healthcheck
>>>   server-template lb-internal 2 loadbalancer-internal.xxx.yyy:9200 resolvers default check port 9299
>>> 
>>> To simulate changing ELB addresses, I added entries for 
>>> loadbalancer-internal.xxx.yyy in /etc/hosts, to be able to control them 
>>> via dnsmasq.
>>> I tried different scenarios, but could not reliably predict what would 
>>> happen in all cases.
>>> The address ending in 52 (marked as "valid" below) 

Re: DNS Resolver Issues

2019-03-21 Thread Daniel Schneller
Hello!

Friendly bump :)
I'd be willing to amend the documentation once I understand what's going on :D

Cheers,
Daniel


> On 18. Mar 2019, at 20:28, Daniel Schneller 
>  wrote:
> 
> Hi everyone!
> 
> I assume I am misunderstanding something, but I cannot figure out what it is.
> We are using haproxy in AWS, in this case as sidecars to applications so they 
> need not
> know about changing backend addresses at all, but can always talk to 
> localhost.
> 
> Haproxy listens on localhost and then forwards traffic to an ELB instance. 
> This works great, but there have been two occasions now, where due to a 
> change in the
> ELB's IP addresses, our services went down, because the backends could not be 
> reached
> anymore. I don't understand why haproxy sticks to the old IP address instead 
> of going
> to one of the updated ones.
> 
> There is a resolvers section which points to the local dnsmasq instance 
> (there to send
> some requests to consul, but that's not used here). All other traffic is 
> forwarded on
> to the AWS DNS server set via DHCP.
> 
> I managed to get timely updates and updated backend servers when using 
> server-template,
> but from what I understand this should not really be necessary for this. 
> 
> This is the trimmed down sidecar config. I have not made any changes to dns 
> timeouts etc.
> 
> resolvers default
>  # dnsmasq
>  nameserver local 127.0.0.1:53
> 
> listen regular
>  bind 127.0.0.1:9300
>  option dontlog-normal
>  server lb-internal loadbalancer-internal.xxx.yyy:9300 resolvers default 
> check addr loadbalancer-internal.xxx.yyy port 9300
> 
> listen templated
>  bind 127.0.0.1:9200
>  option dontlog-normal
>  option httpchk /haproxy-simple-healthcheck
>  server-template lb-internal 2 loadbalancer-internal.xxx.yyy:9200 resolvers 
> default check  port 9299
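
(For reference, a sketch of the resolvers tuning knobs relevant to this thread;
the values are illustrative, not from the poster's setup:)

    resolvers default
        nameserver local 127.0.0.1:53
        resolve_retries 3
        timeout resolve 1s
        timeout retry   1s
        hold valid      10s   # how long a valid answer is kept
        hold nx         30s   # how long an NX answer is trusted
        hold obsolete   30s   # how long a record missing from answers is kept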
> 
> 
> To simulate changing ELB addresses, I added entries for 
> loadbalancer-internal.xxx.yyy in /etc/hosts
> and to be able to control them via dnsmasq.
> 
> I tried different scenarios, but could not reliably predict what would happen 
> in all cases.
> 
> The address ending in 52 (marked as "valid" below) is a currently (as of the 
> time of testing) 
> valid IP for the ELB. The one ending in 199 (marked "invalid") is an unused 
> private IP address
> in my VPC.
> 
> 
> Starting with /etc/hosts:
> 
> 10.205.100.52  loadbalancer-internal.xxx.yyy# valid
> 10.205.100.199 loadbalancer-internal.xxx.yyy# invalid
> 
> haproxy starts and reports:
> 
> regular:   lb-internal UP/L7OK
> templated: lb-internal1  DOWN/L4TOUT
>   lb-internal2UP/L7OK
> 
> That's expected. Now when I edit /etc/hosts to _only_ contain the _invalid_ 
> address
> and restart dnsmasq, I would expect both proxies to go fully down. But only 
> the templated
> proxy behaves like that:
> 
> regular:   lb-internal UP/L7OK
> templated: lb-internal1  DOWN/L4TOUT
>   lb-internal2  MAINT (resolution)
> 
> Reloading haproxy in this state leads to:
> 
> regular:   lb-internal   DOWN/L4TOUT
> templated: lb-internal1  MAINT (resolution)
>   lb-internal2  DOWN/L4TOUT
> 
> After fixing /etc/hosts to include the valid server again and restarting 
> dnsmasq:
> 
> regular:   lb-internal   DOWN/L4TOUT
> templated: lb-internal1UP/L7OK
>   lb-internal2  DOWN/L4TOUT
> 
> 
> Shouldn't the regular proxy also recognize the change and bring the backend 
> up or down
> depending on the DNS change? I have waited for several health check rounds 
> (seeing 
> "* L4TOUT" and "L4TOUT") toggle, but it still never updates.
> 
> I also tried to have _only_ the invalid address in /etc/hosts, then 
> restarting haproxy.
> The regular backends will never recognize it when I add the valid one back in.
> 
> The templated one does, _unless_ I set it up to have only 1 instead of 2 
> server slots.
> In that case it too will only pick up the valid server after a reload.
> 
> On the other hand, it _will_ recognize on the next health check when I remove 
> the valid server without a reload, but will _not_ bring it back in and make 
> the proxy UP when it comes back.
> 
> 
> I assume my understanding of something here is broken, and I would gladly be 
> told
> about it :)
> 
> 
> Thanks a lot!
> Daniel
> 
> 
> Version Info:
> --
> $ haproxy -vv
> HA-Proxy version 1.8.19-1ppa1~trusty 2019/02/12
> Copyright 2000-2019 Willy Tarreau 
> 
> Build options :
>  TARGET  = linux2628
>  CPU = generic
>  CC  = gcc
>  CFLAGS  = -O2 -g -O2 -fPIE -fstack-protec

DNS Resolver Issues

2019-03-18 Thread Daniel Schneller
Built with PCRE version : 8.31 2012-07-06
Running on PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (libpcre build without JIT?)
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
    [SPOE] spoe
[COMP] compression
[TRACE] trace

-- 
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH
Rheinwerkallee 3
53227 Bonn
www.centerdevice.com

__
Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431






HAProxy keeps using outdated IPs when backend (ELB) address changes

2018-08-27 Thread Daniel Schneller
ate"), 
raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
Running on PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (libpcre build without JIT?)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with network namespace support

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[COMP] compression
[TRACE] trace
[SPOE] spoe
-


--
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH
Rheinwerkallee 3
53227 Bonn
www.centerdevice.com

__
Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431







Re: Clarification re Timeouts and Session State in the Logs

2018-08-24 Thread Daniel Schneller
Hi!

Thanks for that input. I would like to understand what's going before making 
changes. :)

Cheers,
Daniel


> On 24. Aug 2018, at 00:56, Igor Cicimov  
> wrote:
> 
> Hi Daniel,
> 
> We had similar issue in 2015, and the answer was: server timeout was too 
> short. Simple.
> 
> On Thu, 23 Aug 2018 9:56 pm Daniel Schneller 
>  <mailto:daniel.schnel...@centerdevice.com>> wrote:
> Friendly bump.
> I'd volunteer to do some documentation amendments once I understand the issue 
> better :D
> 
>> On 21. Aug 2018, at 16:17, Daniel Schneller 
>> > <mailto:daniel.schnel...@centerdevice.com>> wrote:
>> 
>> Hi!
>> 
>> I am trying to wrap my head around an issue we are seeing where there are 
>> many HTTP 504 responses sent out to clients.
>> 
>> I suspect that due to a client bug they stop sending data midway during the 
>> data phase of the request, but they keep the connection open.
>> 
>> What I see in the haproxy logs is a 504 response with termination flags 
>> "sHNN".
>> That I read as haproxy getting impatient (timeout server) waiting for 
>> response headers from the backend. The backend, not having seen the complete 
>> request yet, can't really answer at this point, of course.
>> I am wondering, though, why I don't see a termination state indicating a 
>> client problem.
>> 
>> So my question (for now ;-)) boils down to these points:
>> 
>> 1) When does the server timeout actually start counting? Am I right to 
>> assume it is from the last moment the server sent or (in this case) received 
>> some data?
>> 
>> 2) If both "timeout server" and "timeout client" are set to the same value, 
>> and the input stalls (after the headers) longer than that, is it just that 
>> the implementation is such that the server side timeout "wins" when it comes 
>> to setting the termination flags?
>> 
>> 3) If I set the client timeout shorter than the server timeout and produced 
>> this situation, should I then see a cD state?  If so, would I be right to 
>> assume that if the server were now to stall, the log could again be 
>> misleading in telling me that the client timeout expired first?
>> 
>> I understand it is difficult to tell "who's to blame" for an inactivity 
>> timeout without knowledge about the content or final size of the request -- 
>> I just need some clarity on how to read the logs :)
>> 
>> 
>> Thanks!
>> Daniel
>> 
>> 
>> 
>> 
>> --
>> Daniel Schneller
>> Principal Cloud Engineer
>> 
>> CenterDevice GmbH
>> Rheinwerkallee 3
>> 53227 Bonn
>> www.centerdevice.com <http://www.centerdevice.com/>
>> 
>> __
>> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
>> Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431
>> 
>> 
>> 
> 





Clarification re Timeouts and Session State in the Logs

2018-08-21 Thread Daniel Schneller
Hi!

I am trying to wrap my head around an issue we are seeing where there are many 
HTTP 504 responses sent out to clients.

I suspect that due to a client bug they stop sending data midway during the 
data phase of the request, but they keep the connection open.

What I see in the haproxy logs is a 504 response with termination flags "sHNN".
That I read as haproxy getting impatient (timeout server) waiting for response 
headers from the backend. The backend, not having seen the complete request 
yet, can't really answer at this point, of course.
I am wondering, though, why I don't see a termination state indicating a client 
problem.

So my question (for now ;-)) boils down to these points:

1) When does the server timeout actually start counting? Am I right to assume 
it is from the last moment the server sent or (in this case) received some data?

2) If both "timeout server" and "timeout client" are set to the same value, and 
the input stalls (after the headers) longer than that, is it just that the 
implementation is such that the server side timeout "wins" when it comes to 
setting the termination flags?

3) If I set the client timeout shorter than the server timeout and produced 
this situation, should I then see a cD state?  If so, would I be right to 
assume that if the server were now to stall, the log could again be misleading 
in telling me that the client timeout expired first?

I understand it is difficult to tell "who's to blame" for an inactivity timeout 
without knowledge about the content or final size of the request -- I just need 
some clarity on how to read the logs :)
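
For reference, a sketch of the timeouts in play for the stalled-body scenario
above (values are illustrative; semantics as documented for 1.8):

    defaults
        mode http
        timeout client  30s   # client-side inactivity
        timeout server  30s   # waiting on the server
        # With "option http-buffer-request", "timeout http-request" also covers
        # the message body, so a client stalling mid-body should be reported as
        # a client-side timeout (408, "cR") rather than a server one (504, "sH").
        option http-buffer-request
        timeout http-request 10s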


Thanks!
Daniel




--
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH
Rheinwerkallee 3
53227 Bonn
www.centerdevice.com

__
Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431







Re: Bug when passing variable to mapping function

2018-07-09 Thread Daniel Schneller
Hi!

Thanks for your analysis. I was away for a few days, hence the late
response.

> AFAIK this works on 1.7.11 but seems to be broken on all 1.8.x.

Interesting. I tried the new config locally with 1.8 and found the bug.
Production servers are still on 1.7, so it would even have worked :)
I am glad, though, that I found this on 1.8, sparing me the trouble some
time down the road when they get updated.


> > I think this is the commit that breaks map_regm in this case:
> > b5997f740b21ebb197e10a0f2fe9dc13163e1772 (MAJOR: threads/map: Make
> > acls/maps thread safe).
> >
> > If I revert this commit from pattern.c:pattern_exec_match
> > then the map_regm \1 backref seems to work.
>
> I think I found what's replacing the \000 as first char:
> in (map.c) sample_conv_map:
> /* In the regm case, merge the sample with the input. */
> if ((long)private == PAT_MATCH_REGM) {
> str = get_trash_chunk();
> str->len = exp_replace(str->str, str->size,
smp->data.u.str.str,
>pat->data->u.str.str,
>(regmatch_t *)smp->ctx.a[0]);
>
> Before call to get_trash_chunk() smp->data.u.str.str is for example
> 'distri.com' and after get_trash_chunk() smp->data.u.str.str
> is '\000istri.com'.

I had a look at that code, but I must admit my understanding of the
concepts (trash chunk? some optimization, I assume?) and C as a language is
too limited to make a patch myself.
Is this on any of the developers' radar?

Thanks a lot :)

Daniel

On 29 June 2018 at 07:14, Jarno Huuskonen  wrote:

> Hi,
>
> On Thu, Jun 28, Jarno Huuskonen wrote:
> > I think this is the commit that breaks map_regm in this case:
> > b5997f740b21ebb197e10a0f2fe9dc13163e1772 (MAJOR: threads/map: Make
> > acls/maps thread safe).
> >
> > If I revert this commit from pattern.c:pattern_exec_match
> > then the map_regm \1 backref seems to work.
>
> I think I found what's replacing the \000 as first char:
> in (map.c) sample_conv_map:
> /* In the regm case, merge the sample with the input. */
> if ((long)private == PAT_MATCH_REGM) {
> str = get_trash_chunk();
> str->len = exp_replace(str->str, str->size,
> smp->data.u.str.str,
>pat->data->u.str.str,
>(regmatch_t *)smp->ctx.a[0]);
>
> Before call to get_trash_chunk() smp->data.u.str.str is for example
> 'distri.com' and after get_trash_chunk() smp->data.u.str.str
> is '\000istri.com'.
>
> At the moment I don't have time to dig deeper, but hopefully this
> helps a little bit.
>
> -Jarno
>
> --
> Jarno Huuskonen
>
>


-- 

-- 
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


Bug when passing variable to mapping function

2018-06-25 Thread Daniel Schneller
, 25 Jun 2018 14:13:09 GMT
:be_test.srvcls[0007:adfd]
0001:fe_test.clicls[0007:]
0001:fe_test.closed[0007:]
-



Now, the interesting thing is the server's debug output:

-- Server Output 
Host: distri.com
User-Agent: curl/7.54.0
Accept: */*
X-Distri-Direct-From-Manual-Var: distri
X-Distri-Mapped-From-Header: distri
X-Distri-Direct-From-Var: distri.com
X-Distri-Mapped-From-Var: %00istri

127.0.0.1 - - [25/Jun/2018 16:30:48] "GET /example.txt HTTP/1.1" 200 -
-

See the X-Distri-Mapped-From-Var header's value. It has what seems to be a 
nul-byte
instead of the first character of the domain name. The other X- headers
before it are meant to narrow down where the bug actually happens.

It would appear that it is somehow related to passing a variable's value
into the mapping function or its return from there. Interestingly, the
issue does _not_ show when simply putting the variable value into a header
(X-Distri-Direct-From-Var) or when calling the mapping function with the
header lookup instead of the intermediate variable 
(X-Distri-Mapped-From-Header).
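
The original repro config was truncated from the archive; a hedged
reconstruction based on the header names above would look roughly like this
(the map file name is a placeholder):

    http-request set-var(txn.host) req.hdr(Host)
    http-request set-header X-Distri-Direct-From-Var %[var(txn.host)]
    # The variable-fed map lookup is the one that comes back with the NUL byte:
    http-request set-header X-Distri-Mapped-From-Var %[var(txn.host),map_regm(distri.map,"unknown")]
    # distri.map contains a single line:  ^(.*)\.(.*)$ \1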


One more tidbit: If I change the mapping file to this:
--
^(.*)\.(.*)$ a\1
--

The generated header changes to:
--
X-Distri-Mapped-From-Var: aaistri
--

Looks like some off-by-one error?


Cheers,
Daniel




--
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH
Rheinwerkallee 3
53227 Bonn
www.centerdevice.com

__
Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431







Re: Reverse String (or get 2nd level domain sample)?

2018-06-25 Thread Daniel Schneller
Hi again!

I found a working config using the map_regm converter.
I think it is somewhat overcomplicated for what it is supposed to achieve, but 
for now it works.

Leaving this here for reference:

# Remove port numbers from the Host header -- we do not rely on different
# ports for the same domain, and this makes ACL matching clearer
http-request replace-value Host '(.*):.*' '\1'

# Store the (now port-free) request Host in a transaction scoped variable
# for use in response ACLs
http-request set-var(txn.host) req.hdr(Host)

# Store the 2nd level domain (lower case) as the distributor. This uses a
# simple map file with just a single regex, because the inline regsub function
# does not support backrefs, which are needed for a variable number of subdomains.
http-request set-var(txn.distributor) var(txn.host),map_regm(distributors.map,"unknown"),lower

# Add a X-Distributor header for the application, overwriting anything the
# client may have claimed
http-request set-header X-Distributor %[var(txn.distributor)]

The distributors.map file contents looks like this:

(.*\.)+(.*)\.(.*) \2

Looks more complicated than it is. The first "(.*\.)+" greedily matches 
subdomains (in our case only domains with at least three parts are valid) and 
their trailing dots.
The second capture group matches the second level domain, followed by a dot, 
and then the final capture group "(.*)" for the top level domain.
The final one doesn't _have_ to be a group, because I drop the top level domain 
anyway, but I find it more readable this way.

Anything that matches this regex is replaced with just the value of the 2nd 
capture group (i. e. the 2nd level domain).
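
A quick worked example (hostnames illustrative):

    # Host header             ->  2nd capture group  ->  X-Distributor
    # shop.distri.com         ->  distri             ->  distri
    # a.b.sample.example.com  ->  example            ->  example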

(tested with haproxy 1.8, but this should also work with earlier versions IMO).

Cheers,
Daniel





> On 25. Jun 2018, at 12:29, Daniel Schneller 
>  wrote:
> 
> Hi!
> 
> Just double checking to make sure I am not simply blind: Is there a way to 
> reverse a string using a sample converter?
> 
> Background: I need to extract just the second level domain from the host 
> header. So for sub.sample.example.com I need to fetch "example".
> 
> Using the "word" converter and a "." as the separator I can get at the 
> individual components, but because the number of nested subdomains varies, I 
> cannot use that directly.
> 
> My idea was to just reverse the full domain (removing a potential port number 
> first), get word(2) and reverse again. Is that possible? Or is there an even 
> better function I can use? I am thinking this must be a common use case, but 
> googling "haproxy" and "reverse" will naturally turn up lots of results 
> talking about "reverse proxying".
> 
> If possible, I would like to avoid using maps to keep this thing as generic 
> as possible.
> 
> Thanks a lot!
> 
> Daniel
> 
> 
> --
> Daniel Schneller
> Principal Cloud Engineer
> 
> CenterDevice GmbH
> Rheinwerkallee 3
> 53227 Bonn
> www.centerdevice.com
> 
> __
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
> Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
> 
> 





Reverse String (or get 2nd level domain sample)?

2018-06-25 Thread Daniel Schneller
Hi!

Just double checking to make sure I am not simply blind: Is there a way to 
reverse a string using a sample converter?

Background: I need to extract just the second level domain from the host 
header. So for sub.sample.example.com I need to fetch "example".

Using the "word" converter and a "." as the separator I can get at the 
individual components, but because the number of nested subdomains varies, I 
cannot use that directly.

My idea was to just reverse the full domain (removing a potential port number 
first), get word(2) and reverse again. Is that possible? Or is there an even 
better function I can use? I am thinking this must be a common use case, but 
googling "haproxy" and "reverse" will naturally turn up lots of results talking 
about "reverse proxying".

If possible, I would like to avoid using maps to keep this thing as generic as 
possible.

Thanks a lot!

Daniel


--
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH
Rheinwerkallee 3
53227 Bonn
www.centerdevice.com

__
Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431







Re: 4xx statistics made useless through health checks?

2017-11-21 Thread Daniel Schneller
Hi Pieter, 

>> Good point. I wanted to avoid, however, having these “high level” health 
>> checks from the many many sidecars being routed through to the actual 
>> backends.
>> Instead, I considered it enough to “only” check if the central haproxy is 
>> available. In case it is, the sidecars rely on it doing the actual health 
>> checks of the backends and responding with 503 or similar, when all backends 
>> for a particular request happen to be down.
> Maybe monitor-uri perhaps together with 'monitor fail' could help ?: 
> http://cbonte.github.io/haproxy-dconv/1.8/snapshot/configuration.html#4.2-monitor-uri
> It says it won't log or forward the request... not sure, but maybe stats will 
> also skip it.

Yes, that’s exactly what’s shown in that linked repo. Thanks for chiming in :)

> Regards,
> PiBa-NL / Pieter
> 

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


Re: 4xx statistics made useless through health checks?

2017-11-21 Thread Daniel Schneller
> On 21. Nov. 2017, at 14:08, Lukas Tribus <lu...@ltri.eu> wrote:
> [...]
> Instead of hiding specific errors counters, why not send an actual
> HTTP request that triggers a 200 OK response? So health checking is
> not exempt from the statistics and only generates error statistics
> when actual errors occur?

Good point. I wanted to avoid, however, having these “high level” health checks 
from the many many sidecars being routed through to the actual backends.
Instead, I considered it enough to “only” check if the central haproxy is 
available. In case it is, the sidecars rely on it doing the actual health 
checks of the backends and responding with 503 or similar, when all backends 
for a particular request happen to be down.

However, your idea and a little more Googling led me to this Github repo 
https://github.com/jvehent/haproxy-aws#healthchecks-between-elb-and-haproxy 
where they configure a dedicated “health check frontend” (albeit in their case 
to work around an AWS/ELB limitation re/ PROXY protocol). I think I will adapt 
this and configure the sidecars to health check on a dedicated port like this.
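
A sketch of what that dedicated health-check frontend might look like (port and
URI are illustrative assumptions):

    frontend health
        bind :8999
        # Requests for this URI are answered by haproxy itself with a 200,
        # without being logged or forwarded to any backend.
        monitor-uri /haproxy-health

The sidecars would then point their server health checks at that port instead
of at a real traffic port.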

I’ll let you know how it goes.

Thanks a lot for your thoughts, so far :)

Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431



Re: 4xx statistics made useless through health checks?

2017-11-21 Thread Daniel Schneller
Hi Lukas,

thanks — was just about to reply to myself, but you beat me to it ;)

> Yes, we have "option dontlognull" for that:
> http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-option%20dontlognull

We can get rid of the logging via dontlognull, of course. I was about to mention 
that I had configured that in the meantime to get rid of the log spam.

However, I still wonder if there is a good way to discern these from "actual" 
bad requests in the stats, so that we can rely on the error counters to show 
"real" problems.

Some kind of “haproxy-to-haproxy” health checking that does not spoil the 
counters?

Daniel


Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431



Encrypted Passwords Documentation Patch

2017-11-06 Thread Daniel Schneller
Hi!

Attached find a documentation patch for the encrypted passwords in userlists.

It adds a warning about the potentially significant CPU cost that modern
algorithms, with their thousands of hashing rounds, can incur. In our case it
made the difference between haproxy's CPU usage being hardly noticeable at all
and it almost eating a full core, even for a not very busy site.
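
For illustration, the kind of userlist the warning is about (a sketch; user
name and hash are placeholders, not real credentials):

    userlist internal_users
        # A SHA-512 crypt hash ("$6$...") runs thousands of rounds per
        # verification, and with HTTP basic auth that cost recurs on every
        # authenticated request.
        user admin password $6$<salt>$<hash>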

Tested with 1.6, but this applies to all versions, if I am not mistaken.

Cheers,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431



0001-DOC-Add-note-about-encrypted-password-CPU-usage.patch
Description: Binary data


Re: Force Sticky session on HaProxy

2017-10-18 Thread Daniel Schneller
Hi,

maybe I am missing something, but isn’t this what 
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-cookie 
is supposed to do for you?
We are using this (in prefix mode) to make sure the same JSESSIONID gets to the 
same backend every time.
As the information is in the cookie, there is no state to be lost on the 
haproxy side.
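
A minimal sketch of that prefix-mode setup (server names and addresses are
illustrative):

    backend app
        # In prefix mode, haproxy prepends the server id to the application's
        # own JSESSIONID cookie and strips it again before passing requests
        # to the server.
        cookie JSESSIONID prefix nocache
        server app1 10.0.0.1:8080 check cookie app1
        server app2 10.0.0.2:8080 check cookie app2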

Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 18. Oct. 2017, at 11:58, Gibson, Brian (IMS) <gibs...@imsweb.com> wrote:
> 
> I've used peers for this situation personally.
> 
> Sent from Nine <http://www.9folders.com/>
> 
> From: Aaron West <aa...@loadbalancer.org <mailto:aa...@loadbalancer.org>>
> Sent: Oct 18, 2017 5:33 AM
> To: Devendra Joshi
> Cc: HAProxy
> Subject: Re: Force Sticky session on HaProxy
> 
> I've used something like this before:
> 
> stick store-response res.cook(JSESSIONID)
> stick match req.cook(JSESSIONID)
> 
> "stick on" does this I think:
> 
> stick match req.cook(JSESSIONID)
> stick store-request req.cook(JSESSIONID)
> 
> As the client doesn't have the cookie at the beginning of the
> connection it has to wait to store it until it's received from the
> server, I have a vague memory that I had issues with using simply
> "stick on" for this so switched to the first method above.
> 
> There is a massive problem with my suggestion however, if you clear
> the stick table or restart the service(Which will clear the stick
> table) then users lose persistence until they close their browsers and
> start a new session or the server issues a new cookie. Obviously
> reloads while synchronising the stick table should be fine.
> 
> However, i'm sure there will be a far better solution so I'm just
> starting the ball rolling really...
> 
> Aaron West
> 
> Loadbalancer.org Ltd.
> 
> www.loadbalancer.org
> 
> +1 888 867 9504 / +44 (0)330 380 1064
> aa...@loadbalancer.org <mailto:aa...@loadbalancer.org>
> 
> LEAVE A REVIEW | DEPLOYMENT GUIDES | BLOG
> 
> 
> 
> 



Re: Question related to gpc0_rate values in stick-table

2017-10-17 Thread Daniel Schneller
> Can you please provide the details of the known issue which might be related 
> to this situation? I have gone through the list of all known issues but 
> couldn't find the relevant one.

If you are referring to the list of bugs in the old version compared to more 
current ones, I doubt you (or anyone else, with a reasonable amount of effort) 
will find an exact match describing your issue.
It could be something that was fixed (or not) along with other changes since 
that very old release.
So you should upgrade and see if the issue remains. If so, it will be a much 
more reasonable starting point to figure out where the issue comes from.

Regards,
Daniel



-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 17. Oct. 2017, at 10:41, Saurabh Patwardhan 
> <saurabh.patward...@netcracker.com> wrote:
> 
> Gentle Reminder.
> Can you please share your thought on the below queries?
> 
> Regards,
> Saurabh
> 
> -Original Message-
> From: Saurabh Patwardhan
> Sent: Thursday, October 05, 2017 3:15 PM
> To: 'Lukas Tribus'; haproxy@formilux.org
> Cc: Manoj Mahadik; Tatiana Parshina
> Subject: RE: Question related to gpc0_rate values in stick-table
> 
> Thanks Lukas.
> Can you please provide the details of the known issue which might be related 
> to this situation? I have gone through the list of all known issues but 
> couldn't find the relevant one.
> 
> Regards,
> Saurabh
> 
> -Original Message-
> From: Lukas Tribus [mailto:lu...@gmx.net]
> Sent: Monday, September 25, 2017 11:05 PM
> To: Saurabh Patwardhan; haproxy@formilux.org
> Cc: Manoj Mahadik; Tatiana Parshina
> Subject: Re: Question related to gpc0_rate values in stick-table
> 
> Hello,
> 
> 
> Am 25.09.2017 um 10:34 schrieb Saurabh Patwardhan:
>> 
>> Hi HAProxy Team,
>> 
>> 
>> 
>> We are using haproxy 1.5.2 as a load balancer for our solution.
>> 
> 
> Before going any further here, notice that 1.5.2 is 3 years old and has a 
> huge amount of bugs:
> http://www.haproxy.org/bugs/bugs-1.5.2.html
> 
> I strongly suggest you upgrade the code before diving into a full blown 
> investigation, as very likely you hit at least one of the 199 known bugs in 
> haproxy 1.5.2.
> 
> 
> Regards,
> Lukas
> 
> 
> 
> 
> 
> 



Re: Inspect data sent through haproxy and create statistics

2017-09-28 Thread Daniel Schneller
Hi!

I am doing something similar — in our case there is not a single endpoint, but a 
few of the endpoints behave differently depending on what the JSON payload is.
For that, I capture up to 256 bytes of the body. In our case that's plenty to 
find the json snippet I am looking for. Then I match a regex for each action 
via ACLs.
Finally, this ACL (and maybe others) are used to set a transaction variable 
(txn.op) in our case.
In the log-format, I then log this operation as an additional field. That way, 
I can see all the timing values and other stuff the regular logging contains, 
and can then filter by the operation type.

The config looks something like this (the real thing is really long and I don’t 
have enough time to strip it down to a working example right now)

...
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ 
%tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %ID\ %sslv\ %sslc\ %{+Q}[var(txn.op)]\ 
%{+Q}[var(txn.host),lower,word(2,'.')]\ %{+Q}r
…

listen api_internal
  ...
  option http-buffer-request

  # Capture the body for the backends to inspect. This can only
  # be done in the frontend. Declare a capture slot with the implicit id=0 for 
256 bytes.
  declare capture request len 256   # id 0
  # Store up to 256 bytes of request body in the slot with id 0 for later 
inspection.
  http-request capture req.body id 0

  # Content based ACLs to discern request types
  acl EMPTY_BODY req.body_len -m int eq 0
  acl rq_query_ids query -m beg -i ids=
  …
  acl rq_accept_wildcard req.hdr(Accept) -i '*/*'
  acl rq_body_action_delete req.body -m reg -i '"action"\s*:\s*"delete"'
  ...
  # Detect "relevant" operations and stick them in a variable for logging
  # Set a default first in case nothing known matches
  http-request set-var(txn.op) str(Unkwn)
  ...
  http-request set-var(txn.op) str(DocDelMul) if METH_POST rq_path_documents rq_content_type_json rq_body_action_delete   # Delete Multiple Documents
  …

Hope that helps.


Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 20. Sep. 2017, at 09:46, Garbage <garb...@gmx.de> wrote:
> 
> 
> 
> I know that haproxy exposes metrics like „Tr“ and “Tt”. My problem with the 
> application I have to proxy is that this application hides all of her 
> functionality behind one endpoint. To address the app I have to POST to (just 
> an example) https://hpalm/endpoint.
> The payload that gets POSTed always has this shape:
> 
> {
> 0: \001C\0:conststr:GetServerSettings,
> 
> 
> 
> The string behind „conststr“ is the real API endpoint, there are 10s of 
> different meanings.
> 
> Is it possible to have haproxy or a “plugin” inspect the POST body, extract 
> the string and create metrics like “Tr” and “Tt” for each of those values ?
> 
> 
> 
> 



Re: Enable SSL Forward Secrecy

2017-09-01 Thread Daniel Schneller
Hi,

inspired by this, I added a paragraph with links to the documentation.
Small patch attached.

Cheers,
Daniel

0001-DOC-Refer-to-Mozilla-TLS-info-config-generator.patch
Description: Binary data

-- 
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH                  | Hochstraße 11
                                   | 42697 Solingen
tel: +49 1754155711                | Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431

> On 1. Sep. 2017, at 19:05, Willy Tarreau  wrote:
> 
> On Fri, Sep 01, 2017 at 07:04:36PM +0200, Willy Tarreau wrote:
>> Hi Cyril,
> s/Cyril/Lukas, sorry guys, that's what happens when I read one e-mail
> and reply to another one at the same time :-)
> Willy

[PATCH] DOC: Add note about "* " prefix in CSV stats

2017-09-01 Thread Daniel Schneller
Just a little documentation patch I wrote, after stumbling across this:
https://github.com/dschneller/bosun/commit/6ca776dd6543d123a135b4a84a5e3e66093c3986

0001-DOC-Add-note-about-prefix-in-CSV-stats.patch
Description: Binary data
Cheers,
Daniel

-- 
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH                  | Hochstraße 11
                                   | 42697 Solingen
tel: +49 1754155711                | Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431




Re: Enable SSL Forward Secrecy

2017-08-30 Thread Daniel Schneller
Darn! Looking at the “openssl ciphers” Julian provided earlier, my mind 
“autocompleted" the missing trailing “E” in ECDH (/me facepalms).

Thanks, Cyril, for pointing that out!

I was starting to doubt myself here :)

Cheers,
Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 30. Aug. 2017, at 15:41, Cyril Bonté <cyril.bo...@free.fr> wrote:
> 
>> De: "Julian Zielke" <jzie...@next-level-integration.com>
>> À: "Cyril Bonté" <cyril.bo...@free.fr>
>> Cc: haproxy@formilux.org
>> Envoyé: Mercredi 30 Août 2017 15:11:47
>> Objet: AW: Enable SSL Forward Secrecy
>> 
>> Hi Cyril,
>> 
>> tried it without success. Maybe HAProxy just isn't capable of doing
>> this.
> 
> Oh well, indeed the "!kECDHE" excludes the ciphers from the list.
> You should retry without it (with or without RFC names in the ciphers list)
> 
>>> ssl-default-bind-ciphers
>>> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:
>>> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH
>>> :!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
> 
> Cyril Bonté
> 



Re: Enable SSL Forward Secrecy

2017-08-30 Thread Daniel Schneller
Ok, running out of ideas here.
You might want to try re-enabling TLS 1.0 and 1.1, just to see if the 
response clients see changes at all.
Please post the haproxy log output  — if necessary, reproduce on a separate 
instance, should it contain sensitive information.

If that doesn’t shed any light, you need to capture the traffic on the haproxy 
host — ideally you can filter by source IP to ensure you don’t get any “real” 
traffic in there. No idea if ssllabs comes from a predictable IP, but if not, 
you might use https://github.com/rbsec/sslscan for a similar scan, but from a 
local network. That way you'd know the client IP.

Then either look at the pcap file with Wireshark — which should be able to show 
the handshaking attempts in detail — or upload it somewhere for others to see.
In that case, make especially sure that you don’t have any real traffic in 
there.

Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 30. Aug. 2017, at 12:56, Julian Zielke 
> <jzie...@next-level-integration.com> wrote:
> 
> Hi,
>  
> I see the handshake failures in debug mode, yes. The machine only has 
> iptables running with a few rules, but no SNAT, DNAT or any
> other kind of software instance in front of it.
>  
> Here’s a small part of the config:
>  
> frontend f_ui_https_vonovia_00_01
>   bind :443 ssl crt /dvol01/haproxy/certs/
>   bind-process 1
>   mode http
>   reqadd x-forwarded-proto:\ https # force https
>   option forwardfor except 127.0.0.1
>   monitor-uri /haproxy_test
>   option httplog # log http header information (in debug-mode)
>   option http-ignore-probes # ignore preload-functions of some browsers
>   ⋮
>  
> The rest is just an acl-group filtering IPs on certain URLs and a 
> response-rewrite of the server's hostname, because it responds with its 
> internal server name rather than the URL it was called with.
>  
> Julian
>  
> Von: Daniel Schneller [mailto:daniel.schnel...@centerdevice.com] 
> Gesendet: Mittwoch, 30. August 2017 12:40
> An: Julian Zielke <jzie...@next-level-integration.com>
> Cc: Georg Faerber <ge...@riseup.net>; haproxy@formilux.org
> Betreff: Re: Enable SSL Forward Secrecy
>  
> Well, that’s quite extensive.
>  
> But still, the server at portal-vonovia.next-level-apps.com 
> <http://portal-vonovia.next-level-apps.com/> only agrees to one of 
>  
> TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
> TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)
>  
> which according to https://testssl.sh/openssl-rfc.mapping.html 
> <https://testssl.sh/openssl-rfc.mapping.html> correspond to 
>  
> AES256-SHA
> AES128-SHA
>  
> in the OpenSSL cipher names — both obviously without FS.
>  
> Are you sure your DNS resolves to the haproxy in question, and that there is 
> nothing in between it and external clients? Any other TLS aware 
> proxies/firewalls?
> Can you post a minimal haproxy config that reproduces the issue?
>  
> Please verify you can see the requests coming in by checking haproxy’s log. 
> You should be able to at least see the requests being rejected due to bad 
> handshakes.
>  
> Daniel
>  
> -- 
> Daniel Schneller
> Principal Cloud Engineer
>  
> CenterDevice GmbH  | Hochstraße 11
>| 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de <mailto:daniel.schnel...@centerdevice.de>   
> | www.centerdevice.de <http://www.centerdevice.de/>
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
>  
> On 30. Aug. 2017, at 12:26, Julian Zielke <jzie...@next-level-integration.com 
> <mailto:jzie...@next-level-integration.com>> wrote:
>  
> Whoops, I copied the wrong line. Here’s the output:
>  
> ECDHE-RSA-AES256-GCM-SHA384
> ECDHE-ECDSA-AES256-GCM-SHA384
> ECDHE-RSA-AES256-SHA384
> ECDHE-ECDSA-AES256-SHA384
> ECDHE-RSA-AES256-SHA
> ECDHE-ECDSA-AES256-SHA
> SRP-DSS-AES-256-CBC-SHA
> SRP-RSA-AES-256-CBC-SHA
> SRP-AE

Re: Enable SSL Forward Secrecy

2017-08-30 Thread Daniel Schneller
Well, that’s quite extensive.

But still, the server at portal-vonovia.next-level-apps.com 
<http://portal-vonovia.next-level-apps.com/> only agrees to one of 

TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)

which according to https://testssl.sh/openssl-rfc.mapping.html 
<https://testssl.sh/openssl-rfc.mapping.html> correspond to 

AES256-SHA
AES128-SHA

in the OpenSSL cipher names — both obviously without FS.

Are you sure your DNS resolves to the haproxy in question, and that there is 
nothing in between it and external clients? Any other TLS aware 
proxies/firewalls?
Can you post a minimal haproxy config that reproduces the issue?

Please verify you can see the requests coming in by checking haproxy’s log. You 
should be able to at least see the requests being rejected due to bad 
handshakes.
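
To provoke such a handshake yourself, you can force a single cipher with 
openssl — host and cipher here just as an example:

  openssl s_client -connect portal-vonovia.next-level-apps.com:443 -cipher 'ECDHE-RSA-AES256-GCM-SHA384' < /dev/null

If that fails while the same command with 'AES256-SHA' succeeds, the frontend 
really is only offering the non-FS suites.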

Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 30. Aug. 2017, at 12:26, Julian Zielke 
> <jzie...@next-level-integration.com> wrote:
> 
> Whoops, I copied the wrong line. Here’s the output:
>  
> ECDHE-RSA-AES256-GCM-SHA384
> ECDHE-ECDSA-AES256-GCM-SHA384
> ECDHE-RSA-AES256-SHA384
> ECDHE-ECDSA-AES256-SHA384
> ECDHE-RSA-AES256-SHA
> ECDHE-ECDSA-AES256-SHA
> SRP-DSS-AES-256-CBC-SHA
> SRP-RSA-AES-256-CBC-SHA
> SRP-AES-256-CBC-SHA
> DH-DSS-AES256-GCM-SHA384
> DHE-DSS-AES256-GCM-SHA384
> DH-RSA-AES256-GCM-SHA384
> DHE-RSA-AES256-GCM-SHA384
> DHE-RSA-AES256-SHA256
> DHE-DSS-AES256-SHA256
> DH-RSA-AES256-SHA256
> DH-DSS-AES256-SHA256
> DHE-RSA-AES256-SHA
> DHE-DSS-AES256-SHA
> DH-RSA-AES256-SHA
> DH-DSS-AES256-SHA
> DHE-RSA-CAMELLIA256-SHA
> DHE-DSS-CAMELLIA256-SHA
> DH-RSA-CAMELLIA256-SHA
> DH-DSS-CAMELLIA256-SHA
> ECDH-RSA-AES256-GCM-SHA384
> ECDH-ECDSA-AES256-GCM-SHA384
> ECDH-RSA-AES256-SHA384
> ECDH-ECDSA-AES256-SHA384
> ECDH-RSA-AES256-SHA
> ECDH-ECDSA-AES256-SHA
> AES256-GCM-SHA384
> AES256-SHA256
> AES256-SHA
> CAMELLIA256-SHA
> PSK-AES256-CBC-SHA
> ECDHE-RSA-AES128-GCM-SHA256
> ECDHE-ECDSA-AES128-GCM-SHA256
> ECDHE-RSA-AES128-SHA256
> ECDHE-ECDSA-AES128-SHA256
> ECDHE-RSA-AES128-SHA
> ECDHE-ECDSA-AES128-SHA
> SRP-DSS-AES-128-CBC-SHA
> SRP-RSA-AES-128-CBC-SHA
> SRP-AES-128-CBC-SHA
> DH-DSS-AES128-GCM-SHA256
> DHE-DSS-AES128-GCM-SHA256
> DH-RSA-AES128-GCM-SHA256
> DHE-RSA-AES128-GCM-SHA256
> DHE-RSA-AES128-SHA256
> DHE-DSS-AES128-SHA256
> DH-RSA-AES128-SHA256
> DH-DSS-AES128-SHA256
> DHE-RSA-AES128-SHA
> DHE-DSS-AES128-SHA
> DH-RSA-AES128-SHA
> DH-DSS-AES128-SHA
> DHE-RSA-SEED-SHA
> DHE-DSS-SEED-SHA
> DH-RSA-SEED-SHA
> DH-DSS-SEED-SHA
> DHE-RSA-CAMELLIA128-SHA
> DHE-DSS-CAMELLIA128-SHA
> DH-RSA-CAMELLIA128-SHA
> DH-DSS-CAMELLIA128-SHA
> ECDH-RSA-AES128-GCM-SHA256
> ECDH-ECDSA-AES128-GCM-SHA256
> ECDH-RSA-AES128-SHA256
> ECDH-ECDSA-AES128-SHA256
> ECDH-RSA-AES128-SHA
> ECDH-ECDSA-AES128-SHA
> AES128-GCM-SHA256
> AES128-SHA256
> AES128-SHA
> SEED-SHA
> CAMELLIA128-SHA
> PSK-AES128-CBC-SHA
> ECDHE-RSA-RC4-SHA
> ECDHE-ECDSA-RC4-SHA
> ECDH-RSA-RC4-SHA
> ECDH-ECDSA-RC4-SHA
> RC4-SHA
> RC4-MD5
> PSK-RC4-SHA
> ECDHE-RSA-DES-CBC3-SHA
> ECDHE-ECDSA-DES-CBC3-SHA
> SRP-DSS-3DES-EDE-CBC-SHA
> SRP-RSA-3DES-EDE-CBC-SHA
> SRP-3DES-EDE-CBC-SHA
> EDH-RSA-DES-CBC3-SHA
> EDH-DSS-DES-CBC3-SHA
> DH-RSA-DES-CBC3-SHA
> DH-DSS-DES-CBC3-SHA
> ECDH-RSA-DES-CBC3-SHA
> ECDH-ECDSA-DES-CBC3-SHA
> DES-CBC3-SHA
> PSK-3DES-EDE-CBC-SHA
>  
> Von: Julian Zielke [mailto:jzie...@next-level-integration.com 
> <mailto:jzie...@next-level-integration.com>] 
> Gesendet: Mittwoch, 30. August 2017 12:23
> An: Daniel Schneller <daniel.schnel...@centerdevice.com 
> <mailto:daniel.schnel...@centerdevice.com>>
> Cc: Georg Faerber <ge...@riseup.net <mailto:ge...@riseup.net>>; 
> haproxy+h...@formilux.org <mailto:haproxy+h...@formilux.org> 
> <haproxy@formilux.org <mailto:haproxy@formilux.org>>
> Betreff: AW: Enable SSL Forward Secrecy
>  
> Output is:
>  
> SRP-DSS-AES-256-CBC-SHA
> SRP-RSA-AES-256-CBC-SHA
> SRP-AES-256-CBC-SHA
> ECDH-RSA-AES256-SHA
> ECDH-ECDSA-AES256-SHA
> AES256-SHA
> PSK-AES256-CBC-SHA
> SRP-DSS-AES-128-CBC-SHA
> SRP-RSA-AES-128-CBC-SHA
> SRP-AES-128-CBC-SHA
> ECDH-RSA-AES128-SHA
> ECDH-ECDSA-AES128-SHA
> AES128-SHA
> PSK-AES128-CBC-SHA
>  
> Julian
>  
> Von: Daniel Schneller [mailto:danie

Re: Enable SSL Forward Secrecy

2017-08-30 Thread Daniel Schneller
Ok, so that’s not it. What about the ciphers output?


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 30. Aug. 2017, at 12:19, Julian Zielke 
> <jzie...@next-level-integration.com> wrote:
> 
> The output is:
>  
> Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
> Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
>  
> Haproxy Version is 1.7.9.
>  
> Julian
>  
> Von: Daniel Schneller [mailto:daniel.schnel...@centerdevice.com] 
> Gesendet: Mittwoch, 30. August 2017 11:58
> An: Julian Zielke <jzie...@next-level-integration.com>
> Cc: Georg Faerber <ge...@riseup.net>; haproxy+h...@formilux.org 
> <haproxy@formilux.org>
> Betreff: Re: Enable SSL Forward Secrecy
>  
> Also, please run haproxy -vv to get some idea about what SSL library it 
> actually uses.
>  
>  
> -- 
> Daniel Schneller
> Principal Cloud Engineer
>  
> CenterDevice GmbH  | Hochstraße 11
>| 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de <mailto:daniel.schnel...@centerdevice.de>   
> | www.centerdevice.de <http://www.centerdevice.de/>
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
>  
> On 30. Aug. 2017, at 11:52, Julian Zielke <jzie...@next-level-integration.com 
> <mailto:jzie...@next-level-integration.com>> wrote:
>  
> Hi Georg,
> 
> tried this already without effect.
> 
> - Julian
> 
> -Ursprüngliche Nachricht-
> Von: Georg Faerber [mailto:ge...@riseup.net <mailto:ge...@riseup.net>]
> Gesendet: Mittwoch, 30. August 2017 11:51
> An: haproxy@formilux.org <mailto:haproxy@formilux.org>
> Betreff: Re: Enable SSL Forward Secrecy
> 
> On 17-08-30 09:33:23, Julian Zielke wrote:
> 
> Hi,
> 
> I'm struggling with enabling SSL forward secrecy in my haproxy 1.7 setup.
> 
> So far the global settings look like:
> 
>  tune.ssl.default-dh-param 2048 # tune shared secret to 2048 bits
> 
>  ssl-default-bind-options force-tlsv12 no-sslv3
>  ssl-default-bind-ciphers 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
>  ssl-default-server-options force-tlsv12 no-sslv3
>  ssl-default-server-ciphers 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
> 
>  ssl-server-verify required
>  tune.ssl.cachesize 10
>  tune.ssl.lifetime 600
>  tune.ssl.maxrecord 1460
> 
> and in my https UI I've set:
> 
> ### ssl forward secrecy tweak
> # Distinguish between secure and insecure requests
>   acl secure dst_port eq 443
> 
> # Mark all cookies as secure if sent over SSL
>   rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
> 
> # Add the HSTS header with a 1 year max-age
>   rspadd Strict-Transport-Security:\ max-age=31536000 if secure
> 
> Still Qualys gives me an A- rating telling me:
> The server does not support Forward Secrecy with the reference browsers. 
> Grade reduced to A-.
> 
> Any clue how to fix this?
> 
> Try to add no-tls-tickets [1].
> 
> Cheers,
> Georg
> 
> 
> [1] 
> https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#no-tls-tickets 
> <https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#no-tls-tickets>
> Wichtiger Hinweis: Der Inhalt dieser E-Mail ist vertraulich und 
> ausschließlich für den bezeichneten Adressaten bestimmt. Wenn Sie nicht der 
> vorgesehene Adressat dieser E-Mail oder dessen Vertreter sein sollten, so 
> beachten Sie bitte, dass jede Form der Kenntnisnahme, Veröffentlichung, 
> Vervielfältigung oder Weitergabe des Inhalts dieser E-Mail unzulässig ist. 
> Wir bitten Sie, sich in diesem Fall mit dem Absender der E-Mail in Verbindung 
> zu setzen. Wir möchten Sie außerdem darauf hinweisen, dass die Kommunikation 
> per E-Mail über das Internet unsicher ist, da für un

Re: Enable SSL Forward Secrecy

2017-08-30 Thread Daniel Schneller
Also, please run haproxy -vv to get some idea about what SSL library it 
actually uses.


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 30. Aug. 2017, at 11:52, Julian Zielke 
> <jzie...@next-level-integration.com> wrote:
> 
> Hi Georg,
> 
> tried this already without effect.
> 
> - Julian
> 
> -Ursprüngliche Nachricht-
> Von: Georg Faerber [mailto:ge...@riseup.net]
> Gesendet: Mittwoch, 30. August 2017 11:51
> An: haproxy@formilux.org
> Betreff: Re: Enable SSL Forward Secrecy
> 
> On 17-08-30 09:33:23, Julian Zielke wrote:
>> Hi,
>> 
>> I'm struggling with enabling SSL forward secrecy in my haproxy 1.7 setup.
>> 
>> So far the global settings look like:
>> 
>>  tune.ssl.default-dh-param 2048 # tune shared secret to 2048 bits
>> 
>>  ssl-default-bind-options force-tlsv12 no-sslv3
>>  ssl-default-bind-ciphers 
>> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
>>  ssl-default-server-options force-tlsv12 no-sslv3
>>  ssl-default-server-ciphers 
>> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
>> 
>>  ssl-server-verify required
>>  tune.ssl.cachesize 10
>>  tune.ssl.lifetime 600
>>  tune.ssl.maxrecord 1460
>> 
>> and in my https UI I've set:
>> 
>> ### ssl forward secrecy tweak
>> # Distinguish between secure and insecure requests
>>   acl secure dst_port eq 443
>> 
>> # Mark all cookies as secure if sent over SSL
>>   rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
>> 
>> # Add the HSTS header with a 1 year max-age
>>   rspadd Strict-Transport-Security:\ max-age=31536000 if secure
>> 
>> Still Qualys gives me an A- rating telling me:
>> The server does not support Forward Secrecy with the reference browsers. 
>> Grade reduced to A-.
>> 
>> Any clue how to fix this?
> 
> Try to add no-tls-tickets [1].
> 
> Cheers,
> Georg
> 
> 
> [1] 
> https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#no-tls-tickets
> Wichtiger Hinweis: Der Inhalt dieser E-Mail ist vertraulich und 
> ausschließlich für den bezeichneten Adressaten bestimmt. Wenn Sie nicht der 
> vorgesehene Adressat dieser E-Mail oder dessen Vertreter sein sollten, so 
> beachten Sie bitte, dass jede Form der Kenntnisnahme, Veröffentlichung, 
> Vervielfältigung oder Weitergabe des Inhalts dieser E-Mail unzulässig ist. 
> Wir bitten Sie, sich in diesem Fall mit dem Absender der E-Mail in Verbindung 
> zu setzen. Wir möchten Sie außerdem darauf hinweisen, dass die Kommunikation 
> per E-Mail über das Internet unsicher ist, da für unberechtigte Dritte 
> grundsätzlich die Möglichkeit der Kenntnisnahme und Manipulation besteht
> 
> Important Note: The information contained in this e-mail is confidential. It 
> is intended solely for the addressee. Access to this e-mail by anyone else is 
> unauthorized. If you are not the intended recipient, any form of disclosure, 
> reproduction, distribution or any action taken or refrained from in reliance 
> on it, is prohibited and may be unlawful. Please notify the sender 
> immediately. We also would like to inform you that communication via e-mail 
> over the internet is insecure because third parties may have the possibility 
> to access and manipulate e-mails.



Re: Enable SSL Forward Secrecy

2017-08-30 Thread Daniel Schneller
The cipher suite list only shows two possible ciphers — both not suitable for 
FS.

TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA

This is also why all the modern browsers are marked as “No FS” — they can’t use 
a FS cipher.

Try this on your haproxy instance:

$ openssl ciphers 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE'
 | tr ':' '\n'

(I copied the ciphers list from your earlier mail).
On my box this results in 

ECDHE-RSA-AES256-GCM-SHA384
ECDHE-ECDSA-AES256-GCM-SHA384
ECDHE-RSA-AES256-SHA384
ECDHE-ECDSA-AES256-SHA384
ECDHE-RSA-AES256-SHA
ECDHE-ECDSA-AES256-SHA
SRP-DSS-AES-256-CBC-SHA
SRP-RSA-AES-256-CBC-SHA
SRP-AES-256-CBC-SHA
ECDH-RSA-AES256-SHA
ECDH-ECDSA-AES256-SHA
AES256-SHA
PSK-AES256-CBC-SHA
ECDHE-RSA-AES128-SHA
ECDHE-ECDSA-AES128-SHA
SRP-DSS-AES-128-CBC-SHA
SRP-RSA-AES-128-CBC-SHA
SRP-AES-128-CBC-SHA
ECDH-RSA-AES128-SHA
ECDH-ECDSA-AES128-SHA
AES128-SHA
PSK-AES128-CBC-SHA

Check the output on your load balancer — maybe the OpenSSL version is just too old?
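
A quick way to double-check what haproxy was built against and is running with:

  haproxy -vv | grep -i 'openssl version'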

Regards,
Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 30. Aug. 2017, at 11:42, Julian Zielke 
> <jzie...@next-level-integration.com> wrote:
> 
> Hi,
>  
> sure I can share it, since the site is secured already in many ways:
>  
> https://www.ssllabs.com/ssltest/analyze.html?d=portal-vonovia.next-level-apps.com=on
>  
> - Julian
>  
> Von: Daniel Schneller [mailto:daniel.schnel...@centerdevice.com] 
> Gesendet: Mittwoch, 30. August 2017 11:39
> An: Julian Zielke <jzie...@next-level-integration.com>
> Cc: haproxy+h...@formilux.org <haproxy@formilux.org>
> Betreff: Re: Enable SSL Forward Secrecy
>  
> Hi,
>  
> You might want to include a link to your Qualys results to help others see 
> what exactly they say.
> At a casual glance the ciphers look ok, but it would be easier to see the 
> SSLlabs output.
> If you don’t want to share it, I suggest scrolling down and looking at the 
> results of the per-browser handshakes and going through them — IIRC there is 
> some “FS” vs. “No FS” marker there.
>  
> Regards,
> Daniel
>  
> -- 
> Daniel Schneller
> Principal Cloud Engineer
>  
> CenterDevice GmbH  | Hochstraße 11
>| 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
>  
> On 30. Aug. 2017, at 11:33, Julian Zielke 
> <jzie...@next-level-integration.com> wrote:
>  
> Hi,
>  
> I’m struggling with enabling SSL forward secrecy in my haproxy 1.7 setup.
>  
> So far the global settings look like:
>  
>   tune.ssl.default-dh-param 2048 # tune shared secret to 2048 bits
>  
>   ssl-default-bind-options force-tlsv12 no-sslv3
>   ssl-default-bind-ciphers 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
>   ssl-default-server-options force-tlsv12 no-sslv3
>   ssl-default-server-ciphers 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
>  
>   ssl-server-verify required
>   tune.ssl.cachesize 10
>   tune.ssl.lifetime 600
>   tune.ssl.maxrecord 1460
>  
> and in my https UI I’ve set:
>  
> ### ssl forward secrecy tweak
> # Distinguish between secure and insecure requests
>acl secure dst_port eq 443
>  
> # Mark all cookies as secure if sent over SSL
>rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
>  
> # Add the HSTS header with a 1 year max-age
>rspadd Strict-Transport-Security:\ max-age=31536000 if secure
>  
> Still Qualys gives me an A- rating telling me:
> The server does not support Forward Secrecy with the reference browsers. 
> Grade reduced to A-.
>  
> Any clue how to fix this?
>  
> - Julian
>  
>  
> Wichtiger Hinweis: Der Inhalt dieser E-Mail ist vertraulich und 
> ausschließ

Re: Enable SSL Forward Secrecy

2017-08-30 Thread Daniel Schneller
Hi,

You might want to include a link to your Qualys results to help others see what 
exactly they say.
At a casual glance the ciphers look ok, but it would be easier to see the 
SSLlabs output.
If you don’t want to share it, I suggest scrolling down and looking at the 
results of the per-browser handshakes and going through them — IIRC there is some 
“FS” vs. “No FS” marker there.

Regards,
Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 30. Aug. 2017, at 11:33, Julian Zielke 
> <jzie...@next-level-integration.com> wrote:
> 
> Hi,
>  
> I’m struggling with enabling SSL forward secrecy in my haproxy 1.7 setup.
>  
> So far the global settings look like:
>  
>   tune.ssl.default-dh-param 2048 # tune shared secret to 2048 bits
>  
>   ssl-default-bind-options force-tlsv12 no-sslv3
>   ssl-default-bind-ciphers 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
>   ssl-default-server-options force-tlsv12 no-sslv3
>   ssl-default-server-ciphers 
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA:AES256+EECDH:AES256+EDH:TLSv1+HIGH:!aNULL:!eNULL:!3DES:!RC4:!CAMELLIA:!DH:!kECDHE:@STRENGTH:!DHE
>  
>   ssl-server-verify required
>   tune.ssl.cachesize 10
>   tune.ssl.lifetime 600
>   tune.ssl.maxrecord 1460
>  
> and in my https UI I’ve set:
>  
> ### ssl forward secrecy tweak
> # Distinguish between secure and insecure requests
>acl secure dst_port eq 443
>  
> # Mark all cookies as secure if sent over SSL
>rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
>  
> # Add the HSTS header with a 1 year max-age
>rspadd Strict-Transport-Security:\ max-age=31536000 if secure
>  
> Still Qualys gives me an A- rating telling me:
> The server does not support Forward Secrecy with the reference browsers. 
> Grade reduced to A-.
>  
> Any clue how to fix this?
>  
> Julian
>  
>  
> Wichtiger Hinweis: Der Inhalt dieser E-Mail ist vertraulich und 
> ausschließlich für den bezeichneten Adressaten bestimmt. Wenn Sie nicht der 
> vorgesehene Adressat dieser E-Mail oder dessen Vertreter sein sollten, so 
> beachten Sie bitte, dass jede Form der Kenntnisnahme, Veröffentlichung, 
> Vervielfältigung oder Weitergabe des Inhalts dieser E-Mail unzulässig ist. 
> Wir bitten Sie, sich in diesem Fall mit dem Absender der E-Mail in Verbindung 
> zu setzen. Wir möchten Sie außerdem darauf hinweisen, dass die Kommunikation 
> per E-Mail über das Internet unsicher ist, da für unberechtigte Dritte 
> grundsätzlich die Möglichkeit der Kenntnisnahme und Manipulation besteht
> 
> Important Note: The information contained in this e-mail is confidential. It 
> is intended solely for the addressee. Access to this e-mail by anyone else is 
> unauthorized. If you are not the intended recipient, any form of disclosure, 
> reproduction, distribution or any action taken or refrained from in reliance 
> on it, is prohibited and may be unlawful. Please notify the sender 
> immediately. We also would like to inform you that communication via e-mail 
> over the internet is insecure because third parties may have the possibility 
> to access and manipulate e-mails.
> 



Re: req.cook_cnt() broken?

2017-08-25 Thread Daniel Schneller
On 24. Aug. 2017, at 01:50, Cyril Bonté <cyril.bo...@free.fr> wrote:
> 
> You're right. Currently, the code and the documentation don't say the same 
> things.
> 
> Can you try the attached patch ?
> 
> -- 
> Cyril Bonté
> 

Thanks for the patch!

Tried against 1.8,  1.7.9, and 1.6.13 just now. Works as expected with all 
three. :D

Any chance of getting this fix backported to the 1.7 and ideally 1.6 branches?

It would come in handy on a production system currently running 1.6 that I 
cannot easily upgrade to 1.7.


Cheers,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


Re: req.cook_cnt() broken?

2017-08-23 Thread Daniel Schneller
Kindly bumping this during the summer vacation time for potentially new 
recipients :)


> On 21. Aug. 2017, at 21:14, Daniel Schneller 
> <daniel.schnel...@centerdevice.com> wrote:
> 
> Hi!
> 
> According to the documentation
> 
>  req.cook_cnt([<name>]) : integer
>  Returns an integer value representing the number of occurrences of the cookie
>  <name> in the request, or all cookies if <name> is not specified.
> 
> it should be possible to do something like this to reject a request if it 
> contains more than <n> cookies total. I do not know the cookie names in 
> advance. I am trying to reject malicious requests with hundreds or thousands 
> of cookies, trying to exhaust memory in my backend servers. Tomcat has a 
> maximum number of cookies per request setting, but I’d like to reject these 
> before they even get to the backends.
> 
> I thought this would work (for n=2):
> 
>   frontend fe-test
>   bind 0.0.0.0:8070
>   http-request deny deny_status 400 if { req.cook_cnt() gt 2 }
>   http-request auth realm tomcat
>   default_backend be-test
> 
> 
> However, it does not work. The count is always 0, hence the ACL always passes 
> and I get a 401 response from the next ACL in line.
> 
> root@tomcat:~# curl -v -b 'C1=v1; C1=v2; C1=v3' tomcat:8070
> * Rebuilt URL to: tomcat:8070/
> * Hostname was NOT found in DNS cache
> *   Trying 127.0.1.1...
> * Connected to tomcat (127.0.1.1) port 8070 (#0)
>> GET / HTTP/1.1
>> User-Agent: curl/7.35.0
>> Host: tomcat:8070
>> Accept: */*
>> Cookie: C1=v1; C1=v2; C1=v3
>> 
> * HTTP 1.0, assume close after body
> < HTTP/1.0 401 Unauthorized
> < Cache-Control: no-cache
> < Connection: close
> < Content-Type: text/html
> < WWW-Authenticate: Basic realm="tomcat"
> <
> 401 Unauthorized
> You need a valid user and password to access this content.
> 
> * Closing connection 0
> 
> 
> When I change the ACL to include a cookie name, it works:
> 
>http-request deny deny_status 400 if { req.cook_cnt("C1") gt 2 }
> 
> root@tomcat:~# curl -v -b 'C1=v1; C1=v2; C1=v3' tomcat:8070
> * Rebuilt URL to: tomcat:8070/
> * Hostname was NOT found in DNS cache
> *   Trying 127.0.1.1...
> * Connected to tomcat (127.0.1.1) port 8070 (#0)
>> GET / HTTP/1.1
>> User-Agent: curl/7.35.0
>> Host: tomcat:8070
>> Accept: */*
>> Cookie: C1=v1; C1=v2; C1=v3
>> 
> * HTTP 1.0, assume close after body
> < HTTP/1.0 400 Bad request
> < Cache-Control: no-cache
> < Connection: close
> < Content-Type: text/html
> <
> 400 Bad request
> Your browser sent an invalid request.
> 
> * Closing connection 0
> 
> 
> 
> I tried to figure out what the code does, to see if I am doing something 
> wrong and found this in proto_http.c:
> 
> --
> /* Iterate over all cookies present in a request to count how many occurrences
> * match the name in args and args->data.str.len. If <multi> is non-null, then
> * multiple cookies may be parsed on the same line. The returned sample is of
> * type UINT. Accepts exactly 1 argument of type string.
> */
> static int
> smp_fetch_cookie_cnt(const struct arg *args, struct sample *smp, const char 
> *kw, void *private)
> {
>   struct http_txn *txn;
>   struct hdr_idx *idx;
>   struct hdr_ctx ctx;
>   const struct http_msg *msg;
>   const char *hdr_name;
>   int hdr_name_len;
>   int cnt;
>   char *val_beg, *val_end;
>   char *sol;
> 
>   if (!args || args->type != ARGT_STR)
>   return 0;
> --
> 
> So without being very C-savvy, this appears to exit early when there is no 
> parameter of type string passed in.
> 
> I hope someone can shed some light on this. :)
> 
> Thanks in advance,
> Daniel
> 
> 
> -- 
> Daniel Schneller
> Principal Cloud Engineer
> 
> CenterDevice GmbH  | Hochstraße 11
>   | 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
> 




req.cook_cnt() broken?

2017-08-21 Thread Daniel Schneller
Hi!

According to the documentation

  req.cook_cnt([<name>]) : integer
  Returns an integer value representing the number of occurrences of the cookie
  <name> in the request, or all cookies if <name> is not specified.

it should be possible to do something like this to reject a request if it 
contains more than <n> cookies total. I do not know the cookie names in 
advance. I am trying to reject malicious requests with hundreds or thousands of 
cookies, trying to exhaust memory in my backend servers. Tomcat has a maximum 
number of cookies per request setting, but I’d like to reject these before they 
even get to the backends.

I thought this would work (for n=2):

frontend fe-test
bind 0.0.0.0:8070
http-request deny deny_status 400 if { req.cook_cnt() gt 2 }
http-request auth realm tomcat
default_backend be-test


However, it does not work. The count is always 0, hence the ACL always passes 
and I get a 401 response from the next ACL in line.

root@tomcat:~# curl -v -b 'C1=v1; C1=v2; C1=v3' tomcat:8070
* Rebuilt URL to: tomcat:8070/
* Hostname was NOT found in DNS cache
*   Trying 127.0.1.1...
* Connected to tomcat (127.0.1.1) port 8070 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: tomcat:8070
> Accept: */*
> Cookie: C1=v1; C1=v2; C1=v3
>
* HTTP 1.0, assume close after body
< HTTP/1.0 401 Unauthorized
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html
< WWW-Authenticate: Basic realm="tomcat"
<
401 Unauthorized
You need a valid user and password to access this content.

* Closing connection 0


When I change the ACL to include a cookie name, it works:

http-request deny deny_status 400 if { req.cook_cnt("C1") gt 2 }

root@tomcat:~# curl -v -b 'C1=v1; C1=v2; C1=v3' tomcat:8070
* Rebuilt URL to: tomcat:8070/
* Hostname was NOT found in DNS cache
*   Trying 127.0.1.1...
* Connected to tomcat (127.0.1.1) port 8070 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: tomcat:8070
> Accept: */*
> Cookie: C1=v1; C1=v2; C1=v3
>
* HTTP 1.0, assume close after body
< HTTP/1.0 400 Bad request
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html
<
400 Bad request
Your browser sent an invalid request.

* Closing connection 0



I tried to figure out what the code does, to see if I am doing something wrong 
and found this in proto_http.c:

--
/* Iterate over all cookies present in a request to count how many occurrences
 * match the name in args and args->data.str.len. If <multi> is non-null, then
 * multiple cookies may be parsed on the same line. The returned sample is of
 * type UINT. Accepts exactly 1 argument of type string.
 */
static int
smp_fetch_cookie_cnt(const struct arg *args, struct sample *smp, const char 
*kw, void *private)
{
struct http_txn *txn;
struct hdr_idx *idx;
struct hdr_ctx ctx;
const struct http_msg *msg;
const char *hdr_name;
int hdr_name_len;
int cnt;
char *val_beg, *val_end;
char *sol;

if (!args || args->type != ARGT_STR)
return 0;
--

So without being very C-savvy, this appears to exit early when there is no 
parameter of type string passed in.

I hope someone can shed some light on this. :)

Thanks in advance,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431





Re: fields vs word converter, unexpected "0" result

2017-08-01 Thread Daniel Schneller
On 1. Aug. 2017, at 17:32, Holger Just  wrote:

> GET / HTTP/1.1
> Host: 127.0.0.1:8881
> User-Agent: curl/7.43.0
> Accept: */*
> 
> The HTTP 1.1 specification requires that a Host header is always sent
> along with the request. Curl specifically always sends the host from the
> given URL, unless it was explicitly overwritten.
> 
> Thus, in your case the fetch extracts the second part from the given IP
> address, which is 0 in your case.
> 

Ha! Thanks. I actually had used curl -v before, but selective vision must have 
kicked in with all the 127.0.0.1’s :)

Any idea on the difference between “word” and “field”, though?

Daniel




fields vs word converter, unexpected "0" result

2017-08-01 Thread Daniel Schneller
Hi!

First, the basics:

--
root@haproxy-1:~# haproxy -vv
HA-Proxy version 1.6.13-1ppa1~trusty 2017/06/19
Copyright 2000-2017 Willy Tarreau <wi...@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat 
-Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200




root@haproxy-1:/vagrant# cat mini.cfg

global
  log /dev/log len 350 local0 info
  stats socket /var/run/haproxy.stat user haproxy group haproxy mode 600 level 
admin

defaults
  mode http
  log global
  option httplog
  option dontlognull
  option http-keep-alive
  option redispatch
  timeout http-request 10s
  timeout queue 1m
  timeout connect 5s
  timeout client 2m
  timeout server 2m
  timeout http-keep-alive 10s
  timeout check 5s
  retries 3
  maxconn 2000


listen demo
  bind 127.0.0.1:8881
  log-format %ci:%cp\ [%t]\ %{+Q}[var(txn.host),lower,word(2,'.')]
  http-request set-var(txn.host) req.hdr(Host)
  server dummy dummy:8081 check
———


I need to log the 2nd subdomain of any incoming request. The above is the 
minimum config
I used to demo this behaviour, the actual one is a bit more involved ;)

First question: What is the difference between the “word” and “field” 
converters?
I tried both, but the behavior described below is the same regardless.



Now, this is what confuses me (request interleaved with corresponding log line):

root@haproxy-1:~# curl -s http://127.0.0.1:8881 -H "Host: aa.bb.cc"
Aug  1 15:12:42 haproxy-1 haproxy[3049]: 127.0.0.1:45868 
[01/Aug/2017:15:12:42.528] "bb"


root@haproxy-1:~# curl -s http://127.0.0.1:8881 -H "Host: cc"
Aug  1 15:12:47 haproxy-1 haproxy[3049]: 127.0.0.1:45871 
[01/Aug/2017:15:12:47.296] ""


root@haproxy-1:~# curl -s http://127.0.0.1:8881 -H "Host:"
Aug  1 15:12:50 haproxy-1 haproxy[3049]: 127.0.0.1:45872 
[01/Aug/2017:15:12:50.695] ""


root@haproxy-1:~# curl -s http://127.0.0.1:8881
Aug  1 15:12:55 haproxy-1 haproxy[3049]: 127.0.0.1:45875 
[01/Aug/2017:15:12:55.198] "0"


While the first three are expected, the last one confuses me. Why would leaving 
the header out result in “0” being logged?

Ideally, I’d like this to show as “-“, but empty string would be fine, too.
But “0” is pretty counter-intuitive.

It’s not strictly horrible, but at least it is unexpected and would also 
collide with cases where the actual 2nd subdomain was called “0”.

Is this a bug, or am I doing something wrong? 


Thanks,
Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431





Re: haproxy does not capture the complete request header host sometimes

2017-06-22 Thread Daniel Schneller
Hi!

Phew, I was following this one with some concern, fearing it could be something 
more serious just waiting to hit us, too ;-)
Great that the issue was found, thanks for that!

There is just one thing I wanted to note regarding

> […] It can be backported in 1.7, 1.6 and 1.5. I finally marked this patch as 
> a bug fix.

If I read the patch correctly, even though it is classified as “MINOR”, it will 
fail with an error on startup when the configuration has a value outside the 
range.
When backporting into the stable branches, this can lead to updates failing 
with existing configuration files — which may or may not be what you expect 
when doing minor upgrades.
Granted, if you currently use an out-of-range value, you probably _want_ this 
fix, but it might still hit you unexpectedly.

It should be made very prominent in the release notes.
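
Anyone worried about being bitten can dry-run an existing configuration against 
the new binary before switching over — e.g. (path just an example):

  haproxy -c -f /etc/haproxy/haproxy.cfg

which parses the file and exits with an error instead of failing later at 
service restart time.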

Cheers,
Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 21. Jun. 2017, at 17:00, Christopher Faulet <cfau...@haproxy.com> wrote:
> 
> Le 13/06/2017 à 14:16, Christopher Faulet a écrit :
>> Le 13/06/2017 à 10:31, siclesang a écrit :
>>> haproxy balances by host, but often captures only a part of the request
>>> header host, or null, and those requests balance to the default server.
>>> 
>>> how to debug it?
>>> 
>> Hi,
>> I'll try to help you. Can you share your configuration please ? It could
>> help to find a potential bug.
>> Could you also provide the tcpdump of a buggy request ?
>> And finally, could you upgrade your HAProxy to the last 1.6 version
>> (1.6.12) to be sure ?
> 
> Hi,
> 
> Just for the record. After some exchanges in private with siclesang, we found 
> the bug in the configuration parser, because of a too high value for 
> tune.http.maxhdr. Here is the explanation:
> 
> Well, I think I found the problem. This is not a bug (not really). There
> is something I missed in your configuration. You set tune.http.maxhdr to
> 64000. I guess you keep this parameter during all your tests. This is an
> invalid value. It needs to be in the range [0, 32767]. This is mandatory
> to avoid integer overflow. The size of the array where header offsets
> are stored is a signed short.
> 
> To be fair, there is no check on this value during the configuration
> parsing. And the documentation does not specify any range for this
> parameter. I will post a fix very quickly to avoid errors.
> 
> BTW, this is a really huge value. The default one is 101. You can
> legitimately increase this value. But there is no reason to have 64000
> headers in an HTTP message. IMHO, 1000/2000 is already a very high limit.
> 
> I attached a patch to improve the configuration parsing and to update the 
> documentation. It can be backported in 1.7, 1.6 and 1.5. I finally marked 
> this patch as a bug fix.
> 
> Thanks siclesang for your help,
> -- 
> Christopher Faulet
> <0001-BUG-MINOR-cfgparse-Check-if-tune.http.maxhdr-is-in-t.patch>



Re: truncated request in log lines

2017-05-16 Thread Daniel Schneller
This is a limitation of the syslog protocol, IIRC.

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 16. May. 2017, at 13:28, Stéphane Cottin <stephane.cot...@vixns.com> wrote:
> 
> Hi,
> 
> Version: haproxy 1.7.2
> 
> I'm logging to a unix socket, allowing long lines.
> 
>  log /dev/log len 8192 local0
> [...]
>  option dontlognull
>  option log-separate-errors
>  option httplog
> 
> I'm also capturing the referer header.
> 
>  capture request header Referer len 4096
> 
> When using large strings (length > 1024), the request is truncated to 1024 
> characters in the log line, while the captured header is not.
> The log line is still valid, quotes are present, only the end of the request 
> string is missing.
> 
> Am I missing some config parameter? Shouldn't setting the len in the log 
> configuration directive prevent this?
> 
> 
> Stéphane
> 



Re: Automatic Certificate Switching Idea

2017-05-15 Thread Daniel Schneller
> 
> That's perfect! Your feedback and possible trouble in doing this will
> also definitely help!
> 

Oh, if experience has taught me one thing, it is that no matter how 
“straightforward” this may look, there _will_ be trouble ;-)

Cheers
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431





Re: Automatic Certificate Switching Idea

2017-05-12 Thread Daniel Schneller
Willy,

thanks for your elaborate reply! See my remarks below.

> possible impacts nor complexity (but I don't want to have the complete MS
> Office suite merged in, just Word, Excel and PowerPoint :-)).


:-D

>  - renewed certs can and will sometimes provide extra alt names, so
>they are not always 100% equivalent.
> […]

> That said, given that we can already look up a cert based on a name,
> maybe in fact we could load all of them and just try to find a more
> recent one if the first one reported by the SNI is outdated. I don't
> know if that solves everything there.


It actually might. In the end it would be something like a map, with the
key being the domain, and the value a list of pointers to the actual
certificates, sorted by remaining validity, shortest first.

> In any case, this will not provide any benefit regarding let's encrypt
> or such solutions, because the next cert would have to be known in
> advance and loaded already, so reloads will have to be performed to
> take it into account. So I think that the approach making it possible
> to feed them over the CLI would still be more interesting (and possibly
> complementary).

I think it would benefit Let’s Encrypt and similar scenarios. It would
still require reloads to pick up newly added certificates. But as renewed
certificates overlap their predecessors’ validity period, dropping them
into a directory and just doing a reload maybe once a day would work.
Clients would still get the older one, until it finally expired, but that
should not matter, as we are not talking about revocations where
switching to a new cert is wanted quickly.

> Daniel I'm pretty sure that most users
> would prefer the approach consisting in picking the most recent
> valid cert instead of the last one as you'd like. I don't really
> know if it's common to issue a cert with a "not-before" date in the
> future. And that might be the whole point in the end.


Well, I was just thinking about the not-after date. In general, from a
client perspective it shouldn’t matter to get an older one, until it
really expires. And the case where you have a new certificate
already, and you want it handed out to clients ASAP is already taken
care of today — just replace the file and reload :-)
Unless I misunderstood what you meant when referring to the
“not-before” date.

Daniel

PS: This is an interesting discussion, and I am happy to continue
it, if anyone feels the same. As I said, I will try to solve this via
provisioning scripts in the meantime, so there is no time pressure.
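
For the curious, here is a rough sketch of what such a provisioning script 
could look like — all paths and the naming scheme are made up, and GNU date 
is assumed:

#!/bin/sh
# Per domain: pick the not-yet-expired certificate with the earliest
# notAfter date and make it the file haproxy loads on the next reload.
POOL=/etc/haproxy/certs-pool/example.com    # all candidate .pem files
LIVE=/etc/haproxy/certs/example.com.pem     # the file haproxy actually reads
now=$(date +%s)
best=""
best_end=0
for pem in "$POOL"/*.pem; do
    enddate=$(openssl x509 -noout -enddate -in "$pem" | cut -d= -f2)
    end=$(date -d "$enddate" +%s)           # parses e.g. "May 12 10:00:00 2018 GMT"
    [ "$end" -le "$now" ] && continue       # skip certs that already expired
    if [ "$best_end" -eq 0 ] || [ "$end" -lt "$best_end" ]; then
        best=$pem
        best_end=$end
    fi
done
[ -n "$best" ] && ln -sf "$best" "$LIVE"

Run per domain from a daily cron, followed by a haproxy reload, this would give 
exactly the “serve the oldest still-valid cert” behaviour described above.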


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431



Re: Automatic Certificate Switching Idea

2017-05-09 Thread Daniel Schneller
Hi!

> On 9. May. 2017, at 00:30, Lukas Tribus <lu...@gmx.net> wrote:
> 
> [...]
> I'm opposed to heavy feature-bloating for provisioning use-cases, that
> can quite easily fixed where the fix belongs - the provisioning layer.

You are right that this can be handled outside, in the provisioning layer. 
And I have no problem implementing it there, if it is considered too narrow a 
niche feature. However, I was curious to see if this is something that other 
people also need constantly — sometimes you believe you are in a specific 
bubble, but aren’t. From the amount of feedback the original post 
generated, I think I know my answer already ;-)

Also, if it was something that could be implemented in a 10-liner (I know, I 
exaggerate :-)) it might still have been a useful addition.

I’ll see what I can get whipped up externally. Depending on how well I can get 
it separated from our specific setup, I might then release it into the wild for 
the select few who might find it useful :)

Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431





Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-05-08 Thread Daniel Schneller
Just my 2c, I very much support Kevin’s argument.
Even though we are not (yet) verifying backends — because currently we _are_ in 
a private LAN — we are planning to deploy parts of our application to public 
cloud infrastructure soon, so it would be quite an important feature.

Regards,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 6. May. 2017, at 19:18, Kevin McArthur <ke...@stormtide.ca> wrote:
> 
> 1. The Snowden leaks and the whole "SSL added and removed here" issue, for 
> example. TLS on internal networks is more important these days due to local 
> network implants and other security issues on LANs.
> 
> 2. Our use case is actually DigitalOcean where there is "private networking" 
> but it is shared among many customers. Operating without TLS on this 
> semi-private network would be unwise.
> 3. Most of the public tutorials for re-encrypting bridged TLS simply incur 
> TLS overhead while providing no TLS security (e.g. SSL on, but with 
> verify none enabled, verifyhost not set, etc.).
> 
> 4. Use cases like CDN proxy of public servers. Think Cloudflare's Full SSL 
> (Strict) setup... 
> --
> 
> Kevin
> On 2017-05-05 7:20 PM, Igor Cicimov wrote:
>> 
>> 
>> On 6 May 2017 2:04 am, "Kevin McArthur" <ke...@stormtide.ca 
>> <mailto:ke...@stormtide.ca>> wrote:
>> When doing tls->haproxy->tls (bridged https) re-encryption with SNI, we need 
>> to verify the backend certificate against the SNI value requested by the 
>> client.
>> 
>> Something like server options:
>> 
>> server app1 app1.example.ca:443 <http://app1.example.ca:443/> ssl no-sslv3 
>> sni ssl_fc_sni verify required verifyhost ssl_fc_sni
>> 
>> However, the "verifyhost ssl_fc_sni" part doesn't work at current. Is there 
>> any chance I could get this support patched in?
>> 
>> Most folks seem to be either ignoring the backend server validation, setting 
>> verify none, or are stripping tls altogether leaving a pretty big security 
>> hole.
>> Care to elaborate why is this a security hole if the backend servers are in 
>> internal LAN which usually is the case when terminating ssl on the proxy?
>> 
>> --
>> 
>> Kevin McArthur
>> 
> 



Re: Automatic Certificate Switching Idea

2017-04-30 Thread Daniel Schneller
Hi!

Yes, you got it right. I have no idea if there are technical limitations in the 
SSL library or other parts of the code that would make several certificate/key 
pairs for the same domain infeasible. 

If there were hard restrictions, it could certainly be done "externally" with a 
set of clever scripts and haproxy reloads, but IMO it would be a less brittle 
solution if it were built right in. Also, it wouldn't require separate scripts 
per platform and init system (SysV, systemd, …). 

Daniel


> On 30. Apr 2017, at 11:50, Aleksandar Lazic <al-hapr...@none.at> wrote:
> 
> HI.
> 
> Am 28-04-2017 09:26, schrieb Daniel Schneller:
> 
>> Hello!
>> I am managing a few haproxy instances that each manage a good number of 
>> domains and do the TLS termination on behalf of what you might call "hosted" 
>> sites.
>> Most of the clients connecting to these haproxys implement certificate 
>> pinning and verify that the certificate presented by the server is on a 
>> white list for their respective domains.
>> We have alerts on upcoming expirations with a few weeks advance notice, so 
>> that we can tell our customers to get a renewal done with their CA and 
>> provide it to us. Then clients (mostly mobile apps) can be updated, built 
>> and released to include both the current and the renewed certificates for a 
>> while. Once the current cert has actually expired, it will be removed from 
>> the white list with the next update.
>> To give the end users the longest possible opportunity to download and 
>> install the updated client, we perform the certificate replacement on 
>> haproxy very close to the actual expiration point in time.
>> With an increasing number of domains and certificates, and the tendency 
>> toward shorter certificate lifetimes, some cert is about to expire all the 
>> time, making this a rather regular task.
>> So I was wondering if there was a better way to achieve the client-friendly 
>> "last minute" replacements without having to manually care about the exact 
>> timing and hopefully never making a mistake.
>> If haproxy could load multiple certificates for the same domain (similar to 
>> what it currently already does for wildcard and more specific domain 
>> certificates), and would additionally consider their expiration dates,  
>> serving the one with the least remaining validity as long as it was still 
>> valid, but then automatically switch to an available replacement once the 
>> expiration is reached, we could just schedule regular (maybe daily) reloads 
>> (to let haproxy read any new files in) and just drop any renewed 
>> certificate/key files into the appropriate directory as soon as you get them.
>> I would welcome feedback on this idea, if only to be pointed at the obvious 
>> and glaring shortcomings it may have :D
> 
> This sounds to me a very interesting use case especially with the let's 
> encrypt certificates.
> 
> To reflect what I have understand I will try to explain your request from my 
> point of view.
> 
> mysecurewww.mysecuredomain.xxx will expire on 01.02.2017.
> This certificate pair was created on 01.10.2016.
> 
> Now you will create another certificate pair on 07.01.2017 with the same name 
> and add it to haproxy, right?
> 
> This ends up with 2 certificate pairs for the same domain, but with different 
> expiry times.
> Is this possible with the ssl lib?
> 
> Is this the scenario you have described?
> 
>> Cheers,
>> Daniel
> 
> Regards
> Aleks
> 
>> --
>> Daniel Schneller
>> Principal Cloud Engineer
>> CenterDevice GmbH  | Hochstraße 11
>> | 42697 Solingen
>> tel: +49 1754155711| Deutschland
>> daniel.schnel...@centerdevice.de   | www.centerdevice.de
>> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
>> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
>> HR-Gericht: Bonn, USt-IdNr.: DE-815299431



Automatic Certificate Switching Idea

2017-04-28 Thread Daniel Schneller
Hello!

I am managing a few haproxy instances that each manage a good number of domains 
and do the TLS termination on behalf of what you might call “hosted” sites.

Most of the clients connecting to these haproxys implement certificate pinning 
and verify that the certificate presented by the server is on a white list for 
their respective domains.

We have alerts on upcoming expirations with a few weeks advance notice, so that 
we can tell our customers to get a renewal done with their CA and provide it to 
us. Then clients (mostly mobile apps) can be updated, built and released to 
include both the current and the renewed certificates for a while. Once the 
current cert has actually expired, it will be removed from the white list with 
the next update.

To give the end users the longest possible opportunity to download and install 
the updated client, we perform the certificate replacement on haproxy very 
close to the actual expiration point in time.

With an increasing number of domains and certificates, and the tendency toward 
shorter certificate lifetimes, some cert is about to expire all the time, 
making this a rather regular task.

So I was wondering if there was a better way to achieve the client-friendly 
“last minute” replacements without having to manually care about the exact 
timing and hopefully never making a mistake.

If haproxy could load multiple certificates for the same domain (similar to 
what it currently already does for wildcard and more specific domain 
certificates), and would additionally consider their expiration dates,  serving 
the one with the least remaining validity as long as it was still valid, but 
then automatically switch to an available replacement once the expiration is 
reached, we could just schedule regular (maybe daily) reloads (to let haproxy 
read any new files in) and just drop any renewed certificate/key files into the 
appropriate directory as soon as you get them. 

I would welcome feedback on this idea, if only to be pointed at the obvious and 
glaring shortcomings it may have :D

Cheers,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431




Re: Certificate order

2017-04-18 Thread Daniel Schneller
Hi!

I am not very familiar with the code, so I thought I’d ask before something 
changes unexpectedly :)
I asked about certificate ordering a while ago, too, and I seem to remember 
(and we currently rely on this) that exact domain matches are “weighted higher” 
than wildcard matches on purpose, so that if I just dump the certificates in a 
directory, it will pick a more specific one over a wildcard that is also there 
as a “catchall”.

Not saying one or the other is right or wrong, but if this should be merged, it 
must be made very clear that people might have to change their setups.

Daniel



-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 10. Apr. 2017, at 20:02, Sander Hoentjen <san...@hoentjen.eu> wrote:
> 
> This is a corrected patch against 1.7.5.
> 
> On 04/10/2017 05:00 PM, Sander Hoentjen wrote:
>> No scratch that, this is wrong.
>> 
>> On 04/10/2017 04:57 PM, Sander Hoentjen wrote:
>>> The attached patch against haproxy 1.7.5 honours crt order also for
>>> wildcards.
>>> 
>>> On 04/07/2017 03:42 PM, Sander Hoentjen wrote:
>>>> Hi Sander,
>>>> 
>>>> On 04/06/2017 02:06 PM, Sander Klein wrote:
>>>>> Hi Sander,
>>>>> 
>>>>> On 2017-04-06 10:45, Sander Hoentjen wrote:
>>>>>> Hi guys,
>>>>>> 
>>>>>> We have a setup where we sometimes have multiple certificates for a
>>>>>> domain. We use multiple directories for that and would like the
>>>>>> following behavior:
>>>>>> - Look in dir A for any match, use it if found
>>>>>> - Look in dir B for any match, use it if found
>>>>>> - Look in dir .. etc
>>>>>> 
>>>>>> This works great, except for wildcards. Right now a domain match in dir
>>>>>> B takes precedence over a wildcard match in dir A.
>>>>>> 
>>>>>> Is there a way to get haproxy to behave the way I describe?
>>>>> I have been playing with this some time ago and my solution was to
>>>>> just think about the order of certificate loading. I then found out
>>>>> that the last certificate was preferred if it matched. Not sure if
>>>>> this has changed over time.
>>>> This does not work for wildcard certs, it seems they are always tried last.
>>>> 
>>>> Regards,
>>>> Sander
>>>> 
>> 
> 
> 



Re: SSL Termination or Passthrough

2017-02-17 Thread Daniel Schneller
Damn. I shouldn't respond to questions after midnight :-(. I completely 
missed that this is about client certificates until now. Sorry for missing that, 
Sam; and thanks Willy for the interesting link. 

One question comes up for me though, after reading it (unless I am still not 
awake enough, in which case I apologize upfront). The article contains 
instructions about a cron job to periodically fetch a CRL and put it in the 
place where haproxy expects it. But doesn't haproxy load the file just once on 
startup? Would replacing it like that even be noticed?

Daniel

> On 18 Feb 2017, at 07:28, Willy Tarreau  wrote:
> 
>> On Fri, Feb 17, 2017 at 07:20:14PM -0500, Sam Crowell wrote:
>> Thanks for the response Daniel.  What is the best way to handle SSL traffic
>> through a load balancer to maintain original client certificates?  Just use
>> mode TCP and passthrough?  Is there a way to do that without turning off
>> hostname verifier at the client level?
> 
> If you want to transfer client certificates to the server, you have to
> pass them in HTTP headers or using the proxy protocol for non-HTTP
> services. This means that you'll rely on haproxy to validate these
> client certs using the CA and possibly CRL though.
> 
> There's a good example here :
> 
>   https://raymii.org/s/tutorials/haproxy_client_side_ssl_certificates.html
> 
> Hoping this helps,
> Willy



Re: SSL Termination or Passthrough

2017-02-17 Thread Daniel Schneller
You should be able to configure haproxy in TCP mode and have it appear 
transparent, without the clients complaining. You won't be able to do anything 
on the http level, of course, but passing encrypted streams back and forth is a 
completely valid use case. Just keep anything TLS out of the haproxy config for 
these frontends and backends. :-)
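
For illustration, a minimal sketch of such a passthrough setup (the names, 
address, and port below are placeholders, not taken from your actual config):

frontend fe_tls_passthrough
    bind *:443
    mode tcp
    # no "ssl crt ..." here: the encrypted stream is forwarded untouched,
    # so the client completes its TLS handshake with the backend server
    default_backend be_tls_passthrough

backend be_tls_passthrough
    mode tcp
    server app1 192.168.1.10:443 check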

> On 18 Feb 2017, at 01:27, Sam Crowell <crowes...@gmail.com> wrote:
> 
> I guess it’s probably the same answer: it’s working as intended, and even with 
> passthrough the load balancer certificate does not match the backend server, 
> so it still throws the warning, which makes sense.
>> On February 17, 2017 at 7:20:14 PM, Sam Crowell (crowes...@gmail.com) wrote:
>> 
>> Thanks for the response Daniel.  What is the best way to handle SSL traffic 
>> through a load balancer to maintain original client certificates?  Just use 
>> mode TCP and passthrough?  Is there a way to do that without turning off 
>> hostname verifier at the client level?
>> 
>> Thanks,
>> Sam
>> 
>>> On February 17, 2017 at 7:13:23 PM, Daniel Schneller 
>>> (daniel.schnel...@centerdevice.com) wrote:
>>> 
>>> Sam,
>>> 
>>> This not working the way you would like is the cornerstone and one of the 
>>> key features of TLS. It is designed to ensure there is nothing in the 
>>> middle between the client and the server. If you need to inspect the 
>>> traffic, by definition you cannot without the clients trusting your 
>>> certificate (or its issuing authority as a whole).
>>> To be precise, you can't pose as the real server, because for that you 
>>> would not need the public certificate of the server (which you can easily 
>>> get), but its private key. By definition, you won't be able to get a hold 
>>> of it, as the real server alone has it.
>>> 
>>> All inspecting TLS proxies communicate with their own private 
>>> key/certificate pair with the client. There is no way around that.
>>> 
>>> Regards,
>>> Daniel
>>> 
>>> 
>>> > On 18 Feb 2017, at 00:47, Sam Crowell <crowes...@gmail.com> wrote:
>>> >
>>> > Is there a way to do SSL termination at the load balancer, but then send 
>>> > the original certificate to the backend server? I have seen plenty of 
>>> > notes and configs for SSL passthrough and SSL termination with 
>>> > re-encryption by the load balancer certificate.
>>> >
>>> > Even with passthrough, I still have to disable hostname verifier because 
>>> > the backend server doesn't match the load balancer certificate.
>>> >
>>> > I know there has to be a way to do this, I just can't find it in the 
>>> > documentation or on the internet.
>>> >
>>> > Thanks for the help and keep up the great work.
>>> >
>>> > Thanks,
>>> > Paul
>>> >


Re: SSL Termination or Passthrough

2017-02-17 Thread Daniel Schneller
Sam,

This not working the way you would like is the cornerstone and one of the key 
features of TLS. It is designed to ensure there is nothing in the middle 
between the client and the server. If you need to inspect the traffic, by 
definition you cannot without the clients trusting your certificate (or its 
issuing authority as a whole). 
To be precise, you can't pose as the real server, because for that you would 
not need the public certificate of the server (which you can easily get), but 
its private key. By definition, you won't be able to get a hold of it, as the 
real server alone has it. 

All inspecting TLS proxies communicate with their own private key/certificate 
pair with the client. There is no way around that. 

Regards,
Daniel


> On 18 Feb 2017, at 00:47, Sam Crowell  wrote:
> 
> Is there a way to do SSL termination at the load balancer, but then send the 
> original certificate to the backend server?  I have seen plenty of notes and 
> configs for SSL passthrough and SSL termination with re-encryption by the 
> load balancer certificate.
> 
> Even with passthrough, I still have to disable hostname verifier because the 
> backend server doesn't match the load balancer certificate.
> 
> I know there has to be a way to do this, I just can't find it in the 
> documentation or on the internet.
> 
> Thanks for the help and keep up the great work.
> 
> Thanks,
> Paul
> 



Re: Haproxy issue

2017-02-14 Thread Daniel Schneller
Just adding this back to the list.

> On 14. Feb. 2017, at 18:12, <thibault.dai...@orange.com> 
> <thibault.dai...@orange.com> wrote:
> 
> Ok thank you for your reply !
> I found my error
> Haproxy was not happy with http request
>  
> I was doing https://test-rest.net:8089, so I got the error because 
> I was checking only “test-rest.net”
>  
> So I changed my file and added ‘:8089’ after the host in “acl host_rest_services 
> hdr(Host) -i test-rest.net:8089”:
>  
> frontend rest-in
> bind *:8089 ssl crt /etc/ssl/*
> reqadd X-Forwarded-Proto:\ https
>  acl host_rest_services  hdr(Host) -i test-rest.net:8089
>  use_backend rest_services if host_rest_services
>  backend rest_services
>  server shstand 10.0.0.2:8089 ssl verify none
> So it works
>  
> De : Daniel Schneller [mailto:daniel.schnel...@centerdevice.com] 
> Envoyé : mardi 14 février 2017 17:17
> À : Skarbek, John
> Cc : DAIGNE Thibault OBS/OAB; haproxy+h...@formilux.org
> Objet : Re: Haproxy issue
>  
> Hi!
> 
> frontend rest-in
> bind *:8089 ssl crt /etc/ssl/*
> acl host_rest_services  hdr(Host) -i test-rest.net
> It would appear that your acl was not matched.  It was up to `rest-in` to 
> determine how to route the request, and since there's no other backend to 
> choose from, there was no place for the request to go.  I would advise that 
> you validate your acl is configured as you desired.
> My guess would be that the Host header actually says “test-rest.net:8089”, 
> which would lead the ACL to not match.
>  
> It would also be a good idea to setup a `default backend` as a way to help 
> test where your requests are going.
> For debugging these kinds of things I usually run haproxy in debug mode: 
> haproxy -d -f haproxy.cfg 
> That way it will echo incoming and outgoing headers.
>  
> Daniel
>  
> -- 
> Daniel Schneller
> Principal Cloud Engineer
>  
> CenterDevice GmbH  | Hochstraße 11
>| 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> _
> 
> Ce message et ses pieces jointes peuvent contenir des informations 
> confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu 
> ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
> electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou 
> falsifie. Merci.
> 
> This message and its attachments may contain confidential or privileged 
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete 
> this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been 
> modified, changed or falsified.
> Thank you.



Re: Haproxy issue

2017-02-14 Thread Daniel Schneller
Hi!
>> frontend rest-in
>> 
>> bind *:8089 ssl crt /etc/ssl/*
>> 
>> acl host_rest_services  hdr(Host) -i test-rest.net
> It would appear that your acl was not matched.  It was up to `rest-in` to 
> determine how to route the request, and since there's no other backend to 
> choose from, there was no place for the request to go.  I would advise that 
> you validate your acl is configured as you desired.
> 
My guess would be that the Host header actually says “test-rest.net:8089”, 
which would lead the ACL to not match.
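
If that is the case, one way to cover both forms is to simply list them both in 
the ACL (a sketch; only the “test-rest.net” host name is from the thread, the 
rest is illustrative):

acl host_rest_services hdr(Host) -i test-rest.net test-rest.net:8089
use_backend rest_services if host_rest_services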

> It would also be a good idea to setup a `default backend` as a way to help 
> test where your requests are going.
> 
For debugging these kinds of things I usually run haproxy in debug mode: 
haproxy -d -f haproxy.cfg 
That way it will echo incoming and outgoing headers.

Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


Re: ACL randomly failing

2017-02-13 Thread Daniel Schneller
Mathieu, 

I have often been fooled like this by multiple haproxy instances running at the 
same time.
Whenever I restarted them with config changes, there were sometimes open 
client connections keeping instances with older configs alive. Those would 
then answer a random subset of the connections.
So I suggest you first make sure you have exactly one instance running, e.g. 
with “ps aux | grep haproxy”.
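
Something along these lines also shows each process’ start time, which makes 
leftover instances easy to spot (a shell sketch; the bracketed “h” just keeps 
grep from matching itself):

ps -eo pid,lstart,args | grep '[h]aproxy'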

Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 13. Feb. 2017, at 15:45, Mathieu Poussin <math...@lodgify.com> wrote:
> 
> Hello.
> 
> I have setup HAProxy on our environment and I can see a very strange 
> behaviour. 
> 
> I have the following configuration (Just a part of it) :
> 
> global
> chroot /var/lib/haproxy
> user haproxy
> group haproxy
> daemon
> tune.maxrewrite 4096
> 
> 
> ### Defaults ###
> 
> 
> defaults
> modehttp
> option  httplog
> option  dontlog-normal
> option  dontlognull
> option  log-health-checks
> option  redispatch
> option http-server-close
> unique-id-header X-LB-Request-ID
> log-format %{+Q}r\ %ST\ "%CC"\ "%hr"\ "%CS"\ "%hs"\ %ID
> 
> timeout connect 5000
> timeout client 5
> timeout server 5
> 
> frontend websitemanager
> bind *:8004
> log global
> capture request header Host len 128
> capture request header X-Real-IP len 128
> capture request header X-LB-Request-ID len 128
>   capture request header X-HAProxy-Key len 128
>   http-request set-var(txn.x_haproxy_key) req.hdr(X-HAProxy-Key)
>   http-request set-var(txn.x_real_ip) req.hdr(X-Real-IP)
> http-request set-var(txn.url) url
> mode http
> default_backend websitemanager
> 
> backend websitemanager
> mode http
> log global
> balance roundrobin
> option httpchk GET /health/ HTTP/1.0
> http-check expect ! string false
>   acl debug_headers var(txn.x_real_ip) xxx.xxx.xxx.xxx
> acl debug_headers var(txn.x_haproxy_key) -m str -i xxx
> acl debug_headers var(txn.referer) -m sub -i haproxy-key=xxx
> acl debug_headers var(txn.url) -m sub -i haproxy-key=xxx
> http-response set-header X-HAProxy-Frontend-Name "%f" if debug_headers
> http-response set-header X-HAProxy-Frontend-Socket "%fi:%fp" if 
> debug_headers
> http-response set-header X-HAProxy-Backend-Group "%b" if debug_headers
> http-response set-header X-HAProxy-Backend-Name "%s" if debug_headers
> http-response set-header X-HAProxy-Backend-Socket "%si:%sp" if 
> debug_headers
> http-response set-header X-HAProxy-Via "%H" if debug_headers
> http-response set-header X-HAProxy-TerminationState "%ts" if 
> debug_headers
> http-response set-header X-Real-IP "%[var(txn.x_real_ip)]"
> server gc-certmgr-live-1 10.0.0.49:80 check observe layer7 on-error 
> mark-down slowstart 10s weight 100
> server gc-certmgr-live-2 10.0.0.50:80 check observe layer7 on-error 
> mark-down slowstart 10s weight 100
> server gc-certmgr-live-3 10.0.0.51:80 check observe layer7 on-error 
> mark-down slowstart 10s weight 100
> 
> And many other fronted/backend combo with the same configuration (The same 
> ACL).
> Basically, I want the X-HAProxy headers to appears in any of the following 
> condition :
> - The connection is coming from HQ (Specific X-Real-IP header)
> - The header X-HAProxy-Key header is present and set to the correct key
> - The Referer contains the key
> - The URL contains the key as parameter
> 
> I have a nginx in front of this setup, that is setting up the X-Real-IP.
> I’ve checked the logs, and the connection is forwarded to HAProxy in all the 
> cases, so nginx is not the cause of the issue (Or at least it’s still 
> forwarding to HAProxy)
> 
> 
> Almost half of the requests are failing the ACL where they should work 
> without issue (because the source IP matches or because of the connection 
> string).
> 
> It’s completely random, I have no idea why it’s doing that.
> 
> What could be the cause ? I could not find much googling for this issue.
> 
> My version is HA-Proxy version 1.7.2-6edf8f-4
> 
> Thank you.
> Best regards,
> Mathieu
> 



Re: http-send-name-header for response?

2017-02-09 Thread Daniel Schneller
Hi!

I know this is not exactly what you want, but as your example does not show a 
persistence cookie in use, you could use one for this purpose.
See https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-cookie 
You could also delete it from the request in the frontend on the way in to 
prevent the request from actually sticking to a single server.
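
A rough sketch of that idea, based on the backend from your mail (the cookie 
name and the “insert indirect nocache” options are my assumptions, untested):

backend production
    balance roundrobin
    # haproxy adds "Set-Cookie: SRVID=prod_N" to responses; "indirect"
    # keeps the cookie from being forwarded to the backend servers
    cookie SRVID insert indirect nocache
    server prod_1 192.168.1.10:80 cookie prod_1 weight 50 maxconn 150 check inter 1m
    server prod_2 192.168.1.20:80 cookie prod_2 weight 50 maxconn 150 check inter 1m
    server prod_3 192.168.1.30:80 cookie prod_3 weight 50 maxconn 150 check inter 1m

To really avoid stickiness you would still have to strip that cookie from 
incoming requests in the frontend, as described above.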

Daniel

-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 9. Feb. 2017, at 17:32, Mark Staudinger <mark.staudin...@nyi.net> wrote:
> 
> Hi Folks,
> 
> Given a setup where I have a backend like so:
> 
> backend production
>balance roundrobin
>hash-type consistent
>http-check expect status 200
>option httpchk GET /\ HTTP/1.1\r\nHost:\ myhost.net\r\nUser-agent:\ 
> healthcheck\r\nConnection:\ close
>   server prod_1   192.168.1.10:80 weight 50 maxconn 150 check inter 1m
>   server prod_2   192.168.1.20:80 weight 50 maxconn 150 check inter 1m
>   server prod_3   192.168.1.30:80 weight 50 maxconn 150 check inter 1m
> 
> I'd like to report which of the servers handled this particular request, by 
> way of HTTP response header.  For a variety of reasons, this isn't best done 
> by the backend servers themselves.
> 
> I was eager to try this:
> 
>   http-send-name-header Origin-Server
> 
> but it appears this sends the name to the backend as a request header.  Is 
> there a similar feature that will do this with a response header, or some 
> combination of http-response set-header that will perform the equivalent?  
> I'm looking to return (to the frontend and then on the client) something like
> 
> Origin-Server: prod_2
> 
> Best Regards,
> Mark Staudinger
> 



Re: Debug Log: Response headers logged before rewriting

2017-02-07 Thread Daniel Schneller
Hello everyone!

While I have since figured out what my original problem was, the original 
question remains.

Is this intentional, am I missing something, or both? :)

Cheers,
Daniel


> On 3. Feb. 2017, at 13:40, Daniel Schneller 
> <daniel.schnel...@centerdevice.com> wrote:
> 
> Hi there!
> 
> I am currently trying to figure out a problem with request and response header 
> rewriting.
> To make things easier I run haproxy in debug mode, so I get the client/server 
> conversation all dumped to my terminal.
> I am wondering, however, if I am missing something, because apparently the 
> output of the response shows only what the backend server sent in response to 
> a request, but any changes I make to the response headers are not to be seen 
> in haproxy’s output. 
> 
> In my case I have a 
> 
> http-response replace-header Location '(http|https):\/\/my.domain\/(.*)' '/\2'
> 
> which appears to work, because the client gets the rewritten response, but 
> the debug output looks like this (somewhat redacted)
> 
> 002:front.accept(000b)=0012 from [1.2.3.4:62699]
> 002:front.clireq[0012:]: GET 
> /authorize?client_id=xxx&redirect_uri=yyy&state=zzz&response_type=code 
> HTTP/1.1
> 002:front.clihdr[0012:]: Host: my.domain
> 
> 
> 002:back.srvrep[0012:0013]: HTTP/1.1 302 Found
> 002:back.srvhdr[0012:0013]: Server: Apache-Coyote/1.1
> 002:back.srvhdr[0012:0013]: Location: 
> https://my.domain/login?client_id=xxx&redirect_uri=yyy&response_type=code
>   ^
>   | to be removed |
> 
> 
> 003:front.clireq[0012:0013]: GET 
> /login?client_id=xxx&redirect_uri=yyy&response_type=code HTTP/1.1
>^^^
> | obviously removed
> 
> 003:front.clihdr[0012:0013]: Host: my.domain
> …
> 
> 
> This is just one of the rewrites that happen, and it makes things more 
> cumbersome to debug, because I need to capture both the server’s and the 
> client’s logs and merge them together.
> 
> Is there a switch or config setting I am missing that would show what the 
> server actually puts on the wire towards the client?
> 
> Thanks
> Daniel
> 
> 
> 
> -- 
> Daniel Schneller
> Principal Cloud Engineer
> 
> CenterDevice GmbH  | Hochstraße 11
>   | 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
> 




Debug Log: Response headers logged before rewriting

2017-02-03 Thread Daniel Schneller
Hi there!

I am currently trying to figure out a problem with request and response header 
rewriting.
To make things easier I run haproxy in debug mode, so I get the client/server 
conversation all dumped to my terminal.
I am wondering, however, if I am missing something, because apparently the 
output of the response shows only what the backend server sent in response to a 
request, but any changes I make to the response headers are not to be seen in 
haproxy’s output. 

In my case I have a 

http-response replace-header Location '(http|https):\/\/my.domain\/(.*)' '/\2'

which appears to work, because the client gets the rewritten response, but the 
debug output looks like this (somewhat redacted)

002:front.accept(000b)=0012 from [1.2.3.4:62699]
002:front.clireq[0012:]: GET 
/authorize?client_id=xxx&redirect_uri=yyy&state=zzz&response_type=code HTTP/1.1
002:front.clihdr[0012:]: Host: my.domain


002:back.srvrep[0012:0013]: HTTP/1.1 302 Found
002:back.srvhdr[0012:0013]: Server: Apache-Coyote/1.1
002:back.srvhdr[0012:0013]: Location: 
https://my.domain/login?client_id=xxx&redirect_uri=yyy&response_type=code
  ^
  | to be removed |


003:front.clireq[0012:0013]: GET 
/login?client_id=xxx&redirect_uri=yyy&response_type=code HTTP/1.1
^^^
 | obviously removed

003:front.clihdr[0012:0013]: Host: my.domain
…


This is just one of the rewrites that happen, and it makes things more 
cumbersome to debug, because I need to capture both the server’s and the 
client’s logs and merge them together.

Is there a switch or config setting I am missing that would show what the 
server actually puts on the wire towards the client?

Thanks
Daniel



-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431





TLS certificate precedence

2017-01-25 Thread Daniel Schneller
Hi!

From the (1.6) configuration documentation I understand that for the “crt” bind 
option all files in a directory will be read in alphabetical order (exclusions 
through reserved extensions notwithstanding).

It goes on to say

> The certificates will be presented to clients who provide a
> valid TLS Server Name Indication field matching one of their CN or alt
> subjects.  Wildcards are supported, where a wildcard character '*' is used
> instead of the first hostname component […]
I am wondering what the precedence is if there are two certificates matching a 
particular domain.

Say I have two certificates available, one wildcard, and one Extended 
Validation cert, named like this:

cert_001.wildcard.mydomain.com.pem
cert_002.www.mydomain.crt.pem

and a configuration like this

> frontend web_ssl-sni-based
>   bind 192.168.205.7:452 ssl crt /etc/haproxy/ssl/

Am I correct to assume (unfortunately I cannot try this out right now) that if 
a request comes in for “www.mydomain.com” it will get served with the wildcard 
certificate, because that one sorts first by filename? Or is there some 
precedence implementation that would prefer the more specific cert where the 
domain actually matches one of the CN / SAN fields?

Thanks,
Daniel



-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431




Re: HAproxy / Reverse proxy Debian

2017-01-12 Thread Daniel Schneller
> This email server do have ssl/TLS activated.

As I expected. Apparently that iRedMail server uses nginx. 
Right now, if you talk to haproxy, it decrypts the traffic and then sends it on 
to nginx in plain text. However, on that port nginx expects encrypted traffic — 
hence your 400 error message.

If you want to configure TLS on the mail server / web server itself, there is 
no need to configure haproxy for TLS at all. 
Switch it to TCP mode and remove the TLS configuration. That way it will just 
hand the still-encrypted traffic over to nginx.
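
A sketch of what that could look like, reusing the names from your config 
(untested; note that the reqadd and redirect lines have to go, as they only 
work in HTTP mode):

frontend email-https
    bind *:444
    mode tcp
    default_backend https-email

backend https-email
    mode tcp
    server email_hostname ip_email_server:888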




-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 12. Jan. 2017, at 14:30, Thierry <lenai...@maelenn.org> wrote:
> 
> Bonjour Daniel,
> 
> I am not sure to understand.
> I am using iRedMail as email server.
> This email server do have ssl/TLS activated.
> 
> **
> 
> listen 888 http2;
>ssl on;
>ssl_certificate /etc/ssl/certs/cert.chained.crt;
>ssl_certificate_key /etc/ssl/private/cert.key;
>ssl_trusted_certificate /etc/ssl/certs/GandiStandardSSLCA2.pem;
>ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
>include /etc/nginx/sslciphers.conf;
>add_header Strict-Transport-Security "max-age=15768000";
>ssl_prefer_server_ciphers on;
>ssl_dhparam /etc/ssl/dhparams.pem;
>ssl_stapling on;
>ssl_stapling_verify on;
>resolver 8.8.8.8 8.8.4.4 valid=300s;
>resolver_timeout 10s;
> 
> *
> 
> My email client do work well with these certificates and if I change the NAT 
> of my router, I can reach the email web interface (Sogo) through HTTPS 
> request.
> Why is not possible to pass HTTPS trafic from the HAproxy to my email server 
> ? Will be the same pb with my web server ..
> 
> Thx
> 
> 
> Le jeudi 12 janvier 2017 à 15:16:57, vous écriviez :
> 
> 
> Sounds as if you have nginx set up for TLS termination, too.
> This does not make sense, because haproxy will already have decrypted the 
> traffic.
> Make sure nginx does not expect https on what in your config would be 
> ip_email_server:888.
> 
> 
> 
> -- 
> Daniel Schneller
> Principal Cloud Engineer
> 
> CenterDevice GmbH  | Hochstraße 11
>   | 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
> 
> 
> On 12. Jan. 2017, at 14:14, Thierry <lenai...@maelenn.org> wrote:
> 
> Re: HAproxy / Reverse proxy Debian 
> Bonjour Daniel,
> 
> I have resolved my problem, HAProxy does start now (ssl ok).
> But when trying to reach my email server, I now get a:
> 
> 400 Bad Request - The plain HTTP request was sent to HTTPS port - Nginx
> 
> It should not be the case, because 'reqadd x-forwarded-proto:\ https' is 
> supposed to correct this?? And with 'redirect scheme https if !{ ssl_fc }' it 
> should be 100% full HTTPS.
> 
> frontend email-https
>   bind *:444 ssl crt /etc/ssl/private/full_certs.crt
>   reqadd X-Forwarded-Proto:\ https
>   default_backend https-email
> 
> backend https-email
>   redirect scheme https if !{ ssl_fc }
>   server email_hostname ip_email_server:888
> 
> Thx
> 
> 
> 
> 
> 
> Le jeudi 12 janvier 2017 à 14:44:19, vous écriviez :
> 
> 
> Re-adding the list.
> 
> And:
> 
> 
> Do I have to "cat file.key file.crt file.pem > certi.chained.crt" ??
> 
> Yes. Though I am not sure what file.crt and file.pem are :)
> 
> 
> 
> 
> 
> Cheers,
> Daniel
> 
> 
> -- 
> Daniel Schneller
> Principal Cloud Engineer
> 
> CenterDevice GmbH  | Hochstraße 11
>  | 42697 Solingen
> tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
> 
> 
> On

Re: HAproxy / Reverse proxy Debian

2017-01-12 Thread Daniel Schneller
Sounds as if you have nginx set up for TLS termination, too.
This does not make sense, because haproxy will already have decrypted the 
traffic.
Make sure nginx does not expect https on what in your config would be 
ip_email_server:888.



-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 12. Jan. 2017, at 14:14, Thierry <lenai...@maelenn.org> wrote:
> 
> Bonjour Daniel,
> 
> I have resolved my problem, HAProxy does start now (ssl ok).
> But when trying to reach my email server, I now get a:
> 
> 400 Bad Request - The plain HTTP request was sent to HTTPS port - Nginx
> 
> It should not be the case, because 'reqadd x-forwarded-proto:\ https' is 
> supposed to correct this?? And with 'redirect scheme https if !{ ssl_fc }' it 
> should be 100% full HTTPS.
> 
> frontend email-https
>bind *:444 ssl crt /etc/ssl/private/full_certs.crt
>reqadd X-Forwarded-Proto:\ https
>default_backend https-email
> 
> backend https-email
>redirect scheme https if !{ ssl_fc }
>server email_hostname ip_email_server:888
> 
> Thx
> 
> 
> 
> 
> 
> Le jeudi 12 janvier 2017 à 14:44:19, vous écriviez :
> 
> 
> Re-adding the list.
> 
> And:
> 
> 
> Do I have to "cat file.key file.crt file.pem > certi.chained.crt" ??
> 
> Yes. Though I am not sure what file.crt and file.pem are :)
> 
> 
> 
> 
> Cheers,
> Daniel
> 
> 
> -- 
> Daniel Schneller
> Principal Cloud Engineer
> 
> CenterDevice GmbH  | Hochstraße 11
>   | 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
> 
> 
> On 12. Jan. 2017, at 13:27, Thierry <lenai...@maelenn.org> wrote:
> 
> Hi,
> 
> You are right, I am using v1.7.1-1 on Debian.
> I do have a paid SSL certificate (.key, .crt, .pem). They are all in a 
> non-world-readable folder.
> Do I have to "cat file.key file.crt file.pem > certi.chained.crt" ??
> 
> Thx
> 
> 
> Thierry,
> 
> 
> 
> It always helps to know the haproxy version you use.
> As for your error message, do you have the private key, your site’s
> certificate and all necessary chain certificates in the crt files you 
> reference in your config?
> 
> 
> 
> IIRC they need to be in the order 
> 
> 
> 
> 1. key
> 2. site cert (“leaf”)
> 3. intermediates
> 
> 
> 
> Make sure to have these files not world-readable as they contain secret 
> crypto material.
> 
> 
> 
> HTH,
> Daniel
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> Cordialement,
> Thierry    e-mail : lenai...@maelenn.org


Re: HAproxy / Reverse proxy Debian

2017-01-12 Thread Daniel Schneller
Re-adding the list.

And:

> Do I have to "cat file.key file.crt file.pem > certi.chained.crt" ??

Yes. Though I am not sure what file.crt and file.pem are :)

Cheers,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 12. Jan. 2017, at 13:27, Thierry <lenai...@maelenn.org> wrote:
> 
> Hi,
> 
> You are right, I am using v1.7.1-1 on Debian.
> I do have a paid SSL certificate (.key, .crt, .pem). They are all in a 
> non-world-readable folder.
> Do I have to "cat file.key file.crt file.pem > certi.chained.crt" ??
> 
> Thx
> 
>> Thierry,
> 
> 
>> It always helps to know the haproxy version you use.
>> As for your error message, do you have the private key, your site’s
>> certificate and all necessary chain certificates in the crt files you 
>> reference in your config?
> 
> 
>> IIRC they need to be in the order 
> 
> 
>> 1. key
>> 2. site cert (“leaf”)
>> 3. intermediates
> 
> 
>> Make sure to have these files not world-readable as they contain secret 
>> crypto material.
> 
> 
>> HTH,
>> Daniel
> 
> 
> 
> 
> 
> 



Re: HAproxy / Reverse proxy Debian

2017-01-12 Thread Daniel Schneller
Thierry,

It always helps to know the haproxy version you use.
As for your error message, do you have the private key, your site’s certificate and 
all necessary chain certificates in the crt files you reference in your config?

IIRC they need to be in the order 

1. key
2. site cert (“leaf”)
3. intermediates

Make sure to have these files not world-readable as they contain secret crypto 
material.
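
For example (the file names are assumptions; use whatever your CA delivered):

# key first, then the leaf certificate, then the intermediates
cat site.key site.crt intermediate.pem > /etc/haproxy/ssl/site.chained.pem
chmod 600 /etc/haproxy/ssl/site.chained.pem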

HTH,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Hochstraße 11
   | 42697 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de

Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
Michael Rosbach, Handelsregister-Nr.: HRB 18655,
HR-Gericht: Bonn, USt-IdNr.: DE-815299431


> On 12. Jan. 2017, at 09:23, Thierry <lenai...@maelenn.org> wrote:
> 
> no SSL certificate specified for bind '*:888' at 
> [/etc/haproxy/haproxy.cfg:52] (use 'crt')



Re: Bytes in / out counters for TCP Keepalive Sessions

2016-09-15 Thread Daniel Schneller
Hello again!

I introduced the option for my RabbitMQ frontend (and took the chance to 
update to 1.6.9 at the same time :)), but the counters on the stats page 
still do not really show anything usable.

I came across this old thread, discussing that the option is basically broken 
since 1.3.xx and that there is no good way to fix it.

Is there any chance of it returning, or should it at least be marked as broken 
in the docs, maybe even trigger a warning on startup?

http://www.serverphorums.com/read.php?10,747628

Thanks :)
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Merscheider Straße 1
   | 42699 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de




> On 08.09.2016, at 21:12, Daniel Schneller <daniel.schnel...@centerdevice.com> 
> wrote:
> 
> 
> Adding the list back. Sorry for dropping it earlier. 
> 
> 
> On 8 Sep 2016, at 19:56, PiBa-NL <piba.nl@gmail.com> wrote:
> 
>> Hi,
>> Op 8-9-2016 om 15:43 schreef Daniel Schneller:
>>>> http://cbonte.github.io/haproxy-dconv/1.7/snapshot/configuration.html#4.2-option%20contstats
>>> Indeed, that sounds like it. So, 1.6 would not have helped me here ;)
>>> But good to know that this is the expected behavior.
>> Just for clarity.. despite my link pointing to a 1.7 manual page, 1.4 
>> already has that same contstats option available for you to use. 
>> http://cbonte.github.io/haproxy-dconv/1.4/configuration.html#option contstats
>> Regards,
>> PiBa-NL
> 
> Damn. Thanks for pointing that out again, I did not even think to search for 
> it in older doc releases! Very cool. :)
> 
> Daniel
> 



Re: Bytes in / out counters for TCP Keepalive Sessions

2016-09-08 Thread Daniel Schneller

Adding the list back. Sorry for dropping it earlier. 


> On 8 Sep 2016, at 19:56, PiBa-NL <piba.nl@gmail.com> wrote:
> 
> Hi,
> Op 8-9-2016 om 15:43 schreef Daniel Schneller:
>>> http://cbonte.github.io/haproxy-dconv/1.7/snapshot/configuration.html#4.2-option%20contstats
>> Indeed, that sounds like it. So, 1.6 would not have helped me here ;)
>> But good to know that this is the expected behavior.
> Just for clarity.. despite my link pointing to a 1.7 manual page, 1.4 already 
> has that same contstats option available for you to use. 
> http://cbonte.github.io/haproxy-dconv/1.4/configuration.html#option contstats
> Regards,
> PiBa-NL

Damn. Thanks for pointing that out again, I did not even think to search for it 
in older doc releases! Very cool. :)
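
For reference, the frontend from my original mail with the option enabled would 
look roughly like this (untested):

frontend fe_rabbitmq
    bind 192.168.205.7:5672
    mode tcp
    option clitcpka
    # update the byte counters continuously instead of only at session end
    option contstats
    timeout client 3h
    default_backend be_rabbitmq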

Daniel



Bytes in / out counters for TCP Keepalive Sessions

2016-09-07 Thread Daniel Schneller
Hello!

We have just placed haproxy (1.5) in front of our RabbitMQ servers.
The statistics show no change in bytes in / bytes out until
a connection is closed — which in Rabbit’s case should be virtually never.

The configuration looks like this:

frontend fe_rabbitmq
  bind 192.168.205.7:5672
  timeout client  3h
  mode tcp
  option clitcpka
  default_backend be_rabbitmq

backend be_rabbitmq
  mode tcp
  timeout server  3h
  server app-m-03 app-m-03:5672 check on-marked-down shutdown-sessions
  server app-m-02 app-m-02:5672 check on-marked-down shutdown-sessions
  server app-m-01 app-m-01:5672 check on-marked-down shutdown-sessions


Is this the expected behavior? If so, is there any configuration option
we can change to show “live” stats of bytes flowing through the persistent
connections?

Thanks!
Daniel



-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Merscheider Straße 1
   | 42699 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de







Re: HTTP 429 Too Many Requests

2016-06-24 Thread Daniel Schneller
Thank you very much. That will be a good opportunity to work with the Lua 
functionality. As the value for the Retry-After header should be variable for 
different situations, the error file would not help; but for simple scenarios 
it will be perfectly fine, leaving the right information in the logs and being 
nice and readable :-)
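
For anyone following along: the Lua service from Cyril’s example below lives in 
a file that gets loaded in the global section, roughly like this (the file path 
is an assumption):

global
    lua-load /etc/haproxy/shaping.lua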


> On 24 Jun 2016, at 23:13, Cyril Bonté <cyril.bo...@free.fr> wrote:
> 
>> Le 24/06/2016 à 22:57, Daniel Schneller a écrit :
>> That is indeed pretty cool :-)
>> Would the addition of a header work the way I originally suggested, though?
> 
> Only by adding an errorfile for 429 status.
> Or you can play with lua !
> For example :
>  http-request use-service lua.shaping if 
> 
> and the lua service :
> core.register_service("shaping", "http", function(applet)
>   applet:set_status(429)
>   applet:add_header("Content-Type", "text/plain")
>   applet:add_header("Retry-After", "60")
>   applet:start_response()
>   applet:send("Please come back later")
> end )
> 
> 
>> 
>>> On 24 Jun 2016, at 21:57, Cyril Bonté <cyril.bo...@free.fr> wrote:
>>> 
>>> Hi all,
>>> 
>>>> Le 24/06/2016 à 21:33, James Brown a écrit :
>>>> +1 I am also using a fake backend with no servers and a 503 errorfile,
>>>> and it confuses everybody who looks at the config or the metrics. Being
>>>> able to directly emit a 429 would be fantastic.
>>> 
>>> Interestingly, it already exists since 1.6-dev2 [1] for "http-request deny" 
>>> but the documentation is absolutely missing. And it has recently been fixed 
>>> by Willy [2].
>>> 
>>> Another point is that everything in the code seems to be ready to use the 
>>> same option with tarpit... except the configuration parser.
>>> 
>>> The syntax is :
>>> http-request deny [deny_status ]
>>> 
>>> Example :
>>> http-request deny deny_status 429
>>> 
>>> [1] 
>>> http://www.haproxy.org/git?p=haproxy-1.6.git;a=commit;h=108b1dd69d4e26312af465237487bdb855b0de60
>>> [2] 
>>> http://www.haproxy.org/git?p=haproxy-1.6.git;a=commit;h=60f01f8c89e4fb2723d5a9f2046286e699567e0b
>>> 
>>>> 
>>>> On Fri, Jun 24, 2016 at 10:30 AM, Daniel Schneller
>>>> <daniel.schnel...@centerdevice.com> wrote:
>>>> 
>>>>   Hello!
>>>> 
>>>>   We use haproxy as an L7 rate limiter based on tracking certain header
>>>>   fields and URLs. A more detailed description of what we do can be found
>>>>   in a blog post I wrote about this some time ago:
>>>> 
>>>>   https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting
>>>> 
>>>>   Our exact setup has changed a bit since then, but the gist remains the
>>>>   same:
>>>> 
>>>>   * Calculate the rate of requests by tracking those with identical
>>>>authorization header values
>>>>   * If they exceed a threshold, slow the client down (tarpit) and ask
>>>>them to come back after a certain period by sending them HTTP 429:
>>>> 
>>>>HTTP/1.1 429 Too Many Requests
>>>>Cache-Control: no-cache
>>>>Connection: close
>>>>Content-Type: text/plain
>>>>Retry-After: 60
>>>> 
>>>>Too Many Requests (HAP429).
>>>> 
>>>>   I am currently refactoring our haproxy config to make it more readable
>>>>   and maintainable; while doing so, I would like to get rid of the
>>>>   somewhat crude pseudo backend in which I specify the errorfile for
>>>>   status code 500, replacing 500 with 429 when sending it out to the
>>>>   client. This, of course, leads to the status code being 500 in the logs
>>>>   and other inconveniences.
>>>> 
>>>>   My suggestion about how to handle this would be an extension to the
>>>>   "http-request deny" directive. Currently it will always respond with
>>>>   HTTP status code 403. If there were a configuration setting allowing me
>>>>   to specify a different code (like 429 in my case) as the reason for the
>>>>   rejection, that would be an elegant solution. Using an "http-request
>>>>   set-header" would even allow me to specify different values for the
>>>>   "Retry-After:" header to inform well-written clients after which time
>>>>   they should come back and try again.
>>>> 
>>>>   Does that sound like a sensible addition?
>>>> 
>>>>   Cheers,
>>>>   Daniel
>>>> 
>>>> 
>>>> 
>>>>   --
>>>>   Daniel Schneller
>>>>   Principal Cloud Engineer
>>>> 
>>>>   CenterDevice GmbH
>>>>   https://www.centerdevice.de
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> James Brown
>>>> Engineer
>>> 
>>> 
>>> --
>>> Cyril Bonté
> 
> 
> -- 
> Cyril Bonté



Re: HTTP 429 Too Many Requests

2016-06-24 Thread Daniel Schneller
That is indeed pretty cool :-)
Would the addition of a header work the way I originally suggested, though?

> On 24 Jun 2016, at 21:57, Cyril Bonté <cyril.bo...@free.fr> wrote:
> 
> Hi all,
> 
>> Le 24/06/2016 à 21:33, James Brown a écrit :
>> +1 I am also using a fake backend with no servers and a 503 errorfile,
>> and it confuses everybody who looks at the config or the metrics. Being
>> able to directly emit a 429 would be fantastic.
> 
> Interestingly, it already exists since 1.6-dev2 [1] for "http-request deny" 
> but the documentation is absolutely missing. And it has recently been fixed 
> by Willy [2].
> 
> Another point is that everything in the code seems to be ready to use the 
> same option with tarpit... except the configuration parser.
> 
> The syntax is :
>  http-request deny [deny_status ]
> 
> Example :
>  http-request deny deny_status 429
> 
> [1] 
> http://www.haproxy.org/git?p=haproxy-1.6.git;a=commit;h=108b1dd69d4e26312af465237487bdb855b0de60
> [2] 
> http://www.haproxy.org/git?p=haproxy-1.6.git;a=commit;h=60f01f8c89e4fb2723d5a9f2046286e699567e0b
> 
>> 
>> On Fri, Jun 24, 2016 at 10:30 AM, Daniel Schneller
>> <daniel.schnel...@centerdevice.com> wrote:
>> 
>>Hello!
>> 
>>We use haproxy as an L7 rate limiter based on tracking certain header
>>fields and URLs. A more detailed description of what we do can be found
>>in a blog post I wrote about this some time ago:
>> 
>>https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting
>> 
>>Our exact setup has changed a bit since then, but the gist remains the
>>same:
>> 
>>* Calculate the rate of requests by tracking those with identical
>> authorization header values
>>* If they exceed a threshold, slow the client down (tarpit) and ask
>> them to come back after a certain period by sending them HTTP 429:
>> 
>> HTTP/1.1 429 Too Many Requests
>> Cache-Control: no-cache
>> Connection: close
>> Content-Type: text/plain
>> Retry-After: 60
>> 
>> Too Many Requests (HAP429).
>> 
>>I am currently refactoring our haproxy config to make it more readable
>>and maintainable; while doing so, I would like to get rid of the
>>somewhat crude pseudo backend in which I specify the errorfile for
>>status code 500, replacing 500 with 429 when sending it out to the
>>client. This, of course, leads to the status code being 500 in the logs
>>and other inconveniences.
>> 
>>My suggestion about how to handle this would be an extension to the
>>"http-request deny" directive. Currently it will always respond with
>>HTTP status code 403. If there were a configuration setting allowing me
>>to specify a different code (like 429 in my case) as the reason for the
>>rejection, that would be an elegant solution. Using an "http-request
>>set-header" would even allow me to specify different values for the
>>"Retry-After:" header to inform well-written clients after which time
>>they should come back and try again.
>> 
>>Does that sound like a sensible addition?
>> 
>>Cheers,
>>Daniel
>> 
>> 
>> 
>>--
>>Daniel Schneller
>>Principal Cloud Engineer
>> 
>>CenterDevice GmbH
>>https://www.centerdevice.de
>> 
>> 
>> 
>> 
>> 
>> 
>> --
>> James Brown
>> Engineer
> 
> 
> -- 
> Cyril Bonté



HTTP 429 Too Many Requests

2016-06-24 Thread Daniel Schneller

Hello!

We use haproxy as an L7 rate limiter based on tracking certain header
fields and URLs. A more detailed description of what we do can be found
in a blog post I wrote about this some time ago: 

https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting

Our exact setup has changed a bit since then, but the gist remains the
same:

* Calculate the rate of requests by tracking those with identical
 authorization header values
* If they exceed a threshold, slow the client down (tarpit) and ask
 them to come back after a certain period by sending them HTTP 429:

 HTTP/1.1 429 Too Many Requests
 Cache-Control: no-cache
 Connection: close
 Content-Type: text/plain
 Retry-After: 60

 Too Many Requests (HAP429).
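
A stripped-down sketch of the tracking part (table size, expiry, and the 
threshold of 100 requests are placeholders; the deny line uses the deny_status 
syntax that, as Cyril points out in the replies, exists since 1.6-dev2):

frontend fe_api
    stick-table type string len 64 size 100k expire 2m store http_req_rate(60s)
    http-request track-sc0 req.hdr(Authorization)
    acl over_limit sc0_http_req_rate gt 100
    http-request deny deny_status 429 if over_limit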

I am currently refactoring our haproxy config to make it more readable
and maintainable; while doing so, I would like to get rid of the
somewhat crude pseudo backend in which I specify the errorfile for
status code 500, replacing 500 with 429 when sending it out to the
client. This, of course, leads to the status code being 500 in the logs
and other inconveniences.

My suggestion about how to handle this would be an extension to the
"http-request deny" directive. Currently it will always respond with
HTTP status code 403. If there were a configuration setting allowing me
to specify a different code (like 429 in my case) as the reason for the
rejection, that would be an elegant solution. Using an "http-request
set-header" would even allow me to specify different values for the
"Retry-After:" header to inform well-written clients after which time
they should come back and try again.

Does that sound like a sensible addition?

Cheers,
Daniel



--
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH
https://www.centerdevice.de





Re: Compilation problem: haproxy 1.6.5 (latest) on Solaris 11

2016-05-19 Thread Daniel Schneller
On the http://www.haproxy.org homepage there is a 
link to each version’s repo.


Cheers,
Daniel



> On 19.05.2016, at 15:30, Jonathan Fisher  wrote:
> 
> Cool, thanks! 
> 
> Where is the git repo for haproxy? having trouble finding the official one, 
> all I can find is a mirror on github.
> 
> On Thu, May 19, 2016 at 1:33 AM, Vincent Bernat wrote:
>  ❦ 18 mai 2016 22:56 +0200, Pavlos Parissis :
> 
> >> Also, where is the bugtracker for haproxy? I can file a report if you
> >> want to save time.
> >
> > As far as I know there isn't any bugtracker. Posting problems in this
> > ML is enough to kick the investigation. So far this model works quite
> > well
> 
> Yes, Willy will notice the patch at some point and maybe merge it if
> he's OK with it.
> --
> Habit is habit, and not to be flung out of the window by any man, but coaxed
> down-stairs a step at a time.
> -- Mark Twain, "Pudd'nhead Wilson's Calendar
> 
> 
> 
> -- 
> 
> Jonathan S. Fisher
> Senior Software Engineer
> https://twitter.com/exabrial 
> http://www.tomitribe.com 
> https://www.tomitribe.io 


Re: nbproc 1 vs >1 performance

2016-04-14 Thread Daniel Schneller
Trying not to hijack the thread here, but it seems to fit well in the context:

Does this mean that the following could happen due to the difference between 
the BSD and Linux SO_REUSEPORT semantics:

1. haproxy process “A” binds, say, port 1234
2. client A connects to 1234 and keeps the connection open
3. /etc/init.d/haproxy restart
4. haproxy process “B” starts and _also_ binds 1234
5. haproxy “A” is still around, due to client A
6. client B connects to 1234

Am I right to assume that client B can be handled by _either_ haproxy “A” or 
“B” depending on the hash result underneath SO_REUSEPORT’s implementation? If 
so, that would explain some issues I had in the past when quickly iterating 
config changes and restarting haproxy each time, but sometimes getting results 
that could only have come from an older config.

Thanks,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
CenterDevice GmbH


> On 14.04.2016, at 12:01, Willy Tarreau <w...@1wt.eu> wrote:
> 
> On Thu, Apr 14, 2016 at 10:17:10AM +0200, Willy Tarreau wrote:
>> So I guess that indeed, if not all the processes a frontend is bound to
>> have a corresponding bind line, this can cause connection issues as some
>> incoming connections will be distributed to queues that nobody listens to.
> 
> I said rubish here. It's the same socket that is shared between all these
> processes, so there's no issue with all of them picking from the same queue
> even if in the end only one picks it. However I want to fix it to make things
> cleaner and easier to debug and observe (and not fool netstat as the previous
> example showed).
> 
> Willy
> 



Re: CIDR Notation in ACL -- silent failure

2016-04-12 Thread Daniel Schneller
> On 12.04.2016, at 14:07, Willy Tarreau <w...@1wt.eu> wrote:
>> I will at least provide a documentation patch then, soon.
> OK.

As promised, a few words, hopefully clarifying things in the docs.

0001-DOC-Clarify-IPv4-address-mask-notation-rules.patch
Description: Binary data

Cheers,
Daniel

Re: CIDR Notation in ACL -- silent failure

2016-04-12 Thread Daniel Schneller
Hi Willy!

Thanks for looking into this. As mentioned in an earlier post, I don’t have any 
relevant C skills (but have been writing Java and other languages); but still I 
went into the code, telling myself “how hard could it be to add a warning for 
less than three dots with a mask”. I quickly started to doubt myself when I 
tried to understand what’s going on. I am glad you being intimately familiar 
with that code come to the same conclusion as I did, namely that it is not 
trivial to add that warning :)

I will at least provide a documentation patch then, soon.
However two things I wanted to bring up, still:

> Also the address parser supports host names. So you can very well have
> foo/31 where foo resolves to 192.168.0.42 and it will work.


Is this a serious use case? I can only imagine this is a recipe for disaster, 
ranging from multiple A records for a DNS entry to complete opacity when doing 
this with anything other than a /32 mask. So this alone would IMO warrant a 
warning being issued :)

> The problem is that in IPv4, 192.168.42 is host 42 on network
> 192.168 so it is in fact 192.168.0.42. Thus in cases where we support
> both addresses and networks, 192.168.42/X is ambigous. For example,
> someone could have a first LB on 192.168.0.42 and write a rule based
> on "192.168.42". Then the second host comes and instead of creating
> a new entry with 192.168.43, he rightfully appends a /31 mask and
> this gives : 192.168.42/31. The problem is that for the user, this
> means 192.168.0.42/31 while according to the RFC above it would rather
> mean 192.168.42.0/31.

Same here: I must admit I just learned from you that this is a working notation 
for IPv4.

But IMO there is a far smaller chance that this is practically used than the 
CIDR notation mentioned above, making the config valid, but much harder to 
read. 

So while being technically valid (just as 0x3ed63e9f or 1054228127 would be — 
both equivalent to one of Google.com’s public IPv4 addresses 62.214.62.159) 
more often than not I imagine those will be typos or other accidental mistakes 
in config files.

I might be alone here, but I believe a warning (not a failure) about these 
rather unorthodox notations being used would improve things :)

Thoughts?

Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Merscheider Straße 1
   | 42699 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de




> On 12.04.2016, at 11:59, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi guys,
> 
> On Sat, Apr 09, 2016 at 03:38:39PM +0200, Pavlos Parissis wrote:
>> On 09/04/2016 02:59 , Daniel Schneller wrote:
>>> Hi Pavlos!
>>> 
>>>> On 09.04.2016, at 11:39, Pavlos Parissis
>>>> <pavlos.paris...@gmail.com <mailto:pavlos.paris...@gmail.com>> wrote:
>>>> 
>>>> On 08/04/2016 11:59 , Daniel Schneller wrote:
>>>>> Hi!
>>>>> 
>>>>> I noticed that while this ACL matches my source IP of
>>>>> 192.168.42.123:
>>>>> 
>>>>> acl src_internal_net src 192.168.42.0/24
>>>>> 
>>>>> this one does _not_:
>>>>> 
>>>>> acl src_internal_net src 192.168.42/24
>>>>> 
>>>>> While not strictly part of RFC 4632 (yet), leaving out trailing
>>>>> .0 octets is a very common notation and is probably going to be
>>>>> included in a future RFC update (as per Errata 1577): 
>>>>> https://www.rfc-editor.org/errata_search.php?rfc=4632&eid=1577
>>>>> 
>>>>> If there are concerns against this notation, the config parser
>>>>> should at least issue a WARNING or even ERROR about this, because
>>>>> I found it quite confusing. Especially if ACLs are used for
>>>>> actual access control, this can have nasty consequences.
>>>>> 
>>>>> What do you think?
>>>>> 
>>>> 
>>>> I had a similar discussion with a colleague for another software
>>>> and I am against it:
>>>> 
>>>> 1) In 2016 it is a bit weird to speak about classful networks
>>> 
>>> Not sure I understand what you mean. RFC 4632 is called Class*less*
>>> Inter-domain Routing (CIDR). That’s the whole point, not having fixed
>>> A/B/C sized networks. Still, especially for the RFC 1918 (Private
>>> Addresses) even the RFC itself uses the shorter notation (section
>>> 3):
>>> 
>>> The Internet Assigned Numbers Authority (IANA) has reserved the 
>>> following three blocks of the IP address space for private internets: […]

Re: CIDR Notation in ACL -- silent failure

2016-04-09 Thread Daniel Schneller
Hi Pavlos!

> On 09.04.2016, at 11:39, Pavlos Parissis <pavlos.paris...@gmail.com> wrote:
> 
> On 08/04/2016 11:59 πμ, Daniel Schneller wrote:
>> Hi!
>> 
>> I noticed that while this ACL matches my source IP of 192.168.42.123:
>> 
>> acl src_internal_net src 192.168.42.0/24
>> 
>> this one does _not_:
>> 
>> acl src_internal_net src 192.168.42/24
>> 
>> While not strictly part of RFC 4632 (yet), leaving out trailing .0 
>> octets is a very common notation and is probably going to be included 
>> in a future RFC update (as per Errata 1577): 
>> https://www.rfc-editor.org/errata_search.php?rfc=4632&eid=1577
>> 
>> If there are concerns against this notation, the config parser should 
>> at least issue a WARNING or even ERROR about this, because I found it 
>> it quite confusing. Especially if ACLs are used for actual access 
>> control, this can have nasty consequences.
>> 
>> What do you think?
>> 
> 
> I had a similar discussion with a colleague for another software and
> I am against it:
> 
> 1) In 2016 it is a bit weird to speak about classful networks

Not sure I understand what you mean. RFC 4632 is called Class*less* 
Inter-domain Routing (CIDR).
That’s the whole point, not having fixed A/B/C sized networks. Still, 
especially for the RFC 1918 (Private Addresses) even the RFC itself uses the 
shorter notation (section 3):

   The Internet Assigned Numbers Authority (IANA) has reserved the
   following three blocks of the IP address space for private internets:

 10.0.0.0-   10.255.255.255  (10/8 prefix)
 172.16.0.0  -   172.31.255.255  (172.16/12 prefix)
 192.168.0.0 -   192.168.255.255 (192.168/16 prefix)

This is from 1996, even then talking about class*less*. 
But maybe I misunderstood your point?


> 2) It may introduce ambiguity due to #2

What #2 are you referring to? My 2nd example? How would it introduce ambiguity?

Cheers,
Daniel




CIDR Notation in ACL -- silent failure

2016-04-08 Thread Daniel Schneller
Hi!

I noticed that while this ACL matches my source IP of 192.168.42.123:

acl src_internal_net src 192.168.42.0/24

this one does _not_:

acl src_internal_net src 192.168.42/24

While not strictly part of RFC 4632 (yet), leaving out trailing .0 
octets is a very common notation and is probably going to be included 
in a future RFC update (as per Errata 1577): 
https://www.rfc-editor.org/errata_search.php?rfc=4632&eid=1577

If there are concerns against this notation, the config parser should 
at least issue a WARNING or even ERROR about this, because I found it 
quite confusing. Especially if ACLs are used for actual access 
control, this can have nasty consequences.

What do you think?

Cheers,
Daniel


-- 
Daniel Schneller
Principal Cloud Engineer
CenterDevice GmbH 



Re: Segfault with stick-tables

2016-03-29 Thread Daniel Schneller
I could have thought of that before…
Here’s the valgrind info after installing the debug symbols.


root@haproxy-1:/var/crash# valgrind haproxy -d -f 
/vagrant/configs/crasht-test.cfg 
==4802== Memcheck, a memory error detector
==4802== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==4802== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==4802== Command: haproxy -d -f /vagrant/configs/crasht-test.cfg
==4802== 
Note: setting global.maxconn to 2000.
Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
[WARNING] 088/121911 (4802) : [haproxy.main()] Cannot raise FD limit to 4031.
:fe_http.accept(0005)=0007 from [192.168.0.154:59442]
:fe_http.clireq[0007:]: POST /v2/documents HTTP/1.1
:fe_http.clihdr[0007:]: Host: api.centerdevice.de
:fe_http.clihdr[0007:]: Content-Type: application/json
:fe_http.clihdr[0007:]: Connection: close
:fe_http.clihdr[0007:]: Accept: application/json
:fe_http.clihdr[0007:]: User-Agent: Paw/2.3.2 (Macintosh; OS 
X/10.11.4) ASIHTTPRequest/v1.8.1-61
:fe_http.clihdr[0007:]: Authorization: Bearer 
d9bf4d6d-945e-4cd1-a760-92a96739f260
:fe_http.clihdr[0007:]: Accept-Encoding: gzip
:fe_http.clihdr[0007:]: Content-Length: 118
==4802== Invalid read of size 8
==4802==at 0x19AAF3: smp_fetch_sc_inc_gpc0 (in /usr/sbin/haproxy)
==4802==by 0x1A0CB6: sample_process (in /usr/sbin/haproxy)
==4802==by 0x19DA43: acl_exec_cond (in /usr/sbin/haproxy)
==4802==by 0x1654F8: http_req_get_intercept_rule (in /usr/sbin/haproxy)
==4802==by 0x16A556: http_process_req_common (in /usr/sbin/haproxy)
==4802==by 0x197E0D: process_stream (in /usr/sbin/haproxy)
==4802==by 0x12CCE4: process_runnable_tasks (in /usr/sbin/haproxy)
==4802==by 0x1232CC: run_poll_loop (in /usr/sbin/haproxy)
==4802==by 0x11FB5A: main (in /usr/sbin/haproxy)
==4802==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==4802== 
==4802== 
==4802== Process terminating with default action of signal 11 (SIGSEGV)
==4802==  Access not within mapped region at address 0x0
==4802==at 0x19AAF3: smp_fetch_sc_inc_gpc0 (in /usr/sbin/haproxy)
==4802==by 0x1A0CB6: sample_process (in /usr/sbin/haproxy)
==4802==by 0x19DA43: acl_exec_cond (in /usr/sbin/haproxy)
==4802==by 0x1654F8: http_req_get_intercept_rule (in /usr/sbin/haproxy)
==4802==by 0x16A556: http_process_req_common (in /usr/sbin/haproxy)
==4802==by 0x197E0D: process_stream (in /usr/sbin/haproxy)
==4802==by 0x12CCE4: process_runnable_tasks (in /usr/sbin/haproxy)
==4802==by 0x1232CC: run_poll_loop (in /usr/sbin/haproxy)
==4802==by 0x11FB5A: main (in /usr/sbin/haproxy)
==4802==  If you believe this happened as a result of a stack
==4802==  overflow in your program's main thread (unlikely but
==4802==  possible), you can try to increase the size of the
==4802==  main thread stack using the --main-stacksize= flag.
==4802==  The main thread stack size used in this run was 8388608.
==4802== 
==4802== HEAP SUMMARY:
==4802== in use at exit: 589,450 bytes in 1,347 blocks
==4802==   total heap usage: 1,642 allocs, 295 frees, 659,781 bytes allocated
==4802== 
==4802== LEAK SUMMARY:
==4802==definitely lost: 0 bytes in 0 blocks
==4802==indirectly lost: 0 bytes in 0 blocks
==4802==  possibly lost: 84,028 bytes in 1,032 blocks
==4802==still reachable: 505,422 bytes in 315 blocks
==4802== suppressed: 0 bytes in 0 blocks
==4802== Rerun with --leak-check=full to see details of leaked memory
==4802== 
==4802== For counts of detected and suppressed errors, rerun with: -v
==4802== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Segmentation fault (core dumped)





> On 29.03.2016, at 14:16, Daniel Schneller <daniel.schnel...@centerdevice.com> 
> wrote:
> 
> Hi!
> 
> I am seeing a segfault upon the first request coming through the 
> configuration below.
> 
> My intention is to enforce a) a total request limit per minute and b) a 
> separate limit for certain API paths. For that purpose, in addition to the 
> be_api_external table, which I intend to use for the total request rate, I 
> created a separate dummy backend to get another table (be_tbl_search) for 
> search API calls. In the real config, there would be a handful of these.
> 
> I reduced the config as far as I could to demonstrate.
> 
> ===
> ...
> 
> frontend fe_http
>   bind 192.168.1.3:80
>   http-request capture hdr(Authorization)   len 64   # id 2
>   default_backend be_api_external
> 
> backend be_tbl_search
>   stick-table type string len 64 size 50k expire 60s store gpc0_rate(60s)
&
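
For reference, a minimal sketch of how I would expect this to be wired up so
that the counter is tracked before it gets incremented (untested; the path
ACL and the deny threshold are made up, the other names come from the config
above):

frontend fe_http
  bind 192.168.1.3:80
  acl is_search path_beg /v2/search
  http-request track-sc0 hdr(Authorization) table be_tbl_search if is_search
  http-request deny if is_search { sc0_gpc0_rate gt 100 }
  http-request allow if is_search { sc0_inc_gpc0 gt 0 }
  default_backend be_api_external

backend be_tbl_search
  stick-table type string len 64 size 50k expire 60s store gpc0_rate(60s)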

Segfault with stick-tables

2016-03-29 Thread Daniel Schneller
==by 0x11FB5A: ??? (in /usr/sbin/haproxy)
==4628==by 0x5D58EC4: (below main) (libc-start.c:287)
==4628==  If you believe this happened as a result of a stack
==4628==  overflow in your program's main thread (unlikely but
==4628==  possible), you can try to increase the size of the
==4628==  main thread stack using the --main-stacksize= flag.
==4628==  The main thread stack size used in this run was 8388608.
==4628== 
==4628== HEAP SUMMARY:
==4628== in use at exit: 589,331 bytes in 1,345 blocks
==4628==   total heap usage: 1,641 allocs, 296 frees, 659,752 bytes allocated
==4628== 
==4628== LEAK SUMMARY:
==4628==definitely lost: 0 bytes in 0 blocks
==4628==indirectly lost: 0 bytes in 0 blocks
==4628==  possibly lost: 83,909 bytes in 1,030 blocks
==4628==still reachable: 505,422 bytes in 315 blocks
==4628== suppressed: 0 bytes in 0 blocks
==4628== Rerun with --leak-check=full to see details of leaked memory
==4628== 
==4628== For counts of detected and suppressed errors, rerun with: -v
==4628== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Segmentation fault (core dumped)





-- 
Daniel Schneller
Principal Cloud Engineer
 
CenterDevice GmbH  | Merscheider Straße 1
   | 42699 Solingen
tel: +49 1754155711| Deutschland
daniel.schnel...@centerdevice.de   | www.centerdevice.de






DOC Patch: tune.vars.xxx-max-size

2016-03-21 Thread Daniel Schneller
From 29bddd461c30bc850633350ac81e3c9fd7b56cb8 Mon Sep 17 00:00:00 2001
From: Daniel Schneller <d...@danielschneller.de>
Date: Mon, 21 Mar 2016 20:46:57 +0100
Subject: [PATCH] DOC: Clarify tunes.vars.xxx-max-size settings

Adds a little more clarity to the description of the maximum sizes of
the different variable scopes and adds a note about what happens when
the space allocated for variables is too small.

Also fixes some typos and grammar/spelling issues re/ variables and
their naming conventions, copied throughout the document.
---
 doc/configuration.txt | 227 +-
 1 file changed, 114 insertions(+), 113 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c9cca4f..5147626 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1400,16 +1400,22 @@ tune.vars.global-max-size <size>
 tune.vars.reqres-max-size <size>
 tune.vars.sess-max-size <size>
 tune.vars.txn-max-size <size>
-  These four tunes helps to manage the allowed amount of memory used by the
-  variables system. "global" limits the memory for all the systems. "sess" limit
-  the memory by session, "txn" limits the memory by transaction and "reqres"
-  limits the memory for each request or response processing. during the
-  accounting, "sess" embbed "txn" and "txn" embed "reqres".
-
-  By example, we considers that "tune.vars.sess-max-size" is fixed to 100,
-  "tune.vars.txn-max-size" is fixed to 100, "tune.vars.reqres-max-size" is
-  also fixed to 100. If we create a variable "txn.var" that contains 100 bytes,
-  we cannot create any more variable in the other contexts.
+  These four tunes help to manage the maximum amount of memory used by the
+  variables system. "global" limits the overall amount of memory available
+  for all scopes. "sess" limits the memory for the session scope, "txn" for
+  the transaction scope, and "reqres" limits the memory for each request or
+  response processing.
+  Memory accounting is hierarchical, meaning more coarse grained limits
+  include the finer grained ones: "sess" includes "txn", and "txn" includes
+  "reqres".
+
+  For example, when "tune.vars.sess-max-size" is limited to 100,
+  "tune.vars.txn-max-size" and "tune.vars.reqres-max-size" cannot exceed
+  100 either. If we create a variable "txn.var" that contains 100 bytes,
+  all available space is consumed.
+  Notice that exceeding the limits at runtime will not result in an error
+  message, but values might be cut off or corrupted. So make sure to accurately
+  plan for the amount of space needed to store all your variables.
 
 tune.zlib.memlevel <number>
   Sets the memLevel parameter in zlib initialization for each session. It
@@ -3765,17 +3771,17 @@ http-request { allow | deny | tarpit | auth [realm <realm>] | redirect <rule> |
   Is used to set the contents of a variable. The variable is declared
   inline.
 
- <var-name>   The name of the variable starts by an indication about its
-   scope. The allowed scopes are:
- "sess" : the variable is shared with all the session,
- "txn"  : the variable is shared with all the transaction
+ <var-name>   The name of the variable starts with an indication about
+   its scope. The scopes allowed are:
+ "sess" : the variable is shared with the whole session
+ "txn"  : the variable is shared with the transaction
   (request and response)
- "req"  : the variable is shared only during the request
+ "req"  : the variable is shared only during request
+  processing
+ "res"  : the variable is shared only during response
   processing
- "res"  : the variable is shared only during the response
-  processing.
This prefix is followed by a name. The separator is a '.'.
-   The name may only contain characters 'a-z', 'A-Z', '0-9',
+   The name may only contain characters 'a-z', 'A-Z', '0-9'
and '_'.
 
 <expr>   Is a standard HAProxy expression formed by a sample-fetch
@@ -4077,17 +4083,17 @@ http-response { allow | deny | add-header <name> <fmt> | set-nice <nice> |
   Is used to set the contents of a variable. The variable is declared
   inline.
 
- <var-name>   The name of the variable starts by an indication about its
-   scope. The allowed scopes are:
- "sess" : the variable is shared with all the session,
-   

http-request capture id frontend/backend not working?

2016-03-19 Thread Daniel Schneller
Hi!

I am trying to capture an HTTP Request Header that gets added under certain 
circumstances in the backend. From the documentation I understand I can use a 
capture slot for that. This is what I tried in my stripped down config file:

...
frontend fe_http
  bind 192.168.1.3:80
  declare capture request len 32
  default_backend be_api

backend be_api
  balance leastconn
  option httplog
  # this would have ACLs in the real use case
  http-request add-header  X-CD-Operation Upload
  http-request capture hdr(X-CD-Operation) id 0

  server api01 api01:8081


However, when I start HAProxy (1.6.3 from the ppa:vbernat/haproxy1.6 repo), I 
get this error message:

$ haproxy -d -f test.cfg 
[ALERT] 077/124109 (13586) : Proxy 'be_api': unable to find capture id '0' 
referenced by http-request capture rule.
[ALERT] 077/124109 (13586) : Fatal errors found in configuration

$ haproxy --version
HA-Proxy version 1.6.3 2015/12/25

I assume I misunderstood something thoroughly, but I am at a loss.

Cheers,
Daniel




Re: http-request capture id frontend/backend not working?

2016-03-19 Thread Daniel Schneller
Trying to understand this better, I came across 

commit 3e7d15e744d5f0137dd266efba1f317895a31273
Author: Baptiste Assmann <bed...@gmail.com>
Date:   Tue Nov 3 23:31:35 2015 +0100

BUG/MINOR: http rule: http capture 'id' rule points to a non existing id

It is possible to create a http capture rule which points to a capture slot
id which does not exist.

...

It applies of course to both http-request and http-response rules.
(cherry picked from commit e9544935e86278dfa3d49fb4b97b860774730625)


Which changes this piece of code in cfgparse.c

/* parse http-request capture rules to ensure id really exists */
list_for_each_entry(hrqrule, &curproxy->http_req_rules, list) {
  if (hrqrule->action  != ACT_CUSTOM ||
      hrqrule->action_ptr != http_action_req_capture_by_id)
    continue;
  if (hrqrule->arg.capid.idx >= curproxy->nb_req_cap) {
    Alert("Proxy '%s': unable to find capture id '%d' referenced by http-request capture rule.\n",
          curproxy->id, hrqrule->arg.capid.idx);
    cfgerr++;
  }
}


I am not a C programmer, but to me it seems it will bail if it finds a
reference to a capture ID that is not declared in the current proxy entry. As
my declaration is in the frontend, but the actual capture references it in
the backend, they are in different proxies, which makes this check fail?
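
In case it helps others, the workaround I am considering until this is
resolved: skip the capture slot entirely and carry the value in a txn-scoped
variable, which the frontend's log-format can read back (untested sketch; the
variable name and log-format are made up):

frontend fe_http
  bind 192.168.1.3:80
  log-format "%ci:%cp [%t] %ft %b/%s %ST %[var(txn.cd_op)]"
  default_backend be_api

backend be_api
  http-request set-var(txn.cd_op) str(Upload)
  server api01 api01:8081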

Daniel


> On 18.03.2016, at 13:43, Daniel Schneller <daniel.schnel...@centerdevice.com> 
> wrote:
> 
> Hi!
> 
> I am trying to capture an HTTP Request Header that gets added under certain 
> circumstances in the backend. From the documentation I understand I can use a 
> capture slot for that. This is what I tried in my stripped down config file:
> 
> ...
> frontend fe_http
>   bind 192.168.1.3:80
>   declare capture request len 32
>   default_backend be_api
> 
> backend be_api
>   balance leastconn
>   option httplog
>   # this would have ACLs in the real use case
>   http-request add-header  X-CD-Operation Upload
>   http-request capture hdr(X-CD-Operation) id 0
> 
>   server api01 api01:8081
> 
> 
> However, when I start HAProxy (1.6.3 from the ppa:vbernat/haproxy1.6 repo), I 
> get this error message:
> 
> $ haproxy -d -f test.cfg 
> [ALERT] 077/124109 (13586) : Proxy 'be_api': unable to find capture id '0' 
> referenced by http-request capture rule.
> [ALERT] 077/124109 (13586) : Fatal errors found in configuration
> 
> $ haproxy --version
> HA-Proxy version 1.6.3 2015/12/25
> 
> I assume I misunderstood something thoroughly, but I am at a loss.
> 
> Cheers,
> Daniel
> 
> 



Re: SSL backends stopped working

2015-04-23 Thread Daniel Schneller
Have you checked the time/date on the HAProxy host?
If they are wrong, the certificate might look bad from HAProxy’s point of view.
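
If the clock checks out, one way to narrow it down further (a sketch, for
debugging only, never production): temporarily disable verification on one
server line. If the handshake then succeeds, the failure is in certificate
validation rather than in the TLS layer itself.

  server apache_rem_1  1.2.3.4:12345 check ssl verify none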


Daniel
-- 
Daniel Schneller
Infrastructure Architect / Developer
CenterDevice GmbH




 On 23.04.2015, at 10:00, i...@linux-web-development.de wrote:
 
 
 Hi!
 
 I'm having trouble with one of our HAProxy-Servers that uses a backend with 
 TLS. When starting HAProxy the backend will report all servers as down:
 
 Server web_remote/apache_rem_1 is DOWN, reason: Layer6 invalid response, 
 info: SSL handshake failure, check duration: 41ms. 1 active and 0 backup 
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 
 
 My backend configuration is as follows:
 
 backend web_remote
balance leastconn
option  httpchk HEAD /
option  redispatch
retries 3
 
default-server  inter 5000 rise 2 fall 5 maxconn 1 maxqueue 5
 
 server apache_rem_1  1.2.3.4:12345 check maxconn 1000 maxqueue 5000 ssl ca-file /etc/ssl/web.pem
 server apache_rem_2  2001:1:2:3:4:5:6:8:12345  check maxconn 1000 maxqueue 5000 ssl ca-file /etc/ssl/web.pem
 
 
 This backend worked just fine until now, a quick wget on the server also 
 worked and openssl s_client reports the certificate of the backend to be 
 valid.
 
 I couldn't find anything on the list except that the error would be due to 
 SSL_ABORT, but I'm not sure what this is supposed to tell me...
 
 Is there anything else for HAProxy/TLS that could be configured wrong? How 
 could I debug this issue when everything else reports the handshake was 
 successful?