Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at 12:50 PM, Leela Kalidindi (lkalidin) wrote:
> 
> Not for Remote desktop protocol, it is for haproxy backend server with option 
> persist as in
> "HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
> http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
> persist\n http-reuse aggressive\n maxconn 16\n",
>  


You need to stop playing 20 questions on the mailing list and RTFM already.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%20persist 


-Bryan




Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at 12:42 PM, Leela Kalidindi (lkalidin) wrote:
> 
> Bryan,
>  
> One another follow-up question - what does persist do?  Thanks!
>  


https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#persist 


is for 

https://en.wikipedia.org/wiki/Remote_Desktop_Protocol 


Is that what you were asking?

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at 12:38 PM, Leela Kalidindi (lkalidin) wrote:
> 
> Hi Bryan,
>  
> Thanks a lot for the prompt response.
>  
> Is there a such kind of thing to leave the backend connections open forever 
> that can serve any client request? 
>  


No, not to my knowledge.

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at 5:11 AM, Leela Kalidindi (lkalidin) wrote:
> 
> Hi,
>  
> How can I enforce haproxy to reuse limited backend connections regardless of 
> number of client connections? Basically I do not want to recreate backend 
> connection for every front end client.  
>  
> "HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
> http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
> persist\n http-reuse aggressive\n maxconn 16\n",
> "HAPROXY_0_FRONTEND_HEAD": "\nfrontend {backend}\n  bind 
> {bindAddr}:{servicePort}\n  mode http\n  option httplog\n  option 
> forwardfor\n option http-keep-alive\n maxconn 16\n"
>  
> I currently have the above configuration, but still backend connections are 
> getting closed when the next client request comes in.
>  
> Could someone help me with the issue?  Thanks in advance!
>  


I suspect that there is a misunderstanding of what backend connection re-use 
means. Specifically this portion from the documentation seems to trip people up:


https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#http-reuse 

No connection pool is involved, once a session dies, the last idle connection
it was attached to is deleted at the same time. This ensures that connections
may not last after all sessions are closed.

I suspect that in your testing, you send one request, observe TCP state, then 
send a second request and expect the second request to use the same TCP 
connection. This is not how the feature works. The feature is optimized to 
support busy / loaded servers where the TCP open rate should be minimized. This 
allows a server to avoid, say opening 2,000 new connections per second, and 
instead just keep re-using a handful. It’s not a connection pool that pre-opens 
10 connections and keeps them around in case they might be needed.
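In config terms (a hedged sketch only; names and addresses are made up), reuse
needs server-side keep-alive to be useful at all:

backend app
    mode http
    option http-keep-alive      # keep server connections open between requests
    http-reuse safe             # let other sessions borrow idle server connections
    server web1 192.0.2.10:8080 maxconn 100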

-Bryan



Re: Bug: haproxy fails to build with USE_THREAD=

2018-02-05 Thread Bryan Talbot
Bisecting the 1.9 / master branch shows the build break (on OSX) with



abeaff2d543fded7ffc14dd908d673c59d725155 is the first bad commit
commit abeaff2d543fded7ffc14dd908d673c59d725155
Author: Willy Tarreau 
Date:   Mon Feb 5 19:43:30 2018 +0100

BUG/MINOR: fd/threads: properly dereference fdcache as volatile

In fd_rm_from_fd_list(), we have loops waiting for another change to
complete, in case we don't have support for a double CAS. But these
ones fail to place a compiler barrier or to dereference the fdcache
as a volatile, resulting in an endless loop on the first collision,
which is visible when run on MIPS32.

No backport needed.






> On Feb 5, 2018, at 12:36 PM, Tim Düsterhus wrote:
> 
> Hi
> 
> if haproxy is build without USE_THREAD (e.g. by using TARGET=generic or
> by explicitely setting USE_THREAD=) it fails to link, because
> import/plock.h is not included when src/fd.c is being compiled.
> 
>> src/fd.c: In function ‘fd_rm_from_fd_list’:
>> src/fd.c:268:9: warning: implicit declaration of function ‘pl_deref_int’ 
>> [-Wimplicit-function-declaration]
>>  next = pl_deref_int(&fdtab[fd].cache.next);
>> ^
> 
> *snip*
> 
>> src/fd.o: In function `fd_rm_from_fd_list':
>> /scratch/haproxy/src/fd.c:268: undefined reference to `pl_deref_int'
>> /scratch/haproxy/src/fd.c:276: undefined reference to `pl_deref_int'
>> collect2: error: ld returned 1 exit status
>> Makefile:898: recipe for target 'haproxy' failed
>> make: *** [haproxy] Error 1
> 
> Best regards
> Tim Düsterhus
> 




Re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-01 Thread Bryan Talbot


> On Nov 1, 2017, at 3:28 AM, Aleksandar Lazic wrote:
> 
> 
> There is now a shiny new docker image with the rc1.
> 
> docker run --rm --entrypoint /usr/local/sbin/haproxy me2digital/haproxy18 -vv
> 


For the past couple of years, I’ve also been maintaining a base docker image 
for haproxy. It is interesting to see how others structure the build and 
configuration. 

I see that you include a base / default configuration file while I’ve left that 
completely up to the user to provide one. Given how many different ways people 
use haproxy, it didn’t seem that there was any one “basic” config that would 
work beyond a trivial example. I’m curious how useful the configuration you’ve 
packaged is. I use my image as a base into which I repackage use-case-specific 
configuration files for deployments, and I assume anyone else using the 
image does the same thing, but I do not have any feedback about that.


https://hub.docker.com/r/fingershock/haproxy-base/ 


-Bryan



Re: HAProxy1.7.9-http-reuse

2017-10-26 Thread Bryan Talbot


> On Oct 26, 2017, at 6:13 PM, karthikeyan.rajam...@thomsonreuters.com wrote:
> 
>  
> Yes the log indicates that. But the RTT via ping is 204 ms, with http-reuse 
> always/aggressive option the connection is reused & we expect a time close to 
> ping + a small overhead time; the http-reuse option always seems to have no impact on 
> the total time taken.
> We are looking to get the option working.


I’d bet that it’s working but that it doesn’t do what you're assuming it does.

It’s not a connection pool that keeps connections open to a backend when there 
are no current requests. As the last paragraph and note of 
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#http-reuse says:


No connection pool is involved, once a session dies, the last idle connection
it was attached to is deleted at the same time. This ensures that connections
may not last after all sessions are closed.

Note: connection reuse improves the accuracy of the "server maxconn" setting,
because almost no new connection will be established while idle connections
remain available. This is particularly true with the "always" strategy.

So, when testing one connection at a time, one would not expect to see any 
difference. The benefit comes when there are many concurrent requests.

One way to check if the feature is working would be to run your ‘ab’ test with 
some concurrency N and inspect the active TCP connections from local proxy to 
remote proxy. If the feature is working, I would expect to see about N 
(something less) TCP connections that are reused for multiple requests. If 
there are 1000 requests sent with concurrency 10 and 1000 different TCP 
connections used the feature isn’t working (or the connections are private).
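Concretely, something like this rough sketch (hosts and ports are placeholders):

$> ab -n 1000 -c 10 http://local-proxy:8080/
$> ss -tn | grep '<remote-proxy-ip>:8080' | wc -l     # run while ab is going

With reuse working, I’d expect the second command to report on the order of 10
established connections rather than hundreds of short-lived ones.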

-Bryan



Re: HAProxy1.7.9-http-reuse

2017-10-26 Thread Bryan Talbot
Hello


> On Oct 26, 2017, at 3:13 PM, karthikeyan.rajam...@thomsonreuters.com wrote:
> 
> Hi,
> We have  the set up working, the ping time from local to remote haproxy is 
> 204 ms.
> The time taken for the web page when accessed by the browser is 410 ms.
> We want the latency to be 204 ms when accessed by the browser. We configured 
> to reuse http & with http-reuse aggresive|always|safe options
> but could not reduce the 410 ms to 204 ms. It is always 410 ms. Please let us 
> know how we can reuse http & reduce out latency.
>  

> 4. Local haproxy log
> 172.31.x.x:53202 [26/Oct/2017:21:02:36.368] http_front http_back/web1 
> 0/0/204/205/410 200 89 - -  0/0/0/0/0 0/0 {} "GET / HTTP/1.0"


This log line says that it took your local proxy 204 ms to connect to the 
remote proxy and that the first response bytes from the remote proxy were 
received by the local proxy 205 ms later for a total round trip time of 410 ms 
(after rounding).

The only way to get the total time down to the network latency alone 
would be to make the remote respond in 0 ms (or less!). If the two proxies are 
actually 200 ms apart, I don’t see how you could do much better.
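For reference, the “0/0/204/205/410” timers map to Tq/Tw/Tc/Tr/Tt in the HTTP
log format:

Tq = 0      time to receive the full request from the client
Tw = 0      time spent waiting in queues
Tc = 204    time to establish the connection to the server
Tr = 205    time waiting for the server’s response headers
Tt = 410    total session duration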

-Bryan



Re: Deny with 413 request too large

2017-05-22 Thread Bryan Talbot
>>> 
>>>  errorfile 413 /usr/local/etc/haproxy/errors/413.http
>>>  http-request deny deny_status 413 if { req.body_size gt 10485760 }
>>> 
>>> ... HAProxy complains with:
>>> 
>>>  [WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:15] : status 
>>> code 413 not handled by 'errorfile', error customization will be ignored.
>>>  [WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:89] : status 
>>> code 413 not handled, using default code 403.
>>> 
>>> How should I configure HAProxy in order to deny with 413?
>> 
> 
> In my understanding I should only use a 400badreq.http like message on an 
> errorfile 400 config line, otherwise if HAProxy need to issue a 400 status 
> code, my 413 status code would be issued instead.
> 
> Is this a valid feature request or there are technical reasons why this has 
> been done that way?
> 
> Hints are welcome.


I think the way to do it is to create a backend that handles the deny with the 
special message, and then route to that backend to reject the request. You can 
have a different backend for each special case and not pollute the normal error 
responses.

frontend http
… normal stuff here
  use_backend req_too_big if { req.body_size gt 10485760 }


backend req_too_big
  errorfile 400 /path/to/my/error400_req_too_big.http
  http-request deny deny_status 400



-Bryan




Re: haproxy "inter" and "timeout check", retries and "fall"

2017-05-19 Thread Bryan Talbot

> On May 18, 2017, at 2:58 AM, Jiafan Zhou wrote:
> 
> Hi Bryan,
> 
> For reference:
> 
> 
>> defaults
>> timeout http-request10s
>> timeout queue   1m
>> timeout connect 10s
>> timeout client  1m
>> timeout server  1m
>> timeout http-keep-alive 10s
>> timeout check   10s
>> 
> 
> - For "timeout check" and "inter", it was for some troubleshooting and would 
> like to understand the behaviour a bit more. By reading haproxy official 
> document, it is not clear to me.
> 
> I think in my case, it uses the "timeout check" as 10 seconds. There is no 
> "inter" parameter in the configuration.
> 
> 

Ten seconds for a health check to respond is an eternity. Personally, I’d 
expect a response 1000 times faster than that. Why do you want it to be so 
long? What problems with the default health check was this super long timeout 
meant to resolve?

> But here I try to understand which value will use if "timeout check" is 
> present, but "inter" is not. I already set the timeout check".
> 
> - Finally, I think I am still right about the "fall" (default to 3) and 
> "rise" (default to 2).
> 
> It takes up to 50 seconds to converge the server, as far as the haproxy is 
> concerned.
> 
> 


I don’t think that health checks are run concurrently against the same server in a 
backend. This means that if your server is accepting the TCP connection but not 
responding before the “timeout check” timer strikes, then you could be seeing 
40+ seconds to detect the failure, especially if there are delays in making 
the connection for the health check too.

The defaults should detect a down server after 3 consecutive failures with 2 
seconds between each check, so 6 seconds or so.
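As a rough, untested sketch of the arithmetic with explicit settings:

server web1 10.0.0.1:80 check inter 2s fall 3 rise 2
# with "timeout check 10s" and a server that accepts TCP but never answers,
# detection can stretch to about fall x (inter + timeout check),
# i.e. 3 x 12s = 36s plus connect delays -- hence the 40+ seconds above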

-Bryan



Re: Deny with 413 request too large

2017-05-17 Thread Bryan Talbot

> On May 15, 2017, at 6:35 PM, Joao Morais wrote:
> 
> This is working but sounds a hacky workaround since I’m using another status 
> code. If I try to use:
> 
>errorfile 413 /usr/local/etc/haproxy/errors/413.http
>http-request deny deny_status 413 if { req.body_size gt 10485760 }
> 
> ... HAProxy complains with:
> 
>[WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:15] : status 
> code 413 not handled by 'errorfile', error customization will be ignored.
>[WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:89] : status 
> code 413 not handled, using default code 403.
> 
> How should I configure HAProxy in order to deny with 413?



You’ve already found it. AFAIK, that’s the only way.

-Bryan




Re: haproxy "inter" and "timeout check", retries and "fall"

2017-05-15 Thread Bryan Talbot

> On May 13, 2017, at 10:59 PM, Jiafan Zhou wrote:
> 
> 
> Hi all,
> 
> The version of haproxy I use is: 
> 
> # haproxy -version
> HA-Proxy version 1.5.2 2014/07/12
> Copyright 2000-2014 Willy Tarreau  

This version is so old. I’m sure there must be hundreds of bugs fixed over the 
last 3 years. Why not use a properly current version?


> I have a question regarding the Health Check. In the documentation of 
> haproxy, it mentions the below for the "timeout check" and "inter":
> 
> Now I am wondering here which one and what value will be used for healthcheck 
> interval. Is it "timeout check" as 10 seconds, or the "inter" as the default 
> 2 seconds?
> 
> 

Why not just set the health check values that you care about and not worry 
about guessing what they’ll end up being when only some are set and some are 
using defaults? If you need / expect them to be a particular value for proper 
system operation, I’d set them no matter what the defaults may be declared to 
be. 


> Another question, since I defined the "retries" to be 3, in the case of 
> server connection failure, will it reconnect 3 times? Or does it use the 
> "fall" parameter (which defaults to 3 here as well) instead for healthcheck 
> retry?
> 
> 


“retries” is for dispatching requests and is not used for health checks.


> So in this configuration, in the case of server failure, does it wait for up 
> to 30 seconds (3 fall or retries), then 20 seconds (2 rise), before the 
> server is considered operational? (in total 50 seconds)
> 
> 

Retries are not considered, only health-check-specific settings like “fall” 
and “inter”.
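A hedged sketch of keeping the two concerns separate (names made up):

defaults
    retries 3       # retried connects for dispatching client requests only

backend app
    server web1 10.0.0.1:80 check inter 2s fall 3 rise 2    # health check knobs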

> Thanks,
> 
> Jiafan
> 



Re: haproxy

2017-05-12 Thread Bryan Talbot

> On May 11, 2017, at 7:51 AM, Jose Alarcon wrote:
> 
> Hello,
> 
> excuseme my english is very bad, i need know how change configuration haproxy 
> pasive/active manually not using keepalived.
> 

There is no standard way because that is not a feature of haproxy. High 
availability of the proxy is managed by an external tool like keepalived.

-Bryan


> i need this information for a highscholhomework.
> 
> thanks.
> 
> my native lenguaje is spanish.-




Re: Haproxy 1.5.4 unable to accept new TCP request, backlog full, tens of thousands close_wait connection

2017-04-26 Thread Bryan Talbot

> On Apr 26, 2017, at 2:13 AM, jaseywang wrote:
> 
> Hi
> @Willy @Cyril do you have any recommended config for ssl related setting, we 
> now use nbproc and cpu-map to distribute the load to each cpu, though haproxy 
> can work with cdn now, it's performance is not as good as before without cdn, 
> user time of each core is almost saturated.
> Thanks.


I think that most would recommend using a TLS config from 
https://wiki.mozilla.org/Security/Server_Side_TLS unless you have specific 
needs and expert knowledge to make up your own.
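For example, something in this direction (illustrative only; generate a current
cipher list from the Mozilla configurator rather than copying this):

global
    ssl-default-bind-options no-sslv3 no-tls-tickets
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384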

-Bryan



Re: low load client payload intermittently dropped with a "cD" error (v1.7.3)

2017-04-10 Thread Bryan Talbot

> On Apr 8, 2017, at 2:24 PM, Lincoln Stern wrote:
> 
> I'm not sure how to interpret this, but it appears that haproxy is dropping
> client payload intermittently (1/100).  I have included tcpdumps and logs to
> show what is happening.
> 
> Am I doing something wrong?  I have no idea what could be causing this or how
> to go about debugging it.  I cannot reproduce it, but I do observe in 
> production ~2 times
> a day across 20 instances and 2K connections.
> 
> Any help or advice would be greatly appreciated.
> 
> 
> 

You’re in TCP mode with 60 second timeouts. So, if the connection is idle for 
that long then the proxy will disconnect. If you need idle connections to stick 
around longer and mix http and tcp traffic then you probably want to set 
“timeout tunnel” to however long you’re willing to let idle tcp connections sit 
around and not impact http timeouts. If you only need long-lived tcp “tunnel” 
connections, then you can instead just increase both your “timeout client” and 
“timeout server” timeouts to cover your requirements.
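A hedged sketch of the first option:

defaults
    timeout client  60s
    timeout server  60s
    # applies once the connection becomes a tunnel, without loosening
    # the HTTP timeouts above
    timeout tunnel  1h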

-Bryan



> What I'm trying to accomplish is to provide HA availability over two routes
> (i.e. internet providers).  One acts as primary and I gave it a "static-rr"
> "weight" of 256 and the other as backup and has a weight of "1".  Backup
> should only be used in case of primary failure.
> 
> 
> log:
> Apr  4 18:55:27 app055 haproxy[13666]: 127.0.0.1:42262 
>  [04/Apr/2017:18:54:41.585] ws-local servers/server1 
> 1/86/45978 4503 5873 -- 0/0/0/0/0 0/0
> Apr  4 22:46:37 app055 haproxy[13666]: 127.0.0.1:47130 
>  [04/Apr/2017:22:46:36.931] ws-local servers/server1 
> 1/62/663 7979 517 -- 0/0/0/0/0 0/0
> Apr  4 22:46:38 app055 haproxy[13666]: 127.0.0.1:32931 
>  [04/Apr/2017:22:46:37.698] ws-local servers/server1 
> 1/55/405 3062 553 -- 1/1/1/1/0 0/0
> Apr  4 22:46:43 app055 haproxy[13666]: 127.0.0.1:41748 
>  [04/Apr/2017:22:46:43.190] ws-local servers/server1 
> 1/115/452 7979 517 -- 2/2/2/2/0 0/0
> Apr  4 22:46:46 app055 haproxy[13666]: 127.0.0.1:57226 
>  [04/Apr/2017:22:46:43.576] ws-local servers/server1 
> 1/76/3066 2921 538 -- 1/1/1/1/0 0/0
> Apr  4 22:46:47 app055 haproxy[13666]: 127.0.0.1:39656 
>  [04/Apr/2017:22:46:47.072] ws-local servers/server1 
> 1/67/460 8254 528 -- 1/1/1/1/0 0/0
> Apr  4 22:47:38 app055 haproxy[13666]: 127.0.0.1:39888 
>  [04/Apr/2017:22:46:38.057] ws-local servers/server1 
> 1/63/60001 0 0 cD 0/0/0/0/0 0/0 
> Apr  5 08:44:55 app055 haproxy[13666]: 127.0.0.1:42650 
>  [05/Apr/2017:08:44:05.529] ws-local servers/server1 
> 1/53/49645 4364 4113 -- 0/0/0/0/0 0/0
> 
> 
> tcpdump:
> 22:46:38.057127 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [S], seq 
> 2113072542, win 43690, options [mss 65495,sackOK,TS val 82055529 ecr 
> 0,nop,wscale 7], length 0
> 22:46:38.057156 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [S.], seq 
> 3284611992, ack 2113072543, win 43690, options [mss 65495,sackOK,TS val 
> 82055529 ecr 82055529,nop,wscale 7], length 0
> 22:46:38.057178 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [.], ack 1, win 
> 342, options [nop,nop,TS val 82055529 ecr 82055529], length 0
> 22:46:38.057295 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [S], seq 
> 35567, win 29200, options [mss 1460,sackOK,TS val 82055529 ecr 
> 0,nop,wscale 7], length 0
> 22:46:38.060539 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [P.], seq 1:199, 
> ack 1, win 342, options [nop,nop,TS val 82055530 ecr 82055529], length 198
> 22:46:38.060598 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [.], ack 199, win 
> 350, options [nop,nop,TS val 82055530 ecr 82055530], length 0
> ... client payload acked ...
> 22:46:38.120527 IP 99.99.99.99.8000 > 10.10.10.10.34289: Flags [S.], seq 
> 4125907118, ack 35568, win 28960, options [mss 1460,sackOK,TS val 
> 662461622 ecr 82055529,nop,wscale 8], length 0
> 22:46:38.120619 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [.], ack 1, 
> win 229, options [nop,nop,TS val 82055545 ecr 662461622], length 0
> ... idle timeout by server 5 seconds later...
> 22:46:43.183207 IP 99.99.99.99.8000 > 10.10.10.10.34289: Flags [F.], seq 1, 
> ack 1, win 114, options [nop,nop,TS val 662466683 ecr 82055545], length 0
> 22:46:43.183387 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [F.], seq 1, ack 
> 199, win 350, options [nop,nop,TS val 82056810 ecr 82055530], length 0
> 22:46:43.184011 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [.], ack 2, 
> win 229, options [nop,nop,TS val 82056811 ecr 662466683], length 0
> 22:46:43.184025 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [.], ack 2, win 
> 342, options [nop,nop,TS val 82056811 ecr 82056810], length 0
> 22:46:43.184715 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [P.], seq 199:206, 
> ack 2, win 342, options [nop,nop,TS val 82056811 ecr 82056810], length 7
> 22:46:43.184795 IP 127.0.0.1.9011 > 1

Re: stick-table ,show table, use field

2017-03-30 Thread Bryan Talbot

> On Mar 30, 2017, at 10:19 AM, Arnall wrote:
> 
> Hello everyone,
> 
> when using socat to show a stick-table i have lines like this :
> 
> # table: dummy_table, type: ip, size:52428800, used:33207
> 
> 0x7f202f800720: key=aaa.bbb.ccc.ddd use=0 exp=599440 gpc0=0 
> conn_rate(5000)=19 conn_cur=0 http_req_rate(1)=55
> 
> ../...
> 
> I understand all the fields except 2 :
> 
> used:33207
> 
> use=0
> 
> I found nothing in the doc, any idea ?
> 


I believe that these are documented in the management guides and not the config 
guides.

https://cbonte.github.io/haproxy-dconv/1.6/management.html#9.2-show%20table 


Here, I think that ‘used’ for the table is the number of entries that currently 
exist in the table, and ‘use’ for an entry is the number of sessions that 
concurrently match that entry.
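For example (assuming your stats socket lives at /var/run/haproxy.sock):

$> echo "show table dummy_table" | socat stdio /var/run/haproxy.sock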

-Bryan



Re: [PATCH][RFC] MEDIUM: global: add a 'grace' option to cap the soft-stop time

2017-03-15 Thread Bryan Talbot

> On Mar 15, 2017, at 4:44 PM, Cyril Bonté wrote:
> 
> Several use cases may accept to abruptly close the connections when the
> instance is stopping instead of waiting for timeouts to happen.
> This option allows to specify a grace period which defines the maximum
> time to spend to perform a soft-stop (occuring when SIGUSR1 is
> received).
> 
> With this global option defined in the configuration, once all connections are
> closed or the grace time is reached, the instance will quit.


Most of the other settings for time-limits include the word “timeout”. Maybe 
“timeout grace”, “timeout shutdown”, “timeout exit” or something is more 
consistent with other configuration options?

-Bryan




Re: Layer 7 Headers

2017-02-06 Thread Bryan Talbot

> On Feb 6, 2017, at 4:24 PM, Andrew Kroenert wrote:
> 
> Hey Guys
> 
> Quick one, Can anyone confirm any difference between the following header 
> manipulations in haproxy


Well, they’re very different … the first alters the response and the second 
alters the request.

If your haproxy version supports the http-request / http-response methods, 
those should probably be preferred over the older rspadd / reqadd which are 
kept for backwards compatibility.



> 
> 1.
> rspadd Server:\ Test


This adds a “Server: Test” header line to the response sent by the server 
before forwarding to the client.
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#rspadd 



> 
> 2.
> http-request add-header Server Test
> 


This adds a “Server: Test” header line to the request sent by the client before 
forwarding it to the server.
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-http-request 
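So the modern equivalents of your two lines would be something like (untested):

http-response add-header Server Test    # replaces rspadd; alters the response
http-request add-header Server Test     # alters the request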



-Bryan



Re: Queries Rearding to the Redirections According to the ports

2017-02-02 Thread Bryan Talbot

> On Feb 1, 2017, at 1:21 AM, parag bharne wrote:
> 
> Above Conditions will work for 80 port, for SSL It works on 443, but for 
> other port i.e 8080 The SSL cannot get access.


The sample configs do not make much sense given this statement so it’s hard to 
say what you’re trying to do.

My recommendation is to simplify your config and get it working for both your 
sites with only HTTPS. Then add support to redirect HTTP requests to the 
working HTTPS listener.
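As a hedged starting point, something shaped like this (untested; the
certificate path and addresses are taken from your mail):

frontend www
    bind *:80
    bind *:443 ssl crt /etc/apache2/ssl/apache.pem
    mode http
    redirect scheme https code 301 if !{ ssl_fc }
    default_backend www-backend

backend www-backend
    mode http
    server example 1.0.0.1:80 check

Note that “redirect scheme” needs mode http; it cannot work in a mode tcp
frontend like the ones you posted.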




> 
> See My Configuration File I have Tried
> ### First Configuration###
> frontend www-http
> bind *:80
> bind *:443 ssl crt /etc/apache2/ssl/apache.pem
> reqadd X-Forwarded-Proto:\ https
> default_backend tcp-backend
> mode tcp
> 
> frontend www-http
> bind *:80
> bind *:443 ssl crt /etc/apache2/ssl/apache.pem
> reqadd X-Forwarded-Proto:\ https
> default_backend www-backend
> mode tcp
> 
> 
> backend tcp-backend
> redirect scheme https if !{ ssl_fc }
> server example 1.0.0.0:8080 check
> 
> backend www-backend
>  redirect scheme https if !{ ssl_fc }
>  server example.com 1.0.0.1:80 check
> 
> ## Second Configuration ##
> 
> frontend www-http2
> bind *:80
> bind *:443 ssl crt /etc/apache2/ssl/apache.pem
> reqadd X-Forwarded-Proto:\ https
> default_backend tcp-backend
> mode tcp
> 
> frontend tcp-http1
> bind *:81
> bind *:81 ssl crt /etc/apache2/ssl/apache.pem
> reqadd X-Forwarded-Proto:\ https
> default_backend www-backend
> mode tcp
> 
> backend tcp-backend
> redirect scheme https if !{ ssl_fc }
> server example.com 1.0.0.0:8080 check
> 
> backend www-backend
>  redirect scheme https if !{ ssl_fc }
>  server example.com 1.0.0.1:80 check
> 
> ##### Please help me with configuration changes, if any. Give some hints to do that one.
> 
> Thanks and Regards 
>Parag Bharne
> 
> 
> On Wed, Feb 1, 2017 at 12:59 PM, Bryan Talbot wrote:
> 
>> On Jan 31, 2017, at 11:26 PM, parag bharne wrote:
>> 
>> HI,
>> Here our scenario where we wnat to work using haproxy
>> 
>> (client) -> http://www.example.com -> (redirect) -> https://www.example.com
>> (client) -> http://www.example.com:8080 -> (redirect) -> https://www.example.com:8080
>> 
>> This is Possible in haproxy or not, plz try to reply as fast as possible
>> 
> 
> Yes.
> 
> 
> 
>> Parag Bharne
> 
> 



Re: Queries Rearding to the Redirections According to the ports

2017-01-31 Thread Bryan Talbot

> On Jan 31, 2017, at 11:26 PM, parag bharne wrote:
> 
> HI,
> Here our scenario where we wnat to work using haproxy
> 
> (client) -> http://www.example.com -> (redirect) -> https://www.example.com
> (client) -> http://www.example.com:8080 -> (redirect) -> https://www.example.com:8080
> 
> This is Possible in haproxy or not, plz try to reply as fast as possible
> 

Yes.



> Parag Bharne



Re: How can I change the URI when forwarding to a server

2017-01-12 Thread Bryan Talbot

> On Jan 12, 2017, at 5:26 AM, Jürgen Haas wrote:
> 
> Hi all,
> 
> I wonder if I can change the uri that the server receives without doing
> a redirect.


You’re looking for http-request with set-uri or set-path + set-query: 
https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-http-request 
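For your example, an untested sketch (assumes the regsub converter from 1.6,
and leaves a trailing ‘&’ when the original request has no query string):

acl is_login path_beg /login/
http-request set-query s=%[path,regsub(^/login/,)]&%[query] if is_login
http-request set-path /login.php if is_login

The set-query rule is listed first so that it still sees the original path.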


-Bryan



> 
> Example:
> Request from client: https://www.example.com/login/username?p1=something
> Request received by server: /login.php?s=username&p1=something
> 
> More general:
> - if path begins with /login/*[?*]
> - add the first * as a query parameter s to the query
> - keep other optional query parameters in place
> 
> Is anything like that possible?
> 
> 
> Thanks
> Jürgen
> 



Re: HTTP redirects while still allowing keep-alive

2017-01-10 Thread Bryan Talbot

> On Jan 10, 2017, at 12:28 AM, Ciprian Dorin Craciun wrote:
> 
> On Tue, Jan 10, 2017 at 9:36 AM, Cyril Bonté  wrote:
>> This is because haproxy behaves differently depending on the the Location
>> URL :
>> - beginning with /, it will allow HTTP keep-alived connections (Location:
>> /redir/foo)
>> - otherwise it unconditionnally won't, and there's no option to change this
>> (Location: http://mysite/redir)
> 
> 


Whatever the reason for forcing the connection closed -- it only closes when 
the scheme changes. Redirecting to a different host or port when using a 
“scheme less” URI allows the connection to be kept open.


listen http
bind :8000
http-request redirect location //127.0.0.2:8001/redir




$> curl -L -v 127.0.0.1:8000/foo
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /foo HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 302 Found
< Cache-Control: no-cache
< Content-length: 0
< Location: //127.0.0.2:8001/redir
<
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact
* Issue another request to this URL: 'http://127.0.0.2:8001/redir'
*   Trying 127.0.0.2…


Maybe that will be useful to Ciprian to make the redirect to a new hostname but 
keep the connection to the old host open if that’s what is needed.

-Bryan





Re: HTTP redirects while still allowing keep-alive

2017-01-09 Thread Bryan Talbot

> On Jan 8, 2017, at 2:03 PM, Ciprian Dorin Craciun wrote:
> 
> Quick question:  how can I configure HAProxy to redirect (via
> `http-request redirect ...`) without HAProxy sending the `Connection:
> close` header, thus still allowing keep-alive on this connection.

I do not see the behavior you describe, but I also do not know what haproxy 
version you might be using or what your config might be like.

haproxy version 1.7.1 with a proxy config like that shown below does not close 
the connection and contains no “connection: close” header for me.

listen http
bind :8000
http-request redirect prefix /redir



$> curl -v http://127.0.0.1:8000/foo
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /foo HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 302 Found
< Cache-Control: no-cache
< Content-length: 0
< Location: /redir/foo
<
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact


-Bryan



> 
> My use-case is the following:  I have a stubborn server that insists
> on pointing to the "wrong" resource URL's, thus on a page load, I get
> a storm of redirects, each with a different connection (due to the
> `Connection: close` reply header).
> 
> 
> I tried to skim the documentation and search the internet (and the
> mailing list archives), but no such topic popped-up, thus I have the
> feeling this is quite impossible as of now...
> 
> Thanks,
> Ciprian.
> 




Re: Working with Multiple HTTPS Applications with haproxy

2016-11-28 Thread Bryan Talbot

> On Nov 23, 2016, at 2:35 AM, Deepak Shakya wrote:
> 
> I want to setup haproxy to be able to proxy multiple https applications on 
> the same https port
> 
> Something like this:
> 
> Client/Browser  ---(https)--->  haproxy:8443/app1 ---(https)--->  
> app1-server:8101 (Default)
> Client/Browser  ---(https)--->  haproxy:8443/app2 ---(https)--->  
> app2-server:8102
> 
> I was thinking to have SSL Pass-through for the above case and here is my 
> configuration for the same.
> 
> frontend pmc-fe 0.0.0.0:8443 
> mode tcp
> option tcplog
> default_backend app1-be
> 
> acl app2_acl path_beg /app2/
> use_backend app2-be if app2_acl
> 
> backend app1-be
> mode tcp
> stick-table type ip size 200k expire 30m
> stick on src
> server app1-server app1-server:8101
> 
> backend app2-be
> reqrep ^([^\ ]*\ /)app2[/]?(.*) \1\2
> server app2-server app2-server:8102
> 
> 
> But, this is not working? Can somebody guide me?


If this is actually your config then SSL is not decrypted at the proxy and 
there is no way for the app2_acl to ever match. If you want to inspect HTTP 
content in the proxy, then you must terminate SSL in the proxy too.
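A hedged sketch of the terminating shape (certificate path made up;
re-encryption toward the backend assumed):

frontend pmc-fe
    bind 0.0.0.0:8443 ssl crt /etc/haproxy/site.pem
    mode http
    acl app2_acl path_beg /app2/
    use_backend app2-be if app2_acl
    default_backend app1-be

backend app2-be
    mode http
    reqrep ^([^\ ]*\ /)app2[/]?(.*) \1\2
    server app2-server app2-server:8102 ssl verify none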


-Bryan



Re: Haproxy subdomain going to wrong backend

2016-11-14 Thread Bryan Talbot
Use “reply-all” so the thread stays on the list.


> On Nov 14, 2016, at 4:33 AM, Azam Mohammed wrote:
> 
> Hi Bryan,
> 
> Thanks for your email.
> 
> I was doing a bit of testing on haproxy.
> 
> I used hdr to match the subdomain in frontend but I got 503 "503 Service 
> Unavailable" No server is available to handle this request.
> 
> Haproxy Log:
> http-in http-in/ -1/-1/-1/-1/163 503 212 - - SC-- 4/4/0/0/0 0/0 "GET 
> /favicon.ico HTTP/1.1"
> 
> 
> http-in http-in/ -1/-1/-1/-1/0 503 212 - - SC-- 2/2/0/0/0 0/0 "GET 
> /favicon.ico HTTP/1.1"
> 
> But using hdr_dom(host) works fine
> 
> Haproxy Log:
> 
> 

Clearly the Host header being sent isn’t the exact string that you’re checking 
for. 

-Bryan



> http-in ppqa2argaamplus/web01 0/0/2/26/28 200 1560 - - --VN 6/6/0/0/0 0/0 
> "GET /content/ar/images/argaam-plus-icon.ico HTTP/1.1"
> 
> All our websites are developed on ASP.NET .
> 
> I want to use hdr (as you mention this match exact string) to match the 
> subdomain.
> 
> Could you please help me to fix this.
> 
> 
> --
> 
> Thanks & Regards, 
>  
> Azam Sheikh Mohammed
> IT Network & System Admin
>   


Re: Haproxy subdomain going to wrong backend

2016-11-10 Thread Bryan Talbot

> On Nov 9, 2016, at 4:45 AM, Azam Mohammed wrote:
> 
> Also we have exact same Haproxy config on QA and UAT environment and works 
> fine.
> 
> QA Environment:
> Haproxy Version: HA-Proxy version 1.5.4
> OS Version: CentOS release 6.3 (Final)
> 
> UAT Environment:
> Haproxy Version: HA-Proxy version 1.3.26
> OS Version: CentOS release 5.6 (Final)
> 

I didn’t notice before, but both of these versions are quite old. You should 
consider upgrading them when possible. I’m sure there are many critical 
security issues that have been fixed in the years since these were released.

-Bryan




Re: Haproxy subdomain going to wrong backend

2016-11-10 Thread Bryan Talbot
… please include the list in your responses too


> On Nov 10, 2016, at 4:09 AM, Azam Mohammed wrote:
> 
> Hi Bryan,
> 
> Thanks for your reply.
> 
> Putting "use_backend test" first in Haproxy config worked fine.
> 
> But I have few more question based on the solution.
> 
> As you said both the url_subdomain and url_test acls match the string 
> ‘subdomain.domain.com’ we get this issue. But in the ACL section the full URL 
> is specified, so why is acl url_subdomain catching requests with the URL 
> test.subdomain.domain.com? I believe url_subdomain should always match 
> subdomain.domain.com only.
> 

hdr_dom matches domains (strings terminated by a dot or whitespace). Since 
you seem to be expecting an exact string match, just use hdr.
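For example:

acl url_test hdr(host) -i test.subdomain.domain.com

(keep in mind that a Host header may also carry a port, which an exact match
will not ignore)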


> If there is anything to do with the levels of subdomain, is it mentioned in 
> the Haproxy documentation to use the precedence. Could please point me where 
> to look in Haproxy documentation for this.   
> 


The documentation is quite extensive and you can find specifics about req.hdr 
at https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.6-req.hdr 

-Bryan


> --
> 
> Thanks & Regards, 
>  
> Azam Sheikh Mohammed
> IT Network & System Admin
>  
> D a n a t
> Al-Shatha Tower Office 1305, Dubai Internet City | P.O.Box: 502113, Dubai, 
> UAE | Tel: +971 4 368 8468 Ext. 133 | Fax:  +971 4 368 8232 | Mobile:  +971 
> 55 498 8089 | Email: a...@danatev.com
> On Thu, Nov 10, 2016 at 12:46 AM, Bryan Talbot wrote:
> 
>> On Nov 9, 2016, at 4:45 AM, Azam Mohammed wrote:
>> 
>> Hello,
>> 
>>  
>>  
>> acl  url_subdomain   hdr_dom(host)   -i  subdomain.domain.com
>> acl  url_test hdr_dom(host)   -i  test.subdomain.domain.com
>>  
>>  
>> use_backend subdomain if url_subdomain
>> 
>> use_backend test   if url_test
>> 
>>  
>>  
>> Both the subdomain has different web pages. Now if we enter 
>> test.subdomain.domain.com in the browser it goes into the subdomain.domain.com 
>> backend. We 
>> have no idea what is causing this issue.
>> 
>>  
> 
> 
> Both the url_subdomain and url_test ACLs match the string ‘subdomain.domain.com’.
> 
> Make the ACL match be more specific or put the “use_backend test” first since 
> it is already more specific.
> 
> -Bryan
> 
> 
> 



Re: Haproxy subdomain going to wrong backend

2016-11-09 Thread Bryan Talbot

> On Nov 9, 2016, at 4:45 AM, Azam Mohammed wrote:
> 
> Hello,
> 
>  
>  
> 
> acl  url_subdomain   hdr_dom(host)   -i  subdomain.domain.com 
> 
> acl  url_test hdr_dom(host)   -i  
> test.subdomain.domain.com 
>  
>  
> use_backend subdomain if url_subdomain
> 
> use_backend test   if url_test
> 
>  
>  
> Both the subdomain has different web pages. Now if we enter 
> test.subdomain.domain.com  in the browser 
> it goes into subdomain.domain.com  backend. We 
> have no idea what is causing this issue.
> 
>  


Both the url_subdomain and url_test ACLs match the string 
‘subdomain.domain.com’.

Make the ACL match be more specific or put the “use_backend test” first since 
it is already more specific.
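In config terms, a sketch of the reordering:

acl  url_subdomain  hdr_dom(host)  -i  subdomain.domain.com
acl  url_test       hdr_dom(host)  -i  test.subdomain.domain.com
use_backend test       if url_test
use_backend subdomain  if url_subdomain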

-Bryan




Re: ECDSA and HAProxy help

2016-10-13 Thread Bryan Talbot

> On Oct 13, 2016, at 3:19 PM, Thierry Fournier wrote:
> 
> 
> The negociated cipher is "AECDH-AES256-SHA", and I don't know if this
> cipher is ECDSA :) At least it seems to work.
> 
> Thierry
> 


That’s not a cipher that would normally be considered “good” to use since it 
doesn’t perform any message authentication [1].
It may (or may not) be enough to trigger the memory leak you’re looking for 
though. However, if you’d like to go with a full EC stack and use a realistic 
cipher, then get it working with one of these.


$> openssl ciphers -v 'ECDSA:!NULL'
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) 
Mac=AEAD
ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256)  Mac=SHA384
ECDHE-ECDSA-AES256-SHA  SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256)  Mac=SHA1
ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) 
Mac=AEAD
ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128)  Mac=SHA256
ECDHE-ECDSA-AES128-SHA  SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128)  Mac=SHA1
ECDHE-ECDSA-RC4-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=RC4(128)  Mac=SHA1
ECDHE-ECDSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=3DES(168) Mac=SHA1




1. https://en.wikipedia.org/wiki/Authenticated_encryption 



-Bryan (not a cryptographer)




Re: ECDSA and HAProxy help

2016-10-11 Thread Bryan Talbot

> On 12 Oct 2016 8:45 am, "Igor Cicimov" wrote:
> >
> > On 11 Oct 2016 7:05 pm, "Thierry Fournier" wrote:
> > > I'm currently trying to investigate about a little leak of memory in
> > > the certificates loading, and I try to test ECDSA certificates and
> > > cipher.
> > >
> > > I can't done this :( I don't understand anything in the ECDSA
> > > certificate process.
> > >
> > > My test certificate is generated from a little chain where the root CA
> > > is autosigned. So the root CA and the 2 intermediate are RSA
> > > certificates. The ECDSA certificate is build with these commands:
> > >
> > >openssl ecparam -name secp521r1 -genkey -param_enc explicit -out \
> > >   $CN.ecdsa.key
> 
> 


I ran into this as well and it turns out that s_client and s_server do not seem 
to play nicely with curves when using -param_enc explicit and instead prefer to 
only deal with named curves.

Encode the key params using named curve that both sides can accept and your 
test should work.
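For example, the same command minus -param_enc explicit, so the key carries a
named curve reference:

$> openssl ecparam -name secp521r1 -genkey -out $CN.ecdsa.key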

Also, see 
https://groups.google.com/forum/#!topic/mailing.openssl.users/Rg6yV4ccWeo 


-Bryan




Re: HAProxy Build Error with TARGET

2016-09-13 Thread Bryan Talbot

> On Sep 13, 2016, at 9:16 PM, Coscend@HAProxy wrote:
> 
> Hello HAProxy Community,
> 
> We are upgrading from HAProxy 1.6.7 to 1.6.9 by building from source.  We
> would appreciate any vector on the issue we are facing with specifying
> TARGET in make and makefile.  

What source are you using?


> 
> It is building fine with TARGET=linux2628.  
> However, we are getting a build error with TARGET=linux310 (see log summary
> and detailed log below).  Makefile also has TARGET=linux310.  Our Linux
> version, uname -r gives 3.10.0-229.el7.x86_64


The official source at http://git.haproxy.org/git/haproxy-1.6.git does not 
define TARGET for linux310 anywhere that I can find.

-Bryan




Re: unique-id-header logged twice ?

2016-08-23 Thread Bryan Talbot

> On Aug 23, 2016, at 5:43 PM, Jakov Sosic wrote:
> 
> Hi guys,
> 
> 
> Later I log it in Apache in custom log format:
> 
> LogFormat "%a %l %u [%{%d/%b/%Y:%T}t,%{msec_frac}t %{%z}t] \"%r\" %>s %b 
> \"%{Referer}i\" \"%{User-Agent}i\" \"%{X-Unique-ID}i\"" combined_uniqueid
> 
> 
> But, lately I've notice - very rarely but still happened, a request which 
> logged two unique ids.
> 
> After verfying ips and ports, I conclude that first request has:
> 
> SRC: 192.168.50.200 [client_ip]
> DST: 192.168.50.99  [haproxy_ip]
> 
> second one though, has:
> 
> SRC: 192.168.50.99  [haproxy_ip]
> DST: 192.168.50.99  [haproxy_ip]
> 
> 
> An example:
> 
> [Fri Aug 19 12:20:13.468461 2016] [-:error] [pid 9390] [client 
> 192.168.1.99:53393] [request_id: 
> C0A801C8:DE3E_C0A80163:0050_573D9359_39BE5:6408, 
> C0A80163:CBB9_C0A80163:0050_573D935C_39BE8:6408] .
> 
> 
> Any ideas what could have happened over here?
> 


Assuming you don’t loop connections through haproxy itself (backend -> other 
haproxy frontend), which is sometimes done for TLS termination and logging 
reasons, I can think of two other options.

Perhaps an eavesdropper or developer set up another proxy to snoop on / debug 
traffic on your load balancer?

Or, someone is sending requests with the X-Unique-ID header already set. If you 
don’t explicitly remove any incoming values, the unique-id-header will add a 
new one.
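If it’s the latter, a hedged sketch of a defensive config (the format string is
the example from the docs):

frontend fe
    http-request del-header X-Unique-ID
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    unique-id-header X-Unique-ID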


-Bryan




Re: question about http request rate limit

2016-08-15 Thread Bryan Talbot

> On Aug 15, 2016, at 2:00 AM, Artem Lalaiants wrote:
> 
> Hello,
> 
> Can somebody explain why the http_req_rate counter still counts requests with 
> the following configuration even after all the traffic starts coming through 
> the "error" backend only?
> 
> frontend all-requests 0.0.0.0:80 
> tcp-request content track-sc0 fe_id() table es_backend
> use_backend error_429 if { sc_http_req_rate(0) gt 9 }
> backend es_backend
> stick-table size 1 expire 10s type integer store http_req_rate(10s)
> backend error_429
> mode http
> errorfile 429 /etc/haproxy/errors/429rate.http
> 
> I expect http_req_rate to be decreased once the traffic goes to another 
> backend (error_429 in my case). Am I missing something and how it can be 
> achieved?
> 


You’re tracking based on the id of the frontend so no matter what backend is 
used, the key for fe_id() is still incremented for the request.
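One way to get the behavior you expect (an untested sketch) is to track only in
the real backend, so denied requests stop feeding the counter, and to test the
rate through a table converter instead:

frontend all-requests 0.0.0.0:80
    use_backend error_429 if { fe_id,table_http_req_rate(es_backend) gt 9 }
    default_backend es_backend

backend es_backend
    stick-table size 1 expire 10s type integer store http_req_rate(10s)
    http-request track-sc0 fe_id table es_backend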

-Bryan



Re: Matching of NULL bytes impossible with rstring

2016-08-15 Thread Bryan Talbot

> On Aug 15, 2016, at 4:06 AM, Ariano-Tim Donda wrote:
> 
> For my project it must be possible to check different bytes from \x00 to \xFF 
> via tcp-check expect rstring. But it is not possible to check NULL bytes. 
> Everything after the first NULL byte will be ignored.
> My test configuration:
> tcp-check send-binary 9C0800870100
> tcp-check expect rstring ^\x9a\x00\x00.{5}\x00{13}\x10\x00\x04


Strings are NUL-terminated. I’m guessing that the ‘binary’ matcher is 
what you want to use instead of ‘rstring’?
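Something like this might do it (untested; ‘binary’ matches against a hex dump
of the response, so NUL bytes are representable, though there is no regex
support):

tcp-check send-binary 9C0800870100
tcp-check expect binary 9a0000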

-Bryan



>  
>  
> Best regards/Mit freundlichen Grüßen
> -
> Ariano-Tim Donda
> System Engineer HSM
>  
> Utimaco IS GmbH
> Germanusstr. 4
> 52080 Aachen
> Germany
>  
> phone: +49 241 1696 – 220
> www.utimaco.com 
>  
> 
> 


Re: SEGV with sc_trackers

2016-08-13 Thread Bryan Talbot

> On Aug 13, 2016, at 11:00 AM, Lukas Tribus wrote:
> 
> Here's a stacktrace on Linux without compiler optimizations:
> 


Thank you Lukas. I did forget to mention that it occurs on Linux and OS X but 
that I only had build/debug tools handy on OSX.

-Bryan



> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x080f504b in smp_fetch_sc_trackers (args=0x9b86de0, smp=0xbfffdfd4, 
> kw=0x81317c7 "sc0_trackers", private=0x0) at src/stream.c:3265
> 3265smp->data.u.sint = stkctr_entry(stkctr)->ref_cnt;
> (gdb) bt
> #0  0x080f504b in smp_fetch_sc_trackers (args=0x9b86de0, smp=0xbfffdfd4, 
> kw=0x81317c7 "sc0_trackers", private=0x0) at src/stream.c:3265
> #1  0x080fa9b3 in sample_process (px=0x9b85120, sess=0x9b84cb8, 
> strm=0x9b84d68, opt=6, expr=0x9b86dc0, p=0xbfffdfd4) at src/sample.c:1060
> #2  0x080f8d4b in acl_exec_cond (cond=0x9b86d38, px=0x9b85120, 
> sess=0x9b84cb8, strm=0x9b84d68, opt=6) at src/acl.c:1145
> #3  0x080abf45 in http_req_get_intercept_rule (px=0x9b85120, rules=0x9b8515c, 
> s=0x9b84d68, deny_status=0xbfffe11c) at src/proto_http.c:3314
> #4  0x080ade7d in http_process_req_common (s=0x9b84d68, req=0x9b84d74, 
> an_bit=16, px=0x9b85120) at src/proto_http.c:4157
> #5  0x080f0d0b in process_stream (t=0x9b84d10) at src/stream.c:1819
> #6  0x0805b19c in process_runnable_tasks () at src/task.c:238
> #7  0x0804d432 in run_poll_loop () at src/haproxy.c:1692
> #8  0x0804e090 in main (argc=3, argv=0xbfffe404) at src/haproxy.c:2059
> (gdb) bt full
> #0  0x080f504b in smp_fetch_sc_trackers (args=0x9b86de0, smp=0xbfffdfd4, 
> kw=0x81317c7 "sc0_trackers", private=0x0) at src/stream.c:3265
>stkctr = 0x81594b4 
> #1  0x080fa9b3 in sample_process (px=0x9b85120, sess=0x9b84cb8, 
> strm=0x9b84d68, opt=6, expr=0x9b86dc0, p=0xbfffdfd4) at src/sample.c:1060
>conv_expr = 0x8159540 
> #2  0x080f8d4b in acl_exec_cond (cond=0x9b86d38, px=0x9b85120, 
> sess=0x9b84cb8, strm=0x9b84d68, opt=6) at src/acl.c:1145
>suite = 0x9b86d78
>term = 0x9b86d90
>expr = 0x9b86c08
>acl = 0x9b86f40
>smp = {flags = 4, data = {type = 2, u = {sint = 0, ipv4 = {s_addr = 
> 0}, ipv6 = {__in6_u = {__u6_addr8 = '\000' , __u6_addr16 = {
>0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, str 
> = {str = 0x0, size = 0, len = 0}, meth = {meth = HTTP_METH_OPTIONS, str = {
>  str = 0x0, size = 0, len = 0, ctx = {p = 0x0, i = 0, ll 
> = 0, d = 0, a = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}}, px = 0x9b85120,
>  sess = 0x9b84cb8, strm = 0x9b84d68, opt = 6}
>acl_res = ACL_TEST_FAIL
>suite_res = ACL_TEST_PASS
>cond_res = ACL_TEST_FAIL
> #3  0x080abf45 in http_req_get_intercept_rule (px=0x9b85120, rules=0x9b8515c, 
> s=0x9b84d68, deny_status=0xbfffe11c) at src/proto_http.c:3314
>ret = 0
>sess = 0x9b84cb8
>txn = 0x9b84f40
>cli_conn = 0x0
>rule = 0x9b86b80
>ctx = {line = 0xb76e550f <__memcpy_ssse3+31> "\201\303\061\317\003", 
> idx = -1073749820, val = 134893733, vlen = 0, tws = -1073749784,
>  del = 134895703, prev = 163073640}
>auth_realm = 0x0
>act_flags = 2
>len = 0
> #4  0x080ade7d in http_process_req_common (s=0x9b84d68, req=0x9b84d74, 
> an_bit=16, px=0x9b85120) at src/proto_http.c:4157
>sess = 0x9b84cb8
>txn = 0x9b84f40
>msg = 0x9b84f94
>rule = 0x9b84db0
>wl = 0x0
>verdict = 135013966
>deny_status = 2
> #5  0x080f0d0b in process_stream (t=0x9b84d10) at src/stream.c:1819
>max_loops = 199
>ana_list = 48
>ana_back = 48
>flags = 9469954
>srv = 0x0
>s = 0x9b84d68
>sess = 0x9b84cb8
>rqf_last = 8421376
>rpf_last = 2147483648
>rq_prod_last = 7
>rq_cons_last = 0
>rp_cons_last = 7
>rp_prod_last = 0
>req_ana_back = 0
>req = 0x9b84d74
>res = 0x9b84da8
>si_f = 0x9b84eac
>si_b = 0x9b84ec4
> #6  0x0805b19c in process_runnable_tasks () at src/task.c:238
>t = 0x9b84d10
>max_processed = 0
> #7  0x0804d432 in run_poll_loop () at src/haproxy.c:1692
>next = 0
> #8  0x0804e090 in main (argc=3, argv=0xbfffe404) at src/haproxy.c:2059
>err = 0
>retry = 200
>limit = {rlim_cur = 4011, rlim_max = 4011}
>errmsg = "\000 v\267\264\342\377\277\000 v\267 
> $v\267p\201\267\tp\201\267\tt\361b\267x\201\267\t\000\000\000\000<\000\000\000\002\000\000\000\b\343\377\277\000
>  \025\b8\000\000\000 $v\267\030\343\377\277\000e{\267\030\260\021\b\000 
> \025\b\032\000\000\000\244\201\267\t\030\343\377\277\037\333\017\b\252\201\267\t\222H\023\b"
>pidfd = -1
> (gdb) quit
> 




SEGV with sc_trackers

2016-08-12 Thread Bryan Talbot
I have a config that produces a segfault when using the sc0_trackers but works 
when using sc0_conn_cur. I’m not 100% sure that my use is correct but I don’t 
think it should SEGV either way.

This config produces the crash when processing a simple request from curl. The 
intent of the stick table counters is to limit rates per-source, using the 
frontend table, and per-service (request path), using the backend table. If the 
line using sc0_trackers is commented out, the processing works as expected. 
Also, if the backend uses sc1 instead of sc0 then there is no crash either. I 
am assuming that sc0 trackers from two different tables can both be used at the 
same time with different values; this does seem to work with sc_conn_cur but 
not so with sc_trackers.


Is this a proper use of sc_trackers? 




btalbot-lt:haproxy-1.6$ cat crash.cfg
global

defaults
timeout client 1s
timeout server 1s
timeout connect 1s
mode http

frontend http
bind :8000
stick-table type ip size 100k expire 60m store http_req_rate(10s)
acl is_special path_beg /special
http-request track-sc0 hdr(X-Forwarded-For)   if is_special
http-request track-sc0 str(/special)   table be1  if is_special
http-request set-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]high-request-rate, if is_special { sc0_http_req_rate() ge 10 }
http-request set-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]high-service-concur, if is_special { sc0_trackers(be1) gt 50 }
#http-request set-header X-Haproxy-ACL %[req.fhdr(X-Haproxy-ACL,-1)]high-service-concur, if is_special { sc0_conn_cur(be1) gt 50 }

backend be1
stick-table type string size 100k expire 10m





btalbot-lt:haproxy-1.6$ ./haproxy -vv
HA-Proxy version 1.6.7-f7a1f0-17 2016/08/10
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = generic
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): no
Built with zlib version : 1.2.5
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2h  3 May 2016
Running on OpenSSL version : OpenSSL 1.0.2h  3 May 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without Lua support

Available polling systems :
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 2 (2 usable), will use poll.



The crash can be triggered with curl


btalbot-lt:haproxy-1.6$ ./haproxy -f ./crash.cfg -d
Note: setting global.maxconn to 2000.
Available polling systems :
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 2 (1 usable), will use poll.
Using poll() as the polling mechanism.
:http.accept(0004)=0005 from [127.0.0.1:63079]
:http.clireq[0005:]: GET /special/batman HTTP/1.1
:http.clihdr[0005:]: Host: 127.0.0.1:8000
:http.clihdr[0005:]: User-Agent: curl/7.50.1
:http.clihdr[0005:]: Accept: */*
:http.clihdr[0005:]: X-Forwarded-For: 1.2.3.4
Segmentation fault: 11 (core dumped)




btalbot-lt:haproxy-1.6$ curl -isS --url "http://127.0.0.1:8000/special/batman"; 
-H "X-Forwarded-For: 1.2.3.4"
curl: (52) Empty reply from server



A backtrace from the core file is

btalbot-lt:haproxy-1.6$ lldb -c /cores/core.9762 ./haproxy
(lldb) target create "./haproxy" --core "/cores/core.9762"
warning: (x86_64) /cores/core.9762 load command 74 LC_SEGMENT_64 has a fileoff 
+ filesize (0x26ccd000) that extends beyond the end of the file (0x26ccc000), 
the segment will be truncated to match
warning: (x86_64) /cores/core.9762 load command 75 LC_SEGMENT_64 has a fileoff 
(0x26ccd000) that extends beyond the end of the file (0x26ccc000), ignoring 
this section
Core file '/cores/core.9762' (x86_64) was loaded.
(lldb) pla sta
  Platform: host
Triple: x86_64h-apple-macosx
OS Version: 10.11.6 (15G31)
Kernel: Darwin Kernel Version 15.6.0: Thu Jun 23 18:25:34 PDT 2016; 
root:xnu-3248.60.10~1/RELEASE_X86_64
  Hostname: 127.0.0.1
WorkingDir: /Users/btalbot/git/haproxy-1.6
(lldb) bt
* thread #1: tid = 0x, 0x0001093d436d 
haproxy`smp_fetch_sc_trackers(args=, smp=0x7fff568a74f0, 
kw=, private=0x) + 61 at stream.c:3299, stop 
reason = signal SIGSTOP
  * frame #0: 0x0001093d436d 
haproxy`smp_fetch_sc_trackers(args=, smp=0x7fff568a74f0, 
kw=, private=0x) + 61 at stream.c:3299
frame #1: 0x0001093d8c18 haproxy`sample_process(px=0x7f933480ae00, 
sess=0x7f9333c050e0, strm=0x00

Re: [PATCH] MINOR: Fixes the build of 1.7-dev3 on OSX

2016-07-05 Thread Bryan Talbot
I didn’t see other discussion about it, but commit 3015a2 seems to have fixed 
this issue. Thank you.

-Bryan



> On Jul 1, 2016, at 2:09 PM, Bryan Talbot wrote:
> 
> 
>> On Jul 1, 2016, at 9:36 AM, 유준희 wrote:
>> I found below error on 90fd35c3a726e613e36ea0399507778b094181a0 with OS X 
>> 11.5 (El capitan)
> 
> 
> Issue introduced with
> 
> 93b227db9502f72f894c83708cd49c41925158b2 is the first bad commit
> commit 93b227db9502f72f894c83708cd49c41925158b2
> Author: Bertrand Jacquin
> Date:   Sat Jun 4 15:11:10 2016 +0100
> 
> -Bryan
> 
> 
> 
>> 
>> $ make TARGET=generic
>> 
>> gcc -Iinclude -Iebtree -Wall  -O2 -g -fno-strict-aliasing 
>> -Wdeclaration-after-statement   -DTPROXY -DENABLE_POLL  
>> -DCONFIG_HAPROXY_VERSION=\"1.7-dev3-90fd35-69\" 
>> -DCONFIG_HAPROXY_DATE=\"2016/06/30\" -c -o src/connection.o src/connection.c
>> src/connection.c:739:65: error: no member named 'source' in 'struct tcphdr'
>> ((struct sockaddr_in *)&conn->addr.from)->sin_port = 
>> hdr_tcp->source;
>>  ~~~ 
>>  ^
>> src/connection.c:743:63: error: no member named 'dest' in 'struct tcphdr'
>> ((struct sockaddr_in *)&conn->addr.to)->sin_port = hdr_tcp->dest;
>>~~~  ^
>> src/connection.c:772:67: error: no member named 'source' in 'struct tcphdr'
>> ((struct sockaddr_in6 *)&conn->addr.from)->sin6_port = 
>> hdr_tcp->source;
>>
>> ~~~  ^
>> src/connection.c:776:65: error: no member named 'dest' in 'struct tcphdr'
>> ((struct sockaddr_in6 *)&conn->addr.to)->sin6_port = hdr_tcp->dest;
>>  ~~~ 
>>  ^
>> 4 errors generated.
>> make: *** [src/connection.o] Error 1
>> 
>> The reason why is 'struct tcphdr' in <netinet/tcp.h> doesn't have source and 
>> dest members:
>> 
>> 
>> ...
>> 81 struct tcphdr {
>> 82 unsigned shortth_sport;   /* source port */
>> 83 unsigned shortth_dport;   /* destination port */
>> 84 tcp_seq th_seq;   /* sequence number */
>> ...
>> 
>> 
>> After I patched it, it compiled and seems to be working properly.
>> I check the functionality with very simple configuration.
>> 
>> global
>> maxconn10
>> 
>> defaults
>> mode http
>> timeout connect 5000ms
>> timeout client 5ms
>> timeout server 5ms
>> 
>> frontend http-in
>> mode http
>> bind *:8080
>> default_backend servers
>> 
>> backend servers
>> balanceroundrobin
>> server server1 127.0.0.1:8081
>> server server2 127.0.0.1:8082
>> server server3 127.0.0.1:8083
>> 
>> 
>> 
>> Thanks
>> - Jun Hee Yoo
>> 
>> -- 
>> 踏雪野中去
>> <0001-BUILD-Can-t-build-on-OS-X-11.5.patch>
> 



Re: [PATCH] MINOR: Fixes the build of 1.7-dev3 on OSX

2016-07-01 Thread Bryan Talbot

> On Jul 1, 2016, at 9:36 AM, 유준희 wrote:
> 
> I found below error on 90fd35c3a726e613e36ea0399507778b094181a0 with OS X 
> 11.5 (El capitan)


Issue introduced with

93b227db9502f72f894c83708cd49c41925158b2 is the first bad commit
commit 93b227db9502f72f894c83708cd49c41925158b2
Author: Bertrand Jacquin 
Date:   Sat Jun 4 15:11:10 2016 +0100

-Bryan



> 
> $ make TARGET=generic
> 
> gcc -Iinclude -Iebtree -Wall  -O2 -g -fno-strict-aliasing 
> -Wdeclaration-after-statement   -DTPROXY -DENABLE_POLL  
> -DCONFIG_HAPROXY_VERSION=\"1.7-dev3-90fd35-69\" 
> -DCONFIG_HAPROXY_DATE=\"2016/06/30\" -c -o src/connection.o src/connection.c
> src/connection.c:739:65: error: no member named 'source' in 'struct tcphdr'
> ((struct sockaddr_in *)&conn->addr.from)->sin_port = 
> hdr_tcp->source;
>  ~~~  
> ^
> src/connection.c:743:63: error: no member named 'dest' in 'struct tcphdr'
> ((struct sockaddr_in *)&conn->addr.to)->sin_port = hdr_tcp->dest;
>~~~  ^
> src/connection.c:772:67: error: no member named 'source' in 'struct tcphdr'
> ((struct sockaddr_in6 *)&conn->addr.from)->sin6_port = 
> hdr_tcp->source;
>
> ~~~  ^
> src/connection.c:776:65: error: no member named 'dest' in 'struct tcphdr'
> ((struct sockaddr_in6 *)&conn->addr.to)->sin6_port = hdr_tcp->dest;
>  ~~~  
> ^
> 4 errors generated.
> make: *** [src/connection.o] Error 1
> 
> The reason is that 'struct tcphdr' in <netinet/tcp.h> doesn't have source and dest 
> member:
> 
> 
> ...
> 81 struct tcphdr {
> 82 unsigned short th_sport;   /* source port */
> 83 unsigned short th_dport;   /* destination port */
> 84 tcp_seq th_seq;/* sequence number */
> ...
> 
> 
> After applying my patch, it compiled and seems to be working properly.
> I check the functionality with very simple configuration.
> 
> global
> maxconn10
> 
> defaults
> mode http
> timeout connect 5000ms
> timeout client 5ms
> timeout server 5ms
> 
> frontend http-in
> mode http
> bind *:8080
> default_backend servers
> 
> backend servers
> balanceroundrobin
> server server1 127.0.0.1:8081 
> server server2 127.0.0.1:8082 
> server server3 127.0.0.1:8083 
> 
> 
> 
> Thanks
> - Jun Hee Yoo
> 
> -- 
> 踏雪野中去
> <0001-BUILD-Can-t-build-on-OS-X-11.5.patch>



Re: authorization haproxy.

2016-06-16 Thread Bryan Talbot

> On Jun 15, 2016, at Jun 15, 1:35 AM, Aleksander Maltzev 
>  wrote:
> 
> Hello.
> I use authorization haproxy.
> I have a many users in haproxy.cfg  userlist
> how to make a personal file for user list ?


Short answer: don’t do that.

AFAIK, that feature is meant to allow controlled admin access to the proxy 
and is not meant to authenticate users for applications behind the proxy. 
Application users should be authenticated by the app, with the users stored 
in external storage someplace.

-Bryan




Re: Bug when loading multiple configuration files

2016-05-24 Thread Bryan Talbot
The OP didn’t provide many details, but I am able to reproduce this too using 
1.7-dev and the config files shown below. Git bisect shows the break at the 
commit mentioned.


$> cat haproxy.cfg haproxy2.cfg
global

defaults
timeout client 5s
timeout server 5s
timeout connect 5s
mode http

listen www
bind :8000


listen www2
bind :8001


$> cat git-bisect-run.sh
#!/bin/bash -e
make clean
make TARGET=generic USE_OPENSSL=1 ADDLIB=-lcrypto 
SSL_INC=/usr/local/opt/openssl/include SSL_LIB=/usr/local/opt/openssl/lib 
USE_ZLIB=1 USE_PCRE=1 -j4
./haproxy -c -f ./haproxy.cfg -f ./haproxy2.cfg || exit 1
./haproxy -vv





> On May 24, 2016, at May 24, 4:50 AM, Ben Cabot  wrote:
> 
> Hi all,
> I think we have found an issue when using multiple configuration
> files. The config parser tries to register the listen section twice
> causing the error below.
> 
> [root@lbmaster haproxy]# /usr/local/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -f /etc/haproxy/haproxy_manual.cfg
> [ALERT] 144/113841 (10937) : register section 'listen': already registered.
> [ALERT] 144/113841 (10937) : Could not open configuration file
> /etc/haproxy/haproxy_manual.cfg : Success
> 
> 
> It looks to be introduced in 5e4261b0 but I'm unsure how to fix it.
> Please can someone take a look.
> 
> Thanks,
> 
> Ben
> 




Re: Performance considerations for ACL order and type

2016-05-17 Thread Bryan Talbot

> On May 17, 2016, at May 17, 3:32 PM, Sean Decker  
> wrote:
> 
> I'm wondering if there are any significant performance implications for the 
> order of our ACLs known without doing multiple rounds of testing. Here is an 
> example mixing path_beg and path_reg. 


IMO:

Make it as easy to read as possible while satisfying the config parser. Then, if you have issues 
with performance — say you need to do 100,000 RPS — worry about such slight 
optimizations.

-Bryan





Re: SNI Support for Health Check on Backend Server

2016-03-11 Thread Bryan Talbot
This passes config check for me using 1.6 HEAD


btalbot-lt:haproxy-1.6$ cat haproxy.cfg
global

defaults
timeout client 5s
timeout server 5s
timeout connect 5s
mode http

listen https
bind :443
server dev05 192.168.1.10:443 check ssl sni str(prontotest.orthobanc.com)
verify none



btalbot-lt:haproxy-1.6$ ./haproxy -f ./haproxy.cfg -c
Configuration file is valid



btalbot-lt:haproxy-1.6$ ./haproxy -vv
HA-Proxy version 1.6.3-079e34-67 2016/03/10
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = generic
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): no
Built with zlib version : 1.2.5
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built without PCRE support (using libc's regex instead)
Built without Lua support

Available polling systems :
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 2 (2 usable), will use poll.



On Fri, Mar 11, 2016 at 5:23 PM, William D. Roush <
william.ro...@roushtech.net> wrote:

> Using: "server dev05 192.168.1.10:443 check ssl sni str(www.mysite.com)
> verify none"
>
>
>
> Proxy 'www.mysite.com', server 'dev05' [/etc/haproxy/haproxy.cfg:62]
> verify is enabled by default but no CA file specified. If you're running on
> a LAN where you're certain to trust the server's certificate, please set an
> explicit 'verify none' statement on the 'server' line, or use
> 'ssl-server-verify none' in the global section to disable server-side
> verifications by default.
>
>
>
>
>
> Using: "server dev05 192.168.1.10:443 check sni str(
> prontotest.orthobanc.com) ssl verify none "
>
>
>
> parsing [/etc/haproxy/haproxy.cfg:62] : 'server dev-web-06' unknown
> keyword 'none'.
>
>
>
>
>
> William Roush | www.roushtech.net
>
>
>
> *From:* Bryan Talbot [mailto:bryan.tal...@ijji.com]
> *Sent:* Friday, March 11, 2016 5:32 PM
> *To:* William D. Roush 
> *Cc:* haproxy@formilux.org
> *Subject:* Re: SNI Support for Health Check on Backend Server
>
>
>
> There is a recently reported bug for this. Try putting "verify none" AFTER
> the "sni" keyword in your server line.
>
>
>
> -Bryan
>
>
>
>
>
> On Fri, Mar 11, 2016 at 2:08 PM, William D. Roush <
> william.ro...@roushtech.net> wrote:
>
> Hey Everybody,
>
>
>
> Been struggling trying to get SNI to work with health checks, even using
> 1.6 and a server configuration of this:
>
>
>
> dev05 192.168.1.10:443 check ssl verify none sni str(www.mysite.com)
>
>
>
> It will still not send the SNI information to the backend server during
> health checks.
>
>
>
>
>
> Am I missing some additional options here? Or is this unsupported in 1.6?
> Is this slated for 1.7?
>
>
> Thanks!
>
> William Roush
>
> william.ro...@roushtech.net
>
>
>
> http://www.roushtech.net/
>
>
>


Re: SNI Support for Health Check on Backend Server

2016-03-11 Thread Bryan Talbot
There is a recently reported bug for this. Try putting "verify none" AFTER
the "sni" keyword in your server line.

-Bryan


On Fri, Mar 11, 2016 at 2:08 PM, William D. Roush <
william.ro...@roushtech.net> wrote:

> Hey Everybody,
>
>
> Been struggling trying to get SNI to work with health checks, even using
> 1.6 and a server configuration of this:
>
>
> dev05 192.168.1.10:443 check ssl verify none sni str(www.mysite.com)
>
>
> It will still not send the SNI information to the backend server during
> health checks.
>
>
>
> Am I missing some additional options here? Or is this unsupported in 1.6?
> Is this slated for 1.7?
>
>
> Thanks!
>
> William Roush
>
> william.ro...@roushtech.net
>
>
>
> http://www.roushtech.net/
>


Re: Keep-alive causing latency spike

2016-02-27 Thread Bryan Talbot
On Sat, Feb 27, 2016 at 12:24 PM, CJ Ess  wrote:

> Hey folks, I could use some help figuring this one out. My environment
> looks like this:
>
>
> The way I am monitoring the request latency is by averaging the Tt field
> from the haproxy logs by second.
>
>
>
The Tt values include Tq which includes the keep-alive time that occurs
between requests. The latency you're measuring is the request processing
time + any "think" time that occurs between transactions over the same
session.
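
If you want to see the processing time without that idle time, logging the
individual timers makes the split visible. A minimal sketch (this is
essentially the stock HTTP log format, written with the quoted log-format
syntax):

defaults
    mode http
    log global
    option httplog
    # %Tq = request read + keep-alive idle time, %Tr = server response time,
    # %Tt = total session time
    log-format "%ci:%cp [%t] %ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

Averaging %Tr instead of %Tt should exclude the "think" time between requests.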

-Bryan


Re: gpc0_rate computing incorrectly with peer replication turned in [in 1.6.3]

2016-02-24 Thread Bryan Talbot
On Wed, Feb 24, 2016 at 6:05 PM, James Brown  wrote:
>
> We use a gpc0 counter for rate-limiting certain requests in our
application. It was working fine with 1.5.14, but as soon as I upgraded to
1.6.3, we started seeing the gpc0_rate value go crazy – it's currently
showing values in the hundreds of thousands when the underlying gpc0
counter has
> stick-table type string len 32 size 512 expire 5m store
gpc0,gpc0_rate(5m),http_req_rate(10s) peers lbsj
>
>


I didn't realize that stick tables without a server-id entry like this
would be replicated to remotes. My reading of the docs for 1.5 and 1.6
stick-table peers option makes it seem like ONLY stick-table entries with a
server-id are replicated to remotes. Maybe this is not the case?

Entries which associate keys to server IDs are kept synchronized with the
remote peers declared in this section.



https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#stick-table
https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#stick-table

-Bryan


P.S.  Now I think I know why we got a bunch of 'too many errors' responses
from EasyPost today!


stick table replication

2016-02-24 Thread Bryan Talbot
From the docs, it looks like stick table entries are only replicated when
they store a server-id. This makes sense if stick tables are only used for
sticky-sessions shared across multiple proxy instances.

Is there a way to get stick table replication to occur when the stick table
is not used for session affinity?

I'd like to use them to rate-limit access to sensitive resources but with
the current setup, it seems that each (clustered) proxy instance will apply
limits independently meaning that I can only approximate the limits that
will be actually enforced.
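
For concreteness, the sort of setup I mean is roughly this (a sketch; the
peer names, addresses and threshold are illustrative):

peers mypeers
    peer lb1 10.0.0.1:1024
    peer lb2 10.0.0.2:1024

backend sensitive
    stick-table type ip size 100k expire 10m store http_req_rate(10s) peers mypeers
    tcp-request content track-sc0 src
    http-request deny if { sc0_http_req_rate gt 20 }

Each instance tracks and enforces this locally, so without replication the
effective cluster-wide limit is roughly N times the configured value for N
instances.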

Thank you,
-Bryan


Re: Feature Request for log stdout ...

2016-02-18 Thread Bryan Talbot
Sorry I'm a bit late to this party but when running in a container it's
also easy to configure haproxy to log to a unix socket and bind mount that
socket to the host.

in haproxy.cfg:

log /dev/log local2


Then when launching the container an option like "-v /var/log:/var/log"
works quite well to get container syslogs to the host.

-Bryan



On Thu, Feb 18, 2016 at 6:22 AM, Willy Tarreau  wrote:

> Hi Aleks,
>
> On Thu, Feb 18, 2016 at 02:53:29PM +0100, Aleksandar Lazic wrote:
> > But this moves just the stdout handling to other tools and does not
> > solve the problem with blocking handling of std*, as far as I have
> > understood right.
>
> Yes it does because if the logging daemon blocks, logs are simply lost
> on the UDP socket between haproxy and the daemon without blocking
> haproxy.
>
> > It also 'violates' the best practice of docker.
> >
> >
> https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run-only-one-process-per-container
>
> Well it's written "in almost all cases". Otherwise you would not even
> be allowed to use nbproc or the systemd wrapper. If you consider your
> daemon as the log-dedicated process, it's OK :-)
>
> > Okay this could be solved with the linking as described in the link.
> >
> > For openshift I will try to use 2 container in 1 pod.
> >
> > If there any interests I can write here if this works ;-)
>
> Sure, please report anyway.
>
> Cheers,
> Willy
>
>
>


Re: Old instances continue to accept connections after graceful reload

2016-02-05 Thread Bryan Talbot
On Fri, Feb 5, 2016 at 7:07 PM, Bryan Talbot  wrote:

> I think you're just attempting to reload haproxy too fast. There are race
> conditions in getting the list of running pids and passing them into
> haproxy -- that list changes before the next proxy is started.
>

After further investigation, I think this answer is only partially correct.
Reloads are happening too fast, but the -wait option for consul wasn't
fully effective.

You've probably seen this old bug on this issue that seems to affect consul
after version 0.10.0
https://github.com/hashicorp/consul-template/issues/442

I don't use consul but took this opportunity to try it out. It seems to
have issues delivering signals -- at least for haproxy -- as those bug
reports attest.

To avoid the signal handling issue I got your setup to work using consul
but with a processes structured so that consul doesn't manage the haproxy
processes but just send HUP signals to trigger a reload. For me, this is
working even when starting and stopping 10s of backend processes per second.

The process tree in the docker container now looks like this:

docker top haproxy xf
PID     TTY   STAT  TIME  COMMAND
14272   ?     Ss    0:00  \_ /bin/sh /entrypoint.sh
14286   ?     Sl    0:00  | \_ consul-template -config=/tmp/haproxy.ctmpl.cfg -log-level=debug
14287   ?     S     0:00  | \_ /usr/local/sbin/haproxy-systemd-wrapper -f /usr/local/etc/haproxy/haproxy.cfg -p /run/haproxy.pid
15380   ?     S     0:00  | \_ /usr/local/sbin/haproxy -f /usr/local/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 184
15381   ?     Ss    0:00  | \_ /usr/local/sbin/haproxy -f /usr/local/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 184




Since it uses haproxy-systemd-wrapper to manage the haproxy processes and
the alpine haproxy package doesn't include it, I used my own haproxy docker
contain which does include the wrapper. Below are the files from your
config with changes that make this work for me.



$> cat Dockerfile

FROM fingershock/haproxy-base:1.6.3
MAINTAINER Pure Storage version: 0.1

ENV CONSUL_TEMPLATE_VERSION=0.12.2
ENV CONSUL_URL=https://releases.hashicorp.com/consul-template/${CONSUL_TEMPLATE_VERSION}/consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip

# Download consul-template
RUN ( wget --no-verbose --no-check-certificate ${CONSUL_URL} -O /tmp/consul_template.zip && \
  unzip -d /tmp/ -o /tmp/consul_template.zip && \
  mv /tmp/consul-template /usr/bin && \
  rm -rf /tmp/* )

COPY *.http /etc/haproxy/errors/

COPY haproxy.ctmpl.cfg /tmp/
COPY haproxy.ctmpl /tmp/
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]





$> cat entrypoint.sh
#!/bin/sh

# This script starts haproxy and consul.
# It then remains around as PID1 and exits when a signal to terminate is
received. It will also
# exit if any child processes die (haproxy-systemd-wrapper, consul).

#set -o xtrace
set -o errexit
if [ -n "$BASH_VERSION" ]; then
# These are not POSIX and won't work (or are not needed) with dash
# shell, but are needed for bash.
# Enable monitor mode so job control is active and we get SIGCHLD
# (when using bash).
set -o monitor
set -o pipefail
fi

if [ $# -ne 0 ]; then
  exec "$@"
fi


# generate initial configuration
consul-template -config=/tmp/haproxy.ctmpl.cfg -log-level=debug -once

# and watch for changes
consul-template -config=/tmp/haproxy.ctmpl.cfg -log-level=debug &

# start haproxy using systemd-wrapper as that process should not exit
# and manages the haproxy daemons
/usr/local/sbin/haproxy-systemd-wrapper -f /usr/local/etc/haproxy/haproxy.cfg -p /run/haproxy.pid &


# exit for normal termination signals
quit()
{
exit 0
}
# exit if haproxy or consul stops running
childquit()
{
exit 1
}

trap childquit CHLD
trap quit INT QUIT TERM

wait




$> cat haproxy.ctmpl.cfg
template {
  source = "/tmp/haproxy.ctmpl"
  destination = "/usr/local/etc/haproxy/haproxy.cfg"
  command = "killall -s HUP haproxy-systemd-wrapper || true"
}



-Bryan




>
> Your test case is reloading haproxy about 10 times per second. There are
> several reports on this list about that being an issue. AFAIK, the issue
> isn't with the proxy code itself, but just with the way that the list of
> running processes are collected and signaled.
>
> Anyway, for your case, adding a -wait=5s option to consul-template makes
> the problem go away for me.
>
> Maybe you can get away with reloading every 1 second, but multiple times
> per second is not likely to work reliably.
>
>
> 

Re: Old instances continue to accept connections after graceful reload

2016-02-05 Thread Bryan Talbot
I think you're just attempting to reload haproxy too fast. There are race
conditions in getting the list of running pids and passing them into
haproxy -- that list changes before the next proxy is started.

Your test case is reloading haproxy about 10 times per second. There are
several reports on this list about that being an issue. AFAIK, the issue
isn't with the proxy code itself, but just with the way that the list of
running processes are collected and signaled.

Anyway, for your case, adding a -wait=5s option to consul-template makes
the problem go away for me.

Maybe you can get away with reloading every 1 second, but multiple times
per second is not likely to work reliably.


-Bryan


On Fri, Feb 5, 2016 at 6:11 PM, Maciej Katafiasz  wrote:

> I will try with explicit -reap, but we have actually investigated that
> as one possible cause, and consul-template reports automatically
> engaging reap mode as it sees itself as PID 1, and then reaps the
> processes that actually exit correctly. The problem is that old
> haproxy processes continue to run and process requests when they're
> not supposed to, which is different to dead, but unreaped processes
> that you'd see with faulty PID 1. As I said, most of the "aha!"
> moments we've had with this problem turned out to be spurious
> correlations and the issue ultimately came back. It's just that
> sometimes, for no articulable reason, it works fine, and then the next
> time it doesn't.
>
> Cheers,
>
> On 5 February 2016 at 16:59, Cyril Bonté  wrote:
> > Hi,
> >
> >
> > Le 06/02/2016 01:03, Maciej Katafiasz a écrit :
> >>
> >> On 5 February 2016 at 16:02, Maciej Katafiasz
> >>  wrote:
> >>>
> >>> Link to the tarball:
> >>> https://purestorage.app.box.com/s/nnzqueais46plzd9xfisnmkeab7j9s0y
> >>>
> >>> I will be sending it as an attachment in a separate mail as a followup
> >>> to this one, in case the mailing list software scrubs attachments
> >>> and/or considers them spam.
> >>
> >>
> >> And here's the tarball as an attachment
> >
> >
> > This looks to be concern more consul inside a docker container than a
> > haproxy itself. But this may explain some similar reports made by other
> > users recently.
> >
> > Use the -reap option with consul-template, and haproxy will reload
> > correctly.
> >
> > As quick example :
> > $ docker run --name=haproxy -d --net=host haproxy-bugtest consul-template
> > -config=/tmp/haproxy.ctmpl.cfg -log-level=debug -reap
> >
> > $ docker exec -ti haproxy sh
> > # ps aux
> > ...
> > 16 root   0:00 haproxy -f /etc/haproxy/haproxy.cfg -d -p
> > /var/run/haproxy.pid -sf
> > ...
> >
> > # haproxy -f /etc/haproxy/haproxy.cfg -d -p /var/run/haproxy.pid -sf 16 &> /tmp/debug.log &
> >
> > # ps aux
> > ...
> > 27 root   0:00 haproxy -f /etc/haproxy/haproxy.cfg -d -p /var/run/haproxy.pid -sf 16
> > ...
> > => No more PID 16
> >
> >
> >
> >
> > --
> > Cyril Bonté
>
>


Re: haproxy reloads, stale listeners, SIGKILL required

2016-02-02 Thread Bryan Talbot
On Tue, Feb 2, 2016 at 4:11 PM, David Birdsong 
wrote:

>
>
> On Tue, Feb 2, 2016 at 7:09 PM Bryan Talbot  wrote:
>
>> On Tue, Feb 2, 2016 at 3:56 PM, David Birdsong 
>> wrote:
>>
>>> Has nobody else run into this w/ consul? Given the plethora of tools
>>> around consul and haproxy and templating, I know others are using reloads
>>> to keep backend current, but the old haproxy PIDs stick around listening w/
>>> incorrect backends.
>>>
>>
>> I'm not using consul but am using haproxy in a docker container and
>> reloading when backend hosts change registrations. I haven't seen this
>> issue. I run using haproxy-systemd-wrapper and HUP that process to reload.
>>
>
> does that mean the wrapper ensures that an old process exits and forces if
> it doesn't eventually?
>


I don't believe it forces the child proxies to exit, but it does pass
selected signals (like HUP) on to them.

-Bryan


Re: haproxy reloads, stale listeners, SIGKILL required

2016-02-02 Thread Bryan Talbot
On Tue, Feb 2, 2016 at 3:56 PM, David Birdsong 
wrote:

> Has nobody else run into this w/ consul? Given the plethora of tools
> around consul and haproxy and templating, I know others are using reloads
> to keep backend current, but the old haproxy PIDs stick around listening w/
> incorrect backends.
>

I'm not using consul but am using haproxy in a docker container and
reloading when backend hosts change registrations. I haven't seen this
issue. I run using haproxy-systemd-wrapper and HUP that process to reload.

-Bryan




>
> On Thu, Jan 28, 2016 at 8:52 PM David Birdsong 
> wrote:
>
>> On Thu, Jan 28, 2016 at 6:35 PM, Pavlos Parissis <
>> pavlos.paris...@gmail.com> wrote:
>>
>>> On 28/01/2016 10:35 μμ, David Birdsong wrote:
>>> > I've been running into a problem for a few weeks that I was hoping to
>>> > see disappear w/ a simple upgrade to 1.6.3.
>>> >
>>> > I'm using consul and it's templating to dynamically expand a backend
>>> > list which then runs an haproxy reload using the init scripts in the
>>> > contrib dir.
>>> >
>>> > I haven't been able to trace how the situation is triggered,but
>>> > basically I find haproxy processes that are still listening on their
>>> > bound sockets long after having received a reload signal via the  '-sf'
>>> > parameter.
>>> >
>>> > My first fix was to ensure that reloads weren't happening too fast and
>>> > potentially stomping on the pid file which would explain a process
>>> never
>>> > getting a signal--so I now have a lock + sleeps for the automatic
>>> reload
>>> > script. No change.
>>> >
>>> > What's probably most interesting is that a SIGKILL is necessary to
>>> > remove the 'stale' processes that are still listening on their sockets
>>> > (as determined by lsof.)
>>> >
>>> > I can supply the config if needed, there is a single listener w/ a pem
>>> > directory specified if that's helpful.
>>> >
>>>
>>> The previous processes don't die that fast because there are connections
>>> still alive on them. Which is caused by very long timeout settings.
>>>
>>
>> The haproxy instance has many tcp-mode connections, and so, yes, the
>> processes often hang around for many days. I'm familiar with haproxy's
>> exiting when all connections have finished.
>>
>> What I'm looking for help about is the listen socket remaining. lsof
>> indicates a listener on the port and so accepts new connections on the
>> 'stale' process and configuration file.
>>
>>
>>>
>>> The behavior you see is normal.
>>>
>>> Cheers,
>>> Pavlos
>>>
>>>
>>>
>>>


Re: Why is req.hdr not working for me?

2016-02-02 Thread Bryan Talbot
Because you're ignoring the warnings that haproxy generates when you run
with that configuration

[WARNING] 032/111847 (11107) : parsing [./haproxy.cfg:6] : acl
'ORIGIN_PRESENT' will never match because it only involves keywords that
are incompatible with 'backend http-response header rule'

[WARNING] 032/111847 (11107) : parsing [./haproxy.cfg:7] : 'http-response'
: sample fetch <req.hdr(Origin)> may not be reliably used here because it
needs 'HTTP request headers' which is not available here.
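
One way to get what you're after (a sketch; it relies on the capture.req.hdr
sample fetch, so check that your version supports it -- I believe it needs
1.6) is to capture the header on the request side, where req.hdr is valid,
and reference the capture in the response rules:

listen heatonra
    bind 0.0.0.0:8888        # port is illustrative
    server cargo localhost:8080
    # captured request headers remain available at response time
    capture request header Origin len 128
    http-response set-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m found }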



On Tue, Feb 2, 2016 at 11:04 AM, Ryan Heaton  wrote:

> Hi.
>
> I’m using HA-Proxy 1.5.14.
>
> I’ve got configuration like this:
>
> listen heatonra 0.0.0.0:
>   server cargo localhost:8080
>   acl ORIGIN_PRESENT req.hdr(Origin) -m found
>   http-response set-header Hello World if ORIGIN_PRESENT
>   http-response set-header Access-Control-Allow-Origin %[req.hdr(Origin)]
>
> But when I send a request with the “Origin” header, the use of
> “req.hdr(Origin)” doesn’t seem to work:
>
> $ curl -X HEAD -v -H "Origin: myself" 
> http://localhost:/platform/collections/tree
> *   Trying 127.0.0.1...
> * Connected to localhost (127.0.0.1) port  (#0)
> > HEAD /platform/collections/tree HTTP/1.1
> > Host: localhost:
> > User-Agent: curl/7.43.0
> > Accept: */*
> > Origin: myself
> >
> < HTTP/1.1 200 OK
> < Server: Apache-Coyote/1.1
> < Content-Location: /platform/collections/tree
> < Cache-Control: no-transform, max-age=604800
> < X-PROCESSING-TIME: 1
> < Content-Type: application/xml
> < Content-Length: 0
> < Date: Tue, 02 Feb 2016 18:55:44 GMT
> < Access-Control-Allow-Origin:
> <
> * Connection #0 to host localhost left intact
>
> I’m expecting “Hello: World” and “Access-Control-Allow-Origin: myself” in
> the response.
>
> Can anyone explain where my expectations are incorrect or why I’m not
> seeing what I’m expecting?
>
> Thanks!
>
> -Ryan
> ​
>


Re: keep-alive problems and best practices question

2016-01-22 Thread Bryan Talbot
On Fri, Jan 22, 2016 at 3:18 AM, Piotr Rybicki  wrote:

>
> Found it. Seems like this issue:
>
> http://www.serverphorums.com/read.php?10,1341691
>
>
>> haproxy 1.5.15, linux 3.18.24
>>>
>>
>>


This issue was fixed in 1.5 with 3de8e7ab8 in November but there hasn't
been a release with it yet.

1.6.3 has the fix already.

Maybe it's time for 1.5.16?

-Bryan


Re: HAProxy is not able to bind

2016-01-12 Thread Bryan Talbot
On Tue, Jan 12, 2016 at 12:23 PM, Lobron, David  wrote:

> Hi All,
>
>
>
> 0
> down vote
> favorite
>

Copy-and-pasted from Stack Overflow?




>
> listen  rtt 172.28.11.94:9500
> mode tcp
> bind 172.28.11.94:9500 ssl crt /etc/haproxy/cert.pem
>


> [ALERT] 011/114700 (6149) : Starting proxy rtt: cannot bind socket  [
> 172.28.11.94:9500]
>
> The warning about select() not working is a little strange, but it seems
> like it's falling back to poll(), which should be fine. But I can't figure
> out why it can't bind to port 9500 when I run it as root, as I'm doing
> here. Any help would be much appreciated!
>
>
Don't bind to the same port twice. Remove the IP:PORT from the listen line
and that should solve your problem.
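
That is, something like this (a sketch based on your quoted config):

listen rtt
    mode tcp
    bind 172.28.11.94:9500 ssl crt /etc/haproxy/cert.pem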

-Bryan


Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-09 Thread Bryan Talbot
On Wed, Dec 9, 2015 at 10:54 AM, Dave Zhu (yanbzhu) 
wrote:

>
> I was able to add functionality for loading multiple certs when the crt
> bind option is a directory. That’s in patch 4. Patch 2 now contains 4, 5,
> and 6.
>
>
Still passing basic tests for me including the crt directory support.
Thanks for that!

https://github.com/btalbot/dual-stack-test


btalbot-lt:dual-stack-test$ vagrant ssh -c ./share/testit.sh
Configuration file is valid
old OpenSSL to dual-stack port expecting 2048 bit RSA cert ... success
old OpenSSL to ecc-only port expecting error ... success
old OpenSSL to rsa-only port expecting 2048 bit RSA cert ... success
old OpenSSL to crt-dir port expecting 2048 bit RSA cert ... success
new OpenSSL to dual-stack port expecting 256 bit ECDSA cert ... success
new OpenSSL to ecc-only port expecting 256 bit ECDSA cert ... success
new OpenSSL to rsa-only port expecting 2048 bit RSA cert ... success
new OpenSSL to dual-stack port expecting 2048 bit RSA cert ... success
new OpenSSL to crt-dir port expecting 256 bit ECDSA cert ... success
new OpenSSL to crt-dir port expecting 2048 bit RSA cert ... success


Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-08 Thread Bryan Talbot
On Tue, Dec 8, 2015 at 11:18 AM, Dave Zhu (yanbzhu) 
wrote:

> Hey Bryan,
>
> I believe I have gotten to the bottom of the behavior that you are seeing:
>
>
>1. 0.9.8 client cannot connect to dual cert port: This was a bug on my
>part. I neglected to set a DHE keys for the SSL_CTX with multiple certs.
>I’ve attached another set of patches (1-5 are identical, 6 is new) that
>fixes this.
>
>
yep, patch 6 fixes this problem for me.



>
>2. ECC capable client does not use ECC cipher: I believe this is due
>to your test configuration. Openssl prefers RSA ciphers by default, and so
>if you don’t specify an ECC cipher first, it will always pick an RSA
>cipher. Your test uses "./openssl-1.0.2e/apps/openssl s_client -connect
>127.0.0.1:8443” as the command, which will use the default cipher
>list. Try specifying an ECC cipher as the first cipher and it should work.
>
>
Of course, I should have realized that too. I've updated the bind ciphers
to prioritize ECDSA over RSA and that fixes the issue. So the basic tests I
defined before are all passing now but only when the crt line specifies a
"pem" file that doesn't exist and .ecdsa / .rsa files are loaded from that
base.


Now, about using the crt bind option with a directory of certs
https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#crt (Bind
options)

How should that work, especially if there are .ocsp and .issuer data in the
crt directory? Currently, the ECDSA certificate seems to always be used
even for non-ECC capable clients but I suspect that's due to the .ecdsa
cert being loaded first and your patches do not cover that use case yet.



-Bryan


Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-07 Thread Bryan Talbot
Glad you were able to get to the bottom of the crash.

With the newest 5 patches, I'm still not seeing the behavior I am
expecting. To make my testing and expectations hopefully more clear, I've
pushed them to github (https://github.com/btalbot/dual-stack-test)  From a
laptop with Vagrant installed, it should be a simple process to provision a
host for testing and run the test script.

What I am expecting is that OpenSSL 0.9.8 client will be able to connect to
an haproxy port that is bound to both ECDSA and RSA certificates. This
doesn't work for me and the connection fails the SSL handshake.

I'm also expecting that a newer OpenSSL which support ECC will connect AND
negotiate and use the 256 bit ECDSA certificate and not the RSA cert. My
tests always show the ECC capable client still getting the RSA certificate.



-Bryan




On Mon, Dec 7, 2015 at 1:44 PM, Willy Tarreau  wrote:

> On Mon, Dec 07, 2015 at 08:48:53PM +, Dave Zhu (yanbzhu) wrote:
> > Hey Willy
> >
> > On 12/7/15, 3:11 PM, "Willy Tarreau"  wrote:
> > >
> > >Yep, thanks for the pointer. So indeed gcc's inline version of strncpy
> > >*is*
> > >bogus. strncpy() has no right to guess the destination size.
> > >
> > >I suspect that if you just do this it would work (prefix the array with
> > >'&'
> > >and use [0] :
> > >
> > >   strncpy((char *)&s_kt->name.key[0], trash.str, i);
> > >
> > >Thanks,
> > >Willy
> >
> > You would be correct in this guess :)
> >
> > So what's the preference? Should I change it to use this weird version of
> > strncpy, or change it to memcpy?
>
> I'd prefer the memcpy() anyway. Please keep your comment and add the
> link to gcc's bugzilla so that nobody is tempted to change this later
> for any reason, and please mention that it's the inlined version of
> strncpy() which refuses to write into a char[0].
>
> You have my full support if you want to add some dirty words there to
> express your feelings about the compiler which dies on valid C code...
>
> Thanks,
> Willy
>
>


Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-05 Thread Bryan Talbot
On Fri, Dec 4, 2015 at 10:17 AM, Bryan Talbot  wrote:

> On Fri, Dec 4, 2015 at 6:15 AM, Dave Zhu (yanbzhu) 
> wrote:
>
>> Hey Bryan,
>> it’s strange that it’s always loading the ECC cert. I just tested the
>> code on my end and I’m not seeing this behavior.
>>
>>
> I see it on OSX, I'll test on Linux today.
>
>

On Ubuntu VERSION="14.04.3 LTS, Trusty Tahr" with OpenSSL 1.0.2e compiled
from source, haproxy is crashing with your patches and a bind line of
  bind :8443 ssl crt ./var/tls/localhost.pem

If I change the bind to be
  bind :8443 ssl crt ./var/tls/
it doesn't crash.

OpenSSL 1.0.2e was built and installed to /usr/local/ssl/ with "./config &&
make && make test && sudo make install"
haproxy 1.6.2 was built from source

make -j 4 TARGET=linux2628 USE_OPENSSL=1 SSL_INC=/usr/local/ssl/include
SSL_LIB=/usr/local/ssl/lib USE_ZLIB=1 ADDLIB=-ldl all

$> ./haproxy -vv
HA-Proxy version 1.6.2 2015/11/03
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2e 3 Dec 2015
Running on OpenSSL version : OpenSSL 1.0.2e 3 Dec 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built without PCRE support (using libc's regex instead)
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.



$>  ./haproxy -f ./tls-test-haproxy.cfg -c
*** buffer overflow detected ***: ./haproxy terminated
=== Backtrace: =
/lib/x86_64-linux-gnu/libc.so.6(+0x7338f)[0x7f59577da38f]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x5c)[0x7f5957871c9c]
/lib/x86_64-linux-gnu/libc.so.6(+0x109b60)[0x7f5957870b60]
/lib/x86_64-linux-gnu/libc.so.6(__stpncpy_chk+0x0)[0x7f595786ffc0]
./haproxy[0x48dc4e]
./haproxy[0x490ec8]
./haproxy[0x493090]
./haproxy[0x4932d1]
./haproxy[0x41e27d]
./haproxy[0x42a680]
./haproxy[0x406676]
./haproxy[0x40490c]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f5957788ec5]
./haproxy[0x405963]
=== Memory map: 
0040-006cb000 r-xp  08:01 268022
/home/vagrant/haproxy-1.6.2/haproxy
008ca000-008cb000 r--p 002ca000 08:01 268022
/home/vagrant/haproxy-1.6.2/haproxy
008cb000-008dc000 rw-p 002cb000 08:01 268022
/home/vagrant/haproxy-1.6.2/haproxy
008dc000-008ed000 rw-p  00:00 0
01aee000-01b0f000 rw-p  00:00 0
 [heap]
7f5957551000-7f5957567000 r-xp  08:01 2286
/lib/x86_64-linux-gnu/libgcc_s.so.1
7f5957567000-7f5957766000 ---p 00016000 08:01 2286
/lib/x86_64-linux-gnu/libgcc_s.so.1
7f5957766000-7f5957767000 rw-p 00015000 08:01 2286
/lib/x86_64-linux-gnu/libgcc_s.so.1
7f5957767000-7f5957922000 r-xp  08:01 2269
/lib/x86_64-linux-gnu/libc-2.19.so
7f5957922000-7f5957b21000 ---p 001bb000 08:01 2269
/lib/x86_64-linux-gnu/libc-2.19.so
7f5957b21000-7f5957b25000 r--p 001ba000 08:01 2269
/lib/x86_64-linux-gnu/libc-2.19.so
7f5957b25000-7f5957b27000 rw-p 001be000 08:01 2269
/lib/x86_64-linux-gnu/libc-2.19.so
7f5957b27000-7f5957b2c000 rw-p  00:00 0
7f5957b2c000-7f5957b2f000 r-xp  08:01 2138
/lib/x86_64-linux-gnu/libdl-2.19.so
7f5957b2f000-7f5957d2e000 ---p 3000 08:01 2138
/lib/x86_64-linux-gnu/libdl-2.19.so
7f5957d2e000-7f5957d2f000 r--p 2000 08:01 2138
/lib/x86_64-linux-gnu/libdl-2.19.so
7f5957d2f000-7f5957d3 rw-p 3000 08:01 2138
/lib/x86_64-linux-gnu/libdl-2.19.so
7f5957d3-7f5957d48000 r-xp  08:01 2166
/lib/x86_64-linux-gnu/libz.so.1.2.8
7f5957d48000-7f5957f47000 ---p 00018000 08:01 2166
/lib/x86_64-linux-gnu/libz.so.1.2.8
7f5957f47000-7f5957f48000 r--p 00017000 08:01 2166
/lib/x86_64-linux-gnu/libz.so.1.2.8
7f5957f48000-7f5957f49000 rw-p 00018000 08:01 2166
/lib/x86_64-linux-gnu/libz.so.1.2.8
7f5957f49000-7f5957f52000 r-xp  08:01 2314
/lib/x86_64-linux-gnu/libcrypt-2.19.so
7f5957f52000-7f5958152000 ---p 9000 08:01 2314
/lib/x86_64-linux-gnu/libcrypt-2.19.so
7f5958152000-7f5958153000 r--p 9000 08:01 2314
/lib/x86_64-linux-gnu/libcrypt-2.19.so
7f5958153000-7f5958154000 rw-p a000 08:01 2314
/lib/x86_64-linux-gnu/libcrypt-2.19.so
7f5958154000-7f5958182000 rw-p  00:00 0
7f5958182000-7f59581a5000 r-xp  08:01 2235
/lib/x86_64-linux-gnu

Re: HAProxy setup

2015-12-04 Thread Bryan Talbot
On Fri, Dec 4, 2015 at 5:16 AM, Milos Zupancic  wrote:

> Hi,
>
> I am looking for a solution on how to setup HaProxy and Tomcat with SSL
> termination + passing client certificate to the backend tomcat.
>
>
> backend c-https
> mode http
> balance roundrobin
> cookie SERVERID insert nocache
> server ljvfep4 192.168.0.10:20443 check inter 2000 rise 2 fall 2
> server ljvfep3 192.168.0.11:20443 check inter 2000 rise 2 fall 2
>
>
> This would give me a 502 bad gateway error. If i access the tomcat
> directly all works as expected.
> And suggestions ?
>
>

From these port numbers and your statement about "Tomcat with SSL" it seems
like you're expecting an SSL connection from haproxy to tomcat. If that's
the case, you'll need to add the appropriate ssl options to the server
lines too.

https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl (Server
and default-server options)
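
A sketch of what that could look like ("verify none" is just to get a first
handshake working; point ca-file at the CA that signed the Tomcat
certificates if you want real verification):

backend c-https
    mode http
    balance roundrobin
    cookie SERVERID insert nocache
    server ljvfep4 192.168.0.10:20443 ssl verify none check inter 2000 rise 2 fall 2
    server ljvfep3 192.168.0.11:20443 ssl verify none check inter 2000 rise 2 fall 2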

-Bryan


Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-04 Thread Bryan Talbot
On Fri, Dec 4, 2015 at 6:15 AM, Dave Zhu (yanbzhu) 
wrote:

> Hey Bryan,
> it’s strange that it’s always loading the ECC cert. I just tested the code
> on my end and I’m not seeing this behavior.
>
>
I see it on OSX, I'll test on Linux today.



> Back to your original problem though, do the certs share a CN or SAN?
> That’s the only way that they would get loaded together into a shared
> context.
>
>
Yes, the entire DN is identical for the two certs including the CN. There
is no SAN on these.


btalbot-lt:haproxy-1.6$ openssl x509 -subject -issuer -noout -pubkey -in
var/tls/localhost.pem.rsa
subject= /C=US/ST=CA/L=San Jose/O=iJJi Engineering/OU=Test
Certificate/CN=localhost.local
issuer= /C=US/ST=CA/L=San Jose/O=iJJi Engineering/OU=Test
Certificate/CN=localhost.local
-BEGIN PUBLIC KEY-
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzfd+4oUNDoF0xAjWfsg0
Ch/SVr6IOzLZPjU1z7OpNMgBbn0AQZ8znc070EJlkLdk8AjSp8EaLktz3vCPcT/J
wAJgc28/7RUIcUpLMEfSVYXyGhBDFJS0rUDM9FKXOkrxGt22e6zlrvarpQTW/05W
NLJq5ZmsvydNEEG55KBouBU/e2PlMOiRHwgOGZU4i+5XnVfvkd90A+TQiC2PhVh3
56cslp8wfcULmJ2dF3EpuiwNSaQZ8YbNWBqO2vZ7FGUwjiLD0atf9ysVJp87trvp
lA57R4TjiOAQpEdcgdiGUjJ2SjPPApS6XZUxjrlazkeL27ZPkezB3pn+NQ7BQQU1
6wIDAQAB
-END PUBLIC KEY-


btalbot-lt:haproxy-1.6$ openssl x509 -subject -issuer -noout -pubkey -in
var/tls/localhost.pem.ecdsa
subject= /C=US/ST=CA/L=San Jose/O=iJJi Engineering/OU=Test
Certificate/CN=localhost.local
issuer= /C=US/ST=CA/L=San Jose/O=iJJi Engineering/OU=Test
Certificate/CN=localhost.local
-BEGIN PUBLIC KEY-
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEQFfhz8mRC3sRZp8+hJKBTx1Qz3Mm
FPVD/Wt9giz4E0oH/a8XLnvul0q+RqzW9K7v/IFQtGxxRjgahHlUW7fw/Q==
-END PUBLIC KEY-


Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-03 Thread Bryan Talbot
Another odd thing is that both certs are loaded even if the ECC cert
doesn't have the proper name.

In my testing with a bind line of
  bind :8443 ssl crt ./var/tls/localhost.pem

the ECC cert is loaded if it is in that directory no matter what the file
name is.

-Bryan




On Thu, Dec 3, 2015 at 2:15 PM, Bryan Talbot  wrote:

> On Thu, Dec 3, 2015 at 2:00 PM, Dave Zhu (yanbzhu) 
> wrote:
>
>> Hey Bryan.
>>
>> I noticed that you gave HAProxy a directory. You have to give it the name
>> of the cert instead of the directory.
>>
>> So your config should be:
>>
>>   bind :8443 ssl crt ./var/tls/localhost.pem
>>
>>
>>
>
> I get the same behavior with that configuration.
>
> Hopefully loading certs from a directory instead of naming them all will
> be enabled in a future patch since I think a lot of existing configs load
> them that way.
>
> -Bryan
>
>


Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-03 Thread Bryan Talbot
On Thu, Dec 3, 2015 at 2:00 PM, Dave Zhu (yanbzhu) 
wrote:

> Hey Bryan.
>
> I noticed that you gave HAProxy a directory. You have to give it the name
> of the cert instead of the directory.
>
> So your config should be:
>
>   bind :8443 ssl crt ./var/tls/localhost.pem
>
>
>

I get the same behavior with that configuration.

Hopefully loading certs from a directory instead of naming them all will be
enabled in a future patch since I think a lot of existing configs load them
that way.

-Bryan


Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-03 Thread Bryan Talbot
Hi Dave.

I've applied the patches but things are not working as I expected. It could
be that my expectations are incorrect though. I'm expecting that with two
(ECC and RSA) self-signed testing certificates deployed with the haproxy
config shown below that ECC capable clients will connect and use the ECC
certificate while old clients that do not support ECC will connect and use
the RSA certificate.

What I'm seeing is that when an older OpenSSL client that does not support
ECC attempts to connect, it fails to handshake if the ECC certificate is
available in haproxy. If I remove the ECC certificate completely, the
handshake completes and a suitable RSA cipher is used.

OpenSSL from OSX fails when haproxy has RSA and ECC cert in ./var/tls/

btalbot-lt:tls$ /usr/bin/openssl version
OpenSSL 0.9.8zg 14 July 2015

btalbot-lt:tls$ echo | /usr/bin/openssl s_client -connect localhost:8443
CONNECTED(0003)
78356:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert
handshake
failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-59/src/ssl/s23_clnt.c:593:



but works when haproxy has only RSA cert in ./var/tls/

btalbot-lt:tls$ echo | /usr/bin/openssl s_client -connect localhost:8443
CONNECTED(0003)
depth=0 /C=US/ST=CA/L=San Jose/O=iJJi Engineering/OU=Test
Certificate/CN=localhost.local
verify error:num=18:self signed certificate
...
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES128-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol  : TLSv1
Cipher: DHE-RSA-AES128-SHA
Session-ID:
7715FF5B9964E190619862C0D7D926E5B5519A3D40661264C7451D9D6BD1B0C9
Session-ID-ctx:
Master-Key:
CC09E45F63C345EA9400D8E2AA34985CC85151BE8358D338FA526A3D3F02ED9E2E69AFD6D0DF01B325036FCCAEF940C8
Key-Arg   : None
Start Time: 1449175301
Timeout   : 300 (sec)
Verify return code: 18 (self signed certificate)




btalbot-lt:haproxy-1.6$ ./haproxy -vv
HA-Proxy version 1.6.2-5f5296-22 2015/12/03
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = generic
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): no
Built with zlib version : 1.2.5
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
Running on OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built without PCRE support (using libc's regex instead)
Built without Lua support

Available polling systems :
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 2 (2 usable), will use poll.




btalbot-lt:haproxy-1.6$ cat tls-test-haproxy.cfg
global
  log 127.0.0.1:1514 local2
  ssl-default-bind-options no-sslv3
  ssl-default-bind-ciphers
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
  tune.ssl.default-dh-param 1024
  tune.bufsize 16384
  tune.maxrewrite 1024


defaults
  timeout connect 5s
  timeout queue  50s
  timeout client 50s
  timeout server 50s
  log global
  modehttp
  option  httplog
  option  dontlognull
  option  http-keep-alive


listen https
  bind :8443 ssl crt ./var/tls/
  monitor-uri /test





btalbot-lt:haproxy-1.6$ ls -l1 ./var/tls/
localhost.pem.ecdsa
localhost.pem.rsa




btalbot-lt:haproxy-1.6$ git remote -v
origin http://git.haproxy.org/git/haproxy-1.6.git (fetch)
origin http://git.haproxy.org/git/haproxy-1.6.git (push)




btalbot-lt:haproxy-1.6$ git log origin..HEAD
commit 5f5296f7d766a37f6c55ddcb728ba436172a94ad
Author: yanbzhu 
Date:   Wed Dec 2 13:54:14 2015 -0500

MINOR: ssl: Added multi cert support for crt-list config keyword

Same functionality as previous commit, but added support to crt-list
keyword.

Note that it's not practical to support SNI filters with multicerts, so
any SNI filters provided to the crt-list are ignored if a multi-cert
operation is used.

commit 98c7a958dbc93f2f58acde0b851f8423bac86005
Author: yanbzhu 
Date:   Wed Dec 2 13:01:29 2015 -0500

MEDIUM: ssl: Added support for creating SSL_CTX with multiple certs

Added ability for users to specify multiple certificates that 

Re: what's the difference between rspdel and http-response del-header

2015-12-03 Thread Bryan Talbot
On Wed, Dec 2, 2015 at 8:50 PM, Ruoshan Huang 
wrote:

> hi,
> I’m a confused about the difference between `rspdel` and
> `http-response del-header`. if all I want is to delete a hdr of plain text
> instead of regular expression, does `http-response del-header` perform
> faster? under what circumstance should I use `rspxxx` directives instead?
>


rspdel is older and remains for backwards compatibility.

http-response del-header should be used for new configurations. I believe
that performance should be similar.
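
Side by side, with an illustrative header name:

# old style: a regex applied to the full response header line
rspdel ^X-Debug-Info:.*

# new style: plain header name, and it can take an 'if' condition
http-response del-header X-Debug-Info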

-Bryan


Re: [SPAM] Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-11-30 Thread Bryan Talbot
On Mon, Nov 30, 2015 at 3:32 PM, Olivier Doucet  wrote:

> Hello,
>
> I'm digging out this thread, because having multiple certificate for one
> single domain (SNI) but with different key types (RSA/ECDSA) can really be
> a great functionality. Is there some progress ? How can we help ?
>


I'd love to see better support for multiple certificate key types for the
same SNI too.

That said, it is possible to serve both EC and RSA keyed certificates using
haproxy 1.6 now. See
http://blog.haproxy.com/2015/07/15/serving-ecc-and-rsa-certificates-on-same-ip-with-haproxy/
for details. It's not exactly pretty but it does seem to work.
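
Roughly, the trick from that post looks like this (a sketch; socket paths
and names are illustrative, and it needs the 1.6 req.ssl_ec_ext fetch):

frontend ssl-relay
    mode tcp
    bind 0.0.0.0:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # route clients advertising EC support to the ECC listener
    use_backend ssl-ecc if { req.ssl_ec_ext 1 }
    default_backend ssl-rsa

backend ssl-ecc
    mode tcp
    server ecc unix@/var/run/haproxy_ssl_ecc.sock send-proxy

backend ssl-rsa
    mode tcp
    server rsa unix@/var/run/haproxy_ssl_rsa.sock send-proxy

listen ssl-ecc-term
    mode http
    bind unix@/var/run/haproxy_ssl_ecc.sock accept-proxy ssl crt /etc/haproxy/site.pem.ecdsa
    server app 127.0.0.1:8080

(plus a matching ssl-rsa-term listener bound to the other socket with the
RSA certificate)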




>
> A subsidiary question is : how much ECDSA certificates are supported ? So
> if I use a single ECDSA certificate, how many people wont be able to see my
> content ?
>
>
>
They're pretty well supported by modern clients. Exactly what that means is
a bit fuzzy though: see
https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_and_ECDHE_support for
additional details.

If your clients are all "modern" browsers and mobile devices, you're
probably good. If there are old clients, or other systems calling an API
there can be issues especially if they are using Java <= 7.

I've also discovered that Amazon CloudFront doesn't support EC certificates
at all. Can't use them in CloudFront distributions and CloudFront won't
connect to an Origin that uses them.

-Bryan


Re: Owncloud through Haproxy makes upload not possible

2015-11-20 Thread Bryan Talbot
On Fri, Nov 20, 2015 at 1:39 AM, Piotr Kubaj  wrote:

>
>
> Unfortunately, using 1.5.15 didn't change anything. The logs are:
>

We can see from the logs below that the connection is aborting with CD or
sH codes.

The docs say:

 CD   The client unexpectedly aborted during data transfer. This can be
  caused by a browser crash, by an intermediate equipment between the
  client and haproxy which decided to actively break the connection,
  by network routing issues between the client and haproxy, or by a
  keep-alive session between the server and the client terminated first
  by the client.


 sH   The "timeout server" stroke before the server could return its
  response headers. This is the most common anomaly, indicating too
  long transactions, probably caused by server or database saturation.
  The immediate workaround consists in increasing the "timeout server"
  setting, but it is important to keep in mind that the user experience
  will suffer from these long response times. The only long term
  solution is to fix the application.


The logs show that haproxy is able to accept the connection from your
client and make the connection to a backend; however, the client is then
being disconnected before the server responds or the server is taking too
long to respond and haproxy returns an error.

Maybe you have a firewall that's causing troubles?



Nov 20 10:24:43 anongoth haproxy[86788]: 46.248.161.165:13481
> [20/Nov/2015:10:23:45.309] https-in~ owncloud/node1 5791/0/0/-1/58637
> -1 0 - - CD-- 2/2/2/2/0 0/0 "POST
> /index.php/apps/files/ajax/upload.php HTTP/1.1"
> Nov 20 10:25:11 anongoth haproxy[86788]: 46.248.161.165:57472
> [20/Nov/2015:10:23:58.280] https-in~ owncloud/node1 14900/0/1/-1/73036
> -1 0 - - CD-- 1/1/1/1/0 0/0 "POST
> /index.php/apps/files/ajax/upload.php HTTP/1.1"
> Nov 20 10:28:21 anongoth haproxy[86788]: 46.248.161.165:45063
> [20/Nov/2015:10:26:54.272] https-in~ owncloud/node1 58/0/1/-1/87092
> 504 194 - - sH-- 0/0/0/0/0 0/0 "POST
> /index.php/apps/files/ajax/upload.php HTTP/1.1"
> Nov 20 10:28:22 anongoth haproxy[86788]: 46.248.161.165:17696
>


Re: Owncloud through Haproxy makes upload not possible

2015-11-18 Thread Bryan Talbot
On Wed, Nov 18, 2015 at 3:45 AM, Piotr Kubaj  wrote:

> Hi,
>
> I've got a home server with 1 public IP, on which I host a couple of my
> websites. Each of them is in a separate jail. Haproxy listens on the
> outgoing IP and directs the traffic to the appropriate jail. Each of my
> websites works fast. However, if uploading files in Owncloud goes VERY
> slow and in the end I get a Bad Gateway error if the file is larger than
> ~100KB. Smaller files go through, but slowly. If I make the Owncloud
> jail listen on the external IP and connect directly to it, there's no
> problem, so it must be something about Haproxy configuration.
>


Hard to guess what the issue is but haproxy logging would probably help.
However, the logging configuration is a bit of a mess as it's configured to
be "all on" and "all off" at the same time. There are also other
configuration oddities.



>
> My operating system is FreeBSD 10.2-RELEASE-p7/amd64. Each jail is at
> the same version. Haproxy is at 1.6.2 version. I'm not sure if that
> matters, but I use Lighttpd 1.4.37 as a WWW server.
>
> Below is my haproxy.conf:
> global
> ssl-default-bind-options no-sslv3 no-tls-tickets force-tlsv12
> ssl-default-bind-ciphers AES256+EECDH:AES256+EDH
> tune.ssl.default-dh-param 4096
>

4096 bit DH params will be pretty slow to handshake. Maybe that's okay in
your circumstance, though, since you seem to be using this for personal use
and are not expecting a high connection rate. You also have an 8 kbit RSA
self-signed certificate and are using 256 bit ciphers, which increases TLS
overhead.




> log /var/run/log local0 notice
>

Is that where the logging socket is on FreeBSD now? I haven't used FreeBSD
in quite a while.



> maxconn 4096
> user daemon
> group daemon
> daemon
>
> defaults
> modehttp
> option  httplog
> option  dontlognull
> option  forwardfor
> option  http-server-close
> option  httpclose
> option  tcplog
> option  dontlog-normal
>

You have both tcp logging and http logging enabled at the same time. In
addition, you also have all logging disabled with "dontlognull" and
"dontlog-normal". If all your proxies are HTTP like you've shown, just
enable httplog and remove the tcplog option. When troubleshooting, enable
logging of at least normal connections.

You also do not want to use both httpclose and http-server-close since they
conflict. Remove option httpclose.

Timeouts are also missing and you should be getting warnings about that too.

[WARNING] 321/103152 (87887) : config : missing timeouts for proxy
'https-in'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
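
Pulling those points together, a minimal sketch of a cleaned-up defaults
section (the timeout values are placeholders to tune for your traffic):

defaults
    mode http
    log global
    option httplog
    option forwardfor
    option http-server-close
    timeout connect 5s
    timeout client  30s
    timeout server  30s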




>
>
> frontend http-in
> bind 192.168.11.3:80
>  reqadd X-Forwarded-Proto:\ http
> redirect scheme https code 301 if !{ ssl_fc }
>

Why bother to add a request header which will never be used?




>
> frontend https-in
> option httplog
> option forwardfor
> option http-server-close
> option httpclose
>

Don't need to repeat all of the above since it should be set in defaults
above (if set properly).
With no "log global" you won't get any logs anyway and are probably seeing
a warning when haproxy checks the config or starts.

[WARNING] 321/103109 (87884) : config : log format ignored for proxy
'https-in' since it has no log address.






> rspadd Public-Key-Pins:\
> pin-sha256="1Pw5h93NOsPw6j/vaTYl5VvW9cmtuZXtNP3cVz10hKo=";\
> max-age=15768000;\ includeSubDomains
>

AFAIK, HPKP is only somewhat supported, and only by the most recent browser
releases. I believe that it's also ignored by them for certificates which
are self-signed or signed by a CA that is not in the browser's
system-defined CA set. Probably doesn't cause your issue but who knows --
it is still experimental.

The "http-response set-header" supported in haproxy 1.5 and later is more
powerful and easier to read than the old reqadd and rspadd features.
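
For example, the pinning header above could be written as:

http-response set-header Public-Key-Pins "pin-sha256=\"1Pw5h93NOsPw6j/vaTYl5VvW9cmtuZXtNP3cVz10hKo=\"; max-age=15768000; includeSubDomains"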




> bind 192.168.11.3:443 ssl crt /usr/local/etc/haproxy.pem ciphers
> AES256+EECDH:AES256+EDH force-tlsv12 no-sslv3
>

Don't need to repeat these options that are already set globally.




>
> backend 10amd64
> server node1 192.168.11.3:81 cookie A check
>
>

Setting sticky cookies and not using them is probably harmless but what's
the point?

-Bryan


Re: acl regex

2015-11-11 Thread Bryan Talbot
On Wed, Nov 11, 2015 at 8:43 PM, Guillaume Bourque <
guillaume.bour...@logisoftech.com> wrote:

> Hi all,
>
> I can’t create an acl that will match this
>
> http://domain/?lang=
>
> I tried
>
> acl fr_top  path_reg  ^/.lang\=$
> acl fr_top  path_reg  ^/\?lang\=$
>
> acl fr_top  path_beg  /?lang\=$
>
>
>

You can't match the query string with the 'path' matcher. Try 'req.uri' or
'query' if you're using 1.6.
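
For instance (a sketch; the first form also works on 1.5, the second needs
the 1.6 'query' fetch):

# match the whole URI, query string included
acl fr_top url_reg ^/\?lang=$

# or test just the query string (1.6)
acl fr_top query -m str lang=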


Re: HAProxy does not write 504 on keep-alive connections

2015-11-11 Thread Bryan Talbot
On Wed, Nov 11, 2015 at 6:47 AM, Holger Just  wrote:

>
> As a loadbalancer however, HAProxy should always return a proper HTTP
> error if the request was even partially forwarded to the server. It's
> probably fine to just close the connection if the connect timeout stroke
> and the request was never actually handled anywhere but it should
> definitely return a real HTTP error if its the sever timeout and a
> backend server started doing anything with a request.
>
>
This would be my preferred behavior and actually what I thought haproxy was
already doing.

-Bryan


Re: Allowing 500 errors to pass through

2015-11-10 Thread Bryan Talbot
On Tue, Nov 10, 2015 at 4:53 PM, Aristedes Maniatis  wrote:

> I've got a situation with haproxy 1.5.x I'm trying to understand better.
> In my situation, several Apache httpd servers sit behind haproxy and behind
> that are the actual application servers. httpd is using mod-jk to load
> balance all the applications to all the web servers. So if the application
> server returns a 500 error, apache httpd will pass that through to haproxy
> and then to the end user.
>
> So far, all good. But I don't want haproxy to then remove that server from
> the pool. The 500 error could have been from a single application server,
> and we don't want to it lock out since it is already load balanced behind
> all the httpd servers. We will end up locking out one of the httpd server
> which really wasn't at fault.
>
> So my question:
>
> 1. Will haproxy remove a server from its backend pool if it returns a 500
> response to a request (I'm not talking about the health check, but just a
> regular request)
>


Not by default but haproxy can monitor normal traffic and take action if
desired. See the 'observe' and 'on-error' options.
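
A sketch of what that can look like (backend name, address and thresholds
are illustrative):

backend apache-servers
    # watch live HTTP responses and only mark the server down after
    # repeated consecutive errors, rather than on a single 500
    server web1 10.0.0.10:80 check observe layer7 on-error mark-down error-limit 10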



>
> 2. Can haproxy be instructed to ignore 500 errors for its health check (I
> still want to detect that the server has gone away and doesn't respond, but
> the 500 error might be transient or it might just be on one page due to
> misconfiguration and doesn't warrant removing the whole server).
>

Probably, but if you don't care about the HTTP request or response, why not
just use TCP health checks?
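That would just mean dropping 'option httpchk' so the 'check' on each
server line falls back to a plain TCP connect check, e.g. (address is a
placeholder):

backend app
  server web1 10.0.0.1:80 check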

-Bryan


Re: HAProxy does not write 504 on keep-alive connections

2015-11-10 Thread Bryan Talbot
On Tue, Nov 10, 2015 at 12:04 PM, Laurent Senta 
wrote:

> Hi there,
> I think there's a bug in how HAProxy handles timeout, that'd be great if
> you can confirm or help me figure out what I do wrong:
>
> Basically: if a server timeout happens on a keep-alive connection, haproxy
> does not write a 504 response before closing the socket.
>

I am able to reproduce this behavior too. I tried a few versions of haproxy
all the way back to 1.4.0 and they all did this.

The general way to reproduce is:
1) open tcp connection
2) make request that completes before 'timeout server' / 'timeout client'
3) make request that does NOT complete before 'timeout server' on same tcp
connection

The second request gets no response and the connection is just closed. If
the same request (#3) is made on a new tcp connection that did not have a
previously successful response, the response is a 504 rather than a
silently closed connection.

The haproxy.cfg I'm using is

global
maxconn 4096

defaults
mode http
timeout connect 5s
timeout client 450ms
timeout server 450ms

listen http
bind :8000
server slowserv 127.0.0.1:8002


And 'slowserv' simply sleeps for the amount of time requested through a
query string parameter.

Note that curl and httpclient seem to be poor tools for testing this
situation, because both retry a request that receives an empty response --
even if that request was a POST.

I'm able to see both the successful and the empty responses using wget like
this, by forcing wget not to retry (only try once). In these cases, the
backend will take 0.4 seconds and 0.5 seconds to respond (respectively).
The 'timeout server' is configured to strike at 0.45 seconds.

$> wget --tries 1 -O /dev/null "http://127.0.0.1:8000/?0.4"; "
http://127.0.0.1:8000/?0.5";





>
> This leads python to fail with a serious BadStatusLineError instead of a
> simple http error.
> And Ruby to retry potentially non-idempotent methods.
>
> Here's a basic setup to reproduce the error:
> https://gist.github.com/lsenta/1d33c6a01c07b32ac18a
>
> I've also had some help by meineerde on irc, here's the haproxy logs with
> a ruby client doing the same request:
> https://gist.github.com/meineerde/87a571c57369d322dae0#gistcomment-1617687
>
>

In case there is some confusion, the comment about "6 gets instead of 5" is
due to the ruby httpclient or curl retrying on the client side; the request
is not being retried by haproxy.



> I've seen this behavior with v.1.6.2, 1.5.15 and 1.6.0
>


and several 1.5.x, 1.4.26, 1.4.6, and 1.4.0 all the same.

-Bryan


Re: tcp-check with persistent session cookie ?

2015-11-06 Thread Bryan Talbot
On Fri, Nov 6, 2015 at 1:00 PM, Sébastien ROHAUT <
sebastien.rohaut@gmail.com> wrote:

> Hi,
>
>
> Is it possible to get and store the JSESSIONID cookie returned by the
> tcp-check expect (or something like this), and send it with the tcp-check
> send, to reuse the same session ?
>
> Is there a way for a health check to use persistent cookie session (always
> the same, one per server), returned by the check ?
>
>
Even if you can configure health checks to reuse the session id, your app
will still be trivially crashable over the network by anyone able to make
GET requests that start new sessions.


Re: haproxy daemon does not attempt to read ca-file on startup

2015-10-29 Thread Bryan Talbot
On Thu, Oct 29, 2015 at 1:43 PM, Joseph Hammerman <
jhammer...@secondmarket.com> wrote:

> Hi Brian,
>
> I am trying to issue the intermediate certificate so that my trust chain
> is presented to the browser. Am I using the wrong directive for that
> purpose?
>

Yes. The intermediate certs should go in the certificate file along with
the private key.

So, something like this in your case then:
$> cat secondmarket.com.cert authority-intermediate.pem secondmarket.com.key > secondmarket.com.pem


You might also want DH parameters in that file if you enable DH key
exchange ciphers.

-Bryan




>
> Thanks,
> Joe Hammerman
>
> On Thu, Oct 29, 2015 at 2:33 PM, Bryan Talbot 
> wrote:
>
>> On Thu, Oct 29, 2015 at 10:39 AM, Joseph Hammerman <
>> jhammer...@secondmarket.com> wrote:
>>
>>> Hi HAProxy users list,
>>>
>>> I am running HAProxy version 1.5.12-1 on Ubuntu Precise Pangolin
>>> (12.04). I have confirmed that it was compiled with OpenSSL support built
>>> in.
>>>
>>> I have configured an SSL backend thusly:
>>>
>>> bind 0.0.0.0:443 ssl crt /etc/ssl/private/secondmarket.com.pem ca-file
>>> /etc/ssl/private/secondmarket.ca.pem ciphers
>>> EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4
>>>
>>> launching haproxy under strace provides no indication that it made an
>>> attempt to read the ca-file (although you can clearly see it loading the
>>> crt file). strace output is here: http://pastebin.com/RDgAug7E
>>>
>>> Does anyone know why the ca-file directive is being ignored? Shall I
>>> upgrade?
>>>
>>
>>
>> ca-file is used when validating client certificates. Do you configure
>> anything that requires or expects clients to present a valid certificate?
>>
>> -Bryan
>>
>>
>
>


Re: haproxy daemon does not attempt to read ca-file on startup

2015-10-29 Thread Bryan Talbot
On Thu, Oct 29, 2015 at 10:39 AM, Joseph Hammerman <
jhammer...@secondmarket.com> wrote:

> Hi HAProxy users list,
>
> I am running HAProxy version 1.5.12-1 on Ubuntu Precise Pangolin (12.04).
> I have confirmed that it was compiled with OpenSSL support built in.
>
> I have configured an SSL backend thusly:
>
> bind 0.0.0.0:443 ssl crt /etc/ssl/private/secondmarket.com.pem ca-file
> /etc/ssl/private/secondmarket.ca.pem ciphers
> EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4
>
> launching haproxy under strace provides no indication that it made an
> attempt to read the ca-file (although you can clearly see it loading the
> crt file). strace output is here: http://pastebin.com/RDgAug7E
>
> Does anyone know why the ca-file directive is being ignored? Shall I
> upgrade?
>


ca-file is used when validating client certificates. Do you configure
anything that requires or expects clients to present a valid certificate?

-Bryan


Re: Does Haproxy supports backend on https for reverse proxy

2015-10-05 Thread Bryan Talbot
On Mon, Oct 5, 2015 at 1:42 PM, Kuchekar, Yogita (Yogita) <
ykuche...@avaya.com> wrote:

> Thanks for your reply..
>
>
>
> Sorry for the typo. Version for Haproxy is 1.5.
>
>
>
> I have been trying to achieve this for a while referring to forum examples.
>
> My configuration is like this. Could you please point me to a working
> example .
>
>
>
> defaults
>
> mode    http
>
> log global
>
> option  httplog
>
> option  dontlognull
>
> option http-server-close
>
> option forwardfor   except 127.0.0.0/8
>
> option  redispatch
>
> retries 3
>
> timeout http-request10s
>
> timeout queue   1m
>
> timeout connect 10s
>
> timeout client  1m
>
> timeout server  1m
>
> timeout http-keep-alive 10s
>
> timeout check   10s
>
> maxconn 3000
>
>
>
> #-
>
> # main frontend which proxys to the backends
>
> #-
>
>
>
>
>
>  frontend www
>
>bind 10.177.222.83:80
>
>option http-server-close
>

Should not need to repeat 'http-server-close' here since you have it in
defaults already.



>default_backend default-backend
>
>
>
>
>
> backend default-backend
>
>server adm-testing-platform 10.177.222.82:443 check
>
>
>
>

I think this would work if you were using non-ssl port 80 for the server
backend, but since you're using ssl port 443, you need to enable ssl
options for that server line. Specifically the 'ssl' and 'ca-file' options
detailed at
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2
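For example, something like this (a sketch; adjust the ca-file path to
wherever your CA bundle actually lives):

backend default-backend
  server adm-testing-platform 10.177.222.82:443 ssl verify required ca-file /etc/ssl/certs/ca-bundle.crt check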

-Bryan


Re: Does Haproxy supports backend on https for reverse proxy

2015-10-05 Thread Bryan Talbot
On Mon, Oct 5, 2015 at 12:49 PM, Kuchekar, Yogita (Yogita) <
ykuche...@avaya.com> wrote:

> Hi ,
>
>
>
> I have installed Haproxy servere 5.1 on linux RHEL 6.1
>


Assuming you mean haproxy version 1.5, then yes both scenarios are
possible. I'm sure you can find many blog posts and sample configurations
on this mailing list to get you started.

-Bryan




>
>
> I have configured Haproxy servere on linux at 80 port and trying to do
> reverse proxy with backend on https protocol (443). Is it possible in
> haparoxy ?
>
> Client -->http traffic -->Haproxy server-->https traffic-->backend server
>
>
>
>
>
> If I have Haproxy kistening to https traffic (have certificate support)and
> backend  server with https traffic, is  this reverse proxy supported in
> Haproxy ?
>
>Client -->https traffic -->Haproxy server-->https
> traffic-->backend server
>
>
>
>
>
> Is there any other solution for this scenario?
>
> Really appreciate your help here.
>
>
>
>
>
> Thanks,
>
> Yogita
>
>
>
>
>


Re: HAProxy Slows At 1500+ connections Really Need some help to figure out why

2015-10-02 Thread Bryan Talbot
On Fri, Oct 2, 2015 at 1:48 PM, Daren Sefcik 
wrote:

> I Hope this is the right place to ask for help..if not please flame me and
> send me on my way
>
> So I had haproxy 1.5 installed (as a front end for a cluster of squid
> proxies) on a low end Dell server with pfsense(PFS) 2.1.5 and was
> experiencing slow down with 1500+ connections so I  built up a new PFS
> 2.2.4 machine on a brand new Dell R630  with 64gb RAM, Dual CPU,  bad ass
> raid disks etcloaded and configured haproxy with several squid backends
> and some ICAP  backends. Things work great until I hit about 1500 or more
> connections and then everything just slows to a crawl. Restarting haproxy
> helps momentarily but it will slow back down again very quickly. If I
> offload clients to the point of only 300-400 connections it will become
> responsive again. In the haproxy stats page it will show 97% idle or
> similar and the output from top will show maybe 5% cpu for haproxy. If I
> configure the browser client to use one of the squid backends directly it
> works fast but as soon as I put the broswer proxy config back to use the
> haproxy frontend IP it will slow down.
>


The problem seems consistent with your connection tracking tables filling
up. You don't say whether the 1500 concurrent connections create a lot of
new connections or whether they are 1500 connections that last for a long
time. If your connection lifetime is short then the connection tracking
tables probably need to be tuned.

I don't recall what the conntrack controls are for FreeBSD but it's
probably something in the pfctl utility, right?
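If it is pf, something like this might help confirm or rule it out (a
sketch; the limit value is purely illustrative, and I believe pfSense also
exposes it as the "Firewall Maximum States" setting):

# show state-table usage and the configured limits
pfctl -si
pfctl -sm

# raise the state limit in pf.conf if the table is filling up
set limit states 500000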

-Bryan


Re: Implementing HAProxy First Time: Conditional backend issue

2015-09-30 Thread Bryan Talbot
xy subdomain_p1-backend
> stopped (FE: 0 conns, BE: 1 conns).
> Sep 30 12:47:29 localhost haproxy[5690]: Proxy HAProxy-stats stopped (FE:
> 26 conns, BE: 12 conns).
>
>

Why are proxy frontends repeated twice in the above log but backends (and
other entries) just once? Did you produce this output with a config
different from what you provided above too?





> Info.log
> Conditional backend
> Sep 30 12:47:29 localhost haproxy[1691]: 192.168.100.153 - - "GET
> /CoscendPad HTTP/1.1" 404 262 "" "" 53639 804 "webapps-frontend"
> "subdomain_p1-backend" "Product1" 5 0 0 3 8  4 4 0 0 0 0 0 "" ""
>

The server "Product1" was connected to (in 0 ms) and responded (in 3 ms)
with a 404 status code of 262 bytes -- if my assumptions about your
modified CLF log format is correct.



>
> Default backend:
> Sep 30 12:47:29 localhost haproxy[1691]: 192.168.100.153 - - "GET
> /favicon.ico HTTP/1.1" 200 4603 "" "" 53639 813 "webapps-frontend"
> "webapps-backend" "Product1" 30 0 0 3 34  1 1 0 0 0 0 0 "" ""
>
>




>
> Sincerely,
> Susheel Jalali
> Coscend Communications Solutions
> susheel.jal...@coscend.com
>
> Web site: www.Coscend.com
> --
> CONFIDENTIALITY NOTICE: See 'Confidentiality Notice Regarding E-mail
> Messages from Coscend Communications Solutions' posted at:
> http://www.Coscend.com/Terms_and_Conditions.html
>
>
> On 10/01/15 01:21, Bryan Talbot wrote:
>
> On Wed, Sep 30, 2015 at 12:37 PM, Susheel Jalali <
> susheel.jal...@coscend.com> wrote:
>
>> Dear HAProxy Developers community:
>>
>> After incorporating inputs from some of you, we tested with an updated
>> haproxy.cfg (see below).  Product-1 is still not accessible
>>
>
> Without the complete haproxy config and some logs, it was impossible for
> anyone to understand what issues you might be having. The question was just
> too vague.
>
>>
>> Info.log
>> Conditional backend
>> Sep 30 09:12:44 localhost haproxy[1691]: 192.168.100.153 - - "GET
>> /CoscendPad HTTP/1.1" 404 262 "" "" 53639 804 "webapps-frontend"
>> "subdomain_p1-backend" "Product1" 5 0 0 3 8  4 4 0 0 0 0 0 "" ""
>>
>>
> From the logs shown it looks like your "conditional" backend is returning
> a 404, but since the log format is not standard; without the haproxy config
> we can only guess at what the log contents mean.
>
> -Bryan
>
>
> No virus found in this message.
> Checked by AVG - www.avg.com
> Version: 2015.0.6140 / Virus Database: 4419/10732 - Release Date: 09/30/15
>
>
>
>
>


Re: Implementing HAProxy First Time: Conditional backend issue

2015-09-30 Thread Bryan Talbot
On Wed, Sep 30, 2015 at 12:37 PM, Susheel Jalali  wrote:

> Dear HAProxy Developers community:
>
> After incorporating inputs from some of you, we tested with an updated
> haproxy.cfg (see below).  Product-1 is still not accessible
>

Without the complete haproxy config and some logs, it was impossible for
anyone to understand what issues you might be having. The question was just
too vague.



> Info.log
> Conditional backend
> Sep 30 09:12:44 localhost haproxy[1691]: 192.168.100.153 - - "GET
> /CoscendPad HTTP/1.1" 404 262 "" "" 53639 804 "webapps-frontend"
> "subdomain_p1-backend" "Product1" 5 0 0 3 8  4 4 0 0 0 0 0 "" ""
>
>
From the logs shown it looks like your "conditional" backend is returning a
404, but since the log format is not standard, without the haproxy config
we can only guess at what the log contents mean.

-Bryan


Re: How to access Web products by their names in access url

2015-09-23 Thread Bryan Talbot
On Tue, Sep 22, 2015 at 12:06 AM, Susheel Jalali <
susheel.jal...@coscendcommunications.com> wrote:

> Access URL  
> http://CoscendCommunications.com/Product1
>
>
>
> Thank you.
>
> -
>
> frontend apps-frontend
>
> bind  *:80
>
> log   global
>
> option    forwardfor
>
> option    httplog clf
>
> reqadd X-Forwarded-Proto:\ http
>

Probably better to use the nicer "http-request set-header" for new
configurations.
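For example, the reqadd line above could become:

http-request set-header X-Forwarded-Proto http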



>
>
> acl host_subdomainP1 url_sub -i http://CoscendCommunications.com/Product1
>

You probably want to use the 'path' matcher, since you only seem to care
about the /Product1 portion of the URL.
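Something like this, for example (a sketch, assuming the interesting part
really is the leading /Product1 path):

acl host_subdomainP1 path_beg -i /Product1
use_backend subdomainP1 if host_subdomainP1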




> use_backend subdomainP1 if host_subdomainP1
>
> default_backend apps-backend
>
>
>
> backend apps-backend
>
> log   global   # use global settings
>
> balance   roundrobin
>
> option    httpclose
>
> option    forwardfor
>
> http-request set-header X-Forwarded-Port %[dst_port]
>
> option    httpchk HEAD / HTTP/1.1\r\nHost:localhost
>

This backend has no servers so will always return an error.
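It needs at least one server line to do anything useful, e.g. (address is a
placeholder):

backend apps-backend
  ...
  server app1 10.0.0.10:8080 check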

-Bryan


Re: Send the same traffic to multiple backends

2015-09-10 Thread Bryan Talbot
Unless there is some LUA magic that can support this in the latest 1.6
development builds: no, that's not possible.

-Bryan


On Thu, Sep 10, 2015 at 8:18 PM, Unknown User 
wrote:

> Is there a way to send the same traffic to multiple backends (sort of like
> a tee), say a test and a prod backend? If so, can the return traffic from
> one of the backend's be ignored?
>
>
>


Re: Can HAProxy loadbalance multiple requests send through single TCP connection

2015-09-02 Thread Bryan Talbot
TCP really has no notion of "messages", it's all just bytes. So no, this
would not be possible with plain TCP.

-Bryan


On Wed, Sep 2, 2015 at 12:05 PM, Prabu rajan  wrote:

> Hi Team,
>
> Our client to HAProxy establishes single TCP connection and continues to
> send messages. We would like to know, is there a way to load balance those
> messages across the services sitting behind HAProxy. Please advise.
>
> Regards,
> Prabu
>


Re: getting transparent proxy to work.

2015-08-20 Thread Bryan Talbot
On Thu, Aug 20, 2015 at 4:05 PM, Rich Vigorito  wrote:

> Reading this:
> http://blog.haproxy.com/2012/06/05/preserve-source-ip-address-despite-reverse-proxies/​
> about PROXY protocol, what needs to happen for PROXY protocol to be
> recognized by the web server?
>
The webserver needs to support it. There is a (probably incomplete) list
here: http://blog.haproxy.com/haproxy/proxy-protocol/



> Im assuming the haproxy server already does?
>
>

Yes, of course.

-Bryan


Re: getting transparent proxy to work.

2015-08-20 Thread Bryan Talbot
On Wed, Aug 19, 2015 at 3:26 PM, Rich Vigorito  wrote:

> I should also clarify the goal of using this approach was to do TLS from
> router to haproxy and onto webservers but to preserve the client IP. The
> other thought I had was to SSL terminate on haproxy box and initiate new
> TLS handshake from haproxy to webservers. Though Im assuming transparent
> proxy will mean less work for haproxy server. Is this second approach even
> possible? to accomplish the goal of TLS all the way through the call all
> ive seen is the transparent proxy solution which Ive been struggling with.
>

Transparent proxying might be one way to get the client IP onto the backend
servers but there are others too as you've mentioned and those might be
much easier.

Yes, you can terminate SSL on haproxy and make a new SSL connection to the
backend. With that, you'd probably need to add the X-Forwarded-For http
header (use 'mode http') and configure your webserver to use XFF too.

If your webserver or app can support the haproxy "PROXY" protocol, that
might also be an option for you and allows you to pass-through the SSL (not
terminated at haproxy) to the backend if you need that.
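A minimal sketch of the terminate-and-re-encrypt setup (names, paths and
addresses are placeholders; 'verify none' skips validation of the backend
certificate and should be replaced with 'verify required' plus a ca-file
for production):

frontend fe_https
  bind :443 ssl crt /etc/haproxy/site.pem
  mode http
  option forwardfor
  default_backend be_https

backend be_https
  mode http
  server web1 192.168.1.10:443 ssl verify none check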

-Bryan


Re: HAProxy Logging

2015-07-20 Thread Bryan Talbot
2015-07-20 2:47 GMT-07:00 :

>  Dear Sir and Madam,
>
> I am interested in your application HA Proxy.
> But first I have some question.
>
> Is it possible that the HA Proxy writes log files in the home directory
> with the same ownership like the HA Proxy?
>

haproxy logs using syslog, so you'll need to configure your syslog to log
to any destination it supports.
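With rsyslog, for example, something like this (a sketch) enables the local
UDP input and sends haproxy's messages to their own file:

# /etc/rsyslog.d/haproxy.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local0.* /var/log/haproxy.log

and then point haproxy at it with 'log 127.0.0.1 local0' in the global
section.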



> We could imagine that the HA Proxy monitors the traffic.
>

Don't know what you're suggesting here.



>
> And is it possible that we can reset the statistic at the statistic report
> site manually without restarting the application?
>


Does this work for you?
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#clear%20counters
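With a stats socket configured, that would look something like this (a
sketch; use whatever socket path you set in your own config):

global
  stats socket /var/run/haproxy.sock level admin

$ echo "clear counters" | socat stdio /var/run/haproxy.sock

"clear counters all" also resets the cumulative counters, and requires the
admin level on the socket.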

-Bryan


Re: SSL handshake failure when setting up no-tlsv10

2015-05-20 Thread Bryan Talbot
On Wed, May 20, 2015 at 10:40 AM, Lukas Tribus  wrote:

> > yes i figured since it is a ubuntu 10.04 machine it has old version of
> > openssl
> >
> > so i looked around for upgrading the openssl and found this link
> > https://sandilands.info/sgordon/upgrade-latest-version-openssl-on-ubuntu
> >
> > so can i just upgrade to openssl 1.0.1 and add it to the correct path
> > and just restart the haproxy service?
>
> Please don't.
>
> As long as you don't *exactly* know what you are doing, ONLY use your
> OS internal packaging system and don't follow tips you find on google.
> This particular blog post for example makes you install a ancient version
> of openssl (just look at the date of the post), with numerous issues and
> bugs. Also you would very likely mess up your whole system.
>
> Ubuntu 10.04 is EOL, you don't use an EOL'ed OS in production, period.
>
> Upgrade to the next Ubuntu LTS edition by following the howto of your
> OS vendor:
> https://help.ubuntu.com/community/PreciseUpgrades
>
>
>

I agree with Lukas. Unless you're an expert at building and installing
customized system software, I would not recommend doing anything like that
on a server you want to be stable.

Upgrading your OS is your best option for sure.

-Bryan


Re: SSL handshake failure when setting up no-tlsv10

2015-05-20 Thread Bryan Talbot
On Wed, May 20, 2015 at 10:10 AM, Amol  wrote:

> here is the output from the commands you requested
>
> Built with OpenSSL version : OpenSSL 0.9.8k 25 Mar 2009
> Running on OpenSSL version : OpenSSL 0.9.8k 25 Mar 2009
>
>

> :~$ openssl version
> OpenSSL 0.9.8k 25 Mar 2009
>
>
>

The openssl version is just too old to support TLS 1.2, as you can see in
the supported cipher list below. Your best bet would be to upgrade to a
newer release of your OS. Another option would be to compile a newer
version of openssl and build your own haproxy statically linked against it.

-Bryan




> :~$ openssl ciphers -v
> DHE-RSA-AES256-SHA  SSLv3 Kx=DH   Au=RSA  Enc=AES(256)  Mac=SHA1
> DHE-DSS-AES256-SHA  SSLv3 Kx=DH   Au=DSS  Enc=AES(256)  Mac=SHA1
> AES256-SHA  SSLv3 Kx=RSA  Au=RSA  Enc=AES(256)  Mac=SHA1
> EDH-RSA-DES-CBC3-SHASSLv3 Kx=DH   Au=RSA  Enc=3DES(168) Mac=SHA1
> EDH-DSS-DES-CBC3-SHASSLv3 Kx=DH   Au=DSS  Enc=3DES(168) Mac=SHA1
> DES-CBC3-SHASSLv3 Kx=RSA  Au=RSA  Enc=3DES(168) Mac=SHA1
> DES-CBC3-MD5SSLv2 Kx=RSA  Au=RSA  Enc=3DES(168) Mac=MD5
> DHE-RSA-AES128-SHA  SSLv3 Kx=DH   Au=RSA  Enc=AES(128)  Mac=SHA1
> DHE-DSS-AES128-SHA  SSLv3 Kx=DH   Au=DSS  Enc=AES(128)  Mac=SHA1
> AES128-SHA  SSLv3 Kx=RSA  Au=RSA  Enc=AES(128)  Mac=SHA1
> RC2-CBC-MD5 SSLv2 Kx=RSA  Au=RSA  Enc=RC2(128)  Mac=MD5
> RC4-SHA SSLv3 Kx=RSA  Au=RSA  Enc=RC4(128)  Mac=SHA1
> RC4-MD5 SSLv3 Kx=RSA  Au=RSA  Enc=RC4(128)  Mac=MD5
> RC4-MD5 SSLv2 Kx=RSA  Au=RSA  Enc=RC4(128)  Mac=MD5
> EDH-RSA-DES-CBC-SHA SSLv3 Kx=DH   Au=RSA  Enc=DES(56)   Mac=SHA1
> EDH-DSS-DES-CBC-SHA SSLv3 Kx=DH   Au=DSS  Enc=DES(56)   Mac=SHA1
> DES-CBC-SHA SSLv3 Kx=RSA  Au=RSA  Enc=DES(56)   Mac=SHA1
> DES-CBC-MD5 SSLv2 Kx=RSA  Au=RSA  Enc=DES(56)   Mac=MD5
> EXP-EDH-RSA-DES-CBC-SHA SSLv3 Kx=DH(512)  Au=RSA  Enc=DES(40)   Mac=SHA1
> export
> EXP-EDH-DSS-DES-CBC-SHA SSLv3 Kx=DH(512)  Au=DSS  Enc=DES(40)   Mac=SHA1
> export
> EXP-DES-CBC-SHA SSLv3 Kx=RSA(512) Au=RSA  Enc=DES(40)   Mac=SHA1
> export
> EXP-RC2-CBC-MD5 SSLv3 Kx=RSA(512) Au=RSA  Enc=RC2(40)   Mac=MD5
> export
> EXP-RC2-CBC-MD5 SSLv2 Kx=RSA(512) Au=RSA  Enc=RC2(40)   Mac=MD5
> export
> EXP-RC4-MD5 SSLv3 Kx=RSA(512) Au=RSA  Enc=RC4(40)   Mac=MD5
> export
> EXP-RC4-MD5     SSLv2 Kx=RSA(512) Au=RSA  Enc=RC4(40)   Mac=MD5
> export
> :~$
>
>
>   --
>  *From:* Bryan Talbot 
> *To:* Amol ; HAproxy Mailing Lists <
> haproxy@formilux.org>
> *Sent:* Wednesday, May 20, 2015 1:04 PM
>
> *Subject:* Re: SSL handshake failure when setting up no-tlsv10
>
> On Wed, May 20, 2015 at 9:39 AM, Amol  wrote:
>
> Thanks you for responding and i wanted to share some more from my findings
>
> when i set
> *ssl-default-bind-options no-sslv3 force-tlsv12*
>
> $ sudo vi /etc/haproxy/haproxy.cfg
> :~$ sudo /etc/init.d/haproxy restart
>  * Restarting haproxy
> haproxy
> [ALERT] 139/122930 (8602) : parsing [/etc/haproxy/haproxy.cfg:22] :
> 'ssl-default-bind-options' 'force-tlsv12': library does not support
> protocol TLSv1.2
> [ALERT] 139/122930 (8602) : Error(s) found in configuration file :
> /etc/haproxy/haproxy.cfg
> [ALERT] 139/122930 (8602) : Fatal errors found in configuration.
>
>
>
> Yes, it sounds like your openssl lib must be pretty old or is oddly
> configured. What does "haproxy -vv" and "openssl version" report? You can
> see a list of supported ciphers and protocols using "openssl ciphers -v" as
> well.
>
>
>
> -Bryan
>
>
>
>


Re: SSL handshake failure when setting up no-tlsv10

2015-05-20 Thread Bryan Talbot
On Wed, May 20, 2015 at 9:39 AM, Amol  wrote:

> Thanks you for responding and i wanted to share some more from my findings
>
> when i set
> *ssl-default-bind-options no-sslv3 force-tlsv12*
>
> $ sudo vi /etc/haproxy/haproxy.cfg
> :~$ sudo /etc/init.d/haproxy restart
>  * Restarting haproxy
> haproxy
> [ALERT] 139/122930 (8602) : parsing [/etc/haproxy/haproxy.cfg:22] :
> 'ssl-default-bind-options' 'force-tlsv12': library does not support
> protocol TLSv1.2
> [ALERT] 139/122930 (8602) : Error(s) found in configuration file :
> /etc/haproxy/haproxy.cfg
> [ALERT] 139/122930 (8602) : Fatal errors found in configuration.
>


Yes, it sounds like your openssl lib must be pretty old or is oddly
configured. What does "haproxy -vv" and "openssl version" report? You can
see a list of supported ciphers and protocols using "openssl ciphers -v" as
well.

-Bryan


Re: SSL handshake failure when setting up no-tlsv10

2015-05-11 Thread Bryan Talbot
On Mon, May 11, 2015 at 1:46 PM, Amol  wrote:

> Hi
> I am using Haproxy (1.5.9) and trying to resolve a PCI compliance issue
> with TLS v1.0, but when i set the following options in global section of
> the haproxy.cfg i am getting an error in my haproxy.log and the webpage
> does not showup.
>
> ssl-default-bind-options no-sslv3 *no-tlsv10*
>
> *error in haproxy.log*
>
> May 11 16:37:39 load-lb haproxy[2680]: xx.xx.xx.xx:56787
> [11/May/2015:16:37:39.626] www-https/1: SSL handshake failure
>
>
> here is the snippet of the actual SSL settings
>
> ssl-default-bind-ciphers
> EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:
> EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
> ssl-default-bind-options no-sslv3 *no-tlsv10*
> tune.ssl.default-dh-param 4096
>
>
> Please let me know if i am missing anything?
>
>
>

Works for me.

$ ./haproxy -vv
HA-Proxy version 1.5.12-2 2015/05/11
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = generic
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=0

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): no
Built with zlib version : 1.2.5
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.2a 19 Mar 2015
Running on OpenSSL version : OpenSSL 1.0.2a 19 Mar 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built without PCRE support (using libc's regex instead)

Available polling systems :
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 2 (2 usable), will use poll.


$ cat haproxy.cfg
global
  tune.ssl.default-dh-param 4096
  ssl-default-bind-options no-sslv3 no-tlsv10
  ssl-default-bind-ciphers
EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4

defaults
  timeout client 5s
  timeout server 5s
  mode http

listen foo
  bind :4433 ssl crt ./test.pem


$ ./haproxy -f ./haproxy.cfg -c
Configuration file is valid


$ openssl version
OpenSSL 1.0.2a 19 Mar 2015


$ echo | openssl s_client -connect 127.0.0.1:4433
...
SSL-Session:
Protocol  : TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
...


Maybe it's an issue with your client?

-Bryan


Re: Server health check being called from each pool

2015-05-01 Thread Bryan Talbot
You're looking for 'track':

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#track
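A sketch of how that could collapse the duplicated checks -- only the first
section runs real checks, the others just track its results (proxy names
simplified here to drop the ":port" suffix):

listen QA-Single-DB1
  bind 127.0.0.1:23321
  option httpchk
  default-server port 9200 inter 5000 fastinter 2000 rise 2 fall 2
  server db1 db1:3306 check
  server db2 db2:3306 check backup
  server db3 db3:3306 check backup

listen QA-Single-DB2
  bind 127.0.0.1:23322
  server db2 db2:3306 track QA-Single-DB1/db2
  server db3 db3:3306 track QA-Single-DB1/db3 backup
  server db1 db1:3306 track QA-Single-DB1/db1 backup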

-Bryan


On Fri, May 1, 2015 at 5:34 PM, Michael Bushey  wrote:

> I have a master-master-master MySQL DB cluster, but run into deadlocks
> if writes from one web node are across multiple DB servers, so I have
> this:
>
>listen QA-Single-DB1:23321
> bind 127.0.0.1:23321
> option httpchk
> default-server port 9200 inter 5000 fastinter 2000 rise 2 fall 2
> server db1 db1:3306 check
> server db2 db2:3306 check backup
> server db3 db3:3306 check backup
>
>   listen QA-Single-DB2:23322
> bind 127.0.0.1:23322
> option httpchk
> default-server port 9200 inter 5000 fastinter 2000 rise 2 fall 2
> server db2 db2:3306 check
> server db3 db3:3306 check backup
> server db1 db1:3306 check backup
>
>   listen QA-Single-DB3:23323
> bind 127.0.0.1:23323
> option httpchk
> default-server port 9200 inter 5000 fastinter 2000 rise 2 fall 2
> server db3 db3:3306 check
> server db1 db1:3306 check backup
> server db2 db2:3306 check backup
>
>
> This works, but each listen section is doing a health check. Is there
> any way to specify the health check as a global default? Not having
> "backup" and using "balance source" would almost work, but I have
> multiple sites on one server. I would like the sites spread out over
> the the three DB servers but with fail-over.
>
> Thanks for any help/insight/comments!
> Michael
>
>


Re: "stats uri" doesn't inherit from defaults sections

2015-04-09 Thread Bryan Talbot
On Thu, Apr 9, 2015 at 7:03 AM, Jonathan Matthews 
wrote:

> Hi all -
>
> A bit of lunchtime playing around today has exposed the fact that a
> "stats uri" in a defaults section has no effect on backends to which
> the defaults section /should/ apply. Stats-serving backends only obey
> the compile-time default ("/haproxy?stats") in my tests, until an
> explicit "stats uri" is placed inside the backend definition.
>
>
I think that it does work in the defaults section but only if stats is also
enabled there.


defaults
  timeout client 5s
  timeout server 5s
  stats enable
  stats uri /foo
  mode http

listen foo
  bind :8000


A setup like that works for me using 1.5.11. Putting just the "stats uri"
in defaults but then putting "stats enable" only in the backend does not
work for any version of 1.4 or 1.5.

-Bryan




> The docs state that "stats uri" is valid in defaults sections, so let
> me ask: is this a documentation bug (which I'll happily submit a patch
> for!) or something else? To my mind, it absolutely makes sense to have
> this statement as settable in a defaults section.
>
> I've only tested this on the latest Debian backports version, 1.5.8,
> but I don't see anything related in the changelog since then which
> makes me think it's been fixed. The docs for 1.5.11 currently state
> it's a defaults-settable config statement.
>
> Cheers,
> Jonathan
> --
> Jonathan Matthews
> Oxford, London, UK
> http://www.jpluscplusm.com/contact.html
>
>


Re: Print http log to stdout?

2015-04-02 Thread Bryan Talbot
On Thu, Apr 2, 2015 at 1:28 PM, Douglas Borg 
wrote:

> Willy Tarreau  1wt.eu> writes:
>
> >
> > On Fri, Dec 13, 2013 at 03:43:51AM +0800, Igor wrote:
> > > In verbose mode, is it possible to print http log to stdout?
> >
> > No it's not possible. Do you think it could be useful ? If so, I don't
> > think it should be too difficult to do, especially considering that at
> > Regards,
> > Willy
>
>
> Hi Willy,
>
> Sorry for replying to such an old message. Hopefully this is still
> monitored.
>
> I do think it would be useful for HAProxy to have configuration options to
> print logs directly to stdout/stderr. With container technologies like
> Docker,
> it is nice to keep to the one process per container rule and not have to
> run
> rsyslog alongside HAProxy just to ship logs out to stdout/stderr. This also
> fits in nicely with the strategy outlined in http://12factor.net/logs.
>
> Check out https://github.com/dockerfile/haproxy/issues/3.
>
> I think having an easy way to configure haproxy output logs directly to
> stdout
> and stderr would be much appreciated by anyone trying to fit haproxy into
> thier
> stacks running on containers.
>


It would be nice to allow haproxy to log to stdout; however, it is not hard
to log from haproxy without running another logger in the container. Just
mount the /dev/log device from the host into the container and log to it
from haproxy like normal. If on a systemd host, it'll all go to
systemd-journal.
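For example (a sketch):

$ docker run -v /dev/log:/dev/log ... your-haproxy-image

with haproxy configured to log to that socket:

global
  log /dev/log local0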

-Bryan


Re: cannot bind to socket error

2015-04-02 Thread Bryan Talbot
You need to set net.ipv4.ip_nonlocal_bind=1 to allow processes to bind to
an IP address not currently on the host.
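For example:

$ sysctl -w net.ipv4.ip_nonlocal_bind=1

# and to persist it across reboots, add to /etc/sysctl.conf:
net.ipv4.ip_nonlocal_bind = 1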

-Bryan


On Thu, Apr 2, 2015 at 2:19 PM, Tim Dunphy  wrote:

> Hey folks,
>
>  I'm setting up HAProxy and keepalived on 2 nodes today. And I'm able to
> start HAProxy on the first node, but not on the 2nd node.
>
> I've tested failover of the VIP for keepalived and it stays up if either
> node is running keepalived.
>
> I have the same haproxy config on both nodes. This is the config I have
> setup:
>
> global
> log 127.0.0.1 local0 notice
> user haproxy
> group haproxy
>
> defaults
> log global
> retries 2
> timeout connect 3000
> timeout server 5000
> timeout client 5000
>
> listen web-cluster
> bind 3.3.87.23:80
> mode http
> balance roundrobin
> server web-1 3.3.86.246:8080 check
> server web-2 3.3.86.247:8080 check
>
> listen 3.3.87.23:80
> bind 3.3.87.23:80
> mode http
> stats enable
> stats uri /
> stats realm Strictly\ Private
> stats auth admin:wouldntYouLikeToKnow
>
>
> And I notice that on the first node if I do a netstat I can see the
> keepalived vip listening on the port I specify.
>
> [root@aoaapld00130la haproxy]# netstat -tulpn | grep -i listen  | grep
> haproxy
> tcp0  0 3.3.87.23:800.0.0.0:*
>   LISTEN
>  57332/haproxy
>
> And on the first node haproxy runs without complaint:
>
> [root@aoaapld00130la haproxy]# service haproxy status
> haproxy (pid  57332) is running...
>
> But on the second node, I'm getting an error saying that HAProxy cannot
> bind to socket.
>
> [root@aoaapld00130lb haproxy]# service haproxy start
> Starting haproxy: [ALERT] 091/171840 (22084) : Starting proxy web-cluster:
> cannot bind socket [3.3.87.23:80]
> [ALERT] 091/171840 (22084) : Starting proxy 3.3.87.23:80: cannot bind
> socket [3.3.87.23:80]
>[FAILED]
>
> Can someone please help me understsand why haproxy is failing on the
> second node?
>
> Thanks!
> Tim
>
>
>
>
>
> --
> GPG me!!
>
> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>
>


Re: [PATCH 2/2] DOC: Document the new tls-ticket-keys bind keyword

2015-02-25 Thread Bryan Talbot
On Wed, Feb 25, 2015 at 12:09 PM, Lukas Tribus  wrote:

> > If a site has N haproxy hosts, how should new ticket-keys be
> > distributed (and processes reloaded) and avoid the race condition of
> > some hosts using the new keys before those keys are on all hosts?
>
> You distribute the new key to all instances for decryption, but use
> the penultimate key for encryption instead of the ultimate key:
>
> https://blog.cloudflare.com/tls-session-resumption-full-speed-and-secure/
>
>

That is a nice solution.

I didn't understand that was the behavior from reading the documentation
patch from the OP. This makes it sound like the last key is used for
encryption and not the next-to-last (penultimate).


+tls-ticket-keys <keyfile>
+  Sets the TLS ticket keys file to load the keys from. The keys need to be 48
+  bytes long, encoded with base64 (ex. openssl rand -base64 48). Number of keys
+  is specified by the TLS_TICKETS_NO build option (default 3) and at least as
+  many keys need to be present in the file. Last TLS_TICKETS_NO keys will be
+  used for decryption and only the last one for encryption. This enables easy
+  key rotation by just appending new key to the file and reloading the process.


-Bryan


Re: [PATCH 2/2] DOC: Document the new tls-ticket-keys bind keyword

2015-02-25 Thread Bryan Talbot
If a site has N haproxy hosts, how should new ticket-keys be distributed
(and processes reloaded) and avoid the race condition of some hosts using
the new keys before those keys are on all hosts?

It seems that not all hosts would be updated at exactly the same time, and
that until all hosts are updated, any requests with new ticket-keys that
are routed to not-yet-updated hosts will force another full handshake.

Seems like a "use after time" would be needed so that all hosts could start
using the new ticket-keys only after some time when they all have all of
the keys needed.

-Bryan



On Wed, Feb 25, 2015 at 10:49 AM, Pavlos Parissis  wrote:

> On 25/02/2015 12:10 μμ, Lukas Tribus wrote:
> >> -- Use stats socket to update the list without reload
> >>
> >> -- Update Session state at disconnection log schema to include
> >> something useful in case server receives a ticket which was encrypted
> with key
> >> that is not anymore in the list. Debugging SSL problems is a nightmare
> >> by definition and having a lot of debug information is very much
> appreciated
> >> by sysadmins
> >
> > If the ticket is not in the list, it will simply fall back to a full
> handshake, not
> > abort the handshake, so there is no error in that case. Generic SSL/TLS
> resumption
> > counter should correctly account for those tings already.
> >
> >
>
> Error was the wrong word here as RFC 5077 clearly states it as a
> situation from which both ends can recovery without causing an error.
> But, you want to avoid the fall-back mechanism as much as possible as it
> defeats the purpose of TLS session resumption, which is a faster user
> experience over HTTPS. Thus, you need have a clear way to identify the
> volume of the traffic which is effected by this.
> I mentioned about session state at disconnection log schema as way to
> pass clear information to operator that your key rotation is degrading
> user experience
>
> I guess the generic counter you mentioned could do the trick here.
>
> >> -- Possible use peer logic to sync the list to others, tricky but it is
> >> required when you have several LBs, alternatively users can deploy the
> logic
> >> that twitter has used
> >
> > That doesn't make much sense for externally provided tls keys, you
> > may as well use the external interface on all instances.
> >
>
> Correct. I only mentioned as an easy way for users that don't have the
> external interface to facilitate this.
>
> > This would make more sense for SSL session ids, they are currently shared
> > between processes, but not between different haproxy instances (stud for
> > example can do this iirc).
> >
> >
> >
> > Lukas
> >
> >
> >
>
> Thanks getting back to me,
>
> Once again thanks to the people who work on this.
>
> I guess someone has to inform few bloggers about this in order to update
> their blog spot where they mention that you can't implement a proper TLS
> session resumption with HAProxy:-)
>
> Cheers,
> Pavlos
>
>
>
>
>


Re: Timeouts + Active sessions

2015-02-24 Thread Bryan Talbot
On Tue, Feb 24, 2015 at 1:39 AM, Francois Lagier 
wrote:

> Hello everyone,
>
> I am currently trying to tune my HaProxy architecture (65k queries per
> seconds, low latency requirement (<50ms), with 12 servers using multi-core
> (4 cores per server)) and I have a couple of questions about the
> http-keep-alive timeout and the behavior when we are actually timing out.
> In my situation, it looks like the client (using KAL) is not sending me the
> data and it's triggering an Eof Exception on my backend once HaProxy is
> timing out and that's why I would like to reset the connection with the
> client.
>
> My question to help me understand what's happening:
>
>- What is the default "timeout http-keep-alive" value when it's not
>specify in the configuration?
>
>
Docs say it defaults to "timeout http-request" which itself defaults to
"timeout client". So, from your gist, your timeout http-keep-alive looks to
be 3 seconds.


>
>-
>- In case of a timeout (at the server level in this case) with
>keep-alive configured (option http-server-close), is the session going to
>stay active or is going to get closed after returning the 5xx? What will be
>the best way for me to close it after a timeout?
>
>

In my experience the tcp connection is closed when the response is a 500.
I'm not sure if that's documented though.



>
>-
>
> Here is my current configuration for timeouts and options:
> https://gist.github.com/francoislagier/3f666253ba61f7b0784c
>
> Thank you very much and have a great day.
>
> Best,
> Francois
>
>
>
>


Re: haproxy-systemd-wrapper with -sf causes it to exit and print usage info

2015-01-20 Thread Bryan Talbot
I think that the recommended way to restart when using the wrapper is to
send a HUP or USR2 signal to the wrapper, which will take care of the soft
restart of haproxy itself.

I believe that a HUP will just cause haproxy to be restarted while the USR2
will reload both haproxy and the wrapper binary itself.

The sample unit file in contrib/systemd/haproxy.service.in is:

[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p
/run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target


On Tue, Jan 20, 2015 at 1:38 AM, Yaron Rosenbaum 
wrote:

> Hi
>
> Adding the -sf flag to haproxy-systemd-wrapper causes it to exit and print
> usage info.
> (-sf   does the same).
> Haproxy 1.5.8, debian wheezy.
>
> Is this a known issue? am I using it incorrectly?
> I’m assuming a reload would be issuing the same command (with pids after
> -sf)
>
> Thanks.
>
> root# haproxy-systemd-wrapper -f /opt/multicloud/discovery/haproxy.cfg -D
> -p /var/run/haproxy.pid  -sf
> <7>haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f
> /opt/multicloud/discovery/haproxy.cfg -D -p /var/run/haproxy.pid -sf -Ds
> HA-Proxy version 1.5.8 2014/10/31
> Copyright 2000-2014 Willy Tarreau 
>
> Usage : haproxy [-f ]* [ -vdVD ] [ -n  ] [ -N 
> ]
> [ -p  ] [ -m  ] [ -C  ]
> -v displays version ; -vv shows known build options.
> -d enters debug mode ; -db only disables background mode.
> -dM[] poisons memory with  (defaults to 0x50)
> -V enters verbose mode (disables quiet mode)
> -D goes daemon ; -C changes to  before loading files.
> -q quiet mode : don't display messages
> -c check mode : only check config files and exit
> -n sets the maximum total # of connections (2000)
> -m limits the usable amount of memory (in MB)
> -N sets the default, per-proxy maximum # of connections (2000)
> -L set local peer name (default to hostname)
> -p writes pids of all children to this file
> -de disables epoll() usage even when available
> -dp disables poll() usage even when available
> -dS disables splice usage (broken on old kernels)
> -dV disables SSL verify on servers side
> -sf/-st [pid ]* finishes/terminates old pids. Must be last
> arguments.
>
> <5>haproxy-systemd-wrapper: exit, haproxy RC=256
>
>
> (Y)
>
>


Re: What is the hardware requirement for haproxy?

2015-01-20 Thread Bryan Talbot
The hardware requirements for haproxy itself are very modest and nearly
anything will work. The requirements really depend on how much and what
sort of traffic you need to handle. Network card and CPU speed are the most
important hardware factors for performance though.

-Bryan


On Mon, Jan 19, 2015 at 7:45 PM, 金富清  wrote:

> Hi Sir,
>
> i want to install haproxy 1.4 on linux system(64 bit),but i do not
> know the hardware requirement for haproxy. For example ,the cpu
> requirement,memory requirement. Could you kindly give me suggections ?
>
> thanks a lot.
>
>
>


Re: Significant number of 400 errors..

2014-11-26 Thread Bryan Talbot
There are clearly a lot of junk bytes in those URIs which are not allowed
by the HTTP specs. If you really want to pass unencoded binary control
characters, spaces, and nulls to your backends in HTTP request and header
lines, then HTTP mode is probably not going to work for you.

TCP mode will allow them to get through but if your backends actually
expect the requests to be valid HTTP, you will likely be opening up a huge
can of worms and exposing your apps to a host of protocol level attacks.

Also, your connection limits seem pretty ambitious if there really are 2
php servers in that backend and not 2000.

-Bryan



On Mon, Nov 24, 2014 at 10:22 PM, Alexey Zilber 
wrote:

> Hi Willy and Lukas,
>
>   Here's snippets of the new config:
>
>
> -
>
> global
>
>maxconn 645000
>
>maxpipes 645000
>
>ulimit-n 645120
>
>user haproxy
>
>group haproxy
>
> tune.bufsize 49152
>
>spread-checks 10
>
>daemon
>
>quiet
>
>stats socket /var/run/haproxy.sock level admin
>
>pidfile /var/run/haproxy.pid
>
>
> defaults
>
>log global
>
>    mode    http
>
> option accept-invalid-http-request
>
> option accept-invalid-http-response
>
>option  httplog
>
>option  dontlognull
>
>option dontlog-normal
>
>option log-separate-errors
>
> option http-server-close
>
> option tcp-smart-connect
>
> option tcp-smart-accept
>
> option forwardfor except 127.0.0.1
>
>option dontlog-normal
>
>retries 3
>
>option redispatch
>
>maxconn 200
>
>contimeout  5000
>
>clitimeout  6
>
>srvtimeout  6
>
> listen  www   0.0.0.0:80
>
>mode http
>
> capture response header Via len 20
>
>  capture response header Content-Length len 10
>
>  capture response header Cache-Control len 8
>
>  capture response header Location len 40
>
>balance roundrobin
>
># Haproxy status page
>
>stats uri /haproxy-status
>
>stats auth fb:phoo
>
># when cookie persistence is required
>
>cookie SERVERID insert indirect nocache
>
># When internal servers support a status page
>
>option httpchk GET /xyzzyx.php
>
> bind 0.0.0.0:443 ssl crt /etc/lighttpd/ssl_certs/.co.pem
>
> http-request add-header X-FORWARDED-PROTO https if { ssl_fc }
>
>   server app1 10.1.1.6:85 check inter 4 rise 2 fall 3 maxconn
> 16384
>
>   server app2 10.1.1.7:85 check inter 4 rise 2 fall 3 maxconn
> 16384
>
> -
>
>
> The old config did NOT have the following items, and had about 500x more
> errors:
> -
>   tune.bufsize 49152
>
> option accept-invalid-http-request
>
> option accept-invalid-http-response
> -
>
> Here's what the 'show errors' shows on a sampling of the server.  It looks
> like 90% of the errors are the second error (25/Nov/2014:00:06:30.753):
>
>
>
>
> Total events captured on [24/Nov/2014:23:31:52.468] : 151
>
>
>
> [22/Nov/2014:21:55:56.597] frontend www (#2): invalid request
>
>   backend www (#2), server  (#-1), event #150
>
>   src 166.137.247.239:8949, session #3883610, session flags 0x0080
>
>   HTTP msg state 26, msg flags 0x, tx flags 0x
>
>   HTTP chunk len 0 bytes, HTTP body len 0 bytes
>
>   buffer flags 0x00808002, out 0 bytes, total 1183 bytes
>
>   pending 1183 bytes, wrapping at 49152, error at position 227:
>
>
>
>   0
> sited%22%3A1416713764%2C%22times_visited%22%3A6%2C%22device%22%3A%22We
>
>   00070+
> b%22%2C%22lastsource%22%3A%22bottomxpromo%22%2C%22language%22%3A%22en%
>
>   00140+
> 22%2C%22extra%22%3A%22%7B%5C%22tr%5C%22%3A%5C%22en%5C%22%7D%22%2C%22di
>
>   00210+ d_watch%22%3A1%7D; yX=5167136038811769837; _vhist=%7B%22visito
>
>   00280+
> r_id%22%3A%225024165909427731336%22%2C%22seen_articles%22%3A%22%7B%5C%
>
>   00350+
> 22950%5C%22%3A1402416590%2C%5C%22685%5C%22%3A1402416675%2C%5C%22799%5C
>
>   00420+
> %22%3A1402416789%2C%5C%22954%5C%22%3A1402416997%2C%5C%22939%5C%22%3A14
>
>   00490+
> 02417098%2C%5C%222334%5C%22%3A1407162586%2C%5C%222055%5C%22%3A14071626
>
>   00560+
> 91%2C%5C%223888%5C%22%3A1409938121%2C%5C%223020%5C%22%3A1409938211%2C%
>
>   00630+
> 5C%223773%5C%22%3A1409938340%2C%5C%222163%5C%22%3A1409938389%2C%5C%222
>
>   00700+
> 569%5C%22%3A1409938872%2C%5C%222426%5C%22%3A1409938959%2C%5C%2213984%5
>
>   00770+
> C%22%3A1411916274%2C%5C%221675%5C%22%3A1411916466%2C%5C%2214950%5C%22%
>
>   00840+
> 3A1412432461%2C%5C%2219759%5C%22%3A1416714580%7D%22%2C%22num_articles%
>
>   00910+ 22%3A17%7D; lastlargeleaderboard=1416713603;
> lastlike-264755123648776=
>
>   00980+ 1416713764; lastlike-703681396382652=1416713919;
> lastlike-939184096095
>
>   01050+ 676=1416714070; lastlike-293377080869393=14

Re: POST body not getting forwarded

2014-11-20 Thread Bryan Talbot
On Wed, Nov 19, 2014 at 9:17 PM, Rodney Smith  wrote:

> I have a problem where a client is sending audio data via POST, and while
> the request line and headers reach the server, the body of the POST does
> not. However, if the client uses the header "Transfer-Encoding: chunked"
> and chunks the data, it does get sent. What can I do to get the POST body
> sent without the chunking?
> What can be changed to get the incoming raw data packets to get forwarded?
>
>
> The client sends this as the first packet, where path and hostaddress get
> changed via regex before getting assigned a server:
> POST /path/g711.cgi HTTP/1.1
> Host: hostaddress
> Connection: Close
> Authorization: Basic ASLKSDNW8RUNVS3===
>
> And in subsequent packets, the raw audio data: blah, blah, blah.
> -r
>
>

The request does not conform to the HTTP spec and haproxy is ignoring the
body as required by the spec.

See sections 4.3 and 4.4
http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.3

In particular, this paragraph:

The presence of a message-body in a request is signaled by the inclusion of
a Content-Length or Transfer-Encoding header field in the request's
message-headers. A message-body MUST NOT be included in a request if the
specification of the request method (section 5.1.1) does not allow sending
an entity-body in requests. A server SHOULD read and forward a message-body
on any request; if the request method does not include defined semantics
for an entity-body, then the message-body SHOULD be ignored when handling
the request.
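In other words, to send the body un-chunked the client must declare its
size up front with a Content-Length header, e.g. (illustrative request):

POST /path/g711.cgi HTTP/1.1
Host: hostaddress
Connection: Close
Authorization: Basic ASLKSDNW8RUNVS3===
Content-Length: 8192

<exactly 8192 bytes of audio data>

Since the audio is streamed with no known length up front,
"Transfer-Encoding: chunked" really is the right mechanism here.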



-Bryan

