Re: Haproxy subdomain going to wrong backend

2016-11-09 Thread Bryan Talbot

> On Nov 9, 2016, at 4:45 AM, Azam Mohammed  wrote:
> 
> Hello,
> 
>  
>  
> 
> acl  url_subdomain   hdr_dom(host)   -i  subdomain.domain.com 
> 
> acl  url_test hdr_dom(host)   -i  test.subdomain.domain.com
>  
>  
> use_backend subdomain if url_subdomain
> 
> use_backend test   if url_test
> 
>  
>  
> Both the subdomains have different web pages. Now if we enter 
> test.subdomain.domain.com in the browser 
> it goes into the subdomain.domain.com backend. We 
> have no idea what is causing this issue.
> 
>  


Both the url_subdomain and url_test ACLs match the string 
‘subdomain.domain.com’.

Make the ACL match be more specific or put the “use_backend test” first since 
it is already more specific.
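To make that concrete (a sketch only, reusing the names from your config):

acl url_subdomain  hdr_dom(host)  -i  subdomain.domain.com
acl url_test       hdr_dom(host)  -i  test.subdomain.domain.com

# evaluate the more specific rule first
use_backend test       if url_test
use_backend subdomain  if url_subdomain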

-Bryan




Re: Haproxy subdomain going to wrong backend

2016-11-10 Thread Bryan Talbot
… please include the list in your responses too


> On Nov 10, 2016, at 4:09 AM, Azam Mohammed  wrote:
> 
> Hi Bryan,
> 
> Thanks for your reply.
> 
> Putting "use_backend test" first in Haproxy config worked fine.
> 
> But I have few more question based on the solution.
> 
> As you said, both the url_subdomain and url_test ACLs match the string 
> ‘subdomain.domain.com’, so we get this issue. But in the ACL section the 
> full URL is specified, so why is acl url_subdomain catching requests 
> with the URL test.subdomain.domain.com? I believe url_subdomain should 
> always match subdomain.domain.com only.
> 

hdr_dom matches domains (strings terminated by a dot (.) or whitespace). Since 
you seem to be expecting an exact string match, just use hdr.
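For example (illustrative only):

# hdr_dom(host) is a domain-suffix match: a request for
# test.subdomain.domain.com matches BOTH of these ACLs
acl url_subdomain  hdr_dom(host)  -i  subdomain.domain.com
acl url_test       hdr_dom(host)  -i  test.subdomain.domain.com

# hdr(host) is an exact string match: the same request matches
# only the second ACL
acl url_subdomain  hdr(host)  -i  subdomain.domain.com
acl url_test       hdr(host)  -i  test.subdomain.domain.com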


> If it has anything to do with the levels of subdomain, is the precedence 
> mentioned in the Haproxy documentation? Could you please point me to where 
> to look in the Haproxy documentation for this.
> 


The documentation is quite extensive and you can find specifics about req.hdr 
at https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.6-req.hdr

-Bryan


> --
> 
> Thanks & Regards, 
>  
> Azam Sheikh Mohammed
> IT Network & System Admin
>  
> D a n a t
> Al-Shatha Tower Office 1305, Dubai Internet City | P.O.Box: 502113, Dubai, 
> UAE | Tel: +971 4 368 8468 Ext. 133 | Fax:  +971 4 368 8232 | Mobile:  +971 
> 55 498 8089 | Email: a...@danatev.com
> On Thu, Nov 10, 2016 at 12:46 AM, Bryan Talbot wrote:
> 
>> On Nov 9, 2016, at 4:45 AM, Azam Mohammed wrote:
>> 
>> Hello,
>> 
>>  
>>  
>> acl  url_subdomain   hdr_dom(host)   -i  subdomain.domain.com
>> acl  url_test hdr_dom(host)   -i  test.subdomain.domain.com
>>  
>>  
>> use_backend subdomain if url_subdomain
>> 
>> use_backend test   if url_test
>> 
>>  
>>  
>> Both the subdomains have different web pages. Now if we enter 
>> test.subdomain.domain.com in the browser 
>> it goes into the subdomain.domain.com backend. We 
>> have no idea what is causing this issue.
>> 
>>  
> 
> 
> Both the url_subdomain and url_test ACLs match the string 
> ‘subdomain.domain.com’.
> 
> Make the ACL match be more specific or put the “use_backend test” first since 
> it is already more specific.
> 
> -Bryan
> 
> 
> 



Re: Haproxy subdomain going to wrong backend

2016-11-10 Thread Bryan Talbot

> On Nov 9, 2016, at 4:45 AM, Azam Mohammed  wrote:
> 
> Also we have exact same Haproxy config on QA and UAT environment and works 
> fine.
> 
> QA Environment:
> Haproxy Version: HA-Proxy version 1.5.4
> OS Version: CentOS release 6.3 (Final)
> 
> UAT Environment:
> Haproxy Version: HA-Proxy version 1.3.26
> OS Version: CentOS release 5.6 (Final)
> 

I didn’t notice before, but both of these versions are quite old. You should 
consider upgrading them when possible. I’m sure there are many critical 
security issues that have been fixed in the years since these were released.

-Bryan




Re: Haproxy subdomain going to wrong backend

2016-11-14 Thread Bryan Talbot
Use “reply-all” so the thread stays on the list.


> On Nov 14, 2016, at 4:33 AM, Azam Mohammed  wrote:
> 
> Hi Bryan,
> 
> Thanks for your email.
> 
> I was doing a bit of testing on haproxy.
> 
> I used hdr to match the subdomain in the frontend but I got a 503: "503 Service 
> Unavailable: No server is available to handle this request."
> 
> Haproxy Log:
> http-in http-in/ -1/-1/-1/-1/163 503 212 - - SC-- 4/4/0/0/0 0/0 "GET 
> /favicon.ico HTTP/1.1"
> 
> 
> http-in http-in/ -1/-1/-1/-1/0 503 212 - - SC-- 2/2/0/0/0 0/0 "GET 
> /favicon.ico HTTP/1.1"
> 
> But using hdr_dom(host) works fine
> 
> Haproxy Log:
> 
> 

Clearly the host header being sent isn’t the exact string that you’re checking 
for. 
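One common cause (a guess, since the actual Host values aren’t shown here): clients can send the port in the Host header, e.g. “Host: test.subdomain.domain.com:8080”, which an exact hdr match on the bare name will never match. You can list both forms:

acl url_test  hdr(host)  -i  test.subdomain.domain.com  test.subdomain.domain.com:8080

and confirm what the browser really sends by logging it with something like “capture request header Host len 64” in the frontend.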

-Bryan



> http-in ppqa2argaamplus/web01 0/0/2/26/28 200 1560 - - --VN 6/6/0/0/0 0/0 
> "GET /content/ar/images/argaam-plus-icon.ico HTTP/1.1"
> 
> All our websites are developed on ASP.NET.
> 
> I want to use hdr (as you mentioned, it matches the exact string) to match the 
> subdomain.
> 
> Could you please help me to fix this.
> 
> 
> --
> 
> Thanks & Regards, 
>  
> Azam Sheikh Mohammed
> IT Network & System Admin
>   


Re: Working with Multiple HTTPS Applications with haproxy

2016-11-28 Thread Bryan Talbot

> On Nov 23, 2016, at 2:35 AM, Deepak Shakya  wrote:
> 
> I want to setup haproxy to be able to proxy multiple https applications on 
> the same https port
> 
> Something like this:
> 
> Client/Browser  ---(https)--->  haproxy:8443/app1 ---(https)--->  
> app1-server:8101 (Default)
> Client/Browser  ---(https)--->  haproxy:8443/app2 ---(https)--->  
> app2-server:8102
> 
> I was thinking to have SSL Pass-through for the above case and here is my 
> configuration for the same.
> 
> frontend pmc-fe 0.0.0.0:8443 
> mode tcp
> option tcplog
> default_backend app1-be
> 
> acl app2_acl path_beg /app2/
> use_backend app2-be if app2_acl
> 
> backend app1-be
> mode tcp
> stick-table type ip size 200k expire 30m
> stick on src
> server app1-server app1-server:8101
> 
> backend app2-be
> reqrep ^([^\ ]*\ /)app2[/]?(.*) \1\2
> server app2-server app2-server:8102
> 
> 
> But, this is not working? Can somebody guide me?


If this is actually your config then SSL is not decrypted at the proxy and 
there is no way for the app2_acl to ever match. If you want to inspect HTTP 
content in the proxy, then you must terminate SSL in the proxy too.
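A sketch of a terminating variant of your config (the certificate path and the re-encryption to the backends are assumptions, adjust as needed):

frontend pmc-fe
    bind 0.0.0.0:8443 ssl crt /etc/haproxy/combined.pem
    mode http
    acl app2_acl path_beg /app2/
    use_backend app2-be if app2_acl
    default_backend app1-be

backend app2-be
    mode http
    reqrep ^([^\ ]*\ /)app2[/]?(.*) \1\2
    server app2-server app2-server:8102 ssl verify none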


-Bryan



Re: HTTP redirects while still allowing keep-alive

2017-01-09 Thread Bryan Talbot

> On Jan 8, 2017, at 2:03 PM, Ciprian Dorin Craciun wrote:
> 
> Quick question:  how can I configure HAProxy to redirect (via
> `http-request redirect ...`) without HAProxy sending the `Connection:
> close` header, thus still allowing keep-alive on this connection.

I do not see the behavior you describe, but I also do not know what haproxy 
version you might be using or what your config might be like.

haproxy version 1.7.1 with a proxy config like that shown below does not close 
the connection and contains no “connection: close” header for me.

listen http
bind :8000
http-request redirect prefix /redir



$> curl -v http://127.0.0.1:8000/foo
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /foo HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 302 Found
< Cache-Control: no-cache
< Content-length: 0
< Location: /redir/foo
<
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact


-Bryan



> 
> My use-case is the following:  I have a stubborn server that insists
> on pointing to the "wrong" resource URL's, thus on a page load, I get
> a storm of redirects, each with a different connection (due to the
> `Connection: close` reply header).
> 
> 
> I tried to skim the documentation and search the internet (and the
> mailing list archives), but no such topic popped-up, thus I have the
> feeling this is quite impossible as of now...
> 
> Thanks,
> Ciprian.
> 




Re: HTTP redirects while still allowing keep-alive

2017-01-10 Thread Bryan Talbot

> On Jan 10, 2017, at 12:28 AM, Ciprian Dorin Craciun wrote:
> 
> On Tue, Jan 10, 2017 at 9:36 AM, Cyril Bonté  wrote:
>> This is because haproxy behaves differently depending on the Location
>> URL:
>> - beginning with /, it will allow HTTP keep-alived connections (Location:
>> /redir/foo)
>> - otherwise it unconditionally won't, and there's no option to change this
>> (Location: http://mysite/redir)
> 
> 


Whatever the reason for forcing the connection closed, it only closes when 
the scheme changes. Redirecting to a different host or port using a 
“scheme-less” URI allows the connection to be kept open.


listen http
bind :8000
http-request redirect location //127.0.0.2:8001/redir




$> curl -L -v 127.0.0.1:8000/foo
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /foo HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 302 Found
< Cache-Control: no-cache
< Content-length: 0
< Location: //127.0.0.2:8001/redir
<
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact
* Issue another request to this URL: 'http://127.0.0.2:8001/redir'
*   Trying 127.0.0.2…


Maybe that will be useful to Ciprian to make the redirect to a new hostname but 
keep the connection to the old host open if that’s what is needed.

-Bryan





Re: How can I change the URI when forwarding to a server

2017-01-12 Thread Bryan Talbot

> On Jan 12, 2017, at 5:26 AM, Jürgen Haas wrote:
> 
> Hi all,
> 
> I wonder if I can change the uri that the server receives without doing
> a redirect.


You’re looking for http-request with set-uri or set-path + set-query: 
https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-http-request 
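For the example given, something along these lines might work (an untested sketch; it assumes a 1.6+ version where the field converter and query sample fetch are available):

http-request set-uri /login.php?s=%[path,field(3,/)]&%[query] if { path_beg /login/ }

Note this leaves a trailing “&” when the original request has no query string; handling that cleanly may take a second rule.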


-Bryan



> 
> Example:
> Request from client: https://www.example.com/login/username?p1=something
> Request received by server: /login.php?s=username&p1=something
> 
> More general:
> - if path begins with /login/*[?*]
> - add the first * as a query parameter s to the query
> - keep other optional query parameters in place
> 
> Is anything like that possible?
> 
> 
> Thanks
> Jürgen
> 



Re: Queries Regarding the Redirections According to the ports

2017-01-31 Thread Bryan Talbot

> On Jan 31, 2017, at 11:26 PM, parag bharne wrote:
> 
> HI,
> Here is our scenario that we want to make work using haproxy:
> 
> (client) -> http://www.example.com -> (redirect) -> https://www.example.com
> (client) -> http://www.example.com:8080 -> (redirect) ->
> https://www.example.com:8080
> 
> Is this possible in haproxy or not? Please try to reply as fast as possible
> 

Yes.



> Parag Bharne



Re: Queries Regarding the Redirections According to the ports

2017-02-02 Thread Bryan Talbot

> On Feb 1, 2017, at 1:21 AM, parag bharne wrote:
> 
> The above conditions work for port 80, and SSL works on 443, but on 
> another port, i.e. 8080, SSL cannot be accessed.


The sample configs do not make much sense given this statement so it’s hard to 
say what you’re trying to do.

My recommendation is to simplify your config and get it working for both your 
sites with only HTTPS. Then add support to redirect HTTP requests to the 
working HTTPS listener.
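Something shaped like this (hypothetical names and paths):

frontend https-in
    mode http
    bind *:443 ssl crt /etc/haproxy/site.pem
    default_backend www-backend

frontend http-in
    mode http
    bind *:80
    redirect scheme https code 301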




> 
> See My Configuration File I have Tried
> ### First Configuration###
> frontend www-http
> bind *:80
> bind *:443 ssl crt /etc/apache2/ssl/apache.pem
> reqadd X-Forwarded-Proto:\ https
> default_backend tcp-backend
> mode tcp
> 
> frontend www-http
> bind *:80
> bind *:443 ssl crt /etc/apache2/ssl/apache.pem
> reqadd X-Forwarded-Proto:\ https
> default_backend www-backend
> mode tcp
> 
> 
> backend tcp-backend
> redirect scheme https if !{ ssl_fc }
> server example 1.0.0.0:8080 check
> 
> backend www-backend
>  redirect scheme https if !{ ssl_fc }
>  server example.com 1.0.0.1:80 check
> 
> ## Second Configuration ##
> 
> frontend www-http2
> bind *:80
> bind *:443 ssl crt /etc/apache2/ssl/apache.pem
> reqadd X-Forwarded-Proto:\ https
> default_backend tcp-backend
> mode tcp
> 
> frontend tcp-http1
> bind *:81
> bind *:81 ssl crt /etc/apache2/ssl/apache.pem
> reqadd X-Forwarded-Proto:\ https
> default_backend www-backend
> mode tcp
> 
> backend tcp-backend
> redirect scheme https if !{ ssl_fc }
> server example.com 1.0.0.0:8080 check
> 
> backend www-backend
>  redirect scheme https if !{ ssl_fc }
>  server example.com 1.0.0.1:80 check
> 
> ##### Please help me with configuration changes if any are needed. Give some 
> hints on how to do that.
> 
> Thanks and Regards 
>Parag Bharne
> 
> 
> On Wed, Feb 1, 2017 at 12:59 PM, Bryan Talbot wrote:
> 
>> On Jan 31, 2017, at 11:26 PM, parag bharne wrote:
>> 
>> HI,
>> Here is our scenario that we want to make work using haproxy:
>> 
>> (client) -> http://www.example.com -> (redirect) -> https://www.example.com
>> (client) -> http://www.example.com:8080 -> (redirect) ->
>> https://www.example.com:8080
>> 
>> Is this possible in haproxy or not? Please try to reply as fast as possible
>> 
> 
> Yes.
> 
> 
> 
>> Parag Bharne
> 
> 



Re: Layer 7 Headers

2017-02-06 Thread Bryan Talbot

> On Feb 6, 2017, at 4:24 PM, Andrew Kroenert wrote:
> 
> Hey Guys
> 
> Quick one, Can anyone confirm any difference between the following header 
> manipulations in haproxy


Well, they’re very different … the first alters the response and the second 
alters the request.

If your haproxy version supports the http-request / http-response methods, 
those should probably be preferred over the older rspadd / reqadd which are 
kept for backwards compatibility.



> 
> 1.
> rspadd Server:\ Test


This adds a “Server: Test” header line to the response sent by the server 
before forwarding to the client.
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#rspadd 



> 
> 2.
> http-request add-header Server Test
> 


This adds a “Server: Test” header line to the request sent by the client before 
forwarding to the server.
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-http-request 
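For reference, the modern spellings of the two examples would be (a sketch; it assumes a version with http-request / http-response support):

http-response add-header Server Test   # replaces: rspadd Server:\ Test
http-request  add-header Server Test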



-Bryan



Re: [PATCH][RFC] MEDIUM: global: add a 'grace' option to cap the soft-stop time

2017-03-15 Thread Bryan Talbot

> On Mar 15, 2017, at 4:44 PM, Cyril Bonté  wrote:
> 
> Several use cases may accept to abruptly close the connections when the
> instance is stopping instead of waiting for timeouts to happen.
> This option allows to specify a grace period which defines the maximum
> time to spend to perform a soft-stop (occurring when SIGUSR1 is
> received).
> 
> With this global option defined in the configuration, once all connections are
> closed or the grace time is reached, the instance will quit.


Most of the other settings for time-limits include the word “timeout”. Maybe 
“timeout grace”, “timeout shutdown”, “timeout exit” or something is more 
consistent with other configuration options?

-Bryan




Re: stick-table ,show table, use field

2017-03-30 Thread Bryan Talbot

> On Mar 30, 2017, at 10:19 AM, Arnall  wrote:
> 
> Hello everyone,
> 
> when using socat to show a stick-table i have lines like this :
> 
> # table: dummy_table, type: ip, size:52428800, used:33207
> 
> 0x7f202f800720: key=aaa.bbb.ccc.ddd use=0 exp=599440 gpc0=0 
> conn_rate(5000)=19 conn_cur=0 http_req_rate(1)=55
> 
> ../...
> 
> I understand all the fields except 2 :
> 
> used:33207
> 
> use=0
> 
> I found nothing in the doc, any idea ?
> 


I believe that these are documented in the management guides and not the config 
guides.

https://cbonte.github.io/haproxy-dconv/1.6/management.html#9.2-show%20table 


Here, I think that ‘used’ for the table is the number of entries that currently 
exist in the table, and ‘use’ for an entry is the number of sessions that 
concurrently match that entry.
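If it helps, the table shown above can be dumped like this (the stats socket path is an assumption):

$> echo "show table dummy_table" | socat stdio /var/run/haproxy.sock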

-Bryan



Re: low load client payload intermittently dropped with a "cD" error (v1.7.3)

2017-04-10 Thread Bryan Talbot

> On Apr 8, 2017, at 2:24 PM, Lincoln Stern wrote:
> 
> I'm not sure how to interpret this, but it appears that haproxy is dropping
> client payload intermittently (1/100).  I have included tcpdumps and logs to
> show what is happening.
> 
> Am I doing something wrong?  I have no idea what could be causing this or how
> to go about debugging it.  I cannot reproduce it, but I do observe in 
> production ~2 times
> a day across 20 instances and 2K connections.
> 
> Any help or advice would be greatly appreciated.
> 
> 
> 

You’re in TCP mode with 60 second timeouts. So, if the connection is idle for 
that long then the proxy will disconnect. If you need idle connections to stick 
around longer and mix http and tcp traffic then you probably want to set 
“timeout tunnel” to however long you’re willing to let idle tcp connections sit 
around and not impact http timeouts. If you only need long-lived tcp “tunnel” 
connections, then you can instead just increase both your “timeout client” and 
“timeout server” timeouts to cover your requirements.
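For example (values are placeholders):

defaults
    timeout client  60s
    timeout server  60s
    # applies once a connection switches to tunnel mode (plain TCP,
    # WebSocket, CONNECT) and overrides client/server timeouts there
    timeout tunnel  1h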

-Bryan



> What I'm trying to accomplish is to provide HA availability over two routes
> (i.e. internet providers).  One acts as primary and I gave it a "static-rr"
> "weight" of 256 and the other as backup and has a weight of "1".  Backup
> should only be used in case of primary failure.
> 
> 
> log:
> Apr  4 18:55:27 app055 haproxy[13666]: 127.0.0.1:42262 
>  [04/Apr/2017:18:54:41.585] ws-local servers/server1 
> 1/86/45978 4503 5873 -- 0/0/0/0/0 0/0
> Apr  4 22:46:37 app055 haproxy[13666]: 127.0.0.1:47130 
>  [04/Apr/2017:22:46:36.931] ws-local servers/server1 
> 1/62/663 7979 517 -- 0/0/0/0/0 0/0
> Apr  4 22:46:38 app055 haproxy[13666]: 127.0.0.1:32931 
>  [04/Apr/2017:22:46:37.698] ws-local servers/server1 
> 1/55/405 3062 553 -- 1/1/1/1/0 0/0
> Apr  4 22:46:43 app055 haproxy[13666]: 127.0.0.1:41748 
>  [04/Apr/2017:22:46:43.190] ws-local servers/server1 
> 1/115/452 7979 517 -- 2/2/2/2/0 0/0
> Apr  4 22:46:46 app055 haproxy[13666]: 127.0.0.1:57226 
>  [04/Apr/2017:22:46:43.576] ws-local servers/server1 
> 1/76/3066 2921 538 -- 1/1/1/1/0 0/0
> Apr  4 22:46:47 app055 haproxy[13666]: 127.0.0.1:39656 
>  [04/Apr/2017:22:46:47.072] ws-local servers/server1 
> 1/67/460 8254 528 -- 1/1/1/1/0 0/0
> Apr  4 22:47:38 app055 haproxy[13666]: 127.0.0.1:39888 
>  [04/Apr/2017:22:46:38.057] ws-local servers/server1 
> 1/63/60001 0 0 cD 0/0/0/0/0 0/0 
> Apr  5 08:44:55 app055 haproxy[13666]: 127.0.0.1:42650 
>  [05/Apr/2017:08:44:05.529] ws-local servers/server1 
> 1/53/49645 4364 4113 -- 0/0/0/0/0 0/0
> 
> 
> tcpdump:
> 22:46:38.057127 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [S], seq 
> 2113072542, win 43690, options [mss 65495,sackOK,TS val 82055529 ecr 
> 0,nop,wscale 7], length 0
> 22:46:38.057156 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [S.], seq 
> 3284611992, ack 2113072543, win 43690, options [mss 65495,sackOK,TS val 
> 82055529 ecr 82055529,nop,wscale 7], length 0
> 22:46:38.057178 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [.], ack 1, win 
> 342, options [nop,nop,TS val 82055529 ecr 82055529], length 0
> 22:46:38.057295 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [S], seq 
> 35567, win 29200, options [mss 1460,sackOK,TS val 82055529 ecr 
> 0,nop,wscale 7], length 0
> 22:46:38.060539 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [P.], seq 1:199, 
> ack 1, win 342, options [nop,nop,TS val 82055530 ecr 82055529], length 198
> 22:46:38.060598 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [.], ack 199, win 
> 350, options [nop,nop,TS val 82055530 ecr 82055530], length 0
> ... client payload acked ...
> 22:46:38.120527 IP 99.99.99.99.8000 > 10.10.10.10.34289: Flags [S.], seq 
> 4125907118, ack 35568, win 28960, options [mss 1460,sackOK,TS val 
> 662461622 ecr 82055529,nop,wscale 8], length 0
> 22:46:38.120619 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [.], ack 1, 
> win 229, options [nop,nop,TS val 82055545 ecr 662461622], length 0
> ... idle timeout by server 5 seconds later...
> 22:46:43.183207 IP 99.99.99.99.8000 > 10.10.10.10.34289: Flags [F.], seq 1, 
> ack 1, win 114, options [nop,nop,TS val 662466683 ecr 82055545], length 0
> 22:46:43.183387 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [F.], seq 1, ack 
> 199, win 350, options [nop,nop,TS val 82056810 ecr 82055530], length 0
> 22:46:43.184011 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [.], ack 2, 
> win 229, options [nop,nop,TS val 82056811 ecr 662466683], length 0
> 22:46:43.184025 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [.], ack 2, win 
> 342, options [nop,nop,TS val 82056811 ecr 82056810], length 0
> 22:46:43.184715 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [P.], seq 199:206, 
> ack 2, win 342, options [nop,nop,TS val 82056811 ecr 82056810], length 7
> 22:46:43.184795 IP 127.0.0.1.9011 > 1

Re: Haproxy 1.5.4 unable to accept new TCP request, backlog full, tens of thousands close_wait connection

2017-04-26 Thread Bryan Talbot

> On Apr 26, 2017, at 2:13 AM, jaseywang  wrote:
> 
> Hi
> @Willy @Cyril do you have any recommended config for ssl related setting, we 
> now use nbproc and cpu-map to distribute the load to each cpu, though haproxy 
> can work with cdn now, its performance is not as good as before without cdn, 
> user time of each core is almost saturated.
> Thanks.


I think that most would recommend using a TLS config from 
https://wiki.mozilla.org/Security/Server_Side_TLS unless you have specific 
needs and expert knowledge to make up your own.
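As an illustration only (generate the real cipher list from the Mozilla site for your haproxy/openssl versions rather than copying this):

global
    ssl-default-bind-options  no-sslv3 no-tlsv10
    ssl-default-bind-ciphers  ECDHE+AESGCM:ECDHE+AES256:!aNULL:!MD5
    tune.ssl.default-dh-param 2048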

-Bryan



Re: haproxy

2017-05-12 Thread Bryan Talbot

> On May 11, 2017, at 7:51 AM, Jose Alarcon wrote:
> 
> Hello,
> 
> Excuse me, my English is very bad. I need to know how to change the haproxy 
> configuration between passive/active manually, not using keepalived.
> 

There is no standard way because that is not a feature of haproxy. High 
availability of the proxy is managed by an external tool like keepalived.

-Bryan


> I need this information for a high school homework assignment.
> 
> thanks.
> 
> My native language is Spanish.




Re: haproxy "inter" and "timeout check", retries and "fall"

2017-05-15 Thread Bryan Talbot

> On May 13, 2017, at 10:59 PM, Jiafan Zhou wrote:
> 
> 
> Hi all,
> 
> The version of haproxy I use is: 
> 
> # haproxy -version
> HA-Proxy version 1.5.2 2014/07/12
> Copyright 2000-2014 Willy Tarreau  

This version is so old. I’m sure there must be hundreds of bugs fixed over the 
last 3 years. Why not use a properly current version?


> I have a question regarding the Health Check. In the documentation of 
> haproxy, it mentions the below for the "timeout check" and "inter":
> 
> Now I am wondering here which one and what value will be used for healthcheck 
> interval. Is it "timeout check" as 10 seconds, or the "inter" as the default 
> 2 seconds?
> 
> 

Why not just set the health check values that you care about and not worry 
about guessing what they’ll end up being when only some are set and some are 
using defaults? If you need / expect them to be a particular value for proper 
system operation, I’d set them no matter what the defaults may be declared to 
be. 


> Another question, since I defined the "retries" to be 3, in the case of 
> server connection failure, will it reconnect 3 times? Or does it use the 
> "fall" parameter (which defaults to 3 here as well) instead for healthcheck 
> retry?
> 
> 


“retries” is for dispatching requests and is not used for health checks.


> So in this configuration, in the case of server failure, does it wait for up 
> to 30 seconds (3 fall or retries), then 20 seconds (2 rise), before the 
> server is considered operational? (in total 50 seconds)
> 
> 

retries are not considered, only health-check-specific settings like “fall” and 
“inter”.

> Thanks,
> 
> Jiafan
> 



Re: Deny with 413 request too large

2017-05-17 Thread Bryan Talbot

> On May 15, 2017, at 6:35 PM, Joao Morais wrote:
> 
> This is working but sounds like a hacky workaround since I’m using another status 
> code. If I try to use:
> 
>errorfile 413 /usr/local/etc/haproxy/errors/413.http
>http-request deny deny_status 413 if { req.body_size gt 10485760 }
> 
> ... HAProxy complains with:
> 
>[WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:15] : status 
> code 413 not handled by 'errorfile', error customization will be ignored.
>[WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:89] : status 
> code 413 not handled, using default code 403.
> 
> How should I configure HAProxy in order to deny with 413?



You’ve already found it. AFAIK, that’s the only way.

-Bryan




Re: haproxy "inter" and "timeout check", retries and "fall"

2017-05-19 Thread Bryan Talbot

> On May 18, 2017, at 2:58 AM, Jiafan Zhou wrote:
> 
> Hi Bryan,
> 
> For reference:
> 
> 
>> defaults
>> timeout http-request10s
>> timeout queue   1m
>> timeout connect 10s
>> timeout client  1m
>> timeout server  1m
>> timeout http-keep-alive 10s
>> timeout check   10s
>> 
> 
> - For "timeout check" and "inter", it was for some troubleshooting and would 
> like to understand the behaviour a bit more. By reading haproxy official 
> document, it is not clear to me.
> 
> I think in my case, it uses the "timeout check" as 10 seconds. There is no 
> "inter" parameter in the configuration.
> 
> 

Ten seconds for a health check to respond is an eternity. Personally, I’d 
expect a response 1000 times faster than that. Why do you want it to be so 
long? What problems with the default health check was this super long timeout 
meant to resolve?

> But here I try to understand which value will use if "timeout check" is 
> present, but "inter" is not. I already set the timeout check".
> 
> - Finally, I think I am still right about the "fall" (default to 3) and 
> "rise" (default to 2).
> 
> It takes up to 50 seconds to converge the server, as far as the haproxy is 
> concerned.
> 
> 


I don’t think that health checks are run concurrently to the same server in a 
backend. This means that if your server is accepting the TCP connection but not 
responding before the “timeout check” timer strikes, then you could be seeing 
40+ second times to detect the failure especially if there are delays in making 
the connection for the healthcheck too.

The defaults should detect a down server in 3 consecutive failures with 2 
seconds between each check so 6 seconds or so.
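Spelled out explicitly, those defaults correspond to roughly this (a sketch with placeholder names):

backend app
    # fail a check that takes longer than this once the connection is up
    timeout check 2s
    # check every 2s; down after 3 consecutive failures (~6s),
    # back up after 2 consecutive successes
    server web1 10.0.0.1:80 check inter 2s fall 3 rise 2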

-Bryan



Re: Deny with 413 request too large

2017-05-22 Thread Bryan Talbot
>>> 
>>>  errorfile 413 /usr/local/etc/haproxy/errors/413.http
>>>  http-request deny deny_status 413 if { req.body_size gt 10485760 }
>>> 
>>> ... HAProxy complains with:
>>> 
>>>  [WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:15] : status 
>>> code 413 not handled by 'errorfile', error customization will be ignored.
>>>  [WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:89] : status 
>>> code 413 not handled, using default code 403.
>>> 
>>> How should I configure HAProxy in order to deny with 413?
>> 
> 
> In my understanding I should only use a 400badreq.http like message on an 
> errorfile 400 config line, otherwise if HAProxy needs to issue a 400 status 
> code, my 413 status code would be issued instead.
> 
> Is this a valid feature request or there are technical reasons why this has 
> been done that way?
> 
> Hints are welcome.


I think the way to do it is to create a backend to handle deny with the special 
message and then use the backend to reject the request. You can have different 
backends to handle any special case and not pollute the normal error responses.

frontend http
… normal stuff here
  use_backend req_too_big if { req.body_size gt 10485760 }


backend req_too_big
  errorfile 400 /path/to/my/error400_req_too_big.http
  http-request deny deny_status 400



-Bryan




Re: HAProxy1.7.9-http-reuse

2017-10-26 Thread Bryan Talbot
Hello


> On Oct 26, 2017, at 3:13 PM, karthikeyan.rajam...@thomsonreuters.com wrote:
> 
> Hi,
> We have  the set up working, the ping time from local to remote haproxy is 
> 204 ms.
> The time taken for the web page when accessed by the browser is 410 ms.
> We want the latency to be 204 ms when accessed by the browser. We configured 
> to reuse http & with http-reuse aggressive|always|safe options
> but could not reduce the 410 ms to 204 ms. It is always 410 ms. Please let us 
> know how we can reuse http & reduce our latency.
>  

> 4. Local haproxy log
> 172.31.x.x:53202 [26/Oct/2017:21:02:36.368] http_front http_back/web1 
> 0/0/204/205/410 200 89 - -  0/0/0/0/0 0/0 {} "GET / HTTP/1.0"


This log line says that it took your local proxy 204 ms to connect to the 
remote proxy and that the first response bytes from the remote proxy were 
received by the local proxy 205 ms later for a total round trip time of 410 ms 
(after rounding).

The only way to get the total time to be equal to the network latency 
would be to make the remote respond in 0 ms (or less!). If the two proxies are 
actually 200 ms apart, I don’t see how you could do much better.

-Bryan



Re: HAProxy1.7.9-http-reuse

2017-10-26 Thread Bryan Talbot


> On Oct 26, 2017, at 6:13 PM, karthikeyan.rajam...@thomsonreuters.com wrote:
> 
>  
> Yes the log indicates that. But the RTT via ping is 204 ms, with http-reuse 
> always/aggressive option the connection is reused & we expect a time close to 
> ping + a small overhead time; the http-reuse always option seems to have no 
> impact on the total time taken.
> We are looking to get the option working.


I’d bet that it’s working but that it doesn’t do what you're assuming it does.

It’s not a connection pool that keeps connections open to a backend when there 
are no current requests. As the last paragraph and note of 
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#http-reuse says:


No connection pool is involved, once a session dies, the last idle connection
it was attached to is deleted at the same time. This ensures that connections
may not last after all sessions are closed.

Note: connection reuse improves the accuracy of the "server maxconn" setting,
because almost no new connection will be established while idle connections
remain available. This is particularly true with the "always" strategy.

So, testing one connection at a time one would not expect to see any 
difference. The benefit comes when there are many concurrent requests.

One way to check if the feature is working would be to run your ‘ab’ test with 
some concurrency N and inspect the active TCP connections from local proxy to 
remote proxy. If the feature is working, I would expect to see about N 
(something less) TCP connections that are reused for multiple requests. If 
there are 1000 requests sent with concurrency 10 and 1000 different TCP 
connections used the feature isn’t working (or the connections are private).
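One rough way to count new connections during the test (the remote proxy address and port are placeholders) is to watch for SYNs while ab runs:

$> tcpdump -n 'dst host REMOTE_PROXY_IP and dst port 443 and tcp[tcpflags] & tcp-syn != 0'

With reuse working, the number of SYNs should stay near the concurrency level rather than approaching the total request count.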

-Bryan



Re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-01 Thread Bryan Talbot


> On Nov 1, 2017, at 3:28 AM, Aleksandar Lazic  wrote:
> 
> 
> There is now a shiny new docker image with the rc1.
> 
> docker run --rm --entrypoint /usr/local/sbin/haproxy me2digital/haproxy18 -vv
> 


For the past couple of years, I’ve also been maintaining a base docker image 
for haproxy. It is interesting to see how others structure the build and 
configuration. 

I see that you include a base / default configuration file while I’ve left that 
completely up to the user to provide. Given how many different ways people 
use haproxy, it didn’t seem that there was any one “basic” config that would 
work beyond a trivial example. I’m curious how useful the configuration you’ve 
packaged is. I use my image as a base into which I repackage use-case-specific 
configuration files for deployments, and I assume anyone else using the 
image does the same thing, but I do not have any feedback about that.


https://hub.docker.com/r/fingershock/haproxy-base/ 


-Bryan



Re: Bug: haproxy fails to build with USE_THREAD=

2018-02-05 Thread Bryan Talbot
Bisecting the 1.9 / master branch shows the build break (on OSX) with



abeaff2d543fded7ffc14dd908d673c59d725155 is the first bad commit
commit abeaff2d543fded7ffc14dd908d673c59d725155
Author: Willy Tarreau 
Date:   Mon Feb 5 19:43:30 2018 +0100

BUG/MINOR: fd/threads: properly dereference fdcache as volatile

In fd_rm_from_fd_list(), we have loops waiting for another change to
complete, in case we don't have support for a double CAS. But these
ones fail to place a compiler barrier or to dereference the fdcache
as a volatile, resulting in an endless loop on the first collision,
which is visible when run on MIPS32.

No backport needed.






> On Feb 5, 2018, at Feb 5, 12:36 PM, Tim Düsterhus  wrote:
> 
> Hi
> 
> if haproxy is build without USE_THREAD (e.g. by using TARGET=generic or
> by explicitly setting USE_THREAD=) it fails to link, because
> import/plock.h is not included when src/fd.c is being compiled.
> 
>> src/fd.c: In function ‘fd_rm_from_fd_list’:
>> src/fd.c:268:9: warning: implicit declaration of function ‘pl_deref_int’ 
>> [-Wimplicit-function-declaration]
>>  next = pl_deref_int(&fdtab[fd].cache.next);
>> ^
> 
> *snip*
> 
>> src/fd.o: In function `fd_rm_from_fd_list':
>> /scratch/haproxy/src/fd.c:268: undefined reference to `pl_deref_int'
>> /scratch/haproxy/src/fd.c:276: undefined reference to `pl_deref_int'
>> collect2: error: ld returned 1 exit status
>> Makefile:898: recipe for target 'haproxy' failed
>> make: *** [haproxy] Error 1
> 
> Best regards
> Tim Düsterhus
> 




Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at Jun 29, 5:11 AM, Leela Kalidindi (lkalidin) 
>  wrote:
> 
> Hi,
>  
> How can I enforce haproxy to reuse limited backend connections regardless of 
> number of client connections? Basically I do not want to recreate backend 
> connection for every front end client.  
>  
> "HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
> http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
> persist\n http-reuse aggressive\n maxconn 16\n",
> "HAPROXY_0_FRONTEND_HEAD": "\nfrontend {backend}\n  bind 
> {bindAddr}:{servicePort}\n  mode http\n  option httplog\n  option 
> forwardfor\n option http-keep-alive\n maxconn 16\n"
>  
> I currently have the above configuration, but still backend connections are 
> getting closed when the next client request comes in.
>  
> Could someone help me with the issue?  Thanks in advance!
>  


I suspect that there is a misunderstanding of what backend connection re-use 
means. Specifically this portion from the documentation seems to trip people up:


https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#http-reuse 

No connection pool is involved, once a session dies, the last idle connection
it was attached to is deleted at the same time. This ensures that connections
may not last after all sessions are closed.

I suspect that in your testing, you send one request, observe TCP state, then 
send a second request and expect the second request to use the same TCP 
connection. This is not how the feature works. The feature is optimized to 
support busy / loaded servers where the TCP open rate should be minimized. This 
allows a server to avoid, say opening 2,000 new connections per second, and 
instead just keep re-using a handful. It’s not a connection pool that pre-opens 
10 connections and keeps them around in case they might be needed.
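For reference, the feature is enabled with a single backend directive; the backend name, strategy, and address below are illustrative only:

```
backend app
    mode http
    http-reuse safe
    server srv1 10.0.0.10:80 maxconn 100
```

The strategies (never, safe, aggressive, always) only change which idle connections a new request is allowed to pick up; none of them pre-opens or pools connections.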

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at Jun 29, 12:38 PM, Leela Kalidindi (lkalidin) 
>  wrote:
> 
> Hi Bryan,
>  
> Thanks a lot for the prompt response.
>  
> Is there a such kind of thing to leave the backend connections open forever 
> that can serve any client request? 
>  


No, not to my knowledge.

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at Jun 29, 12:42 PM, Leela Kalidindi (lkalidin) 
>  wrote:
> 
> Bryan,
>  
> One another follow-up question - what does persist do?  Thanks!
>  


https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#persist 


is for 

https://en.wikipedia.org/wiki/Remote_Desktop_Protocol 


Is that what you were asking?

-Bryan



Re: Reuse backend connections

2018-06-29 Thread Bryan Talbot


> On Jun 29, 2018, at Jun 29, 12:50 PM, Leela Kalidindi (lkalidin) 
>  wrote:
> 
> Not for Remote desktop protocol, it is for haproxy backend server with option 
> persist as in
> "HAPROXY_0_BACKEND_HEAD": "\nbackend {backend}\n balance {balance}\n mode 
> http\n option httplog\n  option forwardfor\n option http-keep-alive\n option 
> persist\n http-reuse aggressive\n maxconn 16\n",
>  


You need to stop playing 20 questions on the mailing list and RTFM already.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%20persist 


-Bryan




cQ session termination

2010-01-08 Thread Bryan Talbot

I'm trying to understand exactly what this termination state means.  I'm seeing 
http connections from clients that seem to be complete and queued.  These 
requests seem to be timing out from the queue well before the "timeout queue" 
time though.  The proxy is busy at these times and there are a number of 
requests queued.

Why are these sessions terminated?  I was expecting it to be terminated after 
the 30s "timeout queue" but it seems to have terminated after just 5s while 
waiting for client data (which seems odd since the Tq value is present and 
small).  My understanding was that requests would not be queued until the 
client completed sending the request.

Here is a sample log entry that shows the issue.  This is using 1.3.20 btw.

111.222.333.444:1234 [08/Jan/2010:13:05:42.055] web portal/ 
19/5000/-1/-1/5019 503 212 - - cQ-- 811/811/721/0/0 0/550 "GET /foo/bar 
HTTP/1.1" 


defaults
timeout connect   5s
timeout client5s
timeout http-request  5s
timeout server   20s
timeout queue30s



-Bryan







Re: cQ session termination

2010-01-08 Thread Bryan Talbot
OK, I'll try setting timeouts like this and see how that works.


timeout connect   5s
timeout client20s
timeout http-request  5s
timeout server   20s
timeout queue20s


-Bryan




On Jan 8, 2010, at Jan 8, 12:38 PM, Willy Tarreau wrote:

> On Fri, Jan 08, 2010 at 12:13:37PM -0800, Bryan Talbot wrote:
>> 
>> I'm trying to understand exactly what this termination state means.  I'm 
>> seeing http connections from clients that seem to be complete and queued.  
>> These requests seem to be timing out from the queue well before the "timeout 
>> queue" time though.  The proxy is busy at these times and there are a number 
>> of requests queued.
>> 
>> Why are these sessions terminated?  I was expecting it to be terminated 
>> after the 30s "timeout queue" but it seems to have terminated after just 5s 
>> while waiting for client data (which seems odd since the Tq value is present 
>> and small).  My understanding was that requests would not be queued until 
>> the client completed sending the request.
>> 
>> Here is a sample log entry that shows the issue.  This is using 1.3.20 btw.
>> 
>> 111.222.333.444:1234 [08/Jan/2010:13:05:42.055] web portal/ 
>> 19/5000/-1/-1/5019 503 212 - - cQ-- 811/811/721/0/0 0/550 "GET /foo/bar 
>> HTTP/1.1" 
> 
> I think I know what is happening :
> 
>> defaults
>>timeout connect   5s
>>timeout client5s
>>timeout http-request  5s
>>timeout server   20s
>>timeout queue30s
> 
> The "timeout client" fires before "timeout queue". Client and server
> timeouts have always covered the other ones, but I suspect that in
> this specific case we should be able to disable the client timeout
> when we're waiting in the queue. This is a risky change as we
> absolutely need to ensure we re-enable it when leaving the queue,
> but that's something that needs to be studied because I think it
> makes sense to do it.
> 
> In the mean time, you should simply set your "timeout client" to
> at least the same value as "timeout queue". As a general rule of
> thumb, I recommend to set both client and server timeouts to
> similar values, because it avoids head scratching trying to
> understand which one should have fired first. Also, if your client
> timeout is smaller than the server's, haproxy will not necessarily
> know whether the server is not responding because it is waiting for
> the client or because it's working, so the client timeout will still
> fire when your server responds slowly. Thus, I think you could
> reasonably set "timeout client" to 30s and leave the other ones
> untouched.
> 
> Regards,
> Willy
> 



Apache Traffic Server?

2010-03-18 Thread Bryan Talbot
On Mar 18, 2010, at Mar 18, 7:08 AM, Erik Gulliksson wrote:
> 
> What I
> am looking for in my SSL-decoding solution is support for TE:chunked,
> http keep-alive, option to set "SSL engine" (for h/w acceleration),
> soft-reconfiguration (something like haproxy's "-sf"), HTTP header
> manipulation, open-source, free, robust and efficient. This is
> beginning to sound like haproxy with SSL support :)
> 



I'm looking for ways to add caching and reduce the number of components to 
manage and configure as well.  I know this is an haproxy specific list, but 
there are many knowledgeable people here and I was wondering if anyone has any 
experience with Traffic Server?  It's a newly open-sourced, but previously 
commercial proxy and cache (that also supports SSL), which was donated to 
Apache.

I've only read some of the documentation so far but it seems very complete, 
though probably too complicated if all that is needed is a simple proxy.


http://incubator.apache.org/trafficserver/docs/v2/admin/
http://incubator.apache.org/projects/trafficserver.html
http://wiki.apache.org/incubator/TrafficServerProposal
http://cwiki.apache.org/TS/traffic-server.html



Any opinions or experience with this?


-Bryan



Re: Binding by Hostname

2010-04-21 Thread Bryan Talbot
On Apr 21, 2010, at Apr 21, 3:05 AM, Laurie Young wrote:

> 
> 
> 
> Unfortunately I am still no closer to knowing if HAProxy can do this :-(
> 


I don't think you can bind frontends using name-based virtual hosts like it 
seems you're attempting to do.  If you want to do that, you'll need to use 
different ports or different IP addresses.


-Bryan




Re: precedence of if conditions

2010-06-30 Thread Bryan Talbot
See section 7.7: AND is implicit.


7.7. Using ACLs to form conditions
--

Some actions are only performed upon a valid condition. A condition is a
combination of ACLs with operators. 3 operators are supported :

  - AND (implicit)
  - OR  (explicit with the "or" keyword or the "||" operator)
  - Negation with the exclamation mark ("!")
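So for the question above, with hypothetical ACLs a, b, and c, the grouping works out like this:

```
acl a path_beg /api
acl b method POST
acl c src 10.0.0.0/8

# "if a b or c" is evaluated as (a AND b) OR c
use_backend special if a b or c

# to get a AND (b OR c), split it into two rules instead:
use_backend special if a b
use_backend special if a c
```

There is no parenthesis syntax, so distributing the AND across two rules is the usual way to express the other grouping.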



-Bryan


On Wed, Jun 30, 2010 at 4:35 PM, Hank A. Paulson <
h...@spamproof.nospammail.net> wrote:

> Also, what is the rule for multiple if conditions? I am missing it if it is
> in the docs.
>
>
> reqirep blah if a b or c
>
> is that:
>
> (a and b) or c
> or
> a and (b or c)
> or
> something else :)
>
>
>


Re: Load balancing and Logging questions

2010-07-28 Thread Bryan Talbot
It's TCP connections that are being load balanced, not RMI requests.  If
several RMI requests are made over a single TCP connection they'll all go to
the same server.



On Sun, Jul 25, 2010 at 11:43 PM, Barak Yaish  wrote:

> Hello all,
>
> I've 2 RMI servers fronted by haproxy 1.4.8, here is the config file:
>
> global
> stats socket /tmp/stats
> pidfile /var/run/haproxy.pid
> daemon
> defaults
> mode tcp
> option dontlognull
> retries 3
> option redispatch
> maxconn 2000
> contimeout 5000
> clitimeout 5
> srvtimeout 5
> listen  RMI 10.80.0.55:1099
> mode tcp
> balance roundrobin
> server dev103 10.80.0.206:1099
> server dev105 10.80.0.212:1099
> listen  OTHER 10.80.0.55:10999
> mode http
> stats enable
> stats uri /admin?stats
>  I've created a RMI client against the haproxy machine, and invoking
> functionS indeed directed to the concrete servers. The problem is, that
> after the client created, all traffic directed to only one server, and no
> load balance occures. Re-creating the client may result with traffic
> directed to the other server, but still - only to that server.
>
> Question No. 1: Is my config file wrong?
>
> I'm trying to figure it what I'm doing wrong, since my config file
> looks quite simple.
>
> Question No. 2: Is there a way to configure haproxy to dump data regarding
> the traffic it directs to a simple file rarther than syslog server ? Trying
> to run with -d displayed some lines which do not tell me alot:
>
> Available polling systems :
>  sepoll : pref=400,  test result OK
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 4 (4 usable), will use sepoll.
> Using sepoll() as the polling mechanism.
> :RMI.accept(0004)=0007 from [172.16.0.190:4700]
> :RMI.srvcls[0007:0008]
> :RMI.clicls[0007:0008]
> :RMI.closed[0007:0008]
>
>
>
> Thanks.
>


clarification of CD termination code

2010-07-28 Thread Bryan Talbot
I'm trying to figure out what _exactly_ the CD termination code means.  The
docs says:

 CD   The client unexpectedly aborted during data transfer. This can be
  caused by a browser crash, by an intermediate equipment between the
  client and haproxy which decided to actively break the connection,
  by network routing issues between the client and haproxy, or by a
  keep-alive session between the server and the client terminated first
  by the client.


Does this mean that clients MUST have not received some of the data or
could a client have received all of the data from the response?
What's an unexpected abortion vs a normal termination?

I have a client (windows using a MS xml library to make http requests)
which always ends up with CD-- terminations, but the software seems to
work properly otherwise.


-Bryan


Re: clarification of CD termination code

2010-08-04 Thread Bryan Talbot
In the tcpdump listed below, doesn't the next-to-last RST also include an
ACK of the data previously sent?  If that is the case, then the client has
received all of the data and ACK'd it but then rudely closed the TCP
connection without the normal FIN exchange.  Is my reading correct?


19:03:33.106842 IP 10.79.25.20.4266 > 10.79.6.10.80: S
2041799057:2041799057(0) win 65535 
19:03:33.106862 IP 10.79.6.10.80 > 10.79.25.20.4266: S
266508528:266508528(0) ack 2041799058 win 5840 
19:03:33.106945 IP 10.79.25.20.4266 > 10.79.6.10.80: . ack 1 win 65535
19:03:33.107045 IP 10.79.25.20.4266 > 10.79.6.10.80: P 1:269(268) ack 1 win
65535
19:03:33.107060 IP 10.79.6.10.80 > 10.79.25.20.4266: . ack 269 win 6432
19:03:33.134401 IP 10.79.6.10.80 > 10.79.25.20.4266: P 1:270(269) ack 269
win 6432
19:03:33.134442 IP 10.79.6.10.80 > 10.79.25.20.4266: F 270:270(0) ack 269
win 6432
19:03:33.134548 IP 10.79.25.20.4266 > 10.79.6.10.80: R 269:269(0) ack 270
win 0
19:03:33.134562 IP 10.79.25.20.4266 > 10.79.6.10.80: R
2041799326:2041799326(0) win 0


-Bryan



On Mon, Aug 2, 2010 at 10:45 PM, Willy Tarreau  wrote:

> On Wed, Jul 28, 2010 at 05:51:18PM -0700, Bryan Talbot wrote:
> > I'm trying to figure out what _exactly_ the CD termination code means.
>  The
> > docs says:
> >
> >  CD   The client unexpectedly aborted during data transfer. This can
> be
> >   caused by a browser crash, by an intermediate equipment between
> the
> >   client and haproxy which decided to actively break the
> connection,
> >   by network routing issues between the client and haproxy, or by
> a
> >   keep-alive session between the server and the client terminated
> first
> >   by the client.
> >
> >
> > Does this mean that clients MUST have not received some of the data or
> > could a client have received all of the data from the response?
>
> it means that the client closed the response socket before all data was
> consumed.
>
> > What's an unexpected abortion vs a normal termination?
>
> An unexpected abortion is when the system reports a socket error while
> there were still some data to send in the buffer. A normal termination
> is when there is no error and that all pending data were sent before
> the client closed.
>
> > I have a client (windows using a MS xml library to make http requests)
> > which always ends up with CD-- terminations, but the software seems to
> > work properly otherwise.
>
> It is possible that this client only needs something at the very
> beginning of the response and that it aborts connections once it
> gets what it needs. This is what happens with health checks too
> if too large objects are sent in response. From a protocol point
> of view this can be seen as dirty, but if the client does not need
> anything else, it results in cheaper network use because less data
> get transferred.
>
> If you're sure to see this for every request, it would be nice to
> get a capture of a full request/response from both sides to see
> what it looks like (use tcpdump -s0 to get full packets). I suspect
> that you'll see an RST packet coming from the client to haproxy
> while haproxy is sending data.
>
> Regards,
> Willy
>
>


Re: PD error code

2010-09-03 Thread Bryan Talbot
Section 8.5 of the doc (
http://haproxy.1wt.eu/download/1.3/doc/configuration.txt) says:

- On the first character, a code reporting the first event which caused the
session to terminate :


 P : the session was prematurely aborted by the proxy, because of a
connection limit enforcement, because a DENY filter was matched,
because of a security check which detected and blocked a dangerous
error in server response which might have caused information leak
(eg: cacheable cookie), or because the response was processed by
the proxy (redirect, stats, etc...).


- on the second character, the TCP or HTTP session state when it was closed :

D : the session was in the DATA phase.



On Thu, Sep 2, 2010 at 1:06 PM, Joe Williams  wrote:

>
> Anyone know what this one could be?
>
> -Joe
>
>
> On Sep 1, 2010, at 10:35 AM, Joe Williams wrote:
>
> >
> > I've seen a few "PD" error codes in my logs but don't see it mentioned in
> the docs. What does this flag stand for?
> >
> > Thanks.
> > -Joe
> >
> >
> > Name: Joseph A. Williams
> > Email: j...@joetify.com
> > Blog: http://www.joeandmotorboat.com/
> > Twitter: http://twitter.com/williamsjoe
> >
> >
>
> Name: Joseph A. Williams
> Email: j...@joetify.com
> Blog: http://www.joeandmotorboat.com/
> Twitter: http://twitter.com/williamsjoe
>
>
>


Re: Support for SSL

2010-11-19 Thread Bryan Talbot
Here's an interesting blog post by a Google engineer about how they
rolled out SSL for many of their services with very little additional
CPU and network overhead.  Specifically, he claims that

"On our production frontend machines, SSL/TLS accounts for less than
1% of the CPU load, less than 10KB of memory per connection and less
than 2% of network overhead."

http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html

-Bryan


On Fri, Nov 19, 2010 at 4:54 AM, Willy Tarreau  wrote:
>
> On Wed, Nov 17, 2010 at 09:46:05AM -0500, John Marrett wrote:
> > Bedis,
> >
> > > Cause using the cores to decrypt traffic would reduce drastically
> > > overall performance.
> > > Well, this is what we saw on our HTTP cache server (running CentOS) on
> > > 8 cores hardware: when enabling SSL, the performance were so bad that
> >
> > > So we kept our old Nortel vpn 3050 to handle the SSL traffic.
> >
> > I'm astonished to hear that you had these kinds of issues on modern
> > hardware. We stopped using dedicated SSL hardware quite some time ago.
>
> I'm not surprized at all. The issue generally lies in mixing high latency
> processing (eg: SSL) with low latency (eg: HTTP processing). When your
> CPUs are stuck for 200 microseconds processing an SSL connection, you
> can try to do whatever you want, all pending HTTP processing will be
> stuck that long, which will considerably limit the request rate.
>
> One of the solution sometimes is to dedicate some CPUs to slow processing
> and others to fast processing, but this is not always possible.
>
> Cheers,
> Willy
>
>



Re: disable-on-404 and tracking

2010-12-06 Thread Bryan Talbot
I worked around this issue by including "option httpchk" in the
backend but never using the "check" option on the servers that track
another server.  Those server lines do contain the "track" option.


backend be1
balance roundrobin
http-check disable-on-404
option httpchk HEAD /online.php HTTP/1.1\r\nHost:\ healthcheck
server 1.2.3.4 1.2.3.4:80 check

backend be2
balance roundrobin
http-check disable-on-404
option httpchk HEAD /online.php HTTP/1.1\r\nHost:\ healthcheck
server 1.2.3.4 1.2.3.4:80 track be1/1.2.3.4


-Bryan



On Mon, Dec 6, 2010 at 10:51 AM, Joe Williams  wrote:
> Just to add some info to this thread, I did some testing and I get some 
> combination of the following errors depending on where (default, backends, 
> etc) I have the disable-on-404 directive.
>
> config : 'disable-on-404' will be ignored for backend 'test' (requires 
> 'option httpchk').
> config : backend 'test', server 'test': unable to use joe/node001 for 
> tracing: disable-on-404 option inconsistency.
> config : 'disable-on-404' will be ignored for frontend 'http_proxy' (requires 
> 'option httpchk').
>
> I assume this is by design for some reason but certainly seems like a 
> desirable feature. Can anyone point me in the right direction regarding a 
> writing a patch to "fix" it?
>
> Thanks.
> -Joe
>
>
> On Dec 6, 2010, at 8:55 AM, Joe Williams wrote:
>
>> Anyone have any thoughts? Is it possible to use tracking and disable-on-404 
>> together?
>>
>> -Joe
>>
>>
>> On Dec 2, 2010, at 3:41 PM, Joe Williams wrote:
>>
>>>
>>> On Dec 2, 2010, at 2:28 PM, Krzysztof Olędzki wrote:
>>>
 On 2010-12-02 21:28, Joe Williams wrote:
>
> List,
>
> I am attempting to enable the disable-on-404 option on only the
> backends that other backends track. It seems that the secondary
> backends do not like this and error out saying it is "inconsistent"
> even if disable-on-404 is only enabled in the backend that they
> track. Is it possible to have disable-on-404 without httpchk in each
> backend?

 Yes, you need to enable disable-on-404 on both tracked and tracking
 backends.
>>>
>>> Doesn't that also mean that I have to enable httpchk on all those backends 
>>> as well?
>>>
>>> -Joe
>>>
>>>
>>> Name: Joseph A. Williams
>>> Email: j...@joetify.com
>>> Blog: http://www.joeandmotorboat.com/
>>> Twitter: http://twitter.com/williamsjoe
>>>
>>>
>>
>> Name: Joseph A. Williams
>> Email: j...@joetify.com
>> Blog: http://www.joeandmotorboat.com/
>> Twitter: http://twitter.com/williamsjoe
>>
>>
>
> Name: Joseph A. Williams
> Email: j...@joetify.com
> Blog: http://www.joeandmotorboat.com/
> Twitter: http://twitter.com/williamsjoe
>
>
>



Re: hot reconfiguration, how to?

2010-12-08 Thread Bryan Talbot
See the architecture doc section 4.3

http://haproxy.1wt.eu/download/1.3/doc/architecture.txt
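The mechanism described there is still the usual one: start a new process with -sf so the old process stops listening, finishes serving its active sessions, and then exits. The config and pid file paths below are assumptions; adjust them for your install:

```
haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
```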

-Bryan


On Wed, Dec 8, 2010 at 12:51 PM, Joshua N Pritikin wrote:

> I found:
>
>
> http://serverfault.com/questions/165883/is-there-a-way-to-add-more-backend-server-to-haproxy-without-restarting-haproxy
>  http://sysbible.org/2008/07/26/haproxy-hot-reconfiguration/
>
> However, these options are last documented in version 1.2:
>
>  http://haproxy.1wt.eu/download/1.2/doc/haproxy-en.txt
>
> A brief search did not uncover any similar documentation in newer
> versions:
>
>  http://haproxy.1wt.eu/download/1.3/doc/configuration.txt
>  http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
>
> What is the current approved way to do hot reconfiguration?
>
>


Re: HAProxy Cookie/Host Forwarding

2010-12-10 Thread Bryan Talbot
I do something similar using a config that is pretty much like what you've
shown.  What doesn't work about the config you've shown?

-Bryan



On Fri, Dec 10, 2010 at 9:05 AM, Anthony Saenz
wrote:

> Hi,
>
> Don't mean to bug but did anyone get a chance to possibly look at this and
> provide some assistance or does anyone know of alternative means to get this
> to work with cookie information?
>
> Thanks.
>
>
> On 12/8/10 3:43 PM, Anthony Saenz wrote:
>
>> Hey,
>>
>> I was wondering if anyone could be of any assistance? I'm trying to use
>> HAProxy to forward based on cookie and host. As it stands, I want HAProxy to
>> see if a cookie is set and if it is, forward to a development server and if
>> it isn't just push out to production. The main issue being, we have multiple
>> production servers and a few thousand domains, so hard coding everything
>> doesn't seem very efficient.
>>
>> Is there a way to have HAProxy forward the production request based on the
>> host the person is attempting to initially connect to without it being
>> static in the configuration file? Here's what I have so far...
>>
>> global
>>daemon
>>
>> defaults
>>mode http
>>timeout connect 1 # default 10 second time out if a backend is not
>> found
>>timeout client 30
>>timeout server 30
>>maxconn 6
>>retries 3
>>
>> frontend read_cookies
>>bind:80
>>
>>acl is_servermagic hdr_reg(Cookie) dev_magic=.*
>>
>>use_backend development if is_servermagic
>>
>>default_backend production
>>
>> backend development
>>modehttp
>>option  forwardfor
>>balance source
>>option  httpclose
>>server  dev1 192.168.1.100:80
>>
>> backend production
>>modehttp
>>option  forwardfor
>>balance source
>>option  httpclose
>>server  prod1 example.com:80
>>
>>
>


Re: HAProxy Cookie/Host Forwarding

2010-12-10 Thread Bryan Talbot
I thought you said you have "multiple" servers but thousands of domains.  In
that case, just list the servers by IP address and the proxy will pass the
Host header value as supplied by the client.  You should probably avoid
using names resolved using DNS anyway so that a DNS hiccup doesn't
effectively disable your proxy.


backend production
    mode    http
    option  forwardfor
    balance source
    option  httpclose
    server  prod1 10.10.10.11:80
    server  prod2 10.10.10.12:80
    server  prod3 10.10.10.13:80



On Fri, Dec 10, 2010 at 10:25 AM, Anthony Saenz
wrote:

> Well, it's not that it doesn't work but we have approximately 1200 domains
> (that we actively develop on) and it doesn't seem very logical to hardcode
> each domain in. I was wondering if there was a workaround for the lines
> below so instead of using example.com:80 using a variable such as $HOST:80
> (whichever host the end user is attempting to connect to).
>
>
> backend production
>modehttp
>option  forwardfor
>balance source
>option  httpclose
>server  prod1 example.com:80
>
> On Fri, Dec 10, 2010 at 10:18 AM, Bryan Talbot wrote:
>
>> I do something similar using a config that is pretty much like what you've
>> shown.  What doesn't work about the config you've shown?
>>
>> -Bryan
>>
>>
>>
>>
>> On Fri, Dec 10, 2010 at 9:05 AM, Anthony Saenz <
>> antho...@consumertrack.com> wrote:
>>
>>> Hi,
>>>
>>> Don't mean to bug but did anyone get a chance to possibly look at this
>>> and provide some assistance or does anyone know of alternative means to get
>>> this to work with cookie information?
>>>
>>> Thanks.
>>>
>>>
>>> On 12/8/10 3:43 PM, Anthony Saenz wrote:
>>>
>>>> Hey,
>>>>
>>>> I was wondering if anyone could be of any assistance? I'm trying to use
>>>> HAProxy to forward based on cookie and host. As it stands, I want HAProxy 
>>>> to
>>>> see if a cookie is set and if it is, forward to a development server and if
>>>> it isn't just push out to production. The main issue being, we have 
>>>> multiple
>>>> production servers and a few thousand domains, so hard coding everything
>>>> doesn't seem very efficient.
>>>>
>>>> Is there a way to have HAProxy forward the production request based on
>>>> the host the person is attempting to initially connect to without it being
>>>> static in the configuration file? Here's what I have so far...
>>>>
>>>> global
>>>>daemon
>>>>
>>>> defaults
>>>>mode http
>>>>timeout connect 1 # default 10 second time out if a backend is
>>>> not found
>>>>timeout client 30
>>>>timeout server 30
>>>>maxconn 6
>>>>retries 3
>>>>
>>>> frontend read_cookies
>>>>bind:80
>>>>
>>>>acl is_servermagic hdr_reg(Cookie) dev_magic=.*
>>>>
>>>>use_backend development if is_servermagic
>>>>
>>>>default_backend production
>>>>
>>>> backend development
>>>>modehttp
>>>>option  forwardfor
>>>>balance source
>>>>option  httpclose
>>>>server  dev1 192.168.1.100:80
>>>>
>>>> backend production
>>>>modehttp
>>>>option  forwardfor
>>>>balance source
>>>>option  httpclose
>>>>server  prod1 example.com:80
>>>>
>>>>
>>>
>>
>


Re: SSLTunnel + HAProxy: advice about hardware minimum requirements

2010-12-21 Thread Bryan Talbot
If you're concerned about the SSL handling of your setup, the openssl
command line tool includes some simple tools that can run rudimentary
tests and benchmarks.

One simple command to make 30 seconds of serial requests using both
new and resumed (if you support it) SSL sessions would be:

$> openssl s_time -connect remote.host:443 -www /index.html

This will give you at least some idea of how many serial (not
parallel) requests per second your system can accept.

Apache ab or another http load generators can probably be used to do
better concurrent testing.

-Bryan



On Mon, Dec 20, 2010 at 3:01 PM, Gabriel Sosa  wrote:
> Hello,
>
> Due some requirements we are facing the need to get the ip of clients
> that connect to our system through SSL connections. Our proxy box its
> an small dedicated server that has been running only HAproxy at the
> moment and we are concern about if the server would be able to run
> SSLtunnel + HAproxy.
>
> At the moment we are in about few hundred (about 400 or so)  of
> concurrent connections.
>
> here is the information of the server
>
> 2x Intel(R) Pentium(R) D CPU 3.00GHz
> 2GB of RAM
>
> I know this info isn't too accurate at the moment but do you guys
> think this hardware will be able to face the load for the moment?
>
> Thanks in advance
>
> --
> Gabriel Sosa
> Si buscas resultados distintos, no hagas siempre lo mismo. - Einstein
>
>



logging to unix socket

2010-12-29 Thread Bryan Talbot
I'm trying to configure haproxy to log to a unix socket but keep getting
alerts to stderr when the proxy is started, restarted or reloaded.  Other
than these alert messages, logging to the socket seems to be working just
fine.  The message is repeated about 20 times.

What am I doing wrong?

host:~# ls -alF /var/lib/haproxy/dev/log /dev/log
srw-rw-rw- 1 root root 0 Dec 29 20:01 /dev/log=
srw-rw-rw- 1 root root 0 Dec 29 20:01 /var/lib/haproxy/dev/log=


[ALERT] 362/195509 (22897) : sendto logger #0 failed: Resource temporarily
unavailable (errno=11)



global
log /dev/log local0
maxconn 5000
spread-checks 4
chroot /var/lib/haproxy
stats socket /var/lib/haproxy/stats
user haproxy
group haproxy
daemon

defaults
log global
modehttp
option  httplog


-Bryan


Re: logging to unix socket

2010-12-30 Thread Bryan Talbot
Yeah, I forgot to mention that I am running haproxy 1.4.10 on CentOS 5.5
with a 2.6.18-194.26.1.el5 kernel.

host:~# cat /proc/sys/net/unix/max_dgram_qlen
10

The system is idle and not handling any traffic.

I set max_dgram_qlen to 1000 but the problem persists.

host:~# cat /proc/sys/net/unix/max_dgram_qlen
1000

-Bryan


On Thu, Dec 30, 2010 at 7:31 AM, Willy Tarreau  wrote:

> Hi Bryan,
>
> On Wed, Dec 29, 2010 at 05:09:12PM -0800, Bryan Talbot wrote:
> > I'm trying to configure haproxy to log to a unix socket but keep getting
> > alerts to stderr when the proxy is started, restarted or reloaded.  Other
> > than these alert messages, logging to the socket seems to be working just
> > fine.  The message is repeated about 20 times.
> >
> > What am I doing wrong?
>
> it's possible that nothing is wrong. There are two types of unix sockets,
> stream and dgram. Haproxy uses dgram (by analogy to the udp socket). Half
> of the syslog daemons use stream, half of them use dgram. It's possible
> that your syslog is set to use streams.
>
> What is possible too is that the syslog daemon is too slow to process logs
> (eg: it does synchronous writes), and that the small backlog on the unix
> dgram socket quickly fills up, causing some losses. I *think* that the
> setting below on linux can be used to change that, though I have never
> tested :
>
> $ cat /proc/sys/net/unix/max_dgram_qlen
> 10
>
> Willy
>
>
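
The stream-vs-dgram distinction Willy describes can be seen with a few
lines of plain Python: haproxy sends its logs as datagrams, so the syslog
side must have a SOCK_DGRAM listener bound on the socket path. A minimal
stand-in for such a dgram syslog (the path and message below are just
illustrative):

```python
import os
import socket
import tempfile

# A datagram (SOCK_DGRAM) unix socket -- the type haproxy sends logs to.
# A syslogd bound with SOCK_STREAM on the same path would not accept
# haproxy's datagrams at all.
path = os.path.join(tempfile.mkdtemp(), "log")

receiver = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
receiver.bind(path)

sender = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
# Shaped like a syslog line haproxy would emit (facility local0, severity info).
sender.sendto(b"<134>haproxy[123]: test message", path)

data = receiver.recvfrom(1024)[0]
print(data.decode())

sender.close()
receiver.close()
os.unlink(path)
```

If the receiver were created with SOCK_STREAM instead, the sendto() above
would fail with a protocol error, which matches the symptom of a syslogd
listening in the wrong mode.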


Re: logging to unix socket

2010-12-30 Thread Bryan Talbot
Oh, and the syslogger is sysklogd which is the distribution default I
believe.

How can I tell what mode the socket uses?  If it used the wrong mode,
wouldn't logging not work in general because logging messages seem to be
received properly otherwise.

-Bryan


On Thu, Dec 30, 2010 at 9:43 AM, Bryan Talbot wrote:

> Yeah, I forgot to mention that I am running haproxy 1.4.10 on CentOS 5.5
> with a 2.6.18-194.26.1.el5 kernel.
>
> host:~# cat /proc/sys/net/unix/max_dgram_qlen
> 10
>
> The system is idle and not handling any traffic.
>
> I set max_dgram_qlen to 1000 but the problem persists.
>
> host:~# cat /proc/sys/net/unix/max_dgram_qlen
> 1000
>
> -Bryan
>
>
> On Thu, Dec 30, 2010 at 7:31 AM, Willy Tarreau  wrote:
>
>> Hi Bryan,
>>
>> On Wed, Dec 29, 2010 at 05:09:12PM -0800, Bryan Talbot wrote:
>> > I'm trying to configure haproxy to log to a unix socket but keep getting
>> > alerts to stderr when the proxy is started, restarted or reloaded.
>>  Other
>> > than these alert messages, logging to the socket seems to be working
>> just
>> > fine.  The message is repeated about 20 times.
>> >
>> > What am I doing wrong?
>>
>> it's possible that nothing is wrong. There are two types of unix sockets,
>> stream and dgram. Haproxy uses dgram (by analogy to the udp socket). Half
>> of the syslog daemons use stream, half of them use dgram. It's possible
>> that your syslog is set to use streams.
>>
>> What is possible too is that the syslog daemon is too slow to process logs
>> (eg: it does synchronous writes), and that the small backlog on the unix
>> dgram socket quickly fills up, causing some losses. I *think* that the
>> setting below on linux can be used to change that, though I have never
>> tested :
>>
>> $ cat /proc/sys/net/unix/max_dgram_qlen
>> 10
>>
>> Willy
>>
>>
>


Re: logging to unix socket

2010-12-30 Thread Bryan Talbot
Ahh ok, restarting syslog fixed the issue.  I didn't realize that
the max_dgram_qlen setting would only take effect when the socket was bound.
 Our site is fairly busy but not a load for haproxy: we get several tens of
millions of hits a day.  What is a reasonable setting for max_dgram_qlen?

As far as synchronous writes goes, I've been using async for normal logs but
write errors from 'log-separate-errors' in sync mode.

Thank you for helping get my restarts clean with no more scary looking ALERT
messages!


-Bryan



On Thu, Dec 30, 2010 at 10:03 AM, Willy Tarreau  wrote:

> On Thu, Dec 30, 2010 at 09:45:44AM -0800, Bryan Talbot wrote:
> > Oh, and the syslogger is sysklogd which is the distribution default I
> > believe.
>
> OK, sysklogd does synchronous writes by default. You must append a "-"
> (minus sign) in front of the file names to make it use asynchronous
> writes.
>
> > How can I tell what mode the socket uses?  If it used the wrong mode,
> > wouldn't logging not work in general because logging messages seem to be
> > received properly otherwise.
>
> If even one message is received, then it's working in the proper mode.
> You should restart it after you change the /proc setting BTW, since those
> settings are used only when binding the socket.
>
> But I'm pretty sure that with the async write, your problem will be gone.
>
> Willy
>
>
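
For the archive, the sysklogd change Willy refers to is a one-character
edit in /etc/syslog.conf: prefix the destination file name with a minus
sign to make writes asynchronous. A sketch (the facility and file name
here are examples, not from the original thread):

```
# /etc/syslog.conf -- the "-" before the path disables the fsync
# that sysklogd otherwise performs after every logged line
local0.*        -/var/log/haproxy.log
```

Remember to restart syslogd afterwards so that both this change and any
new max_dgram_qlen value take effect when the socket is rebound.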


Re: precedence of if conditions (again)

2011-01-07 Thread Bryan Talbot
Doesn't this work?

... if A B1 or A B2 or A B3 or A B4

-Bryan


On Fri, Jan 7, 2011 at 7:16 AM, Hank A. Paulson <
h...@spamproof.nospammail.net> wrote:

> On 6/30/10 9:50 PM, Willy Tarreau wrote:
>
>> On Wed, Jun 30, 2010 at 08:53:19PM -0700, Bryan Talbot wrote:
>>
>>> See section 7.7: AND is implicit.
>>>
>>>
>>> 7.7. Using ACLs to form conditions
>>> --
>>>
>>> Some actions are only performed upon a valid condition. A condition is a
>>> combination of ACLs with operators. 3 operators are supported :
>>>
>>>   - AND (implicit)
>>>   - OR  (explicit with the "or" keyword or the "||" operator)
>>>   - Negation with the exclamation mark ("!")
>>>
>>
>> I'm realizing that that's not enough to solve Hank's question, because
>> the precedence is not explained in the doc (it was so obvious to me that
>> it was like in other languages that it's not explained), so :
>>
>>reqirep blah if a b or c
>>
>> is evaluated like this :
>>
>>(a and b) or c
>>
>> and :
>>
>>reqirep blah if a b or c d
>>
>> is evaluated like this :
>>
>>(a and b) or (c and d)
>>
>> Regards,
>> Willy
>>
>
> I have a more complex grouping and I am still not sure how to create it.
> I have one required condition A and one of 4 other conditions B1-B4 so I
> need
> something like:
>
> if A and (B1 or B2 or B3 or B4)
>
> is there a way to do that?
>
>
>
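
Bryan's expansion works because haproxy's implicit AND distributes over
"or" exactly as in Willy's quoted examples. A quick truth-table check
(plain Python, nothing haproxy-specific) confirms the two forms are
equivalent:

```python
from itertools import product

# haproxy parses "if A B1 or A B2 or A B3 or A B4" as
# (A and B1) or (A and B2) or (A and B3) or (A and B4),
# which is equivalent to the desired A and (B1 or B2 or B3 or B4).
def expanded(a, b1, b2, b3, b4):
    return (a and b1) or (a and b2) or (a and b3) or (a and b4)

def desired(a, b1, b2, b3, b4):
    return a and (b1 or b2 or b3 or b4)

# Check every one of the 32 possible truth assignments.
assert all(expanded(*v) == desired(*v)
           for v in product([True, False], repeat=5))
print("equivalent for all 32 assignments")
```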


Re: Acl url_sub doesn't seems to match

2011-01-12 Thread Bryan Talbot
I think the problem is that url_dom operates on the URL found in the request
line but in your case, that URL is a relative URI (/) which does not contain
a host name.

I think if you use hdr_dom(Host) it'll do what you want.
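
A minimal sketch of the corrected frontend, reusing the backends from the
quoted config (my assumption is that the Host header is the only thing
distinguishing the sites):

```
frontend web :80
    option forwardfor
    # hdr_dom matches dot-delimited components of the Host header;
    # url_dom looks at the request URL, which is just "/path" here.
    acl acl_aspc hdr_dom(Host) -i autos-prestige-collection.com
    use_backend aspc if acl_aspc
    default_backend webfarm
```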

-Bryan


On Wed, Jan 12, 2011 at 8:39 AM, Contact Dowhile  wrote:

> Hello,
>
> i have a pretty simple HAProxy configuration, but it can't match an
> super-simple acl..
>
> Here is the config
> global
>   daemon
>   user haproxy
>   group haproxy
>   maxconn 5000
>
> defaults
>   mode http
>   maxconn  4950
>   retries  2
>   timeout client 60s   # Client and server timeout must match the longest
>   timeout server 60s   # time we may wait for a response from the server.
>   timeout queue  60s   # Don't queue requests too long if saturated.
>   timeout connect 4s   # There's no reason to change this one.
>   timeout http-request 5s  # A complete request may never take that
> long.
>
> frontend web :80
>option forwardfor
>acl acl_aspc url_dom autos-prestige-collection
>use_backend aspc if acl_aspc
>default_backend webfarm
>
> backend aspc
>balance source
>server webC 10.1.0.26:80 check
>
> backend webfarm
>balance source
>server webA 10.1.0.20:80 check
>
> What i want is going to webfarm for every website, and going to aspc for
> http://*.autos-prestige-collection.com/*
> This is because this site is located on a windows iis server...
> If i'm in debug mode here is what happen
>
> myhost etc # haproxy -d -f haproxy.cfg
> Available polling systems :
> sepoll : pref=400,  test result OK
>  epoll : pref=300,  test result OK
>   poll : pref=200,  test result OK
> select : pref=150,  test result OK
> Total: 4 (4 usable), will use sepoll.
> Using sepoll() as the polling mechanism.
> :web.accept(0004)=0005 from [xx.xx.xx.xx:27615]
> :web.clireq[0005:]: GET / HTTP/1.1
> :web.clihdr[0005:]: Host: www.autos-prestige-collection.com
> :web.clihdr[0005:]: User-Agent: Mozilla/5.0 (X11; U; Linux
> x86_64; en-US; rv:1.9.2.12) Gecko/20101027 Firefox/3.6.12
> :web.clihdr[0005:]: Accept:
> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> :web.clihdr[0005:]: Accept-Language:
> fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
> :web.clihdr[0005:]: Accept-Encoding: gzip,deflate
> :web.clihdr[0005:]: Accept-Charset:
> ISO-8859-1,utf-8;q=0.7,*;q=0.7
> :web.clihdr[0005:]: Keep-Alive: 115
> :web.clihdr[0005:]: Connection: keep-alive
> :webfarm.srvrep[0005:0006]: HTTP/1.1 200 OK
> :webfarm.srvhdr[0005:0006]: Date: Wed, 12 Jan 2011 16:31:13 GMT
> :webfarm.srvhdr[0005:0006]: Server: Apache/2.2.17 (Fedora)
> :webfarm.srvhdr[0005:0006]: X-Powered-By: PHP/5.3.4
> :webfarm.srvhdr[0005:0006]: Content-Length: 3546
> :webfarm.srvhdr[0005:0006]: Connection: close
> :webfarm.srvhdr[0005:0006]: Content-Type: text/html; charset=UTF-8
> :webfarm.srvcls[0005:0006]
> :webfarm.clicls[0005:0006]
> :webfarm.closed[0005:0006]
>
> We see that "Host: www.autos-prestige-collection.com"  (so it should match
> my acl isn't it ???) but we see that haproxy redirected this query to
> "webfarm :webfarm.srvhdr[0005:0006]: Server: Apache/2.2.17 (Fedora)"
>
> My iis server is going ok, if i put this in
> frontend web :80
>default_backend aspc
>
> i'm redirected to my iis server (but then ALL my websites are redirected to
> the iis which i don't want...)
>
> I tried with url_dom and url_sub, nothing changes, it never catch the acl
> rule...
>
> I'm running haproxy 1.4.10 on gentoo.
>
> Thanks for reading
> Guillaume
>
>
>


Re: reqrep only in case no server available?

2011-02-03 Thread Bryan Talbot
On Thu, Feb 3, 2011 at 5:44 AM, Raphael Bauduin  wrote:
>>  server mainappserver 10.12.13.127:80 weight 16 maxconn 16 check inter 10s
>>  acl maintenance_mode nbsrv eq 0
>>  reqrep ^([^\ ]*)\ /([^\ ]*)\ (.*)     \1\ /\ \3 if maintenance_mode
>>  server static-backup 10.12.13.14:2001 backup
>>
>> Raph
>
> I sent this mail too fast: the reqrep is applied even when the acl is
> not verified (using ubuntu package v 1.3.18-1).
>


haproxy 1.3 doesn't support the "reqrep pattern [if | unless] acl"
form.  That's a 1.4 feature.  Unfortunately, the 1.3 configuration
parser doesn't complain about it either so it just silently accepts
the reqrep and applies it every time.


reqrep  <search> <string>
reqirep <search> <string>  (ignore case)
  Replace a regular expression with a string in an HTTP request line


-Bryan



does http-server-close close idle client sockets when needed?

2011-02-14 Thread Bryan Talbot
I can't find in the documentation anything about how haproxy handles
client keep-alive (using http-server-close) when the maximum number of
client connections has been reached.

If there are idle client connections, will the proxy close them to
allow new connections to be established?  Or, will new connections be
refused while keeping idle connections around?

It seems like it would be best to begin closing idle connections when
the connection limit is approaching thus degrading to non-keep-alive
behavior under heavy load.

What does haproxy 1.4 do?

-Bryan



balance url_param with POST

2011-02-24 Thread Bryan Talbot
I'm not sure I understand how the url_param option for balance is
supposed to work.  From reading the description, it sounded like it
might work for both GET and POST methods when either method includes a
query string section in the URI.  However, that doesn't seem to be
working as I expected with 1.4.10.

listen foo
   bind *:80
   balance url_param id
   server one a.b.c.d ...
   server two a.b.c.e ...


GET /foo/bar?id=1--> works fine and always sends traffic to the same server
POST /foo/bar?id=1  --> uses round robin


The docs say:

  url_param   The URL parameter specified in argument will be looked up in
  the query string of each HTTP GET request.

  If the modifier "check_post" is used, then an HTTP POST
  request entity will be searched for the parameter argument,
  when the question mark indicating a query string ('?') is not
  present in the URL.  ...


which confuses me on whether or not POST query string params are
searched or not.  The first statement says it only works with GET
methods.  The second section says it can work with POST entity content
only when a query string is not present.

I would like to use url_param balancing for POST query string
parameters.  How can I do that?

-Bryan



Re: balance url_param with POST

2011-02-25 Thread Bryan Talbot
In general, I need to load balance based on a url param for any
standard HTTP method, especially the RESTful ones, not just GET.  I
need it to work for GET, HEAD, PUT, DELETE, POST at the very least. It
would be great to work with custom methods like PURGE as well as that
is commonly used with proxy-caches.

Why limit balance url_param to only work with GET?  Why not allow it
to work with any method that contains a URI?

-Bryan



On Thu, Feb 24, 2011 at 11:12 AM, Bryan Talbot  wrote:
> I'm not sure I understand how the url_param option for balance is
> supposed to work.  From reading the description, it sounded like it
> might work for both GET and POST methods when either method includes a
> query string section in the URI.  However, that doesn't seem to be
> working as I expected with 1.4.10.
>
> listen foo
>   bind *:80
>   balance url_param id
>   server one a.b.c.d ...
>   server two a.b.c.e ...
>
>
> GET /foo/bar?id=1    --> works fine and always sends traffic to the same 
> server
> POST /foo/bar?id=1  --> uses round robin
>
>
> The docs say:
>
>  url_param   The URL parameter specified in argument will be looked up in
>                  the query string of each HTTP GET request.
>
>                  If the modifier "check_post" is used, then an HTTP POST
>                  request entity will be searched for the parameter argument,
>                  when the question mark indicating a query string ('?') is not
>                  present in the URL.  ...
>
>
> which confuses me on whether or not POST query string params are
> searched or not.  The first statement says it only works with GET
> methods.  The second section says it can work with POST entity content
> only when a query string is not present.
>
> I would like to use url_param balancing for POST query string
> parameters.  How can I do that?
>
> -Bryan
>



Re: balance url_param with POST

2011-02-25 Thread Bryan Talbot
Maybe this is the problem?  Line 548 of backend.c from 1.4.11:


if (s->txn.meth == HTTP_METH_POST &&
    memchr(s->txn.req.sol + s->txn.req.sl.rq.u, '&',
           s->txn.req.sl.rq.u_l) == NULL)
        s->srv = get_server_ph_post(s);
else
        s->srv = get_server_ph(s->be,
                               s->txn.req.sol + s->txn.req.sl.rq.u,
                               s->txn.req.sl.rq.u_l);


It looks to me like when the method is a POST, that the url is
searched for a '&' character and if it's not found then the post body
might be checked.  Of course, it's quite likely that there is just one
query string parameter so the uri would not contain a '&'.  I believe
this should check for the existence of a '?' instead.

If this is the case, then I think there is a documentation bug as well
since the first line for url_param claims it only works for GET.
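
The dispatch logic above can be mimicked outside haproxy to show the
effect. This Python sketch mirrors the released C check against the
proposed one (the method names and URIs are just test inputs, not from
the haproxy sources):

```python
def searches_post_body(method, uri):
    """Mirror of the 1.4.11 check: a POST whose URI contains no '&'
    falls through to the body search (get_server_ph_post)."""
    return method == "POST" and "&" not in uri

def searches_post_body_fixed(method, uri):
    """Proposed fix: only search the body when there is no query
    string at all, i.e. no '?' in the URI."""
    return method == "POST" and "?" not in uri

# A POST with a single query-string parameter has a '?' but no '&':
# the released code skips its query string, the fixed check uses it.
print(searches_post_body("POST", "/foo/bar?id=1"))        # True  (bug)
print(searches_post_body_fixed("POST", "/foo/bar?id=1"))  # False (fixed)
```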


-Bryan



On Fri, Feb 25, 2011 at 12:46 PM, Bryan Talbot  wrote:
> In general, I need to load balance based on a url param for any
> standard HTTP method, especially the RESTful ones, not just GET.  I
> need it to work for GET, HEAD, PUT, DELETE, POST at the very least. It
> would be great to work with custom methods like PURGE as well as that
> is commonly used with proxy-caches.
>
> Why limit balance url_param to only work with GET?  Why not allow it
> to work with any method that contains a URI?
>
> -Bryan
>
>
>
> On Thu, Feb 24, 2011 at 11:12 AM, Bryan Talbot  wrote:
>> I'm not sure I understand how the url_param option for balance is
>> supposed to work.  From reading the description, it sounded like it
>> might work for both GET and POST methods when either method includes a
>> query string section in the URI.  However, that doesn't seem to be
>> working as I expected with 1.4.10.
>>
>> listen foo
>>   bind: *.80
>>   balance url_param id
>>   server one a.b.c.d ...
>>   server two a.b.c.e ...
>>
>>
>> GET /foo/bar?id=1    --> works fine and always sends traffic to the same 
>> server
>> POST /foo/bar?id=1  --> uses round robin
>>
>>
>> The docs say:
>>
>>  url_param   The URL parameter specified in argument will be looked up in
>>                  the query string of each HTTP GET request.
>>
>>                  If the modifier "check_post" is used, then an HTTP POST
>>                  request entity will be searched for the parameter argument,
>>                  when the question mark indicating a query string ('?') is 
>> not
>>                  present in the URL.  ...
>>
>>
>> which confuses me on whether or not POST query string params are
>> searched or not.  The first statement says it only works with GET
>> methods.  The second section says it can work with POST entity content
>> only when a query string is not present.
>>
>> I would like to use url_param balancing for POST query string
>> parameters.  How can I do that?
>>
>> -Bryan
>>
>



Re: balance url_param with POST

2011-02-28 Thread Bryan Talbot
I'd like to find a work-around to this issue until it can be fixed.
Is it possible to add a dummy query string parameter in a rewrite rule
which would then cause the balance url_param to work with a POST?

I'm not seeing how to use reqrep to alter a POST uri by appending a
'&a=1' parameter to the end since there is no support for substitution
groups.  Any pointers?

-Bryan


On Fri, Feb 25, 2011 at 11:57 PM, Willy Tarreau  wrote:
> Hi Bryan,
>
> On Fri, Feb 25, 2011 at 11:40:00PM -0800, Bryan Talbot wrote:
>> Maybe this is the problem?  Line 548 of backend.c from 1.4.11:
>>
>>
>>                               if (s->txn.meth == HTTP_METH_POST &&
>>                                   memchr(s->txn.req.sol + s->txn.req.sl.rq.u, '&',
>>                                          s->txn.req.sl.rq.u_l) == NULL)
>>                                       s->srv = get_server_ph_post(s);
>>                               else
>>                                       s->srv = get_server_ph(s->be,
>>                                                              s->txn.req.sol + s->txn.req.sl.rq.u,
>>                                                              s->txn.req.sl.rq.u_l);
>>
>>
>> It looks to me like when the method is a POST, that the url is
>> searched for a '&' character and if it's not found then the post body
>> might be checked.  Of course, it's quite likely that there is just one
>> query string parameter so the uri would not contain a '&'.  I believe
>> this should check for the existence of a '?' instead.
>>
>> If this is the case, then I think there is a documentation bug as well
>> since the first line for url_param claims it only works for GET.
>
> I agree with all your points. Looks like this needs fixing. Fortunately
> I have not released 1.4.12 yet ;-)
>
> And yes, the reference to GET in the doc really meant "everything but POST".
> I think we could improve that to look for the URI first, then switch to the
> body if there is a content-length or transfer-encoding. That would be a lot
> cleaner and would not rely anymore on the method.
>
> Regards,
> Willy
>
>



Re: balance url_param with POST

2011-03-01 Thread Bryan Talbot
>> I'm not seeing how to use reqrep to alter a POST uri by appending a
>> '&a=1' parameter to the end since there is no support for substitution
>> groups.  Any pointers?
>
> We can't modify the contents of a POST request but we can indeed alter
> the URI. And yes it does support substitution groups. For instance, you
> could duplicate the only url param you currently have, eg something
> approximately like this :
>
>     reqrep ^(.*)\?([^&]*)$ \1?\2&\2
>


You're right of course.  I don't know why I was thinking that I
couldn't use substitution groups.  Thanks for the pointer and this
will work for me as a work-around until a proper fix can be made.  I
only need to alter the request line and not POST content.
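
The quoted rewrite can be sanity-checked offline with an equivalent
regex; Python's re behaves like haproxy's matcher for this simple
pattern (for illustration it is applied to just the URI portion of the
request line, using the example URI from earlier in the thread):

```python
import re

# Duplicate the sole query-string parameter so that the URI contains
# a '&' and "balance url_param" takes the query-string path for POSTs.
uri = "/foo/bar?id=1"
rewritten = re.sub(r"^(.*)\?([^&]*)$", r"\1?\2&\2", uri)
print(rewritten)  # /foo/bar?id=1&id=1
```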

-Bryan



admin socket 1.4.x crash report

2011-03-03 Thread Bryan Talbot
I found a way that causes the 1.4.10 (and probably 1.4.11) releases to
crash with a segfault.  The message in /var/log/messages is

Mar  3 12:44:34 host kernel: haproxy[16392]: segfault at
 rip  rsp 7fff7402a9d8 error 14



host:~$ /usr/sbin/haproxy -vv
HA-Proxy version 1.4.10 2010/11/28
Copyright 2000-2010 Willy Tarreau 

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_TPROXY=1 USE_REGPARM=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes

Available polling systems :
 sepoll : pref=400,  test result OK
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 4 (4 usable), will use sepoll.



The segfault occurs if a server in disabled backend is put into
maintenance mode using the socket.

For example:

backend foo
disabled
server 1.2.3.4 1.2.3.4
server 1.2.3.5 1.2.3.5


$> echo "disable server foo/1.2.3.4" | socat stdio /var/lib/haproxy/stats


Unfortunately I had a script that would disable a particular server
for all backends the server was in.  The script didn't check that a
backend wasn't already disabled and so crashed a production server.
I've been able to reproduce this on other instances so I don't think
it was a one-time coincidence.


-Bryan



Re: admin socket 1.4.x crash report

2011-03-04 Thread Bryan Talbot
Great, thanks for the quick fix Cyril and for getting it into 1.4.12 Willy.

-Bryan


On Thu, Mar 3, 2011 at 10:23 PM, Willy Tarreau  wrote:
> Hi Guys,
>
> On Thu, Mar 03, 2011 at 08:09:57PM +0100, Cyril Bonté wrote:
>> Hi Bryan,
>>
>> Le jeudi 3 mars 2011 19:05:38, Bryan Talbot a écrit :
>> > I found a way that causes the 1.4.10 (and probably 1.4.11) releases to
>> > crash with a segfault.  The message in /var/log/messages is
>> >
>> > Mar  3 12:44:34 host kernel: haproxy[16392]: segfault at
>> >  rip  rsp 7fff7402a9d8 error 14
>>
>> Reproduced here, this is also the case for "set weight".
>> It happens when px->lbprm is null (it's the case when a proxy is disabled).
>>
>> We should check the proxy state and capabilities before trying to change a
>> server state.
>>
>> Give me some time to prepare a patch, if no one is already working on it.
>
> Thank you very much ! I was about to release 1.4.12 yesterday but fortunately
> I was to tired for that. I'll merge both of your fixes before the release !
>
> Cheers!
> Willy
>
>



Re: How does http_find_header() work?

2011-03-31 Thread Bryan Talbot
This would be useful, but having a format similar to what's currently
used for forwardfor would be nice:

option uniqueid [{if | unless} <condition>] [ header <name> ]

I would also like to be sure that any incoming values for the header
could be stripped (using reqidel) and still have the new one added
properly.

-Bryan


On Wed, Mar 30, 2011 at 7:30 PM, Roy Smith  wrote:
> Willy,
>
> This turned out to be surprisingly straight-forward.  Patch attached (against 
> the 1.4.11 sources).
>
> To enable generation of the X-Unique-Id headers, you add "unique-id" to a 
> listen stanza in the config file.  This doesn't make any sense unless you're 
> in http mode (although my code doesn't check for that, which could reasonably 
> be considered a bug).  What this does is add a header that looks like:
>
> X-Unique-Id: CB0A6819.4B7D.4D93DFDB.C69B.10
>
> to each incoming request.  This gets done before the header capture 
> processing happens, so you can use the existing "capture request header" to 
> log the newly added headers.  There's nothing magic about the format of the 
> Id code.  In the current version, it's just a mashup of the hostid, haproxy 
> pid, a timestamp, and a sequence number.  The sequence numbers count up to 
> 1000, and then the leading part is regenerated.  I'm sure there's better 
> schemes that could be used.
>
> Here's a sample config stanza:
>
> listen test-nodes 0.0.0.0:19199
>       mode http
>       option httplog
>       balance leastconn
>       capture request header X-Unique-Id len 64
>       unique-id
>       server localhost localhost:9199 maxconn 8 weight 10 check inter 60s 
> fastinter 60s rise 2
>
> If there is already a X-Unique-Id header on the incoming request, it is left 
> untouched.
>
> A little documentation:
>
> We've got (a probably very typical) web application which consists of many 
> moving parts mashed together.  In our case, it's an haproxy front end, an 
> nginx layer (which does SSL conversion and some static file serving), 
> Apache/PHP for the main application logic, and a number of ancillary 
> processes which the PHP code talks to over HTTP (possibly with more haproxies 
> in the middle).  Plus mongodb.  Each of these moving parts generates a log 
> file, but it's near impossible to correlate entries across the various logs.
>
> To fix the problem, we're going to use haproxy to assign every incoming 
> request a unique id.  All the various bits and pieces will log that id in 
> their own log files, and pass it along in the HTTP requests they make to 
> other services, which in turn will log it.  We're not yet sure how to deal 
> with mongodb, but even if we can't get it to log our ids, we'll still have a 
> very powerful tool for looking at overall performance through the entire 
> application suite.
>
> Thanks so much for the assistance you provided, not to mention making haproxy 
> available in the first place.  Is there any possibility you could pick this 
> up and integrate it into a future version of haproxy?  Right now, we're 
> maintaining this in a private fork, but I'd prefer not to have to do that.  I 
> suspect this may also be useful for other people.  If there's any 
> modifications I could make which would help you, please let me know.
>
>
>



Re: Question concerning "option forwardfor" and HTTP keep-alive

2011-08-04 Thread Bryan Talbot
On Thu, Aug 4, 2011 at 8:55 AM, Guillaume Bourque <
guillaume.bour...@gmail.com> wrote:

> defaults
> mode    http
> log global
> option  dontlognull
> # option  httpclose           # puts the client IP in
> every line of the access logs
> option  http-server-close     # puts the client IP in
> the first line of the access logs
> option  httplog
> option  log-health-checks
> option  redispatch
> option  forwardfor  except 10.222.0.52
> option  forwardfor  except 10.222.0.53
> option  forwardfor  except 10.222.0.58  # ip of
> haproxy and stunnel box
>
>
>
>

Does having multiple forwardfor statements like this actually work so that
the x-forwarded-for header isn't added if the connection comes from any of
those hosts (networks)?  I had assumed that if any of the "option
forwardfor" rules matched that the header would be added.

-Bryan


X-Forwarded-For contortions

2011-08-12 Thread Bryan Talbot
In haproxy 1.4.x, the "option forwardfor" feature's lack of an ACL to
control its application is causing me to have an ugly and confusing haproxy
configuration.

The issue has come up recently while attempting to configure the proxy to
accept connections from trusted upstream proxies (a CDN) while also
accepting connections from untrusted clients.  The issue is that traffic
from a trusted proxy already includes the X-Forwarded-For header which needs
to be preserved while traffic from untrusted clients needs to strip any
existing XFF header(s) and add our own.

The best solution I've been able to come up with is to not have "option
forwardfor" in the defaults and to duplicate the web frontend.  One of the
frontends is used for public traffic and strips and sets the XFF header and
the other preserves existing XFF header after verifying the source.  This
works but all the rest of the front end logic is now duplicated.

This solution means that I have to live with duplicated code in two frontend
sections or write a configuration assembly application to assemble a valid
haproxy configuration from multiple file snippets.  Neither of these are
very desirable.  There is an SSL terminator (nginx) in front of haproxy
which complicates matters further.

What are my other options?  There are multiple backends so having one shared
front end and duplicating the backend sections and putting the XFF handling
there isn't any better.  Routing connections through the proxy twice for
every hit isn't very nice either since the number of hits isn't low (not
high by Willy's standards though) and it really pollutes the logs which we
do analyze.

Thanks in advance for any help or suggestions.

-Bryan


Re: X-Forwarded-For contortions

2011-08-15 Thread Bryan Talbot
That would work but there are several CIDR networks that contain a trusted
proxy.  The CDN is global and has proxies on most continents and many of
them are on different networks.

Since the "option forwardfor" can only handle a single network in the
"if|unless" argument the only real use case it can support is where all the
upstream proxies come from the same network -- like for an upstream SSL
terminator.

The logic I need is:

if <request is from a trusted proxy>
  leave existing XFF header alone
else
  strip any XFF header from request and add our own


-Bryan



On Mon, Aug 15, 2011 at 2:12 AM, Brane F. Gračnar wrote:

> On Friday 12 of August 2011 20:17:11 Bryan Talbot wrote:
> > What are my other options?  There are multiple backends so having one
> > shared front end and duplicating the backend sections and putting the XFF
> > handling there isn't any better.  Routing connections through the proxy
> > twice for every hit isn't very nice either since the number of hits isn't
> > low (not high by Willy's standards though) and it really pollutes the
> logs
> > which we do analyze.
>
> Didn't tried this config, but it should work according to docs...
>
> frontend some_fe
>   acl trusted_proxies a.b.c.d/24
>
>   reqidel ^X-Forwarded-For: unless trusted_proxies
>
>   option forwardfor except a.b.c.d/24
>
>   default_backend some_be
>
> backend some_be
> 
>
> Best regards, Brane
>


Re: X-Forwarded-For contortions

2011-08-24 Thread Bryan Talbot
I think this will address my issue, thanks a lot!  Sorry for not
getting back to your questions sooner; I just returned from an
end-of-summer trip.

I'll check it out soon.  It's in 1.4 trunk but not in any release yet, correct?
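
For later readers, a minimal sketch of the frontend this resolves to,
assuming the CDN networks are listed in one acl (the addresses below are
placeholders, not real CDN ranges):

```
frontend web
    bind *:80
    # from_proxies lists the trusted CDN networks; extend as needed.
    acl from_proxies src 203.0.113.0/24 198.51.100.0/22
    # Drop any client-supplied XFF unless it came from a trusted proxy,
    # then add one only if none survived (needs the "if-none" patch).
    reqidel ^X-Forwarded-For: if !from_proxies
    option forwardfor if-none
    default_backend webfarm
```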


-Bryan


On Fri, Aug 19, 2011 at 2:04 PM, Willy Tarreau  wrote:
> Hi Bryan,
>
> On Tue, Aug 16, 2011 at 07:51:07AM +0200, Willy Tarreau wrote:
>> Hi Bryan,
>>
>> On Mon, Aug 15, 2011 at 11:13:18AM -0700, Bryan Talbot wrote:
>> > That would work but there are several CIDR networks that contain a trusted
>> > proxy.  The CDN is global and has proxies on most continents and many of
>> > them are on different networks.
>> >
>> > Since the "option forwardfor" can only handle a single network in the
>> > "if|unless" argument the only real use case it can support is where all the
>> > upstream proxies come from the same network -- like for an upstream SSL
>> > terminator.
>> >
>> > The logic I need is:
>> >
>> > if <request is from a trusted proxy>
>> >   leave existing XFF header alone
>> > else
>> >   strip any XFF header from request and add our own
>>
>> Well, I think we could see it slightly differently :
>>
>>   if <request is from a trusted proxy>
>>     leave existing XFF header alone
>>   else
>>     strip any XFF header from request
>>
>>   if (no XFF is present)
>>     and add our own
>>
>> This way we could use the ACLs to delete any existing occurrence
>> and have a simple flag to indicate the addition is conditionnal.
>> For instance we could have a "dont-replace" flag on the fwdfor
>> config line to indicate that it should only be set and not replaced.
>>
>> However we have to keep in mind that XFF is performed both in the
>> frontend and the backend, so the conditional logic must be compatible
>> between both. This means that if either of the front or back has the
>> option without the "dont-replace" flag, it will still be replaced.
>
> I did it, though I changed the argument name to "if-none" to make it
> less ambiguous especially in the case where both the frontend and the
> backend have the option. It is even supported in the defaults section
> (but this should be avoided since it has security implications).
>
> Thus, for your case above, you'd simply remove the header if the IP
> comes from an untrusted host, then conditionally add it :
>
>   reqidel ^x-forwarded-for if !from_proxies
>   option forwardfor if-none
>
> I'm attaching the patch, but it's already pushed upstream.
>
> Regards,
> Willy
>
>
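
For reference, the two directives Willy shows above combine with an ACL
definition along these lines (the networks below are hypothetical
placeholders for the real trusted CDN ranges):

    # trusted upstream proxies (example ranges only)
    acl from_proxies src 203.0.113.0/24 198.51.100.0/24

    # drop any client-supplied XFF unless it arrived via a trusted proxy,
    # then add our own only when none is present
    reqidel ^X-Forwarded-For: if !from_proxies
    option forwardfor if-none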



Re: haproxy not logging correctly or I am misunderstanding something?

2011-08-26 Thread Bryan Talbot
I think you're mixing up your client IP and server IP addresses.  The
10.0.3.12:42281 is the source IP:PORT for the TCP connection making the
request (the client).  The request was serviced by the server
named ip_10_0_2_14 which I assume has an IP address of 10.0.2.14.

-Bryan



On Fri, Aug 26, 2011 at 11:02 AM, Dean Hiller  wrote:

> Not sure if this is my confusion or if there is something wrong.  Version
> 1.3.22-1 of haproxy according to apt-cache pkginfo haproxy.
>
> The HAProxy log was(note the 10.0.3.12 and 200 success)
>
> Aug 26 17:18:58 localhost haproxy[990]: 
> 10.0.3.12:42281[26/Aug/2011:17:18:43.503] appli1-rewrite 
> appli1-rewrite/ip_10_0_2_14
> 9/0/2/10577/15160 200 4179 - -  156/156/156/30/0 0/0 "GET
> /API-2011-02-09/supplyChainPartys/033344409/eventSequences/urn:epc:id:sgtin:0111222.999888.02731
> HTTP/1.1"
>
> so I go to server 10.0.3.12 since it was success 200 and I log every
> request server side and this request is not there.  I then happen to find it
> on my 10.0.3.14 server instead!  My test is under very high load as you
> can see by the service time.
>
> thanks,
> Dean
>


Re: How to test keep-alive is working?

2011-08-26 Thread Bryan Talbot
You could use something as simple as curl to see if the connection is left
in-tact.

$> curl -I -v www.example.com

If keep-alive is working, curl will include a verbose message like this:

* Connection #0 to host www.example.com left intact

and then close the connection since it has no pending requests to make

* Closing connection #0

-Bryan



On Fri, Aug 26, 2011 at 11:56 AM, bradford  wrote:

> How do I test that keep-alive is working?
>
> I've added "option http-server-close" to the "frontend" and do not see a
> connection header in the http response.  I used to see "Connection: close"
> when I had "httpclose" enabled, but don't see "Connection: keep-alive" or
> anything similar.  So, how do I test it's working?
>
> Bradford
>


Re: reqrep not working

2011-12-01 Thread Bryan Talbot
You can do it but both replacements need their own statement: one for
the URI and one for the Host header.
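
A sketch of what that looks like (hostnames and paths here are illustrative;
note that literal spaces in reqirep patterns must be backslash-escaped):

    # rewrite the Host header
    reqirep ^Host:\ old\.example\.com  Host:\ new.example.com
    # rewrite the path on the request line
    reqirep ^([^\ ]*)\ /old/(.*)  \1\ /new/\2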

-Bryan


On Thu, Dec 1, 2011 at 5:24 AM,   wrote:
> HI,
>
> looks like it cant be done
>
> change the host based on URI, and rewrite the URI at the same time
>
>
>
> thx all for the time to read think on the issue.
>
> Regards
>
> Owen
>
> ---
> posted at http://www.serverphorums.com
> http://www.serverphorums.com/read.php?10,411976,412451#msg-412451
>



Re: How to explain 503 and 504 error.

2011-12-30 Thread Bryan Talbot
The 503's you show are from clients disconnecting very shortly after
they've sent the request but before haproxy can connect to a backend
server.  The client closed the TCP connection without waiting for a
response.  The client probably didn't actually receive a 503 but haproxy
seems to log it that way because it would have sent a 503 in that case if
it could.

The 504's occur after the server-timeout of 30 seconds had passed without
the backend returning a response.  30 seconds is a long time for clients to
wait for most web resources, so I'd claim that your application is broken
if it's taking that long to respond.  Since there are a low number of
connections to the backends, I hope that they're not overloaded.
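
In config terms, the two timeout-driven cases map to different settings
(values illustrative; the flags shown are what haproxy logs in each case):

    defaults
        timeout connect  5s   # expiry while connecting yields a 503 (sC flags)
        timeout server  30s   # expiry waiting for response headers yields a 504 (sH)

The client-abort case described above is a third situation (CC flags in the
log) that no timeout setting governs.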

-Bryan



On Thu, Dec 29, 2011 at 10:42 AM, Benoit GEORGELIN (web4all) <
benoit.george...@web4all.fr> wrote:

> Here another extract from log on specific website, as you can see, all
> images are in 503 error..
>
>  "GET /inbreeding.php?nid=280240&nidp=&nidm=&nbgen=12 HTTP/1.1"
> Dec 29 19:40:34 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58371[29/Dec/2011:19:40:34.355] public webserver/http-16 
> 43/0/-1/-1/43 503 288 -
> - CCVN 458/458/44/7/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/menu_gfx.png HTTP/1.1"
> Dec 29 19:40:34 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58381[29/Dec/2011:19:40:34.477] public webserver/http-16 
> 64/0/-1/-1/65 503 288 -
> - CCVN 463/463/55/9/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/banniere/Bannierenoel2010.gif HTTP/1.1"
> Dec 29 19:40:34 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58387[29/Dec/2011:19:40:34.815] public webserver/http-16 
> 172/0/-1/-1/172 503 288
> - - CCVN 452/452/36/8/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/bbc/bbc_bg.gif HTTP/1.1"
> Dec 29 19:40:35 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58390[29/Dec/2011:19:40:35.057] public webserver/http-16 
> 111/0/-1/-1/111 503 288
> - - CCVN 450/450/38/8/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:35 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58391[29/Dec/2011:19:40:35.221] public webserver/http-16 
> 144/0/-1/-1/146 503 288
> - - CCVN 444/444/38/9/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:35 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58392[29/Dec/2011:19:40:35.266] public webserver/http-16 
> 101/0/-1/-1/102 503 288
> - - CCVN 443/443/37/9/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:35 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58394[29/Dec/2011:19:40:35.429] public webserver/http-16 
> 122/0/-1/-1/122 503 288
> - - CCVN 441/441/35/7/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/banniere/Bannierenoel2010.gif HTTP/1.1"
> Dec 29 19:40:35 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58395[29/Dec/2011:19:40:35.596] public webserver/http-16 
> 63/0/-1/-1/63 503 288 -
> - CCVN 444/444/34/7/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:35 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58396[29/Dec/2011:19:40:35.597] public webserver/http-16 
> 72/0/-1/-1/72 503 288 -
> - CCVN 442/442/34/7/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/img/wobody.jpg HTTP/1.1"
> Dec 29 19:40:35 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58397[29/Dec/2011:19:40:35.717] public webserver/http-16 
> 131/0/-1/-1/131 503 288
> - - CCVN 442/442/37/8/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:35 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58399[29/Dec/2011:19:40:35.737] public webserver/http-16 
> 112/0/-1/-1/112 503 288
> - - CCVN 441/441/37/8/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:36 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58401[29/Dec/2011:19:40:35.900] public webserver/http-16 
> 144/0/-1/-1/145 503 288
> - - CCVN 447/447/36/9/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:36 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58402[29/Dec/2011:19:40:35.950] public webserver/http-16 
> 102/0/-1/-1/102 503 288
> - - CCVN 445/445/39/10/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:36 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58405[29/Dec/2011:19:40:36.296] public webserver/http-16 
> 238/0/-1/-1/238 503 288
> - - CCVN 452/452/42/8/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
> Dec 29 19:40:36 127.0.0.1 haproxy[24349]: 
> 193.247.39.42:58408[29/Dec/2011:19:40:36.621] public webserver/http-16 
> 88/0/-1/-1/88 503 288 -
> - CCVN 445/445/42/8/0 0/0 {www.golf5forum.fr} "GET
> /Themes/maclike_20g/images/theme/main_block.png HTTP/1.1"
>
>
> --
> *De: *"Benoit GEORGELIN (web4all)" 
> *À: *haproxy@formilux.or

Re: strange udp port bind issue

2012-01-18 Thread Bryan Talbot
My guess is that the port must be for syslog.  Maybe there was a bug
in older 1.4.x versions that didn't shut down cleanly in some cases?  I
haven't run into that myself.

-Bryan


On Wed, Jan 18, 2012 at 1:05 PM, Coates, James  wrote:
> We have been running haproxy for years and are currently running 1.4.8.
> Recently we upgraded to exchange 2010 and are balancing it behind haproxy
> with the following config.  Normally when we need to do maintenance, we will
> change the config and run haproxy with the -sf switch to take over for the
> running process.  Since adding this config, the old processes will not die
> and running netstat shows that each process is holding on to a single udp
> port around 32800.
>
>
>
> udp    0  0 0.0.0.0:32803
> 0.0.0.0:*   31441/haproxy
>
> udp    0  0 0.0.0.0:32804
> 0.0.0.0:*   896/haproxy
>
> udp    0  0 0.0.0.0:32805
> 0.0.0.0:*   23140/haproxy
>
>
>
> Any idea why this is happening?  There are no udp ports configured to listen
> anywhere in the config file.
>
>
>
> listen  exchange2010 :80
>
>     bind :25
>
>     bind :110, :135
>
>     bind :139, :443
>
>     bind :6, :60001
>
>     bind :6001-6004
>
>     bind :993-995
>
>     mode    tcp
>
>     balance roundrobin
>
>     server primary  check port 80
>
>     server secondary  check port 80 backup
>
>     option abortonclose
>
>     maxconn 4
>
>     clitimeout 360
>
>     srvtimeout 360
>
>




Re: Does haproxy support cronolog?

2012-01-31 Thread Bryan Talbot
Nothing says you can't run a special instance of syslog-ng that only logs
for haproxy.  The system configuration files don't need to be special in
that case.

-Bryan


On Tue, Jan 31, 2012 at 2:34 AM, Graeme Donaldson
wrote:

> On 31 January 2012 11:21, wsq003  wrote:
> > Hi
> >
> > Here we want haproxy to write logs to separate log files (i.e.
> > /home/admin/haproxy/var/logs/haproxy_20120131.log), and we want to rotate
> > the log files. Then cronolog seems to be a good candidate.
>
> HAproxy can only log to a syslog daemon currently, and this is
> unlikely to change.
>
> Graeme.
>
>


Re: Match when a header is missing?

2012-09-25 Thread Bryan Talbot
On Tue, Sep 25, 2012 at 12:30 PM, Shawn Heisey  wrote:

> I have a need to cause haproxy to match an ACL when a header (User-Agent)
> is missing.  Can that be written with the configuration language in its
> current state?  I'm running 1.4.18 here.
>


How about
acl has_useragent  hdr_cnt(User-Agent) gt 0
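
or, inverted to match when the header is missing (the backend name here is
made up):

    acl missing_ua  hdr_cnt(User-Agent) eq 0
    use_backend no_ua_backend  if missing_ua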

-Bryan


Re: Slow read http attack

2012-10-02 Thread Bryan Talbot
Having one (or some small number) of slow readers isn't a problem.  The
problem comes up when some significant percentage of your requests are from
slow readers.  Those readers might be from the same client or distributed
which seems pretty common from the attacks we see.

Would it be possible for haproxy to start dropping connections to slow
readers if some percentage of them compared to capacity (of the frontend,
backend, etc) were slow?  If the slowness is due to network congestion,
then those clients are already having connection issues anyway.

-Bryan


On Tue, Oct 2, 2012 at 11:13 PM, Willy Tarreau  wrote:

> On Tue, Oct 02, 2012 at 04:12:33PM +0200, Baptiste wrote:
> > Hi,
> >
> > Your timeouts are too long:
> >   timeout client 102
> >   timeout server 102
> >
> > This one sounds good:
> > timeout http-request 6000
> >
> > As far as I understand, your application is not compatible with
> > slowloris protection :)
>
> It's not slowloris here, it's slow reading, which is the opposite. It's
> a lot harder to protect against this on web servers precisely because of
> today's applications behaviour.
>
> If you want to apply a timeout, you can't because the receiver will read
> almost 1 byte every second.
>
> If you want to set a max transfer time (eg: 1 minute max), it will not
> work :
>   - 1 minute is far too long for small objects
>   - 1 minute is far too short for large objects
>
> Ideally we need to consider the transfer rate : clients transfering too
> slowly are building an attack. But this needs to be detected very quickly
> (otherwise the attack works). And if you compute transfer rate over small
> elements, you have to consider TCP retransmits which can add 3*2^N seconds
> on losses. This means that it can be perfectly normal to have several
> pauses of 3 or 9 seconds during a transfer if the uplink to the client
> is congested.
>
> But if we allow even just 3 seconds for every send, then the attack still
> works.
>
> I think that in practice we should consider the time it takes to transfer
> the equivalent of two TCP segments (2896..2920 bytes over ethernet),
> because
> TCP acks every second segment. The idea could be that we rearm the client
> timeout only after *that* many bytes have been transferred, or the buffer
> is empty. At least we could still support slow clients (eg: 1kB/s) but not
> extremely slow ones. But quite frankly, even at 1kB/s with a 1MB object,
> it is possible to harm the server, the transfer will last 20 minutes.
>
> So what are we left with ? We want to support reasonably slow clients,
> TCP retransmits over the internet and large objects ? There is a big
> difference between a slow client and a slowread attack : the number of
> concurrent connections from the same client. If your slowreader only
> has one connection to your site, you don't mind. If it has 10, then it
> becomes an issue.
>
> Fast clients such as proxies won't have too many connections even if they
> have many users behind, because :
>   1) they will merge similar requests
>   2) they generally transfer much faster than the average client, which
>  significantly reduces the number of concurrent connections
>
> So it is perfectly possible to enforce the limit on the number of
> concurrent
> conns to stop such an attack, using a stick-table and a tcp-request rule.
> The slow transfer will not be the key to detect the anomaly, but the fact
> that the client needs many connections to succeed will be the key.
>
> Regards
> Willy
>
>
>
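
A rough sketch of the connection-limit approach Willy describes, using the
stick-table counters that appeared in the 1.5 development branch (table size
and threshold are illustrative, not recommendations):

    frontend web
        bind :80
        # track concurrent connections per source IP
        stick-table type ip size 200k expire 5m store conn_cur
        tcp-request connection track-sc1 src
        # reject clients holding too many connections at once
        tcp-request connection reject if { sc1_conn_cur gt 20 }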


Re: Slow read http attack

2012-10-03 Thread Bryan Talbot
I believe that what haproxy currently does with connections when resources
are all used up is comparable to a router's tail-drop queue management:
when the servers and the queue are full, new connections are refused (dropped).

I was thinking that moving to something more like RED (random early
detection) would be fairer, especially if the algorithm could be tuned by
the site operator.  There would be high and low watermarks that activate
and deactivate it, plus other tunable parameters to guide the selection of
which connections should be dropped: random, transfer rate, include /
exclude backend, acl, etc.

So for example, let's assume there are 100 max connections allowed for a
backend and typically 30 of them are in use. The configuration might be
something like: if current connections > 110% of max (connections are
queued), random drop connections with "active transfer" > 5 sec and
transfer rate < 1KBps.  Here, I think "active transfer" would mean that
headers have been received and POST (PUT, etc) content is being sent very
slowly or that response headers have been received but that reading the
body is happening very slowly.  It also seems like there would need to be a
limit to how long a client has to read response headers much like there is
for a client to send request headers.

RED isn't perfect and has a few known issues, as I'm sure you're aware.
Maybe there is another congestion-control scheme haproxy could use that is
better than both tail drop and RED.

-Bryan



On Tue, Oct 2, 2012 at 11:58 PM, Willy Tarreau  wrote:

> On Tue, Oct 02, 2012 at 11:42:27PM -0700, Bryan Talbot wrote:
> > Having one (or some small number) of slow readers isn't a problem.  The
> > problem comes up when some significant percentage of your requests are
> from
> > slow readers.  Those readers might be from the same client or distributed
> > which seems pretty common from the attacks we see.
> >
> > Would it be possible for haproxy to start dropping connections to slow
> > readers if some percentage of them compared to capacity (of the frontend,
> > backend, etc) were slow?  If the slowness is due to network congestion,
> > then those clients are already having connection issues anyway.
>
> It's not necessarily true unfortunately. And if you want to be kind with
> your visitors, you'd better kill the fast connections because in one click
> they can get the same object again, than killing the ones who'll need
> another
> hour of download (and of connection usage on your side BTW).
>
> If you have some captures of such slow readers that are attacking you, I
> would be interested, I think they can be very useful. One significant
> issue to deal with is that the system's network buffers hide the slow
> ACKs to the application layer, so you can't easily detect that someone
> is reading one byte at a time, and distinguishing someone who does this
> from someone experiencing repeated retransmits (typically a smartphone)
> is very hard.
>
> What if your server is saturated serving smartphones ? You don't
> necessarily want to kill many transfers, it will only make things
> worse because they'll be there again after a retry.
>
> Probably that we should try to consider the transmission quality over
> the whole transfer, which might be similar to the transfer rate after
> all. The difference between the slow reader and other readers is that
> the slow reader reads *very* slowly in order to have its connection
> last as long as possible. The average send() size as well as the average
> time between two send() calls might be a good indication of what is
> happening.
>
> But even then, you see, the attack can change a bit. Imagine for a
> second that the attacker connects to your site, waits one second,
> sends "HEAD /favicon.ico", waits one second and does it again over
> the same connection. It's just one tiny request per second to keep
> the connection alive. Should we block them or let them pass ? Very
> hard to tell. This is a perfectly valid transfer pattern, but if
> abused, it becomes an attack. The difference is that the connection
> to the backend server will be very ephemeral with haproxy, but doing
> this with a normal server is as painful as reading one small segment
> at a time.
>
> Willy
>
>


Re: set-cookie on redirect

2012-12-21 Thread Bryan Talbot
Won't the "Referer" header value contain that either way?



On Fri, Dec 21, 2012 at 9:03 AM, Benjamin Polidore wrote:

> Is it possible to set a cookie value to the path of the GET request?  I
> have a situation where I want to set a "calling_url" cookie when I redirect
> to a login page, and I am currently setting it to a string literal for the
> main website, but i'd like to set it to the path received from client prior
> to redirect.
>
> Thanks,
> bp
>



-- 
Bryan Talbot
Architect / Platform team lead, Aeria Games and Entertainment
Silicon Valley | Berlin | Tokyo | Sao Paulo


Re: "Transparent" redirects

2012-12-27 Thread Bryan Talbot
See the docs for reqrep: http://code.google.com/p/haproxy-docs/wiki/reqrep
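
For the URLs in question, something along these lines should work (an
untested sketch; the pattern assumes old-style paths never already begin
with /v2):

    # internally rewrite /some_path to /v2/some_path;
    # the client never sees a redirect
    reqrep ^([^\ ]*)\ /some_path(.*)  \1\ /v2/some_path\2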

-Bryan


On Thu, Dec 27, 2012 at 8:44 AM, Alexandru Florescu wrote:

>
> Hi,
> I have an old style URL that looks like this:
> http://api.domain.com/some_path
> and a new style URL that looks like this
> http://api.domain.com/v2/some_path
>
> While I can put a 302 redirect in the old style URL that will point to the
> new one this introduces a 100-200ms latency and
> I was wondering if I could avoid this by setting HAproxy so when the URL
> http://api.domain.com/some_path is called then transparently the new URL
> is called by HAproxy and the results sent as if
> generated by the old URL.
>
> Would such a configuration be possible?
>
> Thank you,
> Alex
>
>
> --
> Alex Florescu
> http://PagePeeker.com 
> a...@pagepeeker.com
> twitter: @PagePeeker 
> facebook: https://www.facebook.com/pagepeeker
>


Re: Connection error on RabbitMQ consumer behind haproxy

2013-01-11 Thread Bryan Talbot
forwardfor is for http only of course.

You have the client and server timeouts set to 60 seconds which means that
if those tcp connections are idle for that time the connection will be
closed.  Maybe that's not what you intended?
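
If the AMQP connections are meant to stay open while idle, the timeouts need
to be much larger (the 3h below is illustrative; "option tcpka" additionally
enables TCP keep-alive probes on both sides):

    listen rabbitmq_consumer_cluster 0.0.0.0:5673
        mode tcp
        option tcpka
        timeout client 3h
        timeout server 3h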

-Bryan



On Thu, Jan 10, 2013 at 8:20 PM, B MK  wrote:

> Hello,
>
> My rabbitmq connection dropped every minute behind haproxy. If i connected
> directly there was no issue.  I already mailed this issue on rabbitmq
> mailing list.
> http://rabbitmq.1065348.n5.nabble.com/Connection-error-on-RabbitMQ-consumer-behind-haproxy-td24348.html
> .
>
> How to add this "option forwardfor" in tcp mode. Is it possible.
> [WARNING] 010/095024 (902) : config : 'option forwardfor' ignored for
> proxy 'rabbitmq_consumer_cluster' as it requires HTTP mode.
>
>
>
> See my haproxy configuration,
>
> global
> log 127.0.0.1 local0
> log 127.0.0.1 local1 notice
> #log loghost local0 info
> maxconn 4096
> #chroot /usr/share/haproxy
> user haproxy
> group haproxy
> daemon
> #debug
> #quiet
>
> defaults
> log global
> #mode http
> #option httplog
> option dontlognull
> retries 3
> option redispatch
> maxconn 5000
> contimeout 1
> clitimeout 6
> srvtimeout 6
>
>
> listen rabbitmq_producer_cluster 0.0.0.0:5672
> mode tcp
>
> balance roundrobin
>
> server rabbit_1 rabbit1:5672 check inter 5000 rise 2 fall 3
> server rabbit_2 rabbit2:5672 check inter 5000 rise 2 fall 3
> #server rabbit_3 rabbit3:5672 check inter 5000 rise 2 fall 3
>
> listen rabbitmq_consumer_cluster 0.0.0.0:5673
> mode tcp
> balance roundrobin
> option tcpka
>
> server rabbit_1 rabbit1:5672 check inter 5000 rise 2 fall 3
> server rabbit_2 rabbit2:5672 backup check inter 5000 rise 2 fall 3
> #server rabbit_3 rabbit3:5672 check inter 5000 rise 2 fall 3
>
> listen private_monitoring :8100
> mode http
> option httplog
> stats enable
> stats uri   /stats
> stats refresh 5s
>
>


Re: Info On Haproxy

2013-01-11 Thread Bryan Talbot
If by "go down" you mean that the server stops unexpectedly, then haproxy
will NOT retry requests that have already been sent to a backend server.
 If that server goes down the client will receive an error (503 or
something) and will have to decide what action to take.
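
The connection-level retry behavior, by contrast, is configurable (a minimal
sketch):

    defaults
        retries 3            # retry a failed *connection attempt* up to 3 times
        option redispatch    # allow a retry to go to a different server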

-Bryan


On Fri, Jan 11, 2013 at 3:22 AM, Steven Acreman  wrote:

> Hi Manoj,
>
> 1. It depends how you have everything configured. Haproxy will send
> requests to a working server if one goes down, it can also retry the failed
> ones on the working server. If your app has sessions you then need to
> ensure they are available on the other servers otherwise users will get
> logged out etc.
>
> 2. Yes, just update the haproxy configuration file and reload the service
> (works perfectly for me on RHEL 6.3 anyway).
>
> Thanks,
>
> Steven
>
> On 11 January 2013 11:04, Manoj Joshi  wrote:
>
>> Hello There,
>>
>> Greetings for the day.
>>
>> I am planning to use haproxy in my live web cluster. I have a few queries
>> and concerns below. Kindly help me out in understanding these:
>>
>> 1) How will haproxy act in case a server from the web cluster goes down
>> that already has a few sessions running? What will happen to those
>> already connected sessions?
>>
>> 2) Can I add a new server to the web cluster in the haproxy configuration
>> without impacting its current sessions?
>>
>> Thanks & Regards,
>> Manoj Kumar Joshi
>> SO Support Track Lead
>> Express KCS
>>
>> IM: mkjoshi-ekcs (Skype)
>> E: mkjo...@expresskcs.com
>> U: www.expresskcs.com
>>
>
>


Re: HA proxy

2013-01-22 Thread Bryan Talbot
Why mess around with a version that's more than 5 years old?  Use an up to
date version like 1.4.22 or even better, don't compile your own and use a
binary package for your platform (assuming there is one since you didn't
state what you're trying to build or run on).  Then you might try reading
some of the documentation which thoroughly describes configuration options.

-Bryan


On Tue, Jan 22, 2013 at 6:53 AM, Jonathan Matthews
wrote:

> I started trying to help you but had to give up.
> Congratulations on your early entry for "most obnoxiously formatted
> email of 2013".
>
> Please re-send your email to the list
>
> * without coloured text
> * without HTML
> * without word documents attached
>
> but
>
> * with a more complete description and log of the steps you took, what
> you saw, and what you expected to see.
>
> Thanks,
> Jonathan
> --
> Jonathan Matthews // Oxford, London, UK
> http://www.jpluscplusm.com/contact.html
>
>


Re: HA proxy

2013-01-22 Thread Bryan Talbot
Have you tried "yum install haproxy"?


On Tue, Jan 22, 2013 at 8:03 PM, Paulson AJ  wrote:

> Hi Bryan,
>
> Thanks for your inputs.
>
> Indeed we are using ver 1.4.22; it will be very kind of you if you can
> mail me the installation procedure.
>
> We need to evaluate this before we put into production.
>
> Paulson
>
> *From:* Bryan Talbot [mailto:btal...@aeriagames.com]
> *Sent:* Wednesday, January 23, 2013 2:01 AM
> *To:* Jonathan Matthews
> *Cc:* haproxy@formilux.org; Saipraveen Guttula (IT Services), Bangalore;
> Paulson AJ
> *Subject:* Re: HA proxy
>
>
> Why mess around with a version that's more than 5 years old?  Use an up to
> date version like 1.4.22 or even better, don't compile your own and use a
> binary package for your platform (assuming there is one since you didn't
> state what you're trying to build or run on).  Then you might try reading
> some of the documentation which thoroughly describes configuration options.
> 
>
>
> -Bryan
>
>
> On Tue, Jan 22, 2013 at 6:53 AM, Jonathan Matthews <
> cont...@jpluscplusm.com> wrote:
>
> I started trying to help you but had to give up.
> Congratulations on your early entry for "most obnoxiously formatted
> email of 2013".
>
> Please re-send your email to the list
>
> * without coloured text
> * without HTML
> * without word documents attached
>
> but
>
> * with a more complete description and log of the steps you took, what
> you saw, and what you expected to see.
>
> Thanks,
> Jonathan
> --
> Jonathan Matthews // Oxford, London, UK
> http://www.jpluscplusm.com/contact.html
>
>
>
>
>



-- 
Bryan Talbot
Architect / Platform team lead, Aeria Games and Entertainment
Silicon Valley | Berlin | Tokyo | Sao Paulo


Re: client keep-alive when servers

2013-01-30 Thread Bryan Talbot
If you're asking for keep-alive from client to haproxy and no keep alive
from haproxy to server, then that's what the http-server-close option
provides.

What makes you think that keep alive is not working?

-Bryan


On Wed, Jan 30, 2013 at 6:32 AM, Chris Burroughs
wrote:

> We are using haproxy with tproxy in front of our various web services.
> Most of them serve very short-lived one-off requests, so we have generally
> optimised for closing everything quickly and getting out of the way.  We
> have a new case where we would like client keep-alives, while maintaining
> our traditional quick-close behavior on the backend.  We tried removing
> "option httpclose", but that did not seem to work.
>
> Is it possible to have haproxy send http keep-alives to the client if
> the backend has no keep-alives and is setting "Connection: close"?
>
> global
> maxconn 65536
> pidfile /var/run/haproxy.pid
> daemon
> nbproc  6
> log 127.0.0.1 local4 debug
> defaults
> mode http
> log  global
> option   http-server-close
> option   contstats
> timeout client   9s
> timeout server   9s
> timeout connect  5s
> timeout http-request 7s
> maxconn  65536
>
> listen http_proxy 0.0.0.0:80
> mode http
> stats enable
> stats uri /ha-stats
> stats auth haprox:stats
> source 0.0.0.0 usesrc clientip
> log global
> balance roundrobin
> option httpchk HEAD /live-lb HTTP/1.0
>
>


Re: client keep-alive when servers

2013-01-30 Thread Bryan Talbot
Oh, your backend looks like it's tomcat?  Some tomcat versions mishandle
HTTP 1.1 and keep-alive so the http-pretend-keepalive was added a while ago
to handle servers like that. Does that work better?

-Bryan



On Wed, Jan 30, 2013 at 1:36 PM, Chris Burroughs
wrote:

> Form curl:
> < HTTP/1.1 200 OK
> < Server: Apache-Coyote/1.1
> < Cache-Control: max-age=72
> < Content-Type: application/json;charset=UTF-8
> < Date: Wed, 30 Jan 2013 21:31:14 GMT
> < Connection: close
> <
> * Closing connection #0
>
> as opposed to ending with something like
> * Connection #0 to host HOST left intact
> with no Connection: close
>
> Maybe to rephrase.  Can I have haproxy<-->client use keepalive when
> haproxy<-->backend is explicitly closing and not using keepalive (set
> in both haproxy and the backend's configuration).
>
>
>
> On 01/30/2013 02:36 PM, Bryan Talbot wrote:
> > If you're asking for keep-alive from client to haproxy and no keep alive
> > from haproxy to server, then that's what the http-server-close option
> > provides.
> >
> > What makes you think that keep alive is not working?
> >
> > -Bryan
> >
> >
> > On Wed, Jan 30, 2013 at 6:32 AM, Chris Burroughs
> > wrote:
> >
> >> We are using haproxy with tproxy in front of our various web services.
> >> Most of them serve very short-lived one-off requests, so we have generally
> >> optimised for closing everything quickly and getting out of the way.  We
> >> have a new case where we would like client keep-alives, while maintaining
> >> our traditional quick-close behavior on the backend.  We tried removing
> >> "option httpclose", but that did not seem to work.
> >>
> >> Is it possible to have haproxy send http keep-alives to the client if
> >> the backend has no keep-alives and is setting "Connection: close"?
> >>
> >> global
> >> maxconn 65536
> >> pidfile /var/run/haproxy.pid
> >> daemon
> >> nbproc  6
> >> log 127.0.0.1 local4 debug
> >> defaults
> >> mode http
> >> log  global
> >> option   http-server-close
> >> option   contstats
> >> timeout client   9s
> >> timeout server   9s
> >> timeout connect  5s
> >> timeout http-request 7s
> >> maxconn  65536
> >>
> >> listen http_proxy 0.0.0.0:80
> >> mode http
> >> stats enable
> >> stats uri /ha-stats
> >> stats auth haprox:stats
> >> source 0.0.0.0 usesrc clientip
> >> log global
> >> balance roundrobin
> >> option httpchk HEAD /live-lb HTTP/1.0
> >>
> >>
> >
>
>


-- 
Bryan Talbot
Architect / Platform team lead, Aeria Games and Entertainment
Silicon Valley | Berlin | Tokyo | Sao Paulo


Re: client keep-alive when servers

2013-01-30 Thread Bryan Talbot
http-pretend-keepalive still enables keep-alive to the client like
http-server-close does.

The difference is that http-server-close sends a Connection: close to the
backend to indicate it doesn't intend to use keep alive.  This however
confuses some tomcat versions and causes them to act like the request was
using HTTP 1.0 and (i think) send a connection: close in the response which
is what your client is seeing.  http-pretend-keepalive makes haproxy not
send that Connection: close to the backend but then it closes the
connection anyway.

-Bryan
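
A minimal sketch of the two options being contrasted here (assuming a plain
HTTP setup; adapt to your own sections):

defaults
  mode http
  # keep client-side keep-alive, close the server side after each response
  option http-server-close
  # do not advertise the close to the server; some Tomcat versions downgrade
  # to HTTP/1.0-style behavior when they see "Connection: close"
  option http-pretend-keepalive

With http-pretend-keepalive, haproxy still closes the server connection after
the response; it just stops telling the server in advance.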



On Wed, Jan 30, 2013 at 5:04 PM, Chris Burroughs
wrote:

> I read http://code.google.com/p/haproxy-docs/wiki/http_pretend_keepalive
> and it seems to involve stopping haproxy from setting Connection: close
> when the server did not.  I want the opposite: haproxy to *not* set
> Connection:close when the backend does.
>
> Well, funny you should mention tomcat.  The backends are tomcat and they
> all have keep-alive disabled.   I don't particularly trust its keepalive
> which is why I was trying to avoid using it.
>
> On 01/30/2013 06:59 PM, Bryan Talbot wrote:
> > Oh, your backend looks like it's tomcat?  Some tomcat versions mishandle
> > HTTP 1.1 and keep-alive so the http-pretend-keepalive was added a while
> ago
> > to handle servers like that. Does that work better?
> >
> > -Bryan
> >
> >
> >
> > On Wed, Jan 30, 2013 at 1:36 PM, Chris Burroughs
> > wrote:
> >
> >> From curl:
> >> < HTTP/1.1 200 OK
> >> < Server: Apache-Coyote/1.1
> >> < Cache-Control: max-age=72
> >> < Content-Type: application/json;charset=UTF-8
> >> < Date: Wed, 30 Jan 2013 21:31:14 GMT
> >> < Connection: close
> >> <
> >> * Closing connection #0
> >>
> >> as opposed to ending with something like
> >> * Connection #0 to host HOST left intact
> >> with no Connection: close
> >>
> >> Maybe to rephrase.  Can I have haproxy<-->client use keep-alive when
> >> haproxy<-->backend is explicitly closing and not using keep-alive (set
> >> in both haproxy's and the backend's configuration).
> >>
> >>
> >>
> >> On 01/30/2013 02:36 PM, Bryan Talbot wrote:
> >>> If you're asking for keep-alive from client to haproxy and no keep
> alive
> >>> from haproxy to server, then that's what the http-server-close option
> >>> provides.
> >>>
> >>> What makes you think that keep alive is not working?
> >>>
> >>> -Bryan
> >>>
> >>>
> >>> On Wed, Jan 30, 2013 at 6:32 AM, Chris Burroughs
> >>> wrote:
> >>>
> >>>> We are using haproxy with tproxy in front of our various web services.
> >>>> Most of them are very short lived one-off requests, so we have
> >>>> generally optimised for closing everything quickly and getting out of
> >>>> the way.  We have a new case where we would like client keep-alives,
> >>>> while maintaining our traditional quick-close behavior on the backend.
> >>>> We tried removing "option httpclose", but that did not seem to work.
> >>>>
> >>>> Is it possible to have haproxy send HTTP keep-alives to the client if
> >>>> the backend has no keep-alives and is setting "Connection: close"?
> >>>>
> >>>> global
> >>>> maxconn 65536
> >>>> pidfile /var/run/haproxy.pid
> >>>> daemon
> >>>> nbproc  6
> >>>> log 127.0.0.1 local4 debug
> >>>> defaults
> >>>> mode http
> >>>> log  global
> >>>> option   http-server-close
> >>>> option   contstats
> >>>> timeout client   9s
> >>>> timeout server   9s
> >>>> timeout connect  5s
> >>>> timeout http-request 7s
> >>>> maxconn  65536
> >>>>
> >>>> listen http_proxy 0.0.0.0:80
> >>>> mode http
> >>>> stats enable
> >>>> stats uri /ha-stats
> >>>> stats auth haprox:stats
> >>>> source 0.0.0.0 usesrc clientip
> >>>> log global
> >>>> balance roundrobin
> >>>> option httpchk HEAD /live-lb HTTP/1.0
> >>>>
> >>>>
> >>>
> >>
> >>
> >
> >
>
>


-- 
Bryan Talbot
Architect / Platform team lead, Aeria Games and Entertainment
Silicon Valley | Berlin | Tokyo | Sao Paulo


Re: Max Sessions and source balancing

2013-02-21 Thread Bryan Talbot
I believe the answer to both of your questions is "no".

The configuration directives you've specified will be followed: if more
than maxconn concurrent requests are needed for a particular server,
additional requests will be queued until the maxconn of the frontend /
backend is reached.  Existing connections won't be moved due to a server
being at capacity.

-Bryan
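
A sketch of the directives involved (hypothetical addresses; "timeout queue"
is an addition not present in the quoted config):

listen MyService
  bind :443
  mode tcp
  balance source
  maxconn 5000                 # listener-wide cap
  timeout queue 10s            # how long a connection may wait for a server slot
  # with per-server maxconn, excess connections queue for that same server;
  # they are not redispatched just because the server is at capacity
  server server1 10.0.0.1:443 maxconn 500 check
  server server2 10.0.0.2:443 maxconn 500 check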


On Thu, Feb 21, 2013 at 10:05 AM, Jim Gronowski wrote:

> We have a server setup as follows:
>
> 
> listen MyService X.X.X.X:XX
> mode tcp
> option tcplog
> option redispatch
> balance source
> maxconn 5000
> option ssl-hello-chk
> server server1 X.X.X.X maxconn 500 check
> server server2 X.X.X.X maxconn 500 check
> -
>
> So, each server is limited to 500 sessions.  If 'server1' reaches 500
> connections, will new incoming connections automatically be routed to
> 'server2', even with source-based routing in place?  Will connections
> that were previously bound to server1 connect to server2?
>
>
>
>
> Thanks,
> Jim
>
>
>
>
>
> Ditronics, LLC email disclaimer:
>
> This communication, including attachments, is intended only for the
> exclusive use of addressee and may contain proprietary, confidential, or
> privileged information. Any use, review, duplication, disclosure,
> dissemination, or distribution is strictly prohibited. If you were not the
> intended recipient, you have received this communication in error. Please
> notify sender immediately by return e-mail, delete this communication, and
> destroy any copies.
>
>
>


Re: second backup server when first on fails

2013-02-26 Thread Bryan Talbot
On Tue, Feb 26, 2013 at 2:52 AM, Hauke Bruno Wollentin <
mail...@haukebruno.de> wrote:

> Hi together,
>
> I have haproxy 1.4.22 running with 1 frontend and 1 backend. There are 2
> servers in that backend:
>
>server prim [...] check port 80
>server sec [...] check port 80 backup
>
> The first one has a bigger internet connection than the second, so I
> only want to forward the packets to 'sec' if 'prim' fails.
>
> I am looking for some kind of configuration that will deliver a local
> maintenance website if both 'prim' and 'sec' fail.
>
> Any ideas?
>
>
>
> Best regards,
> Hauke
>
>

Just add another backup server to your backend and it will only be used
when your "prim" and "sec" are both down.  It can serve content from
wherever you'd like including nginx serving up a static sorry page on
localhost ... or a sorry page from S3, etc.

-Bryan
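
A sketch of such a backend (hypothetical addresses; the sorry server could be
a local nginx serving a static page):

backend www
  # normal server, preferred while healthy
  server prim  10.0.0.1:80 check
  # first backup: used only when prim is down
  server sec   10.0.0.2:80 check backup
  # second backup: used only when prim and sec are both down
  # (without "option allbackups", only the first available backup gets traffic)
  server sorry 127.0.0.1:8080 check backup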


Re: use_backend: brackets/grouping not accepted in condition

2013-03-22 Thread Bryan Talbot
On Fri, Mar 22, 2013 at 2:47 AM, Christian Ruppert wrote:

> Hi Baptiste,
>
> it is IMHO not really clear that brackets are for anonymous ACLs only.
> Wouldn't it make sense to support it for use_backend as well?
>
>
Those two are not mutually exclusive: you can use them with use_backend and
they are for anonymous acls.

for example:
  use_backend www if METH_POST or { path_beg /static /images /img /css }

-Bryan


Re: possible crashes on linux with recent glibc

2013-04-01 Thread Bryan Talbot
On Fri, Mar 29, 2013 at 11:01 AM, Willy Tarreau  wrote:

> Hi,
>
>
> For the medium term, I'm going to prepare the following changes :
>
>   - make poll() rely solely on bit fields without using FD_* macros
>   - add a start up warning when select() is used with a maxconn leading
> to more than FD_SETSIZE fds, followed by a runtime test to make it
> crash in glibc while parsing the config if needed instead of reserving
> a Friday evening surprise for you.
>   - enable poll() by default in the generic target, as it's supported on
> all platforms where haproxy is known to build
>



haproxy built with macports on OSX seems to only have support for select()
and not poll().  I don't have any suggestions but is this environment
impacted by your proposed changes?

Not running haproxy on osx for anything other than localhost development
mode of course, but keeping it working on osx would be great.


$> /opt/local/sbin/haproxy -vv
HA-Proxy version 1.4.22 2012/08/09
Copyright 2000-2012 Willy Tarreau 

Build options :
  TARGET  = osx
  CPU = generic
  CC  = /usr/bin/clang -arch x86_64
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LIBCRYPT=1 USE_REGPARM=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes

Available polling systems :
 select : pref=150,  test result OK
Total: 1 (1 usable), will use select.


stick socket.io traffic without using cookies

2013-04-19 Thread Bryan Talbot
I'm trying to find a way to get the two-http-request handshake that
socket.io uses to stick to the same server without using cookies for
persistence.  Most of the guides I've found online all use cookies, but in
my case, at least some of the (non-browser) client apps don't support them.

socket.io is a protocol that can be used over websocket (our main use case)
but also supports several other fallback transport layers (xhr polling,
jsonp, flash polling, etc.).  The protocol is described here:
https://github.com/LearnBoost/socket.io-spec

A sample of the request and response looks like this:

* handshake to establish a session: the session id is returned in the
response body
GET /socket.io/1/ HTTP/1.1
User-Agent: node-XMLHttpRequest
Accept: */*
Host: 1.2.3.4
Connection: keep-alive

HTTP/1.1 200 OK
Content-Type: text/plain
Date: Thu, 18 Apr 2013 19:14:44 GMT
Transfer-Encoding: chunked

47
M7S2snisYW8uw3eUddC-:60:60:websocket,htmlfile,xhr-polling,jsonp-polling
0



* second request which upgrades to websocket with the socket.io session id
specified in the url _path_
GET /socket.io/1/websocket/M7S2snisYW8uw3eUddC- HTTP/1.1
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: MTMtMTM2NjMxMjQ4NDU2NQ==
Host: 1.2.3.4

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: o3FS8Zcg7KLSAx+MKOfkBndQCdE=



I think that the session id from the handshake's response body can be
extracted and stored in a stick table using something like
   stick store-response res.payload(0,20)

however, I can't find any way to extract the session id from the url path
and use that to lookup the server_id from the stick table.

I think there are two problems which prevent me from doing this.  The first
is that there is no pattern extraction method which can extract a pattern
from the url path.  The second is that payload() extracts and stores a
binary value and the patterns that can extract values from the request
(query string, header, etc) all return a string.  Can a string be used to
match in a stick table which stores binary values?

It seems that there is enough information in the first and second request
to allow them to be stuck to the same server without resorting to cookies
or src based stickiness but I don't see a way to do that with haproxy.  I'm
using 1.5-dev18

Thanks for any suggestions.

-Bryan


Re: Balance Roundrobin vs Balance Source

2013-04-19 Thread Bryan Talbot
On Thu, Apr 18, 2013 at 1:13 PM,  wrote:

> Hi All,
>
> We have HAPROXY 1.4.22 running in our environment, one issue that I have
> encountered during testing concerns source IP address affinity, we are
> trying to achieve a form of Sticky Session persistence. I noticed that if
> we have the following configuration in place then we experience problems
> with web pages not loading:
>
> #-
> # NSD which proxys to the NSD Application Servers on port 8081
> #-
>
> frontend http-nsd
> mode http
> bind *:8081
> default_backend nsd
>
> #-
> # round robin balancing between the various backends
> #-
> backend nsd
>  mode http
>  balance roundrobin
>   cookie SERVERID insert indirect nocache
>   server server01 xxx.xxx.xxx.:8081 check cookie s1
>   server server02 xxx.xxx..xxx:8081 check cookie s2
>
> If we then change the balance mode to source then the web page loads
> successfully.
>
> Is this the correct way to be achieving 'stickiness' or is there a better
> more elegant way of achieving this?.
>
>

Using cookies for persistence is certainly common and usually works but
without knowing more about the specifics of your problem I don't think
anyone can help.  "pages not loading" is not enough detail.

-Bryan


1.5-dev18 segfaults with "stats bind-process"

2013-04-19 Thread Bryan Talbot
I'm testing out nbproc for ssl offloading for the first time and ran into
an issue with "stats bind-process" which seems to segfault on startup.

# cat x.cfg
global
  nbproc 2
  stats bind-process 1

listen stats
  bind :8000
  mode http
  stats enable
  stats admin if TRUE
  stats uri   /


# /usr/sbin/haproxy -c -V -f /etc/haproxy/x.cfg
Segmentation fault


# /usr/sbin/haproxy -vv
HA-Proxy version 1.5-dev18 2013/04/03
Copyright 2000-2013 Willy Tarreau 

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
  OPTIONS = USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : no (version might be too old, 0.9.8f min
needed)
OpenSSL library supports prefer-server-ciphers : yes

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


Re: 1.5-dev18 segfaults with "stats bind-process"

2013-04-20 Thread Bryan Talbot
OK, so the globals are position-dependent.  I did have bind-process in the
stats listen section but started stripping entries from the configuration
to make the segfault reproduce with a minimal config.

So, without the patch

global
  nbproc 2
  stats  socket /var/lib/haproxy/stats user haproxy group haproxy mode 0660
level admin
  stats  bind-process 1

listen stats
  bind-process 1
  mode  http
  bind  :8000
  stats enable
  stats admin if TRUE
  stats uri   /

works but if the order of the global "stats socket" and "stats
bind-process" are switched, it segfaults

global
  nbproc 2
  stats  bind-process 1
  stats  socket /var/lib/haproxy/stats user haproxy group haproxy mode 0660
level admin

listen stats
  bind-process 1
  mode  http
  bind  :8000
  stats enable
  stats admin if TRUE
  stats uri   /


The patch fixes the segfault for me.

Thanks,
-Bryan




On Sat, Apr 20, 2013 at 12:54 AM, Willy Tarreau  wrote:

> On Fri, Apr 19, 2013 at 06:13:12PM -0700, Bryan Talbot wrote:
> > I'm testing out nbproc for ssl offloading for the first time and ran into
> > an issue with "stats bind-process" which seems to segfault on startup.
> >
> > # cat x.cfg
> > global
> >   nbproc 2
> >   stats bind-process 1
> >
> > listen stats
> >   bind :8000
> >   mode http
> >   stats enable
> >   stats admin if TRUE
> >   stats uri   /
> >
> >
> > # /usr/sbin/haproxy -c -V -f /etc/haproxy/x.cfg
> > Segmentation fault
>
> Fix attached, thank you Bryan.
>
> The reason for this is that you don't have a "stats socket" before
> "stats bind-process" and this last one forgot to allocate the stats
> frontend.
>
> I now understand what you tried to do but this is wrong :-)
> The global "stats bind-process" forces the process of the global stats
> socket.
>
> What you want to do above (it seems) is to bind your listener to a
> specific process. This is done this way :
>
>  global
>nbproc 2
>
>  listen stats
>bind :8000
>bind-process 1
>mode http
>stats enable
>stats admin if TRUE
>stats uri   /
>
> Anyway I'm attaching the fix for the bug you reported.
>
> Cheers,
> Willy
>
>


Re: stick socket.io traffic without using cookies

2013-04-22 Thread Bryan Talbot
So it looks like there's no way to properly support socket.io currently
with haproxy without hacking the client.  I've been doing that (duplicate
the token in the query string or header) during testing but don't want to
do it generally since I'm not always in control of client code.

-Bryan



On Fri, Apr 19, 2013 at 1:39 PM, Ian Scott  wrote:

>  On 04/19/2013 10:21 AM, Bryan Talbot wrote:
>
> I'm trying to find a way to get the two-http-request handshake that
> socket.io uses to stick to the same server without using cookies for
> persistence.  Most of the guides I've found online all use cookies, but in
> my case, at least some of the (non-browser) client apps don't support them.
>
> ...
>
>   I think there are two problems which prevent me from doing this.  The
> first is that there is no pattern extraction method which can extract a
> pattern from the url path.
>
>
> I ran into a similar issue with sticking on requests from a similar
> websocket/longpolling/etc library (nginx-push-stream-module) and solved it
> by passing the channel ID in the querystring and sticking on urlp. It looks
> like the socket.io protocol spec allows for user-defined query
> components, but socket.io-client doesn't let you set them (unless I'm not
> reading things properly).
>
> I agree that being able to extract from part of the URL path would be a
> great feature.
>
>  The second is that payload() extracts and stores a binary value and the
> patterns that can extract values from the request (query string, header,
> etc) all return a string.  Can a string be used to match in a stick table
> which stores binary values?
>
> From looking at the source, it'll let you do it. Whether it actually works
> I'll let someone more knowledgeable answer. I could see one issue being
> with null termination causing them to not match.
>
> http://git.1wt.eu/web?p=haproxy.git;a=blob;f=src/stick_table.c;h=3097e662fe1613178e6e2d561c101f2852acd85c;hb=HEAD#l591
>
> Ian
>


Re: stick socket.io traffic without using cookies

2013-04-22 Thread Bryan Talbot
On Mon, Apr 22, 2013 at 1:46 PM, Brandon Dimcheff wrote:

> On Mon, Apr 22, 2013 at 12:40:46PM -0700, Bryan Talbot wrote:
> > So it looks like there's no way to properly support socket.io currently
> > with haproxy without hacking the client.  I've been doing that (duplicate
> > the token in the query string or header) during testing but don't want to
> > do it generally since I'm not always in control of client code.
>
> We've just been doing source-ip based load balancing, which seems to
> work fine for our socket.io stuff... Would that not work for you?
>


I think that would work for some cases, but there are a few common use
cases for us when that won't work well: devices that change IP often (e.g.,
mobile), users behind proxies, load testing.  There are probably others.

Since mobile users tend to change IP address much more often than
non-mobile clients, their sessions and websockets would get broken as well.
Some of our users are behind proxies, which can change the user's external
IP address more often as well and produce uneven load when there are many
clients from a small number of IPs. Finally, my immediate use case is
load testing using a smallish number of EC2 hosts, where src-based routing
is just not effective; but that's the easiest one to work around.

-Bryan


Re: do I still need nginx for static file serving?

2013-04-22 Thread Bryan Talbot
Since haproxy is not a webserver (it's a reverse proxy), you still need a
webserver to actually serve content and run the application.

-Bryan


On Mon, Apr 22, 2013 at 2:28 PM, S Ahmed  wrote:

> My backend servers run jetty, and currently I am using nginx that runs on
> port 80 to route traffic to the backend that runs on e.g. port 8081.
>
> I also using nginx to serve the static files for the folder:
>
> /assets/
>
> So all requests that have this folder do not get proxied to jetty on port
> 8081; nginx serves the static files.
>
> If I use haproxy now, do I still need to run nginx to serve static files
> or is this something haproxy can do just as efficiently?
>
> I'd rather reduce the # of services I have to manage :)
>


Re: stick socket.io traffic without using cookies

2013-04-22 Thread Bryan Talbot
Ahh, I was looking at it from the socket.io protocol level and forgot to look
at balancing purely at the HTTP level.  This works for the cases where we
have some control of the client application but still doesn't allow me to
proxy any generic app that conforms to the socket.io spec.  I can use this
for now though.

Is there any chance in being able to extract and stick on a component of
the request path?

-Bryan



On Mon, Apr 22, 2013 at 3:06 PM, Ian Scott  wrote:

>  On 04/22/2013 12:40 PM, Bryan Talbot wrote:
>
> So it looks like there's no way to properly support socket.io currently
> with haproxy without hacking the client.  I've been doing that (duplicate
> the token in the query string or header) during testing but don't want to
> do it generally since I'm not always in control of client code.
>
> Actually it looks like the socket.io client preserves any query string
> given in the io.connect() call. So there's no hacking of socket.io code
> required.
>
> I just did a quick test of the example on the socket.io homepage, with
> the call io.connect('http://localhost:?foo=somethingrandom') and
> foo=bar got passed along  both in the initial handshake and websocket
> connection:
> http://localhost:/socket.io/1/?foo=somethingrandom&t=137914255
>
> ws://localhost:/socket.io/1/websocket/zWrhX7gYKiBnFmoqwVsr?foo=somethingrandom
>
> So if you set an arbitrary query string parameter and stick on it with
> urlp, it should work, and with no payload sniffing. Unless I'm missing
> something.
>
> Ian
>
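
Sticking on the query-string parameter Ian describes could look like this
(hypothetical parameter and server names; urlp() is available in 1.5-dev):

backend socketio
  balance roundrobin
  # store the value of ?foo=... so the handshake and the later websocket
  # upgrade, which both carry the parameter, land on the same server
  stick-table type string len 32 size 200k expire 30m
  stick on urlp(foo)
  server app1 10.0.0.1:8000 check
  server app2 10.0.0.2:8000 check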


Re: Block url in https

2013-04-24 Thread Bryan Talbot
Since the traffic passing through your port 443 is presumably encrypted, by
design, the proxy can't do anything with the contents including read it.

-Bryan
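
The usual way around this is to terminate TLS in haproxy so that HTTP-level
rules can apply; a sketch assuming 1.5-dev built with OpenSSL (certificate
path and backend name are hypothetical):

frontend secure_http
  bind *:443 ssl crt /etc/haproxy/site.pem
  mode http
  acl is_public hdr(host) -i public.mydomain.com
  # deny returns a 403; a custom error page can be set with "errorfile"
  http-request deny if is_public
  default_backend bck_http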



On Wed, Apr 24, 2013 at 7:57 AM, Matthieu Boret  wrote:

> Hi,
>
> I try to block a URL(public.mydomain.com) in https but this doesn't
> works. If it's possible I would redirect to a 503 error page.
>
> frontend unsecured
>   bind *:80
>   mode http
>   redirect scheme https
>
> frontend secure_tcp
>   mode tcp
>   bind *:443 name https
>   reqideny ^public**
>   default_backend bck_tcp
>
>
> Thanks
>
>
> Matthieu
>


Re: Keeping LB pools status in sync

2013-04-26 Thread Bryan Talbot
It sounds like you're asking how to use a server's health state in one
backend as the health state in another.  If so you can use the "track"
option on the servers

backend pool1
  server server1 1.1.1.1:6060 track pool2/server1
  server server2 1.1.1.2:6060 track pool2/server2

backend pool2
  server server1 1.1.1.1:80 check
  server server2 1.1.1.2:80 check

Is that what you want?

-Bryan




On Fri, Apr 26, 2013 at 5:09 PM, Ahmed Osman  wrote:

>  Hello Everyone,
>
> ** **
>
> I’m wondering if anyone is able to tell me if this is default behavior or
> if I need to configure this. In a nutshell I have this setup:
>
> ** **
>
> LB_Pool1
>
> Server1:6060
>
> Server2:6060
>
> ** **
>
> LB_Pool2
>
> Server1:80
>
> Server2:80
>
> ** **
>
> ** **
>
> I can do a check pretty easily on LB_Pool2 however I don’t have a method
> for doing so on LB_Pool1. If something goes wrong with Server1 then the
> check in LB_Pool2 will detect it immediately and remove it from the pool
> until it’s back up. Will Server1 be removed from LB_Pool1 at the same time?
> And if not, how would I set it up so that happens?
>
> ** **
>
> ** **
>
> ** **
>
> ** **
>
> *Ahmed Osman*
>
> *DevOps Engineer*
>
> *Infrastructure Support Services*
>
> *TIBCO Spotfire*
>
> ** **
>


Re: SMTP load balancer.

2013-04-30 Thread Bryan Talbot
On Tue, Apr 30, 2013 at 6:52 PM, Eliezer Croitoru wrote:

> server smtp1 192.168.25.1:25 maxconn 10
> server smtp2 192.168.25.1:25 maxconn 10
> ##conf end
>
> when I run the connection from other machine I get all the load on one
> machine..
>
>
Looks like you've listed the same IP address twice.

-Bryan
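
Presumably the second server should point at a different host, e.g.
(assuming the second backend really is at .2):

server smtp1 192.168.25.1:25 maxconn 10
server smtp2 192.168.25.2:25 maxconn 10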


Re: Monitor always returns HTTP 200

2013-05-02 Thread Bryan Talbot
On Thu, May 2, 2013 at 8:55 AM, James Bensley  wrote:

> acl backend_down nbsrv(http--servers) lt 2 # HAProxy can see
> less than 2 backend servers
> monitor-uri /checkuri
> monitor-net 172.22.0.0/24



What's the address of the computer making the requests?  If it's in the
172.22.0.0/24 network, all responses for any URI will be 200 as long as
"monitor fail" is false.

-Bryan
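
If the intent is for /checkuri to return an error when fewer than two
servers are up, the acl also needs to be wired to "monitor fail"; a sketch
using the names from the quoted config (monitor-net omitted, since
connections from that network are answered 200 before any rules run):

frontend monitor-in
  bind :8080
  mode http
  acl backend_down nbsrv(http--servers) lt 2
  monitor-uri /checkuri
  # return a 503 instead of 200 while the condition holds
  monitor fail if backend_down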


build with static openssl

2013-05-10 Thread Bryan Talbot
What's required to build haproxy and statically link with openssl libs like
can be done with pcre?  It would be a nice option to have when running on
OS with older openssl (like RHEL 5.x) but still allow haproxy to use latest
openssl.

-Bryan


Re: build with static openssl

2013-05-13 Thread Bryan Talbot
ok, that's basically what I did to get it working too.  I'm still doing
some testing but so far it's working as expected and using openssl 1.0.1e
on a redhat 5.x system.

I ended up configuring openssl with "no-dso" which seems to make it
statically link to its dependencies and not need to pull -ldl into the
haproxy build.  Not sure what other impacts that has though.

Thanks for the pointers!


-Bryan





On Fri, May 10, 2013 at 5:24 PM, Lukas Tribus  wrote:

> Hi Bryan,
>
>
> > What's required to build haproxy and statically link with openssl libs
> > like can be done with pcre?
>
> The following procedure will install a static build of latest openssl
> in a directory of your choice without interfering with your OS headers
> and libraries:
>
> > export LIBSSLBUILD=/tmp/libsslbuild
> > mkdir $LIBSSLBUILD
> > cd ~
> > wget http://www.openssl.org/source/openssl-1.0.1e.tar.gz
> > tar zxvf openssl-1.0.1e.tar.gz
> > cd openssl-1.0.1e &&
> > ./config --prefix=$LIBSSLBUILD no-shared
> > make
> > make install_sw
>
>
> Then build haproxy by pointing to the proper path:
> > make TARGET=linux2628 USE_OPENSSL=1 ADDINC=-I$LIBSSLBUILD/include \
> > ADDLIB="-L$LIBSSLBUILD/lib -ldl"
>
> OpenSSL depends on libdl, so we need pass -ldl along.
>
>
> When everything is compiled, checkout your openssl version (use a
> snapshot from Apr 27th or younger to see build and runtime
> openssl version). Both should say 1.0.1e in our case. Also check with
> ldd; it should not show any openssl libraries loaded dynamically.
>
> > lukas@ubuntuvm:~/haproxy$ ./haproxy -vv | grep OpenSSL
> > Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
> > Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
> > OpenSSL library supports TLS extensions : yes
> > OpenSSL library supports SNI : yes
> > OpenSSL library supports prefer-server-ciphers : yes
> > lukas@ubuntuvm:~/haproxy$ ldd haproxy
> > linux-gate.so.1 =>  (0xb76e4000)
> > libcrypt.so.1 => /lib/i386-linux-gnu/libcrypt.so.1 (0xb76ab000)
> > libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xb76a6000)
> > libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb74fb000)
> > /lib/ld-linux.so.2 (0xb76e5000)
> > lukas@ubuntuvm:~/haproxy$
>
>
>
> Regards,
>
> Lukas


Re: disable haproxy logging to console

2013-05-24 Thread Bryan Talbot
Something like this should do it:


*.emerg;local0.none  *

-Bryan



On Fri, May 24, 2013 at 1:16 AM, Wolfgang Routschka <
wolfgang.routsc...@drumedar.de> wrote:

> Hi Guys,
>
> one question about disable haproxy logging to console.
>
> System is RHEL6.x Clone Scientifc Linux 6.4 64 Bit with Haproxy 1.5-dev18
>
> I have configured logging for hayproxy in rsyslog.conf
>
> # HAProxy Logging
> local0.*
>  /var/log/haproxy/haproxy.log
>
> It´s always OK for logging but for example a backend has no available
> server (testing, maintenance etc.) haproxy log to console.
>
> Message from syslogd@localhost at May 24 10:09:24 ...
 haproxy[32537]: backend test has no server available!
>
> Message from syslogd@localhost at May 24 10:09:24 ...
>  haproxy[32537]: backend test has no server available!
>
> In rsyslog.conf, *.emerg writes log messages to * - to the console too.
> If I change it to /var/log/messages, haproxy does not log to the console,
> but I don't want to change *.emerg.
>
> How can I disable haproxy message to console?
>
> Greetings
>
> Wolfgang
>
>
>

