How to block files larger than a specific size?

2011-05-10 Thread Igor
I use a conf like the one below in my frontend, but it doesn't work. Any help?

acl bigfile shdr_val(content-length) gt 1000
block if bigfile
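
(For reference, a minimal sketch of a request-side variant, assuming 1.4
syntax: shdr_val reads response headers, so a request ACL would use hdr_val
on the request's own Content-Length instead.)

acl bigfile hdr_val(content-length) gt 1000
block if bigfile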

Bests,
-Igor



Send proxy authorization header to squid

2011-05-13 Thread Igor
Hi all,

In my frontend conf, I used

reqadd Proxy-Authorization:\ Basic\ 

to send the auth header to the proxy. Other proxies work OK, but the squid
proxy seems not to like the Proxy-Authorization header: it keeps returning
407 errors for *some* requests, while other requests return 200 tcp OK.

Is this a bug, or is my conf wrong?

Bests,
-Igor



Re: Send proxy authorization header to squid

2011-05-16 Thread Igor
Thanks. The problem solved.

Bests,
-Igor



On Tue, May 17, 2011 at 3:45 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi Igor,

 On Sat, May 14, 2011 at 01:50:06AM +0800, Igor wrote:
 Hi all,

 In my frontend conf, I used

 reqadd Proxy-Authorization:\ Basic\ 

 to send the auth header to the proxy. Other proxies work OK, but the squid
 proxy seems not to like the Proxy-Authorization header: it keeps returning
 407 errors for *some* requests, while other requests return 200 tcp OK.

 Is this a bug, or is my conf wrong?

 What you describe makes me think that your config works in tunnel mode,
 which means that only the first request of every connection is parsed and
 processed. You have to add option http-server-close to handle the situation
 correctly.

 Regards,
 Willy
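
 (A minimal sketch of the fix Willy describes, with the bind port, backend
 name and credentials as placeholders:)

 frontend fe_proxy
     bind :3128
     mode http
     option http-server-close
     reqadd Proxy-Authorization:\ Basic\ <base64-credentials>
     default_backend squid_farm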





Re: HAProxy Response time performance

2011-06-09 Thread Igor
Can't find 1.4.16 at http://haproxy.1wt.eu/download/1.4/src/ ?

Bests,
-Igor



2011/6/9 Hervé COMMOWICK hcommow...@exosec.fr:
 Hello Matt,

 You need to activate logging to see what happens to your requests; you
 can use the halog tool (in the contrib folder) to filter out the fast
 requests.

 Other things you can enable to reduce latency are:
 option tcp-smart-accept
 option tcp-smart-connect

 and finally you can test:
 option splice-response
 But this one will depend on your kind of traffic.

 The next release, 1.4.16, has some latency improvements
 (http://www.mail-archive.com/haproxy@formilux.org/msg05080.html); I
 think you can give it a try, take the daily snapshot for this.
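
 (A minimal sketch of where the options Hervé lists would go; splice-response
 is left commented out since its benefit depends on the traffic:)

 defaults
     option tcp-smart-accept
     option tcp-smart-connect
     # test first, the effect depends on the kind of traffic:
     # option splice-response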

 Regards,

 Hervé.

 On Wed, 8 Jun 2011 23:57:38 -0700
 Matt Christiansen ad...@nikore.net wrote:

 Hello,

 I want to move to HAProxy for my load balancing solution. Overall
 I have been greatly impressed with it. It has far more throughput
 and can handle far more connections than our current LB solution
 (nginx). I have been noticing one issue in all of our tests though: the
 TP99.9 (and greater) response times are much, MUCH
 higher than with nginx and we have a lot of outliers.

 Our test makes a call to the VIP and measures the time it takes to
 receive the data back, then pauses for a second or two and makes the next
 request. In both of the sample results below I did 2000 requests.

 HAProxy

 Average: 39.71128451818
 Median: 29.4217891182
 tp90: 67.48199012481
 tp99: 313.29083442688
 tp99.9: 562.318801879883
 Over 500ms: 10
 Over 2000ms: 0

 nginx

 Average: 69.6072148084641
 Median: 59.2541694641113
 tp90: 87.6350402832031
 tp99: 112.42142221222
 tp99.9: 180.88918274272
 Over 500ms: 0
 Over 2000ms: 0

 So as you can see there is a big difference in the TP99.9 and a big difference
 in the outlier count, but the average and median response times are
 really low.

 We are running a pretty stock centos 5.6 server install with HAProxy
 1.4.15; HAProxy isn't using more than about 4% of the CPU and the
 System CPU is closer to 12%.

 I was wondering if you guys had any obvious response time related
 performance tweaks I can try. If you need more info let me know too.

 Thanks,
 Matt C.




 --
 Hervé COMMOWICK, EXOSEC (http://www.exosec.fr/)
 ZAC des Metz - 3 Rue du petit robinson - 78350 JOUY EN JOSAS
 Tel: +33 1 30 67 60 65  -  Fax: +33 1 75 43 40 70
 mailto:hcommow...@exosec.fr





The best way to do health checks in a forward proxy?

2011-06-12 Thread Igor
Hi, all.

I have 2 squid forward proxies, and I just want to use haproxy to do load
balancing and health monitoring.
I have a basic backend conf like:

  balance roundrobin
  server  squid1  127.0.0.1:999
  server  squid2  127.0.0.1:998

but what's the best and smartest way to configure haproxy to do health
checks so that the proxy service stays highly available?
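
(A minimal sketch of one common approach, assuming the squids will proxy a
plain HTTP request without authentication; the check URL and timings are only
examples:)

backend squid_farm
    balance roundrobin
    option httpchk GET http://www.example.com/ HTTP/1.0
    server  squid1  127.0.0.1:999 check inter 2000 rise 2 fall 3
    server  squid2  127.0.0.1:998 check inter 2000 rise 2 fall 3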

Bests,
-Igor



errorfile 403 and haproxy return 200?

2011-06-13 Thread Igor
When I use errorfile 403 /etc/haproxy/403.html, haproxy (1.4.16ss)
returns HTTP/0.9 200 OK to the client instead of HTTP/1.0 403 Forbidden.

Is this a bug?

Bests,
-Igor



Re: errorfile 403 and haproxy return 200?

2011-06-13 Thread Igor
Oops, I didn't notice that the error file must contain the http headers in it :(

Bests,
-Igor



On Tue, Jun 14, 2011 at 1:19 PM, Willy Tarreau w...@1wt.eu wrote:
 On Tue, Jun 14, 2011 at 12:00:07PM +0800, Igor wrote:
 When I use errorfile 403 /etc/haproxy/403.html, haproxy (1.4.16ss)
 returns HTTP/0.9 200 OK to the client instead of HTTP/1.0 403 Forbidden.

 Is this a bug?

 It works perfectly here. Are you sure your 403 file is correct ? Maybe
 you copied it from the 200 or something like this ? Does it contain
 valid headers ? Since you called it .html, I have a doubt. I prefer
 to call them .http in order to avoid any ambiguity.
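
 (For reference, a minimal sketch of a 403.http file: the errorfile contents
 are sent verbatim, so the file has to start with a full status line and
 headers:)

 HTTP/1.0 403 Forbidden
 Cache-Control: no-cache
 Connection: close
 Content-Type: text/html

 <html><body><h1>403 Forbidden</h1></body></html>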

 Regards,
 Willy





Separated config file support

2011-06-15 Thread Igor
I have a very long haproxy.conf; is there any way to split the config
across files using a directive like include *.conf?
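
(There is no include directive in the configuration language itself; a common
workaround is passing -f several times, one per file, the file names here
being only examples:)

haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/frontends.cfg -f /etc/haproxy/backends.cfg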

Bests,
-Igor



How to check transfer speed health?

2011-07-03 Thread Igor
Hi,

Sometimes, some backend servers look OK (low latency and good responses)
and their status is UP (option httpchk GET), but the transfer speed from them
is very poor. Is there any way to check a backend server's transfer speed as
part of its health, and mark the backend server down when the transfer speed goes bad?

Bests,
-Igor


Frontend outgoing bandwidth limit and concurrent source IP limit

2012-04-17 Thread Igor
Hi,

I have two things:

1. Limit the frontend's outgoing bandwidth to a specified rate such as 200KB/s
2. Limit the frontend's concurrent connection source IPs to no more than 3; if
over this limit, return an error page

Can I do these with a stick-table, or is it not possible with haproxy?

Thanks.

Bests,
-Igor


Re: Frontend outgoing bandwidth limit and concurrent source IP limit

2012-04-18 Thread Igor
Thanks all. Hope we will see 1.6-dev1 soon :D

Bests,
-Igor


On Wed, Apr 18, 2012 at 1:40 PM, Willy Tarreau w...@1wt.eu wrote:

 On Wed, Apr 18, 2012 at 05:39:24AM +0200, Baptiste wrote:
  Hi,,
 
  1. not doable at this time with HAProxy
  And I don't even know if there is any plans to do it soon.

 It's planned for 1.6, let's hope one day we finish 1.5 first :-)

  2. easily doable through the stick table with the counter conn_cur.
  Some examples are provided here
 
 http://blog.exceliance.fr/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/
  Note that limiting the number of connections to 3 is too low for a regular
  browser; it may be enough for webservices.

 Warning, Igor asked for limiting source addresses to 3 max. The table_cnt
 ACL is usable to report the number of entries in a table (eg: the number of
 source IP addresses). It's just needed to make the table expire immediately
 so that these addresses are not kept when the connection closes. A timeout
 of 1ms should do the trick I think.
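
 (A hedged sketch of what this describes, with placeholder names and the
 syntax of the 1.5-dev snapshots of that time: the near-immediate expiry keeps
 only currently-connecting sources in the table, so table_cnt reflects how
 many distinct source addresses are active:)

 frontend fe_limited
     bind :8080
     stick-table type ip size 100 expire 1ms nopurge
     tcp-request connection track-sc1 src
     tcp-request connection reject if { table_cnt gt 3 }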

 Willy




Dev 11 breaks stick table

2012-06-16 Thread Igor
Hi,

The configuration below works fine in dev8, but in
haproxy-ss-20120607 it fails and always returns a 502 error.
Any idea?

   stick-table type ip size 3 expire 1ms nopurge store
rspideny . if {  table_cnt gt 3 }
tcp-request connection track-sc1 src

Bests,
-Igor



Key for count track-sc1 source IP by CIDR

2012-06-18 Thread Igor
Hi,

At the moment, only src is supported as the key for counting track-sc1's
source IPs (a /32 mask). Any plan to add support for counting by mask?
For example, counting all 192.168.1.x source IPs as 1 entry.

Cheers,
-Igor



Re: Key for count track-sc1 source IP by CIDR

2012-07-03 Thread Igor
Is there an ETA for this? Maybe dev12 or even sooner? ;)

Bests,
-Igor


On Tue, Jun 19, 2012 at 1:08 PM, Willy Tarreau w...@1wt.eu wrote:
 Hi Igor,

 On Mon, Jun 18, 2012 at 11:12:47PM +0800, Igor wrote:
 Hi,

 At the moment, only src is supported as the key for counting track-sc1's
 source IPs (a /32 mask). Any plan to add support for counting by mask?
 For example, counting all 192.168.1.x source IPs as 1 entry.

 Yes, this is planned, we need to finish the porting to fully rely on the
 pattern engine for this (which already supports a mask in stick-on).

 Cheers,
 Willy




Dynamic DNS lookup

2012-08-24 Thread Igor
I have a server with a dynamic FQDN in a backend, like b1.example.com:, where
b1.example.com has a dynamic IP. haproxy does not seem to work properly when the
server's IP changes. Any way to work around this?

Thanks.

Bests,
-Igor



Re: Old processes never die

2012-11-14 Thread Igor
OK, I will try it.
BTW, the latest snapshot seems to break the cli commands to enable/disable a
backend's server; it always complains: No such server. It works well in dev11.

Bests,
-Igor


On Thu, Nov 15, 2012 at 6:32 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi Igor,

 On Thu, Nov 15, 2012 at 06:27:05AM +0800, Igor wrote:
 Hi,

 Running haproxy-ss-20120903 on a linux box, every reload action keeps the
 old process around forever (it never dies). What may cause this?

 My reload command is:

 $HAPROXY $config -p $PIDFILE -D $EXTRAOPTS -sf $(cat $PIDFILE)

 Probably a bug. When doing dev12 we had to do a significant rework of the
 connection management and I introduced a number of regressions that I hope
 have all since been addressed. I remember having noticed one case of stuck
 connection BTW. Would you please give a try to the latest snapshot ?

 Thanks,
 Willy




Re: Old processes never die

2012-11-14 Thread Igor
Hi Willy, the latest haproxy doesn't have this bug :)
But there is another annoying cli bug: enabling/disabling a backend's
server doesn't work.

Bests,
-Igor


On Thu, Nov 15, 2012 at 7:00 AM, Igor j...@owind.com wrote:
 OK, I will try it.
 BTW, the latest snapshot seems to break the cli commands to enable/disable a
 backend's server; it always complains: No such server. It works well in dev11.

 Bests,
 -Igor


 On Thu, Nov 15, 2012 at 6:32 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi Igor,

 On Thu, Nov 15, 2012 at 06:27:05AM +0800, Igor wrote:
 Hi,

 Running haproxy-ss-20120903 on a linux box, every reload action keeps the
 old process around forever (it never dies). What may cause this?

 My reload command is:

 $HAPROXY $config -p $PIDFILE -D $EXTRAOPTS -sf $(cat $PIDFILE)

 Probably a bug. When doing dev12 we had to do a significant rework of the
 connection management and I introduced a number of regressions that I hope
 have all since been addressed. I remember having noticed one case of stuck
 connection BTW. Would you please give a try to the latest snapshot ?

 Thanks,
 Willy




Re: Disable server in stat page triggers 503

2013-01-15 Thread Igor
Hi, conf like:

listen  admin
bind 127.0.0.1:11199
stats enable
stats hide-version
stats uri /ha-stats
stats realm Ha\ statistics
stats auth admin:admin
stats refresh 60s
stats admin if TRUE

I will try remove password to check that.

Bests,
-Igor


On Tue, Jan 15, 2013 at 4:27 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi Igor,

 On Tue, Jan 15, 2013 at 03:04:10AM +0800, Igor wrote:
 Hi, sometimes when I disable a server in the stats page, it returns 503
 and I must refresh the page to do the disable again.
 This issue started happening when I upgraded to haproxy-ss-20131226, and
 haproxy-ss-20130108 still has this issue.

 This vaguely reminds me something I encountered and thought was fixed.
 Let me guess, you have a dedicated stats instance ? I suspect the
 request tries to pass through.

 Could you please share at least the section which contains the stats
 statement as well as the associated defaults section. Please remove
 any password if you have.

 Willy




Re: Disable server in stat page triggers 503

2013-01-15 Thread Igor
Oops, here's the defaults section :)

defaults
log global
mode    http
option  httplog
option http-no-delay
option logasap
option tcp-smart-accept
option tcp-smart-connect
retries 2
option redispatch
maxconn 4096
timeout check 3000
timeout connect 2
timeout server 3
timeout client 3
errorfile 403 /etc/haproxy/403.http
errorfile 502 /etc/haproxy/502.http

Bests,
-Igor


On Tue, Jan 15, 2013 at 4:24 PM, Willy Tarreau w...@1wt.eu wrote:
 On Tue, Jan 15, 2013 at 09:09:22AM +0100, Cyril Bonté wrote:
 Hi Igor,

 Le 15/01/2013 09:00, Igor a écrit :
 Hi, conf like:
 
 listen  admin
  bind 127.0.0.1:11199
  stats enable
  stats hide-version
  stats uri /ha-stats
  stats realm Ha\ statistics
  stats auth admin:admin
  stats refresh 60s
  stats admin if TRUE
 
 I will try remove password to check that.

 You forgot to provide the defaults section.
 It's important, to see if you're not missing some options such as
 http-server-close or httpclose, which could explain your 503.

 Agreed. Anyway this would be a bug because the stats page works in
 close mode. But it is still possible.

 BTW, Igor, when I said remove the password, I meant do not post
 your password to the list. There is no reason it should change anything
 to the issue you're facing, though I may be wrong of course.

 I'll try to reproduce the issue with your config which looks fine to me
 at this point (but let's see the defaults section).

 Willy




Invalid ACL with Dev-18 JIT

2013-04-03 Thread Igor
I tried with PCRE JIT, but it failed with:

error detected while parsing ACL 'adb' : regex 'ad_keyword=' is invalid.

Is this a problem with my ACL, or a bug?

Bests,
-Igor


Re: Invalid ACL with Dev-18 JIT

2013-04-04 Thread Igor
After applying the patch, -vv shows:

HA-Proxy version 1.5-dev18 2013/04/03
Copyright 2000-2013 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = native
  CC  = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_SPLICE=1 USE_REGPARM=1 USE_STATIC_PCRE=1
USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without zlib support (USE_ZLIB not set)
Compression algorithms supported : identity
Built without OpenSSL support (USE_OPENSSL not set)
Built with PCRE version : 8.21 2011-12-12
PCRE library supports JIT : yes

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


my ACL like:

acl side2 hdr_reg(host) -i -f /etc/haproxy/ip_reg.txt

ip_reg.txt:

\b(?:\d{1,3}\.){3}\d{1,3}\b


\.us

Error like:

error detected while parsing ACL 'side2' : regex
'\b(?:\d{1,3}\.){3}\d{1,3}\b' is invalid.

The config works fine without JIT enabled.


Bests,
-Igor


On Thu, Apr 4, 2013 at 8:31 PM, Lukas Tribus luky...@hotmail.com wrote:

 Hi Igor,


  error detected while parsing ACL 'adb' : regex 'ad_keyword=' is invalid.


 Can you apply the attached patch and provide the output from haproxy -vv?
  It does not fix anything, but it shows what PCRE version you are using
  and if JIT is actually enabled.

 Also, can you give us some details about your configuration? Can you post
 the regexp part and your actual request?



 Cheers,
 Lukas


Limit frontend bandwidth rate?

2013-05-01 Thread Igor
Limiting frontend bandwidth would be handy for some production environments;
is this still planned for 1.5-dev?

Bests,
-Igor


Re: Limit frontend bandwidth rate?

2013-05-02 Thread Igor
Hi Baptiste, you may have misunderstood: I mean limiting the speed, e.g. to a rate of 1Mbps :)

Bests,
-Igor


On Thu, May 2, 2013 at 2:10 PM, Baptiste bed...@gmail.com wrote:

 Hi,

  What you can do with 1.5 currently is use a stick table and monitor
  bandwidth per Host header, for example.
  Then if you go over a limit, you can redirect requests to an
  overloaded explanation page.

 Baptiste


 On Wed, May 1, 2013 at 8:40 PM, Igor j...@owind.com wrote:
   Limiting frontend bandwidth would be handy for some production environments;
   is this still planned for 1.5-dev?
 
  Bests,
  -Igor



SSL terminate mode

2013-05-05 Thread Igor
Hi,

For security purposes and performance testing purposes, is it possible
to use haproxy as an SSL client?

Maybe a config like:

frontend HTTP
bind :80
mode httpsclient(?)
default_backend SSLPOOL

backend SSLPOOL
mode tcp
server  ssl1  public_ip:443

I know some other tools can do this termination, but I prefer to do it all in
haproxy. Thanks for any advice.


Bests,
-Igor


Re: SSL terminate mode

2013-05-05 Thread Igor
Thanks, Willy. My goal is a frontend in http mode (you might call it SSL
terminate mode) with an SSL backend: haproxy uses the remote https connection
directly and terminates the SSL backend into plain http. This is sometimes
useful for performance testing.

Bests,
-Igor


On Sun, May 5, 2013 at 5:55 PM, Willy Tarreau w...@1wt.eu wrote:

 Hi Igor,

 On Sun, May 05, 2013 at 05:42:21PM +0800, Igor wrote:
  Hi,
 
  For security purposes and performance testing purposes, is it possible
  to use haproxy as an SSL client?

 Yes and it was even our first goal when implementing native SSL support.

  May config like:
 
  frontend HTTP
  bind :80
  mode httpsclient(?)
  default_backend SSLPOOL
 
  backend SSLPOOL
  mode tcp
  server  ssl1  public_ip:443

 You need to add ssl at the end of the line above. Your backend needs
 to be in http mode if the frontend is also in http mode. If you need
 this for security, also take a look at the verify server keyword,
 which is used to validate the peer's certificate (otherwise SSL will
 not provide any security at all and will just make you feel safe).
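
 (A minimal sketch combining these points, with the ca-file path assumed: the
 frontend stays in http mode, and the backend server line gets ssl plus verify
 so the peer certificate is actually checked:)

 frontend HTTP
     bind :80
     mode http
     default_backend SSLPOOL

 backend SSLPOOL
     mode http
     server ssl1 public_ip:443 ssl verify required ca-file /etc/ssl/certs/ca.pem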

 Willy




Don't use one server in backend on condition?

2013-07-08 Thread Igor
Hi, is it possible to exclude one server in a backend based on an ACL
condition, like:

backend pool
acl local_dst  hdr(host) -i localhost
server  1  10.0.0.1:2121 weight 1 check
server  2  10.0.0.2:2121 weight 1 check acl local_dst
server  3  10.0.0.3:2121 weight 1 check
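
(A server line cannot carry an ACL; a minimal sketch of one workaround,
assuming the goal is simply to bypass server 2 for matching requests, is to
route those requests to a second pool that omits it:)

frontend fe_pool
    bind :8080
    acl local_dst hdr(host) -i localhost
    use_backend pool_without_2 if local_dst
    default_backend pool

backend pool
    server  1  10.0.0.1:2121 weight 1 check
    server  2  10.0.0.2:2121 weight 1 check
    server  3  10.0.0.3:2121 weight 1 check

backend pool_without_2
    server  1  10.0.0.1:2121 weight 1 check
    server  3  10.0.0.3:2121 weight 1 check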

Thanks in advance.

Bests,
-Igor



Unicode user-agent

2013-10-17 Thread Igor
Hi, I use an hdr(user-agent) ACL to block some traffic. Recently I need to
block some user-agents with Chinese names; can haproxy handle this?

Thanks.

Bests,
-Igor



set weight bug?

2013-11-05 Thread Igor
Using the newest snapshot, when I do

echo set weight s1/p1 100 | socat stdio /tmp/haproxy

on a server that already has weight 100 and then refresh haproxy's stats page,
it asks for a password and doesn't accept the correct password set in
stats auth until I reload haproxy.

I have a script to set server weights, and I found that sometimes setting
weights rapidly, like multiple echo set weight s(*)/p(*) 100 | socat
stdio /tmp/haproxy calls, will crash the haproxy daemon.


Bests,
-Igor



Re: set weight bug?

2013-11-05 Thread Igor
Here is my config http://pastie.org/private/wf0dv30krqpasgmhtdnahw
(Deleted some servers and two backends for clear config)

I used script to handle servers weight since haproxy-ss-20131031, so I
never tried previous versions.

Bests,
-Igor


On Wed, Nov 6, 2013 at 5:55 AM, Lukas Tribus luky...@hotmail.com wrote:
 Hi Igor,


 Using newest snapshot, when I do

 echo set weight s1/p1 100| socat stdio /tmp/haproxy

 to a server already has weight 100, then fresh haproxy's stat page, it
 requires password, and it doesn't accept the right password set in
 stats auth until I reload the haproxy.

 I have a script to set servers weight, I found sometimes set weight to
 servers rapidly, like multi echo set weight s(*)/p(*) 100| socat
 stdio /tmp/haproxy, will crash haproxy daemon.


 I can't reproduce neither problem. Can you post your configurations so we
 can try to reproduce?

 If you know a certain snapshot/release where this worked fine for you,
 please tell, this will help reducing the regression range (if its a bug).



 Regards,

 Lukas



Re: set weight bug?

2013-11-27 Thread Igor
Hi Willy, after upgrading to haproxy-ss-20131122, enabling and disabling
servers via the socket crashes haproxy; this issue does not exist in
haproxy-ss-20131031.

Bests,
-Igor


On Thu, Nov 21, 2013 at 10:42 PM, Willy Tarreau w...@1wt.eu wrote:
 Hi Igor,

 On Thu, Nov 21, 2013 at 09:03:05PM +0800, Igor wrote:
 Thanks Willy. Because I use snapshot haproxy in production, I have
 no chance to do more investigation; glad you could reproduce the bug
 :)

 now you'll have something a bit more reliable to work with. I've just
 committed the two following fixes :

 e7b7348 BUG/MEDIUM: checks: fix slow start regression after fix attempt
 004e045 BUG/MAJOR: server: weight calculation fails for map-based algorithms

 The first one is a crash when using slowstart without checks. It's not your
 case but you'll probably want to fix it in case you happen to switch to
 slowstart. I encountered it while trying to reproduce your bug based on
 Lukas' findings.

 The second one fixes the issue you're facing which is also what Lukas
 noticed (wrong weight after a set weight on a map-based algorithm).
 I can confirm that it resulted in the total backend weight to be larger
 than the table, causing out of bounds accesses after a weight change
 as soon as the map index went far enough (depending on your load and
 the total initial weights, it could take a few seconds to a few minutes).

 The fix is quite large because I wanted to get rid of all places where
 the computations were hazardous (and there were quite a few). From my
 opinion and my tests, it now correctly covers all situations.

 Thanks for reporting it!

 Willy




SSL client mode

2013-12-08 Thread Igor
For testing and benchmarking purposes, a client mode like stud's [1] would be
useful; any plan to implement this feature?

[1] https://github.com/bumptech/stud/pull/79

Bests,
-Igor



Re: SSL client mode

2013-12-08 Thread Igor
Hi, it would be like stunnel's client mode.

In haproxy, we might have something like this to terminate an SSL server into an HTTP server:

listen http
 bind: 80
 mode ssl-client
 use-server sslsrv 127.0.0.1:443

Bests,
-Igor


On Mon, Dec 9, 2013 at 4:25 AM, Lukas Tribus luky...@hotmail.com wrote:
 Hi Igor,

  For testing and benchmarking purposes, a client mode like stud's [1] would be
  useful; any plan to implement this feature?

 Not sure what that means, can you elaborate on the use case?

 SSL encrypted backend connections are already supported.


 Regards,
 Lukas



Re: SSL client mode

2013-12-08 Thread Igor
Thanks, Lukas. I don't quite understand what you mean, can you show me
an example conf?

Bests,
-Igor


On Mon, Dec 9, 2013 at 4:40 AM, Lukas Tribus luky...@hotmail.com wrote:
 Hi,


 listen http
 bind: 80
 mode ssl-client
 use-server sslsrv 127.0.0.1:443

 This should already work without the need to introduce a new mode.

 Just configure your frontent without SSL and your backend with SSL, both
 using HTTP mode.


 Regards,

 Lukas



Compile warning on OS X

2013-12-09 Thread Igor
include/common/time.h:111:29: warning: implicit conversion from
'unsigned long' to '__darwin_suseconds_t' (aka 'int') changes value
from 18446744073709551615 to -1 [-Wconstant-conversion]
        tv->tv_sec = tv->tv_usec = TV_ETERNITY;
include/common/time.h:32:26: note: expanded from macro 'TV_ETERNITY'

Can I ignore this warning even though the compile succeeds? Thanks for any suggestion.

Bests,
-Igor



Re: SSL client mode

2013-12-09 Thread Igor
Thanks Thomas and Lukas, that's what I look for.

Bests,
-Igor


On Mon, Dec 9, 2013 at 10:17 PM, Thomas Heil
h...@terminal-consulting.de wrote:
 Hi,

 On 08.12.2013 21:34, Igor wrote:
 Hi, it would be like stunnel's client mode.

 In haproxy, we might have something like this to terminate an SSL server into an HTTP server:

 listen http
  bind: 80
  mode ssl-client
  use-server sslsrv 127.0.0.1:443
 I think this should work
 --
 listen http :80
   mode http
   server sslsrv 127.0.0.1:443 ssl
 --

 As Lukas mentioned haproxy-devel has a builtin for client ssl mode.

 cheers
 thomas
 Bests,
 -Igor


 On Mon, Dec 9, 2013 at 4:25 AM, Lukas Tribus luky...@hotmail.com wrote:
 Hi Igor,

 For testing and bench purpose, client mode like stud[1] would be
 useful, any plan to implement this feature?
 Not sure what that means, can you elaborate on the use case?

 SSL encrypted backend connections are already supported.


 Regards,
 Lukas



 --
 Thomas Heil
 -
 ! note my new number !
 Skype: phiber.sun
 Email: h...@terminal-consulting.de
 Tel:   0176 / 44555622
 --





New bug?

2013-12-09 Thread Igor
Hi, after upgrading to haproxy-ss-20131207, haproxy fails to start due
to these errors:

[ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:15] :
error detected while parsing a 'rspideny' condition : missing args for
fetch method 'table_cnt' in sample expression 'table_cnt'.

[ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:19] :
error detected while parsing ACL 'too_fast' : missing args for fetch
method 'fe_sess_rate' in sample expression 'fe_sess_rate'.

[ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:23] :
'tcp-request content accept' : error detected in frontend
'zorayoyo9881' while parsing 'if' condition : no such ACL : 'too_fast'

Bests,
-Igor



Print http log to stdout?

2013-12-12 Thread Igor
In verbose mode, is it possible to print http log to stdout?

Thanks.

Bests,
-Igor



Re: Compile warning on OS X

2013-12-13 Thread Igor
Hi Willy, the patch fixed the reported warning, but it seems to introduce
a new warning; the log: http://pastebin.com/dBfHGV2S

Thanks.



Bests,
-Igor


On Fri, Dec 13, 2013 at 4:25 PM, Willy Tarreau w...@1wt.eu wrote:
 On Tue, Dec 10, 2013 at 12:13:09AM +0100, Lukas Tribus wrote:
 Hi Igor,


  include/common/time.h:111:29: warning: implicit conversion from
  'unsigned long' to '__darwin_suseconds_t' (aka 'int') changes value
  from 18446744073709551615 to -1 [-Wconstant-conversion]
          tv->tv_sec = tv->tv_usec = TV_ETERNITY;
  include/common/time.h:32:26: note: expanded from macro 'TV_ETERNITY'

  Can I ignore this warning even though the compile succeeds? Thanks for any
  suggestion.


 Not sure, could you git bisect this?

 Last change here dates from 2007. Compilers tend to report more and more
 warnings in recent versions, and systems use a wide variety of types for
 time.

 Igor, could you please confirm that the attached patch fixes the issue ?
 If so, I'll merge it.

 Thanks,
 Willy




Re: Compile warning on OS X

2013-12-13 Thread Igor
I see, thanks for the very clear explanation. :)

Bests,
-Igor


On Fri, Dec 13, 2013 at 5:45 PM, Willy Tarreau w...@1wt.eu wrote:
 Hi Igor,

 On Fri, Dec 13, 2013 at 05:13:51PM +0800, Igor wrote:
 Hi, Willy, the patch fixed the reported warning,

 Thanks for testing! I'm merging it then.

 but seems introduce new warning, the log: http://pastebin.com/dBfHGV2S

 No it's not the same. Gcc is getting *really* annoying. It reports stupid
 warnings all the time and forces you to write your code a certain way to
 shut them down. It's really unbelievable. Have you seen the comment in the
 code? It already says that this ugly construct was made *only* to shut gcc
 down. But it seems that your new version is even smarter and now requires
 the semi-colon to be put on a distinct line. It does not make any sense
 any more, this compiler decides on the *form* of your code, not the semantics.
 One day we'll see Warning, you used parenthesis in sizeof which is an operator
 and not a function or your 'if' statement was indented with spaces instead of
 tabs, which might confuse readers with tab size different from 8.

 I think that gcc is more and more developped by monkeys for monkeys.

 Each version is slower and buggier than the previous one, and more annoying
 on legacy code.

 I don't want to put the -Wno-x flags in the default build options because
 some older compilers available on certain platforms don't support them all.
 At the moment haproxy builds back to gcc 2.95. We can decide to support 3.x
 and above in the future but that does not mean we'll gain more options to
 silence it.

 That said, if someone is willing to enumerate the list of -Wno-xxx options
 that are supported from 2.95 to 4.8 and which allow us to live in peace
 with this boring compiler, I'm totally open to add them.

 Thanks,
 Willy




Re: [ANNOUNCE] haproxy-1.5-dev20

2013-12-16 Thread Igor
acl adb url_reg,lower -f /etc/haproxy/long.lst

Did I use this in the wrong way?

Bests,
-Igor


On Mon, Dec 16, 2013 at 10:41 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi all,

 here is probably the largest update we ever had, it's composed of 345
 patches!

 Some very difficult changes had to be made and as usual when such changes
 happen, they take a lot of time due to the multiple attempts at getting
 them right, and as time goes, people submit features :-)

 After two weeks spent doing only fixes, I thought it was time to issue dev20.
 I'm sure I'll forget a large number of things, but the main features of this
 version include the following points (in merge order) :

   - optimizations (splicing, polling, etc...) : a few percent CPU could be
 saved ;

   - memory : the connections and applets are now allocated only when needed.
 Additionally, some structures were reorganized to avoid fragmentation on
 64-bit systems. In practice, an idle session size has dropped from 1936
 bytes to 1296 bytes (-640 bytes, or -33%).

   - samples : all sample fetch expressions now support a comma-delimited
 list of converters. This is also true in ACLs, so that it becomes
 possible to do things like :

 # convert to lower case and use fast tree indexing
 acl known_domain hdr(host),lower -f huge-domain-list.lst

   - a lot of code has been deduplicated in the tracked counters, it's now
 possible to use sc_foo_bar(1, args) instead of sc1_foo_bar(args). Doing
 so has simplified the code and makes life of APIs easier.

   - it's now possible to look up a tracked key from another table. This allows
 to retrieve multiple counters for the same key.

   - several hash algorithms are provided, and it is possible to select them
 per backend. This high quality work was done at Tumblr by Bhaskar Maddala.

   - agent-checks: this new feature was merged and replaced the lb-agent-chk.
 Some changes are still planned but feedback is welcome. The goal of this
 agent is to retrieve soem weight information from a server independantly
 of the service health. A typical usage would consist in reporting the
 server's idle percentage as an estimate of the possible weight. This work
 was done by Simon Horman for Loadbalancer.org.

   - samples : more automatic conversions between types are supported, making
 it easier to stick to any parameter. The types are much more dynamic now.
 Some improvements are still pending. This work was done by Thierry 
 Fournier
 at Exceliance.

   - map : a new type of converter appeared : maps. A map matches a key from
 a file just like ACLs do, and replaces this value with the value 
 associated
 with the key on the same line of the file. As it is a converter, it can be
 used in any sample expression. The first usage consists in geolocation,
 where networks are associated with country codes. Maps may be consulted,
 deleted, updated and filled from the CLI. Some will probably use this to
 program actions or emulate ACLs without even reloading a config. This
 work was also achieved by Thierry Fournier, and reviewed by Cyril Bonté
 who developped the original Geoip patchset for 1.4 and 1.5.

   - http-request redirect now supports log-format like expressions, just like
 http-request add-header. This allows to emit strings extracted from the
 request (host header, country code from a map, ...). Thierry again here.

   - checks: tcp-check supports send/expect sequences with 
 strings/regex/binary.
 Thus it now becomes possible to check unsupported protocols, even binary.
 This work is from Baptiste Assmann.

   - keep-alive: the dynamic allocation of the connection and applet in the
 session now allows to reuse or kill a connection that was previously
 associated with the session. Thus we now have a very basic support for
 keep-alive to the servers. There is even an option to relax the load
 balancing to try to keep the same connection. Right now we don't do
 any connection sharing so the main use is for static servers and for
 far remote servers or those which require the broken NTLM auth. That
 said, the performance tests I have run show an increase from 71000
 connections per second to 15 keep-alive requests per second running
 on one core of a Xeon E5 3.6 GHz. This doubled to 300k requests per
 second with two cores. I didn't test above, I lacked injection tools :-)
 One good point is that it will help people assemble haproxy and varnish
 together with haproxy doing the consistent hash and varnish caching after
 it.

 As most of you know, server-side keep-alive is the condition to release 1.5.
 Now we have it, we'll be able to improve on it but it's basically working.

 I expect to release 1.5-final around January and mostly focus on chasing
 bugs till there. So I'd like to set a feature freeze. I know it doesn't
 mean much

Re: HAProxy Next?

2013-12-20 Thread Igor
- Frontend bandwidth speed limit ability.

Bests,
-Igor


On Tue, Dec 17, 2013 at 4:14 PM, Annika Wickert
a.wick...@traviangames.com wrote:
 Hi all,

 we did some thinking about how to improve haproxy and which features we’d
 like to see in next versions.

 We came up with the following list and would like to discuss if they can be
 done/should be done or not.
 - One global statssocket which can be switched through to see stats of every
 bind process. And also an overall overview summed up from all backends and
 frontends.
 - One global control socket to control every backend server and set them
 inactive or active on the fly.
 - In general better nbproc  1 support
 - Include possibility in configfile to maintain one configfile for each
 backend / frontend pair
 - CPU pinning in haproxy without manually using taskset/cpuset
 - sflow output
 - latency metrics at stats interface (frontend and backend, avg, 95%, 90%,
 max, min)
 - accesslist for statssocket or ldap authentication for stats socket

 Are there any others things which would be cool? I hope we can have a nice
 discussion about a “fancy” feature set which could be provided by lovely
 haproxy.

 Best regards,
 Annika

 ---
 Systemadministration

 Travian Games GmbH
 Wilhelm-Wagenfeld-Str. 22
 80807 München
 Germany

 a.wick...@traviangames.com
 www.traviangames.de

 Sitz der Gesellschaft München
 AG München HRB: 173511
 Geschäftsführer: Siegfried Müller
 USt-IdNr.: DE246258085

 Diese Email einschließlich ihrer Anlagen ist vertraulich und nur für den
 Adressaten bestimmt. Wenn Sie nicht der vorgesehene Empfänger sind,
 bitten wir Sie, diese Email mit Anlagen unverzüglich und vollständig zu
 löschen und uns umgehend zu benachrichtigen.

 This email and its attachments are strictly confidential and are
 intended solely for the attention of the person to whom it is addressed.
 If you are not the intended recipient of this email, please delete it
 including its attachments immediately and inform us accordingly.




Does haproxy could be a forward proxy?

2014-01-02 Thread Igor
Hi, this question is silly, but I use haproxy even on my laptop to
split traffic. For example, there's an ACL to let some special domains
go via a remote proxy while the default goes to a local proxy. I wonder,
is it possible to replace the local proxy with haproxy itself, so I could
have server default local:1080 directly without creating a proxy with
another tool?

Thanks.

Bests,
-Igor



Re: HAProxy 1.5 possible bug

2014-03-06 Thread Igor
On Thu, Mar 6, 2014 at 3:50 PM, Willy Tarreau w...@1wt.eu wrote:
 We've been thinking about implementing a simple async resolver in
 combination with health checks to at least automatically update server
 addresses at EC2 and similar horrible environments where a reboot can
 change your server's address. A next step could be to try to use the
 same resolver for regular traffic. The thing is that doing this fast
 will require a cache otherwise it will be slow and will hammer the DNS
 servers quickly.

This is the most wanted feature :)


Bests,
-Igor



Limit requests to host from one source.

2014-05-08 Thread Igor
Hello every guru,

I have a TCP frontend and an HTTP backend. Recently I have an issue where some
users send too many queries to one URL; maybe it's malware or a bot.
So is it possible to limit one source IP's request rate to api.example.com
to 30 per hour? The other hosts like www.example.com and
mail.example.com should not be limited by that.

I referred to the 1.5 doc and
http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/
but can't find a clear way to accomplish this.

Thanks.

Bests,
-Igor



Re: Limit requests to host from one source.

2014-05-09 Thread Igor
Hi, Baptiste

What I mean is tracking every single IP and limiting it only when it accesses
the specified host name, at a limited rate :)
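
(A hedged sketch of one way this is usually approached in 1.5, assuming the
frontend can run in http mode so the Host header is visible; the names, table
size and the one-hour window are only examples:)

frontend fe_http
    bind :80
    mode http
    acl is_api hdr(host) -i api.example.com
    stick-table type ip size 200k expire 1h store http_req_rate(1h)
    tcp-request content track-sc1 src if is_api
    http-request deny if is_api { sc1_http_req_rate gt 30 }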

Bests,
-Igor


On Fri, May 9, 2014 at 3:36 PM, Baptiste bed...@gmail.com wrote:
 Hi Igor,

 You can reuse the examples from the blog and limit tracking to a single IP:
 tcp-request connection track-sc1 src if { src a.b.c.d }

 Baptiste


 On Thu, May 8, 2014 at 5:57 PM, Igor j...@owind.com wrote:
 Hello every guru,

 I got a TCP frontend and a HTTP backend, recently I have a issue some
 users send too much queries to one URL, maybe it's malware or autobot.
 So is it possible to limit one source IP to access api.example.com
 request rate at 30 per hour? The other hosts like www.example.com,
 mail.example.com not limited by that.

 I refer to 1.5 doc and
 http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/
 , can't find a clear way to accomplish.

 Thanks.

 Bests,
 -Igor




OCSP and Startssl

2014-06-29 Thread Igor
Hi, list

I enabled OCSP with an empty .ocsp file, but it doesn't seem to work;
https://www.ssllabs.com/ssltest/ reports OCSP No.

If I run openssl ocsp -issuer s.pem.issuer -cert s.pem -url
http://ocsp.startssl.com/sub/class2/server/ca -header HOST
ocsp.startssl.com -respout s.pem.ocsp, then it works and ssllabs reports
OCSP Yes.

May be like this issue: http://trac.nginx.org/nginx/ticket/465 ?

Bests,
-Igor



Re: 100% CPU after upgraded to 1.6dev

2014-07-18 Thread Igor
Hi

-vv:

./haproxy -vv
HA-Proxy version 1.6-dev0-41 2014/07/12
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = native
  CC  = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_SPLICE=1 USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without zlib support (USE_ZLIB not set)
Compression algorithms supported : identity
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

1.5.1 is the same; after some requests, the load increases.

Bests,
-Igor


On Sat, Jul 19, 2014 at 2:00 AM, Lukas Tribus luky...@hotmail.com wrote:
 Hi Igor, hi Thomas,


 On 18.07.2014 19:22, Igor wrote:
 Hi, I use git commit e63a1eb290a1c407453dbcaa16535c85a1904f9e, 1.5.2
 same result like git version.

 Ok, can you still post the haproxy -vv output please. Best thing would
 be if you could git bisect this in the haproxy-1.5 repository. Could you
 do that? If not, could you try 1.5.1?

 Does the CPU load increase right after starting haproxy (with no load),
 after the first request or later?



 When I look at your config, my educated guess would be commit
 60d7aeb6e1450995e721d01f48f60b7db4c44e2b.

 1.5.2 doesn't contain that commit, it must be something older.



 Regards,

 Lukas





Re: HAProxy as a TCP Fast Open Client

2015-06-19 Thread Igor
I have a scenario where I would use client mode; is TFO client mode ready to
be merged into 1.6-dev?

Bests,
-Igor


On Fri, Feb 14, 2014 at 1:47 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi David,

 On Thu, Feb 13, 2014 at 01:50:16PM +, David Harrold wrote:
 Hi Willy

 Did some more investigation on the case where the application request is too
 large to fit within the initial SYN.

 Here is my test setup:

 Web clients --> haproxy (TFO Client) --> long-thin-pipe --> haproxy (TFO Server) --> web servers

 Client sends an HTTP request larger than MSS, the client side haproxy uses
 TFO and puts as much data as possible within the initial SYN. When SYN ACK is
 returned, the remaining request data is sent.
 On closer inspection although the correct number of octets are sent, the
 octets in the continuation packet are all NUL.

 E.g. Debug shows 1500 octets in the call to sendto() and a return value of 1500.
 Wireshark shows TFO sending 1420 octets in the SYN. After SYN ACK comes
 back, 80 octets are sent in the next packet, but these 80 octets are all NUL.

 OK so that's clearly a bug below.

 Looks like something broken in the TFO client, but would be good to see if 
 others can duplicate my results.

 I'm testing using VMware, which I think emulates TCP offload by default;
 wondering whether that could be the cause?

 Could be indeed, we've got a few issues with GRO/GSO in the past as well.
 I'll have to run some tests with your patch to see with different kernels
 if I can reproduce the same issue. It is also possible that it was a known
 old bug that's already fixed but not backported to some stable branches.

 Regarding default values for the TFO backlog - I was concerned that if this
 is maxconn then is there a DoS vulnerability? Eg if a TFO client streams SYNs
 with random data at you, each of these ties up an haproxy connection for a
 while, starving other clients?

 But it's the same with real connections in practice, because even when the
 connection is accepted, we still need to parse it. This is also the reason
 for having a short http-request timeout. For example, if you support 100k
 concurrent connections on the frontend and wait for a request for 5 seconds,
 a client will have to send 20k conns/s to keep you full. In practice, even
 at this rate, you'll accept 100k extra conns in the backlog which will get
 served but will have to wait 0 to 5s on average.

 The worst thing to do is to reduce the accept rate, which lowers the bar
 for attackers. The higher the limit, the more information we have for
 dealing with pending data. One of the easy things we are already able to
 do is count the number of concurrent connections per source address for
 example. Something we cannot do if we refrain from accepting these
 connections.

 I also have some memories about the network stack defending itself when a
 SYN queue overflows, it would reject TFO or accelerate the regeneration of
 cookies, I don't remember exactly.

 Cheers,
 Willy





Re: [ANNOUNCE] haproxy-1.6-dev2

2015-06-19 Thread Igor
It's very cool to have DNS finally! I wonder, is it possible to do something like this?

use_backend us_upstream if { hdr(Host),dnsname_to_ip_and_map(geo_us.lst) -m str us }

Convert hostname to IP, find IP's geo info, use matched backend.

Thank you.

Bests,
-Igor


On Thu, Jun 18, 2015 at 4:06 PM, Baptiste bed...@gmail.com wrote:
 On Wed, Jun 17, 2015 at 5:08 PM, Willy Tarreau wi...@haproxy.com wrote:
 Hi all,

 the impatient readers among you will have noticed that it's been almost 3
 weeks since I sent the e-mail announcing the imminent release of 1.6-dev2.
 That end of merge window has been a nightmare and is not finished, but I
 thought it would be wise to issue dev2 anyway so that people can test the
 stuff that has been merged anyway. Lesson learned, for 1.7 we'll have a
 much shorter merge window so that people don't have enough time to push
 that much stuff at the last minute :-)

 To be honnest, I'm far from being satisfied with this version. It's as huge
 as dev1 (344 commits) despite some things still being pending. Also noticed
 quite a number of areas that need to be fixed / cleaned up etc. So at least
 the feature freeze is a good thing.

 Reading the changelog since 1.6-dev1, in no particular order, I've found :

   - DNS-based server name resolution : haproxy is now able to periodically
 ask a set of resolvers for the IP address of some servers and to update
 them without restarting. This will make life much easier for people
 running in AWS where IP address change randomly. Some more stuff was
 planned for this such as marking the server as unresolvable if resolving
 fails, but we found that people would probably like to have a 
 configurable
 behaviour. Feedback on this is desired and will drive the next steps.

   - peers protocol v2 : haproxy 1.6 and 1.5 will not be able to synchronize
 their stick tables but on the other hand the new protocol is much better
 and more extensible. First it uses a single connection regardless of the
 number of tables to synchronize. Second it will support synchronizing
 much more than just stick tables. For now it replicates all stick-tables
 contents (including gpc, etc...). This allows reloads to keep entries,
 rates, etc... as well as to pass them to a backup node in case of a
 switchover. It's very likely that during 1.7 development we'll further
 extend the amount of information that can be exchanged.

   - peers support nbproc  1 as long as they're referenced by a single 
 process,
 and peers sections can be disabled (useful for debugging).

   - config : removed a few deprecated keywords (eg: reqsetbe). I wanted to
 remove block as well, and appsession. On the first one I'm not sure,
 on the second one only Aleks (the author of the feature) provided some
 feedback and agreed it was probably time for it to go. Expect that we'd
 get rid of them soon if nobody objects.

   - pattern cache : a small lru cache applies to pattern matching when it
 runs from a list (eg: case insensitive string match, regex, etc). This
 can significantly speed up host header matching or regex matching
 against a huge list.

   - support for stateless zip compression with libslz : this doesn't waste
 memory anymore and compresses about 3 times faster than zlib, at a lower
 compression ratio.

   - support for session/transaction/request/response variables : using the
 set-var action in {tcp,http}-{request-response} rulesets, it's possible
 to assign the result of a sample expression to a variable allocated on 
 the
 fly and which lasts for all the session, the transaction or just the
 ephemeral processing being done on the request or response. This makes
 it possible to keep copies of certain request information and reuse them
 in the response for example. Some work is still pending on this part,
 in particular the ability to use variables with in all arithmetic
 converters which currently only take a constant.

   - support for declared captures : sometimes it's desired to capture in
 the backend or response path but that was not possible since only the
 frontend can assign a capture slot. The solution consists in making
 it possible to declare a capture slot in the frontend for later use.

   - servers: in addition to DNS, it's possible to change a server's IP 
 address
 from the CLI.

   - ssl: it's now possible to forge SSL certs on the fly. That's convenient
 when haproxy has to be deployed in front of proxies which already work
 like this.

   - device identification : two companies, 51Degrees and DeviceAtlas,
 provided patches to add support for their respective libs. We're
 starting to see some demand for such features due to the abundance
 of smartphones, tablets and I don't-know-what, and both libs come
 with a free device database, so it seems to be the right timing.
 The README was updated for both

Re: [ANNOUNCE] haproxy-1.6-dev2

2015-06-19 Thread Igor
Wow, sounds great, hope it comes soon :)

Bests,
-Igor


On Fri, Jun 19, 2015 at 8:00 PM, Willy Tarreau wi...@haproxy.com wrote:
 On Fri, Jun 19, 2015 at 07:35:49PM +0800, Igor wrote:
 It's very cool to have DNS finally! I wonder is that possible to do this 
 like?

 use_backend us_upstream if {
 hdr(Host),dnsname_to_ip_and_map(geo_us.lst)  -m str us }

 Convert hostname to IP, find IP's geo info, use matched backend.

 Not yet. Maybe later it will be possible but for now the resolution is only
 applied to checked servers.

 Willy




Do you need redesign of your site at the address https://www.haproxy.com or another original software?

2019-09-30 Thread Igor
Hello.

I can make a new high-quality fast website in the adaptive layout at the price 
from $300 for your nice project at the address https://www.haproxy.com (HAProxy 
Technologies | The World’s Fastest and Most Widely Used Software Load Balancer).

As well I can do complicated web applications, accounting apps or any other 
software, made and configured specifically for you, and much more, about what 
you can find out on my site https://www.programs.gq/en/ 

With best regards, Igor,
flashscript1...@gmail.com




cannot auth squid_kerb_auth farm behind haproxy

2012-10-03 Thread igor kattar
Hello everybody,
I have a farm of three squid proxies. Pointing at one of them
individually, in a browser for example, I can authenticate (kerberos
authentication via squid_kerb_auth), but when I point the browser at
the vip I cannot authenticate. Does anybody have a clue about how I can
authenticate via the vip? I already tried to create an HTTP/vip entry
in the keytab (used by squid_kerb_auth) but had no success, nor did
setting GSS_C_NO_NAME in squid_kerb_auth help.

Thanks!



effect of adding `cookie` option to server

2014-06-18 Thread Igor Serebryany
Hi!

I am trying to figure out what the effect of adding the `cookie` option to
a `server` config line is. According to this chunk of documentation:

https://cbonte.github.io/haproxy-dconv/configuration-1.4.html#5-cookie

This value will be checked in incoming requests, and the first
operational server possessing the same value will be selected.

However, doesn't this require me to enable a cookie load balancing
algorithm? What I mean is, if I don't explicitly set any load balancing
algorithm and the default (roundrobin) is chosen, it seems as though
setting the cookie actually has no effect.

In fact, setting the cookie should have no effect unless I specify
`appsession`, `cookie`, or `balance uri` (or one of the other persistent
`balance` algorithms) in a backend. Is that correct?

Another way to phrase the question: is it true that the two listen stanzas
below actually behave identically in every respect? The only change is the
addition of the `cookie` param to each server.

listen helloworld
bind :80
mode http
option httplog
server srv1 10.0.2.15:9494 check inter 1s rise 1 fall 1
server srv2 10.0.2.15:9495 check inter 1s rise 1 fall 1

AND

listen helloworld
bind :80
mode http
option httplog
server srv1 10.0.2.15:9494 check inter 1s rise 1 fall 1 cookie srv1
server srv2 10.0.2.15:9495 check inter 1s rise 1 fall 1 cookie srv2
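
(For comparison, a minimal sketch of the case where the per-server cookie
values do take effect: a cookie directive in the same section turns on cookie
persistence, and the server values become the tokens it inserts and matches;
the cookie name SRVID is only an example:)

listen helloworld
    bind :80
    mode http
    option httplog
    cookie SRVID insert indirect nocache
    server srv1 10.0.2.15:9494 check inter 1s rise 1 fall 1 cookie srv1
    server srv2 10.0.2.15:9495 check inter 1s rise 1 fall 1 cookie srv2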

thanks!
--igor


Re: How to edit backend members in realtime without HAProxy restart

2014-06-19 Thread Igor Serebryany
Hi Justin,

We do something similar with Synapse, here:
https://github.com/airbnb/synapse

Two caveats:
* there is no way to dynamically *add* backends to haproxy without a
restart. Synapse uses the stats socket to put down backends in maintenance
mode and bring them back up when the backend becomes available again, but
every time a backend is added Synapse restarts haproxy
* there's no connector yet for plugging Synapse into consul; this would
need to be written.

--igor


On Thu, Jun 19, 2014 at 6:02 PM, Justin Franks justin.fra...@lithium.com
wrote:

   Hello,

 We are using Consul, written by the same guys who wrote Vagrant. Really
 great tool. http://www.consul.io/

 Consul is a service registry and discovery and DNS solution among other
 things.

 I can create an internal name like 'some.thing.internal' which will
 resolve to a pool of nodes. So when I do a 'dig + short
 some.thing.internal' it will return

 IPaddressA

 IPaddressB

 IPaddressC

 etc...

 Consul keeps the list of nodes related to a particular URI up to date in
 real-time. It uses health checks and so on and will round robin the traffic
 to all the nodes evenly. But I want HAProxy too.

 So I want HAProxy to say, I see a URI that resolves to N number of IP
 addresses. I will add the addresses to the backend. I will keep polling the
 URI. If the address change I will update the backend.

 How can I accomplish this?

 What are my options?

 A cron job to run 'dig + short some.thing.internal' every minute and send
 that info to HAProxy for backend members?

 Unix sockets, ALCs and stick-tables?

 I just want the result that I described above done in real-time so HAProxy
 restart not required. I don't care how it is accomplished.

 Thanks in advance for any ideas.









 *
 Justin Franks
 Lead Operations Engineer
 SaaS, Cloud, Data Centers  Infrastructure
 Lithium Technologies, Inc
 225 Bush St., 15th Floor
 San Francisco, CA 94104
 tel: +1 415 757 3100 x3219



Re: HA proxy - Need infromation

2015-04-13 Thread Igor Cicimov
On Tue, Apr 14, 2015 at 12:55 AM, Thibault Labrut 
thibault.lab...@enioka.com wrote:

 Hello,

 I am currently installing HAProxy with keepalived for one of my clients.

 To facilitate the administration of this tool, I would like to know if you
 can advise me of an administration web gui for HAProxy.


Look for stats in the HAP documentation.
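
(A minimal sketch of the built-in stats page, which is the closest thing to a
bundled web gui; port, uri and credentials are only examples:)

listen stats
    bind :9000
    mode http
    stats enable
    stats uri /ha-stats
    stats realm Haproxy\ statistics
    stats auth admin:changeme
    stats admin if TRUE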



 Thank you for your help.

 Best regards,
 --
 Thibault Labrut
 enioka
 24 galerie Saint-Marc
 75002 Paris
 +33 615 700 935
 +33 144 618 314



Re: SSL backends stopped working

2015-04-23 Thread Igor Cicimov
On 23/04/2015 6:01 PM, i...@linux-web-development.de wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Hi!

 I'm having trouble with one of our HAProxy servers that uses a backend
 with TLS. When starting HAProxy the backend will report all servers as down:

 Server web_remote/apache_rem_1 is DOWN, reason: Layer6 invalid response,
 info: SSL handshake failure,

When I see this it is usually an issue with the ciphers. Can you try setting a
specific cipher on the ssl backend that you know is supported by the
backend servers?
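
(A hedged sketch of that suggestion on one of the server lines; the cipher
string itself is only an example and must be one the backend really offers:)

server apache_rem_1  1.2.3.4:12345 check maxconn 1000 maxqueue 5000 ssl ca-file /etc/ssl/web.pem ciphers AES256-SHA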

check duration: 41ms. 1 active and 0 backup servers left. 0 sessions
active, 0 requeued, 0 remaining in queue.



 My backend configuration is as follows:

 backend web_remote
 balance leastconn
 option  httpchk HEAD /
 option  redispatch
 retries 3

 default-server  inter 5000 rise 2 fall 5 maxconn 1 maxqueue 5

  server apache_rem_1  1.2.3.4:12345 check maxconn 1000 maxqueue 5000 ssl ca-file /etc/ssl/web.pem
  server apache_rem_2  2001:1:2:3:4:5:6:8:12345 check maxconn 1000 maxqueue 5000 ssl ca-file /etc/ssl/web.pem


 This backend worked just fine until now; a quick wget on the server also
 worked, and openssl s_client reports the certificate of the backend to be
 valid.

 I couldn't find anything on the list except that the error would be due
 to SSL_ABORT, but I'm not sure what this is supposed to tell me...

 Is there anything else for HAProxy/TLS that could be configured wrong?
 How could I debug this issue when everything else reports the handshake was
 successful?
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2

 iQIcBAEBCAAGBQJVOKYDAAoJEJGDW18KFrBD7p4P/05tlwkxRUJwVoI3Tl1Q3+xI
 upIcN9MfTHPpA6ilVkT2S43HxyZ7RYgYGRs6LEcipLJOhGSxIHcPgGZKwsMJK8NO
 cldP20A0SoRvkUsro1UWOj/iqAsxg+j6IYNxuBJUb5i2yG6KFlp/PupJJI1QDUov
 NzyfjqIh9iSgRA6j3jJSYUDLg5KM3Frl8O0GQysztxF8fihambx8vYjlEkIyrrtc
 obmRN3hyIHnJC3oTfhEtpyg8ihV8B6XCNCEHXLonEa8QQ4lIluKhDmh+LsydZ/og
 oEFQeBNp8VfRVIx8iT1ixNFAtw85ZcB0X5GpUMxHZ5l4IscD2THCfqge+nbOIoCw
 9gHitbrKEe323DXIAiv/xWiJZNw3DwDyPDIXFLypBH2F6ZRSosBMyFwkj5omj3ey
 FKAL6DLXDylMgbrihSKA381GktPa5Vr/QmlMjr924VVDbQBmgFBiF7MKeSFHoAjT
 AJvWXplp8jIb7c1wo5vOVEa3MqLEW6Me+r2RvbAiDbQbXmVbRGmVgXo0WeZ2xgMq
 yhFAoW4JvgrrAqNdocXxc2DoP7BU51zu4b9qq4aPECUzyODpLYtU/PCDNBuvBcWI
 erGvwQt6iJP5C8NDHz/Q2mEdBgAq5K+qoSDn5CK+pmWDdR26AVRU8bH8Np4JP2ec
 c+qlPjicDRLalAn3jmQa
 =9FK7
 -END PGP SIGNATURE-




Re: Backend status changes continuously

2015-04-21 Thread Igor Cicimov
On 21/04/2015 6:00 PM, Krishna Kumar (Engineering) 
krishna...@flipkart.com wrote:

 Hi all,

 While running the command: ab -n 10 -c 1000 192.168.122.110:80/256,
 the haproxy stats page shows the 4 different backend servers changing status
 between Active up, going down, Active or backup down, Down, Backup
 down, going UP; sometimes all 4 backends are in DOWN state. The result is very
 poor performance reported by 'ab' as compared to running directly against a
 single backend.

 What could be the reason for this continuous state change?

 root@HAPROXY:~# haproxy -vv
 HA-Proxy version 1.5.8 2014/10/31
 Copyright 2000-2014 Willy Tarreau w...@1wt.eu

 Build options :
   TARGET  = linux2628
   CPU = generic
   CC  = gcc
   CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Werror=format-security -D_FORTIFY_SOURCE=2
   OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

 Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

 Encrypted password support via crypt(3): yes
 Built with zlib version : 1.2.7
 Compression algorithms supported : identity, deflate, gzip
 Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
 Running on OpenSSL version : OpenSSL 1.0.1k 8 Jan 2015
 OpenSSL library supports TLS extensions : yes
 OpenSSL library supports SNI : yes
 OpenSSL library supports prefer-server-ciphers : yes
 Built with PCRE version : 8.30 2012-02-04
 PCRE library supports JIT : no (USE_PCRE_JIT not set)
 Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

 Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
 Total: 3 (3 usable), will use epoll.


 Thanks,
 - Krishna Kumar

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#inter
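
In other words, tune the check timing and thresholds on the server lines; an illustrative sketch (values are examples only, not recommendations):

    server nginx-1 192.168.122.101:80 maxconn 15000 check inter 5s fastinter 1s rise 2 fall 3

Under a 1000-connection "ab" run the health checks themselves can start timing out, so also make sure the backends are not simply saturated.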


Re: Backend status changes continuously

2015-04-22 Thread Igor Cicimov
On Wed, Apr 22, 2015 at 3:34 PM, Krishna Kumar (Engineering) 
krishna...@flipkart.com wrote:

 Hi Baptists,

 Sorry I didn't provide more details earlier.


 --
 1. root@HAPROXY:~# haproxy -vv

 HA-Proxy version 1.5.8 2014/10/31
 Copyright 2000-2014 Willy Tarreau w...@1wt.eu

 Build options :
   TARGET  = linux2628
   CPU = generic
   CC  = gcc
   CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
 -Werror=format-security -D_FORTIFY_SOURCE=2
   OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

 Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

 Encrypted password support via crypt(3): yes
 Built with zlib version : 1.2.7
 Compression algorithms supported : identity, deflate, gzip
 Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
 Running on OpenSSL version : OpenSSL 1.0.1k 8 Jan 2015
 OpenSSL library supports TLS extensions : yes
 OpenSSL library supports SNI : yes
 OpenSSL library supports prefer-server-ciphers : yes
 Built with PCRE version : 8.30 2012-02-04
 PCRE library supports JIT : no (USE_PCRE_JIT not set)
 Built with transparent proxy support using: IP_TRANSPARENT
 IPV6_TRANSPARENT IP_FREEBIND

 Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
 Total: 3 (3 usable), will use epoll.

 --
 2. Configuration file:
 global
 daemon
 maxconn  6
 quiet
 nbproc 2
 maxpipes 16384
 user haproxy
 group haproxy
 stats socket /var/run/haproxy.sock mode 600 level admin
 stats timeout 2m

 defaults
 option  dontlognull
 option forwardfor
 option http-server-close
 retries 3
 option redispatch
 maxconn 6
 option splice-auto
 option prefer-last-server
 timeout connect 5000ms
 timeout client 5ms
 timeout server 5ms

 frontend www-http
 bind *:80
 reqadd X-Forwarded-Proto:\ http
 default_backend www-backend

 frontend www-https
 bind *:443 ssl crt /etc/ssl/private/haproxy.pem ciphers
 AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH
 rspadd Strict-Transport-Security:\ max-age=31536000


Just a note: Strict-Transport-Security only takes effect after a browser has
seen the header once. If you want it enforced from the very first visit you
need to submit the site to the HSTS preload list (Chromium maintains it, and
Firefox and the other browsers build theirs from it).

 reqadd X-Forwarded-Proto:\ https
 default_backend www-backend

 userlist stats-auth
 group adminusers admin
 user  admininsecure-password admin
 group readonlyusers user
 user  userinsecure-password user

 backend www-backend
 mode http
 maxconn 6
 stats enable
 stats uri /stats
 acl AUTHhttp_auth(stats-auth)
 acl AUTH_ADMINhttp_auth(stats-auth) admin
 stats http-request auth unless AUTH
 balance roundrobin
 option prefer-last-server
 option forwardfor
 option splice-auto
 option splice-request
 option splice-response
 compression offload
 compression algo gzip
 compression type text/html text/plain text/javascript
 application/javascript application/xml text/css application/octet-stream
 server nginx-1 192.168.122.101:80 maxconn 15000 cookie S1 check
 server nginx-2 192.168.122.102:80 maxconn 15000 cookie S2 check
 server nginx-3 192.168.122.103:80 maxconn 15000 cookie S3 check
 server nginx-4 192.168.122.104:80 maxconn 15000 cookie S4 check


And where are your cookie and health check directives set up? The server lines
carry "cookie S1".."S4" and "check", but there is no "cookie <name> insert" and
no "option httpchk" in the backend.
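
A sketch of what appears to be missing (the cookie name and check URL are assumptions):

backend www-backend
    cookie SERVERID insert indirect nocache
    option httpchk HEAD /
    server nginx-1 192.168.122.101:80 maxconn 15000 cookie S1 check

(and the same for the other three server lines)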



 --

 3. A 24 processor Ubuntu system starts 2 nginx VM's (KVM, 2 vcpu, 1GB),
 and 1 haproxy VM (KVM, 2 vcpu, 1GB). 'ab' runs on the host and tests with
 either the haproxy VM, or directly to one of the 2 nginx VM's.

 Sometimes during the test, I also see many nf_conntrack: table full,
 dropping
 packet messages on the host system.

 Thanks.
 - Krishna


 On Tue, Apr 21, 2015 at 1:29 PM, Krishna Kumar (Engineering) 
 krishna...@flipkart.com wrote:

 Hi all,

 While running the command: : ab -n 10 -c 1000 192.168.122.110:80/256
 ,
 the haproxy stats page shows the 4 different backend servers changing
 status
 between Active up, going down, Active or backup down, Down, Backup
 down, going UP, sometimes all 4 backends are in DOWN state. The result is
 very
 poor performance reported by 'ab' as compared to running directly against
 a
 single backend.

 What could be the reason for this continuous state change?

 root@HAPROXY:~# haproxy -vv
 HA-Proxy version 1.5.8 2014/10/31
 Copyright 2000-2014 Willy Tarreau w...@1wt.eu

 Build options :
   TARGET  = linux2628
   CPU = generic
   CC  = gcc
   CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
 -Werror=format-security 

Re: Stick tables and counters persistence

2015-04-16 Thread Igor Cicimov
On Fri, Apr 17, 2015 at 2:26 PM, Dennis Jacobfeuerborn 
denni...@conversis.de wrote:

 On 17.04.2015 02:12, Igor Cicimov wrote:
  Hi all,
 
  Just a quick one, are the stick tables and counters persisted on haproxy
  1.5.11 reload/restart?

 With nbproc=1 yes as long as you use a peers section that contains the
 local host as an entry.

 Regards,
   Dennis




Thanks Dennis, yes that's exactly my use case, i.e. peers and nbproc=1.
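
For reference, a minimal sketch of that setup (the peer name must match the local hostname, or whatever is passed to haproxy with -L; "lb1" and the port are assumptions):

peers mypeers
    peer lb1 127.0.0.1:1024

backend app
    stick-table type ip size 200k expire 30m peers mypeers

On a soft reload (-sf) the old process then pushes its table contents to the new one over that local peer connection.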

Cheers,
Igor


Stick tables and counters persistence

2015-04-16 Thread Igor Cicimov
Hi all,

Just a quick one, are the stick tables and counters persisted on haproxy
1.5.11 reload/restart?

Thanks,
Igor


Re: switching backends based on boolean value

2015-04-16 Thread Igor Cicimov
On Fri, Apr 17, 2015 at 3:26 AM, Dennis Jacobfeuerborn 
denni...@conversis.de wrote:

 Hi,
 I'm trying to find the best way to toggle maintenance mode for a site. I
 have a regular and a maintenance backend defined an I'm using something
 like:

 frontend:
   acl is_maintenance always_false
   use_backend back-maintenance if is_maintenance
   default_backend back

 Since I saw some ACL modifying command for the unix socket I figured
 that I could use those to switch the acl dynamically but apparently
 while there are get/add/del/clear commands there is no actual command to
 set an acl.
 Is there a way to accomplish this kind of dynamic switching?

 Regards,
Dennis


How about putting the maintenance server in the pool as a backup, removing the
real server from the pool when it is due for maintenance, and then putting it
back when finished?
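
A rough sketch of that approach (addresses and the socket path are placeholders), since the runtime socket can toggle servers without touching any acl:

backend back
    server app1 10.0.0.10:80 check
    server maint 10.0.0.99:80 backup

# over the admin socket (needs "stats socket ... level admin" in the global section):
echo "disable server back/app1" | socat stdio /var/run/haproxy.sock
echo "enable server back/app1"  | socat stdio /var/run/haproxy.sock

With app1 disabled, traffic falls through to the backup maintenance server, and comes back once it is re-enabled.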


Re: proxy haproxy has no server available!

2015-04-06 Thread Igor Cicimov
On Tue, Apr 7, 2015 at 3:24 PM, Krishna Kumar Unnikrishnan (Engineering) 
krishna...@flipkart.com wrote:

 Sorry, forgot to mention, this is haproxy version 1.5.11


 On Tue, Apr 7, 2015 at 10:52 AM, Krishna Kumar Unnikrishnan (Engineering)
 krishna...@flipkart.com wrote:

 Hi all,

 I am moving from using LXC to KVM for haproxy on my Debian 7 system. When
 I
 start haproxy, I get this error:
 _
 Apr  7 10:38:22 localhost haproxy[3418]: Proxy haproxy started.
 Apr  7 10:38:24 localhost haproxy[3420]: Server haproxy/nginx-1 is DOWN,
 reason Layer4 timeout, check duration: 2000ms. 1 active and 0 backup
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 Apr  7 10:38:24 localhost haproxy[3419]: Server haproxy/nginx-1 is DOWN,
 reason Layer4 timeout, check duration: 2001ms. 1 active and 0 backup
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 Apr  7 10:38:25 localhost haproxy[3420]: Server haproxy/nginx-2 is DOWN,
 reason Layer4 timeout, check duration: 2001ms. 0 active and 0 backup
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 Apr  7 10:38:25 localhost haproxy[3420]: proxy haproxy has no server
 available!
 Apr  7 10:38:25 localhost haproxy[3419]: Server haproxy/nginx-2 is DOWN,
 reason Layer4 timeout, check duration: 2001ms. 0 active and 0 backup
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 Apr  7 10:38:25 localhost haproxy[3419]: proxy haproxy has no server
 available!

 From outside the haproxy, I get the error:
 # wget 192.168.122.112:80
 --2015-04-07 10:48:47--  http://192.168.122.112/
 Connecting to 192.168.122.112:80... connected.
 HTTP request sent, awaiting response... 503 Service Unavailable
 2015-04-07 10:48:47 ERROR 503: Service Unavailable.
 ___

 The config file is:
 global
 log 127.0.0.1   local0
 log 127.0.0.1   local1 notice
 maxconn  65536
 daemon
 quiet
 nbproc 2
 debug
 user haproxy
 group haproxy

 defaults
 log global
 modehttp
 option  dontlognull
 retries 3
 option redispatch
 maxconn 65536
 timeout connect 5000
 timeout client  5
 timeout server  5

 #listen haproxy 192.168.122.112:80
 listen haproxy *:80
 mode http
 stats enable
 stats auth someuser:somepassword
 balance roundrobin
 option prefer-last-server
 option forwardfor
 option httpchk HEAD /check.txt HTTP/1.0


Check if the above health check is really working. You show that requesting
the root page works, but we don't see you checking the /check.txt file (does
it exist at all?). Run:

$ curl --http1.0 -X HEAD http://192.168.122.101:80/check.txt
$ curl --http1.0 -X HEAD http://192.168.122.102:80/check.txt

from the HAP server.

server nginx-1 192.168.122.101:80 check
 server nginx-2 192.168.122.102:80 check

 BTW, I could not use listen haproxy 192.168.122.112:80, but had to use
 *:80
 as haproxy does not start up with the former. It seems like haproxy
 startup is
 happening ahead of networking.
 __

 I also stopped/restarted haproxy, but I still get the same error at start.

 root@haproxy-2:~# netstat -apn | grep :80
 tcp0  0 0.0.0.0:80  0.0.0.0:*
 LISTEN  3558/haproxy
 ___
 From outside haproxy, I can do a wget/curl to either of the two servers:

 # wget 192.168.122.101:80
 --2015-04-07 10:42:28--  http://192.168.122.101/
 Connecting to 192.168.122.101:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 867 [text/html]
 Saving to: `index.html'

 100%[==] 867 --.-K/s   in
 0s

 2015-04-07 10:42:28 (104 MB/s) - `index.html' saved [867/867]
 ___

 And I can do the same from haproxy:
 root@haproxy-2:~# wget 192.168.122.101
 --2015-04-07 10:43:48--  http://192.168.122.101/
 Connecting to 192.168.122.101:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 867 [text/html]
 Saving to: `index.html'

 100%[==] 867 --.-K/s   in
 0s

 2015-04-07 10:43:48 (80.3 MB/s) - `index.html' saved [867/867]
 ___

 How do I fix this problem?

 Thank you,
 - KK





Fwd: proxy haproxy has no server available!

2015-04-07 Thread Igor Cicimov
Forgot to cc the list.

-- Forwarded message --
From: Igor Cicimov ig...@encompasscorporation.com
Date: Tue, Apr 7, 2015 at 4:25 PM
Subject: Re: proxy haproxy has no server available!
To: Krishna Kumar Unnikrishnan (Engineering) krishna...@flipkart.com




On Tue, Apr 7, 2015 at 3:58 PM, Krishna Kumar Unnikrishnan (Engineering) 
krishna...@flipkart.com wrote:

 Thanks Igor for the suggestion. I get:

 root@haproxy-2:/var/www# curl --http1.0 -X HEAD
 192.168.122.101:80/check.txt
 curl: (18) transfer closed with 168 bytes remaining to read
 root@haproxy-2:/var/www# curl --http1.0 -X HEAD
 192.168.122.102:80/check.txt
 curl: (18) transfer closed with 168 bytes remaining to read

 And without the flags:

 root@haproxy-2:/var/www# curl 192.168.122.102:80/check.txt
 html
 headtitle404 Not Found/title/head
 body bgcolor=white
 centerh1404 Not Found/h1/center
 hrcenternginx/1.6.2/center
 /body
 /html

 Is this the problem? I am not sure how to fix it.


Obviously the given txt file does not exist in your nginx document root
directory. You said you are migrating the setup, so I wonder how this used to
work until now?
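
If it is simply missing, something like this on each nginx box should make the check pass (the docroot path is an assumption, use whatever "root" points to in your server block):

echo OK > /usr/share/nginx/html/check.txt
curl -I http://127.0.0.1/check.txt    # expect HTTP/1.1 200 OK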


 Thanks,
 - KK

 On Tue, Apr 7, 2015 at 11:10 AM, Igor Cicimov 
 ig...@encompasscorporation.com wrote:



 On Tue, Apr 7, 2015 at 3:24 PM, Krishna Kumar Unnikrishnan (Engineering)
 krishna...@flipkart.com wrote:

 Sorry, forgot to mention, this is haproxy version 1.5.11


 On Tue, Apr 7, 2015 at 10:52 AM, Krishna Kumar Unnikrishnan
 (Engineering) krishna...@flipkart.com wrote:

 Hi all,

 I am moving from using LXC to KVM for haproxy on my Debian 7 system.
 When I
 start haproxy, I get this error:
 _
 Apr  7 10:38:22 localhost haproxy[3418]: Proxy haproxy started.
 Apr  7 10:38:24 localhost haproxy[3420]: Server haproxy/nginx-1 is
 DOWN, reason Layer4 timeout, check duration: 2000ms. 1 active and 0 backup
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 Apr  7 10:38:24 localhost haproxy[3419]: Server haproxy/nginx-1 is
 DOWN, reason Layer4 timeout, check duration: 2001ms. 1 active and 0 backup
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 Apr  7 10:38:25 localhost haproxy[3420]: Server haproxy/nginx-2 is
 DOWN, reason Layer4 timeout, check duration: 2001ms. 0 active and 0 backup
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 Apr  7 10:38:25 localhost haproxy[3420]: proxy haproxy has no server
 available!
 Apr  7 10:38:25 localhost haproxy[3419]: Server haproxy/nginx-2 is
 DOWN, reason Layer4 timeout, check duration: 2001ms. 0 active and 0 backup
 servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
 Apr  7 10:38:25 localhost haproxy[3419]: proxy haproxy has no server
 available!

 From outside the haproxy, I get the error:
 # wget 192.168.122.112:80
 --2015-04-07 10:48:47--  http://192.168.122.112/
 Connecting to 192.168.122.112:80... connected.
 HTTP request sent, awaiting response... 503 Service Unavailable
 2015-04-07 10:48:47 ERROR 503: Service Unavailable.
 ___

 The config file is:
 global
 log 127.0.0.1   local0
 log 127.0.0.1   local1 notice
 maxconn  65536
 daemon
 quiet
 nbproc 2
 debug
 user haproxy
 group haproxy

 defaults
 log global
 modehttp
 option  dontlognull
 retries 3
 option redispatch
 maxconn 65536
 timeout connect 5000
 timeout client  5
 timeout server  5

 #listen haproxy 192.168.122.112:80
 listen haproxy *:80
 mode http
 stats enable
 stats auth someuser:somepassword
 balance roundrobin
 option prefer-last-server
 option forwardfor
 option httpchk HEAD /check.txt HTTP/1.0


 Check if the above health check is really working. You show that
 requesting the root page works, but we don't see you checking the /check.txt
 file (does it exist at all?). Run:

 $ curl --http1.0 -X HEAD http://192.168.122.101:80/check.txt
 $ curl --http1.0 -X HEAD http://192.168.122.102:80/check.txt

 from the HAP server.

 server nginx-1 192.168.122.101:80 check
 server nginx-2 192.168.122.102:80 check

 BTW, I could not use listen haproxy 192.168.122.112:80, but had to
 use *:80
 as haproxy does not start up with the former. It seems like haproxy
 startup is
 happening ahead of networking.
 __

 I also stopped/restarted haproxy, but I still get the same error at
 start.

 root@haproxy-2:~# netstat -apn | grep :80
 tcp0  0 0.0.0.0:80  0.0.0.0:*
 LISTEN  3558/haproxy
 ___
 From outside haproxy, I can do a wget/curl to either of the two
 servers:

 # wget 192.168.122.101:80
 --2015-04-07 10:42:28--  http://192.168.122.101/
 Connecting to 192.168.122.101:80... connected.
 HTTP request sent, awaiting

Re: Compression does not seem to work in my setup

2015-04-08 Thread Igor Cicimov
On Wed, Apr 8, 2015 at 3:47 PM, Krishna Kumar Unnikrishnan (Engineering) 
krishna...@flipkart.com wrote:

 Hi all,

 I am trying to use the compression feature, but don't seem to get it
 working when
 trying to curl some text files (16K containing a-zA-Z, also smaller files
 like 1024
 bytes):

 $ curl -o/dev/null -D - http://192.168.122.110:80/TEXT_16K; -H
 Accept-Encoding: gzip
   % Total% Received % Xferd  Average Speed   TimeTime Time
 Current
  Dload  Upload   Total   SpentLeft
 Speed
   0 00 00 0  0  0 --:--:-- --:--:--
 --:--:-- 0HTTP/1.1 200 OK
 Server: nginx/1.6.2
 Date: Wed, 08 Apr 2015 05:00:35 GMT
 *Content-Type: application/octet-stream*
^
^

Well, compare the Content-Type of the file you are returning with the types
specified in your config:

compression type text/html text/plain text/javascript
application/javascript application/xml text/css

application/octet-stream is not on that list, is it?
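
Two ways around it (sketches): either add that type to the list, or better, let nginx serve the test files with a text MIME type. Your TEXT_16K file has no extension, so nginx falls back to its default_type of application/octet-stream.

compression type text/html text/plain text/css text/javascript application/javascript application/xml application/octet-stream

Renaming the test file to TEXT_16K.txt, so it is served as text/plain, would also do it without touching the haproxy config.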

Content-Length: 16384
 Last-Modified: Wed, 08 Apr 2015 04:45:12 GMT
 ETag: 5524b258-4000
 Accept-Ranges: bytes

 100 16384  100 163840 0  4274k  0 --:--:-- --:--:-- --:--:--
 5333k

 My configuration file has these parameters:

 
 compression algo gzip
 *compression type text/html text/plain text/javascript
 application/javascript application/xml text/css*
 server nginx-1 192.168.122.101:80 maxconn 15000 check
 server nginx-2 192.168.122.102:80 maxconn 15000 check
 .
 ..

 Tcpdump at the proxy shows:

 GET /TEXT_16K HTTP/1.1
 User-Agent: curl/7.26.0
 Host: 192.168.122.110
 Accept: */*
 Accept-Encoding: gzip
 X-Forwarded-For: 192.168.122.1


 HTTP/1.1 200 OK
 Server: nginx/1.6.2
 Date: Wed, 08 Apr 2015 05:25:09 GMT
 Content-Type: application/octet-stream
 Content-Length: 16384
 Last-Modified: Wed, 08 Apr 2015 04:28:01 GMT
 Connection: keep-alive
 ETag: 5524ae51-4000
 Accept-Ranges: bytes

 HTTP/1.1 200 OK
 Server: nginx/1.6.2
 Date: Wed, 08 Apr 2015 05:25:09 GMT
 Content-Type: application/octet-stream
 Content-Length: 16384
 Last-Modified: Wed, 08 Apr 2015 04:28:01 GMT
 Connection: keep-alive
 ETag: 5524ae51-4000
 Accept-Ranges: bytes

 haproxy build info:
 HA-Proxy version 1.5.8 2014/10/31
 Copyright 2000-2014 Willy Tarreau w...@1wt.eu

 Build options :
   TARGET  = linux2628
   CPU = generic
   CC  = gcc
   CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
 -Werror=format-security -D_FORTIFY_SOURCE=2
   OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

 Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

 Encrypted password support via crypt(3): yes
 Built with zlib version : 1.2.7
 Compression algorithms supported : identity, deflate, gzip
 Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
 Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
 OpenSSL library supports TLS extensions : yes
 OpenSSL library supports SNI : yes
 OpenSSL library supports prefer-server-ciphers : yes
 Built with PCRE version : 8.30 2012-02-04
 PCRE library supports JIT : no (USE_PCRE_JIT not set)
 Built with transparent proxy support using: IP_TRANSPARENT
 IPV6_TRANSPARENT IP_FREEBIND

 Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
 Total: 3 (3 usable), will use epoll.

 How can I fix this? Thanks for any help,

 Regards,
 - KK




-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com http://encompasscorporation.com/
w. encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: HAProxy responding with NOSRV SC

2015-06-04 Thread Igor Cicimov
On Thu, Jun 4, 2015 at 12:21 PM, RAKESH P B pb.rakes...@gmail.com wrote:

 Hi All,

 I have a strange situation where requests to my HAProxy are returning with
 a 503 error. HAProxy logs shows that a NOSRV error: for POST requests from
 application RSET service.

 api-https-in~ api-https-in/NOSRV -1/-1/-1/-1/40 503 1237 - - SC--
 15/0/0/0/0 0/0 POST /PATH HTTP/1.1


According to the docs the SC connection termination flags mean:

 SC   The server or an equipment between it and haproxy explicitly refused
      the TCP connection (the proxy received a TCP RST or an ICMP message
      in return). Under some circumstances, it can also be the network
      stack telling the proxy that the server is unreachable (eg: no route,
      or no ARP response on local network). When this happens in HTTP mode,
      the status code is likely a 502 or 503 here.

So if you are confident that you are looking at the same type of requests,
in the same time period, for both cases you are showing (the failing ones and
the working ones), then you should turn your attention to the networking side
of things. Make sure nothing is blocking the connections between HAProxy and
the backend (i.e. can you at least telnet to port 80 from the HAProxy host to
the backend?), confirm that your health check "HEAD /test.jsp HTTP/1.0" really
works, confirm that your backend understands and actually uses the
X-Forwarded-Proto header, and confirm that your backend has capacity for 8096
simultaneous connections, and so on.
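
A couple of quick checks to run from the HAProxy host while narrowing it down (X.X.X.X being the backend address from your config):

telnet X.X.X.X 80
curl -I -H "Host: example.com" http://X.X.X.X/test.jsp

If the connect is refused or the health-check URL does not answer, that is your problem right there.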



 During this time, the backend server was confirmed up and was receiving
 traffic for GET requests from web browser and also POST request from REST
 client  POSTMAN rest client.


  api-https-in~ name1/name 669/0/2/4/675 200 513 - -  2/2/0/1/0 0/0
 GET /PATH HTTP/1.1

  api-https-in~ name1/name 336/0/1/4/341 415 95 - -  2/2/0/1/0 0/0
 POST /PATH HTTP/1.1


 Here is my configuration file

 frontend http-in
 bind *:80
 redirect scheme https code 301 if !{ ssl_fc }
 maxconn 8096


 frontend api-https-in
 bind X.X.X.X:443 ssl crt PATH1
 reqadd X-Forwarded-Proto:\ https
 acl host_soap hdr_end(host) -i example.com
 use_backend name1 if host_soap
 acl secure dst_port eq 44



 backend name1

 mode http
 option httpchk  HEAD /test.jsp HTTP/1.0
 appsession JSESSIONID len 32 timeout 1800s
 server  name X.X.X.X:80




-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com http://encompasscorporation.com/
w. encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: HAProxy for Statis IP redundancy

2015-08-16 Thread Igor Cicimov
On 16/08/2015 11:21 PM, Mitchell Gurspan mitch...@visualjobmatch.com
wrote:

 Hi –

 Would you be able to tell me if HAProxy can be used to solve the
following problem?





 I host an iis 7.5) windows site on a comcast business static IP (in
office). the internet goes down sometimes and I’d like redundancy.



 I cant find the proper way to add a second internet provider/static IP
for failover when the primary line goes down.



 I thought maybe DNS round robin but it looks like an IIS site cannot have
multiple bindings for this



 Any thoughts? Is there a standard architecture or method for Internet
connectivity redundancy for one website on one server ? Cost is an issue.



 Thanks!



 Mitchell

 Visualjobmatch.com

I can't see what this has to do with haproxy; this is something you set up in
your infrastructure. Get a router with two WAN ports, each connected to a
different ISP. For DIY you can set up a Linux box as a router with iptables and
policy routing. Google will show you many examples of how to do it.


Re: HTTPS to HTTP reverse proxy

2015-08-11 Thread Igor Cicimov
On Tue, Aug 11, 2015 at 12:10 PM, Roman Gelfand rgelfa...@gmail.com wrote:

 I am publishing horde webmail application.  The horde itself is served
 internally via http protocol on apache.  Please, see the configuration,
 below.  The issue seems to be with css and image files as formatting is out
 wack.  Please note, accessing the http site from intranet works.

 global
   log 127.0.0.1 local0 debug
   tune.ssl.default-dh-param 2048
   maxconn 4096
   user proxy
   group proxy
   daemon
   #debug
   #quiet

 defaults
   log global
   mode  http
   option forwardfor
   option  httplog
   option  dontlognull
   option  redispatch
   option http-server-close
   retries 3
   maxconn 2000
   timeout connect 5000
   timeout client 5
   timeout server 5

 frontend farm_test_ssl
   mode  http
   bind 0.0.0.0:443 ssl crt /etc/ssl/certs/cs.pem crt
 /etc/ssl/certs/remote.pem
   use_backend bk_cs_cert if { ssl_fc_sni cs.localdom.com } # content
 switching based on SNI
   use_backend bk_remote_cert if { ssl_fc_sni remote.localdom.com } #
 content switching based on SNI

 backend bk_cs_cert
   mode http
   server cs 192.168.8.108:80 check ssl verify none

 backend bk_remote_cert
   mode http
   server remail 192.168.8.166:80 check ssl verify none



Roman,

My guess would be mixed content, which every modern browser will block
these days: you request a page over https, but the response page has
http links for the css and js files, which the browser will refuse to load.
You can confirm that using the developer tools in Chrome or Firefox, just
to make sure this is the case.

More details about SSL offloading can be found here:
http://blog.haproxy.com/2013/02/26/ssl-offloading-impact-on-web-applications/

In short, you need to tell the backend apache that the content needs to be
served via ssl. That is usually done by providing some headers in HAProxy:

   http-request set-header X-Forwarded-Proto https if  { ssl_fc }

then in Apache I have:

SetEnvIfNoCase X-Forwarded-Proto https HTTPS=on
# Ensure pages that did not come in over ssl get redirected to https
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]

Hope this helps, in case I'm right that is :-).
Igor


Re: haproxy can't bind to mysql port

2015-07-23 Thread Igor Cicimov
On Fri, Jul 24, 2015 at 1:46 PM, Tim Dunphy bluethu...@gmail.com wrote:

 Hi all,

  I'm attempting to setup mysql load balancing using HA/Proxy. Seemed
 pretty straight forward at first.

 I'm using Amazon ec2 for all nodes. First I made sure that the
 haproxy nodes could contact the mysql boxes by opening up the security
 group from the mysql boxes to the haproxy ones on port 3306.


How did you do that? By adding haproxy's security group or haproxy's IP
to the mysql security group's inbound rule? If an IP, which one is it?



 I setup the following config:

 global
 log 127.0.0.1 local0 notice
 user haproxy
 group haproxy

 defaults
 log global
 retries 2
 timeout connect 3000
 timeout server 5000
 timeout client 5000

 listen mysql-cluster
 bind 127.0.0.1:3306

mode tcp
 option mysql-check user haproxy_check
 balance roundrobin
 server mysql-1 10.10.10.10:3306 check
 server mysql-2 10.10.10.11:3306 check

 listen 0.0.0.0:80
 mode http
 stats enable
 stats uri /
 stats realm Strictly\ Private
 stats auth admin:secret

 And ensured that haproxy could bind to non local IP's:


Sorry, but which non-local IP is that? How many interfaces does haproxy have? Is
it connected to the 10.10.10.0/24 network at all?

It looks to me like you are trying to use VIPs or something similar, which does
not work the same way as on a normal LAN. Don't forget that in AWS we are
dealing with an SDN, so giving lo or any other interface a second IP address
locally on the instance (with the ip tool, say) will simply not work. That IP is
not visible to the SDN and the interface will never send or receive any traffic
for it. You need that IP allocated to the haproxy instance's interface (no
option for lo here) via the EC2 console or the aws cli tool.
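
For example, adding a secondary private IP to the instance's interface with the CLI looks roughly like this (the ENI id and the address are placeholders):

aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0123456789abcdef0 \
    --private-ip-addresses 10.10.10.20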



 [root@ha1:/etc/haproxy] #grep ipv4 /etc/sysctl.conf
 net.ipv4.ip_nonlocal_bind=1

 [root@ha1:/etc/haproxy] #sysctl -p
 net.ipv4.ip_nonlocal_bind = 1

 Yet when I try to start up haproxy I get the following result:

 [root@ha1:/etc/haproxy] #systemctl status haproxy
 haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled)
Active: inactive (dead) since Fri 2015-07-24 03:44:18 UTC; 9s ago
   Process: 25034 ExecStart=/usr/sbin/haproxy-systemd-wrapper -f
 /etc/haproxy/haproxy.cfg -p /run/haproxy.pid (code=exited, status=0/SUCCESS)
  Main PID: 25034 (code=exited, status=0/SUCCESS)

 Jul 24 03:44:18 ha1 systemd[1]: Starting HAProxy Load Balancer...
 Jul 24 03:44:18 ha1 systemd[1]: Started HAProxy Load Balancer.
 Jul 24 03:44:18 ha1 haproxy-systemd-wrapper[25034]:
 haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f
 /etc/haproxy/hap...id -Ds
 Jul 24 03:44:18 ha1 haproxy-systemd-wrapper[25034]: [ALERT] 204/034418
 (25035) : *Starting proxy mysql-cluster: cannot bind s...:3306]*
 Jul 24 03:44:18 ha1* h*aproxy-systemd-wrapper[25034]:
 haproxy-systemd-wrapper: exit, haproxy RC=256
 Hint: Some lines were ellipsized, use -l to show in full.



 So it seems that haproxy is expecting to have mysql already listening on
 port 3306. But mysql is runnign on two external nodes with port 3306 open
 to the two haproxy machines.

 What am I doing wrong? And how can I get this to work?

 Thanks,
 TIm
 --
 GPG me!!

 gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B




-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com http://encompasscorporation.com/
w. encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: haproxy can't bind to mysql port

2015-07-25 Thread Igor Cicimov
By "run" I meant you have to start it as the root user, which you are doing
anyway. Can you run:

# nc -l -p 80

as root just to confirm you can bind to port 80?
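
One more thing worth checking while you are at it: in the quoted config the stats proxy is declared as "listen 0.0.0.0:80" with no bind line. With the old "listen <name> <address:port>" syntax that single argument is taken as the proxy name, so nothing actually binds to port 80, which would explain the stats page never coming up. A sketch with an explicit bind:

listen stats
    bind 0.0.0.0:80
    mode http
    stats enable
    stats uri /
    stats realm Strictly\ Private
    stats auth admin:secret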
On 25/07/2015 2:10 PM, Igor Cicimov ig...@encompasscorporation.com
wrote:

 You need to run haproxy as root to bind to ports lower than 1024
 On 25/07/2015 1:36 PM, Tim Dunphy bluethu...@gmail.com wrote:

 Hi Yuan,

 Nice.
 Do you use selinux in prod.
 regards,
 ; Yuan


 Yep! Actually I use it every chance I get. Prod/stage/dev and my own
 hobby environments. And right now actually what I was discussing was a
 hobby environment.

 And actually if I could bother you guys one more time, I do have one more
 issue to solve. LOL

 And this time it's guaranteed not to be an SELinux issue. Because I tried
 running haproxy with SELInux on and off this time.

 But what's happening now, is that HA/Proxy is not creating the http port
 for the 'stats' interface. I've setup stats to listen on port 80. But for
 some reason that's not happening.

 Here's my config one more time, with the trouble part in bold:

 global
 log 127.0.0.1 local0 notice
 user haproxy
 group haproxy

 defaults
 log global
 retries 2
 timeout connect 3000
 timeout server 5000
 timeout client 5000

 listen mysql-cluster
 bind 0.0.0.0:3306
 mode tcp
 option mysql-check user haproxy_check
 balance roundrobin
 server mysql-1 52.3.28.48:3306 check
 server mysql-2 52.2.0.176:3306 check








 listen 0.0.0.0:80
 mode http
 stats enable
 stats uri /
 stats realm Strictly\ Private
 stats auth admin:secret

 Currently haproxy is listening on the first port specified (3306) but
 not listening on port 80.

 Observe:

 [root@ha1:/etc/haproxy] #lsof -i :3306
 COMMAND   PIDUSER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
 *haproxy 11653 haproxy4u  IPv4 7145270  0t0  TCP *:mysql (LISTEN)*

 [root@ha1:/etc/haproxy] #lsof -i :80
 [root@ha1:/etc/haproxy] #

 [root@ha1:/etc/haproxy] #telnet localhost 80
 Trying 127.0.0.1...
 telnet: connect to address 127.0.0.1: Connection refused

 Port 80 simply isn't listening.

 And this time, I can't blame it on SELinux being on:

 [root@ha1:/etc/haproxy] #getenforce
 Permissive

 I've grepped thru /var/log/messages but not turned up any clues to this
 one.

 And I really would like to get the stats interface up and running.

 Any thoughts here? I'm wondering what I can do to get stats working.

 Thanks,
 Tim



 On Fri, Jul 24, 2015 at 10:52 PM, Gmail longwuy...@gmail.com wrote:

 Nice.
 Do you use selinux in prod.
 regards,
 ; Yuan

 On 07/25/2015 09:17 AM, Tim Dunphy wrote:

 Bingo!!!

 The problem was with SELinux. Not sure what took me so long to think of
 it...!!!

 So set the mysql listener back to port 3306. Turned off SELinux with
 setenforce 0. Then it started right up!!! And port 3306 was listening.

 Then I consulted with audit2why and saw the following:

 type=AVC msg=audit(1437786617.963:28856863): avc:  denied  {
 name_connect }
 for  pid=29175 comm=haproxy dest=3306
 scontext=system_u:system_r:haproxy_t:s0
 tcontext=system_u:object_r:mysqld_port_t:s0 tclass=tcp_socket

  Was caused by:
  The boolean haproxy_connect_any was set incorrectly.
  Description:
  Allow haproxy to connect any

  Allow access by executing:
  # *setsebool -P haproxy_connect_any 1*


 I just ran that command you see above in bold, and then all was right
 with
 the world.

 [root@ha1:/etc/haproxy] #systemctl status haproxy
 haproxy.service - HAProxy Load Balancer
 Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled)
 Active: active (running) since Sat 2015-07-25 01:14:53 UTC; 33s ago
   Main PID: 30618 (haproxy-systemd)
 CGroup: /system.slice/haproxy.service
 ├─30618 /usr/sbin/haproxy-systemd-wrapper -f
 /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
 ├─30619 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
 /run/haproxy.pid -Ds
 └─30620 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
 /run/haproxy.pid -Ds

 Jul 25 01:14:53 ha1 systemd[1]: Starting HAProxy Load Balancer...
 Jul 25 01:14:53 ha1 systemd[1]: Started HAProxy Load Balancer.
 Jul 25 01:14:53 ha1 haproxy-systemd-wrapper[30618]:
 haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f
 /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

 [root@ha1:/etc/haproxy] #lsof -i :3306
 COMMAND   PIDUSER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
 haproxy 30620 haproxy1u  IPv4 7075172  0t0  TCP
 ha1.example.com:55499-ec2-52-2-0-xxx.compute-1.amazonaws.com:mysql
 (SYN_SENT)
 haproxy 30620 haproxy4u  IPv4 7074731  0t0  TCP *:mysql (LISTEN)


 Thanks for nudging me in the right direction. All I had to hear was the
 word 'selinux' and from there it all fell into place!

 Thanks!!
 Tim

 On Fri, Jul 24, 2015 at 8:20 PM, Gmail longwuy...@gmail.com wrote:

  I could be completely wrong here and I am curious to know the answer

Re: haproxy can't bind to mysql port

2015-07-24 Thread Igor Cicimov
You need to run haproxy as root to bind to ports lower than 1024
On 25/07/2015 1:36 PM, Tim Dunphy bluethu...@gmail.com wrote:

 Hi Yuan,

 Nice.
 Do you use selinux in prod.
 regards,
 ; Yuan


 Yep! Actually I use it every chance I get. Prod/stage/dev and my own hobby
 environments. And right now actually what I was discussing was a hobby
 environment.

 And actually if I could bother you guys one more time, I do have one more
 issue to solve. LOL

 And this time it's guaranteed not to be an SELinux issue. Because I tried
 running haproxy with SELInux on and off this time.

 But what's happening now, is that HA/Proxy is not creating the http port
 for the 'stats' interface. I've setup stats to listen on port 80. But for
 some reason that's not happening.

 Here's my config one more time, with the trouble part in bold:

 global
 log 127.0.0.1 local0 notice
 user haproxy
 group haproxy

 defaults
 log global
 retries 2
 timeout connect 3000
 timeout server 5000
 timeout client 5000

 listen mysql-cluster
 bind 0.0.0.0:3306
 mode tcp
 option mysql-check user haproxy_check
 balance roundrobin
 server mysql-1 52.3.28.48:3306 check
 server mysql-2 52.2.0.176:3306 check








 listen 0.0.0.0:80
 mode http
 stats enable
 stats uri /
 stats realm Strictly\ Private
 stats auth admin:secret

 Currently haproxy is listening on the first port specified (3306) but
 not listening on port 80.

 Observe:

 [root@ha1:/etc/haproxy] #lsof -i :3306
 COMMAND   PIDUSER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
 *haproxy 11653 haproxy4u  IPv4 7145270  0t0  TCP *:mysql (LISTEN)*

 [root@ha1:/etc/haproxy] #lsof -i :80
 [root@ha1:/etc/haproxy] #

 [root@ha1:/etc/haproxy] #telnet localhost 80
 Trying 127.0.0.1...
 telnet: connect to address 127.0.0.1: Connection refused

 Port 80 simply isn't listening.

 And this time, I can't blame it on SELinux being on:

 [root@ha1:/etc/haproxy] #getenforce
 Permissive

 I've grepped thru /var/log/messages but not turned up any clues to this
 one.

 And I really would like to get the stats interface up and running.

 Any thoughts here? I'm wondering what I can do to get stats working.

 Thanks,
 Tim



 On Fri, Jul 24, 2015 at 10:52 PM, Gmail longwuy...@gmail.com wrote:

 Nice.
 Do you use selinux in prod.
 regards,
 ; Yuan

 On 07/25/2015 09:17 AM, Tim Dunphy wrote:

 Bingo!!!

 The problem was with SELinux. Not sure what took me so long to think of
 it...!!!

 So set the mysql listener back to port 3306. Turned off SELinux with
 setenforce 0. Then it started right up!!! And port 3306 was listening.

 Then I consulted with audit2why and saw the following:

 type=AVC msg=audit(1437786617.963:28856863): avc:  denied  {
 name_connect }
 for  pid=29175 comm=haproxy dest=3306
 scontext=system_u:system_r:haproxy_t:s0
 tcontext=system_u:object_r:mysqld_port_t:s0 tclass=tcp_socket

  Was caused by:
  The boolean haproxy_connect_any was set incorrectly.
  Description:
  Allow haproxy to connect any

  Allow access by executing:
  # *setsebool -P haproxy_connect_any 1*


 I just ran that command you see above in bold, and then all was right
 with
 the world.

 [root@ha1:/etc/haproxy] #systemctl status haproxy
 haproxy.service - HAProxy Load Balancer
 Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled)
 Active: active (running) since Sat 2015-07-25 01:14:53 UTC; 33s ago
   Main PID: 30618 (haproxy-systemd)
 CGroup: /system.slice/haproxy.service
 ├─30618 /usr/sbin/haproxy-systemd-wrapper -f
 /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
 ├─30619 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
 /run/haproxy.pid -Ds
 └─30620 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
 /run/haproxy.pid -Ds

 Jul 25 01:14:53 ha1 systemd[1]: Starting HAProxy Load Balancer...
 Jul 25 01:14:53 ha1 systemd[1]: Started HAProxy Load Balancer.
 Jul 25 01:14:53 ha1 haproxy-systemd-wrapper[30618]:
 haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f
 /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

 [root@ha1:/etc/haproxy] #lsof -i :3306
 COMMAND   PIDUSER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
 haproxy 30620 haproxy1u  IPv4 7075172  0t0  TCP
 ha1.example.com:55499-ec2-52-2-0-xxx.compute-1.amazonaws.com:mysql
 (SYN_SENT)
 haproxy 30620 haproxy4u  IPv4 7074731  0t0  TCP *:mysql (LISTEN)


 Thanks for nudging me in the right direction. All I had to hear was the
 word 'selinux' and from there it all fell into place!

 Thanks!!
 Tim

 On Fri, Jul 24, 2015 at 8:20 PM, Gmail longwuy...@gmail.com wrote:

  I could be completely wrong here and I am curious to know the answer
 myself. Please don't take this as a solution, just my thoughts.

 First, you can not use backend ip-address of 10.x.x.x subnet because
 each
 account's VPC is seggregated. If you do want to use 10.X.X.X ipadress
 you
 have to setup a 

Re: acl regex

2015-11-12 Thread Igor Cicimov
On Thu, Nov 12, 2015 at 6:44 PM, Guillaume Bourque <
guillaume.bour...@logisoftech.com> wrote:

> Hi,
>
> thanks for the suggestion but it did not work for me.   I tried
>
>acl fr_top  url_reg/?lang=
>acl fr_top  url_reg/?lang=$
> # off acl fr_topurlp_reg(lang\=$,?) -m
> found
> # off acl fr_topurlp_reg(lang\=$,?) -m
> found
>
> but with no luck
>
> thanks
>
> ---
> Guillaume Bourque, B.Sc.,
> On 2015-11-12 at 02:18, Igor Cicimov <ig...@encompasscorporation.com>
> wrote:
>
>
> On 12/11/2015 5:30 PM, "Guillaume Bourque" <
> guillaume.bour...@logisoftech.com> wrote:
> >
> > Hello Bryan
> >
> > I’m running haproxy 1.5.4 and I can’t find any example on how to user
> req.uri if you could give a examples on how to match a specific query to
> redirect to another
> >
> > From http://domain/pages/store.php?lang=fr   to http://domain/store/
> >
> > That would be great !
> >
> > TIA
> >
> >
> >
> > ---
> > Guillaume Bourque, B.Sc.,
> >
> > On 2015-11-12 at 00:42, Bryan Talbot <bryan.tal...@ijji.com> wrote:
> >
> >> On Wed, Nov 11, 2015 at 8:43 PM, Guillaume Bourque <
> guillaume.bour...@logisoftech.com> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> I can’t create an acl that will match this
> >>>
> >>> http://domain/?lang=
> >>>
> >>> I tried
> >>>
> >>> acl fr_top  path_reg^/.lang\=$
> >>> acl fr_top  path_reg^/\?lang\=$
> >>>
> >>> acl fr_toppath_beg/?lang\=$
> >>>
> >>>
> >>
> >>
> >> You can't match the query string with the 'path' matcher. Try 'req.uri'
> or 'query' if you're using 1.6.
> >>
> >>
> >
> Try this:
>
> acl fr_top  url_reg   /pages/store\.php\?lang=fr
>
>
>
Ok, my last try :-)

http-request redirect location /store code 301 if { capture.req.uri -m sub lang= }


Re: acl regex

2015-11-11 Thread Igor Cicimov
On 12/11/2015 5:30 PM, "Guillaume Bourque" <
guillaume.bour...@logisoftech.com> wrote:
>
> Hello Bryan
>
> I’m running haproxy 1.5.4 and I can’t find any example on how to user
req.uri if you could give a examples on how to match a specific query to
redirect to another
>
> From http://domain/pages/store.php?lang=fr   to http://domain/store/
>
> That would be great !
>
> TIA
>
>
>
> ---
> Guillaume Bourque, B.Sc.,
>
> On 2015-11-12 at 00:42, Bryan Talbot wrote:
>
>> On Wed, Nov 11, 2015 at 8:43 PM, Guillaume Bourque <
guillaume.bour...@logisoftech.com> wrote:
>>>
>>> Hi all,
>>>
>>> I can’t create an acl that will match this
>>>
>>> http://domain/?lang=
>>>
>>> I tried
>>>
>>> acl fr_top  path_reg^/.lang\=$
>>> acl fr_top  path_reg^/\?lang\=$
>>>
>>> acl fr_toppath_beg/?lang\=$
>>>
>>>
>>
>>
>> You can't match the query string with the 'path' matcher. Try 'req.uri'
or 'query' if you're using 1.6.
>>
>>
>
Try this:

acl fr_top  url_reg   /pages/store\.php\?lang=fr


Re: HAProxy and backend on the same box

2015-11-12 Thread Igor Cicimov
On 13/11/2015 1:04 AM, "jaleel"  wrote:
>
> Hello,
>
> I am trying to setup the following for deployment
>
> I have 2 servers.
> server1: eth0:10.200.2.211 (255.255.252.0)
> eth1: 192.168.10.10 (255.255.255.0)
> server2: eth0: 10.200.2.242 (255.255.252.0)
> eth1: 192.168.20.10 (255.255.255.0)
>
> VRRP between server1 and server2 eth0. VRIP is 10.200.3.84
>
>
> my haproxy config:
> --
> listen  ingress_traffic 10.200.3.84:7000
> mode tcp
> source 0.0.0.0 usesrc clientip
> balance roundrobin
> server server1 192.168.10.10:9001
> server server2 192.168.20.10:9001
>
> Iptables:
> ---
> iptables -t mangle -N DIVERT
> iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
> iptables -t mangle -A DIVERT -j MARK --set-mark 1
> iptables -t mangle -A DIVERT -j ACCEPT
>
> ip rule add fwmark 1 lookup 100
> ip route add local 0.0.0.0/0 dev lo table 100
>
>
> Now 10.200.2.211 is the master and owns VRIP 10.200.3.84
>
> When traffic comes to 10.200.3.84:7000, the routing to server2 is
successful and end-to-end communication is fine. But the response from
server1 (192.168.10.10:9001) is not reaching HAProxy.
>
> I cannot have 3rd box for HAProxy alone.
>
> Any suggestions
>
> Thank you
> -Abdul Jaleel
>
>
The backends need to have haproxy set as gateway.
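
Concretely: with "source 0.0.0.0 usesrc clientip" the backend sees the real client address, so its replies have to be routed back through the haproxy node instead of going straight out its own default gateway. A rough sketch on the backend host (the address is a placeholder for whichever IP of the haproxy node that backend can reach):

ip route add default via <haproxy-address-on-the-backend-network>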


Re: Selecting back end from host header

2015-11-14 Thread Igor Cicimov
On Sun, Nov 15, 2015 at 1:21 AM, SL  wrote:

> Hi,
>
> We have quite a large number of backends, and are selecting which back end
> to use based on the host specified in the request.  (Note these are not
> loadbalanced, we have to target them individually).
>
> Currently we are doing this with ACLs, e.g. for each:
>
> acl svr1_request hdr_beg(host) -i svr1
>
> then:
>
> use_backend svr1 if svr1_request
>
> (An example request host in this case would be svr1.example.com)
>
> Using ACLs like this means that we have a large number of repeated ACLs
> and use_backends.  It's a bit cumbersome, difficult to maintain, and I
> suspect not very efficient.
>
> Is there a better way to do this?  What would be ideal, is some way to
> take the subdomain of the request host, and simply select a backend whose
> name matched, but I don't know of any way to do that.  Is such a thing
> possible?
>
> Thank you
>
> S
>
>
http://blog.haproxy.com/2015/01/26/web-application-name-to-backend-mapping-in-haproxy/
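
A sketch of the approach described there (the map file name is an assumption):

# /etc/haproxy/domain2backend.map, one "<host> <backend>" pair per line:
#   svr1.example.com svr1
#   svr2.example.com svr2

frontend ft_web
    bind *:80
    use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/domain2backend.map,bk_default)]

One map lookup replaces the long list of acl/use_backend pairs, and bk_default catches anything not present in the file.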


Re: Need some help configuring backend health checks

2015-10-30 Thread Igor Cicimov
On 30/10/2015 4:48 PM, "Daren Sefcik"  wrote:
>
> So I think those links were the right idea and I have been trying
different configurations but am not quite there and am hoping somebody can
offer a bit more guidance.
>
> So when I telnet to the icap server I type in the OPTIONS line followed
by (2) return key presses and then it returns the ICAP text, below is my
telent session output
>
> ===
>
> $ telnet 10.1.4.153 1344
> Trying 10.1.4.153...
> Connected to 10.1.4.153.
> Escape character is '^]'.
> OPTIONS icap://127.0.0.1:1344/respmod ICAP/1.0
>
> ICAP/1.0 200 OK
> ISTAG: "5BDEEEA9-12E4-2"
> Service: Diladele Web Safety 4.2.0.CBF4
> Service-ID: qlproxy
> Methods: RESPMOD
> Options-TTL: 3600
> Max-Connections: 15000
> Allow: 204
> Preview: 4096
> Transfer-Preview: *
> Encapsulated: null-body=0
> Connection: close
>
> 
>
>
> Here is what I have tried in the backend configurations
>
> option tcp-check
> tcp-check send OPTIONS\ icap\:\/\/127\.0\.0\.1\:1344\/respmod\
ICAP\/1\.0\r\n\
> tcp-check send \r\n
> tcp-check expect string ICAP\/1\.0\ 200\ OK
>
>
> but it is still not working, I suspect I need to use some type of regex
or such. Hoping somebody can help me along with this.
>
> TIA..
>
>
> On Mon, Oct 19, 2015 at 7:42 AM, Daren Sefcik 
wrote:
>>
>> Thanks Jarno, I am still not sure how I can apply this to each server
using a different port but will poke around at it and see if I can figure
it out.
>>
>> On Mon, Oct 19, 2015 at 1:04 AM, Jarno Huuskonen 
wrote:
>>>
>>> Hi,
>>>
>>> On Sun, Oct 18, Daren Sefcik wrote:
>>> > I have an ICAP server backend with servers that each listen on
different
>>> > ports, can anyone offer some advice on how to configure health checks
for
>>> > it? I am currently using basic but that really doesn't help if the
service
>>> > is not responding.
>>> >
>>> > Here is my haproxy config for the backend:
>>> >
>>> > backend HTPL_CONT_FILTER_tcp_ipvANY
>>> > mode tcp
>>> > balance roundrobin
>>> > timeout connect 5
>>> > timeout server 5
>>> > retries 3
>>> > server HTPL-WEB-01_10.1.4.153 10.1.4.153:1344 check inter 5000
weight 200
>>> > maxconn 200 fastinter 1000 fall 5
>>> > server HTPL-WEB-02_10.1.4.154 10.1.4.154:1344 check inter 5000
weight 200
>>> > maxconn 200 fastinter 1000 fall 5
>>> > server HTPL-WEB-02_10.1.4.155_01 10.1.4.155:8102 check inter 5000
weight
>>> > 200 maxconn 200 fastinter 1000 fall 5
>>> > server HTPL-WEB-02_10.1.4.155_02 10.1.4.155:8202 check inter 5000
weight
>>> > 200 maxconn 200 fastinter 1000 fall 5
>>>
>>> Do the icap servers (squid+diladele?) respond to something like this:
>>> https://support.symantec.com/en_US/article.TECH220980.html
>>> or https://exchange.icinga.org/oldmonex/1733-check_icap.pl/check_icap.pl
>>>
>>> Maybe you can use tcp-check to send icap request and look for
>>> "ICAP/1.0 200" response:
>>>
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#tcp-check%20connect
>>> http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-check/
>>>
>>> -Jarno
>>>
>>> --
>>> Jarno Huuskonen
>>
>>
>
Since your telnet session is on port 1344, maybe

tcp-check connect port 1344

before the send command.


Re: questions for haproxy 1.5

2015-10-30 Thread Igor Cicimov
On 31/10/2015 2:03 AM, "Igor Cicimov" <ig...@encompasscorporation.com>
wrote:
>
>
> On 30/10/2015 11:18 PM, "Labedan, Alain" <alain.labe...@cgi.com> wrote:
> >
> > Hi,
> >
> >
> >
> > I have HAPROXY in front of servers backend which are load balanced.
> >
> >
> >
> > -  For terminated SSL haproxy, I want HAproxy give the good
certificate to the client associated with the good domain .
> >
> > I’ve not found how to configure HA for that:  I ‘ve 4 domains
associated with one public IP in front . So how declare and use the 4
 certificates SSL for the 4 domains ?
> >
> >
> >
> > -  How use affinity session ? is it SERVERID insert ?
> >
> >
> >
> >
> >
> > Thanks for your answer .
> >
> > Bests regards .
> >
> >
> >
> > Alain Labedan
> >
> >
> >
> >
> >
> This might give you an idea http://blog.haproxy.com/category/ssl/
>
> Just use sni in the frontend (google this many examples out there) and
based on acl send the traffic to one of 4 dummy backends as in the example,
something like this
>
> acl domain1 req_ssl_sni -i www.domain1.com
> use_backend bk_domain1_sock  if domain1
>
> Then each backend and listener will bind to a socket as in the example
and each listen section will have its own certificate and point to
appropriate backend.
>
> Just a theory not sure if it will work haven't tested. Of course you need
modern browsers with sni support in order for this to work.

Sorry, just noticed the link didn't copy properly; the correct one is given below:
http://blog.haproxy.com/2015/07/15/serving-ecc-and-rsa-certificates-on-same-ip-with-haproxy/


Re: questions for haproxy 1.5

2015-10-30 Thread Igor Cicimov
On 30/10/2015 11:18 PM, "Labedan, Alain"  wrote:
>
> Hi,
>
>
>
> I have HAPROXY in front of servers backend which are load balanced.
>
>
>
> -  For terminated SSL haproxy, I want HAproxy give the good
certificate to the client associated with the good domain .
>
> I’ve not found how to configure HA for that:  I ‘ve 4 domains associated
with one public IP in front . So how declare and use the 4  certificates
SSL for the 4 domains ?
>
>
>
> -  How use affinity session ? is it SERVERID insert ?
>
>
>
>
>
> Thanks for your answer .
>
> Bests regards .
>
>
>
> Alain Labedan
>
>
>
>
>
This might give you an idea http://blog.haproxy.com/category/ssl/

Just use SNI in the frontend (google this, there are many examples out there)
and, based on an acl, send the traffic to one of 4 dummy backends as in the
example, something like this:

acl domain1 req_ssl_sni -i www.domain1.com
use_backend bk_domain1_sock  if domain1

Then each backend and listener will bind to a socket as in the example, and
each listen section will have its own certificate and point to the appropriate
backend.

Just a theory, not sure if it will work, I haven't tested it. Of course you need
modern browsers with SNI support in order for this to work.
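
That said, if all you need is plain SSL termination for the 4 domains (no per-domain key types or cipher settings), a simpler sketch is to point the bind line at a directory holding all four PEM files and let haproxy pick the certificate by SNI on its own:

frontend ft_https
    bind *:443 ssl crt /etc/haproxy/certs/
    default_backend bk_web

backend bk_web
    cookie SERVERID insert indirect nocache
    server s1 10.0.0.1:80 check cookie s1
    server s2 10.0.0.2:80 check cookie s2

The cookie lines are one common way to get the session affinity you asked about; addresses and names above are placeholders.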


Re: tcp-check with persistent session cookie ?

2015-11-06 Thread Igor Cicimov
On 07/11/2015 8:01 AM, "Sébastien ROHAUT" 
wrote:
>
> Hi,
>
> We encountered a big problem this afternoon, which crashed for a while
one of our websites, a java (tomcat+lift) application. We are using Haproxy
1.5.
>
> For our backend, we're doing something like this, using tcp-check because
we need to check status AND a string, which is not possible with http-check
:
>
> backend backend-mywebsite
>   balance roundrobin
>   option redispatch
>   option tcp-check
>   tcp-check send GET\ /check \ HTTP/1.1\r\nHost:\ 
> www.mywebsite.fr\r\nConnection:\
close\r\n
>   tcp-check send \r\n
>   tcp-check expect string HTTP/1.1\ 200\ OK
>   tcp-check expect  rstring "healthStatus":"(Healthy|DegradedMode)"
>   cookie JSESSIONID prefix nocache
>
>
>   server s1 s1:11503  weight 1 check inter 10s fall 3 rise 2 ssl cookie s1
>   server s2 s2:11503  weight 1 check inter 10s fall 3 rise 2 ssl cookie s2
>   server s3 s3:11503  weight 1 check inter 10s fall 3 rise 2 ssl cookie s3
>   server s4 s4:11503  weight 1 check inter 10s fall 3 rise 2 ssl cookie s4
>
> For some reasons, the /check page didn't returned the correct application
status and our / returned a 500 even if /check was OK, so we decided to
check /.
>
> After 20 minutes, our application crashed. In fact, our 4 fronts crashed
at the same time, and if we restarted them, 20 minutes after, they crashed
again. We lost some time because we were really thinking on a software bug,
before we realize the root cause.
>
> * Each tcp-check send opens a session on the application
> * Each session, on the / page, consumes 500 KB
> * session duration : 30 minutes
> * We have 4 Haproxy, doing 2 checks (the app provides 2 websites, so one
check for each Host: ), 6 times per minute = 48 checks, each minute. On
each front.
> * After 20 minutes : more than 450 MB used in the app for sessions
> * Full GC, crash
>
> So, my question is :
>
> Is it possible to get and store the JSESSIONID cookie returned by the
tcp-check expect (or something like this), and send it with the tcp-check
send, to reuse the same session ?
>
> Is there a way for a health check to use persistent cookie session
(always the same, one per server), returned by the check ?
>
> Thank you very much,
>
> Sebastien Rohaut

What we did in our case is simply not produce a session in the app for the
health check path.


Re: Need some help configuring backend health checks

2015-10-30 Thread Igor Cicimov
On 31/10/2015 3:14 AM, "Daren Sefcik" <dsef...@hightechhigh.org> wrote:
>
>
>
> On Thu, Oct 29, 2015 at 11:15 PM, Igor Cicimov <
ig...@encompasscorporation.com> wrote:
>>
>>
>> On 30/10/2015 4:48 PM, "Daren Sefcik" <dsef...@hightechhigh.org> wrote:
>> >
>> > So I think those links were the right idea and I have been trying
different configurations but am not quite there and am hoping somebody can
offer a bit more guidance.
>> >
>> > So when I telnet to the icap server I type in the OPTIONS line
followed by (2) return key presses and then it returns the ICAP text, below
is my telent session output
>> >
>> > ===
>> >
>> > $ telnet 10.1.4.153 1344
>> > Trying 10.1.4.153...
>> > Connected to 10.1.4.153.
>> > Escape character is '^]'.
>> > OPTIONS icap://127.0.0.1:1344/respmod ICAP/1.0
>> >
>> > ICAP/1.0 200 OK
>> > ISTAG: "5BDEEEA9-12E4-2"
>> > Service: Diladele Web Safety 4.2.0.CBF4
>> > Service-ID: qlproxy
>> > Methods: RESPMOD
>> > Options-TTL: 3600
>> > Max-Connections: 15000
>> > Allow: 204
>> > Preview: 4096
>> > Transfer-Preview: *
>> > Encapsulated: null-body=0
>> > Connection: close
>> >
>> > 
>> >
>> >
>> > Here is what I have tried in the backend configurations
>> >
>> > option tcp-check
>> > tcp-check send OPTIONS\ icap\:\/\/127\.0\.0\.1\:1344\/respmod\
ICAP\/1\.0\r\n\
>> > tcp-check send \r\n
>> > tcp-check expect string ICAP\/1\.0\ 200\ OK
>> >
>> >
>> > but it is still not working, I suspect I need to use some type of
regex or such. Hoping somebody can help me along with this.
>> >
>> > TIA..
>> >
>> >
>> > On Mon, Oct 19, 2015 at 7:42 AM, Daren Sefcik <dsef...@hightechhigh.org>
wrote:
>> >>
>> >> Thanks Jarno, I am still not sure how I can apply this to each server
using a different port but will poke around at it and see if I can figure
it out.
>> >>
>> >> On Mon, Oct 19, 2015 at 1:04 AM, Jarno Huuskonen <
jarno.huusko...@uef.fi> wrote:
>> >>>
>> >>> Hi,
>> >>>
>> >>> On Sun, Oct 18, Daren Sefcik wrote:
>> >>> > I have an ICAP server backend with servers that each listen on
different
>> >>> > ports, can anyone offer some advice on how to configure health
checks for
>> >>> > it? I am currently using basic but that really doesn't help if the
service
>> >>> > is not responding.
>> >>> >
>> >>> > Here is my haproxy config for the backend:
>> >>> >
>> >>> > backend HTPL_CONT_FILTER_tcp_ipvANY
>> >>> > mode tcp
>> >>> > balance roundrobin
>> >>> > timeout connect 5
>> >>> > timeout server 5
>> >>> > retries 3
>> >>> > server HTPL-WEB-01_10.1.4.153 10.1.4.153:1344 check inter 5000
weight 200
>> >>> > maxconn 200 fastinter 1000 fall 5
>> >>> > server HTPL-WEB-02_10.1.4.154 10.1.4.154:1344 check inter 5000
weight 200
>> >>> > maxconn 200 fastinter 1000 fall 5
>> >>> > server HTPL-WEB-02_10.1.4.155_01 10.1.4.155:8102 check inter 5000
weight
>> >>> > 200 maxconn 200 fastinter 1000 fall 5
>> >>> > server HTPL-WEB-02_10.1.4.155_02 10.1.4.155:8202 check inter 5000
weight
>> >>> > 200 maxconn 200 fastinter 1000 fall 5
>> >>>
>> >>> Do the icap servers (squid+diladele?) respond to something like this:
>> >>> https://support.symantec.com/en_US/article.TECH220980.html
>> >>> or
https://exchange.icinga.org/oldmonex/1733-check_icap.pl/check_icap.pl
>> >>>
>> >>> Maybe you can use tcp-check to send icap request and look for
>> >>> "ICAP/1.0 200" response:
>> >>>
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#tcp-check%20connect
>> >>>
http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-check/
>> >>>
>> >>> -Jarno
>> >>>
>> >>> --
>> >>> Jarno Huuskonen
>> >>
>> >>
>> >
>> Since your telnet session is on port 1344, maybe
>>
>> tcp-check connect port 1344
>>
>> before the send command.
>
> Thank you but each backend server has a different port configured, that
is just one example.
>
>
> server HTPL-WEB-01_10.1.4.153 10.1.4.153:1344 check inter 5000  weight
200 maxconn 200 fastinter 1000 rise 1 fall 5
> server HTPL-WEB-02_10.1.4.154 10.1.4.154:1344 check inter 5000  weight
200 maxconn 200 fastinter 1000 rise 1 fall 5
> server HTPL-WEB-02-DOCK-02_10.1.4.155_01 10.1.4.155:8102 check inter 5000
 weight 200 maxconn 200 fastinter 1000 rise 1 fall 5
> server HTPL-WEB-02-DOCK-02_10.1.4.155_02 10.1.4.155:8202 check inter 5000
 weight 200 maxconn 200 fastinter 1000 rise 1 fall 5

I see. In that case I would try:

tcp-check expect rstring ICAP\/1\.0\ 200\ OK

since the response is multi-line and, as you mentioned, you need a regex.
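
For reference, a minimal sketch of how the whole check block could look (the ICAP URL and the regex are only illustrative; with no port argument on "tcp-check connect", each server's own configured port should be used for the check):

option tcp-check
tcp-check connect
tcp-check send OPTIONS\ icap://127.0.0.1:1344/respmod\ ICAP/1.0\r\n
tcp-check send \r\n
tcp-check expect rstring ICAP\/1\.0\ 200\ OK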


Re: About maxconn and minconn

2015-10-07 Thread Igor Cicimov
On Thu, Oct 8, 2015 at 11:51 AM, Igor Cicimov <
ig...@encompasscorporation.com> wrote:

>
>
> On Thu, Oct 8, 2015 at 12:18 AM, Dmitry Sivachenko <trtrmi...@gmail.com>
> wrote:
>
>> Hello,
>>
>> I am using haproxy-1.5.14 and sometimes I see the following errors in the
>> log:
>>
>> Oct  7 08:33:03 srv1 haproxy[77565]: unix:1 [07/Oct/2015:08:33:02.428]
>> MT-front MT_RU_EN-back/ 0/1000/-1/-1/1000 503 212 - - sQ--
>> 125/124/108/0/0 0/28 "POST /some/url HTTP/1.1"
>> (many similar at one moment)
>>
>> Common part in these errors is "1000" in Tw and Tt, and "sQ--"
>> termination state.
>>
>> Here is the relevant part on my config (I can post more if needed):
>>
>> defaults
>> balance roundrobin
>> maxconn 1
>> timeout queue 1s
>> fullconn 3000
>> default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1
>> slowstart 60s maxqueue 1 minconn 5 maxconn 150
>>
>> backend MT_RU_EN-back
>> mode http
>> timeout server 30s
>> server mt1-34 mt1-34:19016 track MT-back/mt1-34 weight 38
>> server mt1-35 mt1-35:19016 track MT-back/mt1-35 weight 38
>> 
>>
>> So this error log indicates that request was sitting in the queue for
>> timeout queue==1s and his turn did not come.
>>
>> In the stats web interface for MT_RU_EN-back backend I see the following
>> numbers:
>>
>> Sessions: limit=3000, max=126 (for the whole backend)
>> Limit=150, max=5 or 6 (for each server)
>>
>> If I understand minconn/maxconn meaning right, each server should accept
>> up to min(150, 3000/18) connections
>>
>> So according to stats the load were far from limits.
>>
>> What can be the cause of such errors?
>>
>> Thanks!
>>
>
> The only thing I can think of is you have left net.core.somaxconn = 128,
> try increasing it to 4096 lets say to match your planned capacity of 3000
>
>
 sQ   The session spent too much time in queue and has been expired. See
  the "timeout queue" and "timeout connect" settings to find out
how to
  fix this if it happens too often. If it often happens massively in
  short periods, it may indicate general problems on the affected
  servers due to I/O or database congestion, or saturation caused by
  external attacks.

Another possibility to investigate: if the backends are too slow, or the
connection is being delayed by a firewall in the middle, then
tuning the "connect timeout" may help:

If the server is located on the same LAN as haproxy, the connection should be
immediate (less than a few milliseconds). Anyway, it is a good practice to
cover one or several TCP packet losses by specifying timeouts that are
slightly above multiples of 3 seconds (eg: 4 or 5 seconds). By default, the
connect timeout also presets both queue and tarpit timeouts to the same value
if these have not been specified.


Re: About maxconn and minconn

2015-10-07 Thread Igor Cicimov
On Thu, Oct 8, 2015 at 12:18 AM, Dmitry Sivachenko 
wrote:

> Hello,
>
> I am using haproxy-1.5.14 and sometimes I see the following errors in the
> log:
>
> Oct  7 08:33:03 srv1 haproxy[77565]: unix:1 [07/Oct/2015:08:33:02.428]
> MT-front MT_RU_EN-back/ 0/1000/-1/-1/1000 503 212 - - sQ--
> 125/124/108/0/0 0/28 "POST /some/url HTTP/1.1"
> (many similar at one moment)
>
> Common part in these errors is "1000" in Tw and Tt, and "sQ--" termination
> state.
>
> Here is the relevant part on my config (I can post more if needed):
>
> defaults
> balance roundrobin
> maxconn 1
> timeout queue 1s
> fullconn 3000
> default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1
> slowstart 60s maxqueue 1 minconn 5 maxconn 150
>
> backend MT_RU_EN-back
> mode http
> timeout server 30s
> server mt1-34 mt1-34:19016 track MT-back/mt1-34 weight 38
> server mt1-35 mt1-35:19016 track MT-back/mt1-35 weight 38
> 
>
> So this error log indicates that request was sitting in the queue for
> timeout queue==1s and his turn did not come.
>
> In the stats web interface for MT_RU_EN-back backend I see the following
> numbers:
>
> Sessions: limit=3000, max=126 (for the whole backend)
> Limit=150, max=5 or 6 (for each server)
>
> If I understand minconn/maxconn meaning right, each server should accept
> up to min(150, 3000/18) connections
>
> So according to stats the load were far from limits.
>
> What can be the cause of such errors?
>
> Thanks!
>

The only thing I can think of is that you have left net.core.somaxconn at its
default of 128; try increasing it to, say, 4096 to match your planned capacity of 3000.
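
For example (a sketch; pick whatever value matches your expected load):

sysctl -w net.core.somaxconn=4096

and make it persistent across reboots in /etc/sysctl.conf:

net.core.somaxconn = 4096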


Re: [blog] What's new in HAProxy 1.6

2015-10-14 Thread Igor Cicimov
On 14/10/2015 9:41 PM, "Baptiste" <bed...@gmail.com> wrote:
>
> Hey,
>
> I summarized what's new in HAProxy 1.6 with some configuration
> examples in a blog post to help quick adoption of new features:
> http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/
>
> Baptiste
>
Awesome, thank you!

Igor


Re: About maxconn and minconn

2015-10-08 Thread Igor Cicimov
On Thu, Oct 8, 2015 at 7:15 PM, Dmitry Sivachenko 
wrote:

>
> > On 7 окт. 2015 г., at 16:18, Dmitry Sivachenko 
> wrote:
> >
> > Hello,
> >
> > I am using haproxy-1.5.14 and sometimes I see the following errors in
> the log:
> >
> > Oct  7 08:33:03 srv1 haproxy[77565]: unix:1 [07/Oct/2015:08:33:02.428]
> MT-front MT_RU_EN-back/ 0/1000/-1/-1/1000 503 212 - - sQ--
> 125/124/108/0/0 0/28 "POST /some/url HTTP/1.1"
> > (many similar at one moment)
> >
> > Common part in these errors is "1000" in Tw and Tt, and "sQ--"
> termination state.
> >
> > Here is the relevant part on my config (I can post more if needed):
> >
> > defaults
> >balance roundrobin
> >maxconn 1
> >timeout queue 1s
> >fullconn 3000
> >default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1
> slowstart 60s maxqueue 1 minconn 5 maxconn 150
> >
> > backend MT_RU_EN-back
> >mode http
> >timeout server 30s
> >server mt1-34 mt1-34:19016 track MT-back/mt1-34 weight 38
> >server mt1-35 mt1-35:19016 track MT-back/mt1-35 weight 38
> >
> >
> > So this error log indicates that request was sitting in the queue for
> timeout queue==1s and his turn did not come.
> >
> > In the stats web interface for MT_RU_EN-back backend I see the following
> numbers:
> >
> > Sessions: limit=3000, max=126 (for the whole backend)
> > Limit=150, max=5 or 6 (for each server)
>
>
> I also forgot to mention the "Queue" values from stats web-interface:
> Queue max = 0 for all servers
> Queue limit = 1 for all servers (as configured in default-server)
> So according to stats queue was never used.
>
>
> Right under the servers list, there is a "Backend" line, which has the
> value of "29" in "Queue Max" column.
> What does it mean?
>
>
Well, that means you had up to 29 requests in the backend queue waiting for
a connection. In my case I have never seen this queue go above 0 on the
backend, or on any of the backend servers for that matter. Also, the queue limit
per server is 128, not 1 (I think you are confusing the queue limit with the queue
timeout, which you have indeed set to 1 sec).

So, as mentioned before and as pointed out by Baptiste, your servers are not as
fast as you expect them to be, i.e. you have set your queue size and timeout
too low. First, is haproxy on the same LAN segment as the backend servers?
For example, what is the value of the LastChk column? It should be a few ms
(milliseconds) if your servers are close to haproxy and not under heavy load.

If I were in your shoes I would:

- drop the fullconn setting and let haproxy do the math for me
- definitely increase the queue timeout to more than 1 sec (why risk
losing requests, unless you are short on RAM)
- set the connect timeout as per the excerpt I sent previously

and see how I go; a rough sketch of those changes is below.
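
As a sketch of those three changes (the values are placeholders, not recommendations, and have to be tuned for your environment):

defaults
    # fullconn removed so haproxy computes it on its own
    timeout queue   5s
    timeout connect 5s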


>
> >
> > If I understand minconn/maxconn meaning right, each server should accept
> up to min(150, 3000/18) connections
> >
> > So according to stats the load were far from limits.
> >
> > What can be the cause of such errors?
> >
> > Thanks!
>
>
>


Re: HTTP Response Rewriting to Replace Internal IP with FQDN

2015-10-06 Thread Igor Cicimov
On Wed, Oct 7, 2015 at 7:06 AM, Susheel Jalali <susheel.jal...@coscend.com>
wrote:

> Dear HAProxy Developers,
>
> After incorporating insights from Bryan Talbot and articles from Baptiste
> Assman on HAProxy Web site, we have been able to get the basic
> configuration of HAProxy going.  Now we are adding configuration to access
> specific products in our LAN.
>
> We would like to access Product1 via URL:
> https://coscend.com:14443/Product1/
>
> Output URL from the Product1 server should be:
> https://coscend.com:14443/Product1/signin?xyz
>
> What we are getting:   https://Internal_IP:14443/Product1/signin?xyz
>
> The server presents the right page, but with internal IP address of the
> server.  Hence, the product can only be accessed from internal LAN, not
> from WAN.  What are we missing?
>
> Below is the configuration deployed.
>
> global
>
> […]
>
> default
>
> […]
>
>
>
> frontend webapps-frontend
>
> bind  *:80 name http
>
> bind  *:443 name https ssl crt /path/to/server.pem
>
>
>
> log   global
>
> optionforwardfor
>
> optionhttplog clf
>
>
>
> reqadd X-Forwarded-Proto:\ https if { ssl_fc }
>
> reqadd X-Forwarded-Proto:\ http if !{ ssl_fc }
>
> #http-request add-header X-Forwarded-Proto:\ https if { ssl_fc }  #
> Don't know how to use it instead of reqadd
>
> #http-request add-header X-Forwarded-Proto:\ http if !{ ssl_fc }   #
> Don't know how to use it instead of reqadd
>
>
>
> acl host_httpsreq.hdr(Host) coscend.com:14443  # 14443 is due to
> port forwarding deployment
>
> acl path_subdomain_p1 path_beg -i /Product1
>
>
>
> use_backend subdomain_p1-backend if host_https path_subdomain_p1
>
>
>
> backend subdomain_p1-backend
>
> http-request set-header Host 
>
> reqirep ^([^\ ]*)\ /Product1/?([^\ ]*)\ (.*)$   \1\ /Product1\2\ \3
>
>
>
> acl hdr_location res.hdr(Location) -m found
>
> #http-response replace-header Host (.*) %%HP if hdr_location   # This
> is not working
>
> rspirep ^(Location:)\ (https?://([^/]*))/(.*)$\1\
> http://\3/Product1/\4 if hdr_location
>


What happens if you move these two from the backend into the frontend
section (I believe that's where they belong)?

acl hdr_location res.hdr(Location) -m found
rspirep ^(Location:)\ (https?://([^/]*))/(.*)$ \1\ http://\3/Product1/\4 if hdr_location

Also, in the rspirep you are rewriting https to http, but you say the
response you are seeing still uses https:
https://Internal_IP:14443/Product1/signin?xyz
which most probably means that rule is not matching at all.

In case you are serving a single domain, simplifying it to begin
with may help:

rspirep ^(Location:)\ https?://[^/]*/(.*)$ \1\ http://coscend.com/Product1/\2 if hdr_location

Also, are there any messages during haproxy startup or in the haproxy log indicating
possible issues? Something along the lines of "this statement will
never match because ...".
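
If nothing shows up there, a config check from the command line should print the same kind of warnings (adjust the path to your config file):

haproxy -c -f /etc/haproxy/haproxy.cfg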


>
>
> server Product1.VM0  cookie c check
>
>
>
> Thank you.
>
> --
>
> Sincerely,
>
> Susheel Jalali
>
> Coscend Communications Solutions
>
> Elite Premio Complex Suite 200,  Pune 411045 Maharashtra India
> susheel.jal...@coscend.com
>
> Web site: www.Coscend.com
> --
>
> CONFIDENTIALITY NOTICE: See 'Confidentiality Notice Regarding E-mail
> Messages from Coscend Communications Solutions' posted at:
> http://www.Coscend.com/Terms_and_Conditions.html
>
>


-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: Converting from sticking on src-ip to custom auth header

2015-09-30 Thread Igor Cicimov
The stick-table type would be string and not ip in that case though
On 01/10/2015 5:07 AM, "Jason J. W. Williams" 
wrote:
>
> We've been seeing CenturyLink and a few other residential providers
NATing their IPv4 traffic, making client persistency on source IP result in
really lopsided load balancing lately.
>
> We'd like to convert to sticking on a custom header we're already using
that IDs the user. There isn't a lot of examples of this, so I was curious
if this is the right approach:
>
> Previous "stick on src" config:
https://gist.github.com/williamsjj/7c3876d32cab627ffe70
>
> New "stick on header" config:
https://gist.github.com/williamsjj/f0ddc58b9d028b3fb906
>
> Thank you in advance for any advice.
>
> -J

The stick-table type would be string and not ip in that case though


Re: Converting from sticking on src-ip to custom auth header

2015-09-30 Thread Igor Cicimov
Well in case of header you would have something like this I guess:

tcp-request content track-sc1 hdr(x-app-authorization)
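
And, roughly, the matching table and stick rules for the persistence itself could look like this (the length, size and expire values are just placeholders):

backend bk_app
    stick-table type string len 64 size 100k expire 30m
    stick on hdr(x-app-authorization)
    server app1 10.0.0.1:80 check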



On Thu, Oct 1, 2015 at 9:47 AM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:

> Wondered about that... Do the "tcp-request" rate limiters use the stick
> table (I assume they need type ip) or another implied table?
>
> -J
>
> On Wed, Sep 30, 2015 at 3:41 PM, Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> The stick-table type would be string and not ip in that case though
>>
>> On 01/10/2015 5:07 AM, "Jason J. W. Williams" <jasonjwwilli...@gmail.com>
>> wrote:
>> >
>> > We've been seeing CenturyLink and a few other residential providers
>> NATing their IPv4 traffic, making client persistency on source IP result in
>> really lopsided load balancing lately.
>> >
>> > We'd like to convert to sticking on a custom header we're already using
>> that IDs the user. There isn't a lot of examples of this, so I was curious
>> if this is the right approach:
>> >
>> > Previous "stick on src" config:
>> https://gist.github.com/williamsjj/7c3876d32cab627ffe70
>> >
>> > New "stick on header" config:
>> https://gist.github.com/williamsjj/f0ddc58b9d028b3fb906
>> >
>> > Thank you in advance for any advice.
>> >
>> > -J
>>
>> The stick-table type would be string and not ip in that case though
>>
>
>


-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: Converting from sticking on src-ip to custom auth header

2015-10-01 Thread Igor Cicimov
What version are you running? From memory, up to 1.5.x you can have only one
table per fe/be; I'm not sure about 1.6, I haven't tried it yet. I've seen
people use a second table via a dummy backend though. I don't have access to
my notes atm, so maybe someone else can jump in and help with this.
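
For what it's worth, a rough sketch of that dummy-backend trick (the names and limits are made up for the example):

backend st_src_rate
    # "dummy" backend, exists only to hold the per-source-IP rate table
    stick-table type ip size 100k expire 10m store http_req_rate(10s)

frontend fe_app
    tcp-request connection track-sc0 src table st_src_rate
    http-request deny if { sc0_http_req_rate gt 100 }
    default_backend bk_app
    # bk_app then keeps its own "stick-table type string" plus
    # "stick on hdr(x-app-authorization)" for the header persistence
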
On 01/10/2015 2:22 PM, "Jason J. W. Williams" <jasonjwwilli...@gmail.com>
wrote:

> I still would like to keep the rate limiting based on source ip but the
> persistence based on header.
>
> My thought was to create a second named stick table but I didn't see a
> name parameter to the stick-table declaration.
>
> Sent via iPhone
>
> On Sep 30, 2015, at 18:23, Igor Cicimov <ig...@encompasscorporation.com>
> wrote:
>
> Well in case of header you would have something like this I guess:
>
> tcp-request content track-sc1 hdr(x-app-authorization)
>
>
>
> On Thu, Oct 1, 2015 at 9:47 AM, Jason J. W. Williams <
> jasonjwwilli...@gmail.com> wrote:
>
>> Wondered about that... Do the "tcp-request" rate limiters use the stick
>> table (I assume they need type ip) or another implied table?
>>
>> -J
>>
>> On Wed, Sep 30, 2015 at 3:41 PM, Igor Cicimov <
>> ig...@encompasscorporation.com> wrote:
>>
>>> The stick-table type would be string and not ip in that case though
>>>
>>> On 01/10/2015 5:07 AM, "Jason J. W. Williams" <jasonjwwilli...@gmail.com>
>>> wrote:
>>> >
>>> > We've been seeing CenturyLink and a few other residential providers
>>> NATing their IPv4 traffic, making client persistency on source IP result in
>>> really lopsided load balancing lately.
>>> >
>>> > We'd like to convert to sticking on a custom header we're already
>>> using that IDs the user. There isn't a lot of examples of this, so I was
>>> curious if this is the right approach:
>>> >
>>> > Previous "stick on src" config:
>>> https://gist.github.com/williamsjj/7c3876d32cab627ffe70
>>> >
>>> > New "stick on header" config:
>>> https://gist.github.com/williamsjj/f0ddc58b9d028b3fb906
>>> >
>>> > Thank you in advance for any advice.
>>> >
>>> > -J
>>>
>>> The stick-table type would be string and not ip in that case though
>>>
>>
>>
>
>
> --
> Igor Cicimov | DevOps
>
>
> p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. encompasscorporation.com
> a. Level 4, 65 York Street, Sydney 2000
>
>


Re: [PATCH] BUG: config: external-check command validation is checking for incorrect arguments.

2015-10-02 Thread Igor Wiedler
Hello,

I wanted to test the external-check option in 1.6 (master) and it seems like 
the validation logic is broken. I was wondering what the status of this patch 
is: http://marc.info/?l=haproxy=144240175729490=2. Can we get it merged?

Many thanks!

Regards,
Igor

Re: Frontend ACL rewrites URL incorrectly to backend

2015-10-05 Thread Igor Cicimov
Sorry, I don't know why the previous message was sent only to you and not to
the forum as well. Maybe because I sent it from my phone. Anyway,
rectifying that now.

From what you posted of the haproxy config, I can't see how that acl could
cause any problems, or why haproxy would rewrite the URL in the first
place. If that were happening, we would all be seeing the same problem,
not just you. I'm also running WP on Apache behind haproxy and definitely
don't see this issue. So it has to be something specific to your setup,
maybe the combination of haproxy and varnish, or maybe the .htaccess file.

You could post the full haproxy setup, obfuscating any sensitive details;
someone might notice something suspicious. Also, taking a tcpdump of the
traffic entering varnish should confirm whether haproxy is mangling the URL or
not. If it isn't, then you move further down and take a tcpdump of the
traffic entering Apache. That way you will find the culprit for sure.
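
For example, something along these lines on the box in front of varnish (the address and port are placeholders for your varnish instance):

tcpdump -i any -nn -A -s0 host 10.0.0.20 and port 80

should show whether the request line and Host header leaving haproxy are already doubled up or not.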

Cheers,
Igor



On Tue, Oct 6, 2015 at 9:22 AM, Daren Sefcik <dsef...@hightechhigh.org>
wrote:

> As I wrote in my previous emails it is not just a WP problem but several
> other sites also that behave weird but some others are just fine. They all
> work just fine with varnish and have been for several years, it is only a
> problem when I put haproxy in the front of all of it.
>
> wp-config is just the config file, if anything it may be an issue with
> whats in the .htaccess file but again, it is not just WP. I am happy to
> send you relevant parts of those files if you think you understand the
> problem and want to look at them.
>
> thanks,
> Daren
>
>
> On Mon, Oct 5, 2015 at 2:58 PM, Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>>
>> On 06/10/2015 5:48 AM, "Daren Sefcik" <dsef...@hightechhigh.org> wrote:
>> >
>> > Hey Joris, I appreciate the help...I am not sure I quite understand
>> though, is there something I can configure in haproxy to resolve this? It
>> is not just a Wordpress problem, I have other sites also that do not behave
>> correctly when I put haproxy in front of them.
>> >
>> > On Mon, Oct 5, 2015 at 8:22 AM, joris dedieu <joris.ded...@gmail.com>
>> wrote:
>> >>
>> >> Hi,
>> >>
>> >> 2015-10-04 23:33 GMT+02:00 Daren Sefcik <dsef...@hightechhigh.org>:
>> >> > I am trying to make some requests go to specific backends but am
>> finding
>> >> > that in certain backends that the url gets doubled up or otherwise
>> mangled,
>> >> > ie:
>> >> >
>> >> > request to frontend = http://my.company.com
>> >> > what the backend server ends up with =
>> >> > http://my.company.comhttp://my.company.com
>> >> >
>> >> > This does not happen in all of the backends, only a few...a
>> wordpress site
>> >>
>> >> This is typically what append when wordpress is invoked with a wrong
>> >> Host header.
>> >> It must match WP_SITEURL and WP_HOME
>> >>
>> >> Regards
>> >> Joris
>> >>
>> >> > comes to mind as a specific example. Since this does not happen on
>> every
>> >> > single backend server I suspect it is instead something happening on
>> the
>> >> > receiving server but since it only happens when I put haproxy in
>> front of it
>> >> > there is some connection between them.
>> >> >
>> >> > Can someone help me understand what haproxy is doing or how to fix
>> this from
>> >> > happening?
>> >> > Before anyone says it is varnish doing it I should say several of
>> the other
>> >> > backends using varnish work fine, it is only a few that get the url
>> messed
>> >> > up.
>> >> >
>> >> > TIA
>> >> >
>> >> > example ACL:
>> >> >
>> >> > acl   acl_my.company.com hdr(host) -i
>> my.company.com
>> >> > use_backend  VARNISH_BKEND if acl_my.company.com
>> >
>> >
>> Whats in your wp-config.php file? Also seams you have varnish in the mix
>> too you sure it is not varnish doing something weird?
>>
>>
>


-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: Questions Aboute the PEM Phrase.

2015-12-02 Thread Igor Cicimov
On 03/12/2015 6:54 AM, "Jesus Moran"  wrote:
>
> Hello.
>
> Excellent work with this tool.
>
> Today I was integrating haproxy 1.5 with SSL, and it was easy and fast, but I
> have a little issue.
>
> When I created the .key file I added a passphrase to it.
>
>
> I created the certificate with GoDaddy. And now, whenever I reload,
> start or restart the service, the proxy asks for the passphrase:
>
>  * Reloading haproxy haproxy
> Enter PEM pass phrase:
> Enter PEM pass phrase:
>
> twice.
>
> I think this passphrase will affect the proxy the next time it needs to restart to
> clear logs, stats, etc...
>
> Is there any way to set up the passphrase in the config file or in the default file
> to avoid any kind of problem when haproxy reloads?
>
> Best Regards.
>
>
> Jesus
Just remove the passphrase:

openssl rsa -in /path/to/originalkeywithpass.key -out
/path/to/newkeywithnopass.key


Re: SSLv2Hello is disabled

2015-12-01 Thread Igor Cicimov
On 02/12/2015 12:41 AM, "Cohen Galit"  wrote:
>
> Hello,
>
>
>
> When HAProxy 1.5.9 is trying to sample our servers with this
configuration: tcp-check connect port 50443 ssl
>
>
>
> Our servers returns an error:
>
>
>
> 2015-11-29 09:48:18,155 [StartPoint-IMAP-SSL-Worker(14)]
[e8d05153-267f-4378-9a97-5245391ffe26] [] ERROR
connection.SSLHandshakeStartPointListener
(SSLHandshakeStartPointListener.java:onFailure :80) - SSL/TLS handshake
failed with client identified by /10.106.75.51:35892
>
> javax.net.ssl.SSLHandshakeException: SSLv2Hello is disabled
>
>
>
>
>
> Please advice,
>
>
>
> Thanks,
You need to disable SSLv3 in haproxy, or enable it on the IMAP side, which
probably has only TLS support set up. I can't see an option for setting the SSL
version on tcp-check connect, so it probably has to be done globally in haproxy.
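
If it comes to that, the server-facing SSL options are set globally or per server; something along these lines could be tried (treat it as a sketch, since the exact keywords available depend on your haproxy/OpenSSL build):

global
    ssl-default-server-options no-sslv3

or, on the individual server line, force a protocol explicitly, e.g. "force-tlsv12".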


RE: SSLv2Hello is disabled

2015-12-01 Thread Igor Cicimov
On 02/12/2015 10:19 AM, "Lukas Tribus"  wrote:
>
> > On 02/12/2015 12:41 AM, "Cohen Galit"
> > > wrote:
> > >
> > > Hello,
> > >
> > >
> > >
> > > When HAProxy 1.5.9 is trying to sample our servers with this
> > configuration: tcp-check connect port 50443 ssl
> > >
> > >
> > >
> > > Our servers returns an error:
> > >
> > >
> > >
> > > 2015-11-29 09:48:18,155 [StartPoint-IMAP-SSL-Worker(14)]
> > [e8d05153-267f-4378-9a97-5245391ffe26] [] ERROR
> > connection.SSLHandshakeStartPointListener
> > (SSLHandshakeStartPointListener.java:onFailure :80) - SSL/TLS handshake
> > failed with client identified by
> > /10.106.75.51:35892
>
> Do you authenticate the client and/or the server?
>
>
>
> > > javax.net.ssl.SSLHandshakeException: SSLv2Hello is disabled
> > You need to disable SSLv3 in haproxy
>
> We are talking about the SSLv2 hello format. Its not about SSLv2
> or SSLv3, its about the hello format.
Which can also be used by SSLv3 clients, hence my comment.

>
> However, haproxy unconditionally sets SSL_OP_NO_SSLv2, which
> makes openssl not use the SSLv2 Hello, so I don't see why this would
> happen.
>
> I think the error message from Tomcat about the SSLv2Hello is irrelevant
> and misleading and you actually have a simple authentication problem.
>
>
>
> Regards,
>
> Lukas
>
>


Re: lua authentication

2015-12-03 Thread Igor Cicimov
Hi Grant,

On Fri, Dec 4, 2015 at 7:46 AM, Grant Haywood <gr...@iowntheinter.net>
wrote:

> Hello,
>
> I was wondering if there is a basic example of using lua to do
> authentication?
>
> I am specificaly interested in constructing 'ldap' and 'jwt' versions of
> the 'userlist' block
>
> thx in advance for your time
>
>
Excellent question. One feature I would love to see in haproxy is support
for LDAP authentication. It would be awesome if that could be done via Lua.
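
Even a toy example would help people get started. Something along these lines, purely as a sketch (the 1.6 Lua API is assumed; a static token list stands in for the real LDAP/JWT lookup, and all file names and header names are made up):

-- /etc/haproxy/check_token.lua
local tokens = { ["secret-token-1"] = true, ["secret-token-2"] = true }

core.register_action("check_token", { "http-req" }, function(txn)
    -- look up the X-Auth-Token request header in the static list
    local hdrs = txn.http:req_get_headers()
    local tok = hdrs["x-auth-token"] and hdrs["x-auth-token"][0]
    -- expose the result to the configuration as a transaction variable
    txn:set_var("txn.auth_ok", (tok and tokens[tok]) and "1" or "0")
end)

and then in the configuration:

global
    lua-load /etc/haproxy/check_token.lua

frontend fe_app
    http-request lua.check_token
    http-request deny unless { var(txn.auth_ok) -m str 1 }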

Thanks,
Igor


Re: Official haproxy blog uses a stickiness table of size 1 (just 1, no suffix). Is this OK?

2016-01-04 Thread Igor Cicimov
On Mon, Jan 4, 2016 at 10:57 PM, Mike MacCana 
wrote:

> I'm investigating active/passive HAProxy setups and came across the
> following from the official HAProxy blog. At http://blog.haproxy
> .com/2014/01/17/emulating-activepassing-application-clustering-with-
> haproxy/
>
>   backend bk_app
>stick-table type ip size 1 nopurge peers LB
>
> The size of 1 seems odd - given that's saying create a stickiness table
> with a maximum size of a single entry, according to
> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#stick-table
>
>   is the maximum number of entries that can fit in the table.
> This
> value directly impacts memory usage. Count approximately
> 50 bytes per entry, plus the size of a string if any. The size
> supports suffixes "k", "m", "g" for 2^10, 2^20 and 2^30
> factors.
>
> - Is this a typo, and '1' should be '1k' or '1m' or some other larger
> number
> - Is this intentional, and there is a reason to have a table with only one
> entry? If so could you someone please explain why?
>

The explanation is in the comments below in the same page:
http://blog.haproxy.com/2014/01/17/emulating-activepassing-application-clustering-with-haproxy/#comment-4631


>
> Thanks muchly - and thanks for making HAProxy!
>
> Mike
>


Re: Owncloud through Haproxy makes upload not possible

2015-11-19 Thread Igor Cicimov
On 20/11/2015 7:23 AM, "Piotr Kubaj"  wrote:
>
> On 11/19/2015 17:01, Janusz Dziemidowicz wrote:
> > 2015-11-19 15:45 GMT+01:00 Piotr Kubaj :
> >> Now, about RSA vs ECDSA. I simply don't trust ECDSA. There are quite a
> >> lot of questions about constants used by ECDSA, which seem to be
> >> chosen quite arbitrarily by its creator, which happens to be NSA.
> >> These questions of course remain unanswered. Even respected scientists
> >> like Schneier say that RSA should be used instead (see
> >>
https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html#c167
> >> 5929
> >
> > But ECDSA itself does not contain any constants (see
> > https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm
).
> > Yes, you have to choose domain parameters and most commonly used are
> > NIST ones. But you can also use brainpool curves, which specifically
> > avoid using any arbitrary constants (see
> > http://www.ecc-brainpool.org/download/Domain-parameters.pdf) and they
> > are even defined for TLS (https://tools.ietf.org/html/rfc7027) and
> > apparently supported by latest OpenSSL. Unfortunately not by anything
> > else.
> > OK, anyway that's your preference, I'm not going to argue about ECDSA
or not;)
> >
> >> ). When I'm done setting my HTTP(S) services, I'll simply limit
> >> incoming connections connections on my firewall so DDOS'ing won't be
> >> possible, unless you DDOS my firewall :)
> >
> > I've never said anything about DDoS. In such setup there is no need
> > for distributed DoS. The CPU usage of RSA 8192 is so high that a
> > single shell script running on a single attack machine can kill any
> > server.
> > If you are willing to limit your connection rate on a firewall to a
> > few per second, then fine;)
> >
> > As for your problem. Now that it seems like SSL problem, can you just
> > try with RSA 4096 or 2048? RSA 8192 is really not much tested in most
> > code, so maybe the problem is in fact related.
> >
> Unfortunately, accessing my HTTPS services by only OpenSSL is out of the
> question. Besides, I use LibreSSL and am not sure it supports it, since
> OpenBSD people got rid of quite a lot of unnecessary code.
>
> So I can only choose ECDSA or RSA.
>
> I don't think limiting my connections is a bad idea vs choosing weaker
> RSA. As I said before, I actually expect only a few connections at once.
>
> I've generated RSA 2048 cert with:
> openssl req -x509 -newkey rsa:2048 -keyout haproxy.pem -out haproxy.pem
> -days 3650 -nodes
>
> That is, I didn't use any non-default options, such as SHA512.
> Unfortunately, it doesn't yield any result. I'm now considering
> switching to SSL Pass-through, and configuring HTTPS in each of my WWW
> servers, it may be much quicker considering how long I've been getting
> Haproxy to work.
>
It might be something specific to the BSD OS causing issues for you, since I
haven't heard of anyone complaining about SSL till now. You could also try the
latest stable 1.5.15, since I can't see any 1.6-specific feature in your
config.


RE: tcpdump and Haproxy SSL Offloading

2016-06-04 Thread Igor Cicimov
On 4 Jun 2016 11:53 pm, "mlist" <ml...@apsystems.it> wrote:
>
> Hi Luca and Igor,
>
>
>
> I know there is not a simple way. In this network trace I verified an
> IE11 / Edge bug with preconnect sessions. This is a known problem, even if
> not well documented.
>
> As you can see, the Windows client TCP stack correctly sends an ACK to the
> FIN from HAProxy, but IE does not instruct the TCP stack to send its own FIN
> to HAProxy to close the TCP connection gracefully, so IE later erroneously
> tries to reuse the TCP connection, and HAProxy correctly sends a RST. In the
> client's HTTP buffer there is what HAProxy sent before closing the connection
> (a 408 Timeout HTTP status message), so IE erroneously reads this message
> and, wrong again, instead of closing the HTTP session and retrying with a new
> one, delivers the 408 Timeout to the browser, so the client sees a 408 and
> thinks the server is not working properly…
>
>
>
>
>
>
>
>
>
> In this case:
>
> -  the 408 is not sent by the backend server, so the traffic cannot simply
> be collected on the backend server to analyze the issue
>
> -  HAProxy sends the 408, but I don't see the HTTP flow, only the HTTPS
> (SSL/TLS) flow, as everything is encrypted from the client to haproxy and
> haproxy does the SSL termination
>
> -  in this particular case we can reproduce the problem on a test machine;
> as we know and manage that machine, we can use Wireshark's decryption
> capabilities, but I think this method is not very robust, as it involves
> many moving parts (cipher/haproxy/…)
>
>
>
> I have to put together a lot of information (haproxy log, tcpdump trace, etc.) to
> get a complete picture (i.e. it would be simpler if I could see and search for the
> 408 HTTP status code in the tcpdump trace, instead of knowing it is there from the
> browser and from the haproxy log while Wireshark does not show it because it is
> encapsulated in the TLS stream). HAProxy already does things beyond the pure
> application level for SSL termination, so I am unsure whether it is haproxy's job
> or not to manage and provide the unencrypted traffic; all the other solutions, as
> we are seeing, are prone to difficulties and instability in the tracing process.
>
>
>
> I like robust solutions, and where possible solutions that do not depend on
> implicit (rather than explicit) support from the components involved; that is,
> decrypting with the client session keys works, but I know that on some versions
> this mechanism does not work, so it introduces instability.
>
> But I'll try the PFS-with-client-session-keys approach, even if I think it is
> complex, as we have to capture the keys of all TLS sessions to see the complete
> traffic, since many SSL handshakes take place in that trace.
>
>
>
> I think that at the moment the best solution is to temporarily disable the
> PFS cipher suites; I think this is just a reconfiguration of haproxy.cfg
> and a haproxy daemon restart for the test period, which is not so critical for us.
>
> I need some hints for this, as I'm not well informed about cipher suites; in
> particular I didn't find a clear specification of how to identify the ciphers
> offered/used by haproxy and how to tell which of them support PFS and which do not.
>
>
>
> e.g. if I check from an SSL tester site, I see these protocols and cipher
> suites for the haproxy SSL termination:
>
>
>
> Protocols
>
>    TLS 1.2   Yes
>    TLS 1.1   Yes
>    TLS 1.0   Yes
>    SSL 3     No
>    SSL 2     No
>
>
> Cipher Suites
>
>    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)      ECDH secp256r1 (eq. 3072 bits RSA)   FS            256
>    TLS_RSA_WITH_RC4_128_SHA (0x5)                                                        INSECURE      128
>    TLS_ECDHE_RSA_WITH_RC4_128_SHA (0xc011)          ECDH secp256r1 (eq. 3072 bits RSA)   FS INSECURE   128
>    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028)   ECDH secp256r1 (eq. 3072 bits RSA)   FS            256
>    TLS_RSA_WITH_AES_256_CBC_SHA256 (0x3d)                                                              256
>    TLS_RSA_WITH_AES_256_CBC_SHA (0x35)                                                                 256
>    TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (0x84)                                                            256
>    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027)   ECDH secp256r1 (eq. 3072 bits RSA)   FS            128
>    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)      ECDH secp256r1 (eq. 3072 bits RSA)   FS            128
>    TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (0xc012)     ECDH secp256r1 (eq. 3072 bits RSA)   FS            112
>    TLS_RSA_WITH_AES_128_CBC_SHA256 (0x3c)                                                              128
>    TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)                                                                 128
>    TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (0x41)                                                            128
>

Re: tcpdump and Haproxy SSL Offloading

2016-06-02 Thread Igor Cicimov
On Fri, Jun 3, 2016 at 3:14 AM, mlist  wrote:

> Often I need to take tcpdump to analyze haproxy communication to clients
> and to backend servers.
>
> As we use haproxy as SSL termination point (haproxy SSL offloading), at low
> levels (so tcpdump level)
>
> we see communication with client encrypted.
>

If you are not using DHE ciphers (but you should be) then you can try ssldump.
With Diffie-Hellman though, a new encryption key is generated for each
SSL session, so you are out of luck there.
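
For example something like this (a sketch only: the key must be the plain RSA key haproxy uses, the paths and addresses are placeholders, and it only works when no ephemeral/DHE key exchange was negotiated):

ssldump -r capture.pcap -k /etc/haproxy/certs/site.pem -d host 10.0.0.1 and port 443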


> Is there a simple solution so I can take a tcpdump with
>
> unencrypted communication? Does haproxy have some mechanism for this?
>

Not that I'm aware of, but you can try chaining a local proxy where you can
see the traffic in clear text before sending it on to the backend.
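
A rough sketch of that idea within a single haproxy instance (names, ports and paths are made up; the point is just to re-enter haproxy over loopback so the decrypted traffic is visible on "lo"):

frontend fe_tls
    bind :443 ssl crt /etc/haproxy/site.pem
    default_backend be_loop

backend be_loop
    # hop over loopback so the plain HTTP can be captured
    server local 127.0.0.1:8080

listen plain_hop
    bind 127.0.0.1:8080
    server app1 10.0.0.10:80

and then something like "tcpdump -i lo -nn -A -s0 port 8080" shows the traffic in clear text.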


>
>
> I have 3 haproxy LBs with 2 L4 LBs balancing onto them, so if possible I want to
> avoid making the
>
> infrastructure more complex by introducing another intermediate proxy for this,
> and keep the communication
>
> path as simple and as close to the normal request path as possible.
>
>
>
> Roberto
>
>
>
>
>


Re: ACL & frontend : random behavior / haproxy 1.5.18-1ppp1

2016-06-10 Thread Igor Cicimov
On Fri, Jun 10, 2016 at 7:39 PM, Kevin Maziere <ke...@kbrwadventure.com>
wrote:

> Hi
> (in english this time,sorry for the noise)
>
> I can't explain a strange behavior of haproxy when using a simple acl that
> routes to a specific backend.
> The frontend in which the ACL and the specific backend are set also has a
> default backend.
>
> If I curl/wget/chrome/firefox/opera... against the frontend IP with a hostname
> that matches the ACL, sometimes the reply comes from the wanted backend,
> sometimes from the default one, randomly.
>

And the request you are testing with is???


> No error logs
> If I remove the default backend line, all request are sent to the specific
> backend.
>
> Any help ?
>
> Tanks
>
> Kévin
>
>
> Here is my conf :
>
>  global
> log 127.0.0.1   local0
> log 127.0.0.1   local1 notice
> maxconn x
> #debug
> #quiet
> #spread-checks
> user haproxy
> group haproxy
> defaults
> log global
> modehttp
> #option  dontlognull
> maxconn 
> timeout server  xxm
> timeout connect xxm
> timeout client  xxm
> option redispatch
> retries 5
> option  httplog
> option forwardfor
> timeout http-keep-alive xm
> timeout http-request xm
>
>
> frontend 10.0.01-80
> bind 10.0.0.1:80
> reqadd X-Forwarded-Proto:\ http
> option http-server-close
>
> acl host_beg_ttfr  hdr_beg(Host) tt-fra29-2-france
> use_backend tt-france-fra29-2 if host_beg_ttfr
>
> default_backend ipv4-fr
>
> backend tt-france-fra29-2
> reqirep ^Host:\  tt-france-fra29-2-france.subd.fr.mondomainamoi.fr
> Host:\ fra29-2-fra.md.bbb.loca
> server labas 192.168.21.5:80
>
> backend ipv4-fr
> balance roundrobin
> option httpchk GET /
> server fr-icietla 192.168.22.4:8080 weight 1 check inter 5000 rise 2
> fall 5
>



-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. www.encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: tcpdump and Haproxy SSL Offloading

2016-06-03 Thread Igor Cicimov
Hi Lukas,

On Sat, Jun 4, 2016 at 3:03 AM, Lukas Tribus  wrote:

> Hello,
>
>
> you can dump the symmetric keys from the browser and import them in
> wireshark to decrypt PFS protected TLS sessions [1]


Yes, in case you want to troubleshoot something generic this is a good
approach, but if you want to troubleshoot sessions not initiated by
yourself, i.e. a particular client's connection, this is practically impossible.

> or downgrade your ciphers settings to non-PF ciphers. Properly decrypting
> the TLS session is the only way to really make sure you see what happens,
> even if there is a TLS related bug in the client or server (haproxy).
>
>
> Some other idea's are:
>
> - if your backend traffic is unencrypted, you may want to capture the
> traffic there.
>

Not practical though if you have tens of backend servers; much better to
have to troubleshoot on 2 instead of dozens of servers. At least that's
how I understand the question related to running tcpdump on the haproxy
servers themselves.


>
> - if haproxy is rejecting the request, check "show errors" on the admin
> socket.
>
>

> As you said, the best solution is to not depend on haproxy-specific
> features, as you don't want to modify existing infrastructure in a
> troubleshooting case. Maybe something
>
>
Outside haproxy, maybe something like mitmproxy or sslstrip might help. Not
sure though, I have never used them myself.


>
> Another proxy layer means that you decrypt TLS on the front-end proxy,
> while you sniff the plaintext traffic between the front-end and the second
> tier proxy. You can probably do this with a single haproxy instance
> recirculating the traffic through a unix socket and capture the traffic on
> it, but it would require some trial and error and definitely some testing.
>
>
This will probably be faster, but you can't use tcpdump in that case.


>
> I believe the SSLKEYLOGFILE approach [1] to be the most efficient and
> simplest approach.
>
>
> cheers,
>
> lukas
>
>
> [1]
> https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
>
>
>


Re: Use regex for backend selection

2016-06-22 Thread Igor Cicimov
use_backend %[req.hdr(host),lower]
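
In other words, something along these lines, provided the backends are named after the host label (dynamic use_backend and the field() converter need a recent HAProxy version, so treat this as a sketch):

frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem
    # server1.domain.tld -> backend "bck-server1", and so on
    use_backend bck-%[req.hdr(host),lower,field(1,.)] if { hdr_end(host) -i .domain.tld }
    default_backend bck-default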

On Thu, Jun 23, 2016 at 6:21 AM, Mildis <m...@mildis.org> wrote:

> Hi,
>
> I’m in the process of setting HAProxy as an HTTPS frontend switch to
> different backends.
> As I have 10+ different backends, I’d like to replace
>
> acl to-server1 hdr_beg(host) -i server1.domain.tld
> acl to-server2 hdr_beg(host) -i server2.domain.tld
> …
> acl to-serverN hdr_beg(host) -i serverN.domain.tld
>
> use_backend bck-server1 if to-server1
> use_backend bck-server2 if to-server2
> …
> use_backend bck-serverN if to-serverN
>
>
> by something more generic like
>
> use_backend bck-\1 if hdr_reg(host) -i (.*).domain.tld
>
>
> but I can’t find a way to make it work.
>
> Am I on the right path ?
>
> Thanks,
> Mildis
>



-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. www.encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000

