copy header

2013-07-30 Thread Justin Karneges

Hi folks,

I have a need to capture the exact headers sent by a client. The problem 
is that I also use the forwardfor and http-server-close options, so the 
headers sent to my backend are altered by HAProxy.


My plan is to copy any affected headers under new names, using a special 
prefix. This way my backend application has a way to differentiate 
between modified headers and original headers. HAProxy doesn't seem to 
have a command for copying headers. There is only reqadd for adding and 
reqrep for replacing. However, I can fake a copy by injecting a newline 
in the middle of a replacement value:


  reqirep ^(Connection:)(.*) Connection:\2\nOrig-\1\2

If the client provides:

  Connection: Keep-Alive

Then the above rule (along with http-server-close) will cause HAProxy to 
send the request with:


  Connection: Keep-Alive
  Orig-Connection: Keep-Alive
  Connection: close

My backend application can then be configured to capture the Connection 
header only if it is prefixed.
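The same trick generalizes to every header that HAProxy rewrites. A minimal frontend sketch (listener name, port and backend are illustrative, not from my real config):

```
frontend fe_capture
    bind 0.0.0.0:8080
    option http-server-close
    option forwardfor
    # copy the original values under an Orig- prefix before haproxy
    # touches them (forwardfor appends to X-Forwarded-For later on)
    reqirep ^(Connection:)(.*)      Connection:\2\nOrig-\1\2
    reqirep ^(X-Forwarded-For:)(.*) X-Forwarded-For:\2\nOrig-\1\2
    default_backend be_app
```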


This seems to work, but I am posting to the mailing list first to 
confirm that this isn't an abuse of reqirep.


Thanks,
Justin



Re: haproxy dumps core

2013-07-30 Thread Rainer Duffner
On Tue, 30 Jul 2013 21:40:34 +0200, Lukas Tribus wrote:

> Hi Rainer!
> 
> 
> > I'm using haproxy on FreeBSD 9.1-amd64 inside a VMware VM.
> >
> > I realized that when I have a situation where all servers in a
> > backend are down, haproxy crashes:
> > Jul 30 08:03:52 px2-bla kernel: pid 58816 (haproxy), uid 80:
> > exited on signal 11 (core dumped)
> >
> > pkg info|grep haproxy
> > haproxy-1.4.24 The Reliable, High Performance
> 
> can you post the output of "haproxy -vv"?


(px2-bla ) 0 # haproxy -vv
HA-Proxy version 1.4.24 2013/06/17
Copyright 2000-2013 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_STATIC_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

 

> 
> > After some tinkering, I got a core-dump out of it:
> 
> The core-dump doesn't look very useful; it seems like the debugging
> symbols were stripped.
> 
> 
> Could you recompile haproxy with the following CFLAGS:
>  make CFLAGS="-g -O0" TARGET=[...]
> 
> and regenerate the core-dump. The GDB output should be more
> informative then.
> 
> If the executable comes from a packaging system (ports?), you may be
> able to use a debug-package instead of recompiling haproxy (although
> compiler optimization may obfuscate the backtrace).


I'll look into it. It's created by our poudriere package-building
system.



Regards,
Rainer
 



RE: haproxy dumps core

2013-07-30 Thread Lukas Tribus
Hi Rainer!


> I'm using haproxy on FreeBSD 9.1-amd64 inside a VMware VM.
>
> I realized that when I have a situation where all servers in a backend
> are down, haproxy crashes:
> Jul 30 08:03:52 px2-bla kernel: pid 58816 (haproxy), uid 80:
> exited on signal 11 (core dumped)
>
> pkg info|grep haproxy
> haproxy-1.4.24 The Reliable, High Performance

can you post the output of "haproxy -vv"?



> After some tinkering, I got a core-dump out of it:

The core-dump doesn't look very useful; it seems like the debugging symbols
were stripped.


Could you recompile haproxy with the following CFLAGS:
 make CFLAGS="-g -O0" TARGET=[...]

and regenerate the core-dump. The GDB output should be more informative then.

If the executable comes from a packaging system (ports?), you may be able
to use a debug-package instead of recompiling haproxy (although compiler
optimization may obfuscate the backtrace).
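As a sketch of the whole procedure (paths and the core file name are illustrative; FreeBSD typically names the dump `<program>.core`):

```
# allow core dumps, rebuild with debug symbols, reproduce, inspect
ulimit -c unlimited
make clean
make TARGET=freebsd CFLAGS="-g -O0"
./haproxy -f /usr/local/etc/haproxy.conf -d    # reproduce the segfault
gdb ./haproxy haproxy.core                     # then at the (gdb) prompt: bt
```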



Regards,

Lukas 


haproxy dumps core

2013-07-30 Thread Rainer Duffner
Hi,

I'm using haproxy on FreeBSD 9.1-amd64 inside a VMware VM.

I realized that when I have a situation where all servers in a backend
are down, haproxy crashes:
Jul 30 08:03:52 px2-bla kernel: pid 58816 (haproxy), uid 80:
exited on signal 11 (core dumped)

pkg info|grep haproxy
haproxy-1.4.24 The Reliable, High Performance
TCP/HTTP Load Balancer 
# ldd /usr/local/sbin/haproxy
/usr/local/sbin/haproxy: libcrypt.so.5 => /lib/libcrypt.so.5
(0x8008c7000) libc.so.7 => /lib/libc.so.7 (0x800ae6000)

I've got the following options:

cat /usr/local/etc/poudriere.d/91amd64-options/net_haproxy/options 
# This file is auto-generated by 'make config'.
# Options for haproxy-1.4.24
_OPTIONS_READ=haproxy-1.4.24
_FILE_COMPLETE_OPTIONS_LIST=PCRE DPCRE SPCRE
OPTIONS_FILE_SET+=PCRE
OPTIONS_FILE_UNSET+=DPCRE
OPTIONS_FILE_SET+=SPCRE

After some tinkering, I got a core-dump out of it:

(px2-bla ) 0 #
gdb /usr/local/sbin/haproxy /var/tmp/haproxy.58816
GNU gdb 6.1.1 [FreeBSD] Copyright 2004 Free Software
Foundation, Inc. GDB is free software, covered by the GNU General
Public License, and you are welcome to change it and/or distribute
copies of it under certain conditions. Type "show copying" to see the
conditions. There is absolutely no warranty for GDB.  Type "show
warranty" for details. This GDB was configured as
"amd64-marcel-freebsd"...(no debugging symbols found)... Core was
generated by `haproxy'. Program terminated with signal 11, Segmentation
fault. Reading symbols from /lib/libcrypt.so.5...(no debugging symbols
found)...done. Loaded symbols for /lib/libcrypt.so.5
Reading symbols from /lib/libc.so.7...(no debugging symbols
found)...done. Loaded symbols for /lib/libc.so.7
Reading symbols from /libexec/ld-elf.so.1...(no debugging symbols
found)...done. Loaded symbols for /libexec/ld-elf.so.1
#0  0x0043be27 in ?? ()
(gdb) bt
#0  0x0043be27 in ?? ()
#1  0x004087e1 in ?? ()
#2  0x00402c01 in ?? ()
#3  0x00404607 in ?? ()
#4  0x00402ade in ?? ()
#5  0x0008006c9000 in ?? ()
#6  0x in ?? ()
#7  0x in ?? ()
#8  0x0006 in ?? ()
#9  0x7fffdde8 in ?? ()
#10 0x7fffde00 in ?? ()
#11 0x7fffde03 in ?? ()
#12 0x7fffde06 in ?? ()
#13 0x7fffde21 in ?? ()
#14 0x7fffde24 in ?? ()
#15 0x in ?? ()
#16 0x7fffde39 in ?? ()
#17 0x7fffde47 in ?? ()
#18 0x7fffde4f in ?? ()
#19 0x7fffde63 in ?? ()
#20 0x7fffdeba in ?? ()
#21 0x7fffdec7 in ?? ()
#22 0x7fffded1 in ?? ()
#23 0x7fffdeef in ?? ()
#24 0x7fffdefa in ?? ()
#25 0x7fffdf04 in ?? ()
#26 0x7fffdf0f in ?? ()
#27 0x7fffdf20 in ?? ()
#28 0x7fffdf39 in ?? ()
#29 0x7fffdf4c in ?? ()
#30 0x7fffdf59 in ?? ()
#31 0x7fffdf65 in ?? ()
#32 0x in ?? ()
#33 0x0003 in ?? ()
#34 0x00400040 in ?? ()
#35 0x0004 in ?? ()
#36 0x0038 in ?? ()
#37 0x0005 in ?? ()
#38 0x0008 in ?? ()
#39 0x0006 in ?? ()
#40 0x1000 in ?? ()
#41 0x0008 in ?? ()
#42 0x in ?? ()
#43 0x0009 in ?? ()
#44 0x00402a50 in ?? ()
#45 0x0007 in ?? ()
#46 0x0008006ae000 in ?? ()
#47 0x000f in ?? ()
#48 
#49 0x in ?? ()
Previous frame inner to this frame (corrupt stack?)


I'd like to know what is causing this.


Config is like this:

global
  log 127.0.0.1   local0
  log 127.0.0.1   local1 notice
  #log loghostlocal0 info
  maxconn 4096
  #debug
  #quiet
  user www
  group www
  daemon

defaults
  log global
  mode http
  retries 2
  timeout client 50s
  timeout connect 5s
  timeout server 50s
  option dontlognull
  option forwardfor
  option httplog
  option redispatch
  balance  source
  option httpchk GET /ipmon.txt HTTP/1.0\r\n\r\n
  http-check expect rstring OK
  http-check disable-on-404
  http-send-name-header X-Target-Server
  default-server minconn 50 maxconn 100 

# Set up application listeners here.

frontend s
  maxconn 8000
  bind 0.0.0.0:8000
  default_backend servers-old-s
  reqidel ^X-Forwarded-For:.*

frontend s-stage
  maxconn 8000
  bind 0.0.0.0:8002
  default_backend servers-old-s-stage
  reqidel ^X-Forwarded-For:.*

frontend p
  maxconn 8000
  bind 0.0.0.0:8004
  default_backend servers-old-p
  reqidel ^X-Forwarded-For:.*

frontend p-stage
  maxconn 8000
  bind 0.0.0.0:8006
  default_backend servers-old-p-stage
  reqidel ^X-Forwarded-For:.*

frontend d-old
  maxconn 8000
  bind 0.0.0.0:8008
  default_backend servers-old-d
  reqidel ^X-Forwarded-For:.*



backend servers-old-d
  fullconn 8000
  #option httpchk GET /ip_monitor_mysql.php HTTP/1.1\r\nHost:\ www.d.domain\r\nConnection:\ close
  server app2   first.ip:80 weight 1 check
  server input1 second.ip:80 weight 1 check

backend servers-old-s
  fullconn 8000
  #option http



Re: Choosing outgoing IP

2013-07-30 Thread Baptiste
Hi Kevin,

What you want to look at is the "source" keyword.
Note that in HAProxy the source IP address is applied to all outgoing
connections from a backend, whereas in Squid an ACL can select the
tcp_outgoing_address.

If you want the same behavior, you'll have to configure a source per
backend and route src IPs using ACLs to the right backend.
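A minimal sketch of that setup (all names and addresses are illustrative):

```
frontend fe_in
    bind 0.0.0.0:8080
    acl from_net_a src 10.0.1.0/24
    use_backend be_via_vip1 if from_net_a
    default_backend be_via_vip2

backend be_via_vip1
    source 192.0.2.1              # outgoing connections bind to this VIP
    server srv1 198.51.100.10:80 check

backend be_via_vip2
    source 192.0.2.2              # same server, different source address
    server srv1 198.51.100.10:80 check
```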

Baptiste


On Tue, Jul 30, 2013 at 12:15 PM, Kevin C  wrote:
> Hi list,
>
> I configured an haproxy instance on a Linux cluster with some virtual IPs. Is
> it possible to choose which IP haproxy uses for a backend, like the
> tcp_outgoing_address in squid?
>
> Thanks a lot
>
> kevin C
>



Choosing outgoing IP

2013-07-30 Thread Kevin C

Hi list,

I configured an haproxy instance on a Linux cluster with some virtual 
IPs. Is it possible to choose which IP haproxy uses for a backend, like 
the tcp_outgoing_address in squid?


Thanks a lot

kevin C



Re: Problem with httpchk option and keepAlive

2013-07-30 Thread Grzegorz Leszczyński

Thanks for the info. We will try to upgrade to the new version.

On 29/07/13 23:12, Cyril Bonté wrote:

Hi Grzegorz,

On 29/07/2013 10:56, Grzegorz Leszczyński wrote:

Anybody there?

On 12/07/13 13:19, Grzegorz Leszczyński wrote:

In our company we have such problem, that when haproxy discovers that
backend is dead - due to httpchk - it doesn't disconnect already
established keepAlive connections and is still sending requests via
these connections. Is this known problem? Is there is any solution for
this? Or maybe we are doing something wrong?

We use 1.4.19 version. And here is part of configuration:

backend back_im_alias

mode http
log global

option redispatch
option httplog
no option httpclose
option forwardfor
option httpchk GET /check-status HTTP/1.1
retries 3


From this configuration, you're using haproxy in tunnel mode, which 
keeps the connection open if HTTP keep-alive is used. You should 
try "option http-server-close" [1], which will allow keep-alive 
between the client and haproxy, but will open a new connection for 
each request between haproxy and a healthy backend server.


Another solution would be to upgrade to haproxy 1.5 and have a look at 
"on-marked-down shutdown-sessions" [2].


[1] 
http://cbonte.github.io/haproxy-dconv/configuration-1.4.html#option%20http-server-close
[2] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#on-marked-down
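A minimal sketch combining both suggestions (server name and address are illustrative; the last keyword pair requires 1.5):

```
backend back_im_alias
    mode http
    option forwardfor
    # keep-alive toward the client, but a fresh server-side connection
    # per request, so requests stop reaching a server marked down:
    option http-server-close
    option httpchk GET /check-status HTTP/1.1
    retries 3
    # haproxy 1.5 only: also kill already-established sessions on failure
    server im1 192.0.2.10:80 check on-marked-down shutdown-sessions
```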





--

Grzegorz Leszczyński
Team Leader/Tech Lead, Messenger Server Team
GG Network S.A.
Kamionkowska 45   03-812 Warszawa
tel: +48 22 4277935   fax: +48 22 4277425   mob: +48 693693391
http://www.gadu-gadu.pl gg:16068

Company registered in the District Court for the Capital City of Warsaw,
13th Commercial Division of the KRS, under number 264575, NIP 867-19-48-977.
Share capital: PLN 1,758,461.10, fully paid.




Re: Strange behavior with very large HTML content and chunked transfer encoding

2013-07-30 Thread Willy Tarreau
Hi Lukas,

On Tue, Jul 30, 2013 at 09:39:49AM +0200, Lukas Tribus wrote:
> Hi Willy,
> 
> 
> > This shortcoming was addressed in 1.5-dev with the attached patch.
> 
> I understand this is addressed by increasing the limit from 256MB to 2GB.

Yes.

> However, I'm pretty certain that some users have files above 2GB (like big
> ISO files, for example).

Most servers and intermediaries do not support 2GB chunks (at least last
time I checked). Chunked encoding was made for contents whose length you
don't know before sending, which suggests the sender is filling a buffer
and sending it. I was already surprised that some applications might want
to buffer up to 256MB before starting to send (especially HTML, which is
slow to produce); 2GB is even less likely.
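As an illustrative aside (plain Python, not HAProxy source): the 2 GB ceiling falls out of holding the chunk size in a signed 32-bit integer. Any size at or above 2^31 wraps negative, and later subtractions (such as the buffer_max_len computation the patch mentions) then go wrong. The helper name below is hypothetical:

```python
def parse_chunk_size_32(hex_size: str) -> int:
    """Parse a chunked-encoding size line into a signed 32-bit integer,
    the way a C 'int' would hold it."""
    n = int(hex_size, 16) & 0xFFFFFFFF                 # keep the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n   # sign-extend

print(parse_chunk_size_32("7FFFFFFF"))  # 2147483647: largest representable chunk
print(parse_chunk_size_32("80000000"))  # -2147483648: a 2 GB chunk wraps negative
```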

> If the statement in the patch is still correct even for 1.5 ("increasing the
> limit past 2 GB causes trouble due to some 32-bit subtracts in various
> computations becoming negative (eg: buffer_max_len)"), perhaps we can document
> this somewhere? It seems frustrating to let users discover this on their own.

Yes I agree with you.

> I'm wondering where the right place would be to document this limitation
> (since chunked transfer-encoding has no config-keyword).

There is a reminder about the HTTP protocol in the config manual which also
indicates some of haproxy's limitations regarding the protocol. Probably we
should put this there.

Best regards,
Willy




RE: Strange behavior with very large HTML content and chunked transfer encoding

2013-07-30 Thread Lukas Tribus
Hi Willy,


> This shortcoming was addressed in 1.5-dev with the attached patch.

I understand this is addressed by increasing the limit from 256MB to 2GB.

However, I'm pretty certain that some users have files above 2GB (like big
ISO files, for example).

If the statement in the patch is still correct even for 1.5 ("increasing the
limit past 2 GB causes trouble due to some 32-bit subtracts in various
computations becoming negative (eg: buffer_max_len)"), perhaps we can document
this somewhere? It seems frustrating to let users discover this on their own.

I'm wondering where the right place would be to document this limitation
(since chunked transfer-encoding has no config-keyword).



Regards,

Lukas