Re:

2011-03-19 Thread Baptiste
Hey,

You can also play with /proc/sys/vm/swappiness to avoid / limit swapping...
But as explained, it's a bad idea to let a load balancer swap. It's
supposed to introduce a very, very low delay, and swapping would
increase that delay.
Just ensure you have enough memory to handle the load you need/want.
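
For reference, here is a rough sketch of that tuning on a Linux box; the
value is only an example, pick whatever fits your workload:

    # check the current value
    cat /proc/sys/vm/swappiness
    # lower it for the running system
    sysctl -w vm.swappiness=10
    # make it persistent across reboots
    echo vm.swappiness = 10 >> /etc/sysctl.conf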

cheers


On Fri, Mar 18, 2011 at 7:33 PM, Ben Timby bti...@gmail.com wrote:
 On Fri, Mar 18, 2011 at 2:00 PM, Antony ddj...@mail.ru wrote:
 Hi guys!

 I'm new to HAProxy and currently I'm testing it.
 So I've read this on the main page of the web site:
 The reliability can significantly decrease when the system is pushed to its 
 limits. This is why finely tuning the sysctls is important. There is no 
 general rule, every system and every application will be specific. However, 
 it is important to ensure that the system will never run out of memory and 
 that it will never swap. A correctly tuned system must be able to run for 
 years at full load without slowing down nor crashing.
 And now have the question.

 How do you usually prevent the system from swapping? I use Linux but solutions for any
 other OSes are interesting for me too.

 I don't think it is just a matter of running swapoff -a and deleting the appropriate line in
 /etc/fstab, because some people say that isn't a good choice...

 Prevent swapping by ensuring your resource limits (max connections)
 etc. keep the application from exceeding the amount of physical
 memory.

 Or conversely by ensuring that your physical memory is sufficient to
 handle the load you will be seeing.

 This is what is referred to in the documentation: you need to tune
 your limits and available memory for the workload you are seeing. Of
 course simple things like not running other memory hungry applications
 on the same machine apply as well. This is an iterative process
 whereby you observe the application, make adjustments and repeat. You
 must generate test load within the range of normal operations for this
 tweaking to be true-to-life. Of course once you go into production the
 tweaking will continue, no simulation is a replacement for production
 usage.

 The reason running without swap is bad is because if you hit the limit
 of your physical memory, the OOM killer is invoked. Any process is
 subject to termination by the OOM killer, so in most cases decreased
 performance is more acceptable than loss of a critical system process.





Re: Rate limit per IP

2011-03-20 Thread Baptiste
Hi,

Yes, HAProxy can limit the connection rate.
Please look for rate-limit sessions and fe_sess_rate in the
configuration.txt documentation [1].

In HAProxy 1.5 [2], there are a few more options, like the src_conn_* ones,
which are more precise and might suit you better.
Bear in mind that 1.5 is still in development.

[1] http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
[2] http://haproxy.1wt.eu/download/1.5/doc/configuration.txt
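
As a rough illustration of the kind of per-source tracking 1.5 offers (check
configuration.txt [2] for the exact keywords available in the snapshot you
run; the numbers below are purely hypothetical):

    frontend ft_web
        bind :80
        # reject a source IP that opens more than 20 connections per 10s
        stick-table type ip size 200k expire 60s store conn_rate(10s)
        tcp-request connection track-sc1 src
        tcp-request connection reject if { sc1_conn_rate gt 20 }
        default_backend bk_web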



On Sat, Mar 19, 2011 at 9:31 PM, Allan Wind
allan_w...@lifeintegrity.com wrote:
 Is there a way to rate limit per IP (or CIDR)?  In the sense that our
 global capacity (rate limit sessions) might be x requests/sec,
 but to protect against abusive bots or DoS attacks we would like to
 also limit any IP, or ideally some bigger buckets like a CIDR, to
 say x/100 requests/sec.


 /Allan
 --
 Allan Wind
 Life Integrity, LLC
 http://lifeintegrity.com





Re: Bench of haproxy

2011-05-06 Thread Baptiste
Hi Vincent,

It seems that the CPU speed of your F5 3900 is 2.4GHz with 8GB of memory.
For HAProxy, the faster the CPU is, the more requests per second you
can achieve :)

Keep us updated with your results, it's interesting :)

cheers

On Fri, May 6, 2011 at 7:32 PM, Vincent Bernat ber...@luffy.cx wrote:
 Hi Willy!

 Thanks for your valuable answer.

 In the early evening of Thursday, 05 May 2011, at around 18:18, Willy
 Tarreau w...@1wt.eu said:

 With this configuration, I get 10 000 HTTP req/s. The haproxy process
 takes 100% CPU.

 it's important to check how it translates into user and system usage.

 Here is the vmstat output under full load:

 procs ---memory-- ---swap-- -io -system-- cpu
  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
  1  0      0 3843736  13124  30540    0    0     0     0 77534 8842  3 20 78   0
  1  0      0 3842372  13124  30540    0    0     0     0 75640 8684  2 19 79   0
  0  0      0 3841132  13124  30540    0    0     0     0 73996 8801  2 20 78   0

 There are 8 processors.

 Changing maxconn or disabling splice does not change
 anything. If I use 6 haproxy process, I can get to 30 000 HTTP req/s.
 All haproxy processes take 100% CPU in this case. Moreover, I am pretty sure that
 the Avalanche is not the bottleneck since we can bench more than 120 000
 HTTP req/s with the same setup. I have tried to stick haproxy to 1 CPU
 (with taskset) and I still get 10 000 HTTP req/s.

 Now, if I look at http://haproxy.1wt.eu/#perf, I can see that I should
 be able to achieve 40 000 HTTP req/s. This is four times what I am able
 to achieve. What is wrong with my setup? Why enabling/disabling splice
 does not affect my results? Is there a way to fetch the 2.6.27-wt5 used
 for the tests?

 What objects size are you fetching from the Reflector ? TCP splicing will
 help with objects larger than the internal buffers (16kB) and only with
 some NICs.

 I am using 1048-byte objects. Therefore, I understand that there is no
 point in splicing with this size. I will look at what the average size
 is for our web servers. Maybe this is a bit small by today's standards.

 In order to achieve maximum performance, here's what I'm usually doing :

  - bind haproxy to its own core

  - bind all network interrupts to a different core from the one running
    haproxy, but which shares at least L2 or L3 cache with it ;

  - ensure that interrupts are not flooding the system. Check with vmstat 1.
    As a rule of thumb, anything above 80-100k ints/s is too much.

  - disable all logging statements (log global and option httplog)

  - move the stats check page to a different listen instance so that
    the prod requests don't have to be matched against stats URI

  - use option http-server-close or option forceclose so that
    haproxy sees all requests and actively closes connections right
    after that.

  - use option tcp-smart-accept and option tcp-smart-connect to
    save one useless ACK per connection on each side.

 I have done all this:
  1. bind haproxy to core 0 of the first processor
  2. bind the two network interfaces to core 1 and 2 of the first processor
  3. isolate  the  4  cores  of  the  first  processor  (therefore  other
    processes are binded to the second processor)
  4. remove the stat socket
  5. remove any logging
  6. adding all those options
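
 For reference, steps 1 and 2 boil down to something like the lines below
 (the IRQ numbers are placeholders, the real ones are listed in
 /proc/interrupts, and a single haproxy process is assumed; step 3 is
 typically done with the isolcpus= boot parameter):

   taskset -cp 0 $(pidof haproxy)       # pin haproxy to core 0
   echo 2 > /proc/irq/24/smp_affinity   # eth0 interrupts -> core 1 (mask 0x2)
   echo 4 > /proc/irq/25/smp_affinity   # eth1 interrupts -> core 2 (mask 0x4)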

  global
          daemon

  defaults
          mode    http
          option  splice-auto
          retries 3
          option  redispatch
          option  http-server-close
          option  forceclose
          option  tcp-smart-accept
          option  tcp-smart-connect
          contimeout      5s
          clitimeout      50s
          srvtimeout      50s

  listen poolbench
          bind    172.31.200.10:80
          bind    172.31.201.10:80
          bind    172.31.202.10:80
          bind    172.31.203.10:80
          mode    http
          option  httpchk /
          balance roundrobin
          server  real1 172.31.208.2:80
          server  real2 172.31.209.2:80
          server  real3 172.31.210.2:80
          server  real4 172.31.211.2:80

 I am now able  to achieve 13800 req/s which is a  nice enhancement. I am
 unsure if all the  cores of one processor share the same  L2 or L3 cache
 on those  processors. I suppose that  cores of the  same processor share
 the same L2 cache and the 2 processors share the same L3 cache.

 You can also try to increase buffer size up to 64 kB, which provides
 better performance than splicing with certain 10-gig NICs. This is
 done in the global section: tune.bufsize 65536.

 I did not do this since I use 1048-bytes object. I will do it when I use
 bigger objects.

 Also, using bonding in load balancing mode at high packet rates has
 always resulted in a lower performance than raw NICs for me.

 I will also test without bonding.

 And 10-gig  NICs generally have  faster processing and  better segment
 coalescing abilities than gig 

Re: Bench of haproxy

2011-05-07 Thread Baptiste
On Sat, May 7, 2011 at 12:14 AM, Vincent Bernat ber...@luffy.cx wrote:
 Late in the evening of Friday, 06 May 2011, at around 22:46,
 Baptiste bed...@gmail.com said:

 It seems  that the  CPU speed  of your F5  3900 is  2.4GHz with  8G of
 memory.

 The  F5  is  using some  Cavium  chip  to  forward requests.   The  main
 processor  is mainly  used for  the web  interface which  can  be pretty
 slow. ;-)
 --

mmm...
I thought the Cavium chip was used for L4 balancing only.
But it seems they can do layer 7 as well within the chip:
  http://www.caviumnetworks.com/processor_NITROX-DPI.html
Must be quite expensive :D

To come back to haproxy, since it's event driven, the faster the CPU,
the more requests it will handle :)
And the more memory you have in your chassis, the more TCP
connections you'll be able to maintain.

Good luck with your testing.

Baptiste


 I WILL NOT BARF UNLESS I'M SICK
 I WILL NOT BARF UNLESS I'M SICK
 I WILL NOT BARF UNLESS I'M SICK
 -+- Bart Simpson on chalkboard in episode 8F15




Re: load balance https with routing traffic rules

2011-05-26 Thread Baptiste
Hi,

As you said, since your traffic is encrypted, haproxy can't look into the
HTTP protocol, so you must use TCP mode to load balance HTTPS.
If you want to take advantage of all the smart HTTP features in HAProxy,
you must decrypt the traffic before it is forwarded to haproxy
(using pound, stunnel, nginx or whatever).
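
For what it's worth, once something like stunnel has decrypted the traffic
and forwards it to haproxy in clear, the routing becomes a plain http-mode
ACL on the Host header. A minimal sketch, reusing the backend names from the
config quoted below (addresses and hostnames are made up):

    frontend ft_offloaded
        bind 127.0.0.1:8080      # stunnel/pound/nginx connects here in clear
        mode http
        acl is_first  hdr(host) -i first.example.com
        acl is_second hdr(host) -i second.example.com
        use_backend fwp if is_first
        use_backend swp if is_second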

I hope this helps.

cheers


2011/5/26 Gustavo Jiménez gustavo.jime...@aplicaciones.com.co:
 Hello

 Our problem is that we need to balance HTTPS with some layer-4 filter or
 matching rule that can redirect the incoming web service transactions
 on port 443 (HTTPS) to different servers running different applications.
 We have tried acl src and rdp_cookie but we don't get the expected results.

 The problem is that HTTPS requires mode tcp, but setting TCP mode means
 we don't have access to certain ACL information, like the domain name of
 the incoming request, in order to know how to route the traffic.

 our idea is something like this:

 frontend xx:443
  mode tcp

  acl firt_webpage xxx filter
  acl second_webpage xxx filter
  
  
  acl nwebpage xxx filter

     use_backend  fwp if firt_webpage
     use_backend  swp if second_webpage
     ...
     
     use_backend  nwp if n_webpage

 backend fwp
    server fwpSSL xxx:443
 backend swp
    server swpSSL xxx:443
 ...
 ...
 backend nwp
    server nwpSSL xxx:443

 --
 Cordialmente,

 Gustavo A. Jiménez Correa
 Infrastructure Manager
 Web: www.aplicaciones.com.co
 Bogotá, Colombia




Re: PythonPaste and HAProxy

2011-06-01 Thread Baptiste
Hi,

Maybe you should try this:
option httpchk HEAD /app/haproxycheck
http-check expect status 200

cheers

On Wed, Jun 1, 2011 at 10:59 AM, Christian Klinger cklin...@novareto.de wrote:
 Hi,

 I am trying to load-balance some PythonPaste servers with the help of haproxy.

 I have configured this healthcheck:

 ...
 option httpchk GET /app/haproxycheck
 http-check expect status 200
 ...

 All works fine so far. But I get this error on my Paste console...

 Traceback (most recent call last):
  File
 /opt/extranet/lib/python2.6/site-packages/Paste-1.7.5.1-py2.6.egg/paste/httpserver.py,
 line 1068, in process_request_in_thread
    self.finish_request(request, client_address)
  File /usr/local/lib/python2.6/SocketServer.py, line 322, in
 finish_request
    self.RequestHandlerClass(request, client_address, self)
  File /usr/local/lib/python2.6/SocketServer.py, line 618, in __init__
    self.finish()
  File /usr/local/lib/python2.6/SocketServer.py, line 661, in finish
    self.wfile.flush()
  File /usr/local/lib/python2.6/socket.py, line 297, in flush
    self._sock.sendall(buffer(data, write_offset, buffer_size))
 error: [Errno 32] Datenübergabe unterbrochen (broken pipe)


 I guess it's because haproxy does not wait on the response of the
 haproxycheck page from the webserver.

 Any idea what i can do to fix it?

 Christian







Re: haproxy stats

2011-06-03 Thread Baptiste
Hey,

All the stats are stored in memory. You must configure a stats socket
to retrieve them.
As far as I know, enabling stats has no impact on performance.
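
For example (the path is just an illustration), a stats socket is declared in
the global section and can then be queried with socat:

    global
        stats socket /var/run/haproxy.stat mode 600

    $ echo "show info" | socat unix-connect:/var/run/haproxy.stat stdio
    $ echo "show stat" | socat unix-connect:/var/run/haproxy.stat stdio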

cheers

On Fri, Jun 3, 2011 at 6:06 PM, Beckler, Amon
amon.beck...@relayhealth.com wrote:
 Can anyone enlighten me on where the data for the stats page is stored?  And
 does enabling it have a performance impact?  We are interested in using it,
 but want to make sure we are not going to have some file getting bloated
 eating up disk or hurting our throughput performance.



haproxy and amazon

2011-06-23 Thread Baptiste
Hi gents,

I'm looking for people who use haproxy on an Amazon server.
I'm mostly interested in the number of hits/s you can get.

Thanks to anybody who can help :)

Regards



Re: haproxy and amazon

2011-06-24 Thread Baptiste
Hi Malcolm and Julien

Thanks a lot for your answers.
Very appreciated :)

cheers



Re: Clients hitting infinite loop, cause load high on all backend servers

2011-07-01 Thread Baptiste
Hi Manoj,

Sounds like an application issue :)

Could you paste your configuration file here?

And I also have a few questions:
- are your rsync sessions quite long (longer than 10s)?
- what is the average number of rsync sessions on the platform?
- what makes your rsync sessions get stuck?
- what is your goal exactly: preventing a stuck-prone rsync client from
connecting to the platform?

But once again, the issue seems more related to your rsync
configuration or the way you call/recall rsync.

cheers


On Sat, Jul 2, 2011 at 5:53 AM, Manoj Kumar manoj.ku...@xoriant.com wrote:
 Hi All,


 We have 15 to 20 servers in the Haproxy rotation which handle all sync
 traffic, but we have some client(s) hitting the rsync infinite loop. Even one
 client could cause this:

  * Requests go to one sync server, rsync worker process gets stuck, client
 might send more requests which could get more processes stuck,
 server unresponsive

  * Haproxy redirects this client to another server; repeat above

 Does haproxy have a mechanism to control how many sync servers are tried
 (after the primary one has failed) for a client? How can we handle this kind
 of problem?

 Thanks
 Manoj



Re: HAProxy - 504 Gateway Timeout error.

2011-07-06 Thread Baptiste
hi,

Your maxconn seems a bit low if you have a lot of clients...
Maybe you should try increasing it, or at least increase the queue timeout.

As Hank said, turn on HTTP logging; it will give you very useful
information about your issue.

cheers



Re: auto reply 200 on stats URL just for logs

2011-07-11 Thread Baptiste
hi,

There is an HTTP method for that: HEAD.

cheers

On Mon, Jul 11, 2011 at 11:21 AM, Damien Hardy
damienhardy@gmail.com wrote:
 I should clarify that this is not related to the stats delivered by
 haproxy, but to a static resource used on our pages to get counters based on
 the access logs provided by haproxy.

 2011/7/11 Damien Hardy damienhardy@gmail.com

 Hello,

 I want haproxy to reply 200 with no data on a predefined URL, without going
 through the backends, just to get logs with the referrer for our stats platform.

 Is there a way to do that with haproxy directives (redirect with code 200
 maybe) ?

 Best regards,

 --
 Damien





Re: Does haproxy support wccp(Web Cache Communication Protocol) ?

2011-07-11 Thread Baptiste
Hi,

You don't need a load balancer to load-balance WCCP.
This protocol already has some built-in health checks and a nice URL
hash algorithm.

cheers


On Tue, Jul 12, 2011 at 5:13 AM, 岳强 yueqiang.da...@gmail.com wrote:
 Hello!
     I am doing some work on caching (squid), which supports WCCP, but I
 don't know if haproxy supports it.
     I didn't find any information about WCCP in haproxy, so I
 hope you can give me a good answer!

     Thank you very much!!

 Regards,
 YueQiang



Re: Haproxy response 502 but backend send 200

2011-07-12 Thread Baptiste
Hi,

According to the HAProxy logs, your errors seem application-related:

 SH   The server aborted before sending its full HTTP response headers, or
  it crashed while processing the request. Since a server aborting at
  this moment is very rare, it would be wise to inspect its logs to
  control whether it crashed and why. The logged request may indicate a
  small set of faulty requests, demonstrating bugs in the application.
  Sometimes this might also be caused by an IDS killing the connection
  between haproxy and the server.



2011/7/12 Alexey Vlasov ren...@renton.name:
 Hi.

 I've got such a scheme on the shared hosting:
                      +- apache_pool1
                      |
 apache_fe - haproxy -|- apache_pool2
                      |
                      +- apache_pool3
             ...

 haproxy.conf:
 
 defaults
    log         127.0.0.1 local1 notice
    mode        http
    retries     10
    maxconn     2000
    timeout     client 5
    timeout     connect 5000
    timeout     server 5m
    balance     roundrobin
    option      forwardfor except 111.111.111.111/32
    stats       enable
    stats       uri /haproxy-1gb?stats

 listen  backend_pool1   111.111.111.111:9099
    option  httplog
    log     127.0.0.1 local2
    cookie  SERVERID
    option  httpchk
    capture request header Host len 40
    server  pool1 111.111.111.111:8099 weight 256 cookie pool1 check inter 800 
 fall 3 rise 2 maxconn 500
    server  pool2 111.111.111.111:8100 weight   1 cookie pool2 check inter 800 
 fall 3 rise 2 maxconn 250
    server  pool3 111.111.111.111:8101 backup
 

 Unfortunately, I can't understand myself the cause of these errors:
 1. log from apache_fe:
 ===
 217.212.230.49 - - [12/Jul/2011:22:28:02 +0400] GET 
 /?option=com_sobi2sobi2Task=sobi2Detailssobi2Id=80default=80Itemid=7 
 HTTP/1.1 502 107 
 http://clientvhost.com/index.php?option=com_sobi2Itemid=7catid=1529; 
 Opera/9.80 (J2ME/MIDP; Opera Mini/4.3.24214/25.669; U; ru) Presto/2.5.25 
 Version/10.54
 ===

 2. haproxy access.log:
 Jul 12 22:28:04 l19 haproxy_aux2_pools[4944]: 111.111.111.111:42001 
 [12/Jul/2011:22:28:02.281] backend_pool1 backend_pool1/pool1 0/0/0/-1/2084 
 502 204 - - SH-- 24/6/6/6/0 0/0 {clientvhost.com:9099} GET 
 /?option=com_sobi2sobi2Task=sobi2Detailssobi2Id=80default=80Itemid=7 
 HTTP/1.1

 3. apache_pool1:
 10 217.212.230.49 - - [12/Jul/2011:22:28:19 +0400] www.clientvhost.com GET 
 /?option=com_sobi2sobi2Task=sobi2Detailssobi2Id=20default=20Itemid=7 
 HTTP/1.1 200 425885 
 http://clientvhost.com/index.php?option=com_sobi2Itemid=7catid=1529; 
 Opera/9.80 (J2ME/MIDP; Opera Mini/4.3.24214/25.669; U; ru) Presto/2.5.25 
 Version/10.54

 there are few of them (taking into account the 10k sites on the server),
 but still. Moreover, users sometimes complain about such problems.

 Right from the logs you can see that the request was processed
 normally (normal response size, 425885 bytes - apache_pool1 log)
 for 10 seconds, but haproxy somehow returned a 502 error to the client.

 show errors doesn't show anything.

 Does anyone know what else can be added into the options of haproxy
 logging? Or maybe somebody just knows how this can be fixed.

 Thank you in advance.

 --
 BRGDS. Alexey Vlasov.





Re: https from source to destination

2011-07-13 Thread Baptiste
On Wed, Jul 13, 2011 at 11:04 PM, Christopher Ravnborg
christopher.ravnb...@gmail.com wrote:
 Hi
 I'm looking for a solution which can do the following:
 Client need to connect to https webserver via haproxy. Encryption all the
 way.
 Log on webserver needs to contain client ip, this can be done, at least on
 http with forwardfor, that works fine.
 I have setup haproxy and read about stunnel with a patch to do https to
 haproxy, if i understand it right, stunnel will then decrypt/unwrap the
 stream, and pass it on to the server.
 If this is the case - does it send the non-HTTPS traffic to the HTTPS server
 - and will this be possible at all, or am I misunderstanding this totally?
 I just can't figure out how this can be done.
 Is this possible for me to do at all ?
 /Christopher


Hi Christopher,

The purpose of using stunnel in front of HAProxy is to offload SSL
processing from your backend servers and to take advantage of all the
wonderful layer-7 features of HAProxy, since the traffic will be in the
clear.
HAProxy will then connect to your backend server over plain HTTP.
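
As a rough sketch (file names and addresses are placeholders), the stunnel
side could look like this; note that the xforwardedfor option is only
available with the patched stunnel discussed in this thread:

    ; stunnel.conf
    cert = /etc/stunnel/site.pem

    [https-offload]
    accept  = 0.0.0.0:443
    connect = 127.0.0.1:80
    ; only with the x-forwarded-for patched stunnel
    xforwardedfor = yes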

If you don't want to bother with all this patching, at Exceliance, we
provide HAPee, HAProxy Enterprise Edition which includes a patched
stunnel for business and enterprise.
More information here:
http://www.exceliance.fr/en/products/hapee

later



Re: How can we Use

2011-07-30 Thread Baptiste
Hi Sunit,

It's as simple as installing the haproxy package from your Linux distribution.
HAProxy configuration is quite easy, and if you want to load-balance the HTTP
protocol, it is easy to find good examples on the web.
This load-balancer can be either a physical machine or a VM.

There is no cost, everything is open source.

Now, if you need (or want) some support on your load-balancer and if you
want to use HAProxy, you should have a look to www.exceliance.fr .
Exceliance sells either services around HAProxy (
http://www.exceliance.fr/en/products/hapee) or HAProxy based load-balancer
appliances: Aloha (http://www.exceliance.fr/en/products/aloha)
Note that there is a VMWare version of the Aloha:
http://www.exceliance.fr/en/products/aloha/aloha-va

Last but not least, some HAProxy developers work at Exceliance and the
code written for the Exceliance appliances is pushed back into the HAProxy
open-source version.

regards


On Sat, Jul 30, 2011 at 7:50 AM, SUNIT TYAGI sunit.ty...@spectranet.in wrote:

 Dear Support,

  We want to set up a load balancer with 4 virtual servers (on VMware). Can
  you please suggest how we can do that, and also which version will support
  the VPS servers?

  Also, is there any cost for this? The application will be Apache-based and
  the database will be MySQL.

 Sunit


Re: acl using path_beg

2011-07-31 Thread Baptiste
Hi,

The wp-admin page of WordPress is a 302 redirect to wp-login.php.

Have you tried to browse the backend directly?
I guess it should not work.

There are some parameters in WordPress to tell it on which URL it
is hosted.
By default it may be /; in your case you should set this parameter
to /blog :)

cheers


On Sun, Jul 31, 2011 at 11:46 PM, Gabriel Sosa sosagabr...@gmail.com wrote:
 Hello folks

 I'm trying to send all the traffic that starts with /blog to a
 specific backend and I'm using *path_beg* for that. here is a snip of
 my config file:

 defaults
        log             global
        timeout client  6
        timeout server  3m
        timeout connect 15000
        retries         3
        option          redispatch


 frontend  http
        mode            http
        log             global
        option          httplog
        option          forceclose
        option          httpclose

        bind            XXX.XXX.XXX.XXX:80        # com 80

                                acl blog_acl path_beg /blog
                                use_backend blog_backend if blog_acl
                                default_backend farm80




 For some reason, if I browse http://www.example.com/blog everything
 works just fine, but if I browse http://www.example.com/blog/wp-admin/
 (as you can guess I'm using wordpress) I get a 404 status.

 AFAIK, the acl path_beg /blog should match /blog/ or  /blog/wp-admin
 basically anything after /blog/ should be sent to that backend.

 do you have any idea why that could be not working as expected?

 Best regards

 --
 Gabriel Sosa
 Si buscas resultados distintos, no hagas siempre lo mismo. - Einstein





Re: 5000 CPS for haproxy

2011-08-02 Thread Baptiste
Hi Carlo,

Before testing the application itself, you must first test the infrastructure ;)
Once you know how much your infrastructure can deliver, then your
bench makes sense.
This is a step by step method, from the lower layer to the higher one.

Before testing your application in a virtualized environment, you
should bench it on physical servers,
because in a virtualized environment you are sharing resources with
other tenants and the behavior may be odd under heavy load.

By the way, do you have a few Ruby examples? I'm interested in your
way of testing applications.
Long time ago, I used perl and libwww.

cheers :)


On Tue, Aug 2, 2011 at 9:08 AM, carlo flores ca...@petalphile.com wrote:
 To add to this, there is a great automated tool (and ideas) from the Chicago Tribune
 called Bees With Machine Guns, which spins up n AWS micro instances to push
 traffic to the target server.

 https://github.com/newsapps/beeswithmachineguns

 My CTO makes the argument that connections/s or sessions/s don't mean much
 unless those sessions are testing realistic user traffic (which tests the
 application/database/etc). This is not the methodology you're using to test
 HAProxy, of course, but it is something I think about enough that I feel
 obligated to type about it.  If you care, we do this with Ruby's Net:HTTP
 libraries making specific calls on existing sessions to our RESTful servers,
 and those calls are built on random but real user data.


 On Monday, August 1, 2011, Willy Tarreau w...@1wt.eu wrote:
 Hello,

 On Mon, Aug 01, 2011 at 07:00:37PM +0530, appasaheb bagali wrote:
 hello,

 we have deployed the Haproxy on amazon cloud.

 It's working fine; we would like to test it at 5000 CPS.
 Please suggest a way to test this.

 There are various tools for that. The principle is that you should
 start some dummy servers on other instances (or at least fast static
 servers such as nginx), and run injection tools on other instances.
 Such tools might be httperf, ab, inject or any such thing. You will
 then configure your haproxy to forward to the dummy servers and will
 send your injectors' requests to haproxy. The tools will tell you
 the data rate, connection rate, etc... You're encouraged to enable
 the stats page on haproxy so that you can check rates and errors in
 live.

 In general, for 5k CPS, you need a bit of system tuning, because most
 Linux distros come with a conntrack setting which is only valid for a
 desktop usage but not for a server usage, so the traffic will suddenly
 stop after a few seconds. Or better, simply disable the module.
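
 (As a concrete example of that tuning - the sysctl and module names vary a
 bit between kernel versions, so treat these lines as a sketch:)

    # raise the conntrack table size (example value)...
    sysctl -w net.netfilter.nf_conntrack_max=262144
    # ...or unload the module entirely if you don't use NAT on this box
    modprobe -r iptable_nat nf_conntrack_ipv4 nf_conntrack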

 Also, it is important that you have at least two machines for the
 servers and at least two for the clients, because in such environments,
 you have no visibility on anything, and it's quite common that some VMs
 are struggling or that some network paths are saturated. If you see that
 two servers behave differently, at least it's easier to spot where the
 problem is.

 Regards,
 Willy






Re: Automate backend registration

2011-08-03 Thread Baptiste
Hi Jens,

What do you mean by registration?
Do you mean making haproxy aware of the freshly deployed application?

cheers

On Wed, Aug 3, 2011 at 5:46 PM, Jens Bräuer jens.brae...@numberfour.eu wrote:
 Hi HA-Proxy guys,

 I wonder what the current state of the art is for automating the registration of
 backends. My setup runs on EC2 and I run HAProxy in front of local
 applications to ease administration. So a typical config file would be like
 this.

 frontend http
    bind *:8080
    acl is-auth                         path_beg /auth
    acl is-core                         path_beg /core
    use_backend auth                            if is-auth
    use_backend core                            if is-core

 backend auth
    server auth-1 localhost:7778 check

 backend core
    server core-1 localhost:1 check

 All applications are installed via RPMs and I would like to couple the
 installation with the backend registration. I'd like to do this as I want to
 configure everything in one place (the RPM), and the number of installed
 applications may vary from host to host.

 I'd really appreciate a hint about where I can find tools, or what the current
 state of the art is for handling this kind of task.

 Cheers,
 Jens






Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Baptiste
 Why don't you edit the haproxy conf directly and reload it ? If you have the
 new IP and are going to update the /etc/hosts, what is stopping you from
 doing a sed on the backend's ip in haproxy.cfg ?


 Or, you could just run in a VPC and stop doing weird stuff with your
 networking ;)


 Julien



Or use some kind of haproxy conf template with a keyword you
replace (using sed) with the IPs you get from the hosts file.
With inotify, you get notified each time the hosts file changes; you then
generate a new haproxy conf from your template and ask haproxy
to reload it :)
brilliant !
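
A quick sketch of that idea (it needs inotify-tools; the paths, the hostname
matched by awk and the @BACKEND_IP@ placeholder are all made up):

    #!/bin/sh
    while inotifywait -e close_write /etc/hosts; do
        ip=$(awk '/mybackend/ {print $1}' /etc/hosts)
        sed "s/@BACKEND_IP@/$ip/" /etc/haproxy/haproxy.tmpl > /etc/haproxy/haproxy.cfg
        haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
                -sf $(cat /var/run/haproxy.pid)
    done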

cheers



Re: cookie-less sessions

2011-08-05 Thread Baptiste
Hi Hank

Actually, stick on URL param should work with clients which do not
support cookies.
Is the first reply a 30[12]?

How is the user made aware of the jsid, and how is he supposed to send his
jsid to the server?

Do you have an X-Forwarded-For header on your proxy, or can you set one up?

cheers



Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Baptiste
On Fri, Aug 5, 2011 at 11:58 PM, Willy Tarreau w...@1wt.eu wrote:
 Hi Baptiste,

 On Fri, Aug 05, 2011 at 11:53:40PM +0200, Baptiste wrote:
 Or using some kind of haproxy conf template with some keyword you
 replace using sed with IPs you would get from the hosts file?
 with inotify, you can get updated each time hosts file change, then
 you generate a new haproxy conf from your template and you ask haproxy
 to reload it :)

 Once again, if the host is in /etc/hosts, then you don't need to touch
 the config anymore. Simply reload it so that it resolves the hosts
 again.

 cheers,
 Willy



Why make things easy when you can make them complicated

cheers



Re: cookie-less sessions

2011-08-06 Thread Baptiste
On Sat, Aug 6, 2011 at 8:51 AM, Hank A. Paulson
h...@spamproof.nospammail.net wrote:
 On 8/5/11 3:01 PM, Baptiste wrote:

 Hi Hank

 Actually stick on URL param should work with client which does not
 support cookies.
 is the first reply a 30[12] ?

 So you are saying that stick on URL param reads the outgoing 302 and saves
 the URL param from that in the stick table on 1.5? If so, great, then problem
 solved. If it doesn't save it on the way out from the initial redirect then
 it won't help.

 Is the same supposed to happen with balance url_param on 1.4?
 If not, I will switch to 1.5. If it is supposed to, it doesn't, afaict.


 How is they user aware of the jsid or how is he supposed to send his
 jsid to the server?

 302 to the URL with the jsid URL param.

 Thanks


 Do you have a X-Forwarded-For on your proxy or can you setup one?

 cheers



Well, I'm thinking of something, let me run some tests and I'll come
back to you with a good or a bad news.

cheers



Re: cookie-less sessions

2011-08-06 Thread Baptiste
On Sat, Aug 6, 2011 at 9:32 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi Baptiste,

 On Sat, Aug 06, 2011 at 09:24:08AM +0200, Baptiste wrote:
 On Sat, Aug 6, 2011 at 8:51 AM, Hank A. Paulson
 h...@spamproof.nospammail.net wrote:
  On 8/5/11 3:01 PM, Baptiste wrote:
 
  Hi Hank
 
  Actually stick on URL param should work with client which does not
  support cookies.
  is the first reply a 30[12] ?
 
  So you are saying that stick on URL param reads the outgoing 302 and saves
  the URL param from that in the stick table on 1.5? f so, great then problem
  solved. If it doesn't save it on the way out from the initial redirect then
  it won't help.
 
  Is the same supposed to happen with balance url_param on 1.4?
  If not, I will switch to 1.5. If it is supposed to, it doesn't, afaict.
 
 
  How is they user aware of the jsid or how is he supposed to send his
  jsid to the server?
 
  302 to the URL with the jsid URL param.
 
  Thanks
 
 
  Do you have a X-Forwarded-For on your proxy or can you setup one?
 
  cheers
 
 

 Well, I'm thinking of something, let me run some tests and I'll come
 back to you with a good or a bad news.

 Right now I see no way to do that. We'd need to extract the url_param
 from the Location header, this would be a new pattern. I think it's
 not too hard to implement. We already have url_param for the request,
 we could have hdr_url_param(header_name) or something like this.

 Regards,
 Willy



Actually, I was thinking of the stuff you developed to replace
appsession with stick tables.
A dirty workaround could be:
- configure the application to send a response with a Set-Cookie and
the 302/Location header, both with the same ID value.
- on HAProxy, match the Set-Cookie of the response and store it in a stick table
- when the client sends a request, match the URL param against the table
set up above

I agree that learning a URL param from a header would be cleaner ;)


Hank,
these options are currently not available in HAProxy, even in 1.5-dev6.
I guess they will be included in the next 1.5-dev release.

cheers



Re: cookie-less sessions

2011-08-06 Thread Baptiste
I made it work on our Aloha load-balancer (4.1.2) :)

PHP code on the server:

cookie.php :
<?php
session_start();
header("Location: /?ID=" . session_id());

echo apache_getenv("SERVER_ADDR");
?>

test script.php:
<?php
echo apache_getenv("SERVER_ADDR");
?>

it creates a Set-Cookie with the cookie name PHPSESSID and redirects the
user with a URL param ID (which has the same value as the cookie id).

HAProxy configuration, on my backend configuration:
  stick-table type string len 32 size 10K
  stick store-response set-cookie(PHPSESSID)
  stick on url_param(ID)

The test:
First request:

$ curl -D - http://aloha/cookie.php
HTTP/1.1 302 Found
Date: Sat, 06 Aug 2011 09:13:45 GMT
Server: Apache/2.2.16 (Debian)
X-Powered-By: PHP/5.3.3-7+squeeze3
Set-Cookie: PHPSESSID=8a28f7089e9d70c3375505c9620472db; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Location: /?ID=8a28f7089e9d70c3375505c9620472db
Vary: Accept-Encoding
Content-Length: 68
Content-Type: text/html

192.168.10.101



Second request, with the ID on URL:

$ curl -D - http://aloha/script.php?ID=8a28f7089e9d70c3375505c9620472db
HTTP/1.1 200 OK
Date: Sat, 06 Aug 2011 09:15:21 GMT
Server: Apache/2.2.16 (Debian)
X-Powered-By: PHP/5.3.3-7+squeeze3
Vary: Accept-Encoding
Content-Length: 17
Content-Type: text/html

192.168.10.101


Note that I'm using the same backend, since the IP printed is the same.


Let's have a look at the table now:

echo show table bk_myappli | socat unix-connect:/var/run/haproxy.stat stdio
# table: bk_myappli, type: string, size:10240, used:5
0x14b5694: key=8a28f7089e9d70c3375505c9620472db use=0 exp=0 server_id=1


Hank, as said before, it's not yet in HAProxy.
I can't speak on behalf of Willy, but you can ask him to kindly
include it on the next 1.5-dev release :)

cheers



Re: cookie-less sessions

2011-08-06 Thread Baptiste
On Sat, Aug 6, 2011 at 12:50 PM, Willy Tarreau w...@1wt.eu wrote:
 On Sat, Aug 06, 2011 at 11:27:53AM +0200, Baptiste wrote:
 I made it work on our Aloha load-balancer (4.1.2) :)

 Baptiste, you should stop taking hardware with you during holidays,
 it's too much temptation ;-)


Since we have the KVM version, I can work at anytime, anywhere...
It's bad 

Anyway, I'm always keen to help people :)

cheers



Re: Defending against the Apache killer

2011-08-22 Thread Baptiste
Hi,

Why not simply drop this Range: bytes=0- header?

cheers


2011/8/22 Levente Peres sheri...@eurosystems.hu:
 Hello,

 There are a number of webserver-mace apps on the net, the newest that I heard
 of being the so-called Apache killer script I saw a few days ago on Full
 Disclosure... Here you can see a demonstration of what it does. Also, I've
 attached the script itself.

 http://www.youtube.com/watch?v=fkCQZaVjBhA

 I believe we should discuss some possibilities about how to configure
 HAProxy to protect Apache backends as much as possible, or at least mitigate
 such attacks? Any ideas?

 Cheers,

 Levente




Re: Proxy Protocol in 1.4.x ?

2011-08-23 Thread Baptiste
Hi Sebastien,

Actually, bumptech has not yet integrated all the patches developed by Emeric.
And the stunnel version used is the one without Exceliance (Emeric
again) patches.

But definitely, stud is interesting.

cheers


On Tue, Aug 23, 2011 at 1:02 PM, Sebastien Estienne
sebastien.estie...@gmail.com wrote:
 New benchmark on this topic with haproxy:
 http://vincent.bernat.im/en/blog/2011-ssl-benchmark.html


 On Saturday, July 9, 2011, Willy Tarreau w...@1wt.eu wrote:
 Hello Sébastien,

 On Fri, Jul 08, 2011 at 11:17:12PM +0200, Sébastien Estienne wrote:
 yes we perfectly understand this, and that is what we like about haproxy.
 But the demand for SSL is growing, it's even mandatory for some use
 cases.
 Stud looks really promising and solid and a good match for haproxy as it
 was designed to be used with haproxy ( http://devblog.bu.mp/introducing-stud
 ).
 Today we have the choice between:
 - haproxy 1.4 + patched stunnel
 - haproxy 1.5 dev + stud
 - patched haproxy 1.4 + stud

 The last one seems the most stable with the best performance, so as the
 demand for SSL is growing, i think it would be a big plus that haproxy 1.4
 can work with stud  without being patched.

 I see your point. Well, there is also a fourth solution. At Exceliance, we
 have an haproxy enterprise edition (hapee) packaging which includes a
 patched haproxy 1.4, patched stunnel etc... There's a free version you can
 register for. We decided to install it as some of our supported customers
 for free, just because it made maintenance easier for us, and rendered
 their infrastructure more stable.

 I don't know if it would make sense but maybe stud could be integrated
 somehow in haproxy like this:
 Instead of starting stud then haproxy separately, the main haproxy
 process could fork some stud-like process (binding 443) as it already forks
 haproxy childs for multicore and it would discuss using the proxy protocol
 transparently for the end user with no need to setup the link between both.

 It's amusing that you're saying that : when I looked at the code, I
 thought
 they use the same design model as haproxy and they have the same goals,
 maybe this could be merged. My goal with SSL in haproxy is that we can
 dedicate threads or processes to that task, thus some core changes are
 still
 needed, but a first step might precisely be to have totally independant
 processes communicating over a unix socket pair and the proxy protocol.
 It's just not trivial yet to reach the server in SSL, but one thing at a
 time...

 This would offer a seamless SSL integration without hurting haproxy
 codebase and stability for clear http content.

 Exactly.

 Thanks for your insights, they comfort me in that mine were not too
 eccentric :-)

 Willy



 --
 Sebastien Estienne




Re: Proxy Protocol in 1.4.x ?

2011-08-23 Thread Baptiste
I wish I had enough time :)
All the benchmarks have been run; I just need time to write them up!

cheers

On Tue, Aug 23, 2011 at 2:04 PM, Sebastien Estienne
sebastien.estie...@gmail.com wrote:
 You could publish some benchmarks of haproxy + all the best SSL frontends;
 I think it would be really valuable info for the haproxy community.
 Thanks

 --
 Sebastien E.


 On 23 August 2011, at 13:29, Baptiste bed...@gmail.com wrote:

 Hi Sebastien,

 Actually, bumptech has not yet integrated all the patches developed by 
 Emeric.
 And the stunnel version used is the one without Exceliance (Emeric
 again) patches.

 But definitely, stud is interesting.

 cheers







Re: Defending against the Apache killer

2011-08-24 Thread Baptiste
On Tue, Aug 23, 2011 at 8:09 AM, Willy Tarreau w...@1wt.eu wrote:
 On Mon, Aug 22, 2011 at 07:57:10PM +0200, Baptiste wrote:
 Hi,

 Why not only dropping this Range:bytes=0- header?

 Agreed. Protecting against this vulnerability is not a matter of limiting
 connections or whatever. The attack makes mod_deflate exhaust the process'
 memory. What is needed is to remove the Range header when there are too
 many occurrences of it.

 Their attack puts up to 1300 Range values. Let's remove the header if
 there are more than 2 :

    reqidel ^Range if { hdr_cnt(Range) gt 2 }

 That should reliably defeat the attack.

 Regards,
 Willy




Actually, this is slightly different.
According to the Perl script, a single Range header is sent, but it is
forged with a lot of range values,
e.g.: Range: 0-,5-1,5-2,5-3,[...]

Since there is no hdr_size ACL for now, the only way is to use a
hdr_reg to do this:
reqidel ^Range if { hdr_reg(Range) ([0-9]+-[0-9]+,){10,} }

But the regexp above does not work (haproxy 1.5-dev6), the comma is
not matched.
I don't know yet whether it's a haproxy bug or not; I'll tell you once I
have finished investigating.

cheers



Re: Defending against the Apache killer

2011-08-24 Thread Baptiste
On Wed, Aug 24, 2011 at 12:44 PM, Baptiste bed...@gmail.com wrote:
 On Tue, Aug 23, 2011 at 8:09 AM, Willy Tarreau w...@1wt.eu wrote:
 On Mon, Aug 22, 2011 at 07:57:10PM +0200, Baptiste wrote:
 Hi,

 Why not only dropping this Range:bytes=0- header?

 Agreed. Protecting against this vulnerability is not a matter of limiting
 connections or whatever. The attack makes mod_deflate exhaust the process'
 memory. What is needed is to remove the Range header when there are too
 many occurrences of it.

 Their attack puts up to 1300 Range values. Let's remove the header if
 there are more than 2 :

    reqidel ^Range if { hdr_cnt(Range) gt 2 }

 That should reliably defeat the attack.

 Regards,
 Willy




 Actually, this is slightly different.
 According to the Perl script, a single Range header is sent, but it is
 forge with a lot of range value.
 IE: Range: 0-,5-1,5-2,5-3,[...]

 Since there is no hdr_size ACLs for now, the only way is to use a
 hdr_reg to do this:
 reqidel ^Range if { hdr_reg(Range) ([0-9]+-[0-9]+,){10,} }

 But the regexp above does not work (haproxy 1.5-dev6), the comma is
 not matched
 don't know yet if it's an haproxy bug or not, I'll tell you once I
 have finished investigating.

 cheers


I confirm, this looks like a bug in HAProxy, maybe in the way HAProxy
loads the regexp from the configuration file:
Here is a req.txt file simulating the attack:
HEAD / HTTP/1.1
Host: 10.0.3.20
Range: 
bytes=0-,5-0,5-1,5-2,5-3,5-4,5-5,5-6,5-7,5-8,5-9,5-10,5-11,5-12,5-13,5-14,5-15,5-16,5-17,5-18,5-19,5-20,5-21,5-22,5-23,5-24,5-25,5-26,5-27,5-28,5-29,5-30,5-31,5-32,5-33,5-34,5-35,5-36,5-37,5-38,5-39,5-40,5-41,5-42,5-43,5-44,5-45,5-46,5-47,5-48,5-49,5-50,5-51,5-52,5-53,5-54,5-55,5-56,5-57,5-58,5-59,5-60,5-61,5-62,5-63,5-64,5-65,5-66,5-67,5-68,5-69,5-70,5-71,5-72,5-73,5-74,5-75,5-76,5-77,5-78,5-79,5-80,5-81,5-82,5-83,5-84,5-85,5-86,5-87,5-88,5-89,5-90,5-91,5-92,5-93,5-94,5-95,5-96,5-97
Accept-Encoding: gzip
Connection: close


And a working regexp tested with egrep:
egrep -v ([0-9]+-[0-9]+,){10,} req.txt
HEAD / HTTP/1.1
Host: 10.0.3.20
Accept-Encoding: gzip
Connection: close

The following regexp works in HAProxy: ([0-9]+-[0-9]+)
The same with the comma does not work: ([0-9]+-[0-9]+,)
This one works: ([0-9]+-[0-9]+?)
And this one does not: ([0-9]+-[0-9]+?)\{10,\}

Maybe I'm doing something wrong.
If your need more details, please let me know.

cheers



Re: Defending against the Apache killer

2011-08-24 Thread Baptiste
On Wed, Aug 24, 2011 at 1:44 PM, Cyril Bonté cyril.bo...@free.fr wrote:
 Hi all,

 On Wednesday 24 August 2011 13:02:18 Baptiste wrote:
 (...)
  Since there is no hdr_size ACLs for now, the only way is to use a
  hdr_reg to do this:
  reqidel ^Range if { hdr_reg(Range) ([0-9]+-[0-9]+,){10,} }
 
  But the regexp above does not work (haproxy 1.5-dev6), the comma is
  not matched
  don't know yet if it's an haproxy bug or not, I'll tell you once I
  have finished investigating.
 
  cheers

 I confirm, this looks like a bug in HAProxy, maybe in the way HAProxy
 loads the regexp from the configuration file:

 This is not about how HAProxy loads the regex but about how it applies it to the
 headers.
 The comma character (,) is considered as a value separator. HAProxy will then
 try to apply the regex to each value found in the Range header.
 For this header :
 Range:
 bytes=0-,5-0,5-1,5-2,5-3,5-4,5-5,5-6,5-7,5-8,5-9,5-10,5-11,5-12,5-13,5-14,5-15,5-16,5-17,5-18,5-19,5-20,5-21,5-22,5-23,5-24,5-25,5-26,5-27,5-28,5-29,5-30,5-31,5-32,5-33,5-34,5-35,5-36,5-37,5-38,5-39,5-40,5-41,5-42,5-43,5-44,5-45,5-46,5-47,5-48,5-49,5-50,5-51,5-52,5-53,5-54,5-55,5-56,5-57,5-58,5-59,5-60,5-61,5-62,5-63,5-64,5-65,5-66,5-67,5-68,5-69,5-70,5-71,5-72,5-73,5-74,5-75,5-76,5-77,5-78,5-79,5-80,5-81,5-82,5-83,5-84,5-85,5-86,5-87,5-88,5-89,5-90,5-91,5-92,5-93,5-94,5-95,5-96,5-97

 It will check byte=0-
 then 5-0
 then 5-1
 then ...


 --
 Cyril Bonté



ahah :)
You're both right.
Sorry, I totally forgot this part of the RFC:
Multiple message-header fields with the same field-name MAY be
present in a message if and only if the entire field-value for that
header field is defined as a comma-separated list [i.e., #(values)].
It MUST be possible to combine the multiple header fields into one
field-name: field-value pair, without changing the semantics of the
message, by appending each subsequent field-value to the first, each
separated by a comma. The order in which header fields with the same
field-name are received is therefore significant to the interpretation
of the combined field value, and thus a proxy MUST NOT change the
order of these field values when a message is forwarded.


So the hdr_cnt from Willy works.
I did not try this option since this is not how the Perl script in the
first mail builds the attack.

Sorry for the noise, and glad to see that HAProxy works well :)



Re: How to test keep-alive is working?

2011-08-26 Thread Baptiste
Hi,

In HTTP/1.1, keep-alive is the default mode and does not require any header.
On the other hand, in HTTP/1.0 there is no keep-alive by default,
which is why browsers and web servers had to announce it.

More information available here:
http://blog.exceliance.fr/2011/06/30/implement_http_keepalive_without_killing_your_apache_server/
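
One simple way to check it from the client side (the hostname below is a
placeholder): ask curl to fetch the same URL twice in a single invocation and
watch the verbose output; with keep-alive working, the second request should
report that the existing connection is being re-used:

    curl -v -o /dev/null -o /dev/null http://your.haproxy.example/ http://your.haproxy.example/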

cheers


On Fri, Aug 26, 2011 at 8:56 PM, bradford fingerm...@gmail.com wrote:
 How do I test that keep-alive is working?

 I've added option http-server-close to the frontend and do not see a
 connection header in the HTTP response.  I used to see connection: close
 when I had httpclose enabled, but don't see connection: keep-alive or
 anything similar.  So, how do I test it's working?

 Bradford




Re: CVE-2011-3192 and Range requests

2011-08-27 Thread Baptiste
Hi,

HAProxy is fine and can protect your Apache.
Have a look at this page, you'll find some HAProxy configuration example:
http://blog.exceliance.fr/2011/08/25/protect-apache-against-apache-killer-script/

Basically, removing the malformed Range header is easy to do.
Usually, the same source IP address will also try to open a lot of connections.
HAProxy can also help you slow down this kind of attack; since it is
not legitimate traffic, you don't want it to hit your web servers
too hard.

Good luck.



On Sat, Aug 27, 2011 at 8:04 AM, Aristedes Maniatis a...@ish.com.au wrote:
 What is the vulnerability [1] of an Apache httpd server with haproxy in
 front of it?

 1. haproxy is fine, httpd will still suffer from DoS attacks
 2. haproxy may itself suffer DoS
 3. haproxy is fine and will protect an httpd server from DoS

 Thanks for an excellent product.

 Ari


 [1] http://article.gmane.org/gmane.comp.apache.announce/59


 --
 --
 Aristedes Maniatis
 ish
 http://www.ish.com.au
 Level 1, 30 Wilson Street Newtown 2042 Australia
 phone +61 2 9550 5001   fax +61 2 9550 4001
 GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A





Re: Error 504

2011-09-08 Thread Baptiste
Hello,

your server might be very slow, or the server timeout in your conf
might be too low.

Please copy/paste your conf and tell us which version you're using
and the underlying OS.

cheers


On Thu, Sep 8, 2011 at 1:35 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,
 I've a question about this error :

 504 Gateway Time-out

 The server didn't respond in time.

 What could I check in my config? I created 2 LBs with a virtual IP and all
 requests come from the firewall to this IP.
 If needed, I can copy my configuration file.
 Thanks for your help, I'm lost.
 Regards, Christophe



Re: Error 504

2011-09-08 Thread Baptiste
I can't see anything weird here.
Are the backend statuses OK on the haproxy HTTP stats page?

cheers

On Thu, Sep 8, 2011 at 2:28 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,

 Here's my config. Webservers are IIS.

 global
 log 192.168.0.2 local0
 log 127.0.0.1 local1 notice
 maxconn     10240
 defaults
 log    global
 option dontlognull
 retries    2
 clitimeout  5
 srvtimeout  5
 contimeout  5
 timeout server 60s

 listen WebPlayer-Farm 192.168.0.2:80
 mode http
 option httplog
 balance source
 #balance leastconn
 option forwardfor
 stats enable
 option http-server-close
 server Player1 192.168.0.10:80 check
 server Player2 192.168.0.11:80 check
 server Player3 192.168.0.12:80 check
 server Player4 192.168.0.13:80 check

 listen WebPlayer-Farm-SSL 192.168.0.2:443
 mode tcp
 option ssl-hello-chk
 balance source
 server Player1 192.168.0.10:443 check
 server Player2 192.168.0.11:443 check
 server Player3 192.168.0.12:443 check
 server Player4 192.168.0.13:443 check

 listen  Manager-Farm    192.168.0.2:81
 mode http
 option httplog
 balance source
 option forwardfor
 stats enable
 option http-server-close
 server  Manager1 192.168.0.60:80 check
 server  Manager2 192.168.0.61:80 check

 listen Manager-Farm-SSL 192.168.0.2:444
 mode tcp
 option ssl-hello-chk
 balance source
 server Manager1 192.168.0.60:443 check
 server Manager2 192.168.0.61:443 check

 listen  info 192.168.0.2:90
 mode http
 balance source
 stats uri /



 Thanks for your help!

 Christophe




 On 08/09/11 14:16, Baptiste bed...@gmail.com wrote:

Hello,

you server might be very slow or your server timeout in your conf
might be too low.

If you can copy/paste your conf and tell us which version you're using
and the underlying OS.

cheers


On Thu, Sep 8, 2011 at 1:35 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,
 I've a question about this error :

 504 Gateway Time-out

 The server didn't respond in time.

 What could I check in my config ? I created 2 LB with a virtual IP and
all
 request are coming from the firewall to this IP.
 I think it's possible, if needed, I can copy my configuration file.
 Thanks for your help, I'm lost.
 Regards, Christophe








Re: Error 504

2011-09-11 Thread Baptiste
5 or 10s sounds good :)
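
For reference, a cleaned-up defaults section along the lines of the advice
quoted below might look like this (the values are examples only):

    defaults
        timeout connect          5s
        timeout client           50s
        timeout server           60s
        timeout http-keep-alive  10s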

cheers

On Sun, Sep 11, 2011 at 8:11 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi Cyril,

 Thanks for your help, I'll adapt my config file.

 About timeout http-keep-alive, which value do you recommend?

 Christophe


 On 11/09/11 13:34, Cyril Bonté cyril.bo...@free.fr wrote:

Hi Christophe,

On Thursday 8 September 2011 05:28:41, Christophe Rahier wrote:
 defaults
 log    global
 option dontlognull
 retries    2
 clitimeout  5
 srvtimeout  5
 contimeout  5
 timeout server 60s

Be careful because your configuration provides both the deprecated
srvtimeout keyword and timeout server; the last one declared will apply.
You should clean up your configuration by using only non-deprecated
keywords:
timeout client, timeout server and timeout connect.

It means that your server timeout is not 5 but 60s.

Also, because your proxies are using option http-server-close, you
should
define a timeout http-keep-alive to reduce the ttl of idle keep-alive
connections.

--
Cyril Bonté







Re: Stress test

2011-09-12 Thread Baptiste
Hi Dwyer,

well, the question is not how to bench HAProxy, it's more how to
bench the application through HAProxy.
If you just want to bench pure haproxy performance, then an apache
serving a static file and ab as a client might be enough.

cheers


On Tue, Sep 13, 2011 at 3:16 AM, Dwyer, Simon sdw...@federalit.net wrote:
 I am trying to do some load testing on a new HA Proxy pair.  Is there a 
 common way to do this?  i have done some searching but have not found 
 anything solid.

 Cheers,

 Simon




Re: Problems with load balancing on cloud servers

2011-09-12 Thread Baptiste
Hi Liong,

You can also play with vm.swappiness to prevent your Ubuntu server from using its swap.

cheers



Re: Establishing connection lasts long

2011-09-13 Thread Baptiste
heh,
This has nothing to do with haproxy; it's more about how your hypervisor
manages VMs that aren't doing anything :)

cheers

On Tue, Sep 13, 2011 at 1:35 PM, Tim Korves t...@whtec.net wrote:
 Hi,

 It's very strange. When I check the server load, it is almost zero.

 same here... Anyone got information about such an issue?

 Regards, Tim

 Hi again,

 I noticed the same thing, the problem happens at the first call of
 the
 page,

 Ok, seems to be a bug? Or what do you think?

 After that, the result is immediate.

 I can confirm that.

 Any idea?

 Thanks, Tim


 Hi there,

 we're using haproxy 1.4.15 on a Ubuntu 10.04 box. This box is
 virtualised, HW-specs: 1 CPU-core (Xeon 2.00GHz), 512MB RAM, 2x 1GBit
 virtual LAN (these are also two different physical NICs in the HV).

 Now we've got the problem, that the initial connect through haproxy
 seems to be delayed. The HTTP-Servers behind haproxy are physical
 one's
 and they seem to deliver the page quite a lot faster directly then
 using
 haproxy in front.

 Any ideas or recommendations on checking haproxy to be not the source
 of the delay?

 Regards, Tim

 --
 Tim Korves
 Administrator

 whTec
 Teutoburger Straße 309
 D-46119 Oberhausen
 Fon: +49 (40) 70 97 50 35 -0
 Fax: +49 (40) 70 97 50 35 -99
 SIP: t.kor...@fon.whtec.net

 ---

 Service:     serv...@whtec.net
 Buchhaltung: buchhalt...@whtec.net
 DNS:         d...@whtec.net

 ACHTUNG:
 Anfragen von BOS bitte über b...@whtec.net
 Anfragen von NGOs (e.V., gGmbH etc.) bitte über n...@whtec.net



 --
 Tim Korves
 Inhaber / Administrator

 whTec
 Teutoburger Straße 309
 D-46119 Oberhausen
 Fon: +49 (40) 70 97 50 35 -0
 Fax: +49 (40) 70 97 50 35 -99
 SIP: t.kor...@fon.whtec.net

 ---

 Service:     serv...@whtec.net
 Buchhaltung: buchhalt...@whtec.net
 DNS:         d...@whtec.net

 ACHTUNG:
 Anfragen von BOS bitte über b...@whtec.net
 Anfragen von NGOs (e.V., gGmbH etc.) bitte über n...@whtec.net



 --
 Tim Korves
 Inhaber / Administrator

 whTec
 Teutoburger Straße 309
 D-46119 Oberhausen
 Fon: +49 (40) 70 97 50 35 -0
 Fax: +49 (40) 70 97 50 35 -99
 SIP: t.kor...@fon.whtec.net

 ---

 Service:     serv...@whtec.net
 Buchhaltung: buchhalt...@whtec.net
 DNS:         d...@whtec.net

 ACHTUNG:
 Anfragen von BOS bitte über b...@whtec.net
 Anfragen von NGOs (e.V., gGmbH etc.) bitte über n...@whtec.net





Re: how to serve inline flash policy

2011-09-15 Thread Baptiste
Hi,

Can you try with the configuration below:

frontend ft_application
bind :80
mode tcp
use_backend bk_xml if !HTTP
default_backend bk_http

backend bk_xml
mode tcp
balance roundrobin
stick match src table bk_http
server s1 192.168.1.1:80 track bk_http/s1
server s2 192.168.1.2:80 track bk_http/s2

backend bk_http
mode http
balance roundrobin
stick store-request src
stick-table type ip size 200k expire 30m
server s1 192.168.1.1:80 check
server s2 192.168.1.2:80 check


Basically, if the traffic is not HTTP, the frontend will use the xml
backend, otherwise, it would use the http one.
The session is maintained between both backends through a stick-table.

I have not tried this conf, but I would be keen to know if it helped you :)

cheers


On Thu, Sep 15, 2011 at 5:24 PM, Vladimir Dronnikov dronni...@gmail.com wrote:
 Hi!

 Wonder is that possible to serve inline flash policy? That is, to
 distinguish connections which receives 'policy-file-request/\0' (no
 newline is expected) and immediately respond with some xml. The
 problem is that such requests are _not HTTP_ ones, but they still must
 be served from the same host and port where the page was loaded from
 -- i.e. by a HTTP server.

 TIA,
 --Vladimir





Re: how to serve inline flash policy

2011-09-16 Thread Baptiste
Since your request is not RFC compliant, HAProxy will drop it.
You may give the option accept-invalid-http-request a try on
the frontend definition.
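A minimal sketch of what I mean (frontend name and port are only
illustrative):

  frontend ft_flash
    bind :80
    mode http
    # relax the HTTP parser for non RFC compliant requests
    option accept-invalid-http-request

Note that this only relaxes HAProxy's HTTP parser; a request which is not
HTTP at all may still be rejected.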

cheers


On Fri, Sep 16, 2011 at 10:42 AM, Vladimir Dronnikov
dronni...@gmail.com wrote:
 If you send cookies with your XML requests, then this is doable too :)
 But with the 1.5-dev branch only which is able to learn the cookie
 string and to store it into a stick table.

 I see.


 That way you learn the cookie from the HTTP backend and you keep
 stickiness on it in the XML backend.

 Well, I don't need stickiness in XML backend. The whole play with tcp
 mode is too let _raw_ TCP requests pass to the same backend as HTTP
 requests go...

 Pity, I lose the forwardfor option in the frontend -- this is really needed.

 Is there ever a way to analyze raw content of request buffer in http
 mode? What I need is to route requests starting with
 policy-file-request/NULL to the default backend.




Re: how to serve inline flash policy

2011-09-16 Thread Baptiste
might be patchable :)
I'll look at it and let you know.

On Fri, Sep 16, 2011 at 10:51 AM, Vladimir Dronnikov
dronni...@gmail.com wrote:
 since your request is not RFC compliant, HAProxy will drop it.
 You may give a try with the option accept-invalid-http-request on
 the frontend definition.

 Gave, with no success so far... :)

 Consider req_ssl_ver pattern -- it snoops into request buffer and
 finds the match. I need the same looking for policy.../ pattern.




Re: how to serve inline flash policy

2011-09-16 Thread Baptiste
Have you switched the proxy to mode http?
This macro might be available only in http mode.



On Fri, Sep 16, 2011 at 10:51 AM, Vladimir Dronnikov
dronni...@gmail.com wrote:
 since your request is not RFC compliant, HAProxy will drop it.
 You may give a try with the option accept-invalid-http-request on
 the frontend definition.

 Gave, with no success so far... :)

 Consider req_ssl_ver pattern -- it snoops into request buffer and
 finds the match. I need the same looking for policy.../ pattern.




Re: Proxy Protocol in 1.4.x ?

2011-09-19 Thread Baptiste
Hi there,

Finally, we've finished our bench on SSL tools available for HAProxy:
stud and stunnel.
Please read the benchmark here:

http://blog.exceliance.fr/2011/09/16/benchmarking_ssl_performance/

cheers



Re: Caching

2011-09-19 Thread Baptiste
In any case, HAProxy can't be blamed for this problem.

Do you have a proxy on your LAN?
or Apache mod_cache enabled?

cheers



On Mon, Sep 19, 2011 at 2:30 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,

 I thought the problem was in my browser but when I empty the cache, I've
 the same problem.

 To be sure, I tried with an other browser and the problem is the same.

 When I call my page locally from the server, the result is OK.

 Christophe


 Le 19/09/11 13:45, « Baptiste » bed...@gmail.com a écrit :

hi Christophe,

HAProxy is *only* a reverse proxy.
No caching functions in it.

Have you tried to browse your backend servers directly?
Can it be related to your browser's cache?

cheers

On Mon, Sep 19, 2011 at 1:39 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,
 Is there a caching system at HAProxy?

 In fact, we find that when we put new files online (CSS, for example),
 they are not taken into account directly; it usually takes about ten minutes.

 Thank you in advance for your help.

 Christophe








Re: Transparent Proxy

2011-09-24 Thread Baptiste
On Fri, Sep 23, 2011 at 11:53 PM, Jason J. W. Williams
jasonjwwilli...@gmail.com wrote:
 Hello,

 My understanding has been that HAProxy can be set up in conjunction
 with TPROXY support in the Linux kernel so that the backend servers
 see the original client's source IP address on incoming packets?

 So is the option transparent
 (http://code.google.com/p/haproxy-docs/wiki/transparent) not related
 to that type of transparent proxying or am I mistaken and there's no
 way to make HAProxy preserve the original client IP on the way to the
 backend servers?

 Thank you in advance.

 -J



Hi,

You have to patch your kernel with TProxy and then use the source keyword:
http://code.google.com/p/haproxy-docs/wiki/source

Note that the default gateway of your servers must be the HAProxy box
in that kind of architecture.
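A rough sketch of the server-facing side (addresses are placeholders, and
this assumes HAProxy was built with TProxy support, e.g. make
TARGET=linux26 USE_LINUX_TPROXY=1):

  backend bk_www
    mode http
    # bind outgoing connections to the client's IP (requires TProxy)
    source 0.0.0.0 usesrc clientip
    server www1 192.168.0.10:80 check

With usesrc clientip, the connections to the servers carry the client's
source IP, which is why the return traffic must come back through the
HAProxy box.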

cheers



Re: Log host info with uri

2011-09-27 Thread Baptiste
You might want to use capture request header host len 64
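Something like this in your frontend (a minimal sketch):

  # log the Host header of each request
  capture request header Host len 64

Together with option httplog, the captured value then appears between
curly braces in each log line, right before the request.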

cheers

On Tue, Sep 27, 2011 at 11:46 PM, John Lauro
john.la...@covenanteyes.com wrote:
 Is there an easy way to have haproxy log the host with the uri instead of
 just the relative uri?  I have some 503 errors, and they are going to
 virtual hosts on the backend and I am having some trouble tracking them
 down…  and the uri isn’t specific enough as it is common among multiple
 hosts…  I’m sure this can be done, just having trouble figuring it out at
 the moment…









Re: Proxy Protocol in 1.4.x ?

2011-09-28 Thread Baptiste
On the same subject, an excellent article from Vincent:
http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html

Good one mate :)

cheers


On Mon, Sep 19, 2011 at 12:00 PM, Baptiste bed...@gmail.com wrote:
 Hi there,

 Finally, we've finished our bench on SSL tools available for HAProxy:
 stud and stunnel.
 Please read the benchmark here:

 http://blog.exceliance.fr/2011/09/16/benchmarking_ssl_performance/

 cheers




Re: Possibility to define internal redirect based on response header from a backend

2011-09-29 Thread Baptiste
Hi,

This is currently not doable with HAProxy.

cheers

On Thu, Sep 29, 2011 at 4:33 PM, Galfy Pundee
galfyo.pun...@googlemail.com wrote:
 Hi all,
  I have two back ends - one serving fast python generated content and
 one serving fast static content. I would like from the python to send
 a response containing a header, for example
             X-Location-Intenal:
 http://only-internally-visible-url/folder/file.ext
  and then Haproxy to make the  redirect internally without the client
 noticing this. Is this possible with acl rules? Is it possible to
 process response header coming from the back end? Any suggestions how
 this can be done are welcomed.

 Regards,
  Gal





Re: Re: Re: [haproxy] about least Connection problem

2011-09-29 Thread Baptiste
Hi,

Sorry, but I don't understand what you mean :)
Can you explain again please?


cheers

On Thu, Sep 29, 2011 at 8:16 AM, 강동주 jinjud...@gmail.com wrote:
 hello.

 I already asked about the least connection problem..

 My answer is: it is working well (least connection).


 But I have another problem when haproxy is restarted dynamically..

 I think some configuration (like: A connection: 2000 to 0) is reset
 when haproxy is restarted dynamically.

  Do you have any idea?

 --this is my script- (I call this script when updating servers)

 time /haproxy/lib/haproxy -f /haproxy/conf/test.cfg -p /haproxy/pid/test.pid
 -sf $(cat /haproxy/pid/haproxy.pid)

 





 Thank you, have a nice day





Re: [ANNOUNCE] haproxy 1.5-dev7

2011-10-03 Thread Baptiste
On Sun, Sep 11, 2011 at 12:23 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi all,

 Five months have elapsed since 1.5-dev6. A massive amount of changes was
 merged since then. Most of them were cleanups and optimizations. A number
 of changes were dedicated to making listeners more autonomous. The immediate
 effect is a more robust handling of resource saturation, and the second
 effect is the removal of the 10-years old maintain_proxies() function which
 was harming performance and hard to get over.

 Halog was improved too (faster with more filters). A significant number
 of external contributions were merged, among them the stats socket updates
 to clear session-table keys by values. There are too many changes to list,
 but nothing too dangerous, so I'd say it's the 1.5-dev version I trust the
 most today.

 I'm planning on putting all the focus on server-side keep-alive again. Some
 of the remaining issues have been overcome. Surely there are still a number,
 but we can't know if we don't try :-)

 Do not hesitate to give 1.5-dev7 a try. I'm currently updating all 1.5 I
 have to it.

   site index      : http://haproxy.1wt.eu/
   sources         : http://haproxy.1wt.eu/download/1.5/src/devel/
   changelog       : http://haproxy.1wt.eu/download/1.5/src/CHANGELOG

 Cheers,
 Willy





Hi all,

I wrote up a small post entry on our blog which summarizes the main
new features:

http://blog.exceliance.fr/2011/10/03/whats-new-in-haproxy-1-5-dev7/

cheers



Re: 500s with 1.4.18 and 1.5d7

2011-10-03 Thread Baptiste
On Mon, Oct 3, 2011 at 11:02 PM, Hank A. Paulson
h...@spamproof.nospammail.net wrote:
 On 10/3/11 12:19 PM, Brane F. Gračnar wrote:

 On Monday 03 of October 2011 20:09:17 Hank A. Paulson wrote:

 I am not sure if these counts are exceeding the never threshold

     500  when haproxy encounters an unrecoverable internal error, such as
 a
          memory allocation failure, which should never happen

 I am not sure what I can do to troubleshoot this since it is in prod :(
 Is there a way to set it to core dump and die when it has a 500?

 Are you sure, that these are not upstream server 500 errors?

 Best regards, Brane

 Good point, I don't know how to differentiate from the haproxy logs which
 500s originate from haproxy and which are passed through from the backend
 servers. I wish there was an easy way to tell since haproxy 500s are much
 more worrisome. Maybe I am missing something...



On your log line, look at the second character of the termination
state flags.
If there is a letter there (instead of a dash), it means there has been
an issue between HAProxy and your server.

cheers



Re: How about server side keep-alive in v1.5?

2011-10-09 Thread Baptiste
On Sun, Oct 9, 2011 at 4:50 AM, wsq003 wsq...@sina.com wrote:
 Hi Willy,

 In the mainpage I saw below: 1.5 will bring keep-alive to the server, but
 it will probably make sense only with static servers.

 While in the change-log or source code I did not find this feature
 (server side keep-alive).

 Am I missing something, or server side keep-alive still on going?

 Thanks,


Sorry for answering on behalf of Willy,
(Willy, just correct me if I'm wrong).

HTTP Keepalive has not been developed yet on the server side.
Actually, this is one of the reasons why the 1.5 release of HAProxy is
still under development.

Concerning the delay, only Willy will be able to answer.

cheers



Re: Haproxy and Ajax / HXR Post

2011-10-09 Thread Baptiste
2011/10/9 Andreas Bergman andr...@sea-ab.se:
 Hi All,
 Earlier today we tried a emergency, not pre-tested LB solution for a
 customer, needless to say this didn't go very well.
 The LB part worked well, and most of the functions worked well, but among
 those who didn't work at all were Ajax HXR requests used to upload photos
 and registering to the site.
 We did a rollback and the site now works somewhat well.
 But, we need to figure out why the HXRs and most POST requests didn't work.
 I've seen in some threads that the HXR requests lacks a length header and
 therefore Nginx chokes and sends a 405/403.
 The setup used is Haproxy, Varnish and Nginx, but when we removed varnish it
 didn't work either, but using only varnish and Nginx works, but Haproxy and
 Nginx doesn't, that leads to the conclusion that it is something with HXR
 and Haproxy, probably a config-issue or something related to the HXR header.
 Any ideas? The config we used was pretty straightforward, we used keep alive
 thats it. (The LB machine is reinstalled so i haven' got the config anymore)
 Br
 Andreas Bergman

Hi,

Maybe you enabled option http-server-close or option httpclose.

In order to troubleshoot, we need at least the config and the logs :)
Can you re-install an HAProxy and run some test again and share with
us your config and the generated logs?

Note: you can do that out of production, just make HAProxy listen on
an exotic port and point your own browser on it.

Regards



Re: Haproxy stats page incomplete (1.4.17)

2011-10-10 Thread Baptiste
Hi,

Are both HAProxy instances running the same version?

cheers


Re: Backend server in maintenance mode

2011-10-14 Thread Baptiste
On Fri, Oct 14, 2011 at 6:32 PM, Mathieu Simon
mathieu.simo...@gmail.com wrote:
 Hello,

 here my question.

 I'm trying to stop gracefully a backend server using HATop.
 The disable command give me a 504 status code for pending request on this
 backend server.

 Does it exist a workaround in config file to finish the pending request ?
 (like a force-persist?)

 My current cluster is in active/passive mode with one active and the ohter
 in backup
 So no problem for disable the backup with no traffic.

 I could easily do a workaround by hot reconfigure haproxy and set the active
 to backup.
 But I prefer to touch only the config file when I need to add a new server
 for example.

 I know already the way to do what I want with iptables but I'm looking for
 solution simpler like HATop could do.

 I hope my question is clear enough :)

 Thanks in advance!

 Mathieu Simon.





Hi,
You should try to turn the weight of the server to 0.
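If your stats socket is configured with level admin, this can even be done
at runtime, without touching the configuration (backend and server names
below are only examples):

  # drain new traffic away from the server
  echo "set weight bk_app/srv1 0" | socat stdio /var/run/haproxy.stat

New sessions will no longer be scheduled on that server, while the ones
already established can finish.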

Cheers



Re: Problem with rewrites + SSL

2011-10-18 Thread Baptiste
On Tue, Oct 18, 2011 at 8:31 PM, Saul s...@extremecloudsolutions.com wrote:
 Hello List,

 I am having an issue trying to translate some urls with my haproxy
 setup and Im hoping someone can shed some light.

 Information:

  4 apache servers need a reliable LB such as HA. These apache servers
 are listening on 80,443 however all traffic gets rewritten (with
 apache re-writes) to https if the request comes on port 80, currently
 there is just a firewall with dnat.

 The apaches are not serving content directly from disk but rather
 proxy passing to backend servers based on the request, this
 information is only relevant because of the different hostnames that a
 client will be hitting when connecting to the site.

 The problem:

 I want to be able to re-write the url at the HA level but I am having
 some issues trying to do this accurately. I have a front end listening
 on 80 and a front end listening on 443 https, the latter is set to TCP
 mode so it will transparently forward requests to the apaches on 443.
 So what i've done is try to force a redirect to https if the requests
 comes via 80 to a url, the problem is that because there are many
 hostnames and calls associated with every requests, I can't simply
 send all traffic to one URL, I need to be able to just replace the
 protocol and keep the request intact.

 Config:

 ##--
 ##  HTTP FRONTEND
 ## 
 frontend www 10.1.1.1:80
 mode http

 acl no_ssl dst_port 80
 redirect prefix https://sub1.mydomain.com if no_ssl

 backend www
 mode http
 balance roundrobin
 stats enable
 option httpclose
 option forwardfor
 option httpchk HEAD /ha.txt HTTP/1.0

 server Apache1 10.1.1.13:80 weight 100 check
 server Apache2 10.1.1.14:80 weight 100 check
 server Apache3 10.1.1.15:80 weight 100 check
 server Apache4 10.1.1.16:80 weight 100 check

 ##--
 ##  HTTPS FRONTEND
 ## 

 frontend https-in
 mode tcp
 bind :443
 default_backend bk-https

 backend bk-https
 mode tcp
 balance source
 option ssl-hello-chk

 server Apache_ssl1 10.1.1.13:443 weight 100 check
 server Apache_ssl2 10.1.1.14:443 weight 100 check
 server Apache_ssl3 10.1.1.15:443 weight 100 check
 server Apache_ssl4 10.1.1.16:443 weight 100 check


 Notes: most of the requests users will make will hit
 https://sub1.mydomain.com but the problem is that once they get there
 there are assets that load on sub2.mydomain.com sub3.mydomain.com and
 because traffic is going through HAproxy and we have that rule to
 re-write everything to https://sub1.mydomain.com half of the stuff
 won't load.

 Any help is greatly appreciated it and Thank you in advance. Willy You Rock!



Hey,

You should give something like this a try:
 acl sub1 hdr_sub(Host) sub1.mydomain.com
 acl sub2 hdr_sub(Host) sub2.mydomain.com
 acl sub3 hdr_sub(Host) sub3.mydomain.com
 acl sub4 hdr_sub(Host) sub4.mydomain.com
 redirect prefix https://sub1.mydomain.com if sub1
 redirect prefix https://sub2.mydomain.com if sub2
 redirect prefix https://sub3.mydomain.com if sub3
 redirect prefix https://sub4.mydomain.com if sub4

No need to match against dst_port 80; if you're in the http frontend it
means you already have destination port 80.

I hope this helps.

Cheers



Re: Haproxy consulting

2011-10-18 Thread Baptiste
On Tue, Oct 18, 2011 at 6:39 PM, Cory Forsyth cory.fors...@gmail.com wrote:
 Hi, my company would like to hire someone for a few hours' worth of
 consulting time to help us gut-check our haproxy configuration and set
 up.

 In particular, this is what we are trying to do:

 We are trying to limit connections to our server by IP address, but
 over a given time window for each IP.  If it has connected in the last
 5 minutes it is allowed to continue connecting, regardless of whether
 the IP limit has been reached.
 If it is a new IP, it is only allowed if the number of other IPs is
 below the limit.  So if an IP gets in, as long as it continues to
 connect at least once every 5 minutes it will always be allowed to
 continue connecting.

 I have set something up to do this using a secondary process to check
 the haproxy stick-table (via socat) for the number of entries (and the
 entries are tracked by IP and expired after 5minutes), and if the
 number of entries is greater than the limit this shuts down a Sinatra
 ruby app that is configured as a backend in haproxy's config...and the
 configured frontend has an ACL that checks whether that backend is
 down when deciding if it can allow in a new IP.

 We'd like some expert eyes to look over this setup and suggest
 alternatives or improvements, and also suggestions for how to load
 test this setup to make sure it will work well at scale.

 thanks,
 Cory




Hi

Why don't you play with the stick-table size (set it to the limit of
IPs you want to allow on your frontend), the expire time and finally the
nopurge option:

  stick-table type ip size 1000 expire 5m nopurge
  stick on src

With such a definition, the table will store 1K source IPs, each for 5 minutes.
Any IP older than 5 minutes would be dropped by HAProxy, releasing its slot
for a new client.

Or maybe I'm missing something.

You can share with me (in private) your current HAProxy configuration
and I'll have a look if you wish.

cheers



Re: about nbproc in conf

2011-10-19 Thread Baptiste
2011/10/19 wsq003 wsq...@sina.com:
 Hi

 In manual there is following:

 nbproc number
   Creates number processes when going daemon. This requires the daemon
   mode. By default, only one process is created, which is the recommended mode
   of operation. For systems limited to small sets of file descriptors per
   process, it may be needed to fork multiple daemons. USING MULTIPLE PROCESSES
   IS HARDER TO DEBUG AND IS REALLY DISCOURAGED. See also daemon.


 My question is how DISCOURAGED is it? Here we need to handle a lot of small
 http requests and small responses, and haproxy needs to handle more than 20k
 requests per second. A single process working on a single CPU seems not enough.
 (We may add ACL config in the future, and haproxy would be even busier.)

 So, I want to set nbproc to 4 or 8.

 For now I know some shortcomings of multi-process mode haproxy:
 1, it would cause many context switches (after testing, this is acceptable)
 2, it would become difficult to get a status report for every process (we
 found some ways to make it acceptable, though it's still painful)

 Are there any other shortcomings that I am not aware of?

 Thanks,


Hi,

The first thing you need to remember before considering going
multiprocess is that HAProxy is event-driven, which means that the
faster the CPU, the faster HAProxy will be, and this is linear: on a
3GHz CPU, HAProxy will perform 50% faster than on a 2GHz one...

Second point: consider the impact of ACLs on HAProxy as null.
Well, for all ACLs but regexp-based ones, where the impact remains really low anyway.

HAProxy is not able to share its memory between processes, so anything
relying on shared memory won't work across processes (stats, server maxconn,
stick-tables, etc.).
So if you can live without that, it's fine.

On a single process we can go up to 50K at Exceliance. To be fair,
that's marketing, since Willy benched the appliance well over this
limit :)

cheers



Re: Keep alive with haproxy stud

2011-10-26 Thread Baptiste
Hi Erik,

You just need to enable option httplog in your HAProxy frontend;
it is verbose and provides useful information for troubleshooting.
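A minimal sketch of what that looks like (the syslog target is just an
example):

  global
    log 127.0.0.1 local0

  frontend ft_www
    log global
    option httplog

and then point your syslog daemon at the local0 facility to collect the
lines.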

cheers


On Tue, Oct 25, 2011 at 10:52 PM, Erik Torlen
erik.tor...@apicasystem.com wrote:
 Hi,

 I will continue testing in a few days and see how the result will turn out to 
 be. We have made a lot of changes
 so we'll see how it goes.

 All of the results include the details of the response time from the loadtest.
 Any recommendations on the logging we can use to get more information on what 
 is happening on the server side?
 We are currently just using syslog.

 /E

 -Original Message-
 From: Willy Tarreau [mailto:w...@1wt.eu]
 Sent: den 14 oktober 2011 23:16
 To: Erik Torlen
 Cc: haproxy@formilux.org
 Subject: Re: Keep alive with haproxy  stud

 Hi Erik,

 On Sat, Oct 08, 2011 at 06:40:49PM +, Erik Torlen wrote:
 Hi,

 I see different results on the keep alive using http vs https.

 Loadtest against https (through stud) gives me around 69% keep alive 
 effiency (using 3-20 objects per connection in different tests). When testing
 through http directly against haproxy I get 99% keep alive with the same 
 loadtest scripts.

 I have tried changing timeouts and different modes (http-pretend-keepalive 
 etc) but still no improvement.

 Anyone that knows how to improve this and why it's happening?

 If you're trying directly then via stud and see different things, then
 none of the haproxy options (pretend-keepalive, ...) will have any effect.
 It is very possible that timeouts were too low but that would mean you
 were using insanely low timeouts (eg: a few ms). It is also possible
 that the tool you used for the test can't run as many https concurrent
 connections as it runs http connections, and that it closes some of them
 by itself. And it is also possible that there are a few issues with stud.
 While it performs well, it's still young and it is possible that some
 pathological corner cases remain. Haproxy experienced this in its early
 age too. You need to enable logging everywhere and get more precise stats
 from your load testing tool (eg: all response times, not just an average).

 Regards,
 Willy






Re: Timeout values

2011-10-26 Thread Baptiste
Hi Erik,

What's your purpose here?
Depending on your load test and your haproxy configuration, the queue
timeout might generate 503 responses.
The other ones are related to the behavior you want for your web platform.
Basically, all the values you added seem too high.


Cheers


On Tue, Oct 25, 2011 at 11:02 PM, Erik Torlen
erik.tor...@apicasystem.com wrote:
 Hi,

 I would like to get feedback on these timeout values.

    timeout http-request    40s
    timeout queue           1m
    timeout connect         120s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 40s
    timeout check           40s

 I have done alot of different loadtests with different values using stud in 
 front of haproxy and backend on separate instances
 in the cloud (meaning there is higher latency then normal against backend).

 Can't see any big difference in the loadtest result when having these timeout 
 fairly high. I guess that really low values will affect
 the loadtest result more.

 /E





Re: Haproxy with stunnel and a session cookie service.

2011-10-26 Thread Baptiste
Hi,

How do you achieve session persistence in your HAProxy configuration?
What load-balancing algorithm do you use?
Can you configure HAProxy to log your session cookie and then show us some
log lines?
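For example, something like this in the frontend (the cookie name is just
a placeholder, use the one generated by your session service):

  # log the session cookie seen in the request and in the response
  capture cookie JSESSIONID len 32

together with option httplog.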

cheers



On Wed, Oct 26, 2011 at 2:57 PM, Sean Patronis spatro...@add123.com wrote:


 We are in the process of converting most of our HAProxy usage to be http
 balanced (instead of TCP).

 In our lab we are using stunnel to decrypt our https traffic to http
 which then gets piped to haproxy.  We also have a load balanced session
 service that stunnel/HAProxy also serves which uses uses cookies for our
 sessions (the session service generates/maintains the cookies).

 Whenever we are using stunnel/HAProxy to decrypt and balance our session
 service, something bad seems to happen to the session cookie and our
 session service returns an error.  If we use just straight http
 balancing without stunnel in the loop, our session service works fine.
 Also, if we just use HAProxy and tcp balancing, our session service
 works fine (the current way our production service works).

 What gives? Is there some special config in HAProxy or stunnel that I am
 missing?


 Thanks.







Re: client side keep-alive (http-server-close vs httpclose)

2011-10-26 Thread Baptiste
Hi,

In order to process layer 7 manipulation (what you want to
achieve) for *each* request, you must enable http mode on your
frontend/backend and enable option http-server-close.
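For instance (a sketch, header and backend names are only illustrative):

  frontend ft_www
    mode http
    option http-server-close
    # example of a per-request header manipulation
    reqidel ^X-Private-Header:.*
    default_backend bk_www

With http-server-close, each request of a keep-alive client connection is
processed individually, so reqadd/reqidel and use_backend rules apply to
every request and not only to the first one of the connection.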

cheers

On Thu, Oct 27, 2011 at 12:21 AM, Vivek Malik vivek.ma...@gmail.com wrote:
 The documentation also says

 In HTTP mode, it is possible to rewrite, add or delete some of the request
 and
 response headers based on regular expressions. It is also possible to block
 a
 request or a response if a particular header matches a regular expression,
 which is enough to stop most elementary protocol attacks, and to protect
 against information leak from the internal network. But there is a
 limitation
 to this : since HAProxy's HTTP engine does not support keep-alive, only
 headers
 passed during the first request of a TCP session will be seen. All
 subsequent
 headers will be considered data only and not analyzed. Furthermore, HAProxy
 never touches data contents, it stops analysis at the end of headers.

 The above confuses me about keep-alive. Please suggest if this applies in
 http mode.

 On Wed, Oct 26, 2011 at 6:15 PM, Vincent Bernat ber...@luffy.cx wrote:

 OoO En cette  nuit nuageuse du jeudi 27 octobre  2011, vers 00:02, Vivek
 Malik vivek.ma...@gmail.com disait :

  We have been using haproxy in production for around 6 months while
  using httpclose. We use functions like reqidel, reqadd to manipulate
  request headers and use_backend to route a request to a specific
  backend.

  We run websites which often have ajax calls and load javascripts and
  css files from the server. Thinking about keep alive, I think it
  would be desired to keep client side keep alive so that they can
  reuse connections to load images, javascript, css and make ajax calls
  over it.

  From a haproxy request processing and manipulating perspective, Is
  there a difference between http-server-close and httpclose? Would
  reqadd/reqidel/use_backend work on subsequent requests during client
  side keep alive too?

 Yes. From the documentation:

 ,
 | By default HAProxy operates in a tunnel-like mode with regards to
 persistent
 | connections: for each connection it processes the first request and
 forwards
 | everything else (including additional requests) to selected server. Once
 | established, the connection is persisted both on the client and server
 | sides. Use option http-server-close to preserve client persistent
 connections
 | while handling every incoming request individually, dispatching them one
 after
 | another to servers, in HTTP close mode. Use option httpclose to switch
 both
 | sides to HTTP close mode. option forceclose and option
 | http-pretend-keepalive help working around servers misbehaving in HTTP
 close
 | mode.
 `
 --
 Vincent Bernat ☯ http://vincent.bernat.im

 Make sure input cannot violate the limits of the program.
            - The Elements of Programming Style (Kernighan  Plauger)





Re: HAProxy and Downloading Large Files

2011-10-28 Thread Baptiste
hi,

What do the HAProxy logs report when the error occurs?
What version of HAProxy are you running?

Regards


On Fri, Oct 28, 2011 at 11:02 PM, Justin Rice jrice0...@gmail.com wrote:
 To all,
 I am having issues concerning downloading large files from one of my web
 apps. TCP mode works just fine. The requirements, however, call for HTTP
 mode - which does not work. Has anyone ever had this problem before? Is this
 a timeout issue? Thanks for your time and suggestions in advance.
 -Justin



Re: Using HAProxy for ldap

2011-10-30 Thread Baptiste
Maybe you can capture the health check sequence and the response with
tcpdump and share it here.
It might not be so complicated to make it compatible.
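Something like this on the HAProxy box should do it (interface and server
address are placeholders):

  # capture a couple of health checks against the TDS server
  tcpdump -i eth0 -s 0 -w tds-check.pcap host 192.168.0.20 and port 389

The resulting pcap will show exactly what the TDS server answers to the
LDAPv3 bind request sent by the check.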

cheers

On Sun, Oct 30, 2011 at 5:51 PM, Danie Weideman danie...@gmail.com wrote:
 Hi All

 I am trying to check for an OpenLDAP and IBM Tivoli Directory Server . There
 is a valid response from the OpenLDAP server however the IBM TDS Server is
 up but I see the following error
 [WARNING] 302/184557 (14773) : Server HAldap/ldapTIM1 is DOWN, reason:
 Layer7 invalid response, info: Not LDAPv3 protocol, check duration: 4ms. 0
 active and 1 backup servers left. Running on backup. 0 sessions active, 0
 requeued, 0 remaining in queue.

 I think it could be related to TDS server not responding in the right format
 (resultCode (http://tools.ietf.org/html/rfc4511#section-4.1.9).

 Any ideas to get the TDS server integrated with HAProxy?

 Kind Regards
 Danie


 On Sun, Oct 30, 2011 at 12:36 PM, Danie Weideman danie...@gmail.com wrote:

 Hi All

 I had to enable the anonymous bind for the check to work as expected

 Regards
 Danie

 On Sun, Oct 30, 2011 at 9:16 AM, Danie Weideman danie...@gmail.com
 wrote:

 Hi all

 Thank you Brane for the assistance.

 I am testing the sollution and is using the check ldap option.

 When I disconnect the one ldap server, haproxy does not see this (using
 the web gui) server beeing down. If I mark this broken server as down, in
 the web admin gui, the results are as expected.

 My question then: how do I check from haproxy for ldap failures?

 Kind regards
 Danie

 On 24 Oct 2011 14:24, Danie Weideman danie...@gmail.com wrote:
 
  Hi
 
  Is it possible to loadbalance between two active master ldap servers?
 
  If so I would like for one to be always persistent.
 
  Thanx in advance
  Kind regards
  Danie Weideman





Re: option httpchk

2011-10-31 Thread Baptiste
Hi,

no :)

cheers

On Mon, Oct 31, 2011 at 12:15 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,

 In my config file, I check my servers with option httpchk HEAD
 /checkCF.cfm HTTP/1.0

 When the response is not 2xx or 3xx, would it possible to test an other
 url?

 Thanks for your help.

 Regards,

 Christophe





Re: option httpchk

2011-10-31 Thread Baptiste
Well, if there is no response, HAProxy can log it.
You can then detect it and take the decision you want :)
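For instance, when a check fails HAProxy logs a line like Server
backend/server is DOWN, reason: ..., so a simple log watcher can trigger
whatever action you need (a sketch; the log path depends on your syslog
setup and notify.sh is a hypothetical script of yours):

  # react to servers going down
  tail -F /var/log/haproxy.log | grep --line-buffered "is DOWN" | \
    while read line; do /usr/local/bin/notify.sh "$line"; done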

Don't ask HAProxy to reload your webservices, it's a bad idea.


On Mon, Oct 31, 2011 at 1:48 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,

 What a pity, this could be very useful! Indeed, as Haproxy detects that
 there is no response, it may perform an another action :-)


 Christophe


 Le 31/10/11 13:27, « Baptiste » bed...@gmail.com a écrit :

Hi,

no :)

cheers

On Mon, Oct 31, 2011 at 12:15 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,

 In my config file, I check my servers with option httpchk HEAD
 /checkCF.cfm HTTP/1.0

 When the response is not 2xx or 3xx, would it possible to test an other
 url?

 Thanks for your help.

 Regards,

 Christophe









Re: tracking maxconn between several haproxy server definitions - which correspond to same real web server

2011-10-31 Thread Baptiste
Hi,

If cookie insert is not an option, then in 1.5-dev7 you can perform
cookie persistence by learning the application cookie and storing it in a
stick table.
It's like appsession, except it will survive a reload and you can
share it between HAProxy boxes. :)

cheers

On Mon, Oct 31, 2011 at 7:40 PM, Cyril Bonté cyril.bo...@free.fr wrote:
 Hi,

 Le Lundi 31 Octobre 2011 17:21:02 Piavlo a écrit :
 I have been using *capture cookie* and *appsession *in one haproxy
 configuration for sticky sessions.
 The problem is that then haproxy restarted/reload or domain is migrated
 to failover server the sticky data is lost.
 Today I realised that I don't need to use sticky sessions at all in
 haproxy - what I do is setup custom session name
 different in each backend http server based on it's name and then using
 acl match on *hdr_beg(Cookie)* to direct to the correct backend
 and if Cookie is not set then send it to round robin backend that has
 all the http servers listed.

 Is there any reason to use a different cookie name per server instead of using
 much standard behaviour of haproxy (using cookie insert and others) ?
 Your configuration would be a lot more simple and maintainable (see at the
 end).

 Below is the relevant frontend  backend config.
 It works great - but there is one problem. It's is very important that
 the *maxconn* that is set in *server *will be set globally per real http
 server
 but since I now have same server http server listed twice.
 For example for lb-srv1 web server there are *lb-srv1/lb-srv1* and
 *testing_rr/lb-srv1* - it means the total maxconn for lb-srv1 is now x2
 larger.
 I could split split the maxconn between lb-srv1/lb-srv1 and
 testing_rr/lb-srv1 - so that in total it's 900 - but this will not use
 the possible 900 slots
 since there is no way to know that ratio of lb-srv1/lb-srv1 vs
 testing_rr/lb-srv1  and it's highly dynamic.

 Is there some trick to share the same *maxconn* for lb-srv1/lb-srv1
 testing_rr/lb-srv1
 Or something similar to*track* - that would make *server* track other
 *server*'s maxconn too? and not only health checks.

 No, or not yet.

 afaiu the haproxy backends  lb-srv1   lb-srv1 cannot be load balanced
 but only chosen based on acl or as default fallback?

 No.

 
 frontend testing 0.0.0.0:88
          mode            http
          maxconn         2

          option          httplog
          monitor-uri     /up.html

          option          http-server-close
          option          forwardfor
          reqadd          X-Forwarded-Proto:\ http

          acl             acl-lb-srv1 hdr_beg(Cookie) lb-srv1=
          acl             acl-lb-srv1-up nbsrv(lb-srv1) 1

          acl             acl-lb-srv2 hdr_beg(Cookie) lb-srv2=
          acl             acl-lb-srv2-up nbsrv(lb-srv2) 1

          use_backend     lb-srv1 if acl-lb-srv1 acl-lb-srv1-up
          use_backend     lb-srv2 if acl-lb-srv2 acl-lb-srv2-up
          default_backend testing_rr

          monitor         fail if !acl-lb-srv1-up !acl-lb-srv2-up

 backend lb-srv1
          mode            http
          option          httpchk /up.html
          option          abortonclose

          server          lb-srv1 lb-srv1.private:82 maxconn 900 check
 inter 2000 fall 3

 backend lb-srv2
          mode            http
          option          httpchk /up.html
          option          abortonclose

          server          lb-srv2 lb-srv2.private:82 maxconn 900 check
 inter 2000 fall 3

 backend testing_rr
          mode            http
          option          httpchk /up.html
          option          abortonclose

          balance         roundrobin
          server          lb-srv1 lb-srv1.private:82 maxconn 900 track
 lb-srv1/lb-srv1
          server          lb-srv2 lb-srv2.private:82 maxconn 900 track
 lb-srv2/lb-srv2
 ---

 Proposed version (untested) :
 
 listen testing 0.0.0.0:88
        mode            http
        maxconn         2

        option          httplog
        monitor-uri     /up.html

        option          abortonclose
        option          http-server-close
        option          forwardfor
        reqadd          X-Forwarded-Proto:\ http

        acl             acl-lb-down nbsrv 0
        monitor         fail if acl-lb-down

        cookie          lb insert
        server          lb-srv1 lb-srv1.private:82 maxconn 900 cookie srv1
 check inter 2000 fall 3
        server          lb-srv2 lb-srv2.private:82 maxconn 900 cookie srv2
 check inter 2000 fall 3
 ---

 --
 Cyril Bonté





Re: haproxy and multi location failover

2011-11-01 Thread Baptiste
Hi,

Do you want to failover the Frontend or the Backend?
If this is the frontend, you can do it through DNS or RHI (but you
need your own AS).
If this is the backend, you have nothing to do: adding your servers in
the conf in a separated backend, using some ACL to take failover
decision and you're done.

cheers


On Tue, Nov 1, 2011 at 2:25 PM, Senthil Naidu senthil.na...@gmail.com wrote:
 Hi,

 Is it possible to use haproxy in a active/passive failover scenario between
 multiple datacenters.

 Regards







Re: haproxy and multi location failover

2011-11-01 Thread Baptiste
There is not (yet) a GSLB or dyndns daemon available in opensource,
but a few DNS server could be used to emulate this feature.
- PowerDNS  + pipe backend
- unbound + python module

or yourself updating your DNS server to trigger a failover


Cheers


On Tue, Nov 1, 2011 at 6:10 PM, Senthil Naidu senthil.na...@gmail.com wrote:
 hi,

 we need to have a setup as follows



 site 1 site 2

   LB  (ip 1)   LB (ip 2)
    |   |
    |   |
  srv1  srv2  srv1 srv2

 site 1 is primary and site 2 is backup in case of site 1  LB's failure or
 failure of all the servers in site1 the website should work from backup
 location servers.

 Regards

 On Tue, Nov 1, 2011 at 10:31 PM, Gene J gh5...@gmail.com wrote:

 Please provide more detail about what you are hosting and what you want to
 achieve with multiple sites.

 -Eugene

 On Nov 1, 2011, at 9:58, Senthil Naidu senthil.na...@gmail.com wrote:

 Hi,

 thanks for the reply,  if the same needs to be done with dns do we need
 any external dns services our we can use our own ns1 and ns2 for the same.

 Regards


 On Tue, Nov 1, 2011 at 9:06 PM, Baptiste bed...@gmail.com wrote:

 Hi,

 Do you want to failover the Frontend or the Backend?
 If this is the frontend, you can do it through DNS or RHI (but you
 need your own AS).
 If this is the backend, you have nothing to do: adding your servers in
 the conf in a separated backend, using some ACL to take failover
 decision and you're done.

 cheers


 On Tue, Nov 1, 2011 at 2:25 PM, Senthil Naidu senthil.na...@gmail.com
 wrote:
  Hi,
 
  Is it possible to use haproxy in a active/passive failover scenario
  between
  multiple datacenters.
 
  Regards
 
 
 
 






Re: haproxy and multi location failover

2011-11-01 Thread Baptiste
True :)
Despite short TTLs, some client would take a long time to failover.
But it's the only option unless you own your AS and you are able to
route your traffic inside it.

rgs


On Tue, Nov 1, 2011 at 6:30 PM,  vivek.ma...@gmail.com wrote:
 DNS propagation can take a long time based on my experience. We have a 
 similar problem where we host multiple identical setups in different EC2 
 availability zones. We have been thinking of having DNS entry with multiple A 
 records for load distribution and failover. However, that doesn't solve the 
 problem of OP.

 Vivek
 Sent via BlackBerry from T-Mobile

 -Original Message-
 From: Baptiste bed...@gmail.com
 Date: Tue, 1 Nov 2011 18:17:25
 To: Senthil Naidusenthil.na...@gmail.com
 Cc: Gene Jgh5...@gmail.com; haproxy@formilux.orghaproxy@formilux.org
 Subject: Re: haproxy and multi location failover

 There is not (yet) a GSLB or dyndns daemon available in opensource,
 but a few DNS server could be used to emulate this feature.
 - PowerDNS  + pipe backend
 - unbound + python module

 or yourself updating your DNS server to trigger a failover


 Cheers


 On Tue, Nov 1, 2011 at 6:10 PM, Senthil Naidu senthil.na...@gmail.com wrote:
 hi,

 we need to have a setup as follows



 site 1 site 2

   LB  (ip 1)   LB (ip 2)
    |   |
    |   |
  srv1  srv2  srv1 srv2

 site 1 is primary and site 2 is backup in case of site 1  LB's failure or
 failure of all the servers in site1 the website should work from backup
 location servers.

 Regards

 On Tue, Nov 1, 2011 at 10:31 PM, Gene J gh5...@gmail.com wrote:

 Please provide more detail about what you are hosting and what you want to
 achieve with multiple sites.

 -Eugene

 On Nov 1, 2011, at 9:58, Senthil Naidu senthil.na...@gmail.com wrote:

 Hi,

 thanks for the reply,  if the same needs to be done with dns do we need
 any external dns services our we can use our own ns1 and ns2 for the same.

 Regards


 On Tue, Nov 1, 2011 at 9:06 PM, Baptiste bed...@gmail.com wrote:

 Hi,

 Do you want to failover the Frontend or the Backend?
 If this is the frontend, you can do it through DNS or RHI (but you
 need your own AS).
 If this is the backend, you have nothing to do: adding your servers in
 the conf in a separated backend, using some ACL to take failover
 decision and you're done.

 cheers


 On Tue, Nov 1, 2011 at 2:25 PM, Senthil Naidu senthil.na...@gmail.com
 wrote:
  Hi,
 
  Is it possible to use haproxy in a active/passive failover scenario
  between
  multiple datacenters.
 
  Regards
 
 
 
 








Re: haproxy and multi location failover

2011-11-01 Thread Baptiste
RHI: Route Health Injection
AS: Autonomous System
= RHI relies on your AS to route traffic to the right POP (Point Of Presence)
Pro: compatible with anybody speaking BGP or OSPF, failover quickly
Cons: require an AS, so not compatible with public clouds :)

GSLB: (geo|global) Server Load Balancing
= relies on DNS, depending on the status of the POP (cf above).
Pro: easy to configure
Cons: no standard, must rely on the same LB vendor for each POP, quite
expensive, can take some time to failover

cheers

On Tue, Nov 1, 2011 at 7:29 PM, Vivek Malik vivek.ma...@gmail.com wrote:
 May I ask what some of the acronyms in this email thread stand for
 RHI -
 AS -
 GSLB -
 Thanks,
 Vivek

 On Tue, Nov 1, 2011 at 2:26 PM, Baptiste bed...@gmail.com wrote:

 True :)
 Despite short TTLs, some client would take a long time to failover.
 But it's the only option unless you own your AS and you are able to
 route your traffic inside it.

 rgs


 On Tue, Nov 1, 2011 at 6:30 PM,  vivek.ma...@gmail.com wrote:
  DNS propagation can take a long time based on my experience. We have a
  similar problem where we host multiple identical setups in different EC2
  availability zones. We have been thinking of having DNS entry with multiple
  A records for load distribution and failover. However, that doesn't solve
  the problem of OP.
 
  Vivek
  Sent via BlackBerry from T-Mobile
 
  -Original Message-
  From: Baptiste bed...@gmail.com
  Date: Tue, 1 Nov 2011 18:17:25
  To: Senthil Naidusenthil.na...@gmail.com
  Cc: Gene Jgh5...@gmail.com; haproxy@formilux.orghaproxy@formilux.org
  Subject: Re: haproxy and multi location failover
 
  There is not (yet) a GSLB or dyndns daemon available in opensource,
  but a few DNS server could be used to emulate this feature.
  - PowerDNS  + pipe backend
  - unbound + python module
 
  or yourself updating your DNS server to trigger a failover
 
 
  Cheers
 
 
  On Tue, Nov 1, 2011 at 6:10 PM, Senthil Naidu senthil.na...@gmail.com
  wrote:
  hi,
 
  we need to have a setup as follows
 
 
 
  site 1 site 2
 
    LB  (ip 1)   LB (ip 2)
     |   |
     |   |
   srv1  srv2  srv1 srv2
 
  site 1 is primary and site 2 is backup in case of site 1  LB's failure
  or
  failure of all the servers in site1 the website should work from backup
  location servers.
 
  Regards
 
  On Tue, Nov 1, 2011 at 10:31 PM, Gene J gh5...@gmail.com wrote:
 
  Please provide more detail about what you are hosting and what you
  want to
  achieve with multiple sites.
 
  -Eugene
 
  On Nov 1, 2011, at 9:58, Senthil Naidu senthil.na...@gmail.com
  wrote:
 
  Hi,
 
  thanks for the reply,  if the same needs to be done with dns do we
  need
  any external dns services our we can use our own ns1 and ns2 for the
  same.
 
  Regards
 
 
  On Tue, Nov 1, 2011 at 9:06 PM, Baptiste bed...@gmail.com wrote:
 
  Hi,
 
  Do you want to failover the Frontend or the Backend?
  If this is the frontend, you can do it through DNS or RHI (but you
  need your own AS).
  If this is the backend, you have nothing to do: adding your servers
  in
  the conf in a separated backend, using some ACL to take failover
  decision and you're done.
 
  cheers
 
 
  On Tue, Nov 1, 2011 at 2:25 PM, Senthil Naidu
  senthil.na...@gmail.com
  wrote:
   Hi,
  
   Is it possible to use haproxy in a active/passive failover scenario
   between
   multiple datacenters.
  
   Regards
  
  
  
  
 
 
 
 
 





Re: Timeout values

2011-11-01 Thread Baptiste
Hey Erik :)

Once again, playing with the timeouts below won't change anything.
Try playing with the server maxconn; some sysctls also have to be checked.
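For instance (values are placeholders, to be sized for your servers):

  # cap the number of concurrent connections sent to each server
  server app1 10.0.0.1:80 check maxconn 200

and on the system side, sysctls such as net.ipv4.ip_local_port_range,
net.ipv4.tcp_tw_reuse or net.core.somaxconn are the usual suspects.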

What kind of performance can you get?
Are you sure your bottleneck is your LB?

cheers

On Tue, Nov 1, 2011 at 7:33 PM, Erik Torlen erik.tor...@apicasystem.com wrote:
 I'm trying to see what difference the values does to my loadtest result. The 
 web platform is on amazon
 so the latency and everything is having a certain impact on the result.

 I will keep testing and see what kind of result I get. Currently I have 
 another problem which I will
 send in to the list :/

 /E

 -Original Message-
 From: Baptiste [mailto:bed...@gmail.com]
 Sent: den 25 oktober 2011 23:15
 To: Erik Torlen
 Cc: haproxy@formilux.org
 Subject: Re: Timeout values

 Hi Erik,

 What's your purpose here?
 Depending on your load test and you haproxy configuration, the queue
 timeout might generate 503 responses.
 The other ones are related to the behavior you want for your web platform.
 Basically, all the values you added seems too high.


 Cheers


 On Tue, Oct 25, 2011 at 11:02 PM, Erik Torlen
 erik.tor...@apicasystem.com wrote:
 Hi,

 I would like to get feedback on these timeout values.

    timeout http-request    40s
    timeout queue           1m
    timeout connect         120s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 40s
    timeout check           40s

 I have done alot of different loadtests with different values using stud in 
 front of haproxy and backend on separate instances
 in the cloud (meaning there is higher latency then normal against backend).

 Can't see any big difference in the loadtest result when having these 
 timeout fairly high. I guess that really low values will affect
 the loadtest result more.

 /E






Re: Haproxy timing issues

2011-11-01 Thread Baptiste
Hi,

First question: are you sure you're reaching the limit of
haproxy/varnish and not the limit of your client?
Mainly concerning the increasing response time.

How many CPUs do you have in your VM? Starting too many stud processes
could be counter-productive.
I doubt doing CPU affinity in a VM improves anything :)

Concerning the logs, the times we can see on your client side are very
high! Too high :)
3-4s for HAProxy to get the full request.

How are you running stud?
Which options? Are you using the one with emericbr patches?
Are your requests using the same SSL session ID or do you renegotiate a
new one for each connection?

Have you checked your network statistics, on both the client and server side?
netstat -in and netstat -s
Are there a lot of drops, retransmissions, congestion, etc...?

On your last log line, we can see that HAProxy took 22s to establish a
TCP connection to your Varnish...

Can you share your stud, haproxy, and varnish configuration, the
version of each software, the startup parameters for Varnish.
What kind of tool do you use on your client to run your load test?
What sysctls have you already tuned?


Unfortunately, the Aloha does not run on Amazon :)


cheers,


On Tue, Nov 1, 2011 at 9:16 PM, Erik Torlen erik.tor...@apicasystem.com wrote:
 Hi,

 I am currently (and have been from time to time the last weeks) doing some 
 heavy loadtesting against haproxy with stud in front of it handling the ssl.

 My loadtest has been focused on loadtesting SSL traffic through stud against 
 haproxy on amazon ec2.

 Our current problem is that we cannot get more then ~30k active connections 
 (~150 conns/s) until we starting to see increased response time (10-60s) on 
 the
 client side. Running with 38k connections now and seeing much higher response 
 time.

 The setup is:
 1 instance running haproxy + stud
 2 instances running varnish server 3 cached images

 Varnish has 100% cache hit ratio so nothing goes to the backend.

 We have tried using m1.xlarge and the c1.xlarge. The m1.xlarge uses almost 
 100% cpu when doing the loadtests while c1.xlarge has a lot of resources left 
 (stud using a few percent per process) and haproxy ~60-70%cpu.
 The only difference is that c1.xlarge gives quite better response time before 
 the actual problem happens where resp times are increasing.

 Haproxy is running with nbproc=1
 Stud is running with n=6 and shared session cache. (Tried it with n=3 as well

 From the logging in haproxy I could see that the time it takes to establish a 
 connection against the backend and receive the data:

 Haproxy.log
 Nov  1 18:40:35 127.0.0.1 haproxy[18511]: x.x.x.x:54113 
 [01/Nov/2011:18:39:40.273] varnish varnish/varnish1 4519/0/73/50215/54809 200 
 2715 - -  238/236/4/5/0 0/0 GET /assets/images/icons/elite_logo_beta.png 
 HTTP/1.1
 Nov  1 18:40:35 127.0.0.1 haproxy[18511]: x.x.x.x:55635 
 [01/Nov/2011:18:39:41.547] varnish varnish/varnish1 3245/0/81/50207/53535 200 
 1512 - -  238/236/3/4/0 0/0 GET /assets/images/icons/favicon.ico 
 HTTP/1.1
 ...
 Nov  1 18:40:44 127.0.0.1 haproxy[18511]: x.x.x.x:34453 
 [01/Nov/2011:18:39:25.330] varnish varnish/varnish1 3082/0/225/32661/79559 
 200 1512 - -  234/232/1/2/0 0/0 GET /assets/images/icons/favicon.ico 
 HTTP/1.1
 Nov  1 18:40:44 127.0.0.1 haproxy[18511]: x.x.x.x:53731 
 [01/Nov/2011:18:39:25.036] varnish varnish/varnish1 3377/0/216/32669/79854 
 200 1725 - -  233/231/0/1/0 0/0 GET /assets/images/create/action_btn.png 
 HTTP/1.1

 Haproxy.err (NOTE: 504 error here)

 Nov  1 18:40:11 127.0.0.1 haproxy[18511]: x.x.x.x:34885 
 [01/Nov/2011:18:39:07.597] varnish varnish/varnish1 4299/0/27/-1/64330 504 
 194 - - sH-- 10916/10914/4777/2700/0 0/0 GET 
 /assets/images/icons/favicon.ico HTTP/1.1
 Nov  1 18:40:12 127.0.0.1 haproxy[18511]: x.x.x.x:58878 
 [01/Nov/2011:18:39:12.621] varnish varnish/varnish2 314/0/55/-1/60374 504 194 
 - - sH-- 3692/3690/3392/1623/0 0/0 GET /assets/images/icons/favicon.ico 
 HTTP/1.1

 Nov  1 18:40:18 127.0.0.1 haproxy[18511]: x.x.x.x:35505 
 [01/Nov/2011:18:39:42.670] varnish varnish/varnish1 3515/0/22078/10217/35811 
 200 1512 - -  1482/1481/1238/710/1 0/0 GET 
 /assets/images/icons/favicon.ico HTTP/1.1
 Nov  1 18:40:18 127.0.0.1 haproxy[18511]: x.x.x.x:40602 
 [01/Nov/2011:18:39:42.056] varnish varnish/varnish1 4126/0/22081/10226/36435 
 200 1512 - -  1475/1474/1231/703/1 0/0 GET 
 /assets/images/icons/favicon.ico HTTP/1.1


 Here is the logs from running haproxy with varnish as a backend on the local 
 machine:

 Haproxy.log
 Nov  1 20:00:52 127.0.0.1 haproxy[18953]: x.x.x.x:38552 
 [01/Nov/2011:20:00:45.157] varnish varnish/local_varnish 7513/0/0/0/7513 200 
 1725 - -  4/3/0/1/0 0/0 GET /assets/images/create/action_btn.png 
 HTTP/1.1
 Nov  1 20:00:54 127.0.0.1 haproxy[18953]: x.x.x.x:40850 
 [01/Nov/2011:20:00:48.219] varnish varnish/local_varnish 6524/0/0/0/6524 200 
 1725 - -  2/1/0/1/0 0/0 GET /assets/images/create/action_btn.png 
 HTTP/1.1

 Haproxy.err
 Nov  

Re: another round for configuration.txt = html

2011-11-02 Thread Baptiste
Hi Aleks,

It's a good and interesting start.
I already talked to Willy about the doc format, and unfortunately for
you, the way you're doing it is not the one he wants.

As you have noticed, the doc format is quite open; each
documentation contributor tries to maintain the format, but there is
no strict verification of the shape (only of the content).
What Willy wants is not a translation of the doc into a new format that
would force devs to follow strict formatting rules, otherwise the
integrity of the whole doc would be broken.
He considers the documentation readable by a human eye, so it
should be readable by an automatic tool which could then translate it into a
nicer format.

Purpose is double:
1. don't bother the devs when they have to write documentation
2. have a nice readable documentation

So basically, a lot of people are interested in a nicer version of the
doc. I already started working on the subject and I might push
something to my GitHub very soon: a bash/sed/awk tool to translate the
HAProxy documentation into Markdown format (could be HTML as well).
Contributions will be welcome :)

cheers

On Thu, Nov 3, 2011 at 12:57 AM, Aleksandar Lazic al-hapr...@none.at wrote:
 Hi all,

 I have now started to change the configuration.txt in such a way
 that asciidoc can produce nice HTML output.

 asciidoc -b html5 -o haproxy-conf.html configuration.txt

 http://www.none.at/haproxy-conf.html

 I have stopped at section 2.3 to get your feedback.

 As you can see in the diff there is not too much to change,
 yet.

 http://www.none.at/haproxy-conf.diff

 Thank you for your feedback

 Aleks





Re: Haproxy timing issues

2011-11-02 Thread Baptiste
I'm currently writing the blog article about it, but Emeric's last
patch will allow you to scale OUT your SSL performance through a
shared SSL session ID cache.

cheers


On Thu, Nov 3, 2011 at 1:21 AM, Erik Torlen erik.tor...@apicasystem.com wrote:
 Yes, I'm currently on Ubuntu 10.04.
 So basically I could grab this (http://packages.ubuntu.com/oneiric/openssl) 
 .deb package and then
 add the patch you linked for me to it?
 Can I then compile stud as default or do I have to modify the Makefile?

 /E

 -Original Message-
 From: Vincent Bernat [mailto:ber...@luffy.cx]
 Sent: den 2 november 2011 16:38
 To: Erik Torlen
 Cc: haproxy@formilux.org
 Subject: Re: Haproxy timing issues

 OoO En cette  nuit nuageuse du jeudi 03 novembre  2011, vers 00:32, Erik
 Torlen erik.tor...@apicasystem.com disait :

 Ok, could be an idea to use that then.
 Btw, I am on a system that I can't upgrade to a later version of the
 dist and take advantage of openssl 1.0.0 through apt.
 Could I make stud use openssl with static libs? E.g compiling openssl
 from source and the linking it in Makefile for stud.

 It should  be possible.  But OpenSSL  1.0.0 can live  side by  side with
 OpenSSL 0.9.8k.  I suppose that you  use Ubuntu LTS 10.04.  You can grab
 the package from Oneiric and apply a simple patch to backport it.

  https://gist.github.com/1272151/b1a61124d1568eb795fa82b24b875889cbd0005c
 --
 Vincent Bernat ☯ http://vincent.bernat.im

 panic(floppy: Port bolixed.);
        2.2.16 /usr/src/linux/include/asm-sparc/floppy.h




Re: another round for configuration.txt => html

2011-11-02 Thread Baptiste
because writing the tool to do it is more fun and easier to maintain
than a whole doc to parse again after each patch.
:)

On Thu, Nov 3, 2011 at 6:23 AM, carlo flores ca...@petalphile.com wrote:
 Just curious: why not rewrite the docs in markdown?

 Would a rewrite formulinix could just add to be welcome?

 On Wednesday, November 2, 2011, Baptiste bed...@gmail.com wrote:
 Hi Aleks,

 It's a good and interesting start.
 I already talked to Willy about the doc format, and unfortunately for
 you, the way you're doing is not the one wanted by him.

 As you have remarked, the doc format is quite open, each
 documentation contributors tries to maintain the format, but there is
 no strict verification on the shape (only on the content).
 What Willy wants, is not a translation of the doc in a new format that
 would force devs to follow strong recommendation, otherwise the
 integrity of the whole doc would be broken.
 He considers the documentation is readable for a human eye, so it
 should be for an automatic tool which could then translate it into a
 nicer format.

 Purpose is double:
 1. don't bother the devs when they have to write documentation
 2. have a nice readable documentation

 So basically, a lot of people are interested by a nicer version of the
 doc, I already started working on the subject and I might push
 something in my github very soon: a bash/sed/awk tool to translate the
 HAProxy documentation in Markdown format (could be HTML as well).
 Contribution will be welcome :)

 cheers

 On Thu, Nov 3, 2011 at 12:57 AM, Aleksandar Lazic al-hapr...@none.at
 wrote:
 Hi all,

 I have now started do change the configuration.txt in that way
 that asciidoc an produce nice HTML output.

 asciidoc -b html5 -o haproxy-conf.html configuration.txt

 http://www.none.at/haproxy-conf.html

 I have stopped at section 2.3 to get your feedback.

 As you can see in the diff there is not to much to change,
 yet.

 http://www.none.at/haproxy-conf.diff

 Thank you for your feedback

 Aleks







Re: cannot bind socket Multiple backends tcp mode

2011-11-03 Thread Baptiste
That's normal: your port 443 is already bound by the first frontend.
So when HAProxy wants to bind it for your second frontend, it can't...

The only solution, in the current case, is to have one frontend per IP.
Furthermore, your ACL won't work since you're in TCP mode and the
traffic is encrypted.
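
For instance, a minimal sketch with one frontend per IP (the 192.0.2.x
addresses are placeholders; this assumes each hostname's DNS record points
to its own IP, so no ACL on the encrypted traffic is needed):

 frontend https-secure-in
        mode tcp
        bind 192.0.2.10:443
        default_backend https-secure-portal

 frontend https-services-in
        mode tcp
        bind 192.0.2.11:443
        default_backend https-services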

Cheers



On Thu, Nov 3, 2011 at 8:34 PM, Saul s...@extremecloudsolutions.com wrote:
 Hello List,

 I hope someone can shed some light with the following situation:

 Setup:
 HAproxy frontend proxy and apache SSL backends. I didn't want to use
 haproxy+stunnel or apache mod_ssl so I use straight TCP mode and
 redirects, it works fine with one backend. The only problem is when I
 try to add a second backend for a different farm of servers I get the
 following:

 Starting frontend https-services-in: cannot bind socket

 My understanding was that multiple backends could use the same
 interface, perhaps I was wrong, if that is the case, any suggestions
 on how to be able to have multiple backends running tcp mode on port
 443 so I can match the url and redirect to the appropriate backend
 from my HAproxy?

 Thank You Very much in advance.

 Relevant configuration:

 ##--
 ##  HTTP FRONTEND
 ## 
 frontend www
 mode http
 bind :80

 redirect prefix https://secure.mydomain.com if { hdr_dom(Host) -i
 secure.mydomain.com }
 redirect prefix https://services.mydomain.com if { hdr_dom(Host) -i
 services.mydomain.com }

 backend www
 mode http
 balance leastconn
 stats enable
 option httpclose
 option forwardfor
 option httpchk HEAD /ha.txt HTTP/1.0

 server nginx_1 10.10.1.1:80 weight 100 check

 ##--
 ##  HTTPS FRONTEND
 ## 


 frontend https-in
 mode tcp
 bind :443
 default_backend https-secure-portal

 ##--
 ##  HEADER ACL'S
 ## 

 acl secure1 hdr_dom(Host) -i secure.mydomain.com
 use_backend https-secure-portal if secure1

 backend https-secure-portal
 mode tcp
 balance leastconn
 option ssl-hello-chk

 server ssl_1 10.10.1.1:443 weight 100 check

 ##--
 ##  SERVICES FRONTEND
 ## 

 frontend https-services-in
 mode tcp
 bind :443
 default_backend https-services

 acl services1 hdr_dom(Host) -i services.mydomain.com
 use_backend https-services if services1

 backend https-services
 mode tcp
 balance leastconn
 option ssl-hello-chk
 #option httpclose
 #option forwardfor

 server nginx2_ssl 10.10.1.110:443 weight 100 check





Re: Help with SSL

2011-11-03 Thread Baptiste
Hi Christophe,

Use the HAProxy box in transparent mode: HAProxy will get connected to
your application server using the client IP.
In your backend, just add the line:
source 0.0.0.0 usesrc clientip

Bear in mind that in such configuration, the default gateway of your
server must be the HAProxy box. Or you have to configure PBR on your
network.
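
Applied to the SSL farm from your configuration, a rough sketch (this also
assumes a kernel with TPROXY support and HAProxy built with it, otherwise
the usesrc keyword will be refused):

 listen WebPlayer-Farm-SSL 192.168.0.2:443
        mode tcp
        balance source
        source 0.0.0.0 usesrc clientip
        server Player1 192.168.0.10:443 check
        server Player2 192.168.0.11:443 check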

Stunnel can be used in front of HAProxy to decrypt the traffic.
But if your main issue is to get the client IP, then it won't help you
unless you setup transparent mode as explained above.

cheers


On Thu, Nov 3, 2011 at 10:00 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hello,

  My config of HAProxy is:

 -- CUT --
 global
 log 192.168.0.2 local0
 log 127.0.0.1 local1 notice
 maxconn     10240
 defaults
 log    global
 option dontlognull
 retries    2
 timeout client 35s
 timeout server 90s
 timeout connect 5s
 timeout http-keep-alive 10s

 listen WebPlayer-Farm 192.168.0.2:80
 mode http
 option httplog
 balance source
 #balance leastconn
 option forwardfor
 stats enable
 option http-server-close
 server Player4 192.168.0.13:80 check
 server Player3 192.168.0.12:80 check
 server Player1 192.168.0.10:80 check
 server Player2 192.168.0.11:80 check
 server Player5 192.168.0.14:80 check
 option httpchk HEAD /checkCF.cfm HTTP/1.0

 listen WebPlayer-Farm-SSL 192.168.0.2:443
 mode tcp
 option ssl-hello-chk
 balance source
 server Player4 192.168.0.13:443 check
 server Player3 192.168.0.12:443 check
 server Player1 192.168.0.10:443 check
 server Player2 192.168.0.11:443 check
 server Player5 192.168.0.14:443 check

 listen  Manager-Farm    192.168.0.2:81
 mode http
 option httplog
 balance source
 option forwardfor
 stats enable
 option http-server-close
 server  Manager1 192.168.0.60:80 check
 server  Manager2 192.168.0.61:80 check
 server  Manager3 192.168.0.62:80 check
 option httpchk HEAD /checkCF.cfm HTTP/1.0

 listen Manager-Farm-SSL 192.168.0.2:444
 mode tcp
 option ssl-hello-chk
 balance source
 server Manager1 192.168.0.60:443 check
 server Manager2 192.168.0.61:443 check
 server Manager3 192.168.0.62:443 check

 listen  info 192.168.0.2:90
 mode http
 balance source
 stats uri /


 -- CUT --

  The problem with SSL is that the IP address that I get to the web server
 is the IP address of the loadbalancer and not the original IP address.

  This is a big problem for me and it's essential that I can have the
 right IP address.

  How can I do, is it possible? I've heard of stunnel but I don't
 understand how to use it.

  Thank you in advance for your help,

  Christophe





Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-04 Thread Baptiste
By the way, this one is useless as long as you enable mode http,
because it's implied in it.
# Every header should end with a colon followed by one space.
reqideny ^[^:\ ]*[\ ]*$

Cheers


On Thu, Nov 3, 2011 at 5:47 PM, Cyril Bonté cyril.bo...@free.fr wrote:
 Le Jeudi 3 Novembre 2011 17:34:38 Benoit GEORGELIN a écrit :
 Can you give me more details about your analyse? (examples)
 I will try to understand more what's happen


 Is the response who is not complete or the header only?

 The body is not complete. I tried with the examples I provided in my first
 mail.

 Examples :
 curl -si http://sandka.org/portfolio/; = HTTP/1.0 200 OK with html cut in
 the middle.
 curl -si http://sandka.org/portfolio/foobar; = HTTP/1.0 404 Not Found with
 html cut in the middle.

 There's something bad in ZenPhoto : it forces the response in HTTP/1.0, which
 prevents chunked transfer. That also can explain why mod_deflate generated 502
 errors.

 One thing you can try :
 Edit the file index.php in ZenPhoto and replace HTTP/1.0 occurences (one for
 200, one for 404) by HTTP/1.1. Hopefully, this will allow apache+php to use
 chunked responses and solve the problem.

 --
 Cyril Bonté





Re: Question about timeout

2011-11-07 Thread Baptiste
Hi,

You need to split your configuration in frontend/backend.
Then you can do content switching based on a header or a URL prefix,
depending on how you can detect that a user has a session.

So let us know how you check whether a user has a session or not, then
we can help you with the configuration.
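
Just to give an idea of the shape, here is a rough sketch, assuming (purely
as an example) that a session can be detected by the presence of a ColdFusion
CFID cookie; the cookie and backend names are placeholders:

 frontend fe_web
        bind 192.168.0.2:80
        mode http
        acl has_session hdr_sub(Cookie) CFID=
        use_backend bk_with_session if has_session
        default_backend bk_default

 backend bk_with_session
        mode http
        balance source
        server Player1 192.168.0.10:80 check

 backend bk_default
        mode http
        balance roundrobin
        server Player2 192.168.0.11:80 check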

cheers


On Mon, Nov 7, 2011 at 9:46 AM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,
 Here's a part of my config file :
 -- CUT --
 global
 log 192.168.0.2 local0
 log 127.0.0.1 local1 notice
 maxconn     10240
 defaults
 log    global
 option dontlognull
 retries    2
 timeout client 35s
 timeout server 90s
 timeout connect 5s
 timeout http-keep-alive 10s
 listen WebPlayer-Farm 192.168.0.2:80
 mode http
 option httplog
 balance source
 #balance leastconn
 option forwardfor
 stats enable
 option http-server-close
 server Player4 192.168.0.13:80 check
 server Player3 192.168.0.12:80 check
 server Player1 192.168.0.10:80 check
 server Player2 192.168.0.11:80 check
 server Player5 192.168.0.14:80 check
 option httpchk HEAD /cfadmin/ping.cfm HTTP/1.0
 -- CUT --
 I've put 5 web servers.
 I want users who have a session to be redirected to another server.

 For now, it does not work, they receive an error message.

 What should I adjust?
 Thanks for your help.
 Regards,
 Christophe



Re: Proxy Protocol in 1.4.x ?

2011-11-07 Thread Baptiste
Hi All,

After scaling up Stud, @exceliance, we (actually, @emeriBr) worked to
make it able to scale out:

More information here:
http://blog.exceliance.fr/2011/11/07/scaling-out-ssl/

Regards


On Wed, Sep 28, 2011 at 4:37 PM, Baptiste bed...@gmail.com wrote:
 On the same subject, an excellent article from Vincent:
 http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html

 Good one mate :)

 cheers


 On Mon, Sep 19, 2011 at 12:00 PM, Baptiste bed...@gmail.com wrote:
 Hi there,

 Finally, we've finished our bench on SSL tools available for HAProxy:
 stud and stunnel.
 Please read the benchmark here:

 http://blog.exceliance.fr/2011/09/16/benchmarking_ssl_performance/

 cheers





Re: Autoscaling in haproxy with persistence sessions

2011-11-07 Thread Baptiste
Hi Erik,

Let me give you a bit of information; I don't know if it will help.
Appsession is not resilient across an HAProxy reload.
Which means that since you reload after updating the configuration,
all sessions will be re-dispatched.
You can use a stick-table too; sticking on a cookie is easily doable
with the latest HAProxy.
Note that in HAProxy 1.5-dev7, there is also a clear table command
available on the stats socket.
This page summarizes what's new in HAProxy 1.5-dev7 compared to 1.5-dev6:
http://blog.exceliance.fr/2011/10/03/whats-new-in-haproxy-1-5-dev7/
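
A rough sketch only, assuming a recent 1.5-dev build where the cookie()
fetch can be used in stick rules and an admin-level stats socket is
configured; the cookie name, addresses and socket path are placeholders:

 backend bk_app
        balance roundrobin
        stick-table type string len 32 size 200k expire 30m
        stick on cookie(JSESSIONID)
        server srv1 10.0.0.1:80 check
        server srv2 10.0.0.2:80 check

 # after a scale-up, flush the persistence entries from the stats socket:
 echo "clear table bk_app" | socat stdio unix-connect:/tmp/haproxy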

cheers


On Mon, Nov 7, 2011 at 9:32 PM, Erik Torlen erik.tor...@apicasystem.com wrote:
 If you get a burst against 3 active backend servers they will take care of 
 all the request and connections. The clients that are active
 will then get a persistence sessions against 1 of these 3 servers. It will 
 take ~5min to scale up a new server so during that period
 more clients could come in and the 3 backend would then be even more 
 overloaded.

 It is that case that I would like to avoid by resetting the session so that 
 existing plus new sessions are spread through all the existing
 plus new servers.

 /E


 -Original Message-
 From: vivek.ma...@gmail.com [mailto:vivek.ma...@gmail.com]
 Sent: den 7 november 2011 12:27
 To: David Birdsong; Erik Torlen
 Cc: haproxy@formilux.org
 Subject: Re: Autoscaling in haproxy with persistence sessions

 If the solution is intended for traffic burst, Isn't it safe to assume that 
 most clients will be new which appsession/cookie doesn't know about?

 New clients will automatically be preferred to go to newly added servers as 
 new servers will have least active connections.

 I don't think any special change is required in practice to handle burst of 
 new traffic from say a premium ad buy or email blast (along with using 
 maxidle)

 Vivek
 Sent via BlackBerry from T-Mobile

 -Original Message-
 From: David Birdsong david.birds...@gmail.com
 Date: Mon, 7 Nov 2011 12:17:53
 To: Erik Torlenerik.tor...@apicasystem.com
 Cc: Vivek Malikvivek.ma...@gmail.com; 
 haproxy@formilux.orghaproxy@formilux.org
 Subject: Re: Autoscaling in haproxy with persistence sessions

 This sounds like what balancing on a hashed value is intended for.
 'hash-type consistent' will reduce the redistribution of keys when the
 pool is expanded, and when nodes are removed, only the removed nodes
 keys are redistributed.

 On Mon, Nov 7, 2011 at 11:15 AM, Erik Torlen
 erik.tor...@apicasystem.com wrote:
 Interesting. In this case we are expecting a lot of burst traffic during a
 very short period of time, 15-30min so I am not sure if we can rely on
 scaling in a more proactive way to send traffic to the new servers. I would
 be
 more comfortable if we could just clean the existing sessions and let them
 be spread over the new servers + existing servers.



 I had a look at stick-table and saw that it has methods to support being
 deleted/cleared through the socket interface. Is it possible to do something
 similar to clean appsessions? Or maybe store appsession in
 a stick-table and clear the session through socket command?



 /E



 From: Vivek Malik [mailto:vivek.ma...@gmail.com]
 Sent: den 7 november 2011 11:05
 To: Erik Torlen
 Cc: haproxy@formilux.org
 Subject: Re: Autoscaling in haproxy with persistence sessions



 I personally find it easier to use cookie instead of appsession. We use a
 similar pattern of adding a new server. Keeping a low maxidle (like 10
 minutes) helps us send traffic to new servers. Keeping maxidle helps us
 maintain session affinity where required (like progress bars for uploads)



 Vivek

 On Mon, Nov 7, 2011 at 1:32 PM, Erik Torlen erik.tor...@apicasystem.com
 wrote:

 Hi,

 We are currently having a system which runs haproxy in the amazon cloud. Our
 system is also using autoscaling of backendservers
 so when we reach a certain cpu usage during x min we will add more servers
 to the backend and update the haproxy config + reloading haproxy.

 This works good as we have it now.

 What we would like is to add persistence to the backend in order to use the
 caches on the backend servers more efficiently (a shared cache would have
 been
 better but is not the case now unfortunately).

 This makes the autoscaling a bit more complex because of the persistence.
 When scaling up new servers the client would still stay on the overloaded
 backend servers instead of start using the new ones.

 So I thought I would check with you if there is a way to clear persistence
 session used by appsession in a good way without effecting the traffic to
 servers?

 If we cleared all the persistence sessions we could let the client go into
 the new backend servers and have request-learn in appsession learn the
 cookie and set persistence to the existing and new servers for the client.

 Any ideas here?

 Cheers
 E







Re: Autoscaling in haproxy with persistence sessions

2011-11-07 Thread Baptiste
On Mon, Nov 7, 2011 at 9:48 PM, Erik Torlen erik.tor...@apicasystem.com wrote:
 Thank you Baptiste, seems like it should work then out-of-the-box when using 
 appsession.

 On Haproxy reload the sessions should be cleared and then clients would be 
 replicated to new servers.


Actually, the weakness of appsession seems to be a strength for you...
Wait, if you don't configure a peer on the stick table, its entries are also reset :)

So both should work in your case ;)



Re: Autoscaling in haproxy with persistence sessions

2011-11-07 Thread Baptiste
On Mon, Nov 7, 2011 at 10:05 PM, Erik Torlen
erik.tor...@apicasystem.com wrote:
 What would you recommend if we wanted to have all our three haproxy instances 
 loadbalance in the same way.
 And still make use of persistence when the client is using one of the haproxy 
 instances?

 E.g Having the client come to the same backend on both haproxy srv1, srv2 and 
 srv3.

 Could we make use of some hash-algorithm to achieve this? Balancing on source 
 ip?


The hash algorithm based on source IP looks interesting, persistence
will be lost on each reload and adapted to the farm size.
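
A minimal sketch of that approach, assuming the very same backend definition
is deployed on all three HAProxy instances (addresses are placeholders), so
each instance maps a given source IP to the same server:

 backend bk_app
        balance source
        # optionally: hash-type consistent, to limit remapping when servers are added
        server srv1 10.0.0.1:80 check
        server srv2 10.0.0.2:80 check
        server srv3 10.0.0.3:80 check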

Worst case, the stick table + peers to synchronise + clear table on reload...
But there might be undesirable side effects: if clear table is not
done synchronously, then new entries will be replicated...

cheers



Re: SSL Pass through and sticky session

2011-11-07 Thread Baptiste
Hi,

The configuration is for HAProxy 1.5-something :)

cheers

On Tue, Nov 8, 2011 at 3:00 AM, Mir Islam mis...@mirislam.com wrote:
 Thanks Vincent for the link. That is exactly what I was looking for. However 
 the configuration they provided does not work out of the box. My knowledge in 
 HAProxy is less than two days old. So if  you or anyone else can tell me what 
 the following errors mean, I will appreciate it a lot.

 First one is simple, for some reason binary type is not valid for 
 stick-table? All I could see in web site in reference to ip but not binary 
 or other types. I have compiled haproxy 1.4.18 from source on linux.

 ./haproxy -f ./haproxy-zdm-ssl2.cfg
 [ALERT] 311/015412 (23710) : parsing [./haproxy-zdm-ssl2.cfg:8] : 
 stick-table: unknown type 'binary'.
 [ALERT] 311/015412 (23710) : parsing [./haproxy-zdm-ssl2.cfg:10] : error 
 detected while parsing ACL 'clienthello'.
 [ALERT] 311/015412 (23710) : parsing [./haproxy-zdm-ssl2.cfg:11] : error 
 detected while parsing ACL 'serverhello'.
 [WARNING] 311/015412 (23710) : parsing [./haproxy-zdm-ssl2.cfg:14] : 
 tcp-request inspect-delay will be ignored because backend 'https' has no 
 frontend capability
 [ALERT] 311/015412 (23710) : parsing [./haproxy-zdm-ssl2.cfg:15] : error 
 detected in backend 'https' while parsing 'if' condition
 [ALERT] 311/015412 (23710) : parsing [./haproxy-zdm-ssl2.cfg:18] : unknown 
 keyword 'tcp-response' in 'backend' section
 [ALERT] 311/015412 (23710) : parsing [./haproxy-zdm-ssl2.cfg:24] : 'stick': 
 unknown fetch method 'payload_lv(43,1)'.
 [ALERT] 311/015412 (23710) : parsing [./haproxy-zdm-ssl2.cfg:27] : 'stick': 
 unknown fetch method 'payload_lv(43,1)'.
 [ALERT] 311/015412 (23710) : Error(s) found in configuration file : 
 ./haproxy-zdm-ssl2.cfg
 [WARNING] 311/015412 (23710) : config : missing timeouts for backend 'https'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
 [ALERT] 311/015412 (23710) : Fatal errors found in configuration.


 My configuration

 backend https
        mode tcp
        balance roundrobin
        srvtimeout      5

        # maximum SSL session ID length is 32 bytes.
        stick-table type binary len 32 size 30k expire 30m

        acl clienthello req_ssl_hello_type 1
        acl serverhello rep_ssl_hello_type 2

        # use tcp content accepts to detects ssl client and server hello.
        tcp-request inspect-delay 5s
        tcp-request content accept if clienthello

        # no timeout on response inspect delay by default.
        tcp-response content accept if serverhello

        # SSL session ID (SSLID) may be present on a client or server hello.
        # Its length is coded on 1 byte at offset 43 and its value starts
        # at offset 44.
        # Match and learn on request if client hello.
        stick on payload_lv(43,1) if clienthello

        # Learn on response if server hello.
        stick store-response payload_lv(43,1) if serverhello

        server s1 192.168.1.1:443
        server s2 192.168.1.2:443




 On Nov 7, 2011, at 12:00 PM, Vincent Bernat wrote:

 OoO Pendant le  journal télévisé du lundi 07  novembre 2011, vers 20:16,
 Mir Islam mis...@mirislam.com disait :

 Yea that is the problem. Right now SSL is terminated at the
 application level on each server. There is no way to inspect the
 cookie even if the server sets one. Sticky session in TCP mode can be
 done by source IP (that is why I have balance source). But that
 creates the other problem as I mentioned. Folks coming from behind
 NAT will hit the same server and not get load balanced. Because
 HAProxy will think they are all the same. I was trying to find out if
 there is something else that could be done. From my own logical
 reasoning, no. :) but I have been wrong before so I was hoping
 someone had similar issue.

 See this post:
 http://blog.exceliance.fr/2011/07/04/maintain-affinity-based-on-ssl-session-id/

 While  this  won't work,  in  theory, if  client  is  requesting to  use
 tickets, almost  all clients keep the  right session ID  even when using
 tickets. You  should of course ensure  that a client will  keep the same
 session ID all  the time.  This means that you need  to ensure that your
 web server is able to resume session with and without tickets correctly.
 For example, with nginx, you need to configure a session cache.
 --
 Vincent Bernat ☯ http://vincent.bernat.im

 Keep it right when you make it faster.
            - The Elements of Programming Style (Kernighan  Plauger)






Re: Source IP rate limiting

2011-11-10 Thread Baptiste
On Thu, Nov 10, 2011 at 12:48 PM, Alex Davies a...@davz.net wrote:
 Hi,
 I am interested in rate limiting connections from users to stop small DOS
 'attacks' from individual users.
 I see the excellent post at http://blog.serverfault.com/post/1016491873/ and
 have followed this in a test enviroment.
 I have the following questions:
 * What is the best way to monitor the # of connections that are being
 rejected as a result of this from the log? The socat example in that post
 seems - to me - to show the number of IPs in the relevant tables as opposed
 to the number of connections that are being rejected. Is it possible also to
 know which 'reject' the request is blocked by (from the example post there
 are 2)
 * Is it possible to 'hash' on a specific cookie value (i'm thinking
 PHPSESSID) as well as IP, i.e. if connections for any given PHPSESSID value
 reaches x per minute block?
 Many thanks,
 Alex
 --
 Alex Davies


Hi,

You can know the number of rejected requests through the logs.

You can use a string-type stick table and store the PHPSESSID in it.

And you can capture the cookie value in the logs as well to know how
many requests have been rejected.
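
For example, a rough sketch of the capture part (the captured value should
then show up in the captured request cookie field of the HTTP log line,
assuming option httplog is enabled):

 frontend fe_web
        bind :80
        mode http
        option httplog
        capture cookie PHPSESSID= len 32
        default_backend bk_web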

cheers



Re: Add server-id to response header

2011-11-10 Thread Baptiste
There might be a dirty way:

In your backend, give a try to the following:


   acl server1 srv_id 1
   acl server2 srv_id 2
   rspadd X-Server:\ server1 if server1
   rspadd X-Server:\ server2 if server2

   server server1 1.1.1.1:80 id 1
   server server2 2.2.2.2:80 id 2

Please tell me if it works :)

cheers



Re: Many BADREQ and NOSRV entries in the log

2011-11-16 Thread Baptiste
Hi,

Your request does not seem to be RFC compliant because of the blank characters.
They should have been encoded as %20.
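
For example, the request shown by "show errors" below would have to arrive
with the spaces percent-encoded:

 GET /s/camisa%20da%20marca%20reserva/15 HTTP/1.0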

cheers

On Wed, Nov 16, 2011 at 4:24 PM, Mariano Guezuraga mguezur...@gmail.com wrote:
 Hello list,

 I'm getting some (~400 per hour) NOSRV ...BADREQ entries in my log file.
 I've tried temporarely accept-invalid-http-request with no success, also
 I've set the timeout http-request parameter to 30s, with no difference.

 Partial log of the BADREQ: http://pastebin.com/uHJ2qztb

 My haproxy.cfg file: http://pastebin.com/EJi18bqV

 The only error I've spotted trough the socket is:

 root@loadbalancer-1:/var/log# echo show errors | socat
 unix-connect:/tmp/haproxy stdio

 [16/Nov/2011:14:46:46.506] frontend balancer (#1): invalid request
  src 92.242.138.134, session #4196, backend balancer (#1), server NONE
 (#-1)
  request length 73 bytes, error at position 14:

  0  GET /s/camisa da marca reserva/15 HTTP/1.0\r\n
  00044  Host: www.example.com.br\r\n
  00071  \r\n

 I don't know if I should ignore this, or what to try. I would appreciate any
 hint!





Re: hashing + roundrobin algorithm

2011-11-19 Thread Baptiste
On Fri, Nov 18, 2011 at 5:48 PM, Rerngvit Yanggratoke rerng...@kth.se wrote:
 Hello All,
         First of all, pardon me if I'm not communicating very well. English
 is not my native language. We are running a static file distribution
 cluster. The cluster consists of many web servers serving static files over
 HTTP.  We have very large number of files such that a single server simply
 can not keep all files (don't have enough disk space). In particular, a file
 can be served only from a subset of servers. Each file is uniquely
 identified by a file's URI. I would refer to this URI later as a key.
         I am investigating deploying HAProxy as a front end to this cluster.
 We want HAProxy to provide load balancing and automatic fail over. In other
 words, a request comes first to HAProxy and HAProxy should forward the
 request to appropriate backend server. More precisely, for a particular key,
 there should be at least two servers being forwarded to from HAProxy for the
 sake of load balancing. My question is what load balancing strategy should I
 use?
         I could use hashing(based on key) or consistent hashing. However,
 each file would end up being served by a single server on a particular
 moment. That means I wouldn't have load balancing and fail over for a
 particular key.
        Is there something like a combination of hashing and
 roundrobin strategy? In particular, for a particular key, there would be
 multiple servers serving the requests and HAProxy selects one of them
 according to roundrobin policy. If there isn't such a stretegy, any
 suggestions on how to implement this into HAProxy? Any other comments are
 welcome as well.

 --
 Best Regards,
 Rerngvit Yanggratoke


Hi,

You could create several backends and redirect requests based on an
arbitrary criterion to reduce the number of files per backend. Using
a URL path prefix might be a good idea.
Then inside a backend, you can use the URI hash load-balancing
algorithm (balance uri).

cheers



Re: hashing + roundrobin algorithm

2011-11-22 Thread Baptiste
Hi,

As long as you don't share more details on how your files are accessed
and what makes each URL unique, I can't help.
As I said, splitting your files by directory path or by Host header may be good.

Concerning the example in HAProxy, having the following in your frontend
will do the job:
  acl dir1 path_beg /dir1/
  use_backend bk_dir1 if dir1
  acl dir2 path_beg /dir2/
  use_backend bk_dir2 if dir2
...

then create the backends:
  backend bk_dir1
    balance roundrobin
    server srv1
    server srv2
  backend bk_dir2
    balance roundrobin
    server srv3
    server srv4
...

Hope this helps

On Mon, Nov 21, 2011 at 3:24 PM, Rerngvit Yanggratoke rerng...@kth.se wrote:
 Dear Baptiste,
             Could you please exemplify a criterion that would reduce the
 number of files per backends? And, if possible, how to implement that with
 HAProxy?

 On Sat, Nov 19, 2011 at 8:29 PM, Baptiste bed...@gmail.com wrote:

 On Fri, Nov 18, 2011 at 5:48 PM, Rerngvit Yanggratoke rerng...@kth.se
 wrote:
  Hello All,
          First of all, pardon me if I'm not communicating very well.
  English
  is not my native language. We are running a static file distribution
  cluster. The cluster consists of many web servers serving static files
  over
  HTTP.  We have very large number of files such that a single server
  simply
  can not keep all files (don't have enough disk space). In particular, a
  file
  can be served only from a subset of servers. Each file is uniquely
  identified by a file's URI. I would refer to this URI later as a key.
          I am investigating deploying HAProxy as a front end to this
  cluster.
  We want HAProxy to provide load balancing and automatic fail over. In
  other
  words, a request comes first to HAProxy and HAProxy should forward the
  request to appropriate backend server. More precisely, for a particular
  key,
  there should be at least two servers being forwarded to from HAProxy for
  the
  sake of load balancing. My question is what load
  balancing strategy should I
  use?
          I could use hashing(based on key) or consistent hashing.
  However,
  each file would end up being served by a single server on a particular
  moment. That means I wouldn't have load balancing and fail over for a
  particular key.
         Is there something like a combination of hashing and
  roundrobin strategy? In particular, for a particular key, there would be
  multiple servers serving the requests and HAProxy selects one of them
  according to roundrobin policy. If there isn't such a stretegy, any
  suggestions on how to implement this into HAProxy? Any other comments
  are
  welcome as well.
 
  --
  Best Regards,
  Rerngvit Yanggratoke
 

 Hi,

 You could create several backends and redirect requests based on an
 arbitrary criteria to reduce the number of files per backends. Using
 URL path prefix might be a good idea.
 Then inside a backend, you can use the hash url load-balancing algorithm.

 cheers



 --
 Best Regards,
 Rerngvit Yanggratoke




Re: how http-server-close work?

2011-11-22 Thread Baptiste
Hi,

It will work as you said, provided you have not enabled cookie persistence
(a cookie line in your backend conf).

By default, without the http-server-close option, HAProxy will tunnel
requests and responses.
It will be able to analyze only the first request and take a routing
decision on it; all following requests will then have to follow the same
path.

Using http-server-close, HAProxy is able to take a routing decision for
each request in the connection, so objects can be downloaded from any
server in the farm.

When you enable cookie persistence, HAProxy will bypass the
balance algorithm, since it already knows how to route the request.
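
A rough sketch of the setup described in the question (addresses are
placeholders; no cookie directive, so every request is re-hashed on its URI
even inside a kept-alive client connection):

 backend bk_static
        mode http
        option http-server-close
        balance uri
        hash-type consistent
        server srv1 10.0.0.1:80 check
        server srv2 10.0.0.2:80 check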

cheers


On Tue, Nov 22, 2011 at 6:17 AM, wsq003 wsq...@sina.com wrote:
 Hi,

 In my condition, I set the http-server-close option for client-side
 keepalive.  (You know this will save the time of establish connections)
 My question is will haproxy re-assign backend server for every HTTP request
 in this connection? I also configure 'balance uri' and
 'hash-type consistent'.

 e.g. I hope /a/b.jpg and /c/d.jpg be assigned to different backend server
 based on consistent-hashing, even when they are in a same
 client-side connection.

 Thanks in advance.




Re: http work without backend, why?

2011-11-23 Thread Baptiste
It's normal.

Either you configure a listen proxy or a set of 2 proxies: a frontend
and a backend.

So in your case, the configuration should look like:
 frontend proxy-https *:443
        mode tcp
        default_backend back-https

 backend back-https
        mode tcp
        balance roundrobin
        # option ssl-hello-chk  (not needed here since httpchk on port 80 is used for checks)
        option httpchk GET /test.txt
        http-check expect string OK
        server testweb01 192.168.1.1:443 check port 80 inter 5000 fastinter 1000 downinter 1000 rise 2 fall 2
        server testweb02 192.168.1.2:443 check port 80 inter 5000 fastinter 1000 downinter 1000 rise 2 fall 2


cheers

On Wed, Nov 23, 2011 at 6:43 PM, Ricardo F ri...@hotmail.com wrote:
 Hello,
 I'm trying to configure haproxy with https. I have a problem when I use
 a backend but, when I don't use a backend, it works; it's strange.
 Working conf:
 listen proxy-https *:443
        mode tcp
  #      option ssl-hello-chk
        balance roundrobin
        option httpchk GET /test.txt
        http-check expect string OK
        server testweb01 192.168.1.1:443 check port 80 inter 5000 fastinter
 1000 downinter 1000 rise 2 fall 2
        server testweb02 192.168.1.2:443 check port 80 inter 5000 fastinter
 1000 downinter 1000 rise 2 fall 2

 Not working conf:

 listen proxy-https *:443
        mode tcp
        option ssl-hello-chk
        balance roundrobin
        default_backend back-https
 backend back-https
        option httpchk GET /test.txt
        http-check expect string OK
        server testweb01 192.168.1.1:443 check port 80 inter 5000 fastinter
 1000 downinter 1000 rise 2 fall 2
        server testweb02 192.168.1.2:443 check port 80 inter 5000 fastinter
 1000 downinter 1000 rise 2 fall 2

 Any idea, why?

 Regards!,

 Ricardo F.




Re: Executing Script between Failover

2011-11-24 Thread Baptiste
Hi,

Logging at the backend level allows HAProxy to send syslog messages
reporting server status changes.
You can use them to trigger actions of your own.
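
For example, a rough sketch of such a trigger, assuming the default status
messages of the form "Server <backend>/<server> is DOWN" and a purely
hypothetical monitoring URL:

 tail -F /var/log/haproxy.log | grep --line-buffered "is DOWN" | \
   while read line; do curl -s "http://monitor.example.com/alert" >/dev/null; done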

cheers

On Thu, Nov 24, 2011 at 11:58 AM, wsq003 wsq...@sina.com wrote:

 Another way would be:
 Use crontab to start a script; this script can get the status of the servers by
 `curl http://your.haproxy.com:8080/admin_status;csv`
 Then you can send messages to anywhere you like.


 From: Prasad Wani
 Date: 2011-11-24 19:12
 To: haproxy
 Subject: Executing Script between Failover
 Hi,
 While configuring failover between two machines, does HAProxy have any
 feature to execute a script just after the failover and before the 2nd server
 starts serving requests?
 What I need here: whenever a failover happens I want to call a monitoring URL,
 and it should be called every time a failover happens. The URL sends an
 alert to signal that a failover happened.

 --
 Prasad S. Wani




Re: Deny http connection

2011-11-25 Thread Baptiste
Hi,

You could do that using a stick table and the http_err_rate counter.
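
A rough sketch, assuming a 1.5-dev build with the sc1_* tracking counters
(the threshold and table sizing are arbitrary examples):

 frontend fe_web
        bind :80
        mode http
        stick-table type ip size 200k expire 2m store http_err_rate(10s)
        tcp-request connection track-sc1 src
        acl err_abuser sc1_http_err_rate gt 20
        http-request deny if err_abuser
        default_backend bk_web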

cheers


On Fri, Nov 25, 2011 at 1:50 PM, Sander Klein roe...@roedie.nl wrote:
 Hi,

 I was wondering if it is possible to start rate-limiting or deny a
 connection based on response codes from the backend.

 For instance, I would like to start rejecting or rate limit a HTTP
 connection when a client triggers more than 20 HTTP 500's within a certain
 time frame.

 Is this possible?

 Greets,

 Sander





Re: halog manpage

2011-11-28 Thread Baptiste
Hi Joe,

For halog, the best reference is the source code itself ;)
The variable names are verbose enough to understand what it does.

To be fair, it's on my TODO list to write a blog page explaining what halog
does and how useful it can be.
I'll let the list know as soon as it has been written.

cheers



On Mon, Nov 28, 2011 at 7:00 PM, Joe Williams williams@gmail.com wrote:
 Does a halog man page exist? If not, it would be great if someone who knows
 what all the options are could document all of them. The best reference I
 know of is the following thread, which does not include many of the newer
 filters and etc.
 http://www.mail-archive.com/haproxy@formilux.org/msg02962.html
 Thanks!
 -Joe

 --
 Name: Joseph A. Williams
 Email: williams@gmail.com




Re: cookie domain set based on request

2011-11-29 Thread Baptiste
Hi,

What you want to do is not doable. I mean taking a piece of the Host
header and inserting it into the Set-Cookie header.
How have you currently set up your persistence in HAProxy?
Do you have any application cookie that would stay constant regardless of
the domain browsed, which we could rely on to ensure persistence?

cheers


On Tue, Nov 29, 2011 at 10:20 PM, Allan Wind
allan_w...@lifeintegrity.com wrote:
 I would like to set the cookie domain to the top level domain of
 the request.  This is not currently possible, right?  For example
 if the request is:

 Host: www.tld

 haproxy should behaves as if this configuration was set in the
 haproxy config file:

 cookie ... domain=.tld

 In this case I have 100s of domains on the backend so I cannot
 just list them out.  We redirect to sub-domains and are seeing
 clients spread requests over all backends instead of respecting
 the cookie based persistence.


 /Allan
 --
 Allan Wind
 Life Integrity, LLC
 http://lifeintegrity.com





[bug ???] backend stick-table with conn_cur and http_req_rate

2011-11-29 Thread Baptiste
Hi Willy and the list,

I'm currently running a configuration where I use stick tables.
I set up the stick table on the backend side and I want to follow two
counters in it: conn_cur and http_req_rate.

I used a bash loop to generate 100 requests. Below is print out of the
table content during the test:
# table: bk_http, type: ip, size:8192, used:1
0x716bc0: key=127.0.0.1 use=0 exp=299312 gpc0=1 conn_cur=0
sess_rate(6)=0 http_req_rate(6)=0

If I setup the same table definition on the frontend, it works.

Concerning conn_cur, I could understand that it is not a bug, but a
behavior by design.
From the doc: It is incremented once an incoming connection matches the 
entry.
I guess this is because the incoming connection is managed by the
frontend, that's why I can't get my backend table updated.

Concerning http_req_rate, I can't get any clue, even from the doc.

I'm using HAProxy 1.5-dev7 and I have option http-server-close enabled.

Cheers



Re: Re: hashing + roundrobin algorithm

2011-11-30 Thread Baptiste
Hi,

Ricardo, from Tuenti, posted a very nice presentation on slideshare:
http://www.slideshare.net/ricbartm/load-balancing-at-tuenti

They explain how they did configure HAProxy to do what you're trying
to achieve ;)

cheers


2011/11/30 wsq003 wsq...@sina.com:

 My modification is based on version 1.4.16.

===in struct server add following===
char vgroup_name[100];
struct proxy *vgroup; // if not NULL, means this is a Virtual GROUP

===in function process_chk() add at line 1198===
if (s->vgroup) {
        if ((s->vgroup->lbprm.tot_weight > 0) && !(s->state & SRV_RUNNING)) {
                s->health = s->rise;
                set_server_check_status(s, HCHK_STATUS_L4OK, "vgroup ok");
                set_server_up(s);
        } else if (!(s->vgroup->lbprm.tot_weight > 0) && (s->state & SRV_RUNNING)) {
                s->health = s->rise;
                set_server_check_status(s, HCHK_STATUS_HANA, "vgroup has no available server");
                set_server_down(s);
        }

        if (s->state & SRV_RUNNING) {
                s->health = s->rise + s->fall - 1;
                set_server_check_status(s, HCHK_STATUS_L4OK, "vgroup ok");
        }

        while (tick_is_expired(t->expire, now_ms))
                t->expire = tick_add(t->expire, MS_TO_TICKS(s->inter));
        return t;
}

===in function assign_server() add at line 622===
if (s->srv->vgroup) {
        struct proxy *old = s->be;
        s->be = s->srv->vgroup;
        int ret = assign_server(s);
        s->be = old;
        return ret;
}

===in function cfg_parse_listen() add at line 3949===
else if (!defsrv && !strcmp(args[cur_arg], "vgroup")) {
        if (!args[cur_arg + 1]) {
                Alert("parsing [%s:%d] : '%s' : missing virtual_group name.\n",
                      file, linenum, newsrv->id);
                err_code |= ERR_ALERT | ERR_FATAL;
                goto out;
        }
        if (newsrv->addr.sin_addr.s_addr) {
                // for easy indication
                Alert("parsing [%s:%d] : '%s' : virtual_group requires the server address as 0.0.0.0\n",
                      file, linenum, newsrv->id);
                err_code |= ERR_ALERT | ERR_FATAL;
                goto out;
        }
        newsrv->check_port = 1;
        strlcpy2(newsrv->vgroup_name, args[cur_arg + 1], sizeof(newsrv->vgroup_name));
        cur_arg += 2;
}

===in function check_config_validity() add at line 5680===
/*
 * set vgroup if necessary
 */
newsrv = curproxy->srv;
while (newsrv != NULL) {
        if (newsrv->vgroup_name[0] != '\0') {
                struct proxy *px = findproxy(newsrv->vgroup_name, PR_CAP_BE);
                if (px == NULL) {
                        Alert("[%s][%s] : vgroup '%s' does not exist.\n", curproxy->id, newsrv->id, newsrv->vgroup_name);
                        err_code |= ERR_ALERT | ERR_FATAL;
                        break;
                }
                newsrv->vgroup = px;
        }
        newsrv = newsrv->next;
}

 ==

and some minor changes in function stats_dump_proxy() that are not important.

 ==

 sample config file looks like:

 backend internallighttpd
 option httpchk /monitor/ok.htm
 server wsqa 0.0.0.0 vgroup subproxy1 weight 32 check inter 4000 rise 3 fall 3
 server wsqb 0.0.0.0 vgroup subproxy2 weight 32 check inter 4000 rise 3 fall 3
 balance uri
 hash-type consistent
 option redispatch
 retries 3

 backend subproxy1
 option httpchk /monitor/ok.htm
 server wsq01 1.1.1.1:8001 weight 32 check inter 4000 rise 3 fall 3
 server wsq02 1.1.1.2:8001 weight 32 check inter 4000 rise 3 fall 3
 balance roundrobin
 option redispatch
 retries 3

 backend subproxy2
 option httpchk /monitor/ok.htm
 server wsq03 1.1.1.1:8002 weight 32 check inter 4000 rise 3 fall 3
 server wsq04 1.1.1.2:8002 weight 32 check inter 4000 rise 3 fall 3
 balance roundrobin
 option redispatch
 retries 3

 ==

 Sorry I can't provide a clean patch, because vgroup is just one of several
 changes.
 I did not consider the rewrite rules at that time. Maybe we can add
 a function call before calling assign_server()?


 From: Willy Tarreau
 Date: 2011-11-30 01:47
 To: wsq003
 CC: Rerngvit Yanggratoke; haproxy; Baptiste
 Subject: Re: Re: hashing + roundrobin algorithm
 On Tue, Nov 29, 2011 at 02:56:49PM +0800, wsq003 wrote:

 Backend proxies may be multiple layers, then every layer can have its own LB param.
 Logically this is a tree-like structure, every real server is a leaf. Every none-leaf node is a backend proxy and may have LB param.

 I clearly understand what it looks like from the outside. It's still not very
 clear how you *concretely* implemented it. Maybe you basically did what I've
 been planning for a long time (the internal server) and then your code could
 save us some time.

 A feature I found important there was to be able to apply backend rewrite rules
 again when
