load balancing algorithm

2009-07-22 Thread Johan Duflost
Hello,

I'd like to know if it's possible to load balance a SOAP web service based on
the operation called by the client.
We need this because we have a web service with several operations and only one
of them is very CPU- and time-consuming.
We would like to forward the requests for this heavy operation to robust
dedicated servers and let the other servers handle all the other requests.
We can't split this web service into two web services (one for the heavy
operation and another one for all the other operations) because all these
operations are supposed to be called within a session.
Does anybody know if it's possible? Thanks.

Cheers,

Johan


Re: Still dropping TS sessions.

2009-07-22 Thread Guillaume Bourque

Here is my working config,

Any recommendations are more than welcome.

Version running on Ubuntu Server:

sudo dpkg -l | grep haproxy
ii  haproxy  1.3.14.3-1  fast and reliable load balancing reverse pro



I know it's old but it works!


global

  log 127.0.0.1 local0 debug
  stats socket /var/run/haproxy-socket-stats
  maxconn 4096
  user haproxy
  group haproxy

defaults
  log global
  option tcplog
  retries 3
  maxconn 2000
  contimeout  5000
  clitimeout  1440
  srvtimeout  1440

listen stats :8080
  mode http
  option httpclose
  stats enable
  stats uri /
  balance source
  server web-1 192.168.4.30:80

listen rdpfarm :3389
  mode tcp
  balance source
  option tcpka
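  # Added note (comment not in the original mail): each TS host is listed
  # twice on purpose.  The primary entry health-checks the "maintenance"
  # port 3300, the backup entry checks the real RDP port 3389, so closing
  # port 3300 on a host stops new connections from reaching it while
  # existing sessions keep running, as explained below.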

  server TS-1 10.0.0.102 check port 3300
  server TS-1 10.0.0.102 check port 3389 backup
  server TS-2 10.0.0.152 check port 3300
  server TS-2 10.0.0.152 check port 3389 backup


I use this setup to phase TS servers in and out without too much user 
interruption.  When the TS servers are online they listen on port 3389 and 
port 3300; when we want to put a server into maintenance mode we just close 
port 3300.  This is really nice since currently open RDP sessions will 
continue to work but new connections go to another TS server.  Then you just 
inform your users to finish their work and reopen a TS session, and they 
will end up on another server.



As an admin you can connect directly to the IP of the server in maintenance 
mode and upgrade any software on it while no users are on the server with you.


We love this way of working.


Voila

Feel free to comment.

Guillaume



Paul Dickson wrote:


Has anyone had any luck in setting HAPROXY up as a front end for 
terminal services clusters?  My connections keep dropping, but have 
become a bit more reliable since my last email on the topic with the 
following conf file:


# this config needs haproxy-1.1.28 or haproxy-1.2.1

global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
#log loghost local0 info
maxconn 4096
#chroot /usr/share/haproxy
user haproxy
group haproxy
daemon
#debug
#quiet

defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 2000
# Time to wait for the opening connection to a server to succeed.
# 5000ms = 5sec
contimeout 5000

# Time to wait for a client to respond to packets.  Set below to
# 50000ms = 50sec
timeout client 50000

# Time to wait for a server to respond to packets.  Set below to
# 50000ms = 50sec
timeout server 50000
option srvtcpka



listen rdp 0.0.0.0:3389
mode tcp
# All three tcpka options:  TCP protocol, keep alive.  All of them are
# suggested for sessions with long amounts of idle time such as remote
# desktops.

   # option tcpka
option clitcpka
option srvtcpka
option redispatch
option tcplog
# "balance" specifies the load balancing method.  Search
# "http://haproxy.1wt.eu/download/1.3/doc/configuration.txt" for
# "balance roundrobin" to see all the available modes and what they do.

balance roundrobin
#
# NOTES ABOUT STATEMENTS AND PARAMETERS BELOW IN MATCHING ORDER.
#
# server is a haproxy internal statement
# `server name` can be listed as anything.. I put the real name for clarity
# IP:port -- if you don't know this you need to wipe the drool off your chin.

# check sees if the server is up
# port # is what port to check.  I'm not sure this is needed since the
# port is already specified with the IP.  Can't hurt.

# inter # is the interval to run the check, in ms.  1000ms = 1sec
# fastinter #.  By default a server is checked 3 times, then determined
# to have failed.  This specifies that if it fails the first check, the
# next will happen at the interval specified.  500ms = 0.5sec.
# downinter #.  Opposite of fastinter, this specifies how long the waits
# should be between checks when a server has been determined to be down.
# To reduce network traffic I have set this to 10000ms, which is
# 10 seconds.


#server nt1s77   10.58.240.248:3389 check port 3389 inter 2000 fastinter 500 downinter 10000
#server nt1s21z  10.12.20.172:3389  check port 3389 inter 2000 fastinter 500 downinter 10000
#server dcwh03   10.12.20.150:3389  check port 3389 inter 2000 fastinter 500 downinter 10000
#server nt1s23vm 10.12.20.116:3389  check port 3389 inter 2000 fastinter 500 downinter 10000
#server dceoc01  10.2.128.250:3389  check port 3389 inter 2000 fastinter 500 downinter 10000
server tswh01    10.14.3.111:3389   check port 3389 inter 2000 fastinter 500 downinter 10000
server tswh02    10.14.3.102:3389   check port 3389 inter 2000 fastinter 500 downinter 10000
server tswh03    10.14.3.113:3389   check port 3389 inter 2000 fastinter 500 downinter 10000
server tswh04

Re: load balancing algorithm

2009-07-22 Thread Pedro Mata-Mouros Fonseca
Yup, you can pretty much match any HTTP header, AFAIK. Just a matter  
of finding the appropriate regexp.
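
Something along these lines might work (rough, untested sketch; the
"HeavyOperation" pattern, the backend names and the server addresses below
are just placeholders to adapt):

frontend soap_in
  bind :80
  mode http
  # send requests whose SOAPAction header matches the expensive operation
  # to the big machines, everything else to the normal farm
  acl req_heavy_method hdr_reg(SOAPAction) HeavyOperation
  use_backend robust_servers if req_heavy_method
  default_backend normal_servers

backend robust_servers
  mode http
  balance roundrobin
  server heavy1 10.0.0.10:8080 check

backend normal_servers
  mode http
  balance roundrobin
  server light1 10.0.0.20:8080 check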

Cheers.

Pedro.

On Jul 22, 2009, at 2:29 PM, Johan Duflost wrote:


Thanks for your answer.
The URI will be the same for all the operations.
Could I perhaps use the HTTP header SOAPAction instead? What do you
think?


Cheers,

Johan


- Original Message -
From: Pedro Mata-Mouros Fonseca
To: haproxy@formilux.org
Sent: Wednesday, July 22, 2009 12:19 PM
Subject: Re: load balancing algorithm

I guess you could analyse the URI, and if it's that specific method  
that's being called you could drop that request onto your robust  
servers backend. Search for rsprep in the documentation. Something  
like:


acl req_heavy_method path_reg <regexp to match your webservice method in the URI>

use_backend robust_servers if req_heavy_method

Pedro.

On Jul 22, 2009, at 11:04 AM, Johan Duflost wrote:


Hello,

I'd like to know if it's possible to load balance a soap web  
service based on the operation called by the client.
We need this because we have a webservice with several operations  
and only one of them is very cpu and time consuming.
We would like to forward the requests for this heavy operation on  
robust dedicated servers and let the other servers handle all the  
other requests.
We can't split this webservice into two web services (one for the  
heavy operation and another one for all  the other operations)  
because all these operations are supposed to be called within a  
session.

Does anybody know if it's possible? Thanks.

Cheers,

Johan










Re: Capture and alter a 404 from an internal server

2009-07-22 Thread Willy Tarreau
On Mon, Jul 20, 2009 at 10:11:16AM +0100, Pedro Mata-Mouros Fonseca wrote:
 Thank you so much Maciej, I will give it a try - although in that  
 referenced email it seems like a scary thing to do... A hard thing to  
 evaluate is the cost of having such rspirep processing in every  
 response coming from that specific frontend... Is it too much of a
 performance hit?

If you're running 10000 requests/s you should be careful not to add too
many such statements, but at lower speeds, you will almost not notice
the extra CPU usage, particularly if you've built with the PCRE library,
which is extremely fast. I have seen large configurations where people
use between 100 and 200 regexes per request and it does not appear to
affect them that much.

 Wouldn't this just be a perfect candidate for having its own
 directive, along the lines of errorfile and errorloc, but specifically
 for errors returned by servers instead of by HAProxy itself? ;-)
 Something like:
 
 errorserverfile 404 /etc/haproxy/errorfiles/404generic.http
 errorserverloc 404 http://127.0.0.1:8080/404generic.html

It might be, but I don't really know if we need the errorserverfile
or not. Because if we only need the errorserverloc above, then you
will be able to do it using ACLs when they're usable on the response
path.

Regards,
Willy




Re: Transparent proxy of SSL traffic using Pound to HAProxy backend patch and howto

2009-07-22 Thread Willy Tarreau
On Mon, Jul 20, 2009 at 03:23:22PM +0100, Malcolm Turnbull wrote:
 Many thanks to Ivansceó Krisztián for working on the TPROXY patch for
 Pound for us, we can finally do SSL termination - HAProxy - backend
 with TPROXY.
 
 http://blog.loadbalancer.org/transparent-proxy-of-ssl-traffic-using-pound-to-haproxy-backend-patch-and-howto/
 
 Patches to Pound are here:
 http://www.loadbalancer.org/download/PoundSSL-Tproxy/poundtp-2.4.5.tgz
 
 Willy,
 
 You mentioned that it may be more sensible to do something like:
 
 source 0.0.0.0 usesrc hdr(x-forwarded-for)
 
 rather than having 2 sets of TPROXY set up.. but I don't think this is
 possible yet?

Unfortunately not yet. I've had to arbitrate between that and the ability
to perform content-switching on TCP frontends, and the priority went to
the latter.

Another issue you might run into is the reduced number of source ports for
the same source IP, because now you have the client, pound, and haproxy
all using the same source IP, so you need to be careful that the client
never hits haproxy directly on the same port as pound, otherwise it may
use the same source port as pound and conflict with an existing session.
A trick might consist in using distinct ports on haproxy for direct
client connections and pound connections.
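
For example (untested sketch; the address, ports, backend name and server
are made up), one frontend could be dedicated to pound and another one to
direct clients, so the two kinds of traffic never share the same haproxy
port:

frontend from_pound
  bind 192.168.0.10:8081   # only pound connects here
  mode http
  default_backend web_servers

frontend from_clients
  bind 192.168.0.10:80     # direct client traffic
  mode http
  default_backend web_servers

backend web_servers
  balance roundrobin
  server web1 10.0.0.21:80 check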

Regards,
Willy




Re: Still dropping TS sessions.

2009-07-22 Thread Willy Tarreau
Hi guys,

On Wed, Jul 22, 2009 at 08:52:05AM -0400, Guillaume Bourque wrote:
 Hi Paul
 
 I just returned from vacation so I didn't see your previous post, but one
 thing is for sure: haproxy CAN be used to dispatch RDP sessions. I have
 been doing this on a couple of sites with ~80 users and 4 TS servers
 without any issue at all in the last year.

 I have looked at your config and don't see what could be the problem.

I definitely see a problem. Timeouts are too short for RDP (50 seconds). So
after that time, if the client does nothing (e.g. talks on the phone), his
session expires. From what I've heard, people tend to set session timeouts
between 8 and 24 hours on RDP.

BTW, you might be very interested: Exceliance has developed and contributed
RDP persistence! This is in the development branch. Check the latest snapshot
here:

   http://haproxy.1wt.eu/git/?p=haproxy.git

Basically, you just have to add the following statement in your backend:

   persist rdp-cookie

And when a session comes in, haproxy will analyse the RDP packet and
look for an RDP cookie. If it has a matching server, it directs
the connection to that server, otherwise it does load balancing. And
we also have "balance rdp-cookie", which is used to balance on the
"msts" RDP cookie specifying the user name (when it is available).
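
A backend using it could look roughly like this (untested sketch; the server
names and addresses are simply reused from Guillaume's config above, and the
tcp-request lines, which give haproxy time to inspect the cookie before
choosing a server, come from later documentation and may need adjusting for
that snapshot):

listen rdpfarm :3389
  mode tcp
  tcp-request inspect-delay 5s
  tcp-request content accept if RDP_COOKIE
  persist rdp-cookie
  balance rdp-cookie
  server TS-1 10.0.0.102:3389 check
  server TS-2 10.0.0.152:3389 check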

Regards,
Willy




Re: queing problems

2009-07-22 Thread Willy Tarreau
Hi Fabian,

On Mon, Jul 20, 2009 at 06:11:45PM +0200, Fabian wrote:
 Hi List,
 
 I'm trying to set up a simple tcp load balancing:
 
 The backend servers can only handle one request at a time and the 
 requests take between 2-15 seconds to process.
 I want haproxy to distribute the tcp requests to any free backend server 
 currently not processing a request (no active connection).
 If all backends are currently active I want to queue the pending
 requests globally, and as soon as a backend becomes free the oldest
 request in the queue should be redirected to the free backend server.

 Unfortunately I can't get the queueing to work. When there are pending
 connections and a backend server becomes idle, it takes a long time
 before a pending connection is handed to the server. Sometimes the
 connection even gets a timeout despite the fact that a backend server
 has already been idle for 20 seconds.

Your configuration is right. I think that your problem is simply
that when you have too many incoming requests, the time to process
them all one at a time is too long for the last one to be served.

When unspecified, the queue timeout is equivalent to the connect
timeout (50s here). So I would suggest that you lower your connect
timeout to 5s and set "timeout queue 2m", for instance. Also, I
suggest that you enable a stats page to monitor the activity in
real time. It's really useful to check queueing. You just have
to add:

listen <name> <address>:<port>
mode http
stats uri /

and you connect to this port with your browser. You'll see the
backend queue size, and the max it reaches.
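
As an illustration (untested sketch; the names, addresses and ports are made
up), the backend side of such a setup could look like:

listen workers :9000
  mode tcp
  balance roundrobin
  timeout connect 5s
  timeout queue 2m
  # maxconn 1 on each server makes haproxy hold everything beyond one
  # in-flight request per server in the queue
  server worker1 10.0.0.11:9000 check maxconn 1
  server worker2 10.0.0.12:9000 check maxconn 1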

Regards,
Willy




Re: make on os x

2009-07-22 Thread Willy Tarreau
Hi,

On Thu, Jun 11, 2009 at 09:51:00AM +0200, Rapsey wrote:
 Sorry error in -vv output, TARGET = darwin
 
 Sergej
 
 On Thu, Jun 11, 2009 at 9:46 AM, Rapsey rap...@gmail.com wrote:
 
  I'm trying to build haproxy with kqueue on OS X Leopard, but I don't think
  it's working. There is no mention of -DENABLE_KQUEUE anywhere when it's
  building it.
  This is the make I use:
  make Makefile.osx TARGET=darwin CPU=i686 USE_PCRE=1  all

OK, you're not using the proper syntax, you need to use:

  make -f Makefile.osx TARGET=darwin CPU=i686 USE_PCRE=1  all

Otherwise you tell make to build Makefile.osx, which already
exists so no error is reported.
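
By the way, once it builds, an easy sanity check (not from the original mail,
just a habit) is to run the resulting binary with -vv and verify that kqueue
shows up in the list of available pollers:

  ./haproxy -vv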

Also, please don't use 1.3.17 as it has a nasty bug which
can be triggered on 64-bit systems.

Willy