Re: Redirect HTTP to HTTPS and HTTPS to HTTP

2011-01-16 Thread XANi
Hi,

In that config the easiest way is to check the source IP of the incoming
connection: if it comes from 127.0.0.1 (or whatever your stunnel server
address is) it's HTTPS, otherwise it's HTTP. Something like
acl is_ssl src 127.0.0.1, and then use that ACL in the redirect.

I did a similar thing, but with lighttpd as the frontend, configured
to add a header Ssl: Yes (and to remove any incoming headers named
Ssl, just in case ;)).
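
A minimal haproxy sketch of the source-IP approach described above (the
listen-section name, ports and host names are illustrative, not taken from
the original config):

```
listen www
    bind *:80
    bind *:8443                      # stunnel delivers decrypted HTTPS here
    acl is_ssl src 127.0.0.1         # connections coming from the local stunnel
    acl is_admin hdr_beg(host) server.domain.com
    # force HTTPS for the admin site only; plain HTTP stays as-is elsewhere
    redirect prefix https://server.domain.com if is_admin !is_ssl
    server srv 127.0.0.1:8080 maxconn 256
```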

On Sun, 2011-01-16 at 19:21 +0100, Henri Storn wrote:
 Hello,
 
 I have a server hosting multiple Web sites. I use HAProxy, Stunnel and 
 HTTPD :
 
 HTTP -> HAProxy (80) -> HTTPD (8080)
 HTTPS -> Stunnel (443) -> HAProxy (8443) -> HTTPD (8080)
 
 I want a single Web site to be accessible via HTTPS. The others are only 
 accessible by HTTP. I want to do the following redirects :
 - http://server.domain.com/ -> https://server.domain.com/ [OK]
 - https://other.domain.com/ -> http://other.domain.com/ [PROBLEM]
 
 I cannot create the ACL. Can you help me?
 
 listen http
  bind *:80
  acl url_admin hdr_beg server.domain.com
  redirect prefix https://server.domain.com if url_admin
  server srv 127.0.0.1:8080 maxconn 256
 
 listen https
  bind 192.168.0.100:8443
  acl url_admin hdr_beg server.domain.com
  redirect prefix http://X unless url_admin
  option forwardfor except 192.168.0.100
  server srv 127.0.0.1:8080 maxconn 256
 
 
 Thanks,
 
 Regards.
 


-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl




Re: Haproxy+Nginx SSL Insecurities

2010-07-03 Thread XANi
I've done something similar (I don't remember the config details now, sorry).
Basically lighttpd was used as the frontend for both HTTP and HTTPS traffic
(I used it for compression too). It:
1. Removed any incoming header called SSL
2. Added SSL: Yes
So even if someone sends evil headers, they get removed.

Alternatively, you can probably make a rule in haproxy to add an SSL: Yes
header if the request comes from localhost and remove it if it doesn't.
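
A hedged haproxy sketch of that header trick, using the pre-1.5 header
directives (the frontend and backend names and the bind addresses are made
up for illustration):

```
frontend https_in
    bind 127.0.0.1:8443        # only the local SSL terminator connects here
    reqidel ^SSL:              # drop any client-supplied SSL header
    reqadd SSL:\ Yes           # mark the request as having arrived over SSL
    default_backend app

frontend http_in
    bind *:80
    reqidel ^SSL:              # plain HTTP must never carry the header
    default_backend app
```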
On Sat, 2010-07-03 at 11:23 -0400, John T Skarbek wrote:

 Chris,
 
 
 
 Thanks for responding.  I had thought of the option you mention.
  However, I dismissed it quickly.  The reason I'm not a big fan is
 that those header values can be spoofed quite easily.  Granted, the end
 user (hacker) may not know the specific value the header must hold.  There
 are even browser plugins that help end users view headers and
 modify them any way they choose. 
 
 John T. Skarbek
 B.S.Computer Science Networking
 Radford University
 
 
 
 On Sat, Jul 3, 2010 at 9:59 AM, Chris Sarginson ch...@sargy.co.uk
 wrote:
 
 
 
 
 On 3 Jul 2010, at 14:51, John T Skarbek wrote:
 
 
 
  Good Morning,
  
  
  
  I'm testing out a solution to use nginx for ssl decryption
  to pass off requests to haproxy.  During the thought process
  of everything, and later during testing, I noticed that all
  I'd need to do in the clients web browser is to simply take
  out the 's' on 'https' and all traffic will flow unencrypted
  just dandily.  I really don't want that to happen.  So I
  thought of a couple of ideas:
* I was thinking of a solution to simply deny port 80
  traffic from the outside world, but then I do have a
  couple of pages which do not require ssl.  Users
  that don't put the 'https' in the address bar by
  default will sit at a blank page and I don't want to
  have to manage the firewall when creating sites.  
* I was then thinking of having nginx watching that
  port on specific sites for unencrypted traffic, but
  then I'm mixing services and that isn't the greatest
  when planning for future sites and simply seems
  convoluted to me.
 * My last thought was to have haproxy use some sort of acl
   to listen to where requests come from.  If anything comes
   from the outside world, redirect it to a web page
   that forces ssl.  Doing this would require me to
   have another entry to listen for the source being
   itself, as decrypted communications from nginx would
   then possibly be sent to the redirect page also.  
  Does anyone have any thoughts or a
  better recommended solution?  
  
  John T. Skarbek
  B.S.Computer Science Networking
  Radford University
 
 
 
 John,
 
 
 I use Nginx to insert a header (X-Forwarded-Proto: https), and
 just check that the header exists with haproxy.  If it
 doesn't, use the redirect prefix option in haproxy to force
 SSL.
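
A hedged sketch of the haproxy side of that check (the redirect target and
backend name are placeholders; the Nginx side would set
X-Forwarded-Proto: https on the decrypted traffic):

```
frontend http_in
    bind *:80
    acl is_https hdr(X-Forwarded-Proto) -i https
    # anything Nginx did not mark as decrypted SSL gets bounced to HTTPS
    redirect prefix https://www.example.com unless is_https
    default_backend app
```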
 
 
 Hope this helps
 
 
 Chris
 
 
 




Re: A (Hopefully Not too Generic) Question About HAProxy

2010-05-17 Thread XANi
On Mon, 2010-05-17 at 14:45 -0700, Chih Yin wrote:
 Hi,
 
 
   Please excuse me if the information contained in this email is a bit
 generic.  I'm not the regular administrator, but I've been given the
 task of troubleshooting some issues with my website.  If more details
 are needed, I'll gladly go look for additional information.
 
 
   Currently, my website is experiencing a lot of errors and slowness.
  The server errors I see in the HAProxy log file are mainly 503
 errors.  Page loads for my website can take as long as several
 minutes.  I'm trying to determine if anyone has had any similar
 issues with using HAProxy as a high availability load balancer.
 
 
   HAProxy 1.3.21
   CentOS running on Citrix XenServer
   HP blades
 
 
   There are actually almost 100 virtual servers running on the
 blades.  A good many of the virtual servers are application servers
 running Glassfish.  There are a few servers dedicated to CAS for
 authentication and access.  I have three servers running Rsyslogd for
 writing HAProxy log data to file.  A NetApp filer is used for storage.
 
 
   Currently, the website gets about:
 
 
 73,000 pageviews a day
 32,000 unique visitors a day
 46,000 visits a day
 
 
 3,000 pageviews an hour
 1,300 unique visitors an hour
 1,000 visits an hour
 
 
   I am using Akamai to help manage content delivery.
 
 
   One of the things Akamai is reporting to me is that they are having
 difficulty requesting content that needs to be refreshed.  Akamai
 tries up to 4 times to get the content with a 2 second timeout to
 update content whose TTL has expired.  After the 4th time, Akamai
 looks to their own cache before returning a 503 error to the user if
 the content is not available in the cache.
 
 
   Recently, I've noticed that Akamai is encountering an increasingly
 large number of 503 and 404 errors from my website.  I've traced the
 404 errors to missing images, but I'm not sure what the cause of the
 503 errors could be.  I had some external resources help me verify
 that they are able to retrieve the content from the Glassfish
 application servers even when HAProxy is reporting the 503 errors.
 
 
   One thing I did notice about the HAProxy configuration is that there
 are actually three servers running HAProxy with identical
 configurations.  One serves as the primary high availability load
 balancer while the other two act as failovers.  The keep-alive daemons
 are configured to accommodate that setup.
 
 
   From this generic description, is there something in the way this
 architecture is set up or in the configuration of HAProxy that may be
 causing the 503 errors to be reported to Akamai?  As I mentioned, when
 an external resource makes a request for the same content directly
 from the application server, the same errors do not appear to occur.
 

A 503 would (usually) mean haproxy sees the backends as DOWN; look for
messages about servers going up/down in the haproxy logs.
Or, if that's not the case, grep the haproxy logs for those 503 errors
(make sure you're using HTTP log mode), then go to section 8 in
http://haproxy.1wt.eu/download/1.3/doc/configuration.txt and try to
determine the exact reason for the error, and/or post a few examples here
along with your haproxy config (with sensitive information removed, of
course ;) ).

Having some kind of monitoring, or at least the stats page active, is also
very helpful.
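
For reference, the stats page mentioned above can be enabled with a few
lines like these (the port, URI and credentials are placeholders):

```
listen stats
    bind *:8081
    mode http
    stats enable
    stats uri /stats
    stats auth admin:changeme
```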



Re: Rollover Backups

2010-05-13 Thread XANi
Hmm, turn off autostart of your master server so it cannot accidentally
come back up? You have to resync it manually anyway.
On Thu, 2010-05-13 at 12:36 +0100, Laurie Young wrote:
 Thanks for the response chris
  
 * if the main server goes up - requests continue going
 to the backup
 (until manual intervention)
 
  * This is done by specifying a cookie on your backup server -
 the requests will then continue to be handled by that server
 until the session expires (or the server goes down)
 
 
 
 Unfortunately the servers here are not web servers, so setting a cookie
 is not an option. I also need new requests to go to the backup - not
 just sessions already in play. 
 
 
 It's an instance of Redis, (a key value store). The main server would
 be the master, with the backup being the slave. If the master goes
 down and write operations get sent to the slave, we cannot send ANY
 requests to the master till we have been able to re-sync them (which
 is a manual operation)
  
 
 -- 
 Dr Laurie Young
 Scrum Master
 New Bamboo
 
 Follow me on twitter: @wildfalcon
 Follow us on twitter:  @newbamboo
 
 Creating fresh, flexible and fast-growing web applications is our
 passion.
 
 
 3rd Floor, Gensurco House, 
 46A Rosebery Avenue, London, EC1R 4RP
 
 http://www.new-bamboo.co.uk





Re: Haproxy / stunnel performance

2010-05-12 Thread XANi
On Wed, 2010-05-12 at 17:15 +0200, Michael Rennt wrote:

 Hello!
 
 This might be a bit off-topic (but just a little bit), as my question is 
 related to the performance
 of stunnel when used with haproxy.
 
 First of all: Is haproxy + stunnel the most common technique for terminating 
 ssl with haproxy? Is
 there a solution that's more common or even uncommon but performing better on 
 a 99% ssl traffic
 loadbalancer?
 
 We are currently terminating ssl via stunnel (4.27, ulimit -n 5), handing 
 the decrypted traffic
 over to haproxy 1.3.23 via 127.0.0.1. Haproxy is proxying the request to 2 
 other systems.
 
 The loadbalancer is an Intel Xeon Dual Core E3110 with 4 GB RAM, so plenty of 
 resources for a system 
 doing nothing else besides ssl termination / load balancing.
 
 We are experiencing a limit of about 100 requests per second on the ssl path. 
 Unencrypted direct
 connections to haproxy perform much better, of course, so I'm pretty sure 
 haproxy is not a bottleneck.
 
 Basically I'm interested in getting feedback on how other people implement 
 ssl termination on a
 haproxy system and if you're reaching a request rate higher than 100 req/s? 
 This is why I didn't
 supply any configuration settings in this mail.
 
 The stunnel config is very basic. We played around with the timeout values 
 and ulimit values a bit,
 without any noticeable performance boost while the system was loaded.
 
 The system load idles at around 0.11 most of the time.
 
 Thanks in advance.
 
 Best,
 
 Michael
 

I'm not familiar with stunnel; can it use more than one core?
If not, you might try a light HTTP server like lighttpd or nginx
as the SSL proxy.


Re: Hot reconfig briefly stops listening?

2010-04-11 Thread XANi
On Sun, 2010-04-11 at 20:35 +0200, Lincoln Stoll wrote:

 Hi, I'm testing haproxy 1.3.18 on ubuntu 9.10, and I'm having an issue
 with hot reconfiguration - it seems to refuse new connections for a
 very short period of time (around 50ms it seems).
 
 
 
 The command I'm using is /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
 -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) , and the
 haproxy config can be seen at http://gist.github.com/362959 . I've
 also tried building 1.4.3 from source, and I'm seeing the same issue
 there. I'm using ab to generate the load, and the load is both direct
 and via nginx running as a proxy - it shows up in both places.
 
 
 Is this expected behaviour, or am I doing something wrong / is
 something wrong?
Yes, it is expected. Basically that command tells haproxy to:
1) signal the already-running haproxy instance to stop listening for new
connections, but let already-established ones finish
2) start listening
3) if something is wrong (bad config etc.), signal the old instance to
resume listening

So it's more like stopping the old instance and starting a new one, except
that existing connections won't be destroyed.




Re: Throughput degradation after upgrading haproxy from 1.3.22 to 1.4.1

2010-03-18 Thread XANi


 So now when I got a working haproxy 1.4, I continued to try out the
 option http-server-close but I hit a problem with our stunnel
 (patched with stunnel-4.22-xforwarded-for.diff) instances. It does not
 support keep-alive, so only the first HTTP request in a
 keepalive-session gets the X-Forwarded-For header added (insert Homer
 doh! here :). When giving it some thought, I guess this is the
 expected behaviour for what stunnel actually is supposed to do. So,
 for now I'll stick with option httpclose for a while longer...

Maybe try using a light web server like Nginx or Lighttpd as the SSL
proxy instead?



Re: setup with Oracle and SSL

2010-03-13 Thread XANi
Hi
On Sat, 2010-03-13 at 13:34 -0500, Anne Moore wrote:
 Greetings to all, 
  
 I'm new to this group, but have really been working hard on getting
 haproxy working for Oracle Application HTTP server over SSL.
  
 I've looked through the website, but can't seem to find anything that
 shows how to setup SSL on the haproxy. I also can't find anything on
 how to setup haproxy with Oracle Application HTTP server. 
  
 Would someone on this list have that knowledge, and be willing to
 share?
  
 Thank you!
  
 Anne
That's because haproxy doesn't support SSL in HTTP mode; if you want HTTPS
you need to set up an SSL proxy in front of it, for example Lighttpd,
so that it works like this:
Lighttpd (https:443) -> HAProxy (http:80) -> your_backend_servers

The only thing to watch out for is logging the client IP. Basically you
have to add to the config:
option forwardfor except 127.0.0.1
where 127.0.0.1 is your SSL proxy address.
The proxy will then pass the original client IP through the
X-Forwarded-For header.

The except 127.0.0.1 part is there because lighttpd adds X-Forwarded-For
when used as a proxy, so haproxy doesn't have to (obviously, replace it
with another IP if your SSL proxy is on a different host).
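
A minimal haproxy sketch of the HTTP side of that chain (backend names and
addresses are illustrative):

```
listen web
    bind *:80
    mode http
    option forwardfor except 127.0.0.1   # lighttpd already sets X-Forwarded-For
    balance roundrobin
    server app1 10.0.0.10:8080 check
    server app2 10.0.0.11:8080 check
```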

Regards
XANi



RE: setup with Oracle and SSL

2010-03-13 Thread XANi
You *can* set up haproxy to act as a TCP proxy which passes requests to
backend HTTPS servers, but then you can't use any of the advanced
functions of the load balancer (you can't, for example, forward requests
to different backends based on the path, or use cookie-based persistence).
Can't you just turn off SSL (or connect to the non-SSL port) on the
application server, and do SSL before haproxy? I mean, instead of

[client](HTTPS) -> (HTTPS)[haproxy in tcp mode](HTTPS) ->
[appserver](HTTPS)

do

[client](HTTPS) -> (HTTPS)[stunnel or lighttpd](HTTP) -> (HTTP)[haproxy
in http mode](HTTP) -> [appserver](HTTP)
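
For comparison, the TCP pass-through variant might look like the sketch
below (addresses and names are illustrative). Note that in this mode
haproxy cannot see or modify anything inside the TLS stream:

```
listen https_passthrough
    bind *:443
    mode tcp
    balance roundrobin
    server app1 10.0.0.1:443 check
    server app2 10.0.0.2:443 check
```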


On Sat, 2010-03-13 at 17:27 -0500, Anne Moore wrote:
 Very interesting. Thank you for the reply. It's very disappointing that
 haproxy doesn't support SSL. 
  
 However, what if my haproxy was HTTP, and it forwarded requests to
 my two backend HTTPS (SSL) URL servers? 
  
 Would this scenario work fine with haproxy?
  
 Thank you
  
 Anne
 
 
 
 





Re: killing persisent conections on backends marked down?

2010-02-25 Thread XANi
On Thu, 2010-02-25 at 16:27 -0500, Greg Gard wrote:

 hi willy and friends,
 
 i am working on a set of ruby scripts to do database failover and
 stonith. so far all is working pretty well, but i have a few issues:
 
 1) rails makes persistent connections to the backend database so when
 a server is marked down, the connection remains ongoing. currently, i
 deal with this by issuing a stonith command in my ruby driver
 script for haproxy that shuts the backend down explicitly via ssh, but
 it would be nice if i could rely on haproxy to kill the connection
 explicitly. is there a setting to make haproxy kill existing
 connections on a backend going down?
 
 2) for rails i have tcp timeout set to 0 so it seems to be handling
 the persistent connections ok, but when i do a reload using the
 haproxy init script in the debian packages, i end up with two haproxy
 backends as the persistent connections aren't killed. essentially the
 original process is waiting for the connections to end before it kills
 itself, but that will never happen with rails db connection. any ideas
 or suggestions?
 
 ps: having rails not use persistent connections is not really what i
 would like to do right now. i have run that in the past on production
 and had weird timeout problems and choppy connectivity.
 
 thanks...gg
 


1) If I remember correctly, client/server timeouts only trigger when
there is no activity (no data sent), so setting the client and server
timeouts to something like 5 minutes could solve the problem: as long as
the app keeps running queries, the connection won't be dropped.
2) You can do /etc/init.d/haproxy stop ; /etc/init.d/haproxy start
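
The timeout idea from point 1 can be sketched like this (the section name,
addresses and 5-minute values are illustrative, not from the poster's
config):

```
listen mysql
    bind *:3306
    mode tcp
    timeout client 5m     # idle timeout towards the Rails clients
    timeout server 5m     # idle timeout towards the database
    server db1 10.0.0.20:3306 check
```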


Re: long ip acl's

2010-02-25 Thread XANi
Hi
On Thu, 2010-02-25 at 15:35 -0800, David Birdsong wrote:

 On Thu, Feb 25, 2010 at 12:48 AM, Willy Tarreau w...@1wt.eu wrote:
  Hi David,
 
  On Wed, Feb 24, 2010 at 06:06:26PM -0800, David Birdsong wrote:
  I'm autogenerating haproxy configs on some of our front ends and
  appending a growing set of IP addresses that we'll ban.  Does this
  scale well in haproxy?  Can I expect performance to drop as the list
  grows and grows or is this implemented in a way that scales pretty
  horizontally?
 
  Yes the performance will drop but not *that* much, because IP ACLs
  check is quite fast. Just put as many IPs per line as you can.
 
  I have plans to load IP ranges from a file and to perform dichotomic
  search on them (which will be even faster than tree search due to
  lower memory footprint). It would make it possible to load millions
  of IP addresses without a noticeable performance degradation. It's
  just not there yet.
 
  I also plan to add ACL matches for stickiness tables. That will allow
  us to check using ACLs if an address was already added to a table. We
  first have to relax the conditions in which an address can be inserted.
 
  How many IP addresses do you intend to load, and how many requests
  per second do you estimate ?
 right now there are 20 or so, but I've automated their addition
 to the config file and was wondering if this was something I could
 forget about - clearly not.
 
 we had a bad referrer list that nobody paid attention to and it grew
 to like 4k.  our home-grown lighttpd module was killing lighttpd's
 performance by comparing all requests against a 4,000-entry referrer list.
 
 these are uploads(posts) so rate is quite low. less than 100/sec.

You might try iptables + ipset instead, though according to the manual
there is a 65535-IP limit per set. It also avoids having to restart
haproxy to add new IPs.
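
The ACL-based banning discussed in this thread can be sketched as follows
(the addresses are documentation examples; several IPs or CIDR ranges can
share a single acl line, as Willy suggests):

```
frontend web
    bind *:80
    acl banned src 192.0.2.1 192.0.2.2 198.51.100.0/24
    block if banned
    default_backend app
```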


Re: HAPROXY between mysql client and server to eliminate network latency problems

2010-02-09 Thread XANi
On Tue, 2010-02-09 at 19:37 +0100, Berco Beute wrote:

 I want to use HAPROXY between a slow mysql clients and a mysql server
 to minimize database interaction time. The large network latency
 between the client and database is currently a performance bottleneck.
 The database is often occupied serving result-data to the slow client.
 
 In my new setup the HAPROXY sits next to the database and makes sure
 database interaction time is as small as possible. After HAPROXY has
 fetched the data from the database it buffers it and serves is to the
 slow client.This way slow clients cannot occupy database connections
 for too long. 
 
 Is HAPROXY suitable for this? If so, where can I find examples how to
 configure HAPROXY?
 
 Kind regards,
 Berco Beute


Haproxy buffers only headers in HTTP mode and while inspecting, so it
won't do what you want. Basically, in TCP mode it doesn't know anything
about what's going on inside the connection, except for things like
sticking the same client to the same server (for example with RDP). It
can't keep the connection open and spoon-feed the server's response to the
client (unless the response is small enough to fit in the connection
buffer).
Maybe https://launchpad.net/mysql-proxy ?


Re: Log level error/warning

2010-02-04 Thread XANi
On Thu, 2010-02-04 at 14:52 +0100, Reynier Serge wrote:

 Hello,
 
 I have set up haproxy for MySQL connections and I would like to log only 
 the haproxy errors and warnings that are visible on the stats page. The 
 log mode is therefore tcplog; here is an excerpt from my config 
 file:
 
 listen  MySQL x.x.x.x:3306
 mode tcp
 option tcplog
 option dontlog-normal
 no option log-separate-errors
 log 127.0.0.1 local0 debug
# option tcpka
 maxconn 1
 timeout connect  3s
 timeout client   3s
 timeout server   3s
 server SRV1 x.x.x.x:3306 check inter 10s rise 3 fall 3 weight 1
 server SRV2 x.x.x.x:3306 check inter 10s rise 3 fall 3 weight 1
 
 When I set the log level to debug I get far too many lines with 
 cD flags; if I set it to notice, nothing at all ...
 In fact I would simply like to capture the few warnings and errors 
 that I see on the stats page.

It might help if you repost your question in English (that's the language
we use on this mailing list); that said, your problem is probably that
your timeouts are too low.



RE: Issues with RDP

2010-02-03 Thread XANi
On Wed, 2010-02-03 at 08:54 -0500, John Marrett wrote:

 Shyam,
 
  But the problem is if a 4th guy 
  connects, it gets connected to an already existing connection 
  and also terminating the existing one. 
 
 This is because Windows XP will only permit one RDP session on a machine
 at a given time (possibly two if you force console access, but I don't
 believe so).

Yup, one console session per Windows XP machine, two on Windows Server;
for more you need a terminal server.





Re: high connection rates

2010-01-31 Thread XANi
Hi

  it's statically in the kernel, i haven't gotten around to recompiling
  the kernel yet to compile it out.
 
  i am using the NOTRACK module to bypass all traffic around conntrack though.
 
 What a shame :-(
 Unless I'm mistaken, that means that a connection is created for each
 incoming packet, then immediately destroyed using the NOTRACK target.
 Then it's the same again for outgoing packets. So while lookups are
 fast in an empty table, this still costs a lot of CPU. Also, there was
 a discussion in the past about netfilter's counters causing cache
 thrashing in SMP because they are updated for every packet. I don't
 remember the details and I may even be wrong though.

Nope, the raw table is processed before any other table, so when you
specify NOTRACK it will completely bypass conntracking. But then, unless
you really need conntrack for something else, disabling it entirely would
be a bit better (with empty iptables there is no need for the kernel to go
through any rules, so a bit less CPU load).




Re: Tuning HAProxy on EC2 instances?

2010-01-31 Thread XANi
On Mon, 2010-02-01 at 02:01 +0100, Alexander Staubo wrote:

 On Mon, Feb 1, 2010 at 12:14 AM, Willy Tarreau w...@1wt.eu wrote:
  well, last year I helped some guys in charge of a world wide sports
  event which was hosted there. The performance was terrible. Completely
  unstable. [snip]
 
  In this experience, I think that for them, everything was virtual :
  the machines, the network, the support, the availability, the visitors
  and finally the profit.
 
 Ouch. That's something of a horror story. Thanks for the summary.
 This, and several recent blog posts about EC2 performance issues, is
 making me want to reconsider EC2.
 
 The negatives have not been reflected in our own testing of EC2, but
 then so far we have only dealt with single, standalone instances which
 only depend on external traffic. There is a lot of evidence that EC2
 suffers from internal network latency as well as being overcrowded, at
 least in the US. We will need to run some comprehensive performance
 tests with multiple instances.
 
 But it's hard to ignore the myriad of services that Amazon provides.
 S3, EBS, autoscaling, Elastic IPs, geographic CDN -- those are all
 things we want. A virtually infinite supply of storage space through
 EBS is a particularly attractive proposition, and one which I think
 very few dedicated hosting companies can provide. We don't want to pay
 through the nose for some kind of half-assed SAN setup.
 
 We might end up deciding to use a dedicated, non-virtual hosting
 provider. That assumes we can find one that lets us cheaply and
 quickly (eg., within a day or two) add or remove new machines. There
 are a bunch of providers like that in the US, but I don't know of any
 reputable ones in Europe. Do you know of any?
 


OVH is quite okay. For a VPS I'd recommend linode.com (they have data
centers in Europe and the US); much faster than Amazon, and they also have
some kind of API.


Re: Can HAProxy's Balancing Mechanism Be Called NAT?

2010-01-07 Thread XANi
On Thu, 2010-01-07 at 11:29 +0800, Joe P.H. Chiang wrote:
 Hi All
 
 I was wondering if the HA Proxy's Balancing Mechanism be called a NAT
 Mechanism, because it's masking the servers' IP addresses, and then
 route the traffic to the location. 
 
 
 
 because i was just discussing with my colleague, and my argument is
 that: it's only a proxy which in between two pc there are a surrogate
 pc to tell where the traffic's destination is. 
 
 
 -- 
 Thanks,
 Joe
 

Well, NAT is get packet, rewrite IP headers, send packet somewhere.
In haproxy it's get packet, analyze headers, change headers, send it
to a backend server, rewrite/analyze the response, send it back to the
client, and maintain the connection between those two.

Basically, NAT works at layer 3; a proxy works at layer 7.
LVS (http://www.linuxvirtualserver.org/) is basically NAT in one of its
modes. HAProxy is not a Network Address Translator, though it can be used
to replace one in some cases.




Re: -sf/-st rereading configuration file

2009-12-24 Thread XANi
On Thu, 2009-12-24 at 06:29 +0100, Willy Tarreau wrote:
 On Wed, Dec 23, 2009 at 02:43:04PM -0800, Paul Hirose wrote:
  I was asked how to get haproxy to reload its configuration file, and not 
  disturb any existing connections.  For example, if I have two servers 
  listed, and I want to take one out for maintenance.
  
  I wasn't sure about the difference between -sf and -st, but from reading 
  2.4(.1), I'm guessing -sf is the better way.  It allows all existing 
  connections to finish, then temporarily stops/pauses all services(?), 
  rereads the configuration file, then restarts again?
 
 No it does not work like that.
 
 You start a new process (with -sf or -st). It reads the config and tries
 to bind as many services as it can. If some ports are busy, it then sends
 a signal to the old process asking it to temporarily release its ports so
 that the new one can bind them. This leaves a small window of a few 
 milliseconds
 between the instant the port is unbound and it is rebound, where the port
 is not bound at all. But apparently people have absolutely no problem with
 that. Then, once the new process is ready, it sends one signal to the old
 one indicating to it that it can either finish what it's doing (-sf) or
 immediately stop (-st). So upon every restart, you have a fresh new process.
 Some people even use that to upgrade the binary without service disruption.
There is also a little iptables hack if you want to be 100% sure no client
gets rejected while you're restarting: block outgoing TCP RST packets to
clients, so that when a TCP SYN hits the load balancer while it is
restarting and the frontend port is closed, the client connection won't be
reset; TCP will simply retransmit the SYN packet.



Re: Friendly URLs?

2009-12-17 Thread XANi
On Thu, 2009-12-17 at 07:52 +0100, Willy Tarreau wrote:
 On Wed, Dec 16, 2009 at 01:56:06PM +0100, XANi wrote:
 
Is there a way to do this using rewrite rules?
   
   This specific one above cannot because you have to take one part
   from the Host header and inject it into the request line. But those
   which only move components within the same line do work (eg: rewriting
   the host or rewriting the URI).
  
  Is it possible to do a redirect instead of a rewrite?
  So
  http://profilename.page.com gets redirected to
  http://page/profile/profilename ?
 
 If you need to automatically extract profilename from the request
 to build your redirect, then no, it's not possible. But at Exceliance,
 we're working on a way to extract generic data from a request in order
 to be able to reuse it elsewhere (ACL, stickiness, hashing). So while
 I did not thinkg about it, it would then be possible to adapt the
 redirect code so that it can use such data too.
 
  Atm it's the only reason why we are still using nginx ;]
 
 If it's doing that well, you have no reason to replace it. The best
 tool for each task provides you with the best architecture.
Yeah, but then I miss some (well, a lot of) features I'd want to use
that are in haproxy but not in nginx, so either I have to skip some
things or build a config where haproxy sends some requests to nginx only
for the rewrites, which is kinda ugly.



-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl


signature.asc
Description: This is a digitally signed message part


Re: Friendly URLs?

2009-12-16 Thread XANi
On Wed, 2009-12-16 at 00:15 +0100, Willy Tarreau wrote:

 Hi,
 
 On Tue, Dec 15, 2009 at 05:47:49PM -0500, Ken Roe wrote:
  We are trying to make application URLs friendly using rewrite rules.
  The goal is to eliminate the context path of the web application from
  the url.
  
   
  
  Example: 
  
  The URL http://app.company.com should rewrite to
  http://backend:8080/app. 
 
 This is a very bad idea, and while it may work in the short term,
 you will end up with permanent issues such as erroneously built
 resource paths (images, JS, CSS, ...), bad paths on cookies, bad
 redirection URLs, or the need to explicitly state a full path with
 a host name in each Location header with the need to rewrite it at
 every stage of your architecture, etc... I regularly see setups
 making use of rewrite rules for this same purpose. The only thing
 they can say after a few years of permanent degradation and workarounds
 involving hundreds of unmaintainable rewrite rules is always the same :
 it's too late now to remove that crap, we have to live with it.
 
 So... better think twice before digging your hole.
 
  Is there a way to do this using rewrite rules?
 
 This specific one above cannot because you have to take one part
 from the Host header and inject it into the request line. But those
 which only move components within the same line do work (eg: rewriting
 the host or rewriting the URI).

Is it possible to do a redirect instead of a rewrite?
So
http://profilename.page.com gets redirected to
http://page/profile/profilename ?
Atm it's the only reason why we are still using nginx ;]

As for rewrites, what you really want is your app supporting that kind of
address; like Willy said, those are only ugly workarounds.




-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl


signature.asc
Description: This is a digitally signed message part


Re: minimum requirements and explanation about stats

2009-12-11 Thread XANi
On Fri, 2009-12-11 at 02:13 -0200, Gabriel Sosa wrote:

 Hello!
 
 Currently our haproxy is setup over a Intel Pentium D 930 with 2gb 667
 RAM, we are in a test stage. Running a Centos 5.3 64bits kernel 2.6
 
 haproxy was compiled with TARGET=linux26
 
 - How can I know if this is a good hardware for this. I didn't found
 any about this in the documentation.
 - How can I calculate the maximum number of connection based on that
 hardware/OS/kernel
 
 In the other hand, how is a session considered ? I mean: If I have a
 page with 4 images (for example) should I consider this like a 5
 different sessions or just one session with 5 elements? there is any
 documentation about how should I read the stats?
 
 I really enjoy this piece of software, there is any other way (like
 IRC) to have good talks ?
 
 thank you!
 
 
 
 


Tbh the best way to check is to just fire up some light HTTP servers and
run benchmarks.
You might want to tune your system a bit: turn off connection tracking
(as it can eat lots of RAM/CPU), get a new kernel (so TCP splicing will
work) and get good network cards. From my experience, haproxy's
performance is as fast as the kernel can put it on the wire ;]
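For the conntrack part, a sketch of the usual raw-table opt-out (assuming the balanced traffic is on port 80 and a kernel with the raw table):

```shell
# Skip connection tracking for load-balanced traffic so the conntrack
# table doesn't grow (and burn CPU) with every client connection
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT     -p tcp --sport 80 -j NOTRACK
```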


-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl


signature.asc
Description: This is a digitally signed message part


Re: Haproxy server timeouts?

2009-12-04 Thread XANi
Hi
On Fri, 2009-12-04 at 13:48 -0500, Naveen Ayyagari wrote:

 Hello, 
We have come across a situation where occasionally our backend servers get 
 overwhelmed with connections. 
 
 It appears as though haproxy 'timeout server' value is getting hit which 
 drops the connection to the server, however the server continues to execute 
 the request even though the response will be ignored. 
 
 We are using apache on our backend servers and have not been able to figure 
 out how to get apache to abort processing on a connection when haproxy times 
 out. What ends up happening is that, haproxy times out the connection, then 
 it thinks that it has an available connection to serve a new client and opens 
 a new connection to the apache server. This ends up with the apache server 
 getting way more concurrent connections than we have configured haproxy to 
 allow, and in some cases brings our servers to halt as they run out of memory 
 because of the number of connections being opened. 
 
 Has anyone else had issues like this or have suggestions on a best practice 
 to manage this kind issue? I would have hoped that apache would terminate the 
 process when haproxy dropped the connection, but it does not appear to behave 
 in this manner.

Have you tried lowering the max connections per server (so instead of
queuing connections on the backend you will have the queue in haproxy)?
Also, what do you serve through Apache: PHP (mod_php/mod_fcgid), Python,
Ruby?
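Per-server maxconn is what moves the queue into haproxy; a minimal sketch (backend name, addresses and numbers are made up):

```
backend php
    # With maxconn 30, the 31st concurrent request waits in haproxy's
    # queue (bounded by "timeout queue") instead of spawning yet another
    # Apache/PHP process on the backend
    timeout queue 30s
    server web1 10.0.0.1:8080 maxconn 30 check
    server web2 10.0.0.2:8080 maxconn 30 check
```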
Regards
-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl


signature.asc
Description: This is a digitally signed message part


Re: Haproxy server timeouts?

2009-12-04 Thread XANi
On Fri, 2009-12-04 at 17:57 -0500, Naveen Ayyagari wrote:

 Our servers do not serve any static content, it is entirely application 
 content for a mobile application, so we have many many requests that need to 
 run through a php server. 
 
 The issue we have is that our scripts are dependent on external resources, so 
 php execution time can vary wildly. 
 
 
 Can you elaborate on this comment:
  -do not run too much PHPs, 2xnumber of cores is ok coz if u run too much 
  context switching will eat your CPU and RAM is better used for caching, not 
  another 20 PHP processes ;]
 Given that we are using mod_php, does this still make sense or is this only 
 relevant to the fastcgi? What it 2x the number of cores? Do you mean 
 processor cores? We need to be able to concurrently handle as many PHP 
 processes as possible. 
 
 Thanks for taking the time to help out and explain your configuration. 
 

Yes, I meant processor cores. Basically, if you have extreme cases like 80
processes on 8 cores then IMO it's better to use fewer processes and queue
requests in the proxy (too much context switching is a bad thing for
performance), but if in your case it's just because PHP waits for something
and not because the server is overloaded, it won't change much. You might
want to consider checking whether other HTTP servers like lighttpd also
have that bug.
-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl


signature.asc
Description: This is a digitally signed message part


Re: Using nginx / haproxy / apache setupr?

2009-11-22 Thread XANi
On Sun, 22 Nov 2009 21:30:51 +0800, Ryan Chan ryanchan...@gmail.com
wrote:
 Hello,
 
 On Sat, Nov 21, 2009 at 4:39 PM, XANi xani...@gmail.com wrote:
  Well haproxy won't buffer response so that will help a bit on
  not-so-slow-but-not-fast-either req. But then u could try use
  apache + mod_worker + php thru fast-cgi I think (not sure tho) in
  that config it will buffer req. in apache, freeing php processes to
  server other req.
 
 
 What is meant by not-so-slow-but-not-fast req?
 
 You mean nginx is able to handle this, but not HA Proxy?
I mean that, for example, if you tell haproxy to cut all connections to
PHP longer than 60s, the PHP process will still be blocked for those 59s.
If you put PHP behind anything that can buffer the response, PHP will
generate all its data, the web server will buffer it, and the PHP process
will not block.


-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl



signature.asc
Description: PGP signature


Re: Using nginx / haproxy / apache setupr?

2009-11-21 Thread XANi
On Sat, 21 Nov 2009 11:44:01 +0800, Ryan Chan ryanchan...@gmail.com
wrote:
 Hello,
 
 On Sat, Nov 21, 2009 at 7:38 AM, Aleksandar Lazic
 al-hapr...@none.at wrote:
 
  You should setup haproxy so that the 'slow clients' don't eat all
  connections to apache.
 
 
 That means HAProxy can handle this for me, and nginx is useless in
 my above setup?
 
 (Since I don't use nginx/fast_cgi to serve PHP, I use apache)
Well, haproxy won't buffer the response, so that will only help a bit with
not-so-slow-but-not-fast-either requests. But then you could try Apache
with the worker MPM + PHP through FastCGI; I think (not sure though) that
in that config it will buffer the response in Apache, freeing PHP
processes to serve other requests.

Regards
Mariusz

-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl



signature.asc
Description: PGP signature


Re: TCP Proxy, dual-redundant partitions and least connection balancing

2009-11-12 Thread XANi
Hi,
On Wed, 11 Nov 2009 22:16:38 -0800, Jacques whs...@gmail.com wrote:
 Hello,
 There is some complexity here that isn't warranted at 4 servers but
 the redundancy model allows us to do a number of useful things. Also,
 while the example has each service existing the same number of times,
 in reality the number of copies of a service would vary depending on
 other factors (load, response curve, etc).
 
 Is this possible out of the box with just a configuration file?
 
 thanks for any guidance,
 Jacques
AFAIK you would have to use dynamic weights (the ability to change a
server's weight through the unix socket) in haproxy, which is in the dev
tree. So basically one script watching things like CPU/disk/memory load
on each node, and another script setting the weights according to those
metrics.

Or, in a simpler version, a script getting connection stats from the
haproxy stats page and then setting the weights.

So the answer is no, nothing out of the box, but yes, it should be
possible :)
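The simpler version could look roughly like this; the backend/server name ("www/web1"), the socket path, and the load-to-weight formula are all assumptions, and "set weight" needs a haproxy built from the dev tree with the stats socket enabled:

```shell
#!/bin/sh
# Sketch: derive a weight from the 1-minute load average and push it to
# haproxy through the stats socket with socat.

weight_for_load() {
    # $1 = 1-minute loadavg (e.g. "1.50"), $2 = number of cores.
    # Maps load-per-core 0..2 onto weights 100..1, integer math only.
    load100=$(awk -v l="$1" 'BEGIN { printf "%d", l * 100 }')
    w=$(( 100 - load100 / (2 * $2) ))
    [ "$w" -lt 1 ] && w=1
    echo "$w"
}

cores=$(nproc 2>/dev/null || echo 1)
load=$(cut -d' ' -f1 /proc/loadavg 2>/dev/null || echo 0)
w=$(weight_for_load "$load" "$cores")

# Only talk to haproxy if the socket actually exists
if [ -S /var/run/haproxy.sock ]; then
    echo "set weight www/web1 $w" | socat stdio /var/run/haproxy.sock
fi
```

Run it from cron every minute or so, once per backend server.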

Regards
Mariusz

-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl



signature.asc
Description: PGP signature


Re: Just one question about : Nginx + Haproxy

2009-11-11 Thread XANi
Hi,
On Wed, 11 Nov 2009 15:16:09 +0100, Falco Schmutz
fschm...@premaccess.com wrote:
 Hello Aleksandar,
 Yes I know :o)
 It's just question, maybe some people do more than just simple check.
It all depends on what you use it for. A good idea is to have a simple
php/perl/ruby/python/whatever script returning 200 OK if the app and its
DB are fine and 500 ERR if something is wrong.
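A minimal sketch of such a check as a shell CGI (any of the languages above works just as well); "mysqladmin ping" is only a stand-in for whatever really exercises your app and its database:

```shell
#!/bin/sh
# Health-check CGI: 200 when the app's database answers, 500 otherwise.

health_response() {
    # $1 = exit status of the real check: 0 -> healthy, anything else -> not
    if [ "$1" -eq 0 ]; then
        printf 'Status: 200 OK\r\nContent-Type: text/plain\r\n\r\nOK'
    else
        printf 'Status: 500 Internal Server Error\r\nContent-Type: text/plain\r\n\r\nERR'
    fi
}

# The actual check -- substitute your own "is my app really working" test
mysqladmin ping -h 127.0.0.1 --silent >/dev/null 2>&1
health_response "$?"
```

Point haproxy's `option httpchk` at the script's URL and the backend is marked down as soon as the database stops answering.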

Regards
Mariusz

-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl



signature.asc
Description: PGP signature


Re: Using HAProxy In Place of WCCP

2009-11-04 Thread XANi
On Wed, 4 Nov 2009 06:58:32 -0500, John Lauro
john.la...@covenanteyes.com wrote:
 I see two potential issues (which may or may not be important for
 you).
 
  
 
 1.   Non http 1.1 clients may have trouble (ie: they don't send
 the host on the URL request, or if they are not really http but using
 port 80).
Yeah, for that to work you would have to use TCP mode, so no tricks like
hashing by URL to improve the cache hit rate.
 
 2.   Back tracking if you get a complaint from some website (ie:
 RIAA complaint) is going to be near impossible of determining who
 accessed whatever.
Wouldn't logging in haproxy solve that?

Regards
Mariusz


signature.asc
Description: PGP signature


Re: MySQL + Haproxy Question

2009-10-25 Thread XANi
Hi
On Sat, 24 Oct 2009 19:25:36 -0400, Joseph Hardeman
jharde...@colocube.com wrote:
 Hi Mariusz
 
 Thats actually what I thought, but I wanted to ask to be sure. *S*  I
 am going to look into that solution again, the last time I tried it,
 many months ago now, I couldn't get it to work right and I would have
 to replace all of the libmysql* so files on my web servers. 
If your app doesn't have a huge number of SQL query types you might want
to just rewrite parts of it; like they said in the mysql-proxy docs, RW
splitting is only an experimental feature.

Regards
Mariusz
-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl



signature.asc
Description: PGP signature


Re: MySQL + Haproxy Question

2009-10-24 Thread XANi
Hi
On Sat, 24 Oct 2009 16:01:26 -0400, Joseph Hardeman
jharde...@colocube.com wrote:
 Hey Guys,
 
 I was wondering if there was a way to have Haproxy handle mysql 
 requests.  I know that I can use the TCP option instead of HTTP and
 it will work, but I was wondering if anyone has a way to make haproxy
 send all requests for Select statements to a set of servers and all
 Insert, Updates, and Deletes to a master MySQL server.
 
 I was just thinking about it and was wondering if this was possible
 and if anyone has done it.  If you have would you be willing to share
 how your setup is.
You can't do that; you either have to use something like
http://forge.mysql.com/wiki/MySQL_Proxy_RW_Splitting
or (better) rewrite your app to split write and read requests.

Regards
Mariusz
-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl



signature.asc
Description: PGP signature


Re: Query Regarding the HAProxy and TCP

2009-10-23 Thread XANi
Hi,
On Fri, 23 Oct 2009 14:26:39 +0530, R, Viswanath
viswanat...@misys.com wrote:
 Hi,
 
  
 
 Greetings, This Is Viswanath. I recently started using the HAProxy for
 balancing the load among my set of available servers. I have described
 my query in the following scenario.
 
  
 
 I have configured HAProxy to balance the load for TCP among my
 available Nodes and the configuration content is as follows
 
  
 
 global
 
 ulimit-n  1
 
 debug
 
  
 
 defaults
 
 balance roundrobin
 
 retries 1
 
 
 
 listen haserver_tcp 127.0.0.1:9000
 
 mode tcp
 
 option tcpka
 
 clitimeout  150s
 
 contimeout   30s
 
 srvtimeout30s
 
 server node1 server1:9198 check inter 10s backup
 
 server node2 server2:9198 check inter 10s
 
 server node3 server3:9198 check inter 10s
 
  
 
 With the above mentioned configuration I am able to achieve the basic
 load balancing. But when my TCP Client created a single connection and
 sends a long message, say greater than 8k bytes, in a small chunks of
 512 bytes at a time (assuming 512 bytes makes a business oriented
 logical interpretable message), and finally after sending 16 * 512
 bytes of message, my client closes the connection. For every message
 (512 bytes), my server performs an operation based on the message.
 
  
 
 And now, when my TCP Client is sending the nth message (say 10th of
 16), the server comes down after performing 9 messages successfully,
 and the TCP Client still sends the 10th message till 16th. And these
 are not reaching the server.
 
  
 
 As all the 16 messages are sent on the same connection (Connection is
 made only once in the TCP Client), the HAProxy is not re routing the
 remaining message to another node.
 
 Is this scenario achievable through HAProxy?
 
 And my next query is that, as I mentioned that my Client creates only
 one Connection and sends Business oriented logical messages, for the
 rest of the life time of the client, and can that traffic be load
 balanced?
 
  
 
 Eagerly waiting for your reply with comments and solutions if
 applicable.
AFAIK haproxy knows nothing about what's going on inside non-HTTP TCP
connections (it doesn't know whether the next packet is another request);
for that to work you would have to use one transaction per connection, or
do a heavy rewrite of haproxy to support your application protocol.

Regards
Mariusz
-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl



signature.asc
Description: PGP signature


Re: dynamic weights based on actual server load

2009-10-18 Thread XANi
Hi
On Sun, 18 Oct 2009 08:26:37 +0200, Willy Tarreau w...@1wt.eu wrote:
 Hello,
 
 On Sat, Oct 17, 2009 at 11:18:24AM +0200, Angelo Höngens wrote:
  Just read this thread, and I thought I would give my humble opinion
  on this:
  
  As a hosting provider we use both windows and unix backends, en we
  use haproxy to balance requests across sites on a per-site backend
  (with squid in front of haproxy). What I would love to see, is
  dynamic balancing based on the round-trip time of the health check.
  
  So when a backend is slower to respond, the weight should go down
  (slowly), so the faster servers would get more requests. Now that's
  a feature I'd love to see.. And then there would not be anything to
  configure on the backend (we don't always have control over the
  backend application)
 
 Having already seen this on another equipment about 5-6 years ago, I
 can tell you this does not work at all. The reason is simple : the
 health checks should always be fast on a server, and their response
 time almost never tells anything about the server's remaining
 capacity. Some people even use static files as health checks.
 
 What is needed though is to measure real traffic's response time. The
 difficulty comes from the fact that if you lower the weight too much,
 there is too little traffic to measure a reduced response time, and it
 is important to be able to bound the window in which the weight
 evolves.
For health-check-based load balancing, the health check would have to do
something like a bit of number crunching plus a bit of database reading,
and be quite long (because shorter requests tend to give more random
timings). So you will either have a long time between checks or, when
checks run more often, a lot of extra load on the server just from the
health checks.

I think load balancing should be driven by both request time and server
load, but then it would need some kind of long-term log analysis of one
often-used part of the page, for example:
1. If server load (the simplest measure is loadavg/cores) is below 80%,
increase the weight.
2. If the average request time of http://example.org/index.php is less
than 90% of target_request_time, increase the weight a bit.
3. If the average request time is more than target_request_time * 1.1,
decrease the weight a bit.
4. Every x minutes, if the weight is less than 50, add 1; if more,
subtract 1 (so values won't drift toward max or 0 over time).

The target request time would be some predefined value or (better) the
calculated average across all nodes + 50%, so you won't end up with every
node's weight skyrocketing or collapsing because of a small
underload/overload.
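The rules above reduce to a few lines of arithmetic; a sketch, where the step sizes and the 1..100 clamp are assumptions rather than tuned values (times in ms, load in percent of loadavg/cores):

```shell
#!/bin/sh
# Compute the next weight for one node from its current weight, measured
# average request time, the target time, and the server load.

next_weight() {
    # $1 = current weight, $2 = avg request time (ms),
    # $3 = target request time (ms), $4 = server load in percent
    w=$1
    # rule 1: server has headroom -> raise the weight
    [ "$4" -lt 80 ] && w=$((w + 5))
    # rule 2: responses faster than 90% of target -> raise it a bit more
    [ "$2" -lt $(( $3 * 90 / 100 )) ] && w=$((w + 5))
    # rule 3: responses slower than 110% of target -> lower it a bit
    [ "$2" -gt $(( $3 * 110 / 100 )) ] && w=$((w - 5))
    # rule 4 (slow drift back toward 50) is left out; clamp to 1..100
    [ "$w" -gt 100 ] && w=100
    [ "$w" -lt 1 ] && w=1
    echo "$w"
}

# example: a loaded server answering slowly drops from 50 to 45
next_weight 50 120 100 90
```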

Regards
Mariusz
-- 
Mariusz Gronczewski (XANi) xani...@gmail.com
GnuPG: 0xEA8ACE64
http://devrandom.pl



signature.asc
Description: PGP signature