is it possible to disable option httpchk per backend?

2021-03-25 Thread Mariusz Gronczewski
Hi,

is it possible to disable "option httpchk" in a specific backend when it
is enabled in the defaults block? I have a config where basically every
backend except one is HTTP, so I'd like to keep the option in defaults
and just disable it in the TCP backend (which is the backend for SPOE/A),
but it seems to be one of the very few options that have no
"no option httpchk" form.
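
A sketch of the usual workaround (backend names, addresses, and the health-check URL are illustrative, not from the thread): since "option httpchk" apparently cannot be negated, leave it out of defaults and repeat it in each HTTP backend instead:

```haproxy
# Sketch of a workaround, assuming "no option httpchk" is unavailable:
# drop httpchk from defaults and declare it per HTTP backend instead.
# Names and addresses are illustrative.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

backend web_servers
    option httpchk GET /health
    server web1 192.0.2.11:80 check

backend spoe_agents
    mode tcp
    # plain TCP check only, no HTTP health check here
    server agent1 192.0.2.21:12345 check
```

The downside is repeating the option in every HTTP backend, which is exactly what the question tries to avoid; templating the config can take some of the sting out of that.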

Cheers

Mariusz


-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T:   [+48] 22 380 13 13
NOC: [+48] 22 380 10 20
E: ad...@efigence.com



Re: Conditional request logging ?

2020-06-19 Thread Mariusz Gronczewski
Thank you, I somehow missed that option, as we were stuck on an older
version for a long time.

Cheers!

On 2020-06-18, at 14:35:11, Tim Düsterhus wrote:

> Mariusz,
> 
> On 18.06.20 at 12:59, Mariusz Gronczewski wrote:
> > Is there a way to log requests that match the given ACL (and only
> > that ACL) ? I know I can capture headers by ACL but I can't seem to
> > find any way to do that for whole log entries.
> >   
> 
> Use http-response set-log-level silent. See:
> http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4.2-http-response%20set-log-level
> 
> Best regards
> Tim Düsterhus
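
For reference, a sketch of how the suggested action can be wired up (the frontend name and the status-based condition are assumptions, not from the thread): silence the log for matching traffic so only the interesting requests produce log entries.

```haproxy
# Sketch: keep log entries only for non-200 responses by silencing the rest.
# The condition is illustrative; any ACL usable at response time works.
frontend fe_web
    bind :80
    http-response set-log-level silent if { status 200 }
    default_backend web_servers
```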



-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T:   [+48] 22 380 13 13
NOC: [+48] 22 380 10 20
E: ad...@efigence.com



Conditional request logging ?

2020-06-18 Thread Mariusz Gronczewski
Hello,

Is there a way to log requests that match a given ACL (and only that
ACL)? I know I can capture headers by ACL, but I can't seem to find any
way to do that for whole log entries.

Cheers

Mariusz



Re: Is there a way to extract list of bound IPs via stats socket ?

2017-09-11 Thread Mariusz Gronczewski
On Fri, 1 Sep 2017 19:07:58 +0200, Willy Tarreau <w...@1wt.eu> wrote:

> On Fri, Sep 01, 2017 at 05:49:38PM +0200, Lukas Tribus wrote:
> > Hello,
> > 
> > 
> > On 01.09.2017 at 15:46, Mariusz Gronczewski wrote:
> > > Hi,
> > >
> > > I've been working on a piece of code to announce IPs (via ExaBGP) only if:
> > >
> > > * HAProxy is running
> > > * HAProxy actually uses a given IP
> > > * a frontend with given IP is up for few seconds.
> > >
> > > I could do that via lsof but that's pretty processor-intensive.   
> > 
> > Not sure about the stats or admin socket, but why not use ss instead?
> > 
> > Something like:
> > sudo ss -tln  '( sport = :80 or sport = :443 )'
> > 
> > add "-p" if you need the PID.
> > 
> > Should perform well enough.  
> 
> I think it would not be too hard to add this feature to the CLI. We already
> have "show cli socket" which lists the listening stats sockets. We could
> reuse this code to list all listening sockets and not just the stats ones.
> Maybe "show listeners [optional frontend]" or something like this ?
> 
> Just my two cents,
> Willy

Anyway, if anyone's interested, I have a version that just uses ss to get the
open sockets and iproute2 to verify that the "right" IP actually exists on the
server:

https://github.com/efigence/go-ha2bgp

It generates exabgp3-compatible announcements plus a few basic protections
(delay the announcement to let the service come up, don't withdraw immediately
so restarts are tolerated, withdraw if it flaps for too long). In theory it
should be compatible with any service that listens on a standard TCP socket
(the default filter only looks for ports 80 and 443).

We use it for an ECMP setup with 4 boxes, upgraded from a "just put some
ip addr add commands in the haproxy init script and hope nobody ever stops it"
setup.


--
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com <mailto:mariusz.gronczew...@efigence.com>



Re: Is there a way to extract list of bound IPs via stats socket ?

2017-09-01 Thread Mariusz Gronczewski


On Fri, 1 Sep 2017 17:49:38 +0200, Lukas Tribus <lu...@gmx.net> wrote:

> Hello,
>
>
> On 01.09.2017 at 15:46, Mariusz Gronczewski wrote:
> > Hi,
> >
> > I've been working on a piece of code to announce IPs (via ExaBGP) only if:
> >
> > * HAProxy is running
> > * HAProxy actually uses a given IP
> > * a frontend with given IP is up for few seconds.
> >
> > I could do that via lsof but that's pretty processor-intensive.
>
> Not sure about the stats or admin socket, but why not use ss instead?
>
> Something like:
> sudo ss -tln  '( sport = :80 or sport = :443 )'
>
> add "-p" if you need the PID.
>
> Should perform well enough.
>
Huh, interesting.

I just assumed it would be a similar speed no matter which tool I used to get
that info, but ss does it in < 100 ms while lsof and netstat take ages:

time lsof -iTCP -sTCP:LISTEN >/dev/null

real    0m13.460s
user    0m0.201s
sys     0m12.897s

time netstat -l -n -t >/dev/null

real    0m43.439s
user    0m0.190s
sys     0m42.395s

time ss -tln '( sport = :80 or sport = :443 )' >/dev/null

real    0m0.032s
user    0m0.000s
sys     0m0.032s


Now I know why netstat is getting replaced instead of "just" fixed... thanks.
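
For scripting this (the original goal of extracting bound IPs), the addresses can be pulled out of the ss output with a little awk; a sketch, with a captured sample inlined so it runs without live sockets (addresses are illustrative):

```shell
# Sketch: extract the bound IP:port list from `ss -tln` output for scripting.
# Against a real box you would feed it from:
#   ss -tln '( sport = :80 or sport = :443 )'
extract_bound() { awk 'NR > 1 { print $4 }' | sort -u; }

printf '%s\n' \
  'State  Recv-Q Send-Q Local Address:Port Peer Address:Port' \
  'LISTEN 0      128    192.0.2.10:80      *:*' \
  'LISTEN 0      128    192.0.2.10:443     *:*' \
  'LISTEN 0      128    192.0.2.10:80      *:*' | extract_bound
# -> 192.0.2.10:443
#    192.0.2.10:80
```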


--
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com <mailto:mariusz.gronczew...@efigence.com>



Is there a way to extract list of bound IPs via stats socket ?

2017-09-01 Thread Mariusz Gronczewski
Hi,

I've been working on a piece of code to announce IPs (via ExaBGP) only if:

* HAProxy is running
* HAProxy actually uses a given IP
* a frontend with given IP is up for few seconds.

I could do that via lsof but that's pretty processor-intensive. Is there a way
to extract the list of bound IPs (or the running config) via the stats socket?
I found a way to do that for backend server IPs but I can't seem to find a way
to do it for frontends.

Cheers, Mariusz
--
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com <mailto:mariusz.gronczew...@efigence.com>



Re: nbproc best practices

2016-10-04 Thread Mariusz Gronczewski
On Tue, 04 Oct 2016 11:40:01 +0200, Holger Just <hapr...@meine-er.de>
wrote:

> Hi Mariusz,
> 
> Mariusz Gronczewski wrote:
> > we've come to the point when we have to start using nbproc > 1 (mostly
> > because going SSL-only in coming months) and as I understand I have
> > to bind each process to separate admin socket and then repeat every
> > command for each process, and in case of stats also sum up the
> > counters.  
> 
> For statistics, there exists a LUA script you can use in HAProxy which
> aggregates the statistics of multiple processes. See
> http://www.arpalert.org/haproxy-scripts.html#stats

We kind of already do that, just at the between-server level (aggregating all
LBs into one for graphing purposes). But I guess that solves it in a slightly
more transparent way and without extra daemons.

> 
> As for socket commands, often you can circumvent the whole issue by
> applying a multi-stage architecture where you have several "dumb"
> processes just terminating SSL and forwarding the plain-text traffic to
> a single HAProxy processes which performs all of the actual
> loadbalancing rules.

I assume just doing

 bind 127.0.0.1:80 process 1
 bind 127.0.0.1:443 ssl crt /etc/haproxy/test.pem process 2-5

is not enough, and the backends will still run in all processes?

So it would have to be a separate SSL frontend with a backend pointing at a
unix socket, and then another frontend would receive on that socket and do the
actual processing/splitting to backends?

> 
> With clever bind-process rules and by using send-proxy-v2 this is pretty
> workable. Often, there is then no need for close introspection of the
> frontend-processes anymore, nor is there a need to send socket commands
> to them since they always send all their traffic to haproxy anyway.

I'd prefer to write the tool once instead of complicating the config
further. Although maybe the solution is to template more of it so it isn't
a problem.
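
For reference, the multi-stage layout Holger describes might look roughly like this (names, paths, and the pre-1.8 "process" bind syntax are assumptions on my part, not from the thread):

```haproxy
# Sketch: "dumb" SSL-terminating processes feed one process that does the
# real load balancing, over a unix socket with PROXY protocol to preserve
# client addresses. Names and paths are illustrative.
frontend fe_ssl
    bind :443 ssl crt /etc/haproxy/test.pem process 2-5
    default_backend to_main

backend to_main
    server main unix@/var/run/haproxy-main.sock send-proxy-v2

frontend fe_main
    bind unix@/var/run/haproxy-main.sock accept-proxy process 1
    # all ACLs / use_backend rules live here, in a single process
    default_backend web_servers
```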


Thanks for insights,
Mariusz

-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com
<mailto:mariusz.gronczew...@efigence.com>



nbproc best practices

2016-10-03 Thread Mariusz Gronczewski
Hi,

we've come to the point where we have to start using nbproc > 1 (mostly
because we're going SSL-only in the coming months), and as I understand it
I have to bind each process to a separate admin socket and then repeat
every command for each process, and in the case of stats also sum up the
counters.

Is any of that planned to change in an upcoming (say 6-12 months) release
(e.g. sharing a bit of memory to put the stats of all processes in the
same place)?

I was planning on making a small daemon that just connects to the sockets
and does the required multiplexing and summarising over some text/REST API
(and then probably pushes it to GitHub if mgmt. doesn't complain), but I
wouldn't want to build something that will be obsolete by the next release.

Cheers,
Mariusz

-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com
<mailto:mariusz.gronczew...@efigence.com>



Re: Active/Active

2015-02-17 Thread Mariusz Gronczewski
On Mon, 16 Feb 2015 12:41:06 +0100, Klavs Klavsen k...@vsen.dk wrote:

> As I understand anycast and ECMP (and I only know guys who use it and
> know what they are doing ;) - it needs to be two different routes (ie.
> routers) that are active/active.. ie. multiple location.. but I guess
> one could do it in the same datacenter as well..

our setup (1 DC):

* active-active ECMP
* 4 loadbalancers + bird OSPF
* 2 routers + OSPF
* the IPs are on the loopback interface, added and removed when the
haproxy service starts/stops
* OSPF distributes routes to these IPs to the routers
* the routers route by source IP, so the same client IP always lands on
the same loadbalancer

works pretty well ;) you just have to make sure that when you stop
haproxy (maintenance etc.) you also down the IPs that haproxy used, so the
routers stop sending traffic to that node
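
The "down the IPs when haproxy stops" step might be sketched like this (the VIPs are illustrative, and DRY_RUN defaults to 1 so the ip(8) calls are only printed; set DRY_RUN=0 and run as root to actually apply them around haproxy start/stop):

```shell
# Sketch: add/remove loopback service IPs around haproxy start/stop so the
# OSPF daemon (bird) announces/withdraws the routes. VIPs are illustrative.
VIPS="192.0.2.10/32 192.0.2.11/32"
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "ip $*"; else ip "$@"; fi; }
vips_up()   { for v in $VIPS; do run addr add "$v" dev lo; done; }
vips_down() { for v in $VIPS; do run addr del "$v" dev lo; done; }

vips_up   # with DRY_RUN unset this just prints the two "ip addr add" lines
```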


-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com
mailto:mariusz.gronczew...@efigence.com




Re: Round Robin not very random

2015-01-15 Thread Mariusz Gronczewski
We use leastconn to work around Java apps having to GC (so a GCing machine
gets fewer connections while a full GC runs).

The problem with using it for HTTP is that it can be pretty uneven with a
lot of short-lived connections, but so far that has not been a problem for
us; we usually use leastconn on app backends and round-robin on
static/varnish backends.


On Thu, 15 Jan 2015 22:14:18 +0800, Alexey Zilber
alexeyzil...@gmail.com wrote:

> Hi Vivek,
>
>   You're correct.  I think the situation was that there was a huge influx
> of traffic, and some servers went over their tipping point of how much they
> can handle quickly.  This caused connections to stack up as some servers
> choked.  Would leastconn give the same performance as roundrobin?  I
> noticed in the haproxy docs that it's not recommended for http, which is
> what we're using.  Would it be an issue to use leastconn for http?
>
> -Alex
>
> On Thu, Jan 15, 2015 at 9:41 PM, Vivek Malik vivek.ma...@gmail.com wrote:
>
>> I see roundrobin working perfectly over here. Look at sessions total and
>> see how they are the same for every server.
>>
>> It seems that all your requests are not the same workload. Some servers or
>> some requests are taking longer to fulfill and increasing load on servers.
>> Have you tried using leastconn instead of round robin?
>>
>> That might give a more fair distribution of load in this situation.
>>
>> Regards,
>> Vivek
>> On Jan 14, 2015 11:45 PM, Alexey Zilber alexeyzil...@gmail.com wrote:
>>
>>> Hi All,
>>>
>>>   We got hit with a bit of traffic and we saw haproxy dump most of the
>>> traffic to 3-4 app servers, sometimes even just one, driving load on
>>> there to 90.  We were running 1.5.9, I upgraded to 1.5.10 and the same
>>> problem remained.  Currently traffic is low so everything is load balanced
>>> evenly, but we expect another spike in a few hours and I expect the issue
>>> to return.
>>>
>>> Here's what haproxy-status looked like:
>>>
>>> [screenshot omitted]
>>>
>>> Do I need to switch to maybe add a weight and tracking?  We have 12
>>> frontend appservers load balancing to 28.  All run haproxy and the app
>>> server software.
>>>
>>> Thanks!
>>> Alex



-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com
mailto:mariusz.gronczew...@efigence.com




Re: tcp resets on reload haproxy

2012-03-21 Thread Mariusz Gronczewski
2012/3/21 Willy Tarreau w...@1wt.eu:
>> Can I do something to fix this?
>
> Krisztian Ivancso is working on FD passing between the old and the new
> process, which should catch most of these issues. The difficulty remains
> in identifying which FD can be reused and possibly adjusted when a number
> of options have been set (eg: MSS, interface binding, ...).
>
> In the mean time there is a kernel patch available on the site to enable
> SO_REUSE_PORT, which allows both processes to bind the port at the same
> time. It totally clears the uncertainty window since the new process binds
> and only then asks the other one to release the ports. But still there are
> a few RST left due to the half-open connections that cannot be transferred.
>
> Regards,
> Willy


There is a simple and ugly hack for that: you can block outgoing RST
packets with iptables while restarting haproxy, so clients who sent a SYN
while the haproxy port was down will just retransmit.
Another way would be to start a new haproxy copy on another port, add an
iptables REDIRECT to it, then reload the config on the main instance and
remove the REDIRECT, which would be even uglier as you'd need two
different configs.
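
The first hack might be sketched like this (the port and the restart command are illustrative; DRY_RUN defaults to 1 so the commands are only printed, since actually running them needs root):

```shell
# Sketch of the RST-blocking reload hack: drop outgoing RSTs from the
# haproxy port for the duration of the restart so clients retransmit
# their SYNs instead of getting a connection refused.
PORT=80
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

reload_with_rst_block() {
  run iptables -I OUTPUT -p tcp --sport "$PORT" --tcp-flags RST RST -j DROP
  run /etc/init.d/haproxy restart
  run iptables -D OUTPUT -p tcp --sport "$PORT" --tcp-flags RST RST -j DROP
}
reload_with_rst_block
```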


-- 
Mariusz Gronczewski



Re: balance by selecting host with lowest latency?

2011-12-08 Thread Mariusz Gronczewski
2011/12/6 Wout Mertens wout.mert...@gmail.com:
>
> On Dec 6, 2011, at 21:58, Allan Wind wrote:
>
>> On 2011-12-06 21:38:40, Wout Mertens wrote:
>>> So if you're doing HTTP load balancing for app servers, it seems to me that
>>> the server that responded fastest last time should get the job.
>>>
>>> HAproxy is already capturing the response times at each request so I think
>>> this would allow for really responsive fair load balancing.
>>
>> Would that algorithm not be subject to oscillations?  First we
>> send n requests to backend 1, then we send n requests to backend
>> 2 as 1 is now slow.
>>
>> If n is big enough would this not cause cascade of backend
>> failures?  Opposed to spreading out the load over all backends.
>
> Hmmm good point… Some sort of damping algorithm would be needed.
>
> For example, the rolling response time of the last 10 requests should be used.
>
> Additionally, the response time could change the server weight instead, and
> connections would be delivered according to the normal weighing algorithm. So
> when you have 2 servers and one is much faster, both servers gradually get a
> weight that corresponds to their speed. In a stable situation, weight*avg
> response time would be equal for all servers.

IMO weighting backends should be done independently of haproxy; there are
too many variables. Response time is a particularly bad signal: imagine
you have some leased dedicated servers and one of them has a slightly
higher ping to the LB. Even though the two servers are the same, your
load would be unbalanced.

What I would like is the ability to set the weight from a value returned
by the healthcheck (with some optional averaging), so in the simplest
cases (a CPU-bound app server) the healthcheck would only have to return
load_average/no_cores to get fairly equal load balancing.
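
For what it's worth, later HAProxy releases (1.5+) added essentially this as agent-check: an external agent on each server can report a weight percentage that scales the server's effective weight. A sketch (addresses, ports, and intervals are illustrative):

```haproxy
# Sketch: agent-check lets a small TCP agent on each server report e.g.
# "75%\n" to scale that server's effective weight dynamically.
backend app_servers
    balance roundrobin
    server app1 192.0.2.11:8080 check weight 100 agent-check agent-port 9999 agent-inter 5s
    server app2 192.0.2.12:8080 check weight 100 agent-check agent-port 9999 agent-inter 5s
```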



Re: Problem with rewrites + SSL

2011-10-20 Thread Mariusz Gronczewski
2011/10/18 Saul s...@extremecloudsolutions.com:
> Hello List,
>
> I am having an issue trying to translate some urls with my haproxy
> setup and I'm hoping someone can shed some light.
>
> Information:
>
> 4 apache servers need a reliable LB such as HA. These apache servers
> are listening on 80,443 however all traffic gets rewritten (with
> apache re-writes) to https if the request comes on port 80, currently
> there is just a firewall with dnat.
>
> The apaches are not serving content directly from disk but rather
> proxy passing to backend servers based on the request, this
> information is only relevant because of the different hostnames that a
> client will be hitting when connecting to the site.
The 'usual' solution for that is to put a reverse proxy doing SSL
(stunnel or some light http server like lighttpd or nginx) in front of
haproxy (so all requests go through haproxy) and then send something like
"X-SSL: Yes" in a header to the backend.



Re: about nbproc in conf

2011-10-20 Thread Mariusz Gronczewski
2011/10/19 wsq003 wsq...@sina.com:
> Hi
>
> In the manual there is the following:
>
> nbproc <number>
>   Creates <number> processes when going daemon. This requires the daemon
>   mode. By default, only one process is created, which is the recommended mode
>   of operation. For systems limited to small sets of file descriptors per
>   process, it may be needed to fork multiple daemons. USING MULTIPLE PROCESSES
>   IS HARDER TO DEBUG AND IS REALLY DISCOURAGED. See also "daemon".
>
> My question is how DISCOURAGED is it? Here we need to handle a lot of small
> http requests and small responses, and haproxy needs to handle more than 20k
> requests per second. A single process on a single CPU seems not enough.
> (We may add ACL config in the future, and haproxy would be even busier.)
>
> So, I want to set nbproc to 4 or 8.
>
> For now I know some shortcomings of multi-process mode haproxy:
> 1, it causes many context switches (after testing, this is acceptable)
> 2, it becomes difficult to get a status report for every process (we
> found some ways to make it acceptable, though it's still painful)
>
> Is there any other shortcoming that I do not realize?
stick tables won't work, as they are per-process.

The main reason is that most of the work is done in the kernel (which is
multithreaded), so more processes just add more context switching and
make it harder to debug. Usually at least 3/4 of the time is spent in
kernelspace, so userspace just has to be fast enough. Unless you have
24 cores on the loadbalancer, 1 process should be fine. Some people have
tried binding the network card IRQs to a CPU with good results, or binding
haproxy to one core and the network IRQs to the other cores.

Try something like the 'processes' plugin of collectd; it can show you how
much % is used in system/user per process.


Re: HAProxy 1.4.8 Tunning for load balancing 3 servers

2011-09-29 Thread Mariusz Gronczewski
Hi,

IMO the 1st thing would be setting up some kind of monitoring (haproxy
frontend/backend stats and also server load) and haproxy logging; this
makes debugging easier.
Also check that your benchmarking tool didn't max out CPU/bandwidth, and
try to use multiple machines for it (then you can get stats about
conns/sec from haproxy).

An alternative way of balancing load is (in addition to weight) setting a
connection limit per server. If you know that, for example, the small
server can handle 20 simultaneous connections and beyond that its
performance drops, set maxconn for that server to 20 and let haproxy
queue/redispatch ("option redispatch") the excess connections, so your
servers always work at their maximum performance point.

What is the average response time on a non-loaded server? If, for
example, serving a request takes 10 ms, then with 128 simultaneous
connections your benchmark tool can't do more than 12,800 req/sec; if
it's 50 ms, it can't do more than 1 / (0.05 / 128) = 2,560 req/sec.
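
That ceiling is just concurrency divided by service time (Little's law for a closed-loop benchmark); a quick sketch of the arithmetic:

```shell
# Closed-loop benchmark ceiling: C concurrent connections, each request
# taking S milliseconds of service time, caps the tool at C * 1000 / S
# requests per second.
max_rps() { echo $(( $1 * 1000 / $2 )); }  # $1 = concurrency, $2 = ms/request

max_rps 128 10   # -> 12800 req/s
max_rps 128 50   # -> 2560 req/s
```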

2011/9/28 Ivan Hernandez ihernan...@kiusys.com:
> Hello,
>
> I have 3 webservers: a little old one that can handle 350 req/s, a middle one
> that handles 1700 req/s and a bigger one that handles 2700 req/s in tests with
> the apache benchmark tool at 128 simultaneous connections. So I decided to
> put haproxy as load balancer on another server so I can (theoretically) reach
> up to 4500 req/s.
>
> I worked for a while trying many different configurations but the system
> seems to have a limit of the fastest server in the whole cluster. If I take
> 1 or 2 servers out of the cluster, the haproxy performance is always the
> same as the fastest server in the cluster.
> Of course, the load of each individual server goes down, which means that
> requests are distributed between them, but speed doesn't go up.
>
> So, here I copy my config in case it has some obvious error:
>
> Thanks !
> Ivan
>
> global
>    log 127.0.0.1    local0
>    log 127.0.0.1    local1 notice
>    maxconn 8192
>    user haproxy
>    group haproxy
>
> defaults
>    log    global
>    retries    3
>    maxconn    8192
>    contimeout    5000
>    clitimeout    5
>    srvtimeout    5
>
> listen  web-farm 0.0.0.0:80
>    mode http
>    option httpclose
>    option abortonclose
>    balance roundrobin
>    server small 192.168.1.100:80 weight 1 check inter 2000 rise 2 fall 5
>    server medium 192.168.1.101:80 weight 2 check inter 2000 rise 2 fall 5
>    server big 192.168.1.102:80 weight 8 check inter 2000 rise 2 fall 5






Re: A (Hopefully Not too Generic) Question About HAProxy

2010-05-19 Thread Mariusz Gronczewski
One more thing about the config: you don't need to do

acl is_msn01 hdr_sub(X-Forwarded-For) 64.4.0
acl is_msn02 hdr_sub(X-Forwarded-For) 64.4.1
acl is_msn03 hdr_sub(X-Forwarded-For) 64.4.2

and then

  use_backend robot_traffic if is_msn01 or is_msn02 or is_msn03

you can just do

acl is_msn hdr_sub(X-Forwarded-For) 64.4.0
acl is_msn hdr_sub(X-Forwarded-For) 64.4.1
acl is_msn hdr_sub(X-Forwarded-For) 64.4.2

and then

 use_backend robot_traffic if is_msn

ACLs with the same name are automatically ORed together.

Or better yet, match bots by User-Agent, not by IP:
http://www.useragentstring.com/pages/useragentstring.php
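
The User-Agent approach might look like this (the substrings are illustrative examples, not a complete bot list):

```haproxy
# Sketch: route bot traffic by User-Agent substring instead of source IP.
# The ACL name and patterns are illustrative.
acl is_bot hdr_sub(User-Agent) -i msnbot
acl is_bot hdr_sub(User-Agent) -i googlebot
use_backend robot_traffic if is_bot
```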


Re: Problem with source IP when balancing SSL requests

2010-02-17 Thread Mariusz Gronczewski
Hi
2010/2/17 Przemek Konopczynski cz...@o2.pl:
> Hi everyone
> I have a haproxy server for balancing https. Haproxy works in TCP mode, and I
> want to see the real IP address of the client request in the server log. Is it
> possible?
> The forwardfor option works only in http mode. Now in the server log I see
> only my haproxy-server local IP address.
>
> Regards
> Przemek

I'd just use LVS for the frontend and then tell the SSL servers to add
X-Forwarded-For and connect to haproxy. In my setup I have lighttpd doing
SSL, and "option forwardfor except 127.0.0.1" in the haproxy config.

Regards
Mariusz


Re: dynamic weights based on actual server load

2009-10-16 Thread Mariusz Gronczewski
2009/10/16 Craig cr...@haquarter.de:
> Hi,
>
> a patch (set weight/get weight) I imagined some days ago was integrated
> just 6hrs after I had thought about it (Willy must be reading them!).
>
> I've written a simple (exchangeable) interface that prints out a server's
> load and a client to read it. I plan to read the load from all servers
> and adjust the weight dynamically according to the load so that a very
> busy server gets fewer queries and the whole farm is more balanced. I
> plan to smoothen the increasing/decreasing a bit so that there aren't
> too great jumps with the weight; I want to implement a policy of
> something like "oh, that server can do 50% more, let's just increase the
> weight by 25% and check again in a minute". I hope this will autobalance
> servers with different hardware quite well, so that you don't have to
> guess or do performance tests to get the weights properly.
>
> Some python code is already finished (partly because I'd like to
> practise a bit) but I didn't continue yet, because I'd like to hear your
> opinions about this.

Hi

I'm sure a lot of people (including me) will be interested in that (I
would probably have had to write something similar in a month or so). By
"load" do you mean /proc/loadavg? If so, you might want to include
some kind of per-server tuning of a multiplier, because obviously on a
16-core server a loadavg of 10 would be moderately loaded, while on a
4-core server it would be overloaded.

Regards
Mariusz



Re: httpchk disable on 200?

2009-10-01 Thread Mariusz Gronczewski
Hi

You could make a simple script opening a file and returning 404 if it
exists, like:

<?php
if (@fopen($FileName, 'r')) {
    header("HTTP/1.0 404 Not Found");
    echo "error";
} else {
    echo "ok";
}
?>

2009/10/1 Kelly Kane kelly.k...@openx.org:
> Hello,
>
>  We use HAProxy for our frontend content delivery where I work. We're
> looking for a method to have multiple processes be able to disable a
> webserver in haproxy temporarily so they can perform maintenance features.
> To this end using a killfile rather than a healthfile would be more
> useful to us, where 404=OK and 200=NOLB. This would allow us to easily write
> in locks to the file of various processes which want the webserver out of
> the mix to this file, and remove it if it's empty.
>
>   Is something like this feasible in the current HAProxy infrastructure? Is
> there a hack that anyone has thought up for doing this? I've googled around
> but I haven't come up with anything other than http-check disable-on-404 and
> httpchk. We're running HAProxy 1.3.18 on pretty heavily customized CentOS 5
> Linux.
>
> Thanks,
> Kelly




Re: Nbproc question

2009-09-30 Thread Mariusz Gronczewski
2009/9/29 Willy Tarreau w...@1wt.eu:
> On Tue, Sep 29, 2009 at 10:41:28AM -0700, David Birdsong wrote:
> (...)
>>> Which translates into that for one CPU :
>>>  10% user
>>>  40% system
>>>  50% soft-irq
>>>
>>> This means that 90% of the time is spent in the kernel (network stack+drivers).
>>> Do you have a high bit rate (multi-gigabit) ? Are you sure you aren't running
>>> with any ip_conntrack/nf_conntrack module loaded ? Can you show the output of
>>
>> do you recommend against these modules?  we have a stock fedora 10
>> kernel that has nf_conntrack compiled in statically.
>
> By default I recommend against it because it's never tuned for server usage,
> and if people don't know if they are using it, then they might be using it
> with inadequate desktop tuning.
>
>> i've increased:
>> /proc/sys/net/netfilter/nf_conntrack_max but is it correct to expect
>> connection tracking to add kernel networking cpu overhead due to
>> netfilter?  i've speculated that it might, but fruitless searches for
>> discussions that would suggest so have restrained me from bothering to
>> re-compile a custom kernel for our haproxy machines.
>
> Yes, from my experience, using conntrack on a machine (with large enough
> hash buckets) still results in 1/3 of the CPU being usable for haproxy+system
> and 2/3 being consumed by conntrack. You must understand that when running
> conntrack on a proxy, it has to set up and tear down two connections per
> proxy connection, explaining why it ends up with that amount of CPU used.
>
> Often if you absolutely need conntrack to NAT packets, the solution consists
> in setting it on one front machine and having the proxies on a second level
> machine (run both in series). It will *triple* the performance because the
> number of conntrack entries will be halved and it will have more CPU to run.
You could also try something like this:

# iptables -I PREROUTING -p tcp --dport 80 -j NOTRACK
# iptables -I OUTPUT -p tcp --dport 80 -j NOTRACK

it should disable connection tracking for packets to/from haproxy.

Regards
Mariusz



Re: Nbproc question

2009-09-30 Thread Mariusz Gronczewski
2009/9/30 Mariusz Gronczewski xani...@gmail.com:
> [earlier quote trimmed]
>
> You could also try to do something like this
>
> # iptables -I PREROUTING -p tcp --dport 80 -j NOTRACK
> # iptables -I OUTPUT -p tcp --dport 80 -j NOTRACK
# iptables -t raw -I PREROUTING -p tcp --dport 80 -j NOTRACK
# iptables -t raw -I OUTPUT -p tcp --dport 80 -j NOTRACK

Sorry for the mistake; the NOTRACK target lives in the raw table, so
"-t raw" is required.