Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-04 Thread Baptiste
By the way, this rule is useless as long as you enable mode http,
because the check is already implied by it.
# Every header should end with a colon followed by one space.
reqideny ^[^:\ ]*[\ ]*$
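
As a quick sanity check (just a sketch: it uses the anonymized frontend
address from the config below, and the exact status text may differ), you can
send a header line with no colon and watch haproxy in mode http reject it on
its own:

printf 'GET / HTTP/1.0\r\nHost: sandka.org\r\nbad header no colon\r\n\r\n' | nc 123.456.789.123 80
# expected: an HTTP/1.0 400 Bad request response, even without the reqideny rule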

Cheers


On Thu, Nov 3, 2011 at 5:47 PM, Cyril Bonté  wrote:
> On Thursday 3 November 2011 at 17:34:38, Benoit GEORGELIN wrote:
>> Can you give me more details about your analysis (examples)?
>> I will try to understand better what's happening.
>>
>>
>> Is it the response that is incomplete, or only the headers?
>
> The body is incomplete. I tried with the examples I provided in my first
> mail.
>
> Examples:
> curl -si "http://sandka.org/portfolio/" => HTTP/1.0 200 OK with the html cut
> off in the middle.
> curl -si "http://sandka.org/portfolio/foobar" => HTTP/1.0 404 Not Found with
> the html cut off in the middle.
>
> There's something bad in ZenPhoto: it forces responses to HTTP/1.0, which
> prevents chunked transfers. That can also explain why mod_deflate generated
> 502 errors.
>
> One thing you can try:
> Edit the file index.php in ZenPhoto and replace the "HTTP/1.0" occurrences
> (one for the 200 response, one for the 404) with "HTTP/1.1". Hopefully, this
> will allow apache+php to use chunked responses and solve the problem.
>
> --
> Cyril Bonté
>
>



Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Cyril Bonté
On Thursday 3 November 2011 at 17:34:38, Benoit GEORGELIN wrote:
> Can you give me more details about your analysis (examples)?
> I will try to understand better what's happening.
> 
> 
> Is it the response that is incomplete, or only the headers?

The body is incomplete. I tried with the examples I provided in my first
mail.

Examples:
curl -si "http://sandka.org/portfolio/" => HTTP/1.0 200 OK with the html cut
off in the middle.
curl -si "http://sandka.org/portfolio/foobar" => HTTP/1.0 404 Not Found with
the html cut off in the middle.

There's something bad in ZenPhoto: it forces responses to HTTP/1.0, which
prevents chunked transfers. That can also explain why mod_deflate generated
502 errors.

One thing you can try:
Edit the file index.php in ZenPhoto and replace the "HTTP/1.0" occurrences
(one for the 200 response, one for the 404) with "HTTP/1.1". Hopefully, this
will allow apache+php to use chunked responses and solve the problem.
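
For example (a sketch only: check what grep actually matches first, since the
exact status strings in ZenPhoto's index.php may differ from the ones assumed
here):

grep -n 'HTTP/1.0' index.php
sed -i.bak 's|HTTP/1.0 200 OK|HTTP/1.1 200 OK|; s|HTTP/1.0 404 Not Found|HTTP/1.1 404 Not Found|' index.php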

-- 
Cyril Bonté



Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Benoit GEORGELIN (web4all)
Can you give me more details about your analysis (examples)?
I will try to understand better what's happening.


Is it the response that is incomplete, or only the headers?


Thanks


Regards,

Benoît Georgelin


To help protect the environment, please print this email only if
necessary.

- Original message -

From: "Cyril Bonté" 
To: "Benoit GEORGELIN (web4all)" 
Cc: haproxy@formilux.org
Sent: Thursday 3 November 2011 10:54:46
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

On Thursday 3 November 2011 at 15:53:50, Benoit GEORGELIN wrote:
> It's working better, but now I have some blank pages.

Yes, responses are still truncated most of the time.

>
> Cordialement,
>
>
> To help protect the environment, please print this email only if
> necessary.
>
> - Original message -
>
> From: "Benoit GEORGELIN (web4all)" 
> To: "Cyril Bonté" 
> Cc: haproxy@formilux.org
> Sent: Thursday 3 November 2011 10:47:57
> Subject: Re: Haproxy 502 errors, all the time on specific sites or backend
>
>
> Hmm, very interesting: I disabled mod_deflate and now it's working like a
> charm :( Do you know why?
>
>
> Regards,
>
> Benoît Georgelin
>
> ----- Original message -----
>
> From: "Cyril Bonté" 
> To: "Benoit GEORGELIN (web4all)" 
> Cc: haproxy@formilux.org
> Sent: Thursday 3 November 2011 10:32:06
> Subject: Re: Haproxy 502 errors, all the time on specific sites or backend
>
> Hi Benoit,
>
> On Thursday 3 November 2011 at 14:46:10, Benoit GEORGELIN wrote:
> > Hi !
> >
> > My name is Benoît and I'm in a non-profit project that provides web
> > hosting. We are using Haproxy and we have a lot of problems with 502
> > errors :(
> >
> >
> > So, I would like to know how to really debug this and find solutions :)
> > There are some cases in the mailing list archives, but I would appreciate
> > it if someone could guide me through a real case on our infrastructure.
>
> My first observations, if it can help someone to target the issue:
> in your servers' responses, there is no Content-Length header, and this can
> cause some trouble.
>
> 502 errors occur when asking for compressed data:
> - curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
> => HTTP/1.0 502 Bad Gateway
> - curl -si http://sandka.org/portfolio/
> => results in a truncated page without a Content-Length header
>
> We'll have to find out why your backends don't provide a Content-Length header
> (and what happens with compression, which should be sent in chunks).
> > Details:
> >
> >
> > Haproxy Stable 1.4.18
> > OS: Debian Lenny
> >
> > Configuration File:
> >
> >
> > ## 
> >
> > global
> >
> >
> > log 127.0.0.1 local0 notice #debug
> > maxconn 20000 # count about 1 GB per 20000 connections
> > ulimit-n 40046
> >
> >
> > tune.bufsize 65536 # Necessary for lots of CMS pages like Prestashop :(
> > tune.maxrewrite 1024
> >
> >
> > #chroot /usr/share/haproxy
> > user haproxy
> > group haproxy
> > daemon
> > #nbproc 4
> > #debug
> > #quiet
> >
> >
> > defaults
> > log global
> > mode http
> > retries 3 # 2 -> 3 on 06102011 #
> > maxconn 19500 # Should be slightly smaller than global.maxconn.
> >
> >
> >  OPTIONS ##
> > option dontlognull
> > option abortonclose
> > #option redispatch # Disabled on 06102011 because balancing is in source
> > mode, not RR # option tcpka
> > #option log-separate-errors
> > #option logasap
> >
> >
> >  TIMEOUT ##
> > timeout client 30s #1m 40s Client and server timeout must match the longest
> > timeout server 30s #1m 40s time we may wait for a response from the server.
> > timeout queue 30s #1m 40s Don't queue requests too long if saturated.
> > timeout connect 5s #10s 5s There's no reason to change this one.
> > timeout http-request 5s #10s 5s A complete request may never take that long
> > timeout http-keep-alive 10s
> > timeout check 10s #10s
> >
> > ###
> > # F R O N T E N D P U B L I C B E G I N
> > #
> > frontend public
> > bind 123.456.789.123:80
> > default_backend webserver
> >
> >
> >  OPTIONS ##
> > option dontlognull
> > #option httpclose
> > 

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Cyril Bonté
On Thursday 3 November 2011 at 15:53:50, Benoit GEORGELIN wrote:
> It's working better, but now I have some blank pages.

Yes, responses are still truncated most of the time.

> 
> Cordialement,
> 
> 
> To help protect the environment, please print this email only if
> necessary.
> 
> - Original message -
> 
> From: "Benoit GEORGELIN (web4all)" 
> To: "Cyril Bonté" 
> Cc: haproxy@formilux.org
> Sent: Thursday 3 November 2011 10:47:57
> Subject: Re: Haproxy 502 errors, all the time on specific sites or backend
> 
> 
> Hmm, very interesting: I disabled mod_deflate and now it's working like a
> charm :( Do you know why?
> 
> 
> Regards,
> 
> Benoît Georgelin
> 
> - Original message -
> 
> From: "Cyril Bonté" 
> To: "Benoit GEORGELIN (web4all)" 
> Cc: haproxy@formilux.org
> Sent: Thursday 3 November 2011 10:32:06
> Subject: Re: Haproxy 502 errors, all the time on specific sites or backend
> 
> Hi Benoit,
> 
> On Thursday 3 November 2011 at 14:46:10, Benoit GEORGELIN wrote:
> > Hi !
> > 
> > My name is Benoît and I'm in a non-profit project that provides web
> > hosting. We are using Haproxy and we have a lot of problems with 502
> > errors :(
> > 
> > 
> > So, I would like to know how to really debug this and find solutions :)
> > There are some cases in the mailing list archives, but I would appreciate
> > it if someone could guide me through a real case on our infrastructure.
> 
> My first observations, if it can help someone to target the issue:
> in your servers' responses, there is no Content-Length header, and this can
> cause some trouble.
> 
> 502 errors occur when asking for compressed data:
> - curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
> => HTTP/1.0 502 Bad Gateway
> - curl -si http://sandka.org/portfolio/
> => results in a truncated page without a Content-Length header
> 
> We'll have to find out why your backends don't provide a Content-Length header
> (and what happens with compression, which should be sent in chunks).
> > Details:
> > 
> > 
> > Haproxy Stable 1.4.18
> > OS: Debian Lenny
> > 
> > Configuration File:
> > 
> > 
> > ##
> > 
> > global
> > 
> > 
> > log 127.0.0.1 local0 notice #debug
> > maxconn 20000 # count about 1 GB per 20000 connections
> > ulimit-n 40046
> > 
> > 
> > tune.bufsize 65536 # Necessary for lots of CMS pages like Prestashop :(
> > tune.maxrewrite 1024
> > 
> > 
> > #chroot /usr/share/haproxy
> > user haproxy
> > group haproxy
> > daemon
> > #nbproc 4
> > #debug
> > #quiet
> > 
> > 
> > defaults
> > log global
> > mode http
> > retries 3 # 2 -> 3 on 06102011 #
> > maxconn 19500 # Should be slightly smaller than global.maxconn.
> > 
> > 
> >  OPTIONS ##
> > option dontlognull
> > option abortonclose
> > #option redispatch # Disabled on 06102011 because balancing is in source
> > mode, not RR # option tcpka
> > #option log-separate-errors
> > #option logasap
> > 
> > 
> >  TIMEOUT ##
> > timeout client 30s #1m 40s Client and server timeout must match the longest
> > timeout server 30s #1m 40s time we may wait for a response from the server.
> > timeout queue 30s #1m 40s Don't queue requests too long if saturated.
> > timeout connect 5s #10s 5s There's no reason to change this one.
> > timeout http-request 5s #10s 5s A complete request may never take that long
> > timeout http-keep-alive 10s
> > timeout check 10s #10s
> > 
> > ###
> > # F R O N T E N D P U B L I C B E G I N
> > #
> > frontend public
> > bind 123.456.789.123:80
> > default_backend webserver
> > 
> > 
> >  OPTIONS ##
> > option dontlognull
> > #option httpclose
> > option httplog
> > option http-server-close
> > # option dontlog-normal
> > 
> > 
> > # URL handling # All commented out on 21/10/2011
> > # log the name of the virtual server
> > capture request header Host len 60
> > 
> > 
> > 
> > 
> > #
> > # F R O N T E N D P U B L I C E N D
> > ###
> > 
> > ###
> > # B A C K E N

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Benoit GEORGELIN (web4all)
It's working better, but now I have some blank pages.


Regards,


To help protect the environment, please print this email only if
necessary.

- Original message -

From: "Benoit GEORGELIN (web4all)" 
To: "Cyril Bonté" 
Cc: haproxy@formilux.org
Sent: Thursday 3 November 2011 10:47:57
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend


Hmm, very interesting: I disabled mod_deflate and now it's working like a
charm :(
Do you know why?


Regards,

Benoît Georgelin

- Original message -

From: "Cyril Bonté" 
To: "Benoit GEORGELIN (web4all)" 
Cc: haproxy@formilux.org
Sent: Thursday 3 November 2011 10:32:06
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

Hi Benoit,

On Thursday 3 November 2011 at 14:46:10, Benoit GEORGELIN wrote:
> Hi !
>
> My name is Benoît and I'm in a non-profit project that provides web hosting.
> We are using Haproxy and we have a lot of problems with 502 errors :(
>
>
> So, I would like to know how to really debug this and find solutions :)
> There are some cases in the mailing list archives, but I would appreciate it
> if someone could guide me through a real case on our infrastructure.

My first observations, if it can help someone to target the issue:
in your servers' responses, there is no Content-Length header, and this can
cause some trouble.

502 errors occur when asking for compressed data:
- curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
=> HTTP/1.0 502 Bad Gateway
- curl -si http://sandka.org/portfolio/
=> results in a truncated page without a Content-Length header

We'll have to find out why your backends don't provide a Content-Length header
(and what happens with compression, which should be sent in chunks).

> Details:
>
>
> Haproxy Stable 1.4.18
> OS: Debian Lenny
>
> Configuration File:
>
>
> ##
>
> global
>
>
> log 127.0.0.1 local0 notice #debug
> maxconn 20000 # count about 1 GB per 20000 connections
> ulimit-n 40046
>
>
> tune.bufsize 65536 # Necessary for lots of CMS pages like Prestashop :(
> tune.maxrewrite 1024
>
>
> #chroot /usr/share/haproxy
> user haproxy
> group haproxy
> daemon
> #nbproc 4
> #debug
> #quiet
>
>
> defaults
> log global
> mode http
> retries 3 # 2 -> 3 on 06102011 #
> maxconn 19500 # Should be slightly smaller than global.maxconn.
>
>
>  OPTIONS ##
> option dontlognull
> option abortonclose
> #option redispatch # Disabled on 06102011 because balancing is in source mode,
> not RR # option tcpka
> #option log-separate-errors
> #option logasap
>
>
>  TIMEOUT ##
> timeout client 30s #1m 40s Client and server timeout must match the longest
> timeout server 30s #1m 40s time we may wait for a response from the server.
> timeout queue 30s #1m 40s Don't queue requests too long if saturated.
> timeout connect 5s #10s 5s There's no reason to change this one.
> timeout http-request 5s #10s 5s A complete request may never take that long
> timeout http-keep-alive 10s
> timeout check 10s #10s
>
> ###
> # F R O N T E N D P U B L I C B E G I N
> #
> frontend public
> bind 123.456.789.123:80
> default_backend webserver
>
>
>  OPTIONS ##
> option dontlognull
> #option httpclose
> option httplog
> option http-server-close
> # option dontlog-normal
>
>
> # URL handling # All commented out on 21/10/2011
> # log the name of the virtual server
> capture request header Host len 60
>
>
>
>
> #
> # F R O N T E N D P U B L I C E N D
> ###
>
> ###
> # B A C K E N D W E B S E R V E R B E G I N
> #
> backend webserver
> balance source # Re-enabled on 06102011 #
> #balance roundrobin # Disabled on 06102011 #
>
>
>  OPTIONS ##
> option httpchk
> option httplog
> option forwardfor
> #option httpclose # Disabled on 06102011 #
> option http-server-close
> option http-pretend-keepalive
>
>
> retries 5
> cookie SERVERID insert indirect
>
>
> # Detect an ApacheKiller-like Attack
> acl killerapache hdr_cnt(Range) gt 10
> # Clean up the request
> reqidel ^Range if killerapache
>
>
>
> server http-A 192.168.0.1:80 cookie http-A check inter 5000
> server http-B 192.168.1.1:80 cookie http-B check inter 5000
> server http-C 192.168.2.1:80 cookie http-C check inter 5000
> server http-D 192.168.3.1:80 cookie http-D check inter 5000
> server http-E 192.168.4.1:80 cookie http-E check inter 5000
>
>
> # Every header should end with a colon followed by one space.
> reqideny ^[^:\ ]*[\ ]*$
>
>
> # block Apache chunk exploit
> reqideny ^Transfer-Encoding:[\ ]*chunked
> reqideny ^Host:\ apache-
>
>
> # block annoying worms that fill the logs...
> reqideny ^[^:\ ]*\ .*(\.|%2e)(\.|%2e)(%2f|%5c|/|  )
> reqideny ^[^:\ ]*\ ([^\ ]*\ [^\ ]*\ |.*%00)
> reqideny ^[^:\ ]*\ .*

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Benoit GEORGELIN (web4all)
Hmm, very interesting: I disabled mod_deflate and now it's working like a
charm :(
Do you know why?


Regards,

Benoît Georgelin

- Original message -

From: "Cyril Bonté" 
To: "Benoit GEORGELIN (web4all)" 
Cc: haproxy@formilux.org
Sent: Thursday 3 November 2011 10:32:06
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

Hi Benoit,

On Thursday 3 November 2011 at 14:46:10, Benoit GEORGELIN wrote:
> Hi !
>
> My name is Benoît and I'm in a non-profit project that provides web hosting.
> We are using Haproxy and we have a lot of problems with 502 errors :(
>
>
> So, I would like to know how to really debug this and find solutions :)
> There are some cases in the mailing list archives, but I would appreciate it
> if someone could guide me through a real case on our infrastructure.

My first observations, if it can help someone to target the issue:
in your servers' responses, there is no Content-Length header, and this can
cause some trouble.

502 errors occur when asking for compressed data:
- curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
=> HTTP/1.0 502 Bad Gateway
- curl -si http://sandka.org/portfolio/
=> results in a truncated page without a Content-Length header

We'll have to find out why your backends don't provide a Content-Length header
(and what happens with compression, which should be sent in chunks).

> Details:
>
>
> Haproxy Stable 1.4.18
> OS: Debian Lenny
>
> Configuration File:
>
>
> ##
>
> global
>
>
> log 127.0.0.1 local0 notice #debug
> maxconn 20000 # count about 1 GB per 20000 connections
> ulimit-n 40046
>
>
> tune.bufsize 65536 # Necessary for lots of CMS pages like Prestashop :(
> tune.maxrewrite 1024
>
>
> #chroot /usr/share/haproxy
> user haproxy
> group haproxy
> daemon
> #nbproc 4
> #debug
> #quiet
>
>
> defaults
> log global
> mode http
> retries 3 # 2 -> 3 on 06102011 #
> maxconn 19500 # Should be slightly smaller than global.maxconn.
>
>
>  OPTIONS ##
> option dontlognull
> option abortonclose
> #option redispatch # Disabled on 06102011 because balancing is in source mode,
> not RR # option tcpka
> #option log-separate-errors
> #option logasap
>
>
>  TIMEOUT ##
> timeout client 30s #1m 40s Client and server timeout must match the longest
> timeout server 30s #1m 40s time we may wait for a response from the server.
> timeout queue 30s #1m 40s Don't queue requests too long if saturated.
> timeout connect 5s #10s 5s There's no reason to change this one.
> timeout http-request 5s #10s 5s A complete request may never take that long
> timeout http-keep-alive 10s
> timeout check 10s #10s
>
> ###
> # F R O N T E N D P U B L I C B E G I N
> #
> frontend public
> bind 123.456.789.123:80
> default_backend webserver
>
>
>  OPTIONS ##
> option dontlognull
> #option httpclose
> option httplog
> option http-server-close
> # option dontlog-normal
>
>
> # URL handling # All commented out on 21/10/2011
> # log the name of the virtual server
> capture request header Host len 60
>
>
>
>
> #
> # F R O N T E N D P U B L I C E N D
> ###
>
> ###
> # B A C K E N D W E B S E R V E R B E G I N
> #
> backend webserver
> balance source # Re-enabled on 06102011 #
> #balance roundrobin # Disabled on 06102011 #
>
>
>  OPTIONS ##
> option httpchk
> option httplog
> option forwardfor
> #option httpclose # Disabled on 06102011 #
> option http-server-close
> option http-pretend-keepalive
>
>
> retries 5
> cookie SERVERID insert indirect
>
>
> # Detect an ApacheKiller-like Attack
> acl killerapache hdr_cnt(Range) gt 10
> # Clean up the request
> reqidel ^Range if killerapache
>
>
>
> server http-A 192.168.0.1:80 cookie http-A check inter 5000
> server http-B 192.168.1.1:80 cookie http-B check inter 5000
> server http-C 192.168.2.1:80 cookie http-C check inter 5000
> server http-D 192.168.3.1:80 cookie http-D check inter 5000
> server http-E 192.168.4.1:80 cookie http-E check inter 5000
>
>
> # Every header should end with a colon followed by one space.
> reqideny ^[^:\ ]*[\ ]*$
>
>
> # block Apache chunk exploit
> reqideny ^Transfer-Encoding:[\ ]*chunked
> reqideny ^Host:\ apache-
>
>
> # block annoying worms that fill the logs...
> reqideny ^[^:\ ]*\ .*(\.|%2e)(\.|%2e)(%2f|%5c|/|  )
> reqideny ^[^:\ ]*\ ([^\ ]*\ [^\ ]*\ |.*%00)
> reqideny ^[^:\ ]*\ .*

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Benoit GEORGELIN (web4all)
Thanks Cyril for these elements.

Here are the modules available on apache2:



actions alias auth_basic auth_mysql auth_pam authn_file authz_default 
authz_groupfile authz_host authz_user autoindex cache cgi deflate dir env 
expires headers include mime mod-evasive negotiation php5 python rewrite rpaf 
setenvif ssl status

Maybe one of them is causing trouble. I will look into the Content-Length header.
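
If it helps to bisect (a sketch, assuming the stock Debian apache2 layout),
the suspects can be disabled one at a time and retested:

a2dismod deflate
/etc/init.d/apache2 restart
curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/ | head -5
# re-enable afterwards with: a2enmod deflate && /etc/init.d/apache2 restart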

Regards,

Benoît Georgelin
Web 4 all, non-profit hosting provider
+33 977 218 005
+1 514 463 7255
benoit.georgelin@web 4 all.fr

To help protect the environment, please print this email only if
necessary.

- Original message -

From: "Cyril Bonté" 
To: "Benoit GEORGELIN (web4all)" 
Cc: haproxy@formilux.org
Sent: Thursday 3 November 2011 10:32:06
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

Hi Benoit,

On Thursday 3 November 2011 at 14:46:10, Benoit GEORGELIN wrote:
> Hi !
>
> My name is Benoît and I'm in a non-profit project that provides web hosting.
> We are using Haproxy and we have a lot of problems with 502 errors :(
>
>
> So, I would like to know how to really debug this and find solutions :)
> There are some cases in the mailing list archives, but I would appreciate it
> if someone could guide me through a real case on our infrastructure.

My first observations, if it can help someone to target the issue:
in your servers' responses, there is no Content-Length header, and this can
cause some trouble.

502 errors occur when asking for compressed data:
- curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
=> HTTP/1.0 502 Bad Gateway
- curl -si http://sandka.org/portfolio/
=> results in a truncated page without a Content-Length header

We'll have to find out why your backends don't provide a Content-Length header
(and what happens with compression, which should be sent in chunks).

> Details:
>
>
> Haproxy Stable 1.4.18
> OS: Debian Lenny
>
> Configuration File:
>
>
> ##
>
> global
>
>
> log 127.0.0.1 local0 notice #debug
> maxconn 20000 # count about 1 GB per 20000 connections
> ulimit-n 40046
>
>
> tune.bufsize 65536 # Necessary for lots of CMS pages like Prestashop :(
> tune.maxrewrite 1024
>
>
> #chroot /usr/share/haproxy
> user haproxy
> group haproxy
> daemon
> #nbproc 4
> #debug
> #quiet
>
>
> defaults
> log global
> mode http
> retries 3 # 2 -> 3 on 06102011 #
> maxconn 19500 # Should be slightly smaller than global.maxconn.
>
>
>  OPTIONS ##
> option dontlognull
> option abortonclose
> #option redispatch # Disabled on 06102011 because balancing is in source mode,
> not RR # option tcpka
> #option log-separate-errors
> #option logasap
>
>
>  TIMEOUT ##
> timeout client 30s #1m 40s Client and server timeout must match the longest
> timeout server 30s #1m 40s time we may wait for a response from the server.
> timeout queue 30s #1m 40s Don't queue requests too long if saturated.
> timeout connect 5s #10s 5s There's no reason to change this one.
> timeout http-request 5s #10s 5s A complete request may never take that long
> timeout http-keep-alive 10s
> timeout check 10s #10s
>
> ###
> # F R O N T E N D P U B L I C B E G I N
> #
> frontend public
> bind 123.456.789.123:80
> default_backend webserver
>
>
>  OPTIONS ##
> option dontlognull
> #option httpclose
> option httplog
> option http-server-close
> # option dontlog-normal
>
>
> # URL handling # All commented out on 21/10/2011
> # log the name of the virtual server
> capture request header Host len 60
>
>
>
>
> #
> # F R O N T E N D P U B L I C E N D
> ###
>
> ###
> # B A C K E N D W E B S E R V E R B E G I N
> #
> backend webserver
> balance source # Re-enabled on 06102011 #
> #balance roundrobin # Disabled on 06102011 #
>
>
>  OPTIONS ##
> option httpchk
> option httplog
> option forwardfor
> #option httpclose # Disabled on 06102011 #
> option http-server-close
> option http-pretend-keepalive
>
>
> retries 5
> cookie SERVERID insert indirect
>
>
> # Detect an ApacheKiller-like Attack
> acl killerapache hdr_cnt(Range) gt 10
> # Clean up the request
> reqidel ^Range if killerapache
>
>
>
> server http-A 192.168.0.1:80 cookie http-A check inter 5000
> server http-B 192.168.1.1:80 cookie http-B check inter 5000
> server http-C 192.168.2.1:80 cook

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Cyril Bonté
Hi Benoit,

On Thursday 3 November 2011 at 14:46:10, Benoit GEORGELIN wrote:
> Hi !
> 
> My name is Benoît and I'm in a non-profit project that provides web hosting.
> We are using Haproxy and we have a lot of problems with 502 errors :(
> 
> 
> So, I would like to know how to really debug this and find solutions :)
> There are some cases in the mailing list archives, but I would appreciate it
> if someone could guide me through a real case on our infrastructure.

My first observations, if it can help someone to target the issue:
in your servers' responses, there is no Content-Length header, and this can
cause some trouble.

502 errors occur when asking for compressed data:
- curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
=> HTTP/1.0 502 Bad Gateway
- curl -si http://sandka.org/portfolio/
=> results in a truncated page without a Content-Length header

We'll have to find out why your backends don't provide a Content-Length header
(and what happens with compression, which should be sent in chunks).
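
One way to narrow it down (a sketch: it assumes the backend addresses from the
config below are reachable from your test machine, and that sandka.org is one
of the vhosts they serve): compare the same request through haproxy and
directly against a backend, setting the Host header by hand.

curl -si http://sandka.org/portfolio/ | head -10
curl -si -H "Host: sandka.org" http://192.168.0.1/portfolio/ | head -10
# if the direct response also lacks Content-Length, the issue is on the
# apache/php side rather than in haproxy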

> Details:
> 
> 
> Haproxy Stable 1.4.18
> OS: Debian Lenny
> 
> Configuration File:
> 
> 
> ##
> 
> global
> 
> 
> log 127.0.0.1 local0 notice #debug
> maxconn 20000 # count about 1 GB per 20000 connections
> ulimit-n 40046
> 
> 
> tune.bufsize 65536 # Necessary for lots of CMS pages like Prestashop :(
> tune.maxrewrite 1024
> 
> 
> #chroot /usr/share/haproxy
> user haproxy
> group haproxy
> daemon
> #nbproc 4
> #debug
> #quiet
> 
> 
> defaults
> log global
> mode http
> retries 3 # 2 -> 3 on 06102011 #
> maxconn 19500 # Should be slightly smaller than global.maxconn.
> 
> 
>  OPTIONS ##
> option dontlognull
> option abortonclose
> #option redispatch # Disabled on 06102011 because balancing is in source mode,
> not RR # option tcpka
> #option log-separate-errors
> #option logasap
> 
> 
>  TIMEOUT ##
> timeout client 30s #1m 40s Client and server timeout must match the longest
> timeout server 30s #1m 40s time we may wait for a response from the server.
> timeout queue 30s #1m 40s Don't queue requests too long if saturated.
> timeout connect 5s #10s 5s There's no reason to change this one.
> timeout http-request 5s #10s 5s A complete request may never take that long
> timeout http-keep-alive 10s
> timeout check 10s #10s
> 
> ###
> # F R O N T E N D P U B L I C B E G I N
> #
> frontend public
> bind 123.456.789.123:80
> default_backend webserver
> 
> 
>  OPTIONS ##
> option dontlognull
> #option httpclose
> option httplog
> option http-server-close
> # option dontlog-normal
> 
> 
> # URL handling # All commented out on 21/10/2011
> # log the name of the virtual server
> capture request header Host len 60
> 
> 
> 
> 
> #
> # F R O N T E N D P U B L I C E N D
> ###
> 
> ###
> # B A C K E N D W E B S E R V E R B E G I N
> #
> backend webserver
> balance source # Re-enabled on 06102011 #
> #balance roundrobin # Disabled on 06102011 #
> 
> 
>  OPTIONS ##
> option httpchk
> option httplog
> option forwardfor
> #option httpclose # Disabled on 06102011 #
> option http-server-close
> option http-pretend-keepalive
> 
> 
> retries 5
> cookie SERVERID insert indirect
> 
> 
> # Detect an ApacheKiller-like Attack
> acl killerapache hdr_cnt(Range) gt 10
> # Clean up the request
> reqidel ^Range if killerapache
> 
> 
> 
> server http-A 192.168.0.1:80 cookie http-A check inter 5000
> server http-B 192.168.1.1:80 cookie http-B check inter 5000
> server http-C 192.168.2.1:80 cookie http-C check inter 5000
> server http-D 192.168.3.1:80 cookie http-D check inter 5000
> server http-E 192.168.4.1:80 cookie http-E check inter 5000
> 
> 
> # Every header should end with a colon followed by one space.
> reqideny ^[^:\ ]*[\ ]*$
> 
> 
> # block Apache chunk exploit
> reqideny ^Transfer-Encoding:[\ ]*chunked
> reqideny ^Host:\ apache-
> 
> 
> # block annoying worms that fill the logs...
> reqideny ^[^:\ ]*\ .*(\.|%2e)(\.|%2e)(%2f|%5c|/|  )
> reqideny ^[^:\ ]*\ ([^\ ]*\ [^\ ]*\ |.*%00)
> reqideny ^[^:\ ]*\ .*