AND OR priority when forming conditions

2023-02-24 Thread Arnall

Hello everyone,

I have been using Haproxy for years but I still have trouble 
understanding this part of the documentation:


7.2. Using ACLs to form conditions

A condition is formed as a disjunctive form:

   [!]acl1 [!]acl2 ... [!]acln  { or [!]acl1 [!]acl2 ... [!]acln } ...

First, it does not work "as is" if I try something like this:

tcp-request connection reject if { or blacklist_manual tor_ips } !whitelist

It leads to: error detected in frontend 'http_all' while parsing 'if'
condition : unknown fetch method 'or' in ACL expression 'or'.


tcp-request connection reject if { blacklist_manual || tor_ips } 
!whitelist does not work either.


In the end I wrote something like this:

tcp-request connection reject if blacklist_manual !whitelist || tor_ips 
!whitelist


It works, but I'm still uncomfortable, as I'm not really sure whether
it's treated like this: (blacklist_manual !whitelist) || (tor_ips !whitelist)


The documentation on this topic should perhaps be improved, with more
examples mixing AND and OR: explain the precedence of these operators,
and explain what { or [!]acl1 [!]acl2 ... [!]acln } really means.
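For the record, here is how I now understand it (a sketch of my reading,
using my own ACL names; the braces in the grammar seem to be grouping
notation in the documentation, not literal configuration syntax):

    # Space-separated ACLs are ANDed, and AND binds tighter than "or"/"||",
    # so this line should be evaluated as:
    #   (blacklist_manual AND NOT whitelist) OR (tor_ips AND NOT whitelist)
    tcp-request connection reject if blacklist_manual !whitelist || tor_ips !whitelist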


Thanks !





Re: [ANNOUNCE] haproxy-2.4.22

2023-02-14 Thread Arnall

Hello,

On 14/02/2023 at 17:52, Tim Düsterhus wrote:

Marc,

On 2/14/23 17:44, Marc Gebauer wrote:

Listing... Done
haproxy/bullseye-backports-2.4 2.4.21-2~bpo11+1 amd64 [upgradable 
from: 2.4.21-1~bpo11+1]



is this the recommended package to use for Debian (because of the
version number 2.4.21 instead of 2.4.22), or do we need to wait for the
repo to be synced?




Check with 'zless /usr/share/doc/haproxy/changelog.Debian.gz' to be
sure, but this should be the correct version. The 2 after the hyphen
indicates that this is the "second packaging of 2.4.21", or in other
words: 2.4.21 + just the security fix. The real 2.4.22 with the other
fixes will likely come later.


Best regards
Tim Düsterhus



It seems OK:

haproxy (2.4.21-2~bpo11+1) bullseye-backports; urgency=medium

  * Rebuild for bullseye-backports.

 -- Vincent Bernat   Mon, 13 Feb 2023 21:38:34 +0100

haproxy (2.4.21-2) UNRELEASED; urgency=medium

  * BUG/CRITICAL: http: properly reject empty http header field names
    (CVE-2023-25725).

 -- Vincent Bernat   Mon, 13 Feb 2023 21:21:05 +0100




option redispatch, http/tcp

2022-07-29 Thread Arnall

Hello everyone,

I'm not sure about something related to the redispatch option.

When I search the internet, many people indicate that the redispatch
option only works in HTTP mode. But the main purpose of "option
redispatch" is to redispatch to another server when a TCP connection to
a server cannot be established, so it seems that it should also apply in
TCP mode.


Can you clarify this point please (the documentation is a bit unclear on 
this subject) ?
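To make the question concrete, here is the kind of setup I have in mind
(a sketch, not my real configuration):

    defaults
        mode tcp
        retries 3
        # if the TCP connection to the chosen server fails, allow the
        # retry to go to another server instead of insisting on the same one
        option redispatch

My assumption is that, since this acts at connection-establishment time,
it should apply in tcp mode as well as in http mode.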


Thanks.




src, src_port and session

2022-07-05 Thread Arnall

Hello everyone,

Just a simple question, can you confirm that src and src_port are set 
only once per session ?


This seems to be the behaviour when I modify them with set-src and 
set-src-port but I want to be sure.


for example:

http-request set-src req.hdr_ip(True-Client-IP) if whatevercondition==true

For every subsequent request in the session, if I test src in a
tcp-request rule, the value of src is the True-Client-IP address.
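A sketch of what I mean ("from_cdn" is a hypothetical ACL name and the
blacklist file is made up):

    frontend web
        bind *:80
        # rewrite the session's source address from a trusted header
        http-request set-src req.hdr_ip(True-Client-IP) if from_cdn
        # my assumption: every later use of src in this session (logs,
        # ACLs, stick tables) now sees the rewritten address
        http-request deny if { src -f /etc/haproxy/blacklist.lst }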


Am I right?

Thanks !




Server sent fatal alert: decode_error

2020-08-17 Thread Arnall

Hello everyone,

I've run a TLS test on SSL Labs, and in the report I can see we have
this error: "Server sent fatal alert: decode_error" in the handshake
simulation part.


It happens mostly with recent platforms: Android 8.1/9.0, Chrome
69/70/80, Firefox 73, OpenSSL 1.1.0k/1.1.1c, Safari 12.


Our configuration concerning TLS is (HAProxy 2.0.14, certificates are RSA):

global

    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

    ssl-default-bind-options no-sslv3 no-tls-tickets
    tune.ssl.cachesize 90 # default to 2
    tune.ssl.lifetime 3600 # default to 300 seconds

frontend web

    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/site1.pem crt 
/etc/haproxy/certs/site2.pem crt /etc/haproxy/certs/site3.pem crt 
/etc/haproxy/certs/site4.pem



And we do indeed see a certain number of 'web/2: SSL handshake
failure' messages in the logs.


Any advice ?

Thanks.



--
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus




Re: http-reuse and Proxy protocol

2020-07-27 Thread Arnall

Hello,

On 23/07/2020 at 14:34, Willy Tarreau wrote:

Hi Arnall,

On Tue, Jul 21, 2020 at 01:27:31PM +0200, Arnall wrote:

Hello everyone,

I remember that in the past it was strongly discouraged to use http-reuse in
combination with send-proxy, because of the client IP which is provided by
the proxy protocol.

I have this configuration :

HA-Proxy version 2.0.14-1~bpo9+1 2020/04/16 - https://haproxy.org/

defaults
     http-reuse always

backend abuse
     timeout server 60s
     balance roundrobin
     hash-balance-factor 0
     server s_abuse u...@abuse.sock send-proxy-v2 maxconn 4

listen l_abuse
     bind u...@abuse.sock accept-proxy
     http-request set-var(req.delay) int(500)
     http-request lua.add_delay
     server  192.168.000.aaa:80 maxconn 1
     server  192.168.000.bbb:80  maxconn 1
     server z 192.168.000.ccc:80  maxconn 1

Is it OK ? Because i have no warning when verifying the configuration, or
should i add a "http-reuse never" in "backend abuse" ?

It is now properly dealt with, by marking the connection private, which
means it will not be shared at all. So what you'll see simply is that
there is no reuse for connections employing send-proxy. So your config
is safe, but you will just not benefit from the reuse.

Anyway it's generally not a good idea to use proxy protocol over HTTP
from an HTTP-aware agent. Better use Forward/X-Forwarded-for that passes
the info per request and that nowadays everyone can consume.

Regards,
Willy


Thank you for the answers/tips !

The abuse flow is an exception; for the regular flow we do indeed use
"forwardfor", in order to be able to use http-reuse with Varnish.


Regards.






http-reuse and Proxy protocol

2020-07-21 Thread Arnall

Hello everyone,

I remember that in the past it was strongly discouraged to use 
http-reuse in combination with send-proxy, because of the client IP 
which is provided by the proxy protocol.


I have this configuration :

HA-Proxy version 2.0.14-1~bpo9+1 2020/04/16 - https://haproxy.org/

defaults
    http-reuse always

backend abuse
    timeout server 60s
    balance roundrobin
    hash-balance-factor 0
    server s_abuse u...@abuse.sock send-proxy-v2 maxconn 4

listen l_abuse
    bind u...@abuse.sock accept-proxy
    http-request set-var(req.delay) int(500)
    http-request lua.add_delay
    server  192.168.000.aaa:80 maxconn 1
    server  192.168.000.bbb:80  maxconn 1
    server z 192.168.000.ccc:80  maxconn 1

Is it OK? I get no warning when verifying the configuration. Or should
I add an "http-reuse never" in "backend abuse"?
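In other words, should the backend be scoped like this (a sketch; I'm
assuming the socket is declared as unix@abuse.sock)?

    backend abuse
        # override the global "http-reuse always" for this path only,
        # since send-proxy-v2 ties a connection to one client's metadata
        http-reuse never
        timeout server 60s
        balance roundrobin
        server s_abuse unix@abuse.sock send-proxy-v2 maxconn 4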


Thanks !







Re: Bad date in 1.9.xx SPEC files

2020-02-13 Thread Arnall

On 13/02/2020 at 18:10, Blair, Steven wrote:


This problem has existed for several iterations and should be obvious 
to a casual reviewer. Please fix it.


I really do not understand why the .spec file was removed in 2.x 
versions, but if it is intended for 1.9.x, it should at least work.


%changelog

* Thu Feb 13 2020 Willy Tarreau 

- updated to 1.9.14

* Mon Nov 25 2019 Willy Tarreau 

- updated to 1.9.13

* Thu Oct 24 2019 Willy Tarreau 

- updated to 1.9.12

* ven. sept. 27 2019 Christopher Faulet 

- updated to 1.9.11

* Thu Aug  8 2019 Willy Tarreau 

- updated to 1.9.10

Steven Blair


Hello,

This problem has existed ...

Thank You.



Re: [ANNOUNCE] haproxy-1.9-dev11

2018-12-18 Thread Arnall

Hello,

On 17/12/2018 at 20:16, Willy Tarreau wrote:

Hi Arnall,

On Mon, Dec 17, 2018 at 02:13:31PM +0100, Arnall wrote:

don't know if it's related but haproxy.org answers with 400 status right now
!

(Windows 10 Chrome/Firefox)

Might be, though I can't reproduce it. I've found a capture of an attack
however, with shell code in the user-agent, and which definitely deserves
a 400 :-)

Do you still observe it ?

Thanks,
Willy


Everything's OK now !

Thanks.




Re: [ANNOUNCE] haproxy-1.9-dev11

2018-12-17 Thread Arnall

On 16/12/2018 at 23:05, Willy Tarreau wrote:

I expected to release this week-end after running it on the haproxy.org
servers, but some annoying issues faced in production took some time to
get fixed and delayed the release.

Things have been quiet now, with 18 hours running without a glitch in
legacy mode (without HTX) and now 13 hours and counting with HTX enabled,
so things are getting much better.


Hello Willy,

don't know if it's related, but haproxy.org answers with a 400 status
right now!


(Windows 10 Chrome/Firefox)




Re: FW: LUA and doing things

2018-09-24 Thread Arnall

Hello,

On 24/09/2018 at 12:29, Franks Andy (IT Technical Architecture Manager)
wrote:


Sorry to be a nag, but does anyone have any ideas on this? Or is the
only option to regularly parse log files (which seems a bit of a hacky
solution)?


Thanks!

*From:*Franks Andy (IT Technical Architecture Manager) 
[mailto:andy.fra...@sath.nhs.uk]

*Sent:* 21 September 2018 13:20
*To:* haproxy@formilux.org
*Subject:* LUA and doing things

Hi all,

  Just hopefully a really quick question.. I would like to use LUA to, 
on connection use of a specific backend service, do something (like 
write an entry to a log file for example). I realise the example here 
is possibly locking etc but I’m not too worried at this point about that.


LUA seems, with my basic knowledge, to expect to do something to the 
traffic – for example I have this :


frontend test_84

  bind 0.0.0.0:84

  mode http

  default_backend bk_test_84

backend bk_test_84

  mode http

  stick on src table connections_test_84

  server localhost 127.0.0.1:80

I have a working lua script to do something like core.Alert(“hello 
world”).


The thing I would like to do is run this script without any effect on 
traffic – if I try and use ‘http-request’ or ‘stick on’ or similar 
keywords which can use lua scripts, they want me to program in some 
action that decides what criteria to stick on or what to do with that 
http-request. I just want something to “fire” and do nothing but run 
the lua script and carry on. Can I do it?


Please forgive my noobiness.

Thanks

Andy

I think you can find useful documentation here:
https://www.arpalert.org/haproxy-lua.html


concepts, API documentation ...

For your purpose why don't you just use :

backend bk_test_84
  mode http
  stick on src table connections_test_84
  http-request lua.myfunction
  server localhost 127.0.0.1:80

and in your lua file :

function myfunction(txn)

 -- do what you want (Lua comments use "--", not "//")

end

core.register_action("myfunction", { "http-req" }, myfunction)

You have an example here:
https://www.arpalert.org/src/haproxy-lua-api/1.7/index.html


core.register_action("hello-world", { "tcp-req", "http-req" }, function(txn)
   txn:Info("Hello world")
end)

with

frontend http_frt
  mode http
  http-request lua.hello-world



Re: Question about haproxy logs

2018-04-19 Thread Arnall

On 19/04/2018 at 09:35, rai...@ultra-secure.de wrote:

Hi,


I have lines like these:

Apr 19 09:32:03 lb-prod haproxy[16717]: 127.0.0.1:50898 
[19/Apr/2018:09:32:03.174] srv-pub-front-ssl srv-pub-back-ssl/WINSRV 
0/0/0/36/290 500 284 - - --VN 3/1/0/1/0 0/0 "POST /SaveStatistics 
HTTP/1.1"



Does that mean that the backend-server (WINSRV) replied with a code 500?




Hello,

Haproxy 1.7 :

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#8.2.3

Haproxy 1.8 :

http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#8.2.3

you will find everything you want about the HAProxy HTTP log format.

If you use the default HTTP log format then yes, the status code is 500
in your example.
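For the archives, here is how I read the fields of the quoted line
(assuming the default HTTP log format described in section 8.2.3; worth
double-checking against your own log-format):

    # 0/0/0/36/290 -> Tq/Tw/Tc/Tr/Tt: request / queue / connect / response /
    #                 total times, in milliseconds
    # 500          -> HTTP status code returned to the client
    # 284          -> bytes sent to the client
    # --VN         -> termination state and cookie flags
    # 3/1/0/1/0    -> actconn/feconn/beconn/srv_conn/retries counters
    # 0/0          -> srv_queue/backend_queue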





Re: HAProxy 1.7.5: conn_cur value problem with peer stick-table

2018-02-07 Thread Arnall

On 27/10/2017 at 18:06, Arnall wrote:

On 27/05/2017 at 08:49, Willy Tarreau wrote:

Hi Maxime,

On Fri, May 19, 2017 at 02:28:40PM +0200, Maxime Guillet wrote:

2/ If I launch the same test on both haproxy servers and peers
configuration activated, I can see the conn_cur counter always 
increasing


   $ ab -n 2000 -c 20 http://10.0.0.2/
   $ ab -n 2000 -c 20 http://10.0.0.3/

   haproxy1# echo "show table http" | socat stdio
/var/run/haproxy/haproxy.stats
   # table: http, type: ip, size:512000, used:1
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=7285 conn_rate(1)=225
conn_cur=47
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=5337 conn_rate(1)=213
conn_cur=52
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=7952 conn_rate(1)=178
conn_cur=133
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=9589 conn_rate(1)=218
conn_cur=259
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=6144 conn_rate(1)=143
conn_cur=321
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=8115 conn_rate(1)=190
conn_cur=553
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=7815 conn_rate(1)=180
conn_cur=676
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=7285 conn_rate(1)=162
conn_cur=738

=> the conn_cur information becomes higher and higher, while it should
be at maximum 40 (2 x 20 concurrent connections with ab)

I don't believe this behavior is intended, it seems more to be a bug.
Well, yes and no. The principle of stick table synchronization is 
primarily
for stickiness and secondarily for active-backup table replication, 
so the
last which pushes a value should overwrite the previous one. In that 
sense,

it's expected that the values you find are around 20. What you're seeing
still makes me think it's a bug (since it sounds like we're adding up 
some
values here), but I don't know if it's a side effect of the update or 
not.

Specifically it's possible that this replication would prevent a session
from decrementing its reference on close for example. It could also be a
stupid copy-paste of the conn_cnt replication, which would then be 
wrong.
At first glance what I'm seeing in the code looks like a simple 
replication

so I can't explain how what you're seeing happens for now :-/

Willy


Hi willy,

I found this topic because we have exactly the same problem: conn_cur
increasing endlessly when peers are activated (HA-Proxy version
1.7.9-1~bpo8+1).
We use it for rate-limiting purposes, but from what I read it's a bad
idea after all; I thought that stick tables were synchronised like this:


 Server A  [ conn_cur = 10 ], Server B [ conn_cur = 20 ], with peers 
=> Server A [ conn_cur = 30 ], Server B [ conn_cur = 30 ]


But from what you write (in fact it's in the documentation too) it
seems that A overwrites B, or B overwrites A, so it's impossible to use
it for rate-limiting purposes (I certainly don't want a [ conn_cur =
10 ] overwriting a [ conn_cur = 20 ]).


Am I right?

Thanks


Hi Willy,

As you asked, I'm bumping this message about the endlessly increasing
conn_cur counter when used with peers.


At the time of the test it was with HA-Proxy version 1.7.9-1~bpo8+1; I
didn't test with a newer version.


Regards.

Arnaud.






Re: [ANNOUNCE] haproxy-1.8.0

2017-11-27 Thread Arnall

On 26/11/2017 at 19:57, Willy Tarreau wrote:

Hi all,

After one year of intense development and almost one month of debugging,
polishing, and cross-review work trying to prevent our respective coworkers
from winning the first bug award, I'm pleased to announce that haproxy 1.8.0
is now officially released!


Congratulations to everyone involved!

HAProxy is truly a great product.




Re: HAProxy 1.7.5: conn_cur value problem with peer stick-table

2017-10-27 Thread Arnall

On 27/05/2017 at 08:49, Willy Tarreau wrote:

Hi Maxime,

On Fri, May 19, 2017 at 02:28:40PM +0200, Maxime Guillet wrote:

2/ If I launch the same test on both haproxy servers and peers
configuration activated, I can see the conn_cur counter always increasing

   $ ab -n 2000 -c 20 http://10.0.0.2/
   $ ab -n 2000 -c 20 http://10.0.0.3/

   haproxy1# echo "show table http" | socat stdio
/var/run/haproxy/haproxy.stats
   # table: http, type: ip, size:512000, used:1
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=7285 conn_rate(1)=225
conn_cur=47
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=5337 conn_rate(1)=213
conn_cur=52
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=7952 conn_rate(1)=178
conn_cur=133
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=9589 conn_rate(1)=218
conn_cur=259
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=6144 conn_rate(1)=143
conn_cur=321
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=8115 conn_rate(1)=190
conn_cur=553
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=7815 conn_rate(1)=180
conn_cur=676
   0x7f59d568b9a0: key=10.10.10.10 use=20 exp=7285 conn_rate(1)=162
conn_cur=738

=> the conn_cur information becomes higher and higher, while it should
be at maximum 40 (2 x 20 concurrent connections with ab)

I don't believe this behavior is intended, it seems more to be a bug.

Well, yes and no. The principle of stick table synchronization is primarily
for stickiness and secondarily for active-backup table replication, so the
last which pushes a value should overwrite the previous one. In that sense,
it's expected that the values you find are around 20. What you're seeing
still makes me think it's a bug (since it sounds like we're adding up some
values here), but I don't know if it's a side effect of the update or not.
Specifically it's possible that this replication would prevent a session
from decrementing its reference on close for example. It could also be a
stupid copy-paste of the conn_cnt replication, which would then be wrong.
At first glance what I'm seeing in the code looks like a simple replication
so I can't explain how what you're seeing happens for now :-/

Willy


Hi willy,

I found this topic because we have exactly the same problem: conn_cur
increasing endlessly when peers are activated (HA-Proxy version
1.7.9-1~bpo8+1).
We use it for rate-limiting purposes, but from what I read it's a bad
idea after all; I thought that stick tables were synchronised like this:


 Server A  [ conn_cur = 10 ], Server B [ conn_cur = 20 ], with peers => 
Server A [ conn_cur = 30 ], Server B [ conn_cur = 30 ]


But from what you write (in fact it's in the documentation too) it
seems that A overwrites B, or B overwrites A, so it's impossible to use
it for rate-limiting purposes (I certainly don't want a [ conn_cur =
10 ] overwriting a [ conn_cur = 20 ]).


Am I right?

Thanks




bad queue report in stats

2017-10-10 Thread Arnall

Hello everyone,

Name: HAProxy
Version: 1.7.5-2~bpo8+1
Release_date: 2017/05/27
OS : Debian 8

i have something weird in my stats with this configuration :

backend be_abuse

    bind-process 1
    timeout server 60s
    balance roundrobin
    hash-balance-factor 0
    acl untrusted_country var(req.country) XX
    use-server respawn_abuse_untrusted if untrusted_country

    server abuse u...@abuse.sock maxconn 1
    server abuse_untrusted u...@abuse.sock maxconn 1 weight 0

when looking at stats i have

--- backend be_abuse : qcur 40 ,  qmax 77 , qtime 12575

--- server abuse : qcur 0 , qmax 0 , qtime 11963

--- server abuse_untrusted : qcur 38 , qmax 73 , qtime 13981

So it seems I have a false report for "server abuse": I'm sure requests
queue on it, since I have a positive qtime (and I see qtime at request
level in the logs), but qcur and qmax stay at 0.


I have several backends like this one and the problem appears there too.










Re: TCP ACL rules based on host name

2017-10-04 Thread Arnall

On 22/09/2017 at 03:13, rt3p95qs wrote:
Is it possible to assign TCP (no HTTP) connections to a backend based 
on an alias haproxy has?


For example:
HAProxy has 3 alias names: server01.example.com, server02.example.com
and server03.example.com.


The haproxy.conf file defines a front end and 3 back ends:

frontend static-svc
   bind *:80
   mode tcp
   option tcplog
   default_backend svc-svc-default


backend static-svc01
    balance source
    option tcplog
    server server01 127.0.0.1 check

backend static-svc02
    balance source
    option tcplog
    server server02 127.0.0.2 check

backend static-svc03
    balance source
    option tcplog
    server server03 127.0.0.3 check

The idea being that each static service should only serve incoming
requests through a specific alias; therefore, requests coming from the
internet looking for server01.example.com would be sent to the
static-svc01 back end. I have seen tons of examples on how to do this
with HTTP, but I can't find any that focus on pure TCP. My application
does not use HTTP at all.


Thanks.


Hello,

sorry if I'm wrong, but the server name is not part of TCP; TCP only
knows about IPs and ports, so you must have another protocol above TCP
to address a server name. Natively HAProxy only knows about TCP and
HTTP; if you know how your protocol works you can use:


tcp-request inspect-delay XX
tcp-request content capture  len 

you can find what you can capture here :

Layer 4 : 
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.3
Layer 5 : 
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.4
Layer 6 : 
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.5


At layer 6 you have the payload :

https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.5-req.payload

you can maybe use this to establish conditions.

Have a look at this thread also : 
https://www.mail-archive.com/haproxy@formilux.org/msg25879.html
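If your protocol happens to carry the target name early in the stream, a
payload-based sketch could look like this (hypothetical offsets and
matching; everything depends on your protocol's framing, and the backend
names are taken from your config):

    frontend static-svc
        bind *:80
        mode tcp
        # wait briefly so the first payload bytes are available for inspection
        tcp-request inspect-delay 5s
        use_backend static-svc01 if { req.payload(0,64) -m sub server01.example.com }
        use_backend static-svc02 if { req.payload(0,64) -m sub server02.example.com }
        default_backend svc-svc-default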






Lua function 'xxxxxx': yield not allowed.

2017-09-29 Thread Arnall

Hello everyone,

I use a simple Lua script in HAProxy (HA-Proxy version 1.7.9-1~bpo8+1
2017/08/24):


-

function add_delay(txn)
    local default = 200
    local delay = txn:get_var("req.delay")
    if delay ~= nil then
        core.msleep(delay)
    else
        core.msleep(default)
    end
end

core.register_action("add_delay", { "http-req" }, add_delay)

-

It works well, but if a client closes (aborts) its connection in the
middle of a transaction, I get a "Lua function 'add_delay': yield not
allowed." message in the logs.


I would like to know if I can do something to avoid the message, or if I
have to leave it as it is.


BTW, is it possible to access an ACL in Lua? Something like [acl mydom
hdr(host) mydom.com] => [txn:acl("mydom")], or do I have to write the
test entirely in Lua with the fetch functions?


Thanks !




Re: stick-table ,show table, use field

2017-03-31 Thread Arnall

Thanks Brian !

i have searched in management guides, but at "show table  [ 
data.   ] | [ key  ]" :)


BTW the doc says 2 things:

1] "their size in maximum possible number of entries, and the number of
entries currently in use."

It seems that, in reality, this is the size of the table in bytes, not
really a number of entries. The size stays the same when adding or
removing data types:
stick-table size 50m => sized: 52428800, no matter the number of
data_type per key.


2] their type (currently zero, always IP)

I have type: string when I set "type string". Maybe I misunderstand the
sentence?


echo "show table " | sudo socat stdio  /run/haproxy/admin.sock
# table: web_plain, type: ip, size:52428800, used:0
# table: dummy_stick_table, type: string, size:52428800, used:0

Thanks

On 30/03/2017 at 22:50, Bryan Talbot wrote:


On Mar 30, 2017, at 10:19 AM, Arnall <arnall2...@gmail.com> wrote:


Hello everyone,

when using socat to show a stick-table i have lines like this :

# table: dummy_table, type: ip, size:52428800, used:33207

0x7f202f800720: key=aaa.bbb.ccc.ddd use=0 exp=599440 gpc0=0 
conn_rate(5000)=19 conn_cur=0 http_req_rate(1)=55


../...

I understand all the fields except 2 :

used:33207

use=0

I found nothing in the doc, any idea ?




I believe that these are documented in the management guides and not 
the config guides.


https://cbonte.github.io/haproxy-dconv/1.6/management.html#9.2-show%20table

Here, I think that ‘used’ for the table is the number of entries that 
currently exist in the table, and ‘use’ for an entry is the number of 
sessions that concurrently match that entry.


-Bryan





stick-table ,show table, use field

2017-03-30 Thread Arnall

Hello everyone,

when using socat to show a stick-table i have lines like this :

# table: dummy_table, type: ip, size:52428800, used:33207

0x7f202f800720: key=aaa.bbb.ccc.ddd use=0 exp=599440 gpc0=0 
conn_rate(5000)=19 conn_cur=0 http_req_rate(1)=55


../...

I understand all the fields except 2 :

used:33207

use=0

I found nothing in the doc, any idea ?

Thanks.





Re: http reuse and proxy protocol

2017-01-05 Thread Arnall

On 03/01/2017 at 18:18, Lukas Tribus wrote:

Hi Arnall,


On 03.01.2017 at 16:15, Arnall wrote:


Is it possible that with "http-reuse always" the yyy.yyy.yyy.yyy 
request has used
the xxx.xxx.xxx.xxx connection between https and http frontend with 
proxy

protocol forwarding xxx.xxx.xxx.xxx instead of yyy.yyy.yyy.yyy ?



Yes, that's what http-reuse does.

Either use a HTTP header to transport the source IP to the backend or 
set http-reuse
to never [1], because the proxy-protocol only sends information at the 
beginning (its

like our old "tunnel" mode).


Thanks Lukas,

I immediately switched to "http-reuse never" for the TLS frontend after
seeing the log.






http reuse and proxy protocol

2017-01-03 Thread Arnall

Hi everyone,

recently we have separated the HTTPS and HTTP frontends in order to scale better.

we are using a nbproc > 1 configuration for ssl offloading :

listen web_tls
mode http
bind *:443 ssl crt whatever.pem process 2
bind *:443 ssl crt whatever.pem process 3

../..
server web_plain u...@plain.sock send-proxy-v2-ssl

frontend web_plain
bind *:80 process 1
bind u...@plain.sock process 1 accept-proxy

I have forgotten that in default section i had this :

http-reuse always

Today a user told us that for a moment he had access to the site's
debug tools. The debug tools are IP-protected (a bad thing, I know, but
that's another story...).


I've searched the log and found this :

11:54:39 lb1 haproxy[123274]: xxx.xxx.xxx.xxx:51139 
[03/Jan/2017:11:54:39.080] web_plain forums_connected/proxy12 
180/0/0/180/360 200 34197 - \-  1965/1963/9/4/0 0/0 
{Mozilla/5.0_(X11;_Linux_x86_64;_rv:50.0)_Gecko/20100101_Firefox/50.0|FR} 
"GET /forums/xxx.htm HTTP/1.1"
11:54:39 lb1 haproxy[123278]: yyy.yyy.yyy.yyy:38878 
[03/Jan/2017:11:54:39.218] web_tls~ web_tls/web_plain 42/0/0/180/222 200 
34192 - \-  91/91/1/2/0 0/0 "GET /forums/xxx.htm HTTP/1.1"


At the same time i have :

11:54:39 lb1 haproxy[123274]: xxx.xxx.xxx.xxx:51139 
[03/Jan/2017:11:54:39.440] web_plain nocache_connected/jv-proxy12 
6/0/0/3/9 400 452 - \-  1965/1963/2/2/0 0/0 
{|like_Gecko)_Version/4.0_Chrome/55.0.2883.91_Mobile_Safari/537.36|FR} 
"GET /favicon.ico HTTP/1.1"
11:54:39 lb1 haproxy[123274]: xxx.xxx.xxx.xxx:51139 
[03/Jan/2017:11:54:39.450] web_plain cache1/jv-proxy10 26/0/0/13/39 200 
1482 - \-  1958/1958/4/4/0 0/0 {||FR} "GET /whatever_url HTTP/1.1"


It seems that the user made an HTTPS request with the IP
yyy.yyy.yyy.yyy, but when the request was forwarded to the web_plain
frontend the IP was xxx.xxx.xxx.xxx, and he thus had access to the debug
tools because xxx.xxx.xxx.xxx has access. The user provided us with a
screenshot, and the IP in the screenshot IS xxx.xxx.xxx.xxx.


Is it possible that, with "http-reuse always", the yyy.yyy.yyy.yyy
request reused the xxx.xxx.xxx.xxx connection between the HTTPS and HTTP
frontends, with the proxy protocol forwarding xxx.xxx.xxx.xxx instead of
yyy.yyy.yyy.yyy?


I hope this is it, I have to be sure :)
Thanks!
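If this is confirmed, I suppose the per-request alternative would be to
carry the client address in a header instead of in the connection,
something like this (a sketch based on my config; I'm writing the socket
as unix@plain.sock):

    listen web_tls
        mode http
        bind *:443 ssl crt whatever.pem process 2
        bind *:443 ssl crt whatever.pem process 3
        # pass the client address per request rather than per connection,
        # so it stays correct even when backend connections are reused
        option forwardfor
        server web_plain unix@plain.sock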



Re: ssl offloading and send-proxy-v2-ssl

2016-12-31 Thread Arnall

Le 27/12/2016 à 00:35, Patrick Hemmer a écrit :



On 2016/12/23 09:28, Arnall wrote:

Hi everyone,

i'm using a nbproc > 1 configuration for ssl offloading :

listen web_tls
mode http
bind *:443 ssl crt whatever.pem process 2
bind *:443 ssl crt whatever.pem process 3

../..
server web_plain u...@plain.sock send-proxy-v2-ssl

frontend web_plain
bind *:80 process 1
bind u...@plain.sock process 1 accept-proxy

../..

And i'm looking for a secure solution in the web_plain frontend to 
know if the request come from web_tls or not ( in fact i want to know 
if the connection was initially made via SSL/TLS transport ).


I though that send-proxy-v2-ssl could help but i have no idea how ... 
src and src_port are OK with the proxy protocol but ssl_fc in 
web_plain keeps answering false  ( 0 ) even the request come from 
web_tls.


I could set and forward a secret header set in web_tls but i don't 
like the idea ... (have to change the header each time an admin sys 
leave the enterprise... )


Thanks.





This use case has come up a few times: 
https://www.mail-archive.com/haproxy@formilux.org/msg23882.html
My crude solution is an ACL check on the port the client connected to 
(dst_port eq 443).


-Patrick


Thanks for the answer and happy new year !



Re: ssl offloading and send-proxy-v2-ssl

2016-12-31 Thread Arnall

Hi,

thanks for your answer, I didn't know about the src_is_local feature,
as it's a 1.7 feature and we're still on 1.6.


The dst_port approach seems OK to me, I will use it!
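For the archives, here is the kind of thing I plan to do (a sketch; the
socket is assumed to be unix@plain.sock):

    frontend web_plain
        bind *:80 process 1
        bind unix@plain.sock process 1 accept-proxy
        # the proxy protocol preserves the original destination address,
        # so a destination port of 443 means the client arrived over TLS
        acl from_tls dst_port 443
        http-request set-header X-Forwarded-Proto https if from_tls
        http-request set-header X-Forwarded-Proto http unless from_tls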

Happy new year !

On 27/12/2016 at 08:29, Elias Abacioglu wrote:

Sorry just realized,
src_is_local won't work when using proxy protocol.
Proxy protocol will preserve initial source information.

You can probably use dst_port like this instead:

acl secure dst_port  443
 if is secure

On Mon, Dec 26, 2016 at 11:09 PM, Elias Abacioglu
<elias.abacio...@deltaprojects.com> wrote:


Perhaps you could use src_is_local.

Something like this

frontend web_plain

acl is_local src_is_local
http-response add-header X-External-Protocol https if is_local


/Elias

On Fri, Dec 23, 2016 at 3:28 PM, Arnall <arnall2...@gmail.com> wrote:

Hi everyone,

i'm using a nbproc > 1 configuration for ssl offloading :

listen web_tls
mode http
bind *:443 ssl crt whatever.pem process 2
bind *:443 ssl crt whatever.pem process 3

../..
server web_plain u...@plain.sock send-proxy-v2-ssl

frontend web_plain
bind *:80 process 1
bind u...@plain.sock process 1 accept-proxy

../..

And i'm looking for a secure solution in the web_plain
frontend to know if the request come from web_tls or not ( in
fact i want to know if the connection was initially made via
SSL/TLS transport ).

I though that send-proxy-v2-ssl could help but i have no idea
how ... src and src_port are OK with the proxy protocol but
ssl_fc in web_plain keeps answering false  ( 0 ) even the
request come from web_tls.

I could set and forward a secret header set in web_tls but i
don't like the idea ... (have to change the header each time
an admin sys leave the enterprise... )

Thanks.









ssl offloading and send-proxy-v2-ssl

2016-12-23 Thread Arnall

Hi everyone,

i'm using a nbproc > 1 configuration for ssl offloading :

listen web_tls
mode http
bind *:443 ssl crt whatever.pem process 2
bind *:443 ssl crt whatever.pem process 3

../..
server web_plain u...@plain.sock send-proxy-v2-ssl

frontend web_plain
bind *:80 process 1
bind u...@plain.sock process 1 accept-proxy

../..

And I'm looking for a secure solution in the web_plain frontend to know
whether the request comes from web_tls or not (in fact I want to know if the
connection was initially made over SSL/TLS transport).

I thought that send-proxy-v2-ssl could help, but I have no idea how...
src and src_port are OK with the proxy protocol, but ssl_fc in web_plain
keeps answering false (0) even when the request comes from web_tls.

I could set and forward a secret header in web_tls, but I don't like
the idea... (we would have to change the header each time a sysadmin
leaves the company...)

Thanks.
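One possible approach, assuming a HAProxy version that provides the
fc_rcvd_proxy sample fetch and that the unix socket is the only bind
using accept-proxy, is to test whether the connection arrived with a
PROXY protocol header. This is only a sketch, not a verified
configuration from this thread:

frontend web_plain
    bind *:80 process 1
    bind unix@plain.sock process 1 accept-proxy

    # true only for connections that arrived with a PROXY protocol header,
    # i.e. from web_tls in this setup (assumes no other accept-proxy bind)
    acl from_tls fc_rcvd_proxy
    http-request set-header X-Forwarded-Proto https if from_tls
    http-request set-header X-Forwarded-Proto http if !from_tls

The X-Forwarded-Proto header name is just a common convention here; any
header works as long as the backends agree on it.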





Re: problem with server and unix socket unix@

2016-12-12 Thread Arnall

Hi Lukas,

thanks for the advice, the problem was about the chroot option.
strace with chroot :

-
Process 46596 attached
epoll_wait(0, {}, 200, 1000)= 0
epoll_wait(0, {}, 200, 1000)= 0
epoll_wait(0, {}, 200, 1000)= 0
epoll_wait(0, {{EPOLLIN, {u32=13, u64=13}}}, 200, 1000) = 1
accept4(13, {sa_family=AF_INET, sin_port=htons(37165), 
sin_addr=inet_addr("xxx.xxx.xxx.xxx")}, [16], SOCK_NONBLOCK) = 1

setsockopt(1, SOL_TCP, TCP_NODELAY, [1], 4) = 0
accept4(13, 0x7fffd2207480, [128], SOCK_NONBLOCK) = -1 EAGAIN (Resource 
temporarily unavailable)

read(1, "\26\3\1\0\342\1\0\0\336\3\3", 11) = 11
read(1, 
"\244\25\5#\1\6w\r1c\25\305J1\207\307d\265\303\226;d)1\300\244G\334\352'>\263"..., 
220) = 220
write(1, 
"\26\3\3\0Q\2\0\0M\3\3o\243\233:\351\3\5\35\0\234@\240\177\237\225\360\235JA\301B"..., 
137) = 137
read(1, 0x7fb090c0e3b3, 5)  = -1 EAGAIN (Resource 
temporarily unavailable)

epoll_ctl(0, EPOLL_CTL_ADD, 1, {EPOLLIN|EPOLLRDHUP, {u32=1, u64=1}}) = 0
epoll_wait(0, {{EPOLLIN, {u32=1, u64=1}}}, 200, 1000) = 1
read(1, "\24\3\3\0\1", 5)   = 5
read(1, "\1", 1)= 1
read(1, "\26\3\3\0(", 5)= 5
read(1, 
"\0\0\0\0\0\0\0\0[\36(\366]\301\37\246m\362\205\214\5G\373\10\267\204\214b9%;\352"..., 
40) = 40
read(1, 0x7fb090c0e3b3, 5)  = -1 EAGAIN (Resource 
temporarily unavailable)

epoll_wait(0, {{EPOLLIN, {u32=1, u64=1}}}, 200, 1000) = 1
read(1, "\27\3\3\4\377", 5) = 5
read(1, 
"\0\0\0\0\0\0\0\1%\210]1&!D\353\32\342]\326\265\370\363\304\360\272o_g\236\250\302"..., 
1279) = 1279
getsockname(1, {sa_family=AF_INET, sin_port=htons(443), 
sin_addr=inet_addr("xxx.xxx.xxx.xxx")}, [16]) = 0
getsockopt(1, SOL_IP, 0x50 /* IP_??? */, 
"\2\0\1\273\227P\37[\0\0\0\0\0\0\0\0", [16]) = 0

socket(PF_LOCAL, SOCK_STREAM, 0)= 2
fcntl(2, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
connect(2, {sa_family=AF_LOCAL, sun_path="/var/run/haproxy_plain.sock"}, 
110) = -1 ENOENT (No such file or directory)

close(2)= 0
socket(PF_LOCAL, SOCK_STREAM, 0)= 2
fcntl(2, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
--

We get:

connect(2, {sa_family=AF_LOCAL, sun_path="/var/run/haproxy_plain.sock"},
110) = -1 ENOENT (No such file or directory)


The socket was in /var/run/ but HAProxy didn't find it, so I figured
HAProxy was looking for the socket inside the chroot path, and I was right:


global
  chroot /path/to/chroot/
  unix-bind /path/to/chroot/ mode 770 group haproxy

../...

listen web_tls
mode http
bind *:443 ssl crt fullchain.pem process 2
bind *:443 ssl crt fullchain.pem process 3

maxconn 10

server web-plain unix@haproxy_plain.sock send-proxy-v2-ssl

frontend web_plain
mode http
bind *:80 process 1
bind unix@haproxy_plain.sock process 1 accept-proxy


This way it works. It seems that, contrary to the stats socket, the
server socket path is resolved inside the chroot, so you have to prefix it
with /path/to/chroot/.


Thanks for the help!

PS: I use "send-proxy-v2-ssl" because I want to know whether the
connection was made over TLS or not, but how can I get this information in
the plain frontend? I've tried to use "if { ssl_fc }" but it doesn't
work...



On 12/12/2016 at 21:55, Lukas Tribus wrote:

Hello Arnall,


you said you tried different users, did you remove the "user nobody" 
configuration completely?



Strace output would also help, just make sure you are looking at the 
correct process or use nbproc 1 to avoid any confusion while 
troubleshooting.



Lukas






Re: problem with server and unix socket unix@

2016-12-12 Thread Arnall

More information:
netstat shows the haproxy socket: 6417/haproxy
/var/run/haproxy/haproxy_plain.sock.6416.tmp


And I get an answer to an HTTP request (after disabling the PROXY protocol):

echo -e "GET / HTTP/1.1\r\nHost: domaine.tld\r\n" | socat unix-connect:/var/run/haproxy_plain.sock STDIO

HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Mon, 12 Dec 2016 19:12:32 GMT
.../...

So I really don't know what is wrong in my configuration...

On 12/12/2016 at 19:17, Arnall wrote:

Hello everyone,

I have this configuration to offload TLS on multiple processes and handle
plain HTTP on only one process:


global
nbproc 3

listen web_tls
mode http
bind *:443 ssl crt certif.pem process 2
bind *:443 ssl crt certif.pem process 3

maxconn 10

server web-plain unix@/var/run/haproxy_plain.sock send-proxy-v2

frontend web_plain
mode http
bind unix@/var/run/haproxy_plain.sock accept-proxy user nobody 
process 1


maxconn 10

use backend .../...

No matter what user, mode, or directory I use for the unix socket, it
always ends the same way: error 503

log: web_tls~ web_tls/web-plain 6/0/-1/-1/6 503 213 - - SC-- 0/0/0/0/3 0/0 "GET / 

If I try with an IPv4 server (127.0.0.1:80) instead of the unix socket, it
works well...

I really don't know what's wrong here... if you have any advice...

Thank you!






problem with server and unix socket unix@

2016-12-12 Thread Arnall

Hello everyone,

I have this configuration to offload TLS on multiple processes and handle
plain HTTP on only one process:


global
nbproc 3

listen web_tls
mode http
bind *:443 ssl crt certif.pem process 2
bind *:443 ssl crt certif.pem process 3

maxconn 10

server web-plain unix@/var/run/haproxy_plain.sock send-proxy-v2

frontend web_plain
mode http
bind unix@/var/run/haproxy_plain.sock accept-proxy user nobody 
process 1


maxconn 10

use backend .../...

No matter what user, mode, or directory I use for the unix socket, it
always ends the same way: error 503

log: web_tls~ web_tls/web-plain 6/0/-1/-1/6 503 213 - - SC-- 0/0/0/0/3 0/0 "GET / 

If I try with an IPv4 server (127.0.0.1:80) instead of the unix socket, it
works well...

I really don't know what's wrong here... if you have any advice...

Thank you!




Re: SC session state with googlebot

2016-12-01 Thread Arnall

Sorry everyone, forget about this message, it was just a misconfiguration...

On 01/12/2016 at 15:25, Arnall wrote:

Hello everyone,

I have a special case in our logs with Googlebot: for some static
files, we get a SC-- session state and of course a 503 status code:

 66.249.76.63:55140  frontend_web frontend_web/ -1/-1/-1/-1/5 503 212 - \- SC-- 2179/2175/0/0/0 0/0 {|static.hostname.tld|Mozilla/5.0_(compatible;_Googlebot/2.1;_+http://www.google.com/bot.html)|XX} "GET /img/xx.png HTTP/1.1"

I see the same thing for almost all kinds of files served by
static.hostname.tld (.css, .js, .woff, etc.), only with Googlebot (no
problem for regular users, Bingbot, etc.).

Is it coming from our backends? I've read the documentation about SC
in chapter 8.5, but I really don't know how this combination of
Googlebot + static.hostname.tld could lead to this...

If you have any hint...

Thanks!






SC session state with googlebot

2016-12-01 Thread Arnall

Hello everyone,

I have a special case in our logs with Googlebot: for some static
files, we get a SC-- session state and of course a 503 status code:

 66.249.76.63:55140  frontend_web frontend_web/ -1/-1/-1/-1/5 503 212 - \- SC-- 2179/2175/0/0/0 0/0 {|static.hostname.tld|Mozilla/5.0_(compatible;_Googlebot/2.1;_+http://www.google.com/bot.html)|XX} "GET /img/xx.png HTTP/1.1"

I see the same thing for almost all kinds of files served by
static.hostname.tld (.css, .js, .woff, etc.), only with Googlebot (no
problem for regular users, Bingbot, etc.).

Is it coming from our backends? I've read the documentation about SC in
chapter 8.5, but I really don't know how this combination of
Googlebot + static.hostname.tld could lead to this...

If you have any hint...

Thanks!




Re: option dontlognull

2016-11-08 Thread Arnall

On 08/11/2016 at 16:36, Willy Tarreau wrote:

Hello,

On Tue, Nov 08, 2016 at 03:55:04PM +0100, Arnall wrote:

Hello everyone,

I've made some tests with 'option dontlognull' / 'no option dontlognull'
and 'tcp-request deny', because I want to be sure that IPs in the blacklist
are logged correctly. I'm still not sure about the behavior: with 'no
option dontlognull' all denied requests are logged, that's OK. But with
'option dontlognull' I still see "some" denied requests logged from time to
time (BADREQ + PR-- status, test made with my own IP in the blacklist). Is
there some kind of cache with 'option dontlognull' that logs only the first
denied request for a given IP and not the others? It would be interesting for
avoiding noise in the log files, but the doc just says: option "dontlognull"
indicates that a connection on which no data has been transferred will not be
logged.

In fact there was an action on your connection which is the deny. I'm
surprised that some of your connections are not logged when you do this.
This option was created to avoid logging useless connections, typically
connection probes from external components, or pre-connects from browsers
which finally don't send anything. So normally if you actively close with
"tcp-request deny", it should be logged. I'd say that if some of them are
not logged I'm interested in how to reproduce this to ensure that in the
future they will all be logged.

Here is our setup :

defaults
log global
mode http
http-reuse always
option dontlognull
option httplog
option http-keep-alive
option abortonclose
option splice-auto
option tcp-smart-connect
option http-buffer-request
timeout connect 10s
timeout server 30s
timeout client 30s
timeout http-request 5s
timeout http-keep-alive 10s

frontend web
bind *:80
acl whitelist src -f /etc/haproxy/whitelist.lst -f 
/etc/haproxy/local.lst
acl blacklist src -f /etc/haproxy/blacklist_manual.lst -f 
/etc/haproxy/blacklist_auto.lst

tcp-request content accept if whitelist
tcp-request content reject if blacklist

With this setup:
- option dontlognull: denied requests are logged only from time to time
- no option dontlognull: denied requests are always logged


Any hint?

Then do not log, that's much better. You can even change the log level with
the "set-log-level silent" directive. That seems to better match your needs.

I do want to log denied requests! :) I just want to know the exact
behavior of "option dontlognull", because it could save some noise in the
log files.

Best regards,
Willy


Thanks!
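If the goal were the opposite (silencing specific requests instead of
logging them), the "set-log-level silent" directive Willy mentions could
be used as in the following sketch, which reuses the acl names from the
setup above and assumes a HAProxy version where set-log-level is
available as a tcp-request content action:

frontend web
    bind *:80
    acl whitelist src -f /etc/haproxy/whitelist.lst -f /etc/haproxy/local.lst
    acl blacklist src -f /etc/haproxy/blacklist_manual.lst -f /etc/haproxy/blacklist_auto.lst

    tcp-request content accept if whitelist
    # silence the log entry for blacklisted clients before rejecting them
    tcp-request content set-log-level silent if blacklist
    tcp-request content reject if blacklist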




option dontlognull

2016-11-08 Thread Arnall

Hello everyone,

I've made some tests with 'option dontlognull' / 'no option
dontlognull' and 'tcp-request deny', because I want to be sure that IPs
in the blacklist are logged correctly. I'm still not sure about the behavior:
with 'no option dontlognull' all denied requests are logged,
that's OK. But with 'option dontlognull' I still see "some" denied
requests logged from time to time (BADREQ + PR-- status, test made with
my own IP in the blacklist). Is there some kind of cache with 'option
dontlognull' that logs only the first denied request for a given IP and not
the others? It would be interesting for avoiding noise in the log files,
but the doc just says: option "dontlognull" indicates that a connection
on which no data has been transferred will not be logged. Any hint?


Thanks.




multi process limitations

2016-05-31 Thread Arnall

Hello everyone,

could you please tell me whether these multi-process limitations are still
true with HAProxy 1.6:


- frontend(s) and associated backend(s) must run on the same process

- not compatible with the peers section (stick table synchronization) (from
here: http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/
it seems to work now)

- information is stored locally in each process's memory area and can't be
shared (stick tables + tracked counters / stats / queue management /
connection rate)

- each HAProxy process performs its own health checks

Thanks !




Linux or FreeBSD ?

2015-09-30 Thread Arnall

Hi everyone,

Just a simple question: is FreeBSD a good choice for HAProxy?
Our HAProxy has been running under Debian for years, but the new IT team
wants to move it to FreeBSD.

Any cons?

Thanks.



Fastsocket and Haproxy

2014-10-22 Thread Arnall

Hi everyone,

do you know this project :
https://github.com/fastos/fastsocket

Currently Fastsocket is implemented in the Linux
kernel (kernel-2.6.32-431.17.1.el6) of CentOS-6.5. According to our
evaluations, Fastsocket increases the throughput of Nginx and
Haproxy (measured by connections per second) by *290%* and *620%* on a
24-core machine, compared to the base CentOS-6.5 kernel.


There is an evaluation:
https://github.com/fastos/fastsocket#online-evaluation

SUITABLE SCENARIOS

Generally, scenarios meeting the following conditions will benefit the
most from Fastsocket:

- The machine has no less than 8 CPU cores.
- A large portion of the CPU cycles is spent in network softirq and
  socket-related system calls.
- Short TCP connections are heavily used.
- The application uses non-blocking IO over epoll as the IO framework.
- The application uses multiple processes to accept connections individually.

Meanwhile, we are developing Fastsocket to improve the network stack 
performance in more general scenarios.


Willy, have you tried it? And if yes, do you recommend it?

Thanks.


Re: Error 408 with Chrome

2014-05-26 Thread Arnall

Hi Willy,

same problem here with Chrome version 35.0.1916.114 m and :
HA-Proxy version 1.4.22 2012/08/09 (Debian 6) Kernel 3.8.13-OVH
HA-Proxy version 1.5-dev24-8860dcd 2014/04/26 (Debian GNU/Linux 7.5) 
Kernel 3.10.13-OVH


<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>

Timing : Blocking 2ms /  Receiving : 1ms

Response header:
HTTP/1.0 408 Request Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html

Haproxy cfg:
defaults
mode   http
log global
option dontlognull
option redispatch
retries3
timeout connect5s
timeout client   20s
timeout server 20s
timeout http-request 5s
timeout http-keep-alive 5s
timeout check 4s
timeout queue 15s

option   abortonclose
option httplog
option http-keep-alive
option forwardfor except 127.0.0.0/8

Haproxy 1.5: didn't have time to set up logging for now, will provide ASAP.
Haproxy 1.4: nothing in the log (maybe due to dontlognull?)

If I bypass Haproxy, no more 408.

Thanks.
Arnaud.

On 23/05/2014 at 16:08, Willy Tarreau wrote:

Hi Kevin,

[guys, please could you stop top-posting, it's a total mess to try to
  respond to this thread, I cannot easily take out the useless parts,
  thanks].

On Fri, May 23, 2014 at 02:35:21PM +0200, Kevin Maziere wrote:

2014-05-23 14:34 GMT+02:00 Baptiste bed...@gmail.com:


Kevin,

Do you (still) see 408 errors printed in the browser???

Baptiste

On Fri, May 23, 2014 at 2:17 PM, Kevin Maziere ke...@kbrwadventure.com
wrote:

Hi

I've just applied the first patch, here are the debug log :

In the logs :
2014-05-23T12:03:20+00:00 images-access haproxy[13409]: 127.0.0.1:56596
[23/May/2014:12:03:17.972] ipv4-yyy-443~ ipv4-yyy-443/NOSRV
-1/-1/-1/-1/2041 408 212 - - cR-- 9/3/0/0/0 0/0 BADREQ

Well, here I'm seeing a standard 408 after 2 seconds which should match
a timeout http-request of 2 seconds. Can you check if you don't have one ?
Also, this observation from the logs doesn't seem consistent with your first
claim that the 408 is immediate, here it's only after 2 seconds. Or again we
are facing this bogus preconnect feature of Chrome. People complain all the
time that not only it connects before you want to go to the site, but above
all it displays the error that it receives without checking that it got an
error prior to using the connection :-(


In the debug log, correspond lines:
2014-05-23T12:03:20+00:00 servername haproxy[13409]: Timeout detected:
fe=ipv4-yyy-443 s-flags=0080 txn-flags= req-flags=00c88000
msg-flags= now_ms=687261517 req-analyse_exp=687261515 (-2)

At least that's good, it's the first request of the connection and nothing
except the regular request timeout occurred.

There was an interesting thread here about the nasty behaviour of chrome :

   https://code.google.com/p/chromium/issues/detail?id=85229#c33

Some people suggest closing without ever emitting the 408. You can do that
this way :

 errorfile 408 /dev/null

Note that this fantastic browser breaks HTTP by preventing any server from
using the well-defined HTTP status code indicating a timeout occurred.

Kévin, I think the reason why you have the issue only on one OS is not related
to the OS but to your browsing history on that system. The browser doesn't
pre-connect there and you don't have the trouble.

Regards,
Willy







Re: Error 408 with Chrome

2014-05-26 Thread Arnall

On 26/05/2014 at 16:13, Willy Tarreau wrote:

Hi Arnall,

On Mon, May 26, 2014 at 11:56:52AM +0200, Arnall wrote:

Hi Willy,

same problem here with Chrome version 35.0.1916.114 m and :
HA-Proxy version 1.4.22 2012/08/09 (Debian 6) Kernel 3.8.13-OVH
HA-Proxy version 1.5-dev24-8860dcd 2014/04/26 (Debian GNU/Linux 7.5)
Kernel 3.10.13-OVH

<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>

Timing : Blocking 2ms /  Receiving : 1ms

Where are you measuring this ? I suspect on the browser, right ? In
this case it confirms the malfunction of the preconnect. You should
take a network capture which will be usable as a reliable basis for
debugging. I'm pretty sure that what you'll see in fact is the following
sequence :

browser haproxy
--- connect --
... long pause ...
 408 + FIN ---
... long pause ...
--- send request -
 RST -

And you see the error in the browser immediately. The issue is then
caused by the browser not respecting this specific rule :

  

Yes, it was measured in the browser (Chrome network monitor).
I've made a network capture for you (in attachment).

Thanks.
Arnaud.


chrome_haproxy_error408.pcap
Description: Binary data


Re: [ANNOUNCE] haproxy-1.5-dev20

2013-12-16 Thread Arnall
Great news Willy, thanks a lot for all of this, and thanks to all the 
contributors !


On 16/12/2013 at 03:41, Willy Tarreau wrote:

Hi all,

here is probably the largest update we ever had, it's composed of 345
patches!

Some very difficult changes had to be made and as usual when such changes
happen, they take a lot of time due to the multiple attempts at getting
them right, and as time goes, people submit features :-)

After two weeks spent doing only fixes, I thought it was time to issue dev20.
I'm sure I'll forget a large number of things, but the main features of this
version include the following points (in merge order) :

   - optimizations (splicing, polling, etc...) : a few percent CPU could be
 saved ;

   - memory : the connections and applets are now allocated only when needed.
 Additionally, some structures were reorganized to avoid fragmentation on
 64-bit systems. In practice, an idle session size has dropped from 1936
 bytes to 1296 bytes (-640 bytes, or -33%).

   - samples : all sample fetch expressions now support a comma-delimited
 list of converters. This is also true in ACLs, so that it becomes
 possible to do things like :

 # convert to lower case and use fast tree indexing
 acl known_domain hdr(host),lower -f huge-domain-list.lst

   - a lot of code has been deduplicated in the tracked counters, it's now
 possible to use sc_foo_bar(1, args) instead of sc1_foo_bar(args). Doing
 so has simplified the code and makes life of APIs easier.

   - it's now possible to look up a tracked key from another table. This allows
 to retrieve multiple counters for the same key.

   - several hash algorithms are provided, and it is possible to select them
 per backend. This high quality work was done at Tumblr by Bhaskar Maddala.

   - agent-checks: this new feature was merged and replaced the lb-agent-chk.
 Some changes are still planned but feedback is welcome. The goal of this
  agent is to retrieve some weight information from a server independently
 of the service health. A typical usage would consist in reporting the
 server's idle percentage as an estimate of the possible weight. This work
 was done by Simon Horman for Loadbalancer.org.

   - samples : more automatic conversions between types are supported, making
 it easier to stick to any parameter. The types are much more dynamic now.
 Some improvements are still pending. This work was done by Thierry Fournier
 at Exceliance.

   - map : a new type of converter appeared : maps. A map matches a key from
 a file just like ACLs do, and replaces this value with the value associated
 with the key on the same line of the file. As it is a converter, it can be
 used in any sample expression. The first usage consists in geolocation,
 where networks are associated with country codes. Maps may be consulted,
 deleted, updated and filled from the CLI. Some will probably use this to
 program actions or emulate ACLs without even reloading a config. This
 work was also achieved by Thierry Fournier, and reviewed by Cyril Bonté
 who developped the original Geoip patchset for 1.4 and 1.5.

   - http-request redirect now supports log-format like expressions, just like
 http-request add-header. This allows to emit strings extracted from the
 request (host header, country code from a map, ...). Thierry again here.

   - checks: tcp-check supports send/expect sequences with strings/regex/binary.
 Thus it now becomes possible to check unsupported protocols, even binary.
 This work is from Baptiste Assmann.

   - keep-alive: the dynamic allocation of the connection and applet in the
 session now allows to reuse or kill a connection that was previously
 associated with the session. Thus we now have a very basic support for
 keep-alive to the servers. There is even an option to relax the load
 balancing to try to keep the same connection. Right now we don't do
 any connection sharing so the main use is for static servers and for
 far remote servers or those which require the broken NTLM auth. That
 said, the performance tests I have run show an increase from 71000
  connections per second to 150000 keep-alive requests per second running
 on one core of a Xeon E5 3.6 GHz. This doubled to 300k requests per
 second with two cores. I didn't test above, I lacked injection tools :-)
 One good point is that it will help people assemble haproxy and varnish
 together with haproxy doing the consistent hash and varnish caching after
 it.

As most of you know, server-side keep-alive is the condition to release 1.5.
Now we have it, we'll be able to improve on it but it's basically working.

I expect to release 1.5-final around January and mostly focus on chasing
bugs till there. So I'd like to set a feature freeze. I know it doesn't
mean much considering that we won't stop contribs. But I 
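As a hedged illustration of the map converter described above (the file
path, map contents, and header name are invented for the example, not
taken from the announcement):

# /etc/haproxy/geoip.map holds "network value" pairs, one per line, e.g.:
#   203.0.113.0/24 AU
frontend web
    bind *:80
    # look up the client address in the map and attach the matched
    # country code, falling back to "unknown" when no network matches
    http-request add-header X-Country %[src,map_ip(/etc/haproxy/geoip.map,unknown)]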

HaProxy and kernel 3.8

2013-05-16 Thread Arnall

Hello,

our servers are hosted at OVH. Yesterday OVH asked their customers to
update the kernel to 3.8.13 (due to a local Linux root exploit in
2.6.37 through 3.8.8).
I've done the update (old kernel = 3.2.13), but now the load of our
haproxy servers has increased (x4). It's still reasonable, but I would
like to know what information I can provide to find out why.


Server:
Debian 6.0.7
HaProxy 1.4.22

Thanks.



Re: HaProxy and kernel 3.8

2013-05-16 Thread Arnall

Hi Lukas,

the hoster provides its own kernel for the dedicated servers; I've
updated via netboot to the hoster's 3.8.13 kernel. So it's neither the
distro kernel nor a kernel.org one.

For the load, how can I know whether it's kernel or user space?
I get the load average from the top command.
%CPU (%us, %sy, ...) is the same as before; only the load average has
increased.


Thanks.

On 16/05/2013 at 16:58, Lukas Tribus wrote:

Hi Arnall!



Yesterday OVH asked their customers to
update the kernel to 3.8.13 ( due to a local linux root exploit in
2.6.37 to 3.8.8 ).
I've done the update (old kernel = 3.2.13) but now the load of our
haproxy servers has increased ( x 4 ). It's still reasonable but i would
like to know if i can provide any informations to know why.


Debian is backporting those fixes to their distro kernel, there is no
need for you to upgrade to a kernel.org kernel - in fact this is a
pretty major step. Doesn't the latest debian kernel update [1] contain
the fix?


Did the load increase in kernel or user space?


Regards,
Lukas


[1] http://lists.debian.org/debian-security-announce/2013/msg00077.html 







Re: HaProxy and kernel 3.8

2013-05-16 Thread Arnall

kernel 3.2.13 grsec 64:

top - 19:13:09 up  6:15,  1 user,  load average: 0.03, 0.03, 0.05
Tasks: 105 total,   1 running, 104 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.3%us,  2.1%sy,  0.0%ni, 79.7%id,  0.1%wa,  0.0%hi, 16.8%si,  0.0%st

Mem:   7914788k total,  3255420k used,  4659368k free,44724k buffers
Swap:   523260k total,0k used,   523260k free,  2865856k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
23095 haproxy   20   0 73144  53m  496 S   17  0.7   0:12.22 haproxy
 2586 root  20   0 38272 4116 1012 S5  0.1   2:54.50 floodmon
 2399 root  20   0  125m 1508 1096 S2  0.0   5:26.85 rsyslogd
23101 root  20   0 19112 1300  916 R0  0.0   0:00.10 top
1 root  20   0  8396  760  632 S0  0.0   0:01.12 init

reboot with kernel 3.8.13 grsec 64:

top - 19:23:22 up 8 min,  1 user,  load average: 0.20, 0.19, 0.12
Tasks: 110 total,   1 running, 109 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.9%us,  1.9%sy,  0.0%ni, 80.8%id,  0.1%wa,  0.0%hi, 16.4%si,  0.0%st

Mem:   7912412k total,   382768k used,  7529644k free, 5288k buffers
Swap:   523260k total,0k used,   523260k free,94876k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 2773 haproxy   20   0  104m  77m  596 S   13  1.0   1:14.99 haproxy
 3017 root  20   0 38276 4116 1012 S3  0.1   0:03.22 floodmon
 2772 root  20   0  124m 1524 1096 S1  0.0   0:08.09 rsyslogd
1 root  20   0  8400  760  632 S0  0.0   0:00.58 init
2 root  20   0 000 S0  0.0   0:00.00 kthreadd

reboot again with 3.2.13 grsec 64:

top - 19:41:41 up 5 min,  1 user,  load average: 0.04, 0.06, 0.04
Tasks: 103 total,   2 running, 101 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.4%us,  2.0%sy,  0.0%ni, 82.6%id,  0.1%wa,  0.0%hi, 15.0%si,  0.0%st

Mem:   7914788k total,   369548k used,  7545240k free, 4976k buffers
Swap:   523260k total,0k used,   523260k free,72872k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 2430 haproxy   20   0  132m  90m  596 R   10  1.2   0:46.42 haproxy
 2622 root  20   0 38272 4116 1012 S4  0.1   0:02.46 floodmon
 2421 root  20   0  124m 1396 1092 S1  0.0   0:05.15 rsyslogd
1 root  20   0  8396  760  632 S0  0.0   0:01.00 init
2 root  20   0 000 S0  0.0   0:00.00 kthreadd



On 16/05/2013 at 18:39, Lukas Tribus wrote:

Hi,


%CPU ( %us, %sy ... )is the same than before, only the load average has
increased.


Uhrm, I'm not sure how this is possible. What was the load average before
the upgrade exactly? Please post the first 5 lines of top with the new
kernel.

Also, if you can boot the old kernel once, let it run for a few minutes
and post a complete top from old and the new kernel, that would probably
give as a better idea of what exactly the difference is.


Lukas   






Re: HaProxy and kernel 3.8

2013-05-16 Thread Arnall
Thanks Lukas, I've done some searching on kernel 3.2.x and found some
articles reporting inconsistent load averages on tickless kernels
(CONFIG_NO_HZ=y).

It seems to be the case here.

Thanks again.

Arnaud.

On 16/05/2013 at 20:44, Lukas Tribus wrote:

Hi Arnall,

looks like the load average calculation was wrong with the old kernel:
it did not account for the soft interrupts (XX%si):

top - 19:13:09 up  6:15,  1 user,  load average: 0.03, 0.03, 0.05
Cpu(s):  1.3%us,  2.1%sy,  0.0%ni, 79.7%id,  0.1%wa,  0.0%hi, 16.8%si,

The CPU is always about 80% idle with both the old and the new kernel,
and it also always spends about 16%si, and a few percents in the kernel
and userspace.

So it looks like you are not facing more real load, the two kernels
only have a different understanding of what load is.


Regards,

Lukas