Re: [1.6.1] Utilizing http-reuse

2015-11-11 Thread Willy Tarreau
Hi Krishna,

On Wed, Nov 11, 2015 at 12:31:42PM +0530, Krishna Kumar (Engineering) wrote:
> Thanks Baptiste. My configuration file is very basic:
> 
> global
>   maxconn 100
> defaults
> mode http
> option http-keep-alive
> option splice-response
> option clitcpka
> option srvtcpka
> option tcp-smart-accept
> option tcp-smart-connect
> timeout connect 60s
> timeout client 1800s
> timeout server 1800s
> timeout http-request 1800s
> timeout http-keep-alive 1800s
> frontend private-frontend
> maxconn 100
> mode http
> bind IP1:80
> default_backend private-backend
> backend private-backend
>  http-reuse always
>  server IP2 IP2:80 maxconn 10
> 
> As described by you, I did the following tests:
> 
> 1. Telnet to the HAProxy IP, and then run each of the following tests:
> 
> A.  Serial: Run wget; sleep 0.5; wget; sleep 0.5; ... (8 times). tcpdump shows
>     that when each wget finishes, the client closes the connection and haproxy
>     sends an RST to the single backend. The next wget opens a new connection
>     to haproxy, and in turn to the server upon request.

That's expected. To be clear about one point so that there is no doubt
about this, we don't have connection pools for now, we can only share
*existing* connections. So once your last connection closes, you don't
have server connections anymore and you create new ones.

> B.  Run 8 wgets in parallel. Each opens a new connection to get a 128 byte
>     file. Again, 8 separate connections are opened to the backend server.

But are they *really* processed in parallel ? If the file is only 128 bytes,
I can easily imagine that the connections are opened and closed immediately.
Please keep in mind that wget doesn't work like a browser *at all*. A browser
keeps connections alive. Wget fetches one object and closes. That's a huge
difference because the browser connection remains reusable while wget's not.
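
For what it's worth, here are two client commands that do keep connections
alive and therefore give reuse a chance; the URL and counts below are only an
illustration based on your config (the /128 object behind IP1):

  ab -k -n 10000 -c 8 http://IP1:80/128
  curl -s -o /dev/null -o /dev/null http://IP1/128 http://IP1/128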

> C.  Run "wget -i ". wget uses keepalive to not close the connection. Here,
>     wget opens only 1 connection to haproxy, and haproxy opens 1 connection
>     to the backend, over which wget transfers the 5 files one after the
>     other. Behavior is identical to 1.5.12 (same config file, except without
>     the reuse directive).

OK. That's a better test.

> D.  Run 5 "wget -i " in parallel. 5 connections are opened by the 5 wgets,
>     and 5 connections are opened by haproxy to the single server; finally
>     all are closed by RSTs.

Is wget advertising HTTP/1.1 in the request? If not, that could
explain why they're not merged; we only merge connections from
HTTP/1.1 compliant clients. Also, we keep private any connection
which sees a 401 or 407 status code, so that authentication doesn't
get mixed up with other clients and we remain compatible with broken
auth schemes like NTLM which violates HTTP. There are other criteria
that mark a connection private (a small config sketch follows the list):
  - proxy protocol used to the server
  - SNI sent to the server
  - source IP binding to the client's IP address
  - source IP binding to anything dynamic (eg: header)
  - 401/407 received on a server connection.
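
For illustration only, here is a minimal sketch of a backend where reuse is
requested but one of the criteria above (source binding to the client's
address) would force every server connection to stay private; the names come
from your config and the "source" line is purely an example of what to avoid
if you want connections to be shared:

backend private-backend
 http-reuse always
 # transparent source binding marks every server connection private:
 source 0.0.0.0 usesrc clientip
 server IP2 IP2:80 maxconn 10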

> I also modified step #1 above to do a telnet, followed by a GET in telnet
> to actually open a server connection, and then ran the other tests. I
> still don't see connection reuse having any effect.

How did you make your test? What exact request did you type?

Willy




Re: [1.6.1] Utilizing http-reuse

2015-11-11 Thread Krishna Kumar (Engineering)
Hi Willy,

>> B.  Run 8 wgets in parallel. Each opens a new connection to get a 128 byte
>>     file. Again, 8 separate connections are opened to the backend server.
>
> But are they *really* processed in parallel ? If the file is only 128 bytes,
> I can easily imagine that the connections are opened and closed immediately.
> Please keep in mind that wget doesn't work like a browser *at all*. A browser
> keeps connections alive. Wget fetches one object and closes. That's a huge
> difference because the browser connection remains reusable while wget's not.

Yes, they were not really in parallel. I just tested with a 128K byte file
(running 4 wgets in parallel, each retrieving 128K). Here, I see 4 connections
being opened, lots of data packets in the middle, and finally 4 connections
being closed. I can test with the "ab -k" option to simulate a browser, I
guess; will try that.

>> D.  Run 5 "wget -i " in parallel. 5 connections are opened by the 5 wgets,
>>     and 5 connections are opened by haproxy to the single server; finally
>>     all are closed by RSTs.
>
> Is wget advertising HTTP/1.1 in the request ? If not that could

Yes, tcpdump shows HTTP/1.1 in the GET request.

> explain why they're not merged, we only merge connections from
> HTTP/1.1 compliant clients. Also we keep private any connection
> which sees a 401 or 407 status code so that authentication doesn't
> mix up with other clients and we remain compatible with broken
> auth schemes like NTLM which violates HTTP. There are other criteria
> to mark a connection private :
>   - proxy protocol used to the server
>   - SNI sent to the server
>   - source IP binding to client's IP address
>   - source IP binding to anything dynamic (eg: header)
>   - 401/407 received on a server connection.

I am not doing any of these specifically. It's a very simple setup where the
client@ip1 connects to haproxy@ip2 and requests a 128 byte file, which is
handled by server@ip3.

>> I also modified step #1 above to do a telnet, followed by a GET in telnet
>> to actually open a server connection, and then ran the other tests. I
>> still don't see connection reuse having any effect.
>
> How did you make your test, what exact request did you type ?

I was doing this in telnet:

GET /128 HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)

Thanks for your response & help,

Regards,
- Krishna Kumar



RE: Fast reloads leave orphaned processes on systemd based systems

2015-11-11 Thread Lukas Tribus
Hi Lukas,



> When reloading haproxy too fast on EL7 (RedHat, CentOS) the system is
> being filled with orphaned processes.
>
> I encountered this problem on CentOS 7 with
> haproxy-1.5.4-4.el7_1.x86_64 but expect it to exist on all systems
> using haproxy-systemd-wrapper not just those based on Fedora.
>
> Steps to reproduce:
>
> 1) haproxy is running normally.
>
> [root@localhost ~]# ps ax | grep haproxy
> 3140 ? Ss 0:00 /usr/sbin/haproxy-systemd-wrapper -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
> 3141 ? S 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
> 3142 ? Ss 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
>
> 2) Several reloads are executed in quick succession. Problem worsens
> when processes happen to execute a reload in parallel.
>
> [root@localhost ~]# while :; do systemctl reload haproxy; done
> ^C
>
> 3) There are multiple haproxy processes running that will never end. As
> you can see, there are duplicate pids for the -sf arg. Maybe caused by a
> race between haproxy-systemd-wrapper reading the pid file and the new
> haproxy process writing its pid.
>
> [root@localhost ~]# ps ax | grep haproxy
> 423 ? S 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 419
> 429 ? S 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 425
> 430 ? Ss 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 419
> 431 ? Ss 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 425
> 31833 ? Ss 0:01 /usr/sbin/haproxy-systemd-wrapper -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
> 36593 ? S 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 36587
> 36600 ? Ss 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 36587
> 38316 ? S 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 38311
> 38324 ? Ss 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 38311
> 38344 ? S 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 38325
> 38350 ? Ss 0:00 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 38325
> ...
> ...
>
>
> I believe the problem is that there's a race in
> haproxy-systemd-wrapper.c line 98, where it's missing a
> "} else if (nb_pid > 0) { ... }" block that waits until nb_pid is no
> longer found in the pidfile. Or something similarly blocking.
>
> Otherwise the parent will accept new SIGUSR2/SIGHUP reloads before the
> new haproxy process that was spawned in line 96 has written its pid
> file.
>
> Also note the following from the systemd.service manpage:
> "It is strongly recommended to set ExecReload= to a command that not
> only triggers a configuration reload of the daemon, but also
> synchronously waits for it to complete."
> That's currently not the case.

Thanks for the analysis, it makes sense to me. Also, since locking in the
parent scripts [1] fixes the issue, if I understand correctly, that further
confirms your suspicion.
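
For the record, here is a rough sketch in C of the kind of synchronous wait
described above. This is not the actual wrapper code: the read_pids() helper,
its signature and the polling interval are assumptions made for the example.

/* After spawning the new haproxy process, keep re-reading the pid file
 * until none of the pids passed via -sf is listed any more, so that the
 * next reload signal sees the new pid instead of a stale one. */
static void wait_for_new_pids(char **old_pids, int nb_old)
{
	char **pids;
	int nb, i, j, stale;

	do {
		usleep(10000);               /* small poll interval */
		stale = 0;
		nb = read_pids(&pids);       /* re-read /run/haproxy.pid (assumed helper) */
		for (i = 0; i < nb; i++)
			for (j = 0; j < nb_old; j++)
				if (strcmp(pids[i], old_pids[j]) == 0)
					stale = 1;   /* an old pid is still listed */
		/* a real implementation would free pids here */
	} while (stale);
}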

CC'ing systemd contributors for comments.



Regards,

Lukas


[1] 
https://github.com/mesosphere/marathon-lb/commit/83260fdf687c774064b54d3bb009f5b3a1d75c97

  


Re: HAProxy does not write 504 on keep-alive connections

2015-11-11 Thread Holger Just
Hi,

Willy Tarreau wrote:
> As explained above, it's because a keep-alive enabled client must implement
> the ability to replay requests for which it didn't get a response because
> the connection died. In fact we're forwarding to the client what we saw on
> the server side so that the client can take the correct decision. If your
> client was directly connected to the server, it would have seen the exact
> same behaviour.

One of the problems we saw when trying to reproduce this issue was that
some clients I tried (i.e. curl 7.45.0 and ruby's httpclient gem)
silently replayed any requests for which they didn't receive an answer.

This can result in duplicated POSTs on the backend servers. Often, servers
continue to handle the first POST even after HAProxy closed the backend
connection because its server timeout struck. Now the client just replays
the POST, resulting in potentially fatal behavior.

If I understand the HTTP specs correctly, this replay is correct from the
client's perspective, as it can't know whether it is speaking to a
loadbalancer or directly to an origin server.

As a loadbalancer however, HAProxy should always return a proper HTTP
error if the request was even partially forwarded to the server. It's
probably fine to just close the connection if the connect timeout struck
and the request was never actually handled anywhere, but it should
definitely return a real HTTP error if it's the server timeout and a
backend server started doing anything with a request.

You could probably argue for differentiating between safe and unsafe
methods and also just closing for safe ones, but that is probably even
more confusing and has the potential for subtle bugs.

Best,
Holger




Re: HAProxy does not write 504 on keep-alive connections

2015-11-11 Thread Bryan Talbot
On Wed, Nov 11, 2015 at 6:47 AM, Holger Just  wrote:

>
> As a loadbalancer however, HAProxy should always return a proper HTTP
> error if the request was even partially forwarded to the server. It's
> probably fine to just close the connection if the connect timeout struck
> and the request was never actually handled anywhere, but it should
> definitely return a real HTTP error if it's the server timeout and a
> backend server started doing anything with a request.
>
>
This would be my preferred behavior and actually what I thought haproxy was
already doing.

-Bryan


acl regex

2015-11-11 Thread Guillaume Bourque
Hi all,

I can’t create an acl that will match this

http://domain/?lang=

I tried

acl fr_top  path_reg  ^/.lang\=$
acl fr_top  path_reg  ^/\?lang\=$

acl fr_top  path_beg  /?lang\=$

I have a redirect 301 with 

http-request redirect location http://doamine.com/ code 301 if fr_top


I have done other redirects that work fine, but no luck with this one.

Any help greatly appreciated.

Thanks



---
Guillaume Bourque, B.Sc.,


Re: acl regex

2015-11-11 Thread Bryan Talbot
On Wed, Nov 11, 2015 at 8:43 PM, Guillaume Bourque <
guillaume.bour...@logisoftech.com> wrote:

> Hi all,
>
> I can’t create an acl that will match this
>
> http://domain/?lang=
>
> I tried
>
> acl fr_top  path_reg  ^/.lang\=$
> acl fr_top  path_reg  ^/\?lang\=$
>
> acl fr_top  path_beg  /?lang\=$
>
>
>

You can't match the query string with the 'path' matcher. Try 'req.uri' or
'query' if you're using 1.6.
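
Something along these lines might work; this is only a sketch, the 'query'
fetch assumes 1.6, and the redirect target is made up:

  # match a query string beginning with "lang="
  acl fr_top query -m beg lang=
  http-request redirect location http://domain/ code 301 if fr_top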


Re: acl regex

2015-11-11 Thread Guillaume Bourque
Hi,

Thanks for the suggestion, but it did not work for me. I tried

   acl fr_top  url_reg  /?lang=
   acl fr_top  url_reg  /?lang=$
# off acl fr_top  urlp_reg(lang\=$,?) -m found
# off acl fr_top  urlp_reg(lang\=$,?) -m found

but with no luck

thanks

---
Guillaume Bourque, B.Sc.,
On 2015-11-12 at 02:18, Igor Cicimov wrote:

> 
> On 12/11/2015 5:30 PM, "Guillaume Bourque" 
>  wrote:
> >
> > Hello Bryan
> >
> > I’m running haproxy 1.5.4 and I can’t find any example of how to use
> > req.uri. If you could give an example of how to match a specific query to
> > redirect to another
> >
> > From http://domain/pages/store.php?lang=fr   to http://domain/store/
> >
> > That would be great !
> >
> > TIA
> >
> >
> >
> > ---
> > Guillaume Bourque, B.Sc.,
> >
> > On 2015-11-12 at 00:42, Bryan Talbot wrote:
> >
> >> On Wed, Nov 11, 2015 at 8:43 PM, Guillaume Bourque 
> >>  wrote:
> >>>
> >>> Hi all,
> >>>
> >>> I can’t create an acl that will match this
> >>>
> >>> http://domain/?lang=
> >>>
> >>> I tried
> >>>
> >>> acl fr_top  path_reg  ^/.lang\=$
> >>> acl fr_top  path_reg  ^/\?lang\=$
> >>>
> >>> acl fr_top  path_beg  /?lang\=$
> >>>
> >>>
> >>
> >>
> >> You can't match the query string with the 'path' matcher. Try 'req.uri' or 
> >> 'query' if you're using 1.6. 
> >>
> >>
> >
> Try this:
> 
> acl fr_top  url_reg   /pages/store.php?lang=fr
> 



Re: WHY they are different when checking concurrent limit?

2015-11-11 Thread Willy Tarreau
Hi,

On Tue, Nov 10, 2015 at 07:50:56AM +, Zhou,Qingzhi wrote:
> Hi,
> Thanks very much.
> But I think we can use listener_full instead of limit_listener if we want
> to wake up the listener when there's a connection closed, like in the
> beginning of listener_accept:
> 
> 	if (unlikely(l->nbconn >= l->maxconn)) {
> 		listener_full(l);
> 		return;
> 	}
> 
> 
> WHY not use listener_full?

Because the listener is not full. If it were full, it would have been
handled by the test you pointed at above. Here we're in the situation where
the frontend's maxconn is reached before the listener is full, for example
when you have 2 listeners in a frontend, each getting half the number of
connections.

We know that we won't be able to accept any new connection on this listener
until some connections are released on the frontend. So by calling
limit_listener() we temporarily pause the listener and add it to the
frontend's queue, to be enabled again when the frontend releases
connections. There's no reason to add a delay here because we know
exactly when connections are released on this frontend, so retrying
earlier would not change anything.
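
In other words the accept path does roughly this (a simplified sketch, not
the literal source):

	if (p->feconn >= p->maxconn) {
		/* frontend full: pause this listener and queue it on the
		 * frontend so it is resumed as soon as a connection is
		 * released, with no need for a retry delay */
		limit_listener(l, &p->listener_queue);
		return;
	}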

Hoping this helps,
Willy




Howto masquerade real server in a two armed transparent setup

2015-11-11 Thread HAproxy
Trying to make a two-armed transparent setup like mentioned here  to do both
transparent L4 [SSL] DR and L7 SSL-terminated load-balanced services.

I have the load-balanced services working, but I want the real servers to be
able to access the public internet, and to have other non-balanced services,
like management ssh access to my real servers, go through the HAProxy VMs.

The question is now how I can masquerade the real servers so they can access
the public internet through my active/passive HAProxies, currently set up
like mentioned here . Any hints/URLs are welcome.

TIA

/Steffen

Re: HAProxy does not write 504 on keep-alive connections

2015-11-11 Thread Willy Tarreau
On Wed, Nov 11, 2015 at 06:55:11PM -0800, Bryan Talbot wrote:
> On Wed, Nov 11, 2015 at 6:47 AM, Holger Just  wrote:
> 
> >
> > As a loadbalancer however, HAProxy should always return a proper HTTP
> > error if the request was even partially forwarded to the server. It's
> > probably fine to just close the connection if the connect timeout struck
> > and the request was never actually handled anywhere, but it should
> > definitely return a real HTTP error if it's the server timeout and a
> > backend server started doing anything with a request.
> >
> >
> This would be my preferred behavior and actually what I thought haproxy was
> already doing.

Guys, please read the HTTP RFC, you *can't* do that by default. HTTP/1 doesn't
warn before closing an idle keep-alive connection. So if you send a request
over an existing connection and the server closes at the same time, you get
exactly the situation above. And you clearly don't want to send a 502 or 504
to a client because it will be displayed in the browser. Remember the issues
we had with Chrome's preconnect and 408? That would be the same. We had to
silence the 408 on keep-alive connections to the client so that the browser
could replay. Here it's the same: by silently closing, we're telling the
browser it should retry, and we're ensuring not to interfere between the
browser and the server regarding the connection's behaviour.

And it's the browser's responsibility to only retry safe requests (those
that are called "idempotent"). Normally it does this by selecting which
requests can be sent over an existing connection. That ensures a non-
idempotent request cannot hit a closed connection. In web services
environments, in order to address this, you often see the requests sent
in two parts, first a POST is emitted with an Expect: 100-continue, and
only once the server responds, the body is emitted (which ensures that
the connection is still alive). Note by the way that it requires two
round trips to do this so there's little benefit to keeping persistent
connections in such a case.
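
For illustration, the two-step POST described above looks roughly like this
on the wire (host, path and body are made up):

  POST /orders HTTP/1.1
  Host: www.example.com
  Content-Type: application/json
  Content-Length: 18
  Expect: 100-continue

  (the client waits for the interim response before sending the body)

  HTTP/1.1 100 Continue

  {"item": "widget"}

  HTTP/1.1 200 OK
  ...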

You have the exact same situation between the browser and haproxy, or the
browser and the server. When haproxy or the server closes a keep-alive
connection, the browser doesn't know whether the request was being processed
or not, and that's the reason why it (normally) doesn't send unsafe requests
over existing connections.

I'm not opposed to having an option to make these errors verbose for web
services environments, but be prepared to hear end-users complain if you
enable this with browser-facing web sites, because your users will
unexpectedly get some 502 or 504.

Yes the persistent connection model in HTTP/1 is far from being perfect,
and that's one of the reasons it was changed in HTTP/2.

Willy




Re: [1.6.1] Utilizing http-reuse

2015-11-11 Thread Willy Tarreau
Hi Krishna,

On Wed, Nov 11, 2015 at 03:22:54PM +0530, Krishna Kumar (Engineering) wrote:
> I just tested with a 128K byte file (running 4 wgets in parallel, each
> retrieving 128K). Here, I see 4 connections being opened, lots of data
> packets in the middle, and finally 4 connections being closed. I can test
> with the "ab -k" option to simulate a browser, I guess; will try that.

In my tests, ab -k definitely works.

> > Is wget advertising HTTP/1.1 in the request ? If not that could
> 
> Yes, tcpdump shows HTTP/1.1 in the GET request.

OK.

> >   - proxy protocol used to the server
> >   - SNI sent to the server
> >   - source IP binding to client's IP address
> >   - source IP binding to anything dynamic (eg: header)
> >   - 401/407 received on a server connection.
> 
> I am not doing any of these specifically. It's a very simple setup where the
> client@ip1 connects to haproxy@ip2 and requests a 128 byte file, which is
> handled by server@ip3.

OK. I don't see any reason for this not to work then.

> I was doing this in telnet:
> 
> GET /128 HTTP/1.1
> Host: www.example.com
> User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)

Looks fine as well. Very strange. I have no idea at the moment why it would
not work; I suspect this is something stupid and obvious but I am failing
to spot it :-/

Willy




Re: acl regex

2015-11-11 Thread Guillaume Bourque
Hello Bryan

I’m running haproxy 1.5.4 and I can’t find any example of how to use req.uri.
If you could give an example of how to match a specific query to redirect to
another

From http://domain/pages/store.php?lang=fr   to http://domain/store/

That would be great!

TIA



---
Guillaume Bourque, B.Sc.,

On 2015-11-12 at 00:42, Bryan Talbot wrote:

> On Wed, Nov 11, 2015 at 8:43 PM, Guillaume Bourque 
>  wrote:
> Hi all,
> 
> I can’t create an acl that will match this
> 
> http://domain/?lang=
> 
> I tried
> 
> acl fr_top  path_reg  ^/.lang\=$
> acl fr_top  path_reg  ^/\?lang\=$
> 
> acl fr_top  path_beg  /?lang\=$
> 
> 
> 
> 
> You can't match the query string with the 'path' matcher. Try 'req.uri' or 
> 'query' if you're using 1.6. 
> 
> 



Re: acl regex

2015-11-11 Thread Igor Cicimov
On 12/11/2015 5:30 PM, "Guillaume Bourque" <
guillaume.bour...@logisoftech.com> wrote:
>
> Hello Bryan
>
> I’m running haproxy 1.5.4 and I can’t find any example of how to use
> req.uri. If you could give an example of how to match a specific query to
> redirect to another
>
> From http://domain/pages/store.php?lang=fr   to http://domain/store/
>
> That would be great !
>
> TIA
>
>
>
> ---
> Guillaume Bourque, B.Sc.,
>
> On 2015-11-12 at 00:42, Bryan Talbot wrote:
>
>> On Wed, Nov 11, 2015 at 8:43 PM, Guillaume Bourque <
guillaume.bour...@logisoftech.com> wrote:
>>>
>>> Hi all,
>>>
>>> I can’t create an acl that will match this
>>>
>>> http://domain/?lang=
>>>
>>> I tried
>>>
> >>> acl fr_top  path_reg  ^/.lang\=$
> >>> acl fr_top  path_reg  ^/\?lang\=$
> >>>
> >>> acl fr_top  path_beg  /?lang\=$
>>>
>>>
>>
>>
>> You can't match the query string with the 'path' matcher. Try 'req.uri'
or 'query' if you're using 1.6.
>>
>>
>
Try this:

acl fr_top  url_reg   /pages/store.php?lang=fr
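
And to complete it for the redirect mentioned at the start of the thread,
something like the sketch below (the target is whatever you want to send the
user to; in a strict regex the '.' and '?' would normally be escaped):

acl fr_top  url_reg   /pages/store\.php\?lang=fr
http-request redirect location http://domain/store/ code 301 if fr_top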


Re: Echo server in Lua

2015-11-11 Thread PiBa-NL

Hi Thrawn,

I tried these configs, and there doesn't seem to be much if any
difference. The tcp one might even be the slowest in my limited
virtualized tests, but only by a few milliseconds.

frontend lua-replyip
    bind 192.168.0.120:9010
    mode http
    http-request use-service lua.lua-replyip
frontend lua-replyip-copy
    bind 192.168.0.120:9011
    mode tcp
    tcp-request content use-service lua.lua-replyip-tcp
frontend lua-replyip-httpreq
    bind 192.168.0.120:9012
    mode http
    http-request lua.lua-replyip-http-req

core.register_service("lua-replyip", "http", function(applet)
   local response = applet.f:src()
   applet:set_status(200)
   applet:add_header("Server", "haproxy-lua/echo")
   applet:add_header("content-length", string.len(response))
   applet:add_header("content-type", "text/plain")
   applet:start_response()
   applet:send(response)
end)

core.register_service("lua-replyip-tcp", "tcp", function(applet)
   local buffer = applet.f:src()
   applet:send("HTTP/1.0 200 OK\r\nServer: 
haproxy-lua/echo\r\nContent-Type: text/html\r\nContent-Length: " .. 
buffer:len() .. "\r\nConnection: close\r\n\r\n" .. buffer)

end)

core.register_action("lua-replyip-http-req", { "http-req" }, function (txn)
local buffer = txn.f:src()
txn.res:send("HTTP/1.0 200 OK\r\nServer: 
haproxy-lua/echo\r\nContent-Type: text/html\r\nContent-Length: " .. 
buffer:len() .. "\r\nConnection: close\r\n\r\n" .. buffer)

txn:done()
end)


On 11-11-2015 at 3:07, Thrawn wrote:
Hmm...I seem to be able to set up something in TCP mode, and it 
returns the expected response via curl, but its performance is awful. 
I must be doing something wrong?


Lua:

core.register_action("tcp-echo", {"tcp-req"}, function (txn)
local buffer = txn.f:src()
txn.res:send("HTTP/1.0 200 OK\r\nServer: 
haproxy-lua/echo\r\nContent-Type: text/html\r\nContent-Length: " .. 
buffer:len() .. "\r\nConnection: close\r\n\r\n" ..

missing the appending of 'buffer' in the end on the line above?

txn:done()
end)

I couldn't find a way for a TCP applet to retrieve the client IP 
address; suggestions are welcome.


HAProxy config:

frontend tcp-echo
bind 127.0.2.1:1610
timeout client 1
mode tcp
tcp-request content lua.tcp-echo

Testing this with ab frequently hangs and times out even at tiny loads 
(10 requests with concurrency 3).




On Wednesday, 11 November 2015, 10:19, PiBa-NL  
wrote:



B.t.w., if the sole purpose of the frontend is to echo the IP back to the
client, you should probably also check the 'use-service' applet syntax; I
don't know if that could be faster for your purpose.
Then another thing to check would be whether you want to use the tcp or
http service mode. A TCP service could be almost one line of Lua code, and
I kind of expect it to be a bit faster.


http://www.arpalert.org/src/haproxy-lua-api/1.6/index.html#haproxy-lua-hello-world
Instead of sending 'hello world' you could send the client-ip..
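
Something like this minimal sketch, following the hello-world example from
that page (untested here):

core.register_service("hello-world", "tcp", function(applet)
   applet:send("hello world\n")
end)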

On 10-11-2015 at 23:46, Thrawn wrote:

OK, some explanation seems in order :).

I ran ab with concurrency 1000 and a total of 30000 requests, against
each server, 5 times, plus one run each with 150000 requests (the sum of
the previous 5 tests).
For Apache+PHP, this typically resulted in 5-15ms response time for 
99% of requests, with the remaining few either taking tens of seconds 
or eventually disconnecting with an error.
For HAProxy+Lua, 99% response times were 1ms, or sometimes 2ms, with 
the last few taking about 200ms. So, HAProxy worked much better (of 
course).


However, on the larger run (150k), HAProxy too had a small percentage 
of disconnections (apr_socket_recv: Connection reset by peer). I've 
been able to reproduce this with moderate consistency whenever I push 
it beyond about 35000 total requests. It's still a better error rate 
than PHP, but I'd like to understand why the errors are occurring. 
For all I know, it's a problem with ab.


I've also tried a couple of runs with 150000 requests but concurrency
only 100, and neither server had trouble serving that, although
interestingly, PHP is slightly more consistent: 99% within 4-5ms,
then about 200ms for the last few, whereas HAProxy returns 99% within
1-2ms and 1800ms for the last few.


The box is just my workstation, 8 cores and 16GB RAM, running Ubuntu 
15.10, with no special tuning.


Any ideas on why the HAProxy tests showed disconnections or 
occasional slow response times at high loads?




On Wednesday, 11 November 2015, 8:29, Baptiste  
 wrote:



On Tue, Nov 10, 2015 at 10:46 PM, Thrawn wrote:

> OK, I've set this up locally, and tested it against PHP using ab.
>
> HAProxy was consistently faster (99% within 1ms, vs 5-15ms for 
PHP), but at
> request volumes over about 35000, with concurrency 1000, it 
consistently had

Re: HAProxy does not write 504 on keep-alive connections

2015-11-11 Thread Willy Tarreau
Hi,

On Wed, Nov 11, 2015 at 03:46:38AM +, Laurent Senta wrote:
> Thanks for the reply guys,
> after investigating the source code, it looks like this behavior is wanted.

Yes, it is mandated by the way keep-alive works in HTTP. You never know when
the connection will abort, and at the moment you reuse a connection, it may
fail on you. People don't want to see in their browser errors that are not
real errors and just a result of normal traffic, instead the browser knows
it must retry because that situation is expected. But it will not retry if
it receives a valid response (and an error is a valid response).

> I've been able to "fix it" by removing these two lines:
> 
> diff --git a/src/proto_http.c b/src/proto_http.c
> index 2dcac06..d33b4a1 100644
> --- a/src/proto_http.c
> +++ b/src/proto_http.c
> @@ -6125,8 +6125,6 @@ int http_wait_for_response(struct stream *s, struct channel *rep, int an_bit)
>  	else if (rep->flags & CF_READ_TIMEOUT) {
>  		if (msg->err_pos >= 0)
>  			http_capture_bad_message(&s->be->invalid_rep, s, msg, msg->msg_state, sess->fe);
> -		else if (txn->flags & TX_NOT_FIRST)
> -			goto abort_keep_alive;

I would argue that we could possibly relax this for timeouts indeed.
Someone who configures haproxy's keep-alive timeout to a value lower
than the surrounding firewalls' timeouts is seeking trouble. And the
harm was already done by making the client wait. So probably we'd
rather report the 504 here.

Would it be enough for everyone if we just removed this one or do we
need something more configurable like Tait's patch? I think we could
have something with verbosity levels:
  - act as transparently as possible regarding our local issues (for
browsers), which means replicate on the client side what we see on
the server side (eg: close when we get a close).
  - maybe report errors in logs but still silently close the connection
  - only report suspicious errors (eg: 504 here) to the client
  - report them all (eg: for webservices where this helps detect a
failing server)

Before rushing on a patch we should also consider this with the http-reuse
that was introduced in 1.6, to ensure we don't end up with something ugly.

> I traced back that change to:
> http://git.haproxy.org/?p=haproxy-1.6.git;a=commit;h=6b726adb35d998eb55671c0d98ef889cb9fd64ab
> 
> I don't understand why it's saner to kill the connection and hide the 504
> instead of clearly stating the error and letting the application handle
> the timeout.

As explained above, it's because a keep-alive enabled client must implement
the ability to replay requests for which it didn't get a response because
the connection died. In fact we're forwarding to the client what we saw on
the server side so that the client can take the correct decision. If your
client was directly connected to the server, it would have seen the exact
same behaviour.

Regards,
Willy