On 22/07/2016 11:44 πμ, Willy Tarreau wrote:
> Hi Pavlos,
> 
> On Fri, Jul 22, 2016 at 12:33:07AM +0200, Pavlos Parissis wrote:
>> On 21/07/2016 10:30 πμ, Willy Tarreau wrote:
>>> Hi,
>>> 
>>> On Thu, Jul 21, 2016 at 02:33:05PM -0400, CJ Ess wrote:
>>>> I think I'm overlooking something simple, could someone spot check me?
>>>> 
>>>> What I want to do is to pool connections on my http backend - keep HAProxy
>>>> from opening a new connection to the same backend if there is an
>>>> established connection that is idle.
>>>> 
>>>> My haproxy version is 1.5.18
>>> (...)
>>> 
>>>> There is more than enough traffic going through the backend that if a
>>>> connection is idle, there will be a request that could use it (within ms,
>>>> it should never hit the 5s or 75s timeouts), however in every case the
>>>> connection just sits idle for five seconds and then closes.
>>>> 
>>>> Am I missing something simple to enable this behavior?
>>> 
>>> Yes, you're missing the "http-reuse" directive, which was introduced in 1.6.
>>> Be careful when doing this (read the doc carefully), as some servers still
>>> tend to confuse requests and connections and could do some funny stuff there.
>> 
>> Can you elaborate a bit more on this? Which servers? Nginx/Apache, and under
>> which conditions?
> 
> Some application servers (or some components) tend to tie some incoming
> parameters to the connection instead of the request. There used to be a lot
> of confusion regarding this when keep-alive was brought to HTTP, because it
> was the era when reverse proxies did not even exist, so there was no doubt
> that a connection always came from a single client. Unfortunately some bad
> designs were introduced due to this. The most widely known certainly is
> NTLM, which violates HTTP since it assumes that all requests coming over a
> connection belong to the same client. HAProxy detects this by marking a
> connection "private" as soon as it sees a 401 or 407 on it, and will not
> share it with any other client. But regardless of this, you'll find dirty
> applications which assign a cookie only after the 2nd or 3rd request over a
> given connection. Some will only emit a response cookie on the first
> response, so the next requests will never get a cookie. Others will only
> check the X-Forwarded-For header when the connection establishes and will
> use this value for all requests on the connection, resulting in wrong logs
> and/or possibly wrong rule decisions. Others will simply take a decision on
> the first request of a connection and not check the remaining ones (like
> haproxy used to do up to version 1.3, and can still do when forced into
> tunnel mode).
> 
> Most often, the application components which break these HTTP principles are
> the ones which do not support a load balancer. But sometimes some of them
> work when you install a load balancer working in tunnel mode in front of
> them (like haproxy up to 1.3 by default).
> 
> A rule of thumb is that if your application only works when you have "option
> prefer-last-server", then your application certainly is at risk.
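[Archive editor's note: the rule of thumb above can be tested with a config like the following sketch; the backend name and server addresses are hypothetical.]

```
backend app_servers
    mode http
    balance roundrobin
    # Diagnostic per the rule of thumb: if the application only behaves
    # correctly when this option is set (i.e. when a client's subsequent
    # requests stick to the previously used server connection), it likely
    # ties state to connections rather than requests, and is therefore
    # unsafe for connection sharing via http-reuse.
    option prefer-last-server
    server app1 192.0.2.10:8080 check
    server app2 192.0.2.11:8080 check
```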
> 

OK, I am safe then. But I do use "option prefer-last-server", as it reduces
wall-clock time from the client's point of view and lets HAProxy serve many
more requests.


> This problem has been widely discussed inside the IETF HTTP working group
> and is known as "requests must work in isolation". It's been quite well
> documented over the years, and normally all modern components are safe. But
> if you connect to a good old dirty thing developed in the early 2000s, be
> careful! Similarly, when using 3rd-party Apache modules developed by people
> doing a quick and dirty thing, be prepared to discover the hard way that
> they never read an RFC in their life...
> 

Yes, I know, as I worked in the telecom sector for a few years and had to work
with those applications.


Thanks for yet another very detailed explanation,
Pavlos
