Re: http-reuse always, works quite well

2016-10-22 Thread Brendan Kearney

On 10/22/2016 02:08 AM, Willy Tarreau wrote:


You're welcome. Please note that the reuse mechanism is not perfect and
can still be improved. So do not hesitate to report any issue you find,
we definitely need real-world feedback like this. I cannot promise that
every issue will be fixed, but at least we need to consider them and see
what can be done.

Cheers,
Willy

I have HTTP interception in place, using iptables/DNAT to redirect
traffic to HAProxy and load balance across 2 Squid instances. I was using
aggressive mode http-reuse and it seemed to provide a better streaming
experience for Roku/Sling. After a period of time, the performance
degraded and the experience was worse than the original state: buffering,
lag and pixelation were the symptoms. I did not try the always
mode, and turned http-reuse off for the interception I am doing. The
issue has cleared since.


While interception and transparent proxying seem to be problematic,
explicit proxying and internal HTTP have both seen a marked improvement
in performance. No scientific collection of data has been done, but
page load times have noticeably improved. I may move from
aggressive to always for these backends.
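For reference, the interception path described above could be sketched roughly like this; all names, ports and addresses here are illustrative assumptions, not taken from the original setup:

```
# Hypothetical layout: iptables DNAT redirects port-80 traffic to :3129
frontend http_intercept
    bind :3129
    mode http
    default_backend squid_farm

backend squid_farm
    mode http
    balance roundrobin
    http-reuse aggressive      # the mode that later showed degradation here
    server squid1 192.168.1.11:3128 check
    server squid2 192.168.1.12:3128 check
```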


Keep up the good work, and thanks for some really great software,

brendan




Re: stick-table not updated with every request

2016-10-22 Thread Willy Tarreau
On Sat, Oct 22, 2016 at 02:50:20PM +0200, Dennis Jacobfeuerborn wrote:
> thank you, I tried the inspect-delay again, and this alone seems to fix
> things, at least for the curl tests, so Chad was indeed right; I might
> have forgotten to restart HAProxy when I tested this the first time.

This happens :-)

> I'm still going to include the "reject unless HTTP" bit, though I'm
> wondering if it might have any negative side effects for regular
> traffic?

No, not at all.

> Do browsers handle this rejection of their connections
> appropriately?

Absolutely, it's exactly what happens when you hit a request timeout.
In fact we used to send a 408 in the past and some browsers would
display it instead of being silent. All of them confirmed that they
expect the connection to be silently closed, which is exactly what
happens with this reject. For them, their idle connection simply
expires and this is what we want to achieve.

Cheers,
Willy



Re: stick-table not updated with every request

2016-10-22 Thread Dennis Jacobfeuerborn
On 22.10.2016 00:08, Willy Tarreau wrote:
> Hi Dennis,
> 
> On Fri, Oct 21, 2016 at 09:09:39PM +0200, Dennis Jacobfeuerborn wrote:
>> So after more experimenting I got things to work properly when I move
>> the "limited_path" acl check from the "tcp-request content" directive to
>> the "use-backend abuse-warning" directive which accomplishes the same
>> thing with regards to the rate-limiting.
>>
>> My guess is that your suspicion was correct that this is some kind of
>> "Layer 4 vs. Layer 7" problem with the path acl (Layer 7) being used in
>> the tcp-request directive (Layer 4). I'm wondering if there is some
>> other way to make this work since the inspect-delay apparently doesn't
>> work in this case.
> 
> I'm pretty sure Chad's solution is the right one. However you need to have
> a large enough inspect-delay (ideally as large as timeout http-request or
> timeout client). The reason is that some browsers perform a pre-connect
> and don't send anything for quite some time, thus your inspect-delay
> expires and the rule never matches. Another way to avoid this is to reject
> non-HTTP traffic first, which will cause idle connections to be terminated.
> Eg:
>  tcp-request inspect-delay 10s
>  tcp-request content reject unless HTTP
>  tcp-request content ... your rules here ...

Hi Willy,
thank you, I tried the inspect-delay again, and this alone seems to fix
things, at least for the curl tests, so Chad was indeed right; I might
have forgotten to restart HAProxy when I tested this the first time.

I'm still going to include the "reject unless HTTP" bit, though I'm
wondering if it might have any negative side effects for regular
traffic? Do browsers handle this rejection of their connections
appropriately?

Regards,
  Dennis
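Putting the pieces from this thread together, the final configuration could take roughly this shape; the table name, path and threshold below are illustrative assumptions, not Dennis's actual values:

```
backend st_limits
    stick-table type ip size 100k expire 10m store http_req_rate(10s)

frontend www
    bind :80
    mode http
    tcp-request inspect-delay 10s
    # silently drop idle or non-HTTP connections instead of letting the
    # L7 rules below wait for the inspect-delay to expire on them
    tcp-request content reject unless HTTP
    tcp-request content track-sc0 src table st_limits if { path_beg /limited }
    use_backend abuse-warning if { sc0_http_req_rate(st_limits) gt 20 }
    default_backend webservers
```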





Re: haproxy-systemd-wrapper exit code problem

2016-10-22 Thread Willy Tarreau
Hi Gabriele,

On Tue, Oct 18, 2016 at 09:40:14PM +0200, Gabriele Cerami wrote:
> Hi,
> 
> We're having a problem with version 1.5.14 of haproxy, packaged for
> CentOS 7, but it seems even the code in master is affected.
> 
> In situations where bind is not possible (in our case, the address was
> already in use), tcp_connect_server returns with a status of 256
> (ERR_ALERT). This value is then passed down as the exit code for
> haproxy-systemd-wrapper.

Huh? First, tcp_connect_server() is not involved here; it's used to
connect to a server, so it never provides an exit code. And even if
you meant a bind issue instead, all error codes are limited
to 5 bits, so the highest value you can have is 0x1F = 31 if all flags
are reported together.

> The problem is that the exit value is truncated to the least significant 8
> bits, so even if haproxy fails, systemd gets an exit code of 0 and
> thinks the service start succeeded.

I agree this would be a problem but if you really observe an exit code
of 256, I'm interested in knowing how it is produced because I see no
way to produce this!
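For context, the truncation Gabriele describes is standard POSIX behaviour: the wait() status a parent (here, systemd) sees keeps only the low 8 bits of the child's exit code, so an internal value of 256 (0x100) is observed as 0. A quick shell illustration:

```shell
#!/bin/sh
# A child exiting with 256 is reported to its parent with only the low
# 8 bits of the value, i.e. 256 & 0xFF = 0:
sh -c 'exit 256'
echo "observed status: $?"   # prints: observed status: 0
```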

Thanks,
Willy



Re: http-reuse always, works quite well

2016-10-22 Thread Willy Tarreau
Hi Pavlos,

On Fri, Oct 21, 2016 at 03:01:52PM +0200, Pavlos Parissis wrote:
> > I'm not surprised that always works better, but my point is that if it's
> > much better it can be useful to stay with it, but if it's only 1% better
> > it's not worth it.
> > 
> 
> It is way better :-), see Marcin's response.

Ah sorry, I missed it. Indeed it looks much better, but we don't have
the reference (no reuse) on this graph. If no reuse shows 10 times
higher average times, then "safe" reuse brings a 10x improvement and
"always" brings 20x, so it's a matter of choice. However, if "safe"
performs about the same as no reuse, then "always" is almost certainly
needed.

> >>> while "always" is optimal, strictly speaking it's
> >>> not very clean if the clients are not always willing to retry a failed
> >>> first request, and browsers typically fall into that category. A real
> >>> world case can be a request dequeued to a connection that just closes.
> >>
> >> What is the response of HAProxy to clients in this case? HTTP 50N?
> > 
> > No, the client-side connection will simply be aborted so that the client
> > can decide whether to retry or not.
> 
> Connection will be aborted by haproxy sending TCP RST?

As much as possible yes. The principle is to let the client retry the
request (since it is the only one knowing whether it's safe or not).

> > I'd suggest a rule of thumb (maybe this should be added to the doc): watch
> > your logs over a long period. If you don't see queue timeouts, nor request
> > timeouts, it's probably safe enough to use "always".
> 
> Which field in the log do we need to watch? Tq?

Tw (time spent waiting in the queue), Tc (time spent getting a connection),
and of course the termination flags: everything with a C or Q as the second
character needs to be analysed.
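To make the fields concrete, here is a sketch of picking the termination state out of a log line; the sample line and the field position are illustrative assumptions based on the default HTTP log format, not output from a real deployment:

```shell
#!/bin/sh
# Illustrative HAProxy HTTP log fragment (syslog prefix omitted):
# timers appear as Tq/Tw/Tc/Tr/Tt; the 4-char termination state follows
# the two captured-cookie fields ("-" "-" here).
line='10.0.0.1:4181 [22/Oct/2016:14:02:03.512] www squid_farm/squid1 0/0/5/12/17 200 2750 - - sQ-- 1/1/1/1/0 0/0 "GET / HTTP/1.1"'

# With the default format, the termination state is the 10th field
state=$(echo "$line" | awk '{print $10}')
echo "termination state: $state"

# Flag lines whose second character is C (client abort) or Q (queue)
case "$state" in
  ?[CQ]*) echo "second char is C or Q: worth analysing" ;;
esac
```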

> > Each time you notice
> > one of them, there is a small risk of impacting another client. It's not
> > rocket science but the risks depend on the same parameters.
> 
> 
> Thanks a lot for yet another information-rich reply.

You're welcome. Please note that the reuse mechanism is not perfect and
can still be improved. So do not hesitate to report any issue you find,
we definitely need real-world feedback like this. I cannot promise that
every issue will be fixed, but at least we need to consider them and see
what can be done.

Cheers,
Willy