Re: Haproxy running on 100% CPU and slow downloads

2016-04-25 Thread Sachin Shetty
Hi Lukas,

We tried the patch; it seems better. After we switched nbproc off,
throughput did not drop immediately the way it did with the earlier version;
it deteriorated slowly as traffic increased toward peak hours, but
eventually it fell to the same levels as before.

CPU usage was also better: only at peak hours did I see 100% CPU consumed by
haproxy; otherwise it stayed between 60-80%.

Please see the attached image measuring throughput: nbproc=20 until ~10PM,
nbproc=1 from ~10PM to ~10AM, nbproc reverted to 20 from 10AM onwards.
The Y-axis is speed in MBPS.
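For reference, the two configurations being compared differ only in the
nbproc setting of the global section; a minimal sketch (not the actual
production config):

    global
        nbproc 20   # multi-process run (until ~10PM, and again from 10AM)
        # nbproc 1  # single-process run tested overnight (~10PM to ~10AM)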

Thanks
Sachin

On 4/21/16, 12:57 PM, "Lukas Tribus"  wrote:

>Hi,
>
>
>On 21.04.2016 at 08:11, Sachin Shetty wrote:
>> Hi,
>>
>> any hints to further isolate this - we have deferred the problem by
>> adding all the cores we had, but I have a feeling that our request rate is
>> not that high (7K per minute at peak) and it will show up again as traffic
>> increases.
>>
>> Thanks
>> Sachin
>>
>
>Try the fix 9c09ee87 [1], which is in the snapshots [2] since 1.6.4-20160412.
>
>
>cheers,
>
>lukas
>
>[1] http://www.haproxy.org/git?p=haproxy-1.6.git;a=commitdiff_plain;h=9c09ee87836bb2efd78a17f9b16d8afe0ec64018;hp=3bee40bfb7a35b624c5cc9d88daff5a9e3b99f33
>[2] http://www.haproxy.org/download/1.6/src/snapshot/



Re: haproxy 1.6.4 segfault in logging (I think)

2016-04-25 Thread David Torgerson

> OK fixed upstream by commit 57bc891 ("BUG/MEDIUM: log: fix risk of segfault
> when logging HTTP fields in TCP mode").

Wow, you guys are fast! I just tested and can confirm that this fixes the 
segfault issue with logging. Thanks again, keep up the great work!




unsubscribe

2016-04-25 Thread Abdul Hakeem






[PATCH] BUG/MINOR: log: fix a typo that would cause %HP to log <BADREQ>

2016-04-25 Thread Nenad Merdanovic
A typo was introduced in 57bc891 ("BUG/MEDIUM: log: fix risk of
segfault when logging HTTP fields in TCP mode") which inverted the
condition in the test and caused <BADREQ> to be logged when using
%HP.

Signed-off-by: Nenad Merdanovic 
---
 src/log.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/log.c b/src/log.c
index dffef54..2d02247 100644
--- a/src/log.c
+++ b/src/log.c
@@ -1917,7 +1917,7 @@ int build_logline(struct stream *s, char *dst, size_t maxsize, struct list *list
 				while (spc < end && *spc != '?' && !HTTP_IS_SPHT(*spc))
 					spc++;
 
-				if (!txn || txn->uri || nspaces == 0) {
+				if (!txn || !txn->uri || nspaces == 0) {
 					chunk.str = "<BADREQ>";
 					chunk.len = strlen("<BADREQ>");
 				} else {
-- 
2.7.0




Re: "show servers state" shows nothing?

2016-04-25 Thread James Brown
Here's the top of the file. None of the backends override the
load-server-state-from-file setting that's made in `defaults`. There
are 106 backends defined.


global
log ${LOG_DGRAM_SYSLOG} local0
log /var/run/epservices/syslog_bridge.sock local0
daemon
maxconn 4096
stats socket /var/run/epservices/lbng/haproxy.sock level admin
tune.ssl.default-dh-param 1024
server-state-file /srv/var/lbng/state

defaults
mode http
option httplog
load-server-state-from-file global
log global
retries 3
timeout client 9s
timeout connect 3s
timeout server 90s

On Sun, Apr 24, 2016 at 9:07 AM, Baptiste  wrote:
> On Thu, Apr 21, 2016 at 2:54 AM, James Brown  wrote:
>> I'm trying to set up state-file saving on 1.6.4, but "show servers state"
>> doesn't return anything. It works fine if I specify an individual backend
>> (e.g., "show servers state foo_be"), but not if I run it "bare" (which the
>> manual suggests should print out states for all backends).
>>
>> Any thoughts?
>>
>> --
>> James Brown
>> Engineer
>
>
> Hi,
>
> Could you share the relevant part of the configuration?
>
> Baptiste



-- 
James Brown
Engineer



Re: strange integer comparison 1.6.4

2016-04-25 Thread David Birdsong
On Fri, Apr 22, 2016 at 11:56 PM Cyril Bonté  wrote:

> Hi David,
>
> On 23/04/2016 04:27, David Birdsong wrote:
> > predefined ACLs don't work w/ integer comparison either, but calculating
> > the value and doing the integer comparison directly in an ACL works fine.
> >
> > On Fri, Apr 22, 2016 at 6:31 PM David Birdsong wrote:
> >
> > I'm computing an integer like so:
> > http-request set-var(txn.req_modulo) base,crc32(),mod(1000)
> >
> > when I use the variable in an anonymous ACL to determine a backend:
> >
> > use_backend sjc1_spillway_pool if { var(txn.req_modulo) lt 80 }
> >
> > only requests that calculate to exactly 80 get sent, instead of those less than 80.
>
> You need to specify the "int" pattern matching method, otherwise here
> you are explicitly comparing strings "lt" and "80".
>
> Can you retry with :
> use_backend sjc1_spillway_pool if { var(txn.req_modulo) -m int lt 80 }
>
>
Retried, this works. Thanks.

I wasn't sure if set-var carried over any typing.
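For reference, the working combination from this thread, put together
(directives taken from the messages above):

    http-request set-var(txn.req_modulo) base,crc32(),mod(1000)
    use_backend sjc1_spillway_pool if { var(txn.req_modulo) -m int lt 80 }

Without "-m int" the anonymous ACL falls back to string matching, so "lt"
and "80" are treated as literal patterns rather than as an integer
comparison.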


>
>
> --
> Cyril Bonté
>


Re: Support for Keep-Alive header and timeouts

2016-04-25 Thread Lukas Tribus


On 25.04.2016 at 18:10, Craig McLure wrote:

>>> From a firewall perspective all sockets are configured to forcefully
>>> stop after about 20 minutes after which time a connection will go
>>> 'stale' and no longer function, any additional packets on that socket
>>> will be ignored.
>>
>> And why would you configure the firewall to do this? I don't see how
>> this makes sense.
>
> Resource limitations, physical restrictions, upstream limitations,
> security requirements, could be anything, it's not really relevant to
> the discussion, there could be many reasons why someone needed a
> specific cut-off after a certain amount of time.

Explaining your use-case would be useful so we could suggest
alternatives, or, by understanding the actual use-case, consider an
implementation.

Clearly you are not interested in that.


> This is true if you make assumptions about what's happening on the
> backend, 10 minutes was (as noted) an example, could be 3 hours, could
> be 200 years, the relevance here was simply existence of the
> functionality. As far as connections dropped during in-flight request
> / response cycle, they should follow the HTTP spec on how to behave in
> that scenario, and obviously the 'force close' would occur prior to
> the firewall dropping the connection.

It's not an assumption, because the value of the timeout is, as you are
saying, irrelevant.

You are saying that force-closing the session in the middle of a response,
leading to a truncated response and forcing the client to do the request
all over again, is something you would consider for your production
environment?


> With that in mind, it's not overly uncommon behaviour. nginx for
> example has keepalive_timeout to facilitate the behaviour I'm looking
> for here

I think your understanding of what nginx does is flawed.

First of all nginx won't drop an active session (while a response is in
flight), so unless all of those responses are very short-lived and all
clients are fast, the transaction may endure long enough to hit your
firewall thresholds.

The other thing is that just because the server transmits a FIN doesn't
mean the client can't send another request.


What I'm saying is, you cannot guarantee on the HTTP level that a session
will be closed after a certain amount of time.

You can give the browser and the server some hints, sure. But a hint is
all it is (what nginx does).




Lukas




Re: Support for Keep-Alive header and timeouts

2016-04-25 Thread Craig McLure
Hi,

On Mon, Apr 25, 2016 at 3:39 PM, Lukas Tribus  wrote:
> Hi,
>
>
> On 25.04.2016 at 15:51, Craig McLure wrote:
>>
>> >From a firewall perspective all sockets are configured to forcefully
>> stop after about 20 minutes after which time a connection will go
>> 'stale' and no longer function, any additional packets on that socket
>> will be ignored.
>
>
> And why would you configure the firewall to do this? I don't see how this
> makes
> sense.

Resource limitations, physical restrictions, upstream limitations,
security requirements, could be anything, it's not really relevant to
the discussion, there could be many reasons why someone needed a
specific cut-off after a certain amount of time.

>
>
>>   This is fine for our purposes, but when keep-alive
>> comes into play this raises some problems. Theoretically using all the
>> timeouts available in haproxy it's tentatively possible to maintain a
>> connection for *LONGER* than that period, at which point the
>> connection gets silently dropped, and in haproxy the connection fails
>> in a non-graceful way.
>
>
> Even if haproxy would *try* to close the session after time X, there is not
> guarantee
> that current in flight request/response would be finished in time to not get
> dropped
> at firewall level. What about slow downloads? They could go on for hours ...

This is true if you make assumptions about what's happening on the
backend, 10 minutes was (as noted) an example, could be 3 hours, could
be 200 years, the relevance here was simply existence of the
functionality. As far as connections dropped during in-flight request
/ response cycle, they should follow the HTTP spec on how to behave in
that scenario, and obviously the 'force close' would occur prior to
the firewall dropping the connection.

>
>
>> Ideally, obviously, I'd like for haproxy to have a way to close the
>> connection as gracefully as possible after X minutes, rather than the
>> current scenario where it may get killed ungracefully.
>
>
> This is not supported.

This is the answer I needed.

> You can simulate this behavior by soft reloading haproxy
> every X minutes or by shutting down those "offensive" session via the admin
> socket:
>
> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-shutdown%20session
>
>
> However I would strongly suggest you go back to the drawing board and work
> out why you need this behavior in the first place.
>

With that in mind, it's not overly uncommon behaviour. nginx for
example has keepalive_timeout to facilitate the behaviour I'm looking
for here, I simply needed to know if I had missed something in the
manual with regards to haproxys support of this functionality,
obviously I hadn't, and as you say it's not supported.

>
> If you are concerned about the number of open connection on the proxy, just
> lower
> timeout http-keep-alive to something like 30 - 300 ms. That is way more
> effective.
>

Using a low timeout is the general solution I'm using. Again, the
email was sent to see if the drop could be forced, because even with
strict timeouts for a request, a connection could stay open for a long
time depending on how it's interacting with the backend.

>
> cheers,
>
> Lukas
>

Thanks,
Craig



Re: [PATCH 1/2] MINOR: Add ability for agent-check to set server maxconn

2016-04-25 Thread Willy Tarreau
On Sun, Apr 24, 2016 at 11:10:06PM +0200, Nenad Merdanovic wrote:
> This is very useful in complex architecture systems where HAProxy
> is balancing DB connections, for example. We want to keep the maxconn
> high in order to avoid issues with queueing on the LB level when
> there is slowness on another part of the system. An example is an
> architecture where each thread opens multiple DB connections which, if
> they get stuck in the queue, cause a snowball effect (old connections
> aren't closed, new ones cannot be established). These connections are
> mostly idle and the DB server has no problem handling thousands of them.
(...)

both patches applied, thank you Nenad!

willy
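For readers following the patch series, a minimal sketch of the kind of
setup it targets (names, addresses and values are purely illustrative; with
this change the agent's reply can also adjust the server's maxconn, for
example by returning something like "maxconn:500"):

    backend mysql
        mode tcp
        server db1 192.168.0.10:3306 check agent-check agent-port 9999 maxconn 1000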




Re: haproxy 1.6.4 segfault in logging (I think)

2016-04-25 Thread Willy Tarreau
On Sat, Apr 23, 2016 at 11:44:13PM +0200, Willy Tarreau wrote:
> I'm seeing 15 calls there that need to be adjusted. Status, cookies and
> URI are the main victims. A lot of "txn->uri" can simply be changed to
> "(txn && txn->uri)". I may try to do it tomorrow if nobody beats me by
> doing it before.

OK fixed upstream by commit 57bc891 ("BUG/MEDIUM: log: fix risk of segfault
when logging HTTP fields in TCP mode").

I'll backport it to 1.6.

Willy



Re: [PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Willy Tarreau
On Mon, Apr 25, 2016 at 05:10:39PM +0200, Ondrej Stumpf wrote:
> I see your point. The thing is, in my setup there is only one frontend. I
> ran into this when writing an upgrade script:
> 1) disable frontend
> 2) perform update of the node with haproxy - this may result in restart of
> haproxy
> 3) wait for other nodes to be updated
> 4) enable frontend
> 
> Problem is, in step 2) the haproxy starts with enabled frontend.

I'm seeing some confusion around the use of the term "node" above. It makes
me think it's a load balancer, but since you seem to imply a dependency
between nodes, I'm not totally sure. Maybe you mean it's a server instead?
Overall I don't understand the sequence here :-/

> Since
> other nodes in the platform may not have been properly updated yet, this is
> highly undesirable.

I don't understand what operation involves a restart either here.

> What I therefore need is to start the haproxy with disabled frontend and
> then enable it manually.

If you have caused an explicit restart with "disabled" added in the
configuration, then I don't see the issue you have with issuing a second
restart. Also keep in mind that right now your patch doesn't prevent the
frontend from starting at all: it is 100% started, and only *reported* as
stopped on the stats page due to the state inconsistency, but as you can
verify, it perfectly handles the traffic you send to it.

Regards,
Willy




Re: [PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Ondrej Stumpf
I see your point. The thing is, in my setup there is only one frontend. I
ran into this when writing an upgrade script:
1) disable frontend
2) perform update of the node with haproxy - this may result in restart of
haproxy
3) wait for other nodes to be updated
4) enable frontend

The problem is that in step 2) haproxy starts with the frontend enabled.
Since other nodes in the platform may not have been properly updated yet,
this is highly undesirable.
What I therefore need is to start haproxy with the frontend disabled and
then enable it manually.

Thanks for the comments, regards.

Ondrej

On Mon, Apr 25, 2016 at 4:57 PM, Willy Tarreau  wrote:

> Hello,
>
> On Mon, Apr 25, 2016 at 01:47:23PM +0200, Ondrej Stumpf wrote:
> > That is not the issue. I'm talking about the "disabled" keyword in
> HAProxy
> > configuration file. That can be used in the "frontend" section (among
> > others) to start the frontend without actually binding to a port. To
> quote
> > the docs:
> > "The instance will still be created and its configuration will be
> checked,
> > but it will be created in the "stopped" state and will appear as such in
> > the statistics. It will not receive any traffic nor will it send any
> > health-checks or logs."
> > However, if you use that keyword, not only the frontend does NOT appear
> in
> > stats, but it also cannot be enabled via stats. The fix that I propose
> > fixes it - the frontend is after startup visible in stats (in STOP state)
> > and can be enabled via "enable frontend xyz".
>
> Actually your patch introduces a bug where there is not. It results in
> the frontend being always started and always taking traffic, so people
> who use "disabled" to switch between one frontend and another will get
> the two active at once.
>
> People who use the "disabled" keyword generally want to have two config
> sections for the same feature, but used under various conditions, for
> example a first section for use under normal conditions and a second one
> with aggressive filtering for use when under attack.
>
> As you can see in your case, the following setup results in the same port
> being bound twice, as reported by netstat :
>
>frontend normal
>bind :80
>...
>
>frontend filtered
>disabled
>bind :80
>...
>
> What is currently being done exactly matches what is indicated in the doc.
>
> As indicated by Pavlos, there's no way to re-enable a frontend that has
> been either shutdown or disabled. The reason is double :
>   - you can't unbind then rebind otherwise all privileged ports will be
> definitely lost once the privileges are dropped upon startup.
>
>   - you don't want to remain bound because as long as you're bound you're
> taking incoming connections that are not properly delivered to the
> other listener.
>
> Regards,
> willy
>
>


Re: [PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Willy Tarreau
Hello,

On Mon, Apr 25, 2016 at 01:47:23PM +0200, Ondrej Stumpf wrote:
> That is not the issue. I'm talking about the "disabled" keyword in HAProxy
> configuration file. That can be used in the "frontend" section (among
> others) to start the frontend without actually binding to a port. To quote
> the docs:
> "The instance will still be created and its configuration will be checked,
> but it will be created in the "stopped" state and will appear as such in
> the statistics. It will not receive any traffic nor will it send any
> health-checks or logs."
> However, if you use that keyword, not only the frontend does NOT appear in
> stats, but it also cannot be enabled via stats. The fix that I propose
> fixes it - the frontend is after startup visible in stats (in STOP state)
> and can be enabled via "enable frontend xyz".

Actually your patch introduces a bug where there was none. It results in
the frontend always being started and always taking traffic, so people
who use "disabled" to switch between one frontend and another will get
the two active at once.

People who use the "disabled" keyword generally want to have two config
sections for the same feature, but used under various conditions, for
example a first section for use under normal conditions and a second one
with aggressive filtering for use when under attack.

As you can see in your case, the following setup results in the same port
being bound twice, as reported by netstat:

   frontend normal
   bind :80
   ...

   frontend filtered
   disabled
   bind :80
   ...

What is currently being done exactly matches what is indicated in the doc.

As indicated by Pavlos, there's no way to re-enable a frontend that has
been either shut down or disabled. The reason is twofold:
  - you can't unbind then rebind, otherwise all privileged ports would be
definitely lost once the privileges are dropped upon startup.

  - you don't want to remain bound, because as long as you're bound you're
taking incoming connections that are not properly delivered to the
other listener.

Regards,
willy




Re: Support for Keep-Alive header and timeouts

2016-04-25 Thread Lukas Tribus

Hi,


On 25.04.2016 at 15:51, Craig McLure wrote:

> From a firewall perspective all sockets are configured to forcefully
> stop after about 20 minutes after which time a connection will go
> 'stale' and no longer function, any additional packets on that socket
> will be ignored.

And why would you configure the firewall to do this? I don't see how this
makes sense.


> This is fine for our purposes, but when keep-alive
> comes into play this raises some problems. Theoretically using all the
> timeouts available in haproxy it's tentatively possible to maintain a
> connection for *LONGER* than that period, at which point the
> connection gets silently dropped, and in haproxy the connection fails
> in a non-graceful way.

Even if haproxy would *try* to close the session after time X, there is no
guarantee that the current in-flight request/response would be finished in
time to not get dropped at firewall level. What about slow downloads? They
could go on for hours ...


> Ideally, obviously, I'd like for haproxy to have a way to close the
> connection as gracefully as possible after X minutes, rather than the
> current scenario where it may get killed ungracefully.

This is not supported. You can simulate this behavior by soft reloading
haproxy every X minutes or by shutting down those "offensive" sessions via
the admin socket:

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-shutdown%20session


However I would strongly suggest you go back to the drawing board and work
out why you need this behavior in the first place.


If you are concerned about the number of open connections on the proxy, just
lower timeout http-keep-alive to something like 30 - 300 ms. That is way more
effective.
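As a concrete illustration of that last suggestion, the setting would go
into a defaults (or frontend) section; the value below is only an example
within the 30 - 300 ms range mentioned:

    defaults
        mode http
        option http-keep-alive
        timeout http-keep-alive 200ms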




cheers,

Lukas




Re: [PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Ondrej Stumpf
I'm actually using 1.5 all the time. If you check the patch I propose,
you'll see that most probably it is not the same code path.

On Mon, Apr 25, 2016 at 4:06 PM, Pavlos Parissis wrote:

>
>
> > On 25/04/2016 01:47 PM, Ondrej Stumpf wrote:
> > That is not the issue. I'm talking about the "disabled" keyword in
> > HAProxy configuration file. That can be used in the "frontend" section
> > (among others) to start the frontend without actually binding to a port.
> > To quote the docs:
> > "The instance will still be created and its configuration will be
> > checked, but it will be created in the "stopped" state and will appear
> > as such in the statistics. It will not receive any traffic nor will it
> > send any health-checks or logs."
> > However, if you use that keyword, not only the frontend does NOT appear
> > in stats, but it also cannot be enabled via stats. The fix that I
> > propose fixes it - the frontend is after startup visible in stats (in
> > STOP state) and can be enabled via "enable frontend xyz".
> >
>
> But, it could be that the same code path is used. Have you checked the
> behavior with 1.5?
>
>
> Cheers,
> Pavlos
>
>


Re: [PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Pavlos Parissis


On 25/04/2016 01:47 PM, Ondrej Stumpf wrote:
> That is not the issue. I'm talking about the "disabled" keyword in
> HAProxy configuration file. That can be used in the "frontend" section
> (among others) to start the frontend without actually binding to a port.
> To quote the docs:
> "The instance will still be created and its configuration will be
> checked, but it will be created in the "stopped" state and will appear
> as such in the statistics. It will not receive any traffic nor will it
> send any health-checks or logs."
> However, if you use that keyword, not only the frontend does NOT appear
> in stats, but it also cannot be enabled via stats. The fix that I
> propose fixes it - the frontend is after startup visible in stats (in
> STOP state) and can be enabled via "enable frontend xyz".
> 

But, it could be that the same code path is used. Have you checked the
behavior with 1.5?


Cheers,
Pavlos





Re: Support for Keep-Alive header and timeouts

2016-04-25 Thread Aleksandar Lazic

Hi Craig.

On 25-04-2016 15:51, Craig McLure wrote:

Hi Aleks,

Sorry, I was a bit unclear about the initial request, it was more
about the timeout on keep alive connections than the actual support of
the Keep-Alive header!



Maybe this can help?!

http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4-option%20clitcpka

But I'm not sure if haproxy is able to handle the scenario below?!

best regards
Aleks


I did review the manual for it, the http-keep-alive option isn't the
option I'm looking for, as "It will define how long to wait for a new
HTTP request to start coming after a response was sent.". This makes
it more of an 'idle timeout'.

I'd like the ability to, after exactly (for example) 10 minutes,
forcibly close the client socket regardless of it's current state or
what it's doing. All the timeouts I've found in haproxy seem to
related more to 'idle timeouts':

* timeout http-keep-alive - Amount of time until a socket is closed
because it's been idle and hasn't sent a request
* timeout client - Amount of time until a socket is closed because
it's expected to send data but has been idle for this period.
* timeout http-request - Amount of time until a socket is closed
because it hasn't sent a complete HTTP request in this time.


None of these really provide the type of behaviour I'm expecting, for
a small amount of context:

From a firewall perspective all sockets are configured to forcefully
stop after about 20 minutes after which time a connection will go
'stale' and no longer function, any additional packets on that socket
will be ignored. This is fine for our purposes, but when keep-alive
comes into play this raises some problems. Theoretically using all the
timeouts available in haproxy it's tentatively possible to maintain a
connection for *LONGER* than that period, at which point the
connection gets silently dropped, and in haproxy the connection fails
in a non-graceful way.

Ideally, obviously, I'd like for haproxy to have a way to close the
connection as gracefully as possible after X minutes, rather than the
current scenario where it may get killed ungracefully.

Running v1.6.4

Cheers.



On Mon, Apr 25, 2016 at 2:20 PM, Aleksandar Lazic wrote:

Hi.

On 25-04-2016 14:01, Craig McLure wrote:


Hi,

Does HAProxy support the Keep-Alive header, and a 'max connection
duration' for Keep-Alive connections?

I've poured through the manual, but can't see anything obvious, but it
would be useful for better control over Keep-Alive connections.



please can you show us haproxy -vv

and maybe this could help

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-timeout%20http-keep-alive
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4-timeout%20http-keep-alive

found it with search in the page.

There are more keep alive settings on this page ;-)

Best regards
Aleks




Re: Support for Keep-Alive header and timeouts

2016-04-25 Thread Craig McLure
Hi Aleks,

Sorry, I was a bit unclear about the initial request, it was more
about the timeout on keep alive connections than the actual support of
the Keep-Alive header!

I did review the manual for it; the http-keep-alive option isn't the
option I'm looking for, as "It will define how long to wait for a new
HTTP request to start coming after a response was sent.". This makes
it more of an 'idle timeout'.

I'd like the ability to, after exactly (for example) 10 minutes,
forcibly close the client socket regardless of its current state or
what it's doing. All the timeouts I've found in haproxy seem to
relate more to 'idle timeouts':

* timeout http-keep-alive - Amount of time until a socket is closed
because it's been idle and hasn't sent a request
* timeout client - Amount of time until a socket is closed because
it's expected to send data but has been idle for this period.
* timeout http-request - Amount of time until a socket is closed
because it hasn't sent a complete HTTP request in this time.
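(For concreteness, these timeouts would be set along the following lines;
the values are arbitrary examples.)

    defaults
        timeout http-keep-alive 10s
        timeout client 30s
        timeout http-request 10s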


None of these really provide the type of behaviour I'm expecting, for
a small amount of context:

From a firewall perspective all sockets are configured to forcefully
stop after about 20 minutes after which time a connection will go
'stale' and no longer function, any additional packets on that socket
will be ignored. This is fine for our purposes, but when keep-alive
comes into play this raises some problems. Theoretically using all the
timeouts available in haproxy it's tentatively possible to maintain a
connection for *LONGER* than that period, at which point the
connection gets silently dropped, and in haproxy the connection fails
in a non-graceful way.

Ideally, obviously, I'd like for haproxy to have a way to close the
connection as gracefully as possible after X minutes, rather than the
current scenario where it may get killed ungracefully.

Running v1.6.4

Cheers.



On Mon, Apr 25, 2016 at 2:20 PM, Aleksandar Lazic  wrote:
> Hi.
>
> On 25-04-2016 14:01, Craig McLure wrote:
>>
>> Hi,
>>
>> Does HAProxy support the Keep-Alive header, and a 'max connection
>> duration' for Keep-Alive connections?
>>
>> I've poured through the manual, but can't see anything obvious, but it
>> would be useful for better control over Keep-Alive connections.
>
>
> please can you show us haproxy -vv
>
> and maybe this could help
>
> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-timeout%20http-keep-alive
> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4-timeout%20http-keep-alive
>
> found it with search in the page.
>
> There are more keep alive settings on this page ;-)
>
> Best regards
> Aleks



Re: Support for Keep-Alive header and timeouts

2016-04-25 Thread Aleksandar Lazic

Hi.

On 25-04-2016 14:01, Craig McLure wrote:

Hi,

Does HAProxy support the Keep-Alive header, and a 'max connection
duration' for Keep-Alive connections?

I've poured through the manual, but can't see anything obvious, but it
would be useful for better control over Keep-Alive connections.


please can you show us haproxy -vv

and maybe this could help

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-timeout%20http-keep-alive
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4-timeout%20http-keep-alive

found it with search in the page.

There are more keep alive settings on this page ;-)

Best regards
Aleks



Support for Keep-Alive header and timeouts

2016-04-25 Thread Craig McLure
Hi,

Does HAProxy support the Keep-Alive header, and a 'max connection duration'
for Keep-Alive connections?

I've pored through the manual but can't see anything obvious; it would be
useful to have better control over Keep-Alive connections.

Thanks.


Re: [PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Pavlos Parissis


On 25/04/2016 01:38 PM, Pavlos Parissis wrote:
> 
> 
> On 25/04/2016 12:05 PM, Ondrej Stumpf wrote:
>> Hi,
>> I ran into a bug when using the 'disabled' keyword for frontends -
> 
> If I remember correctly you can enable a frontend after it has disabled,
> but if you send to stats socket:
> 
>   'shutdown frontend '
> 
> then it can't be enabled.
> 
> Cheers,
> Pavlos
> 

On 1.5 you can disable and then re-enable a frontend, but on 1.6 you cannot.
So, disable on 1.6 works the same way as shutdown; see below.

# get status
(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
haproxytool frontend -D /run/haproxy -s frontend1_proc34
frontend1_proc34 OPEN

# disable
(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
haproxytool frontend -D /run/haproxy -d frontend1_proc34
frontend1_proc34 disabled

# get status
(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
haproxytool frontend -D /run/haproxy -s frontend1_proc34
frontend1_proc34 STOP

# enable, fails
(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
haproxytool frontend -D /run/haproxy -e frontend1_proc34
frontend1_proc34 failed to be enabled:Failed to resume frontend, check
logs for precise cause (port conflict?).


(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
haproxytool frontend -D /run/haproxy -t frontend_proc1
frontend_proc1 shutdown


(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
haproxytool frontend -D /run/haproxy -s frontend_proc1
frontend_proc1 was not found


(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
haproxytool frontend -D /run/haproxy -e frontend_proc1
frontend_proc1 was not found


(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
sudo pkill -f './haproxy -f /etc/haproxy/haproxy.cfg'


# switch to 1.5
(python3) foo at poseidonas in ~/repo/haproxy-1.6 on (master u=)
cd ..

(python3) foo at poseidonas in ~/repo
cd haproxy-1.5

(python3) foo at poseidonas in ~/repo/haproxy-1.5 on (master u=)
sudo ./haproxy -f /etc/haproxy/haproxy.cfg


(python3) foo at poseidonas in ~/repo/haproxy-1.5 on (master u=)
haproxytool frontend -D /run/haproxy -s frontend_proc1
frontend_proc1 OPEN


# disable
(python3) foo at poseidonas in ~/repo/haproxy-1.5 on (master u=)
haproxytool frontend -D /run/haproxy -d frontend_proc1
frontend_proc1 disabled

# enable
(python3) foo at poseidonas in ~/repo/haproxy-1.5 on (master u=)
haproxytool frontend -D /run/haproxy -e frontend_proc1
frontend_proc1 enabled

# get status
(python3) foo at poseidonas in ~/repo/haproxy-1.5 on (master u=)
haproxytool frontend -D /run/haproxy -s frontend_proc1
frontend_proc1 OPEN


Cheers,
Pavlos






Re: [PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Ondrej Stumpf
That is not the issue. I'm talking about the "disabled" keyword in HAProxy
configuration file. That can be used in the "frontend" section (among
others) to start the frontend without actually binding to a port. To quote
the docs:
"The instance will still be created and its configuration will be checked,
but it will be created in the "stopped" state and will appear as such in
the statistics. It will not receive any traffic nor will it send any
health-checks or logs."
However, if you use that keyword, not only does the frontend NOT appear in
stats, but it also cannot be enabled via stats. The fix that I propose
addresses this: the frontend is visible in stats after startup (in STOP
state) and can be enabled via "enable frontend xyz".
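For context, the keyword in question simply appears on its own line in the
frontend section, e.g. (name and port are illustrative):

    frontend www
        bind :8080
        disabled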

Regards,

Ondrej

On Mon, Apr 25, 2016 at 1:38 PM, Pavlos Parissis wrote:

>
>
> > On 25/04/2016 12:05 PM, Ondrej Stumpf wrote:
> > Hi,
> > I ran into a bug when using the 'disabled' keyword for frontends -
>
> If I remember correctly you can enable a frontend after it has disabled,
> but if you send to stats socket:
>
> 'shutdown frontend '
>
> then it can't be enabled.
>
> Cheers,
> Pavlos
>
>


Re: [PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Pavlos Parissis


On 25/04/2016 12:05 PM, Ondrej Stumpf wrote:
> Hi,
> I ran into a bug when using the 'disabled' keyword for frontends -

If I remember correctly you can enable a frontend after it has disabled,
but if you send to stats socket:

'shutdown frontend '

then it can't be enabled.

Cheers,
Pavlos





[PATCH] BUG/MINOR: frontend: fix frontend start status

2016-04-25 Thread Ondrej Stumpf
Hi,
I ran into a bug when using the 'disabled' keyword for frontends -
currently it is not possible to enable such a frontend later.
The corresponding patch file follows.

Regards,

Ondrej Stumpf
---

From 34fa61371d5c80e5cc2d92142e15baad0eb28d80 Mon Sep 17 00:00:00 2001
From: Ondrej Stumpf 
Date: Mon, 25 Apr 2016 11:54:36 +0200
Subject: [PATCH] BUG/MINOR: frontend: fix frontend start status
When the 'disabled' keyword is used in the config file for a frontend, the
frontend starts with PAUSED status rather than STOPPED, so that it can be
enabled later.
---
 src/cfgparse.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 2400559..0c069f6 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -3267,7 +3267,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
 	else if (!strcmp(args[0], "disabled")) {  /* disables this proxy */
 		if (alertif_too_many_args(0, file, linenum, args, &err_code))
 			goto out;
-		curproxy->state = PR_STSTOPPED;
+		curproxy->state = PR_STPAUSED;
 	}
 	else if (!strcmp(args[0], "enabled")) {  /* enables this proxy (used to revert a disabled default) */
 		if (alertif_too_many_args(0, file, linenum, args, &err_code))
-- 
1.9.1


using lua for rewriting tcp stream

2016-04-25 Thread Alexander Zubkov
Hi all!

I'm already using haproxy to proxy some TCP connections and also have a
need to rewrite data in the TCP stream, namely to convert the graphite
protocol to the influx line protocol, but that does not matter here.
There is similar functionality for HTTP, which allows replacing or rewriting
its headers, but I have not seen anything like it for TCP. I am wondering
whether it is possible to do this in the context of haproxy, without adding
additional components. I need to read incoming newline-delimited strings and
forward modified versions of them. I can write Lua code, but I am not sure
how it should be embedded into haproxy's architecture. The closest I have
found is this example:
http://article.gmane.org/gmane.comp.web.haproxy/25243/
But that did not help me much.
Is it possible to solve my task with Lua in haproxy? Can someone help me
with code which wraps the main functionality (rewriting)?