Re: HAProxy1.7.9-http-reuse

2017-10-26 Thread Willy Tarreau
Hi Bryan,

On Thu, Oct 26, 2017 at 03:51:33PM -0700, Bryan Talbot wrote:
> > 4. Local haproxy log
> > 172.31.x.x:53202 [26/Oct/2017:21:02:36.368] http_front http_back/web1 
> > 0/0/204/205/410 200 89 - -  0/0/0/0/0 0/0 {} "GET / HTTP/1.0"
> 
> 
> This log line says that it took your local proxy 204 ms to connect to the
> remote proxy and that the first response bytes from the remote proxy were
> received by the local proxy 205 ms later for a total round trip time of 410
> ms (after rounding).
> 
> The only way to get the total time to be equal to the network latency
> would be to make the remote respond in 0 ms (or less!). If the two proxies
> are actually 200 ms apart, I don't see how you could do much better.

Not exactly, in fact what Karthikeyan is observing totally makes sense.
The first round trip is used for the SYN->SYN/ACK, the second one for the
request->response. If the ping is 204 ms and the total time is 410 ms, the
server roughly takes 2 ms to respond (410 - 2*204 = 2).
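To spell out the arithmetic above (a small illustrative sketch; the numbers are the ones reported in this thread):

```python
# Decompose the observed total time, per Willy's explanation:
# one RTT for the TCP handshake (SYN -> SYN/ACK), one RTT for
# request -> response, plus the server's own processing time.
rtt_ms = 204      # ping between the two proxies (reported)
total_ms = 410    # total from the haproxy log: 0/0/204/205/410

server_time_ms = total_ms - 2 * rtt_ms
print(server_time_ms)  # -> 2
```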

Regarding http-reuse, it indeed only reuses idle connections, and there
are certain conditions for this. The first one obviously is that the
response must be made in keep-alive so that the connection is kept open.
If "ab" is used to inject, it needs "-k" to enable keep-alive, otherwise
a close is requested by default and the connections are closed. I'm
suddenly wondering if using "option http-pretend-keepalive" could help
cheating here; I honestly don't know.
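As an illustration only (an untested sketch, not a recommendation — whether "option http-pretend-keepalive" actually helps is exactly what Willy says he doesn't know), the backend side of such an experiment might look like:

```
# inject with keep-alive enabled, otherwise connections are closed per request:
#   ab -k -n 1000 -c 10 http://<local-proxy>/

backend http_back
    option http-keep-alive
    # speculative, per Willy's remark above:
    option http-pretend-keepalive
    http-reuse always
    server remote remote.haproxy.internal:80
```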

Willy



Re: HAProxy1.7.9-http-reuse

2017-10-26 Thread Bryan Talbot


> On Oct 26, 2017, at 6:13 PM, karthikeyan.rajam...@thomsonreuters.com
> wrote:
> 
>  
> Yes, the log indicates that. But the RTT via ping is 204 ms. With the
> http-reuse always/aggressive option the connection is reused, and we expect
> a time close to ping + a small overhead; however, http-reuse always seems to
> have no impact on the total time taken.
> We are looking to get the option working.


I’d bet that it’s working but that it doesn’t do what you're assuming it does.

It’s not a connection pool that keeps connections open to a backend when there
are no current requests. As the last paragraph and note of
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#http-reuse
says:


No connection pool is involved, once a session dies, the last idle connection
it was attached to is deleted at the same time. This ensures that connections
may not last after all sessions are closed.

Note: connection reuse improves the accuracy of the "server maxconn" setting,
because almost no new connection will be established while idle connections
remain available. This is particularly true with the "always" strategy.

So, testing one connection at a time one would not expect to see any 
difference. The benefit comes when there are many concurrent requests.

One way to check if the feature is working would be to run your ‘ab’ test with
some concurrency N and inspect the active TCP connections from the local proxy
to the remote proxy. If the feature is working, I would expect to see about N
(or somewhat fewer) TCP connections that are reused for multiple requests. If
1000 requests are sent with concurrency 10 and 1000 different TCP
connections are used, the feature isn’t working (or the connections are private).
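Bryan's check can be expressed as a small counting sketch (the observations here are hypothetical — in practice the (address, port) pairs would come from a packet capture or repeated `ss -tn` samples):

```python
# Each observation pairs a request with the backend TCP connection
# (source address, source port) that carried it.
observations = [
    (1, ("172.31.0.5", 53202)),
    (2, ("172.31.0.5", 53202)),  # same connection -> reuse
    (3, ("172.31.0.5", 53203)),
    (4, ("172.31.0.5", 53203)),
]

distinct_conns = {conn for _, conn in observations}
requests_per_conn = len(observations) / len(distinct_conns)

# With http-reuse working at concurrency N, distinct_conns should stay
# near N; one connection per request would mean reuse is not happening.
print(len(distinct_conns), requests_per_conn)  # -> 2 2.0
```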

-Bryan



RE: HAProxy1.7.9-http-reuse

2017-10-26 Thread Karthikeyan.Rajamani
Hi Aleks,

Thanks for the reply, I have replied inline.

Thanks
Karthik

-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
Sent: Thursday, October 26, 2017 5:54 PM
To: Rajamani, Karthikeyan (TR Technology & Ops); haproxy@formilux.org
Subject: Re: HAProxy1.7.9-http-reuse

Hi

-- Originalnachricht --
Von: karthikeyan.rajam...@thomsonreuters.com
An: haproxy@formilux.org
Gesendet: 27.10.2017 00:13:50
Betreff: HAProxy1.7.9-http-reuse

>Hi,
>
>We are testing a setup with a local haproxy which connects to a
>remote haproxy, which in turn calls an Apache server that returns an
>HTML page.
>
>Haproxy(local)->Haproxy(remote)->Apache.
>
>We have the setup working; the ping time from local to remote haproxy
>is 204 ms.
>
>The time taken for the web page when accessed by the browser is 410 ms.
>
>We want the latency to be 204 ms when accessed by the browser. We
>configured connection reuse with the http-reuse aggressive|always|safe
>options
>
>but could not reduce the 410 ms to 204 ms. It is always 410 ms. Please
>let us know how we can reuse HTTP connections & reduce our latency.
ping is normally ICMP and will be answered by the kernel.
HTTP is normally TCP and will be answered by the Apache httpd.
How do you measure the latency on the browser side?

We use Chrome's developer tools (Network tab) & also Apache Bench
--
Why do you expect that a completely different workflow takes the same amount of time?

You can try to use mod_cache or some other caching concept to reduce the disk
I/O, in case you deliver a file and it fits into the OS cache.

The RTT via ping is 204 ms. With http-reuse always/aggressive, the time via
Apache Bench / the browser should be close to the RTT + a small overhead for
headers/I/O. But http-reuse has no effect on the request-to-response time. As
per the documentation the HTTP connection should be reused & the total time
should be close to the ping time.

ab -n 1000 -c 40 <ip>. The I/O time is very minimal.

Regards
Aleks

>Thanks
>
>Karthik
>
>
>
>Please find the version, local config, remote config & local haproxy 
>log
>
>
>
>1.haproxy -vv
>
>HA-Proxy version 1.7.9 2017/08/18
>
>-
>
>2.Local config
>
>-
>
>global
>
>log 127.0.0.1 local0
>
>chroot /var/lib/haproxy
>
>stats socket /run/haproxy/admin.sock mode 660 level admin
>
>stats timeout 30s
>
>user haproxy
>
>group haproxy
>
>daemon
>
>
>
>defaults
>
>log global
>
>mode http
>
>option httplog
>
>timeout http-keep-alive 5
>
>timeout connect 5000
>
>timeout client 5
>
>timeout server 5
>
>maxconn 500
>
>
>
>frontend http_front
>
>bind *:80
>
>stats uri /haproxy?stats
>
>capture response header Connection len 32
>
>default_backend http_back
>
>
>
>
>
>backend http_back
>
>option http-keep-alive
>
>http-reuse always
>
>server remote remote.haproxy.internal:80 check inter 6 maxconn
>500
>
>-
>
>3. Remote config
>
>-
>
>global
>
>log 127.0.0.1 local0
>
>chroot /var/lib/haproxy
>
>stats socket /run/haproxy/admin.sock mode 660 level admin
>
>stats timeout 30s
>
>user haproxy
>
>group haproxy
>
>daemon
>
>
>
>defaults
>
>log global
>
>mode http
>
>option httplog
>
>option http-keep-alive
>
>timeout http-keep-alive 5
>
>timeout connect 5000
>
>timeout client 5
>
>timeout server 5
>
>
>
>frontend http_front
>
>bind *:80
>
>stats uri /haproxy?stats
>
>default_backend http_back
>
>
>
>backend http_back
>
>option http-keep-alive
>
>http-reuse always
>
>server web2 52.91.x.x:80 check inter 6 maxconn 500
>
>-
>
>4. Local haproxy log
>
>172.31.x.x:53202 [26/Oct/2017:21:02:36.368] http_front http_back/web1
>0/0/204/205/410 200 89 - -  0/0/0/0/0 0/0 {} "GET / HTTP/1.0"
>
>
>
>
>
>
>
>
>
Thanks again!

Best Regards
Karthik


RE: HAProxy1.7.9-http-reuse

2017-10-26 Thread Karthikeyan.Rajamani
Hello Bryan,

Thank you for the response. I have replied inline.

From: Bryan Talbot [mailto:bryan.tal...@playnext.com]
Sent: Thursday, October 26, 2017 5:52 PM
To: Rajamani, Karthikeyan (TR Technology & Ops)
Cc: HAproxy Mailing Lists
Subject: Re: HAProxy1.7.9-http-reuse

Hello



On Oct 26, 2017, at 3:13 PM, karthikeyan.rajam...@thomsonreuters.com wrote:

Hi,
We have the setup working; the ping time from local to remote haproxy is 204
ms.
The time taken for the web page when accessed by the browser is 410 ms.
We want the latency to be 204 ms when accessed by the browser. We configured
connection reuse with the http-reuse aggressive|always|safe options
but could not reduce the 410 ms to 204 ms. It is always 410 ms. Please let us
know how we can reuse HTTP connections & reduce our latency.



4. Local haproxy log
172.31.x.x:53202 [26/Oct/2017:21:02:36.368] http_front http_back/web1 
0/0/204/205/410 200 89 - -  0/0/0/0/0 0/0 {} "GET / HTTP/1.0"


This log line says that it took your local proxy 204 ms to connect to the 
remote proxy and that the first response bytes from the remote proxy were 
received by the local proxy 205 ms later for a total round trip time of 410 ms 
(after rounding).

The only way to get the total time to be equal to the network latency
would be to make the remote respond in 0 ms (or less!). If the two proxies are
actually 200 ms apart, I don’t see how you could do much better.

-Bryan

Yes, the log indicates that. But the RTT via ping is 204 ms. With the
http-reuse always/aggressive option the connection is reused, and we expect a
time close to ping + a small overhead; however, http-reuse always seems to have
no impact on the total time taken.
We are looking to get the option working.

Thanks again!

Best Regards
Karthik


Re: [PATCHES][ssl] Add 0-RTT support with OpenSSL 1.1.1

2017-10-26 Thread Olivier Houchard
Hi,

You'll find attached updated patches, rebased on the latest master, and on
top of Emmanuel's latest patches (also attached for reference).
This version allows enabling 0-RTT per SNI.
It unfortunately still can't send early data to servers; this may or may
not happen later.

Regards,

Olivier
>From 25d10a4b30d946de138ccdd3b2595fa84a9da675 Mon Sep 17 00:00:00 2001
From: Emmanuel Hocdet 
Date: Wed, 16 Aug 2017 11:28:44 +0200
Subject: [PATCH 1/6] MEDIUM: ssl: convert CBS (BoringSSL api) usage to neutral
 code

The switchctx early callback is only supported by BoringSSL. To prepare
support for the OpenSSL 1.1.1 early callback, convert the CBS API usage to
neutral code that works with any SSL lib.
---
 src/ssl_sock.c | 109 ++---
 1 file changed, 58 insertions(+), 51 deletions(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 3d9723949..25b846b25 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -1965,50 +1965,57 @@ static int ssl_sock_switchctx_err_cbk(SSL *ssl, int *al, void *priv)
 
 static int ssl_sock_switchctx_cbk(const struct ssl_early_callback_ctx *ctx)
 {
+   SSL *ssl = ctx->ssl;
struct connection *conn;
struct bind_conf *s;
const uint8_t *extension_data;
size_t extension_len;
-   CBS extension, cipher_suites, server_name_list, host_name, sig_algs;
-   const SSL_CIPHER *cipher;
-   uint16_t cipher_suite;
-   uint8_t name_type, hash, sign;
int has_rsa = 0, has_ecdsa = 0, has_ecdsa_sig = 0;
 
char *wildp = NULL;
const uint8_t *servername;
+   size_t servername_len;
	struct ebmb_node *node, *n, *node_ecdsa = NULL, *node_rsa = NULL, *node_anonymous = NULL;
int i;
 
-   conn = SSL_get_app_data(ctx->ssl);
+   conn = SSL_get_app_data(ssl);
s = objt_listener(conn->target)->bind_conf;
 
	if (SSL_early_callback_ctx_extension_get(ctx, TLSEXT_TYPE_server_name,
	                                         &extension_data, &extension_len)) {
-		CBS_init(&extension, extension_data, extension_len);
-
-		if (!CBS_get_u16_length_prefixed(&extension, &server_name_list)
-		    || !CBS_get_u8(&server_name_list, &name_type)
-		    /* Although the server_name extension was intended to be extensible to
-		     * new name types and multiple names, OpenSSL 1.0.x had a bug which meant
-		     * different name types will cause an error. Further, RFC 4366 originally
-		     * defined syntax inextensibly. RFC 6066 corrected this mistake, but
-		     * adding new name types is no longer feasible.
-		     *
-		     * Act as if the extensibility does not exist to simplify parsing. */
-		    || !CBS_get_u16_length_prefixed(&server_name_list, &host_name)
-		    || CBS_len(&server_name_list) != 0
-		    || CBS_len(&extension) != 0
-		    || name_type != TLSEXT_NAMETYPE_host_name
-		    || CBS_len(&host_name) == 0
-		    || CBS_len(&host_name) > TLSEXT_MAXLEN_host_name
-		    || CBS_contains_zero_byte(&host_name)) {
+		/*
+		 * The server_name extension was given too much extensibility when it
+		 * was written, so parsing the normal case is a bit complex.
+		 */
+		size_t len;
+		if (extension_len <= 2)
 			goto abort;
-		}
+		/* Extract the length of the supplied list of names. */
+		len = (*extension_data++) << 8;
+		len |= *extension_data++;
+		if (len + 2 != extension_len)
+			goto abort;
+		/*
+		 * The list in practice only has a single element, so we only consider
+		 * the first one.
+		 */
+		if (len == 0 || *extension_data++ != TLSEXT_NAMETYPE_host_name)
+			goto abort;
+		extension_len = len - 1;
+		/* Now we can finally pull out the byte array with the actual hostname. */
+		if (extension_len <= 2)
+			goto abort;
+		len = (*extension_data++) << 8;
+		len |= *extension_data++;
+		if (len == 0 || len + 2 > extension_len || len > TLSEXT_MAXLEN_host_name
+		    || memchr(extension_data, 0, len) != NULL)
+			goto abort;
+		servername = extension_data;
+		servername_len = len;
	} else {
		/* without SNI extension, is the default_ctx (need SSL_TLSEXT_ERR_NOACK) */
if (!s->strict_sni) {
-   ssl_sock_switchctx_set(ctx->ssl, s->default_ctx);
+   ssl_sock_switchctx_set(ssl, s->default_ctx);
return 1;
}
goto abort;
@@ -2016,21 +2023,19 @@ static int ssl_sock_switchctx_cbk(const 

Re: HAProxy1.7.9-http-reuse

2017-10-26 Thread Aleksandar Lazic

Hi

-- Originalnachricht --
Von: karthikeyan.rajam...@thomsonreuters.com
An: haproxy@formilux.org
Gesendet: 27.10.2017 00:13:50
Betreff: HAProxy1.7.9-http-reuse


Hi,

We are testing a setup with a local haproxy which connects to a
remote haproxy, which in turn calls an Apache server that returns an
HTML page.


Haproxy(local)->Haproxy(remote)->Apache.

We have the setup working; the ping time from local to remote haproxy
is 204 ms.


The time taken for the web page when accessed by the browser is 410 ms.

We want the latency to be 204 ms when accessed by the browser. We
configured connection reuse with the http-reuse aggressive|always|safe
options


but could not reduce the 410 ms to 204 ms. It is always 410 ms. Please
let us know how we can reuse HTTP connections & reduce our latency.

ping is normally ICMP and will be answered by the kernel.
HTTP is normally TCP and will be answered by the Apache httpd.
How do you measure the latency on the browser side?

Why do you expect that a completely different workflow takes the same
amount of time?


You can try to use mod_cache or some other caching concept to reduce the
disk I/O, in case you deliver a file and it fits into the OS cache.


Regards
Aleks


Thanks

Karthik



Please find the version, local config, remote config & local haproxy 
log




1.haproxy -vv

HA-Proxy version 1.7.9 2017/08/18

-

2.Local config

-

global

   log 127.0.0.1 local0

   chroot /var/lib/haproxy

   stats socket /run/haproxy/admin.sock mode 660 level admin

   stats timeout 30s

   user haproxy

   group haproxy

   daemon



defaults

   log global

   mode http

   option httplog

   timeout http-keep-alive 5

   timeout connect 5000

   timeout client 5

   timeout server 5

   maxconn 500



frontend http_front

   bind *:80

   stats uri /haproxy?stats

   capture response header Connection len 32

   default_backend http_back





backend http_back

   option http-keep-alive

   http-reuse always

   server remote remote.haproxy.internal:80 check inter 6 maxconn 
500


-

3. Remote config

-

global

   log 127.0.0.1 local0

   chroot /var/lib/haproxy

   stats socket /run/haproxy/admin.sock mode 660 level admin

   stats timeout 30s

   user haproxy

   group haproxy

   daemon



defaults

   log global

   mode http

   option httplog

   option http-keep-alive

   timeout http-keep-alive 5

   timeout connect 5000

   timeout client 5

   timeout server 5



frontend http_front

bind *:80

stats uri /haproxy?stats

default_backend http_back



backend http_back

   option http-keep-alive

   http-reuse always

   server web2 52.91.x.x:80 check inter 6 maxconn 500

-

4. Local haproxy log

172.31.x.x:53202 [26/Oct/2017:21:02:36.368] http_front http_back/web1 
0/0/204/205/410 200 89 - -  0/0/0/0/0 0/0 {} "GET / HTTP/1.0"















Re: HAProxy1.7.9-http-reuse

2017-10-26 Thread Bryan Talbot
Hello


> On Oct 26, 2017, at 3:13 PM, karthikeyan.rajam...@thomsonreuters.com
> wrote:
> 
> Hi,
> We have the setup working; the ping time from local to remote haproxy is
> 204 ms.
> The time taken for the web page when accessed by the browser is 410 ms.
> We want the latency to be 204 ms when accessed by the browser. We configured
> connection reuse with the http-reuse aggressive|always|safe options
> but could not reduce the 410 ms to 204 ms. It is always 410 ms. Please let us
> know how we can reuse HTTP connections & reduce our latency.
>  

> 4. Local haproxy log
> 172.31.x.x:53202 [26/Oct/2017:21:02:36.368] http_front http_back/web1 
> 0/0/204/205/410 200 89 - -  0/0/0/0/0 0/0 {} "GET / HTTP/1.0"


This log line says that it took your local proxy 204 ms to connect to the 
remote proxy and that the first response bytes from the remote proxy were 
received by the local proxy 205 ms later for a total round trip time of 410 ms 
(after rounding).

The only way to get the total time to be equal to the network latency
would be to make the remote respond in 0 ms (or less!). If the two proxies are
actually 200 ms apart, I don’t see how you could do much better.

-Bryan



HAProxy1.7.9-http-reuse

2017-10-26 Thread Karthikeyan.Rajamani
Hi,
We are testing a setup with a local haproxy which connects to a remote
haproxy, which in turn calls an Apache server that returns an HTML page.
Haproxy(local)->Haproxy(remote)->Apache.
We have the setup working; the ping time from local to remote haproxy is 204
ms.
The time taken for the web page when accessed by the browser is 410 ms.
We want the latency to be 204 ms when accessed by the browser. We configured
connection reuse with the http-reuse aggressive|always|safe options
but could not reduce the 410 ms to 204 ms. It is always 410 ms. Please let us
know how we can reuse HTTP connections & reduce our latency.

Thanks
Karthik

Please find the version, local config, remote config & local haproxy log

1.haproxy -vv
HA-Proxy version 1.7.9 2017/08/18
-
2.Local config
-
global
   log 127.0.0.1 local0
   chroot /var/lib/haproxy
   stats socket /run/haproxy/admin.sock mode 660 level admin
   stats timeout 30s
   user haproxy
   group haproxy
   daemon

defaults
   log global
   mode http
   option httplog
   timeout http-keep-alive 5
   timeout connect 5000
   timeout client 5
   timeout server 5
   maxconn 500

frontend http_front
   bind *:80
   stats uri /haproxy?stats
   capture response header Connection len 32
   default_backend http_back


backend http_back
   option http-keep-alive
   http-reuse always
   server remote remote.haproxy.internal:80 check inter 6 maxconn 500
-
3. Remote config
-
global
   log 127.0.0.1 local0
   chroot /var/lib/haproxy
   stats socket /run/haproxy/admin.sock mode 660 level admin
   stats timeout 30s
   user haproxy
   group haproxy
   daemon

defaults
   log global
   mode http
   option httplog
   option http-keep-alive
   timeout http-keep-alive 5
   timeout connect 5000
   timeout client 5
   timeout server 5

frontend http_front
bind *:80
stats uri /haproxy?stats
default_backend http_back

backend http_back
   option http-keep-alive
   http-reuse always
   server web2 52.91.x.x:80 check inter 6 maxconn 500
-
4. Local haproxy log
172.31.x.x:53202 [26/Oct/2017:21:02:36.368] http_front http_back/web1 
0/0/204/205/410 200 89 - -  0/0/0/0/0 0/0 {} "GET / HTTP/1.0"






Re: PATCH: Lua: add UUID to the Proxy Class

2017-10-26 Thread Thierry Fournier
Thanks Baptiste,

This patch will be useful.

Thierry


> On 26 Oct 2017, at 21:59, Baptiste  wrote:
> 
> Hi,
> 
> I saw that the UUID was missing in the Proxy Class in Lua, so I added it.
> 
> The patch is in attachment.
> 
> Baptiste
> <0001-MINOR-lua-add-uuid-to-the-Class-Proxy.patch>




PATCH: Lua: add UUID to the Proxy Class

2017-10-26 Thread Baptiste
Hi,

I saw that the UUID was missing in the Proxy Class in Lua, so I added it.

The patch is in attachment.

Baptiste
From 7fc0433e3f2da0e86bc5ae0cd845856ec23743b7 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Thu, 26 Oct 2017 21:51:58 +0200
Subject: [PATCH] MINOR: lua: add uuid to the Class Proxy

The proxy UUID parameter is not set in the Lua Proxy Class.
This patch adds it.
---
 doc/lua-api/index.rst | 4 
 src/hlua_fcn.c| 6 ++
 2 files changed, 10 insertions(+)

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index 822f8bc..96367f6 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -823,6 +823,10 @@ Proxy class
 
   Contain the name of the proxy.
 
+.. js:attribute:: Proxy.uuid
+
+  Contain the unique identifier of the proxy.
+
 .. js:attribute:: Proxy.servers
 
   Contain an array with the attached servers. Each server entry is an object of
diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
index 2ae1bbb..9a7e657 100644
--- a/src/hlua_fcn.c
+++ b/src/hlua_fcn.c
@@ -781,6 +781,12 @@ int hlua_fcn_new_proxy(lua_State *L, struct proxy *px)
 	lua_pushstring(L, px->id);
 	lua_settable(L, -3);
 
+	/* Add proxy uuid. */
+	lua_pushstring(L, "uuid");
+	snprintf(buffer, sizeof(buffer), "%d", px->uuid);
+	lua_pushstring(L, buffer);
+	lua_settable(L, -3);
+
 	/* Browse and register servers. */
 	lua_pushstring(L, "servers");
 	lua_newtable(L);
-- 
2.7.4



Re: Tcp logging in haproxy

2017-10-26 Thread Aleksandar Lazic

Hi.

-- Originalnachricht --
Von: "kushal bhattacharya" 
An: haproxy@formilux.org
Gesendet: 26.10.2017 11:20:05
Betreff: Tcp logging in haproxy

I have included TCP logging in the configuration of haproxy, but I want
to know how and where it will be logged. My main goal is to dump the log
output to a custom file and then watch the logs written to it.

Thanks,
Kushal


Do you know how the logging framework in haproxy works?

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-log

You can use a syslog address or a unix socket to send the logs.

Could you please tell us the version of haproxy and the relevant part of
the config for your question, thanks.
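For example (a hedged sketch, assuming a local rsyslog listening on the syslog UDP port; the file path is made up), the two halves might look like:

```
# haproxy.cfg: send logs to the local syslog daemon
global
    log 127.0.0.1:514 local0

defaults
    log global
    option tcplog

# /etc/rsyslog.d/49-haproxy.conf (rsyslog side, assumption):
# local0.*    /var/log/haproxy.log
```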


Best regards
Aleks




Re: HAProxy support for SOCKS4 as alternative to PROXY protocol?

2017-10-26 Thread Aleksandar Lazic



Am 26-10-2017 17:40, schrieb Ciprian Dorin Craciun:
On Sun, Oct 22, 2017 at 11:11 PM, Aleksandar Lazic  
wrote:

Currently the socks protocol is not implemented in haproxy.



I was hoping someone had a patch "hidden".  :)


Well then it's still hidden ;-)


What flow do you have in mind?



I have a couple of use-cases in mind, like for example:

* SOCKS4 in the backend, would allow HAProxy to route all backend
traffic through a proper SOCKS4 proxy;  this might be used as a
poor-man variant of a tunnel, like for example via SSH;  (if one makes
HAProxy into a transparent proxy, it could even serve as a layer-7
firewall;)

* SOCKS4 in the frontend, would allow HAProxy to act like a SOCKS4
proxy, and apply for example HTTP routing and filtering;  (for example
one configures HAProxy as a SOCKS4 proxy in a browser;)

Basically it allows HAProxy to interoperate with other SOCKS4 proxies
like SSH or Tor.


Sounds interesting. Even though I can't really help implement it, I can
point to some ideas.


Since https://en.wikipedia.org/wiki/SOCKS#SOCKS4a describes the protocol
fields, maybe you can use

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#req.payload

to inspect the content, as in this example

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.1.5

and create a map for dedicated backends.
It's not that dynamic but maybe a starting point.

Or you can add a layer 6 module like the ssl module.

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.5
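A very rough sketch of the first idea (untested and purely illustrative; the byte value comes from the SOCKS4 protocol — version 0x04 — and all backend names here are made up):

```
frontend socks_in
    mode tcp
    bind *:1080
    tcp-request inspect-delay 5s
    # first payload byte 0x04 = SOCKS version 4
    acl is_socks4 req.payload(0,1) -m bin 04
    tcp-request content accept if is_socks4
    use_backend socks_handler if is_socks4
    default_backend other_traffic
```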


Ciprian.


Regards
Aleks



Re: HAProxy support for SOCKS4 as alternative to PROXY protocol?

2017-10-26 Thread Ciprian Dorin Craciun
On Sun, Oct 22, 2017 at 11:11 PM, Aleksandar Lazic  wrote:
> Currently the socks protocol is not implemented in haproxy.


I was hoping someone had a patch "hidden".  :)




> What flow do you have in mind?


I have a couple of use-cases in mind, like for example:

* SOCKS4 in the backend, would allow HAProxy to route all backend
traffic through a proper SOCKS4 proxy;  this might be used as a
poor-man variant of a tunnel, like for example via SSH;  (if one makes
HAProxy into a transparent proxy, it could even serve as a layer-7
firewall;)

* SOCKS4 in the frontend, would allow HAProxy to act like a SOCKS4
proxy, and apply for example HTTP routing and filtering;  (for example
one configures HAProxy as a SOCKS4 proxy in a browser;)

Basically it allows HAProxy to interoperate with other SOCKS4 proxies
like SSH or Tor.

Ciprian.



[PATCH] BUG/MEDIUM: prevent buffers being overwritten during build_logline() execution

2017-10-26 Thread Dragan Dosen
Hi all,

Here's a patch that fixes the problem with trash buffers being
overwritten during build_logline() execution.

Thanks.

Best regards,
Dragan Dosen
>From a5652cdbdbc71e4d303f28e6cacd7bdad263409f Mon Sep 17 00:00:00 2001
From: Dragan Dosen 
Date: Thu, 26 Oct 2017 11:25:10 +0200
Subject: [PATCH] BUG/MEDIUM: prevent buffers being overwritten during
 build_logline() execution

Calls to build_logline() are audited in order to use dynamic trash buffers
allocated by alloc_trash_chunk() instead of global trash buffers.

This is similar to commits 07a0fec ("BUG/MEDIUM: http: Prevent
replace-header from overwriting a buffer") and 0d94576 ("BUG/MEDIUM: http:
prevent redirect from overwriting a buffer").

This patch should be backported in 1.7, 1.6 and 1.5. It relies on commit
b686afd ("MINOR: chunks: implement a simple dynamic allocator for trash
buffers") for the trash allocator, which has to be backported as well.
---
 src/proto_http.c | 218 ++-
 src/stream.c |  18 +++--
 2 files changed, 149 insertions(+), 87 deletions(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index 0662041..c81409f 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -2555,19 +2555,25 @@ resume_execution:
 			break;
 
 		case ACT_HTTP_SET_HDR:
-		case ACT_HTTP_ADD_HDR:
+		case ACT_HTTP_ADD_HDR: {
 			/* The scope of the trash buffer must be limited to this function. The
 			 * build_logline() function can execute a lot of other function which
 			 * can use the trash buffer. So for limiting the scope of this global
 			 * buffer, we build first the header value using build_logline, and
 			 * after we store the header name.
 			 */
+			struct chunk *replace;
+
+			replace = alloc_trash_chunk();
+			if (!replace)
+return HTTP_RULE_RES_BADREQ;
+
 			len = rule->arg.hdr_add.name_len + 2,
-			len += build_logline(s, trash.str + len, trash.size - len, &rule->arg.hdr_add.fmt);
-			memcpy(trash.str, rule->arg.hdr_add.name, rule->arg.hdr_add.name_len);
-			trash.str[rule->arg.hdr_add.name_len] = ':';
-			trash.str[rule->arg.hdr_add.name_len + 1] = ' ';
-			trash.len = len;
+			len += build_logline(s, replace->str + len, replace->size - len, &rule->arg.hdr_add.fmt);
+			memcpy(replace->str, rule->arg.hdr_add.name, rule->arg.hdr_add.name_len);
+			replace->str[rule->arg.hdr_add.name_len] = ':';
+			replace->str[rule->arg.hdr_add.name_len + 1] = ' ';
+			replace->len = len;
 
 			if (rule->action == ACT_HTTP_SET_HDR) {
 /* remove all occurrences of the header */
@@ -2578,90 +2584,105 @@ resume_execution:
 }
 			}
 
-			http_header_add_tail2(>req, >hdr_idx, trash.str, trash.len);
+			http_header_add_tail2(>req, >hdr_idx, replace->str, replace->len);
+
+			free_trash_chunk(replace);
 			break;
+			}
 
 		case ACT_HTTP_DEL_ACL:
 		case ACT_HTTP_DEL_MAP: {
 			struct pat_ref *ref;
-			char *key;
-			int len;
+			struct chunk *key;
 
 			/* collect reference */
 			ref = pat_ref_lookup(rule->arg.map.ref);
 			if (!ref)
 continue;
 
+			/* allocate key */
+			key = alloc_trash_chunk();
+			if (!key)
+return HTTP_RULE_RES_BADREQ;
+
 			/* collect key */
-			len = build_logline(s, trash.str, trash.size, &rule->arg.map.key);
-			key = trash.str;
-			key[len] = '\0';
+			key->len = build_logline(s, key->str, key->size, &rule->arg.map.key);
+			key->str[key->len] = '\0';
 
 			/* perform update */
 			/* returned code: 1=ok, 0=ko */
-			pat_ref_delete(ref, key);
+			pat_ref_delete(ref, key->str);
 
+			free_trash_chunk(key);
 			break;
 			}
 
 		case ACT_HTTP_ADD_ACL: {
 			struct pat_ref *ref;
-			char *key;
-			struct chunk *trash_key;
-			int len;
-
-			trash_key = get_trash_chunk();
+			struct chunk *key;
 
 			/* collect reference */
 			ref = pat_ref_lookup(rule->arg.map.ref);
 			if (!ref)
 continue;
 
+			/* allocate key */
+			key = alloc_trash_chunk();
+			if (!key)
+return HTTP_RULE_RES_BADREQ;
+
 			/* collect key */
-			len = build_logline(s, trash_key->str, trash_key->size, &rule->arg.map.key);
-			key = trash_key->str;
-			key[len] = '\0';
+			key->len = build_logline(s, key->str, key->size, &rule->arg.map.key);
+			key->str[key->len] = '\0';
 
 			/* perform update */
 			/* add entry only if it does not already exist */
-			if (pat_ref_find_elt(ref, key) == NULL)
-pat_ref_add(ref, key, NULL, NULL);
+			if (pat_ref_find_elt(ref, key->str) == NULL)
+pat_ref_add(ref, key->str, NULL, NULL);
 
+			free_trash_chunk(key);
 			break;
 			}
 
 		case ACT_HTTP_SET_MAP: {
 			struct pat_ref *ref;
-			char *key, *value;
-			struct chunk *trash_key, *trash_value;
-			int len;
-
-			trash_key = get_trash_chunk();
-			trash_value = get_trash_chunk();
+			struct chunk *key, *value;
 
 			/* collect reference */
 			ref = pat_ref_lookup(rule->arg.map.ref);
 			if (!ref)
 continue;
 
+			/* allocate key */
+			key = alloc_trash_chunk();
+			if (!key)
+return HTTP_RULE_RES_BADREQ;
+
+			/* allocate value */
+			value = alloc_trash_chunk();
+			if (!value) {
+

Tcp logging in haproxy

2017-10-26 Thread kushal bhattacharya
I have included TCP logging in the configuration of haproxy, but I want to
know how and where it will be logged. My main goal is to dump the log output
to a custom file and then watch the logs written to it.
Thanks,
Kushal