Re: Require info on ACL for rate limiting on per URL basis.

2019-02-21 Thread Jarno Huuskonen
Hi,

On Thu, Feb 21, Badari Prasad wrote:
> But after replacing 'src' with 'path', rate-limiting did not work. My current
> config after the change is:
> 
> backend st_src_as2_monte
> stick-table type string len 64 size 1m expire 1s store http_req_rate(1s)

(For testing it helps to use a longer expire, e.g. 60s, and a longer rate
period (60s). Then it's easier to use the admin socket to view stick-table
values and see whether the stick table is being updated, etc.)
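For example, with longer windows and the stats socket enabled (the socket
path below is an assumption; it needs a "stats socket ... level admin" line
in the global section):

backend st_src_as2_monte
    stick-table type string len 64 size 1m expire 60s store http_req_rate(60s)

# then inspect the tracked keys and their request rates:
echo "show table st_src_as2_monte" | socat stdio /var/run/haproxy.sock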

> frontend scef
> bind 0.0.0.0:80
> bind 0.0.0.0:443 ssl crt /etc/ssl/private/as1.pem
> mode http
> option forwardfor
> 
> http-request track-sc1 path table st_src_as2_monte

You're using sc1 here.

> acl monte_as2_api_url path_beg /api/v1/monitoring-event/A02/
> #500 requests per second.
> acl monte_as1_exceeds_limit sc0_http_req_rate(st_src_as1_monte) gt 500

And sc0 here; the two must use the same counter ID, so change this fetch to
sc1_http_req_rate (or track with track-sc0 instead).
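For reference, a consistent version of those lines might look roughly like
this (a sketch reusing your names, not a tested config). Note also that the
acl is defined as monte_as1_exceeds_limit but the deny rule references
monte_as2_exceeds_limit, and the rate fetch points at st_src_as1_monte while
the track rule uses st_src_as2_monte; those names have to match as well:

backend st_src_as2_monte
    stick-table type string len 64 size 1m expire 1s store http_req_rate(1s)

frontend scef
    ...
    acl monte_as2_api_url path_beg /api/v1/monitoring-event/A02/
    acl monte_as2_exceeds_limit sc1_http_req_rate(st_src_as2_monte) gt 500
    # track only the paths you are interested in
    http-request track-sc1 path table st_src_as2_monte if monte_as2_api_url
    http-request deny deny_status 429 if monte_as2_api_url monte_as2_exceeds_limit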

-Jarno

> http-request deny deny_status 429 if monte_as2_api_url monte_as2_exceeds_limit
> use_backend nodes
> Appreciate the response on this; going further I will have to extend
> the rate limiting to multiple URLs.
> 
> 
> Thanks
>  badari
> 
> 
> 
> On Wed, Feb 20, 2019 at 11:13 PM Jarno Huuskonen 
> wrote:
> 
> > Hi,
> >
> > On Wed, Feb 20, Badari Prasad wrote:
> > >  Thank you for responding. Here's what I came up with based on the inputs:
> > >
> > > #printf "as2monte" | mkpasswd --stdin --method=md5
> > > userlist AuthUsers_MONTE_AS2
> > > user appuser_as2  password $1$t25fZ7Oe$bjthsMcXgbCt2EJvQo8r0/
> > >
> > > backend st_src_as2_monte
> > > stick-table type string len 64 size 1000 expire 1s store
> > > http_req_rate(1s)
> > >
> > > frontend scef
> > > bind 0.0.0.0:80
> > > bind 0.0.0.0:443 ssl crt /etc/ssl/private/as1.pem
> > > mode http
> > > #option httpclose
> > > option forwardfor
> > >
> > > acl monte_as2_api_url url_beg /api/v1/monitoring-event/A02/
> > > #500 requests per second.
> > > acl monte_as2_exceeds_limit src_http_req_rate(st_src_as2_monte) gt
> > 500
> > > http-request track-sc1 src table st_src_as2_monte unless
> > > monte_as2_exceeds_limit
> > > http-request deny deny_status 429 if monte_as2_api_url
> > > monte_as2_exceeds_limit
> >
> > I'm confused :) about what your requirements are, but I think with
> > this configuration each src address can have a rate of 500 to
> > /api/v1/monitoring-event/A02/ (so with 10 different src addresses
> > you can have a total rate of 5000 to /api/v1/monitoring-event/A02/).
> >
> > (And you're using a type string stick table; type ip or ipv6 is a better
> > fit for tracking src.)
> >
> > But if it fits your requirements then I'm glad you found a working
> > solution.
> >
> > -Jarno
> >
> > > http-request auth realm basicauth if monte_as2_api_url
> > > !authorized_monte_as2
> > >
> > > use_backend nodes
> > >
> > > With this config I was able to rate limit on a per-URL basis.
> > >
> > > Thanks
> > >  badari
> > >
> > >
> > >
> > > On Tue, Feb 19, 2019 at 10:01 PM Jarno Huuskonen  > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Feb 11, Badari Prasad wrote:
> > > > >I want to rate limit based on url
> > > > > [/api/v1/monitoring-event/A01, /api/v1/client1/transfer_data,
> > > > > /api/v1/client2/transfer_data  ]  no matter what the source ip
> > address
> > > > is.
> > > >
> > > > Something like this might help you. Unfortunately at the moment
> > > > I don't have time to create a better example.
> > > >
> > > > acl api_a1 path_beg /a1
> > > > acl api_b1 path_beg /b1
> > > > acl rate_5 sc0_http_req_rate(test_be) gt 5
> > > > acl rate_15 sc0_http_req_rate(test_be) gt 15
> > > >
> > > > # You might want to add acl so you'll only track paths you're
> > > > # interested in.
> > > > http-request track-sc0 path table test_be
> > > > # if you want to track only /a1 /b1 part of path
> > > > # you can use for example field converter:
> > > > #http-request track-sc0 path,field(1,/,2) table test_be
> > > > #http-request set-header X-Rate %[sc0_http_req_rate(test_be)]
> > > >
> > > > http-request deny deny_status 429 if api_a1 rate_5
> > > > http-request deny deny_status 403 if api_b1 rate_15
> > > >
> > > > # adjust len and size etc. to your needs
> > > > backend test_be
> > > > stick-table type string len 40 size 20 expire 180s store
> > > > http_req_rate(60s)
> > > >
> > > > -Jarno
> > > >
> > > > > On Mon, Feb 11, 2019 at 7:34 PM Jarno Huuskonen <
> > jarno.huusko...@uef.fi>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > On Mon, Feb 11, Badari Prasad wrote:
> > > > > > > Thank you for the response. I came up with my own haproxy
> > cfg,
> > > > where
> > > > > > i
> > > > > > > would want to rate limit based on event name and client id in
> > url.
> > > > > > > URL ex : /api/v1//
> > > > > > >
> > > > > > > Have attached a file for my haproxy cfg.  But it does not seems
> > to be
> > >

Re: Idea for the Wiki

2019-02-21 Thread Willy Tarreau
On Fri, Feb 22, 2019 at 01:54:00AM +0100, Tim Düsterhus wrote:
> I suggest creating new pages using the web interface only, to make sure
> it can handle them. Editing can be done using git.

I agree. I'm seeing another benefit to this, which is that it will
guarantee that we only use simple things that everyone can modify
using the same web interface.

Willy



Re: Idea for the Wiki

2019-02-21 Thread Tim Düsterhus
Baptiste,

On 20.02.19 at 07:44, Baptiste wrote:
> I just cloned the repo :)
> How should we organize directories and pages?
> IE for TLS offloading:
>   /common/acceleration/tls_offloading.md ?
> I think it's quite important to agree on it now, because the folders will
> be part of the URL indexed by google :)

Be careful here: while the wiki technically is a git repository that
supports folders, the GitHub web interface does not.

[timwolla@/t/test.wiki (master)]find . -not -path './.git/*'
.
./Home.md
./Folder2
./Folder2/Test.md
./.git
./Foo-bar.md
./Folder
./Folder/Test.md

-> https://github.com/TimWolla/test/wiki/Test

There are two pages called `Test` in the sidebar, but only the one in
`Folder` can be accessed. The one in `Folder2` can't.

I suggest creating new pages using the web interface only, to make sure
it can handle them. Editing can be done using git.

Best regards
Tim Düsterhus



Re: error in haproxy 1.9 using txn.req:send in lua

2019-02-21 Thread Thierry Fournier
Hi,

You can use something like this:

   --> receive request from client
   --> frontend a
   --> use-service lua.xxx
   --> [forward data using core.tcp(127.0.0.1:)
   --> frontend (bind 127.0.0.1:)
   --> [send request to your server]
   --> [receive response]
   --> [read response in lua.xxx]
   --> [modify response in lua.xxx]
   --> [forward response ]
   --> frontend a
   --> send response to client

It is a little bit ugly, and it eats a lot of memory and hurts performance,
but it works!
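For what it's worth, a minimal sketch of that relay pattern (the service
name, the internal port 8081, the Host header and the response tweak are
assumptions for illustration, not a tested implementation):

-- haproxy side (sketch):
--   frontend a
--       option http-buffer-request
--       http-request use-service lua.relay
--   frontend internal
--       bind 127.0.0.1:8081
--       default_backend your_servers

core.register_service("relay", "http", function(applet)
    -- read the client request body
    local body = applet:receive()

    -- rebuild a request for the internal frontend
    local req = applet.method .. " " .. applet.path .. " HTTP/1.1\r\n"
             .. "Host: internal\r\n"
             .. "Content-Length: " .. #body .. "\r\n"
             .. "Connection: close\r\n\r\n"
             .. body

    -- forward it over a plain TCP socket to the second frontend
    local sock = core.tcp()
    sock:settimeout(5)
    if not sock:connect("127.0.0.1", 8081) then
        applet:set_status(502)
        applet:start_response()
        return
    end
    sock:send(req)

    -- read the whole response, modify it, and return it to the client
    -- (a real version would split the status line and headers from the body)
    local resp = sock:receive("*a") or ""
    sock:close()
    resp = resp:gsub("valueA", "valueB")

    applet:set_status(200)
    applet:start_response()
    applet:send(resp)
end)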

BR,
Thierry


> On 13 Feb 2019, at 11:18, Laurent Penot  wrote:
> 
> Hi Christopher,
> 
> I'm so sad
> It was really working well in my use case with 1.8 versions.
> Thanks a lot for your answer
> 
> Best
> Laurent
> 
> 
> 
> On 13/02/2019 10:56, "Christopher Faulet"  wrote:
> 
    >On 13/02/2019 at 09:34, Laurent Penot wrote:
>> Hi Thierry, guys,
>> 
>> When receiving a POST request on haproxy, I use lua to compute some 
>> values, and modify the body of the request before forwarding to the 
>> backend, so my backend can get these variables from the POST and use them.
>> 
>> Here is a sample cfg, and lua code to reproduce this.
>> 
>> # Conf (I removed all defaults, timeouts and co ..) :
>> 
>> frontend front-nodes
>> bind :80
>> # option to wait for the body before processing (mandatory for POST requests)
>> option http-buffer-request
>> # default backend
>> default_backend be_test
>> http-request lua.manageRequests
>> 
>> # Lua :
>> 
>> function manageRequests(txn)
>> -- create new postdata
>> local newPostData = core.concat()
>> newPostData:add('POST /test.php HTTP/1.1\r\n')
>> newPostData:add('Host: test1\r\n')
>> newPostData:add('content-type: application/x-www-form-urlencoded\r\n')
>> local newBodyStr = 'var1=valueA&var2=valueB'
>> local newBodyLen = string.len(newBodyStr)
>> newPostData:add('content-length: ' .. tostring(newBodyLen) .. '\r\n')
>> newPostData:add('\r\n')
>> newPostData:add(newBodyStr)
>> local newPostDataStr = tostring(newPostData:dump())
>> txn.req:send(newPostDataStr)
>> end
>> 
>> core.register_action("manageRequests", { "http-req" }, manageRequests)
>> 
>> This is working well in haproxy 1.8.x (x: 14 to 18) but I get the 
>> following error with 1.9.4 (same error with 1.9.2, other 1.9.x versions 
>> not tested):
>> 
>> Lua function 'manageRequests': runtime error: 0 from [C] method 'send', 
>> /etc/haproxy/lua/bench.lua:97 C function line 80.
>> 
>> Line 97 of my lua file is txn.req:send(newPostDataStr)
>> 
>> Maybe I'm missing something on 1.9.x but can't find what, or maybe it's a 
>> bug, I can’t say.
>> 
>Hi Laurent,
> 
    >It is not supported to modify an HTTP request/response by calling Channel 
    >functions. It means calling the following functions within an HTTP proxy is 
    >forbidden: Channel.get, Channel.dup, Channel.getline, Channel.set, 
    >Channel.append, Channel.send, Channel.forward.
> 
    >Since HAProxy 1.9, a runtime error is triggered (because there is no way 
    >to detect this during configuration parsing, AFAIK). You may see this as a 
    >regression, but in fact it was never really supported; because of a 
    >missing check, no error was triggered. These functions totally 
    >hijack the HTTP parser, so if they are used the result is undefined. There are 
    >many ways to crash HAProxy. Unfortunately, for now, there is no way to 
    >rewrite the HTTP messages in Lua.
> 
>-- 
>Christopher Faulet
> 
> 




Re: Require info on ACL for rate limiting on per URL basis.

2019-02-21 Thread Badari Prasad
Hi,
   Thank you for the response. I want to have rate-limiting on the URL no
matter what the src IP is.
So one difference I noticed is:
  http-request track-sc1 src table st_src_as2_monte unless monte_as2_exceeds_limit
From your example I see:
http-request track-sc0 path table test_be

But after replacing 'src' with 'path', rate-limiting did not work. My current
config after the change is:

backend st_src_as2_monte
stick-table type string len 64 size 1m expire 1s store http_req_rate(1s)

frontend scef
bind 0.0.0.0:80
bind 0.0.0.0:443 ssl crt /etc/ssl/private/as1.pem
mode http
option forwardfor

http-request track-sc1 path table st_src_as2_monte
acl monte_as2_api_url path_beg /api/v1/monitoring-event/A02/
#500 requests per second.
acl monte_as1_exceeds_limit sc0_http_req_rate(st_src_as1_monte) gt 500
http-request deny deny_status 429 if monte_as2_api_url monte_as2_exceeds_limit
use_backend nodes
Appreciate the response on this; going further I will have to extend
the rate limiting to multiple URLs.
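Going further, a rough sketch of extending this to several URLs, reusing the
path tracking above (limits, names and the second prefix are placeholders,
based on Jarno's earlier path-tracking example):

backend st_url_rates
    stick-table type string len 64 size 1m expire 1s store http_req_rate(1s)

frontend scef
    ...
    acl monte_api       path_beg /api/v1/monitoring-event/A02/
    acl transfer_api    path_beg /api/v1/client1/transfer_data
    acl monte_limit     sc1_http_req_rate(st_url_rates) gt 500
    acl transfer_limit  sc1_http_req_rate(st_url_rates) gt 100
    http-request track-sc1 path table st_url_rates if monte_api or transfer_api
    http-request deny deny_status 429 if monte_api monte_limit
    http-request deny deny_status 429 if transfer_api transfer_limit

Since the full path is the key, each distinct URL gets its own counter; the
field converter from Jarno's earlier example could be used to group by prefix
instead.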


Thanks
 badari



On Wed, Feb 20, 2019 at 11:13 PM Jarno Huuskonen 
wrote:

> Hi,
>
> On Wed, Feb 20, Badari Prasad wrote:
> >  Thank you for responding. Here's what I came up with based on the inputs:
> >
> > #printf "as2monte" | mkpasswd --stdin --method=md5
> > userlist AuthUsers_MONTE_AS2
> > user appuser_as2  password $1$t25fZ7Oe$bjthsMcXgbCt2EJvQo8r0/
> >
> > backend st_src_as2_monte
> > stick-table type string len 64 size 1000 expire 1s store
> > http_req_rate(1s)
> >
> > frontend scef
> > bind 0.0.0.0:80
> > bind 0.0.0.0:443 ssl crt /etc/ssl/private/as1.pem
> > mode http
> > #option httpclose
> > option forwardfor
> >
> > acl monte_as2_api_url url_beg /api/v1/monitoring-event/A02/
> > #500 requests per second.
> > acl monte_as2_exceeds_limit src_http_req_rate(st_src_as2_monte) gt
> 500
> > http-request track-sc1 src table st_src_as2_monte unless
> > monte_as2_exceeds_limit
> > http-request deny deny_status 429 if monte_as2_api_url
> > monte_as2_exceeds_limit
>
> I'm confused :) about what your requirements are, but I think with
> this configuration each src address can have a rate of 500 to
> /api/v1/monitoring-event/A02/ (so with 10 different src addresses
> you can have a total rate of 5000 to /api/v1/monitoring-event/A02/).
>
> (And you're using a type string stick table; type ip or ipv6 is a better
> fit for tracking src.)
>
> But if it fits your requirements then I'm glad you found a working
> solution.
>
> -Jarno
>
> > http-request auth realm basicauth if monte_as2_api_url
> > !authorized_monte_as2
> >
> > use_backend nodes
> >
> > With this config I was able to rate limit on a per-URL basis.
> >
> > Thanks
> >  badari
> >
> >
> >
> > On Tue, Feb 19, 2019 at 10:01 PM Jarno Huuskonen  >
> > wrote:
> >
> > > Hi,
> > >
> > > On Mon, Feb 11, Badari Prasad wrote:
> > > >I want to rate limit based on url
> > > > [/api/v1/monitoring-event/A01, /api/v1/client1/transfer_data,
> > > > /api/v1/client2/transfer_data  ]  no matter what the source ip
> address
> > > is.
> > >
> > > Something like this might help you. Unfortunately at the moment
> > > I don't have time to create a better example.
> > >
> > > acl api_a1 path_beg /a1
> > > acl api_b1 path_beg /b1
> > > acl rate_5 sc0_http_req_rate(test_be) gt 5
> > > acl rate_15 sc0_http_req_rate(test_be) gt 15
> > >
> > > # You might want to add acl so you'll only track paths you're
> > > # interested in.
> > > http-request track-sc0 path table test_be
> > > # if you want to track only /a1 /b1 part of path
> > > # you can use for example field converter:
> > > #http-request track-sc0 path,field(1,/,2) table test_be
> > > #http-request set-header X-Rate %[sc0_http_req_rate(test_be)]
> > >
> > > http-request deny deny_status 429 if api_a1 rate_5
> > > http-request deny deny_status 403 if api_b1 rate_15
> > >
> > > # adjust len and size etc. to your needs
> > > backend test_be
> > > stick-table type string len 40 size 20 expire 180s store
> > > http_req_rate(60s)
> > >
> > > -Jarno
> > >
> > > > On Mon, Feb 11, 2019 at 7:34 PM Jarno Huuskonen <
> jarno.huusko...@uef.fi>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > On Mon, Feb 11, Badari Prasad wrote:
> > > > > > Thank you for the response. I came up with my own haproxy
> cfg,
> > > where
> > > > > i
> > > > > > would want to rate limit based on event name and client id in
> url.
> > > > > > URL ex : /api/v1//
> > > > > >
> > > > > > Have attached a file for my haproxy cfg.  But it does not seems
> to be
> > > > > rate
> > > > > > limiting the incoming requests.
> > > > >
> > > > > > backend st_src_monte
> > > > > > stick-table type string size 1m expire 10s store
> > > http_req_rate(10s)
> > > > > > ...
> > > > > >
> > > > > >acl monte_as1_exceeds_limit
> src_http_req_rate(st_src_as1_monte)
> > > gt

Re: RTMP and Seamless Reload

2019-02-21 Thread Aleksandar Lazic
Hi Erlangga.

On 20.02.2019 at 07:24, Erlangga Pradipta Suryanto wrote:
> Hi Aleksandar,
> 
> Very sorry for the late reply. I was out of the office.
>> Ah OBS (=Open Broadcaster Software ?) something like this?
> Yes, the open broadcaster software, that's the tool that we use in our
> development environment.
>
>> How is in general the error handling of the used SW?
> The software will try to reconnect whenever a network interruption occurs. We
> have two streams, primary and backup, so when one stream is down we have the
> other stream to serve the requests,
> but this will result in the playlist not being complete for one of the streams;
> we'd like to minimize that.
>
>> * when you reload the backend, do you also have an interruption on the stream?
> Yes, the stream will be disconnected and OBS will try to reconnect again.

Which you want to avoid, as you wrote in the message above, right?

>> * which algo do you plan to use for the backends, `leastconn`?
> We're using maxconn; we want to limit the number of connections that the backend
> rtmp server has to 1 only.
>
>> * How long will a session (tcp/rtmp) normally be?
> We're planning to stream for tv stations, so in theory it will always be
> streaming daily until the tv station stops it
>
>> * How fast can/will be the reconnect from the clients?
> It actually depends on the streaming software, in the case of OBS, we can set 
> it
> to reconnect immediately after disconnection.
>
>> * Is it an option to use DSR (=Direct Server Return) for the stream from rtmp
> source?
> I am not sure if we can use DSR, I will need to consult with our networking 
> team.
>> * Which mode do you plan to use http or tcp?
> We're using TCP.
> 
> We have tried using the runtime API to maintain the current streams without
> reloading and creating a new process.
> We tried having several backends in MAINT state, and when we need one, we
> update the IP and port through the runtime configuration.
> It covers our current need of not losing any of the existing streams when a new
> stream arrives, and since they run in the same process, we are sure that the new
> stream will be routed to the new backend.
> We plan on going forward using the runtime API for the time being.

Sounds like a solution.
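For reference, that runtime API flow looks roughly like this (backend and
server names and the socket path are assumptions):

# point a spare server at the new stream source and enable it
echo "set server be_rtmp/srv1 addr 10.0.0.12 port 1935" | socat stdio /var/run/haproxy.sock
echo "set server be_rtmp/srv1 state ready" | socat stdio /var/run/haproxy.sock

# later, put it back into maintenance without touching the other streams
echo "set server be_rtmp/srv1 state maint" | socat stdio /var/run/haproxy.sock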

> Thanks,
> 
> Erlangga Pradipta Suryanto

Regards
Aleks

> 
> T. +62118898168 | BBM PIN. D8F39521
> E. esuryanto@bbmtek.com
> 
> 
> On Thu, Jan 31, 2019 at 10:39 PM Aleksandar Lazic  > wrote:
> 
> Hi Erlangga.
> 
> On 31.01.2019 at 06:12, Erlangga Pradipta Suryanto wrote:
> > Hi Aleksandar,
> >
> > Thank you for your reply.
> > As much as possible, we would like the stream to be not interrupted.
> > Though at some time, the stream will be closed and restarted.
> > We're still at the POC stage right now, so we only use one haproxy, nginx-rtmp
> > server, and OBS to do the streaming
> 
> Ah OBS (=Open Broadcaster Software ?) something like this?
> 
> 
> https://obsproject.com/forum/resources/how-to-set-up-your-own-private-rtmp-server-using-nginx.50/
> 
> > If the current version doesn't support that yet, we will need to look for
> > an option other than reloading the configuration.
> > We stumbled upon this article about the runtime API:
> > https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/
> > We are currently testing it.
> 
> The dynamic configuration works like a charm, but nevertheless you will
> have some interruptions, as this is the nature of all networks.
> How is in general the error handling of the used SW?
> 
> I have some questions which you are maybe willing to answer.
> 
> * when you reload the backend, do you also have an interruption on the
> stream?
> * which algo do you plan to use for the backends, `leastconn`?
>   https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4-balance
> * How long will a session (tcp/rtmp) normally be?
> * How fast can/will be the reconnect from the clients?
> * Is it an option to use DSR (=Direct Server Return) for the stream from rtmp
> source?
> * Which mode do you plan to use http or tcp?
> 
> To get you right: you wish to hand over the connected client sockets
> (tcp/udp/unix) from the `old` process to the new process after a config
> reload, right?
> 
> I think this isn't an easy task, nor am I sure it's possible, especially when
> you run the setup in an HA setup with different "machines", but I'm not an
> expert on this topic.
> 
> > Erlangga Pradipta Suryanto | Software Engineer, BBM
> 
> Regards
> Aleks
> 
> >
> > T. +62118898168 | BBM PIN. D8F39521
> > E. esuryanto@bbmtek.com
> >
> > Follow us on: Facebook  | Twitte

Re: %[] in use-server directives

2019-02-21 Thread Joe K
Ah, I see. Tried it, but it seems it's not the only thing that causes the
segfault.

On Thu, Feb 21, 2019 at 8:31 AM Willy Tarreau  wrote:

> Hi Joe,
>
> On Thu, Feb 21, 2019 at 08:23:29AM +, Joe K wrote:
> > Hello everybody again ...
> >
> > So here's what I have right now, just from copy-pasting and slightly
> > editing 702d44f.
> >
> > The config check passes, but haproxy crashes with segmentation fault
> after
> > the first request with an enabled server.
>
> You need (at least) to change this one :
>
> diff --git a/include/types/proxy.h b/include/types/proxy.h
> index 14b6046c..3f8ede58 100644
> --- a/include/types/proxy.h
> +++ b/include/types/proxy.h
> @@ -490,9 +490,11 @@ struct switching_rule {
>  struct server_rule {
>         struct list list;               /* list linked to from the proxy */
>         struct acl_cond *cond;          /* acl condition to meet */
> +       int dynamic;                    /* this is a dynamic rule using the logformat expression */
>         union {
>                 struct server *ptr;     /* target server */
>                 char *name;             /* target server name during config parsing */
> +               struct list expr;       /* logformat expression to use for dynamic rules */
>         } srv;
>  };
>
> The "expr" field must move out of the union, otherwise it's shared
> with the server's pointer. Just move it before the "dynamic" field
> above. It definitely is one cause of segfault (though possibly not
> the only one).
>
> Willy
>


0002-move-server-rule-expr-out-of-union.patch
Description: Binary data


0001-make-use-server-accept-log-format.patch
Description: Binary data


Re: %[] in use-server directives

2019-02-21 Thread Willy Tarreau
Hi Joe,

On Thu, Feb 21, 2019 at 08:23:29AM +, Joe K wrote:
> Hello everybody again ...
> 
> So here's what I have right now, just from copy-pasting and slightly
> editing 702d44f.
> 
> The config check passes, but haproxy crashes with segmentation fault after
> the first request with an enabled server.

You need (at least) to change this one :

diff --git a/include/types/proxy.h b/include/types/proxy.h
index 14b6046c..3f8ede58 100644
--- a/include/types/proxy.h
+++ b/include/types/proxy.h
@@ -490,9 +490,11 @@ struct switching_rule {
 struct server_rule {
        struct list list;               /* list linked to from the proxy */
        struct acl_cond *cond;          /* acl condition to meet */
+       int dynamic;                    /* this is a dynamic rule using the logformat expression */
        union {
                struct server *ptr;     /* target server */
                char *name;             /* target server name during config parsing */
+               struct list expr;       /* logformat expression to use for dynamic rules */
        } srv;
 };

The "expr" field must move out of the union, otherwise it's shared
with the server's pointer. Just move it before the "dynamic" field
above. It definitely is one cause of segfault (though possibly not
the only one).
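For reference, after that change the struct would look roughly like this (a
sketch of the layout described above, not the literal patch):

struct server_rule {
        struct list list;          /* list linked to from the proxy */
        struct acl_cond *cond;     /* acl condition to meet */
        struct list expr;          /* logformat expression to use for dynamic rules */
        int dynamic;               /* this is a dynamic rule using the logformat expression */
        union {
                struct server *ptr;    /* target server */
                char *name;            /* target server name during config parsing */
        } srv;
};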

Willy



Re: Tune HAProxy in front of a large k8s cluster

2019-02-21 Thread Baptiste
On Wed, Feb 20, 2019 at 3:14 PM Joao Morais  wrote:

>
>
> > On 20 Feb 2019, at 03:30, Baptiste wrote:
> >
> > Hi Joao,
> >
> > I do have a question for you about your ingress controller design and
> the "chained" frontends, summarized below:
> > * The first frontend is on tcp mode binding :443, inspecting sni and
> doing a triage;
> >There is also a ssl-passthrough config - from the triage frontend
> straight to a tcp backend.
> > * The second frontend is binding a unix socket with ca-file (tls
> authentication);
> > * The last frontend is binding another unix socket, doing ssl-offload
> but without ca-file.
> >
> > What feature is missing in HAProxy to allow switching these 3 frontends
> into a single one?
> > I understand that the ability to do ssl deciphering and ssl passthrough
> on a single bind line is one of them. Is there anything else we could
> improve?
> > I wonder if crt-list would be useful in your case:
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.1-crt-list
> >
> Hi Baptiste, I'm changing the approach to frontend creation - if the
> user configuration just needs one, it will listen on :443 without needing to
> chain another one. Regarding switching to more frontends - or at least more
> bind lines in the same frontend - and creating the mode-tcp one, here are
> the current rules:
>
> * conflict on timeout client - and perhaps on other frontend configs -
> distinct frontends will be created for each one
> * if one really wants to use a certificate that doesn't match its domain -
> crt-list sounds like it solves this
> * tls auth (bind with ca-file) and no tls auth - I don't want to mix them
> in the same frontend because of security - tls auth uses sni, no tls auth
> uses the host header
> * ssl-passthrough as you have mentioned
>
> ~jm
>
>
Hi Joao,

I am not worried about having many frontends in a single HAProxy
configuration; I am more worried about "chaining" frontends, for performance
reasons.
So having one frontend per app because they use different settings is fine,
from my point of view, unless you must chain one TCP frontend to route
traffic to the application frontend based on SNI.

I don't understand the point about TLS auth. crt-list allows you to load
multiple certificates and to define custom parameters for each of them;
this includes ca-file. It's a powerful feature.
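As an illustration only (the file names, domains and options below are
assumptions, not your actual setup), a crt-list entry can carry
per-certificate settings such as ca-file:

# /etc/haproxy/certs.crt-list
/etc/ssl/private/api.pem [ca-file /etc/ssl/ca/clients.pem verify required] api.example.com
/etc/ssl/private/www.pem www.example.com

frontend fe_main
    bind :443 ssl crt-list /etc/haproxy/certs.crt-list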

What I am trying to figure out is what would be a recommendation for a high
performance deployment of your ingress controller.

Baptiste


Re: %[] in use-server directives

2019-02-21 Thread Joe K
Hello everybody again ...

So here's what I have right now, just from copy-pasting and slightly
editing 702d44f.

The config check passes, but haproxy crashes with segmentation fault after
the first request with an enabled server.

On Tue, Feb 19, 2019 at 9:24 AM Willy Tarreau  wrote:

> On Tue, Feb 19, 2019 at 09:14:40AM +, Joe K wrote:
> > I have next to zero experience with C but the commit 702d44f seems to be
> > small enough for me to be able to wrap my head around.
> >
> > I'll try making it work for use-server tomorrow! Thank you!
>
> Ah great, thanks for this! Do not hesitate to seek for help here once
> you have some basic code.
>
> Willy
>


0001-make-use-server-accept-log-format.patch
Description: Binary data