Re: [RFC PATCH] MEDIUM: compression: Add support for brotli compression

2019-02-14 Thread Aleksandar Lazic
Hi Tim.

Am 13.02.2019 um 17:57 schrieb Tim Duesterhus:
> Willy,
> Aleks,
> List,
> 
> this (absolutely non-ready-to-merge) patch adds support for brotli
> compression as suggested in issue #21: 
> https://github.com/haproxy/haproxy/issues/21

Cool ;-)

> It is tested on Ubuntu Xenial with libbrotli 1.0.3:
> 
>   [timwolla@~]apt-cache policy libbrotli-dev
>   libbrotli-dev:
>   Installed: 1.0.3-1ubuntu1~16.04.1
>   Candidate: 1.0.3-1ubuntu1~16.04.1
>   Version table:
>   *** 1.0.3-1ubuntu1~16.04.1 500
>   500 http://de.archive.ubuntu.com/ubuntu xenial-updates/main 
> amd64 Packages
>   100 /var/lib/dpkg/status
>   [timwolla@~]apt-cache policy libbrotli1
>   libbrotli1:
>   Installed: 1.0.3-1ubuntu1~16.04.1
>   Candidate: 1.0.3-1ubuntu1~16.04.1
>   Version table:
>   *** 1.0.3-1ubuntu1~16.04.1 500
>   500 http://de.archive.ubuntu.com/ubuntu xenial-updates/main 
> amd64 Packages
>   100 /var/lib/dpkg/status
> 
> I am successfully able to access brotli-compressed URLs with Google Chrome,
> this requires me to disable `gzip` though (because haproxy prefers to
> select gzip, I suspect because `br` is last in Chrome's `Accept-Encoding`
> header).

Does it change when you use `br` as the first entry in `compression algo ...`?

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-compression%20algo
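
Something like this is what I mean; a minimal sketch, untested, and assuming
the patch registers the algorithm under the name `br`:

  compression algo br gzip
  compression type text/html text/plain application/json

If HAProxy prefers its first configured algorithm over the order in the
client's Accept-Encoding header, `br` should then win against `gzip`.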

> I am also able to successfully download and decompress URLs with `curl`
> and the `brotli` CLI utility. The server I use as the backend for these
> tests has about 45ms RTT to my machine. The HTML page I use is some random
> HTML page on the server, the noise file is 1 MiB of finest /dev/urandom.
> 
> You'll notice that brotli compressed requests are both faster as well as
> smaller compared to gzip with the hardcoded brotli compression quality
> of 3. The default is 11, which is *way* slower than gzip.

How does brotli's CPU usage compare to gzip's (more, less, equal)?

I'm a little bit disappointed from the size point of view: it is only ~6K less
than gzip. Is it worth the amount of work for such a small gain in data
reduction?

Regards
Aleks

>   + curl localhost:8080/*snip*.html -H 'Accept-Encoding: gzip'
>     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                    Dload  Upload   Total   Spent    Left  Speed
>   100 49280    0 49280    0     0   279k      0 --:--:-- --:--:-- --:--:--  279k
>   + curl localhost:8080/*snip*.html -H 'Accept-Encoding: br'
>     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                    Dload  Upload   Total   Spent    Left  Speed
>   100 43401    0 43401    0     0   332k      0 --:--:-- --:--:-- --:--:--  333k
>   + curl localhost:8080/*snip*.html -H 'Accept-Encoding: identity'
>     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                    Dload  Upload   Total   Spent    Left  Speed
>   100  127k  100  127k    0     0   441k      0 --:--:-- --:--:-- --:--:--  441k
>   + curl localhost:8080/noise -H 'Accept-Encoding: gzip'
>     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                    Dload  Upload   Total   Spent    Left  Speed
>   100 1025k    0 1025k    0     0  3330k      0 --:--:-- --:--:-- --:--:-- 3338k
>   + curl localhost:8080/noise -H 'Accept-Encoding: br'
>     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                    Dload  Upload   Total   Spent    Left  Speed
>   100 1024k    0 1024k    0     0  3029k      0 --:--:-- --:--:-- --:--:-- 3030k
>   + curl localhost:8080/noise -H 'Accept-Encoding: identity'
>     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                    Dload  Upload   Total   Spent    Left  Speed
>   100 1024k  100 1024k    0     0  3003k      0 --:--:-- --:--:-- --:--:-- 3002k
>   + ls -al
>   total 3384
>   drwxrwxr-x  2 timwolla timwolla4096 Feb 13 17:30 .
>   drwxrwxrwt 28 root root   69632 Feb 13 17:25 ..
>   -rw-rw-r--  1 timwolla timwolla 598 Feb 13 17:30 download
>   -rw-rw-r--  1 timwolla timwolla   43401 Feb 13 17:30 html-br
>   -rw-rw-r--  1 timwolla timwolla   49280 Feb 13 17:30 html-gz
>   -rw-rw-r--  1 timwolla timwolla  130334 Feb 13 17:30 html-id
>   -rw-rw-r--  1 timwolla timwolla 1048949 Feb 13 17:30 noise-br
>   -rw-rw-r--  1 timwolla timwolla 1049666 Feb 13 17:30 noise-gz
>   -rw-rw-r--  1 timwolla timwolla 1048576 Feb 13 17:30 noise-id
>   ++ zcat html-gz
>   + sha256sum html-id /dev/fd/63 /dev/fd/62
>   ++ brotli --decompress --stdout html-br
>   56f1664241b3dbb750f93b69570be76c6baccb8de4f

Re: Compilation fails on OS-X

2019-02-13 Thread Aleksandar Lazic
Am 13.02.2019 um 14:45 schrieb Patrick Hemmer:
> Trying to compile haproxy on my local machine for testing purposes and am
> running into the following:

Which compiler do you use?

>         # make TARGET=osx
>     src/proto_http.c:293:1: error: argument to 'section' attribute is not
> valid for this target: mach-o section specifier requires a segment and section
> separated by a comma
>         DECLARE_POOL(pool_head_http_txn, "http_txn", sizeof(struct http_txn));
>         ^
>         include/common/memory.h:128:2: note: expanded from macro 
> 'DECLARE_POOL'
>                         REGISTER_POOL(&ptr, name, size)
>                         ^
>         include/common/memory.h:123:2: note: expanded from macro 
> 'REGISTER_POOL'
>                         INITCALL3(STG_POOL, create_pool_callback, (ptr), 
> (name),
> (size))
>                         ^
>         include/common/initcall.h:102:2: note: expanded from macro 'INITCALL3'
>                         _DECLARE_INITCALL(stage, __LINE__, function, arg1, 
> arg2,
> arg3)
>                         ^
>         include/common/initcall.h:78:2: note: expanded from macro
> '_DECLARE_INITCALL'
>                         __DECLARE_INITCALL(__VA_ARGS__)
>                         ^
>         include/common/initcall.h:65:42: note: expanded from macro
> '__DECLARE_INITCALL'
>                                
> __attribute__((__used__,__section__("init_"#stg))) =   \
> 
> 
> 
> Issue occurs on master, and the 1.9 branch
> 
> -Patrick
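
For context: on ELF targets (Linux) a bare section name is accepted, while
clang on mach-o (macOS) wants a "segment,section" pair, which is exactly what
the error message says. A minimal sketch of the difference; this is only an
illustration, not the actual haproxy fix:

    /* ELF (Linux): a bare section name compiles fine. */
    int a __attribute__((__used__, __section__("init_STG_POOL"))) = 1;

    /* mach-o (macOS): clang requires "segment,section". */
    int b __attribute__((__used__, __section__("__DATA,init_pool"))) = 1;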




Re: HAProxy in front of Docker Enterprise problem

2019-02-13 Thread Aleksandar Lazic
Hi.

Am 13.02.2019 um 00:21 schrieb Norman Branitsky:
> I have an HAProxy 1.7 server sitting in front of a number of Docker Enterprise
> Manager nodes and Worker nodes.
> 
> The Worker nodes don’t appear to have any problem with HAProxy terminating the
> SSL and connecting to them via HTTP.
> 
> The Manager nodes are the problem.
> 
> They insist on installing their own certificates (either self-signed or CA 
> signed).
>
> They will only listen to HTTPS traffic.
> 
> So my generic frontend_main-ssl says:
> 
> bind :443  ssl crt /etc/CONFIG/haproxy-1.7/certs/cert.pem
> 
>  
> 
> The backend has the following server statement:
> 
> server xxx 10.240.12.248:443 ssl verify none
> 
>  
> 
> But apparently this doesn’t work – the client gets the SSL certificate 
> provided
> by the HAProxy server
>
> instead of the certificate provided by the Manager node. This causes the 
> Manager
> node to barf.

Have you added the manager certificates to the cert.pem?

> Do I have to make HAProxy listen on 8443 and just do a tcp frontend/backend 
> for
> the Manager nodes?

That's one possibility. It makes the setup easier, and I don't think you want
to intercept HTTP-layer stuff for the Docker registry anyway.
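
A minimal sketch of such a TCP passthrough (untested; names and the 8443 port
are only placeholders, the manager IP is taken from your server line):

  frontend ft_docker_managers
      bind :8443
      mode tcp
      option tcplog
      default_backend bk_docker_managers

  backend bk_docker_managers
      mode tcp
      server mgr1 10.240.12.248:443 check

  # In tcp mode HAProxy does not terminate TLS, so the client sees the
  # certificate of the Manager node itself.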

> Norman Branitsky

Regards
aleks




Re: haproxy segfault

2019-02-12 Thread Aleksandar Lazic
Hi.

Am 12.02.2019 um 18:36 schrieb Mildis:
> Hi list,
> 
> haproxy is segfaulting multiple times these days for no apparent reason.
> At first i thought is was a load issue but even few RPS made it crash.
> 
> Symptoms are always the same : segfault of a worker then spawn of a new.
> If load is very high, spawned worker segfault immediatly.
> 
> In the messages log, the offset is always the same (+1e2000).
> 
> I'm running 1.9.4 (from vincent bernat package) in Debian stretch.
> 
> In haproxy logs :
> Feb 12 11:36:54 ns3089939 haproxy[32688]: [ALERT] 042/113654 (32688) : 
> Current worker #1 (32689) exited with code 139 (Segmentation fault)
> Feb 12 11:36:54 ns3089939 haproxy[32688]: [ALERT] 042/113654 (32688) : 
> exit-on-failure: killing every workers with SIGTERM
> Feb 12 11:36:54 ns3089939 haproxy[32688]: [WARNING] 042/113654 (32688) : All 
> workers exited. Exiting... (139)
> 
> In /var/log/messages
> kernel: traps: haproxy[32689] general protection ip:561e5b799375 
> sp:7ffe6fd3f2f0 error:0 in haproxy[561e5b72d000+1e2000]
> 
> "show errors" is empty.
> 
> How could I diagnose further without impacting production too much ?

Can you activate coredumps?

ulimit -c unlimited

You should find the core in

/tmp

or just search for a file named core on the filesystem.

You can get a backtrace with the following commands as soon as you have a
coredump:

gdb /usr/sbin/haproxy #YOUR_CORE_FILE#
bt full
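
A slightly more complete sketch of the procedure (assuming a plain Linux box;
the core_pattern path and the pid are only examples):

  # allow coredumps for the shell that starts haproxy
  ulimit -c unlimited

  # make the kernel write cores to a predictable place
  echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern

  # after the next segfault, load the core into gdb:
  gdb /usr/sbin/haproxy /tmp/core.haproxy.12345
  (gdb) bt full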

> Thanks,
> mildis

Regards
aleks




Re: Anyone heard about DPDK?

2019-02-12 Thread Aleksandar Lazic
Hi all.

Wow so much feedback, thanks all for the answers ;-)

Am 12.02.2019 um 15:23 schrieb Alexandre Cassen:
> There has been a lot of applications/stack built around DPDK last few years.
> Mostly because people found it easy to code stuff around DPDK and are so happy
> to display perf graph about their DPDK application vs plain Linux Kernel 
> stack.

Would you like to share such a comparison?

> My intention here would be to warn a little bit about this collective 
> enthusiasm
> around DPDK. Integrating DPDK is easy and mostly fun (even if you have to 
> learn
> and dig into their rte lib and mbuf related), but most of people are 
> completely
> blind about security ! Ok Linux kernel and netdev is slow in respect of NIC
> available nowadays (10G, 40G and multiple 100G on core-networks), but using
> Linux TCP/IP stack you will benefit the hardcore hacking task done during last
> 30years by Linux netdev core guys ! this long process mostly fix and solve
> hardcore issues and for some : security issues. And you will certainly not be
> protected by a 'super fast' self proclaimed performance soft. Mostly because
> these applications are mostly features oriented than security or protocol
> full-picture, and are using this 'super fast, best of ever' argument to 
> enforce
> people mind to adopt.

When I take a look into the docs I see some security information.

https://doc.dpdk.org/guides/prog_guide/rte_security.html

How does such an application handle the security topic?

> The way DPDK is working in polling mode is certainly not the best at all. DPDK
> is PCI 'stealing' NIC from kernel to handle/manage itself in userspace by
> forcing active loop (100% CPU polling) to handle descriptors and convert to
> mbuf. latter you can 'forward' mbuf to Linux kernel by using KNI netdevice to
> use Linux Kernel machinery as a slow-path for complicated/not_focused
> packet-flow (most application are using KNI for ARP,DHCP,...). But most of the
> time application are implementing 'minimal' adjacent network features to make 
> it
> work in its networking environment : and here is the problem: you are focused 
> on
> perf and because of it you are making shortcut about considering potential
> threats... a prediction could be to see large number of network security holes
> opened, and specially an old bunch of security holes making a fun revival (a 
> lot
> of fun with TCP)

So this means that an application can be used with DPDK when it uses the
KNI (=Kernel NIC Interface), right?

https://doc.dpdk.org/guides/prog_guide/kernel_nic_interface.html

How much "slower" is the way via KNI?

> In contrast recent Linux Kernel introduced XDP and eBPF machinery that are
> certainly much more future proof than DPDK. First consideration in XDP design 
> is
> : you only TAP in data/packet you are interested in and not making an hold-up 
> on
> whole traffic. So XDP is for fast path but only for protocol or workflow
> identified. You program and attach an eBPF program to a specific NIC, if there
> is no match then packet simply continue its journey into Linux Kernel stack.
> 
> XDP is a response from kernel netdev community to address DPDK users. The fact
> that DPDK introduced and extended PMP to support AF_XDP is certainly a sign 
> that
> XDP is going/doing into the right direction.

Sounds like an interesting future for the Linux kernel.

When we take a look into the container and cloud world, does DPDK make any
sense there? I mean, when I run a container on AWS/Google/Azure I'm normally so
far from any hardware that this high-traffic capability isn't available to the
container, right?

To the list members:
Maybe it's off-topic for the HAProxy list, so please excuse all the noise.

> regs,
> Alexandre

Regards
Aleks

> On 12/02/2019 14:04, Federico Iezzi wrote:
>> Nowadays most VNF (virtual network function) in the telco operators are built
>> around DPDK. Not demos, most 5G will be like that. 4G is migrating as we 
>> speak
>> on this new architecture.
>> There isn't any TCP stack built-it but the libraries can be used to build 
>> one.
>> VPP has integrated DPDK in this way.
>>
>> Linux network stack is not designed to managed millions of packets per 
>> second,
>> DPDK bypass it completely offloading everything in userspace. The beauty is
>> that also the physical nic drivers are in userspace using specific DPDK
>> drivers. Linux networking stack works in interrupt mode, DPDK is in polling
>> mode, basically with a while true.
>>
>>  From F5 at the dpdk summit as a relevant reference to what HAProxy does.
>> https://dpdksummitnorthamerica2018.sched.com/event/IhiF/dpdk-on-f5-big-ip-virtual-adcs-brent-blood-f5-networks
>>
>> https://www.youtube.com/watch?v=6zu81p3oTeo
>>
>> Regards,
>> Federico
>>
>> On Tue, 12 Feb 2019 at 11:08, Julien Laffaye wrote:
>>
>>     Something like http://seastar.io/ or https://fd.io/ ? :)
>>
>>     On Mon, Feb 11, 2019 at 11:25 AM Baptiste

Re: [PATCH] CONTRIB: contrib/prometheus-exporter: Add a Prometheus exporter for HAProxy

2019-02-11 Thread Aleksandar Lazic
Am 11.02.2019 um 10:40 schrieb Christopher Faulet:
> Le 09/02/2019 à 10:47, Aleksandar Lazic a écrit :
>> Hi Christopher.
>>
>> Am 07-02-2019 22:09, schrieb Christopher Faulet:
>>> Hi,
>>>
>>> This patch adds a new component in contrib. It is a Prometheus
>>> exporter for HAProxy.
>>
>> [snipp]
>>
>>> More details in the README.
>>>
>>> I'm not especially a Prometheus expert. And I must admit I never use
>>> it. So if anyone have comments or suggestions, he is welcome.
>>
>> Just for my curiosity, what's wrong with the haproxy_exporter especially
>> that haproxy_exporter uses the csv format from haproxy ?
>>
> 
> Hi Aleks,
> 
> Nothing wrong. haproxy_exporter works pretty well AFAIK. It is just an 
> external
> component and it may seem a bit annoying to deploying it instead of having 
> such
> functionality built-in in HAProxy. Furthermore, haproxy_exporter "only" 
> exports
> proxies and servers statistics. Unlike the built-in exporter, haproxy_exporter
> is limited to what the stats applet exposes in HTTP. So, it cannot export 
> global
> information. And with a bit more work, we can imagine to export even more info
> from the built-in exporter.

You are absolutely right.

> However, to mitigate what I said, it is not an aim for HAProxy to support all
> monitoring and alerting tools (Prometheus, Graphite, InfluxDB, OpenTSDB...). 
> So
> it was added under contrib and not officially integrated into HAPRoxy. It is a
> first step of a reflection on the output format for stats to let all kind of
> external tools to retrieve them. But it not a priority either. It was just a
> quick development and now we will wait and see if there is a particular demand
> to go further and how we could address it, if any.

I will add this prometheus-exporter to the dev builds as soon as it's in the 
source.

Maybe we should think about CI with GitHub?

https://github.com/marketplace/category/continuous-integration

Regards
Aleks



Re: Anyone heard about DPDK?

2019-02-10 Thread Aleksandar Lazic
Am 10.02.2019 um 12:06 schrieb Lukas Tribus:
> On Sun, 10 Feb 2019 at 10:48, Aleksandar Lazic  wrote:
>>
>> Hi.
>>
>> I have seen this in some twitter posts and asked me if it's something 
>> useable for a Loadbalancer like HAProxy ?
>>
>> https://www.dpdk.org/
>>
>> To be honest it looks like a virtual NIC, but I'm not sure.
> 
> See:
> https://www.mail-archive.com/haproxy@formilux.org/msg26748.html

8-O Sorry, I had forgotten that question.
Sorry for the noise, and thanks for your patience.

> lukas

Greetings
Aleks



Anyone heard about DPDK?

2019-02-10 Thread Aleksandar Lazic
Hi.

I have seen this in some Twitter posts and asked myself if it's something
usable for a load balancer like HAProxy.
 
https://www.dpdk.org/

To be honest it looks like a virtual NIC, but I'm not sure.

Regards
Aleks



Re: [PATCH] CONTRIB: contrib/prometheus-exporter: Add a Prometheus exporter for HAProxy

2019-02-09 Thread Aleksandar Lazic

Hi Christopher.

Am 07-02-2019 22:09, schrieb Christopher Faulet:

Hi,

This patch adds a new component in contrib. It is a Prometheus
exporter for HAProxy.


[snipp]


More details in the README.

I'm not especially a Prometheus expert. And I must admit I never use
it. So if anyone have comments or suggestions, he is welcome.


Just out of curiosity, what's wrong with the haproxy_exporter, especially
given that haproxy_exporter uses the CSV format from haproxy?


Thanks


Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.4

2019-02-07 Thread Aleksandar Lazic
Am 06.02.2019 um 17:19 schrieb Willy Tarreau:
> Hi Aleks,
> 
> On Wed, Feb 06, 2019 at 05:16:58PM +0100, Aleksandar Lazic wrote:
>> Maybe this patch was too late for 1.9.4; please can you consider adding it
>> to 2.0 and later 1.9.5, thanks.
>>
>> https://www.mail-archive.com/haproxy@formilux.org/msg32693.html
> 
> I wanted to check it with Christopher first but I know he's busy working
> on some extremely boring stuff, and don't want to risk trading his stuff
> for a review :-)

;-)

> I'll also have to correct a number of spelling mistakes so better be sure
> before doing this.

Ah cool, thanks.

BTW:

The OpenSSL reg-tests passed without errors:

https://gitlab.com/aleks001/haproxy19-centos/-/jobs/157330203

## Starting vtest ##
Testing with haproxy version: 1.9.4
0 tests failed, 0 tests skipped, 35 tests passed

The BoringSSL reg-tests passed with errors:

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/157330626
## Starting vtest ##
Testing with haproxy version: 1.9.4
#top  TEST ./reg-tests/connection/b0.vtc FAILED (8.790) exit=2
1 tests failed, 0 tests skipped, 34 tests passed
## Gathering results ##



> Thanks!
> Willy

Regards
Aleks



Re: Weighted Backend's

2019-02-06 Thread Aleksandar Lazic
Hi James.

Am 06.02.2019 um 16:16 schrieb James Root:
> Hi All,
> 
> I am doing some research and have not really found a great way to configure
> HAProxy to get the desired results. The problem I face is that I a service
> backed by two separate collections of servers. I would like to split traffic
> between these two clusters (either using percentages or weights). Normally, I
> would configure a single backend and calculate my weights to get the desired
> effect. However, for my use case, the list of servers can be update 
> dynamically
> through the API. To maintain correct weighting, I would then have to
> re-calculate the weights of every entry to maintain a correct balance.
>
> An alternative I found was to do the following in my configuration file:
>
> backend haproxy-test
> balance roundrobin
> server cluster1 unix@cluster1.sock weight 90
> server cluster2 unix@cluster2.sock weight 10
> 
> listen cluster1
>     bind unix@cluster1.sock
>     balance roundrobin
>     server s1 127.0.0.1:8081 
> 
> listen cluster2
>     bind unix@cluster2.sock
>     balance roundrobin
>     server s1 127.0.0.1:8082 
>     server s2 127.0.0.1:8083 
> 
> This works, but is a bit nasty because it has to take another round trip 
> through
> the kernel. Ideally, there would be a way to accomplish this without having to
> open unix sockets, but I couldn't find any examples or any leads in the 
> haproxy
> docs.
> 
> I was wondering if anyone on this list had any ideas to accomplish this 
> without
> using extra unix sockets? Or an entirely different way to get the same effect?

Well, as we don't know which version of HAProxy you use, I will suggest a
solution based on 1.9.

I would try to use the set-priority-* feature

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-request%20set-priority-class
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-request%20set-priority-offset

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.2-prio_class
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.2-prio_offset

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.3-src

I would try the following; it's untested, but I think you get the idea.

frontend clusters

  bind unix@cluster1.sock
  bind unix@cluster2.sock

  # I'm not sure if src works with unix sockets like this
  # maybe you need to remove the unix@ part.
  acl src-cl1 src unix@cluster1.sock
  acl src-cl2 src unix@cluster2.sock

  # the priority class takes an integer expression
  http-request set-priority-class int(-10) if src-cl1
  http-request set-priority-class int(10) if src-cl2

#  http-request set-priority-offset int(5) if LOGO

  use_backend cluster1 if { prio_class lt 5 }
  use_backend cluster2 if { prio_class gt 5 }


backend cluster1
  server s1 127.0.0.1:8081

backend cluster2
  server s1 127.0.0.1:8082
  server s2 127.0.0.1:8083

There are a lot of fetch functions, so maybe you'll find a better solution with
another fetch function; I don't know your application.

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7

In case you haven't seen it, there is also a management interface for HAProxy:

https://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3
https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/
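
For example, server weights can be changed at runtime over the stats socket; a
minimal sketch (assuming a stats socket at /var/run/haproxy.sock with admin
level):

  echo "set weight haproxy-test/cluster1 90" | socat stdio /var/run/haproxy.sock
  echo "set weight haproxy-test/cluster2 10" | socat stdio /var/run/haproxy.sock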

> Thanks,
> James Root

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.4

2019-02-06 Thread Aleksandar Lazic
Hi Willy.

Am 06.02.2019 um 15:25 schrieb Willy Tarreau:
> Hi,
> 
> HAProxy 1.9.4 was released on 2019/02/06. It added 65 new commits
> after version 1.9.3.

Images are updated.

https://hub.docker.com/r/me2digital/haproxy-19-boringssl
https://hub.docker.com/r/me2digital/haproxy19

Maybe this patch was too late for 1.9.4; please can you consider adding it
to 2.0 and later 1.9.5, thanks.

https://www.mail-archive.com/haproxy@formilux.org/msg32693.html

Regards
Aleks

> The main focus in terms of time spent was clearly on end-to-end H2
> correctness, which involves both the H2 protocol itself and the idle
> connections management. It's difficult to enumerate in details all the
> issues that were addressed, but these generally range from not failing
> a connection when failing a stream can be sufficient to counting the
> number of pre-allocated streams on an idle idle outgoing connection to
> make sure it still has stream IDs left. Some server-side idle timeout
> errors could occasionally lead to the whole connection being closed.
> 
> One check was added to prevent an HTX frontend from dynamically branching
> to a non-HTX backend (and conversely), as only the static branches were
> addressed till now.
> 
> There were some improvements on memory allocation failures, a number of
> places were not tested anymore (or this was new code). Ah and a memory
> leak on the unique_id was addressed (it could happen with TCP instances
> when declared in a defaults section).
> 
> Etags are now rewritten from strong to weak by the compression. I had no
> idea this concept of weak vs strong existed at all :-)
> 
> And in addition to this, yesterday two other interesting problems were
> reported and addressed :
>   - the first one is about using certain L7 features at the load balancing
> layer (such as "balance hdr") in HTX mode which could crash haproxy.
> It was in fact caused by the loss of one patch during the multiple
> liftings of the code prior to the merge. That's now fixed. I'm still
> amazed we managed to lose only one patch in this ocean of code!
>  
>   - the other one is quite nasty and impacts all supported versions. Haproxy
> currently performs very deep compatibility tests on your rules, frontends
> and backends after parsing the configuration. But a corner case remained
> by which it was possible to have a frontend bound on, say, processes
> 1 and 2, tracking a key stored in a table present only in process 1 that
> would in turn rely on peers on process 1 as well. Here there is a problem,
> when the frontend receives connections on process 2, the resolved pointers
> for the table end up pointing to a completely different location in a
> parallel universe, then peers are activated to push the data while the
> section has been deallocated... So the relevant checks have been added
> to make sure that a process doesn't try to interact with a section that
> is not present for this process. This covers the track-sc* actions, the
> sc_* sample keywords, and SPOE filters. I was extremely cautious to cover
> the strict minimum so as not to impact any harmless config. It *is*
> possible that one of your config will refuse to load if it is already
> bogus. Please note that if this happens, it means this config is wrong
> and already presents the risk of random crashes. *Do not* rollback if
> this happens, please ask for help here instead. (I in fact expect that
> nobody will see these errors, meaning that the amount of complex and
> bogus configs in field is rather low).
> 
> The rest is pretty low impact and standard.
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Sources  : http://www.haproxy.org/download/1.9/src/
>Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
> 
> Willy
> ---
> Complete changelog :
> Christopher Faulet (2):
>   BUG/MEDIUM: mux-h1: Don't add "transfer-encoding" if message-body is 
> forbidden
>   BUG/MAJOR: htx/backend: Make all tests on HTTP messages compatible with 
> HTX
> 
> Jérôme Magnin (1):
>   DOC: add a missing space in the documentation for bc_http_major
> 
> Kevin Zhu (1):
>   BUG/MINOR: deinit: tcp_rep.inspect_rules not deinit, add to deinit
> 
> Olivier Houchard (11):
>   BUG/MEDIUM: connections: Don't forget to remove CO_FL_SESS_IDLE.
>   MINOR: xref: Add missing barriers.
>   BUG/MEDIUM: peers: Handle mux creation failure.
>   BUG/MEDIUM: checks: Check that conn_install_mux succeeded.
>   BUG/MEDIUM: servers: Only destroy

Re: info defaults maxconn

2019-02-06 Thread Aleksandar Lazic
Hi Federico.

Am 06.02.2019 um 15:33 schrieb Federico Iezzi:
> Hey there,
> 
> Maybe this is gonna be a very simple answer.
> In HAProxy 1.5.18 seems that the defaults maxconn have a global influence and 
> not per backend one.
> 
> In my case I have global maxconn at 5120001, while defaults at 256. What I'm 
> trying to achieve is to set for all my backends the same maxconn without 
> having the parameter everywhere.
> 
> Testing it, I basically saturated the 256 connections right away and 
> everything was queued. But that happened globally and not on a per-backend 
> basis.
> 
> Is that expected?

Yes, AFAIK.

Default/FE/Listen maxconn
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-maxconn

```
Fix the maximum number of concurrent connections on a frontend
...
```

The per-server maxconn in a backend defaults to 0:
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#5.2-maxconn


```
...
The default value is "0" which means unlimited.
...
```
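
So if you want the same per-server limit everywhere without repeating it on
each line, something like this could work; a minimal sketch (untested, names
are placeholders; `default-server` applies its settings to the server lines
that follow):

```
defaults
   maxconn 5000                # per frontend
   default-server maxconn 256  # applied to each server line below

backend app
   server s1 192.0.2.10:80     # inherits maxconn 256
   server s2 192.0.2.11:80
```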
> Thanks!
> Federico

Regards
Aleks



Re: Opinions about DoH (=DNS over HTTPS) as resolver for HAProxy

2019-02-04 Thread Aleksandar Lazic
Hi Lukas.
Am 04.02.2019 um 21:39 schrieb Lukas Tribus:
> Hello,
> 
> On Mon, 4 Feb 2019 at 12:14, Aleksandar Lazic  wrote:
>>
>> Hi.
>>
>> I have just opened a new Issue about DoH for resolving.
>>
>> https://github.com/haproxy/haproxy/issues/33
>>
>> As I know that this is a major change in the Infrastructure I would like to 
>> here what you think about this suggestion.
>>
>> My opinion was at the beginning against this change as there was only some 
>> big provider but now there are some tutorials and other providers for DoH I 
>> think now it's a good Idea.
> 
> Frankly I don't see a real use-case. DoH is interesting for clients
> roaming around networks that don't have a local DNS resolver or with a
> completely untrusted or compromised connectivity to their DNS server.
> A haproxy instance on the other hand is usually something installed in
> a stable datacenter, often with a local resolver, and it is resolving
> names you configured with destination IP's that are visible to an
> attacker anyway.

A possible use-case is:

Let's say you have a hybrid cloud setup (on-prem, AWS, Azure, ...) and the
networks are connected via an unsecured L2/L3 internet connectivity.

The networks are routed, and the HAProxy VM/container must resolve an
internal backend via DNS, but some regulations do not allow sending
plain DNS over the internet.

Internal APP  <->  INTERNET  <->  HAProxy Pub Cloud  <->  Client
      |                                 |
Internal DNS  <->  DoH  <---------------+

The solution is to use a DoH server on-prem which resolves the internal backend
via classic DNS internally and sends the answer back to HAProxy via HTTPS.

Such a setup helps to keep some VPN/IPsec setups out of the game.
I hope I have described the use-case in understandable words.

> The DNS implementation is still lacking an important feature (TCP
> mode), which Baptiste does not really have time to work on as far as I
> can tell and would actually address a problem for certain huge
> deployments. At the same time I'm not sure I can up with a *real*
> use-case for DoH in haproxy - and there is always the possibility to
> install a local UDP to DoH resolver. Also a lot of setups nowadays are
> either systemd or docker managed, both of which ship their own
> resolver anyway (providing a local UDP/TCP service).

Ack. It's not a small part, imho.

This wiki lists some DoH tools which show how DoH could be implemented:

https://github.com/curl/curl/wiki/DNS-over-HTTPS
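
For example, recent curl (>= 7.62.0) can resolve through a DoH server on its
own; a small illustration (the URLs are only placeholders):

  curl --doh-url https://doh.example.internal/dns-query https://backend.internal/health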

> I'm not sure what the complexity of DoH is. I assume it's non trivial
> to do in a non-blocking way, without question more complicated than
> TCP mode.

I don't agree on this, as I think they are more or less equally hard to
implement. But I must say I'm only a "sometimes" developer, so I'm sure I
miss all the details which make the difference.

> So I'm not a fan of pushing DoH into haproxy. Especially if the
> use-case is unclear. But those are just my two cents.

Thank you.

> Also CC'ing Baptiste.
> 
> 
> cheers,
> lukas

Regards
aleks



Opinions about DoH (=DNS over HTTPS) as resolver for HAProxy

2019-02-04 Thread Aleksandar Lazic
Hi.

I have just opened a new Issue about DoH for resolving.

https://github.com/haproxy/haproxy/issues/33

As I know that this is a major change in the infrastructure, I would like to
hear what you think about this suggestion.

At the beginning my opinion was against this change, as there were only a few
big providers; but now that there are some tutorials and other providers for
DoH, I think it's a good idea.

Best regards
Aleks



Re: [PATCH] DOC: Add HTX part in the documentation

2019-02-02 Thread Aleksandar Lazic
Sorry, I have forgotten to add:

This needs a backport to 1.9.

Regards
Aleks


 Ursprüngliche Nachricht 
Von: Aleksandar Lazic 
Gesendet: 2. Februar 2019 10:01:26 MEZ
An: haproxy@formilux.org
Betreff: [PATCH] DOC: Add HTX part in the documentation

Hi.

attached a doc update for the new features of HAProxy 1.9.

I hope the patch fulfills the CONTRIBUTING rules, as
I haven't sent patches to the list for a long time ;-)

Regards
Aleks



[PATCH] DOC: Add HTX part in the documentation

2019-02-02 Thread Aleksandar Lazic

Hi.

attached a doc update for the new features of HAProxy 1.9.

I hope the patch fulfills the CONTRIBUTING rules, as
I haven't sent patches to the list for a long time ;-)

Regards
Aleks

From c0e025e81b87a23f679aff80bddc02a96c4d43b0 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Sat, 2 Feb 2019 09:54:55 +0100
Subject: [PATCH] DOC: Add HTX part in the documentation

---
 doc/configuration.txt | 50 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index fe5eb250..38ed12ed 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -192,8 +192,7 @@ HAProxy supports 4 connection modes :
 For HTTP/2, the connection mode resembles more the "server close" mode : given
 the independence of all streams, there is currently no place to hook the idle
 server connection after a response, so it is closed after the response. HTTP/2
-is only supported for incoming connections, not on connections going to
-servers.
+now supports end-to-end mode and trailers, which is required for gRPC.
 
 
 1.2. HTTP request
@@ -384,6 +383,53 @@ Response headers work exactly like request headers, and as such, HAProxy uses
 the same parsing function for both. Please refer to paragraph 1.2.2 for more
 details.
 
+1.3.3. HAProxy HTX
+------------------
+In this version of HAProxy a new HTTP engine was developed. With this huge
+rewrite of the HTTP engine it is now easier to add other and
+new protocols.
+
+It is required to use option http-use-htx to activate this new engine.
+
+With HTX it is now possible to handle the following protocols with HAProxy.
+
+TCP <> HTTP/X
+SSL/TLS <> TCP
+SSL/TLS <> HTTP/X
+HTTP/1.x <> HTTP/2
+HTTP/2 <> HTTP/1.x
+
+The diagram below was described in this post:
+https://www.mail-archive.com/haproxy@formilux.org/msg31727.html
+
+
+           +---------------------+   stream
+           | all HTTP processing |   layer
+           +---------------------+
+              ^          ^          ^
+          HTX |      HTX |      HTX |      normalised
+              v          v          v      interface
+       +--------+  +--------+  +--------+
+       | applet |  | HTTP/1 |  | HTTP/2 |  whatever layer (called mux for now
+       +--------+  +--------+  +--------+  but may change once we have others,
+        cache          |           |       could be presentation in OSI)
+        stats          |  +-----+  |
+        Lua svc        |  | TLS |  |       transport layer
+                       |  +-----+  |
+                       |     |     |
+           +---------------------+
+           | TCP/Unix/socketpair |   control layer
+           +---------------------+
+                      |
+           +---------------------+
+           |   file descriptor   |   socket layer
+           +---------------------+
+                      |
+                +-----------+
+                | operating |
+                |  system   |
+                +-----------+
+
 
 2. Configuring HAProxy
 --
-- 
2.20.1



Re: Early connection close, incomplete transfers

2019-02-01 Thread Aleksandar Lazic
Hi.

Do you have any errors in lighttpd's log?

Regards
Aleks


 Ursprüngliche Nachricht 
Von: Veiko Kukk 
Gesendet: 1. Februar 2019 12:33:39 MEZ
An: Aleksandar Lazic 
CC: haproxy@formilux.org
Betreff: Re: Early connection close, incomplete transfers


On 2019-01-31 12:57, Aleksandar Lazic wrote:
> Willy has found some issues; the fixes have been added to the 2.0 tree.
> Do you have a chance to test this branch or do you want to wait for
> the next 1.9 release?

I tested stable 1.9.3 and the 1.9 preview version Willy linked to here:
https://www.mail-archive.com/haproxy@formilux.org/msg32678.html
There is no difference in my tests.

> I'm not sure if it affects you, as we haven't seen the config yet.
> Maybe you can also share your config so that we can see if your setup
> could be affected.

Commented timeouts are the original timeouts; I had increased those to make
sure I'm not hitting any timeouts when creating higher load with tests.
Maxconn values serve the same purpose.

global
   log /dev/log local0
   daemon
   nbproc 1
   nbthread 16
   maxconn 
   user haproxy
   spread-checks 5
   tune.ssl.default-dh-param 2048
   ssl-default-bind-options no-sslv3 no-tls-tickets
   ssl-default-bind-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:!DSS
   ssl-default-server-options no-sslv3 no-tls-tickets
   ssl-default-server-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:!DSS
   tune.ssl.cachesize 10
   tune.ssl.lifetime 1800
   stats socket /var/run/haproxy.sock.stats1 mode 640 group vault process 
1 level admin

defaults
   log global
   mode http
   option httplog
   option contstats
   option log-health-checks
   retries 5
   #timeout http-request 5s
   timeout http-request 99s
   #timeout http-keep-alive 20s
   timeout http-keep-alive 99s
   #timeout connect 10s
   timeout connect 99s
   #timeout client 30s
   timeout client 99s
   timeout server 120s
   #timeout client-fin 10s
   timeout client-fin 99s
   #timeout server-fin 10s
   timeout server-fin 99s

listen main_frontend
   bind *:443 ssl crt /etc/vault/cert.pem crt /etc/letsencrypt/certs/ 
maxconn 
   bind *:80 maxconn 
   option forwardfor
   acl local_lighty_down nbsrv(lighty_load_balancer) lt 1
   monitor-uri /load_balance_health
   monitor fail if local_lighty_down
   default_backend lighty_load_balancer

backend lighty_load_balancer
   stats enable
   stats realm statistics
   http-response set-header Access-Control-Allow-Origin *
   option httpchk HEAD /dl/index.html
   server lighty0 127.0.0.1:9000 check maxconn  fall 2 inter 15s rise 
5 id 1

Test results

httpress test output summary:

1 requests launched
thread 3: 1000 connect, 1000 requests, 983 success, 17 fail, 6212668130 
bytes, 449231 overhead
thread 9: 996 connect, 996 requests, 979 success, 17 fail, 6187387690 
bytes, 447403 overhead
thread 4: 998 connect, 998 requests, 980 success, 18 fail, 6193707800 
bytes, 447860 overhead
thread 1: 1007 connect, 1007 requests, 988 success, 19 fail, 6244268680 
bytes, 451516 overhead
thread 8: 998 connect, 998 requests, 977 success, 21 fail, 6174747470 
bytes, 446489 overhead
thread 7: 1001 connect, 1001 requests, 970 success, 31 fail, 6130506700 
bytes, 443290 overhead
thread 10: 997 connect, 997 requests, 983 success, 14 fail, 6212668130 
bytes, 449231 overhead
thread 6: 1004 connect, 1004 requests, 986 success, 18 fail, 6231628460 
bytes, 450602 overhead
thread 5: 999 connect, 999 requests, 982 success, 17 fail, 6206348020 
bytes, 448774 overhead
thread 2: 1000 connect, 1000 requests, 981 success, 19 fail, 6200027910 
bytes, 448317 overhead

TOTALS:  1 connect, 1 requests, 9809 success, 191 fail, 100 
(100) real concurrency
TRAFFIC: 6320110 avg bytes, 457 avg overhead, 61993958990 bytes, 4482713 
overhead
TIMING:  81.014 seconds, 121 rps, 747335 kbps, 825.9 ms avg req time


HAproxy log sections of incomplete transfers (6320535 bytes should be 
transferred with this test data set):
  127.0.0.1:33054 [01/Feb/2019:11:22:48.178] main_frontend 
lighty_load_balancer/lighty0 0/0/0/0/298 200 425 - - SD-- 
100/100/99/99/0 0/0 "
  127.0.0.1:32820 [01/Feb/2019:11:22:48.068] main_frontend 
lighty_load_balancer/lighty0 0/0/0/0/409 200 4990 - - SD-- 99/99/98/98/0 
0/0 "
  127.0.0.1:34330 [01/Feb/2019:11:22:49.199] main_frontend 
lighty_load_balancer/lighty0 0/0/0/0/90 200 425 - - SD-- 100/100/99/99/0 
0/0 "
  127.0.0.1:34344 [01/Feb/2019:11:22:49.201] main_frontend 
lighty_load_balancer/lighty0 0/0/0/0/88 200 425 - - SD-- 99/

Re: RTMP and Seamless Reload

2019-01-31 Thread Aleksandar Lazic
Hi Erlangga.

Am 31.01.2019 um 06:12 schrieb Erlangga Pradipta Suryanto:
> Hi Aleksandar,
> 
> Thank you for your reply.
> As much as possible, we would like the stream to be not interrupted.
> Though at some time, the stream will be closed and restarted.
> We're still at POC stage right now, so we only use one haproxy, nginx-rtmp 
> server, and OBS to do the streaming

Ah, OBS (= Open Broadcaster Software)? Something like this?

https://obsproject.com/forum/resources/how-to-set-up-your-own-private-rtmp-server-using-nginx.50/

> If the current version hasn't supported that yet, we will need to look for 
> other option other than to reload the configuration.
> We stumbled upon this article about runtime API, 
> https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/
> We are currently testing it.

The dynamic configuration works like a charm, but nevertheless you will have
some interruptions, as this is the nature of all networks.
In general, how does the software in use handle errors?

I have some questions which you are maybe willing to answer.

* when you reload the backend, do you also have an interruption of the stream?
* which algo do you plan to use for the backends, `leastconn`?
  https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4-balance
* How long will a session (tcp/rtmp) normally be?
* How fast can/will the clients reconnect?
* Is it an option to use DSR (=Direct Server Return) for the stream from the
RTMP source?
* Which mode do you plan to use http or tcp?

To understand you correctly: you wish to hand over the client-connected sockets
(tcp/udp/unix) from the `old` process to the new process after a config reload,
right?

I think this isn't an easy task, nor am I sure it's possible, especially when
you run the setup in an HA setup with different "machines", but I'm not an
expert on this topic.

> *Erlangga Pradipta Suryanto* | Software Engineer, BBM

Regards
Aleks

> __
> 
> *T. *+62118898168| *BBM PIN. D8F39521*__
> 
> *E. esuryanto*@bbmtek.com <mailto:mtal...@bbmtek.com>__
> 
> Follow us on: Facebook <https://www.facebook.com/bbm/> | Twitter 
> <https://twitter.com/BBM> | Instagram 
> <https://www.instagram.com/discoverbbm/> | LinkedIn 
> <https://www.linkedin.com/company/discoverbbm> | YouTube 
> <https://www.youtube.com/bbm> | Vidio <https://www.vidio.com/@bbm> 
> 
> /BBM used under license by Creative Media Works Pte Ltd //(Co. Regn. No. 
> 201609444E)/
> 
> This e-mail is intended only for named addressee(s) and may contain 
> confidential and/or privileged information. If you are not the named 
> addressee or have received this e-mail in error, please notify the sender 
> immediately. The unauthorised disclosure, use or reproduction of this email's 
> content is prohibited. Unless expressly agreed, no reliance on this email may 
> be made. 
> 
> 
> 
> On Wed, Jan 30, 2019 at 7:20 PM Aleksandar Lazic wrote:
> 
> Hi.
> 
> Am 30.01.2019 um 13:08 schrieb Erlangga Pradipta Suryanto:
> > Hi,
> >
> > I'm trying to use haproxy to proxy rtmp stream to an nginx rtmp backend.
> > what we want to achieve is, we will add more nginx rtmp servers on the 
> backend, and when we do we want to reload the haproxy config without closing 
> the current stream.
> > We tested this by configuring haproxy with one backend and start one 
> stream, then we update the configuration to include one more backend then 
> issue the reload command to haproxy.
> > The stream is still going but when checking the process and the network 
> using ps and netstat, the old process is still up and it is still serving the 
> stream.
> > What we had in thought was that the old process could pass the stream 
> to the new process.
> >
> > We tried this using haproxy 1.8.17 and 1.9.3 and this is the haproxy 
> configuration that we use
> >
> > global
> >         debug
> >         log /dev/log    local0
> >         log /dev/log    local1 notice
> >         chroot /var/lib/haproxy
> >         stats socket /run/haproxy/admin.sock mode 660 level admin 
> expose-fd listeners
> >         stats timeout 30s
> >         user haproxy
> >         group haproxy
> >         daemon
> >
> >         # Default SSL material locations
> >         ca-base /etc/ssl/certs
> >         crt-base /etc/ssl/private
> >
> >         # Default ciphers to use on SSL-enabled listening sockets.
> >         # For more information, see ciphers(1SSL). This list is from:
>

Re: Early connection close, incomplete transfers

2019-01-31 Thread Aleksandar Lazic
Hi.

Am 31.01.2019 um 10:29 schrieb Veiko Kukk:
> HAproxy 1.9.3, but happens also with 1.7.10, 1.7.11.
> 
> Connections are getting closed during data transfer phase at random sizes on 
> backend. Sometimes just as little as 420 bytes get transferred, but usually 
> more is transferred before sudden end of connection. HAproxy logs have 
> connection closing status SD-- when this happens.

Willy has found some issues; the fixes have been added to the 2.0 tree.
Do you have a chance to test that branch, or do you want to wait for the next
1.9 release?

I'm not sure if it affects you, as we haven't seen the config yet.
Maybe you can also share your config so that we can see if your setup could be
affected.

Best regards
Aleks

> Basic components of system look like this:
> Client --> HAproxy --> HTTP server --> Caching Proxy --> Remote origin
> 
> Our HTTP server part is compiling data from chunks it gets from local cache. 
> When it receives request from client via HAproxy, it sends response header, 
> then fetches chunks, compiles those and sends data client.
> 
> SD-- happens more frequently when connection between benchmarking tool and 
> HAproxy is fast, e.g. when doing tests where client side is not loaded much. 
> Happens much more for http than for https.
> 
> For example:
> 
> httpress -t1 -c10 -n1000 URL (rarely or not at all)
> 250 requests launched
> 500 requests launched
> 750 requests launched
> 1000 requests launched
> 
> TOTALS:  1000 connect, 1000 requests, 1000 success, 0 fail, 10 (10) real 
> concurrency
> TRAFFIC: 667959622 avg bytes, 452 avg overhead, 667959622000 bytes, 452000 
> overhead
> TIMING:  241.023 seconds, 4 rps, 2706393 kbps, 2410.2 ms avg req time
> 
> httpress -t10 -c10 -n1000 URL (happens frequently)
> 
> 2019-01-31 08:44:15 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:15 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:16 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:16 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:17 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:18 [26361:0x7fdc91a23700]: body [0] read connection closed
> 2019-01-31 08:44:18 [26361:0x7fdc91a23700]: body [0] read connection closed
> 1000 requests launched
> 2019-01-31 08:44:19 [26361:0x7fdc82ffd700]: body [0] read connection closed
> thread 6: 73 connect, 73 requests, 72 success, 1 fail, 48093092784 bytes, 
> 32544 overhead
> thread 10: 72 connect, 72 requests, 72 success, 0 fail, 48093092784 bytes, 
> 32544 overhead
> thread 7: 73 connect, 73 requests, 72 success, 1 fail, 48093092784 bytes, 
> 32544 overhead
> thread 4: 88 connect, 88 requests, 67 success, 21 fail, 44753294674 bytes, 
> 30284 overhead
> thread 9: 111 connect, 111 requests, 56 success, 55 fail, 37405738832 bytes, 
> 25312 overhead
> thread 5: 82 connect, 82 requests, 68 success, 14 fail, 45421254296 bytes, 
> 30736 overhead
> thread 1: 86 connect, 86 requests, 68 success, 18 fail, 45421254296 bytes, 
> 30736 overhead
> thread 8: 184 connect, 184 requests, 29 success, 155 fail, 19370829038 bytes, 
> 13108 overhead
> thread 3: 73 connect, 73 requests, 73 success, 0 fail, 48761052406 bytes, 
> 32996 overhead
> thread 2: 158 connect, 158 requests, 39 success, 119 fail, 26050425258 bytes, 
> 17628 overhead
> 
> TOTALS:  1000 connect, 1000 requests, 616 success, 384 fail, 10 (10) real 
> concurrency
> TRAFFIC: 667959622 avg bytes, 452 avg overhead, 411463127152 bytes, 278432 
> overhead
> TIMING:  170.990 seconds, 3 rps, 2349959 kbps, 2775.8 ms avg req time
> 
> Because of thread count differences, -t1 (one thread) test is much more 
> loaded on client side than it is with -t10 (ten threads).
> 
> Random samples from HAproxy log (proper size of the object in HAproxy logs is 
> 667960042 bytes for that test file).
> 0/0/0/0/903 200 270807819 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/375 200 101926854 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/725 200 243340623 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/574 200 183069594 - - SD-- 11/11/9/9/0 0/0
> 0/0/0/0/648 200 208194175 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/1130 200 270807819 - - SD-- 10/10/9/9/0 0/0
> 0/0/0/0/349 200 90597175 - - SD-- 10/10/9/9/0 0/0
> 
> Our HTTP server logs contain hard unrecoverable errors about unable to write 
> to socket when HAproxy closes connection:
> Return Code: 32. Transferred 79389313 out of 667959622 Bytes in 809 msec
> Return Code: 32. Transferred 198965568 out of 667959622 Bytes in 986 msec
> Return Code: 32. Transferred 126690257 out of 667959622 Bytes in 825 msec
> Return Code: 32. Transferred 270807399 out of 667959622 Bytes in 1273 msec
> Return Code: 32. Transferred 171663764 out of 667959622 Bytes in 1075 msec
> Return Code: 32. Transferred 169362556 out of 667959622 Bytes in 1146 msec
> Return Code: 32. Transferred 167789692 out of 667959622 Bytes in 937 msec
> Return Code: 32. Transferred 199752000 out of 667959622 Bytes in 1110 msec
> Retur

Re: RTMP and Seamless Reload

2019-01-30 Thread Aleksandar Lazic
Hi.

Am 30.01.2019 um 13:08 schrieb Erlangga Pradipta Suryanto:
> Hi,
> 
> I'm trying to use haproxy to proxy rtmp stream to an nginx rtmp backend.
> what we want to achieve is, we will add more nginx rtmp servers on the 
> backend, and when we do we want to reload the haproxy config without closing 
> the current stream.
> We tested this by configuring haproxy with one backend and start one stream, 
> then we update the configuration to include one more backend then issue the 
> reload command to haproxy.
> The stream is still going but when checking the process and the network using 
> ps and netstat, the old process is still up and it is still serving the 
> stream.
> What we had in thought was that the old process could pass the stream to the 
> new process.
> 
> We tried this using haproxy 1.8.17 and 1.9.3 and this is the haproxy 
> configuration that we use
> 
> global
>         debug
>         log /dev/log    local0
>         log /dev/log    local1 notice
>         chroot /var/lib/haproxy
>         stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd 
> listeners
>         stats timeout 30s
>         user haproxy
>         group haproxy
>         daemon
> 
>         # Default SSL material locations
>         ca-base /etc/ssl/certs
>         crt-base /etc/ssl/private
> 
>         # Default ciphers to use on SSL-enabled listening sockets.
>         # For more information, see ciphers(1SSL). This list is from:
>         #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
>         # An alternative list with additional directives can be obtained from
>         #  
> https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
>         ssl-default-bind-ciphers 
> ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
>         ssl-default-bind-options no-sslv3
> 
> defaults
>         log     global
>         mode    tcp
>         option  tcplog
>         option  dontlognull
>         timeout connect 5000
>         timeout client  5
>         timeout server  5
>         errorfile 400 /etc/haproxy/errors/400.http
>         errorfile 403 /etc/haproxy/errors/403.http
>         errorfile 408 /etc/haproxy/errors/408.http
>         errorfile 500 /etc/haproxy/errors/500.http
>         errorfile 502 /etc/haproxy/errors/502.http
>         errorfile 503 /etc/haproxy/errors/503.http
>         errorfile 504 /etc/haproxy/errors/504.http
> 
> frontend ft_rtpm
>         bind *:1935 name rtmp
>         mode tcp
>         maxconn 600
>         default_backend bk_rtmp
> 
> backend bk_rtmp 
>         mode tcp
>         server media01 172.17.1.213:1935 check maxconn 1 weight 10
>         #uncomment the line below then reload
>         #server media02 172.17.1.217:1935 check maxconn 1 weight 10
> 
> Is there a way to pass the stream to the new process created by the reload?

Well, AFAIK this is not possible with the current versions.
Why wouldn't a reconnect to the new process be good or possible?
Which software is in use in between?

> Thank you,
> 
> *Erlangga Pradipta Suryanto* | Software Engineer, BBM

Regards
Aleks

> __
> 
> *T. *+62118898168| *BBM PIN. D8F39521*__
> 
> *E. esuryanto*@bbmtek.com __
> 
> Follow us on: Facebook  | Twitter 
>  | Instagram 
>  | LinkedIn 
>  | YouTube 
>  | Vidio  
> 
> /BBM used under license by Creative Media Works Pte Ltd //(Co. Regn. No. 
> 201609444E)/
> 
> This e-mail is intended only for named addressee(s) and may contain 
> confidential and/or privileged information. If you are not the named 
> addressee or have received this e-mail in error, please notify the sender 
> immediately. The unauthorised disclosure, use or reproduction of this email's 
> content is prohibited. Unless expressly agreed, no reliance on this email may 
> be made. 
> 




Re: HTTP connection is reset after each request

2019-01-30 Thread Aleksandar Lazic
Hi Luke.

Am 30.01.2019 um 12:58 schrieb Luke Seelenbinder:
> Hi Aleks,
> 
> You're correct for http/1.1, but unfortunately, nothing I found after a 
> pretty long search indicated 1.8.x supports an h2 frontend with reusable 
> backend connections (h1.1 or h2).

Looks like you are also right for the h2 case.
I hadn't seen h2 in Marco's configuration, therefore I didn't assume he uses h2.

Let's see what Marco's answer is ;-)

> I stuck with h/1.1 until 1.9 was released because of this.
> 
> Best,
> Luke

Regards
Aleks

> —
> Luke Seelenbinder
> Stadia Maps | Founder
> stadiamaps.com
> 
> ‐‐‐ Original Message ‐‐‐
> On Wednesday, January 30, 2019 12:02 PM, Aleksandar Lazic 
>  wrote:
> 
>> Hi.
>>
> 
>> Am 30.01.2019 um 11:53 schrieb Marco Corte:
>>
> 
>>> Il 2019-01-30 11:40 Luke Seelenbinder ha scritto:
>>>
> 
>>>> Are you on 1.9.x? 1.8.x does not support reuse of backend connections
>>>> when using an h2 frontend. 1.9.x does support this and it works quite
>>>> nicely.
>>>
> 
>>> Yes! I am on version 1.8.17.
>>> Thank you for the explanation!
>>
> 
>> Well somehow it supports
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-http-reuse
>>
> 
>> I would play with the timeouts
>>
> 
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout 
>> http-keep-alive
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout 
>> http-request
>>
> 
>> There are some more timeouts which starts in the doc at `timeout check` in 
>> this section.
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.1
>>
> 
>> never the less 700ms is "relatively" long so I would also add a check in the 
>> server line.
>>
> 
>>> .marcoc
>>
> 
>> Regards
>> Aleks
> 




Re: HTTP connection is reset after each request

2019-01-30 Thread Aleksandar Lazic
Hi.

Am 30.01.2019 um 11:53 schrieb Marco Corte:
> Il 2019-01-30 11:40 Luke Seelenbinder ha scritto:
> 
> 
>> Are you on 1.9.x? 1.8.x does not support reuse of backend connections
>> when using an h2 frontend. 1.9.x does support this and it works quite
>> nicely.
> 
> Yes! I am on version 1.8.17.
> Thank you for the explanation!

Well, somehow it does support it:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-http-reuse

I would play with the timeouts

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20http-keep-alive
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20http-request

There are some more timeouts, which start in the doc at `timeout check`, in
this section:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.1

Nevertheless, 700ms is "relatively" long, so I would also add a check on the
server line.
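
Something along these lines is what I mean; a minimal sketch (untested, names
and values are only examples):

  defaults
      mode http
      option http-keep-alive
      timeout http-keep-alive 10s
      timeout http-request 10s

  backend app
      http-reuse safe
      server s1 192.0.2.10:8080 check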

> .marcoc

Regards
Aleks



Cache question

2019-01-29 Thread Aleksandar Lazic
Hi.

I plan to use the HAProxy 1.9.x cache with ~50-100k objects, which could use
1-2 GB of RAM.

Has anyone used the cache feature in prod with such specs?

The idea is to use HAProxy in AUS as a cache for a web server in FR, as the
latency delays the delivery from FR to AUS clients.
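
For reference, roughly the configuration I have in mind; a minimal sketch
(untested, sizes and names are only examples):

  cache webcache
      total-max-size 2048      # MB of shared memory for cached objects
      max-object-size 102400   # bytes
      max-age 240              # seconds

  backend web_fr
      http-request cache-use webcache
      http-response cache-store webcache
      server web1 198.51.100.10:80 check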

Thank you for your answer.

Best regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.3

2019-01-29 Thread Aleksandar Lazic
Am 29.01.2019 um 06:52 schrieb Willy Tarreau:
> Hi,
> 
> HAProxy 1.9.3 was released on 2019/01/29. It added 35 new commits after
> version 1.9.2.
> 
> It mainly addresses a few stability issues affecting versions up to 1.9.2.
> Several of these issues are only reproducible when using H2 to connect to
> the servers and are caused by various incorrect or insufficient error
> handling when facing failures during connection reuse. Another issue was
> a side effect of the fixes on mailers (which still use the checks
> infrastructure) that resulted in a crash when using agent-check. A last
> minor fix for the checks was made to address a timeout issue, and checks
> are expected to be in a better shape now.
> 
> Another issue was reported on the way our SSL stack deals with KeyUpdate
> messages that are part of TLS 1.3. These were identified as renegotiation
> attempts and were dropped, causing some communication issues with Chrome
> when they attempted to make use of them. Apparently we were not the only
> ones so it's a side effect of reusing a feature which has long had to be
> disabled everywhere. Now the issue was addressed, and it's important that
> distros update their packages to get this part fixed when they use OpenSSL
> 1.1.1 so that we don't leave early bugs on the net which prevent security
> features from reliably being used. This patch was also backported into the
> 1.8 branch and will be present in the next 1.8 release.
> 
> On the less important issues, some better control for stream limits were
> enforced on outgoing H2 connections. We used to observe batches of errors
> when the server was refusing too high stream IDs after it sent a GOAWAY,
> now we can react faster. In addition, in order to avoid this situation at
> all (as Nginx wants to close by default after 1000 streams over the same
> connection), we've added a "max-reuse" server parameter indicating how
> many times a connection may be reused. For example setting this to 990
> is enough to always stop reusing a connection before nginx sends its
> GOAWAY.
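
For example, an untested sketch of such a server line:

  server srv1 192.0.2.10:443 ssl alpn h2 max-reuse 990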
> 
> The H2 mux was not respecting the reserve in HTX mode, leading to the
> impossibility to manipulate headers and to some request or response
> errors. Some other small issues affecting the reserve size in HTX were
> addressed, though some of them are now a bit foggy to me.
> 
> That's about all for this release. I still have some pending fixes that
> I preferred to delay a bit and that I'll backport for the next 1.9 :
>   - make outgoing connection reuse failure fail more gracefully and
> support a retry ; we have everything for this, it just required a
> few changes in the connection setup code that I didn't feel bold
> enough to integrate into this one.
> 
>   - H2 will check that the content-length header matches the amount of
> DATA (standards compliance)
> 
>   - H2 currently don't use the server's advertised MAX_CONCURRENT_STREAMS
> setting and only uses its global one, but it's not much complicated
> to address. I expect that we may face some of these sooner or later.
> 
>   - there's this ":authority" header field missing from H2 requests that
> we should apparently add when upgrading H1 to H2.
> 
>   - regarding the reported issue of some large objects transfers over H2
> from some specific clients being truncated during reloads, I brought
> the issue to the IETF HTTP working group. Some gave me examples showing
> my initial idea of watching WINDOW_UPDATE messages will not work. However
> I managed to design another solution that I will experiment with soon
> in 2.0-dev. If it ends up working fine enough, we'll backport it to 1.9.
> 
> Last, if you feel like you'd like to contribute but don't know where to
> start, please have a look at the issue tracker (see the URL below), have
> a look at the bugs and if you feel like you can work on one of them, just
> mention it in the issue and propose a patch.
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Sources  : http://www.haproxy.org/download/1.9/src/
>Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Docker Images are also updated:

https://hub.docker.com/r/me2digital/haproxy19
https://hub.docker.com/r/me2digital/haproxy-19-boringssl

Both have some errors at `make reg-tests`; I think this could be a problem 
with containerized testing.
Does anyone run haproxy in a container with h2?

###
https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/152750687

Testing with haproxy version: 1.9.3
#top  TEST ./reg-tests/connection/b0.vtc F

Re: V1.9 SSL engine and ssl-mode-async is unstable

2019-01-25 Thread Aleksandar Lazic

Hi.

On 25-01-2019 08:55, Kevin Zhu wrote:


Hi HAProxy team,
I am trying to use Intel QAT with HAProxy 1.9.0, but it works very
unstably, while HAProxy 1.8.16 works well. How can I find what is wrong?
1.8.16 and 1.9.0 run on the same hardware and system, were compiled the same
way, and use the same config file; the attached file is the config file.


Can you please explain "very unstable" a little bit more?

Can you try 1.9.2/3?

Do you have any errors or warnings in the logs?
Maybe you can use loglevel debug?


Thanks for any help.
Best regards


Regards
Aleks

haproxy.conf
Description: Binary data


Re: h1-client to h2-server host header / authority conversion failure.?

2019-01-25 Thread Aleksandar Lazic

Hi List.

On 25-01-2019 01:01, PiBa-NL wrote:

Hi List,

Attached is a regtest which I 'think' should pass.

**   s1    0.0 === expect tbl.dec[1].key == ":authority"
 s1    0.0 EXPECT tbl.dec[1].key (host) == ":authority" failed

It seems to me the Host <> Authority conversion isn't happening
properly? But maybe I'm just making a mistake in the test case...

I was using HA-Proxy version 2.0-dev0-f7a259d 2019/01/24 with this 
test.


The test was inspired by the attempt to connect to mail.google.com,
as discussed in the "haproxy 1.9.2 with boringssl" mail thread. Not
sure if this is the main problem, but it seems suspicious to me...


That's one of the reasons why I love this community ;-)

As I'm just one of this community, I want to say thanks to everyone on the
list for being part of HAProxy ;-).



Regards,

PiBa-NL (Pieter)


Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-24 Thread Aleksandar Lazic
On 24.01.2019 at 15:09, Aleksandar Lazic wrote:
> On 24.01.2019 at 03:49, Willy Tarreau wrote:
>> On Wed, Jan 23, 2019 at 09:37:46PM +0100, Aleksandar Lazic wrote:
>>>
>>> On 23.01.2019 at 21:27, Willy Tarreau wrote:
>>>> On Wed, Jan 23, 2019 at 09:08:00PM +0100, Aleksandar Lazic wrote:
>>>>> Should it be possible to have fe with h1 and be server h2(alpn h2), as I
>>>>> expect this or similar return value when I go thru haproxy?
>>>>
>>>> Yes absolutely. That's even what I'm doing on my tests to try to fix
>>>> the issues reported by Luke.
>>>
>>> Okay, perfect.
>>>
>>> Would you like to share your config so that I can see what's wrong with my
>>> config, thanks.
>>
>> Sure, here's a copy-paste, hoping I don't mess with anything :-)
>>
>>   defaults
>> mode http
>> option http-use-htx
>> option httplog
>> log stdout format raw daemon
>> timeout connect 4s
>> timeout client 10s
>> timeout server 10s
>>
>>   frontend decrypt
>> bind :4445
>> bind :4446 proto h2
>> bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
>> default_backend trace
>>
>>   backend trace
>> stats uri /stat
>> server s1 127.0.0.1:443 ssl alpn h2 verify none
>> #server s2 127.0.0.1:80
>> #server s3 127.0.0.1:80 proto h2
>>
>> As you can see you just connect to port 4445.
> 
> Many thanks.
> Sorry for the long mail thread but I'm not able to get a proper answer from 
> the ssl backend.

Please ignore this mail.
There is a problem within the container, as curl in the container has the
same problem as haproxy, so it's related to how the container runs.

> I have made the setup more easier.
> 
> This setup does not return the stats page.
> curl => haproxy-19 with openssl => openssl s_server internal stats page
> 
> This setup does return the stats page.
> 
> ###
> curl -vk https://207.154.204.236:4443
> * About to connect() to 207.154.204.236 port 4443 (#0)
> *   Trying 207.154.204.236...
> * Connected to 207.154.204.236 (207.154.204.236) port 4443 (#0)
> * Initializing NSS with certpath: sql:/etc/pki/nssdb
> * skipping SSL peer certificate verification
> * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
> * Server certificate:
> *   subject: CN=h2test.livesystem.at
> *   start date: Jan 24 12:18:25 2019 GMT
> *   expire date: Apr 24 12:18:25 2019 GMT
> *   common name: h2test.livesystem.at
> *   issuer: CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US
>> GET / HTTP/1.1
>> User-Agent: curl/7.29.0
>> Host: 207.154.204.236:4443
>> Accept: */*
>>
> * HTTP 1.0, assume close after body
> < HTTP/1.0 200 ok
> < Content-type: text/html
> <
> 
> 
> 
> s_server -www -alpn h2 -cert 
> /root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt
>  -key 
> /root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key
>  -accept 4443 -debug -msg
> Secure Renegotiation IS supported
> Ciphers supported in s_server binary
> .
> ###
> 
> # openssl version
> OpenSSL 1.0.2k-fips  26 Jan 2017
> 
> # curl -V
> curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.34 zlib/1.2.7 
> libidn/1.28 libssh2/1.4.3
> Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 
> pop3s rtsp scp sftp smtp smtps telnet tftp
> Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz 
> unix-sockets
> 
> 
> defaults
> mode http
> option http-use-htx
> option httplog
> log stdout format raw daemon debug
> timeout connect 4s
> timeout client 10s
> timeout server 10s
> 
> frontend decrypt
> bind :4445
> bind :4446 proto h2
> #bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
> default_backend trace
> 
> backend trace
> stats uri /stat
> 
> # localhosts ip
> server s1 207.154.204.236:4443 ssl alpn h2 verify none
> 
> 
> 
> podman run --rm -it \
> -e SERVICE_DEST=mail.google.com \
> -e LOGLEVEL=debug \
> -e NUM_THREADS=8 \
> -e DNS_SRV001=1.1.1.1 \
> -e DNS_SRV002=8.8.8.8 \
> -e STATS_PORT=7411 \
> -e STATS_USER=test \
> -e STATS_PASSWORD=test \
> -e SERVICE_TCP_PORT=8443 \
> -e SERVICE_NAME=google-mail \
> -e S

Re: haproxy 1.9.2 with boringssl

2019-01-24 Thread Aleksandar Lazic
On 24.01.2019 at 03:49, Willy Tarreau wrote:
> On Wed, Jan 23, 2019 at 09:37:46PM +0100, Aleksandar Lazic wrote:
>>
>> On 23.01.2019 at 21:27, Willy Tarreau wrote:
>>> On Wed, Jan 23, 2019 at 09:08:00PM +0100, Aleksandar Lazic wrote:
>>>> Should it be possible to have fe with h1 and be server h2(alpn h2), as I
>>>> expect this or similar return value when I go thru haproxy?
>>>
>>> Yes absolutely. That's even what I'm doing on my tests to try to fix
>>> the issues reported by Luke.
>>
>> Okay, perfect.
>>
>> Would you like to share your config so that I can see what's wrong with my
>> config, thanks.
> 
> Sure, here's a copy-paste, hoping I don't mess with anything :-)
> 
>   defaults
> mode http
> option http-use-htx
> option httplog
> log stdout format raw daemon
> timeout connect 4s
> timeout client 10s
> timeout server 10s
> 
>   frontend decrypt
> bind :4445
> bind :4446 proto h2
> bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
> default_backend trace
> 
>   backend trace
> stats uri /stat
> server s1 127.0.0.1:443 ssl alpn h2 verify none
> #server s2 127.0.0.1:80
> #server s3 127.0.0.1:80 proto h2
> 
> As you can see you just connect to port 4445.

Many thanks.
Sorry for the long mail thread, but I'm not able to get a proper answer from the
SSL backend.

I have made the setup simpler.

This setup does not return the stats page:
curl => haproxy-19 with openssl => openssl s_server internal stats page

This setup (below, going directly to s_server) does return the stats page:

###
curl -vk https://207.154.204.236:4443
* About to connect() to 207.154.204.236 port 4443 (#0)
*   Trying 207.154.204.236...
* Connected to 207.154.204.236 (207.154.204.236) port 4443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*   subject: CN=h2test.livesystem.at
*   start date: Jan 24 12:18:25 2019 GMT
*   expire date: Apr 24 12:18:25 2019 GMT
*   common name: h2test.livesystem.at
*   issuer: CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 207.154.204.236:4443
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 ok
< Content-type: text/html
<



s_server -www -alpn h2 -cert 
/root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt
 -key 
/root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key
 -accept 4443 -debug -msg
Secure Renegotiation IS supported
Ciphers supported in s_server binary
.
###

# openssl version
OpenSSL 1.0.2k-fips  26 Jan 2017

# curl -V
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.34 zlib/1.2.7 
libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 
pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz 
unix-sockets


defaults
mode http
option http-use-htx
option httplog
log stdout format raw daemon debug
timeout connect 4s
timeout client 10s
timeout server 10s

frontend decrypt
bind :4445
bind :4446 proto h2
#bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
default_backend trace

backend trace
stats uri /stat

# localhosts ip
server s1 207.154.204.236:4443 ssl alpn h2 verify none



podman run --rm -it \
-e SERVICE_DEST=mail.google.com \
-e LOGLEVEL=debug \
-e NUM_THREADS=8 \
-e DNS_SRV001=1.1.1.1 \
-e DNS_SRV002=8.8.8.8 \
-e STATS_PORT=7411 \
-e STATS_USER=test \
-e STATS_PASSWORD=test \
-e SERVICE_TCP_PORT=8443 \
-e SERVICE_NAME=google-mail \
-e SERVICE_DEST_IP=mail.google.com \
-e SERVICE_DEST_PORT=443 \
-e CONFIG_FILE=/mnt/haproxy2.cfg \
-e DEBUG=1 -v /tmp/:/mnt/ \
-p 4445 --expose 4445 \
--net host \
me2digital/haproxy19


###
openssl s_server -www -alpn h2 \
-cert 
~/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt
 \
-key 
~/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key
 \
-accept 4443 -debug -msg
###

###
[root@doh-001 ~]# curl -vk http://127.0.0.1:4445
* About to connect() to 127.0.0.1 port 4445 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 4445 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:4445
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailab

Re: haproxy 1.9.2 with boringssl

2019-01-23 Thread Aleksandar Lazic


On 23.01.2019 at 21:27, Willy Tarreau wrote:
> On Wed, Jan 23, 2019 at 09:08:00PM +0100, Aleksandar Lazic wrote:
>> Should it be possible to have fe with h1 and be server h2(alpn h2), as I
>> expect this or similar return value when I go thru haproxy?
> 
> Yes absolutely. That's even what I'm doing on my tests to try to fix
> the issues reported by Luke.

Okay, perfect.

Would you like to share your config so that I can see what's wrong with mine?
Thanks.

>> I haven't seen any log option to get the backend request method, I think this
>> should be a feature request ;-).
> 
> What do you mean with "backend request method" precisely ?

As the log is for frontends, it would be nice to be able to get the info below
also for the backend, to see what was sent to the backend server.
The problem I see is that tcpdump/tshark does not help to see what's
transferred on the wire when the backend talks via TLS.

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#8.2.4

### current variables

  | H | %HM  | HTTP method (ex: POST)| string  |
  | H | %HP  | HTTP request URI without query string (path)  | string  |
  | H | %HQ  | HTTP request URI query string (ex: ?bar=baz)  | string  |
  | H | %HU  | HTTP request URI (ex: /foo?bar=baz)   | string  |
  | H | %HV  | HTTP version (ex: HTTP/1.0)   | string  |

Possible new
  | H | %bM  | Backend HTTP method (ex: POST)| string   
   |
  | H | %bP  | Backend HTTP request URI without query string (path)  | string   
   |
  | H | %bQ  | Backend HTTP request URI query string (ex: ?bar=baz)  | string   
   |
  | H | %bU  | Backend HTTP request URI (ex: /foo?bar=baz)   | string   
   |
  | H | %bV  | Backend HTTP version (ex: HTTP/1.0)   | string   
   |

###

> Willy

Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-23 Thread Aleksandar Lazic
Hi Willy.

On 23.01.2019 at 19:50, Willy Tarreau wrote:
> Hi Aleks,
> 
> On Wed, Jan 23, 2019 at 06:58:25PM +0100, Aleksandar Lazic wrote:
>> backend be_generic_tcp
>>   mode http
>>   balance source
>>   timeout check 5s
>>   option tcp-check
>>
>>   server "${SERVICE_NAME}" ${SERVICE_DEST_IP}:${SERVICE_DEST_PORT} check 
>> inter 5s proto h2 ssl ssl-min-ver TLSv1.3 verify none
> 
> You need to replace "proto h2" with "alpn h2", so that the application
> protocol is announced to the other host, otherwise it will stick to the
> default, very likely "http/1.1", while haproxy talks h2 there. This can
> explain the 502 when the other side rejected your request.

I have changed it but still no luck.

Should it be possible to have a frontend with h1 and a backend server with h2
(alpn h2)? I expect this or a similar return value when I go through haproxy.

I haven't seen any log option to get the backend request method; I think this 
should be a feature request ;-).


curl -vo /dev/null https://mail.google.com:443
*   Trying 172.217.21.229...
* Connected to mail.google.com (172.217.21.229) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*   subject: CN=mail.google.com,O=Google LLC,L=Mountain 
View,ST=California,C=US
*   start date: Dec 19 08:16:00 2018 GMT
*   expire date: Mar 13 08:16:00 2019 GMT
*   common name: mail.google.com
*   issuer: CN=Google Internet Authority G3,O=Google Trust Services,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: mail.google.com
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Location: /mail/
< Expires: Wed, 23 Jan 2019 20:01:34 GMT
< Date: Wed, 23 Jan 2019 20:01:34 GMT
< Cache-Control: private, max-age=7776000
< Content-Type: text/html; charset=UTF-8
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< Server: GSE
< Alt-Svc: clear
< Accept-Ranges: none
< Vary: Accept-Encoding
< Transfer-Encoding: chunked
<
{ [data not shown]
* Connection #0 to host mail.google.com left intact


Config is now this.

###
cat /tmp/haproxy.cfg
# https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#3
global
  # nodaemon

  log stdout format rfc5424 daemon "${LOGLEVEL}"

  stats socket /tmp/sock1 mode 666 level admin
  stats timeout 1h
  tune.ssl.default-dh-param 2048
  ssl-server-verify none

  nbthread "${NUM_THREADS}"


defaults
  log global

# the format is described at
# https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4

# copied from
# 
https://github.com/haproxytech/haproxy-docker-arm64v8/blob/master/cfg_files/haproxy.cfg
  retries 3
  timeout http-request10s
  timeout queue   1m
  timeout connect 10s
  timeout client  1m
  timeout server  1m
  timeout http-keep-alive 10s
  timeout check   10s
  maxconn 3000

  default-server resolve-prefer ipv4 inter 5s resolvers mydns
  option http-use-htx
  option httplog

  log-format ">>> %ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS 
%tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %rt %sslv %sslc"

resolvers mydns
  nameserver dns1 "${DNS_SRV001}":53
  nameserver dns2 "${DNS_SRV002}":53
  resolve_retries   3
  timeout retry 1s
  hold valid   10s

listen stats
bind :"${STATS_PORT}"
mode http
# Health check monitoring uri.
monitor-uri /healthz

# Add your custom health check monitoring failure condition here.
# monitor fail if 
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth "${STATS_USER}":"${STATS_PASSWORD}"

frontend public_tcp
  bind :"${SERVICE_TCP_PORT}" alpn h2,http/1.1

  mode http
  log global

  default_backend be_generic_tcp


backend be_generic_tcp
  mode http
  balance source
  timeout check 5s
  option tcp-check

  server "${SERVICE_NAME}" ${SERVICE_DEST_IP}:${SERVICE_DEST_PORT} check inter 
5s alpn h2 ssl ssl-min-ver TLSv1.3 verify none
###

Log of haproxy

<29>1 2019-01-23T20:00:30+00:00 doh-001 haproxy 1 - - Proxy stats started.
<29>1 2019-01-23T20:00:30+00:00 doh-001 haproxy 1 - - Proxy public_tcp started.
<29>1 2019-01-23T20:00:30+00:00 doh-001 haproxy 1 - - Proxy be_generic_tcp 
started.
[WARNING] 022/200030 (1) : be_generic_tcp/google-mail changed its IP from 
172.217.21.229 to 172.217.18.165 by mydns/dns1.
<29>1 2019-01-23T20:00:30+00:00 doh-001 haproxy 1 - - 
be_generic_tcp/google-mail changed its IP from 172.217.21.229 to 172.217.18.165 
by mydns/dns1.

:public_tcp.accept(0006)=000c from [127.0.0

Re: haproxy 1.9.2 with boringssl

2019-01-23 Thread Aleksandar Lazic
e2digital/haproxy-19-boringssl

using CONFIG_FILE   :/mnt/haproxy.cfg
<29>1 2019-01-23T17:50:45+00:00 doh-001 haproxy 1 - - Proxy stats started.
<29>1 2019-01-23T17:50:45+00:00 doh-001 haproxy 1 - - Proxy public_tcp started.
<29>1 2019-01-23T17:50:45+00:00 doh-001 haproxy 1 - - Proxy be_generic_tcp 
started.
[WARNING] 022/175045 (1) : be_generic_tcp/google-mail changed its IP from 
172.217.21.229 to 216.58.207.69 by mydns/dns1.
<29>1 2019-01-23T17:50:45+00:00 doh-001 haproxy 1 - - 
be_generic_tcp/google-mail changed its IP from 172.217.21.229 to 216.58.207.69 
by mydns/dns1.
<30>1 2019-01-23T17:50:50+00:00 doh-001 haproxy 1 - - 127.0.0.1:54178 
[23/Jan/2019:17:50:50.727] public_tcp public_tcp/ -1/-1/-1/-1/0 0 0 - - 
PR-- 1/1/0/0/0 0/0 ""
<30>1 2019-01-23T17:50:50+00:00 doh-001 haproxy 1 - - 127.0.0.1:54178 
[23/Jan/2019:17:50:50.715] public_tcp be_generic_tcp/google-mail 0/0/13/-1/13 
502 208 - - SH-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"


I thought that haproxy translates the http/1.1 call to an http/2 call; is this a
proper assumption?
What's my mistake?

Thanks for the help.

Regards
Aleks

On 22.01.2019 at 19:38, Aleksandar Lazic wrote:
> Hi.
> 
> I have now build haproxy with boringssl and it looks quite good.
> 
> Is it the recommended way to simply make a git clone without any branch or 
> tag?
> Does anyone know how the KeyUpdate can be tested?
> 
> ###
> HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
> -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
> -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
> -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value
> -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
>   OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
> USE_THREAD=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_TFO=1
> 
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> Built with OpenSSL version : BoringSSL
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
> Built with Lua version : Lua 5.3.5
> Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
> IP_FREEBIND
> Built with zlib version : 1.2.11
> Running on zlib version : 1.2.11
> Compression algorithms supported : identity("identity"), deflate("deflate"),
> raw-deflate("deflate"), gzip("gzip")
> Built with PCRE2 version : 10.31 2018-02-12
> PCRE2 library supports JIT : yes
> Encrypted password support via crypt(3): yes
> Built with multi-threading support.
> 
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Available multiplexer protocols :
> (protocols marked as <default> cannot be specified using 'proto' keyword)
>   h2 : mode=HTX        side=FE|BE
>   h2 : mode=HTTP       side=FE
>    <default> : mode=HTX        side=FE|BE
>    <default> : mode=TCP|HTTP   side=FE|BE
> 
> Available filters :
>   [SPOE] spoe
>   [COMP] compression
>   [CACHE] cache
>   [TRACE] trace
> ###
> 
> I also wanted to run the reg-tests but they fails.
> 
> https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/149523589
> 
> -
> ...
> + cd /usr/src/haproxy
> + VTEST_PROGRAM=/usr/src/VTest/vtest HAPROXY_PROGRAM=/usr/local/sbin/haproxy
> make reg-tests
> ...
> ## Starting vtest ##
> Testing with haproxy version: 1.9.2
> #top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.856) exit=2
> #top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.742) exit=2
> #top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
> #top  TEST ./reg-tests/log/b0.vtc FAILED (10.008) signal=9
> #top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.745) exit=2
> 4 tests failed, 0 tests skipped, 29 tests passed
> ## Gathering results ##
> ## Test case: ./reg-tests/log/b0.vtc ##
> ## test results in: 
> "/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.357fd753"
> ## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
> ## test results in: 
> "/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.477fdc0b"
>  c27.0 EXPECT resp.http.mailsreceive

Re: H2 Server Connection Resets (1.9.2)

2019-01-23 Thread Aleksandar Lazic
Hi Lukas.

On 23.01.2019 at 10:24, Luke Seelenbinder wrote:
> Hi Willy,
> 
> Thanks for continuing to look into this. 
> 
>>
> 
>> I've place an nginx instance after my local haproxy dev config, and
>> found something which might explain what you're observing : the process
>> apparently leaks FDs and fails once in a while, causing 500 to be returned :
> 
> That's fascinating. I would have thought nginx would have had a bit better 
> care given to things like that. . .

This can be fixed by increasing the ulimits ;-).
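
For example (the value is only an illustration):

  # nginx.conf: raise the per-worker file descriptor limit
  worker_rlimit_nofile 65536;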

> Oddly enough, I cannot find any log entries that approximate this. However, 
> it's possible since we're primarily (99+%) using nginx as a reverse-proxy 
> that the fd issues wouldn't appear for us.

What's the ulimit for your nginx process?

> My next thought is to try tcpdump to try to determine what's on the wire when 
> the CD-- and SD-- pairs appear, but since our stack is SSL e2e, that might 
> prove difficult. Any suggestions?

If you have enough log space, you can try to activate debug logging in nginx and
haproxy.

https://nginx.org/en/docs/debugging_log.html
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#log => debug
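
For example (a sketch; paths and the log target are placeholders):

  # nginx, needs a binary built with --with-debug:
  error_log /var/log/nginx/error.log debug;

  # haproxy, in the global section:
  log 127.0.0.1:514 local0 debug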

This will have some impact on performance, as every request creates a lot
of log lines!

It would be interesting to know which error you have in the nginx log when the
CD/SD happens, as the 'http2 flood detected' message is not in the logs.

Which release of nginx do you use?
http://hg.nginx.org/nginx/tags

Maybe there are some error messages in the log which can be found in this directory:
http://hg.nginx.org/nginx/file/release-1.15.8/src/http/v2/

> One more interesting piece of data: if we use htx without h2 on the backends, 
> we only see CD-- entries consistently (with a very, very few SD-- entries). 
> Thus, it would seem whatever is causing the issue is directly related to h2 
> backends. I further think we can safely say it is directly related to h2 
> streams breaking (due to client-side request cancellations) resulting in the 
> whole connection breaking in HAProxy or nginx (though determining which will 
> be the trick).
> 
> There's also a strong possibility we replace nginx with HAProxy entirely for 
> our SSL + H2 setup as we overhaul the backends, so this problem will probably 
> be resolved by removing the problematic interaction.

What was the main reason to use nginx between haproxy and the backends?
What are the backends?

Regards
Aleks

> I'm still working on running h2load against our nginx servers to see if that 
> turns anything up.
> 
>> And at this point the connection is closed and reopened for new requests.
>> There's never any GOAWAY sent.
> 
> If I'm understanding this correctly, that implies as long as nginx sends 
> GOAWAY properly, HAProxy will not attempt to reuse the connection?
> 
>> I managed to work around the problem by limiting the number of total
>> requests per connection. I find this extremely dirty but if it helps...
>> I just need to figure how to best do it, so that we can use it as well
>> for H2 as for H1.
> 
> We're pretty satisfied with our h2 fe <-> be h1.1 setup right now, so we will 
> probably stick with that for now, since we don't want to have any more 
> operational issues from bleeding-edge bugs. (Not a comment on HAProxy, per 
> se, just a business reality. :-) ) I'm more than happy to try out anything 
> you turn up on our staging setup!
> 
> Best,
> Luke
> 
> 
> —
> Luke Seelenbinder
> Stadia Maps | Founder
> stadiamaps.com
> 
> ‐‐‐ Original Message ‐‐‐
> On Wednesday, January 23, 2019 8:28 AM, Willy Tarreau  wrote:
> 
>> Hi Luke,
>>
> 
>> I've place an nginx instance after my local haproxy dev config, and
>> found something which might explain what you're observing : the process
>> apparently leaks FDs and fails once in a while, causing 500 to be returned :
>>
> 
>> 2019/01/23 08:22:13 [crit] 25508#0: *36705 open() 
>> "/usr/local/nginx/html/index.html" failed (24: Too many open files), client: 
>> 1>
>> 2019/01/23 08:22:13 [crit] 25508#0: accept4() failed (24: Too many open 
>> files)
>>
> 
>> 127.0.0.1 - - [23/Jan/2019:08:22:13 +0100] "GET / HTTP/2.0" 500 579 "-" 
>> "Mozilla/4.0 (compatible; MSIE 7.01; Windows)"
>>
> 
>> The ones are seen by haproxy :
>>
> 
>> 127.0.0.1:47098 [23/Jan/2019:08:22:13.589] decrypt trace/ngx 0/0/0/0/0 500 
>> 701 - -  1/1/0/0/0 0/0 "GET / HTTP/1.1"
>>
> 
>> And at this point the connection is closed and reopened for new requests.
>> There's never any GOAWAY sent.
>>
> 
>> I managed to work around the problem by limiting the number of total
>> requests per connection. I find this extremely dirty but if it helps...
>> I just need to figure how to best do it, so that we can use it as well
>> for H2 as for H1.
>>
> 
>> Best regards,
>> Willy
> 




Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 21:45, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 12:13 PM Aleksandar Lazic  wrote:
>> Sorry for my dump question, I just want to be save not to break something.
>>
>> It would be nice to have the option '-key-update' in client.cc and server.cc
>> where can I put this feature request for boringssl?
>>
>> That would be make the test easy with this command.
>>
>> `./tool/bssl s_client -key-update -connect $test-haproxy-instance `
> 
> bssl is just for human experimentation, it shouldn't be used in
> something like a test because we break the interface from
> time-to-time. (Also note that BoringSSL in general "is not intended
> for general use, as OpenSSL is. We don't recommend that third parties
> depend upon it." https://boringssl.googlesource.com/boringssl)

Yes, I have read it and was surprised, but it is what it is.

> You may well be better off using OpenSSL for a test like that, or
> perhaps writing a C/C++ program (which will probably work for either
> OpenSSL or BoringSSL).

Well, thanks.
Currently I have no time to look into this topic.

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Tim.

On 22.01.2019 at 20:57, Tim Düsterhus wrote:

> Aleks,
> 
> On 22.01.19 at 20:50, Aleksandar Lazic wrote:
>> This means that the function in haproxy works but the check should be 
>> adopted to
>> match both cases, right?
> 
> At least one should investigate what exactly is happening here (the
> differences between the libc is a guess) and possibly file a bug for
> either glibc or musl. I believe what musl is doing here is correct and
> thus glibc must be incorrect.
> 
> Consider filing a tracking bug in haproxy's issue tracker to verify
> where / who exactly is doing something wrong.

Done.
https://github.com/haproxy/haproxy/issues/23

>> Do you think that in general the alpine/musl is a good idea or should I stay 
>> on
>> centos as for my other images?
> 
> FWIW: There already is an Alpine image for haproxy in Docker Official
> Images:
> https://github.com/docker-library/haproxy/blob/master/1.9/alpine/Dockerfile

Yep, I know; that one uses openssl. I was curious how difficult it is to run
haproxy with boringssl.

Nevertheless, this Dockerfile has "only" 2 failed tests.


## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.904) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.804) exit=2
2 tests failed, 0 tests skipped, 31 tests passed
## Gathering results ##
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-26-25.BmFdCB/vtc.1383.3d3a039a"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) == 
"2001:db8:c001:c01a:0::10:0" failed
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-26-25.BmFdCB/vtc.1383.06fe4e21"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
make: *** [Makefile:1102: reg-tests] Error 1


This supports your assumption that musl and glibc handle IPv6 differently.


> Personally I'm a Debian guy, for containers I prefer Debian based and
> CentOS / RHEL I don't use at all.

Interestingly, even the Debian-based image has failed tests:

https://github.com/docker-library/haproxy/tree/master/1.9

But this could be a known bug that is already fixed in the current git.

-
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.808) exit=2
1 tests failed, 0 tests skipped, 32 tests passed
## Gathering results ##
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
Makefile:1102: recipe for target 'reg-tests' failed
make: *** [reg-tests] Error 1
+ egrep -r ^ /tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log 
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log:## Test case: 
./reg-tests/mailers/k_healthcheckmail.vtc ##
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log:## test results in: 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1"
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log: c27.0 
EXPECT resp.http.mailsreceived (11) == "16" failed
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/INFO:Test case: 
./reg-tests/mailers/k_healthcheckmail.vtc
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:global
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:stats 
socket 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/stats.sock" 
level admin mode 600
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:stats 
socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:global
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
lua-load /usr/src/haproxy/./reg-tests/mailers/k_healthcheckmail.lua
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:defaults
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
frontend femail
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
mode tcp
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
bind "fd@${femail}"
/tmp/haregtests-

Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:54, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 11:45 AM Aleksandar Lazic  wrote:
>> Can it be reused to test a specific server like?
>>
>> ssl/test/runner/runner -test "KeyUpdate-ToServer" 127.0.0.1:8443
> 
> Not easily: it drives the implementation under test by forking a
> process and has quite a complex interface via command-line arguments.
> (I.e. 
> https://boringssl.googlesource.com/boringssl/+/eadef4730e66f914d7b9cbb2f38ecf7989f992ed/ssl/test/test_config.h)
> 
>> or should be a small c/go program be used for that test?
> 
> You could easily tweak transport_common.cc to call SSL_key_update
> before each SSL_write or so.

Great.

To be on the safe side, I would like to add the following lines

###
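/* rotate our sending keys; NOT_REQUESTED means we don't ask the peer to update theirs */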
if (!SSL_key_update(ssl, SSL_KEY_UPDATE_NOT_REQUESTED)) {
  fprintf(stderr, "SSL_key_update failed.\n");
  return false;
}
###

before this line.

https://boringssl.googlesource.com/boringssl/+/master/tool/transport_common.cc#706

Sorry for my dumb question, I just want to be safe and not break something.

It would be nice to have the option '-key-update' in client.cc and server.cc.
Where can I put this feature request for boringssl?

That would make the test easy with this command:

`./tool/bssl s_client -key-update -connect $test-haproxy-instance `

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Tim.

On 22.01.2019 at 20:26, Tim Düsterhus wrote:
> Aleks,
> 
> On 22.01.19 at 19:38, Aleksandar Lazic wrote:
>> ## test results in: 
>> "/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.76167f9e"
>>  s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) ==
>> "2001:db8:c001:c01a:0::10:0" failed
> 
> The difference here is that the test expects an IPv6 address that's not
> maximally compressed, while you get a IPv6 address that *is* maximally
> compressed. I would guess that this is the difference in behaviour
> between glibc and musl (as you are using an Alpine container).

Ah, that explains this error.

This means that the function in haproxy works, but the check should be adapted
to match both cases, right?
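
For example, an untested sketch of a relaxed check that would accept both forms:

  expect req.http.test3maskff ~ "2001:db8:c001:c01a(:0)?::10:0"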

Do you think that in general alpine/musl is a good idea, or should I stay on
centos as for my other images?

Any idea about the other failed tests?

-
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.859) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.739) exit=2
#top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
#top  TEST ./reg-tests/log/b0.vtc FAILED (10.001) signal=9
#top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.752) exit=2
4 tests failed, 0 tests skipped, 29 tests passed
## Gathering results ##
## Test case: ./reg-tests/http-messaging/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.7739e83e"
 c1h2  0.0 Wrong frame type HEADERS (1) wanted WINDOW_UPDATE
## Test case: ./reg-tests/log/b0.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.2776263d"
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.0900be1e"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) ==
"2001:db8:c001:c01a:0::10:0" failed
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.506e5b2b"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
-

> Best regards
> Tim Düsterhus

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:30, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 11:16 AM Aleksandar Lazic  wrote:
>> Agree that I get a 400 with this command.
>>
>> `echo 'K' | ./tool/bssl s_client -connect mail.google.com:443`
> 
> (Note that "K" on its own line does not send a KeyUpdate message with
> BoringSSL's bssl tool. It just sends "K\n".)
> 
>> How does boringssl test if the KeyUpdate on a server works?
> 
> If you're asking how BoringSSL's internal tests exercise KeyUpdates
> then we maintain a fork of Go's TLS stack that is extensively modified
> to be able to generate a large variety of TLS patterns. That is used
> to exercise KeyUpdates in a number of ways:
> https://boringssl.googlesource.com/boringssl/+/eadef4730e66f914d7b9cbb2f38ecf7989f992ed/ssl/test/runner/runner.go#2779

Thanks.

Can it be reused to test a specific server, like:

ssl/test/runner/runner -test "KeyUpdate-ToServer" 127.0.0.1:8443

Or should a small C/Go program be used for that test?

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:04, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 10:54 AM Aleksandar Lazic  wrote:
>> Do have boringssl a similar tool like s_client?
> 
> BoringSSL builds tool/bssl (in the build directory), which is similar.
> However it doesn't have any magic inputs that can trigger a KeyUpdate
> message like OpenSSL's s_client.

Thanks.
The test was already running when I got your answer.

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/149540960

Agreed, I get a 400 with this command:

`echo 'K' | ./tool/bssl s_client -connect mail.google.com:443`

How does boringssl test whether KeyUpdate works on a server?

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 19:54, Aleksandar Lazic wrote:
> Cool, thanks.
> 
> Do have boringssl a similar tool like s_client?
> 
> I don't like to build openssl just for s_client call :-)

Answering my own question:

bssl is the boringssl tool command.

The open question is why the tests fail in a container.

> Regards
> Aleks
> 
> 
 Original Message 
From: Janusz Dziemidowicz 
Sent: 22 January 2019 19:49:15 CET
To: Aleksandar Lazic 
CC: HAProxy 
Subject: Re: haproxy 1.9.2 with boringssl
> 
On Tue, 22 Jan 2019 at 19:40, Aleksandar Lazic wrote:
>>
>> Hi.
>>
>> I have now build haproxy with boringssl and it looks quite good.
>>
>> Is it the recommended way to simply make a git clone without any branch or 
>> tag?
>> Does anyone know how the KeyUpdate can be tested?
> 
> openssl s_client -connect HOST:PORT (openssl >= 1.1.1)
> Just type 'K' and press enter. If the server is broken then connection
> will be aborted.
> 
> www.github.com:443, currently broken:
> read R BLOCK
> K
> KEYUPDATE
> read R BLOCK
> read:errno=0
> 
> mail.google.com:443, working:
> read R BLOCK
> K
> KEYUPDATE
> 
> 
> 




Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Cool, thanks.

Does boringssl have a tool similar to s_client?

I'd rather not build openssl just for the s_client call :-)

Regards
Aleks


 Original Message 
From: Janusz Dziemidowicz 
Sent: 22 January 2019 19:49:15 CET
To: Aleksandar Lazic 
CC: HAProxy 
Subject: Re: haproxy 1.9.2 with boringssl

On Tue, 22 Jan 2019 at 19:40, Aleksandar Lazic wrote:
>
> Hi.
>
> I have now build haproxy with boringssl and it looks quite good.
>
> Is it the recommended way to simply make a git clone without any branch or 
> tag?
> Does anyone know how the KeyUpdate can be tested?

openssl s_client -connect HOST:PORT (openssl >= 1.1.1)
Just type 'K' and press enter. If the server is broken then connection
will be aborted.

www.github.com:443, currently broken:
read R BLOCK
K
KEYUPDATE
read R BLOCK
read:errno=0

mail.google.com:443, working:
read R BLOCK
K
KEYUPDATE





haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Hi.

I have now built haproxy with boringssl and it looks quite good.

Is the recommended way simply to make a git clone, without any branch or tag?
Does anyone know how the KeyUpdate can be tested?

###
HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
USE_THREAD=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : BoringSSL
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTX        side=FE|BE
  h2 : mode=HTTP       side=FE
   <default> : mode=HTX        side=FE|BE
   <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
###

I also wanted to run the reg-tests but they fail.

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/149523589

-
...
+ cd /usr/src/haproxy
+ VTEST_PROGRAM=/usr/src/VTest/vtest HAPROXY_PROGRAM=/usr/local/sbin/haproxy
make reg-tests
...
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.856) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.742) exit=2
#top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
#top  TEST ./reg-tests/log/b0.vtc FAILED (10.008) signal=9
#top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.745) exit=2
4 tests failed, 0 tests skipped, 29 tests passed
## Gathering results ##
## Test case: ./reg-tests/log/b0.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.357fd753"
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.477fdc0b"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
## Test case: ./reg-tests/http-messaging/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.7aab2925"
 c1h2  0.0 Wrong frame type HEADERS (1) wanted WINDOW_UPDATE
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.76167f9e"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) ==
"2001:db8:c001:c01a:0::10:0" failed
make: *** [Makefile:1102: reg-tests] Error 1
-
###

Has anyone tried to run the tests in a containerized environment?

Regards
Aleks



Re: Automatic Redirect transformations using regex?

2019-01-22 Thread Aleksandar Lazic
On 21.01.2019 at 23:40, Joao Guimaraes wrote:
> Hi Haproxy team!
> 
> I've been trying to figure out how to perform automatic redirects based on
> source URL transformations. 
> 
> *Basically I need the following redirect: *
> 
> mysite.*abc* redirected to *abc*.mysite.com .

Maybe you can reuse the solution from the reg-tests dir.

47 # redirect Host: example.org / subdomain.example.org
48 http-request redirect prefix
%[req.hdr(Host),lower,regsub(:\d+$,,),map_str(${testdir}/h3.map)] code 301
if { hdr(Host),lower,regsub(:\d+$,,),map_str(${testdir}/h3.map) -m found }

This solution uses a map for the redirect.

http://git.haproxy.org/?p=haproxy-1.9.git;a=blob;f=reg-tests/http-rules/h3.vtc;h=55bb2687d3abe02ee74eca5283e50b039d6d162e;hb=HEAD#l47
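
Without a map, an untested sketch for your case could look like this (it
assumes the country code is simply everything after "mysite."):

  http-request redirect prefix https://%[req.hdr(host),lower,regsub(^mysite\.,,)].mysite.com code 301 if { req.hdr(host),lower -m beg mysite. }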

> Note that mysite.abc is not fixed, must apply to whatever abc wants to be.
> 
> *Other examples:*
> *
> *
> 
> mysite.fr TO fr.mysite.com
> mysite.es TO es.mysite.com
> mysite.us TO us.mysite.com
> mysite.de TO de.mysite.com
> mysite.uk TO uk.mysite.com
> 
> 
> Thanks in advance!
> Joao Guimaraes

Best regards
Aleks



Re: H2 Server Connection Resets (1.9.2)

2019-01-21 Thread Aleksandar Lazic
Hi Luke.

On 21.01.2019 at 10:30, Luke Seelenbinder wrote:
> Hi all,
> 
> One more bug (or configuration hole) from our transition to 1.9.x using 
> end-to-end h2 connections.
> 
> After enabling h2 backends (technically `server … alpn h2,http/1.1`), we 
> began seeing a high number of backend /server/ connection resets. A 
> reasonable number of client-side connection resets due to timeouts, etc., is 
> normal, but the server connection resets were new.
> 
> I believe the root cause is that our backend servers are NGINX servers, which 
> by default have a 1000 request limit per h2 connection 
> (https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests). 
> As far as I can tell there's no way to set this to unlimited. That resulted 
> in NGINX resetting the HAProxy backend connections and thus resulted in user 
> requests being dropped or returning 404s (oddly enough; though this may be as 
> a result of the outstanding bug related to header manipulation and HTX mode).

Do you have info like this in the nginx log?

"http2 flood detected"

It's the message from these lines:

https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2.c#L4517


> This wouldn't be a problem if one of the following were true:
> 
> - HAProxy could limit the number of times it reused a connection

Can you try to set some timeout values for `timeout http-keep-alive`?
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#timeout%20http-keep-alive

I assume that this timeout could be helpful because of this block in the doc

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html

```
  - KAL : keep alive ("option http-keep-alive") which is the default mode : all
requests and responses are processed, and connections remain open but idle
between responses and new requests.
```

and this code part

https://github.com/haproxy/haproxy/blob/v1.9.0/src/backend.c#L1164
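
For example (a sketch; the value is only an illustration):

  defaults
    timeout http-keep-alive 10s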

> - HAProxy could retry a failed request due to backend server connection reset 
> (possibly coming in 2.0 with L7 retries?)

Would you mind creating an issue for that if there isn't one already?

> - NGINX could set that limit to unlimited.

Isn't `unsigned int` enough?
How many idle connections do you have, and for how long?

> Our http-reuse is set to aggressive, but that doesn't make much difference, I 
> don't think, since safe would result in the same behavior (the connection is 
> reusable…but only for a limited number of requests).
> 
> We've worked around this by only using h/1.1 on the backends, which isn't a 
> big problem for us, but I thought I would raise the issue, since I'm sure a 
> lot of folks are using haproxy <-> nginx pairings, and this is a bit of a 
> subtle result of that in full h2 mode.

Can you try to increase the max_requests setting to 20 in nginx?

`max_requests` is defined as `ngx_uint_t`, which is `unsigned int`.

I have found this in the nginx source.

https://www.nginx.com/resources/wiki/extending/api/main/#ngx-uint-t
https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2_module.h#L27
https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2_module.c#L85
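
For example (an untested sketch; the value is only an illustration):

  # nginx, http or server context:
  http2_max_requests 100000;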

> Thanks again for such great software—I've found it pretty fantastic to run in 
> production. :)

Just out of curiosity, have you seen any changes in your solution with the
htx/H2 end-to-end setup?

> Best,
> Luke

Best regards
Aleks

> —
> Luke Seelenbinder
> Stadia Maps | Founder
> stadiamaps.com
> 




Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-20 Thread Aleksandar Lazic
Thank you for the clarification.

Regards
Aleks



 Original Message 
From: Adam Langley 
Sent: 21 January 2019 00:12:59 CET
To: Aleksandar Lazic 
CC: haproxy@formilux.org, Willy Tarreau , eb...@haproxy.com
Subject: Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

On Sun, Jan 20, 2019 at 3:04 PM Aleksandar Lazic  wrote:
> which refers to 
> https://www.openssl.org/docs/manmaster/man3/SSL_key_update.html
>
> instead of the  suggested Patch?

The SSL_key_update function enqueues a KeyUpdate message to be sent.
The problem is that if a /client/ of HAProxy sends a KeyUpdate,
HAProxy thinks that it's a pre-TLS 1.3 renegotiation message and drops
the connection.

Thus the patch seeks to address that. HAProxy may also want to do
something like send a KeyUpdate for every x MBs of data sent, or y
minutes of time elapsed, but that would be a separate feature. (And
one needs to be a little cautious because OpenSSL 1.1.1 will only
accept 32 KeyUpdate messages per connection.)


Cheers

AGL




Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-20 Thread Aleksandar Lazic
Hi.

As far as I understand the KeyUpdate

https://tools.ietf.org/html/rfc8446 4.6.3

which you refer to, isn't it also an option to use

https://wiki.openssl.org/index.php/TLS1.3#Renegotiation

which refers to https://www.openssl.org/docs/manmaster/man3/SSL_key_update.html

instead of the suggested patch?

Best regards
Aleks


 Original Message 
From: Willy Tarreau 
Sent: 20 January 2019 23:41:17 CET
To: Adam Langley 
CC: haproxy@formilux.org, eb...@haproxy.com
Subject: Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

Hi Adam,

[ccing Emeric]

On Sun, Jan 20, 2019 at 01:12:44PM -0800, Adam Langley wrote:
> KeyUpdate messages are a feature of TLS 1.3 that allows the symmetric
> keys of a connection to be periodically rotated. It's
> mandatory-to-implement in TLS 1.3, but not mandatory to use. Google
> Chrome tried enabling KeyUpdate and promptly broke several sites, at
> least some of which are using HAProxy.
> 
> The cause is that HAProxy's code to disable TLS renegotiation[1] is
> triggering for TLS 1.3 post-handshake messages. But renegotiation has
> been removed in TLS 1.3 and post-handshake messages are no longer
> abnormal.

Interesting!

> Thus I'm attaching a patch to only enforce that check when
> the version of a TLS connection is <= 1.2.

I think that it makes sense. I'll wait for Emeric's check regarding any
possibly overlooked impact anywhere else if some other parts would assume
that this didn't happen anymore.

> Since sites that are using HAProxy with OpenSSL 1.1.1 will break when
> Chrome reenables KeyUpdate without this change, I'd like to suggest it
> as a candidate for backporting to stable branches.

Sure! OpenSSL 1.1.1 is supported on 1.9 and 1.8 so this should be backported
there.

Just out of curiosity, if such out-of-band messages are enabled again in
1.3, do you think this might have any particular impacts on something like
kTLS where the TLS stream is deciphered by the kernel ? I don't know how
such messages can safely be delivered to userland in this case, nor if
they're needed there at all.

> [1] https://github.com/haproxy/haproxy/blob/master/src/ssl_sock.c#L1472
> 
> 
> Thank you
> 
> AGL
> 
> --
> Adam Langley a...@imperialviolet.org https://www.imperialviolet.org

Thanks!
Willy




Re: haproxy issue tracker discussion

2019-01-18 Thread Aleksandar Lazic
Cool, thanks :-)


 Original Message 
From: Lukas Tribus 
Sent: 18 January 2019 14:14:06 CET
To: Aleksandar Lazic 
CC: haproxy , Willy Tarreau , "Tim 
Düsterhus" 
Subject: Re: haproxy issue tracker discussion

Hello Aleksandar,


On Fri, 18 Jan 2019 at 12:54, Aleksandar Lazic  wrote:
>
> Hi.
>
> As there are now the github templates in the repo can / should we start to 
> create issues &  features on github?

Yes, you can go ahead and start filing bugs and features.

There's some minor tweaking yet to be done regarding the subsystem
labels, but that's not really a blocking issue. Once that is done, I
will send out a proper announcement on the list (tomorrow, probably).


Lukas




Re: haproxy issue tracker discussion

2019-01-18 Thread Aleksandar Lazic
Hi.

Now that the github templates are in the repo, can/should we start to
create issues & feature requests on github?

Regards
Aleks


 Original Message 
From: Willy Tarreau 
Sent: 14 January 2019 04:11:17 CET
To: "Tim Düsterhus" 
CC: Lukas Tribus , haproxy 
Subject: Re: haproxy issue tracker discussion

On Mon, Jan 14, 2019 at 03:06:54AM +0100, Tim Düsterhus wrote:
> May I suggest the following to move forward?
(...)
> That way we can test the process with a small, unimportant, test issue.
> The automated closing based on the labels can than be added a few days
> later. I don't expect huge numbers of issues right away, so they can be
> closed by hand.

Sounds good to me.

Thanks guys!
Willy




Re: [ANNOUNCE] haproxy-1.9.2

2019-01-18 Thread Aleksandar Lazic

Hi Willy,

On 17-01-2019 15:41, Willy Tarreau wrote:

Hi Aleks,

On Thu, Jan 17, 2019 at 01:02:56PM +0100, Aleksandar Lazic wrote:

> Very likely, yes. If you want to inspect the body you simply have to
> enable "option http-buffer-request" so that haproxy waits for the body
> before executing rules. From there, indeed you can pass whatever Lua
> code on req.body. I don't know if there would be any value in trying
> to implement some protobuf converters to decode certain things natively.
> What I don't know is if the contents can be deserialized even without
> compiling the proto files.

Agree. It would be interesting to hear a good use case and a solution for
that; at least haproxy has the possibility to do it ;-)


From what I've seen, gRPC stream is reasonably easy to decode, and protobuf
doesn't require the proto file, it will just emit indexes, types and values,
which is enough as long as the schema doesn't change. I've seen that Thrift
is pretty similar. So we could decide about routing or priorities based on
values passed in the protocol :-)


;-)


>> As we now have a separate protocol handling layer (htx), how difficult is it to
>> add `mode fast-cgi` like `mode http`?
>
> We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
> rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
> with an htx-to-fcgi one, because fast-cgi is another representation of
> HTTP. The "mode http" setting is what enables all HTTP processing
> (http-request rules, cookie parsing etc). Thus you definitely want to
> have it enabled.

Full Ack.

This means that I can use QUIC+HTTP/3 => php-fpm with haproxy, in the
future ;-)


Yes.

Fast cgi isn't a bad protocol (IMHO) but sadly it was not as widespread as
http(s) even though it has multiplexing and keep-alive features in it.


I remember that when we checked with Thierry, there were some issues to
implement multiplexing which resulted in nobody really implementing it
in practice. I *think* the problem was due to the framing or the huge
risk of head-of-line blocking making it impossible (or very hard) to
sacrifice a stream when the client doesn't read it, without damaging the
other ones. Thus it was mostly in-order delivery in the end.

(... links ...)
All of them look at the keep-alive flag but not at the multiplex
flag.


So this doesn't seem to have changed much :-)


Not that I know of. From my point of view the keep-alive feature is the one
which should be supported, and not the multiplex feature, but that's just my
opinion.



Python is different, as always; they mainly use wsgi, AFAIK.
https://wsgi.readthedocs.io/en/latest/


OK.


I forgot Ruby; they also use another protocol.
http://rack.github.io/

For ruby we can use http, as there are a lot of web servers which have rack
already implemented ;-)

https://www.digitalocean.com/community/tutorials/a-comparison-of-rack-web-servers-for-ruby-web-applications


uwsgi also has its own protocol:
https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html


I remember having looked at this one many years ago when it was
presented as a replacement for fcgi, but I got contradictory feedback
depending on whom I talked to. I don't know how widespread it is
nowadays.


Well it's not as widespread as fcgi and wsgi, AFAIK.
Let's focus on fcgi and see what the feedback is.

I can open an issue on github, as soon as it's ready, to track the
feedback.



Cheers,
Willy


Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.2

2019-01-17 Thread Aleksandar Lazic
Hi Willy.

On 17.01.2019 at 04:25, Willy Tarreau wrote:
> Hi Aleks,
> 
> On Wed, Jan 16, 2019 at 11:52:12PM +0100, Aleksandar Lazic wrote:
>> For service routing, the standard haproxy content routing options are possible
>> (path, header, ...), right?
> 
> Yes absolutely.
> 
>> If someone wants to route based on grpc content, he can use lua with body
>> content, right?
>>
>> For example this library https://github.com/Neopallium/lua-pb
> 
> Very likely, yes. If you want to inspect the body you simply have to
> enable "option http-buffer-request" so that haproxy waits for the body
> before executing rules. From there, indeed you can pass whatever Lua
> code on req.body. I don't know if there would be any value in trying
> to implement some protobuf converters to decode certain things natively.
> What I don't know is if the contents can be deserialized even without
> compiling the proto files.

Agree. It would be interesting to hear a good use case and a solution for that;
at least haproxy has the possibility to do it ;-)
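
To make that concrete, here is a minimal, untested sketch of the idea in haproxy
configuration; the Lua converter `grpc_route` is hypothetical (something one
would have to write themselves, e.g. on top of lua-pb):

```
frontend fe_grpc
    mode http
    bind :50052 proto h2
    # wait for the request body before evaluating the rules below
    option http-buffer-request
    # hypothetical Lua converter that decodes the protobuf payload
    # and returns a routing key
    http-request set-var(req.route) req.body,lua.grpc_route
    use_backend grpc_special if { var(req.route) -m str special }
    default_backend grpc_default
```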

>>> That's about all. With each major release we feel like version dot-2
>>> works pretty well. This one is no exception. We'll see in 6 months if
>>> it was wise :-)
>>
>> So you would say I can use it in production with htx ;-)
> 
> As long as you're still a bit careful, yes, definitely. haproxy.org has
> been running it in production since 1.9-dev9 or so. Since 1.9.0 was
> released, we've had one crash a few times (fixed in 1.9.1) and two
> massive slowdowns due to non-expiring connections reaching the frontend's
> maxconn limit (fixed in 1.9.2).

Yep agree. In prod is always good to keep an eye on it.

>> and the docker image is also updated ;-)
>>
>> https://hub.docker.com/r/me2digital/haproxy19
> 
> Thanks.
> 
>> As we now have a separate protocol handling layer (htx), how difficult is it to
>> add `mode fast-cgi` like `mode http`?
> 
> We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
> rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
> with an htx-to-fcgi one, because fast-cgi is another representation of
> HTTP. The "mode http" setting is what enables all HTTP processing
> (http-request rules, cookie parsing etc). Thus you definitely want to
> have it enabled.

Full Ack.

This means that I can use QUIC+HTTP/3 => php-fpm with haproxy, in the future
;-)

Fast cgi isn't a bad protocol (IMHO) but sadly it was not as widespread as
http(s) even though it has multiplexing and keep-alive features in it.
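
Purely as a sketch of how that could look one day (nothing of this exists yet;
the `proto fcgi` token on the server line is an assumption derived from the
discussion above):

```
frontend web
    mode http
    bind :443 ssl crt /etc/haproxy/certs/
    use_backend php if { path_end .php }
    default_backend static

backend php
    mode http
    # assumption: a future htx-to-fcgi mux selected per server
    server fpm1 127.0.0.1:9000 proto fcgi
```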

>> I ask because php does not have a production-ready http implementation, but it
>> has a robust fast-cgi process manager (php-fpm). There are several possible
>> solutions to add http to php (nginx+php-fpm, uwsgi+php-fpm, uwsgi+embedded php)
>> but all these solutions require an additional hop.
>>
>> My wish is to have such a flow.
>>
>> haproxy -> *.php  => php-fpm
>> -> *.static-files => nginx,h2o
> 
> It's *exactly* what I've been wanting for a long time as well. Mind you
> that Thierry implemented some experimental fast-cgi code many years ago
> in 1.3! By then we were facing some strong architectural limitations,
> but now I think we should have everything ready thanks to the muxes.

Oh wow 1.3. 8-O

In 2014 Baptiste wrote a blog post on how to do health checks for php-fpm,
so it looks like fast-cgi has been on the table for a long time.

https://alohalb.wordpress.com/2014/06/06/binary-health-check-with-haproxy-1-5-php-fpmfastcgi-probe-example/

Just in case it's interesting, here are some receiver implementation links for
popular servers.

https://github.com/php-src/php/blob/master/main/fastcgi.h
https://github.com/php-src/php/blob/master/main/fastcgi.c

https://github.com/unbit/uwsgi/blob/master/proto/fastcgi.c
https://github.com/unbit/uwsgi/blob/master/plugins/router_fcgi/router_fcgi.c

https://golang.org/src/net/http/fcgi/fcgi.go
https://golang.org/src/net/http/fcgi/child.go

https://docs.rs/crate/fastcgi/1.0.0/source/src/lib.rs

All of them look at the keep-alive flag but not at the multiplex flag.

Python is different, as always; they mainly use wsgi, AFAIK.
https://wsgi.readthedocs.io/en/latest/

uwsgi also has its own protocol:
https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html

>> I have taken a look into the fcgi protocol but sadly I'm not a good enough
>> programmer for that task. I can offer to test the implementation.
> 
> That's good to know, thanks!
> 
> Cheers,
> Willy

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.2

2019-01-16 Thread Aleksandar Lazic
Hi.

On 16.01.2019 at 19:02, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9.2 was released on 2019/01/16. It added 58 new commits
> after version 1.9.1.
> 
> It addresses a number of lower importance pending issues that were not
> yet merged into 1.9.1, one bug in the cache, and some long-standing
> limitations that were affecting H2.
> 
> The highest severity issue but the hardest to trigger as well is the
> one affecting the cache, as it's possible to corrupt the shared memory
> segment when using some asymmetric caching rules, and crash the process.
> There is a workaround though, which consists in making sure an
> "http-request cache-use" action is always performed before an
> "http-response cache-store" action (i.e. the conditions must match).
> This bug already affects 1.8 and nobody noticed so I'm not worried :-)
> 
> The rest is of lower importance but mostly annoyance. One issue was
> causing the mailers to spam the server in loops. Another one affected
> idle server connections (I don't remember the details after seeing
> several of them to be honest), apparently the stats page could crash
> when using HTX, and there were still a few cases where stale HTTP/1
> connections would never leave in HTX (after certain situations of client
> timeout). The 0-RTT feature was broken when openssl 1.1.1 was released
> due to the anti-replay protection being enabled by default there (which
> makes sense since not everyone uses it with HTTP and proper support),
> this is now fixed.
> 
> We had been observing a slowly growing number of orphaned connections
> on haproxy.org last week (several per hour), and since the recent fixes we
> can confirm that it's perfectly clean now.
> 
> There's a small improvement regarding the encryption of TLS tickets. We
> used to support 128 bits only and it looks like the default setting
> changed 2 years ago without us noticing. Some users were asking for 256
> bit support, so that was implemented and backported. It will work
> transparently as the key size is determined automatically. We don't
> think it would make sense at this point to backport this to 1.8, but if
> there is compelling demand for this Emeric knows how to do it.
> 
> Regarding the long-standing limitations affecting H2, some of you
> probably remember that haproxy used not to support CONTINUATION frames,
> which was causing an issue with one very old version of chromium, and
> that it didn't support trailers, making it incompatible with gRPC (which
> may also use CONTINUATION). This has constantly resulted in h2spec to
> return 6 failed tests. These limitations could be addressed in 2.0-dev
> relatively easily thanks to the much better new architecture, and I
> considered it was right to backport these patches so that we don't have
> to work around them anymore. I'd say that while from a developer's
> perspective these limitations were not bugs ("works as designed"), from
> the user's perspective they definitely were.
> 
> I could try this with the gRPC helloworld tests (which by the way support
> H2 in clear text) :
> 
>haproxy$ cat h2grpc.cfg
>defaults
> mode http
> timeout client 5s
> timeout server 5s
> timeout connect 1s
> 
>listen grpc
> log stdout format raw local0
> option httplog
> option http-use-htx
> bind :50052 proto h2
> server srv1 127.0.0.1:50051 proto h2
>haproxy$ ./haproxy -d -f h2grpc.cfg
> 
>grpc$ go run examples/helloworld/greeter_server/main.go &
>grpc$ go run examples/helloworld/greeter_client/main.go haproxy 
>2019/01/04 11:11:40 Received: haproxy
>2019/01/04 11:11:40 Greeting: Hello haproxy
> 
>(...)haproxy$ ./haproxy -d -f h2grpc.cfg
>:grpc.accept(0008)=000b from [127.0.0.1:37538] ALPN=  
>:grpc.clireq[000b:]: POST /helloworld.Greeter/SayHello 
> HTTP/2.0
>:grpc.clihdr[000b:]: content-type: application/grpc 
>:grpc.clihdr[000b:]: user-agent: grpc-go/1.18.0-dev   
>:grpc.clihdr[000b:]: te: trailers
>:grpc.clihdr[000b:]: grpc-timeout: 994982u
>:grpc.clihdr[000b:]: host: localhost:50052
>:grpc.srvrep[000b:000c]: HTTP/2.0 200
>:grpc.srvhdr[000b:000c]: content-type: application/grpc
>:grpc.srvcls[000b:000c]
>:grpc.clicls[000b:000c]
>:grpc.closed[000b:000c]
>127.0.0.1:37538 [04/Jan/2019:11:11:40.705] grpc grpc/srv1 0/0/0/1/1 200 
> 116 - -  1/1/0/0/0 0/0 "POST /helloworld.Greeter/SayHello HTTP/2.0"
> 
> In the past we'd get an error from the client saying that the response
> came without trailers. So now this limitation is expected to be just bad
> old memories.

That's great ;-) ;-)

For service routing, the standard haproxy content routing options are possible
(path, header, ...), right?

If someone wants to route based on grpc content he can use lua with body content

Re: How to replicate RedirectMatch (apache reverse proxy) in Haproxy

2019-01-16 Thread Aleksandar Lazic
Hi.

On 16.01.2019 at 16:35, mirko stefanelli wrote:
> Hi to all,
> 
> we are trying to move from Apache reverse proxy to Haproxy, you can see below 
> a
> part of del file Apache httpd.conf:
> 
> 
>  ServerName dipendenti.xxx.xxx.it
>  ErrorLog logs/intranet_ssl_error_log
>  TransferLog logs/intranet_ssl_access_log
>  LogLevel info
>  ProxyRequests Off
>  ProxyPreserveHost On
>  ProxyPass / http://intranet.xx.xxx/
>  ProxyPassReverse / http://intranet.xxx.xxx/
>  RedirectMatch ^/$ https://dipendenti.xxx.xxx.it  /
> 
>  SSLEngine on
>  SSLProxyEngine On
>  SSLProtocol all -SSLv2
>  SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
> 
>  SSLCertificateFile /etc/pki/tls/certs/STAR_xt.crt
>  SSLCertificateKeyFile /etc/pki/tls/private/.pem
>  SSLCertificateChainFile /etc/pki/tls/certs/STAR_xxx_ca-bundle.crt
>  BrowserMatch "MSIE [2-5]" \
>              nokeepalive ssl-unclean-shutdown \
>              downgrade-1.0 force-response-1.0
> 
> 
> As you can see here we use RedirectMatch to force respons in HTTPS.
> 
> Here part of conf on HAproxy:
> 
> in frontend part:
> 
> bind *:443 ssl crt /etc/haproxy/ssl/ #here are stored each certificates
> 
> acl acl_dipendenti hdr_dom(host) -i dipendenti.xxx.xxx.it
> 
> use_backend dipendenti if acl_dipendenti
> 
> in backend part:
> 
> backend dipendenti
>         log 127.0.0.1:514 local6 debug
>         stick-table type ip size 20k peers mypeers
>         server intranet 10.xxx.xxx.xxx:80 check
> 
> When we start the service we connect to https://dipendenti.xxx.xxx.it, but
> during navigation it seems that haproxy responses change from HTTPS to HTTP.
> 
> Can you suggests some idea in order to investigate on this behavior?

Maybe this blog post gives you a starting point.

https://www.haproxy.com/blog/howto-write-apache-proxypass-rules-in-haproxy/
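
As a rough haproxy sketch of the quoted Apache behaviour (assuming haproxy also
binds :80, and reusing the hostname from the original mail):

```
frontend fe_web
    mode http
    bind *:80
    bind *:443 ssl crt /etc/haproxy/ssl/
    # force HTTPS, similar in spirit to the RedirectMatch above
    http-request redirect scheme https code 301 if !{ ssl_fc }
    # tell the backend the outside scheme, so it keeps generating https:// links
    http-request set-header X-Forwarded-Proto https
    acl acl_dipendenti hdr_dom(host) -i dipendenti.xxx.xxx.it
    use_backend dipendenti if acl_dipendenti
```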

> Regards,
> Mirko.

Regards
Aleks



Re: Get client IP

2019-01-16 Thread Aleksandar Lazic
Hi.

On 16.01.2019 at 06:43, Vũ Xuân Học wrote:
> Dear,
> 
> I fixed it. I use { src x.x.x.x ... } in use_backend and it worked.
> 
> Many thanks,

Great ;-).

How about the original issue with the ssl? What does the solution look like now?

Best regards
Aleks
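
For reference regarding the question quoted below: `hdr` is an HTTP-level fetch,
so it cannot appear in a `tcp-request connection` rule, which runs before any
payload is seen. In a `mode tcp` frontend, a content-level sketch matching the
TLS SNI instead could look like this (addresses are placeholders from the
original mail):

```
# mode tcp: match the TLS SNI at content level, not the HTTP Host header
tcp-request inspect-delay 5s
tcp-request content reject if { req.ssl_sni -i crmone.thaison.vn } !{ src x.x.x.x x.x.x.y }
tcp-request content accept
```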

> -Original Message-
> From: Vũ Xuân Học  
> Sent: Wednesday, January 16, 2019 10:37 AM
> To: 'Aleksandar Lazic' ; 'haproxy@formilux.org' 
> ; 'PiBa-NL' 
> Subject: RE: Get client IP
> 
> Hi,
> 
> I have another problem. I want to allow only some IPs to access my website. Please 
> show me how to allow some IPs by domain name.
> 
> I try with: tcp-request connection reject if { hdr(host) crmone.thaison.vn } 
> !{ src x.x.x.x x.x.x.y } but it's not working. I get the error message: 
>
>   keyword 'hdr' which is incompatible with 'frontend 
> tcp-request connection rule'
> 
> I try with some other keyword but not successful.
> 
> 
> 
> 
> 
> -Original Message-
> From: Aleksandar Lazic 
> Sent: Monday, January 14, 2019 5:20 PM
> To: Vũ Xuân Học ; haproxy@formilux.org; 'PiBa-NL' 
> 
> Subject: Re: Get client IP
> 
> Hi.
> 
> Am 14.01.2019 um 03:11 schrieb Vũ Xuân Học:
>> Hi,
>>
>>  
>>
>> I don't know how to use ssl in http mode. I have many sites with many
>> certificates.
>>
>> As you see:
>>
>> …
>>
>> bind 192.168.0.4:443   (I NAT port 443 from firewall to HAProxy IP
>> 192.168.0.4)
>>
>> …
>>
>> # Define hosts
>>
>> acl host_1 req.ssl_sni -i ebh.vn
>>
>> acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
>>
>> … (many acl like above)
>>
>>
>> use_backend eBH if host_1
>>
>>use_backend einvoice443 if host_2
> 
> You can use maps for this.
> https://www.haproxy.com/blog/introduction-to-haproxy-maps/
> 
> The openshift router has a complex but usable solution. Don't get confused 
> with the golang template stuff in there.
> 
> https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L180
> 
> https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L198
> 
> Regards
> Aleks
> 
>> *From:* Aleksandar Lazic 
>> *Sent:* Monday, January 14, 2019 8:45 AM
>> *To:* haproxy@formilux.org; Vũ Xuân Học ; 'PiBa-NL'
>> 
>> *Subject:* RE: Get client IP
>>
>>  
>>
>> Hi.
>>
>> As you use IIS I strongly suggest to terminate the https on haproxy 
>> and use mode http instead of tcp.
>>
>> Here is a blog post about basic setup of haproxy with ssl
>>
>> https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-o
>> f-stunnel-stud-nginx-or-pound/
>>
>> I assume that haproxy has the client ip, as the setup works in the http
>> config.
>>
>> Best regards
>> Aleks
>>
>> --
>> --
>>
>> *From:* "Vũ Xuân Học" mailto:ho...@thaison.vn>>
>> *Sent:* 14 January 2019 02:17:23 CET
>> *To:* 'PiBa-NL' > <mailto:piba.nl@gmail.com>>, 'Aleksandar Lazic'
>> mailto:al-hapr...@none.at>>, haproxy@formilux.org 
>> <mailto:haproxy@formilux.org>
>> *Subject:* RE: Get client IP
>>
>>  
>>
>> Thanks for your help
>>
>>  
>>
>> I try config HAProxy with accept-proxy like this:
>>
>> frontend ivan
>>
>>  
>>
>> bind 192.168.0.4:443 accept-proxy
>>
>> mode tcp
>>
>> option tcplog
>>
>>  
>>
>> #option forwardfor
>>
>>  
>>
>> reqadd X-Forwarded-Proto:\ https
>>
>>  
>>
>> then my website cannot be accessed.
>>
>> I use IIS as webserver and I don’t know how to accept proxy, I only 
>> know config X-Forwarded-For like this
>>
>> http://www.loadbalancer.org/blog/iis-and-x-forwarded-for-header/
>>
>>  
>>
>>  
>>
>> *From:* PiBa-NL mailto:piba.nl@gmail.com>>
>> *Sent:* Sunday, January 13, 2019 10:06 PM
>> *To:* Aleksandar Lazic > <mailto:al-hapr...@none.at>>; Vũ Xuân Học > <mailto:ho...@thaison.vn>>; haproxy@formilux.org 
>> <mailto:haproxy@formilux.org>
>> *Subject:* Re: Get client IP
>>
>>  
>>
>> Hi,
>>
>> Op 13-1-2019 om 1

Re: Get client IP

2019-01-14 Thread Aleksandar Lazic
Hi.

On 14.01.2019 at 03:11, Vũ Xuân Học wrote:
> Hi,
> 
>  
> 
> I don't know how to use ssl in http mode. I have many sites with many
> certificates.
> 
> As you see:
> 
> …
> 
> bind 192.168.0.4:443   (I NAT port 443 from firewall to HAProxy IP 
> 192.168.0.4)
> 
> …
> 
> # Define hosts
> 
>     acl host_1 req.ssl_sni -i ebh.vn
> 
>     acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
> 
>     … (many acl like above)
> 
> 
>     use_backend eBH if host_1
> 
>    use_backend einvoice443 if host_2

You can use maps for this.
https://www.haproxy.com/blog/introduction-to-haproxy-maps/

The openshift router has a complex but usable solution. Don't get confused with
the golang template stuff in there.

https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L180

https://github.com/openshift/router/blob/master/images/router/haproxy/conf/haproxy-config.template#L198
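
A minimal sketch of the map idea applied to the quoted SNI rules (the map file
path is a placeholder, and note that this dynamic `use_backend` form needs a
newer haproxy than the 1.5.18 mentioned earlier). The map file would contain
lines like `ebh.vn eBH`:

```
frontend fe_tls
    mode tcp
    bind 192.168.0.4:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # look up the backend by SNI, falling back to a default when nothing matches
    use_backend %[req.ssl_sni,lower,map(/etc/haproxy/sni.map,einvoice443)]
```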

Regards
Aleks

> *From:* Aleksandar Lazic 
> *Sent:* Monday, January 14, 2019 8:45 AM
> *To:* haproxy@formilux.org; Vũ Xuân Học ; 'PiBa-NL'
> 
> *Subject:* RE: Get client IP
> 
>  
> 
> Hi.
> 
> As you use IIS I strongly suggest to terminate the https on haproxy and use 
> mode
> http instead of tcp.
> 
> Here is a blog post about basic setup of haproxy with ssl
> 
> https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
> 
> I assume that haproxy have the client ip as the setup works in the http 
> config.
> 
> Best regards
> Aleks
> 
> --------
> 
*From:* "Vũ Xuân Học" mailto:ho...@thaison.vn>>
*Sent:* 14 January 2019 02:17:23 CET
*To:* 'PiBa-NL' mailto:piba.nl@gmail.com>>,
'Aleksandar Lazic' mailto:al-hapr...@none.at>>,
haproxy@formilux.org <mailto:haproxy@formilux.org>
*Subject:* RE: Get client IP
> 
>  
> 
> Thanks for your help
> 
>  
> 
> I try config HAProxy with accept-proxy like this:
> 
> frontend ivan
> 
>  
> 
>     bind 192.168.0.4:443 accept-proxy
> 
>     mode tcp
> 
>     option tcplog
> 
>  
> 
> #option forwardfor
> 
>  
> 
>     reqadd X-Forwarded-Proto:\ https
> 
>  
> 
> then my website cannot be accessed.
> 
> I use IIS as webserver and I don’t know how to accept proxy, I only know 
> config
> X-Forwarded-For like this
> 
> http://www.loadbalancer.org/blog/iis-and-x-forwarded-for-header/
> 
>  
> 
>  
> 
> *From:* PiBa-NL mailto:piba.nl@gmail.com>>
> *Sent:* Sunday, January 13, 2019 10:06 PM
> *To:* Aleksandar Lazic mailto:al-hapr...@none.at>>; Vũ 
> Xuân
> Học mailto:ho...@thaison.vn>>; haproxy@formilux.org
> <mailto:haproxy@formilux.org>
> *Subject:* Re: Get client IP
> 
>  
> 
> Hi,
> 
On 13-1-2019 at 13:11, Aleksandar Lazic wrote:
> 
> Hi.
> 
>  
> 
On 13.01.2019 at 12:17, Vũ Xuân Học wrote:
> 
> Hi,
> 
>  
> 
> Please help me to solve this problem.
> 
>  
> 
> I use HAProxy version 1.5.18, SSL transparent mode and I can not get 
> client IP
> 
> in my .net mvc website. With mode http, I can use option forwardfor 
> to catch
> 
> client ip but with tcp mode, my web read X_Forwarded_For is null.
> 
>  
> 
>  
> 
>  
> 
> My diagram:
> 
>  
> 
> Client => Firewall => HAProxy => Web
> 
>  
> 
>  
> 
>  
> 
> I read HAProxy document, try to use send-proxy. But when use 
> send-proxy, I can
> 
> access my web.
> 
>  
> 
> This is my config:
> 
>  
> 
> frontend test2233
> 
>  
> 
>     bind *:2233
> 
>  
> 
>     option forwardfor
> 
>  
> 
>  
> 
>  
> 
>     default_backend testecus
> 
>  
> 
> backend testecus
> 
>  
> 
>     mode http
> 
>  
> 
>     server web1 192.168.0.151:2233 check
> 
>  
> 
> Above config work, and I can get the client IP
> 
>  
> 
> That's good as it's `mode http` therefore haproxy can see the http 
> traffic.
> 
> Indeed it can insert the http forwardfor header with 'mode http'.
> 
>  
> 
> 

RE: Get client IP

2019-01-13 Thread Aleksandar Lazic
Hi.

As you use IIS I strongly suggest to terminate the https on haproxy and use 
mode http instead of tcp.

Here is a blog post about basic setup of haproxy with ssl

https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/

I assume that haproxy has the client ip, as the setup works in the http config.

Best regards
Aleks


 Original Message 
From: "Vũ Xuân Học" 
Sent: 14 January 2019 02:17:23 CET
To: 'PiBa-NL' , 'Aleksandar Lazic' , 
haproxy@formilux.org
Subject: RE: Get client IP

Thanks for your help

 

I try config HAProxy with accept-proxy like this:

frontend ivan
 
bind 192.168.0.4:443 accept-proxy
mode tcp
option tcplog
 
#option forwardfor
 
reqadd X-Forwarded-Proto:\ https
 

then my website cannot be accessed. 

I use IIS as webserver and I don't know how to accept proxy; I only know how to 
configure X-Forwarded-For like this:

http://www.loadbalancer.org/blog/iis-and-x-forwarded-for-header/ 

 

 

From: PiBa-NL  
Sent: Sunday, January 13, 2019 10:06 PM
To: Aleksandar Lazic ; Vũ Xuân Học ; 
haproxy@formilux.org
Subject: Re: Get client IP

 

Hi,

On 13-1-2019 at 13:11, Aleksandar Lazic wrote:

Hi.
 
On 13.01.2019 at 12:17, Vũ Xuân Học wrote:

Hi,
 
Please help me to solve this problem.
 
I use HAProxy version 1.5.18, SSL transparent mode, and I cannot get the client IP
in my .net mvc website. With mode http, I can use option forwardfor to catch the
client ip, but with tcp mode my web reads X_Forwarded_For as null.
 
 
 
My diagram:
 
Client => Firewall => HAProxy => Web
 
 
 
I read the HAProxy documentation and tried to use send-proxy. But when using
send-proxy, I can't access my web.
 
This is my config:
 
frontend test2233
 
bind *:2233
 
option forwardfor
 
 
 
default_backend testecus
 
backend testecus
 
mode http
 
server web1 192.168.0.151:2233 check
 
Above config work, and I can get the client IP

 
That's good as it's `mode http` therefore haproxy can see the http traffic.

Indeed it can insert the http forwardfor header with 'mode http'.



 
 

Config with SSL:
 
frontend ivan
 
bind 192.168.0.4:443
mode tcp
option tcplog
 
#option forwardfor
 
reqadd X-Forwarded-Proto:\ https

 
This can't work as you use `mode tcp` and therefore haproxy can't see the http
traffic.
 
From my point of view you now have 2 options.
 
* use https termination on haproxy. Then you can add this http header.

Thats one option indeed.



 
* use accept-proxy in the bind line. This option requires that the firewall is
able to send the PROXY PROTOCOL header to haproxy.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#5.1-accept-proxy

I don't expect a firewall to send such a header. And if I understand correctly 
the 'webserver' would need to be configured to accept proxy-protocol.
The modification to make in haproxy would be to configure send-proxy[-v2-ssl-cn]
http://cbonte.github.io/haproxy-dconv/1.9/snapshot/configuration.html#5.2-send-proxy
And how to configure it with for example nginx:
https://wakatime.com/blog/23-how-to-scale-ssl-with-haproxy-and-nginx
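
In config terms that suggestion is just a server-line option, e.g. this sketch
(the receiving webserver must be configured to accept the PROXY protocol, which,
as noted above, may not be possible with IIS):

```
backend eBH
    mode tcp
    # prepend a PROXY protocol header carrying the real client address
    server web1 192.168.0.153:443 send-proxy-v2 check
```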

 
 
The different modes are described in the doc
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-mode
 
Here is a blog post about basic setup of haproxy with ssl
https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
 

acl tls req.ssl_hello_type 1
 
tcp-request inspect-delay 5s
 
tcp-request content accept if tls
 
 
 
# Define hosts
 
acl host_1 req.ssl_sni -i ebh.vn
 
acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
 

 
   use_backend eBH if host_1
 
   use_backend einvoice443 if host_2
 
 
 
backend eBH
 
mode tcp
 
balance roundrobin
 
option ssl-hello-chk
 
   server web1 192.168.0.153:443 maxconn 3 check #cookie web1
 
   server web1 192.168.0.154:443 maxconn 3 check #cookie web2
 
 
 
Above config doesn’t work, and I can not get the client ip. I try server web1
192.168.0.153:443 send-proxy and try server web1 192.168.0.153:443 send-proxy-v2
but I can’t access my web.

 
This is expected as the Firewall does not send the PROXY PROTOCOL header and the
bind line is not configured for that.

Firewalls by themselves will never use proxy-protocol at all. That it doesn't 
work with send-proxy on the haproxy server line is likely because the 
webservice that is receiving the traffic isn't configured to accept the proxy 
protocol. How to configure a ".net mvc website" to accept that is something I 
don't know, nor whether it is even possible at all.



 
 

Many thanks,

 
Best regards
Aleks
 

Thanks & Best Regards! 

* VU XUAN HOC
 

Regards,
PiBa-NL (Pieter)



Re: Get client IP

2019-01-13 Thread Aleksandar Lazic
Hi.

On 13.01.2019 at 12:17, Vũ Xuân Học wrote:
> Hi,
> 
> Please help me to solve this problem.
> 
> I use HAProxy version 1.5.18, SSL transparent mode, and I cannot get the client IP
> in my .net mvc website. With mode http, I can use option forwardfor to catch the
> client ip, but with tcp mode my web reads X_Forwarded_For as null.
> 
>  
> 
> My diagram:
> 
> Client => Firewall => HAProxy => Web
> 
>  
> 
> I read the HAProxy documentation and tried to use send-proxy. But when using
> send-proxy, I can't access my web.
> 
> This is my config:
> 
> frontend test2233
> 
>     bind *:2233
> 
>     option forwardfor
> 
>  
> 
>     default_backend testecus
> 
> backend testecus
> 
>     mode http
> 
>     server web1 192.168.0.151:2233 check
> 
> Above config work, and I can get the client IP

That's good as it's `mode http` therefore haproxy can see the http traffic.

> Config with SSL:
> 
> frontend ivan
> 
>     bind 192.168.0.4:443
>     mode tcp
>     option tcplog
> 
> #option forwardfor
> 
>     reqadd X-Forwarded-Proto:\ https

This can't work as you use `mode tcp` and therefore haproxy can't see the http
traffic.

From my point of view you now have 2 options.

* use https termination on haproxy. Then you can add this http header (see the
sketch after the list).
* use accept-proxy in the bind line. This option requires that the firewall is
able to send the PROXY PROTOCOL header to haproxy.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#5.1-accept-proxy
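
A minimal sketch of the first option, reusing the names from the quoted config
(the pem file, containing key and certificate, is a placeholder):

```
frontend fe_https
    mode http
    bind 192.168.0.4:443 ssl crt /etc/haproxy/certs/site.pem
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend testecus
```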

The different modes are described in the doc
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-mode

Here is a blog post about basic setup of haproxy with ssl
https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/

>     acl tls req.ssl_hello_type 1
> 
>     tcp-request inspect-delay 5s
> 
>     tcp-request content accept if tls
> 
>  
> 
>     # Define hosts
> 
>     acl host_1 req.ssl_sni -i ebh.vn
> 
>     acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn
> 
> 
> 
>    use_backend eBH if host_1
> 
>    use_backend einvoice443 if host_2
> 
>  
> 
> backend eBH
> 
>     mode tcp
> 
>     balance roundrobin
> 
>     option ssl-hello-chk
> 
>    server web1 192.168.0.153:443 maxconn 3 check #cookie web1
> 
>    server web1 192.168.0.154:443 maxconn 3 check #cookie web2
> 
>  
> 
> Above config doesn’t work, and I can not get the client ip. I try server web1
> 192.168.0.153:443 send-proxy and try server web1 192.168.0.153:443 
> send-proxy-v2
> but I can’t access my web.

This is expected as the Firewall does not send the PROXY PROTOCOL header and the
bind line is not configured for that.

> Many thanks,

Best regards
Aleks

> Thanks & Best Regards! 
> 
> * VU XUAN HOC
>  Mobile: 0169.8081005
> THAISON TECHNOLOGY DEVELOPMENT COMPANY
> *  Add  * :*  11 Dang Thuy Tram, Hoang Quoc Viet, Cau Giay, Ha Noi
>   Tel *: *+84.4.37545222 
>   Fax  *  : *+84.4.37545223
>   Email       *  : *ho...@thaison.vn *
> *  Web         *  :*http://www.thaison.vn; http://www.einvoice.vn; 
> http://www.etax.vn;  http://www.ebh.vn
> 
>  
> 




Re: haproxy issue tracker discussion

2019-01-10 Thread Aleksandar Lazic
On 09.01.2019 at 15:22, Willy Tarreau wrote:
> Hi Tim,
> 
> On Wed, Jan 09, 2019 at 12:58:30PM +0100, Tim Düsterhus wrote:
>> On 09.01.19 at 05:31, Willy Tarreau wrote:
>>> Except that the "naturally" part here is manually performed by someone,
>>> and an issue tracker is nothing more than an organized todo list, which
>>> *is* useful to remind that you missed some backports. It regularly happens
>>> to us, like when the safety of some fixes is not certain and we prefer to
>>> let them run for a while in the most recent versions before backporting
>>> them to older branches. This is exactly where an issue tracker is needed,
>>> to remind us that these fixes are still needed in older branches.
>>
>> So the commits are not being cherry-picked in the original order? I
>> imagined that the process went like this:
>>
>> 1. List all the commits since the last cherry-picks
>> 2. Look in the commit message to see whether the commit should be
>> backported.
>> 3. Cherry-pick the commit.
> 
> It's what we *try* to do, but cherry-picking never is rocket science, for
> various reasons, some ranging from uncertainty regarding some fixes that
> need to cool down later, other because an add-on was made, requiring an
> extra patch that are much more convenient to deal with together (think
> about bisect for example). That's why I created the git-show-backport
> script which gives us significant help in comparing lists of commits from
> various branches.
> 
>>> If the issue tracker only tracks issues related to the most recent branch,
>>
>> I believe you misunderstood me. What I attempted to say is:
>>
>> The issue tracker tracks which branches the bug affects. But IMO it does
>> not need to track whether the backport already happened, because the
>> information that the backport needs to happen is in the commit itself
>> (see above).
> 
> For me it is important to have the info that the branch is still unfixed
> because as I explained, the presence of a given commit is not equivalent
> to the issue being fixed. A commit is for a branch. It will often backport
> as a 1-to-1 to the closest branches, but 10% of the time you need to
> backport extra stuff as well that is not part of the fix but which the
> fix uses, and sometimes you figure that the issue is still not completely
> fixed despite the backport being there because it's more subtle.
> 
>>> it will only solve the problem for this branch. For example, Veiko Kukk
>>> reported in November that compression in 1.7.11 was broken again. How do
>>> I know this ? Just because I've added an entry for this in my TODO file.
>>> This bug is apparently a failed backport, so it requires that the original
>>> bug is reopened and that any backport attempt to an older version is paused.
>>
>> Is the failed backport a new bug or is it not? I'd say it's a new bug,
>> because the situation changed. It's a new bug (someone messed up the
>> backport) that affects haproxy-1.7, but does not affect haproxy-dev. You
>> describe it as an old bug that needs to be re-opened.
> 
> For me it's not a new bug at all, it's the same description. Worse, often
> it will even be the one the reporter used! For example someone might report
> an issue with 1.7, that we diagnose covers 1.7 to 2.0-dev. We finally find
> the bug and if it in 2.0-dev then backport it. The backport stops working
> when reaching 1.7. It's hard to claim it's a new bug while it exactly is the
> bug the person reported! Doing otherwise would make issue lookups very
> cumbersome, even more than the mailing list archives where at least you
> can sort by threads. Thus for me it's only the status in the old branch
> which is not resolved. It's also more convenient for users looking for a
> solution to figure that the same bug is already fixed in 1.8 and that
> possibly an upgrade would be the path to least pain.
> 
>>> You'll note that for many of them the repository is only a mirror by
>>> the way, so that's another hint.
>>
>> I suspect the reason is simple: The project already had a working issue
>> tracker that predated GitHub. Many of these projects are way older than
>> GitHub.
> 
> It's possible.
> 
>> Here's some more recent projects that probably grew up with GitHub. I
>> can't comment how they do the backports, though:
>>
>> https://github.com/nodejs/node/issues (has LTS / Edge)
>> https://github.com/zfsonlinux/zfs/issues (has stable / dev)
>> https://github.com/antirez/redis/issues
>> https://github.com/moby/moby/issues (tons of automation based on an
>> issue template)
> 
> I only knew 3 of them by name and never used any ;-)
> 
> Node is interesting here. the have tags per affected version. E.g.
> https://github.com/nodejs/node/issues/25221

I like this as then you can see all affected versions for an issue and PR.

> I tend to think that if labels already mark the relevance to a branch,
> then they override the status and probably we don't really care about
> the status. The "moby" project above does that by the way, wit

Re: Question about Maglev algorithm

2018-12-29 Thread Aleksandar Lazic
On 29.12.2018 at 19:25, Valentin Vidic wrote:
> On Sat, Dec 29, 2018 at 06:03:51PM +0100, Aleksandar Lazic wrote:
>> I thought I had misunderstood the idea behind Maglev, thanks for 
>> the clarification.
> 
> Found another mention of Maglev [Eis16] for high-level load balancing (between
> datacenters):
> 
>   https://landing.google.com/sre/sre-book/chapters/load-balancing-frontend/

Thanks.

As explained by Willy, the Eis16 approach is for IP packets, I think.

```
.
.

Our current VIP load balancing solution [Eis16] uses packet encapsulation. A
network load balancer puts the forwarded packet into another IP packet with
Generic Routing Encapsulation (GRE) [Han94], and uses a backend’s address as the
destination. A backend receiving the packet strips off the outer IP+GRE layer
and processes the inner IP packet as if it were delivered directly to its
network interface. The network load balancer and the backend no longer need to
exist in the same broadcast domain; they can even be on separate continents as
long as a route between the two exists.
.
.

```

As more and more SDNs are in use in Kubernetes environments, I'm asking myself
whether this algorithm could have some benefit. The network setup, and IT in
general, have a high rate of change; let's keep it in mind and see what the
future brings or requires ;-)

QUIC is coming "quick" over the edge, and this will change a lot, especially for
reverse proxies like haproxy, IMHO.

Regards
Aleks



Re: Question about Maglev algorithm

2018-12-29 Thread Aleksandar Lazic
On 29.12.2018 at 07:41, Willy Tarreau wrote:
> On Fri, Dec 28, 2018 at 07:55:11PM +0100, Aleksandar Lazic wrote:
>> Well, as far as I understood the pdf, one of the biggest differences is that
>> Maglev is a distributed system whereas the consistent hash is for a local system.
> 
> No, not at all. The difference is that it's designed for packet processing
> so they have to take care of connection tracking and per-packet processing
> cost. From what I've read in the paper, it could be seen as a subset of
> what we already do :
>   - server weights are not supported in Maglev (and very likely not needed)
>   - slow start is not supported
>   - server insertion/removal can be extremely expensive (O(N^2)) due to the
> way they need to build the hash table for fast lookup
>   - no possibility for bounded load either
> 
> It's really important to understand the different focus of the algorithm,
> being packet-oriented instead of L7-oriented. This explains a number of
> differences and choices. I think Maglev is excellent for what it does and
> that our mechanism wouldn't be as fast if used on a per-packet basis. But
> conversely, we already do the same and even much more by default because
> we work at a different layer.

I thought I had misunderstood the idea behind Maglev, thanks for the clarification.

> Willy

Cheers
Aleks



Re: Question about Maglev algorithm

2018-12-28 Thread Aleksandar Lazic
Well, as far as I understood the pdf, one of the biggest differences is that 
Maglev is a distributed system whereas the consistent hash is for a local system.

What I think is that if the consistent hash used the peers table for balancing, 
it could be similar to Maglev, but I'm not an algo expert, it's just an idea.

I don't know if it has any benefits for haproxy; I saw this algo on 
the envoy site and wanted to know what the experts here think about it :-)

https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/load_balancing/load_balancers

Regards
Aleks


 Original Message 
From: Aaron West 
Sent: 28 December 2018 19:36:03 CET
To: HAProxy 
Subject: Re: Question about Maglev algorithm

I've not used it yet with IPVS because I have nothing with a new enough
Kernel (4.18+ I think), however, isn't this quite similar to HAProxy's
consistent hash options?

Aaron
Loadbalancer.org
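
For readers wondering what those options look like, a minimal sketch (servers
are placeholders): with `hash-type consistent`, adding or removing a server only
remaps a small share of the keys, which is the property Maglev also aims for:

```
backend app
    balance uri          # hash the request URI (source, hdr(...), url_param(...) also work)
    hash-type consistent # consistent hashing: minimal remapping when servers change
    server s1 192.168.0.11:80 check
    server s2 192.168.0.12:80 check
    server s3 192.168.0.13:80 check
```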


Question about Maglev algorithm

2018-12-28 Thread Aleksandar Lazic
Hi.

Has anyone taken a look into the Maglev algorithm?

This paper looks very interesting 
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44824.pdf

Regards
Aleks



Tweet about Facebook's implementation and deployment of QUIC

2018-12-28 Thread Aleksandar Lazic
Hi.

I just have seen this tweet, maybe it's also interesting for you.

Subodh Iyengar (@__subodh) tweeted at 10:18 p.m. on Wed., Dec 26, 2018:

Slides for my presentation at ACM conext on Facebook's implementation and 
deployment of QUIC are now live 
https://conferences2.sigcomm.org/co-next/2018/slides/epiq-keynote.pdf . I 
presented some numbers on latency reductions we've seen so far as well as the 
new load balancing infrastructure we've built for QUIC.
(https://twitter.com/__subodh/status/1078037284908261377)

Regards
Aleks



RE: Http HealthCheck Issue

2018-12-27 Thread Aleksandar Lazic
Hi Praveen.

That's because the http health check is on a different network layer.

The server line defines the tcp layer and the http check the application layer.

Please take a look into the doc about check and http-check.

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.2-check

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-option%20httpchk

As I don't know how deep your knowledge of the different layers is, let me 
suggest this article to refresh it.

https://en.m.wikipedia.org/wiki/TCP/IP_model
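
To make the layering concrete, a minimal sketch (hostnames are placeholders):
the address on each server line is where the TCP connection goes, while the Host
header inside the check is the application-level vhost and stays identical for
every server:

```
backend bk_nexus
    # application layer: the same vhost name in the check for every server
    option httpchk GET /healthcheck.txt HTTP/1.1\r\nHost:\ nexus.example.com
    http-check expect status 200
    # tcp layer: each server line is a distinct address to connect to
    server primary server1.com:8093 check
    server backup1 server2.com:8093 check backup
```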

Regards
aleks


 Original Message 
From: "UPPALAPATI, PRAVEEN" 
Sent: 27 December 2018 06:24:43 CET
To: Aleksandar Lazic , haproxy 
Subject: RE: Http HealthCheck Issue

Hi Alex,

If I have one vhost representing all the nexus hosts, how can haproxy identify 
which server is down?

I guess the health check is to determine which server is healthy, right? If a vhost 
masks all the servers in the backend list, how would haproxy divert the traffic?

Please advise.

Thanks,
Praveen.

-Original Message-----
From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
Sent: Thursday, December 20, 2018 2:34 AM
To: UPPALAPATI, PRAVEEN ; haproxy 
Subject: Re: Http HealthCheck Issue

Hi Praveen.

Please keep the list in the loop, thanks.

On 20.12.2018 at 07:00, UPPALAPATI, PRAVEEN wrote:
> Hi Alek,
> 
> Now I am totally confused:
> 
> When I say :
> 
> 
> backend bk_8093_read
> balancesource
> http-response set-header X-Server %s
> option log-health-checks
> option httpchk GET 
> /nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt 
> HTTP/1.1\r\nHost:\ server1.com:8093\r\nAuthorization:\ Basic\ ...
> server primary8093r server1.com:8093 check verify none
> server backUp08093r server2.com:8093 check backup verify none 
> server backUp18093r server3.com:8093 check backup verify none
> 
> Here server1.com,server2.com and physical server hostnames.

That's the issue. You should have a vhost name which is the same for all 
servers.

> When I define HOST header which server should I define I was expecting 
> haproxy will formulate that to 
> 
> https://urldefense.proofpoint.com/v2/url?u=http-3A__server1.com-3A8093_nexus_repository_rawcentral_com.att.swm.attpublic_healthcheck.txt&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=V0kSKiLhQKpOQLIjj3-g9Q&m=OZ-O9xFFKCfaCy769FVtGWPRgXm2eydV92WYJPam8Yg&s=--6orxuqJAZSK_-qumJjuZEhy1Iru7mkgPPJvYpw4RQ&e=
> https://urldefense.proofpoint.com/v2/url?u=http-3A__server2.com-3A8093_nexus_repository_rawcentral_com.att.swm.attpublic_healthcheck.txt&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=V0kSKiLhQKpOQLIjj3-g9Q&m=OZ-O9xFFKCfaCy769FVtGWPRgXm2eydV92WYJPam8Yg&s=eJkYBH7JAMAIU1ZIHXCFbOs3RLA0OtMHp1ky_rN-d7s&e=
> https://urldefense.proofpoint.com/v2/url?u=http-3A__server3.com-3A8093_nexus_repository_rawcentral_com.att.swm.attpublic_healthcheck.txt&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=V0kSKiLhQKpOQLIjj3-g9Q&m=OZ-O9xFFKCfaCy769FVtGWPRgXm2eydV92WYJPam8Yg&s=sUUE7fa4pe0lN8iFaAuznj-m8sgwQS2X2xeeva-yCEM&e=
> 
> to monitor which servers are live, right? So how could I dynamically populate 
> the HOST?

I have written the following:

> I assume that nexus have a general URL and not srv1,srv2, ...

This is also mentioned in the config example.

https://urldefense.proofpoint.com/v2/url?u=https-3A__help.sonatype.com_repomanager2_installing-2Dand-2Drunning_running-2Dbehind-2Da-2Dreverse-2Dproxy&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=V0kSKiLhQKpOQLIjj3-g9Q&m=OZ-O9xFFKCfaCy769FVtGWPRgXm2eydV92WYJPam8Yg&s=vJtNj9Bd4EYr_kdknJiGcOOfWnV6Mgna31-JAbiKwq4&e=

This means there should be a generic hostname on the nexus which is the same for
all servers.

Maybe you can share the nexus config, nexus setup and the nexus version, as it's
normally not a big deal to setup a vhost.

Regards
Aleks


PS: What's this urldefense.proofpoint.com crap 8-O

> Please advice.
> 
> Thanks,
> Praveen.
> 
> 
> 
> -Original Message-
> From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
> Sent: Wednesday, December 19, 2018 3:25 PM
> To: UPPALAPATI, PRAVEEN 
> Cc: Jonathan Matthews ; Cyril Bonté 
> ; haproxy@formilux.org
> Subject: Re: Http HealthCheck Issue
> 
> Am 19.12.2018 um 21:04 schrieb UPPALAPATI, PRAVEEN:
>> Ok then do I need to add the haproxy server?
> 
> I suggest to use a `curl -v
> /nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt`
> and see how curl make the request.
> 
> I assume that nexus have a general URL and not srv1,srv2, ...
> For example.
> 
> ###
> curl -vo /dev/null 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.haproxy.org&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&

Re: haproxy AIX 7.1.0.0 compile issues

2018-12-26 Thread Aleksandar Lazic

Hi Patrick.

On 26-12-2018 22:26, Overbey, Patrick (Sioux Falls) wrote:


Hello,

First off, I want to say thank you for your hard work on haproxy. It is 
a very fine piece of software.
I recently ran into a bug compiling haproxy 1.9+ on AIX 7.1 
7100-00-03-1115 using gmake 4.2 and gcc 8.1.0. I previously had 1.8 
compiling using this same setup with a few minor code changes to 
variable names in vars.c and connection.c. I made those same changes in 
version 1.9, but have now run into a compile issue that I cannot get 
around due to initcall.h being new in 1.9.


Please can you tell us which `minor code changes` were necessary to be 
able to compile on AIX 7.1.



Here are the compile errors I am seeing.

ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more 
information.


What do you get when you add `-bnoquiet` to the LDFLAGS?
Please can you share the compile output with `V=1`  activated.


ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_PREPARE

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_PREPARE

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_LOCK

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_LOCK

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_ALLOC

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_ALLOC

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_POOL

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_POOL

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_REGISTER

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_REGISTER

ld: 0711-317 ERROR: Undefined symbol: __start_init_STG_INIT

ld: 0711-317 ERROR: Undefined symbol: __stop_init_STG_INIT

Would anyone have suggestions for how to fix this?

Also, as a note I am able to compile haproxy 1.5 out of the box, but 
starting with version 1.6 is where I run into compile errors. Is there 
support for these compile bugs or am I on my own?


Please can you be more precise about which compile errors you got, thanks.


Thanks for any help you can offer.

PATRICK OVERBEY

Software Development Engineer Staff

Product Development/Bank Solutions

Office: 605-362-1260  x7290

FISERV




Re: HA Proxy Load Balancer

2018-12-21 Thread Aleksandar Lazic
Hi Lance.

Please keep the list in the loop as there are several other persons which can
also help, thank you.

On 21.12.2018 at 14:49, Lance Melancon wrote:
> I hope this helps in what you are requesting. So this config works great but I
> need to redirect the server to a sub site as in myserver.net/site. We are
> looking for the exact syntax to add to the haproxy.cfg. I’m including my
> programmer that may understand your feedback better than myself. We did try
> several things referring to the documentation with no luck. Thanks!

A docx with embedded images is neither a very secure nor a common format on this
list, so let me copy the content of the docx here, comment it inline, and answer
below.

> Haproxy.cfg:
> global
>log /dev/log local0
>log /dev/log local1 notice
>chroot /var/lib/haproxy
>stats timeout 30s
>user haproxy
>group haproxy
>daemon
>maxconn 15000
> 
> defaults
>log global
>mode http
>option httplog
>option dontlognull
>timeout connect 5000
>timeout client 5
>timeout server 5
> 
> frontend myserver.net
>bind *:443
>mode tcp

Okay here is the problem.

As haproxy is only used for tcp proxying here, not for http, you will not be able
to do what you want.

https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-mode

>maxconn 15000
>default_backend hac_cluster
> 
> backend hac_cluster
>mode tcp
>balance leastconn
>server myserver 192.1.1.1:443 check maxconn 5000
>server myserver 192.1.1.2:443 check maxconn 5000
> 
>listen statistics
>bind *:80

I would not recommend putting statistics on port 80, but that's only my opinion.

>mode http
>stats enable
>stats hide-version
>stats refresh 30s
>stats show-node
>stats auth myserver:password   
>stats admin if TRUE
>stats uri /lbstats
> 
> 
> haproxy -vv
>> ## excerpt from image
> Version 1.7.8
> No compression libs, openssl, pcre nor lua support

On which platform is this haproxy running?
Is haproxy installed from the package management or was it built from source?

To be able to do what you want you will need to do the following steps.

* Install haproxy with openssl support

* get the certificates from the backend server and add them to haproxy

https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/
  - Pay attention that you copy the certificates into the chroot dir
>chroot /var/lib/haproxy

* create a frontend acl for the path `acl my_site path_beg -i /site`

* create a use_backend line `use_backend my_site if my_site`

* create a backend with the name `my_site` with the server line like
  `server myserver myserver.net: ...`
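
Putting these steps together with the redirect you asked about, a minimal
sketch (names and paths are placeholders):

```
frontend fe_https
    mode http
    bind *:443 ssl crt /var/lib/haproxy/certs/
    # send the bare host to the sub-site
    http-request redirect location /website code 302 if { path / }
    acl my_site path_beg -i /site
    use_backend my_site if my_site
    default_backend hac_cluster
```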

As I mentioned before, it's not an easy task to dig into this topic, therefore I
strongly recommend giving you and your programmer some time to understand how
load balancing on level 6 (TLS/SSL) + 7 (http) works.

Here are some links which could help to get a better picture of HAProxy and LB
in general.
http://www.haproxy.org/download/1.7/doc/intro.txt
https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/
https://www.haproxy.com/blog/introduction-to-haproxy-acls/

In any case please post logs, configs or anything else directly in the mail body,
so that the people who read this list via a console are able to follow it
without having to open a word document.

We are glad to help as long as we can read the mails ;-)

Very best regards
Aleks


> -Original Message-
> From: Aleksandar Lazic 
> Sent: Thursday, December 20, 2018 4:21 PM
> To: Lance Melancon 
> Cc: haproxy@formilux.org
> Subject: Re: HA Proxy Load Balancer
> 
>  
> 
> 
>  
> 
>  
> 
>  
> 
> Hi Lance.
> 
>  
> 
> Am 20-12-2018 21:41, schrieb Lance Melancon:
> 
>> Thanks for the info. Unfortunately I am not a programmer by a long
> 
>> shot and syntax is a big problem for me. I tried a few things but no
> 
>> luck and I can't find any examples of a redirect.
> 
>> So do I need both the backend and acl statements?
> 
>> I'm simply trying to use mysite.net to direct to mysite.net/website.
> 
>> Any time I use a / the config fails.
> 
>  
> 
> I'm not sure if you have read and understood my last mail?
> 
> Do you have time to dig into this topic? It isn't a quick shot, AFAIK.
> 
>  
> 
> We need some more infos to be able to help you.
> 
>  
> 
>> haproxy -vv
> 
>> a

Re: HA Proxy Load Balancer

2018-12-20 Thread Aleksandar Lazic

Hi Lance.

On 20-12-2018 21:41, Lance Melancon wrote:

Thanks for the info. Unfortunately I am not a programmer by a long
shot and syntax is a big problem for me. I tried a few things but no
luck and I can't find any examples of a redirect.
So do I need both the backend and acl statements?
I'm simply trying to use mysite.net to direct to mysite.net/website.
Any time I use a / the config fails.


I'm not sure if you have read and understood my last mail?
Do you have time to dig into this topic? It isn't a quick shot, AFAIK.


We need some more infos to be able to help you.


haproxy -vv
anonymized config


Regards
Aleks


-Original Message-----
From: Aleksandar Lazic 
Sent: Thursday, December 20, 2018 2:00 PM
To: Lance Melancon 
Cc: haproxy@formilux.org
Subject: Re: HA Proxy Load Balancer




Hi Lance.

On 20-12-2018 18:20, Lance Melancon wrote:


We are testing the load balancer and it's working but I can't see how
to direct the server to a specific website such as server.net/site. Is
this possible? Syntax? Thanks!


Well yes. I think it is a good starting point to read and understand
this blog article.

https://www.haproxy.com/blog/using-haproxy-as-an-api-gateway-part-1/

What you want to do is "HTTP Routing"

For example a short snipplet
###

acl my_site path_beg -i /site

...
use_backend my_site if my_site

###

It would help a lot to have some more information from you, like:

haproxy -vv
anonymized config

As we don't know how much knowledge you have about http, I want to
point out that "server.net/site" has 2 parts.

Host: server.net
Path: /site

This is explained in detail in the doc.
http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#1

Hth
Aleks






Re: [ANNOUNCE] haproxy-1.9.0

2018-12-20 Thread Aleksandar Lazic

Hi Willy.

On 20-12-2018 10:29, Willy Tarreau wrote:

On Thu, Dec 20, 2018 at 09:17:00AM +0100, Aleksandar Lazic wrote:
Runtime API Improvements: It would be nice if you added a note that hanging or
dead processes can also be debugged with this API now. Maybe I have
overlooked it.


It is already the case. It's not shown in the article, but the main benefit
that comes from this mechanism is that you have the list of current and old
processes, and that you can access them all. It is something I've been
missing for a long time, to have a CLI connection to an old process that
does not want to die, to see what's happening.


Yep. Exactly that info would be nice to have in the article.
It's a USP (unique selling proposition) IMHO ;-)

Server Queue Priority Control: It would be nice to have an example of a
server decision based on Server Queue Priority.


It's not a server decision, it's a dequeuing decision. To give you a concrete
example of what I'm using on my build farm, I'm using haproxy to load-balance
distcc traffic to a bunch of build nodes. Some files are large and slow to
compile, others are small and build fast. The time it takes to build the
largest file can be almost as long as the total build time. If these files
start to build after the other ones, the total build time increases because
I have to wait for such a large file to be built on one node at the end. So
instead what I'm doing is that I sort the queue by file size. Each time a
connection slot is available on a server, instead of picking the oldest
entry, haproxy picks the one with the largest file. This way large files
start to build before small ones and the build completes much faster.

But there is a caveat to doing this: if you have a very large number of
large files, you can leave small files starving in the queue till the
timeout strikes. That's what happened to me when building my kernels. So
I changed from set-priority-class to set-priority-offset to address this.

Now large files are built up to 10 seconds earlier and small files are
built up to 10 seconds later. By doing this I can respect both the size
ordering and bound the distance between the extremes, and I don't have
a timeout anymore.


As always well explained thanks.


This is what my config looks like (with the old set-priority-class
still present but commented out), I'm copy-pasting it here because I
think it's self-explanatory :


Cool thanks.

# wait for the payload to arrive before connecting to the server
tcp-request inspect-delay 1m
tcp-request content reject unless { distcc_param(DOTI) -m found }


# convert kilobytes to classes (negated)
# test: tcp-request content set-var(sess.size) str(2b95)
tcp-request content set-var(sess.size) distcc_param(DOTI)
tcp-request content set-var(sess.prio) var(sess.size),div(-1024),add(2047)
tcp-request content set-var(sess.prio) int(0)    if { var(sess.prio) -m int lt 0 }
tcp-request content set-var(sess.prio) int(2047) if { var(sess.prio) -m int gt 2047 }
#tcp-request content set-priority-class var(sess.prio)
# offset up to +10 seconds for small files
tcp-request content set-priority-offset var(sess.prio),mul(5)

balance leastconn
default-server on-marked-down shutdown-sessions

# mcbin: 4xA72: their 4 CPUs must be used before any of the miqi

server lg1 192.168.0.201:3632 check weight 5 maxconn 4
server lg2 192.168.0.202:3632 check weight 5 maxconn 4
server lg3 192.168.0.203:3632 check weight 5 maxconn 4
server lg4 192.168.0.204:3632 check weight 5 maxconn 4
server lg5 192.168.0.205:3632 check weight 5 maxconn 4
server lg6 192.168.0.206:3632 check weight 5 maxconn 4
server lg7 192.168.0.207:3632 check weight 5 maxconn 4

# miqi: 4xA17
server miqi-1 192.168.0.225:3632 check weight 1 maxconn 4
server miqi-2 192.168.0.226:3632 check weight 1 maxconn 4
server miqi-3 192.168.0.227:3632 check weight 1 maxconn 4
server miqi-4 192.168.0.228:3632 check weight 1 maxconn 4
server miqi-5 192.168.0.229:3632 check weight 1 maxconn 4
server miqi-6 192.168.0.230:3632 check weight 1 maxconn 4
server miqi-7 192.168.0.231:3632 check weight 1 maxconn 4
server miqi-8 192.168.0.232:3632 check weight 1 maxconn 4
server miqi-9 192.168.0.233:3632 check weight 1 maxconn 4
server miqi-a 192.168.0.234:3632 check weight 1 maxconn 4

Willy


Regards
Aleks



Re: HA Proxy Load Balancer

2018-12-20 Thread Aleksandar Lazic

Hi Lance.

On 20-12-2018 18:20, Lance Melancon wrote:

We are testing the load balancer and it's working but I can't see how 
to direct the server to a specific website such as server.net/site. Is 
this possible? Syntax? Thanks!


Well yes. I think it is a good starting point to read and understand 
this blog article.


https://www.haproxy.com/blog/using-haproxy-as-an-api-gateway-part-1/

What you want to do is "HTTP Routing"

For example, a short snippet:
###

acl my_site path_beg -i /site

...
use_backend my_site if my_site

###

It would help a lot to have some more information from you, like:

haproxy -vv
anonymized config

As we don't know how much knowledge you have about HTTP, I want to point out
that "server.net/site" has 2 parts.


Host: server.net
Path: /site
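
To route on both parts, a minimal sketch (the acl and backend names are made up):

###
acl host_site hdr(host) -i server.net
acl path_site path_beg  -i /site

use_backend bk_site if host_site path_site
###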

This is explained in detail in the doc.
http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#1

Hth
Aleks

CYPRESS-FAIRBANKS ISD CONFIDENTIALITY NOTICE: This email, including any 
attachments, is for the sole use of the intended recipient(s) and may 
contain confidential student and/or employee information. Unauthorized 
use and/or disclosure is prohibited under federal and state law. If you 
are not the intended recipient, you may not use, disclose, copy or 
disseminate this information. Please call the sender immediately or 
reply by email and destroy all copies of the original message, 
including any attachments. Unless expressly stated in this e-mail, 
nothing in this message should be construed as a digital or electronic 
signature.




Re: Http HealthCheck Issue

2018-12-20 Thread Aleksandar Lazic
Hi Praveen.

Please keep the list in the loop, thanks.

On 20.12.2018 at 07:00, UPPALAPATI, PRAVEEN wrote:
> Hi Alek,
> 
> Now I am totally confused:
> 
> When I say :
> 
> 
> backend bk_8093_read
> balance source
> http-response set-header X-Server %s
> option log-health-checks
> option httpchk GET 
> /nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt 
> HTTP/1.1\r\nHost:\ server1.com:8093\r\nAuthorization:\ Basic\ ...
> server primary8093r server1.com:8093 check verify none
> server backUp08093r server2.com:8093 check backup verify none 
> server backUp18093r server3.com:8093 check backup verify none
> 
> Here server1.com, server2.com and server3.com are physical server hostnames.

That's the issue. You should have a vhost name which is for all servers the 
same.

> When I define HOST header which server should I define I was expecting 
> haproxy will formulate that to 
> 
> http://server1.com:8093/nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt
> http://server2.com:8093/nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt
> http://server3.com:8093/nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt
> 
> to monitor which servers are live right , so how could I dynamically populate 
> the HOST?

I have written the following:

> I assume that nexus has a general URL and not srv1, srv2, ...

This is also mentioned in the config example.

https://help.sonatype.com/repomanager2/installing-and-running/running-behind-a-reverse-proxy

This means there should be a generic hostname on the nexus which is the same
for all servers.
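
For illustration, a minimal sketch assuming such a generic vhost exists (the
name nexus.example.com is made up):

###
backend bk_8093_read
    option httpchk GET /nexus/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt HTTP/1.1\r\nHost:\ nexus.example.com
    server primary8093r server1.com:8093 check verify none
    server backUp08093r server2.com:8093 check backup verify none
    server backUp18093r server3.com:8093 check backup verify none
###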

Maybe you can share the nexus config, nexus setup and the nexus version, as it's
normally not a big deal to set up a vhost.

Regards
Aleks


PS: What's this urldefense.proofpoint.com crap 8-O

> Please advice.
> 
> Thanks,
> Praveen.
> 
> 
> 
> -Original Message-
> From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
> Sent: Wednesday, December 19, 2018 3:25 PM
> To: UPPALAPATI, PRAVEEN 
> Cc: Jonathan Matthews ; Cyril Bonté 
> ; haproxy@formilux.org
> Subject: Re: Http HealthCheck Issue
> 
> On 19.12.2018 at 21:04, UPPALAPATI, PRAVEEN wrote:
>> Ok then do I need to add the haproxy server?
> 
> I suggest to use a `curl -v
> /nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt`
> and see how curl makes the request.
> 
> I assume that nexus has a general URL and not srv1, srv2, ...
> For example.
> 
> ###
> curl -vo /dev/null 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.haproxy.org&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=V0kSKiLhQKpOQLIjj3-g9Q&m=CaQ1GDp8D6XzObaEV3Ad9IQ3Q1TwhAAYhFQ24IgwP68&s=53v5RKBVFzzKyU7JGcd8i6eBlGyIfSavQBRkoYcXZm8&e=
> 
> * Rebuilt URL to: 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.haproxy.org_&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=V0kSKiLhQKpOQLIjj3-g9Q&m=CaQ1GDp8D6XzObaEV3Ad9IQ3Q1TwhAAYhFQ24IgwP68&s=k3KIXBm21aPgFFyUUyjSUxggMPVWIqkWYwdaWeDZ6NI&e=
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0*
>   Trying 51.15.8.218...
> * Connected to 
> https://urldefense.proofpoint.com/v2/url?u=http-3A__www.haproxy.org&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=V0kSKiLhQKpOQLIjj3-g9Q&m=CaQ1GDp8D6XzObaEV3Ad9IQ3Q1TwhAAYhFQ24IgwP68&s=WMHYWYJP4ycNXvPYov7PnJdkQB26fgfXm_ByW2BCM8g&e=
>  (51.15.8.218) port 443 (#0)
> * found 148 certificates in /etc/ssl/certs/ca-certificates.crt
> * found 599 certificates in /etc/ssl/certs
> * ALPN, offering http/1.1
> * SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
> *server certificate verification OK
> *server certificate status verification SKIPPED
> *common name: *.haproxy.org (matched)
> *server certificate expiration date OK
> *server certificate activation date OK
> *certificate public key: RSA
> *certificate version: #3
> *subject: OU=Domain Control Validated,OU=EssentialSSL
> Wildcard,CN=*.haproxy.org
> *start date: Fri, 21 Apr 2017 00:00:00 GMT
> *expire date: Mon, 20 Apr 2020 23:59:59 GMT
> *issuer: C=GB,ST=Greater Manchester,L=Salford,O=COMODO CA
> Limited,CN=COMODO RSA Domain Validation Secure Server CA
> *compression: NULL
> * ALPN, server accepted to use http/1.1
>> GET / HTTP/1.1
>> Host: 
>> https://urldefense.proofpoint.com/v2/url?u=http-3A__www.haproxy.org&d=DwIFaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=V0kSKiLhQKpOQLIjj3-g9Q&m=CaQ1GDp8D6XzObaEV3Ad9IQ3Q

Re: [ANNOUNCE] haproxy-1.9.0

2018-12-20 Thread Aleksandar Lazic
On 20.12.2018 at 06:48, Willy Tarreau wrote:
> On Wed, Dec 19, 2018 at 11:31:33PM +0100, Aleksandar Lazic wrote:
>>> Well, I know that so quick a summary doesn't do justice to the developers
>>> having done all this amazing work, but I've seen that some of my coworkers
>>> have started to write an article detailing all these new features, so I
>>> won't waste my time paraphrasing them. I'll pass the URL here once this
>>> article becomes public. No, I'm not lazy, I'm tired and hungry ;-)
> 
> And here comes the link, it's more detailed than above :
> 
> https://www.haproxy.com/blog/haproxy-1-9-has-arrived/

Well written ;-).

2 suggestions.

Runtime API Improvements: It would be nice if you added a section mentioning that
hanging or dead processes can also be debugged with this API now. Maybe I have
overlooked it.

Server Queue Priority Control: It would be nice to have an example for a server
decision based on Server Queue Priority.

> I still have to catch up with a large number of unresponded e-mails, and
> once done, I'll send another update explaining how I hope we can organize
> our work better for the next steps.
> 
>> Amazing work to the whole team.
>>
>> Take some good food, which is easy in France ;-), and a deep sleep to recharge
>> your batteries; you and your team more than deserve it.
> 
> Thanks Aleks, now both done. Very good cassoulet less than 2km away from
> the office ;-)

;-)

> Cheers,
> Willy

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.0

2018-12-19 Thread Aleksandar Lazic
Hi.

On 19.12.2018 at 19:33, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9.0 was released on 2018/12/19. It added 45 new commits
> after version 1.9-dev11.
> 
> We still had a number of small issues causing the various artefacts that
> have been visible on haproxy.org since this week-end, but now everything
> looks OK. So it's better to release before we discover new issues :-)

Good Idea ;-)

The image is available.

###
docker run --rm --entrypoint /usr/local/sbin/haproxy me2digital/haproxy19 -vv
HA-Proxy version 1.9.0 2018/12/19 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1a  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.1a  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
###

> Speaking more seriously, in the end, what we expected to be just a technical
> release looks pretty nice on the features perspective. The features by
> themselves are not high-level but address a wide number of integration
> cases that overall make this version really appealing.
> 
> In the end 1.9 brings to end users (as a quick summary) :
>   - end-to-end HTTP/2
>   - advanced master process management with its own CLI
>   - much more scalable multi-threading
>   - regression test suite
>   - priority-based dequeueing
>   - better cache supporting larger objects
>   - early hints (HTTP 103)
>   - cipher suites for TLS 1.3
>   - random balance algorithm
>   - fine-grained timers for better observability
>   - stdout logging for containers and systemd
> 
> And the rest which has kept us very busy was needed to achieve this and
> to pave the way to future developments and more contributions from people
> who won't have to know the internals as deeply as it used to be needed.
> It's expected that the road to 2.0 will be calmer now.
> 
> Well, I know that so quick a summary doesn't do justice to the developers
> having done all this amazing work, but I've seen that some of my coworkers
> have started to write an article detailing all these new features, so I
> won't waste my time paraphrasing them. I'll pass the URL here once this
> article becomes public. No, I'm not lazy, I'm tired and hungry ;-)

Amazing work to the whole team.

Take some good food, which is easy in France ;-), and a deep sleep to recharge
your batteries; you and your team more than deserve it.

> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Sources  : http://www.haproxy.org/download/1.9/src/
>Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
> 
> Willy

Very best regards
aleks

> ---
> Complete changelog :
> Christopher Faulet (7):
>   BUG/MEDIUM: compression: Use the right buffer pointers to compress 
> input data
>   BUG/MINOR: mux_pt: Set CS_FL_WANT_ROOM when count is zero in rcv_buf() 
> callback
>   BUG/MEDIUM: stream: Forward the right amount of data before infinite 
> forwarding
>   BUG/MINOR: proto_htx: Call the HTX version of the function managing 
> client cookies

Re: Http HealthCheck Issue

2018-12-19 Thread Aleksandar Lazic
On 19.12.2018 at 21:04, UPPALAPATI, PRAVEEN wrote:
> Ok then do I need to add the haproxy server?

I suggest to use a `curl -v
/nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt`
and see how curl makes the request.

I assume that nexus has a general URL and not srv1, srv2, ...
For example.

###
curl -vo /dev/null https://www.haproxy.org

* Rebuilt URL to: https://www.haproxy.org/
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0*
  Trying 51.15.8.218...
* Connected to www.haproxy.org (51.15.8.218) port 443 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 599 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.haproxy.org (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,OU=EssentialSSL Wildcard,CN=*.haproxy.org
* start date: Fri, 21 Apr 2017 00:00:00 GMT
* expire date: Mon, 20 Apr 2020 23:59:59 GMT
* issuer: C=GB,ST=Greater Manchester,L=Salford,O=COMODO CA Limited,CN=COMODO RSA Domain Validation Secure Server CA
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: www.haproxy.org #  That's the host header which is missing in your
check line
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< date: Wed, 19 Dec 2018 21:09:30 GMT
< server: Apache
< last-modified: Wed, 19 Dec 2018 18:32:39 GMT
< etag: "504ff5-148d4-57d643d22eab7"
< accept-ranges: bytes
< content-length: 84180
< content-type: text/html
< age: 511
<
{ [16150 bytes data]

###

Btw: this is also shown in the manual, in the example for this option.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-option%20httpchk

`option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www `

The manual is good; I suggest reading it several times, which is what I always do ;-)

You should also avoid the `X-` prefix in the response header in your config:

`http-response set-header X-Server %s`

As Norman mentioned on the list a couple of days ago:

https://www.mail-archive.com/haproxy@formilux.org/msg32110.html
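
For example, a renamed header could look like this (the name Served-By is just
a suggestion):

###
http-response set-header Served-By %s
###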

Best regards
Aleks

> -Original Message-
> From: Jonathan Matthews [mailto:cont...@jpluscplusm.com] 
> Sent: Wednesday, December 19, 2018 1:32 PM
> To: UPPALAPATI, PRAVEEN 
> Cc: Cyril Bonté ; haproxy@formilux.org
> Subject: Re: Http HealthCheck Issue
> 
> On Wed, 19 Dec 2018 at 19:23, UPPALAPATI, PRAVEEN  wrote:
>>
>> Hmm. Wondering why do we need host header? I was able to do curl without the 
>> header. I did not find anything in the doc.
> 
> "curl" automatically adds a Host header unless you are directly
> hitting an IP address.
> 




Re: MQTT CONNECT parsing in Lua

2018-12-11 Thread Aleksandar Lazic
Hi Baptiste.

On 11.12.2018 at 03:29, Baptiste wrote:
> Hi guys,
> 
> At the last AWS conference, I met an engineer who was using HAProxy to
> load-balance IoT devices using the MQTT protocol, and he was complaining
> about the poor performance of the server with 10k devices just
> reconnecting.

Did you have any chance to ask the engineer whether your solution has better
performance than his?

> He pointed SSL performance but also authentication (validation of username /
> password).

Do you have any more details about his SSL/TLS performance problems?

> So I wrote a small MQTT library for HAProxy which allows parsing the MQTT
> CONNECT message, the very first one being sent by a client.
> The library allows the following:
> * validation of the message (through a converter)
> * fetch any field from the connect message (client id, username, password,
> etc...) for fun and profit (routing, persistence, rate or concurrent 
> connection
> enforcement, etc...)
> * write your own authentication validation module on top of HAProxy
> 
> The code is there, including some HAProxy configuration examples:
> https://github.com/bedis/haproxy_mqtt_lua
> 
> I hope this will be useful to some of you.
> I am planning to write in native C the converter and the fetch above.

In general, cool ;-)

> Baptiste

Regards
Aleks



Re: sample fetch: add bc_http_major

2018-12-07 Thread Aleksandar Lazic
Hi Jerome.

On 07.12.2018 at 15:37, Jerome Magnin wrote:
> Hi Aleks,
> 
> On Fri, Dec 07, 2018 at 01:46:53PM +0100, Aleksandar Lazic wrote:
>> Hi Jerome.
>> [...] 
>> I suggest to use a dedicated function for that, jm2c.
>>
>> { "bc_http_major", smp_fetch_bc_http_major, 0, NULL, SMP_T_SINT, 
>> SMP_USE_L4SRV },
>>
> 
> If you look at src/ssl_sock.c there are several fetches applying to both
> frontend and backend connection, and each pair uses the same function. I
> shamelessly copied^W^Wtook example from them.

Got it. Thanks for the answer.

> Jérôme

Regards
Aleks



Re: Simply adding a filter causes read error

2018-12-07 Thread Aleksandar Lazic
Hi.

On 07.12.2018 at 08:37, flamese...@yahoo.co.jp wrote:
> Hi
> 
> I tested more, and found that even with option http-pretend-keepalive enabled,
> if I increase the test duration, the read errors still appear.

Can you please show us some logs from when the error appears?
Can you also tell us some details about the server on which haproxy, wrk and
nginx are running, and what the network setup looks like?

Maybe you are reaching some system limits, as compression requires more OS/HW
resources.

Regards
Aleks

> Running 3m test @ http://10.0.3.15:8000
>   10 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    19.84ms   56.36ms   1.34s    92.83%
>     Req/Sec    23.11k     2.55k   50.64k    87.10%
>   45986426 requests in 3.33m, 36.40GB read
>   Socket errors: connect 0, read 7046, write 0, timeout 0
> Requests/sec: 229817.63
> Transfer/sec:    186.30MB
> 
> thanks
> 
> - Original Message -
> *From:* "flamese...@yahoo.co.jp" 
> *To:* Aleksandar Lazic ; "haproxy@formilux.org"
> 
> *Date:* 2018/12/7, Fri 09:06
> *Subject:* Re: Simply adding a filter causes read error
> 
> Hi,
> 
> Thanks for the reply, I thought the mail format was corrupted...
> 
> I tried option http-pretend-keepalive; it seems the read error is gone, but a
> timeout error was raised (maybe because of the 1000 connections of wrk).
> 
> Thanks
> 
> - Original Message -
> *From:* Aleksandar Lazic 
> *To:* flamese...@yahoo.co.jp; "haproxy@formilux.org" 
> 
> *Date:* 2018/12/6, Thu 23:53
> *Subject:* Re: Simply adding a filter causes read error
> 
> Hi.
> 
> On 06.12.2018 at 15:20, flamese...@yahoo.co.jp wrote:
> > Hi,
> >
> > I have a haproxy(v1.8.14) in front of several nginx backends,
> everything works
> > fine until I add compression in haproxy.
> 
> There is a similar thread about this topic.
> 
> https://www.mail-archive.com/haproxy@formilux.org/msg31897.html
> 
> Can you try to add this option in your config and see if the problem 
> is
> gone.
> 
> option http-pretend-keepalive
> 
> Regards
> Aleks
> 
> > My config looks like this:
> >
> > ### Config start #
> > global
> >     maxconn         100
> >     daemon
> >     nbproc 2
> >
> > defaults
> >     retries 3
> >     option redispatch
> >     timeout client  60s
> >     timeout connect 60s
> >     timeout server  60s
> >     timeout http-request 60s
> >     timeout http-keep-alive 60s
> >
> > frontend web
> >     bind *:8000
> >
> >     mode http
> >     default_backend app
> > backend app
> >     mode http
> >     #filter compression
> >     #filter trace 
> >     server nginx01 10.0.3.15:8080
> > ### Config end #
> >
> >
> > Lua script used in wrk:
> > a.lua:
> >
> > local count = 0
> >
> > request = function()
> >     local url = "/?count=" .. count
> >     count = count + 1
> >     return wrk.format(
> >     'GET',
> >     url
> >     )
> > end
> >
> >
> > 01. wrk test against nginx: everything is OK
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8080
> > Running 10s test @ http://10.0.3.15:8080
> >   2 threads and 1000 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency    34.83ms   17.50ms 260.52ms   76.48%
> >     Req/Sec    12.85k     2.12k   17.20k    62.63%
> >   255603 requests in 10.03s, 1.23GB read
> > Requests/sec:  25476.45
> > Transfer/sec:    125.49MB
> >
> >
> > 02. Wrk test against haproxy, no filters: everything is OK
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8000
> > Running 10s test @ http://10.0.3.15:8000

Re: sample fetch: add bc_http_major

2018-12-07 Thread Aleksandar Lazic
Hi Jerome.

On 07.12.2018 at 10:26, Jerome Magnin wrote:
> Hi,
> 
> the attached patch adds bc_http_major. It returns the HTTP major encoding of
> the backend connection, based on the on-wire encoding.

cool Idea ;-)

I suggest to use a dedicated function for that, jm2c.

{ "bc_http_major", smp_fetch_bc_http_major, 0, NULL, SMP_T_SINT, SMP_USE_L4SRV 
},
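
For anyone wondering how such a fetch would be used, a hedged sketch (the
header name is made up):

###
# expose the HTTP major version used towards the server
http-response set-header Backend-HTTP-Major %[bc_http_major]
###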


> Jérôme

Regards
aleks



Re: Simply adding a filter causes read error

2018-12-06 Thread Aleksandar Lazic
Hi.

On 06.12.2018 at 15:20, flamese...@yahoo.co.jp wrote:
> Hi,
> 
> I have a haproxy(v1.8.14) in front of several nginx backends, everything works
> fine until I add compression in haproxy.

There is a similar thread about this topic.

https://www.mail-archive.com/haproxy@formilux.org/msg31897.html

Can you try to add this option in your config and see if the problem is gone.

option http-pretend-keepalive
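
Applied to the backend from your config, that would be (a sketch):

###
backend app
    mode http
    option http-pretend-keepalive
    filter compression
    server nginx01 10.0.3.15:8080
###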

Regards
Aleks

> My config looks like this:
> 
> ### Config start #
> global
>     maxconn         100
>     daemon
>     nbproc 2
> 
> defaults
>     retries 3
>     option redispatch
>     timeout client  60s
>     timeout connect 60s
>     timeout server  60s
>     timeout http-request 60s
>     timeout http-keep-alive 60s
> 
> frontend web
>     bind *:8000
> 
>     mode http
>     default_backend app
> backend app
>     mode http
>     #filter compression
>     #filter trace 
>     server nginx01 10.0.3.15:8080
> ### Config end #
> 
> 
> Lua script used in wrk:
> a.lua:
> 
> local count = 0
> 
> request = function()
>     local url = "/?count=" .. count
>     count = count + 1
>     return wrk.format(
>     'GET',
>     url
>     )
> end
> 
> 
> 01. wrk test against nginx: everything is OK
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8080
> Running 10s test @ http://10.0.3.15:8080
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    34.83ms   17.50ms 260.52ms   76.48%
>     Req/Sec    12.85k     2.12k   17.20k    62.63%
>   255603 requests in 10.03s, 1.23GB read
> Requests/sec:  25476.45
> Transfer/sec:    125.49MB
> 
> 
> 02. Wrk test against haproxy, no filters: everything is OK
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8000
> Running 10s test @ http://10.0.3.15:8000
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    73.58ms  109.48ms   1.33s    97.39%
>     Req/Sec     7.83k     1.42k   11.95k    66.15%
>   155843 requests in 10.07s, 764.07MB read
> Requests/sec:  15476.31
> Transfer/sec:     75.88MB
> 
> 03. Wrk test against haproxy, add filter compression: read error
> 
> Change
> 
>     #filter compression
> ===>
>     filter compression
> 
> wrk -c 1000 -s a.lua http://10.0.3.15:8000
> Running 10s test @ http://10.0.3.15:8000
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    60.43ms   42.63ms   1.06s    91.54%
>     Req/Sec     7.86k     1.40k   10.65k    67.54%
>   157025 requests in 10.11s, 769.87MB read
>   Socket errors: connect 0, read 20, write 0, timeout 0
> Requests/sec:  15530.67
> Transfer/sec:     76.14MB
> 
> 04. Wrk test against haproxy, add filter trace, and update flt_trace.c:
> 
> static int
> trace_attach(struct stream *s, struct filter *filter)
> {
>         struct trace_config *conf = FLT_CONF(filter);
>         // added: ignore this filter to avoid a performance hit,
>         // since there are many trace prints
>         return 0;
> 
> And change
>     #filter compression
>     #filter trace
> ===>
>     #filter compression
>     filter trace
> 
> Running 10s test @ http://10.0.3.15:8000
>   2 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    64.88ms   77.91ms   1.09s    98.26%
>     Req/Sec     7.84k     1.47k   11.57k    67.71%
>   155800 requests in 10.05s, 763.86MB read
>   Socket errors: connect 0, read 21, write 0, timeout 0
> Requests/sec:  15509.93
> Transfer/sec:     76.04MB
> 
> 
> Is there any config error? Am I doing something wrong?
> 
> Thanks
> 




Re: [ANNOUNCE] haproxy-1.9-dev9 : the last mile

2018-12-03 Thread Aleksandar Lazic
On 02.12.2018 at 20:29, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9-dev9 was released on 2018/12/02. It added 147 new commits
> after version 1.9-dev8.

Image is now updated ;-)

https://hub.docker.com/r/me2digital/haproxy19/

###
HA-Proxy version 1.9-dev9 2018/12/02
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1a  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.1a  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace
###

I'm a little bit confused as there is no h1 in the protocol output above. I
think the "default" is h1.

> This version will give some of us a bit of relief. It is the first one in
> one year which finally integrates all the ongoing parallel development! I
> thought about calling it -rc1 but I remembered that mixing -rc and -dev
> causes some confusion, so it's dev9.
> 
> So we're starting to see the light at the end of the tunnel, which marks
> the hopefully soon completion of something that started really bad a
> year ago when we started to modify multiple interdependent areas at the
> same time, leaving each developer with limited ways to test their code
> in target situation. Just to illustrate, HTX development needed idle
> connections rework to get keep-alive, which required the connection
> management to be replaced, which in turn couldn't be tested without a
> multi-stream outgoing mux (H2), which itself couldn't be implemented
> before HTX... Now the loop is closed!
> 
> It doesn't mean it works well. Oh no, rest assured it's still full of
> awful bugs! But for the first time we'll be able to focus on integration
> bugs instead of focusing on bugs caused by temporary code, to experience
> integration failures later as we've been doing for one year.
> 
> So from now on, we'll focus on testing, bug fixing, code cleanups,
> polishing, and documentation. No new big stuff is expected anymore. In the
> worst case we'll revert certain things if they look too broken and appear
> unfixable (let's hope it will not be the case).
> 
> I'm quite hopeful in the next steps, having seen how all the infrastructure
> changes managed over time to much better isolate bugs, resulting in their
> diagnostic and fixing to be significantly faster than in the past.
> 
> Among the new stuff that we finally managed to integrate with this series,
> I can name :
>   - near completion of HTX porting : everything is supported, even
> redirects/http-rules/errorfiles, except Lua and cache (which may or
> may not be addressed soon, I've heard that the showstopper was the
> filters API but it's been addressed now). HTX opens new possibilities
> by remains in experimental status at release date. Ideally for 2.0
> we should have same coverage for legacy and HTX modes so that we can
> remove the legacy mode for 2.1.
> 
>   - server-side connection multiplexing (needed for H2, H3 later, maybe
> one day FCGI, who knows)

I hope that the FCGI solutions move to uwsgi, as this tool has a great number of
languages implemented to run as apps, and it speaks HTTP(S).

>   - server-side connection pooling : idle connections don't quit anymore
> if the front connection vanishes, they can remain on the server to be
> used by another one. Expect a few minor adjustments on this part soon.

Yesss ;-)

>   -

Re: Generic backend in HAProxy config with server options as placeholders

2018-11-14 Thread Aleksandar Lazic
Hi Vijay.

On 14.11.2018 at 10:14, Vijay Bais wrote:
> Hello Aleksandar,
> 
> We already considered using haproxy maps but we still have to define N 
> backends
> for corresponding N keys in the map file.
> I'm looking more at an implementation with single backend definition with the
> server options as placeholders.
> 
> Ex. Using maps would look something like this
> 
> frontend nat
>     bind *:1
>     use_backend %[req.hdr(X-MyHeader), map(/etc/haproxy/my.map)]
> 
> backend example1.com 
>     server myserver1 example1.com:80 source 10.0.0.1
> 
> backend example2.com
>     server myserver2 example2.com:80 source 10.0.0.2
> 
> backend example3.com
>     server myserver3 example3.com:80 source 10.0.0.3
> 
> 
> 
> Whereas, we are looking for something like below
> 
> 
> frontend nat
>     bind *:1
>     default_backend generic
> 
> backend generic
>     server myserver %[req.hdr(X-MyHeader)] source %[dst]

Ah now concrete examples ;-)

Maybe you can use the server template?!
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-server-template

So you would like to have something like the following? I don't think this is
possible:

backend generic
  server-template myserver 1-3 %[req.hdr(X-MyHeader)]:80 check source 0.0.0.0 usesrc %[dst]

Which version of HAProxy do you use?

haproxy -vv

> Thanks,
> Vijay

Regards
Aleks

> On Wed, Nov 14, 2018 at 1:39 PM Aleksandar Lazic <al-hapr...@none.at> wrote:
> 
> Hi.
> 
> On 14.11.2018 at 08:46, Vijay Bais wrote:
> > Hello,
> >
> > We have a requirement wherein a single generic backend with server 
> options
> > configured as placeholders, which will resolve on the fly or at runtime.
> >
> > Currently, we have to define multiple backends (has to be hardcoded) and
> select
> > them using the /use_backend/ keyword.
> >
> > Kindly help us with this generic backend implementation in HAProxy and 
> let us
> > know if its possible OR any alternative way that this can be achieved.
> 
> Maybe you can use maps for your requirement.
> 
> https://www.haproxy.com/blog/introduction-to-haproxy-maps/
> 
> As an example can you take a look at the openshift router template ;-)
> 
> 
> https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L201-L202
> 
> > Thank you in advance,
> > Vijay B
> 
> Regards
> Aleks
> 




Re: Generic backend in HAProxy config with server options as placeholders

2018-11-14 Thread Aleksandar Lazic
Hi.

On 14.11.2018 at 08:46, Vijay Bais wrote:
> Hello,
> 
> We have a requirement wherein a single generic backend with server options
> configured as placeholders, which will resolve on the fly or at runtime.
> 
> Currently, we have to define multiple backends (has to be hardcoded) and 
> select
> them using the /use_backend/ keyword.
> 
> Kindly help us with this generic backend implementation in HAProxy and let us
> know if its possible OR any alternative way that this can be achieved.

Maybe you can use maps for your requirement.

https://www.haproxy.com/blog/introduction-to-haproxy-maps/
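
A minimal sketch of the idea (the file path, keys and backend names are made up):

###
# /etc/haproxy/my.map: one "key backend" pair per line, e.g.
#   example1.com bk_example1
#   example2.com bk_example2

frontend fe_main
    bind *:8080
    use_backend %[req.hdr(X-MyHeader),map(/etc/haproxy/my.map,bk_default)]
###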

As an example, you can take a look at the openshift router template ;-)

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L201-L202

> Thank you in advance,
> Vijay B

Regards
Aleks



Re: [PATCH] HTTP 103 response (Early Hints)

2018-11-12 Thread Aleksandar Lazic
Hi Fred.

Sorry to be picky but I still think that there is some missing text in the 
documentation, as mentioned before.

http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=06f5b6435ba99b7a6a034d27b56192e16249f6f0

MINOR: doc: Add information about "early-hint" http-request action.

As I could be wrong, just ignore this mail.

Regards
Aleks


 Original Message 
From: Willy Tarreau 
Sent: 12 November 2018 21:09:23 CET
To: Frederic Lecaille 
CC: HAProxy 
Subject: Re: [PATCH] HTTP 103 response (Early Hints)

On Mon, Nov 12, 2018 at 03:38:28PM +0100, Frederic Lecaille wrote:
> Hello,
> 
> Here is a little series of patches to implement a new http-request
> action named "early-hint" to add HTTP 103 responses prior to any other
> response with headers whose values are defined by log-format rules, as
> this is done with "(add|del)-header" action.

Cool! Applied, thank you Fred :-)

Willy




Re: HTTP/3 | daniel.haxx.se

2018-11-12 Thread Aleksandar Lazic
On 12.11.2018 at 23:31, Willy Tarreau wrote:
> On Mon, Nov 12, 2018 at 10:52:41PM +0100, Aleksandar Lazic wrote:
>> Oh wow this is really a good time to get the hands dirty as QUIC is a major
>> design change in HTTP, IMHO.
> 
> Some first approaches were already attempted about one year ago already,
> to avoid being later in the party. But just like when we had to work on
> H2, trying to stuff this into an existing stream-based component is not
> easy and we ended up identifying a lot of lower layers that had to be
> changed first, so this work was paused and the protocol changed a lot
> during that time.
> 
>> To adopt the matrix from one of the last message, haproxy will be able to do 
>> the
>> conversion in almost every direction?
>>
>> HTTP/1.x <> HTTP/2
>> HTTP/2 <> HTTP/3
>> HTTP/1.x <> HTTP/3
>> HTTP/3 <> HTTP/3
> 
> Yes, that's the idea. In fact it will be slightly different, we're implementing
> protocol conversion at the lower layers, to an internal version-agnostic HTTP
> representation supposed to accurately represent a message while keeping its
> semantics, that we've called HTX. I know that you love ASCII art, so it will
> look more or less like this (we'll have to provide a lot of doc but first we
> really want to focus on getting the code merged) :
> 
> 
>+---+ stream
>|   all HTTP processing | layer
>+---+
>^ ^ ^  ^
>HTX | HTX | HTX |  HTX |  normalised
>v v v  v  interface
>+--+ ++ ++ ++
>|applet| | HTTP/1 | | HTTP/2 | | HTTP/3 | whatever layer (called mux for 
> now
>+--+ ++ ++ ++ but may change once we have 
> others,
> cache || ||| could be presentation in OSI)
> stats | +--+  |+--+
> Lua svc   | |TLS   |  || QUIC |  transport layer
>   | +--+  |+--+ 
>   |   |   ||
> +-+ +-+
> | TCP/Unix/socketpair | | UDP |  control layer
> +-+ +-+
>   ||
>+--+
>|file descriptor   |  socket layer
>+--+
>|
>  +---+
>  | operating |
>  |  system   |
>  +---+
> 
> 
> It's really over-simplified since several layers above in fact have multiple
> upper arrows as they are multiplexed, but it's to explain more or less how
> stuff gets stacked. And since there are always transition periods, you have
> multiple protocols possible between each layer, otherwise it wouldn't be
> fun.  You have coded in the very old version where the file descriptors
> were directly present in the stream layer. As you can see, over a decade
> a lot of new layers have been built between the operating system and the
> streams, without even losing features nor compatibility. That's where the
> real challenges are.

Thank you for the picture, you know me too well 8-).

I follow the development a "little" bit, and it's amazing how you and your team
were able to rewrite a lot of haproxy and keep a high level of compatibility and
stability.

BTW: I loved to see that std* logger is now there ;-)
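
For example, a sketch assuming the 1.9 syntax:

###
global
    log stdout format raw daemon
###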

>> Is this technically possible as the UDP/QUIC may have not some information 
>> that
>> the TCP/HTTP need?
> 
> People tend to confuse QUIC and UDP too much. QUIC more or less uses
> UDP as a replacement for IP, which doesn't require privileges, and it
> reimplements its own transport stack on top of it, with congestion
> control, multiplexing, encryption etc. So it effectively represents a
> reliable transport layer for multiple streams based on datagrams below.
> UDP is seen as the revolutionary thing when you tell someone that HTTP
> will work over UDP but that's absolutely not true, it's almost comparable
> to saying that HTTP works over UDP when you're doing it over TLS over
> TCP over OpenVPN over UDP.

Hm, sounds complicated.

I think debugging this beast won't be as "easy" as it is now with `curl -v ...`,
even though QUIC support is planned for curl ;-)

>> As I try to imagine the new design in HAProxy will it look like this?
>>
>> TCP(TCP|UDP) HTTP/1.x,HTTP/2,HTTP/3

Re: HTTP/3 | daniel.haxx.se

2018-11-12 Thread Aleksandar Lazic
Hi Willy.

On 12.11.2018 at 21:01, Willy Tarreau wrote:
> On Mon, Nov 12, 2018 at 07:42:21PM +0100, Aleksandar Lazic wrote:
>> Even though I agree with you that HAProxy should be able to handle this
>> next upcoming/available technology, I'm not sure if it's really a benefit
>> for us, the end user.
> 
> Hey, why do you guys imagine we're suffering that much redesigning all the
> connection management in 1.9 and making HTTP processing version-agnostic ?
> :-)

Well, because you love to challenge yourself; I'm just kidding ;-)

Oh wow this is really a good time to get the hands dirty as QUIC is a major
design change in HTTP, IMHO.

To adopt the matrix from one of the last messages: haproxy will be able to do the
conversion in almost every direction?

HTTP/1.x <> HTTP/2
HTTP/2 <> HTTP/3
HTTP/1.x <> HTTP/3
HTTP/3 <> HTTP/3

Is this technically possible, as UDP/QUIC may not have some information that
TCP/HTTP needs?

As I try to imagine the new design in HAProxy will it look like this?

TCP(TCP|UDP) HTTP/1.x,HTTP/2,HTTP/3
  \  /
 HTTP/TCP/UDP Engine - (TCP|UDP) HTTP/1.x,HTTP/2,HTTP/3
 (Here happens some magic)
  /  \
UDP(TCP|UDP) HTTP/1.x,HTTP/2,HTTP/3


Really 8-O ?

What happens with other protocols like plain TCP/UDP or grpc and so on?

Sorry a lot of questions, but you know I'm curious and try to understand some
parts of the future ;-).

> We still have a lot of work to be done before supporting QUIC but we've
> started to have a clear idea how that will fit. The only thing is that
> experience taught us the the devil is in the details, and haproxy has
> accumulated a lot of details over the years.

Oh yes as always, this devil lay there for years and suddenly he jumps out and
say "nana I'm here" 8-O.

> Willy

Cheers
Aleks



Re: [PATCH] HTTP 103 response (Early Hints)

2018-11-12 Thread Aleksandar Lazic
Hi Willy.

On 12.11.2018 at 21:14, Willy Tarreau wrote:
> Hi Aleks,
> 
> On Mon, Nov 12, 2018 at 06:00:25PM +0100, Aleksandar Lazic wrote:
>> Hi Fred.
>>
>> On 12.11.2018 at 15:38, Frederic Lecaille wrote:
>>> Hello,
>>>
>>> Here is a little series of patches to implement a new http-request
>>> action named "early-hint" to add HTTP 103 responses prior to any other
>>> response with headers whose values are defined by log-format rules, as
>>> this is done with "(add|del)-header" action.
>>
>> Cool ;-) , what can I do with that feature?
> 
> You can send some Link header fields to the client immediately before the
> server even gets the request. This allows the client to start to preload
> some contents that you're sure will be needed, without having to wait for
> the server's response to discover this. In reality the purpose is to make
> efficient use of the server's think time which can often be larger than
> one RTT. For many purposes it can be more interesting than PUSH because
> it will even allow the client to fetch from another origin, via a CDN or
> whatever (think for example about the number of sites referencing jquery.js
> or whatever CSS from Cloudflare).
> 
> This technique is still very new and is documented in RFC 8297 produced
> by the IETF HTTP workgroup. Browser support is still unclear in that the
> "link: preload" stuff is also very new and possibly at different maturity
> stages in browsers. But browsers also need server-side components to be
> available in order to improve their support, and since it's easy for us,
> let's provide our share of the work in making the net faster ;-)
Full Ack.

Thank you very much for the detailed explanation ;-)
That sounds like a really great solution.
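
For reference, a minimal sketch of the new action (the Link values are made up):

###
http-request early-hint Link "</style.css>; rel=preload; as=style"
http-request early-hint Link "</script.js>; rel=preload; as=script"
###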

> Cheers,
> Willy

Regards
Aleks



Re: HTTP/3 | daniel.haxx.se

2018-11-12 Thread Aleksandar Lazic
Hi Manu.

Even though I agree with you that HAProxy should be able to handle this next
upcoming/available technology, I'm not sure if it's really a benefit for us,
the end user.

The future will show it :-)

As I don't want to bother all members on the list with this topic, let's
discuss it further off the list, if you like.

Regards, Aleks


 Original Message 
From: Emmanuel Hocdet 
Sent: 12 November 2018 18:44:40 CET
To: Aleksandar Lazic 
CC: haproxy@formilux.org
Subject: Re: HTTP/3 | daniel.haxx.se


Hi Aleks,

> On 12 Nov 2018 at 18:02, Aleksandar Lazic wrote:
> 
> Hi Manu.
> 
> On 12.11.2018 at 16:19, Emmanuel Hocdet wrote:
>> 
>> Hi,
>> 
>> The primary (major) step should be to deal with QUIC transport (over UDP).
>> At the same level as TCP for haproxy?
>> Willy should already have a little idea on it ;-)
> 
> Is the conclusion then that haproxy will be able to proxy/load-balance UDP?

The only conclusion is that haproxy should be able to proxy QUIC.
From wiki: « QUIC, Quick UDP Internet Connections, aims to be nearly
equivalent to an independent TCP connection »
It could be seen as TCP/2; the connection part is provided by the application.
For example, the congestion avoidance algorithms must be provided in the
application space at both endpoints.
A very cool feature is that QUIC can support IP migration.

++
Manu





Re: HTTP/3 | daniel.haxx.se

2018-11-12 Thread Aleksandar Lazic
Hi Manu.

On 12.11.2018 at 16:19, Emmanuel Hocdet wrote:
> 
> Hi,
> 
> The primary (major) step should be to deal with QUIC transport (over UDP).
> At the same level as TCP for haproxy?
> Willy should already have a little idea on it ;-)

Is the conclusion then that haproxy will be able to proxy/load-balance UDP?

> ++
> Manu
> 
>> On 11 Nov 2018 at 20:38, Aleksandar Lazic wrote:
>>
>> Hi.
>>
>> FYI.
>>
>> Oh no, that was quite fast after HTTP/2
>>
>> https://daniel.haxx.se/blog/2018/11/11/http-3/
>>
>> Regards
>> Aleks
>>
> 
> 




Re: [PATCH] HTTP 103 response (Early Hints)

2018-11-12 Thread Aleksandar Lazic
Hi Fred.

On 12.11.2018 at 15:38, Frederic Lecaille wrote:
> Hello,
> 
> Here is a little series of patches to implement a new http-request
> action named "early-hint" to add HTTP 103 responses prior to any other
> response with headers whose values are defined by log-format rules, as
> this is done with "(add|del)-header" action.

Cool ;-) , what can I do with that feature?

Maybe there is some text missing in the configuration.txt patch?

0004-MINOR-doc-Add-information-about-early-hint-http-requ.patch

> Regards,
> 
> Fred.

Regards
Aleks




HTTP/3 | daniel.haxx.se

2018-11-11 Thread Aleksandar Lazic
Hi.

FYI.

Oh no, that was quite fast after HTTP/2

https://daniel.haxx.se/blog/2018/11/11/http-3/

Regards
Aleks



Re: Issue with HAProxy as a forward proxy

2018-11-06 Thread Aleksandar Lazic
Hi Vijay.

On 06.11.2018 at 10:06, Vijay Bais wrote:
> Hello,
> 
> I'm using HAProxy 1.8 as a forward proxy with below configuration
> 
> 
> 
> defaults
>     mode                    tcp
>     log                     global
>     option                  tcplog
>     option                  dontlognull
>     option http-server-close
>     #option forwardfor       except 127.0.0.0/8 
>     option                  redispatch
>     retries                 3
>     timeout http-request    10s
>     timeout queue           1m
>     timeout connect         10s
>     timeout client          1m
>     timeout server          1m
>     timeout http-keep-alive 10s
>     timeout check           10s
>     maxconn                 3000
>     default-server          resolvers dns
> 
> resolvers dns
>     nameserver local 127.0.0.1:53 
>     nameserver ns1   10.0.0.2:53 
>     hold valid 1s
> 
> listen c1
>     bind   10.0.0.26:10001
>     mode   tcp
>     option tcplog
>     server r1 ifconfig.co:80 source <public IP>
> 
> 
> 
> But this fails with below log lines for any internet destination (both in TCP
> and HTTP mode):
> 
> 10.0.1.79:47437 [06/Nov/2018:09:35:31.170] c1 c1/r1 1/-1/0 0 SC 1/1/0/0/3 0/0
> Cannot bind to source address before connect() for backend c1.
> 
> 
> 
> Whereas, if the destination is under my control (with my source public IP 
> fully
> whitelisted), then the flow works perfectly.
> 
> Any help to know the actual issue would be great.

The snippet does not show the global section.
I think you will need to run HAProxy as root to be able to do this.

Do you run HAProxy as root?

> Thanks,
> Vijay B

Regards
Aleks



Re: CLI proxy for master process

2018-11-05 Thread Aleksandar Lazic
Hi.

In the meantime you can use Socklog [1] or fluent-bit [2] to listen to syslog
and write to stdout, as I do in my image. Note that fluent-bit writes JSON
format to stdout by default.

https://gitlab.com/aleks001/haproxy18-centos/blob/master/containerfiles/container-entrypoint.sh#L94

The image setup is documented in this post.

https://www.me2digital.com/blog/2017/08/openshift-egress-options/#haproxy-generic-proxy

I have also written a blog post about syslog in the container world as this is 
a non trivial topic, imho.

https://www.me2digital.com/blog/2017/05/syslog-in-a-container-world/

It would be nice to have a log to stdout option in HAProxy, but until this is 
available there are many solutions available out there to solve the issue.
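
On the haproxy side this is just a log directive pointing at the local
syslog listener, e.g. a sketch (the port is made up):

###
global
    log 127.0.0.1:5140 local0
###

The listener (socklog or fluent-bit) then forwards everything to stdout.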

Regards Aleks

[1] http://smarden.org/socklog/
[2] https://fluentbit.io


 Original Message 
From: William Lallemand 
Sent: 5 November 2018 17:08:09 GMT+00:00
To: Lukas Tribus 
CC: al-hapr...@none.at, haproxy , Willy Tarreau 

Subject: Re: CLI proxy for master process

Hi Lukas,

On Fri, Nov 02, 2018 at 12:21:47PM +0100, Lukas Tribus wrote:
> 
> Btw could stdout logging leverage some of this infrastructure? I
> assume that if the master process would write to stdout
> (synchronously), we would not stall our workers because of slow log
> readers?

Well, it will just move the problem from the worker to the master, which will
be slowed down and can prevent the ability to do maintenance operations from the
master CLI. At the moment we don't have anything to forward the logs from the
workers to the master which is another problem, but we could use the sockpair
between them, but this is a stream socket which will lead to the same issue.

> 
> https://www.mail-archive.com/haproxy@formilux.org/msg17436.html
> https://github.com/dockerfile/haproxy/issues/3
> https://github.com/docker-library/haproxy/pull/39
> 
> 
> It looks like in the docker world, everything other than stdout
> logging is unacceptable - at least for official docker image, and I'm
> wondering if the master/worker process model along with this
> infrastructure could theoretically help there, without sacrificing the
> performance of the event loop or log reliability.

We are aware of the problem and we already discussed this issue several time
with the other developers, the ideal design should be the use of a log buffer
which will allow TCP logging, and stdout logging. We have the idea in mind but
the development is not planned yet. But it will definitively be a useful
feature.




Re: Design Proposal: http-agent-check, explict health checks & inline-mode

2018-10-30 Thread Aleksandar Lazic
Hi Robin.

On 29.10.2018 at 20:15, Robin H. Johnson wrote:
> On Sat, Oct 27, 2018 at 01:52:29PM +0200, Aleksandar Lazic wrote:
>>> Right now, if you want to use load feedback for weights, you either need
>>> something entirely out-of-band from the servers back to HAProxy, or you
>>> have to use the agent-check option and run a separate health agent.
>>
>> With that you mean "external-check command" ?
>>
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#external-check%20command
> No, I mean 'agent-check' per
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-agent-check
> 
> This is an agent that runs on the realserver, not the load balancer.

Thanks for the clarification.
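
For context, a minimal agent-check sketch (the address, port and interval are
made up); the agent on the realserver can reply with e.g. "75%" to adjust the
effective weight:

###
backend app
    server srv1 192.168.1.10:80 check weight 100 agent-check agent-port 9999 agent-inter 5s
###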

> ...
>>> I would like to propose a new http-agent-check option, with two usage
>>> modes.
>>> 1. health-check mode: this connects like the existing agent-check, but
>>>sends does HTTP request & response rather than pure TCP.
>>>
>>> 2. inline mode: if the server has best-case knowledge about it's status,
>>>and HTTP headers are used for the feedback information, then it
>>>should be possible to include the feedback in an HTTP response header
>>>as part of normal queries. The header processing would detect & feed
>>>the data into the health system during normal traffic.
>>
>> Interesting Ideas.
>> Are there any LB's out there which already uses this concept?
> I haven't looked specifically, but I am aware of a lot of other
> dynamic-realserver weight work (mostly in the keepalived/ipvs world,
> like feedbackd and lvs-kiss from the early 2000's).
> 
> The inline mode probably deserves separate work; I think it might be possible
> to implement it with the existing Lua codebase.
> 
>>> Question: where & how should the feedback information be encoded in the
>>> response? 
>>> 1. HTTP payload
>>> 2. Single HTTP header
>>> 3. Multiple HTTP headers
>>
>> I would like to have it as one header per value, 'Server-State-*', as the
>> 'X-' prefix is deprecated.
>>
>> https://tools.ietf.org/html/rfc6648
>> Deprecating the "X-" Prefix and Similar Constructs in Application Protocols
>>
>> for example:
>> Server-State-Load
>> Server-State-Users
>> Server-State-Health
>> Server-State-...
>
> Multiple headers are easier to write new parsers for, I agree, but supporting
> the other variants might be worthwhile.
> 
> I'm thinking to maybe implement my lua-check first, then write a simple
> HTTP checker within Lua.

Yes sounds good.
Lua is a good option IMHO.

What's your opinion on using http-response with a new option "set-config-value",
similar to "set-nice", as there is already the HTTP engine which handles the
HTTP headers from the server?

Naive example

###
declare capture response len 5
capture response header Server-State-Load len 5

# set the weight for the current backend
http-response set-config-value weight %[res.hdr(Server-State-Load),capture-res(0)] if res.hdr(Server-State-Load)
###

It would be nice to make this possible as a value for peers so that all
instances from a distributed haproxy setup have the same weight for this server.

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9-dev5

2018-10-30 Thread Aleksandar Lazic
Hi.

On 28.10.2018 at 21:01, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9-dev5 was released on 2018/10/28. It added 58 new commits
> after version 1.9-dev4.

Image is updated.

https://hub.docker.com/r/me2digital/haproxy19/

##
HA-Proxy version 1.9-dev5 2018/10/28
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       h2 : mode=HTTP   side=FE
<default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace
##

Regards
Aleks

> This version continues to merge new features and addresses some issues
> that came in -dev4 regarding stream processing. For now it's working
> rather well given the complexity of the changes, even though we still
> expect to resurrect some deeply buried issues due to the significant
> change of I/O scheduling.
> 
> Among the new features merged, I can list these ones :
>   - when running in master/worker mode, the master can now have its own
> CLI socket, and implements a proxy able to connect to all worker
> processes. It will even be able to reach older processes soon, so
> that we can kill an old connection preventing an old process from
> quitting, or simply figure why an old process doesn't quit. Some
> more updates are coming on this part (prompt will be disabled by
> default, older processes not joinable now, some doc etc).
> 
>   - the HTTP small object cache can now cache objects larger than a
> buffer. The new size limit defaults to 1/256 of the cache size but
> can be changed with "max-object-size".
> 
>   - the cache now implements the Age HTTP header field.
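
Nice. Just for illustration, I assume such a cache setup could look like this
(untested sketch; directive names taken from the announcement and the docs):

###
cache small-objects
    total-max-size 64        # total cache size in MB
    max-object-size 262144   # allow objects up to 256 kB
    max-age 60               # seconds

backend static
    http-request cache-use small-objects
    http-response cache-store small-objects
    server web1 192.168.1.10:8080 check
###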
> 
> The rest is mostly infrastructure updates for the upcoming code, and
> fixes for various issues. It's worth noting that Lukas has addressed
> an interesting issue with HTTP authentication where the private
> connection mistakenly had precedence over the load balancing algorithm
> in order to cover NTLM/Negotiate. This one will be backported to 1.8.
> 
> Developers might like the addition of the ERR variable to the makefile
> to automatically add -Werror.
> 
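If I read it right, that's just (untested):

###
make TARGET=linux2628 USE_OPENSSL=1 ERR=1
###
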
> For now what I'm seeing overall looks pretty good. We've again put the
> finger on some old stuff around the stream interface flag SI_FL_WAIT_ROOM,
> which we expected could easily replace channel_may_recv(), until the old
> dirty zombies in the code decided to fight back :-)  It's the first time
> I've seen a 3-hour, 3-person meeting dedicated to a single flag! But I
> think we've found how to address this old crap so that we can rebase the
> changes related to the internal native HTTP representation (codenamed HTX).
> 
> I'll try to issue -dev6 next weekend, even though this week will be short
> for some of us. Ideally if we could merge the HTX code next weekend, we
> could then switch to testing and debugging to stabilize all this stuff.
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Sources  : http://www.haproxy.org/download/1.9/src/
>Git repository   : http://git.haproxy.org/git/haproxy.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy.git
>Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
> 
> Willy
> ---
> Complete changelog :
> Christopher Faulet (3):
>

Re: apache proxy pass rules in HAproxy

2018-10-27 Thread Aleksandar Lazic
thoughts were,
all traffic passes through HAProxy.  In retrospect, HAProxy is not being
taxed at all, there is a direct connection between the client and the
backend node over a public IP ( of the backend node). But, what I don't
understand is, if a connection attempt is made through HAProxy, why would
it allow the connection to be handed off to the backend node directly (
client -> backend node )?

Thoughts?


On Sat, Oct 27, 2018 at 4:31 AM Aleksandar Lazic  wrote:

> Hi Imam.
>
> It would be helpful to know the versions you use:
>
> haproxy -vv
> apache httpd version
> shibboleth version
>
> A small workflow picture like:
>
> Client ->  haproxy -> apache httpd -> shibboleth ?
>
> On 27.10.2018 at 07:44 Imam Toufique wrote:
> > Hi Igor,
> >
> > Thanks very much for offering to help!  I will do this in sections,
> hopefully, I
> > can keep this from being too cluttered.
> >
> > haproxy.cfg:
> >
> --
> > global
> >#log /dev/log local0 debug
> >#log /dev/log local1 debug
> >log 127.0.0.1 local2
> >chroot /var/lib/haproxy
> >stats timeout 30s
> >user haproxy
> >group haproxy
> >tune.ssl.default-dh-param 2048
> >daemon
> >
> > defaults
> >log global
> >mode http
> >option tcplog
> >option dontlognull
> >timeout connect 5000
> >timeout client 5
> >timeout server 5
> >timeout tunnel 9h
> >option tcp-check
> >
> > frontend http_front
> >bind :80
> >bind 0.0.0.0:443 ssl crt /etc/haproxy/crsplab2_1.pem
> >stats uri /haproxy?stats
> >default_backend web1_cluster
> >option httplog
> >log global
> >#option dontlognull
> >log /dev/log local0 debug
>
> The 2 log entries are redundant, imho.
>
> >mode http
>
> It is set in the defaults block. Please take a look at
>
> https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/
>
> >option forwardfor   # forward IP
> >http-request set-header X-Forwarded-Port %[dst_port]
> >http-request add-header X-Forwarded-Proto https if { ssl_fc }
>
> I personally would also use set-header here instead of add-header.
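For example:

 http-request set-header X-Forwarded-Proto https if { ssl_fc }
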
>
> >redirect scheme https if !{ ssl_fc }
> >
> >acl host_web2 hdr(host) -i crsplab2.oit.uci.edu/webdav
>
> This should be only the host name.
>
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.6-hdr
> and
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.6-req.hdr
>
> I would add an additional acl.
>
> acl host_web2 hdr(host) -i crsplab2.oit.uci.edu
> acl path_web2 path_beg -i /webdav
>
> >use_backend webdav_cluster if host_web2
>
> and add it to the use_backend line
>
>  use_backend webdav_cluster if host_web2 path_web2
>
> >acl host_web3 path_beg /jhub
> >use_backend web3_cluster if host_web3
> >
> >
> > backend webdav_cluster
> >balance roundrobin
>
> I would add here the following line.
>
>  cookie SRV1 insert indirect nocache
> >server  web1 10.1.100.156:8080 check inter 2000 cookie w1
> >server  web2 10.1.100.160:8080 check inter 2000 cookie w2
> >
> > backend web3_cluster
>
> I would add here the following line.
>
>  cookie SRV2 insert indirect nocache
> >server  publicIP:443 check ssl verify none inter 2000 cookie w1
> >
> -
> > Note: I have a single backend node, as it was easy to test with just one
> node,
> > instead of making changes to 2 nodes at a time.
> >
> > Here is my apache config:
> >
> > in httpd.conf, only change I have made is ( the rest is a stock centos
> 7.5
> > httpd.conf ):
> > -
> > ServerName 10.1.100.160:80 ( Internal IP of the backend node)
> > Redirect permanent /jhub https://crsplabweb1.domain.com/jhub
> > -
> >
> > in my ssl.conf, where I access the jupyterhub instance running on
> > 127.0.0.1:8000. Also, note that the backend is running shibboleth
> > SP. One of the issues I encountered is, if I did not have SSL, I was
> > getting a browser warning for not having SSL.
> Can you set up shibboleth in such a manner that it answers with
> proxy.domain.com?
> As we don't know which version 

Re: Design Proposal: http-agent-check, explict health checks & inline-mode

2018-10-27 Thread Aleksandar Lazic
Hi Robin.

On 26.10.2018 at 20:49 Robin H. Johnson wrote:
> Hi,
> 
> This is something I have a vague recollection of existing somewhere, but
> didn't find any leads in documentation or source.
> 
> Right now, if you want to use load feedback for weights, you either need
> something entirely out-of-band from the servers back to HAProxy, or you
> have to use the agent-check option and run a separate health agent.

Do you mean "external-check command"?

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#external-check%20command

> The agent-check protocol is described only in the configuration.txt
> 'agent-check' section, and is conveyed entirely over pure TCP, no HTTP.
> It supports conveying useful health information, including weight and
> DRAIN/MAINT states.
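
For reference, the agent in the existing agent-check answers with a single
short ASCII line per check over the TCP connection, for example:

 75%
 up 50%
 drain
 maint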
> 
> The http-check behavior only supports matching strings or status codes,
> and does not convey any load feedback.
> 
> I would like to propose a new http-agent-check option, with two usage
> modes.
> 1. health-check mode: this connects like the existing agent-check, but
>    does an HTTP request & response rather than pure TCP.
> 
> 2. inline mode: if the server has best-case knowledge about its status,
>and HTTP headers are used for the feedback information, then it
>should be possible to include the feedback in an HTTP response header
>as part of normal queries. The header processing would detect & feed
>the data into the health system during normal traffic.

Interesting ideas.
Are there any LBs out there which already use this concept?

> Question: where & how should the feedback information be encoded in the
> response? 
> 1. HTTP payload
> 2. Single HTTP header
> 3. Multiple HTTP headers

I would like to have it as one header per value, 'Server-State-*', as the
'X-' prefix is deprecated.

https://tools.ietf.org/html/rfc6648
Deprecating the "X-" Prefix and Similar Constructs in Application Protocols

for example:
Server-State-Load
Server-State-Users
Server-State-Health
Server-State-...
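
In the inline mode a response could then look like this (invented values):

 HTTP/1.1 200 OK
 Content-Type: text/html
 Server-State-Load: 42
 Server-State-Users: 1337
 Server-State-Health: up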

Regards
Aleks



Re: apache proxy pass rules in HAproxy

2018-10-27 Thread Aleksandar Lazic
th authentication. 
> 
> Any inputs here? 
> 
> thanks, guys! 
> 
> 
> I think it is time for you to provide the full HAP and Apache configs so 
> we
> can see what is going on (please obfuscate any sensitive data). Also the 
> use
> of the "cookie w1" is not clear since you are not setting it in HAP and is
> kinda redundant for single backend setup.
> 
> 
> 
> On Thu, Oct 25, 2018 at 1:21 AM Igor Cicimov
> <ig...@encompasscorporation.com> wrote:
> 
> 
> 
> On Thu, Oct 25, 2018 at 6:31 PM Igor Cicimov
> <ig...@encompasscorporation.com> wrote:
> 
> 
> 
> On Thu, 25 Oct 2018 6:13 pm Imam Toufique <techie...@gmail.com> wrote:
> 
> so I almost got this to work, based on the situation I am
> in.  To elaborate just a bit, my setup involves a 
> shibboleth
> SP that I need to authenticate my application.  Since I
> can't set up the HA proxy node with shibboleth SP - I had 
> to
> wrap my application in the backend with apache so I can 
> pass
> REMOTE_USER to the application.  the application I have 
> is -
> jupyterhub and it start with its own proxy.  Long story
> short, here is my current setup:
> 
> frontend
>    bind :80
>    bind :443 ssl crt /etc/haproxy/crsplab2_1.pem
>    stats uri /haproxy?stats
>    default_backend web1_cluster
>    option httplog
>    log global
>    #option dontlognull
>    log /dev/log local0 debug
>    mode http
>    option forwardfor   # forward IP
>    http-request set-header X-Forwarded-Port %[dst_port]
>    http-request add-header X-Forwarded-Proto https if { 
> ssl_fc }
>    redirect scheme https if !{ ssl_fc }
> 
> acl host_web3 path_beg /jhub
> use_backend web3_cluster if host_web3
> 
> backend
> server web1.oit.uci.edu 128.110.80.5:80 check
> 
> this works for the most part. But I am confused by a problem: when
> I get to my application, my backend IP address
> shows up in the browser URL.
> 
> for example, I see this in my browser: 
> 
> http://128.110.80.5/jhub/user/itoufiqu/tree?
> 
> whereas, I was expecting that it would show the original
> URL, such as: 
> 
> http://crsplab2.domain.com/jhub/user/itoufiqu/tree ?
> (where crsplab2.domain.com is the URL to reach HAProxy)
> 
> 
> You need to tell your backend app that it runs behind a reverse
> proxy with ssl termination and that its domain/url
> is https://crsplab2.domain.com. How you do
> that depends on the backend app you are using but most of them
> like apache2, tomcat etc. have specific configs that you can
> find in their documentation. For example if your backend is
> apache2 I bet you don't have the ServerName set in the config in
> which case it defaults to the host ip address.
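
For apache2 that would be something like (untested sketch):

 ServerName crsplab2.domain.com
 UseCanonicalName On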
> 
> 
> You can also try:
> 
> rspirep ^Location:\ http://(.*):80(.*)  Location:\ https://crsplab2.domain.com:443\2  if { ssl_fc }
> 
> to fix the URL, but note that this will not save you from hard-coded
> url's in the returned html pages the way apache does.
> 
> 
> 
> While I am no expert in the HAProxy world, I think this might be
> due to the fact that my backend does not have SSL and
> HAproxy frontend does have SSL.  At this point, I would
> avoid that IP address s

Re: Lots of PR state failed connections with HTTP/2 on HAProxy 1.8.14

2018-10-27 Thread Aleksandar Lazic
Hi.

On 26.10.2018 at 19:11 James Brown wrote:
> Y'all are quite right: one of the machines inverted the order of restarting 
> with
> the new config and updating the package and was advertising the h2 ALPN with
> HAProxy 1.7.11. 
> 
> Sorry to take up so much time with a silly question.

No probs.

Finally you fixed it.

> Cheers!

Regards
Aleks

> On Wed, Oct 24, 2018 at 12:21 AM Aleksandar Lazic <al-hapr...@none.at> wrote:
> 
> On 24.10.2018 at 09:18 Igor Cicimov wrote:
> >
> >
> > On Wed, 24 Oct 2018 5:06 pm Aleksandar Lazic <al-hapr...@none.at> wrote:
> >
> >     Hi.
> >
> >     On 24.10.2018 at 03:02 Igor Cicimov wrote:
> >     > On Wed, Oct 24, 2018 at 9:16 AM James Brown <jbr...@easypost.com> wrote:
> >     >>
> >     >> I tested enabling HTTP/2 on the frontend for some of our sites
> today and
> >     immediately started getting a flurry of failures. Browsers (at least
> Chrome)
> >     showed a lot of SPDY protocol errors and the HAProxy logs had a lot 
> of
> lines
> >     ending in
> >     >>
> >     >> https_domain_redacted/ -1/-1/-1/-1/100 400 187 - - PR--
> 49/2/0/0/0 0/0
> >     >>
> >     >
> >     > Possible reasons:
> >     >
> >     > 1. You don't have openssl v1.0.2 installed (assuming you use 
> openssl)
> >     > on a server(s)
>     > 2. You have changed your config for h2 support but your server(s) is
>     > still running haproxy 1.7 (i.e. hasn't been restarted after upgrade
>     > and still using the old 1.7 binary instead of 1.8)
> >
> >     That's one of the reason why we need to know the exact version.
> >
> >     James can you post the output of `haproxy -vv` and some more
> information about
> >     your setup.
> >
> >
> > This can return the correct version but it still does not mean the running
> > process is actually using it (has not been restarted after upgrade).
> 
> Full ack. That's the reason why we need some more information about the
> setup ;-)
> 
> >     Regards
> >     Aleks
> >
> >     >> There were no useful or interesting errors logged to syslog. No 
> sign of
> >     any resources being exhausted (conntrack seems fine, etc). The times
> varied
> >     but Ta was always low (usually around 100ms). I have not been able 
> to
> >     reproduce this issue in a staging environment, so it may be 
> something
> "real
> >     browsers" do that doesn't show up with h2load et al.
> >     >>
> >     >> Turning off HTTP/2 (setting "alpn http/1.1") completely solves 
> the
> problem.
> >     >>
> >     >> The following timeouts are set on all of the affected frontends:
> >     >>
> >     >>     retries 3
> >     >>     timeout client 9s
> >     >>     timeout connect 3s
> >     >>     timeout http-keep-alive 5m
> >     >>     tcp-request inspect-delay 4s
> >     >>     option http-server-close
> >     >>
> >     >> Additionally, we set maxconn to a very high value (20480).
> >     >>
> >     >> Backends generally have timeout server set to a largeish value 
> (90-300
> >     seconds, depending on the backend).
> >     >>
> >     >> Anything jump out at anyone?
> >     >> --
> >     >> James Brown
> >     >> Systems & Network Engineer
> >     >> EasyPost
> >     >
> >
> 
> 
> 
> -- 
> James Brown
> Engineer




Re: CLI proxy for master process

2018-10-27 Thread Aleksandar Lazic
On 26.10.2018 at 18:10 Willy Tarreau wrote:
> On Fri, Oct 26, 2018 at 05:58:43PM +0200, Aleksandar Lazic wrote:
>> BTW what's nb in "nb(thread|proc)"?
>>
>> [ ] No block
>> [ ] never been
>> [ ] real answer, something in french ;-):
> 
> "NumBer" :-)

Ah it could be so easy ;-)

> This one is not derived from french, it's not like
> "option independant-streams" which I messed up years ago!
> 
> Willy
> 




Re: CLI proxy for master process

2018-10-26 Thread Aleksandar Lazic
Hi, William.

On 26.10.2018 at 17:41 William Lallemand wrote:
> On Fri, Oct 26, 2018 at 05:13:00PM +0200, Aleksandar Lazic wrote:
>> Hi William.
>>
>> Sorry for my lack of knowledge and my curiosity, you know I'm always curious
>> ;-), but for which use case can I use this feature?
>>
>> Best regards.
>>
>> Aleks
>>
>>
>  
> Hi Aleks,
> 
> With a nbproc setup, the first goal is to be able to access multiple stats
> sockets from one socket.

Ah yes, you are right ;-)
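
Today that means one stats socket per process, something like (sketch):

 global
     nbproc 4
     stats socket /var/run/haproxy-1.sock process 1
     stats socket /var/run/haproxy-2.sock process 2
     stats socket /var/run/haproxy-3.sock process 3
     stats socket /var/run/haproxy-4.sock process 4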

> In a more "modern" nbthread setup, it's possible to have only one worker, but
> we still fork a new process upon a reload.
> The problem is that at the moment it's not possible to connect to the stats
> socket of a process which is leaving. Sometimes it's really useful to debug
> and see the sessions which are still connected on the old process. And that's
> the ultimate goal of this feature (not covered yet, but soon :-) )

Wow, yes. I haven't used nb(thread|proc) at debug time so I have never needed
such a feature.

BTW what's nb in "nb(thread|proc)"?

[ ] No block
[ ] never been
[ ] real answer, something in french ;-):

> It also implements a "show proc" which lists the PIDs of the processes.

That's also great.

cheers
Aleks



Re:

2018-10-26 Thread Aleksandar Lazic
Hi William.

Sorry for my lack of knowledge and my curiosity, you know I'm always curious
;-), but for which use case can I use this feature?

Best regards.

Aleks


-------- Original Message --------
From: William Lallemand 
Sent: 26 October 2018 14:47:28 CEST
To: haproxy@formilux.org
CC: w...@1wt.eu
Subject: 

From: William Lallemand 
Subject: CLI proxy for master process
In-Reply-To: 

This patch series implements a CLI on the master process.

It's a work in progress but it is now in a usable state, so people might be
interested in testing it.

The CLI on the master is organized this way:

   * The master process implements a CLI proxy which contains:
  - a listener for each -S argument on the command line
  - a server using a socketpair for each worker process
  - a CLI applet

   * The workers have a new CLI listener which is bound on a socketpair.

This CLI is special and can be configured only from the command line arguments.
It was done this way so that a reload with a wrong configuration won't destroy
the socket. To add a new listener to this CLI proxy, use the -S argument. You
can add bind options to these sockets; they use the same options as the bind
keyword, but the separator is a comma instead of a space.

Example:

  ./haproxy -W -S /tmp/master-socket -f test1.cfg
  ./haproxy -W -S /tmp/master-socket,mode,700,uid,1000,gid,1000 -f test1.cfg

This CLI proxy uses a CLI analyzer which allows it to send commands to the
workers. For this purpose a routing command has been implemented; it can be
used alone to send every subsequent command to the same place, or as a prefix
for a single command. The CLI prompt will change depending on the next default
target for commands.

Example:

$ socat /tmp/master-socket readline
help
Unknown command. Please enter one of the following commands only :
  help   : this message
  prompt : toggle interactive mode with prompt
  quit   : disconnect
  @<relative pid> : send a command to the <relative pid> process
  @!<pid> : send a command to the <pid> process
  @master : send a command to the master process
  show cli sockets : dump list of cli sockets
  show proc  : show processes status

master> show proc
# <PID> <type> <relative PID>
5248 master 0
5249 worker 1
5250 worker 2
5251 worker 3

master> @1
5249> show info
[...]
5249> @
master> @1 show info; @!5250 show info
[...]

Known issues that will be fixed for 1.9:
- The prompt is enabled by default, and the "prompt" command is not
  parsed yet
- Might have difficulties with old processes
- multiple commands on the same line won't work because of recent
  changes in process_stream
- admin/oper/user permissions are not implemented

Limitations that won't be fixed for 1.9:
- The connection is closed during a reload
- It's not a stats/commands aggregator :-)

The documentation is coming later as I'm writing a more complete doc for
the master worker.






Re: Lots of PR state failed connections with HTTP/2 on HAProxy 1.8.14

2018-10-24 Thread Aleksandar Lazic
On 24.10.2018 at 09:18 Igor Cicimov wrote:
> 
> 
> On Wed, 24 Oct 2018 5:06 pm Aleksandar Lazic <al-hapr...@none.at> wrote:
> 
> Hi.
> 
> On 24.10.2018 at 03:02 Igor Cicimov wrote:
> > On Wed, Oct 24, 2018 at 9:16 AM James Brown <jbr...@easypost.com> wrote:
> >>
> >> I tested enabling HTTP/2 on the frontend for some of our sites today 
> and
> immediately started getting a flurry of failures. Browsers (at least 
> Chrome)
> showed a lot of SPDY protocol errors and the HAProxy logs had a lot of 
> lines
> ending in
> >>
> >> https_domain_redacted/ -1/-1/-1/-1/100 400 187 - - PR-- 
> 49/2/0/0/0 0/0
> >>
> >
> > Possible reasons:
> >
> > 1. You don't have openssl v1.0.2 installed (assuming you use openssl)
> > on a server(s)
> > 2. You have changed your config for h2 support but your server(s) is
> > still running haproxy 1.7 (i.e. hasn't been restarted after upgrade
> > and still using the old 1.7 binary instead of 1.8)
> 
> That's one of the reason why we need to know the exact version.
> 
> James can you post the output of `haproxy -vv` and some more information 
> about
> your setup.
> 
> 
> This can return the correct version but it still does not mean the running
> process is actually using it (has not been restarted after upgrade).

Full ack. That's the reason why we need some more information about the setup
;-)
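
A quick way to verify which binary the running process actually uses is to ask
it over a configured stats socket (path is just an example):

 echo "show info" | socat stdio /var/run/haproxy.sock | grep -i version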

> Regards
> Aleks
> 
> >> There were no useful or interesting errors logged to syslog. No sign of
> any resources being exhausted (conntrack seems fine, etc). The times 
> varied
> but Ta was always low (usually around 100ms). I have not been able to
> reproduce this issue in a staging environment, so it may be something 
> "real
> browsers" do that doesn't show up with h2load et al.
> >>
> >> Turning off HTTP/2 (setting "alpn http/1.1") completely solves the 
> problem.
> >>
> >> The following timeouts are set on all of the affected frontends:
> >>
> >>     retries 3
> >>     timeout client 9s
> >>     timeout connect 3s
> >>     timeout http-keep-alive 5m
> >>     tcp-request inspect-delay 4s
> >>     option http-server-close
> >>
> >> Additionally, we set maxconn to a very high value (20480).
> >>
> >> Backends generally have timeout server set to a largeish value (90-300
> seconds, depending on the backend).
> >>
> >> Anything jump out at anyone?
> >> --
> >> James Brown
> >> Systems & Network Engineer
> >> EasyPost
> >
> 



