Re: [ANNOUNCE] haproxy-1.9.2

2019-01-17 Thread Willy Tarreau
Hi Aleks,

On Thu, Jan 17, 2019 at 01:02:56PM +0100, Aleksandar Lazic wrote:
> > Very likely, yes. If you want to inspect the body you simply have to
> > enable "option http-buffer-request" so that haproxy waits for the body
> > before executing rules. From there, indeed you can pass whatever Lua
> > code on req.body. I don't know if there would be any value in trying
> > to implement some protobuf converters to decode certain things natively.
> > What I don't know is if the contents can be deserialized even without
> > compiling the proto files.
> 
> Agree. It would be interesting to hear a good use case and a solution for
> that; at least haproxy has the possibility to do it ;-)

From what I've seen, a gRPC stream is reasonably easy to decode, and protobuf
doesn't require the proto file, it will just emit indexes, types and values,
which is enough as long as the schema doesn't change. I've seen that Thrift
is pretty similar. So we could decide about routing or priorities based on
values passed in the protocol :-)
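
As an illustration, a schema-less walk of the wire format is short enough to
sketch in Lua (an untested sketch, not code from the thread; it assumes Lua
5.3+ bit operators, and for a gRPC body the 5-byte message frame, 1 flag byte
plus a 4-byte big-endian length, must be skipped first):

    -- Read one base-128 varint starting at pos; returns value and next pos.
    local function read_varint(buf, pos)
      local result, shift, b = 0, 0, 0
      repeat
        b = buf:byte(pos)
        result = result | ((b & 0x7f) << shift)
        shift, pos = shift + 7, pos + 1
      until b < 0x80
      return result, pos
    end

    -- Walk a raw protobuf message without any .proto file; returns a list
    -- of { field = <number>, wtype = <wire type>, value = <raw value> }.
    local function pb_fields(buf)
      local fields, pos = {}, 1
      while pos <= #buf do
        local key; key, pos = read_varint(buf, pos)
        local field, wtype = key >> 3, key & 0x07
        local value
        if wtype == 0 then                        -- varint
          value, pos = read_varint(buf, pos)
        elseif wtype == 1 then                    -- fixed 64-bit
          value, pos = buf:sub(pos, pos + 7), pos + 8
        elseif wtype == 2 then                    -- length-delimited
          local len; len, pos = read_varint(buf, pos)
          value, pos = buf:sub(pos, pos + len - 1), pos + len
        elseif wtype == 5 then                    -- fixed 32-bit
          value, pos = buf:sub(pos, pos + 3), pos + 4
        else
          break                                   -- unknown wire type, stop
        end
        fields[#fields + 1] = { field = field, wtype = wtype, value = value }
      end
      return fields
    end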

> >> As we have now a separate protocol handling layer (htx), how difficult is it
> >> to add `mode fast-cgi` like `mode http`?
> > 
> > We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
> > rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
> > with an htx-to-fcgi one, because fast-cgi is another representation of
> > HTTP. The "mode http" setting is what enables all HTTP processing
> > (http-request rules, cookie parsing etc). Thus you definitely want to
> > have it enabled.
> 
> Full Ack.
> 
> This means that in the future I can use QUIC+HTTP/3 => php-fpm with haproxy ;-)

Yes.

> FastCGI isn't a bad protocol (IMHO), but sadly it never became as widespread
> as http(s), even though it has multiplexing and keep-alive features built in.

I remember that when we checked with Thierry, there were some issues with
implementing multiplexing, which resulted in nobody really doing it in
practice. I *think* the problem was due to the framing, or the huge risk
of head-of-line blocking making it impossible (or very hard) to sacrifice
a stream the client doesn't read without damaging the other ones. Thus it
was mostly in-order delivery in the end.
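
For reference, the framing itself is trivial: every FastCGI record starts
with an 8-byte header whose 16-bit request id is what allows interleaving on
paper. A sketch of the header layout (per the FastCGI 1.0 spec, not code
from the thread):

    -- Decode one 8-byte FastCGI record header; returns the fields and the
    -- position where the record's content starts.
    local function read_fcgi_header(buf, pos)
      local version     = buf:byte(pos)                   -- always 1
      local rtype       = buf:byte(pos + 1)               -- e.g. 6 = FCGI_STDOUT
      local request_id  = (buf:byte(pos + 2) << 8) | buf:byte(pos + 3)
      local content_len = (buf:byte(pos + 4) << 8) | buf:byte(pos + 5)
      local padding_len = buf:byte(pos + 6)               -- 8th byte is reserved
      return version, rtype, request_id, content_len, padding_len, pos + 8
    end

Records with different request ids may interleave on one connection, but a
stalled stream still occupies the shared TCP pipe, hence the head-of-line
problem described above.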

(... links ...)
> All of them look at the keep-alive flag but not at the multiplex flag.

So this doesn't seem to have changed much :-)

> Python is different, as always; they mainly use WSGI, AFAIK.
> https://wsgi.readthedocs.io/en/latest/

OK.

> uwsgi also has its own protocol:
> https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html

I remember having looked at this one many years ago when it was
presented as a replacement for fcgi, but I got contradictory feedback
depending on whom I talked to. I don't know how widespread it is
nowadays.

Cheers,
Willy



Re: [ANNOUNCE] haproxy-1.9.2

2019-01-17 Thread Aleksandar Lazic
Hi Willy.

On 17.01.2019 at 04:25, Willy Tarreau wrote:
> Hi Aleks,
> 
> On Wed, Jan 16, 2019 at 11:52:12PM +0100, Aleksandar Lazic wrote:
>> For service routing, the standard haproxy content routing options
>> (path, header, ...) are possible, right?
> 
> Yes absolutely.
> 
>> If someone wants to route based on gRPC content, they can use Lua with the
>> body content, right?
>>
>> For example this library https://github.com/Neopallium/lua-pb
> 
> Very likely, yes. If you want to inspect the body you simply have to
> enable "option http-buffer-request" so that haproxy waits for the body
> before executing rules. From there, indeed you can pass whatever Lua
> code on req.body. I don't know if there would be any value in trying
> to implement some protobuf converters to decode certain things natively.
> What I don't know is if the contents can be deserialized even without
> compiling the proto files.

Agree. It would be interesting to hear a good use case and a solution for
that; at least haproxy has the possibility to do it ;-)

>>> That's about all. With each major release we feel like version dot-2
>>> works pretty well. This one is no exception. We'll see in 6 months if
>>> it was wise :-)
>>
>> So you would say I can use it in production with htx ;-)
> 
> As long as you're still a bit careful, yes, definitely. haproxy.org has
> been running it in production since 1.9-dev9 or so. Since 1.9.0 was
> released, we've had one crash happen a few times (fixed in 1.9.1) and two
> massive slowdowns due to non-expiring connections reaching the frontend's
> maxconn limit (fixed in 1.9.2).

Yep, agree. In prod it's always good to keep an eye on it.

>> and the docker image is also updated ;-)
>>
>> https://hub.docker.com/r/me2digital/haproxy19
> 
> Thanks.
> 
>> As we have now a separate protocol handling layer (htx), how difficult is it
>> to add `mode fast-cgi` like `mode http`?
> 
> We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
> rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
> with an htx-to-fcgi one, because fast-cgi is another representation of
> HTTP. The "mode http" setting is what enables all HTTP processing
> (http-request rules, cookie parsing etc). Thus you definitely want to
> have it enabled.

Full Ack.

This means that in the future I can use QUIC+HTTP/3 => php-fpm with haproxy ;-)

FastCGI isn't a bad protocol (IMHO), but sadly it never became as widespread
as http(s), even though it has multiplexing and keep-alive features built in.

>> I ask because php doesn't have a production-ready http implementation, but it
>> has a robust fastcgi process manager (php-fpm). There are several possible
>> solutions to add http to php (nginx+php-fpm, uwsgi+php-fpm, uwsgi+embedded
>> php), but all of them require an additional hop.
>>
>> My wish is to have a flow like this:
>>
>> haproxy -> *.php          => php-fpm
>>         -> *.static-files => nginx,h2o
> 
> It's *exactly* what I've been wanting for a long time as well. Mind you
> that Thierry implemented some experimental fast-cgi code many years ago
> in 1.3! By then we were facing some strong architectural limitations,
> but now I think we should have everything ready thanks to the muxes.

Oh wow 1.3. 8-O

In 2014 Baptiste wrote a blog post on how to do health checks for php-fpm,
so it looks like fast-cgi has been on the table for a long time.

https://alohalb.wordpress.com/2014/06/06/binary-health-check-with-haproxy-1-5-php-fpmfastcgi-probe-example/

Just in case it's interesting, here are some links to receiver implementations
in popular servers.

https://github.com/php/php-src/blob/master/main/fastcgi.h
https://github.com/php/php-src/blob/master/main/fastcgi.c

https://github.com/unbit/uwsgi/blob/master/proto/fastcgi.c
https://github.com/unbit/uwsgi/blob/master/plugins/router_fcgi/router_fcgi.c

https://golang.org/src/net/http/fcgi/fcgi.go
https://golang.org/src/net/http/fcgi/child.go

https://docs.rs/crate/fastcgi/1.0.0/source/src/lib.rs

All of them look at the keep-alive flag but not at the multiplex flag.

Python is different, as always; they mainly use WSGI, AFAIK.
https://wsgi.readthedocs.io/en/latest/

uwsgi also has its own protocol:
https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html

>> I have taken a look at the fcgi protocol, but sadly I'm not a good enough
>> programmer for that task. I can offer to test the implementation.
> 
> That's good to know, thanks!
> 
> Cheers,
> Willy

Regards
Aleks



Re: [ANNOUNCE] haproxy-1.9.2

2019-01-16 Thread Willy Tarreau
Hi Aleks,

On Wed, Jan 16, 2019 at 11:52:12PM +0100, Aleksandar Lazic wrote:
> For service routing, the standard haproxy content routing options
> (path, header, ...) are possible, right?

Yes absolutely.
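
For example, a hedged sketch (backend names and ports are invented, the
directives are standard 1.9 ones, and mode http plus timeouts are assumed
from a defaults section):

    frontend grpc_in
        bind :50052 proto h2
        option http-use-htx
        # gRPC requests carry the service/method in the path and a
        # content-type of application/grpc
        acl is_grpc    req.hdr(content-type) -m beg application/grpc
        acl is_greeter path_beg /helloworld.Greeter/
        use_backend be_greeter if is_grpc is_greeter
        default_backend be_other

    backend be_greeter
        server srv1 127.0.0.1:50051 proto h2

    backend be_other
        server srv2 127.0.0.1:50061 proto h2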

> If someone wants to route based on gRPC content, they can use Lua with the
> body content, right?
> 
> For example this library https://github.com/Neopallium/lua-pb

Very likely, yes. If you want to inspect the body you simply have to
enable "option http-buffer-request" so that haproxy waits for the body
before executing rules. From there, indeed you can pass whatever Lua
code on req.body. I don't know if there would be any value in trying
to implement some protobuf converters to decode certain things natively.
What I don't know is if the contents can be deserialized even without
compiling the proto files.
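
Wired into a configuration, that could look like the sketch below (untested;
the action name, file path and variable are invented, and "req.body" is
exposed to Lua as txn.f:req_body() as far as I know):

    global
        lua-load /etc/haproxy/pb_route.lua

    frontend grpc_in
        bind :50052 proto h2
        option http-use-htx
        option http-buffer-request       # wait for the body before rules run
        http-request lua.pb_inspect      # action registered by the script
        default_backend be_grpc

    backend be_grpc
        server srv1 127.0.0.1:50051 proto h2

And the Lua side:

    -- pb_route.lua: inspect the buffered request body from a Lua action.
    core.register_action("pb_inspect", { "http-req" }, function(txn)
        local body = txn.f:req_body()    -- the "req.body" sample fetch
        if body then
            -- hand the raw bytes to whatever decoder you like (e.g. lua-pb)
            txn:set_var("txn.body_len", #body)
        end
    end)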

> > That's about all. With each major release we feel like version dot-2
> > works pretty well. This one is no exception. We'll see in 6 months if
> > it was wise :-)
> 
> So you would say I can use it in production with htx ;-)

As long as you're still a bit careful, yes, definitely. haproxy.org has
been running it in production since 1.9-dev9 or so. Since 1.9.0 was
released, we've had one crash happen a few times (fixed in 1.9.1) and two
massive slowdowns due to non-expiring connections reaching the frontend's
maxconn limit (fixed in 1.9.2).

> and the docker image is also updated ;-)
> 
> https://hub.docker.com/r/me2digital/haproxy19

Thanks.

> As we have now a separate protocol handling layer (htx), how difficult is it
> to add `mode fast-cgi` like `mode http`?

We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
with an htx-to-fcgi one, because fast-cgi is another representation of
HTTP. The "mode http" setting is what enables all HTTP processing
(http-request rules, cookie parsing etc). Thus you definitely want to
have it enabled.
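
In other words, something like the sketch below; note that the "proto
fast-cgi" keyword did not exist at the time of writing, so this is only an
illustration of the design described above:

    backend php
        mode http                     # HTTP processing stays enabled
        # hypothetical: swaps the htx-to-h1 mux for an htx-to-fcgi one
        server fpm1 127.0.0.1:9000 proto fast-cgi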

> I ask because php doesn't have a production-ready http implementation, but it
> has a robust fastcgi process manager (php-fpm). There are several possible
> solutions to add http to php (nginx+php-fpm, uwsgi+php-fpm, uwsgi+embedded
> php), but all of them require an additional hop.
> 
> My wish is to have a flow like this:
>
> haproxy -> *.php          => php-fpm
>         -> *.static-files => nginx,h2o

It's *exactly* what I've been wanting for a long time as well. Mind you
that Thierry implemented some experimental fast-cgi code many years ago
in 1.3! Back then we were facing some strong architectural limitations,
but now I think we should have everything ready thanks to the muxes.
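
The routing half of that flow already works today; only the php-fpm leg
waits on the future fcgi mux. A sketch (names and ports invented):

    frontend web
        bind :80
        acl is_php path_end .php
        use_backend php if is_php
        default_backend static

    backend php
        # today this still needs an http-speaking hop (e.g. nginx) in front
        # of php-fpm; "proto fast-cgi" would let haproxy talk to it directly
        server app1 127.0.0.1:8081 check

    backend static
        server nginx1 127.0.0.1:8080 check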

> I have taken a look at the fcgi protocol, but sadly I'm not a good enough
> programmer for that task. I can offer to test the implementation.

That's good to know, thanks!

Cheers,
Willy



Re: [ANNOUNCE] haproxy-1.9.2

2019-01-16 Thread Aleksandar Lazic
Hi.

On 16.01.2019 at 19:02, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9.2 was released on 2019/01/16. It added 58 new commits
> after version 1.9.1.
> 
> It addresses a number of lower-importance pending issues that were not
> yet merged into 1.9.1, fixes one bug in the cache, and addresses some
> long-standing limitations that were affecting H2.
> 
> The highest severity issue, but also the hardest to trigger, is the
> one affecting the cache: it's possible to corrupt the shared memory
> segment when using some asymmetric caching rules, and crash the process.
> There is a workaround though, which consists in making sure an
> "http-request cache-use" action is always performed before an
> "http-response cache-store" action (i.e. the conditions must match).
> This bug already affects 1.8 and nobody noticed, so I'm not worried :-)
> 
> The rest is of lower importance, mostly annoyances. One issue was
> causing the mailers to spam the server in loops. Another one affected
> idle server connections (I don't remember the details after seeing
> several of them to be honest), apparently the stats page could crash
> when using HTX, and there were still a few cases where stale HTTP/1
> connections would never leave in HTX (after certain situations of client
> timeout). The 0-RTT feature was broken when openssl 1.1.1 was released
> due to the anti-replay protection being enabled by default there (which
> makes sense since not everyone uses it with HTTP and proper support);
> this is now fixed.
> 
> We had been observing a slowly growing number of orphaned connections on
> haproxy.org last week (several per hour), and since the recent fixes we can
> confirm that it's perfectly clean now.
> 
> There's a small improvement regarding the encryption of TLS tickets. We
> used to support 128 bits only and it looks like the default setting
> changed 2 years ago without us noticing. Some users were asking for 256
> bit support, so that was implemented and backported. It will work
> transparently as the key size is determined automatically. We don't
> think it would make sense at this point to backport this to 1.8, but if
> there is compelling demand for it, Emeric knows how to do it.
> 
> Regarding the long-standing limitations affecting H2, some of you
> probably remember that haproxy used not to support CONTINUATION frames,
> which was causing an issue with one very old version of chromium, and
> that it didn't support trailers, making it incompatible with gRPC (which
> may also use CONTINUATION). This has constantly resulted in h2spec
> returning 6 failed tests. These limitations could be addressed in 2.0-dev
> relatively easily thanks to the much better new architecture, and I
> considered it was right to backport these patches so that we don't have
> to work around them anymore. I'd say that while from a developer's
> perspective these limitations were not bugs ("works as designed"), from
> the user's perspective they definitely were.
> 
> I could try this with the gRPC helloworld tests (which by the way support
> H2 in clear text):
> 
>haproxy$ cat h2grpc.cfg
>defaults
> mode http
> timeout client 5s
> timeout server 5s
> timeout connect 1s
> 
>listen grpc
> log stdout format raw local0
> option httplog
> option http-use-htx
> bind :50052 proto h2
> server srv1 127.0.0.1:50051 proto h2
>haproxy$ ./haproxy -d -f h2grpc.cfg
> 
>grpc$ go run examples/helloworld/greeter_server/main.go &
>grpc$ go run examples/helloworld/greeter_client/main.go haproxy 
>2019/01/04 11:11:40 Received: haproxy
>2019/01/04 11:11:40 Greeting: Hello haproxy
> 
>(...)haproxy$ ./haproxy -d -f h2grpc.cfg
>:grpc.accept(0008)=000b from [127.0.0.1:37538] ALPN=  
>:grpc.clireq[000b:]: POST /helloworld.Greeter/SayHello HTTP/2.0
>:grpc.clihdr[000b:]: content-type: application/grpc 
>:grpc.clihdr[000b:]: user-agent: grpc-go/1.18.0-dev   
>:grpc.clihdr[000b:]: te: trailers
>:grpc.clihdr[000b:]: grpc-timeout: 994982u
>:grpc.clihdr[000b:]: host: localhost:50052
>:grpc.srvrep[000b:000c]: HTTP/2.0 200
>:grpc.srvhdr[000b:000c]: content-type: application/grpc
>:grpc.srvcls[000b:000c]
>:grpc.clicls[000b:000c]
>:grpc.closed[000b:000c]
>127.0.0.1:37538 [04/Jan/2019:11:11:40.705] grpc grpc/srv1 0/0/0/1/1 200 116 - -  1/1/0/0/0 0/0 "POST /helloworld.Greeter/SayHello HTTP/2.0"
> 
> In the past we'd get an error from the client saying that the response
> came without trailers. So now this limitation is expected to be just bad
> old memories.

That's great ;-) ;-)

For service routing, the standard haproxy content routing options
(path, header, ...) are possible, right?

If someone wants to route based on gRPC content, they can use Lua with body

[ANNOUNCE] haproxy-1.9.2

2019-01-16 Thread Willy Tarreau
Hi,

HAProxy 1.9.2 was released on 2019/01/16. It added 58 new commits
after version 1.9.1.

It addresses a number of lower-importance pending issues that were not
yet merged into 1.9.1, fixes one bug in the cache, and addresses some
long-standing limitations that were affecting H2.

The highest severity issue, but also the hardest to trigger, is the
one affecting the cache: it's possible to corrupt the shared memory
segment when using some asymmetric caching rules, and crash the process.
There is a workaround though, which consists in making sure an
"http-request cache-use" action is always performed before an
"http-response cache-store" action (i.e. the conditions must match).
This bug already affects 1.8 and nobody noticed, so I'm not worried :-)
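
A sketch of a symmetric rule set along those lines (cache name and condition
invented; the variable trick simply guarantees that both rules evaluate the
exact same condition):

    cache small
        total-max-size 64             # shared memory segment, in MB
        max-age 60

    frontend web
        bind :8080
        default_backend app
        # evaluate the condition once on the request side, then reuse the
        # stored result so cache-use and cache-store always stay in step
        http-request  set-var(txn.cacheable) int(1) if { path_beg /static/ }
        http-request  cache-use   small if { var(txn.cacheable) -m int 1 }
        http-response cache-store small if { var(txn.cacheable) -m int 1 }

    backend app
        server srv1 127.0.0.1:8081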

The rest is of lower importance, mostly annoyances. One issue was
causing the mailers to spam the server in loops. Another one affected
idle server connections (I don't remember the details after seeing
several of them to be honest), apparently the stats page could crash
when using HTX, and there were still a few cases where stale HTTP/1
connections would never leave in HTX (after certain situations of client
timeout). The 0-RTT feature was broken when openssl 1.1.1 was released
due to the anti-replay protection being enabled by default there (which
makes sense since not everyone uses it with HTTP and proper support);
this is now fixed.

We had been observing a slowly growing number of orphaned connections on
haproxy.org last week (several per hour), and since the recent fixes we can
confirm that it's perfectly clean now.

There's a small improvement regarding the encryption of TLS tickets. We
used to support 128 bits only and it looks like the default setting
changed 2 years ago without us noticing. Some users were asking for 256
bit support, so that was implemented and backported. It will work
transparently as the key size is determined automatically. We don't
think it would make sense at this point to backport this to 1.8, but if
there is compelling demand for it, Emeric knows how to do it.
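
For those who want to opt in, the mechanism is the existing tls-ticket-keys
file; the only visible change is that an 80-byte key now selects AES-256
where a 48-byte key keeps AES-128. A sketch (paths invented):

    # /etc/haproxy/ticket.keys holds one base64-encoded key per line
    # (haproxy expects a few keys, 3 by default); 48 raw bytes = AES-128,
    # 80 raw bytes = AES-256, e.g.:
    #   (openssl rand -base64 80 | tr -d '\n'; echo) >> ticket.keys
    frontend https
        bind :443 ssl crt /etc/haproxy/site.pem tls-ticket-keys /etc/haproxy/ticket.keys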

Regarding the long-standing limitations affecting H2, some of you
probably remember that haproxy used not to support CONTINUATION frames,
which was causing an issue with one very old version of chromium, and
that it didn't support trailers, making it incompatible with gRPC (which
may also use CONTINUATION). This has constantly resulted in h2spec
returning 6 failed tests. These limitations could be addressed in 2.0-dev
relatively easily thanks to the much better new architecture, and I
considered it was right to backport these patches so that we don't have
to work around them anymore. I'd say that while from a developer's
perspective these limitations were not bugs ("works as designed"), from
the user's perspective they definitely were.

I could try this with the gRPC helloworld tests (which by the way support
H2 in clear text):

   haproxy$ cat h2grpc.cfg
   defaults
mode http
timeout client 5s
timeout server 5s
timeout connect 1s

   listen grpc
log stdout format raw local0
option httplog
option http-use-htx
bind :50052 proto h2
server srv1 127.0.0.1:50051 proto h2
   haproxy$ ./haproxy -d -f h2grpc.cfg

   grpc$ go run examples/helloworld/greeter_server/main.go &
   grpc$ go run examples/helloworld/greeter_client/main.go haproxy 
   2019/01/04 11:11:40 Received: haproxy
   2019/01/04 11:11:40 Greeting: Hello haproxy

   (...)haproxy$ ./haproxy -d -f h2grpc.cfg
   :grpc.accept(0008)=000b from [127.0.0.1:37538] ALPN=  
   :grpc.clireq[000b:]: POST /helloworld.Greeter/SayHello HTTP/2.0
   :grpc.clihdr[000b:]: content-type: application/grpc 
   :grpc.clihdr[000b:]: user-agent: grpc-go/1.18.0-dev   
   :grpc.clihdr[000b:]: te: trailers
   :grpc.clihdr[000b:]: grpc-timeout: 994982u
   :grpc.clihdr[000b:]: host: localhost:50052
   :grpc.srvrep[000b:000c]: HTTP/2.0 200
   :grpc.srvhdr[000b:000c]: content-type: application/grpc
   :grpc.srvcls[000b:000c]
   :grpc.clicls[000b:000c]
   :grpc.closed[000b:000c]
   127.0.0.1:37538 [04/Jan/2019:11:11:40.705] grpc grpc/srv1 0/0/0/1/1 200 116 - -  1/1/0/0/0 0/0 "POST /helloworld.Greeter/SayHello HTTP/2.0"

In the past we'd get an error from the client saying that the response
came without trailers. So now this limitation is expected to be just bad
old memories.

Last, some might have followed the updates around varnishtest. It
evolved into an autonomous project called VTest, but it used to be very
difficult to build due to remaining intimate dependencies with Varnish.
Poul-Henning and Fred have addressed this, and now it's trivial to
build and works like a charm. Given that varnishtest was still affected
by a few issues causing crashes on certain tests, it was about time to
complete the switch. Thus