Hi Willy,

On 17-01-2019 15:41, Willy Tarreau wrote:
Hi Aleks,

On Thu, Jan 17, 2019 at 01:02:56PM +0100, Aleksandar Lazic wrote:
> Very likely, yes. If you want to inspect the body you simply have to
> enable "option http-buffer-request" so that haproxy waits for the body
> before executing rules. From there, indeed you can pass whatever Lua
> code on req.body. I don't know if there would be any value in trying
> to implement some protobuf converters to decode certain things natively.
> What I don't know is if the contents can be deserialized even without
> compiling the proto files.

Agree. It would be interesting to hear a good use case and a solution for that;
at least haproxy has the possibility to do it ;-)
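For example, a minimal config sketch (the frontend and action names here are purely illustrative, not from the thread): buffer the request body first, then let a registered Lua action look at it via the req.body sample fetch mentioned above:

```
frontend grpc_in
    mode http
    option http-buffer-request       # wait for the full body before running rules
    http-request lua.inspect_body    # hypothetical action registered from Lua code
```

The Lua action would then read the buffered payload and could decode the protobuf framing itself.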

From what I've seen, a gRPC stream is reasonably easy to decode, and protobuf doesn't require the proto file: it will just emit indexes, types and values, which is enough as long as the schema doesn't change. I've seen that Thrift is pretty similar. So we could decide about routing or priorities based on
values passed in the protocol :-)
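To illustrate that point, here is a small sketch (Python, purely illustrative) of a schema-less protobuf wire-format reader: it recovers field numbers, wire types and raw values without any compiled .proto file, which is exactly what routing on "indexes, types and values" would need:

```python
import struct

def read_varint(data, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = data[pos]
        result |= (b & 0x7F) << shift
        pos += 1
        if not (b & 0x80):
            return result, pos
        shift += 7

def decode_fields(data):
    """Return (field_number, wire_type, raw_value) tuples, no schema needed."""
    pos, fields = 0, []
    while pos < len(data):
        key, pos = read_varint(data, pos)
        field_no, wire_type = key >> 3, key & 0x07
        if wire_type == 0:        # varint
            value, pos = read_varint(data, pos)
        elif wire_type == 1:      # 64-bit fixed
            value = struct.unpack_from('<d', data, pos)[0]
            pos += 8
        elif wire_type == 2:      # length-delimited (string/bytes/submessage)
            length, pos = read_varint(data, pos)
            value = data[pos:pos + length]
            pos += length
        elif wire_type == 5:      # 32-bit fixed
            value = struct.unpack_from('<f', data, pos)[0]
            pos += 4
        else:
            raise ValueError("unsupported wire type %d" % wire_type)
        fields.append((field_no, wire_type, value))
    return fields

# Example message: field 1 = varint 150, field 2 = string "hi"
msg = b'\x08\x96\x01\x12\x02hi'
print(decode_fields(msg))  # → [(1, 0, 150), (2, 2, b'hi')]
```

Note that without the schema a length-delimited field could be a string, bytes or a nested message; for routing on a known, stable field that ambiguity usually doesn't matter.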


>> As we have now a separated protocol handling layer (htx) how difficult is it 
>> add `mode fast-cgi` like `mode http`?
> We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
> rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
> with an htx-to-fcgi one, because fast-cgi is another representation of
> HTTP. The "mode http" setting is what enables all HTTP processing
> (http-request rules, cookie parsing etc). Thus you definitely want to
> have it enabled.

Full Ack.

This means that I can use QUIC+HTTP/3 => php-fpm with haproxy in the future ;-)
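If that lands, the configuration could plausibly look like the sketch below ("proto fast-cgi" here is just the server-line syntax Willy mentions above, nothing final or implemented):

```
backend php
    mode http                                   # keep full HTTP processing enabled
    server fpm1 127.0.0.1:9000 proto fast-cgi   # htx-to-fcgi mux instead of htx-to-h1
```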


FastCGI isn't a bad protocol (IMHO), but sadly it is not as widespread as
http(s), even though it has multiplexing and keep-alive features built in.

I remember that when we checked with Thierry, there were some issues to
implement multiplexing which resulted in nobody really implementing it
in practice. I *think* the problem was due to the framing or the huge
risk of head-of-line blocking making it impossible (or very hard) to
sacrifice a stream when the client doesn't read it, without damaging the
other ones. Thus it was mostly in-order delivery in the end.
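For context, the framing in question is simple: every FastCGI record carries a 16-bit requestId (the hook for multiplexing), and the BEGIN_REQUEST body carries the keep-alive flag. A small illustrative sketch in Python:

```python
import struct

FCGI_VERSION_1 = 1
FCGI_BEGIN_REQUEST = 1
FCGI_RESPONDER = 1
FCGI_KEEP_CONN = 1  # flag in the BEGIN_REQUEST body: keep the connection open

def pack_record(rec_type, request_id, content):
    """Build one FastCGI record: an 8-byte header + content + padding.
    The 16-bit requestId field is what would allow multiplexing several
    requests over one connection."""
    padding = -len(content) % 8
    header = struct.pack('>BBHHBx', FCGI_VERSION_1, rec_type,
                         request_id, len(content), padding)
    return header + content + b'\x00' * padding

# BEGIN_REQUEST body: role (responder) + flags (keep-alive) + 5 reserved bytes
begin_body = struct.pack('>HB5x', FCGI_RESPONDER, FCGI_KEEP_CONN)
record = pack_record(FCGI_BEGIN_REQUEST, 1, begin_body)
print(record.hex())
```

The framing itself is unproblematic; the head-of-line issue comes from the single TCP connection underneath, which is the same reason most implementations only ever honoured the keep-alive flag.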

(... links ...)
All of them look at the keep-alive flag but not at the multiplex flag.

So this doesn't seem to have changed much :-)

Not that I know of. From my point of view, the keep-alive feature is the one which should be supported, and the multiplexing feature not, but that's just my opinion.

Python is different, as always; they mainly use WSGI, AFAIK.


I forgot Ruby; they also use another protocol.

For Ruby we can use HTTP, as there are a lot of web servers which already have
Rack implemented ;-)


uwsgi also has its own protocol.

I remember having looked at this one many years ago when it was
presented as a replacement for fcgi, but I got contradictory feedback
depending on whom I talked to. I don't know how widespread it is.

Well, it's not as widespread as fcgi and wsgi, AFAIK.
Let's focus on fcgi and see what the feedback is.

I can open an issue on GitHub as soon as it's ready, to track the feedback.


