Hi Willy.

On 26-07-2014 16:50, Willy Tarreau wrote:
> Hi Aleks,
>
> On Sat, Jul 26, 2014 at 03:46:30PM +0200, Aleksandar Lazic wrote:
>Concerning the new features, no promises, but we know that we need to
>progress in the following areas :
>
>  - multi-process : better synchronization of stats and health checks,
>    and find a way to support peers in this mode. I'm still thinking a
>    lot that due to the arrival of latency monsters that are SSL and
>    compression, we could benefit from having a thread-based
>architecture
>    so that we could migrate tasks to another CPU when they're going to
>    take a lot of time. The issue I'm seeing with threads is that
>    currently the code is highly dependent on being alone to modify any
>    data. Eg: a server state is consistent between entering and leaving
>    a health check function. We don't want to start adding huge mutexes
>    everywhere.

>> How about being able to inject health check status over sockets (unix,
>> ip, ...)?

> You already have it on the CLI :
>
>   set server <backend>/<server> agent [ up | down ]
>   set server <backend>/<server> health [ up | stopping | down ]
>   set server <backend>/<server> state [ ready | drain | maint ]
>   set server <backend>/<server> weight <weight>[%]

Sorry, thanks for pointing that out.

>> The idea is to have the possibility to inform haproxy about the status
>> of a service over the health-check socket.
>> This makes it possible to use a distributed check infrastructure and
>> inform haproxy about the state of a service.

> Absolutely :-)
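
For illustration, here is a minimal sketch (in C) of what such an
external check injector could look like, using the CLI commands you
listed above. The socket path and the backend/server names are made-up
examples, and it assumes something like
"stats socket /var/run/haproxy.sock level admin" in haproxy.cfg:

  /* inject-health.c - push a health status into haproxy over its
   * stats socket. Minimal sketch; the socket path and the
   * backend/server names are made-up examples.
   */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  static int inject(const char *sock_path, const char *cmd)
  {
      struct sockaddr_un addr;
      char reply[256];
      ssize_t n;
      int fd;

      fd = socket(AF_UNIX, SOCK_STREAM, 0);
      if (fd < 0)
          return -1;

      memset(&addr, 0, sizeof(addr));
      addr.sun_family = AF_UNIX;
      strncpy(addr.sun_path, sock_path, sizeof(addr.sun_path) - 1);

      if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
          write(fd, cmd, strlen(cmd)) < 0) {
          close(fd);
          return -1;
      }

      /* haproxy answers every CLI command; print whatever comes back */
      while ((n = read(fd, reply, sizeof(reply) - 1)) > 0) {
          reply[n] = '\0';
          fputs(reply, stdout);
      }
      close(fd);
      return 0;
  }

  int main(void)
  {
      /* a distributed checker would loop over several instances here */
      return inject("/var/run/haproxy.sock",
                    "set server bk_app/srv1 health down\n") != 0;
  }

A central checker could run the same loop against the admin socket of
every instance, which is the distributed case I mean.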

>> This socket could also be used internally, e.g. by a forked haproxy
>> process.
>>
>> A similar approach could be used for the peers storage.

> Peers are a bit different, as they can be accessed at a very high rate.
> However I'm pretty sure we'll move that to shared memory just like
> the SSL session storage at some point, except that we still have to
> inform each process about the changes synchronously. I'm also thinking
> that stick tables could possibly be shared between multiple processes
> if we allocate entries from a shared memory area.

This way can only handle the information on one machine.
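
Just to illustrate what I mean: a minimal sketch of such a pre-fork
shared memory area. Everything here is hypothetical, nothing of this is
actual haproxy code:

  /* shared-area.c - sketch of a pre-fork shared memory area, in the
   * spirit of sharing stick table entries between worker processes.
   */
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  struct shared_entry {
      unsigned int key;      /* e.g. a hashed stick table key */
      unsigned int counter;  /* e.g. a tracked connection counter */
  };

  int main(void)
  {
      /* MAP_SHARED|MAP_ANONYMOUS: inherited across fork(), one copy */
      struct shared_entry *tab = mmap(NULL, 1024 * sizeof(*tab),
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      if (tab == MAP_FAILED)
          return 1;

      if (fork() == 0) {
          /* worker process: the update is visible to the parent, but
           * only to processes on this machine - hence my point above */
          __sync_fetch_and_add(&tab[0].counter, 1);
          _exit(0);
      }
      wait(NULL);
      printf("counter seen by parent: %u\n", tab[0].counter);
      munmap(tab, 1024 * sizeof(*tab));
      return 0;
  }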

>> Due to the fact that haproxy already has
>>
>> peer <peername> <ip>:<port>
>>
>> I thought to extend this to a more or less open protocol.
>>
>> When haproxy is able to send $Session-Information to redis, memcache,
>> ... then you can "better" use the current environment's resources.

> It's by far too slow, you have to communicate over the network for this,
> even if it's on the loopback, meaning you need to poll to access them
> and stop/start any processing which needs them. You really want to have
> the information accessible from within the process without doing a
> context switch!

But isn't this

peer <peername> <ip>:<port>

a similar solution?

I would understand this part of the documentation as a similar solution,
but with haproxy's own protocol. Maybe I have misunderstood it.

###
...
3.5. Peers
----------
It is possible to synchronize server entries in stick tables between several
haproxy instances over TCP connections in a multi-master fashion. ...
###

>  - HTTP/2 : this is the most troublesome part that we absolutely need
(...)
>    opportunity for going even further and definitely get rid of it.

>> I know haproxy does not need too many external libraries, but in this
>> case it could be a good solution to use an external lib like
>> http://nghttp2.org/

> I know about this one but it's more of a reference implementation we'll
> use to validate our own than anything else. The problem we have
> is not to decode the protocol, but to completely change haproxy's
> architecture to support it. A lib alone cannot solve architectural
> limitations, at worst it can emphasize them :-/

Full Ack.

> Currently I'm thinking about removing the notion of "session" (yes,
> that structure which holds everything together). I think that instead
> of instantiating an HTTP transaction structure for each session getting
> an HTTP request, I'll change that to have a list of transactions in
> progress attached to a connection, each with its own set of buffers and
> headers. And for TCP mode, we'll have a simple transaction working
> like a tunnel. The logs will then be attached to the transaction,
> just like the frontend, backend, server, analysers, timeouts, ...
>
> Another important point I have not mentioned is that the architecture
> changes needed to support HTTP/2 will inevitably affect HTTP/1
> performance, and part of the difficulty is to 1) maintain support
> for HTTP/1, and 2) limit this impact to its minimum. That's also
> one reason for improving connection reuse : it could compensate
> for the performance losses expected from the architecture changes.
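
If I understand the idea correctly, the result could look roughly like
this. This is only a sketch of the description above with hypothetical
names, not actual haproxy structures:

  /* Sketch: the "session" is gone and transactions hang off the
   * connection instead. All names are hypothetical.
   */
  #include <stddef.h>

  struct proxy;    /* frontend or backend (opaque here) */
  struct server;

  struct buffer {
      char  *data;
      size_t len;
  };

  struct http_txn {
      struct http_txn *next;      /* next transaction in progress on this
                                     connection (HTTP/2 allows several) */
      struct buffer    req;       /* per-transaction buffers and headers */
      struct buffer    res;
      struct proxy    *frontend;  /* what used to live in the session: */
      struct proxy    *backend;   /* frontend, backend, server, logs, */
      struct server   *server;    /* analysers, timeouts, ... */
      int              log_status;
  };

  struct connection {
      int              fd;
      struct http_txn *txns;      /* HTTP/1: at most one entry; TCP mode:
                                     one tunnel-like transaction */
  };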

Thanks for the explanation.

I'm sure you have thought of something like a generic HTTP "plugin": depending on the request, either the HTTP/1 or the HTTP/2 plugin handles it.
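
Something like a per-connection ops table could express that. Again
only a sketch with hypothetical names, not an actual haproxy interface:

  /* proto-dispatch.c - sketch of the generic "plugin" idea: one ops
   * table per protocol, chosen once per connection.
   */
  #include <stdio.h>
  #include <string.h>

  struct connection { int fd; };

  struct proto_ops {
      const char *name;
      int (*recv)(struct connection *conn);  /* parse requests/frames */
  };

  static int h1_recv(struct connection *c) { (void)c; puts("HTTP/1 parser"); return 0; }
  static int h2_recv(struct connection *c) { (void)c; puts("HTTP/2 framer"); return 0; }

  static const struct proto_ops http1_ops = { "http/1.1", h1_recv };
  static const struct proto_ops http2_ops = { "h2",       h2_recv };

  /* pick the handler once per connection, e.g. from the negotiated
   * ALPN token; everything after that goes through the ops table */
  static const struct proto_ops *select_proto(const char *alpn)
  {
      return (alpn && !strcmp(alpn, "h2")) ? &http2_ops : &http1_ops;
  }

  int main(void)
  {
      struct connection conn = { -1 };
      select_proto("h2")->recv(&conn);        /* -> HTTP/2 framer */
      select_proto("http/1.1")->recv(&conn);  /* -> HTTP/1 parser */
      return 0;
  }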

Cheers Aleks
