Re: [ANNOUNCE] haproxy-1.9-dev2

2018-09-18 Thread Emmanuel Hocdet

> On 18 Sep 2018 at 11:54, Lukas Tribus wrote:
> 
> Hi Manu,
> 
> 
> On Fri, 14 Sep 2018 at 15:45, Emmanuel Hocdet  wrote:
>> 
>> Hi,
>> 
>> Quick test with 1.9-dev2, and I see multi-second latency when
>> connecting to haproxy with SSL (tcp mode).
>> It's OK in master at commit 9f9b0c6a.
>> No time to investigate further for the moment.
> 
> I cannot reproduce it in a simple SSL termination + tcp mode
> configuration. There is probably something more to it.
> 
> 

Perhaps with: tcp-request inspect-delay 5s
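
For reference, a minimal termination setup along those lines might look
like this (the certificate path and addresses are made up, and timeouts
are omitted):

    listen ssl_tcp
        mode tcp
        bind :8443 ssl crt /etc/haproxy/site.pem
        tcp-request inspect-delay 5s
        # accept as soon as any payload arrives; the reported stall would
        # show up as the full 5s delay being observed regardless
        tcp-request content accept if { req.len gt 0 }
        server app1 127.0.0.1:8000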

++
Manu



Re: [ANNOUNCE] haproxy-1.9-dev2

2018-09-18 Thread Willy Tarreau
Hi guys,

On Tue, Sep 18, 2018 at 11:54:59AM +0200, Lukas Tribus wrote:
> Hi Manu,
> 
> 
> On Fri, 14 Sep 2018 at 15:45, Emmanuel Hocdet  wrote:
> >
> > Hi,
> >
> > Quick test with 1.9-dev2, and I see multi-second latency when
> > connecting to haproxy with SSL (tcp mode).
> > It's OK in master at commit 9f9b0c6a.
> > No time to investigate further for the moment.
> 
> I cannot reproduce it in a simple SSL termination + tcp mode
> configuration. There is probably something more to it.

We definitely have some issues related to connection setup and teardown
that we're investigating. They're all related to the changes that orient
the recv/send calls from the top down instead of relying on waking
everything up from the bottom up.

There's a very closely related issue that Pieter reported, where TCP
connections carrying no data cause trouble; another one that Christopher
just faced this morning, where connection errors wake process_stream() up
in loops; and some cases where client errors on H2 can crash the process.

I'm sorry for all these issues, but merging the code as a first step was
the only option to make forward progress on this part. Having everyone
constantly rebase their own code on hypothetical changes was not workable
anymore.

Among the possible solutions currently being studied, it seems that there
are too many places where the "process" part of the mux is called directly
instead of being scheduled (typically the cases where we update the
polling). But this is still under investigation.

Thus if you're still finishing your developments for 1.9, 1.9-dev2 gives
a preview of the forthcoming changes that will apply to the connection
layers, at the price of accepting to work around the current limitations.
But if you want something to run on your own servers, this definitely
isn't something to play with.

I'm currently trying to stabilize this so that we can at least continue
the development with less disturbance, but it's really not easy, as this
merge has uncovered some limitations of the current model, including some
inherited from the prehistoric code :-/

Thanks,
Willy



Re: [ANNOUNCE] haproxy-1.9-dev2

2018-09-18 Thread Lukas Tribus
Hi Manu,


On Fri, 14 Sep 2018 at 15:45, Emmanuel Hocdet  wrote:
>
> Hi,
>
> Quick test with 1.9-dev2, and I see multi-second latency when
> connecting to haproxy with SSL (tcp mode).
> It's OK in master at commit 9f9b0c6a.
> No time to investigate further for the moment.

I cannot reproduce it in a simple SSL termination + tcp mode
configuration. There is probably something more to it.


Regards,
Lukas



Re: [ANNOUNCE] haproxy-1.9-dev2

2018-09-14 Thread Emmanuel Hocdet
Hi,

Quick test with 1.9-dev2, and I see multi-second latency when connecting
to haproxy with SSL (tcp mode).
It's OK in master at commit 9f9b0c6a.
No time to investigate further for the moment.

++
Manu




Re: [ANNOUNCE] haproxy-1.9-dev2

2018-09-13 Thread Aleksandar Lazic
Hi.

On 12.09.2018 at 19:11, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9-dev2 was released on 2018/09/12. It added 199 new commits
> after version 1.9-dev1.
> 
> [...]

[ANNOUNCE] haproxy-1.9-dev2

2018-09-12 Thread Willy Tarreau
Hi,

HAProxy 1.9-dev2 was released on 2018/09/12. It added 199 new commits
after version 1.9-dev1.

Let's be clear about this one: it's mostly aimed at developers resyncing
their ongoing work. We've changed a number of pretty sensitive things and
definitely expect to spot some interesting bugs starting with "this is
impossible" and ending with "I didn't remember we supported this". Warning
given :-)

Since 1.9-dev1, a number of additional changes were merged. This is always
a good sign: it indicates that the developers no longer intend to
significantly modify their design. We're still not close to a release
though, but the steady efforts are paying off.

Among the recent changes (forgive me if I forget anyone) that should be
user-visible, I can list:
  - removal of the synchronization point in favour of a thinner rendez-vous
    point. This is used to propagate server status changes. It used to
    induce huge CPU consumption on all threads when servers were changing
    very often. Now we no longer need to wake the sleeping threads up, so
    the health checks are much less noticeable.

  - added support for the new keyword "proto" on "bind" and "server" lines.
    This makes it possible to force the application-level protocol (which
    is normally negotiated via ALPN). We temporarily use it to develop the
    new HTTP/1 mux, and it is also useful to process clear-text HTTP/2. For
    example it is possible to strip TLS using front servers, then forward
    the traffic to a single listener for HTTP processing (a sketch follows
    this list).

  - the server queues used to be ordered only on request arrival time. Now
    they also support time-based priority offsets and classes. This means
    it is possible to decide which of all the requests pending in a server
    queue will be picked first, based on its priority. Classes are always
    considered first: all requests in a lower-numbered class are processed
    before any request in a higher-numbered class. Then the offsets are
    applied on time; for example it's possible to enforce that requests
    matching a given criterion experience 250ms less queuing time than
    other requests. This can be useful to deliver javascript/CSS before
    images, or to boost premium-level users compared to other ones (see
    the sketch after this list).

  - a few new sample fetch functions were added to determine the number of
    available connections on a server or on a backend (example after this
    list).
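
To illustrate the clear-text HTTP/2 case above, a sketch of such a
configuration might be (all names, ports and the certificate path are
hypothetical):

    frontend tls_in
        mode tcp
        bind :443 ssl crt /etc/haproxy/site.pem alpn h2
        default_backend to_h2_clear

    backend to_h2_clear
        mode tcp
        server h2loop 127.0.0.1:8082

    frontend h2_clear
        mode http
        # force the h2 mux: no ALPN negotiation happens on a clear listener
        bind 127.0.0.1:8082 proto h2
        default_backend be_app

    backend be_app
        mode http
        server s1 127.0.0.1:8000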
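
Assuming the set-priority-class and set-priority-offset actions merged for
this feature, the queuing example might translate to something like this
(the cookie name and addresses are made up):

    backend be_queue
        mode http
        # lower class numbers are always dequeued before higher ones
        http-request set-priority-class int(1) if { path_end .js .css }
        http-request set-priority-class int(2) if !{ path_end .js .css }
        # a negative offset queues the request as if it had arrived 250ms
        # earlier, cutting its effective queuing time
        http-request set-priority-offset int(-250) if { req.cook(premium) -m found }
        server s1 127.0.0.1:8000 maxconn 100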
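
And the new availability fetches (assuming the be_conn_free/srv_conn_free
names) could drive routing decisions, for example spilling over once the
primary backend has no free connection slots left:

    frontend fe_main
        mode http
        bind :80
        use_backend be_spill if { be_conn_free(be_app) lt 1 }
        default_backend be_app

    backend be_app
        mode http
        server s1 127.0.0.1:8000 maxconn 50

    backend be_spill
        mode http
        server s2 127.0.0.1:8001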

The rest is mostly infrastructure changes which are not (well, should not
be) user-visible:
  - the logs can now be emitted without a stream. This will be required to
    emit logs from the muxes. The HTTP/2 mux now emits a few logs on some
    rare error cases.

  - make error captures work outside of a stream. The purpose is to be
    able to keep capturing bad messages at the various protocol layers.
    Soon we should get the ability to capture invalid H2 frames and
    invalid HPACK sequences just like we used to do at the HTTP level.
    This will be needed to preserve this functionality when HTTP/1
    processing moves down to the mux. It could possibly even be used in
    other contexts (peers or SPOE maybe?).

  - the error captures are now dynamically allocated. For now it remains
    simple, but the purpose is to be able to keep only a limited number
    of captures (total and per proxy) in order to significantly reduce
    the memory usage for those with tens of thousands of backends, and
    at the same time to maintain several captures per proxy, not just
    the last one for each direction.

  - the master-worker model has improved quite a bit. Now, instead of
    using pipes between the master and the workers, it uses socket pairs.
    The goal here is to allow the master to better communicate with the
    workers. More specifically, a second patch set introducing support
    for communicating via socket pairs over UNIX sockets will be usable
    in conjunction with this, allowing the CLI to bounce from the master
    to the workers (not finished yet).

  - the HTTP definitions and semantics code was moved to a new file,
    http.c, which is version-agnostic. Thanks to this, proto_http.c now
    mostly deals with the analysers and sample processing. This is aimed
    at simplifying the porting of the existing code to the new model.

  - the new HTTP/1 parser that was implemented to convert HTTP/1 responses
    to HTTP/2 was completed so that it will be reusable by the upcoming
    HTTP/1 mux.

  - the connection scheduling was reworked to feature true async I/O at
    every level: till now, when an upper layer wanted to send some data
    (let's say a response to a client), it had to enable data polling on
    the underlying conn_stream, which itself requested polling of the FD
    in the fd_cache from the mux, resulting in this FD being seen as
    active, the send() callback being called, and in turn the snd_buf()
    function of the data layer being called to deliver the data to the
    lower layer