Re: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-24 Thread Lukas Tribus
On Mon, 24 Sep 2018 at 16:36, Willy Tarreau  wrote:
>
> On Mon, Sep 24, 2018 at 02:30:35PM +, Pierre Cheynier wrote:
> > OK, I conclude this SSE pattern is not working out-of-the-box when using h2
> > as of now. Is it still true even if the user sets the proper connection
> > headers on the server side?
>
> Yes, the headers are irrelevant here; it's related to the fact that each
> request from an H2 connection is a different stream and that the server-
> side idle connection is attached to a stream. So streams are short-lived
> and the server-side connection is closed for now. But hopefully it won't
> be anymore in 1.9 ;-)

Just to be clear though: Content-Length or chunked Transfer-Encoding is
required if you want to use keep-alive on the backend, even with
HTTP/1.1 (or with other products). It's just that it currently won't work
in H2 either way, but even if you only use HTTP/1.1 you'd need it.
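For illustration, here is a minimal sketch (mine, not from the thread) of what chunked Transfer-Encoding framing looks like for an SSE event; the zero-length chunk is the end-of-body delimiter that lets the peer keep the connection open instead of having to read until a close:

```python
def sse_event(data: str) -> bytes:
    """Encode one SSE event: a data field plus the blank-line terminator."""
    return ("data: %s\n\n" % data).encode()

def http_chunk(payload: bytes) -> bytes:
    """Frame a payload as one HTTP/1.1 chunk: hex length, CRLF, data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(payload), payload)

event = sse_event('{"a": "b"}')   # b'data: {"a": "b"}\n\n' (18 bytes)
wire = http_chunk(event)          # b'12\r\ndata: {"a": "b"}\n\n\r\n'
end_of_body = http_chunk(b"")     # b'0\r\n\r\n' marks the end of the body
```

Without that framing (and without a Content-Length), the only way for the peer to know the body has ended is the connection closing, which is why keep-alive cannot work.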


Lukas



Re: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-24 Thread Willy Tarreau
On Mon, Sep 24, 2018 at 02:30:35PM +, Pierre Cheynier wrote:
> OK, I conclude this SSE pattern is not working out-of-the-box when using h2
> as of now. Is it still true even if the user sets the proper connection
> headers on the server side?

Yes, the headers are irrelevant here; it's related to the fact that each
request from an H2 connection is a different stream and that the server-
side idle connection is attached to a stream. So streams are short-lived
and the server-side connection is closed for now. But hopefully it won't
be anymore in 1.9 ;-)

Willy



RE: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-24 Thread Pierre Cheynier
> Hi Pierre,
Hi Willy, 

> The close on the server side is expected, that's a limitation of the current
> design that we're addressing for 1.9 and which is much harder than initially
> expected. The reason is that streams are independent in H2 while in H1 the
> same stream remains idle and recycled for a new request, allowing us to keep
> the server-side connection alive. Thus in H2 we can't benefit from the
> keep-alive mechanisms we have in H1. But we're currently working on
> addressing this. As a side effect, it should end up considerably simplifying
> the H1 code as well, but for now it's a nightmare, too many changes at once...

OK, I conclude this SSE pattern is not working out-of-the-box when using h2 as
of now. Is it still true even if the user sets the proper connection headers on
the server side?

Thanks,

Pierre


Re: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-24 Thread Willy Tarreau
Hi Pierre,

On Mon, Sep 24, 2018 at 02:10:21PM +, Pierre Cheynier wrote:
> > You'll notice that in the HTTP/2 case, the stream is closed as you mentioned
> > (DATA len=0 + ES=1), then HAProxy immediately sends a FIN-ACK to the server.
> > Same for the client, just after it forwarded the headers. It never waits for
> > any SSE frame.
> 
> EDIT: in fact, analyzing my capture, I see that my workstation (curl) may be
> the originator, since it sends a close at the TLS level (the close_notify).
> 
> $ curl --version
> curl 7.61.0 (x86_64-pc-linux-gnu) libcurl/7.61.0 OpenSSL/1.1.0h zlib/1.2.11 
> libidn2/2.0.5 libpsl/0.20.2 (+libidn2/2.0.4) libssh2/1.8.0 nghttp2/1.32.0 
> librtmp/2.3
> Release-Date: 2018-07-11
> Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 
> pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp 
> Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB 
> SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL 
> 
> A curl or haproxy issue? What do you think?

In my experience, when fed with a single request, curl closes right after
receiving a complete response. You can try to pass it two requests on the
same command line to see if it only closes at the end.
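The two-request test can also be done programmatically; a rough sketch (host, port and path are placeholders, not from the thread) using Python's http.client, which reuses the socket between requests when the peer's framing allows keep-alive:

```python
import http.client

def statuses_over_one_connection(host, port, path, n=2):
    """Send n GET requests over a single client connection and return the
    status codes; each body is drained so the socket can be reused."""
    conn = http.client.HTTPConnection(host, port)
    try:
        out = []
        for _ in range(n):
            conn.request("GET", path)
            resp = conn.getresponse()
            resp.read()  # reuse is only possible once the body has ended
            out.append(resp.status)
        return out
    finally:
        conn.close()
```

If the first response carries neither Content-Length nor chunked encoding, the client can only find the end of the body by reading until the server closes, so the second request necessarily goes over a fresh connection.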

The close on the server side is expected, that's a limitation of the current
design that we're addressing for 1.9 and which is much harder than initially
expected. The reason is that streams are independent in H2 while in H1 the
same stream remains idle and recycled for a new request, allowing us to keep
the server-side connection alive. Thus in H2 we can't benefit from the
keep-alive mechanisms we have in H1. But we're currently working on
addressing this. As a side effect, it should end up considerably simplifying
the H1 code as well, but for now it's a nightmare, too many changes at once...

Cheers,
Willy



RE: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-24 Thread Pierre Cheynier
> You'll notice that in the HTTP/2 case, the stream is closed as you mentioned
> (DATA len=0 + ES=1), then HAProxy immediately sends a FIN-ACK to the server.
> Same for the client, just after it forwarded the headers. It never waits for
> any SSE frame.

EDIT: in fact, analyzing my capture, I see that my workstation (curl) may be the
originator, since it sends a close at the TLS level (the close_notify).

$ curl --version
curl 7.61.0 (x86_64-pc-linux-gnu) libcurl/7.61.0 OpenSSL/1.1.0h zlib/1.2.11 
libidn2/2.0.5 libpsl/0.20.2 (+libidn2/2.0.4) libssh2/1.8.0 nghttp2/1.32.0 
librtmp/2.3
Release-Date: 2018-07-11
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 
pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp 
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL 
libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL 

A curl or haproxy issue? What do you think?

Pierre


Re: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-23 Thread Willy Tarreau
Hi Lukas,

On Mon, Sep 24, 2018 at 01:46:57AM +0200, Lukas Tribus wrote:
> Hello,
> 
> 
> On Fri, 21 Sep 2018 at 15:45, Pierre Cheynier  wrote:
> > Let me know if you see something obvious here, or if this is a candidate
> > for a bug.
> >
> > We have a service using SSE through text/event-stream content-type.
> >
> > In HTTP/1.1 we have a normal stream as expected:
> > < HTTP/1.1 200 OK
> > < Content-Type: text/event-stream
> > data: {"a": "b"}
> >
> > data: {"a": "b"}
> >
> > data: {"a": "b"}
> > (...)
> >
> > HAProxy on its side adds the `Connection: close` header.
> >
> > When adding 'alpn h2,http/1.1' to the bind directive, we observe the
> > following: after the first 200 OK, the connection is closed by haproxy on
> > both the server and client side by sending a FIN/ACK.
> 
> The backend server provides neither Content-Length nor chunked
> Transfer-Encoding in the response. This makes using keep-alive
> impossible, regardless of the HTTP version.
> 
> Theoretically the frontend connection could be kept up in this
> situation as far as I can tell, but that is an optimization that will
> require more work in haproxy (as the http layer and error handling
> become more HTTP-version agnostic - currently many transaction-based
> problems affect the entire H2 mux).

In fact not, it should work as we close streams and not connections.
So I conclude we have a bug there that I need to explore further.
Pierre, it would be interesting to know if the connection ends up on
a timeout or after the server closes. If it's a timeout it might be
expected that it's the same on both sides and that we simply close
when it strikes. If the server closed, we should only close the
stream (empty DATA frame with ES=1) but not the connection. It is
possible that we do this and face a protocol error later leading
to the connection being closed, so that's why I'm interested in the
exact sequence.
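For anyone inspecting a capture by hand, the frame described above is easy to recognize: an HTTP/2 frame header is 9 octets (RFC 7540, section 4.1). A small illustrative sketch (mine, not part of the thread) of decoding one:

```python
H2_DATA = 0x0          # DATA frame type
FLAG_END_STREAM = 0x1  # ES bit in the flags octet

def parse_h2_frame_header(raw: bytes):
    """Decode a 9-octet HTTP/2 frame header (RFC 7540 section 4.1)
    into (length, type, flags, stream_id)."""
    if len(raw) < 9:
        raise ValueError("need at least 9 octets")
    length = int.from_bytes(raw[0:3], "big")        # 24-bit payload length
    ftype = raw[3]                                  # 8-bit frame type
    flags = raw[4]                                  # 8-bit flags
    stream_id = int.from_bytes(raw[5:9], "big") & 0x7FFFFFFF  # drop R bit
    return length, ftype, flags, stream_id

def is_clean_stream_close(raw: bytes) -> bool:
    """True for the frame in question: an empty DATA frame with ES=1."""
    length, ftype, flags, _ = parse_h2_frame_header(raw)
    return ftype == H2_DATA and length == 0 and bool(flags & FLAG_END_STREAM)
```

On the wire, the sequence to look for is that header (`00 00 00 00 01` followed by the stream id) with no connection-level GOAWAY or TCP FIN right behind it.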

Cheers,
Willy



Re: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-23 Thread Lukas Tribus
Hello,


On Fri, 21 Sep 2018 at 15:45, Pierre Cheynier  wrote:
> Let me know if you see something obvious here, or if this is a candidate for
> a bug.
>
> We have a service using SSE through text/event-stream content-type.
>
> In HTTP/1.1 we have a normal stream as expected:
> < HTTP/1.1 200 OK
> < Content-Type: text/event-stream
> data: {"a": "b"}
>
> data: {"a": "b"}
>
> data: {"a": "b"}
> (...)
>
> HAProxy on its side adds the `Connection: close` header.
>
> When adding 'alpn h2,http/1.1' to the bind directive, we observe the
> following: after the first 200 OK, the connection is closed by haproxy on
> both the server and client side by sending a FIN/ACK.

The backend server provides neither Content-Length nor chunked
Transfer-Encoding in the response. This makes using keep-alive
impossible, regardless of the HTTP version.

Theoretically the frontend connection could be kept up in this
situation as far as I can tell, but that is an optimization that will
require more work in haproxy (as the http layer and error handling
become more HTTP-version agnostic - currently many transaction-based
problems affect the entire H2 mux).


The easiest way to fix this problem is to make your backend server
keep-alive aware. Future haproxy (major) releases will likely handle
this case better.
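As a sketch of what a keep-alive-aware SSE backend could look like (the payload mirrors the example from this thread; the handler itself is illustrative, not from the thread): an HTTP/1.1 server that declares Transfer-Encoding: chunked, so every event is explicitly framed and the body has a proper terminator:

```python
import http.server

class SSEHandler(http.server.BaseHTTPRequestHandler):
    # HTTP/1.1 is required for persistent connections and chunked bodies;
    # BaseHTTPRequestHandler does not chunk for us, so we frame manually.
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        for _ in range(3):  # illustrative; a real SSE stream keeps emitting
            payload = b'data: {"a": "b"}\n\n'
            # one chunk: hex length, CRLF, payload, CRLF
            self.wfile.write(b"%x\r\n%s\r\n" % (len(payload), payload))
        # zero-length chunk: the body ends cleanly, the socket stays open
        self.wfile.write(b"0\r\n\r\n")

# usage sketch (would block):
# http.server.ThreadingHTTPServer(("127.0.0.1", 8080), SSEHandler).serve_forever()
```

A real SSE endpoint would keep emitting chunks rather than sending the terminating zero chunk; the point is that the chunked framing gives intermediaries an explicit body delimiter instead of relying on a connection close.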


Correct, Willy?


Regards,
Lukas



h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-21 Thread Pierre Cheynier
Hi list,

We observed a weird behavior yesterday when introducing h2 in a preproduction
environment: *the connection is closed by haproxy on both the server and
client side by immediately sending a FIN/ACK when using SSE
(text/event-stream)*.

Let me know if you see something obvious here, or if this is a candidate for a bug.

We have a service using SSE through text/event-stream content-type.

In HTTP/1.1 we have a normal stream as expected:
< HTTP/1.1 200 OK
< Content-Type: text/event-stream
data: {"a": "b"}

data: {"a": "b"}

data: {"a": "b"}
(...)

HAProxy on its side adds the `Connection: close` header.

When adding 'alpn h2,http/1.1' to the bind directive, we observe the following:
after the first 200 OK, the connection is closed by haproxy on both the server
and client side by sending a FIN/ACK.

It's obviously the same pattern as above on the LB<>backend side, since there
is a translation from h2 to http/1.1. On the client side it gives:

$ curl -vv (...)
(...)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: none
(...)
* ALPN, server accepted to use h2
* Server certificate:
(...)
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55d5e9228de0)
> GET /something HTTP/2
> Host: 
> User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 
> Firefox/62.0
> Accept: text/event-stream
> Accept-Language: en-US,en;q=0.5
> Accept-Encoding: gzip, deflate, br
> Referer: 
> Cookie: jwt=
> Connection: keep-alive
> Pragma: no-cache
> Cache-Control: no-cache
> 
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 200 
< content-type: text/event-stream
< 
* Connection #0 to host  left intact

So the connection is abruptly closed.
Here is the config:

$ haproxy -vv
HA-Proxy version 1.8.14-52e4d43 2018/09/20
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-unused-label -DTCP_USER_TIMEOUT=18
  OPTIONS = USE_LINUX_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 
USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace

$ sudo cat /etc/haproxy/haproxy.cfg | head -70
global
 (...)
 nbproc 1
 daemon
 stats socket /var/lib/haproxy/stats level admin mode 644 expose-fd listeners
 stats timeout 2m
 tune.bufsize 33792
 ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
 (...)
 hard-stop-after 5400s
 nbthread 6
 cpu-map auto:1/1-6 0-5

defaults
 mode http
 (...)
 timeout connect 10s
 timeout client 180s
 timeout server 180s
 timeout http-keep-alive 10s
 timeout http-request 10s
 timeout queue 1s
 timeout check 5s
 (...)
 option http-keep-alive
 option forwardfor except 127.0.0.0/8
 balance roundrobin
 maxconn 262134
 http-reuse safe
(...)

frontend fe_main
bind *:80 name http_1 process 1/1
bind *:80 name http_2 process 1/2
bind *:80 name http_3 process 1/3
bind *:443 name https_4 ssl crt /etc/haproxy/tls/fe_main process 1/4 alpn http/1.1,h2
bind *:443 name https_5 ssl crt /etc/haproxy/tls/fe_main process 1/5 alpn http/1.1,h2
bind *:443 name https_6 ssl crt /etc/haproxy/tls/fe_main process 1/6 alpn http/1.1,h2
(...)
# Nothing specific in the backend (no override of the aforementioned settings).

Any idea?

Best regards,

Pierre Cheynier