> -----Original Message-----
> From: Ruediger Pluem [mailto:[email protected]]
> Sent: Wednesday, September 30, 2015 22:02
> To: [email protected]
> Subject: Re: svn commit: r1705823 -
> /httpd/httpd/trunk/modules/ssl/ssl_engine_io.c
> 
> On 09/30/2015 08:30 PM, Ruediger Pluem wrote:
> >
> >
> > On 09/29/2015 11:42 AM, [email protected] wrote:
> >> Author: ylavic
> >> Date: Tue Sep 29 09:42:56 2015
> >> New Revision: 1705823
> >>
> >> URL: http://svn.apache.org/viewvc?rev=1705823&view=rev
> >> Log:
> >> mod_ssl: don't FLUSH output (blocking) on read.
> >> This defeats deferred write (and pipelining), eg. check_pipeline() is not
> >> expecting the pipe to be flushed under it.
> >> So let OpenSSL >= 0.9.8m issue the flush when necessary (earlier versions
> >> are known to not handle all the cases, so we keep flushing with those).
> >>
> >> Modified:
> >>     httpd/httpd/trunk/modules/ssl/ssl_engine_io.c
> >>
> >> Modified: httpd/httpd/trunk/modules/ssl/ssl_engine_io.c
> >> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/ssl/ssl_engine_io.c?rev=1705823&r1=1705822&r2=1705823&view=diff
> >> ==============================================================================
> >> --- httpd/httpd/trunk/modules/ssl/ssl_engine_io.c (original)
> >> +++ httpd/httpd/trunk/modules/ssl/ssl_engine_io.c Tue Sep 29 09:42:56 2015
> >> @@ -466,9 +466,10 @@ static int bio_filter_in_read(BIO *bio,
> >>      }
> >>
> >>      /* In theory, OpenSSL should flush as necessary, but it is known
> >> -     * not to do so correctly in some cases; see PR 46952.
> >> -     *
> >> -     * Historically, this flush call was performed only for an SSLv2
> >> +     * not to do so correctly in some cases (< 0.9.8m); see PR 46952.
> >> +     */
> >> +#if OPENSSL_VERSION_NUMBER < 0x0009080df
> >> +    /* Historically, this flush call was performed only for an SSLv2
> >>       * connection or for a proxy connection.  Calling _out_flush
> >>       * should be very cheap in cases where it is unnecessary (and no
> >>       * output is buffered) so the performance impact of doing it
> >>
> >>
> >>
> >
> >
> > This and the follow-up r1705826 do cause t/ssl/proxy.t to hang and ultimately fail with timeouts.
> > I am running on CentOS 6 with OpenSSL 1.0.1.
> > Currently I have no idea why. Just a heads up.
> >
> > Regards
> >
> > Rüdiger
> >
> 
> 
> It is hanging here:
> 
> #0  0x00007f1a810f0113 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
> #1  0x00007f1a81c6cdf7 in apr_wait_for_io_or_timeout (f=<value optimized out>, s=<value optimized out>, for_read=1) at support/unix/waitio.c:51
> #2  0x00007f1a81c66c3a in apr_socket_recv (sock=0x7f1a5400abb0, buf=0x7f1a5402a0e8 "", len=0x7f1a6e7814b0) at network_io/unix/sendrecv.c:87
> #3  0x00007f1a81c3f9e5 in socket_bucket_read (a=0x7f1a5400cbb8, str=0x7f1a6e7814b8, len=0x7f1a6e7814b0, block=<value optimized out>) at buckets/apr_buckets_socket.c:36
> #4  0x000000000044f30e in ap_core_input_filter (f=0x7f1a54013500, b=0x7f1a54013480, mode=<value optimized out>, block=APR_BLOCK_READ, readbytes=7) at core_filters.c:235
> #5  0x00007f1a79741f19 in logio_in_filter (f=<value optimized out>, bb=0x7f1a54013480, mode=<value optimized out>, block=<value optimized out>, readbytes=<value optimized out>) at mod_logio.c:165
> #6  0x00007f1a75e48339 in bio_filter_in_read (bio=0x7f1a540143f0, in=0x7f1a54019ce0 "", inlen=7) at ssl_engine_io.c:494
> #7  0x00007f1a758bf309 in BIO_read (b=0x7f1a540143f0, out=0x7f1a54019ce0, outl=7) at bio_lib.c:212
> #8  0x00007f1a75bf7873 in ssl23_read_bytes (s=0x7f1a54010a30, n=7) at s23_pkt.c:105
> #9  0x00007f1a75bf67be in ssl23_get_server_hello (s=0x7f1a54010a30) at s23_clnt.c:595
> #10 ssl23_connect (s=0x7f1a54010a30) at s23_clnt.c:226
> #11 0x00007f1a75e48b41 in ssl_io_filter_handshake (filter_ctx=0x7f1a5400f288) at ssl_engine_io.c:1162
> #12 0x00007f1a75e49a6b in ssl_io_filter_output (f=0x7f1a5400f2e0, bb=0x7f1a54008bf8) at ssl_engine_io.c:1702
> #13 0x00007f1a75e474c5 in ssl_io_filter_coalesce (f=0x7f1a5400f2b8, bb=0x7f1a54008bf8) at ssl_engine_io.c:1670
> #14 0x00007f1a77ece767 in ap_proxy_pass_brigade (bucket_alloc=<value optimized out>, r=0x7f1a54004980, p_conn=0x1ac1210, origin=0x7f1a5400ada0, bb=0x7f1a54008bf8, flush=<value optimized out>) at proxy_util.c:3640
> #15 0x00007f1a778a99ee in stream_reqbody_cl (r=0x7f1a54004980, worker=<value optimized out>, conf=0x1a3fd50, url=0x7f1a54006806 "https://localhost:8532/", proxyname=0x0, proxyport=35832) at mod_proxy_http.c:521
> #16 ap_proxy_http_request (r=0x7f1a54004980, worker=<value optimized out>, conf=0x1a3fd50, url=0x7f1a54006806 "https://localhost:8532/", proxyname=0x0, proxyport=35832) at mod_proxy_http.c:1007
> #17 proxy_http_handler (r=0x7f1a54004980, worker=<value optimized out>, conf=0x1a3fd50, url=0x7f1a54006806 "https://localhost:8532/", proxyname=0x0, proxyport=35832) at mod_proxy_http.c:2180
> #18 0x00007f1a77ec1002 in proxy_run_scheme_handler (r=0x7f1a54004980, worker=0x1a3ff58, conf=0x1a3fd50, url=0x7f1a54006806 "https://localhost:8532/", proxyhost=0x0, proxyport=0) at mod_proxy.c:2938
> #19 0x00007f1a77ec6793 in proxy_handler (r=0x7f1a54004980) at mod_proxy.c:1170
> #20 0x0000000000459f10 in ap_run_handler (r=0x7f1a54004980) at config.c:170
> #21 0x000000000045e0de in ap_invoke_handler (r=0x7f1a54004980) at config.c:433
> #22 0x00000000004722aa in ap_process_async_request (r=0x7f1a54004980) at http_request.c:338
> #23 0x000000000046e710 in ap_process_http_async_connection (c=0x7f1a68029578) at http_core.c:154
> #24 ap_process_http_connection (c=0x7f1a68029578) at http_core.c:248
> #25 0x0000000000464f80 in ap_run_process_connection (c=0x7f1a68029578) at connection.c:41
> #26 0x00007f1a73a30910 in process_socket (thd=0x18da030, dummy=<value optimized out>) at event.c:1137
> #27 worker_thread (thd=0x18da030, dummy=<value optimized out>) at event.c:2229
> #28 0x00007f1a81c6e298 in dummy_worker (opaque=0x18da030) at threadproc/unix/thread.c:145
> #29 0x00007f1a815d4a51 in start_thread (arg=0x7f1a6e782700) at pthread_create.c:301
> #30 0x00007f1a810f993d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
> 
> 
> And there is a HEAP bucket in the buffered_bb of the core output filter.
> I would guess that it is the Client Hello:
> 
> dump of brigade 0x7f1a54013db8
>    | type     (address)        | length | data addr      | contents               | rc
> --------------------------------------------------------------------------------
>  0 | HEAP     (0x7f1a5400ca78) | 273    | 0x7f1a5400cb18 | [~~~~~~~~~~~V~2~NO...] | 1
> 
> Content is:
> 
> x/70x 0x7f1a5400cb18
> 0x7f1a5400cb18: 0x00000001      0x00000000      0x540280d8      0x00007f1a
> 0x7f1a5400cb28: 0x00000111      0x00000000      0x0042b5a0      0x00000000
> 0x7f1a5400cb38: 0x00000000      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cb48: 0x00000000      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cb58: 0x00000000      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cb68: 0x00000000      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cb78: 0x00000000      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cb88: 0x00000000      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cb98: 0x000000a0      0x00000000      0x5400ca38      0x00007f1a
> 0x7f1a5400cba8: 0x5400ca10      0x00007f1a      0x00000000      0x00000000
> 0x7f1a5400cbb8: 0x54013e10      0x00007f1a      0x54013e10      0x00007f1a
> 0x7f1a5400cbc8: 0x81e7eec0      0x00007f1a      0xffffffff      0xffffffff
> 0x7f1a5400cbd8: 0xffffffff      0xffffffff      0x5400abb0      0x00007f1a
> 0x7f1a5400cbe8: 0x0042b5a0      0x00000000      0x5400ca38      0x00007f1a
> 0x7f1a5400cbf8: 0x00016411      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cc08: 0x00000000      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cc18: 0x00000000      0x00000000      0x00000000      0x00000000
> 0x7f1a5400cc28: 0x00000000      0x00000000
> 
> 
> 
> It looks like neither ssl23_client_hello nor ssl23_write_bytes causes
> BIO_flush to be called.

I dumped the wrong memory section for the HEAP bucket. The contents of the heap
bucket are actually (now from a different session):

(gdb) x/70x 0x7f03a801c0e8
0x7f03a801c0e8: 0x01010316      0x0100010c      0x56030308      0xcbee2e0d
0x7f03a801c0f8: 0x08b833ba      0xfc96ae2b      0xba2b9a9b      0xa67ea8d2
0x7f03a801c108: 0x0ef75ce9      0xb398fba7      0x004fe092      0x30c08a00
0x7f03a801c118: 0x28c02cc0      0x14c024c0      0xa3000ac0      0x6b009f00
0x7f03a801c128: 0x39006a00      0x88003800      0x32c08700      0x2ac02ec0
0x7f03a801c138: 0x0fc026c0      0x9d0005c0      0x35003d00      0x2fc08400
0x7f03a801c148: 0x27c02bc0      0x13c023c0      0xa20009c0      0x67009e00
0x7f03a801c158: 0x33004000      0x12c03200      0x450008c0      0x16004400
0x7f03a801c168: 0x31c01300      0x29c02dc0      0x0ec025c0      0x0dc004c0
0x7f03a801c178: 0x9c0003c0      0x2f003c00      0x0a004100      0x99009a00
0x7f03a801c188: 0x07009600      0x07c011c0      0x02c00cc0      0x04000500
0x7f03a801c198: 0x12001500      0xff000900      0x55000001      0x0e000000
0x7f03a801c1a8: 0x00000c00      0x636f6c09      0x6f686c61      0x0b007473
0x7f03a801c1b8: 0x00030400      0x0a000201      0x06000800      0x18001900
0x7f03a801c1c8: 0x23001700      0x0d000000      0x20002200      0x02060106
0x7f03a801c1d8: 0x01050306      0x03050205      0x02040104      0x01030304
0x7f03a801c1e8: 0x03030203      0x02020102      0x01010302      0x01000f00
0x7f03a801c1f8: 0x00000001      0x00000000

And this is indeed a client hello.

Regards

Rüdiger
