Re: release v1.9.0

2017-02-22 Thread Stefan Priebe - Profihost AG
On 22.02.2017 at 12:22, Yann Ylavic wrote:
> Hi Stefan,
> 
> On Wed, Feb 22, 2017 at 11:32 AM, Stefan Priebe - Profihost AG
>  wrote:
>>
>> @Yann how should i test? Vanilla 2.4.25 + MPM V7 + mod_http2 v1.9.1?
> 
> Yes, I think this is the right thing to do for now (no more patches than v7).
> 
>> Or
>> do i need V8 or something else?
> 
> Not ready yet, I'll propose it when that's the case if you can test it then.
> That's an mpm_event optimization (hopefully) only, v7 is good from
> correctness POV...

OK it's running. Will report back.

Greets,
Stefan

> Thanks for testing, still!
> 
> Regards,
> Yann.
> 


Re: svn commit: r1784056 - in /httpd/httpd/trunk/docs/manual/howto: public_html.html.es public_html.xml.es

2017-02-22 Thread Luis Gil de Bernabé
Thanks, I'll check it out.

On Wed, 22 Feb 2017 at 21:02, Jacob Champion  wrote:

> On 02/22/2017 11:50 AM, lgilbern...@apache.org wrote:
> > Author: lgilbernabe
> > Date: Wed Feb 22 19:50:05 2017
> > New Revision: 1784056
> >
> > URL: http://svn.apache.org/viewvc?rev=1784056&view=rev
> > Log:
> > adding missing xml file and rebuild
> >
> > Added:
> > httpd/httpd/trunk/docs/manual/howto/public_html.html.es
> > httpd/httpd/trunk/docs/manual/howto/public_html.xml.es
>
> Thanks! I think docs/manual/mod/directive-dict.xml.es might also be
> missing.
>
> --Jacob
>
-- 

Luis J. Gil de Bernabé Pfeiffer.


Re: httpd 2.4.25, mpm_event, ssl: segfaults

2017-02-22 Thread Niklas Edmundsson

On Wed, 22 Feb 2017, Jacob Champion wrote:


To make results less confusing, any specific patches/branch I should
test? My baseline is httpd-2.4.25 + httpd-2.4.25-deps
--with-included-apr FWIW.


2.4.25 is just fine. We'll have to make sure there's nothing substantially 
different about it performance-wise before we backport patches anyway, so 
it'd be good to start testing it now.


OK.


- The OpenSSL test server, writing from memory: 1.2 GiB/s
- httpd trunk with `EnableMMAP on` and serving from disk: 850 MiB/s
- httpd trunk with 'EnableMMAP off': 580 MiB/s
- httpd trunk with my no-mmap-64K-block file bucket: 810 MiB/s


At those speeds your results might be skewed by the latency of
processing 10 MiB GET:s.


Maybe, but keep in mind I care more about the difference between the numbers 
than the absolute throughput ceiling here. (In any case, I don't see 
significantly different numbers between 10 MiB and 1 GiB files. Remember, I'm 
testing via loopback.)


Ah, right.


Discard the results from the first warm-up
access and your results delivering from memory or disk (cache) shouldn't
differ.


Ah, but they *do*, as Yann pointed out earlier. We can't just deliver the 
disk cache to OpenSSL for encryption; it has to be copied into some 
addressable buffer somewhere. That seems to be a major reason for the mmap() 
advantage, compared to a naive read() solution that just reads into a small 
buffer over and over again.


(I am trying to set up Valgrind to confirm where the test server is spending 
most of its time, but it doesn't care for the large in-memory static buffer, 
or for OpenSSL's compressed debugging symbols, and crashes. :( )


Any joy with something simpler like gprof? (Caveat: haven't used it in 
ages, so I don't know if it's even applicable nowadays.)


Numbers on the "memcopy penalty" would indeed be interesting, 
especially any variation when the block size differs.



As I said, our live server does 600 MB/s aes-128-gcm and can deliver 300
MB/s https without mmap. That's only a factor 2 difference between
aes-128-gcm speed and delivered speed.

Your results above are almost a factor 4 off, so something's fishy :-)


Well, I can only report my methodology and numbers -- whether the numbers are 
actually meaningful has yet to be determined. ;D More testers are welcome!


:-)

I did some repeated tests and my initial results were actually a bit 
on the low side:


Server CPU is an Intel E5606 (1st gen aes offload), openssl speed -evp
says:

The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-gcm     208536.05k   452980.05k   567523.33k   607578.11k   619192.32k

Single-stream https over a 10 Gbps link with 3 ms RTT (courtesy of a 
routing SNAFU: traffic to stuff in the neighboring building takes the 
"shortcut" through a town 300 km away ;).


Using wget -O /dev/null as a client, on a host with Intel E5-2630 CPU 
(960-ish MB/s aes-128-gcm on 8k blocks).


http (sendfile): 1.07 GB/s (repeatedly)

httpd (no mmap): 370-380 MB/s

openssl s_server: 330-340 MB/s

So httpd isn't beaten by the naive openssl s_server approach, at least 
;-)



Going off on a tangent here:

For those of you who actually know how the ssl stuff really works, is 
it possible to get multiple threads involved in doing the encryption, 
or do you need the results from the previous block in order to do the 
next one? Yes, I know this wouldn't make sense for most real setups 
but for a student computer club with old hardware and good 
connectivity this is a real problem ;-)


On the other hand, you would need it to do 100 Gbps single-stream 
https even on latest CPUs 8-)



/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se  | ni...@acc.umu.se
---
 There may be a correlation between humor and sex. - Data
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


Re: [RFC] ?

2017-02-22 Thread Yann Ylavic
On Wed, Feb 22, 2017 at 11:47 AM, Joe Orton  wrote:
>
> It actually only works like:
>
>
>
> which is a bit ugly. Quoting the argument is a syntax error. Not sure
> how best to handle this.
>
> (b) for  match both "foo" and "
> (In core.c the start_if* code is mostly common across all the functions
> and I think can be factored out, so it's possible to make core.c
> simpler/smaller net of even two more container directives.)

+1


Regards,
Yann.


Re: svn commit: r1784056 - in /httpd/httpd/trunk/docs/manual/howto: public_html.html.es public_html.xml.es

2017-02-22 Thread Jacob Champion

On 02/22/2017 11:50 AM, lgilbern...@apache.org wrote:

Author: lgilbernabe
Date: Wed Feb 22 19:50:05 2017
New Revision: 1784056

URL: http://svn.apache.org/viewvc?rev=1784056&view=rev
Log:
adding missing xml file and rebuild

Added:
httpd/httpd/trunk/docs/manual/howto/public_html.html.es
httpd/httpd/trunk/docs/manual/howto/public_html.xml.es


Thanks! I think docs/manual/mod/directive-dict.xml.es might also be missing.

--Jacob


Re: httpd 2.4.25, mpm_event, ssl: segfaults

2017-02-22 Thread Daniel Lescohier
On Wed, Feb 22, 2017 at 2:42 PM, Jacob Champion 
wrote:

> Ah, but they *do*, as Yann pointed out earlier. We can't just deliver the
> disk cache to OpenSSL for encryption; it has to be copied into some
> addressable buffer somewhere. That seems to be a major reason for the
> mmap() advantage, compared to a naive read() solution that just reads into
> a small buffer over and over again.
>


IOW:
read():Three copies: copy from filesystem cache to httpd read() buffer to
encrypted-data buffer to kernel socket buffer.
mmap(): Two copies: filesystem page already mapped into httpd, so just copy
from filesystem (cached) page to encrypted-data buffer to kernel socket
buffer.
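
For illustration, a minimal standalone sketch of the two strategies --
hypothetical code, not httpd's or mod_ssl's actual filter; it assumes an
already-established SSL * and an open file descriptor:

#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <openssl/ssl.h>

/* read() path: page cache -> buf -> TLS record -> kernel socket buffer */
static void serve_read(SSL *ssl, int fd)
{
    char buf[64 * 1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        SSL_write(ssl, buf, (int)n);
}

/* mmap() path: mapped page -> TLS record -> kernel socket buffer,
 * i.e. one copy fewer per block */
static void serve_mmap(SSL *ssl, int fd)
{
    struct stat st;
    if (fstat(fd, &st) == 0 && st.st_size > 0) {
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p != MAP_FAILED) {
            /* real code would loop and cap each write at INT_MAX */
            SSL_write(ssl, p, (int)st.st_size);
            munmap(p, st.st_size);
        }
    }
}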


Re: httpd 2.4.25, mpm_event, ssl: segfaults

2017-02-22 Thread Jacob Champion

On 02/22/2017 10:34 AM, Niklas Edmundsson wrote:

To make results less confusing, any specific patches/branch I should
test? My baseline is httpd-2.4.25 + httpd-2.4.25-deps
--with-included-apr FWIW.


2.4.25 is just fine. We'll have to make sure there's nothing 
substantially different about it performance-wise before we backport 
patches anyway, so it'd be good to start testing it now.



- The OpenSSL test server, writing from memory: 1.2 GiB/s
- httpd trunk with `EnableMMAP on` and serving from disk: 850 MiB/s
- httpd trunk with 'EnableMMAP off': 580 MiB/s
- httpd trunk with my no-mmap-64K-block file bucket: 810 MiB/s


At those speeds your results might be skewed by the latency of
processing 10 MiB GET:s.


Maybe, but keep in mind I care more about the difference between the 
numbers than the absolute throughput ceiling here. (In any case, I don't 
see significantly different numbers between 10 MiB and 1 GiB files. 
Remember, I'm testing via loopback.)



Discard the results from the first warm-up
access and your results delivering from memory or disk (cache) shouldn't
differ.


Ah, but they *do*, as Yann pointed out earlier. We can't just deliver 
the disk cache to OpenSSL for encryption; it has to be copied into some 
addressable buffer somewhere. That seems to be a major reason for the 
mmap() advantage, compared to a naive read() solution that just reads 
into a small buffer over and over again.


(I am trying to set up Valgrind to confirm where the test server is 
spending most of its time, but it doesn't care for the large in-memory 
static buffer, or for OpenSSL's compressed debugging symbols, and 
crashes. :( )



As I said, our live server does 600 MB/s aes-128-gcm and can deliver 300
MB/s https without mmap. That's only a factor 2 difference between
aes-128-gcm speed and delivered speed.

Your results above are almost a factor 4 off, so something's fishy :-)


Well, I can only report my methodology and numbers -- whether the 
numbers are actually meaningful has yet to be determined. ;D More 
testers are welcome!


--Jacob



Re: httpd 2.4.25, mpm_event, ssl: segfaults

2017-02-22 Thread Niklas Edmundsson

On Tue, 21 Feb 2017, Jacob Champion wrote:


Is there interest in more real-life numbers with increasing
FILE_BUCKET_BUFF_SIZE or are you already on it?


Yes please! My laptop probably isn't representative of most servers; it can 
do nearly 3 GB/s AES-128-GCM. The more machines we test, the better.


To make results less confusing, any specific patches/branch I should 
test? My baseline is httpd-2.4.25 + httpd-2.4.25-deps 
--with-included-apr FWIW.



I have an older server
that can do 600 MB/s aes-128-gcm per core, but is only able to deliver
300 MB/s https single-stream via its 10 Gbps interface. My guess is that
too-small blocks cause CPU cycles to be spent on housekeeping rather than
on delivering data...


Right. To give you an idea of where I am in testing at the moment: I have a 
basic test server written with OpenSSL. It sends a 10 MiB response body from 
memory (*not* from disk) for every GET it receives. I also have a copy of 
httpd trunk that's serving an actual 10 MiB file from disk.


My test call is just `h2load --h1 -n 100 https://localhost/`, which should 
send 100 requests over a single TLS connection. The ciphersuite selected for 
all test cases is ECDHE-RSA-AES256-GCM-SHA384. For reference, I can do 
in-memory AES-256-GCM at 2.1 GiB/s.
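
(For reference on how such an in-memory AES-GCM ceiling can be measured: a
throwaway micro-benchmark sketch with a dummy key/IV -- an assumption for
illustration, not the exact tool used for the 2.1 GiB/s figure above.)

#include <openssl/evp.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    static unsigned char in[64 * 1024], out[64 * 1024 + 16];
    unsigned char key[32] = {0}, iv[12] = {0};
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    struct timespec t0, t1;
    long i, iters = 100000;              /* ~6.25 GiB of plaintext */
    int outl;
    double secs;

    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < iters; i++)
        EVP_EncryptUpdate(ctx, out, &outl, in, (int)sizeof in);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.2f GiB/s\n", iters * (double)sizeof in / secs / (1 << 30));
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}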


- The OpenSSL test server, writing from memory: 1.2 GiB/s
- httpd trunk with `EnableMMAP on` and serving from disk: 850 MiB/s
- httpd trunk with 'EnableMMAP off': 580 MiB/s
- httpd trunk with my no-mmap-64K-block file bucket: 810 MiB/s


At those speeds your results might be skewed by the latency of 
processing 10 MiB GET:s.


I'd go for multiple GiB files (whatever you can cache in RAM) and 
deliver files from disk. Discard the results from the first warm-up 
access and your results delivering from memory or disk (cache) 
shouldn't differ.


So just bumping the block size gets me almost to the speed of mmap, without 
the downside of a potential SIGBUS. Meanwhile, the OpenSSL test server seems 
to suggest a performance ceiling about 50% above where we are now.
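
(Aside: a minimal sketch of that SIGBUS hazard -- hypothetical standalone
code with a made-up path, not httpd's. If the file shrinks after it has
been mapped, touching a page past the new EOF raises SIGBUS, whereas a
read() loop would simply see a short read and stop. Error handling omitted
for brevity.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/served-file", O_RDONLY);   /* hypothetical path */
    struct stat st;
    char *p;

    fstat(fd, &st);
    p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    /* ... if another process truncates the file at this point ... */
    printf("%c\n", p[st.st_size - 1]);  /* page no longer backed by the file
                                         * -> the process gets SIGBUS here */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}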


I'm guessing that if you redo the tests with a bigger file you should 
see even more potential.


As I said, our live server does 600 MB/s aes-128-gcm and can deliver 
300 MB/s https without mmap. That's only a factor 2 difference 
between aes-128-gcm speed and delivered speed.


Your results above are almost a factor 4 off, so something's fishy :-)

Even with the test server serving responses from memory, that seems like 
plenty of room to grow. I'm working on a version of the test server that 
serves files from disk so that I'm not comparing apples to oranges, but my 
prior testing leads me to believe that disk access is not the limiting factor 
on my machine.


Hmm. Perhaps I should just do a quick test with openssl s_server, just 
to see what numbers I get...



/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se  | ni...@acc.umu.se
---
 BETA testing is hazardous to your health.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


Re: httpd 2.4.25, mpm_event, ssl: segfaults

2017-02-22 Thread Jacob Champion

On 02/22/2017 12:00 AM, Stefan Eissing wrote:

Just so I do not misunderstand:

you increased BUCKET_BUFF_SIZE in APR from 8000 to 64K? That is what you are 
testing?


Essentially, yes, *and* turn off mmap and sendfile. My hope is to 
disable the mmap-optimization by default while still improving overall 
performance for most users.


Technically, Yann's patch doesn't redefine APR_BUCKET_BUFF_SIZE, it just 
defines a new buffer size for use with the file bucket. It's a little 
less than 64K, I assume to make room for an allocation header:


#define FILE_BUCKET_BUFF_SIZE (64 * 1024 - 64)
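
As I understand the APR side of it, that read size is simply how much of a
FILE bucket gets materialized per apr_bucket_read(): the bucket morphs into
a HEAP bucket holding one buffer's worth (or an MMAP bucket when mmap is
allowed), and a new FILE bucket is inserted after it for the remainder. A
hypothetical consumer, just to show where the knob bites (not mod_ssl's
actual output filter):

#include <apr_buckets.h>
#include <openssl/ssl.h>

static apr_status_t encrypt_brigade(SSL *ssl, apr_bucket_brigade *bb)
{
    apr_bucket *b;
    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        const char *data;
        apr_size_t len;
        apr_status_t rv;

        if (APR_BUCKET_IS_METADATA(b))
            continue;
        /* may split the FILE bucket; at most one read-buffer's worth back */
        rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
        if (rv != APR_SUCCESS)
            return rv;
        if (len > 0)
            SSL_write(ssl, data, (int)len);
    }
    return APR_SUCCESS;
}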

--Jacob


Re: Expr's lists evaluated in a string context (was: [users@httpd] mod_lua and subprocess_env)

2017-02-22 Thread Eric Covener
On Wed, Feb 22, 2017 at 10:51 AM, Yann Ylavic  wrote:
>
> For example, the attached patch uses the separator ", " (quite HTTP
> field inspired), but it could be a json string or whatever...
> We could also have an explicit tostring/tojson() function which would
> stringify anything as argument.
>
> Or yet more operators on lists, like list_empty(), list_first(),
> list_last(), list_nth(), list_match(, ) (returning
> another list of matching entries), ... you name it.
>
> Working on anything from a certificate looks very useful at least.
>
> WDYT?

+1.

What does the list evaluate to prior to the fix? Seems like regression
risk is low (sophisticated expression with lists that don't ever
return what you'd expect).  Otherwise I'd suggest we give users e.g.
list_join() and just add a few examples.

-- 
Eric Covener
cove...@gmail.com


Expr's lists evaluated in a string context (was: [users@httpd] mod_lua and subprocess_env)

2017-02-22 Thread Yann Ylavic
> On Tue, Feb 21, 2017 at 6:32 PM, Yann Ylavic  wrote:
>>
>> Header set Client-SAN "expr=%{PeerExtList:2.5.29.17}"

This currently fails because list functions are not recognized in a
string context.

For now, lists can be either expressed with the syntax "{ ,
, ... }", with  being itself something powerful, or
obtained from mod_ssl's PeerExtList() function (grab anything from
a peer certificate).

For the latter case (or for future functions), it could be useful to
be able to work on such strings (e.g. with extracting regexes).
So I wonder if we could return the string elements separated by
something in the case of lists evaluated in a string context.

For example, the attached patch uses the separator ", " (quite HTTP
field inspired), but it could be a json string or whatever...
We could also have an explicit tostring/tojson() function which would
stringify anything as argument.

Or yet more operators on lists, like list_empty(), list_first(),
list_last(), list_nth(), list_match(, ) (returning
another list of matching entries), ... you name it.

Working on anything from a certificate looks very useful at least.

WDYT?
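
Side note on the joining itself: the iovec construction in the patch below
is what buys the two-character ", " separator. If a single separator
character were enough, APR's existing apr_array_pstrcat() would already do
the join -- a one-line sketch, assuming the array holds const char *
entries as in the patch:

/* join all list entries into one pool-allocated string, ','-separated */
const char *joined = apr_array_pstrcat(ctx->p, array, ',');
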
Index: server/util_expr_eval.c
===
--- server/util_expr_eval.c	(revision 1783852)
+++ server/util_expr_eval.c	(working copy)
@@ -50,6 +50,9 @@ AP_IMPLEMENT_HOOK_RUN_FIRST(int, expr_lookup, (ap_
 static const char *ap_expr_eval_string_func(ap_expr_eval_ctx_t *ctx,
 const ap_expr_t *info,
 const ap_expr_t *args);
+static apr_array_header_t *ap_expr_eval_list_func(ap_expr_eval_ctx_t *ctx,
+const ap_expr_t *info,
+const ap_expr_t *args);
 static const char *ap_expr_eval_re_backref(ap_expr_eval_ctx_t *ctx,
unsigned int n);
 static const char *ap_expr_eval_var(ap_expr_eval_ctx_t *ctx,
@@ -80,6 +83,8 @@ static int inc_rec(ap_expr_eval_ctx_t *ctx)
 return 1;
 }
 
+#define AP_EXPR_MAX_LIST_STRINGS 500
+
 static const char *ap_expr_eval_word(ap_expr_eval_ctx_t *ctx,
  const ap_expr_t *node)
 {
@@ -161,6 +166,35 @@ static const char *ap_expr_eval_word(ap_expr_eval_
 result = ap_expr_eval_string_func(ctx, info, args);
 break;
 }
+case op_ListFuncCall: {
+const ap_expr_t *info = node->node_arg1;
+const ap_expr_t *args = node->node_arg2;
+apr_array_header_t *array = ap_expr_eval_list_func(ctx, info, args);
+if (array && array->nelts > 0) {
+struct iovec *vec;
+int n = array->nelts, i = 0;
+/* sanity check */
+if (n > AP_EXPR_MAX_LIST_STRINGS) {
+n = AP_EXPR_MAX_LIST_STRINGS;
+}
+/* all entries (but last) separated by ", " */
+n = (n * 2) - 1;
+vec = apr_palloc(ctx->p, n * sizeof(struct iovec));
+for (;;) {
+const char *s = APR_ARRAY_IDX(array, i, const char *);
+vec[i].iov_base = (void *)s;
+vec[i].iov_len = strlen(s);
+if (++i >= n) {
+break;
+}
+vec[i].iov_base = (void *)", ";
+vec[i].iov_len = 2;
+++i;
+}
+result = apr_pstrcatv(ctx->p, vec, n, NULL);
+}
+break;
+}
 case op_RegexBackref: {
 const unsigned int *np = node->node_arg1;
 result = ap_expr_eval_re_backref(ctx, *np);
@@ -213,6 +247,19 @@ static const char *ap_expr_eval_string_func(ap_exp
 return (*func)(ctx, data, ap_expr_eval_word(ctx, arg));
 }
 
+static apr_array_header_t *ap_expr_eval_list_func(ap_expr_eval_ctx_t *ctx,
+const ap_expr_t *info,
+const ap_expr_t *arg)
+{
+ap_expr_list_func_t *func = (ap_expr_list_func_t *)info->node_arg1;
+const void *data = info->node_arg2;
+
+AP_DEBUG_ASSERT(info->node_op == op_ListFuncInfo);
+AP_DEBUG_ASSERT(func != NULL);
+AP_DEBUG_ASSERT(data != NULL);
+return (*func)(ctx, data, ap_expr_eval_word(ctx, arg));
+}
+
 static int intstrcmp(const char *s1, const char *s2)
 {
 apr_int64_t i1 = apr_atoi64(s1);
@@ -268,13 +315,8 @@ static int ap_expr_eval_comp(ap_expr_eval_ctx_t *c
 }
 else if (e2->node_op == op_ListFuncCall) {
 const ap_expr_t *info = e2->node_arg1;
-const ap_expr_t *arg = e2->node_arg2;
-ap_expr_list_func_t *func = (ap_expr_list_func_t *)info->node_arg1;
-apr_array_header_t *haystack;
-
-AP_DEBUG_ASSERT(func != NULL);
-AP_DEBUG_ASSERT(info->node_op == op_ListFuncInfo);
-haystack = (*func)(ctx, info->node_arg2, 

Re: [RFC] ?

2017-02-22 Thread Eric Covener
On Wed, Feb 22, 2017 at 8:43 AM, William A Rowe Jr  wrote:
> I was more concerned about our support for ...
> I'd really like to see mod_version go away in 2.next and force
> the availability of that feature so that .conf authors are assured
> of it's presence moving forwards.

+1

-- 
Eric Covener
cove...@gmail.com


Re: [RFC] ?

2017-02-22 Thread William A Rowe Jr
On Wed, Feb 22, 2017 at 1:04 AM, Nick Kew  wrote:
> On Tue, 2017-02-21 at 21:58 +, Joe Orton wrote:
>
>> Any reason  is a bad idea, so we can do that more cleanly
>> (... in a couple of decades time)?
>
> One reason it might be a very bad idea: user confusion!
>
> I'm thinking of the track record of  here.
> Our support fora are full of users who have seen it in
> default/shipped config and docs, and treat it as some
> magic incantation they need.  They end up with a problem
> "why doesn't Foo work?", which they bring to our fora
> after many hours of tearing their hair.  The usual answer:
> Get rid of all the  crap, to stop suppressing
> the error message you need!

That speaks to our docs/conf/* tree, right? Not the existence
of the  directive ... I'm guessing you don't support
eliminating that feature in the future?

I was more concerned about our support for ...
I'd really like to see mod_version go away in 2.next and force
the availability of that feature so that .conf authors are assured
of it's presence moving forwards.

An issue is that this is needed to let users and devs toggle specific
tests based on patch level. Right now, testing requires a backport,
which doesn't vary by the httpd version, and only rarely varies
by .  This proposal makes introducing the tests
upon adding a feature to trunk painless; the test is accessible
from the moment the directive is backported.

If you want to propose an " considered harmful"
caution in the docs (which we can borrow or point to for the
 docs) ... that could be helpful. It often indicates
that the user's conf was not thought out, and that it is subject
to unexpected behavior changes if a module is loaded or
commented out. That doesn't mean these serve no purpose.


Re: mod_proxy_http2 sni ?

2017-02-22 Thread Steffen


Picking up the good host now.

With a download, Chrome says "starting" in the status bar and stays there.

Curl hanging:


  0 14.4M    0 32767    0     0  27237      0  0:09:17  0:00:01  0:09:16 27237
  0 14.4M    0 32767    0     0  14873      0  0:17:01  0:00:02  0:16:59 14873
  0 14.4M    0 32767    0     0  10230      0  0:24:44  0:00:03  0:24:41 10230
  0 14.4M    0 32767    0     0   7796      0  0:32:28  0:00:04  0:32:24  7796
  0 14.4M    0 32767    0     0   6297      0  0:40:12  0:00:05  0:40:07  6297
  0 14.4M    0 32767    0     0   5282      0  0:47:55  0:00:06  0:47:49     0
  0 14.4M    0 32767    0     0   4539      0  0:55:46  0:00:07  0:55:39     0
  0 14.4M    0 32767    0     0   3987      0  1:03:29  0:00:08  1:03:21     0
  0 14.4M    0 32767    0     0   3554      0  1:11:13  0:00:09  1:11:04     0
  0 14.4M    0 32767    0     0   3206      0  1:18:57  0:00:10  1:18:47     0
  0 14.4M    0 32767    0     0   2920      0  1:26:41  0:00:11  1:26:30     0


Looks like the issue is that the front is h2 and the back is h2c.


On Wednesday 22/02/2017 at 11:30, Stefan Eissing  wrote:

You can try v1.9.1 now to see if it works as needed.



On 17.02.2017 at 16:11, Steffen  wrote:

Looks like the same, is not looking for the host as with 1.1

It is on my wish list for 2.4.26





On 16 Feb 2017 at 11:38, Stefan Eissing  wrote:


Is this the same as https://github.com/icing/mod_h2/issues/124 ?

It seems that the ProxyPreserveHost is not (correctly) implemented.



On 16.02.2017 at 10:42, Steffen  wrote:


Have an Apache ssl only in front of an Apache on port 80 with several 
vhosts.


In front have:


ProtocolsHonorOrder On
Protocols h2 http/1.1
LoadModule http2_module modules/mod_http2.so



ProxyPass / http://127.0.0.1:80/
ProxyPassReverse / http://127.0.0.1:80/


In backend have:


ProtocolsHonorOrder On
Protocols h2c http/1.1
LoadModule http2_module modules/mod_http2.so

This is working great and with all the vhosts.


When I add/change the front to:


ProtocolsHonorOrder On
Protocols h2 http/1.1
LoadModule http2_module modules/mod_http2.so
LoadModule proxy_http2_module modules/mod_proxy_http2.so


ProxyPass / h2c://127.0.0.1:80/
ProxyPassReverse / h2c://127.0.0.1:80/


This is not working as expected, all is going to the default/first 
vhost.


a log line from the backend gives in all cases not found.

default 127.0.0.1 - - [16/Feb/2017:10:22:00 +0100] "GET /index.php 
HTTP/2.0" 404 207 ...



Cheers,

Steffenal




Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
http://www.greenbytes.de





Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
http://www.greenbytes.de





Re: release v1.9.0

2017-02-22 Thread Yann Ylavic
Hi Stefan,

On Wed, Feb 22, 2017 at 11:32 AM, Stefan Priebe - Profihost AG
 wrote:
>
> @Yann how should i test? Vanilla 2.4.25 + MPM V7 + mod_http2 v1.9.1?

Yes, I think this is the right thing to do for now (no more patches than v7).

> Or
> do i need V8 or something else?

Not ready yet, I'll propose it when that's the case if you can test it then.
That's an mpm_event optimization (hopefully) only, v7 is good from
correctness POV...

Thanks for testing, still!

Regards,
Yann.


Re: [RFC] ?

2017-02-22 Thread Joe Orton
On Tue, Feb 21, 2017 at 02:28:52PM -0800, Jacob Champion wrote:
> I haven't tried your patch yet, but from inspection it looks like you'd have
> to do something like this if you're looking for a :
> 
> 
> ...
> 
> (Note the missing closing angle bracket in the argument.) Assuming I've read
> that correctly, should we add some sugar to allow "" to be fully
> bracketed in the argument?

It actually only works like:

   

which is a bit ugly. Quoting the argument is a syntax error. Not sure 
how best to handle this.

(a) ignore the problem, i.e. allow above syntax
(b) for  match both "foo" and "

Re: release v1.9.0

2017-02-22 Thread Stefan Priebe - Profihost AG
Hi Stefan,
  Hi Yann,

thanks for v1.9.1 i'm happy to test.

@Yann how should i test? Vanilla 2.4.25 + MPM V7 + mod_http2 v1.9.1? Or
do i need V8 or something else?

Greets,
Stefan

On 22.02.2017 at 11:31, Stefan Eissing wrote:
> v1.9.1 is out. Please test at your leisure.
> 
>> On 21.02.2017 at 09:40, Stefan Priebe - Profihost AG wrote:
>>
>> Hi Yann,
>>
>> On 20.02.2017 at 16:38, Yann Ylavic wrote:
>>> On Wed, Feb 15, 2017 at 8:53 PM, Stefan Priebe - Profihost AG
>>>  wrote:

 still no segfaults.
>>>
>>> Great!
>>>

 @Yann
 Are those patches (the addon on top of v7) and the one on top of mod_ssl
 still correct / needed?
>>>
>>> I think so, but maybe I'm a bit lost (see below)...
>>>

 On 15.02.2017 at 12:45, Stefan Priebe - Profihost AG wrote:
>
> On 15.02.2017 at 12:19, Yann Ylavic wrote:
>>
>> Is this with or without the mpm_event's wakeup and/or allocator patches?
>
> it's with the mpm_event_listener_wakeup_bug57399_V7 +
>>>
>>> Does this includes any change besides v7 from bugzilla?
>>
>> Yes but just the ones mentioned below. I think i'll wait for v1.9.1 +
>> MPM v8 which may include your patch for mod_http2 as well? Stefan?
>>
>> Stefan
>>
>>>
>>> Also finally... I really wish we had something like v6 in mpm_event,
>>> these locks around pollset operations seem really unnecessary to me
>>> (and likely not good performance wise).
>>> I think the (very unlikely) race mentioned in
>>> https://svn.apache.org/r1779354 could be addressed in the listener
>>> itself (while processing the queues, lock held) rather than every
>>> worker.
>>>
>>> If you could try the v8 I'll try to propose soon, it would be really
>>> nice of you (as usual ;)
>>>
>
> --- a/build/httpd/server/mpm/event/event.c  (revision 1776076)
> +++ b/build/httpd/server/mpm/event/event.c  (working copy)
>>>
>>> This one is needed I think, I was waiting for your feedbacks since it
>>> mainly affects http2.
>>> Everything looking good, I just committed it to trunk (r1783755), the
>>> final patch would be [1].
>>>
>>> I also committed the corresponding changes in mod_http2 (r1783756)
>>> which don't seem to be in v1.9.0, so you may need [2] and [3] too.
>>>
>
> Index: a/build/httpd/modules/ssl/ssl_engine_io.c
> ===
> --- a/build/httpd/modules/ssl/ssl_engine_io.c (revision 1781324)
> +++ b/build/httpd/modules/ssl/ssl_engine_io.c (working copy)
>>>
>>> This one is in trunk already (r1781582), but without this change:
>>>
> -if (APR_BUCKET_IS_METADATA(bucket)) {
> +if (APR_BUCKET_IS_METADATA(bucket) || !filter_ctx->pssl) {
>>>
>>> So I'd suggest to use [4] instead.
>>> No harm, though, this case cannot happen in current httpd, but as
>>> discussed in another thread we should handle it another way.
>>>
>>>
>>> To conclude, I think you should be using: httpd-2.4.25 +
>>> mod_http-v1.9.0 + PR57399-v7.patch + [1] + [2] + [3] + [4].
>>>
>>> Other than PR57399-v7, they are all in trunk now, so hopefully it will
>>> be easier to talk about them (e.g. with revision number).
>>>
>>> Regards,
>>> Yann.
>>>
>>>
>>> [1] 
>>> http://svn.apache.org/viewvc/httpd/httpd/trunk/server/mpm/event/event.c?r1=1783755=1783754=1783755=patch
>>> (from r1783755)
>>> [2] 
>>> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_mplx.c?rev=1783756=1783755=1783756=diff
>>> (from r1783756)
>>> [3] 
>>> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_conn.c?rev=1783756=1783755=1783756=diff
>>> (from r1783756)
>>> [4] 
>>> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/ssl/ssl_engine_io.c?r1=1781581=1781582=1781582=patch
>>> (from r1781582).
>>>
>>> PS: I could not find a way for viewvc URLs above to express a single
>>> diff for a whole revision change (e.g. [2] and [3] above are two files
>>> changed in the same commit...).
>>> With the svn client, it would be simply these three diffs:
>>> [svn.1] svn diff -r 1783754:1783755
>>> http://svn.apache.org/repos/asf/httpd/httpd/trunk/
>>> [svn.2-3] svn diff -r 1783755:1783756
>>> http://svn.apache.org/repos/asf/httpd/httpd/trunk/
>>> [svn.4] svn diff -r 1781581:1781582
>>> http://svn.apache.org/repos/asf/httpd/httpd/trunk/
> 
> Stefan Eissing
> 
> bytes GmbH
> Hafenstrasse 16
> 48155 Münster
> www.greenbytes.de
> 


Re: release v1.9.0

2017-02-22 Thread Stefan Eissing
v1.9.1 is out. Please test at your leisure.

> On 21.02.2017 at 09:40, Stefan Priebe - Profihost AG wrote:
> 
> Hi Yann,
> 
> On 20.02.2017 at 16:38, Yann Ylavic wrote:
>> On Wed, Feb 15, 2017 at 8:53 PM, Stefan Priebe - Profihost AG
>>  wrote:
>>> 
>>> still no segfaults.
>> 
>> Great!
>> 
>>> 
>>> @Yann
>>> Are those patches (the addon on top of v7) and the one on top of mod_ssl
>>> still correct / needed?
>> 
>> I think so, but maybe I'm a bit lost (see below)...
>> 
>>> 
>>> On 15.02.2017 at 12:45, Stefan Priebe - Profihost AG wrote:
 
 On 15.02.2017 at 12:19, Yann Ylavic wrote:
> 
> Is this with or without the mpm_event's wakeup and/or allocator patches?
 
 it's with the mpm_event_listener_wakeup_bug57399_V7 +
>> 
>> Does this includes any change besides v7 from bugzilla?
> 
> Yes but just the ones mentioned below. I think i'll wait for v1.9.1 +
> MPM v8 which may include your patch for mod_http2 as well? Stefan?
> 
> Stefan
> 
>> 
>> Also finally... I really wish we had something like v6 in mpm_event,
>> these locks around pollset operations seem really unnecessary to me
>> (and likely not good performance wise).
>> I think the (very unlikely) race mentioned in
>> https://svn.apache.org/r1779354 could be addressed in the listener
>> itself (while processing the queues, lock held) rather than every
>> worker.
>> 
>> If you could try the v8 I'll try to propose soon, it would be really
>> nice of you (as usual ;)
>> 
 
 --- a/build/httpd/server/mpm/event/event.c  (revision 1776076)
 +++ b/build/httpd/server/mpm/event/event.c  (working copy)
>> 
>> This one is needed I think, I was waiting for your feedbacks since it
>> mainly affects http2.
>> Everything looking good, I just committed it to trunk (r1783755), the
>> final patch would be [1].
>> 
>> I also committed the corresponding changes in mod_http2 (r1783756)
>> which don't seem to be in v1.9.0, so you may need [2] and [3] too.
>> 
 
 Index: a/build/httpd/modules/ssl/ssl_engine_io.c
 ===
 --- a/build/httpd/modules/ssl/ssl_engine_io.c (revision 1781324)
 +++ b/build/httpd/modules/ssl/ssl_engine_io.c (working copy)
>> 
>> This one is in trunk already (r1781582), but without this change:
>> 
 -if (APR_BUCKET_IS_METADATA(bucket)) {
 +if (APR_BUCKET_IS_METADATA(bucket) || !filter_ctx->pssl) {
>> 
>> So I'd suggest to use [4] instead.
>> No harm, though, this case cannot happen in current httpd, but as
>> discussed in another thread we should handle it another way.
>> 
>> 
>> To conclude, I think you should be using: httpd-2.4.25 +
>> mod_http-v1.9.0 + PR57399-v7.patch + [1] + [2] + [3] + [4].
>> 
>> Other than PR57399-v7, they are all in trunk now, so hopefully it will
>> be easier to talk about them (e.g. with revision number).
>> 
>> Regards,
>> Yann.
>> 
>> 
>> [1] 
>> http://svn.apache.org/viewvc/httpd/httpd/trunk/server/mpm/event/event.c?r1=1783755=1783754=1783755=patch
>> (from r1783755)
>> [2] 
>> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_mplx.c?rev=1783756=1783755=1783756=diff
>> (from r1783756)
>> [3] 
>> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/h2_conn.c?rev=1783756=1783755=1783756=diff
>> (from r1783756)
>> [4] 
>> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/ssl/ssl_engine_io.c?r1=1781581=1781582=1781582=patch
>> (from r1781582).
>> 
>> PS: I could not find a way for viewvc URLs above to express a single
>> diff for a whole revision change (e.g. [2] and [3] above are two files
>> changed in the same commit...).
>> With the svn client, it would be simply these three diffs:
>> [svn.1] svn diff -r 1783754:1783755
>> http://svn.apache.org/repos/asf/httpd/httpd/trunk/
>> [svn.2-3] svn diff -r 1783755:1783756
>> http://svn.apache.org/repos/asf/httpd/httpd/trunk/
>> [svn.4] svn diff -r 1781581:1781582
>> http://svn.apache.org/repos/asf/httpd/httpd/trunk/

Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de



Re: mod_proxy_http2 sni ?

2017-02-22 Thread Stefan Eissing
You can try v1.9.1 now to see if it works as needed.

> On 17.02.2017 at 16:11, Steffen  wrote:
> 
> Looks like the same, is not looking for the host as with 1.1
> 
> It is on my wish list for 2.4.26 
> 
> 
> 
>> On 16 Feb 2017 at 11:38, Stefan Eissing  wrote:
>> 
>> Is this the same as https://github.com/icing/mod_h2/issues/124 ?
>> 
>> It seems that the ProxyPreserveHost is not (correctly) implemented.
>> 
>>> On 16.02.2017 at 10:42, Steffen  wrote:
>>> 
>>> 
>>> Have an Apache ssl only in front of an Apache on port 80 with several 
>>> vhosts.
>>> 
>>> In front have:
>>> 
>>> 
>>> ProtocolsHonorOrder On
>>> Protocols h2 http/1.1
>>> LoadModule http2_module modules/mod_http2.so
>>> 
>>> 
>>> 
>>> ProxyPass / http://127.0.0.1:80/
>>> ProxyPassReverse / http://127.0.0.1:80/
>>> 
>>> 
>>> In backend have:
>>> 
>>> 
>>> ProtocolsHonorOrder On
>>> Protocols h2c http/1.1
>>> LoadModule http2_module modules/mod_http2.so
>>> 
>>> This is working great and with all the vhosts.
>>> 
>>> 
>>> When I add/change the front to:
>>> 
>>> 
>>> ProtocolsHonorOrder On
>>> Protocols h2 http/1.1
>>> LoadModule http2_module modules/mod_http2.so
>>> LoadModule proxy_http2_module modules/mod_proxy_http2.so
>>> 
>>> 
>>> ProxyPass / h2c://127.0.0.1:80/
>>> ProxyPassReverse / h2c://127.0.0.1:80/
>>> 
>>> 
>>> This is not working as expected, all is going to the default/first vhost.
>>> 
>>> a log line from the backend gives in all cases not found.
>>> 
>>> default 127.0.0.1 - - [16/Feb/2017:10:22:00 +0100] "GET /index.php 
>>> HTTP/2.0" 404 207 ...
>>> 
>>> 
>>> Cheers,
>>> 
>>> Steffenal
>>> 
>>> 
>> 
>> Stefan Eissing
>> 
>> bytes GmbH
>> Hafenstrasse 16
>> 48155 Münster
>> www.greenbytes.de
>> 
> 

Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de



Re: svn commit: r1783912 - in /httpd/httpd/trunk/modules/http2: h2_conn.c h2_mplx.c h2_mplx.h

2017-02-22 Thread Yann Ylavic
On Wed, Feb 22, 2017 at 10:55 AM, Stefan Eissing wrote:
> Now you and a recent, unrepeatable crash on my stress test made me
> nervous. The mplx alloc mutex goes back in now, I want to sleep at
> night... ;-)

Sorry about that, although I already knew that counting mutexes was
better than counting sheep :)


Re: svn commit: r1783912 - in /httpd/httpd/trunk/modules/http2: h2_conn.c h2_mplx.c h2_mplx.h

2017-02-22 Thread Stefan Eissing
Now you and a recent, unrepeatable crash on my stress test made me nervous. The 
mplx alloc mutex goes back in now, I want to sleep at night... ;-)

> On 22.02.2017 at 10:01, Yann Ylavic  wrote:
> 
> On Wed, Feb 22, 2017 at 8:52 AM, Stefan Eissing
>  wrote:
>> 
>>> On 21.02.2017 at 18:34, Yann Ylavic  wrote:
>>> 
>>> We are back to initial issue here, no?
>> 
>> Surely hope not. All subpools of mplx->pool are guarded by mplx->lock mutex 
>> already.
> 
> OK, slaves/tasks creations look safe indeed.
> 
> Also (mainly for my guidance), what about streams' creation (from
> session->pool)?
> This is in nghttp2's on_begin_headers_cb(), hence always by the same thread?
> 
> Thanks for your patience ;)

Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de



Re: svn commit: r1783912 - in /httpd/httpd/trunk/modules/http2: h2_conn.c h2_mplx.c h2_mplx.h

2017-02-22 Thread Stefan Eissing

> On 22.02.2017 at 10:01, Yann Ylavic  wrote:
> 
> On Wed, Feb 22, 2017 at 8:52 AM, Stefan Eissing
>  wrote:
>> 
>>> On 21.02.2017 at 18:34, Yann Ylavic  wrote:
>>> 
>>> We are back to initial issue here, no?
>> 
>> Surely hope not. All subpools of mplx->pool are guarded by mplx->lock mutex 
>> already.
> 
> OK, slaves/tasks creations look safe indeed.
> 
> Also (mainly for my guidance), what about streams' creation (from
> session->pool)?
> This is in nghttp2's on_begin_headers_cb(), hence always by the same thread?

Yes, all on the master "side". And lifetime of a stream is now always >= 
lifetime of its slave/task (modulo slave reuse). Go betweens are only h2_mplx 
and h2_bucket_beam, with beams now also handling response headers and trailers.

I am currently thinking about moving the complete stream set from mplx back to 
session and only giving streams to mplx for cleanup. I also want to try the 
possible gains when each beam has its own mutex. 

Another, bigger change idea would be to bundle two beams into an h2_bucket_pipe and tie 
that to a slave connection, to be reused for multiple requests. That would give 
a bi-directional bucket "connection" for slaves. Make that usable in pollsets. 
Integrate that into mpm and we could get rid of h2_workers...

A man can dream...

> 
> Thanks for your patience ;)

My pleasure. ;)

Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de



Re: svn commit: r1783912 - in /httpd/httpd/trunk/modules/http2: h2_conn.c h2_mplx.c h2_mplx.h

2017-02-22 Thread Yann Ylavic
On Wed, Feb 22, 2017 at 8:52 AM, Stefan Eissing
 wrote:
>
>> On 21.02.2017 at 18:34, Yann Ylavic  wrote:
>>
>> We are back to initial issue here, no?
>
> Surely hope not. All subpools of mplx->pool are guarded by mplx->lock mutex 
> already.

OK, slaves/tasks creations look safe indeed.
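
(For context, the pattern being referred to -- a schematic sketch with
illustrative names, not the real h2_mplx fields: APR pools are not
thread-safe, so creating subpools of a parent pool that several worker
threads share has to be serialized by a mutex.)

#include <apr_pools.h>
#include <apr_thread_mutex.h>

typedef struct {
    apr_pool_t *pool;           /* parent pool shared across threads */
    apr_thread_mutex_t *lock;   /* guards all allocations from it */
} shared_ctx;

static apr_status_t create_task_pool(shared_ctx *m, apr_pool_t **ptask)
{
    apr_status_t rv;
    apr_thread_mutex_lock(m->lock);
    rv = apr_pool_create(ptask, m->pool);
    apr_thread_mutex_unlock(m->lock);
    return rv;
}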

Also (mainly for my guidance), what about streams' creation (from
session->pool)?
This is in nghttp2's on_begin_headers_cb(), hence always by the same thread?

Thanks for your patience ;)


Re: [RFC] ?

2017-02-22 Thread Stefan Eissing
Neat! +1

> On 21.02.2017 at 22:58, Joe Orton  wrote:
> 
> For cases like HttpProtocolOptions where a new directive is introduced 
> to multiple active branches simultaneously, it gets awkward to use 
>  to write conf files which use the new directive but are 
> compatible across multiple versions.
> 
> Triggered by a conversation with a user, but also e.g. see current test 
> suite t/conf/extra.conf.in which breaks for 2.4 releases older than 
> 2.4.25 with:
> 
>  <IfVersion >= 2.2.32>
>
>  DocumentRoot @SERVERROOT@/htdocs/
>  HttpProtocolOptions Strict Require1.0 RegisteredMethods
> 
> Any reason  is a bad idea, so we can do that more cleanly 
> (... in a couple of decades time)?
> 
> Regards, Joe
> 

Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de



Re: httpd 2.4.25, mpm_event, ssl: segfaults

2017-02-22 Thread Stefan Eissing

> On 22.02.2017 at 00:14, Jacob Champion  wrote:
> 
> On 02/19/2017 01:37 PM, Niklas Edmundsson wrote:
>> On Thu, 16 Feb 2017, Jacob Champion wrote:
>>> So, I had already hacked my O_DIRECT bucket case to just be a copy of
>>> APR's file bucket, minus the mmap() logic. I tried making this change
>>> on top of it...
>>> 
>>> ...and holy crap, for regular HTTP it's *faster* than our current
>>> mmap() implementation. HTTPS is still slower than with mmap, but
>>> faster than it was without the change. (And the HTTPS performance has
>>> been really variable.)
>> 
>> I'm guessing that this is with a low-latency storage device, say a
>> local SSD with low load? O_DIRECT on anything with latency would require
>> way bigger blocks to hide the latency... You really want the OS
>> readahead in the generic case, simply because it performs reasonably
>> well in most cases.
> 
> I described my setup really poorly. I've ditched O_DIRECT entirely. The 
> bucket type I created to use O_DIRECT has been repurposed to just be a copy 
> of the APR file bucket, with the mmap optimization removed entirely, and with 
> the new 64K bucket buffer limit. This new "no-mmap-plus-64K-block" file 
> bucket type performs better on my machine than the old "mmap-enabled" file 
> bucket type.
> 
> (But yes, my testing is all local, with a nice SSD. Hopefully that gets a 
> little closer to isolating the CPU parts of this equation, which is the thing 
> we have the most influence over.)
> 
>> I think the big win here is to use appropriate block sizes, you do more
>> useful work and less housekeeping. I have no clue on when the block size
>> choices were made, but it's likely that it was a while ago. Assuming
>> that things will continue to evolve, I'd say making hard-coded numbers
>> tunable is a Good Thing to do.
> 
> Agreed.
> 
>> Is there interest in more real-life numbers with increasing
>> FILE_BUCKET_BUFF_SIZE or are you already on it?
> 
> Yes please! My laptop probably isn't representative of most servers; it can 
> do nearly 3 GB/s AES-128-GCM. The more machines we test, the better.
> 
>> I have an older server
>> that can do 600 MB/s aes-128-gcm per core, but is only able to deliver
>> 300 MB/s https single-stream via its 10 Gbps interface. My guess is that
>> too-small blocks cause CPU cycles to be spent on housekeeping rather than
>> on delivering data...
> 
> Right. To give you an idea of where I am in testing at the moment: I have a 
> basic test server written with OpenSSL. It sends a 10 MiB response body from 
> memory (*not* from disk) for every GET it receives. I also have a copy of 
> httpd trunk that's serving an actual 10 MiB file from disk.
> 
> My test call is just `h2load --h1 -n 100 https://localhost/`, which should 
> send 100 requests over a single TLS connection. The ciphersuite selected for 
> all test cases is ECDHE-RSA-AES256-GCM-SHA384. For reference, I can do 
> in-memory AES-256-GCM at 2.1 GiB/s.
> 
> - The OpenSSL test server, writing from memory: 1.2 GiB/s
> - httpd trunk with `EnableMMAP on` and serving from disk: 850 MiB/s
> - httpd trunk with 'EnableMMAP off': 580 MiB/s
> - httpd trunk with my no-mmap-64K-block file bucket: 810 MiB/s
> 
> So just bumping the block size gets me almost to the speed of mmap, without 
> the downside of a potential SIGBUS. Meanwhile, the OpenSSL test server seems 
> to suggest a performance ceiling about 50% above where we are now.
> 
> Even with the test server serving responses from memory, that seems like 
> plenty of room to grow. I'm working on a version of the test server that 
> serves files from disk so that I'm not comparing apples to oranges, but my 
> prior testing leads me to believe that disk access is not the limiting factor 
> on my machine.
> 
> --Jacob

Just so I do not misunderstand: 

you increased BUCKET_BUFF_SIZE in APR from 8000 to 64K? That is what you are 
testing?

Stefan Eissing

bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de