RE: 2.5 alpha proposal

2017-11-21 Thread Bert Huijben


> -Original Message-
> From: William A Rowe Jr [mailto:wr...@rowe-clan.net]
> Sent: donderdag 16 november 2017 16:40
> To: httpd 
> Subject: Re: 2.5 alpha proposal
> 
> On Thu, Nov 16, 2017 at 8:40 AM, Stefan Eissing
>  wrote:
> >> On 16.11.2017 at 14:03, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> >>
> >> So, we won't be able to ignore this for long...
> >>
> >> I'd propose we migrate dsp to the oldest supported vcproj format (my
> >> cvtdsp can help get these flags right) for those who like the IDE, until we
> >> show that cmake generated vcproj files work just fine. Hopefully this occurs
> >> prior to beta.
> >>
> >> Drop .mak and .dsp files and let cmake create make files and any alternate
> >> gui representations anyone needs, e.g. eclipse, code warrior etc etc etc.
> >>
> >> Thoughts?
> >
> > I am not able to contribute to the Windows build discussion. For the sake
> > of understanding, however,
> > we have currently:
> >   * our AP enriched automake variant
> >   * cmake
> >   * some version of visual-c/-studio project setup
> >   * a netware build
> > ?
> >
> > Is that a complete list? And some Windows people use cmake and some
> > the vcproj files?
> 
> Perfect summary.
> 
> Note that Netware maintainers have conceded that if we want to proceed with
> a newer generation of compiler and OS features that cannot be supported, it
> seemed reasonable to drop Netware at some point. Also note Netware build
> files can also be generated from cmake (we likely need to add some more
> functionality to make that happen to select Netware-specific sources and
> avoid some Win32 sources, but I'm already hoping to support Unix via cmake
> as well, so doing both at once doesn't seem like an extra headache.) I'm not
> speaking for our active Netware maintainers, so this position might have
> changed.
> 
> Some like the vcproj files for building. Others, like myself, hate building
> a distribution from a gui, but really like visual studio/vcproj for
> debugging; I'm much more efficient there than in gdb, and features like
> api/variable/struct member autocompletion make development simpler for the
> dyslexic. Not all that different than Eclipse and similar. Whether we
> provide them or can leverage cmake's vcproj creation logic, there is
> absolutely demand for these to be available somehow.

I would go the CMake route and use that as the primary build script for
Windows.

Microsoft has spent quite some time making this work directly in the last few
VS versions. Starting to use .vcproj instead of .dsp still moves us to a
format that has not been actively supported for more than 10 years. Visual
Studio 2010 moved to MSBuild-based .vcxproj files, which work much better from
the command line than .dsp / .vcproj... and it is not hard to generate these
from CMake.

Bert



RE: Serf support in trunk

2017-11-21 Thread Bert Huijben


> -Original Message-
> From: Rainer Jung [mailto:rainer.j...@kippdata.de]
> Sent: maandag 20 november 2017 14:33
> To: dev@httpd.apache.org; Bert Huijben <b...@qqmail.nl>
> Cc: d...@serf.apache.org
> Subject: Re: Serf support in trunk


> > I have no idea why some things were duplicated for different http
> > engines...
> > As httpd user I don't see why I want to use different configurations for
> > different engines, except for one knob that chooses the engine.
> >
> > Mod_pagespeed (another serf consumer) is also trying to become an ASF
> > project... Not sure if there would be any interest in that project.
> 
> I don't know whether mod_pagespeed also uses an outdated serf.

They use a slightly patched Serf 1.3.x, but with 1.4 they should be able to
move to the normal release. Last time I checked, the features they requested
were implemented in our public API.

Bert
> 
> Regards,
> 
> Rainer



RE: Serf support in trunk

2017-11-20 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: maandag 20 november 2017 11:40
> To: dev@httpd.apache.org
> Subject: Re: Serf support in trunk
> 
> +1 for pulling it unless someone steps forward.
> 
> > On 19.11.2017 at 12:49, Rainer Jung wrote:
> >
> > While testing the 2.5.0 alpha candidate I noticed that our optional use of
> > serf in mod_proxy and mpm_event is pretty outdated (so unmaintained):
> >
> > - the serf API we use was only present in serf until version 0.3.1
> >   (February 2010)

What serf API are we talking about?

There shouldn't have been any changes to the public API within 1.3.x... but
then, around 1.3.0 the project wasn't an ASF project yet.

> > - in May 2010 it was changed inside serf and httpd does not support this
> >   newer API
> >
> > - serf currently is now at version 1.3.9 (August 2016) providing stable
> >   APIs. There is still some basic maintenance activity in the serf project,
> >   for instance for supporting newer scons build tool versions or support
> >   for OpenSSL 1.1.0.
> >
> > I do not actually know what the serf support adds to httpd, it seems
> > mostly some "SerfCluster" feature for mod_proxy. There's no docs and some
> > comments in the code indicate the impl is not complete. SVN logs point to
> > the same direction.

I have no idea why some things were duplicated for different HTTP engines...
As an httpd user I don't see why I would want to use different configurations
for different engines, except for one knob that chooses the engine.

Mod_pagespeed (another serf consumer) is also trying to become an ASF
project... Not sure if there would be any interest in that project.

Bert



RE: httpd memory consumption

2017-10-06 Thread Bert Huijben


> -Original Message-
> From: Ruediger Pluem [mailto:rpl...@apache.org]
> Sent: vrijdag 6 oktober 2017 09:47
> To: Apache HTTP Server Development List 
> Subject: httpd memory consumption
> 
> I am currently looking at a core of a httpd 2.4 process using the event MPM
> that consumed a lot of memory (core dump file size about 1.4 GB).
> While taking the core actually no request was processed by this process.
> I suspect a memory leak in a 3rd party closed source module.
> As I have no view in the 3rd party module I tried to find out the heap
> memory usage of httpd and the included modules.
> 
> For this I used the new dump_pool_and_children I added to .gdbinit which
> delivers me the memory used by all pools below 'apr_global_pool' and the
> amount of memory in the allocator free lists associated to these pools.
> This delivered a usage of only a few MB. Of course I am aware that the
> process consumes additional memory for stack, static data and text
> segments, but this usage should be static.
> 
> Is there any heap usage by httpd that I could have missed?
> 
> - We do not use the unmanaged pools from APR. So I should have caught all
> pools.
> - I do no think that we use allocators that are not used by a pool. Hence I
> should have caught all allocators and
>   its free lists.
> - As no request was processed when I captured the core (all worker threads
>   were waiting for work) I doubt that I missed any, or at least any large,
>   memory consuming buckets.

Is APR configured to return memory to the OS? Otherwise you might just see all 
the allocated memory in the free list of the pool allocator(s).
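
For reference, a minimal sketch of the knob I mean here, using APR's public
allocator API (the 64 KB cap is just an example value; httpd exposes the same
setting through the MaxMemFree directive):

[[
#include <apr_pools.h>
#include <apr_allocator.h>

/* Cap the free list of the allocator behind 'pool' at 64 KB.  Freed
 * blocks above that threshold are returned to the OS instead of being
 * kept on the allocator's free list, which is where "invisible" memory
 * tends to sit when the pools themselves look almost empty. */
static void cap_allocator_free_list(apr_pool_t *pool)
{
    apr_allocator_t *allocator = apr_pool_allocator_get(pool);

    if (allocator) {
        apr_allocator_max_free_set(allocator, 64 * 1024);
    }
}
]]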

Bert



RE: 2.4.27

2017-07-06 Thread Bert Huijben


> -Original Message-
> From: Jim Jagielski [mailto:j...@jagunet.com]
> Sent: woensdag 5 juli 2017 18:49
> To: dev@httpd.apache.org
> Subject: Re: 2.4.27
> 
> These are just the fixes/regressions noted in CHANGES:
> 
> Changes with Apache 2.4.27
> 
>   *) mod_lua: Improve compatibility with Lua 5.1, 5.2 and 5.3.
>  PR58188, PR60831, PR61245. [Rainer Jung]
> 
>   *) mod_http2: disable and give warning when mpm_prefork is
>      encountered. The server will continue to work, but HTTP/2 will no
>      longer be negotiated. [Stefan Eissing]

Can somebody point me to the reasoning behind this?

I have this configuration on FreeBSD with older httpd versions, and it works
just fine for my limited load.

Switching to a different model will require compiling more ports myself as
the FreeBSD packaging system is optimized for this model.


I do understand that there is a better mapping of HTTP/2 streams onto the
more modern MPMs, but there must be a reason why something that worked can
no longer be supported in the future. I assume this reason is already
documented somewhere...

Thanks,

Bert




RE: Upgrade Summary

2015-12-11 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: vrijdag 11 december 2015 10:20
> To: dev@httpd.apache.org
> Subject: Re: Upgrade Summary


> Regarding request bodies:
> - websocket will never switch on request bodies
> - h2c currently does not, but clients like serf (and curl) really would
>   prefer it to
> - TLS could switch with arbitrary request bodies, but maybe need not to
> Protocol implementations should make up their minds in the "propose" phase,
> I think, because once a protocol gets selected, the upgrade *should* succeed
> unless the connection catches fire or something. Not upgrading is better
> than failing.

+1
Not upgrading is 100% better than failing.

But for Subversion not upgrading will most likely also imply not retrying
later on the same connection.

The way Subversion uses requests implies that it needs to pipeline requests.
With h2 it can do this much more efficiently and reliably than with HTTP/1.1,
but not pipelining will make operations against remote repositories unusably
slow. (And I think most of us at least sometimes use the ASF master
repository from a distance :-)


So once we get past that initial request on an operation like update,
checkout or merge we will start opening multiple http/1.1 connections and
pipelining requests on those connections without further upgrade headers.
With requests leaving and responses arriving out of sync there is no way to
upgrade later on.




One other thing that sticks in my mind:
Is it possible to upgrade to TLS in one request and to h2c later?
(The other direction is explicitly forbidden by h2)

I'm not going to implement this, for the pipelining reasons noted above, but 
perhaps other scenarios might want it. More likely, though, we might want to 
explicitly block this scenario.

TLS and then websockets...
I don't think that scenario should be supported either. Web browsers aren't 
going to use it... and dedicated clients probably use a direct connection of 
some sort instead of websockets.

Bert



RE: Upgrade Summary

2015-12-11 Thread Bert Huijben
If you request an upgrade to TLS on your initial request, upgrading with a body 
might still make sense. Especially if the server would respond with a 401. But 
also if the request can be public, but the response needs to be secured.

If we blindly ignore the upgrade as ‘doesn’t make sense’, the next request 
wouldn’t use encryption… but if we upgraded to TLS after the request, the next 
request… with the authentication headers could be sent encrypted.

The server can’t choose if that first request makes sense or not… It has to 
decide if upgrading the connection during the request makes sense… and if that 
is possible.

If upgrading the first request doesn’t make sense the client should use a 
different request (Like options *, or HEAD /).

If the server denies the request at least some parts have already travelled the 
network unencrypted. Returning an error or not upgrading will only make sure 
more requests will travel unencrypted.

The response is not the only thing encrypted when upgrading… all future 
requests are. Upgrade is a connection level request, sent via a request.


Bert


Sent from Mail for Windows 10



From: Yann Ylavic
Sent: vrijdag 11 december 2015 02:22
To: httpd-dev
Subject: Re: Upgrade Summary


On Thu, Dec 10, 2015 at 11:46 AM, Stefan Eissing wrote:
> Given all the input on this thread, I arrive at the following pseudo code:

Thanks for _compiling_ this thread, quite exhaustive :)

I wonder if we could let each Protocols module hook wherever
appropriate in the current hooking set.

TLS handshake is already at the right place IMO, it possibly needs a
simple fix like,

Index: modules/ssl/ssl_engine_kernel.c
===================================================================
--- modules/ssl/ssl_engine_kernel.c    (revision 1718341)
+++ modules/ssl/ssl_engine_kernel.c    (working copy)
@@ -233,7 +233,8 @@ int ssl_hook_ReadReq(request_rec *r)
  * has sent a suitable Upgrade header. */
 if (sc->enabled == SSL_ENABLED_OPTIONAL && !myConnConfig(r->connection)
 && (upgrade = apr_table_get(r->headers_in, "Upgrade")) != NULL
-&& ap_find_token(r->pool, upgrade, "TLS/1.0")) {
+&& ap_find_token(r->pool, upgrade, "TLS")
+&& !ap_has_request_body(r)) {
 if (upgrade_connection(r)) {
 return AP_FILTER_ERROR;
 }
--
so that we don't Upgrade when a body is given (looks like it's RFC
compliant since Upgrade is not mandatory).

Or maybe,
-&& ap_find_token(r->pool, upgrade, "TLS/1.0")) {
+&& ap_find_token(r->pool, upgrade, "TLS")) {
+if (ap_has_request_body(r)) {
+return HTTP_REQUEST_ENTITY_TOO_LARGE;
+}
 if (upgrade_connection(r)) {
 return AP_FILTER_ERROR;
 }
--
because a body in clear text while requiring TLS makes little sense, and
clients that send a TLS body directly (no header) after the Upgrade
would notice (now), so that looks safer.
But strictly speaking this looks not very RFC7230 compliant either, I
could not find an "Upgrade: TLS" exception there (the whole HTTP/1
request must be read otherwise before upgrade)...

Possibly we could also,
+ap_discard_request_body(r);
but I'd feel safer with the previous patch, WDYT?


Regarding WebSocket, we already have proxy_wstunnel that handshakes
successfully as a (proxy_)handler.
I don't know if the upcoming body is to be HTTP/1 or not, but should
this be enforced we could use ap_get_brigade() to forward it until
complete (while still detecting HTTP/1 "errors"), and then use the
current TCP forwarding until EOF on either side.


>
> 1. Post Read Request Hook:
>
> if (Upgrade: request header present) {
>     collect protocol proposals;
>     ps = protocol with highest preference from proposals;
>     if (ps && ps != current) {

Here I would use ap_add_output_filter(switch_protocol_filter, r); with
switch_protocol_filter() which would flush out the 101 response (or
not) based on r->need_upgrade and r->current_protocol, before any
Protocols data.
Maybe this filter could also call ap_discard_request_body(r) when
needed/asked to (eg. r->discard_body), that'd need CONN_SENSE handling
with MPM event possibly.
For the headers in the 101 response, another hook called from
switch_protocol_filter() could be used.

If I'm talking about new r-> field, that could also be in
core_request_config, reachable with
ap_get_core_module_config(r->request_config), for possibly a better
backportabily (but that's an implementation detail).

So now I think we can let the request fall through.
If a Protocols' module needs to bypass/handle auth[nz] itself, it
would hook after this one (post_read_request still, and return DONE).
Otherwise, as a handler, each Protocols' module would check
r->current_protocol against its own one, and either handle it or
DECLINE.
If the selected module doesn't care about the request body (ie. never
reads it), switch_protocol_filter() would do 

RE: Upgrade Summary

2015-12-11 Thread Bert Huijben
I’m not talking about why the client does want the upgrade…

Perhaps it doesn’t know if port 443 uses TLS or hosts the same content. In 
general it is just a lucky guess that it does have the same content. The 
Alt-Svc spec might help here, but currently this still requires the admin to 
configure things… and I don’t see that change anytime soon. (And if it would be 
automatic, it would still assume no firewalls, certificate availability, etc., 
etc.)


What I’m trying to say is: we should not just deny connections an upgrade for 
no reason. If we can upgrade the connection with a body… why explicitly deny it 
*after* we already received the body? If we want to reuse the connection we 
have to read the body anyway.


We can’t make the connection more secure by asking the client to retry the 
request… We can only make things less secure that way.


Returning a 4XX error just makes the client retry the request (perhaps after 
asking the user). The 4XX will be more work for the server, more work for the 
client and perhaps even for the end user.

We don’t make the web more secure by denying upgrades after the request is 
already sent. And as http/1.1 doesn’t have a way to stop a request before it is 
sent without resetting the connection I would say that we upgrade to TLS and/or 
H2c whenever possible.

The server can’t decide if the client should send the request unencrypted or 
not *after* it is already on the wire unencrypted.

By not upgrading we *as server* decide that the next request on the same 
connection will be unencrypted as well… By returning a 4XX (or other error) we 
ask the client to retry again; most likely on the same connection and just as 
unencrypted as the previous request.

On disconnecting we do the same thing….

The only way to ensure encryption or more advanced protocol support is honoring 
the upgrade request as soon as possible.

The deadlock scenarios on bodies that can’t be sent while the request is read 
are no reason to just block upgrades… These same deadlocks can occur for dozens 
of different reasons and in case of h2 even after upgrade. (Don’t send window 
updates and everything stalls… by design). At that point we can handle the 
error, just like how we handle timeouts on http/1.1

With h2 we can even force stream 1 (the response for the upgraded request) to 
close with an appropriate stream error in that case… so we can still reuse the 
connection.


With h2 you can in theory start the response in h2 while you are still reading 
the http/1.1 body, while with TLS you can’t as you need a two way handshake.

There will be cases where the upgrade path definitely breaks (e.g. 
huge/infinite chunked request echoed as response), but I think in most cases it 
can succeed without a problem. Just returning an error in easy cases for 
consistency with the cases where it can’t work doesn’t really help.



The problem with errors like 413 is that in clients like Subversion we can only 
handle them by showing them as a fatal error to the user. The printer scenario 
for upgrades to TLS probably does the same thing. Printing works or doesn’t 
work… It doesn’t allow retrying with a different request.

Not upgrading is an option… But upgrading under less favorable conditions is 
what makes things work.

  Bert

Sent from Mail for Windows 10



From: Yann Ylavic
Sent: zaterdag 12 december 2015 01:46
To: Bert Huijben
Subject: Re: Upgrade Summary


On Sat, Dec 12, 2015 at 12:48 AM, Bert Huijben <b...@qqmail.nl> wrote:
> If you request an upgrade to TLS on your initial request, upgrading with a
> body might still make sense. Especially if the server would respond with a
> 401. But also if the request can be public, but the response needs to be
> secured.

Why use Upgrade in the first place if you are confident enough (to
play auth) with the server being TLS ready?
Wouldn't a direct https connection be better?

Is there such an authentication protocol?
If so, I guess we can do the TLS Upgrade+handshake in the output
filter (as proposed) like any other Protocols Upgrade, letting httpd
handle that first clear text request body...
That's possible without setting aside the body anyway, still I doubt
there is a real need for it (if a TLS Upgrade is asked, the client
shouldn't mind the first (half) round trip and play its auth on the
second (TLSed) request).

>
> If we blindly ignore the upgrade as ‘doesn’t make sense’, the next request
> wouldn’t use encryption… but if we upgraded to TLS after the request, the
> next request… with the authentication headers could be sent encrypted.

My preference would be to return an error (413), so that the client
notices what's "better" for a TLS Upgrade (no body), but if there
really exist such a cleartext body use case I'm fine with honoring the
body too.

>
> If upgrading the first request doesn’t make sense the client should use a
> different request (Like options *, or HEAD /).

That are, AFAICT, the

RE: Upgrade Summary

2015-12-10 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: donderdag 10 december 2015 11:47
> To: dev@httpd.apache.org
> Subject: Re: Upgrade Summary
> 
> Given all the input on this thread, I arrive at the following pseudo code:
> 
> 1. Post Read Request Hook:
> 
>   if (Upgrade: request header present) {
>       collect protocol proposals;
>       ps = protocol with highest preference from proposals;
>       if (ps && ps != current) {
>           status = switch protocol(phase => post_read);
>           if (status == APR_EOF) {
>               close connection;
>           }
>           else if (status == APR_EAGAIN) {
>               // protocol switch wants to be called later before handler
>               if (request is "OPTIONS *") {
>                   // TODO: invoke again with (phase => handler)?

Why handle 'OPTIONS *' differently here?

Isn't that 'just another simple HTTP/1.1 request'?


From what I read about that request, it is a recommended easy request if you
don't have anything to send yourself at this stage. (With "HEAD /" as an
alternative.)

If not required, I would recommend not hardcoding specific behavior on this
request. There is not much that makes it that different from "GET /" or
"HEAD /", or any other request that doesn't have a request body.

Bert



RE: Upgrade Summary

2015-12-10 Thread Bert Huijben
Great to see where this discussion is headed:

+1 on the last design ideas.

 

Going with one ‘as early as possible’ upgrade and one ‘upgrade last’ should 
handle all these cases just fine.

 

I don’t think the h2c and TLS cases really have to be that different as 
suggested in the earlier parts of the discussion. Both want to upgrade as soon 
as possible, which is +- after the initial request is read, and before the 
response is generated.

 

TLS then really needs to exchange information both ways, while for h2c allowing 
exchange both ways allows doing some things a bit earlier.

 

 

Websockets are different… and in some ways not really an upgrade of the 
connection… more like a hostile takeover with a final operation to a target :)

 

More like a CONNECT request to a proxy… or perhaps a PRI request to a HTTP/1.1 
server that handles h2direct that way… one final request until the connection 
closes.

 

Bert

 

 

From: William A Rowe Jr [mailto:wr...@rowe-clan.net] 
Sent: donderdag 10 december 2015 07:40
To: httpd 
Subject: Re: Upgrade Summary

 

On Wed, Dec 9, 2015 at 9:22 PM, Jacob Champion wrote:

On 12/09/2015 05:19 PM, William A Rowe Jr wrote:

 

_If_ all the other protocols worked like WebSocket and required
authnz before an upgrade could succeed, it wouldn't make sense for
each upgrade handler to have to do the authnz check themselves. But
in this case, WebSocket is different enough that I think it will
probably have to.

No, because we will simply give you two chances to accept and trigger
the upgrade, once in the post_read_request, another in ap_invoke_handler
phase before filters are inserted.


Here you make it sound as if I won't have to duplicate the authnz checks in the 
handler phase, but in a previous email you said

websocket can ignore the Upgrade: websocket offer after inspecting authnz


Did I misunderstand? I feel like I'm missing something crucial to your point.

 

I propose two chances to catch the upgrade, early and late.  You need auth,

therefore you must inspect the late catch, which follows authnz but precedes 

the handler invocation.

 

If a websocket protocol module determines that authnz denied websocket

but not the resource, then it can proceed to the normal http/1 handler without

responding with a 101-switching protocols.  But if the request meets the

requirements to be handled by the websocket protocol, you reply to the

Protocol API with an 'upgrade accepted' and take ownership of the request

and the connection.

 

Sounds reasonable.

I'd certainly like to see a websocket prototype
or outline before we declare the protocol baked for the second time :)


Working on it. :) My original experimental hook for 2.4.17 rebased nicely onto 
2.4.18, but mod_websocket has been refactored significantly since I wrote that 
and I couldn't reuse my work. Sorry I've been slow in actual code-writing; I've 
been under the weather this week.

 

No worries, 2.4.18 is an incremental set of improvements, and 2.4.19

will be more so, but not in the next week :) 

 



RE: Upgrade Summary

2015-12-08 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: dinsdag 8 december 2015 11:55
> To: dev@httpd.apache.org
> Subject: Re: Upgrade Summary
> 
> 
> > On 08.12.2015 at 11:44, Yann Ylavic wrote:
> >
> > On Tue, Dec 8, 2015 at 11:07 AM, Stefan Eissing
> >  wrote:
> >> [...]
> >> Open:
> >> 1. Protocols like Websocket need to take over the 101 sending
> >>    themselves in the "switch protocol" phase. (correct, Jacob?). Should we
> >>    delegate the sending of the 101 to the protocol switch handler?
> >> 2. General handling of request bodies. Options:
> >>  a setaside in core of up to nnn bytes before switch invocation
> >>  b do nothing, let protocol switch handler care about it
> >
> > a. is tempting, where nnn would be the limit above which we don't honor
> > Upgrade.
> > But that could be a default behaviour, i.e. when "Protocols ... http1"
> > is finally elected.
> > Possibly specific modules would bypass that?
> >
> >> 3. When to do the upgrade dance:
> >>  a post_read_request: upgrade precedes authentication
> >
> > Looks like it can't be RFC compliant, is it?
> 
> I think it will not be, right.

I read the spec as saying H2c and HTTP/1.1 are equivalent protocols. Handling
authentication *after* switching should work just like when not switching.

As a client I would like to switch as soon as possible, as at that point I can
start multiple requests... potentially with their own auth, using the same or
different realms; or even different auth schemes.


I don't think we should require all auth to happen on HTTP/1.1.

With equivalent protocols we should switch as soon as possible... I don't
think servers should be configured as H2c only for authenticated users.
Especially if they also allow H2direct.

Bert




RE: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl Upgrade tls

2015-12-08 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: dinsdag 8 december 2015 10:25
> To: dev@httpd.apache.org
> Subject: Re: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl
> Upgrade tls
> 
> 
> > On 08.12.2015 at 01:58, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> >
> > On Mon, Dec 7, 2015 at 6:35 PM, Yann Ylavic wrote:
> > On Tue, Dec 8, 2015 at 1:27 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> > > On Mon, Dec 7, 2015 at 6:15 PM, Yann Ylavic wrote:
> > >>
> > >> On Tue, Dec 8, 2015 at 1:07 AM, Yann Ylavic wrote:
> > >> >
> > >> > the body ought to be
> > >> > set aside for any (relevant) TLS response (which needs the
> > >> > handshake...).
> > >>
> > >> Hmm, no need to set aside, *unless* with must produce a response
> > >> before the entire body (and the handshake) is read.
> > >> But we'd better not Upgrade in this case...
> > >
> > >
> > > Yes, there is a set aside, because the handler will read from the filter
> > > stack... the handler phase has not yet occurred, and other content
> > > input filters may be inserted to transform the request body.
> > >
> > > The upgrade switch would occur before the request content handler
> > > is invoked, in all cases (post_read_request, or later during fixups
> > > or the very beginning of invoke_handler).
> >
> > But this isn't what the RFC says, right?
> > The body of the first request is never Upgraded, so why would we read
> > it using the Upgraded protocol?
> >
> > How do you mean?  The RFC states we must read the request body
> > (following a 100-continue) prior to a 101-switching protocols.  AFTER
> > the protocol is switched, we are ready to invoke the handler, which
> > will read the request body (which I suggest we have set aside within
> > the http input filter) and it will write to the output filter stack, but to
> > a different stack of protocol and network filters.
> 
> No, that is not what the RFC says. HTTP/1 switching protocols does not need
> to happen simultaneously on upstream and downstream. Instead, the server
> switches directly after the 101 response, the client switches after the last
> byte of the request.

The client can only switch once it has confirmation from the server that it
is really going to perform the upgrade. If the server doesn't accept the
upgrade it must continue using 1.1.

So it has to stop producing data after that first request and first check if
there is a 101 response before it can pipeline the next request.

In pipelining clients such as serf this makes a huge difference.

At some points during update/checkout we may have tens of requests pipelined
on a single connection... but that process starts a bit after the initial
handshake. I'm trying to be in H2(c) by that point, to avoid opening the
additional connections that we currently open there.


Once the client has the 101 it can switch protocol for both input and output
and start sending the next request. Waiting for the 101 is a full stall.

Bert



RE: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl Upgradetls

2015-12-08 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: dinsdag 8 december 2015 10:43
> To: dev@httpd.apache.org
> Subject: Re: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl
> Upgradetls


> If Apache accepts body lengths of up to 64KB (or something configurable)
> and some other server implements 8K and some third 1MB, how will that
> help design of a client that, for some reason, *wants* an upgrade to
> succeed?

For Subversion this would make a 100% difference.

Currently all true Subversion servers are implemented via httpd.


Upgrade to h2c is currently not interesting for web browsers as all of them
have decided not to implement it... But it makes a huge difference for web
applications that talk to specific servers.

Which is very similar to that TLS upgrade that is interesting for web
printing... These use cases don't involve combinations of random clients with
random servers... They use specific combinations.

Bert



RE: Upgrade Summary

2015-12-08 Thread Bert Huijben
> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: dinsdag 8 december 2015 12:41
> To: dev@httpd.apache.org
> Subject: RE: Upgrade Summary
> 


> I don't think we should require all auth to happen on HTTP/1.1.

If you want to go for equivalent implementations for things like the TLS
upgrade... then you really don't want to have AUTH completed before upgrade.

My preferred implementation would issue the 101 on HTTP/1.1 and then produce
the 401 on H2.
(I would be happy with an EOS on the response headers... I'm not interested
in the body :-))

I can handle '100 continue' before the 101 on HTTP/1.1, but also after
switching to H2 and before the actual result on H2... But I'm happy that we
don't have to support all possible scenarios there.

Bert



RE: Upgrade Summary

2015-12-08 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: dinsdag 8 december 2015 13:22
> To: dev@httpd.apache.org
> Subject: Re: Upgrade Summary
> 
> 
> > On 08.12.2015 at 12:40, Bert Huijben <b...@qqmail.nl> wrote:
> >> -Original Message-
> >> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> >> Sent: dinsdag 8 december 2015 11:55
> >> To: dev@httpd.apache.org
> >> Subject: Re: Upgrade Summary
> >>>> [...]3. When to do the upgrade dance:
> >>>> a post_read_request: upgrade precedes authentication
> >>>
> >>> Looks like it can't be RFC compliant, is it?
> >>
> >> I think it will not be, right.
> >
> > I read the spec as H2c and HTTP/1.1 are equivalent protocols. Handling
> > authentication *after* switching should work just like when not switching.
> 
> It does. We are only talking about upgrade.
> 
> > As client I would like to switch as soon as possible, as at that point I can
> > start multiple requests.. potentially with their own auth, using the same or
> > different realms; or even different auth schemes.
> 
> Again, your easiest choice is a direct h2c connection. The second easiest is
> an initial OPTIONS * request without *request* body. The response may have
> any body length it wants.
> 
> The options * will, as I understand it, not trigger any authentication. In
> another mail, you describe that you already send such a request as 1st thing.
> But you said it has a body. What does that contain, I wonder?
> 
> If you can live without that request body, we are all fine and have a simple
> implementation. If we need to implement upgrades to h2c on some length
> request bodies, I personally do not have the time to do that right away among
> all other changes that are being discussed. At least not this week.
> 
> So, what is so relevant about the OPTIONS request body, may I ask?

I can't live without that request body. That would just add another request
to my handshake... the thing I'm trying to avoid.

I'm guessing the body of that initial request is something like
[[
<?xml version="1.0" encoding="utf-8"?>
<D:options xmlns:D="DAV:">
  <D:activity-collection-set/>
</D:options>
]]
(content-type text/xml of course)

And this request is mostly handled by mod_dav, and further annotated with
additional headers by mod_dav_svn.

I can't just redesign DAV to optimize for HTTP/2. If we had a time machine,
we would have made other decisions.

It would have been nice to know that DELTA/V was never really implemented
outside Subversion. But now we have compatibility guarantees with older
Subversion clients/servers and other implementations that may use some of the
features.

Bert



RE: Upgrade Summary

2015-12-08 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: dinsdag 8 december 2015 11:07
> To: dev@httpd.apache.org
> Subject: Upgrade Summary
> 
> Trying to summarize the status of the discussion and where the issues are
> with the current Upgrade implementation.
> 
> Clarified:
> A. any 100 must be sent out *before* a 101 response
> B. request bodies are to be read in the original protocol, input filters like
> chunk can be used, indeed are necessary, as if the request is being
> processed normally
> C. if a protocol supports upgrade on request bodies is up to the protocol
> implementation and needs to be checked in the "propose" phase
> 
> Open:
> 1. Protocols like Websocket need to take over the 101 sending themselves in
> the "switch protocol" phase. (correct, Jacob?). Should we delegate the
> sending of the 101 to the protocol switch handler?
> sending of the 101 to the protocol switch handler?

If possible I would recommend avoiding this. I think the original protocol
should set up the response and then the final protocol should somehow be able
to annotate this.

The problems we try to solve now originate from doing things differently in
different protocol handlers, while in theory many upgrades are very similar.

The TLS and H2C upgrades both begin in one form and end in a different form.
Websockets are kind of different in that they require a bad request response
in a specific case. I'm not sure in which protocol this error needs to be
sent though.

In TLS and H2C further errors can always be produced in the new protocol, so
when the handshake succeeds things can just go on.

> 2. General handling of request bodies. Options:
>   a setaside in core of up to nnn bytes before switch invocation
>   b do nothing, let protocol switch handler care about it

For Subversion to be able to use upgrade we would need to support a small
body on a request (a few hundred bytes, with a Content-Length header provided).

During our current (already optimized) handshake all requests have bodies,
and introducing an additional request just to upgrade will slow things down
quite measurably on operations like 'svn log' that are mostly bound by the
handshake time.

We can't just switch the handshake to be something else... our handshake was
built upon WEBDAV and DELTA/V. We added several headers to avoid many
requests we previously performed, but we can't move away from that initial
OPTIONS request without slowing down against all older servers.

Bert




RE: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl Upgradetls

2015-12-08 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: dinsdag 8 december 2015 11:36
> To: dev@httpd.apache.org
> Subject: Re: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl
> Upgradetls
> 
> Bert,
> 
> I do not understand. How do you want to make pipelining requests and
> protocol upgrades work together? I assume you talk about http pipelining
> where you send a second request before you receive the response for the
> first one...

I can't... but you were explaining that I could just switch to the new
protocol after my request... which I can't.

I have to perform a full stop... which is a performance killer in a client
that is optimized for pipelining. (Serf and libsvn_ra_serf are 100%
optimized for pipelining)

That is why I want to upgrade on the first request.


At the start of any session we currently perform 2 requests with full stop
behavior, to support user scenarios:
* one to detect if we have HTTP/1.1 or HTTP/1.0.

This is an OPTIONS request with a small Content-Length body. This will
already hand over the capabilities of the Subversion server and some
interesting URLs. If we find an old server here, the full handshake will
have more requests.

* A second request with a small chunked body to see if the server supports
chunked encoding (assuming we found 1.1).
If users use an nginx frontend (which is not uncommon) this fails and we
explicitly cache all requests to have explicit Content-Length headers in
this case.

After this we have many different scenarios, of which many want to pipeline,
or open multiple connections as soon as possible.

For Subversion I would like to switch to H2c in one of these two requests.
The first one looks like the simplest one. (Combining upgrading with chunked
makes things even harder).



I can't store whether a server supports H2 and just use it. Subversion is
used on laptops that are used on multiple networks.


Receiving Alt-Svc in the first response and just pipelining the upgrade
between further requests if there is a match, is the only other way I could
upgrade without a performance penalty on all servers that don't support H2c.

Bert



RE: Upgrade Summary

2015-12-08 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: dinsdag 8 december 2015 13:54
> To: dev@httpd.apache.org
> Subject: Re: Upgrade Summary
> 
> I see. Delta-V goodies.
> 
> My proposal therefore is:
> - keep the upgrade/protocol switch mechanism as is for the 2.4.18 release
> - make the agreed upon changes, including TLS upgrade and small content
> h2c upgrades for the next release
> 
> Since the mechanism is already out with 2.4.17, I see no sense in delaying
> 2.4.18 at this point and rather give us time to change it later, so it works
> for everyone.

Sure... my upgrade requests for Subversion are for a future 2.4.x... Certainly
no rush here.

I don't expect this upgrade support to reach Subversion far before 1.10. 

Perhaps we will backport some of this to 1.9, but I don't think we can just
enable H2 support in a minor release without testing on a bigger group
first.
(I'm pretty sure we'll find new interesting proxy server scenarios :-( )

Bert



RE: AW: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl Upgradetls

2015-12-07 Thread Bert Huijben
Is this an h2 limitation or a mod_h2 limitation?

If I wanted an h2c upgrade from Subversion without an additional request I 
would have to send the upgrade request on a very short OPTIONS request that 
has a body.

The way I read the spec, that should be possible if both sides go through all 
those rough edge cases to support a request in one protocol and the response in 
the other. For serf I intended to implement that, but I'm not sure if it is 
worth the trouble if httpd doesn't implement it.

Bert

Sent from Mail for Windows 10



From: Stefan Eissing
Sent: maandag 7 december 2015 20:11
To: dev@httpd.apache.org
Subject: Re: AW: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl 
Upgradetls


h2 does not propose a switch if the request has a body. 

As clarified on the http wg lists by the gurus there, when I asked, the 
upgraded connection is in a mixed state after a 101. Any byte sent by the 
server MUST be from the switched protocol, while the client needs to send the 
body in HTTP/1 format and can only talk the new proto afterwards. 

For an Expect 100 that would mean that the 100 intermediate comes before the 
101. However that is untested with h2 as we do not propose a switch when a body 
is there.

On 07.12.2015 at 19:50, Plüm, Rüdiger, Vodafone Group wrote:
 
 
From: William A Rowe Jr [mailto:wr...@rowe-clan.net]
Sent: Monday, 7 December 2015 17:39
To: httpd
Subject: Re: 2.4 pause - mod_http2 patchset Upgrade h2c vs mod_ssl Upgrade tls
 
https://tools.ietf.org/html/rfc7230#section-6.7 makes things more interesting, 
it calls out that the 100-continue and the request body read precede the 
101-switching-protocols.  Not sure who decided that would be a good idea, 
sigh... but mod_ssl TLS upgrade has these reversed for several good reasons 
including the intent to encrypt the request body if present and simple 
economics of processing.
I think that handling upgrade advertisement and alerting must be in post read 
req, bypassing all request hooks until the 100-continue is presented, any small 
request body read and set aside for the http input brigade, and 101-switching 
protocols is presented.  This allows the request to still be processed for 
tls-style upgrades, or discarded for relevant protocols.
How do we handle this today if the client just sends a request body and not an 
Expect header? Do we set it already aside before answering with a 101?
Regards
Rüdiger




RE: No H2 Window updates!

2015-12-04 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: vrijdag 4 december 2015 10:18
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> That is unfortunate. I added a test case which reproduced window
> exhaustion before I fixed it. What sort of requests do still miss window
> updates? content-length? response codes?

I don't think we send Content-Length headers... (see older problem). I fixed
Subversion to always send Content-Type though.

I'm going to add some more tracing to get the details again, but from what I
remember I had requests of around 113 and 128 bytes, over and over.

[The tool experiencing this problem is a tool that tries to replicate some
behavior we can't express in Subversion yet. To implement this it asks the
server for similar things over and over. If we decide that we go this way we
would probably send one advanced request and get one response from the
server... But currently it is a nice tool to test the network IO limits :-)]

These requests are always 1 HEADERS (/endofheaders) + 1 DATA (/eos) and the
response is not much either (Certainly less than 1 Kbyte, so probably just
HEADERS+1 DATA as well)


--
Ok, added some logging back.

I still don't see a lot of window updates, but I do see some.

After my connection window is slightly below 32767... I receive a single
connection window update to bring it back to exactly 32767. In all cases in
this log this update is < 100 bytes (1 <= update < 256).

But then I don't receive further window updates until my connection window
is almost depleted... And then I get another 255 bytes or so. Not enough to
get through the test.

The log I just used is on:
https://lpt1.nl/f/2015/201512-NoWindow.txt 
(Search for 'DBG: Connection window update' to find the updates between the
allocations)
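
For anyone following along, this is roughly the sender-side bookkeeping
involved (a sketch with names of my own, not serf's actual code): once the
connection window reaches 0 the client cannot send any more DATA until an
update arrives.

[[
#include <stdint.h>

/* Rough sketch of sender-side HTTP/2 flow control: the connection send
 * window shrinks by every DATA payload sent and only grows again when a
 * WINDOW_UPDATE arrives.  Once it reaches 0, no DATA frame may be sent
 * and the upload stalls. */
typedef struct {
    int64_t conn_window;      /* starts at the advertised initial size */
} send_window_state;

static int can_send_data(const send_window_state *s, uint32_t payload_len)
{
    return (int64_t)payload_len <= s->conn_window;
}

static void on_data_sent(send_window_state *s, uint32_t payload_len)
{
    s->conn_window -= payload_len;
}

static void on_window_update(send_window_state *s, uint32_t increment)
{
    s->conn_window += increment;   /* overflow handling omitted here */
}
]]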

Bert



RE: No H2 Window updates!

2015-12-04 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: vrijdag 4 december 2015 16:23
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> If you find the time, the latest v1.0.10 mod_http2 in 2.4.x sets the
> connection window to max which addresses for me the window starvation
> issues I was able to reproduce (and put into my test suite). I hope this works
> for you as well, otherwise I'd need more detailed data on how to reproduce
> the hanger.

The latest version immediately gives me a connection window update of the
maximum allowed value.

But given that the initial connection window was already set via the
settings, adding this value is not allowed.

The code has this comment copied from the RFC:
/* A sender MUST NOT allow a flow-control window to exceed 2^31-1
 octets.  If a sender receives a WINDOW_UPDATE that causes a flow-
 control window to exceed this maximum, it MUST terminate either the
 stream or the connection, as appropriate.  For streams, the sender
 sends a RST_STREAM with an error code of FLOW_CONTROL_ERROR; for the
 connection, a GOAWAY frame with an error code of FLOW_CONTROL_ERROR
 is sent. */

Serf does exactly this... it terminates the connection.

So it gets nowhere near the original point of failure in the test.
It fails around the first stream created in the test, while the original
failure is after thousands of requests.
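
As a rough sketch (my names, not serf's actual code), the check that triggers
here is simply:

[[
#include <stdint.h>

/* A WINDOW_UPDATE increment is only legal if the resulting send window
 * stays at or below 2^31-1; otherwise the receiver must answer with
 * FLOW_CONTROL_ERROR (RST_STREAM for a stream, GOAWAY for the connection). */
static int window_update_overflows(int64_t current_window, uint32_t increment)
{
    return current_window + (int64_t)increment > (int64_t)INT32_MAX;
}
]]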

Bert

-- 
With the same logging enabled as on that older url, this is the whole test
output now:


START: svnmover_tests.py
I: CMD: svnadmin.exe create svn-test-work\local_tmp\repos
--compatible-version=1.10 --fs-type=fsfs
I: 
I: CMD: svn.exe import -m "Log message for revision 1."
svn-test-work\local_tmp\greekfiles
https://localhost:7829/svn-test-work/local_tmp/repos --config-dir
R:\subversion\tests\cmdline\svn-test-work\local_tmp\config --password
rayjandom --no-auth-cache --username jrandom
I: CMD: R:\subversion\svn/svn.exe import -m "Log message for revision 1."
svn-test-work\local_tmp\greekfiles
https://localhost:7829/svn-test-work/local_tmp/repos --config-dir
R:\subversion\tests\cmdline\svn-test-work\local_tmp\config --password
rayjandom --no-auth-cache --username jrandom exited with 1
I: 
I: DBG: Allocated 131 bytes for window on stream 0x1 (left: 65404, 65404)
I: DBG: Connection window update of 2147483392 to -2147418500
I: ..\..\..\subversion\svn\import-cmd.c:129,
I: ..\..\..\subversion\libsvn_client\import.c:868,
I: ..\..\..\subversion\libsvn_client\ra.c:509,
I: ..\..\..\subversion\libsvn_client\ra.c:488,
I: ..\..\..\subversion\libsvn_ra\ra_loader.c:404:
(apr_err=SVN_ERR_RA_CANNOT_CREATE_SESSION)
I: svn: E170013: Unable to connect to a repository at URL
'https://localhost:7829/svn-test-work/local_tmp/repos'
I: ..\..\..\subversion\libsvn_ra_serf\serf.c:603,
I: ..\..\..\subversion\libsvn_ra_serf\options.c:538,
I: ..\..\..\subversion\libsvn_ra_serf\util.c:1032,
I: ..\..\..\subversion\libsvn_ra_serf\util.c:981,
I: ..\..\..\subversion\libsvn_ra_serf\util.c:958: (apr_err=120153)
I: svn: E120153: Error running context: HTTP2 flow control limits exceeded
W: ..\..\..\subversion\svn\import-cmd.c:129,
W: ..\..\..\subversion\libsvn_client\import.c:868,
W: ..\..\..\subversion\libsvn_client\ra.c:509,
W: ..\..\..\subversion\libsvn_client\ra.c:488,
W: ..\..\..\subversion\libsvn_ra\ra_loader.c:404:
(apr_err=SVN_ERR_RA_CANNOT_CREATE_SESSION)
W: svn: E170013: Unable to connect to a repository at URL
'https://localhost:7829/svn-test-work/local_tmp/repos'
W: ..\..\..\subversion\libsvn_ra_serf\serf.c:603,
W: ..\..\..\subversion\libsvn_ra_serf\options.c:538,
W: ..\..\..\subversion\libsvn_ra_serf\util.c:1032,
W: ..\..\..\subversion\libsvn_ra_serf\util.c:981,
W: ..\..\..\subversion\libsvn_ra_serf\util.c:958: (apr_err=120153)
W: svn: E120153: Error running context: HTTP2 flow control limits exceeded
END: svnmover_tests.py
ELAPSED: svnmover_tests.py 0:00:00.609000




RE: svn commit: r1717970 - /httpd/test/mod_h2/trunk/test/test_window_update.sh

2015-12-04 Thread Bert Huijben


> -Original Message-
> From: ic...@apache.org [mailto:ic...@apache.org]
> Sent: vrijdag 4 december 2015 15:26
> To: c...@httpd.apache.org
> Subject: svn commit: r1717970 -
> /httpd/test/mod_h2/trunk/test/test_window_update.sh
> 
> Author: icing
> Date: Fri Dec  4 14:25:53 2015
> New Revision: 1717970
> 
> URL: http://svn.apache.org/viewvc?rev=1717970&view=rev
> Log:
> adding to window_update tests to verify connection window exhaustion (or
> lack of it)
> 
> Modified:
> httpd/test/mod_h2/trunk/test/test_window_update.sh
> 
> Modified: httpd/test/mod_h2/trunk/test/test_window_update.sh
> URL:
> http://svn.apache.org/viewvc/httpd/test/mod_h2/trunk/test/test_window
> _update.sh?rev=1717970&r1=1717969&r2=1717970&view=diff
> ==============================================================================
> 
> --- httpd/test/mod_h2/trunk/test/test_window_update.sh (original)
> +++ httpd/test/mod_h2/trunk/test/test_window_update.sh Fri Dec  4
> 14:25:53 2015
> @@ -34,19 +34,18 @@ fi
>  # test if small uploads trigger window_udpate(s) and do not let connection
>  # windows shrink to 0.
>  #
> -echo -n "POSTing 10 x 10k to server, MUST not hang"
> +echo -n "POSTing 10 x 10k, m=1"
>  ${H2LOAD} -p h2c -c 1 -t 1 -m 1 -n 10 -d $GEN/data-10k ${URL_PREFIX} |
> -while read line; do echo -n "."; done
> -echo "ok."
> +while read line; do echo -n "."; done; echo "ok."
> 
> -echo -n "POSTing 10 x 100k to server, which are READ by app"
> -${H2LOAD} -p h2c -c 1 -t 1 -m 1 -n 100 -d $GEN/data-100k
> ${URL_PREFIX}/upload.py |
> -while read line; do echo -n "."; done
> -echo "ok."
> +echo -n "POSTing 1000 x 1k, m=100"
> +${H2LOAD} -p h2c -c 1 -t 1 -m 100 -n 1000 -d $GEN/data-1k ${URL_PREFIX} |
> +while read line; do echo -n "."; done; echo "ok."
> 
> -# FIXME: this does not work reliably for n >= 100
> -#
> -#echo -n "POSTing 10 x 100k to server, which are NOTREAD by app"
> -#${H2LOAD} -p h2c -c 1 -t 1 -m 1 -n 10 -d $GEN/data-100k ${URL_PREFIX} |
> -#while read line; do echo -n "."; done
> -#echo "ok."
> \ No newline at end of file
> +echo -n "POSTing 100 x 100k, m=5, READ"
> +${H2LOAD} -p h2c -c 1 -t 1 -m 5 -n 100 -d $GEN/data-100k
> ${URL_PREFIX}/upload.py |
> +while read line; do echo -n "."; done; echo "ok."
> +
> +echo -n "POSTing 100 x 100k, m=50, NOT READ explicitly"
> +${H2LOAD} -p h2c -c 1 -t 1 -m 50 -n 100 -d $GEN/data-100k ${URL_PREFIX} |
> +while read line; do echo -n "."; done; echo "ok."


The Subversion test I try is more like sending 2000 requests of 100 bytes over 
a single connection.

Bert 




RE: svn commit: r1717970 - /httpd/test/mod_h2/trunk/test/test_window_update.sh

2015-12-04 Thread Bert Huijben


> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: vrijdag 4 december 2015 18:51
> To: dev@httpd.apache.org
> Subject: RE: svn commit: r1717970 -
> /httpd/test/mod_h2/trunk/test/test_window_update.sh
> 
> 
> 
> > -Original Message-
> > From: ic...@apache.org [mailto:ic...@apache.org]
> > Sent: vrijdag 4 december 2015 15:26
> > To: c...@httpd.apache.org
> > Subject: svn commit: r1717970 -
> > /httpd/test/mod_h2/trunk/test/test_window_update.sh
> >
> > Author: icing
> > Date: Fri Dec  4 14:25:53 2015
> > New Revision: 1717970
> >
> > URL: http://svn.apache.org/viewvc?rev=1717970=rev
> > Log:
> > adding to window_update tests to verify connection window exhaustion
> (or
> > lack of it)
> >
> > Modified:
> > httpd/test/mod_h2/trunk/test/test_window_update.sh
> >
> > Modified: httpd/test/mod_h2/trunk/test/test_window_update.sh
> > URL:
> >
> http://svn.apache.org/viewvc/httpd/test/mod_h2/trunk/test/test_window
> > _update.sh?rev=1717970=1717969=1717970=diff
> >
> ==
> > 
> > --- httpd/test/mod_h2/trunk/test/test_window_update.sh (original)
> > +++ httpd/test/mod_h2/trunk/test/test_window_update.sh Fri Dec  4
> > 14:25:53 2015
> > @@ -34,19 +34,18 @@ fi
> >  # test if small uploads trigger window_udpate(s) and do not let connection
> >  # windows shrink to 0.
> >  #
> > -echo -n "POSTing 10 x 10k to server, MUST not hang"
> > +echo -n "POSTing 10 x 10k, m=1"
> >  ${H2LOAD} -p h2c -c 1 -t 1 -m 1 -n 10 -d $GEN/data-10k ${URL_PREFIX} |
> > -while read line; do echo -n "."; done
> > -echo "ok."
> > +while read line; do echo -n "."; done; echo "ok."
> >
> > -echo -n "POSTing 10 x 100k to server, which are READ by app"
> > -${H2LOAD} -p h2c -c 1 -t 1 -m 1 -n 100 -d $GEN/data-100k
> > ${URL_PREFIX}/upload.py |
> > -while read line; do echo -n "."; done
> > -echo "ok."
> > +echo -n "POSTing 1000 x 1k, m=100"
> > +${H2LOAD} -p h2c -c 1 -t 1 -m 100 -n 1000 -d $GEN/data-1k ${URL_PREFIX}
> |
> > +while read line; do echo -n "."; done; echo "ok."
> >
> > -# FIXME: this does not work reliably for n >= 100
> > -#
> > -#echo -n "POSTing 10 x 100k to server, which are NOTREAD by app"
> > -#${H2LOAD} -p h2c -c 1 -t 1 -m 1 -n 10 -d $GEN/data-100k ${URL_PREFIX} |
> > -#while read line; do echo -n "."; done
> > -#echo "ok."
> > \ No newline at end of file
> > +echo -n "POSTing 100 x 100k, m=5, READ"
> > +${H2LOAD} -p h2c -c 1 -t 1 -m 5 -n 100 -d $GEN/data-100k
> > ${URL_PREFIX}/upload.py |
> > +while read line; do echo -n "."; done; echo "ok."
> > +
> > +echo -n "POSTing 100 x 100k, m=50, NOT READ explicitly"
> > +${H2LOAD} -p h2c -c 1 -t 1 -m 50 -n 100 -d $GEN/data-100k ${URL_PREFIX}
> |
> > +while read line; do echo -n "."; done; echo "ok."
> 
> 
> The Subversion test I try is more like sending 2000 requests of 100 bytes over
> a single connection.

And most likely (in this test) Subversion still waits for every request to be 
handled before sending out the next request.

Perhaps it sends two requests at once, but it is unlikely to have more requests 
in flight during this test than this.

(In update/checkout scenarios it can have many more... but not in the failing 
test)

Bert



RE: No H2 Window updates! (Probably a Serf issue!)

2015-12-04 Thread Bert Huijben


> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: vrijdag 4 december 2015 21:45
> To: dev@httpd.apache.org
> Subject: RE: No H2 Window updates!
> 
> 
> 
> > -Original Message-
> > From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> > Sent: vrijdag 4 december 2015 21:36
> > To: dev@httpd.apache.org
> > Subject: Re: No H2 Window updates!
> >
> > The code increases the window by the difference between max and
> current
> > so that the client has exactly the max value. nghttp2 accepts this on
the
> client
> > side. It rejects any larger value as you described.
> >
> > So we seem to have a difference in calculation between nghttp2 and serf.
> > which values do you see? some data would be helpful here.
> 
> The values are completely in the log file at the bottom of this mail.
> 
> I open a connection and the server announces a default window of 65535.
> 
> > > I: DBG: Allocated 131 bytes for window on stream 0x1 (left: 65404,
> 65404)
> I send a request with a DATA frame with a payload of 131 bytes, which
> updates the stream and windows size with -131.
> 
> So both the stream and connection windows are no longer default, but
> 65404.
> Logged for easy reading
> 
> 
> Then I receive a WINDOW_UPDATE frame
> > > I: DBG: Connection window update of 2147483392 to -2147418500
> Which tries to add 2147483392 to the existing window.
> 
> Which doesn't fit, because the total outgoing window has to fit in
2^31-1...
> See the RFC.

Ok, I think I found a -and provably THE- problem in serf...

There is a bug in decoding the window update frames... 


Reviewing and testing some bits there now.

Bert




RE: No H2 Window updates! (Probably a Serf issue!)

2015-12-04 Thread Bert Huijben


> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: vrijdag 4 december 2015 22:27
> To: dev@httpd.apache.org
> Subject: RE: No H2 Window updates! (Probably a Serf issue!)
> 
> 
> 
> > -Original Message-
> > From: Bert Huijben [mailto:b...@qqmail.nl]
> > Sent: vrijdag 4 december 2015 21:45
> > To: dev@httpd.apache.org
> > Subject: RE: No H2 Window updates!
> >
> >
> >
> > > -Original Message-
> > > From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> > > Sent: vrijdag 4 december 2015 21:36
> > > To: dev@httpd.apache.org
> > > Subject: Re: No H2 Window updates!
> > >
> > > The code increases the window by the difference between max and
> > current
> > > so that the client has exactly the max value. nghttp2 accepts this on
> the
> > client
> > > side. It rejects any larger value as you described.
> > >
> > > So we seem to have a difference in calculation between nghttp2 and
serf.
> > > which values do you see? some data would be helpful here.
> >
> > The values are completely in the log file at the bottom of this mail.
> >
> > I open a connection and the server announces a default window of 65535.
> >
> > > > I: DBG: Allocated 131 bytes for window on stream 0x1 (left: 65404,
> > 65404)
> > I send a request with a DATA frame with a payload of 131 bytes, which
> > updates the stream and windows size with -131.
> >
> > So both the stream and connection windows are no longer default, but
> > 65404.
> > Logged for easy reading
> >
> >
> > Then I receive a WINDOW_UPDATE frame
> > > > I: DBG: Connection window update of 2147483392 to -2147418500
> > Which tries to add 2147483392 to the existing window.
> >
> > Which doesn't fit, because the total outgoing window has to fit in
> 2^31-1...
> > See the RFC.
> 
> Ok, I think I found a -and provably THE- problem in serf...
> 
> There is a bug in decoding the window update frames...
> 
> 
> Reviewing and testing some bits there now.

From r1718038:

[[
--- serf/trunk/protocols/http2_protocol.c (original)
+++ serf/trunk/protocols/http2_protocol.c Fri Dec  4 21:35:11 2015
@@ -619,7 +619,7 @@ http2_handle_stream_window_update(void *
 window_update = (const void *)data;
 
 value = (window_update->v3 << 24) | (window_update->v2 << 16)
-| (window_update->v2 << 8) | window_update->v0;
+| (window_update->v1 << 8) | window_update->v0;
 
 value &= HTTP2_WINDOW_MAX_ALLOWED; /* The highest bit is reserved */
]]

This bug (also copied in another location) also explains the unusually small
window updates I received earlier... Most likely I missed a few interesting
bits in that second byte of the value, while the small window in httpd kept
the highest two bytes at 0.
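
For reference, the decode is supposed to take the four payload bytes as a
big-endian 32-bit value and mask off the reserved high bit; a rough sketch
(my own naming, not the actual serf structs):

[[
/* b[0] is the most significant byte of the 4-byte increment field */
static apr_uint32_t
decode_window_increment(const unsigned char b[4])
{
    apr_uint32_t value = ((apr_uint32_t)b[0] << 24)
                       | ((apr_uint32_t)b[1] << 16)
                       | ((apr_uint32_t)b[2] << 8)
                       |  (apr_uint32_t)b[3];

    return value & 0x7FFFFFFF;  /* the highest bit is reserved */
}
]]

The broken version shifted v2 twice and never looked at v1, so bits 8-15 of
every increment were lost.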


The overflow of the max value is resolved by this patch... and the current
2.4.x code goes through the test now without any windowing delays. 


You might want to change the initial window back to something like 1MB or
16MB now, to at least test cases where a further update is necessary.


I certainly don't get connection window updates after the first one now in
my usual testing. I don't test uploading over 2 GB in a usual Subversion
test run.


Bert 



RE: No H2 Window updates!

2015-12-04 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: vrijdag 4 december 2015 21:36
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> The code increases the window by the difference between max and current
> so that the client has exactly the max value. nghttp2 accepts this on the
client
> side. It rejects any larger value as you described.
> 
> So we seem to have a difference in calculation between nghttp2 and serf.
> which values do you see? some data would be helpful here.

The values are completely in the log file at the bottom of this mail.

I open a connection and the server announces a default window of 65535.

> > I: DBG: Allocated 131 bytes for window on stream 0x1 (left: 65404,
65404)
I send a request with a DATA frame with a payload of 131 bytes, which
reduces both the stream and connection window sizes by 131.

So both the stream and connection windows are no longer default, but 65404.
Logged for easy reading


Then I receive a WINDOW_UPDATE frame
> > I: DBG: Connection window update of 2147483392 to -2147418500
Which tries to add 2147483392 to the existing window.

Which doesn't fit, because the total outgoing window has to fit in 2^31-1...
See the RFC.
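
Roughly the check every receiver of a WINDOW_UPDATE has to make (a sketch
with made-up names, not serf's actual code):

[[
#define H2_WINDOW_MAX_ALLOWED 0x7FFFFFFF  /* 2^31 - 1 */

/* current: the current send window; increment: value from the WINDOW_UPDATE */
static int
window_update_allowed(apr_int64_t current, apr_int64_t increment)
{
    /* If the sum would exceed 2^31-1 the receiver must terminate the
     * stream (RST_STREAM) or the connection (GOAWAY) with
     * FLOW_CONTROL_ERROR. */
    return (current + increment) <= H2_WINDOW_MAX_ALLOWED;
}
]]

With the numbers above: 65404 + 2147483392 is larger than 2^31-1, so the only
compliant reaction is to drop the connection.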


That is all that happens.
(The windowing in the other direction is completely uninteresting here... as
it happens completely independently)

    Bert



> 
> > Am 04.12.2015 um 18:42 schrieb Bert Huijben <b...@qqmail.nl>:
> >
> >
> >
> >> -Original Message-
> >> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> >> Sent: vrijdag 4 december 2015 16:23
> >> To: dev@httpd.apache.org
> >> Subject: Re: No H2 Window updates!
> >>
> >> If you find the time, the lastest v1.0.10 mod_http2 in 2.4.x sets the
> >> connection window to max which addresses for me the window
> starvation
> >> issues I was able to reproduce (and put into my test suite). I hope
this
> > works
> >> for you as well, otherwise I'd need more detailed data on how to
> reproduce
> >> the hanger.
> >
> > The latest version gives me immediately a connection window update of
> the
> > maximum allowed value.
> >
> > But given that the initial connection window was already set via the
> > settings window, adding this value is not allowed.
> >
> > The code has his comment copied from the RFC:
> > /* A sender MUST NOT allow a flow-control window to exceed 2^31-1
> > octets.  If a sender receives a WINDOW_UPDATE that causes a
flow-
> > control window to exceed this maximum, it MUST terminate either
the
> > stream or the connection, as appropriate.  For streams, the
sender
> > sends a RST_STREAM with an error code of FLOW_CONTROL_ERROR;
> for
> > the
> > connection, a GOAWAY frame with an error code of
> FLOW_CONTROL_ERROR
> > is sent.*/
> >
> > Serf does exactly this... it terminates the connection.
> >
> > So it gets nowhere near the original point of failure in the test.
> > It fails around the first stream created the test, while the original
> > failure is after thousands of requests.
> >
> >Bert
> >
> > --
> > With the same logging enabled as on that older url, this is the whole
test
> > output now:
> >
> >
> > START: svnmover_tests.py
> > I: CMD: svnadmin.exe create svn-test-work\local_tmp\repos
> > --compatible-version=1.10 --fs-type=fsfs
> > I: 
> > I: CMD: svn.exe import -m "Log message for revision 1."
> > svn-test-work\local_tmp\greekfiles
> > https://localhost:7829/svn-test-work/local_tmp/repos --config-dir
> > R:\subversion\tests\cmdline\svn-test-work\local_tmp\config --password
> > rayjandom --no-auth-cache --username jrandom
> > I: CMD: R:\subversion\svn/svn.exe import -m "Log message for revision
1."
> > svn-test-work\local_tmp\greekfiles
> > https://localhost:7829/svn-test-work/local_tmp/repos --config-dir
> > R:\subversion\tests\cmdline\svn-test-work\local_tmp\config --password
> > rayjandom --no-auth-cache --username jrandom exited with 1
> > I: 
> > I: DBG: Allocated 131 bytes for window on stream 0x1 (left: 65404,
65404)
> > I: DBG: Connection window update of 2147483392 to -2147418500
> > I: ..\..\..\subversion\svn\import-cmd.c:129,
> > I: ..\..\..\subversion\libsvn_client\import.c:868,
> > I: ..\..\..\subversion\libsvn_client\ra.c:509,
> > I: ..\..\..\subversion\libsvn_client\ra.c:488,
> > I: ..\..\..\subversion\libsvn_ra\ra_loader.c:404:
> > (apr_err=SVN_ERR_RA_CANNOT_CREATE_SESSION)
> > I: svn: E170013: 

RE: reverse proxy wishlist

2015-12-03 Thread Bert Huijben


> -Original Message-
> From: Jim Jagielski [mailto:j...@jagunet.com]
> Sent: donderdag 3 december 2015 22:20
> To: dev@httpd.apache.org
> Subject: Re: reverse proxy wishlist
> 
> 
> > On Dec 3, 2015, at 11:09 AM, William A Rowe Jr 
> wrote:
> >
> > On Thu, Dec 3, 2015 at 8:59 AM, Jim Jagielski  wrote:
> >
> > What would *you* like to see as new features or enhancements
> > w/ mod_proxy, esp reverse proxy.
> >
> > HTTP/2 support, of course :)  It will be interesting to be able to
leverage
> > and compare a mod_proxy_serf vs a mod_proxy_http2 (nghttp2-based)
> > engine, as mentioned in another thread - multiple implementations
> > are always good for ferreting out protocol anomalies.
> >
> 
> It's kind of funny... the "need" for http/2 between proxy and
> origin seems pretty non-existent. There is a blog post by Cloudflare
> somewhere about how they don't see servers talking http/2 to the
> backend as anywhere near a driver, since all the things that make
> it "important" (koff koff!) between browser and server don't really
> apply.

After having implemented fcgi (server support) and http/2 (server and
client) in Apache Serf I was thinking that it would be nice if H2 would
replace the existing server side protocols.

http/1.1 requires chunking or explicit content-length, while http/2 and fcgi
don't have that requirement.



The reason I implemented the server side of those protocols in Apache Serf
was exactly to allow writing such origins with Serf...

Adding such a backend server process is one of the (many) possible
directions Subversion might take in the future.

Bert



RE: No H2 Window updates!

2015-12-03 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: woensdag 2 december 2015 15:45
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> Please find with r1717641 version 1.0.9-DEV of mod_http2 in trunk and
> branches/2.4.x
> that fixes the issue of streams with smallish inputs and lost
> WINDOW_UPDATEs.
> 
> Please report back if this works for you. I have another time slot on
Friday
> where I can follow up on it. Thanks.

I haven't traced for other problems yet, but the current
^/httpd/httpd/branches/2.4.x branch does *NOT* fix the windowing issue for me.

My Subversion testrun still gets stuck on that same 'svnmover' test, unless
I set H2WindowSize to a higher value.

Bert




RE: No H2 Window updates!

2015-11-29 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: zondag 29 november 2015 09:04
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> Ok, thanks. I think I have an idea of what's happening:
> - on short request bodies, window updates get omitted and gives a
shrinking
> connection window

Yes, this is the problem I'm seeing. 
I would like to see this fixed for 2.4.18... and I wouldn't be surprised if
this also fixes Jan Erhardt's problem.

Web frameworks are likely to issue many small (<16K) posts to a server,
which all fit in a single frame. In my testcase I needed > 500 requests
though.

> - the window size of the connection itself should be at max Value right
from
> the start

That is one solution.  (Google sends one huge window update on the
connection directly at the start, after the settings frame)
But this really needs a proper design behind it...

Optimal windowing is a hard topic. The window needs to be large enough not
to slow down the client... but not so big that it hogs the server (and with it
the connection), etc. etc.

The problem with a huge connection window up front is that it may be used
for a lot of small requests or one large. That large one can easily be
throttled at the stream level, while those small ones can't.
> 
> I won't be able to do anything about it until later this week, though.


With an increased default window on the httpd side the Subversion test now
completes successfully over H2 on 2.4.17 and 2.4.18.
It just reports an XPASS in a specific case where we assume that httpd
treats a request as invalid because one header is too long under the http/1.1
rules.

If this change is expected on the httpd side, the assumption should be fixed
in the Subversion testsuite.

Thanks,

Bert



RE: No H2 Window updates!

2015-11-28 Thread Bert Huijben


> -Original Message-
> From: Jan Ehrhardt [mailto:php...@ehrhardt.nl]
> Sent: vrijdag 27 november 2015 22:35
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> Bert Huijben in gmane.comp.apache.devel (Fri, 27 Nov 2015 20:04:14 +0100):
> >Well. it is not a regression, so can it be a show stopper? ?.
> >But I would like to see this fixed.
> 
> Curious: are you still testing this on Windows? If so, I guess you
> compiled your own httpd. I tried to do the same a couple of days ago, but
> ran into problems with Drupal7: the admin menu sometimes showed and
> sometimes did not show at all. I could not lay my finger on what went
> wrong.
> 
> Because I did not have the problems with Apachelounge's 2.4.18-dev at
> https://www.apachelounge.com/viewtopic.php?t=6842 I checked out an
> earlier
> revision of the alpha branch:
> | svn co -r 1715218
> http://svn.apache.org/repos/asf/httpd/httpd/branches/2.4-http2-alpha
> 
> That revision compiled into a httpd with no problems. I am waiting now for
> Stefan Eissing to finish his work on mod_http2.
> 
> BTW: did you switch to nghttp2 1.5.0 already?

Hi Jan,

No I didn't switch to nghttp2 1.5.0 yet. Thanks for reminding me to check if
there is a new version :)

The code I wrote for Serf doesn't use nghttp2... After reading the
specs I didn't think building on nghttp2 would win me a lot of time, and I
figured that having multiple separate implementations of the same
specification/RFC in the open source world would be a good thing. (Almost
every recent H2 project I see builds on top of nghttp2)


It is entirely possible that you hit the same problem as I did. (I'm
actually very surprised that I didn't hit this problem much earlier on.
There is just one test in the Subversion testsuite that sends more than 64
Kbyte of request bodies over a single connection... I'll fix that)


But back to that problem... There is an easy workaround, which I used on the
other side until two days ago: just making the H2 default window big enough
that you never get near it.

Just configure 'H2WindowSize' to be something like 1 GB and you probably
never have to think about window updates. (The max allowed value is 2 GB -1)
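
A minimal config sketch of that workaround (the value is in bytes; pick
whatever large number you like below the 2^31-1 limit):

[[
# mod_http2 loaded and enabled as usual; only the window size is changed here
H2WindowSize 1073741824
]]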


This windowing allows the server to throttle incoming DATA from the other
side, so servers like httpd really want to tune this dynamically. Note that
headers and new requests are not counted towards this limit, so on the server
side it is really just request bodies... (Technically data on already closed
streams should be counted too... but that is an implementation detail).


I remember reading that Stefan Eissing is away for some time ('Re: NOTICE:
Intent to T&R 2.4.18' thread), so perhaps I should spend some time looking
at this myself.


Disabling that line that explicitly disables window updates from the nghttp2
library could be an easy fix... but it might require some compensating
actions, like lowering the number of supported concurrent streams if that
comment is still up to date. Allowing up to 100 concurrent streams per
connection could be a bit high, although this really depends on what these
connections are used for. I don't know how to test against that 'gets
flooded' problem though, as that isn't measurable by itself.



And yes I build my own binaries for Subversion and all its dependencies...
All my scripting that I use for that is in
https://sharpsvn.open.collab.net/svn/sharpsvn/trunk/imports (username
'guest', no password). The default build doesn't build httpd, but if you use
a Subversion dev build (copy dev-default.build to a directory one level
above imports) it builds httpd. My scripts should work for VS2005 up to 2015
and require nant and some python and perl versions. Everything else is
built from the scripts.

These same scripts drive the Subversion and Serf win32 buildbots that I
maintain on ci.apache.org... and they also deliver the SharpSvn and 'Slik
Subversion Client' binaries.
(I currently explicitly don't deliver httpd itself or anything that depends
on that though)


Thanks [ / Groeten ;-)],

Bert

> --
> Jan




RE: No H2 Window updates!

2015-11-28 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: zaterdag 28 november 2015 13:01
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> I am not really here, but...
> 
> the window updates are sent out via update_window(), line 1001,
> h2_session.c. If you do not see any window updates with a client, it may
> be that the server app you called does not read its input. I have
> several test cases with uploads and they work with nghttp and curl.

I do see window updates against other servers. And I'm pretty sure
Subversion reads its input... things are committed correctly and the
propfinds return their result.

In the testcase at hand I'm sending very small requests (+- 100 bytes max),
so the per stream window updates are not really used... They get nowhere
near the 64K limit and the first data frame has EOS set.

Perhaps this also skips sending window updates on the connection level.

I do remember testing larger uploads as a single stream... So perhaps it is
a combination that makes them not appear for me.

Bert



RE: No H2 Window updates!

2015-11-28 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: zaterdag 28 november 2015 13:01
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> I am not really here, but...
> 
> the window updates are sent out via update_window(), line 1001,
> h2_session.c. If you do not see any window updates with a client, it may
> be that the server app you called does not read its input. I have
> several test cases with uploads and they work with nghttp and curl.
> 
> Basics of mod_http2 flow control:
> - the auto udates of nghttp2 are disabled because nghttp2 would
> continously update the window for the client, letting the client sent
> more and more - until we run out of memory.
> - instead, input reads from workers against the h2_mplx io are recorded
> and lead to regular window update being sent out. So clients can only
> send more when the data is actually consumed by someone.

From h2_session.c around line 1640:
===
switch (status) {
    case APR_SUCCESS:   /* successful read, reset our idle timers */
        have_read = 1;
        wait_micros = 0;
        break;
    case APR_EAGAIN:    /* non-blocking read, nothing there */
        break;
    default:
        if (APR_STATUS_IS_ETIMEDOUT(status)
            || APR_STATUS_IS_ECONNABORTED(status)
            || APR_STATUS_IS_ECONNRESET(status)
            || APR_STATUS_IS_EOF(status)
            || APR_STATUS_IS_EBADF(status)) {
            /* common status for a client that has left */
            ap_log_cerror(APLOG_MARK, APLOG_DEBUG, status, session->c,
                          "h2_session(%ld): terminating",
                          session->id);
            /* Stolen from mod_reqtimeout to speed up lingering when
             * a read timeout happened.
             */
            apr_table_setn(session->c->notes,
                           "short-lingering-close", "1");
        }
        else {
            /* uncommon status, log on INFO so that we see this */
            ap_log_cerror(APLOG_MARK, APLOG_INFO, status, session->c,
                          APLOGNO(02950)
                          "h2_session(%ld): error reading, terminating",
                          session->id);
        }
        h2_session_abort(session, status, 0);
        goto end_process;
}
===

I'm not familiar enough with the differences in bucket handling between serf
and httpd to really make the call, but as the serf buckets were designed by
the same group, I'm guessing that there might be successful reads with status
values other than just APR_SUCCESS.


In serf I would expect to see an immediate APR_EOF when there is only a
single frame to be read (or perhaps a few intermediate APR_EAGAINs and then
an EOF); either status may still mean that 0 or more bytes were read
successfully before the status code was returned.
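
As a sketch of the pattern I mean, against the usual serf bucket API
(handle_data() is a hypothetical consumer, not a real serf function):

[[
#include "serf.h"

/* hypothetical consumer of the bytes that were read */
static void handle_data(const char *data, apr_size_t len);

static apr_status_t drain_bucket(serf_bucket_t *bucket)
{
    const char *data;
    apr_size_t len;
    apr_status_t status;

    do {
        status = serf_bucket_read(bucket, SERF_READ_ALL_AVAIL, &data, &len);
        if (SERF_BUCKET_READ_ERROR(status))
            return status;

        /* len may be > 0 even when status is APR_EOF or APR_EAGAIN:
         * those bytes were read successfully and must be handled
         * before acting on the status code. */
        if (len > 0)
            handle_data(data, len);
    } while (status == APR_SUCCESS);

    return status;  /* APR_EOF when the bucket is exhausted, or APR_EAGAIN */
}
]]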

Bert




RE: No H2 Window updates!

2015-11-28 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: zaterdag 28 november 2015 13:01
> To: dev@httpd.apache.org
> Subject: Re: No H2 Window updates!
> 
> I am not really here, but...
> 
> the window updates are sent out via update_window(), line 1001,
> h2_session.c. If you do not see any window updates with a client, it may
> be that the server app you called does not read its input. I have
> several test cases with uploads and they work with nghttp and curl.

In my case it doesn't...

---
if (h2_stream_set_has_open_input(session->streams)) {
    /* Check that any pending window updates are sent. */
    status = h2_mplx_in_update_windows(session->mplx,
                                       update_window, session);
    if (APR_STATUS_IS_EAGAIN(status)) {
        status = APR_SUCCESS;
    }
    else if (status == APR_SUCCESS) {
        /* need to flush window updates onto the connection asap */
        h2_conn_io_flush(&session->io);
    }
}
---



Looks like that ' h2_stream_set_has_open_input()' always returns false for
me, with those tiny requests of only 1 data frame.

The streams are marked as closed the moment a data frame with EOS is
received... that is: before the frame is processed.

In this testcase all data frames have EOS set, so this value would only be
true for me if some other outstanding request did not receive its data yet.
(Which doesn't happen yet as Subversion still assumes http/1 like
processing)



Bert



RE: No H2 Window updates!

2015-11-28 Thread Bert Huijben
> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: zaterdag 28 november 2015 14:09
> To: stefan.eiss...@greenbytes.de; dev@httpd.apache.org
> Subject: RE: No H2 Window updates!
> 
> 
> 
> > -Original Message-
> > From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> > Sent: zaterdag 28 november 2015 13:01
> > To: dev@httpd.apache.org
> > Subject: Re: No H2 Window updates!
> >
> > I am not really here, but...
> >
> > the window updates are sent out via update_window(), line 1001,
> > h2_session.c. If you do not see any window updates with a client, it may
> > be that the server app you called does not read its input. I have
> > several test cases with uploads and they work with nghttp and curl.
> 
> In my case it doesn't...
> 
> ---
> if (h2_stream_set_has_open_input(session->streams)) {
> /* Check that any pending window updates are sent. */
> status = h2_mplx_in_update_windows(session->mplx,
> update_window, session);
> if (APR_STATUS_IS_EAGAIN(status)) {
> status = APR_SUCCESS;
> }
> else if (status == APR_SUCCESS) {
>             /* need to flush window updates onto the connection asap */
>             h2_conn_io_flush(&session->io);
> }
> }
> 
> 
> 
> Looks like that ' h2_stream_set_has_open_input()' always returns false for
> me, with those tiny requests of only 1 data frame.
> 
> The streams are marked as closed the moment a data frame with EOS is
> received... that is: before the frame is processed.
> 
> In this testcase all data frames have EOS set, so this value would only be
> true for me if some other outstanding request did not receive its data
yet.
> (Which doesn't happen yet as Subversion still assumes http/1 like
> processing)

If I just replace the

if (h2_stream_set_has_open_input(session->streams))

with if(TRUE) {

then I do receive a few window updates... Of a few bytes each though...
I would have expected a few huge returns (Kbyte+) every few requests that
send data.

So over the course of hundreds of requests my connection level window
shrinks to almost zero... at some points I receive a few updates of a few
hundred bytes... I even see one of a single byte.



Data on closed streams must be accounted towards the connection window, so
conditionalizing the whole processing on 'are there any readable streams' is
a clear bug. Skipping all data in the last frame is another one.
Perhaps the code can skip some of the processing in this case, but (as a
client) I would like to see my outgoing window at the connection updated
sooner.


In case of Subversion's real usage, I want to commit potentially hundreds of
MByte, so a connection level window of more than a few bytes would be very
welcome. With HTTP/1 we send out the data as fast as we can and the TCP
windowing handles this from the httpd side... Now the http/2 level windowing
should handle this.


Mod_dav and mod_dav_svn are usually fast readers; just limited by trivial
disk io or xml processing in the common operations, so I don't expect real
problems there.


Bert




RE: No H2 Window updates!

2015-11-28 Thread Bert Huijben
> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: zaterdag 28 november 2015 14:32
> To: stefan.eiss...@greenbytes.de; dev@httpd.apache.org
> Subject: RE: No H2 Window updates!


> In case of Subversion's real usage, I want to commit potentially hundreds
of
> MByte, so a connection level window of more than a few bytes would be
> very
> welcome With HTTP/1 we send out the data as fast as we can and the TCP
> windowing handles this from the httpd side... Now the http/2 level
> windowing
> should handle this.
> 
> 
> Mod_dav and mod_dav_svn are usually fast readers; just limited by trivial
> disk io or xml processing in the common operations, so I don't expect real
> problems there.

For my commits against the svn-master.apache.org repository my latency/ping
time is +- 142 ms.

With the current simple algorithm and a maximum window of 64
Kbyte/connection, I can send a theoretical maximum of 64 Kbyte/142 ms per
connection to that server... which would be about 450 Kbyte/s.
(=1/0.142*65536/1024)

That is < 1/30th of the bandwidth I have at my disposal (+- 150 Mbit).

But currently I don't even get anywhere near that as my window continues to
shrink because some data is missed in accounting even after my simple patch.



For httpd we have to think carefully which algorithm we want to implement
here. Preferably the algorithm should do better than TCP could, as we know
the specifics of the HTTP traffic better than plain TCP does for
HTTP/1.1. TCP does send a lot of window updates though... Almost every
packet does.


Solving the really hard problems, like this one, is the reason I didn't
think the nghttp2 library would really be the solution for serf. 
It is nice for such a library to provide a head start on the protocol level,
but there is no way a standard library can really solve this scheduling
problem in a generic way. (If it could, we would have used that solution for
TCP... and never have had to resort to designing HTTP/2 in the first place).

See https://en.wikipedia.org/wiki/TCP_window_scale_option for some
introduction on how TCP was extended to work above the limit that currently
applies to H2.


Dynamic window sizing/scaling will have to be implemented at some point, at
both the stream and the connection level... This will involve timing,
measuring, etc. etc. Things nghttp2 can't do for us right now.
And if I'm right this might take continuous optimizing for years to come.



When I connect to sites like google I immediately receive a large connection
level window, to allow me to post huge blobs without a delay. (Haven't
tested their stream level window behavior yet)
In serf I do something similar... and then apply a bit of throttling at the
stream level.


I would guess some clients have already implemented some of this, so we
might be able to learn from their implementations... Clients will see much
more incoming data than servers of course :).


Bert 




RE: No H2 Window updates!

2015-11-27 Thread Bert Huijben
Well… it is not a regression, so can it be a show stopper? ☺…
But I would like to see this fixed.

I have no idea how hard it would be to fix this though. It could be as simple 
as removing that config line (which was probably added somewhere early on if I 
look at that comment). But windowing correctly for optimal performance isn’t 
easy.

Luckily most of the windowing works in the other direction.

Bert



From: Jim Jagielski
Sent: vrijdag 27 november 2015 17:55
To: dev@httpd.apache.org
Subject: Re: No H2 Window updates!


Hmmm... this seems to me enough to warrant a hold on my T
until we dig into this deeper.




No H2 Window updates!

2015-11-27 Thread Bert Huijben
Hi,

 

I finally took the time to diagnose that segfault I had, and I think it
points to a serious bug in httpd.

 

To summarize this: I don't receive window updates.

 

In this specific test we send a very large number of small requests (bodies of
95 and 113 bytes), until we run out of the 65535 (or 65536) bytes of window
space I get from httpd at the connection level.

(Each stream doesn't get near its limit. I can try whether I can receive window
updates there, but currently I can't reproduce ever receiving a window
update)

 

 

Originally this caused a segfault in my code, but I fixed that one. But now
I'm just stuck waiting to receive a window update from httpd.

 

 

My last testing was against 2.4.x (to get the 2.4.18 goodness)

 

Bert



RE: No H2 Window updates!

2015-11-27 Thread Bert Huijben
> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: vrijdag 27 november 2015 13:56
> To: b...@qqmail.nl
> Subject:
> 
>     Hi,
> 
> I finally took the time to diagnose that segfault I had, and I think it
> points to a serious bug in httpd.
> 
> To summarize this: I don’t receive window updates.
> 
> In this specific test we set a very huge amount of small requests (bodies
of
> 95 and 113 bytes), until we get out of the 65535 (or 65536) bytes of
window
> space I get from httpd at the connection level.
> (Each stream doesn’t get near its limit. I can try if I can receive window
> updates there… but currently I can’t reproduce ever receiving a window
> update)
> 
> 
> Originally this caused a segfault in my code, but I fixed that one. But
now
> I’m just stuck waiting to receive a window update from httpd…
> 
> 
> My last testing was against 2.4.x (to get the 2.4.18 goodness)

And I think the combination of:

=== h2_session.c around line 707 ===
/* We need to handle window updates ourself, otherwise we
 * get flooded by nghttp2. */
nghttp2_option_set_no_auto_window_update(options, 1);


And not a single call to nghttp2_submit_window_update() to be found, which
explains the situation.

I haven't tried what happens when I disable this auto_window call... but
sending window updates is really required by the H2 specs.
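
For what it's worth, with the auto updates switched off I'd expect something
along these lines to be needed once a worker has actually consumed body data
(a sketch against the public nghttp2 API, not mod_http2's actual code):

[[
#include <nghttp2/nghttp2.h>

/* Call after 'consumed' body bytes of stream 'stream_id' were processed. */
static void announce_consumed(nghttp2_session *ngh2,
                              int32_t stream_id, int32_t consumed)
{
    /* re-open the stream window... */
    nghttp2_submit_window_update(ngh2, NGHTTP2_FLAG_NONE,
                                 stream_id, consumed);
    /* ...and the connection window (stream id 0) as well */
    nghttp2_submit_window_update(ngh2, NGHTTP2_FLAG_NONE,
                                 0, consumed);
    /* the WINDOW_UPDATE frames go out on the next nghttp2_session_send() */
}
]]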



And I totally understand that this wasn't high priority... I worked around
not sending updates in my implementation until yesterday :-)


Bert 



RE: H2 stream dependencies

2015-11-26 Thread Bert Huijben
> -Original Message-
> From: Bert Huijben [mailto:b...@qqmail.nl]
> Sent: woensdag 25 november 2015 22:45
> To: dev@httpd.apache.org
> Subject: RE: H2 stream dependencies


> * 2 of these are related to HTTP/1.1 status lines where we no longer have
> access to the reason field.
> * 2 others somehow run against an httpd limit when I only configure 16
> threads (while running 4 tests in parallel; each with potentially multiple
> connections)
> * and the last one is an interesting crash in the serf code.

The current status is two failures. That last crash is one I reported
earlier, and ... 

The other appears to be related to httpd having a higher limit on total
header size when using http/2 than when using http/1.1. 
Is that an intended behavior change?

The original limit was set a long time ago as a security measure?
(But perhaps a new default is used for http/2)


And another thing: for my testing it would be useful if httpd would somehow
start logging how it received the requests... Currently it still logs
HTTP/1.1 in the access logs even for h2 requests.

And when I enable more logging I get an error that Content-Length -1 doesn't
match the actual length on most of my requests... (I don't pass
Content-Length in all those cases)

Bert



RE: H2 stream dependencies

2015-11-26 Thread Bert Huijben


> -Original Message-
> From: Jan Ehrhardt [mailto:php...@ehrhardt.nl]
> Sent: donderdag 26 november 2015 19:20
> To: dev@httpd.apache.org
> Subject: Re: H2 stream dependencies
> 
> Bert Huijben in gmane.comp.apache.devel (Thu, 26 Nov 2015 18:36:00
> +0100):
> >And another thing: gor my testing it would be useful if httpd would
> somehow
> >start logging how it received the requests... Currently it still logs
> >HTTP/1.1 in the access logs even for h2 requests.
> 
> Apache 2.4.18-dev does log HTTP/2:
> 
> 127.0.0.1 - - [26/Nov/2015:12:03:26 +0100] "GET /index.php HTTP/2" 200
> 3596 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:42.0) Gecko/20100101
> Firefox/42.0"
> 
> Logformat:
> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\"
> \"%{User-Agent}i\"" combined

Nice...

Thanks,

I'll switch to testing with 2.4.x then.

Bert
> 
> Jan




RE: apr_token_* conclusions (was: Better casecmpstr[n]?)

2015-11-25 Thread Bert Huijben
The example was the other way around. Changing SS to ß is not a valid 
transform, but the other way is. There are also transforms on the combined AE 
characters, etc.

 

That Turkish ‘I’ problem is the only case I know of where the collation 
actually changes behavior within the usual western alphabet of ASCII characters.
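
Which is exactly why an ASCII-only compare, rather than the locale-aware C
library one, is what header handling should use; a minimal sketch (assuming
an ASCII execution character set, so not what an EBCDIC build would want):

[[
/* ASCII-only tolower: leaves everything outside 'A'..'Z' untouched,
 * so LC_CTYPE/LC_COLLATE (Turkish 'I', German sharp-S, ...) never matter */
static int ascii_tolower(int c)
{
    return (c >= 'A' && c <= 'Z') ? c + ('a' - 'A') : c;
}

static int ascii_strcasecmp(const char *a, const char *b)
{
    for (;; a++, b++) {
        int d = ascii_tolower((unsigned char)*a)
              - ascii_tolower((unsigned char)*b);
        if (d != 0 || *a == '\0')
            return d;
    }
}
]]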

 

Bert

 

 

From: Mikhail T. [mailto:mi+t...@aldan.algebra.com] 
Sent: woensdag 25 november 2015 23:19
To: dev@httpd.apache.org
Subject: Re: apr_token_* conclusions (was: Better casecmpstr[n]?)

 

On 25.11.2015 14:10, Mikhail T. wrote:

Two variables, LC_CTYPE and LC_COLLATE control this text processing behavior.  
The above is the correct lower case transliteration for Turkish.  In German, 
the upper case correspondence of sharp-S ß is 'SS', but multi-char translation 
is not provided by the simple tolower/toupper functions.

So, the concern is, some hypothetical header, such as X-ASSIGN-TO may, after 
going through the locale-aware strtolower() unexpectedly become x-aßign-to?

I just tested the above on both FreeBSD and Linux, and the results are 
encouraging:

% echo STRASSE | env LANG=de_DE.ISO8859 tr '[[:upper:]]' '[[:lower:]]'
strasse

Thus, I contend, using C-library will not cause invalid results, and the only 
reason to have Apache's own implementation is performance, but not correctness.

-mi



RE: apr_token_* conclusions (was: Better casecmpstr[n]?)

2015-11-25 Thread Bert Huijben
See http://www.siao2.com/2004/12/03/274288.aspx

And http://www.siao2.com/2013/04/04/10407543.aspx

For some background and related bugs in several products.

 

I hope this blog will stay alive. (The author passed away recently)

 

Bert

 

From: Bert Huijben [mailto:b...@qqmail.nl] 
Sent: donderdag 26 november 2015 00:22
To: dev@httpd.apache.org
Subject: RE: apr_token_* conclusions (was: Better casecmpstr[n]?)

 

The example was the other way around. Changing SS to ß is not a valid 
transform, but the other way is. There are also transforms on the combined AE 
characters, etc.

 

That Turkish ‘I’ problem is the only case I know of where the collation 
actually changes behavior within the usual western alphabet of ASCII characters.

 

Bert

 

 

From: Mikhail T. [mailto:mi+t...@aldan.algebra.com] 
Sent: woensdag 25 november 2015 23:19
To: dev@httpd.apache.org <mailto:dev@httpd.apache.org> 
Subject: Re: apr_token_* conclusions (was: Better casecmpstr[n]?)

 

On 25.11.2015 14:10, Mikhail T. wrote:

Two variables, LC_CTYPE and LC_COLLATE control this text processing behavior.  
The above is the correct lower case transliteration for Turkish.  In German, 
the upper case correspondence of sharp-S ß is 'SS', but multi-char translation 
is not provided by the simple tolower/toupper functions.

So, the concern is, some hypothetical header, such as X-ASSIGN-TO may, after 
going through the locale-aware strtolower() unexpectedly become x-aßign-to?

I just tested the above on both FreeBSD and Linux, and the results are 
encouraging:

% echo STRASSE | env LANG=de_DE.ISO8859 tr '[[:upper:]]' '[[:lower:]]'
strasse

Thus, I contend, using C-library will not cause invalid results, and the only 
reason to have Apache's own implementation is performance, but not correctness.

-mi



RE: apr_token_* conclusions (was: Better casecmpstr[n]?)

2015-11-25 Thread Bert Huijben
We have a set of similar comparison functions in Subversion. I'm pretty sure we 
already had these back when we still had EBCDIC support on trunk.

(We removed that support years ago, but the code should still live on a branch)

 

Bert

 

From: William A Rowe Jr [mailto:wr...@rowe-clan.net] 
Sent: woensdag 25 november 2015 22:55
To: httpd 
Subject: Re: apr_token_* conclusions (was: Better casecmpstr[n]?)

 

On Wed, Nov 25, 2015 at 3:52 PM, Christophe JAILLET 
 > wrote:

Hi,

just in case, GNOME has a set of g_ascii_... functions
(see 
https://developer.gnome.org/glib/2.28/glib-String-Utility-Functions.html#g-ascii-strcasecmp)

 

Interesting, does anyone know offhand whether these perform the expected

or the stated behavior under EBCDIC environments? 

 



RE: H2 stream dependencies

2015-11-25 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: woensdag 25 november 2015 10:05
> To: dev@httpd.apache.org
> Subject: Re: H2 stream dependencies
> 
> The execution order of requests is not defined by the protocol and up to
the
> server implementation. As you noticed, one major factor is the mpm active
in
> httpd, influencing how, and if, requests are handled in parallel.
> 
> Even setting dependencies and priorities on streams will not make this
fully
> deterministic, as I tried to explain.
> 
> The case of a prefork mpm without further configuration is an extreme
case.
> Prefork, intended to allow a single thread processing model, has by
default
> 1(!) http/2 worker. That means that requests are worked on one at a time.
A
> blocking request may therefore block all later requests. This is not how
> HTTP/2 is supposed to work, but  it is how prefork is.
> 
> You can configure more H2Workers and have multiple threads even in
> prefork. But you need to be sure that the application you run can live
with
> multi-threading.
> 
> Without knowing your client that well, it seems to assume that requests
are
> processed one after the other. And rely on that. This assumption no longer
> holds in HTTP/2. If you send 2 requests, responses may arrive in any order
or
> interleaved, not matter what you specify for priority. Relying on the
exposed
> behaviour of a certain implementation under certain configuration is
> probably the least that you'd want.

Ok... I applied a stopgap solution in Subversion until we switch to a
smarter request later on in the 1.10 devcycle. With that my number of failed
tests over h2 dropped to just 5. (Of the several thousand Subversion test
scenarios).
* 2 of these are related to HTTP/1.1 status lines where we no longer have
access to the reason field.
* 2 others somehow run against an httpd limit when I only configure 16
threads (while running 4 tests in parallel; each with potentially multiple
connections)
* and the last one is an interesting crash in the serf code.

With another one of my Subversion hats on I'm wondering what we as a module
should do for optimal performance here. We can deliver data quite fast (as
it is essentially just local file-io) and the threading overhead could be
quite large. Are there any guidelines for this at this time?

Perhaps some of this should be applied to mod_dav directly instead of
mod_dav_svn.

Bert




RE: H2 stream dependencies

2015-11-24 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: vrijdag 20 november 2015 10:26
> To: dev@httpd.apache.org
> Subject: Re: H2 stream dependencies
> 
> Bert,
> 
> interesting and nice to see the progress. You probably could use priorities 
> for
> ordering, especially when using the stream dependencies. I am not certain,
> however, if this will give you totally deterministic behavior. If stream B
> depends on A, usually A will be sent out before B. However, should stream A
> become suspended - e.g. unable to progress, either because there is no data
> available which some other thread needs to produce, or because the flow
> control window has not been updated in time - the server will start sending
> data for B.
> 
> The state of the implementation is:
> 2.4.17: fully implemented priority handling on sending stream data, as
> implemented by the nghttp2 library

I'm seeing different behavior when running against my FreeBSD httpd 2.4.17 
(prefork mpm) and Windows (winnt mpm) builds.

Everything appears to work ok when sending a dependency within the headers 
frame to my FreeBSD box, while on my Windows build I get responses out of order.

Is it possible that different requests over the same connection are handled by 
different threads in this case?

Is there any logging that may help me diagnose this further?
(See log below)

The propfind is handled by mod_dav, while the report is handled by mod_dav_svn.

Bert

--
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_stream.c(58): [client 
::1:15125] h2_stream(12-11): created
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_session.c(71): [client 
::1:15125] h2_session: stream(12-11): opened
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_workers.c(303): 
h2_workers: register mplx(12)
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_stream.c(167): [client 
::1:15125] h2_mplx(12-11): start stream, task PROPFIND 
/svn-test-work/local_tmp/repos/!svn/rev/1 (localhost:7829)
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_stream.c(180): [client 
::1:15125] h2_stream(12-11): closing input
[:09.091881 2015] [http2:debug] [pid 1956:tid 572] h2_workers.c(166): 
h2_worker(14): start task(12-11)
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_session.c(420): [client 
::1:15125] h2_stream(12-11): input closed
[:09.091881 2015] [http2:debug] [pid 1956:tid 572] h2_task_input.c(84): [client 
::1:15125] h2_task_input(12-11): request is: 
[:09.091881 2015] [http2:debug] [pid 1956:tid 572] h2_h2.c(215): [client 
::1:15125] adding h1_to_h2_resp output filter
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_stream.c(58): [client 
::1:15125] h2_stream(12-13): created
[:09.091881 2015] [ssl:debug] [pid 1956:tid 572] ssl_engine_kernel.c(238): 
[client ::1:15125] AH02034: Subsequent (No.2) HTTPS request received for child 
119652660 (server localhost:443)
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_session.c(71): [client 
::1:15125] h2_session: stream(12-13): opened
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_workers.c(303): 
h2_workers: register mplx(12)
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_stream.c(167): [client 
::1:15125] h2_mplx(12-13): start stream, task REPORT 
/svn-test-work/local_tmp/repos/!svn/rev/1 (localhost:7829)
[:09.091881 2015] [http2:debug] [pid 1956:tid 340] h2_workers.c(166): 
h2_worker(4): start task(12-13)
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_stream.c(180): [client 
::1:15125] h2_stream(12-13): closing input
[:09.091881 2015] [http2:debug] [pid 1956:tid 340] h2_task_input.c(84): [client 
::1:15125] h2_task_input(12-13): request is: 
[:09.091881 2015] [http2:debug] [pid 1956:tid 908] h2_session.c(420): [client 
::1:15125] h2_stream(12-13): input closed
[:09.091881 2015] [http2:debug] [pid 1956:tid 340] h2_h2.c(215): [client 
::1:15125] adding h1_to_h2_resp output filter
[:09.091881 2015] [ssl:debug] [pid 1956:tid 340] ssl_engine_kernel.c(238): 
[client ::1:15125] AH02034: Subsequent (No.2) HTTPS request received for child 
56701100 (server localhost:443)
[:09.091881 2015] [http2:debug] [pid 1956:tid 340] h2_task_input.c(107): 
[client ::1:15125] h2_task_input(12-13): read, block=0, mode=1, readbytes=0
[:09.091881 2015] [http2:debug] [pid 1956:tid 340] h2_task_input.c(107): 
[client ::1:15125] h2_task_input(12-13): read, block=0, mode=0, readbytes=153
[:09.091881 2015] [http2:debug] [pid 1956:tid 340] h2_task_input.c(107): 
[client ::1:15125] h2_task_input(12-13): read, block=0, mode=1, readbytes=0
[:09.091881 2015] [http2:debug] [pid 1956:tid 340] h2_task_input.c(107): 
[client ::1:15125] h2_task_input(12-13): read, block=0, mode=1, readbytes=0
[:09.091881 2015] [http2:debug] [pid 1956:tid 340] h2_task_input.c(107): 
[client ::1:15125] h2_task_input(12-13): read, block=0, mode=1, readbytes=0
[:09.092880 2015] [http2:debug] [pid 1956:tid 572] h2_task_input.c(107): 
[client ::1:15125] h2_task_input(12-11): read, block=0, 

RE: svn commit: r1715363 - in /httpd/httpd/trunk/modules/http2: h2_request.c h2_response.h h2_session.c h2_stream.c h2_stream.h h2_util.c h2_util.h

2015-11-20 Thread Bert Huijben


> -Original Message-
> From: Jim Jagielski [mailto:j...@jagunet.com]
> Sent: vrijdag 20 november 2015 17:04
> To: dev@httpd.apache.org
> Subject: Re: svn commit: r1715363 - in /httpd/httpd/trunk/modules/http2:
> h2_request.c h2_response.h h2_session.c h2_stream.c h2_stream.h
> h2_util.c h2_util.h
> 
> Ugg... I *just* noticed:
> 
>   However, header field names MUST be converted to lowercase
>   prior to their encoding in HTTP/2. A request or response
>   containing uppercase header field names MUST be treated as
>   malformed

Nghttp2 handles this just fine.

My early tests with serf showed that some servers didn't enforce the
validation rule, while others (nghttp2 based) did.

Bert




RE: H2 stream dependencies

2015-11-20 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: vrijdag 20 november 2015 10:26
> To: dev@httpd.apache.org
> Subject: Re: H2 stream dependencies
> 
> Bert,
> 
> interesting and nice to see the progress. You probably could use priorities 
> for
> ordering, especially when using the stream dependencies. I am not certain,
> however, if this will give you totally deterministic behavior. If stream B
> depends on A, usually A will be sent out before B. However, should stream A
> become suspended - e.g. unable to progress, either because there is no data
> available which some other thread needs to produce, or because the flow
> control window has not been updated in time - the server will start sending
> data for B.
> 
> The state of the implementation is:
> 2.4.17: fully implemented priority handling on sending stream data, as
> implemented by the nghttp2 library
> 2.4.18: additionally, streams are scheduled for execution by priority. Makes a
> difference when the number of available worker is less than the stream
> queue length, e.g. when der server is under load or only few workers are
> available (in prefork, for certain).
> 
> Another way to influence the ordering is for the client to play with the
> stream window sizes. If you give stream A 2^31-1 and stream B 0 until you
> have A, then update B, it may also do what you want. If both streams
> produce response bodies...flow only applies to DATA...

I'm currently thinking about combining both approaches... sending the 
dependencies and (just to be sure) also keeping the window low on such a stream. 
In that case a very small cache should be enough to handle the delaying in 
case (somehow) the dependency doesn't do the right thing.

In the specific case of Subversion there shouldn't be many cases where output 
is delayed, so I hope this will just work for us. But once we see h2 proxies, 
I'm guessing we will really need the window handling as fallback.



In a few of the remaining test failures I see httpd spinning at 100% CPU 
(automatically cooling down after some timeout; perhaps when the connection is 
closed). I haven't investigated those, but it could be an httpd issue. This 
usually triggers other test failures for me as I limit the number of workers 
and threads while testing. I'll follow up when I know more.

Bert




RE: svn commit: r1715294 - /httpd/httpd/trunk/server/core.c

2015-11-20 Thread Bert Huijben


> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: vrijdag 20 november 2015 10:11
> To: dev@httpd.apache.org
> Subject: Re: svn commit: r1715294 - /httpd/httpd/trunk/server/core.c
> 
> +1 for lowercasing and pls backport, since it just arrived as is in 2.4.x,
i think.



> >  apr_table_clear(r->headers_out);
> >  apr_table_setn(r->headers_out, "Upgrade",
protocol);
> > -apr_table_setn(r->headers_out, "Connection",
"upgrade");
> > +apr_table_setn(r->headers_out, "Connection",
"Upgrade");

As the 'Connection' header technically refers to the 'Upgrade' header I
would paint both sheds in the same color.

Either would be fine by me.

Bert




H2 stream dependencies

2015-11-19 Thread Bert Huijben
Hi All (and Stefan in particular),

 

As already noted I'm trying to make Subversion work over http/2 via the
Apache Serf library. Today I made a few huge steps forward and got most of
the Subversion tests working over h2. (Just +- 60 failures left of the +-
2000 tests)

 

One particular problem that remains is that we have some code that assumes
some related responses are delivered in strict order. This code worked for
many releases, so there is not much I can change on the information needed,
without adding features that only work in future Subversion versions.

 

In theory I should be able to get exactly the behavior I want by adding
priority information to my h2 requests, but I don't know if httpd is ready
to process that information in a way that would help us.

(Currently I just ignore all this in my implementation, as will at least
some other implementations)

 

Can somebody give me an update on what the current status of priority
handling in httpd is? (trunk vs 2.4.17)

 

Thanks,

Bert



RE: svn commit: r1714219 - in /httpd/httpd/trunk: docs/manual/mod/ modules/http2/

2015-11-18 Thread Bert Huijben


> -Original Message-
> From: ic...@apache.org [mailto:ic...@apache.org]
> Sent: vrijdag 13 november 2015 15:54
> To: c...@httpd.apache.org
> Subject: svn commit: r1714219 - in /httpd/httpd/trunk: docs/manual/mod/
> modules/http2/
> 
> Author: icing
> Date: Fri Nov 13 14:54:15 2015
> New Revision: 1714219
> 
> URL: http://svn.apache.org/viewvc?rev=1714219&view=rev
> Log:
> new directive H2Push on/off to en-/disable HTTP/2 server pushes. Server
> pushes are recognized by Link: headers in responses that carry the
> rel=preload parameter
> 


> Modified: httpd/httpd/trunk/modules/http2/h2_request.c
> URL:
> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http2/
> h2_request.c?rev=1714219&r1=1714218&r2=1714219&view=diff
> ==
> 
> --- httpd/httpd/trunk/modules/http2/h2_request.c (original)
> +++ httpd/httpd/trunk/modules/http2/h2_request.c Fri Nov 13 14:54:15
> 2015
> @@ -31,11 +31,22 @@
> 
>  h2_request *h2_request_create(int id, apr_pool_t *pool)
>  {
> +return h2_request_createn(id, pool, NULL, NULL, NULL, NULL, NULL);
> +}
> +
> +h2_request *h2_request_createn(int id, apr_pool_t *pool,
> +   const char *method, const char *scheme,
> +   const char *authority, const char *path,
> +   apr_table_t *header)
> +{
>  h2_request *req = apr_pcalloc(pool, sizeof(h2_request));
> 
> -req->id = id;
> -req->headers = apr_table_make(pool, 10);
> -req->content_length = -1;
> +req->id = id;
> +req->method = method;
> +req->scheme = scheme;
> +req->authority  = authority;
> +req->path   = path;
> +req->headers= header? header : apr_table_make(pool, 10);
> 
>  return req;
>  }
> @@ -44,45 +55,26 @@ void h2_request_destroy(h2_request *req)
>  {
>  }
> 
> +static apr_status_t inspect_clen(h2_request *req, const char *s)
> +{
> +char *end;
> +req->content_length = apr_strtoi64(s, &end, 10);
> +return (s == end)? APR_EINVAL : APR_SUCCESS;
> +}
> +
>  static apr_status_t add_h1_header(h2_request *req, apr_pool_t *pool,
>const char *name, size_t nlen,
>const char *value, size_t vlen)
>  {
>  char *hname, *hvalue;
> 
> -if (H2_HD_MATCH_LIT("transfer-encoding", name, nlen)) {
> -if (!apr_strnatcasecmp("chunked", value)) {
> -/* This should never arrive here in a HTTP/2 request */
> -ap_log_perror(APLOG_MARK, APLOG_ERR, APR_BADARG, pool,
> -  APLOGNO(02945)
> -  "h2_request: 'transfer-encoding: chunked' 
> received");
> -return APR_BADARG;
> -}
> -}
> -else if (H2_HD_MATCH_LIT("content-length", name, nlen)) {
> -char *end;
> -req->content_length = apr_strtoi64(value, &end, 10);
> -if (value == end) {
> -ap_log_perror(APLOG_MARK, APLOG_WARNING, APR_EINVAL, pool,
> -  APLOGNO(02959)
> -  "h2_request(%d): content-length value not parsed: 
> %s",
> -  req->id, value);
> -return APR_EINVAL;
> -}
> -req->chunked = 0;
> -}
> -else if (H2_HD_MATCH_LIT("content-type", name, nlen)) {
> -/* If we see a content-type and have no length (yet),
> - * we need to chunk. */
> -req->chunked = (req->content_length == -1);
> -}
> -else if ((req->seen_host && H2_HD_MATCH_LIT("host", name, nlen))
> - || H2_HD_MATCH_LIT("expect", name, nlen)
> - || H2_HD_MATCH_LIT("upgrade", name, nlen)
> - || H2_HD_MATCH_LIT("connection", name, nlen)
> - || H2_HD_MATCH_LIT("proxy-connection", name, nlen)
> - || H2_HD_MATCH_LIT("keep-alive", name, nlen)
> - || H2_HD_MATCH_LIT("http2-settings", name, nlen)) {
> +if (H2_HD_MATCH_LIT("expect", name, nlen)
> +|| H2_HD_MATCH_LIT("upgrade", name, nlen)
> +|| H2_HD_MATCH_LIT("connection", name, nlen)
> +|| H2_HD_MATCH_LIT("proxy-connection", name, nlen)
> +|| H2_HD_MATCH_LIT("transfer-encoding", name, nlen)
> +|| H2_HD_MATCH_LIT("keep-alive", name, nlen)
> +|| H2_HD_MATCH_LIT("http2-settings", name, nlen)) {
>  /* ignore these. */
>  return APR_SUCCESS;


Not added in this revision, but this is not 100% to the spec:

http2-settings is only special if it is referenced in the upgrade header, so 
dropping "connection" would be enough.

Expect is still 100% valid in http/2; as are trailing headers, which used 
transfer-encoding in http/1. So that header shouldn't be dropped.

Not sure about keep-alive. But if I remember correctly the spec says that it 
needs to be referenced from connection too, to be applied at the connection 
layer. (But I wouldn't be surprised if actual 

RE: [openssl-dev] [openssl.org #4145] Enhancement: patch to support s_client -starttls http

2015-11-18 Thread Bert Huijben
Hi William,

 

Is any commonly used client actually implementing this spec in a way that makes 
this RFC relevant for httpd?

 

Sure we could implement this… Perhaps we already did but once you switch to TLS 
there are so many security related things to account for.

 

Ignoring the server certificate case, what about SNI and ALPN?

 

Is there really a specific upgrade to TLS/1.0, 1.1 and 1.2? Or is one upgrade 
enough, as the handshake does the rest?

 

Does this also allow switching to http/2 in one step via ALPN?

 

Or is that explicitly forbidden?

 

Bert

 

From: William A Rowe Jr [mailto:wr...@rowe-clan.net] 
Sent: woensdag 18 november 2015 01:10
To: httpd 
Subject: Fwd: [openssl-dev] [openssl.org #4145] Enhancement: patch to support 
s_client -starttls http

 

I'm fairly certain this will be applied to 1.1.0 and not necessarily

backported to 1.0.2, so this hack might be useful to some of you 

who want to test for the preservation of the SSLEngine optional 

Upgrade: TLS/1.0 behavior on trunk and 2.4.x branch...

 

 

 

-- Forwarded message --
From: William A. Rowe Jr. via RT  >
Date: Tue, Nov 17, 2015 at 5:26 PM
Subject: [openssl-dev] [openssl.org   #4145] Enhancement: 
patch to support s_client -starttls http
To: 
Cc: openssl-...@openssl.org  



RFC 2817 defines upgrading HTTP/1.1 to TLS (or SSL).

Because Apache httpd supports Connection: Upgrade and Upgrade: TLS/1.x I've
gone ahead and instrumented s_client to support this behavior (and noted a
small optimization in the same logic stream for starttls support).

Attached is the patch to introduce this behavior.  It is a bit crufty, but
lacking a CUPS client that did connection upgrade to TLS, I needed
something for testing and experimentation.

I don't know that there is a justification for implementing Upgrade: h2
since this is a binary protocol that is not conducive to terminal mode :)

Source licensed by me under the OpenSSL license at
https://www.openssl.org/source/license.txt - don't see a need for a CLA,
but email me privately if so.


___
openssl-bugs-mod mailing list
openssl-bugs-...@openssl.org  
https://mta.openssl.org/mailman/listinfo/openssl-bugs-mod
___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev

 



RE: mod_http2 / H2WindowSize default

2015-11-17 Thread Bert Huijben


> -Original Message-
> From: Yann Ylavic [mailto:ylavic@gmail.com]
> Sent: dinsdag 17 november 2015 00:49
> To: httpd-dev <dev@httpd.apache.org>
> Subject: Re: mod_http2 / H2WindowSize default
> 
> On Mon, Nov 16, 2015 at 8:23 PM, Bert Huijben <b...@qqmail.nl> wrote:
> >
> > Currently it looks like bodies of requests are not delivered over http/2
> unless I add a Content-Length header to the request.
> 
> Do you mean it does not work with the "Transfer-Encoding: chunked"
> header either?

Not following up on this:
http/2 strictly disallows transfer-encoding chunked.


http/2 always uses streams and framing... completely different system, but 
solves the same problem as chunking did in http/1.1.

If a content-length header exists, it is transferred... but it isn't used at 
the protocol level. 
(In most cases it is still used at the application level, but that is a 
different story)

Bert



RE: mod_http2 / H2WindowSize default

2015-11-16 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: maandag 16 november 2015 12:41
> To: dev@httpd.apache.org
> Subject: Re: mod_http2 / H2WindowSize default
> 
> 
> > Am 14.11.2015 um 11:14 schrieb Bert Huijben <b...@qqmail.nl>:
> >
> > Hi,
> >
> > I was wondering why mod_http2 currently uses a default window size of
> 65536.
> 
> Pure mistake on my part. I put this in almost at the start of development and
> never reviewed this.
> 
> Thanks for catching it. I am just about to commit a change that
> a) set the default to 65535
> b) will not send a setting for it to the client unless it has been configured 
> to
> have another value
> 
> I also removed the sending of the MAX_HEADER_LIST_SIZE, since it is not
> enforced in any way currently.
> 
> Hope this change works for you.

Thanks,

Those settings are enforced in (my) Serf implementation now, so my code will 
see a difference. Thanks for fixing.
(BTW: It is very easy to spot these two if you run nghttp2 in verbose mode 
against httpd).


As my http2 implementation in serf is coming closer to full-featured I'm trying 
to exercise more and more things in httpd 2.4.17.

Currently it looks like bodies of requests are not delivered over http/2 unless 
I add a Content-Length header to the request.


Serf is a client that is eager to use chunked encoding over http/1, so by
default it won't send such a header. And in http/2 this header is fully
optional, as the existence of a request body is already signalled on the
HEADERS frame (by the absence of the END_STREAM flag).

I would love to see this fixed soon, as this is currently a blocker for me to 
start testing Subversion over http/2.

Bert



mod_http2 / H2WindowSize default

2015-11-14 Thread Bert Huijben
Hi,

 

I was wondering why mod_http2 currently uses a default window size of 65536.

 

The http2 protocol defines a default window size of 65535, just 1 byte less
than the current default. Changing the default requires transferring this
setting to every client in the initial settings frame, while sticking to
65535 would allow not doing that.

 

I can imagine that a completely different default makes sense (and allowing
it to be configured certainly does), but being just 1 byte off the protocol
default doesn't make much sense to me.

 

This is just 6 bytes wasted in the handshake for no obvious benefit.
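
For reference, a single SETTINGS entry on the wire is a 16-bit identifier
plus a 32-bit value (RFC 7540, section 6.5.1):

[[
 Identifier (16 bits)   0x4 = SETTINGS_INITIAL_WINDOW_SIZE
 Value      (32 bits)   0x00010000 = 65536
 = 6 octets that sticking to the protocol default of 65535 would avoid.
]]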

 

 

I would recommend changing the default to 65535, or perhaps to something
completely different (1 MB? I don't know). But sending a setting that is less
than 0.002% off the protocol default doesn't make much sense to me.

 

 

Note that this size is not really used by httpd itself. This size is used by
clients that need to transfer request bodies to httpd. This number of bytes
specifies the maximum number of bytes the client is allowed to send before
it needs to receive a WINDOW_UPDATE from the server, allowing it to transfer
more data.
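
To make the effect concrete, an illustrative sketch with made-up numbers,
using the directive from the subject line:

[[
# Per-stream flow-control window for request bodies sent to httpd, in bytes
H2WindowSize 1048576

# Why it matters for uploads: with the protocol default of 65535 bytes and,
# say, a 50 ms round trip, a client can push at most roughly
#     65535 bytes / 0.05 s  ~=  1.3 MB/s
# per stream before it stalls waiting for a WINDOW_UPDATE from the server.
]]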

 

The client determines the default in the other direction.

 

 

Bert



RE: h1/h2/h2c throughput numbers

2015-11-04 Thread Bert Huijben
> -Original Message-
> From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de]
> Sent: woensdag 4 november 2015 12:54
> To: dev@httpd.apache.org
> Subject: h1/h2/h2c throughput numbers
> 
> See https://icing.github.io/mod_h2/throughput.html
> 
> This discusses the performance improvements regarding raw throughput of
> the current mod_http2 implementation vs. the 2.4.17 one. Some outlook
> onto the next release, hopefully.
> 
> tl;dr
> 
> h2 (https) raw transfer numbers are on par with HTTP/1.1 now, h2c (http:)
> are at 95% - in my setup.

Hi Stefan,

Thanks for all your work on this.

I'm currently working on adding http/2 support to Apache Serf, and 2.4.17
provides an easy to use test target for me :)

Good to see the progress here. I hope to start testing Subversion over
http/2 soon.

Thanks,

Bert Huijben

--
Subversion, Serf, AnkhSVN, SharpSvn, SharpGit, ...



RE: ALPN patch comments

2015-06-04 Thread Bert Huijben
Can we really do ALPN per vhost?

 

If this is handled before or at the same time as SNI, then SSLAlpnEnable is
effectively applied per listening address, while H2Engine would make sense even
for multiple hosts at the same IP.

 

I would say that returning some error is a valid response when H2Engine is not
enabled for a vhost, while still negotiating ALPN where it is explicitly
disabled is not.
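
A sketch of the per-vhost split in question (hypothetical configuration; the
directives are the ones named in this thread, and whether the second case can
work at all is exactly the open question):

[[
<VirtualHost *:443>
   ServerName modern.example.org
   # advertise and accept h2 via ALPN for this name
   H2Engine on
</VirtualHost>

<VirtualHost *:443>
   ServerName legacy.example.org
   # ALPN has already run at/before SNI selection, so at this point the
   # server can at best refuse h2, not keep it out of the ALPN offer
   H2Engine off
</VirtualHost>
]]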

 

Bert

 

From: Stefan Eissing [mailto:stefan.eiss...@greenbytes.de] 
Sent: woensdag 3 juni 2015 22:20
To: dev@httpd.apache.org
Subject: Re: ALPN patch comments

 

That is why mod_h2 allowe H2Engine on|off on base server and vhosts. If I 
understand you correctly, this does what you ask for. 

 

//Stefan




On 03.06.2015 at 19:45, William A Rowe Jr <wr...@rowe-clan.net> wrote:

On Wed, Jun 3, 2015 at 8:43 AM, Stefan Eissing <stefan.eiss...@greenbytes.de>
wrote:

Hmm, personally, I do not like redundant configurations. If someone configures
a module, like mod_h2, to be enabled (H2Engine on), she could expect the module
to take all the necessary steps. So I am no fan of an "SSLAlpnEnable".

 

The reason boils down to vhosts and interop.  If someone does not wish
for a specific vhost (perhaps interacting with bad clients, or created for
backwards compatibility) to respond with a feature, it is useful to have
a fine-grained toggle.  The default -could- be 'enabled', although this
probably should not happen on the stable/maintenance branches, but
simply on the future release branch, to avoid surprises.

OpenSSL does the wrong thing in some cases with respect to TLS/SNI
and my current patch development - in some respect - is backing out
that callback change for customers who have been burned by this
specific nonsense.  You should reconsider absolutist behaviors,
because they make it much harder for people to inject 'experimental'
behaviors into specific hosts.

 

 



RE: Looking for a release of 2.4.x soonish

2014-06-24 Thread Bert Huijben


 -Original Message-
 From: Jim Jagielski [mailto:j...@jagunet.com]
 Sent: dinsdag 24 juni 2014 20:40
 To: httpd
 Subject: FYI: Looking for a release of 2.4.x soonish
 
 I'm hoping to encourage us to push out the next 2.4 release within
 the next coupla weeks, maybe after the July 4th US-based
 holiday.
 
 Comments?

I would really like to see the mod_dav escaping fixes (which breser is
looking at) backported for this next release.

Without those patches Subversion doesn't properly support some special
characters inside repository paths. (A regression against 2.0.x, 2.2.x and
earlier 2.4.x versions.)

As soon as these are backported:
  +2 :-)

Bert



RE: [Patch] non blocking writes in core

2013-11-19 Thread Bert Huijben
 -Original Message-
 From: Graham Leggett [mailto:minf...@sharp.fm]
 Sent: dinsdag 19 november 2013 18:44
 To: dev@httpd.apache.org
 Subject: Re: [Patch] non blocking writes in core
 
 On 18 Nov 2013, at 1:24 PM, Plüm, Rüdiger, Vodafone Group
 ruediger.pl...@vodafone.com wrote:
 
  +rv = send_brigade_nonblocking(net->client_socket, bb,
  +  &(ctx->bytes_written), c);
  +if (APR_STATUS_IS_EAGAIN(rv)) {
  +setaside_remaining_output(f, ctx, bb, c);
  +}
  +else if (rv != APR_EAGAIN) {
 
  What if rv is APR_SUCCESS?
 
 This is indeed broken, fixed.

This is also unsafe on platforms where there are multiple EAGAIN values,
like Windows, where APR_STATUS_IS_EAGAIN() returns true for quite a few
error codes.
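
A minimal sketch of the safer shape (function and variable names taken from
the quoted patch; a sketch only, not the committed code):

[[
    rv = send_brigade_nonblocking(net->client_socket, bb,
                                  &(ctx->bytes_written), c);
    if (APR_STATUS_IS_EAGAIN(rv)) {
        /* would block: not an error, set the remainder aside for later */
        setaside_remaining_output(f, ctx, bb, c);
        rv = APR_SUCCESS;
    }
    else if (rv != APR_SUCCESS) {
        /* a real error: report it, instead of comparing against APR_EAGAIN */
        return rv;
    }
]]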

Bert




Playing with cmake: LONG_NAME= problems

2013-11-18 Thread Bert Huijben
Hi,

As I already mentioned I'm re-scripting my build of httpd to work using the
new cmake generator.

It looks like I have things working now, with about half as many local
patches as before..., but I think one problem I had to patch around will be
common for everybody using project files for Visual Studio 2005 and later:
The very ugly escaping of the LONG_NAME= argument.

E.g. CMakeLists.txt contains:

SET_TARGET_PROPERTIES(${mod_name} PROPERTIES COMPILE_FLAGS
-DLONG_NAME=${mod_name} for Apache HTTP Server
-DBIN_NAME=${mod_name}.so ${EXTRA_COMPILE_FLAGS})

The long name value is then later generated in project files, but
differently for the C compiler and the RC (=resource) compiler. This
resource compiler doesn't like the way the value is generated, and just
handles the value literally... And then generates parser errors.

In Subversion, where we used this same pattern for years, we avoided all the
quote-escaping problems by using the APR_STRINGIFY() macro. That allows simply
passing the value.
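
For reference, the macro is just the usual two-level stringification trick
(quoted here from memory; the real definition lives in APR's apr_general.h):

[[
#define APR_STRINGIFY(n) APR_STRINGIFY_HELPER(n)
#define APR_STRINGIFY_HELPER(n) #n
]]

So a bare token passed on the compiler command line comes out of
APR_STRINGIFY() as a quoted string literal, with no quote characters to
escape in the build scripts.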

E.g.

BEGIN
  BLOCK "StringFileInfo"
  BEGIN
    BLOCK "040904B0"
    BEGIN
      VALUE "CompanyName", "http://subversion.apache.org/\0"
      VALUE "FileDescription", APR_STRINGIFY(SVN_FILE_DESCRIPTION) "\0"
      VALUE "FileVersion", SVN_VER_NUMBER "\0"
      VALUE "InternalName", "SVN\0"
      VALUE "LegalCopyright", "Copyright (c) The Apache Software
Foundation\0"
      VALUE "OriginalFilename", APR_STRINGIFY(SVN_FILE_NAME) "\0"
      VALUE "ProductName", "Subversion\0"
      VALUE "ProductVersion", SVN_VERSION "\0"
#ifdef SVN_SPECIALBUILD
      VALUE "SpecialBuild", SVN_SPECIALBUILD "\0"
#endif
    END
  END
  BLOCK "VarFileInfo"
  BEGIN
     VALUE "Translation", 0x409, 1200
  END
END

I've fixed the problem for me with a local hack, but I think many future
users of the cmake build scripts would be very happy if this problem could
be fixed in the standard scripts.


In my case that would allow me to reduce my own patches, to cmake specific
things. (E.g. I like to have .pdb files even for the fully optimized builds,
and cmake doesn't support that scenario)
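
If it helps anyone else: a hypothetical workaround, not part of the httpd
scripts, is to append the MSVC debug-information flags to the Release
configuration yourself:

[[
# generate .pdb files even for the fully optimized configuration
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /Zi")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE
    "${CMAKE_EXE_LINKER_FLAGS_RELEASE} /DEBUG /OPT:REF /OPT:ICF")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE
    "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /DEBUG /OPT:REF /OPT:ICF")
]]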

Bert






RE: Playing with cmake: LONG_NAME= problems

2013-11-18 Thread Bert Huijben
Hi Jeff,

 

Thanks for looking into this.

 

I tried the Visual Studio 9 and Visual Studio 11 generators in both standard
Win32 and x64 forms. Both of these have similar problems, even though the
first uses .vcproj files and the latter .vcxproj files.

 

The cmake generated code is accepted by the resource compiler argument
parser but then fails during the resource compile stage.

 

It appears that the code does work for some cases where the long name only
contains a single word, but it always fails when there are multiple words.

 

Bert

 

From: Jeff Trawick [mailto:traw...@gmail.com] 
Sent: maandag 18 november 2013 19:22
To: Apache HTTP Server Development List
Subject: Re: Playing with cmake: LONG_NAME= problems

 

On Mon, Nov 18, 2013 at 12:37 PM, Bert Huijben <b...@qqmail.nl> wrote:

Hi,

As I already mentioned I'm re-scripting my build of httpd to work using the
new cmake generator.

It looks like I have things working now, with about half as many local
patches as before..., but I think one problem I had to patch around will be
common for everybody using project files for Visual Studio 2005 and later:

 

I don't doubt it, but just for fun: Exactly which generator/studio version
were you using, in case I have a problem reproducing?

 

The very ugly escaping of the LONG_NAME= argument.

E.g. CMakeLists.txt contains:

SET_TARGET_PROPERTIES(${mod_name} PROPERTIES COMPILE_FLAGS
-DLONG_NAME=${mod_name} for Apache HTTP Server
-DBIN_NAME=${mod_name}.so ${EXTRA_COMPILE_FLAGS})

The long name value is then later generated in project files, but
differently for the C compiler and the RC (=resource) compiler. This
resource compiler doesn't like the way the value is generated, and just
handles the value literally... And then generates parser errors.

In Subversion where we used this same pattern for years, we avoided all the
'' escaping problems by using the APR_STRINGIFY() macro. That allows simply
passing the value.

 

We do that too, though with a little indirection:

 

#define LONG_NAME_STR APR_STRINGIFY(LONG_NAME)

#define BIN_NAME_STR APR_STRINGIFY(BIN_NAME)

 

  VALUE "FileDescription", LONG_NAME_STR "\0"

  VALUE "FileVersion", AP_SERVER_BASEREVISION "\0"

  VALUE "InternalName", BIN_NAME_STR "\0"

  VALUE "LegalCopyright", AP_SERVER_COPYRIGHT "\0"

  VALUE "OriginalFilename", BIN_NAME_STR "\0"

 

I guess the LONG_NAME definition set in CMakeLists.txt doesn't need to try
to put literal quotes there.

 


E.g.

BEGIN
  BLOCK StringFileInfo
  BEGIN
BLOCK 040904B0
BEGIN
  VALUE CompanyName, http://subversion.apache.org/\0
  VALUE FileDescription, APR_STRINGIFY(SVN_FILE_DESCRIPTION) \0
  VALUE FileVersion, SVN_VER_NUMBER \0
  VALUE InternalName, SVN\0
  VALUE LegalCopyright, Copyright (c) The Apache Software
Foundation\0
  VALUE OriginalFilename, APR_STRINGIFY(SVN_FILE_NAME) \0
  VALUE ProductName, Subversion\0
  VALUE ProductVersion, SVN_VERSION \0
#ifdef SVN_SPECIALBUILD
  VALUE SpecialBuild, SVN_SPECIALBUILD \0
#endif
END
  END
  BLOCK VarFileInfo
  BEGIN
 VALUE Translation, 0x409, 1200
  END
END

I've fixed the problem for me with a local hack, but I think many future
users of the cmake build scripts would be very happy if this problem could
be fixed in the standard scripts.


In my case that would allow me to reduce my own patches, to cmake specific
things. (E.g. I like to have .pdb files even for the fully optimized builds,
and cmake doesn't support that scenario)

Bert









 

-- 
Born in Roswell... married an alien...
http://emptyhammock.com/



RE: Playing with cmake: LONG_NAME= problems

2013-11-18 Thread Bert Huijben
Hi Jeff,

 

I can confirm that this fixes the LONG_NAME problems :)

 

I have one remaining problem, that I hoped would be fixed by the same fix
you applied, but it wasn't.

 

If the httpd build directory contains a '-', such as in my case
'F:\svn-dev\build\httpd', then the ICON_FILE argument doesn't get through as
a valid token.

 

[[

build\win32\httpd.rc(34): error RC2135: file not found: F:/svn
[F:\svn-dev\build\httpd\httpd.vcxproj]

build\win32\httpd.rc(40): error RC2135: file not found: VERSIONINFO
[F:\svn-dev\build\httpd\httpd.vcxproj]

build\win32\httpd.rc(41): error RC2135: file not found: 4
[F:\svn-dev\build\httpd\httpd.vcxproj]

build\win32\httpd.rc(42): error RC2135: file not found: PRODUCTVERSION
[F:\svn-dev\build\httpd\httpd.vcxproj]

]]

 

The generated line in httpd.vcxproj is now:

 
<PreprocessorDefinitions>WIN32;_WINDOWS;NDEBUG;APP_FILE;LONG_NAME=Apache HTTP Server;BIN_NAME=httpd.exe;ICON_FILE=F:/svn-dev/build/httpd/build/win32/apache.ico;CMAKE_INTDIR=\"Release\";%(PreprocessorDefinitions)</PreprocessorDefinitions>

 

I can fix this by updating the relevant lines in build/win32/httpd.rc from

[[

#ifdef ICON_FILE

1 ICON DISCARDABLE ICON_FILE

#endif

]]

to

#ifdef ICON_FILE

1 ICON DISCARDABLE APR_STRINGIFY(ICON_FILE)

#endif
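
The difference, spelled out (illustrative preprocessor expansion, not actual
rc output):

[[
// bare token: rc stops at the '-' in "svn-dev", hence "file not found: F:/svn"
1 ICON DISCARDABLE F:/svn-dev/build/httpd/build/win32/apache.ico

// after APR_STRINGIFY(): a single quoted string, which rc accepts
1 ICON DISCARDABLE "F:/svn-dev/build/httpd/build/win32/apache.ico"
]]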

 

Bert

 

From: Jeff Trawick [mailto:traw...@gmail.com] 
Sent: maandag 18 november 2013 22:19
To: Apache HTTP Server Development List
Subject: Re: Playing with cmake: LONG_NAME= problems

 

On Mon, Nov 18, 2013 at 4:14 PM, Jeff Trawick <traw...@gmail.com> wrote:

On Mon, Nov 18, 2013 at 4:10 PM, Bert Huijben <b...@qqmail.nl> wrote:

Hi Jeff,

 

Thanks for looking into this.

 

I tried the Visual Studio 9 and Visual Studio 11 generators in both standard
Win32 and x64 forms. Both of these have similar problems even though the
first uses .vcproj files and the late .vcxproj files.

 

The cmake generated code is accepted by the resource compiler argument
parser but then fails during the resource compile stage.

 

It appears that the code does work for some cases where the long name only
contains a single word, but it always fails when there are multiple words.

 

Bert

 

Cool...  In the meantime, I have a fix in trunk (r1543149) and am building
the 2.4.x fix with Visual Studio 2010 currently.  (Luckily exiftool makes it
easy to quickly check File Description.)

 

r1543165 is the 2.4.x fix

 

 

 

From: Jeff Trawick [mailto:traw...@gmail.com]
Sent: maandag 18 november 2013 19:22
To: Apache HTTP Server Development List
Subject: Re: Playing with cmake: LONG_NAME= problems

 

On Mon, Nov 18, 2013 at 12:37 PM, Bert Huijben <b...@qqmail.nl> wrote:

Hi,

As I already mentioned I'm re-scripting my build of httpd to work using the
new cmake generator.

It looks like I have things working now, with about half as many local
patches as before..., but I think one problem I had to patch around will be
common for everybody using project files for Visual Studio 2005 and later:

 

I don't doubt it, but just for fun: Exactly which generator/studio version
were you using, in case I have a problem reproducing?

 

The very ugly escaping of the LONG_NAME= argument.

E.g. CMakeLists.txt contains:

SET_TARGET_PROPERTIES(${mod_name} PROPERTIES COMPILE_FLAGS
-DLONG_NAME=${mod_name} for Apache HTTP Server
-DBIN_NAME=${mod_name}.so ${EXTRA_COMPILE_FLAGS})

The long name value is then later generated in project files, but
differently for the C compiler and the RC (=resource) compiler. This
resource compiler doesn't like the way the value is generated, and just
handles the value literally... And then generates parser errors.

In Subversion where we used this same pattern for years, we avoided all the
'' escaping problems by using the APR_STRINGIFY() macro. That allows simply
passing the value.

 

We do that too, though with a little indirection:

 

#define LONG_NAME_STR APR_STRINGIFY(LONG_NAME)

#define BIN_NAME_STR APR_STRINGIFY(BIN_NAME)

 

  VALUE FileDescription, LONG_NAME_STR \0

  VALUE FileVersion, AP_SERVER_BASEREVISION \0

  VALUE InternalName, BIN_NAME_STR \0

  VALUE LegalCopyright, AP_SERVER_COPYRIGHT \0

  VALUE OriginalFilename, BIN_NAME_STR \0

 

I guess the LONG_NAME definition set in CMakeLists.txt doesn't need to try
to put literal quotes there.

 


E.g.

BEGIN
  BLOCK StringFileInfo
  BEGIN
BLOCK 040904B0
BEGIN
  VALUE CompanyName, http://subversion.apache.org/\0
  VALUE FileDescription, APR_STRINGIFY(SVN_FILE_DESCRIPTION) \0
  VALUE FileVersion, SVN_VER_NUMBER \0
  VALUE InternalName, SVN\0
  VALUE LegalCopyright, Copyright (c) The Apache Software
Foundation\0
  VALUE OriginalFilename, APR_STRINGIFY(SVN_FILE_NAME) \0
  VALUE ProductName, Subversion\0
  VALUE ProductVersion, SVN_VERSION \0
#ifdef

r1542328 breaks cmake build

2013-11-16 Thread Bert Huijben
Hi,

 

I'm trying to switch my personal Windows build from the old system to cmake
in preparation for the next 2.4 tag and my build of httpd started to work
until r1542328 removed a file that is still referenced from CMakeLists.txt.

 

Can somebody with enough karma remove the reference from CMakeLists.txt?

(I've confirmed that just removing it fixes the build)

 

Thanks,

Bert