Re: Feedback needed: suexec different-owner patch

2016-03-19 Thread Tim Bannister
On 19 March 2016, montt...@heavyspace.ca wrote:
>Since it's been a while since this issue was mentioned, this patch
>allows 
>Apache to suexec files by a different (but still restricted by UID) 
>owner, to avoid the security issue where apache forces you to suexec to
>files it has full chmod access to.


That patch builds on what I'd consider a legacy feature. I have not used 
suexec for a long time: it is risky, and on the one recent-ish occasion when I 
wanted something like suexec, I also wanted to chroot() / jail() / otherwise 
separate the CGI application from the main system.

httpd's users do sometimes need to have web content served using processes that 
have different privileges to httpd, and perhaps are also isolated from one 
another. suexec achieves some of this albeit not well.
It feels to me as if some kind of FastCGI process manager, combined with a 
privileged helper, could be used to fill the gap that mpm_itk and suexec don't 
completely cover.
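
To make that concrete, the nearest thing that already half-exists is mod_proxy_fcgi 
pointed at a per-user FastCGI process manager such as php-fpm. A sketch only – the 
pool name and paths are invented, and the pool itself is configured on the php-fpm 
side to run as the site's own user rather than the httpd user:

  # pool "site1" runs as user site1; httpd only needs access to the socket
  ProxyPassMatch "^/~site1/.*\.php(/.*)?$" "unix:/run/php-fpm/site1.sock|fcgi://localhost/home/site1/public_html/"

The privileged helper I have in mind would take care of creating and supervising 
those per-user pools, so the admin doesn't have to wire them up by hand.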

I'll add to my To Do list (and maybe also Bugzilla) a task to see what already 
exists and document how to use that in place of suexec.
If nothing out there already works, then my idea is to code that up as well.

I wish I could say when I might get round to that, but the way of these things 
is that it's easy to start this kind of task and rather more difficult to 
complete it.

As to whether to take the suggested patch: +0. I don't think it will make 
things worse; however, I don't feel qualified to comment on security-critical 
code.

Tim


-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: [PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-08 Thread Tim Bannister

> On 8 Mar 2016, at 18:13, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>> 
>> On Tue, Mar 8, 2016 at 11:38 AM, Tim Bannister <is...@c8h10n4o2.org.uk> 
>> wrote:
>> On 8 Mar 2016, at 10:43, Jan Kaluža <jkal...@redhat.com> wrote:
>> > On 03/08/2016 10:25 AM, Yann Ylavic wrote:
>> >> On Tue, Mar 8, 2016 at 9:46 AM, Yann Ylavic <ylavic@gmail.com> wrote:
>> >>> On Tue, Mar 8, 2016 at 9:28 AM, Jan Kaluža <jkal...@redhat.com> wrote:
>> >>>>
>> >>>> I have chosen FreeListen over the flags
>> >>>
>> >>> FWIW, should be take the YAD path, I'd prefer ListenFree (over
>> >>> FreeListen) to emphasize on the "Listen directive family" with a
>> >>> prefix...
>> >>
>> >> Thinking more about this, I think I second Jim on the wish to have a
>> >> single Listen directive with some parameter like
>> >> "options=freebind,backlog:4095,reuseport,...".
>> >
>> > Thinking about right syntax for options...
>> >
>> > I would personally like something like "Listen [IP-address:]portnumber 
>> > [protocol] [option1] [option2] ...". Do we have list of supported 
>> > protocols by Listen directive, or we support whatever protocol is there?
>> >
>> > If we have explicit list of protocols, then the protocols itself could 
>> > become an options.
>> >
>> > If not, can it be acceptable, that you always have to define protocol when 
>> > you wan to use options?
>> 
>> That sounds fine too.
>> 
>> One proviso comes with the idea of a single socket that can serve several 
>> protocols. Think of WebSocket, because it is awkward: from an HTTP 
>> point-of-view, the protocol is initially HTTP and then upgrades to 
>> WebSocket; however, from a WebSocket point of view, the protocol is 
>> WebSocket throughout with a preamble that also happens to resemble HTTP/1.1.
>> 
>> Using the first model, only one protocol need be specified (but it's not 
>> clear which upgrades are valid for this socket). Using the second model, the 
>> Listen directive needs a way for the admin to specify multiple protocols. 
>> Maybe the answer is for that to be set in the Protocols directive only?

Either mental model might be valid, and I wouldn't presume to say which we 
should be using. Maybe it's not even feasible to have a single abstraction for 
how httpd works (because things like HTTP/2, TLS, WebSocket confuse matters by 
each using their own interpretation).

…

> Keep in mind this becomes a nightmare entanglement between optional, loadable 
> support modules and the server core.  The existing implementation
> of listen was flexible enough to provide new arbitrary protocols and resolve 
> these at runtime.  There is no reason to distinguish http/1.1, as we would 
> have already done so (e.g. http/1.0, http/0.9 etc).  It isn't necessary.

I agree, but: Protocols vs. Protocol is already awkward to document and use. I 
hope we don't accidentally make anything worse.


> If a websocket implementation is properly stacked on top of the core, there 
> is no need for special-casing this interaction.  It will be able to speak 
> over http or https, or conceivably even over a single h2, or h2c stream, and 
> will support httpready or freebind mechanics.

I chose WebSocket precisely because it's a pain and will illustrate awkward 
cases. WebSocket over HTTP/2 sounds like a red herring, as normal WebSocket 
runs over a stream transport (TCP). In RFC 6455 it's an upgrade, and once that 
upgrade has happened then HTTP is not part of the stack any more. WebSocket 
Secure runs over TLS but, again, discards HTTP after the upgrade.

> Re-implementing the handshake entirely in a WebSocket module seems like 
> overkill, much like re-implementing the h2c handshake would be.

No worries there – sounds like you're more in favour of the first of the two 
models then? In that case the Listen directive simplifies to one of:
Listen [2001:db8::a00:20ff:fea7:ccea]:1234 http options=freebind

or:
Listen [2001:db8::a00:20ff:fea7:ccea]:1234 options=protocol:http,freebind


I'm mindful that there is already an overlap between Listen, Protocol, and 
Protocols. Making the Listen directive more complex makes sense to me; the 
added complexity makes it more important to try to get it right.



My aim here is that we agree on a new definition of Listen that's 
implementable, understandable, and has a way to specify use of IP_FREEBIND. 
It's nice to have it extensible but that is not as strong a consideration as 
the other details. Which choice do people like best?


-- 
Tim Bannister – is...@c8h10n4o2.org.uk

Re: [PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-08 Thread Tim Bannister
On 8 Mar 2016, at 10:43, Jan Kaluža <jkal...@redhat.com> wrote:
> On 03/08/2016 10:25 AM, Yann Ylavic wrote:
>> On Tue, Mar 8, 2016 at 9:46 AM, Yann Ylavic <ylavic@gmail.com> wrote:
>>> On Tue, Mar 8, 2016 at 9:28 AM, Jan Kaluža <jkal...@redhat.com> wrote:
>>>> 
>>>> I have chosen FreeListen over the flags
>>> 
>>> FWIW, should be take the YAD path, I'd prefer ListenFree (over
>>> FreeListen) to emphasize on the "Listen directive family" with a
>>> prefix...
>> 
>> Thinking more about this, I think I second Jim on the wish to have a
>> single Listen directive with some parameter like
>> "options=freebind,backlog:4095,reuseport,...".
> 
> Thinking about right syntax for options...
> 
> I would personally like something like "Listen [IP-address:]portnumber 
> [protocol] [option1] [option2] ...". Do we have list of supported protocols 
> by Listen directive, or we support whatever protocol is there?
> 
> If we have explicit list of protocols, then the protocols itself could become 
> an options.
> 
> If not, can it be acceptable, that you always have to define protocol when 
> you wan to use options?

That sounds fine too.

One proviso comes with the idea of a single socket that can serve several 
protocols. Think of WebSocket, because it is awkward: from an HTTP 
point-of-view, the protocol is initially HTTP and then upgrades to WebSocket; 
however, from a WebSocket point of view, the protocol is WebSocket throughout 
with a preamble that also happens to resemble HTTP/1.1.

Using the first model, only one protocol need be specified (but it's not clear 
which upgrades are valid for this socket). Using the second model, the Listen 
directive needs a way for the admin to specify multiple protocols. Maybe the 
answer is for that to be set in the Protocols directive only?

What should the Listen directive look like, ideally, for a freebind-enabled 
socket that can be either HTTP or WebSocket, and needs to specify options? Like 
this perhaps:

Listen [2001:db8::a00:20ff:fea7:ccea]:1234 http/1.1,websocket options=freebind




-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: [PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-08 Thread Tim Bannister
On 8 Mar 2016, at 09:25, Yann Ylavic <ylavic@gmail.com> wrote:
> On Tue, Mar 8, 2016 at 9:46 AM, Yann Ylavic <ylavic@gmail.com> wrote:
>> On Tue, Mar 8, 2016 at 9:28 AM, Jan Kaluža <jkal...@redhat.com> wrote:
>>> 
>>> I have chosen FreeListen over the flags
>> 
>> FWIW, should be take the YAD path, I'd prefer ListenFree (over
>> FreeListen) to emphasize on the "Listen directive family" with a
>> prefix...
> 
> Thinking more about this, I think I second Jim on the wish to have a
> single Listen directive with some parameter like
> "options=freebind,backlog:4095,reuseport,...".
> 
> We could then whatever (new) IP option more easily (less docs work...)
> and maybe deprecate ListenBacklog.

+1

I had thought of a feature / module that has a separate process bind the 
listening TCP socket (and send the FD to httpd over an AF_UNIX socket*), ending 
up with the same "options=freebind,backlog:4095,reuseport,..." concept.

I'm presuming that “options=protocol:https” would be fine too, and “https” on 
its own would be taken to be a deprecated shorthand?
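
In other words (purely illustrative syntax, nothing agreed yet):

  Listen 192.0.2.1:443 https
  Listen 192.0.2.1:443 options=protocol:https,freebind

with the first form still working, but documented as the legacy spelling.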


* similar to how https://github.com/JiriHorky/privbind works

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: httpd + systemd

2016-02-26 Thread Tim Bannister
On 26 February 2016, Reindl Harald wrote:
>
>
>
>in case of a SIGTERM the daemon is supposed to do a clean shutdown
>anyways
>
>[Service]
>Type=simple
>EnvironmentFile=-/etc/sysconfig/httpd
>ExecStart=/usr/sbin/httpd $OPTIONS -D FOREGROUND
>ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
>Restart=always
>RestartSec=1
>
Maybe add an ExecStop as well which calls graceful-stop? This is more reliable 
than a signal.
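
Something along these lines (untested):

ExecStop=/usr/sbin/httpd $OPTIONS -k graceful-stop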

After DefaultTimeoutStopSec seconds, systemd will intervene regardless.
-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: balancer-manager docs

2016-02-09 Thread Tim Bannister
On 9 Feb 2016, at 16:02, Rainer Jung <rainer.j...@kippdata.de> wrote:
> Am 09.02.2016 um 13:25 schrieb Jim Jagielski:
>> We currently have really really little info about the balancer-manager in 
>> our docs, just a short little blurb on how to enable it and a brief 
>> description of what it does [1]. I'd like to extend that, but does it make 
>> sense to add it to the mod_proxy_balancer module page, or have a separate 
>> page dedicated to it which we can link to?
>> 
>> 1. https://httpd.apache.org/docs/trunk/mod/mod_proxy_balancer.html
> 
> Adding even more questions:
> 
> I always think it is confusing for newbies, that all configuration directives 
> for any mod_proxy_* are documented on the mod_proxy page. Although this 
> reflects the code, config is done by mod_proxy, it is not what a user would 
> expect. If e.g. He is working with a balancer, he would expect more info 
> about how to configure a balancer in the mod_proxy_balancer page.

Cc: to docs@

The module pages can document the module; I think that's appropriate for 
reference documentation.

What's missing is more of a “how do I set up X” guide. I think the topics could 
be:

• forward proxy (and access control) with or without cacheing
• reverse proxy with or without cacheing
• balancing and high availability for reverse proxies
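
For the second of those, the guide could build up from something as small as this 
(sketch; host names invented):

  ProxyPass        "/app/" "http://backend.example.com/app/"
  ProxyPassReverse "/app/" "http://backend.example.com/app/"
  CacheEnable disk "/app/"

and then layer on failover, cache tuning and so on.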

I think this is me volunteering to at least draft some text, if people agree 
this approach makes sense.

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: Worker states for balancer members

2016-02-03 Thread Tim Bannister
On 3 February 2016 12:25:21 GMT, Jim Jagielski wrote:
>
>Maybe we can just say that STOPPED is there for potential
>3rd party uses and be done w/ it :)

+1 to that philosophy

-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: Worker states for balancer members

2016-02-03 Thread Tim Bannister
On 3 February 2016 14:21:58 GMT, Jim Jagielski <j...@jagunet.com> wrote:

>STOPPED: Never sent proxy traffic. Never health-checked. Never
> re-tried. Never automatically re-enabled.
>
>DISABLED: Never sent proxy traffic. Is health-checked. Retried.
>  Could be automatically re-enabled.
>

Some users could actually expect to see health checks sent to STOPPED workers 
but not to a DISABLED worker.

I'm trying to think like a newbieish webmaster: if they declare a balancer 
member +D they are saying that it is not in use: maybe the host is not set up 
yet, maybe it no longer provides that service.
STOPPED comes over as saying “temporarily do not use” whereas DISABLED feels 
like “administratively disabled”, “no longer in service”, that kind of thing.
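
For reference, the sort of configuration I'm picturing (sketch; host names invented):

  <Proxy balancer://mycluster>
    BalancerMember http://app1.example.com
    BalancerMember http://app2.example.com status=+D
  </Proxy>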

I'd say this doesn't match how the balancer-manager portrays things. Whichever 
interpretation wins out is going to be worth documenting (I think?) to avoid 
that risk of confusion.

-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: Work in progress: mod_proxy Health Check module

2016-01-19 Thread Tim Bannister
On 19 January 2016 19:01:12 GMT, Jim Jagielski <j...@jagunet.com> wrote:
>Okey dokey... this is quite functional at this stage.
>Right now we have:
>
>   o TCP health checking (socket up/down)
>   o HTTP OPTIONS checking
>   o HTTP HEAD checking
>   o Support for ap_expr against the response of
> the backend for OPTIONS/HEAD
>   o Ability to add a URI to the worker's path for
> a "health check" URL (eg: /check/up.php)
>   o Allow for a set number of successes or failures
> before enabling/disabling the worker.
>   o Some basic balancer-manager view

That looks really good.

In the longer term, what do people think about the idea of supporting GET as a 
health check method?

“up.php” or whatever might supply a special response which an ap_expr checks 
for. I've seen this approach used to protect against serving an apparently 
healthy backend (2xx status) which is actually serving the wrong page, eg “this 
domain is for sale!”
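
Borrowing the directive and parameter names from the current work in progress, that 
might end up looking something like this (a sketch, not tested against the code):

  ProxyHCExpr body_ok {hc('body') =~ /all systems normal/}
  <Proxy balancer://app>
    BalancerMember http://app1.example.com hcmethod=GET hcuri=/check/up.php hcexpr=body_ok
  </Proxy>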


-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: Shouldn't ap_get_remote_host use req->useragent_addr?

2016-01-07 Thread Tim Bannister
On 8 January 2016 06:23:15 GMT, "Jan Kaluža" <jkal...@redhat.com> wrote:
>On 01/07/2016 04:06 PM, Eric Covener wrote:
>> 
>>> Is this expected behaviour? Maybe the ap_get_remote_host method
>should use
>>> req->useragent_addr instead of conn->client_addr to obtain the
>REMOTE_HOST.
>>
>> what about "Require ip ..."?


“ip” is minimal and doesn't explain much.

How about, maybe:
Require remote-ip-host 192.0.2.42/30?

I'm assuming that this would succeed  if the TCP peer is in the specified range 
OR if mod_remoteip makes a similar declaration.

-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: Upgrades

2015-12-09 Thread Tim Bannister
On 9 Dec 2015, at 23:19, William A Rowe Jr <wr...@rowe-clan.net> wrote:

> Because the request body is inbound already at some state of completion
> or incomplete transmission, it is competing with the TLS handshake, which
> is a bidirectional engagement with the same socket between the user agent
> and server.  Unlike h2c and websocket, where Roy suggests we leave the
> http/1 input filter in place and inject the http/2 output filter for the 
> duration
> of the initial Upgrade: request, we must pass TLS traffic in both directions 
> at once during Upgrade: TLS.
…
> Please remember that a request handler is based on a method and location,
> while Upgrade is not the request itself, but a proposal to switch protocols.
> In the case of TLS and h2c, that request is satisfied over the corresponding 
> HTTP/1 or HTTP/2 output filter, but I'm not clear whether websocket has
> any equivalence in terms of a content handler phase.

In a sense, all upgrades happen at the end of an existing request (& response). So 
how about this high-level behaviour model:
httpd keeps note, for each connection, of what upgrades are (1) feasible and 
(2) agreed upon.
At the end of any given request, either zero or one of the feasible upgrades will 
have been agreed between web server and client. That's when the upgrade should 
happen, if it has been negotiated.

Eg, for a new, inbound port 80 connection: an upgrade to TLS or h2c would be 
feasible. During the first request it may transpire that an upgrade to 
WebSocket is feasible too (authz having been satisfied). Once the request in 
which the upgrade has been negotiated is complete, said upgrade takes place.

Any subsequent requests on the same TCP connection won't be eligible for 
upgrade to WebSocket. This kind of rule ought to live outside the HTTP/1.x 
implementation as it has more to do with WebSocket than HTTP.


-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: reverse proxy wishlist

2015-12-05 Thread Tim Bannister
On 3 December 2015 14:59:00 GMT, Jim Jagielski wrote:

>What would *you* like to see as new features or enhancements
>w/ mod_proxy, esp reverse proxy.

I'd like to have more options for error responses: where httpd is a reverse 
proxy for an application that may fail, I want to have httpd send nicer 5xx 
responses.

ProxyErrorOverride is a good starting point. Often I want to let through only 
some error pages: the ones explicitly coded to be shown to this website's 
visitors. If the backend fails and produces an unstyled page of jargon and 
diagnostics, I want httpd to intervene.

The application could signal to httpd that its response has a user-friendly 
body via a special header.

I don't think httpd can do what I have in mind yet (maybe with mod_lua, but 
that's too much for many webmasters).
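
Today the nearest I can get is the blanket form, something like:

  ProxyErrorOverride On
  ErrorDocument 502 /errors/oops.html
  ErrorDocument 503 /errors/oops.html

What I'm wishing for is a conditional version of that override, e.g. keyed off a 
response header that the application sets when its error page is fit to be shown.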

Tim


-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: Question about "Trailer" header field

2015-11-02 Thread Tim Bannister
On 2 Nov 2015, at 22:29, Christophe Jaillet wrote:
> 
> Severals places in httpd seems to deal with RFC2616 13.5.1 End-to-end and 
> Hop-by-hop Headers.
>   Line 1211 of cache_util.c [1]
>   Line 1311 and 1562 of mod_proxy_http.c [2]
>   Line 3567 of proxy_util.c [3]
> 
> 
> 1) [1] is an exact copy of what is said in RFC2616 13.5.1
> However, I wonder if the Trailers here, should not be a Trailer (without s)
> Trailers (with a s) does not seem to a header field, just a keyword for TE.
> Is this a typo in the RFC?

With [1], I think you're right that this is a typo in RFC2616 which has been 
copied into httpd.



As for [2] and [3], the Connection: header seems to be handled in 
mod_proxy_http.c (circa line 970). Transfer-Encoding: is likewise given special 
case handling (circa line 795). I can see a case for adding a comment to 
proxy_util.c explaining this.


That leaves the Proxy-Authenticate: and Proxy-Authorization: headers. Are these 
hop-by-hop?

It makes sense for a shared cache to delete Proxy-Authenticate: from a 
response, unconditionally. Similarly Proxy-Authorization: from a request. The 
shared nature of cacheing brings obvious security issues.


https://tools.ietf.org/html/rfc7235 says “when multiple proxies are used within 
the same administrative domain, such as office and regional caching proxies 
within a large corporate network, it is common for credentials to be generated 
by the user agent and passed through the hierarchy until consumed” and “A proxy 
MAY relay the credentials from the client request to the next proxy if that is 
the mechanism by which the proxies cooperatively authenticate a given request.”

So maybe there's an opportunity (enhancement request?) to make the forwarding 
of these headers configurable. I'm not sure what the default should be. I think 
the safe option, at least for trunk, is to remove those headers in the proxy 
code as well.
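
In the meantime, an administrator who wants the cautious behaviour could approximate 
it with mod_headers, along these lines:

  RequestHeader unset Proxy-Authorization
  Header unset Proxy-Authenticate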


-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: Is Apache getting too patchy?

2015-10-26 Thread Tim Bannister
On 26 Oct 2015, at 22:23, Nick Kew <n...@webthing.com> wrote:
> On Mon, 26 Oct 2015 09:51:43 -0700
> Jacob Champion <champio...@gmail.com> wrote:
> 
>> I'd rather not feel like I'm just annoying dev@ until you submit my 
>> stuff -- I want to *talk* about it, and improve the server.
> 
> That may not be easy.  You need to find someone who'll be interested in an 
> idea or patch, and has the time to review it.
> Plus, the community as a whole to agree it's a good idea, or at least not 
> actively oppose it.
> 
> I wonder if workflow would be improved if we had named maintainers for 
> particular parts of the codebase - for example individual modules?  Not 
> necessarily to do all the work, but to take primary responsibility to see 
> that your ideas don't just fall through the gaps and get ignored?

How does the word “sponsor” sound?

Someone who encourages and champions the development activity around a 
particular feature (and is also very welcome to contribute). The existing and 
more formal mechanisms for approving commits seem to work fine as a way of 
controlling the quality of code.


Improving the workflow means, to me, coaching and leadership, and different 
kinds of code review. Someone who isn't very good at C (like me) might well 
want to make a code contribution but not be sure how. I saw recently how much 
perseverance Yingqi Lu put in towards getting SO_REUSEPORT support into trunk 
and then into 2.4.17 – and that's great. It's unfortunate that the same 
perseverance also offers a lesson about the kind of barriers that a would-be 
contributor might encounter.

So, sponsorship can be about encouraging participation and progress. I'm 
imagining someone who rarely has to settle a decision – those should stay 
consensual and democratic - but often leads discussions and moves things on.

Comments very welcome.


-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: SSLUseStapling: ssl handshake fails until httpd restart

2015-10-04 Thread Tim Bannister
On 4 Oct 2015, at 11:38, Kaspar Brand wrote:
> 
> As far as the mod_ssl side is related, it seems to me that for the 
> "SSLStaplingReturnResponderErrors off" case, we should make sure that we only 
> staple responses with status "good" (i.e. OCSP_RESPONSE_STATUS_SUCCESSFUL and 
> V_OCSP_CERTSTATUS_GOOD for the cert).

If the OCSP response is successful but the status isn't V_OCSP_CERTSTATUS_GOOD, 
I'd want httpd to at least log a warning (as well as not stapling the OCSP 
information). Maybe even add a Warning: header for any client that's interested.

I can attempt a patch for this if other people think it'd be useful.


-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: [Patch] Async write completion for the full connection filter stack

2015-10-04 Thread Tim Bannister
On 4 Oct 2015, at 12:40, Graham Leggett <minf...@sharp.fm> wrote:
> 
> The next bit of this is the ability to safely suspend a filter.
…
> I am thinking of the sharp end of mod_proxy_httpd (and friends) becoming an 
> async filter that suspends or not based on whether data is available on the 
> other connection. In the process, mod_proxy becomes asynchronous.

Also super cool mojo.

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: [RFC] Enable OCSP Stapling by default in httpd trunk

2015-09-05 Thread Tim Bannister
On 5 Sep 2015, at 11:53, Ben Laurie <b...@links.org> wrote:
> On Sat, 5 Sep 2015 at 09:32 Kaspar Brand <httpd-dev.2...@velox.ch> wrote:
>> On 04.09.2015 17:54, Rob Stradling wrote:
>>> Today, roughly 25% of HTTPS servers on the Internet have OCSP stapling 
>>> enabled.  Browsers aren't likely to start hard-failing by default until 
>>> that % is a lot higher.
> 
> …the reason browsers don't hard fail is because OCSP (or any out of band 
> communication) is unreliable. So that either means you fail for sites that 
> are actually perfectly OK, or you allow an attacker to override revocation 
> (by blocking OCSP).
…
> Blocking stapling (and presumably you will also object to CT for similar 
> reasons) is hurting security.
> 
> You've argued that there's no point switching on stapling because browsers 
> won't pay attention to OCSP anyway. That is not true. They don't pay 
> attention to OCSP because it is unreliable. If stapling were widely deployed, 
> then it would be possible to switch on hard fail.

It's not just conventional browsers. I think automated / embedded HTTP clients 
will also benefit from stapling, either because network filtering would block a 
conversation between the client and the CA's OCSP responder, or because the extra 
latency of conventional OCSP is a problem.

For another example of a non-interactive application implementing OCSP, look at 
the Exim mail transfer agent (which can be both client and server).
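
(For anyone wanting to try it, enabling stapling is only a couple of lines, e.g.

  SSLUseStapling on
  SSLStaplingCache "shmcb:/run/httpd/ssl_stapling(65536)"

with the cache path being whatever suits the platform.)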

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: SSLCertificateChainFile deprecation, still

2015-06-15 Thread Tim Bannister
On 15 June 2015 14:12:27 UTC+01:00, Eric Covener <cove...@gmail.com> wrote:
> Anyone else inclined to just remove the message? It's a deprecation
> that didn't happen on a release boundary. AFAICT there's no reason to change
> how you run your server unless you use two different cert chains and then
> you'd find the info in the manual.


I think that suggestion is a good approach if the SSLCertificateChainFile 
directive can remain available for the full lifespan of 2.4.x

-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: 2.2 and 2.4 and 2.6/3.0

2015-05-27 Thread Tim Bannister
Now that even stability-loving Debian is providing 2.4.x with full security 
support, moving on from 2.2 seems to make sense.


-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: 2.2 and 2.4 and 2.6/3.0

2015-05-27 Thread Tim Bannister
On 27 May 2015, at 18:26, Jeff Trawick <traw...@gmail.com> wrote:
> 
> one thing it means is having compelling stories involving the latest hot tech 
> that use 2.4
> 
> basically, any time there is a how-to-FOO somewhere on the www that uses 
> nginx for the web server component, there needs to be a better how-to-FOO 
> that uses httpd 2.4 ;)  (I don't even think 2.2 is an issue here)

…same with forward- and reverse-proxying (Squid, Pound, Varnish, etc)

Is the httpd wiki a good place to publish these?

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: mod_ssl: Reading dhparams and ecparams not only from the first certificate file

2015-05-26 Thread Tim Bannister
On 26 May 2015, at 09:37, Reindl Harald h.rei...@thelounge.net wrote:
 
 
 Am 26.05.2015 um 10:33 schrieb Rainer Jung:
 Current mod_ssl code tries to read embedded DH and ECC parameters only from 
 the first certificate file. Although this is documented
 
 DH and ECDH parameters, however, are only read from the first
 SSLCertificateFile directive, as they are applied independently of the
 authentication algorithm type.
 
 I find it questionable. I would find it more natural to embed the params in 
 the cert files they apply to, so e.g. the DH params in the RSA cert file and 
 the EC params in the ECDH cert file and also to not require a special order 
 for the files which at the end we do not check. Since missing the embedded 
 params goes unnoticed (finding them is only a DEBUG log line) it is not very 
 user friendly
 
 honestly it would be much more user friendly to have a own parameter for that 
 which would make it easy to regenerate the params via cronjobs without 
 touching the PEM file containing the real certificate and private key

That kind of directive would also leave flexibility for this kind of 
thing:

DHParamsEC /tmp/example
DHParamsEC none
DHParamsEC auto

(that last case – I'm imagining that httpd generates the D-H parameters at each 
startup, blocking use of ECDH until generation is complete).

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: mod_proxy_fcgi default port

2015-05-26 Thread Tim Bannister
How about asking IANA to assign a port?
-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: SSL/TLS best current practice

2015-05-23 Thread Tim Bannister
On 23 May 2015, at 12:50, Jeff Trawick traw...@gmail.com wrote:
 
 On 05/06/2015 07:22 PM, William A Rowe Jr wrote:
 Here is my proposed global config for httpd.conf.in for 2.4 and 2.2, which I 
 believe mirrors the 'MUST' of RFC 7525.
 
 So new default configs are improved, and that's great.
 
 Any joint interest in maintaining a guide to implementing SSL/TLS best 
 practices in the documentation for those that don't normally see our 
 latest/greatest default configuration and/or need some extra prose around it?

I can help with this.

-- 
Tim Bannister - is...@c8h10n4o2.org.uk



Re: Disable SSLv3 by default

2015-05-04 Thread Tim Bannister
On 4 May 2015, at 22:26, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> 
> It seems to me that SAFE at this time is TLSv1.2.
> 
> It also seems to me that the first problem to solve is to ensure if the user 
> removes SSLv3 (+/- TLSv1.0) from their openssl installed binary, that we 
> simply respect that.  In that case, 'SSLProtocol all' should be just the 
> remaining supported TLSv1.1 and TLSv1.2 protocols, or TLSv1.2-only.

FWIW, I agree.

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: Proposal/RFC: informed load balancing

2015-05-02 Thread Tim Bannister
On 1 May 2015, at 01:30, Daniel Ruggeri <drugg...@primary.net> wrote:
> 
>> 4. The backend MUST add the X-Backend-Info token to the Connection 
>> response header, making it a hop-by-hop field that is removed by the 
>> frontend from the downstream response (RFC2616 14.10 and RFC7230 6.1). [Note 
>> there appears to be an httpd bug here that I intend to submit and that needs 
>> to be addressed.]
>> 
>>    Connection: X-Backend-Info
> 
> I'm not sure if this is a stroke of brilliance or extra work that isn't 
> needed :-) . As we discussed at the Con, it is vital for the proxy to remove 
> the header to avoid leaking any potentially useful information to an attacker 
> out to the 'tubes... but parsing Connection for X-Backend-Info seems like 
> it wouldn't be needed since one could just as well check if X-Backend-Info 
> header is present. I'm probably missing the obvious, but can you help me 
> understand more about why we would want this here instead of treating the 
> presence of the header as a sign to do some kind of work?

Here's a situation that could go wrong if this new header weren't marked as 
hop-by-hop. Imagine if there are two webserver products in a reverse proxy 
topology, something like this:
user-agent ← httpd-proxy ← acme-proxy ← httpd-origin

(the server tiers might in fact be clusters of identically configured hosts).

All 4 tiers are doing HTTP/1.1 cacheing, correctly using Vary: and so on. If 
httpd-origin is sending X-Backend-Info then it must signal to ACME-proxy that 
this is a hop-by-hop header. Let's say httpd-origin signals that workers-free 
is 0. httpd-proxy receives a copy of this header from acme-proxy. 
httpd-proxy incorrectly concludes that workers-free is 0 and starts sending 503 
responses as per its intended configuration, even though acme-proxy would be 
able to serve stale responses from its cache.

The sysadmin contacts the vendor “ACME Proxy”; the vendor asserts that their 
product is conforming to HTTP 1.1 and that the incorrect behaviour is in Apache 
httpd. Which, in my view, it would be.
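
For clarity, the intended behaviour has httpd-origin emitting something like this 
(field name from the proposal, value illustrative):

  HTTP/1.1 200 OK
  Connection: X-Backend-Info
  X-Backend-Info: workers-free=0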

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: Proposal/RFC: informed load balancing

2015-05-02 Thread Tim Bannister
On 1 May 2015, at 01:30, Daniel Ruggeri drugg...@primary.net wrote:
 
 On 4/29/2015 11:54 PM, Jim Riggs wrote:
 
 So, this has come up in the past several times, and we discussed it again 
 this year at ApacheCon: How do we get the load balancer to make smarter, 
 more informed decisions about where to send traffic?
…
string-entry  = string-field = ( token | quoted-string )
 
 A useful token could be status=OK|ERROR|MAINTENANCE where a backend could 
 advertise to the upstream load balancer that it may want to be put in drain 
 mode or something to that effect. Since this list can't/won't be exhaustive 
 of all things people could care to send, let's add some head room in the spec 
 by allowing custom-integer and/or custom-string. Otherwise, I suspect 
 people would cram things into the wrong fields just to get the data back to 
 the proxy.

Although I think it's an approach deprecated by IETF, how about allowing any 
field name provided it's prefixed with “x-”?

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: Listen on UDS

2015-04-30 Thread Tim Bannister
I'd been musing, coincidentally, about being able to run httpd as a FastCGI.

The motivation for this is a packaged webapp - Wordpress, say - that includes 
.htaccess files in the deployed package.
Having the genuine Apache httpd able to serve the application and apply 
.htaccess restrictions would be a boon, even if the daemon listening on port 
443 is different.

-- 
Tim Bannister – is...@c8h10n4o2.org.uk


Re: Unexpected Warnings from Macro Use in 2.4

2015-02-19 Thread Tim Bannister
On 19 Feb 2015, at 13:02, Nick Gearls nickgea...@gmail.com wrote:
 
 Wrong answer: mod_macro uses the syntax $var but also ${var}, which is 
 mandatory if you want the variable to be a part of a string, like in 
 ${var}abc.
 The syntax really clashes with the Define directive, so it should be changed.
 Another unused character could be used, like §

There aren't many suitable symbols left unused.

To make interpolation not clash with Define I'd prefer “${macro:var}”, or 
something like that, to “§{var}”.

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: replacing Date header

2015-02-18 Thread Tim Bannister
On 17 Feb 2015, at 22:21, André Malo n...@perlig.de wrote:
 
 * Eric Covener wrote:
 
 Java application servers like WebSphere and WebLogic provide Apache modules 
 like this.  I don't know how to address the why, I just want to remove the 
 special treatment for mod_proxy / r-proxyreq and only set a Date if one 
 wasn't provided by the handler.  The user I was working with didn't fully 
 understand how how his software re-used the value in the Date header as sent 
 in the handler.
 
 Uhm, I have no real idea about those, but are they not integrated with the 
 proxy framework? ajp?
 
 However, I always saw this Date header handling as a way to enforce RFC 
 compliance (e.g. to overwrite Date-headers in mod_asis handlers and crappy 
 backends). Wrong Date headers may have a huge impact, as I see it. But then, 
 maybe I'm overrating that.

So maybe the logic should be to preserve a Date: header iff it is compliant 
with the relevant RFC? 
With this, modules that want a Date: header automatically added need only to 
ensure they don't assert an apparently valid Date header.

-- 
Tim Bannister – +44 7980408788 – is...@c8h10n4o2.org.uk



Re: disable SSLv3 the same way SSLv2 was disabled in mod_ssl

2015-01-03 Thread Tim Bannister
IMO this is one for packagers (as well as anyone wishing to contribute 
packaging patches).

How did Traffic Server disable SSLv3 – just an edit to the default configuration, 
or code changes as well?

-- 
Tim Bannister - is...@c8h10n4o2.org.uk



 On 2 Jan 2015, at 19:38, Leif Hedstrom zw...@apache.org wrote:
 
 We disabled SSLv3 in the defaults for Traffic Server as well. It's still 
 available to be explicitly turned on though.
 
 -- Leif



Re: disable SSLv3 the same way SSLv2 was disabled in mod_ssl

2015-01-02 Thread Tim Bannister
On 2 Jan 2015, at 18:18, olli hauer <oha...@gmx.de> wrote:
> 
> Hi,
> 
> is there a special reason to keep SSLv3 support on current httpd version 
> (CVE-2014-3566 POODLE attack) ?

See the previous thread starting at http://tinyurl.com/ouyk2cd

My summary:
As you note, major browsers have already disabled SSLv3. It's easy to configure 
httpd not to offer SSLv3 (and this makes a good default for new installs).
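
For new installs that means shipping something like

  SSLProtocol all -SSLv3

in the default TLS configuration.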

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: [Patch] Simplifying mod_alias

2014-12-21 Thread Tim Bannister
On 21 Dec 2014, at 13:48, Graham Leggett <minf...@sharp.fm> wrote:
> 
> This patch implements the above.
> 
> The idea is that the existing syntaxes remain unaltered (and can be 
> deprecated in future), while we introduce new Location syntaxes with a single 
> argument, like so:
> 
> <Location /image>
>   Alias /ftp/pub/image
> </Location>
> <LocationMatch /error/(?<NUMBER>[0-9]+)>
>   Alias /usr/local/apache/errors/%{env:MATCH_NUMBER}.html
> </LocationMatch>
> <Location /one>
>   Redirect permanent http://example.com/two
> </Location>
> <Location /three>
>   Redirect 303 http://example.com/other
> </Location>
> <LocationMatch /error/(?<NUMBER>[0-9]+)>
>   Redirect permanent http://example.com/errors/%{env:MATCH_NUMBER}.html
> </LocationMatch>
> <Location /cgi-bin>
>   ScriptAlias /web/cgi-bin/
> </Location>
> <LocationMatch /cgi-bin/errors/(?<NUMBER>[0-9]+)>
>   ScriptAlias /web/cgi-bin/errors/%{env:MATCH_NUMBER}.cgi
> </LocationMatch>

This might look odd, though:

<Location /gone>
  Redirect 410
</Location>

…so how about adding one new directive e.g. ForceStatus:

<Location /gone>
  ForceStatus 410
</Location>


-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: ApacheCon Austin, httpd track

2014-12-03 Thread Tim Bannister
On 3 Dec 2014, at 16:00, Rich Bowen rbo...@rcbowen.com wrote:
 
 If this content can be put into a half-day three-talk series, where each talk 
 stands alone or works in concert, that would be ideal. Do you think that we 
 can put together like that? Any chance we could even persuade one of the 
 OpenSSL folks to come for that? Anyone have any contacts there?

A day on SSL/TLS could and perhaps should cover both OpenSSL and GnuTLS. 

-- 
Tim Bannister – is...@c8h10n4o2.org.uk



Re: commercial support

2014-11-20 Thread Tim Bannister
On 20 Nov 2014, at 22:00, Jim Jagielski <j...@jagunet.com> wrote:
> 
> Honestly though, how much of the uptake in nginx do people think is actually 
> due to nginx being better or the best choice, and how much do you think 
> is due simply because it's *seen* as better or that we are seen as old and 
> tired?
> 
> This is our 20year anniversary... It would be cool to use that to remind 
> people! :)

Here are some plausible explanations, off the top of my head but with editing.

I reckon that at least some of the perception is down to Apache httpd being 
used in “enterprise” systems that are a long way back from the bleeding edge. 
If your mission-critical system is running a webserver release that's older 
than nginx itself then it's likely that nginx will look and work better.

Another challenge is compatibility. As the default webserver on lots of 
distributions, httpd has a lot of existing users who don't want to see it break 
in an upgrade. For that reason, an upgrade typically won't convert an 
installation from prefork to another MPM. Install nginx… and it performs very 
differently; it's also complicated enough to merit a HOWTO. There won't be as 
many HOWTO guides about a one-line change to select a different MPM.
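
(The one-line change being, on a typical 2.4 install that loads the MPM as a module, 
swapping which LoadModule line is uncommented – paths illustrative:

  #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
  LoadModule mpm_event_module modules/mod_mpm_event.so
)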

There are now plenty of guides to building nginx from source. To be honest, 
this is a bit more straightforward than the equivalent task for httpd 2.4.x 
because operating systems that include httpd 2.2 may well have too-old APR and 
APR-Util  as well. AIUI, nginx has fewer dependencies.


Commercial support sounds nice. I think firms who'd pay for it would really 
like to get a commercially-supported web server bundled with their “enterprise” 
operating system. In that sense, Oracle and Red Hat are already offering 
commercial support for httpd.

-- 
Tim Bannister – is...@c8h10n4o2.org.uk

Re: [Patch] mod_ssl SSL_CLIENT_CERT_SUBJECTS - access to full client certificate chain

2014-11-09 Thread Tim Bannister
On 1 Nov 2014, at 12:41, Graham Leggett <minf...@sharp.fm> wrote:
> 
> The use case this solves is that I want to uniquely identify a certificate 
> and store that identity in an LDAP directory. The most obvious solution - 
> just store the cert in the userCertificate attribute and do a direct binary 
> match - doesn’t work in most directories, as direct certificate matching was 
> forgotten in the specs that were involved (unfortunately).

What's stopping this from working? RFC 4523 calls for the userCertificate to 
contain a DER-encoded version of the user's certificate.

The approach I have in mind is to have the directory searchable by issuer DN 
and serial number, with a subsequent comparison of the certificate retrieved by 
LDAP (DER+base64) against SSL_CLIENT_CERT.

I speculate that this could look like:
   Require expr "%{SSL_CLIENT_CERT} -x509certeq %{LDAP_ATTRIBUTE_USERCERTIFICATE}"

-- 
Tim Bannister – is...@c8h10n4o2.org.uk

Re: [Patch] Async write completion for the full connection filter stack

2014-09-11 Thread Tim Bannister
On 10 Sep 2014, at 18:19, Jim Jagielski j...@jagunet.com wrote:

 On Sep 10, 2014, at 12:07 PM, Graham Leggett minf...@sharp.fm wrote:
 
 Having thought long and hard about this, giving filters an opportunity to 
 write has nothing to do with either data or metadata, we just want to give 
 the filter an opportunity to write. “Dear filter, please wake up and if 
 you’ve setaside data please write it, otherwise do nothing, thanks”. Empty 
 brigade means “do nothing new”.
 
 Hey filter, here is an empty brigade, meaning I have no data for you; if you 
 have setaside data, now is the time to push it thru.

If someone is new to writing filters, I think that's a bit of a gotcha. An 
empty brigade meaning “do nothing new” is easy to code for and unsurprising.

I'm thinking about how I'd help httpd cope with a filter that didn't know about 
this special behaviour. I would introduce a new bucket: NUDGE, and use that to 
wake up filters.

If a filter doesn't understand NUDGE, this could cause a hang from the client's 
point of view. To avoid this, I'd have a configurable timeout after which a 
NUDGE-d filter would get a FLUSH bucket - say 30 seconds as a default, nice and 
high because FLUSH is an expensive operation. Server admins could drop this if 
they know that they have a problematic filter.

If a filter gets a NUDGE and returns APR_SUCCESS I think it makes sense to 
NUDGE the next downstream filter.


A new metadata bucket has all sorts of compatibility issues that need thinking 
about (and I don't know enough about). Changing the meaning of “empty brigade” 
also has compatibility issues but they will show up much later than build time.

-- 
Tim Bannister – is...@jellybaby.net



Re: [RFC] enhancement: mod_cache bypass

2014-08-23 Thread Tim Bannister
On 23 August 2014 14:40:36 GMT+01:00, Mark Montague <m...@catseye.org> wrote:
> On 2014-08-23 5:19, Graham Leggett wrote:
>> On 23 Aug 2014, at 03:50, Mark Montague <m...@catseye.org> wrote:
>>
>>> I've attached a proof-of-concept patch against httpd 2.4.10 that 
>>> allows mod_cache to be bypassed under conditions specified in the 
>>> conf files.
>>
>> Does this not duplicate the functionality of the <If> directives?
>
> No, not in this case:
>
> <If "-z %{req:Cookie}">
>     CacheEnable disk /
> </If>
>
> [root@sky ~]# httpd -t
> AH00526: Syntax error on line 148 of /etc/httpd/conf/dev.catseye.org.conf:
> CacheEnable cannot occur within <If> section
> [root@sky ~]#
>
> Also, any solution has to work within both the quick handler phase and 
> the normal handler phase of mod_cache.
>
>>> # Only serve cached data if no (login or other) cookies are present 
>>> # in the request:
>>> CacheEnable disk / expr=-z %{req:Cookie}
>>
>> As an aside, trying to single out and control just one cache using 
>> directives like this is ineffective, as other caches like ISP caches 
>> and browser caches will not be included in the configuration.
>>
>> Rather control the cache using the Cache-Control headers in the formal 
>> HTTP specs.
>
> The proposed enhancement is about the server deciding when to serve 
> items from the cache.  Although the client can specify a Cache-Control 
> request header in order to bypass the server's cache, there is no good 
> way for a web application to signal to a client when it should do this 
> (for example, when a login cookie is set). The behavior of other caches 
> is controlled using the Cache-Control response header.
>
> This functionality is provided by Varnish Cache: 
> https://www.varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html#cookies
>
> Squid does not currently provide this functionality, but it seems like 
> there is consensus that it should: 
> http://bugs.squid-cache.org/show_bug.cgi?id=2258
>
> Here is a more detailed example scenario, in case it helps.  There are 
> also many other scenarios in which conditionally bypassing mod_cache is 
> useful.
>
> - Reverse proxy setup using mod_proxy_fcgi
> - Static resources served through httpd front-end with response header 
>   "Cache-Control: max-age=14400" so that they are cached by mod_cache, 
>   ISP caches, and browser caches.
> - Back-end pages are dynamic (PHP), but very expensive to generate (1-2 
>   seconds).
> - Back-end sets response header "Cache-Control: max-age=0, s-maxage=14400" 
>   so that mod_cache caches the response, but ISP caches and browser caches 
>   do not.  (mod_cache removes s-maxage and does not pass it upstream).
> - When back-end content changes (e.g., an author makes an update), the 
>   back-end invokes "htcacheclean /path/to/resource" to invalidate the 
>   cached page so that it is regenerated the next time a client requests it.
> - Clients have multiple cookies set.  Tracking cookies and cookies used 
>   by JavaScript should not cause a mod_cache miss.
> - Dynamic pages that are generated when a login cookie is set should not 
>   be cached.  This is accomplished by the back-end setting the response 
>   header "Cache-Control: max-age=0".
> - However, when a login cookie is set, dynamic pages that are currently 
>   cached should not be served to the client with the login cookie, while 
>   they should still be served to all other clients.

A web application can and should use
Cache-Control: private
or
Vary:
headers on its responses, to avoid having them be incorrectly served from a 
shared cache.
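
If the application itself can't easily be changed, the origin httpd can bolt that on 
with something like the following (the cookie name is invented):

  SetEnvIf Cookie "wordpress_logged_in" IS_LOGGED_IN
  Header merge Cache-Control private env=IS_LOGGED_IN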

I can see a case for webapps having better control over invalidation but I 
wouldn't do it like this.

If there's still demand, why not arrange for CacheEnable to be valid within 
<If>?

Tim
-- 
Tim Bannister – is...@jellybaby.net


Re: mod_autoindex issue with multibyte chars

2014-07-16 Thread Tim Bannister
On 16 Jul 2014, at 18:34, Guenter Knauf <fua...@apache.org> wrote:

> Hi all,
> few days back I found that mod_autoindex seems to have a prob with multibyte 
> chars in filenames; the trailing spaces seem to be calculated for the real 
> string, but since they're finally displayed in the browser as one char this 
> causes lack of spaces and the following data is misaligned ...
> I've seen this 1st with Windows and thought it might be because the 
> filesystem uses another charset than httpd; but today I tested some more, and 
> see same issue also on Linux:
> http://people.apache.org/~fuankg/testautoindex/
> 
> I've not yet looked through mod_autoindex due lack of time, but I thought 
> just I mention it here in case someone finds quickly a fix;
> affected are 2.2.x and 2.4.x and most likely trunk too.

This is a documented b^Hfeature: “HTMLTable … is necessary for utf-8 enabled 
platforms or if file names or description text will alternate between 
left-to-right and right-to-left reading order”

Changing the default IndexOptions (e.g. to include “XHTML HTMLTable 
FancyIndexing”) would mitigate this.
I wouldn't change the default behaviour for 2.2.x / 2.4.x though.


-- 
Tim Bannister – is...@jellybaby.net



Re: stop copying footers to r-headers_in?

2014-07-15 Thread Tim Bannister
On 15 Jul 2014, at 15:38, Rüdiger Plüm ruediger.pl...@vodafone.com wrote:

 -Original Message-
 From: Eric Covener [mailto:cove...@gmail.com]
 Sent: Dienstag, 15. Juli 2014 15:25
 To: Apache HTTP Server Development List
 Subject: Re: stop copying footers to r-headers_in?

 What do people think about allowing two-character log formats?  I
 think patch below only breaks someone who had a %XX where XX is a
 registered two digit tag and they expect the 1 char + literal (seems
 safe enough to me even for 2.2)
 
 Is there a way for people with such a setup to fix this with a different 
 config?
 From the top of my head I would say no and that would be a blocker.

%{thing}[label] perhaps? Maybe it's too confusing / contrived, maybe it's not 
needed (yet).


Like this, for an imaginary new format string label %[foo]
LogFormat %{%Y-%m-%d}tT%{%H:%M:%S}t%{usec_frac}t %{usec_frac}[foo]

and for another label %[bar]:
LogFormat %{sec}t%{msec_frac}t %s %[bar] %L %{REQUEST_STATUS} -strcmatch 
'5*'


-- 
Tim Bannister – is...@jellybaby.net



Re: Change of web site layout

2014-06-18 Thread Tim Bannister
On 18 Jun 2014, at 17:03, Ben Reser b...@reser.org wrote:

 The best TLP site I've ever seen at Apache is CouchDB's
 
 http://couchdb.apache.org/

Some notes to save time (I hope):
I've noticed that there's a credit to “Apache Cordova team for the original 
design” (Cordova is also a TLP)
The Cordova website project is 
https://issues.apache.org/jira/browse/CB/component/12320562/
The README for Cordova's website is at 
https://svn.apache.org/repos/asf/cordova/site/README.md
It's a different CMS using, AFAICT, hastings.

-- 
Tim Bannister – is...@jellybaby.net



Re: Change of web site layout

2014-06-17 Thread Tim Bannister
On 17 Jun 2014, at 14:24, Rich Bowen rbo...@rcbowen.com wrote:

 On 06/17/2014 05:19 AM, Daniel Gruno wrote:
 On 06/17/2014 12:46 AM, Tim Bannister wrote:
 On 16 Jun 2014, at 22:23, Rich Bowen wrote:
 
 In addition, I have some comments about your design proposal:
 
 - The apache.org design might be changing RSN (it's being discussed), so 
 using it might not be the most optimal route.
 There is no requirement that a project site look like the main foundation 
 site. Pick any project. Say, http://flume.apache.org/ or 
 http://cloudstack.apache.org/ or http://etch.apache.org/ - each has their own 
 unique feel.

If I could, I'd have httpd and Tomcat use the same site structure. These two 
projects are complements / substitutes, and users don't particularly like 
learning a new site layout for each thing.

Tomcat has the same sort of problem with having multiple versions that are out 
there in production use, too.


…I'm saying this even though I don't have the time or the contacts to liaise 
with the Tomcat web people properly.


 - You use JavaScript to display the tabs. This, apparently, needs to be
   done in a way that people without JS can view it as well. I have tried
 to accommodate that in my second proposal (see link above).
 - The documentation link just leads to our boring and unattractive docs
   front page. I would prefer if people can go directly to documentation
   for e.g. 2.4 right away from the front page (dropdowns?).
 
 Yes, please.

“+1” to this.



-- 
Tim Bannister - +44 7980408788 - is...@jellybaby.net



Re: SSL and NPN

2014-04-28 Thread Tim Bannister
On 28 Apr 2014, at 22:50, Jim Jagielski <j...@jagunet.com> wrote:

> Any reason to NOT include
> 
>   http://svn.apache.org/viewvc?view=revision&revision=1332643
>   http://svn.apache.org/viewvc?view=revision&revision=1487772
> 
> in 2.4??

I don't think https://www.imperialviolet.org/2013/03/20/alpn.html is enough 
reason not to backport, but I'll mention it.

-- 
Tim Bannister – is...@jellybaby.net



Re: SSLUserName - mod_auth_user

2014-04-21 Thread Tim Bannister
On 21 Apr 2014, at 12:38, Graham Leggett minf...@sharp.fm wrote:

 Hi all,
 
 Right now, we have the SSLUserName directive, which takes an arbitrary SSL 
 variable and turns it into a username for the benefit of the request. This 
 has the downside that only SSL variables (and some CGI variables) are usable 
 as usernames, and it combines with FakeBasicAuth to create undesirable side 
 effects.
 
 What would be cleaner is if we deprecate SSLUserName and create a 
 mod_auth_user.c module that declares AuthType User, and then offers a 
 AuthUser directive that sets the user based on an arbitrary expression from 
 ap_expr.h. This will make client certificates easier to work with, and 
 provide options for authentication that aren't based purely on logins, such 
 as tokens in URLs, etc.

What string should httpd return to mean “no user found”? Users are going to 
want this.
I suggest "" (the empty string).

PS. I'd be tempted to call it AuthType Expr.


-- 
Tim Bannister - is...@jellybaby.net



Re: [PATCH ASF bugzilla# 55897] prefork_mpm patch with SO_REUSEPORT support

2014-03-17 Thread Tim Bannister
I'm afraid I don't understand this particular part from 
httpd_trunk_so_reuseport.patch:

#ifndef SO_REUSEPORT
#define SO_REUSEPORT 15
#endif

Why 15? Is this going to be portable across different platforms?

-- 
Tim Bannister – is...@jellybaby.net



Re: Improving The RewriteMap Program Feature

2014-03-02 Thread Tim Bannister
On 1 Mar 2014, at 12:20, Eric Covener <cove...@gmail.com> wrote:

>> If the RewriteMap Program fails, the code within mod_rewrite returns an
>> empty string rather than NULL. In my tests this caused /index.htm to be
>> returned as the URL which is not very useful. I think it makes more sense to
>> handle this situation as a NULL so that the default key is used as we could
>> then provide a backup method.
>> eg:
>> RewriteRule ^/proxy/(.*) ${proxymap:$1|/proxybackup/$1} [P]
>> RewriteRule ^/proxybackup/(.*) /proxybackup.php?url=$1 [L]
>>
>> Looking at the mod_rewrite source code this appears to be a one liner change
>> in lookup_map_program:
>> if (i == 4 && !strcasecmp(buf, "NULL"))
>> .
>> becomes:
>> if ((i == 0) || (i == 4 && !strcasecmp(buf, "NULL"))) {
>> .
>>
>> Is this minor change something that you would consider implementing?
>
> I think it would need to be opt-in in 2.4, as changing it could break
> configs depending on the current behavior.  Maybe some extra flag on
> the definition of the RewriteMap or a RewriteOption?

Here's how I'd want it:

RewriteMap foo prgfb:/www/bin/example.pl

(prgfb — program with fallback).


I can write and submit a patch for this if there's interest.

-- 
Tim Bannister – is...@jellybaby.net



Re: Improving The RewriteMap Program Feature

2014-03-02 Thread Tim Bannister
On 2 Mar 2014, at 16:46, Tim Bannister is...@jellybaby.net wrote:

 On 1 Mar 2014, at 12:20, Eric Covener cove...@gmail.com wrote:
 
 If the RewriteMap Program fails, the code within mod_rewrite returns an
 empty string rather than NULL. In my tests this caused /index.htm to be
 returned as the URL which is not very useful. I think it makes more sense to
 handle this situation as a NULL so that the default key is used as we could
 then provide a backup method.
 eg:
RewriteRule ^/proxy/(.*) ${proxymap:$1|/proxybackup/$1} [P]
RewriteRule ^/proxybackup/(.*) /proxybackup.php?url=$1 [L]
 
 Looking at the mod_rewrite source code this appears to be a one liner change
 in lookup_map_program:
if (i == 4 && !strcasecmp(buf, "NULL"))
.
 becomes:
if ((i == 0) || (i == 4 && !strcasecmp(buf, "NULL"))) {
.
 
 Is this minor change something that you would consider implementing?
 
 I think it would need to be opt-in in 2.4, as changing it could break
 configs depending on the current behavior.Maybe some extra flag on
 the definition of the RewriteMap or a RewriteOption?
 
 Here's how I'd want it:
 
 RewriteMap foo prgfb:/www/bin/example.pl
 
 (prgfb — program with fallback).

In other words, a user could choose “prg” or “prgfb”. prg selects the legacy, 
httpd-2.0 behaviour. prgfb selects a new behaviour which handles map program 
failure as NULL.

Eventually (2.6?), httpd could merge “prg” and “prgfb” into a single map type.


-- 
Tim Bannister – is...@jellybaby.net



Re: Improving The RewriteMap Program Feature

2014-03-01 Thread Tim Bannister
On 28 Feb 2014, at 19:52, Kev s7g2...@yahoo.co.uk wrote:

 As for working around the potential bottleneck, I think this would be more 
 complicated. One solution would be to launch a pool of programs and allow 
 incoming requests to be handled by the first unlocked program that was still 
 running. Does this sound like a sensible approach and does anybody see any 
 potential drawbacks with this?

I think that's a sensible approach… but the pooling behaviour should be 
implemented by the RewriteMap program and not by httpd. httpd could ship a 
contributed example program to show how it might be done.

-- 
Tim Bannister – is...@jellybaby.net



Re: [VOTE] obscuring (or not) commit logs/CHANGES for fixes to vulnerabilities

2014-01-12 Thread Tim Bannister
On 12 Jan 2014, at 13:33, Jeff Trawick  wrote:

 On Fri, Jan 10, 2014 at 8:38 AM, Jeff Trawick traw...@gmail.com wrote:
 Open source projects, ASF or otherwise, have varying procedures for commits 
 of fixes to vulnerabilities. ...
 
 I plan to update http://httpd.apache.org/dev/guidelines.html based on the 
 outcome of the vote.
 
 Folks, if you want to express an opinion but haven't yet, please do so before 
 Tuesday.
 
 I'll add something very close to the following, unless the vote changes 
 considerably:
 
 ---cut here---
 Open source projects, ASF or otherwise, have varying procedures for commits 
 of vulnerability fixes.  One important aspect of these procedures is whether 
 or not fixes to vulnerabilities can be committed to a repository with commit 
 logs and possibly CHANGES entries which purposefully obscure the 
 vulnerability and omit any available vulnerability tracking information.  The 
 Apache HTTP Server project has decided that it is in the best interest of our 
 users that the initial commit of such code changes to any branch will provide 
 the best description available at that time as well as any available tracking 
 information such as CVE number when committing fixes for vulnerabilities to 
 any branch.  Committing of the fix will be delayed until the project 
 determines that all of the information about the issue can be shared.
 
 In some cases there are very real benefits to sharing code early even if full 
 information about the issue cannot, including the potential for broader 
 review, testing, and distribution of the fix. This is outweighed by the 
 concern that sharing only the code changes allows skilled analysts to 
 determine the impact and exploit mechanisms but does not allow the general 
 user community to determine if preventative measures should be taken.
 ---cut here---

s/outweighed by/balanced against/ ?

-- 
Tim Bannister – is...@jellybaby.net



Re: Revisiting: xml2enc, mod_proxy_html and content compression

2014-01-05 Thread Tim Bannister
On 5 Jan 2014, at 02:21, Nick Kew wrote:

 IIRC the OP wants to decompress such contents and run them through 
 mod_proxy_html.  I don't think that works with any sane setup: running 
 non-HTML content-types through proxy_html will always be an at-your-own-risk 
 hack.

I've believed for a while that the right way to address this is for httpd to 
support gzip Transfer-Encoding which is always hop-by-hop and applies to the 
transfer rather than the entity being transferred. For this scenario, it could 
look like this:

[Client] ⇦ gzip content-encoding ⇦ [transforming reverse proxy] ⇦ gzip,chunked 
transfer-encodings ⇦ [origin server]

(I'm assuming that the client doesn't negotiate gzip transfer encoding)


Of course, this still won't help with a badly-configured origin server.

-- 
Tim Bannister – is...@jellybaby.net



Re: Revisiting: xml2enc, mod_proxy_html and content compression

2014-01-04 Thread Tim Bannister
On 4 Jan 2014, at 00:20, Nick Kew wrote:
 On 3 Jan 2014, at 13:39, Thomas Eckert wrote:
 
 This does not solve the problem regarding .gz files however. They still 
 suffer from a double-compression.
…
 I'd say any such fix must lie in adding a compression-sniffing option
 to mod_deflate:
  - let the inflate filter sniff for compressed contents
  - let the deflate filter sniff for already-compressed contents
 even if the headers fail to declare it.
 
 An option with big at your own risk warnings.

Gzip compressed content sometimes gets served with no declared encoding and a 
media type of, e.g., “application/x-gzip”. I reckon that's more common than 
serving it as application/octet-stream or with no Content-Type: declared.

mod_deflate could use this information to avoid compressing the response, and 
without sniffing the content.

This more limited approach is already available through configuration, so maybe 
the way to handle this is via a change to documentation / default 
configuration, rather than code.

Any thoughts?

-- 
Tim Bannister – is...@jellybaby.net



Re: Behavior of Host: vs. SNI Hostname in proxy CONNECT requests

2013-12-13 Thread Tim Bannister
On 13 Dec 2013, at 06:05, Kaspar Brand httpd-dev.2...@velox.ch wrote:
 On 12.12.2013 20:06, William A. Rowe Jr. wrote:
 On Thu, 12 Dec 2013 09:28:16 + Plüm, Rüdiger, Vodafone Group 
 ruediger.pl...@vodafone.com wrote:
 
 Yes, and?  Why would this differ from the historical handling of the Host: 
 header?  The HTTP Host header is not the dns name of this hop, but the 
 hostname component of the uri.  This logic has completely broken forward 
 proxies in httpd on the 2.4 and 2.2.25 releases.
 
 completely broken is a relatively bold statement. As far as I can tell, it 
 essentially boils down to the interpretation of the url parameter in the 
 ProxyPass directive (a partial URL for the remote server, as the docs 
 currently say). In my understanding, in the https:// case, it's a URL for 
 which mod_proxy_http should perform TLS name checking (à la RFC 6125), not 
 simply a hostname [plus port] for opening an TCP connection and then issuing 
 a CONNECT request.

ProxyPass doesn't get used on my forward proxies. This is the case where, e.g., 
your user-agent wants to reach an HTTPS URL via a proxy and so sends, e.g.:

CONNECT remote.host.example:443 HTTP/1.1
Host: remote.host.example

over an HTTPS connection to proxy.example. The configuration for my forward 
proxies isn't much different from the example at 
http://httpd.apache.org/docs/current/mod/mod_proxy.html#examples


I'm not sure what the TLS SNI hostname should contain but, AIUI, clients would 
send proxy.example. It's not reasonable to expect the proxy server to know 
the private key for remote.host.example

-- 
Tim Bannister – is...@jellybaby.net



Re: Forbid directive in core?

2013-09-28 Thread Tim Bannister
On 28 Sep 2013, at 14:19, Eric Covener cove...@gmail.com wrote:

 I've come back to this because I've struggled in another area with 
 access_checker vs. access_checker_ex.  I really think we need basic access 
 control outside of Require and Satisfy.
 
 I have a copy of the Forbidden directive in mod_authz_core and I am 
 currently allowing ON/OFF flags.
 
 * using a new directive means someone won't casually add forbidden OFF when 
 they think they're turning on more access control with Require
 * we can document that forbidden OFF is extreme from the start.
 
 I am on the fence about having an argument at all.  My fear is that it will 
 evolve into a misguided FAQ of 'try forbidden OFF if you get a 403' then 
 we're right back to
 
 <Files ".ht*">
 Forbidden
 </Files>
 
 ...
 
 <Location />
 ...
 Require ldap-group cn=foo
 Forbidden OFF
 </Location>

The second time in a few days, I'm going to suggest adding an optional 
parameter to a directive. 

Taking a leaf out of cascading stylesheets, how about “Forbidden On 
Level=Important” and perhaps “Forbidden On Level=Indelible”?

(the idea being that the “Indelible” level can't be removed).


This lets distributions ship a fairly safe default configuration but gives 
users enough scope to hang themselves. With this, “forbidden OFF” isn't so 
risky and “Forbidden Off Level=Important” can carry a health warning (and 
perhaps an ErrorLog warning as well).


Too complex or worth having? What do people think? If there's appetite for it 
then I will have  a go at providing a patch.

-- 
Tim Bannister – is...@jellybaby.net



Re: any interest in massaging the new error log provider to fit into 2.4.x?

2013-09-26 Thread Tim Bannister
 You realize ":" is a problematic overload for Netware (and in theory for Win32 
 unless you dodge the X: single-char drive letter bullet)?
 
 What about a [provider]path syntax instead?  Any other good ideas? A 
 notoriously bad idea was the (size) overload of the SSLSessionCache directive.


How about making these pairs of directives equivalent:

ErrorLog /var/log/apache2/error.log
ErrorLog file /var/log/apache2/error.log

ErrorLog syslog:user
ErrorLog syslog syslog:user

ErrorLog |/usr/local/bin/loghandler -parameter foo
ErrorLog pipe-with-shell /usr/local/bin/loghandler -parameter foo


…and by analogy, these could be valid too:

ErrorLog syslog 127.0.0.1:user
ErrorLog syslog [::1]:user
ErrorLog console 
ErrorLog relp remotehost.example
ErrorLog compresslog /var/log/apache2/error.log.gz

-- 
Tim Bannister – is...@jellybaby.net



Re: mod_autoindex string pluggability

2013-08-05 Thread Tim Bannister
How about implementing XHTML → JSON as a filter? Either with existing modules 
or with something dedicated to autoindex.

Tim

On 05/08/2013 7:26, Sven Dowideit wrote:
Hello Everyone,

I'm scratching an itch to make mod_autoindex output what I want, and
would love to know what, if anything would make the changes merge-able.

In its simplest form, I'd like apache to be able to give me an index in
JSON format - previously, I've parsed the html in javascript, but
somehow I think I can do better.

While I was reading the code (today) it occurred to me that there are a
lot of if statements throughout, which could be eliminated by moving
(obscuring) the output strings to a switchable container (right now I'm
using arrays of strings for my simplicity - I don't know this codebase
any better than you know me :)

so here is the kind of thing I started to do (partial diff, its all
really kind of dull - I've extracted the HTML/XHTML strings into another
similarly replaceable array):


#define STRING_INDEX_START   0
#define STRING_INDEX_END 1

const char *table_index_string[] = {
    "<table>\n   <tr>",
    "</table>\n"
};

const char *table_index_string_stylesheet[] = {
    "<table id=\"indexlist\">\n   <tr class=\"indexhead\">",
    "</table>\n"
};

const char *fancy_index_string[] = {
    "<pre>",
    "</pre>\n"
};

const char *default_index_string[] = {
    "<ul>",
    "</ul>\n"
};

/* set the default string set (choose alternatives based on user conf
options) */
const char **index_string = default_index_string;

@@ -1873,23 +1872,14 @@ static void output_directories(struct ent **ar,
int n,
 }
 }
 if (autoindex_opts & TABLE_INDEXING) {
-ap_rvputs(r, breakrow, "</table>\n", NULL);
+ap_rputs(breakrow, r);
 }
 else if (autoindex_opts & FANCY_INDEXING) {
 if (!(autoindex_opts & SUPPRESS_RULES)) {
-ap_rputs("<hr", r);
-if (autoindex_opts & EMIT_XHTML) {
-ap_rputs(" /", r);
-}
-ap_rputs("</pre>\n", r);
-}
-else {
-ap_rputs("</pre>\n", r);
+ap_rputs(output_string[STRING_HR], r);
 }
 }
-else {
-ap_rputs("</ul>\n", r);
-}
+ap_rputs(index_string[STRING_INDEX_END], r);
 }

Cheers
Sven




Re: Struggling with AuthMerging

2013-07-30 Thread Tim Bannister
On 31 Jul 2013, at 00:18, Mikhail T. wrote:

 Hello!
 
 I realize, configurations questions aren't meant for this list, but I'm 
 beginning to suspect a bug...

I'd try the users list first. The server might be working properly and it's 
just the documentation that has fallen short.

Tim

-- 
Tim Bannister – is...@jellybaby.net



Re: [Bug 45023] DEFLATE preventing 304 NOT MODIFIED response

2013-07-09 Thread Tim Bannister
On 9 Jul 2013, at 15:49, Eric Covener cove...@gmail.com wrote:

 What to do in 2.4?  Maybe still early enough to still change 2.4 behavior?

Roy Fielding links this to bug #39727…

I still want to push for gzip Transfer-Encoding: in trunk (and maybe 2.4 as 
well). It works, but my code is far too ugly to consider committing:

https://issues.apache.org/bugzilla/show_bug.cgi?id=52860

Any help is definitely welcome.

-- 
Tim Bannister – is...@jellybaby.net



Re: [Bug 45023] DEFLATE preventing 304 NOT MODIFIED response

2013-07-09 Thread Tim Bannister
On 9 Jul 2013, at 15:56, Tim Bannister is...@jellybaby.net wrote:

 On 9 Jul 2013, at 15:49, Eric Covener cove...@gmail.com wrote:
 
 What to do in 2.4?  Maybe still early enough to still change 2.4 behavior?
 
 Roy Fielding links this to bug #39727…
 
 I still want to push for gzip Transfer-Encoding: in trunk (and maybe 2.4 as 
 well). It works, but my code is far too ugly to consider committing:

I may as well add that there are two reasons for wanting to see this in httpd.

First, I think availability in httpd will (slowly) drive adoption by clients 
because of httpd's share of the market. There's no real issue with legacy 
clients because existing browsers don't request gzip transfer-encoding (proxies 
are more of an issue).

Second, most webservers treat transfer-encoding as two states (identity or 
chunked) and some even store this in a bool. Retrofitting compressed transfer 
encodings into this kind of code is much more of a challenge. I think httpd is 
the only webserver (or reverse proxy) with the foundations for this kind of 
enhancement.

-- 
Tim Bannister – is...@jellybaby.net



Re: Forbid directive in core?

2013-06-10 Thread Tim Bannister
On 10 Jun 2013, at 14:35, Eric Covener cove...@gmail.com wrote:

 I'd like to add an immutable Forbid directive to the core and use it in some 
 places in the default configuration instead of require all denied.
 
 http://people.apache.org/~covener/forbid.diff
 
 This protects from a broad Location or If being added that supercedes 
 Directory/Files.
 
 I thought someone might object to the duplication w/ AAA or the presence in 
 the core, so opting for RTC.


Just a comment: other places that do broadly similar things often have a “deny 
by default” philosophy. I like this approach.
Obviously this isn't going to please admins with existing configurations, so is 
there a way to design the mechanism so it's still possible to get something 
more strict than we have at the moment?

In terms of directives, this could look like:

<Directory />
  # For example, insist that exemptions must be defined in the same place as 
the Forbid is set.
  Forbid
  ForbidExemption /srv/web /nfs/foo/bar
</Directory>

# Require HTTPS except from IPv4 localhost
<If "%{REQUEST_SCHEME} != 'HTTPS' && (! -R 127.0.0.0/8 )">
  # Expression evaluation doesn't need exemptions
  Forbid
</If>


-- 
Tim Bannister – is...@jellybaby.net



Re: Forbid directive in core?

2013-06-10 Thread Tim Bannister
On 10 Jun 2013, at 15:17, Graham Leggett minf...@sharp.fm wrote:
 On 10 Jun 2013, at 3:35 PM, Eric Covener cove...@gmail.com wrote:
 
 I'd like to add an immutable Forbid directive to the core and use it in some 
 places in the default configuration instead of require all denied.
 
 http://people.apache.org/~covener/forbid.diff
 
 This protects from a broad Location or If being added that supercedes 
 Directory/Files.
 
 Does Location supercede Directory/Files?
 
 My understanding is that if the Directory/Files says no, then the access is 
 denied, regardless of what Location says. Or to state it another way, we are 
 successful until the first directive comes along that says denied. We don't 
 deny, and then later on change our mind and succeed again.

I think that “dangerous” behaviour IS how httpd behaves. Have a look at the end 
of http://httpd.apache.org/docs/2.4/sections.html#merging

-- 
Tim Bannister – is...@jellybaby.net



Re: disable pid file writing?

2013-05-10 Thread Tim Bannister
On 10 May 2013, William A. Rowe, Jr. wrote:

 On Wed, 08 May 2013 19:08:56 -0500 Daniel Ruggeri drugg...@primary.net 
 wrote:
 
 On 5/8/2013 3:29 PM, Rainer Jung wrote:
 Careful: I didn't test it but we delete the pid file during web server 
 shutdown. That might remove /dev/null then.
 
 On a quick look through the code I had the impression you can not easily 
 get rid of the pid file.
 Agreed - setting to /dev/null under the current code also fails
 startup anyway with the following error:
 (20014)Internal error: Error retrieving pid file /dev/null
 Remove it before continuing if it is corrupted.
 
 I haven't looked into it any further than that, though.
 
 Yes, to both concerns, it definitely needs special treatment with a strcmp() 
 (as I had hinted in my original note).  But there isn't a sane reason to 
 honor /dev/null, whereas there's no reason you couldn't name a pidfile 'none' 
 in the serverroot directory.  That's why I thought it would make a good 
 no-pid sentinel value.

How about "" as a non-sane name? /dev might be /Devices on some arcane 
Unix-like system but "" isn't a valid filename anywhere I've ever seen.
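
The special-casing Bill hinted at could then be about this small (purely
illustrative names, not the actual mpm_common code):

/* Hypothetical sketch: an empty PidFile value means "no pid file at all". */
static void maybe_write_pid_file(const char *pid_fname)
{
    if (pid_fname == NULL || *pid_fname == '\0') {
        return;       /* skip both writing at startup and removing at exit */
    }
    /* ... existing create/write/remove logic would continue here ... */
}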

-- 
Tim Bannister – is...@jellybaby.net



Re: mod_cache with Cache-Control no-cache= or private=

2013-03-13 Thread Tim Bannister
On 13 Mar 2013, at 17:41, Yann Ylavic ylavic@gmail.com wrote:
 On Wed, Mar 13, 2013 at 6:35 PM, Tom Evans tevans...@googlemail.com wrote:
 On Wed, Mar 13, 2013 at 5:27 PM, Yann Ylavic ylavic@gmail.com wrote:
 
 How would the origin invalidate a Set-Cookie, with an empty one ?
 
 Regards,
 Yann.
 
 Set it again, with an in the past expiry date.
 
 Well, that's not exactly the same thing, the user may have a valid Cookie 
 (which is not the one cached) the origin wants to keep going on.
 I meant invalidating the cached cookie, not the one of the user.


Is this the situation you're worried about:

ClientA: GET /foo HTTP/1.1
ClientA: Host: …

ReverseProxy: GET /foo HTTP/1.1
ReverseProxy: Host: …

Origin: HTTP/1.1 200 OK
Origin: Date: …
Origin: Set-Cookie: data=AA
Origin: Cache-Control: private=Set-Cookie

ReverseProxy: HTTP/1.1 200 OK
ReverseProxy: Date: …
ReverseProxy: Set-Cookie: data=AA
ReverseProxy: Cache-Control: private=Set-Cookie



ClientB: GET /foo HTTP/1.1
ClientB: Host: …
ClientB: Cookie: data=BB

ReverseProxy: GET /foo HTTP/1.1
ReverseProxy: Host: …
ReverseProxy: Cookie: data=BB

Origin: HTTP/1.1 304 Not Modified
Origin: Date: …
Origin: Cache-Control: private=Set-Cookie



This should just work. The final reply from the caching reverse proxy should 
look like this:
ReverseProxy: HTTP/1.1 304 Not Modified
ReverseProxy: Date: …

and the Set-Cookie: header from the stored response would not be used (in fact, 
the proxy may have elected not to store it). The origin doesn't have to mention 
that header in the 304 response.


-- 
Tim Bannister – is...@jellybaby.net



Re: If/If-Match don't work for COPY

2013-02-26 Thread Tim Bannister
On 25 Feb 2013, at 21:56, Reindl Harald wrote:
Am 25.02.2013 22:47, schrieb Timothy Wood:
 Sending a If or If-Match header with an invalid ETag doesn't result in a 412 
 Precondition Failed
 
 why in the world should it?
…
 why would you response with a 412?

Maybe if you want to create a copy but only if you haven't lost an update? 
ETags are used to avoid lost updates; checking that cached data are fresh is 
just a common special case.

-- 
Tim Bannister – is...@jellybaby.net





Re: mod_lbmethod_byrequests required to define a BalancerMember

2012-12-28 Thread Tim Bannister
On 28 Dec 2012, at 16:11, Eric Covener cove...@gmail.com wrote:

 When defining a balancer, mod_lbmethod_byrequests is always looked up
 explicitly and used as the initial LB method.
 
 I am curious how others feel about this:
 
 [ ] document that mod_lbmethod_byrequests needs to be loaded and
 improve the error
 [ ] make it work if ProxySet lbmethod=other occurs before BalancerMember
 [x] make it work if  ProxySet lbmethod=other occurs after BalancerMember
 [ ] refactor byrequests back into mod_proxy or mod_proxy_balancer so
 it's always available

…I like that one and would also like the moon on a stick please.

Maybe there could be a very simple lbmethod that isn't byrequests, and is 
always available? For example, purely random allocation using a poor quality 
PRNG?
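
By "poor quality PRNG" I mean nothing fancier than the sketch below. It is
deliberately written as a standalone function rather than against the real
mod_proxy structures, so the usable[] flags and the types are placeholders:

/* Pick a random usable worker. A linear congruential generator is plenty:
 * the aim is crude spreading of load, not cryptographic quality. */
static unsigned int lcg_state = 12345u;

static unsigned int cheap_rand(void)
{
    lcg_state = lcg_state * 1103515245u + 12345u;  /* classic LCG constants */
    return lcg_state >> 16;
}

/* usable[] marks which of the n candidate workers may take a request;
 * returns an index into that array, or -1 if nothing is available. */
static int pick_random_worker(const int *usable, int n)
{
    int i, candidates = 0, target;

    for (i = 0; i < n; i++)
        candidates += usable[i] ? 1 : 0;
    if (candidates == 0)
        return -1;

    target = (int)(cheap_rand() % (unsigned int)candidates);
    for (i = 0; i < n; i++) {
        if (usable[i] && target-- == 0)
            return i;
    }
    return -1;    /* not reached */
}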

-- 
Tim Bannister – is...@jellybaby.net





Re: The Case for a Universal Web Server Load Value

2012-11-15 Thread Tim Bannister
On 15 Nov 2012, at 07:01, Issac Goldstand wrote:
 On 15/11/2012 00:48, Tim Bannister wrote:
 On 14 Nov 2012, at 22:19, Ask Bjørn Hansen wrote:
 The backend should/can know if it can take more requests.  When it can't it 
 shouldn't and the load balancer shouldn't pass that back to the end-user 
 but rather just find another available server or hold on to the request 
 until one becomes available (or some timeout value is met if things are 
 that bad).
 
 This only makes sense for idempotent requests. What about a POST or PUT?
 
 What's the problem?  LB will get the request, send OPTIONS * to the backends 
 to find an available one and only then push the POST/PUT back to it...

Sorry; I was trying to be brief but that meant skipping some details.

We have to assume that at some point we have uneven loading and that there is a 
backend with spare capacity (otherwise, yeah, no loadbalancer will help). A 
backend that started off responsive may slow down due to load but still be able 
to keep the TCP connection alive. With GET, we can just chuck requests at the 
backends and only decide what to do when a request goes bad or the response is 
late. GET's idempotency means we can retry the same request with a different 
backend. This strategy doesn't work with POST etc.

Uneven load could arise through imperfect balancing by a reverse proxy, or it 
could be exogenous – maybe one of the backends has fired off an expensive 
scheduled task?


PS. If we are doing load skewing or otherwise managing the number of active 
backends, we definitely want a way to learn the load on each backend. A bit of 
standardisation would be nice here (de facto or otherwise). Apache httpd is a 
good place to start off, because of its market share, even if this goes beyond 
the scope of httpd itself.

-- 
Tim Bannister – is...@jellybaby.net





Re: The Case for a Universal Web Server Load Value

2012-11-14 Thread Tim Bannister
On 14 Nov 2012, at 18:49, Ask Bjørn Hansen wrote:

 I really like how Perlbal does it:
 
 It opens a connection when it thinks it needs more and issues a (by default, 
 it's configurable) OPTIONS * request and only after getting a successful 
 response to the test will it send real requests on that connection (and then 
 it will keep the connection open with Keep-Alive for further requests).

X-Server-Load: would still be an improvement, eg with this response to OPTIONS:
HTTP/1.1 200 OK
Date: Wed, 14 Nov 2012 19:00:00 GMT
Server: Apache/2.5.x
X-Server-Load: 0.999

…the balancer might decide to use a backend that is reporting a lower load.

-- 
Tim Bannister – is...@jellybaby.net





Re: The Case for a Universal Web Server Load Value

2012-11-14 Thread Tim Bannister
On 14 Nov 2012, at 22:19, Ask Bjørn Hansen wrote:

 I know I am fighting the tide here, but it's really the wrong smarts to put 
 in the load balancer.
 
 The backend should/can know if it can take more requests.  When it can't it 
 shouldn't and the load balancer shouldn't pass that back to the end-user but 
 rather just find another available server or hold on to the request until one 
 becomes available (or some timeout value is met if things are that bad).

This only makes sense for idempotent requests. What about a POST or PUT?


For a plausible example that mixes POST and GET: a cluster of N webservers 
providing SPARQL HTTP access to a triplestore. Most queries will use GET but 
some might use POST, either because they are too long for GET or because the 
query is an update.

The reverse proxy / balancer manager might want to:
 • balance query workload across the active set of webservers
 • spin up an extra backend as required by load
 • skew load onto the minimum number of webservers (and suspend any spares)

SPARQL is an example of a varying workload where none of httpd's existing 
lbmethods is perfect. One complex query can punish a backend whilst its peers 
are idle handling multiple concurrent requests. SPARQL sometimes means POST 
requests; a subset of these are safely repeatable but determining which ones is 
too complex for any HTTP proxy.

-- 
Tim Bannister – is...@jellybaby.net





Re: The Case for a Universal Web Server Load Value

2012-11-13 Thread Tim Bannister
On 12 Nov 2012, at 15:04, Jim Jagielski wrote:

 Booting the discussion:
 
   
 http://www.jimjag.com/imo/index.php?/archives/248-The-Case-for-a-Universal-Web-Server-Load-Value.html


There's bound to be more than one way to do it :-)

I'm afraid I don't favour providing status data in every response. Doing it 
that way means that the reverse proxy has to filter something out and it isn't 
really clean HTTP. Would a strict implementation need to throw in a Vary: * as 
well?


Instead, I would rather have load information provided via something broadly 
RESTful. httpd already has server-status and a machine readable variant, but 
there's room to improve it. I'd start with offering status via JSON and / or 
XML. I'd prefer XML because of the designed-in extensibility.

With this approach, peers that want frequent server-status updates can request 
this status as often as they like, and can use the usual HTTP performance 
tweaks such as keepalive. A load-balancing reverse proxy can read this 
information, or a separate tool can track it and update the load balancer's 
weightings.



As for how to express load? How about a float where 0.0 represents idle and 1.0 
represents running flat out. A trivial implementation for Unix would take the 
load average and divide it by the number of CPUs.
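
A minimal sketch of that trivial implementation, assuming getloadavg() and
sysconf(_SC_NPROCESSORS_ONLN) are available (true on Linux and the BSDs, but
not everywhere):

#include <stdlib.h>
#include <unistd.h>

/* Return a load figure where 0.0 means idle and 1.0 means flat out.
 * Values above 1.0 are clamped so consumers can treat it as a ratio. */
static double server_load(void)
{
    double loadavg[1];
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);

    if (getloadavg(loadavg, 1) != 1 || ncpu < 1)
        return 0.0;       /* unknown: report idle rather than guess high */

    return (loadavg[0] / ncpu > 1.0) ? 1.0 : loadavg[0] / ncpu;
}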




I would keep all of this separate from whether or not the backend has outright 
failed. Perlbal, and maybe some other software, will check an HTTP connection 
via an initial “OPTIONS *”, and will of course remember when a connection goes 
bad either via a TCP close or a 5xx response.


-- 
Tim Bannister – is...@jellybaby.net





Re: Rethinking be liberal in what you accept

2012-11-07 Thread Tim Bannister
On 7 Nov 2012, at 11:26, Stefan Fritsch wrote:

 considering the current state of web security, the old principle of be 
 liberal in what you accept seems increasingly inadequate for web servers. It 
 causes lots of issues like response splitting, header injection, cross site 
 scripting, etc. The book Tangled Web by Michal Zalewski is a good read on 
 this topic, the chapter on HTTP is available for free download at 
 http://nostarch.com/tangledweb .

 If a method is not registered, bail out early.


Good idea, but it would be nice to be able to use Limit or LimitExcept to 
re-allow it.

-- 
Tim Bannister – is...@jellybaby.net





Re: svn commit: r1406719 - in /httpd/httpd/trunk: CHANGES docs/log-message-tags/next-number include/http_core.h server/core.c server/protocol.c

2012-11-07 Thread Tim Bannister
On 7 Nov 2012, at 18:12, Stefan Fritsch wrote:
 On Wed, 7 Nov 2012, Graham Leggett wrote:
 
 New directive HttpProtocol which allows to disable HTTP/0.9 support.
 
 It feels wrong targeting 0.9 only, would it be possible to do this in a 
 generic way, say by listing the ones accepted, or by specifying a minimum?
 
 Any suggestions for a syntax? Maybe:
 
 HttpProtocol 1.1  # only 1.1
 HttpProtocol 1.0- # 1.0 and above
 HttpProtocol 1.0-1.1  # 1.0 and 1.1
 HttpProtocol -1.0 # 1.0 and below

Does it need its own directive? How about a new environment variable and 
Require:

Require expr %{HTTP_PROTOCOL} -gt 1.1


I realise that won't work as things stand, because -gt only handles integers. 
Maybe another binary operator could allow decimals?

NB. SERVER_PROTOCOL would not be suitable because the initial “HTTP/” makes it 
harder to do math.
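
If this ever moved into code rather than configuration, there may be no need
for decimals at all: httpd already keeps the protocol as an integer
(r->proto_num, which is 1001 for HTTP/1.1 if I remember correctly), so the
check reduces to an integer comparison. A rough sketch, not a proposed patch:

#include "httpd.h"

/* Sketch only: refuse anything older than HTTP/1.1, using the integer
 * protocol number (1000 * major + minor) that httpd already stores. */
static int require_http11(const request_rec *r)
{
    return (r->proto_num >= 1001) ? OK : HTTP_FORBIDDEN;
}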

-- 
Tim Bannister – is...@jellybaby.net





Re: [PATCH] mod_systemd

2012-09-26 Thread Tim Bannister
On 26 Sep 2012, at 16:10, Jan Kaluza wrote:

 Hi,
 
 attached patch adds new module called mod_systemd. Systemd [1] is service 
 manager for Linux. Although httpd works with systemd normally, systemd 
 provides sd_notify(...) function [2] to inform service manager about current 
 status of the service. Status message passed to service manager using this 
 function is later visible in systemctl status httpd.service output and can 
 provide useful information about current httpd status.
 
 The goal of this module is to update httpd's status message regularly to 
 provide information like number of idle/busy workers, total requests or for 
 example number of requests per second. It uses data from the 
 ap_get_sload(...) function and depends on my httpd-sload.patch from previous 
 mail.
 
 I've tried to choose some interesting data for the status message, but if you 
 think admins would like to see something different there, I'm open to 
 suggestions. Note that it has to be single line of text, so there's no space 
 for lot of data.

I'd like to be able to show the date/time of the last configuration load (eg 
from a HUP). However, I don't use systemd yet so please treat this as only a 
suggestion.
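
If sd_notify() is the mechanism, something along these lines is roughly what I
have in mind; this is an illustrative sketch only, and the bookkeeping of the
last-reload timestamp is assumed to happen elsewhere in the module:

#include <stdio.h>
#include <time.h>
#include <systemd/sd-daemon.h>

/* Sketch: push a status line that includes when the configuration was
 * last (re)loaded, alongside the worker counts already proposed. */
static void report_status(time_t last_reload, int busy, int idle)
{
    char when[64], status[256];

    strftime(when, sizeof(when), "%Y-%m-%d %H:%M:%S",
             localtime(&last_reload));
    snprintf(status, sizeof(status),
             "STATUS=Busy workers: %d, idle workers: %d, "
             "configuration loaded: %s", busy, idle, when);
    sd_notify(0, status);
}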


-- 
Tim Bannister – is...@jellybaby.net





Re: DNT IE10 (was svn commit: r1371878 - /httpd/httpd/trunk/docs/conf/httpd.conf.in)

2012-09-13 Thread Tim Bannister
On 13 Sep 2012, at 18:24, Jeff Trawick traw...@gmail.com wrote:

 I don't think it is a transparency issue so much as a poor choice of
 venues for airing the disagreement.  We've put something in the .conf
 file that many administrators will need to remove and almost none will
 have a need to keep.  The message to Microsoft, such as it is, suffers
 because of that.

s/administrators/packagers/ ?

-- 
Tim Bannister – is...@jellybaby.net





Re: Ideas for an output filter for mod_lua

2012-08-23 Thread Tim Bannister
On 23 Aug 2012, at 11:45, Daniel Gruno rum...@cord.dk wrote:
 On 08/23/2012 12:02 AM, Tim Bannister wrote:
 
 I don't know if this is another way of phrasing Nick's question or not, but 
 would I be able to implement gzip Transfer-Encoding: just using Lua and this 
 new directive?
 
 I found (bug 52860) it a bit tricky to achieve in C, so I think it could be 
 harder still with the extra limitations of the Lua environment. My C code 
 uses AP_FTYPE_TRANSCODE which I think is the right choice but few modules 
 get involved at this filtering stage.
…
 So yes, theoretically you should be able to implement decompression this
 way, by doing something along the lines of this (totally just making it up):
 
 -
 local zip = require "zlib" -- or something...
 function gzip_handle(r)
r.headers_out['Transfer-Encoding'] = "gzip" -- or ?
do_magic_header_stuff_here() -- add header data
coroutine.yield() -- yield and wait for buckets
while (buffer) do  -- for each bucket, deflate it
local deflated = zip.deflate(buffer)
coroutine.yield(deflated) -- pass on new data
end
append_tail_to_output() -- pass on a tail if needed
 end
 -

My patch is for implementing gzip compression by httpd, not decompression, but 
the code will look pretty similar.

That's quite neat, then. I will try to make an actual implementation in Lua.
The part I found difficult was the interaction with the second 
transfer-encoding, “chunked”. Using gzip Transfer-Encoding: implies using 
chunked, because we want to shorten the response and this means that the 
Content-Length definitely doesn't match the size of the HTTP response body.
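
To spell out the shape of exchange I'm aiming at (headers illustrative only,
and assuming the client advertises support via TE):

GET /example HTTP/1.1
Host: origin.example
TE: gzip

HTTP/1.1 200 OK
Date: …
Transfer-Encoding: gzip, chunked

…body is gzip-compressed and then chunked; no Content-Length is sent…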

-- 
Tim Bannister – is...@jellybaby.net





Re: Ideas for an output filter for mod_lua

2012-08-22 Thread Tim Bannister
On 22 Aug 2012, at 22:25, Daniel Gruno rum...@cord.dk wrote:

 Would your concept meaningfully generalise beyond application-level filters?
 
 I'm not entirely sure what you mean by this, could you elaborate?
 If you want some more sophisticated examples of what could be achieved with 
 Lua filtering, I'd be happy to provide some more details on how this concept 
 could be utilised.

I don't know if this is another way of phrasing Nick's question or not, but 
would I be able to implement gzip Transfer-Encoding: just using Lua and this 
new directive?

I found (bug 52860) it a bit tricky to achieve in C, so I think it could be 
harder still with the extra limitations of the Lua environment. My C code uses 
AP_FTYPE_TRANSCODE which I think is the right choice but few modules get 
involved at this filtering stage.

-- 
Tim Bannister – is...@jellybaby.net





Re: [ANNOUNCEMENT] Apache HTTP Server 2.4.3 Released

2012-08-21 Thread Tim Bannister
On 21 Aug 2012, Jim Jagielski  wrote:

 NOTE to Windows users: The issues with AcceptFilter None replacing 
 Win32DisableAcceptEx appears to have resolved starting with version 2.4.3 
 make Apache httpd 2.4.x suitable for Windows servers.

I know what this means, but the grammar doesn't seem very clear. I realise the 
release is done but I thought I'd mention it anyway… maybe the same note will 
go in the next release announcement.

-- 
Tim Bannister – +44 7980408788 – is...@jellybaby.net



Re: utf-8 - punycode for ServerName|Alias?

2012-07-30 Thread Tim Bannister
On 30 Jul 2012, at 23:00, William A. Rowe Jr. wrote:

 Exactly my point.  If you configure a utf-8 hostname, we know in fact it is
 a punycode encoding of that value, which is why I believe it makes sense to
 represent both when you test the vhost configs with -D DUMP_VHOSTS.  If you
 configure a punycode hostname, it will be accepted with no hassle.  There
 is no such thing as an actual utf-8 or extended ASCII (8 bit) hostname.

At the moment I have configuration (not working, but “ready” anyway :-) for the 
same virtual host in UTF-8 and punycode variants. I could easily set one of 
them to differ from the other.

How will the new httpd handle this kind of situation? I think what I'd expect 
is a warning and then for one of them to take precedence and the other to be 
ignored.

-- 
Tim Bannister – is...@jellybaby.net





Re: mpm-itk and upstream Apache, once again

2012-07-19 Thread Tim Bannister
On 19 Jul 2012, at 17:26, Nick Kew wrote:

  2. Fixes to get Apache to drop the connection if it detects (during 
 .htaccess lookup) that it would need to change the uid.
 
 Dropping the connection gratuitously breaks HTTP, and so has no place in 
 httpd (of course, a third-party module sets its own rules). Would it need a 
 core patch to return an Internal Server Error (500)?

Vanilla httpd does this all the time… after a timed-out keepalive. The client 
cannot make any assumptions about the configured timeout, and can't tell 
whether the dropped connection is due to a genuine timeout or a UID mismatch 
between the previous and current request.

-- 
Tim Bannister - +44 7980408788 - is...@jellybaby.net





Re: mpm-itk and upstream Apache, once again

2012-07-19 Thread Tim Bannister
On 19 Jul 2012, at 18:22, Graham Leggett wrote:

 I would hate to have to troubleshoot this - two completely independent 
 behaviors, with the same symptom but completely different cause.
 
 Nick is right, a 500 is the right thing to do here.

I'm really not convinced. I'd expect a user agent to retry in the 
keepalive-disconnect case, whereas a 500 response usually gets displayed to the 
user. Very different experiences.

I think there's a case for leaving itk separate, a bit like mod_fcgid. It is a 
bit unusual and troubleshooting won't be straightforward.

-- 
Tim Bannister – is...@jellybaby.net





Re: Scripting for a windows installer

2012-07-14 Thread Tim Bannister
On 14 Jul 2012, at 03:31, William A Rowe Jr wrote:

 Another option with one downside is to script the install using PowerShell.  
 It was introduced to all installations in Vista and 2008 R2 server.  I don't 
 care about EOL'ed XP users needing to provision it, but what is the group 
 consensus about having 2008 (original release) users pre provision it?
 
 WSH windows scripting host would be lovely if it spoke utf8, but as a 
 practical matter, support for utf8 is poor to nonexistent depending on what 
 you are trying to rewrite.  Powershell overcomes this defect.

I'd be happy to see PowerShell used here. I think httpd contributors are more 
likely to know / learn PowerShell than alternatives like WSH.

-- 
Tim Bannister – is...@jellybaby.net





Re: utf-8 - punycode for ServerName|Alias?

2012-04-07 Thread Tim Bannister
On 7 Apr 2012, at 07:33, William A. Rowe Jr. wrote:

 So we have live registrars, no longer experimental, who are now registering 
 domains in punycode.  Make of it what you will.
 
 Do we want to recognize non-ASCII strings in the ServerName|Alias directives 
 as utf-8 - punycode encodings?  Internally, from the time the servername 
 field is assigned, it can be an ascii mapping.

I think this is more important for mass virtual hosting (VirtualDocumentRoot 
from mod_vhost_alias, etc). Users would create a document root directory named, 
eg, テスト.example and expect it to work. They don't know anything about Unicode, 
let alone punycode.
I reckon a lot of users would work out quickly that only Roman characters work 
in domain names, but they aren't going to be able to work out how to rename 
that folder into the correct punycode nor to tell the folders apart if renamed 
in this way.


As a user: I already have a configuration file with a UTF-8 ServerAlias 
defined, that's just waiting for httpd to implement this feature … and until 
then, I have the punycoded version in there as well.

-- 
Tim Bannister – is...@jellybaby.net





Re: A push for 2.4.2

2012-03-31 Thread Tim Bannister
With the code:
if (APR_SUCCESS != rv && err) {
ap_log_error(APLOG_MARK, APLOG_ERR, rv, s, APLOGNO(01845)
"%s", err->msg);
return rv;
}

then 01845 gets associated with lots of different crypto driver messages.

How about logging something like "crypto driver error: %s" instead?
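
i.e. something along these lines, keeping the same tag but giving every
message logged from this spot a stable prefix:

if (APR_SUCCESS != rv && err) {
ap_log_error(APLOG_MARK, APLOG_ERR, rv, s, APLOGNO(01845)
"crypto driver error: %s", err->msg);
return rv;
}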

-- 
Tim Bannister – is...@jellybaby.net



Re: TRACE still enabled by default

2012-03-21 Thread Tim Bannister
On 21 Mar 2012, at 12:39, Reindl Harald wrote:

 1 out of a million servers needs TRACE enabled
 
 it was ALWAYS a good idea to disable ANYTHING by default what is not really 
 needed and this principle will stay

inetd normally ships with echo not running, but kernels usually ship with ICMP 
enabled. I think TRACE is more like ICMP echo than tcp/7 echo.

If a distribution wants to ship a default configuration that disables TRACE, 
isn't that enough? The issue is naïve / lazy server admins, and almost all of 
those will install httpd from a distribution.

-- 
Tim Bannister – is...@jellybaby.net





Re: TRACE still enabled by default

2012-03-21 Thread Tim Bannister
On 21 Mar 2012, at 21:46, Stefan Fritsch wrote:

 But one thing that would be very interesting in this case, namely the 
 X-Forwarded-For header, is something that most admins of a reverse-proxied 
 site do NOT want to disclose at the end-point. They may also not want to 
 reveal other headers sent from the reverse proxy to the end-point.

The same may apply to Via: … and in both cases the answer may be to disable or 
restrict the TRACE method.
But isn't this more a documentation issue than an argument for changing the 
compiled-in default?

-- 
Tim Bannister – is...@jellybaby.net





[Bug 52860] Support Transfer-Encoding: gzip

2012-03-14 Thread Tim Bannister
I've filed Bug 52860 as an RFE for httpd to support gzip Transfer-Encoding:

My patch extends mod_deflate to provide this behaviour through a filter, but 
I'm not sure if this is right approach.
Would anyone have the time to look at this and help me improve the patch to 
something committable?

https://issues.apache.org/bugzilla/show_bug.cgi?id=52860

-- 
Tim Bannister – is...@jellybaby.net



Re: [proposed] remove docs/1.3/

2012-02-27 Thread Tim Bannister
On 27 Feb 2012, at 19:16, André Malo wrote:

 A compromise I'd actively support would be:
 
 - to not only put these red blocks above each document, but  provide 
 'position: fixed' block, being always visible (for modern 
  browsers) (maybe on the left side, simply saying UNSUPPORTED SOFTWARE or 
 something, linking the red block above.)
 
 - put robots=noindex into the documents and/or add a line to the robots.txt
 
 - we could probably remove 1.3 docs from the navigation

I'm much more a fan of that approach. Another way to reinforce the point: how 
about serving the old content with 410 Gone status? The red block would 
contain an error message after all.

-- 
Tim Bannister – is...@jellybaby.net





Re: [proposed] remove docs/1.3/

2012-02-26 Thread Tim Bannister
On 26 Feb 2012, at 10:34, Graham Leggett wrote:

 On 26 Feb 2012, at 9:35 AM, William A. Rowe Jr. wrote:
 
 Ok folks, it's been a few years... over 10, in fact, that 1.3 has
 been dead.
 
 Doesn't it seem overtime to take down 1.3 docs from the site, altogether?
 
 I find that from time to time, v1.3 documentation comes up in Google 
 searches, which probably confuses users who don't know what they're looking 
 at.

There are ways to leave it there but persuade crawlers not to index it. Maybe 
even serve it with 410 status and some JavaScript to point out that the page is 
deprecated.

I think the first one is worthwhile and the second one is not worth the extra 
effort.

-- 
Tim Bannister – is...@jellybaby.net





Re: [proposed] remove docs/1.3/

2012-02-26 Thread Tim Bannister
On 26 Feb 2012, at 10:34, Graham Leggett wrote:
 On 26 Feb 2012, at 9:35 AM, William A. Rowe Jr. wrote:
 
 Ok folks, it's been a few years... over 10, in fact, that 1.3 has
 been dead.

The other thing I want to add is that 1.3 is dead but not buried; there are 
still servers running httpd 1.3.x and admins who can't or won't upgrade. Taking 
the documents offline altogether is a bit strong … and it won't persuade those 
laggards to upgrade. Anyone who hasn't upgraded yet is going to take a lot more 
persuasion.

-- 
Tim Bannister – is...@jellybaby.net





Re: Include strangeness

2012-01-22 Thread Tim Bannister
On 22 Jan 2012, at 18:14, Stefan Fritsch wrote:

 I have conf/extra/httpd-userdir.conf and a directory conf/original (without 
 httpd-userdir.conf in it). This makes
 
 Include conf/*/httpd-userdir.conf
 
 fail
…
 even though the whole Include statement does match one file. I think this 
 makes directory wildcards a lot less useful with Include. Is this intentional 
 or an implementation quirk? Of course, one can always use IncludeOptional...

Can of worms or not, I worry that releasing 2.4.x means setting this behaviour 
in stone. Until 2.6 comes out, at least.

Without writing a line of code, committers can agree on what the /expected/ 
behaviour is and document that for the release. Code to implement this can 
follow later.

If this triggers a long, unhappy discussion then I would regret posting. On the 
other hand, if Stefan's question leads to a better experience for httpd users, 
that's really great.

-- 
Tim Bannister – is...@jellybaby.net





Re: [VOTE] Release Apache httpd 2.4.0

2012-01-17 Thread Tim Bannister
On 16 Jan 2012, at 22:31, Stefan Fritsch wrote:
On Monday 16 January 2012, Tim Bannister wrote:
 $ ./configure --with-included-apr
 …
 Configuring Apache Portable Runtime library ...
 
 configuring package in srclib/apr now
 /bin/sh: /home/isoma/src/httpd-2.4.0/srclib/apr/configure: No such file or 
 directory
 configure failed for srclib/apr
 
 
 This looks like what I'd expect if building from Subversion, but for a 
 release my understanding is that APR should be bundled with httpd and “just 
 work” with that command line. With httpd 2.2.21 the same command line 
 completes I as expected.
 
 There has been a change in 2.3/2.4: You need to download and extract the 
 *-deps tarball as well if you want to use --with-included-apr.

I've added 
http://wiki.apache.org/httpd/FAQ#I_get_an_error_about_.22configure_failed_for_srclib.2BAC8-apr.22
 but it would be nice to make the same information more easily available.

-- 
Tim Bannister - +44 7980408788 - is...@jellybaby.net





Re: documenting -deps

2012-01-17 Thread Tim Bannister
On 17 Jan 2012, at 20:31, Graham Leggett wrote:

 The simplest fix for this issue is to modify the file not found error 
 message to say something sensible about requiring the -deps package.
 
 At the end of the day, the most likely reason someone is trying to add  
 --with-included-apr is because they did this in the past, and these people 
 aren't going to have looked in any documentation, and so won't find any 
 explanation for what to do.

This is what I would have expected. But my autoconf is not up to making a patch.

How about adding a hyperlink to a page that explains the change and ways to 
deal with it?

-- 
Tim Bannister – is...@jellybaby.net





Re: [VOTE] Release Apache httpd 2.4.0

2012-01-16 Thread Tim Bannister
On 16 Jan 2012, at 17:50, Jim Jagielski wrote:

 The 2.4.0 (prerelease) tarballs are available for download and test:
 
   http://httpd.apache.org/dev/dist/
 
 I'm calling a VOTE on releasing these as Apache httpd 2.4.0 GA.
 
 Vote will last the normal 72 hours... Can I get a w00t w00t!

$ ./configure --with-included-apr
…
Configuring Apache Portable Runtime library ...

configuring package in srclib/apr now
/bin/sh: /home/isoma/src/httpd-2.4.0/srclib/apr/configure: No such file or 
directory
configure failed for srclib/apr



This looks like what I'd expect if building from Subversion, but for a release 
my understanding is that APR should be bundled with httpd and “just work” with 
that command line. With httpd 2.2.21 the same command line completes as I 
expected.

-- 
Tim Bannister – is...@jellybaby.net





Re: [VOTE] Release Apache httpd 2.4.0

2012-01-16 Thread Tim Bannister
On 16 Jan 2012, at 17:50, Jim Jagielski wrote:

 The 2.4.0 (prerelease) tarballs are available for download and test:
 
   http://httpd.apache.org/dev/dist/
 
 I'm calling a VOTE on releasing these as Apache httpd 2.4.0 GA.
 
 Vote will last the normal 72 hours... Can I get a w00t w00t!

I readily admit that I'm not au fait with how you'd do a minor version release, 
but http://httpd.apache.org/dev/dist/CHANGES_2.4.0 seems rather sparse and 
http://httpd.apache.org/dev/dist/CHANGES_2.4 seems appropriately long. And I 
didn't expect that there would be a difference.

For 2.2.0, it looks as if there was a CHANGES_2.2 but no CHANGES_2.2.0 file: 
http://web.archive.org/web/20051203032228/http://archive.apache.org/dist/httpd/

I can't find what the 2.0.x equivalents look like because 
http://archive.apache.org/dist/httpd/ does not have them… but it seems odd to 
have things as they stand.

-- 
Tim Bannister – is...@jellybaby.net





Re: [VOTE] Release Apache httpd 2.4.0

2012-01-16 Thread Tim Bannister
On 16 Jan 2012, at 22:31, Stefan Fritsch wrote:

 There has been a change in 2.3/2.4: You need to download and extract the 
 *-deps tarball as well if you want to use --with-included-apr.

That's not documented in http://httpd.apache.org/docs/2.4/install.html

Is it also worth adding a note to INSTALL and/or README?

Finally, I spotted that INSTALL refers to 
http://httpd.apache.org/docs/2.3/install.html which should perhaps be bumped to 
2.4

-- 
Tim Bannister - +44 7980408788 - is...@jellybaby.net





Re: Proposal: error codes

2011-11-30 Thread Tim Bannister
On 27 Nov 2011, at 17:14, Stefan Fritsch wrote:

 Yes, that would be a good idea and I agree with Daniel that we should use a 
 distinct prefix or format. We currently have around 2700 calls to 
 *_log_?error in trunk, so a 4-digit number should be ok. Together with for 
 example AH as prefix for Apache HTTPD this would result in numbers like 
 AH0815 which don't seem to cause many hits on google.

I think most people still use file logging, but Apache httpd does also support 
syslog. And over the life of Apache httpd syslog has gained features too, such 
as message codes. 

http://tools.ietf.org/html/rfc5424 section 6.2.7 says:
   The MSGID SHOULD identify the type of message.  For example, a
   firewall might use the MSGID TCPIN for incoming TCP traffic and the
   MSGID TCPOUT for outgoing TCP traffic.  Messages with the same
   MSGID should reflect events of the same semantics.  The MSGID itself
   is a string without further semantics.  It is intended for filtering
   messages on a relay or collector.

There is also a mechanism for structured metadata. I don't know whether either 
or both of these will be written into future Apache httpd… but I thought it was 
worth mentioning these early on in the discussion.

-- 
Tim Bannister – is...@jellybaby.net



Re: Can we be less forgiving about what we accept?

2011-11-28 Thread Tim Bannister
On 28 Nov 2011, at 00:37, Stefan Fritsch wrote:

 * With 'ProxyRequests off', we accept absolute urls like http://hostname/path 
 for local requests, but we don't check that the hostname contained in it 
 actually matches the Host header if there is one. The hostname from the URI 
 is then used for vhost matching and put into r-hostname. This is mandated by 
 RFC2616 but I guess there are quite a few buggy webapps that always look into 
 the Host header. A workaround may be to set the Host header to the hostname 
 from the URI in this case.

I'd sooner see a 400 response. Are there any circumstances where mismatch is 
required / sent by a current client?

Some tolerance might be required, for example if the request line specifies a 
port but the Host: header does not.

-- 
Tim Bannister — is...@jellybaby.net



Re: svn commit: r1163833 - /httpd/httpd/trunk/modules/http/byterange_filter.c

2011-09-01 Thread Tim Bannister

On Wed, Aug 31, 2011 at 6:28 PM, Roy T. Fielding wrote:

On Aug 31, 2011, at 6:10 PM, William A. Rowe Jr. wrote:

The presumption here is that the client requests bytes=0- to begin the 
transmission, and provided it sees a 206, restarting somewhere in the 
stream results in aborting the connection and streaming bytes=n- from 
the restart point.  Further testing should determine if this was the 
broken assumption.


Do we send the Accept-Ranges header field?

  http://tools.ietf.org/html/rfc2616#page-105


Apache httpd 2.2.9 is sending this header in the Debian bug report at 
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=639825

Tim


Re: Fixing Ranges

2011-08-25 Thread Tim Bannister
On 25 Aug 2011, at 15:48, Plüm, Rüdiger, VF-Group wrote:

 For 2.3 the last one could be 3 state:
 
 off - Don't do anything about that
 on - reply with 200 if misuse is detected.
 optimize - Do sorts and merges and fill too small chunks between the ranges.
 
 Default for 2.3 would be optimize.

I don't know exactly how the PDF plugins work, but we might expect requests 
like:

HEAD /foo HTTP/1.1
Host: xxx.example

to learn the content length, followed by:

GET /foo HTTP/1.1
Host: xxx.example
If-Range: bar
Range: bytes=0-4095,8384511-8388607,4096-8384510,8384512-

(hoping to read the indexes at the start and end of the document first, then 
fill in the rest).


A default that forces the clients back to seeing only the whole entity seems 
too strong, especially if httpd will now have better code to handle this case. 
Detecting misuse and handling that with a 200 still fine though.

I expect that clients exist which would get confused at having small chunks 
filled in.
For example, a client that expects either a multipart/byte-ranges response or a 
whole-entity 200 (because the server doesn't accept ranges). With the above 
“optimize”, the client instead gets a sorted and merged single-range response. 
Naive coding could have the client believe that it is seeing the whole entity 
rather than just a range.

…yes, such a client is badly written but badly written clients can and do 
exist. If httpd punishes their users unduly, httpd itself may attract some 
blame.

-- 
Tim Bannister – is...@jellybaby.net



Re: DoS with mod_deflate range requests

2011-08-24 Thread Tim Bannister

On Tue, Aug 23, 2011, Roy T. Fielding wrote:

And the spec says ...

   When a client requests multiple ranges in one request, the
   server SHOULD return them in the order that they appeared in the
   request.

My suggestion is to reject any request with overlapping ranges or more 
than five ranges with a 416, and to send 200 for any request with 4-5 
ranges.  There is simply no need to support random access in HTTP.


Deshpande & Zeng in http://dx.doi.org/10.1145/500141.500197 describe a 
method for streaming JPEG 2000 documents over HTTP, using many more than 
5 ranges in a single request.
A client that knows about any server-side limit could make multiple 
requests each with a small number of ranges, but discovering that limit 
will add latency and take more code.


Tim Bannister


Re: DoS with mod_deflate range requests

2011-08-24 Thread Tim Bannister
On 24 Aug 2011, at 17:47, Stefan Fritsch wrote:
On Wednesday 24 August 2011, Jim Jagielski wrote:
 On Aug 24, 2011, at 12:05 PM, Plüm, Rüdiger, VF-Group wrote:
 
 But merging might require sorting...
 
 then we don't do that merge, imo… In other words, we progress thru the set 
 of ranges and once a range is merged as far as it can be (due to the next 
 range not being merge-able with the previous one), we let it go...
 
 We could also use a two stage approach: Up to some limit (e.g. 50) ranges, we 
 return them as the client requested them. Over that limit, we violate the 
 RFC-SHOULD and sort and merge them.

Another option is just to return 200. Servers MAY ignore the Range header. I 
prefer this because existing clients already handle that case well, and there's 
no opportunity for a client to exploit this (“malicious” clients that want the 
whole entity need only request it).

Can anyone see why returning 200 for these complex requests (by ignoring Range 
/ If-Range) is a bad idea?

-- 
Tim Bannister – is...@jellybaby.net



Re: DoS with mod_deflate range requests

2011-08-24 Thread Tim Bannister
On 24 Aug 2011, at 20:13, Jim Jagielski wrote:

 Another option is just to return 200. Servers MAY ignore the Range header. I 
 prefer this because existing clients already handle that case well, and 
 there's no opportunity for a client to exploit this (“malicious” clients 
 that want the whole entity need only request it).
 
 Can anyone see why returning 200 for these complex requests (by ignoring 
 Range / If-Range) is a bad idea?
 
 In what cases would we ignore it? Dependent only on >=X ranges?

I don't have any strong opinion about exactly when to ignore Range. From an 
HTTP client PoV I wouldn't want to get 416 from requesting a satisfiable but 
complex range (maliciously or otherwise).

Ignoring Range on (ranges >= X) is simple to implement and easy to document, so 
why not do that?

-- 
Tim Bannister – is...@jellybaby.net



Re: DoS with mod_deflate range requests

2011-08-23 Thread Tim Bannister

On Tue, Aug 23, 2011 at 02:15:16PM +0200, Lazy wrote:

2011/8/23 Stefan Fritsch s...@sfritsch.de:
 http://seclists.org/fulldisclosure/2011/Aug/175

 I haven't looked into it so far. And I am not sure I will have time today.


it is sending HEAD requests with lots of  ranges
HEAD / HTTP/1.1
Host: 
Range:bytes=0-,5-1,5-2,5-3,.

…

doeas Range in HEAD request have any sense at all ?


One /possible/ use is as an equivalent for a conditional GET, ie
GET / HTTP/1.1
Host: xxx
Range: bytes=1024-
If-Range: foo

…to which the correct response should I think be either 200 or 206 depending 
on whether the document is modified.


But it's a pretty odd case. I can't imagine any published client or proxy 
that would make such a request. It would in any case be acceptable to 
return a 200 response instead; RFC 2616 states that A server MAY ignore 
the Range header


Tim Bannister

