Re: [Fwd: svn commit: r581493 - in /httpd/mod_ftp/trunk: include/mod_ftp.h modules/ftp/ftp_util.c]

2007-10-03 Thread William A. Rowe, Jr.
William A. Rowe, Jr. wrote:
 I would appreciate the active confirmation of this new parser by at
 least a second set of eyeballs.  We all know how notorious parsers
 are for creating holes in the security of fresh software and code.
 
 The relevant RFC is:

http://www.ietf.org/rfc/rfc2428.txt

While on the subject, should we also accept canonical form 3 from
section 2.2 (the x:x:x:x:x:x:d.d.d.d form) called out in...

http://www.ietf.org/rfc/rfc1884.txt

...in which case there is an omission in this parser?
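
As a cross-check for that case (this is illustrative Python, not the
mod_ftp C parser itself): socket.inet_pton accepts the RFC 1884
section 2.2 "form 3" mixed notation, so it can be used to test a
hand-written EPRT/EPSV address parser against known-good input.

```python
# Not the mod_ftp parser itself, just a sanity check: inet_pton with
# AF_INET6 accepts the mixed x:x:x:x:x:x:d.d.d.d notation from
# RFC 1884 section 2.2.
import socket

def is_valid_ipv6(text):
    try:
        socket.inet_pton(socket.AF_INET6, text)
        return True
    except OSError:
        return False

print(is_valid_ipv6("0:0:0:0:0:0:13.1.68.3"))   # form 3 example from RFC 1884
print(is_valid_ipv6("::ffff:129.144.52.38"))    # compressed mixed form
print(is_valid_ipv6("1:2:3:4"))                 # not a valid address
```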


mod_proxy headers

2007-10-03 Thread Nick Gearls

Hi,

I think mod_proxy should be enhanced/fixed in some way:
- If I use ProxyPass & ProxyPassReverse to forward connections from 
proxy to back-end, ProxyPassReverse adapts the Location header 
from back-end/... to proxy/...


1. Why only the Location header?
2. In case you access your proxy over HTTPS and your back-end over HTTP, 
ProxyPassReverse should also rewrite the protocol in all headers
3. ProxyPass should do the same for all headers - fixing the protocol, 
and possibly the host if ProxyPreserveHost was not used


To summarise, this should be done for _all_ headers:
- ProxyPass: change https://proxy/... to http://back-end/...
  or http://proxy/... if ProxyPreserveHost was used
  (replace https by protocol1 and http by protocol2 to be generic)
- ProxyPassReverse: change http://back-end/... to https://proxy/...

Does it sound sensible?
Obviously, you could do it manually with Header & RequestHeader 
(although I have problems doing it with 2.2.4 - is there a known issue 
with mod_proxy?), but you have to know every header in advance, and I do 
not see any reason not to do it systematically.
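
A minimal sketch of the behaviour being proposed (illustrative Python,
not mod_proxy code; the URLs are the made-up examples from above):
every header value that starts with the back-end URL gets its prefix
swapped for the front-end URL, and everything else passes through.

```python
# Illustrative sketch of the proposal, not actual mod_proxy code.
# Any header whose value begins with the back-end prefix is rewritten
# to the front-end prefix; all other headers are left untouched.
def reverse_map_headers(headers, backend, frontend):
    return {
        name: (frontend + value[len(backend):])
              if value.startswith(backend) else value
        for name, value in headers.items()
    }

rewritten = reverse_map_headers(
    {"Location": "http://back-end/app/login",
     "Destination": "http://back-end/newdir/newfile",
     "Date": "Wed, 03 Oct 2007 12:11:09 GMT"},
    backend="http://back-end/",
    frontend="https://proxy/",
)
```

Since only exact prefix matches are touched, headers like Date are
never rewritten under this scheme.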


Please correct or acknowledge my thought.

Nick


Re: mod_proxy headers

2007-10-03 Thread Nick Kew
On Wed, 03 Oct 2007 12:11:09 +0200
Nick Gearls [EMAIL PROTECTED] wrote:

 Hi,
 
 I think mod_proxy should be enhanced/fixed in some way:
 - If I use ProxyPass & ProxyPassReverse to forward connections from 
 proxy to back-end, ProxyPassReverse adapts the Location header 
 from back-end/... to proxy/...
 
 1. Why only the Location header?

Because that's the only header you had that contained the string
back-end with URL semantics.

 2. In case you access your proxy in HTTPS, and your back-end in HTTP, 
 ProxyPassReverse should also remove the protocol from all headers
 3. ProxyPass should do the same for all headers - fixing the
 protocol, and possibly the host if ProxyPreserveHost was not used

How would you propose to rewrite, for example, the Date header?

 Does it sound sensible?

I expect if you think through what you're asking, you'll eventually
redesign the existing behaviour.  If not, tell us what's wrong.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/


Re: mod_proxy headers

2007-10-03 Thread Nick Gearls

Maybe I didn't describe the global picture, sorry.
There are obviously known headers that will never contain a URL, like 
the Date you mentioned, and several others.
However, you may have other headers containing the host URL, like 
Destination for the WebDAV protocol.
So, I was asking to check every header (we may potentially discard known 
ones, but that's an optimization) for the proxy/back-end URL, and fix it 
if needed.


Concretely, when using WebDAV, you can copy a file from one location to 
another; the client sends a COPY command on a URI /dir/file, and 
sets a Destination header to - in my example - 
https://proxy/newdir/newfile.
The WebDAV server refuses this because it receives a command to copy 
from http://back-end/dir/file to https://proxy/newdir/newfile.
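
For what it's worth, until something generic exists, that single
WebDAV header can be rewritten by hand with mod_headers (a hedged
sketch only; the hostnames are the example ones from this thread, and
the "edit" action needs httpd 2.2.4 or later):

```apache
# Workaround sketch, not a mod_proxy feature: rewrite only the WebDAV
# Destination request header before it reaches the back-end.
# "RequestHeader edit" is available in httpd 2.2.4 and later; the
# hostnames here are the example ones from this thread.
RequestHeader edit Destination ^https://proxy/ http://back-end/
```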


I personally have this problem with WebDAV, which is obviously a common 
one because of the standard, so we could fix only that header; but I 
imagine some other applications could use the same mechanism, so why not 
generalize this to all headers?
Apart from the small overhead, I find this a very robust solution. I 
don't think it breaks anything, so I would not see it as a behaviour 
redesign.






Re: mod_proxy headers

2007-10-03 Thread Nick Kew
On Wed, 03 Oct 2007 12:44:28 +0200
Nick Gearls [EMAIL PROTECTED] wrote:

 I personally have this problem with WebDAV, which is obviously a
 common one because of the standard, so we could fix only that header;
 but I imagine some other applications could use the same mechanism,
 so why not generalize this to all headers?

Different headers have different patterns and different semantics.
There's no single rewrite that would work everywhere.  See for
example the treatment of cookies.

What might be an idea is to make the list of headers to rewrite
a configuration option.  I did a similar thing in mod_proxy_html 3
with ProxyHTMLLinks and ProxyHTMLEvents.

 Excepting the little overhead, I find this a very robust solution. I 
 don't think it breaks anything, so I would not see it as a behavior 
 redesign.

It would break headers that contain a URL-like pattern that isn't
a URL.  And if you think that's unlikely, just look at the number
of false positives in desktop software (e.g. mailers) that guesses
links and makes http://www.example.org or even just www.example.com
clickable.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/


Re: mod_proxy headers

2007-10-03 Thread Ruediger Pluem


On 10/03/2007 12:44 PM, Nick Gearls wrote:
 Maybe I didn't describe the global picture, sorry.
 There are obviously known headers that will never contain a URL, like
 the Date you mentioned, and several others.
 However, you may have other headers containing the host URL, like
 Destination for the WebDAV protocol.
 So, I was asking to check every header (we may potentially discard known
 ones, but that's an optimization) for the proxy/back-end URL, and fix it
 if needed.
 
 Concretely, when using WebDAV, you can copy a file from one location to
 another; the client sends a COPY command on a URI /dir/file, and
 sets a Destination header to - in my example -
 https://proxy/newdir/newfile.
 The WebDAV server refuses this because it receives a command to copy
 from http://back-end/dir/file to https://proxy/newdir/newfile.

AFAIK this information is not only contained in the header but also in
the XML bodies WebDAV uses during request and response.
There are cases where you need to ensure that reverse proxy and backend
have the same name, and WebDAV seems to be one of them.

There are different ways of achieving this, e.g.:

1. Using different DNS servers or hosts files, and using ProxyPass / 
http://proxy/
2. Using ProxyPreserveHost and using the same ServerName on the backend as on 
the frontend.

In order to create SSL URLs on the non-SSL backend, set ServerName to 
https://proxy.


Regards

RĂ¼diger


Re: mod_proxy headers

2007-10-03 Thread Graham Leggett
On Wed, October 3, 2007 1:03 pm, Nick Kew wrote:

 It would break headers that contain a URL-like pattern that isn't
 a URL.  And if you think that's unlikely, just look at the number
 of false positives in desktop software (e.g. mailers) that guesses
 links and makes http://www.example.org or even just www.example.com
 clickable.

As I recall, ProxyPassReverse does an exact string prefix match on
Location; if there is a match, the header is changed, otherwise it
leaves the header alone.

By saying ProxyPassReverse it seems sane to be telling the proxy that it
should hide any and all occurrences of the backend URL by replacing them
with the frontend URL, although from the perspective of changing existing
behaviour in existing installations, a compromise would be to identify
headers used by WebDAV, and alter those headers as well as Location.

Regards,
Graham
--




Re: Proxying OPTIONS *

2007-10-03 Thread Jim Jagielski


On Oct 2, 2007, at 5:56 PM, Ruediger Pluem wrote:



Slightly off topic, but this gives me the idea that we could use
OPTIONS * as some kind of ping / health check for pooled connections
in mod_proxy_http before sending a request (at least in the reverse
proxy case before sending a request that is not idempotent or after
the connection has not been used for a certain amount of time).
The current is_connected check has a race condition if the keepalive
timer of the backend server kills the connection just after our check
and before it received our request.



:)

We already do something similar with AJP where, right after
ap_proxy_connect_backend() it does a quick check. I think we even
discussed having HTTP do something similar a while ago...



proxy health checks [was: Proxying OPTIONS *]

2007-10-03 Thread Rainer Jung


It would be nice to have the health check running asynchronously from 
the normal request handling (error detection for idle connections, 
before requests fail). Even more importantly, the recovery test could be 
done independently from request handling.

For this, one would need some scheduling service (independent threads, 
API extension for callbacks). A simple start could be something like the 
monitor() hook, although that one depends on the type of MPM, and the 
module scheduling service should not be decoupled from the timing of 
important internal procedures.
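
As a rough illustration of the idea (plain Python, nothing to do with
httpd's actual hook API; the host, port and interval are made-up), an
out-of-band checker could probe the backend with OPTIONS * on a timer
instead of testing the connection in the request path:

```python
# Illustrative sketch only: a background thread sends "OPTIONS *" to a
# backend at a fixed interval, so failure and recovery are detected
# outside the request path. This is not httpd's monitor() hook API.
import socket
import threading

def probe(host, port, timeout=2.0):
    """Return True if the backend answers an OPTIONS * request."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"OPTIONS * HTTP/1.0\r\nHost: backend\r\n\r\n")
            return s.recv(12).startswith(b"HTTP/1.")
    except OSError:
        return False

def start_monitor(host, port, interval, stop_event, on_state):
    """Run probe() every `interval` seconds until stop_event is set."""
    def loop():
        while not stop_event.wait(interval):
            on_state(probe(host, port))   # e.g. mark a worker up or down
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```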


Regards,

Rainer


Re: mod_proxy headers

2007-10-03 Thread Nick Gearls
I agree, we have to check that it matches the back-end before changing it 
to the front-end, and vice versa.

This way, it sounds totally safe, no?






Re: As we contemplate what to fix, and how to roll out 2.4 and 3.0

2007-10-03 Thread Erik Abele

On 02.10.2007, at 20:52, Paul Querna wrote:


So, the first step is to cut out any illusion that new features are
going into 1.3, with a statement like this:

Starting in January 2008, only critical security issues will be fixed in
Apache HTTP Server versions 1.3.x or 2.0.x.

I honestly believe we will be somewhat responsible for fixing any major
security issues in 1.3 and 2.0 for the next 5-10 years, unless Waka
suddenly explodes and replaces http :-)

Thoughts?


I really like that, but as Bill showed nicely (I love the last 
paragraph!) I think we should couple that with some PR...


Cheers,
Erik


Re: proxy health checks [was: Proxying OPTIONS *]

2007-10-03 Thread Nick Kew
On Wed, 03 Oct 2007 15:04:52 +0200
Rainer Jung [EMAIL PROTECTED] wrote:

 It would be nice to have the health check running asynchronously from 
 the normal request handling (error detection for idle connections, 
 before requests fail). Even more important, the recovery test could
 be done independently from request handling.

Indeed.  That could apply to other backends (such as mod_dbd) too.

 For this one would need some scheduling service (independent threads, 
 API extension for callbacks). A simple start could be something like
 the monitor() hook. Although that one depends on the type of MPM and
 the module scheduling service should not be decoupled from the timing
 of important internal procedures.

Patches welcome!

Maybe a scheduled event thread could be a standard feature for httpd-3,
to make this kind of thing straightforward.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/


ETag and Content-Encoding

2007-10-03 Thread Nick Kew
http://issues.apache.org/bugzilla/show_bug.cgi?id=39727

We have some controversy surrounding this bug, and bugzilla
has turned into a technical discussion that belongs here.

Fundamental question:  Does a weak ETag preclude (negotiated) 
changes to Content-Encoding?

Summary:

Original bug: mod_deflate may compress/decompress content
but leave an existing ETag in place.

[ various discussion followed ]

Yesterday: I committed a fix to /trunk/, assuming it would
be uncontroversial.  The fix is that any existing ETag should
be made a weak ETag if mod_deflate either inflates or
deflates the contents.  Rationale: a weak ETag promises
equivalent but not byte-by-byte identical contents, and
that's exactly what you have with mod_deflate.
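
In code terms the committed fix amounts to something like the
following (a Python rendering of the idea only, not the actual
mod_deflate C patch):

```python
# Python rendering of the trunk fix's idea, not the C code itself:
# an existing strong ETag is downgraded to a weak one when the filter
# transforms the entity; an already-weak tag is left alone.
def weaken_etag(etag):
    return etag if etag.startswith('W/') else 'W/' + etag
```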

Henrik Nordstrom commented:

   Not sufficient. The two versions are not semantically equivalent as one
   can not be exchanged for the other without breaking the protocol. In
   the context of If-None-Match the weak comparator is used in HTTP and
   there a strong ETag is equal to a weak ETag.

Further discussion followed.  I won't repost it here in full, but
since there clearly is an issue, it needs discussing here.

Cc: folks subscribed to the bug.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/


Re: ETag and Content-Encoding

2007-10-03 Thread Henrik Nordstrom
On ons, 2007-10-03 at 14:23 +0100, Nick Kew wrote:
 http://issues.apache.org/bugzilla/show_bug.cgi?id=39727
 
 We have some controversy surrounding this bug, and bugzilla
 has turned into a technical discussion that belongs here.
 
 Fundamental question:  Does a weak ETag preclude (negotiated) 
 changes to Content-Encoding?

A weak etag means the response is semantically equivalent both at
protocol and content level, and may be exchanged freely.

Two resource variants with different content-encoding are not
semantically equivalent, as the recipient may not be able to understand
a variant sent with an incompatible encoding.

Sending a weak ETag does not signal that there is negotiation taking place
(Vary does that); all it signals is that there may be multiple but fully
compatible versions of the entity variant in circulation, or that each
request results in a slightly different object where the difference has
no practical meaning (i.e. an embedded unimportant timestamp or similar).

 deflates the contents.  Rationale: a weak ETag promises
 equivalent but not byte-by-byte identical contents, and
 that's exactly what you have with mod_deflate.

I disagree. It's two very different entities.

Note: If mod_deflate is deterministic and always returning the exact
same encoded version then using a strong ETag is correct.


What this boils down to in the end is:

a) HTTP must be able to tell if an already cached variant is valid for a
new request by using If-None-Match. This means that each negotiated
entity needs to use a different ETag value. Accept-Encoding is no
different in this than any of the other inputs to content negotiation.

b) If the object undergoes some transformation that is not deterministic,
then the ETag must be weak to signify that byte-equivalence cannot be
guaranteed.

Note regarding a: The weak/strong property of the ETag has no
significance here. If-None-Match uses the weak comparison function,
where only the value is compared, not the strength. See 13.3.3, paragraph
"The weak comparison function".
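
The weak comparison function being referred to (RFC 2616, 13.3.3) can
be sketched as follows (illustrative Python, not httpd code):

```python
# Sketch of RFC 2616 section 13.3.3's weak comparison function:
# two entity tags match weakly when their opaque values are equal,
# regardless of the W/ strength prefix on either side.
def weak_compare(a, b):
    def opaque(tag):
        return tag[2:] if tag.startswith('W/') else tag
    return opaque(a) == opaque(b)
```

So W/"x" and "x" compare equal under If-None-Match, which is the point
being made here.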

Regards
Henrik




Re: ETag and Content-Encoding

2007-10-03 Thread Ruediger Pluem


On 10/03/2007 03:23 PM, Nick Kew wrote:
 http://issues.apache.org/bugzilla/show_bug.cgi?id=39727
 
 We have some controversy surrounding this bug, and bugzilla
 has turned into a technical discussion that belongs here.
 
 Fundamental question:  Does a weak ETag preclude (negotiated) 
 changes to Content-Encoding?
 
 Summary:
 
 Original bug: mod_deflate may compress/decompress content
 but leave an existing ETag in place.
 
 [ various discussion followed ]
 
 Yesterday: I committed a fix to /trunk/, assuming it would
 be uncontroversial.  The fix is that any existing ETag should
 be made a weak ETag if mod_deflate either inflates or
 deflates the contents.  Rationale: a weak ETag promises
 equivalent but not byte-by-byte identical contents, and
 that's exactly what you have with mod_deflate.
 
 Henrik Nordstrom commented:
 
   Not sufficient. The two versions is not semantically equivalen as one
   can not be exchanged for the other without breaking the protocol. In
   the context of If-None-Match the weak comparator is used in HTTP and
   there a strong ETag is equal to a weak ETag.
 
 Further discussion followed.  I won't repost it here in full, but
 since there clearly is an issue, it needs discussing here.

Currently I share your opinion that a weak etag should fix the issue
(besides ap_meets_condition currently does not work correctly with
weak etags, but this is another story).
OTOH I try to understand why Henrik thinks it is not sufficient.

Ok, before the patch we had the following situation:

Depending on the client, httpd sent an uncompressed or a compressed
response with the *same* (possibly) strong ETag and a Vary: Accept-Encoding 
header.
A cache along the way stored the response, and because both responses had
the *same* (possibly) strong ETag, it only stored it *once* (either the 
compressed or the uncompressed version) and in fact ignored the Vary header.
So if a client requested that resource from the cache, either conditionally
(If-None-Match) or unconditionally, it delivered what it had in stock,
ignoring the Accept-Encoding header of the client.

Now, after the patch, we have the following situation:

Depending on the client, httpd sends an uncompressed or a compressed
response, with the original ETag if it does not modify the response and
with a weak version of the ETag if it does compress / uncompress the response.
In any case it sets a Vary: Accept-Encoding header.
Ok, sending the original ETag if we do not alter the response might be an
error, but let's assume we do not, and send a weak version of the original
ETag in both cases (altering the response / not altering the response).
Does this allow the cache along the way to store it only *once*, ignoring
the Vary header?
If yes, then the fix is not sufficient; but if a weak ETag forces the cache
to store each variant based on the Vary header, then it should work.


Regards

RĂ¼diger


Re: ETag and Content-Encoding

2007-10-03 Thread Justin Erenkrantz
On Oct 3, 2007 7:20 AM, Henrik Nordstrom [EMAIL PROTECTED] wrote:
  deflates the contents.  Rationale: a weak ETag promises
  equivalent but not byte-by-byte identical contents, and
  that's exactly what you have with mod_deflate.

 I disagree. It's two very different entities.

As before, I still don't understand why Vary is not sufficient to
allow real-world clients to differentiate here.  If Squid is ignoring
Vary, then it does so at its own peril - regardless of ETags.

The problem with trying to invent new ETags is that we'll almost
certainly break conditional requests and I find that a total
non-starter.  Your suggestion of appending ;gzip leaks information
that doesn't belong in the ETag - as it is quite possible for that to
appear in a valid ETag from another source - for example, it is
trivial to make Subversion generate ETags containing that at the end -
this would create nasty false positives and corrupt Subversion's
conditional request checks.  Plus, rewriting every filter to append or
delete a 'special' marker in the ETag is bound to make the situation
way worse.  -- justin
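
The collision being described can be made concrete (hypothetical tag
values, and a naive marker-splicing scheme assumed purely for
illustration):

```python
# Hypothetical marker-appending scheme, to show the false positive:
# a backend ETag that already ends in ;gzip becomes indistinguishable
# from a transformed one, so later stripping the marker corrupts the
# backend's own tag.
def append_marker(etag, marker=';gzip'):
    if etag.endswith('"'):
        return etag[:-1] + marker + '"'
    return etag + marker

native = '"rev42;gzip"'                 # tag the backend happens to emit
transformed = append_marker('"rev42"')  # tag after the filter's edit
collision = (native == transformed)     # the two are indistinguishable
```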


Re: How to kill 1.3?

2007-10-03 Thread William A. Rowe, Jr.
Roy T. Fielding wrote:
 I don't see why we care, either way.

Could you clarify what we aren't caring about, since your answer was
a bit ambiguous?  (Abandon or not, message our users or not, etc)


Re: svn commit: r581660 - in /httpd/httpd/trunk: CHANGES modules/filters/mod_ext_filter.c

2007-10-03 Thread Eric Covener
On 10/3/07, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
 AIUI this is the desired default behavior from apr; why change httpd
 at all?  I was receptive to your suggestion/patch to resolve this as
 the default on apr 0.9/1.2/1.x branches, per the unix implementation.

I think I misunderstood your +1 (below) as an endorsement for the
HTTPD fix (my wording in the final sentence was poor).


On 10/2/07, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
 Eric Covener wrote:
 
  While the API might be a little ambiguous, and the caller can
  explicitly set the timeout, is this a discrepancy APR should
  eliminate?
 
  I'm going to add the apr_file_pipe_timeout_set(foo, 0) call instead to
  mod_ext_filter unless there are any objections.

 +1



-- 
Eric Covener
[EMAIL PROTECTED]


Re: svn commit: r581660 - in /httpd/httpd/trunk: CHANGES modules/filters/mod_ext_filter.c

2007-10-03 Thread William A. Rowe, Jr.
AIUI this is the desired default behavior from apr; why change httpd
at all?  I was receptive to your suggestion/patch to resolve this as
the default on apr 0.9/1.2/1.x branches, per the unix implementation.

Bill

[EMAIL PROTECTED] wrote:
 Author: covener
 Date: Wed Oct  3 10:17:24 2007
 New Revision: 581660
 
 URL: http://svn.apache.org/viewvc?rev=581660&view=rev
 Log:
 mod_ext_filter: Prevent a hang on Windows when the filter
 input data is pipelined.
 PR 29901
 
 
 Modified:
 httpd/httpd/trunk/CHANGES
 httpd/httpd/trunk/modules/filters/mod_ext_filter.c
 
 Modified: httpd/httpd/trunk/CHANGES
 URL: 
 http://svn.apache.org/viewvc/httpd/httpd/trunk/CHANGES?rev=581660&r1=581659&r2=581660&view=diff
 ==
 --- httpd/httpd/trunk/CHANGES [utf-8] (original)
 +++ httpd/httpd/trunk/CHANGES [utf-8] Wed Oct  3 10:17:24 2007
 @@ -2,6 +2,10 @@
  Changes with Apache 2.3.0
  [ When backported to 2.2.x, remove entry from this file ]
  
 +  *) mod_ext_filter: Prevent a hang on Windows when the filter
 + input data is pipelined. 
 + PR 29901 [Eric Covener]
 +
*) mod_deflate: Don't leave a strong ETag in place while transforming
   the entity.
   PR 39727 [Nick Kew]
 
 Modified: httpd/httpd/trunk/modules/filters/mod_ext_filter.c
 URL: 
 http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/filters/mod_ext_filter.c?rev=581660&r1=581659&r2=581660&view=diff
 ==
 --- httpd/httpd/trunk/modules/filters/mod_ext_filter.c (original)
 +++ httpd/httpd/trunk/modules/filters/mod_ext_filter.c Wed Oct  3 10:17:24 
 2007
 @@ -485,6 +485,14 @@
          return rc;
      }
 
 +    rc = apr_file_pipe_timeout_set(ctx->proc->out, 0);
 +    if (rc != APR_SUCCESS) {
 +        ap_log_rerror(APLOG_MARK, APLOG_ERR, rc, f->r,
 +                      "couldn't set child stdin pipe timeout to 0 for filter %s",
 +                      ctx->filter->name);
 +        return rc;
 +    }
 +
      apr_pool_note_subprocess(ctx->p, ctx->proc, APR_KILL_AFTER_TIMEOUT);
 
      /* We don't want the handle to the child's stdin inherited by any
 
 
 
 



Re: How to kill 1.3?

2007-10-03 Thread Roy T. Fielding

On Oct 3, 2007, at 8:30 AM, William A. Rowe, Jr. wrote:


Roy T. Fielding wrote:

I don't see why we care, either way.


Could you clarify what we aren't caring about, since your answer was
a bit ambiguous?  (Abandon or not, message our users or not, etc)


Why do we need to announce anything?  Why do we need to presuppose
that there is no further work on 1.3 when our policy has always
been that any person can work on what that person wants to do?

This announcement thing is simply telling people that some current
subset of the apache group no longer maintains 1.3, so you should
fork the server somewhere else if you don't like it.  -1

I don't care what the uptake graph says.  I don't care what people
outside this project mailing list think, period, about this project.
And if five years from now there are three or more Apache committers
that want to release 1.3.x, then no stupid marketing announcement
is going to stop them.

We have better things to do.  Let's do those things and stop worrying
about other people's perceptions.

Roy


Re: ETag and Content-Encoding

2007-10-03 Thread Roy T. Fielding

On Oct 3, 2007, at 7:20 AM, Henrik Nordstrom wrote:


On ons, 2007-10-03 at 14:23 +0100, Nick Kew wrote:

http://issues.apache.org/bugzilla/show_bug.cgi?id=39727

We have some controversy surrounding this bug, and bugzilla
has turned into a technical discussion that belongs here.

Fundamental question:  Does a weak ETag preclude (negotiated)
changes to Content-Encoding?


A weak etag means the response is semantically equivalent both at
protocol and content level, and may be exchanged freely.

Two resource variants with different content-encoding is not
semantically equivalent as the recipient may not be able to understand
an variant sent with an incompatible encoding.


That is not true.  The weak etag is for content that has changed but
is just as good a response content as would have been received.
In other words, protocol equivalence is irrelevant.

Sending a weak ETag does not signal that there is negotiation taking place
(Vary does that); all it signals is that there may be multiple but fully
compatible versions of the entity variant in circulation, or that each
request results in a slightly different object where the difference has
no practical meaning (i.e. embedded non-important timestamp or similar).


Yes.  Compression has no practical meaning.


deflates the contents.  Rationale: a weak ETag promises
equivalent but not byte-by-byte identical contents, and
that's exactly what you have with mod_deflate.


I disagree. It's two very different entities.


That is irrelevant.  What matters is the resource semantics, not the
message bits.  Every bit can change randomly and still be semantically
equivalent to a resource representation of random bits.


Note: If mod_deflate is deterministic and always returning the exact
same encoded version then using a strong ETag is correct.


What this boils down to in the end is

a) HTTP must be able to tell if an already cached variant is valid for a
new request by using If-None-Match. This means that each negotiated
entity needs to use a different ETag value. Accept-Encoding is no
different in this than any of the other inputs to content negotiation.


That is not HTTP.  Don't confuse the needs of caching with the needs
of range requests -- only range requests need strong etags.


b) If the object undergoes some transformation that is not deterministic,
then the ETag must be weak to signify that byte-equivalence cannot be
guaranteed.

Note regarding a: The weak/strong property of the ETag has no
significance here. If-None-Match uses the weak comparison function,
where only the value is compared, not the strength. See 13.3.3, paragraph
"The weak comparison function".


As intended,

Roy



Re: How to kill 1.3?

2007-10-03 Thread Joshua Slive
On 10/3/07, Roy T. Fielding [EMAIL PROTECTED] wrote:

 I don't care what the uptake graph says.  I don't care what people
 outside this project mailing list think, period, about this project.
 And if five years from now there are three or more Apache committers
 that want to release 1.3.x, then no stupid marketing announcement
 is going to stop them.

 We have better things to do.  Let's do those things and stop worrying
 about other people's perceptions.

I agree, with a caveat. I think that we are doing a disservice to our
users if we don't communicate to them the attitude of the people on
this mailing list towards 1.3. In particular, I don't think our main
page or download page is currently clear enough about the status of
1.3 development. I think we should say something like:

"The Apache HTTP Server version 1.3 is not recommended and is not
being actively developed. It /may/ continue to receive updates for
major security issues, but other updates are unlikely. We recommend
that you choose version 2.2 in its place."

Unlike Bill's manifesto, this statement is not meant to constrain the
developers, but simply to communicate the current state of development
to the users.

Joshua.


Re: ETag and Content-Encoding

2007-10-03 Thread Roy T. Fielding

On Oct 3, 2007, at 7:53 AM, Justin Erenkrantz wrote:


The problem with trying to invent new ETags is that we'll almost
certainly break conditional requests and I find that a total
non-starter.  Your suggestion of appending ;gzip leaks information
that doesn't belong in the ETag - as it is quite possible for that to
appear in a valid ETag from another source - for example, it is
trivial to make Subversion generate ETags containing that at the end -
this would create nasty false positives and corrupt Subversion's
conditional request checks.  Plus, rewriting every filter to append or
delete a 'special' marker in the ETag is bound to make the situation
way worse.  -- justin


I don't see how that is possible, unless subversion is depending
on content-encoding to twiddle between compressed and uncompressed
transfer without changing the etag.  In that case, subversion will be
broken, as would any poster child for misusing content-encoding as
a transfer encoding.

Roy


Re: How to kill 1.3?

2007-10-03 Thread Jim Jagielski





Even that would be good enough... To be honest, however, we need
to recall that 1.3 also pre-supposes the availability of various
1.3 modules (mod_jk, mod_ssl, PHP, ...) that we really don't
have that much control over (note that it took some time after
1.3.39 was released before mod_ssl was updated for it). So even
if we continue to release 1.3, the reality is that module developers
will also tend to drop support for 1.3, which kind of
causes a feedback loop.



Re: ETag and Content-Encoding

2007-10-03 Thread Justin Erenkrantz
On Oct 3, 2007 12:19 PM, Roy T. Fielding [EMAIL PROTECTED] wrote:
 I don't see how that is possible, unless subversion is depending
 on content-encoding to twiddle between compressed and uncompressed
 transfer without changing the etag.  In that case, subversion will be
 broken, as would any poster child for misusing content-encoding as
 a transfer encoding.

I don't understand - why should Subversion care?  It doesn't know
anything related to gzip - that's purely mod_deflate's job.

The issue here is that mod_dav_svn generates an ETag (based off rev
num and path) and that ETag can be later used to check for conditional
requests.  But, if mod_deflate always strips a 'special' tag from the
ETag (per Henrik), then by the time that mod_dav_svn sees it, the tag
could be corrupt - as that special tag could have been part of a valid
ETag produced by mod_dav_svn as we've *never* placed restrictions on
the format of the ETag produced by our modules.  -- justin


Cc: lists (Re: ETag and Content-Encoding)

2007-10-03 Thread Nick Kew
On Wed, 3 Oct 2007 07:53:31 -0700
Justin Erenkrantz [EMAIL PROTECTED] wrote:

 [chop]

The Cc: list on this and subsequent postings is screwed:

  (1) It includes me, so I get everything twice.
  OK, I can live with that, but it's annoying.
  (2) It fails to include Henrik Nordstrom, the principal 
  non-Apache protagonist in this discussion.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/


Re: How to kill 1.3?

2007-10-03 Thread Graham Leggett

Joshua Slive wrote:


We have better things to do.  Let's do those things and stop worrying
about other people's perceptions.


I agree, with a caveat. I think that we are doing a disservice to our
users if we don't communicate to them the attitude of the people on
this mailing list towards 1.3. In particular, I don't think our main
page or download page is currently clear enough about the status of
1.3 development. I think we should say something like:

The Apache HTTP Server version 1.3 is not recommended and is not
being actively developed. It /may/ continue to receive updates for
major security issues, but other updates are unlikely. We recommend
that you choose version 2.2 in its place.


This is a reasonable step, but like Roy I don't think an announcement as 
originally proposed is necessarily a good idea.


People who are on v1.3 today are probably still on v1.3 for reasons of 
their own choice, they'll move when they're ready. An announcement with 
best intentions may bring out criticism from people who feel pressure to 
upgrade when they would rather choose not to.


Regards,
Graham
--



Re: ETag and Content-Encoding

2007-10-03 Thread Henrik Nordstrom
On ons, 2007-10-03 at 07:53 -0700, Justin Erenkrantz wrote:

 As before, I still don't understand why Vary is not sufficient to
 allow real-world clients to differentiate here.  If Squid is ignoring
 Vary, then it does so at its own peril - regardless of ETags.

See RFC2616 13.6 "Caching Negotiated Responses" and you should understand
why returning a unique ETag for each variant is very important. (Yes, the
gzip and identity content-encoded responses are two different variants of
the same resource; see earlier discussions if you don't agree on that.)
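Henrik's 13.6 point can be made concrete with a toy sketch (invented names, not Squid's or Apache's actual code): a cache holding several variants of one resource revalidates by sending all their ETags in If-None-Match, and the ETag on the resulting 304 tells it which stored variant to serve. If two variants share an ETag, that selection becomes ambiguous.

```c
#include <stddef.h>
#include <string.h>

/* One cached variant of a negotiated resource (hypothetical cache entry). */
struct variant {
    const char *etag;
    const char *content_encoding;   /* e.g. "identity" or "gzip" */
};

/* RFC 2616 13.6: the cache sends If-None-Match listing every stored
 * ETag; a 304 carrying one ETag selects the variant to serve.
 * Returns the first stored variant with that ETag, or NULL.
 * With duplicate ETags (the mod_deflate bug) this pick is ambiguous. */
const struct variant *select_variant(const struct variant *v, size_t n,
                                     const char *etag_from_304)
{
    for (size_t i = 0; i < n; i++) {
        if (strcmp(v[i].etag, etag_from_304) == 0) {
            return &v[i];
        }
    }
    return NULL;   /* no stored variant matched; a full response is needed */
}
```

With distinct ETags such as "xyz" and "xyz-gzip" the 304 is unambiguous; if both stored entries carry "xyz", the cache may hand a gzip body to a client that cannot decode it.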

But yes, thinking this over a second time, converting the ETag to a weak
ETag is sufficient to plaster over the problem, assuming the original
ETag is a strong one. Not because it's correct from a protocol
perspective, but because Apache does not use the weak compare function
when processing If-None-Match, so in Apache's world changing a strong
ETag to a weak one is about the same as assigning a new ETag.

However, if the original ETag is already weak then the problem remains
exactly as it is today.

Also, it's almost the same as deleting the ETag, as you also destroy
If-None-Match processing of filtered responses, which is also why it
works.

 The problem with trying to invent new ETags is that we'll almost
 certainly break conditional requests and I find that a total
 non-starter.

Only because your processing of conditional requests is broken. See
earlier discussions on the topic of this bug, which already cover this
aspect.

To work properly, the conditionals need to (logically) be processed once
the response entity is known, that is, after mod_deflate (or another
filter) does its dance to transform the response headers. Evaluating
conditionals before the actual response headers are known is very
error-prone and likely to cause false matches, as you don't know this is
the response which will be sent to the requestor.

 Your suggestion of appending ;gzip leaks information
 that doesn't belong in the ETag - as it is quite possible for that to
 appear in a valid ETag from another source - for example, it is
 trivial to make Subversion generate ETags containing that at the end -
 this would create nasty false positives and corrupt Subversion's
 conditional request checks.

Then use something stronger, less likely to be seen in the original
etag. Or fix the filter architecture to deal with conditionals properly,
making this question (collisions) pretty much a non-issue.

Or, until conditionals can be processed correctly in the presence of
filters, drop the ETag on filtered responses where the filter does some
kind of negotiation.

 Plus, rewriting every filter to append or
 delete a 'special' marker in the ETag is bound to make the situation
 way worse.  -- justin

I don't see much choice if you want to comply with the RFC requirements.
The other choice is to drop the ETag header on such responses, which
is also not a nice thing, but it at least complies with the specifications,
making it better than sending out the same ETag on incompatible
responses from the same resource.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: How to kill 1.3?

2007-10-03 Thread Paul Querna
Roy T. Fielding wrote:
 I don't see why we care, either way.

I don't care if committers continue to maintain 1.3 longer than any
statement we make.

A public statement is about setting expectations of our users.

We all know that 1.3 is dead as a doornail at this point, but many of
our users do not.

We should be upfront and truthful to our users about this.

-Paul


Re: ETag and Content-Encoding

2007-10-03 Thread Henrik Nordstrom
On ons, 2007-10-03 at 13:29 -0700, Justin Erenkrantz wrote:

 The issue here is that mod_dav_svn generates an ETag (based off rev
 num and path) and that ETag can be later used to check for conditional
 requests.  But, if mod_deflate always strips a 'special' tag from the
 ETag (per Henrik),

That was only a suggestion on how you may work around your somewhat
limited conditional processing capabilities wrt filters like
mod_deflate, but I think it's probably the cleanest approach considering
the requirements of If-Match and modifying methods (PUT, DELETE,
PROPPATCH etc). In that construct the tag added to the ETag by
mod_deflate (or another entity transforming filter) needs to be
sufficiently unique that it is not likely to be seen in the original
ETag value.

It's not easy to fulfill the needs of all components when doing dynamic
entity transformations, especially when there is negotiation involved..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: ETag and Content-Encoding

2007-10-03 Thread Henrik Nordstrom
On ons, 2007-10-03 at 12:10 -0700, Roy T. Fielding wrote:

  Two resource variants with different content-encoding are not
  semantically equivalent, as the recipient may not be able to understand
  a variant sent with an incompatible encoding.
 
 That is not true.  The weak etag is for content that has changed but
 is just as good a response content as would have been received.
 In other words, protocol equivalence is irrelevant.

By protocol semantic equivalence I mean responses being acceptable to
requests.

Example: Two negotiated responses with different Content-Encoding are not
semantically equivalent at the HTTP level, as their negotiation
properties are different, and one cannot substitute one for the other
and expect that HTTP works.

But two compressed response entities with different compression levels
depending on the CPU load are.

Note: Ignoring transfer-encoding here as it's transport and pretty much
irrelevant to the operations of the protocol other than wire message
encoding/decoding.

  a) HTTP must be able to tell if an already cached variant is valid
  for a new request by using If-None-Match. This means that each negotiated
  entity needs to use a different ETag value. Accept-Encoding is no
  different in this than any of the other inputs to content negotiation.
 
 That is not HTTP.  Don't confuse the needs of caching with the needs
 of range requests -- only range requests need strong etags.

I am not. I am talking about If-None-Match, not If-Range. And
specifically the use of If-None-Match in 13.6 Caching Negotiated
Responses.

It's a very simple and effective mechanism, but it requires servers to
properly assign an ETag to each unique entity of a resource (semantically
unique, in the case of weak ETags), not to the resource as such.

Content-Encoding is no different in this than any of the other
negotiated properties (Content-Type, Content-Language, whatever).

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: Cc: lists (Re: ETag and Content-Encoding)

2007-10-03 Thread Henrik Nordstrom
On ons, 2007-10-03 at 21:44 +0100, Nick Kew wrote:

 The Cc: list on this and subsequent postings is screwed:
 
   (1) It includes me, so I get everything twice.
   OK, I can live with that, but it's annoying.

Use a Message-Id filter?

   (2) It fails to include Henrik Nordstrom, the principal 
   non-Apache protagonist in this discussion.

No problem. I am a dev@ subscriber

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: ETag and Content-Encoding

2007-10-03 Thread Julian Reschke

Henrik Nordstrom wrote:

On ons, 2007-10-03 at 13:29 -0700, Justin Erenkrantz wrote:


The issue here is that mod_dav_svn generates an ETag (based off rev
num and path) and that ETag can be later used to check for conditional
requests.  But, if mod_deflate always strips a 'special' tag from the
ETag (per Henrik),


That was only a suggestion on how you may work around your somewhat
limited conditional processing capabilities wrt filters like
mod_deflate, but I think it's probably the cleanest approach considering
the requirements of If-Match and modifying methods (PUT, DELETE,
PROPATCH etc). In that construct the tag added to the ETag by
mod_deflate (or another entity transforming filter) needs to be
sufficiently unique that it is not likely to be seen in the original
ETag value.
...


Two cents -- no three cents :-):

#1) I agree with Henrik's analysis.

#2) If Content-Encoding is implemented through a separate module, it 
will have to rewrite both outgoing and incoming etags; note that this 
includes the If-* headers from RFC2616 and the If header defined in 
RFC4918 (obsoleting RFC2518).


#3) If just appending -gzip doesn't provide sufficient uniqueness, the 
 implementation may want to *always* append a token (such as 
-identity), even when no compression occurred.
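Julian's #3 could look roughly like this (a sketch with an invented helper name, not mod_deflate's actual code): the token is inserted before the closing quote, so the result remains a syntactically valid quoted ETag for both strong and weak (W/-prefixed) tags.

```c
#include <stdio.h>
#include <string.h>

/* Insert a variant token (e.g. "-gzip" or "-identity") before the
 * closing quote of a quoted ETag:
 *   "xyz"    -> "xyz-gzip"
 *   W/"xyz"  -> W/"xyz-gzip"
 * Returns 0 on success, -1 if the ETag is not quoted or buf is too small. */
int etag_append_token(char *buf, size_t bufsize,
                      const char *etag, const char *token)
{
    size_t len = strlen(etag);
    if (len < 2 || etag[len - 1] != '"'
        || len + strlen(token) + 1 > bufsize) {
        return -1;
    }
    memcpy(buf, etag, len - 1);              /* everything up to closing quote */
    sprintf(buf + len - 1, "%s\"", token);   /* token, then the closing quote */
    return 0;
}
```

Always appending a token, as Julian suggests, sidesteps the collision question: every filtered response gets a suffix, so a bare upstream ETag that happens to end in -gzip can no longer be mistaken for a deflated variant.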


Best regards, Julian




Re: ETag and Content-Encoding

2007-10-03 Thread Henrik Nordstrom
On ons, 2007-10-03 at 23:52 +0200, Henrik Nordstrom wrote:
  That is not HTTP.  Don't confuse the needs of caching with the needs
  of range requests -- only range requests need strong etags.
 
 I am not. I am talking about If-None-Match, not If-Range. And
 specifically the use of If-None-Match in 13.6 Caching Negotiated
 Responses.

To clarify, I do not care much about strong/weak etags. This is a
property of how the server generates the content, with no significant
relevance to caching other than that the ETags as such must be
sufficiently unique (there are some cache impacts of weak etags, but not
really relevant to this discussion).

If anything I said seems to imply that I only want to see strong ETags
then that's solely due to the use of poor language on my part and not
intentional.

All I am trying to say is that the responses

[no Content-Encoding]
and
Content-Encoding: gzip

from the same negotiated resource are two different variants in terms of
HTTP and must carry different ETag values, if any.

End.

The rest is just trying to get people to see this.

Apache mod_deflate does not do this when doing its dynamic content
negotiation driven transformations, and that is a bug (13.11 MUST) with
quite nasty implications for caching of negotiated responses (13.6).

The fact that responses with different Content-Encoding are meant to
result in the same object after decoding is pretty much irrelevant here.
They are two incompatible negotiated variants of the resource, and that
is all that matters.

I am also saying that the simple change of making mod_deflate transform
any existing ETag into a weak one is not sufficient to address this
properly, but it's quite likely to plaster over the problem for a while
in most uses, except when the original response ETag is already weak. It
will however break completely if Apache's GET If-None-Match processing is
changed to use the weak comparison as mandated by the RFC (13.3.3) (to
my best knowledge Apache always uses the strong function, but I may be
wrong there).

Negotiation of Content-Encoding is really not any different from
negotiation of any of the other content properties such as
Content-Language or Content-Type. The same rules apply, and each unique
outcome (variant) of the negotiation process needs to be assigned a
unique ETag with no overlaps between variants; for strong ETags,
each binary version of each variant needs to have a unique ETag with no
overlaps.

This ignores any out-of-band dynamic parameters to the negotiation
process, such as server load, which might affect responses to the same
request; I am only talking about negotiation based on request headers. For
out-of-band negotiation properties it's important to respect the strong
ETag binary equivalence requirements.


Note: Changed language to use the more proper term variant instead of
entity. Hopefully less confusing.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


mod_proxy and interim responses

2007-10-03 Thread Nick Kew
RFC2616 tells us a proxy MUST forward interim (1xx) responses
to an HTTP/1.1 Client, except where the proxy itself requested
the response.  Currently mod_proxy is just eating interim
responses.  There's a history of problems here (PR#16518).

I've hacked up a simple patch, based on a new
ap_send_interim_response API function.  This works, but could
raise issues of ordering if we start to support asynchronous
pipelining of requests on a connection.

Patch attached.  Comments?

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/
Index: include/http_protocol.h
===
--- include/http_protocol.h	(revision 581744)
+++ include/http_protocol.h	(working copy)
@@ -664,6 +664,12 @@
  * @param sub_r Subrequest that is now compete
  */
 AP_DECLARE(void) ap_finalize_sub_req_protocol(request_rec *sub_r);
+
+/**
+ * Send an interim (HTTP 1xx) response immediately.
+ * @param r The request
+ */
+AP_DECLARE(void) ap_send_interim_response(request_rec *r);
 
 #ifdef __cplusplus
 }
Index: server/protocol.c
===
--- server/protocol.c	(revision 581744)
+++ server/protocol.c	(working copy)
@@ -1631,6 +1631,48 @@
 }
 }
 
+typedef struct hdr_ptr {
+    ap_filter_t *f;
+    apr_bucket_brigade *bb;
+} hdr_ptr;
+static int send_header(void *data, const char *key, const char *val)
+{
+    ap_fputstrs(((hdr_ptr*)data)->f, ((hdr_ptr*)data)->bb,
+                key, ": ", val, "\r\n", NULL);
+    return 1;
+}
+AP_DECLARE(void) ap_send_interim_response(request_rec *r)
+{
+    /* write interim response directly to core filter */
+    hdr_ptr x;
+
+    if (r->proto_num < 1001) {
+        /* don't send interim response to HTTP/1.0 Client */
+        return;
+    }
+    if (!ap_is_HTTP_INFO(r->status)) {
+        ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
+                      "Status is %d - not sending interim response", r->status);
+        return;
+    }
+
+    /* write the interim response direct to the core filter
+     * to bypass initialising the final response
+     */
+    x.f = r->proto_output_filters;
+    while (strcmp(x.f->frec->name, "core")) {
+        x.f = x.f->next;
+    }
+    x.bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
+    ap_fputstrs(x.f, x.bb, "HTTP/1.1 ", r->status_line, "\r\n", NULL);
+    apr_table_do(send_header, &x, r->headers_out, NULL);
+    ap_fputs(x.f, x.bb, "\r\n");
+    ap_fflush(x.f, x.bb);
+    apr_brigade_destroy(x.bb);
+    apr_table_clear(r->headers_out);
+}
+
+
 AP_IMPLEMENT_HOOK_RUN_ALL(int,post_read_request,
   (request_rec *r), (r), OK, DECLINED)
 AP_IMPLEMENT_HOOK_RUN_ALL(int,log_transaction,
Index: modules/proxy/mod_proxy_http.c
===
--- modules/proxy/mod_proxy_http.c	(revision 581744)
+++ modules/proxy/mod_proxy_http.c	(working copy)
@@ -1497,6 +1497,7 @@
 ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, NULL,
  "proxy: HTTP: received interim %d response",
  r->status);
+ap_send_interim_response(r);
 }
 /* Moved the fixups of Date headers and those affected by
  * ProxyPassReverse/etc from here to ap_proxy_read_headers


Re: mod_proxy and interim responses

2007-10-03 Thread Nick Kew
On Thu, 4 Oct 2007 00:41:06 +0100
Nick Kew [EMAIL PROTECTED] wrote:

 Patch attached.  Comments?
 
OK, slightly dumb patch.  Correction is to write to
r-connection-output_filters, not bypass the whole chain!

/me heads for bed before doing more dumb things

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/


[STATUS] (httpd-2.0) Wed Oct 3 23:47:19 2007

2007-10-03 Thread Rodent of Unusual Size
APACHE 2.0 STATUS:  -*-text-*-
Last modified at [$Date: 2007-10-03 01:18:51 -0400 (Wed, 03 Oct 2007) $]

The current version of this file can be found at:

  * http://svn.apache.org/repos/asf/httpd/httpd/branches/2.0.x/STATUS

Documentation status is maintained separately and can be found at:

  * docs/STATUS in this source tree, or
  * http://svn.apache.org/repos/asf/httpd/httpd/branches/2.0.x/docs/STATUS

Consult the following STATUS files for information on related projects:

  * http://svn.apache.org/repos/asf/apr/apr/branches/0.9.x/STATUS
  * http://svn.apache.org/repos/asf/apr/apr-util/branches/0.9.x/STATUS

Consult the trunk/ for all new development and documentation efforts:

  * http://svn.apache.org/repos/asf/httpd/httpd/trunk/STATUS
  * http://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/STATUS


Release history:

2.0.62  : In maintenance
2.0.61  : Released September 7, 2007.
2.0.60  : Tagged August 10, 2007, not released.
2.0.59  : released July 28, 2006 as GA.
2.0.58  : released May 1, 2006 as GA. 
2.0.57  : tagged April 19, 2006, not released.
2.0.56  : tagged April 16, 2006, not released.
2.0.55  : released October 16, 2005 as GA.
2.0.54  : released April 17, 2005 as GA.
2.0.53  : released February 7, 2005 as GA.
2.0.52  : released September 28, 2004 as GA.
2.0.51  : released September 15, 2004 as GA.
2.0.50  : released June 30, 2004 as GA.
2.0.49  : released March 19, 2004 as GA.
2.0.48  : released October 29, 2003 as GA.
2.0.47  : released July 09, 2003 as GA.
2.0.46  : released May 28, 2003 as GA.
2.0.45  : released April 1, 2003 as GA.
2.0.44  : released January 20, 2003 as GA.
2.0.43  : released October 3, 2002 as GA.
2.0.42  : released September 24, 2002 as GA.
2.0.41  : rolled September 16, 2002.  not released.
2.0.40  : released August 9, 2002 as GA.
2.0.39  : released June 17, 2002 as GA.
2.0.38  : rolled June 16, 2002.  not released.
2.0.37  : rolled June 11, 2002.  not released.
2.0.36  : released May 6, 2002 as GA.
2.0.35  : released April 5, 2002 as GA.
2.0.34  : tagged March 26, 2002.
2.0.33  : tagged March 6, 2002.  not released.
2.0.32  : released February 16, 2002 as beta.
2.0.31  : rolled February 1, 2002.  not released.
2.0.30  : tagged January 8, 2002.  not rolled.
2.0.29  : tagged November 27, 2001.  not rolled.
2.0.28  : released November 13, 2001 as beta.
2.0.27  : rolled November 6, 2001
2.0.26  : tagged October 16, 2001.  not rolled.
2.0.25  : rolled August 29, 2001
2.0.24  : rolled August 18, 2001
2.0.23  : rolled August 9, 2001
2.0.22  : rolled July 29, 2001
2.0.21  : rolled July 20, 2001
2.0.20  : rolled July 8, 2001
2.0.19  : rolled June 27, 2001
2.0.18  : rolled May 18, 2001
2.0.17  : rolled April 17, 2001
2.0.16  : rolled April 4, 2001
2.0.15  : rolled March 21, 2001
2.0.14  : rolled March 7, 2001
2.0a9   : released December 12, 2000
2.0a8   : released November 20, 2000
2.0a7   : released October 8, 2000
2.0a6   : released August 18, 2000
2.0a5   : released August 4, 2000
2.0a4   : released June 7, 2000
2.0a3   : released April 28, 2000
2.0a2   : released March 31, 2000
2.0a1   : released March 10, 2000


Contributors looking for a mission:

* Just do an egrep on TODO or XXX in the source.

* Review the bug database at: http://issues.apache.org/bugzilla/

* Review the PatchAvailable bugs in the bug database:

  
http://issues.apache.org/bugzilla/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&product=Apache+httpd-2.0&keywords=PatchAvailable

  After testing, you can append a comment saying "Reviewed and tested".

* Open bugs in the bug database.


CURRENT RELEASE NOTES:

* Forward binary compatibility is expected of Apache 2.0.x releases, such
  that no MMN major number changes will occur.  Such changes can only be
  made in the trunk.

* All commits to branches/2.0.x must be reflected in SVN trunk,
  as well, if they apply.  Logical progression is commit to trunk,
  get feedback and votes on list or in STATUS, then merge into 
  branches/2.2.x, and finally merge into branches/2.0.x, as applicable.


RELEASE SHOWSTOPPERS:

   * core log.c: Authored and Reviewed by both rpluem and wrowe within 
     the same 10 minutes, share only a single apr_file_t/fd between the
     stderr and server_main->error_log to prevent any lingering write 
     handles from hanging around in unexpected ways.
        http://svn.apache.org/viewvc?view=rev&revision=580437
  PR 43491, solution validated by reporter
  +1: wrowe, rpluem

   * mpm_winnt: Correct the approach to std file handles by simplifying
 the approach and taking better advantage of apr's now-proper support.
http://svn.apache.org/viewvc?view=rev&revision=580433

[STATUS] (httpd-2.2) Wed Oct 3 23:47:50 2007

2007-10-03 Thread Rodent of Unusual Size
APACHE 2.2 STATUS:  -*-text-*-
Last modified at [$Date: 2007-10-03 01:17:42 -0400 (Wed, 03 Oct 2007) $]

The current version of this file can be found at:

  * http://svn.apache.org/repos/asf/httpd/httpd/branches/2.2.x/STATUS

Documentation status is maintained separately and can be found at:

  * docs/STATUS in this source tree, or
  * http://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/STATUS

Consult the following STATUS files for information on related projects:

  * http://svn.apache.org/repos/asf/apr/apr/trunk/STATUS
  * http://svn.apache.org/repos/asf/apr/apr-util/trunk/STATUS

Patches considered for backport are noted in their branches' STATUS:

  * http://svn.apache.org/repos/asf/httpd/httpd/branches/1.3.x/STATUS
  * http://svn.apache.org/repos/asf/httpd/httpd/branches/2.0.x/STATUS
  * http://svn.apache.org/repos/asf/httpd/httpd/branches/2.2.x/STATUS


Release history:
[NOTE that x.{odd}.z versions are strictly Alpha/Beta releases,
  while x.{even}.z versions are Stable/GA releases.]

2.2.7   : In development
2.2.6   : Released September 7, 2007.
2.2.5   : Tagged August 10, 2007, not released.
2.2.4   : Released on January 9, 2007 as GA.
2.2.3   : Released on July 28, 2006 as GA.
2.2.2   : Released on May 1, 2006 as GA.
2.2.1   : Tagged on April 1, 2006, not released.
2.2.0   : Released on December 1, 2005 as GA.
2.1.10  : Tagged on November 19, 2005, not released.
2.1.9   : Released on November 5, 2005 as beta.
2.1.8   : Released on October 1, 2005 as beta.
2.1.7   : Released on September 12, 2005 as beta.
2.1.6   : Released on June 27, 2005 as alpha.
2.1.5   : Tagged on June 17, 2005.
2.1.4   : not released.
2.1.3   : Released on  February 22, 2005 as alpha.
2.1.2   : Released on December 8, 2004 as alpha.
2.1.1   : Released on November 19, 2004 as alpha.
2.1.0   : not released.


Contributors looking for a mission:

* Just do an egrep on TODO or XXX in the source.

* Review the bug database at: http://issues.apache.org/bugzilla/

* Review the PatchAvailable bugs in the bug database:

  
https://issues.apache.org/bugzilla/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&product=Apache+httpd-2&keywords=PatchAvailable

  After testing, you can append a comment saying "Reviewed and tested".

* Open bugs in the bug database.


CURRENT RELEASE NOTES:

* Forward binary compatibility is expected of Apache 2.2.x releases, such
  that no MMN major number changes will occur.  Such changes can only be
  made in the trunk.

* All commits to branches/2.2.x must be reflected in SVN trunk,
  as well, if they apply.  Logical progression is commit to trunk,
  get feedback and votes on list or in STATUS, then merge into
  branches/2.2.x, as applicable.


RELEASE SHOWSTOPPERS:

PATCHES ACCEPTED TO BACKPORT FROM TRUNK:
  [ start all new proposals below, under PATCHES PROPOSED. ]

PATCHES PROPOSED TO BACKPORT FROM TRUNK:
  [ New proposals should be added at the end of the list ]

* mpm_winnt: Eliminate wait_for_many_objects.  Allows the clean 
  shutdown of the server when the MaxClients is higher than 257,
  in a more responsive manner.
  Trunk version of patch:
    http://svn.apache.org/viewvc?view=rev&revision=573103
    http://svn.apache.org/viewvc?view=rev&revision=573105
  2.2.x version of patch:
    http://people.apache.org/~wrowe/mpm_winnt_shutdown-2.2.patch
  +1: wrowe

   * mod_authn_dbd: Export any additional columns queried in the SQL select
     into the environment with the name AUTHENTICATE_<COLUMN>. This brings
     mod_authn_dbd behaviour in line with mod_authnz_ldap.
     Trunk: http://svn.apache.org/viewvc?view=rev&revision=466865
            http://svn.apache.org/viewvc?view=rev&revision=571798
            http://svn.apache.org/viewvc?view=rev&revision=571804
     +1: minfrin
     rpluem says: r466865 has a conflict in modules/aaa/mod_auth.h
          r571804 has a conflict in docs/manual/mod/mod_authnz_ldap.xml
          Without r571838 the documentation for mod_authn_dbd fails
          to build.

* multiple files,   Trivial cleanups
  PR: 39518 - Christophe JAILLET
  http://svn.apache.org/viewvc?view=rev&revision=557837
  http://svn.apache.org/viewvc?view=rev&revision=557972
  +1: rpluem
  niq: this isn't a straight backport (which is why I dropped it).
  +1 for core, modules/dav, modules/filters, and modules/ssl
  Not Applicable to modules/aaa

   * mod_include: Add an if directive syntax to test whether an URL
 is accessible, and if so, conditionally display content. This
 allows a webmaster to hide a link to a private page when the user
 has no access to that page.
     http://svn.apache.org/viewvc?view=rev&revision=571872
     http://svn.apache.org/viewvc?view=rev&revision=571927
 

[STATUS] (httpd-trunk) Wed Oct 3 23:51:35 2007

2007-10-03 Thread Rodent of Unusual Size
APACHE 2.3 STATUS:  -*-text-*-
Last modified at [$Date: 2006-08-22 16:41:03 -0400 (Tue, 22 Aug 2006) $]

The current version of this file can be found at:

  * http://svn.apache.org/repos/asf/httpd/httpd/trunk/STATUS

Documentation status is maintained separately and can be found at:

  * docs/STATUS in this source tree, or
  * http://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/STATUS

Consult the following STATUS files for information on related projects:

  * http://svn.apache.org/repos/asf/apr/apr/trunk/STATUS
  * http://svn.apache.org/repos/asf/apr/apr-util/trunk/STATUS

Patches considered for backport are noted in their branches' STATUS:

  * http://svn.apache.org/repos/asf/httpd/httpd/branches/1.3.x/STATUS
  * http://svn.apache.org/repos/asf/httpd/httpd/branches/2.0.x/STATUS
  * http://svn.apache.org/repos/asf/httpd/httpd/branches/2.2.x/STATUS


Release history:
[NOTE that x.{odd}.z versions are strictly Alpha/Beta releases,
  while x.{even}.z versions are Stable/GA releases.]

2.3.0   : in development


Contributors looking for a mission:

* Just do an egrep on TODO or XXX in the source.

* Review the bug database at: http://issues.apache.org/bugzilla/

* Review the PatchAvailable bugs in the bug database:

  
https://issues.apache.org/bugzilla/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&product=Apache+httpd-2&keywords=PatchAvailable

  After testing, you can append a comment saying "Reviewed and tested".

* Open bugs in the bug database.


CURRENT RELEASE NOTES:


RELEASE SHOWSTOPPERS:

* Handling of non-trailing / config by non-default handler is broken
  http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=105451701628081&w=2
  jerenkrantz asks: Why should this block a release?
  wsanchez agrees: this may be a change in behavior, but isn't
clearly wrong, and even if so, it doesn't seem like a
showstopper.

* the edge connection filter cannot be removed 
  http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=105366252619530&w=2

  jerenkrantz asks: Why should this block a release?

  stas replies: because it requires a rewrite of the filters stack
implementation (you have suggested that) and once 2.2 is
released you can't do that anymore. 


CURRENT VOTES:

* If the parent process dies, should the remaining child processes
  gracefully self-terminate. Or maybe we should make it a runtime
  option, or have a concept of 2 parent processes (one being a 
  hot spare).
  See: Message-ID: [EMAIL PROTECTED]

  Self-destruct: Ken, Martin, Lars
  Not self-destruct: BrianP, Ian, Cliff, BillS
  Make it runtime configurable: Aaron, jim, Justin, wrowe, rederpj, nd

  /* The below was a concept on *how* to handle the problem */
  Have 2 parents: +1: jim
  -1: Justin, wrowe, rederpj, nd
  +0: Lars, Martin (while standing by, could it do
something useful?)

* Make the worker MPM the default MPM for threaded Unix boxes.
  +1:   Justin, Ian, Cliff, BillS, striker, wrowe, nd
  +0:   BrianP, Aaron (mutex contention is looking better with the
latest code, let's continue tuning and testing), rederpj, jim
  -0:   Lars

  pquerna: Do we want to change this for 2.2?


RELEASE NON-SHOWSTOPPERS BUT WOULD BE REAL NICE TO WRAP THESE UP:

* Patches submitted to the bug database:
  
http://issues.apache.org/bugzilla/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&product=Apache+httpd-2&keywords=PatchAvailable

* Filter stacks and subrequests, redirects and fast redirects.
  There's at least one PR that suffers from the current unclean behaviour
  (which lets the server send garbage): PR 17629
  nd says: Every subrequest should get its own filter stack with the
   subreq_core filter as bottom-most. That filter does two things:
 - swallow EOS buckets
 - redirect the data stream to the upper request's (rr-main)
   filter chain directly after the subrequest's starting
   point.
   Once we have a clean solution, we can try to optimize
   it, so that the server won't be slowed down too much.

* RFC 2616 violations.
  Closed PRs: 15857.
  Open PRs: 15852, 15859, 15861, 15864, 15865, 15866, 15868, 15869,
15870, 16120, 16125, 16126, 16133, 16135, 16136, 16137,
16138, 16139, 16140, 16142, 16518, 16520, 16521, 
  jerenkrantz says: need to decide how many we need to backport and/or
if these rise to showstopper status.
  wrowe suggests: it would be nice to see MUST v.s. SHOULD v.s. MAY
  out of this list, without reviewing them individually.

* There is a bug in how we sort some hooks, at