RE: Preventing the use of chunked TF encoding while content-filtering

2009-11-09 Thread Anthony J. Biacco
Christoph,

I had a mod_buffer module written for me by Konstantin Chuguev 
(konstan...@chuguev.com) which collects chunks and buffers them for transfer in 
one shot.
You should contact him and see whether he'll give/license it to you.
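
For reference, a buffering output filter of that sort might look roughly like
this (an illustrative sketch only, not Konstantin's module; size caps, error
handling and metadata buckets are omitted):

    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* Sketch: set aside every bucket until EOS, then pass the whole response
     * in one brigade so the core content-length filter can compute
     * Content-Length and skip chunked transfer encoding. */
    static apr_status_t buffer_out_filter(ap_filter_t *f, apr_bucket_brigade *bb)
    {
        apr_bucket_brigade *buf = f->ctx;
        apr_bucket *b;

        if (buf == NULL) {
            buf = f->ctx = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
        }

        /* Bucket data may live in transient storage; set it aside so it
         * survives until the final pass. */
        for (b = APR_BRIGADE_FIRST(bb);
             b != APR_BRIGADE_SENTINEL(bb);
             b = APR_BUCKET_NEXT(b)) {
            apr_bucket_setaside(b, f->r->pool);
        }
        APR_BRIGADE_CONCAT(buf, bb);

        /* Once EOS arrives, the full body is known: deliver it in one shot. */
        if (!APR_BRIGADE_EMPTY(buf)
            && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(buf))) {
            return ap_pass_brigade(f->next, buf);
        }
        return APR_SUCCESS; /* keep buffering */
    }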

-Tony
---
Manager, IT Operations
Format Dynamics, Inc.
303-573-1800x27
abia...@formatdynamics.com
http://www.formatdynamics.com

 -Original Message-
 From: Christoph Gröver [mailto:gro...@sitepark.com]
 Sent: Monday, November 09, 2009 11:07 AM
 To: modules-dev@httpd.apache.org
 Subject: Preventing the use of chunked TF encoding while content-
 filtering
 
 
 Hello list,
 
 
 I have written a module which filters the content.
 
 It gets the buckets, works on them, and passes on the brigade of
 buckets. OK, it works perfectly under nearly all conditions.
 
 Except ...
 
 When the user runs the infamous Internet Explorer and uses a
 misconfigured proxy, it doesn't.
 
 With the help of some debugging we have found out what happens: The
 proxy is not HTTP/1.1-aware, which means it just changes the
 HTTP-response to be HTTP/1.0, but it keeps the
 'chunked transfer encoding' of the original content.
 
 Almost all browsers recognize that it's still chunked although the
 header claims to be HTTP/1.0 and do the right thing.
 
 MSIE doesn't. It gives the user the content and displays it with those
 hex-encoded chunk lengths embedded in it.
 
 Of course, this breaks website design and often the functionality.
 
 So, what can we do about it?
 
 We cannot change the proxy people are using and we cannot fix their
 misconfigured proxies (or their broken proxies).
 
 We cannot tell them to not use MSIE, either.
 
 
 So we should tell the Apache Webserver to not use 'chunked transfer
 encoding'. I thought this might be possible by just saying
 
 r->chunked = FALSE;
 
 But it didn't help.
 
 So after some talking: Is there a way to get rid of CTFE ?
 
 Perhaps if we collect all the chunks, put them in one chunk and set a
 Content-Length header?
 
 Or is there another trick to do this?
 
 Greetings from Münster, looking forward to your ideas.
 
 --
 Christoph Gröver, gro...@sitepark.com
 Sitepark GmbH, Gesellschaft für Informationsmanagement, AG Münster, HRB
 5017 Rothenburg 14-16, D-48143 Münster, Telefon (0251) 48265-50
 Geschäftsführer: Dipl.-Phys. Martin Kurze, Dipl.-Des. Thorsten Liebold


Re: Apache 2.2 coredumping on Solaris with Subversion 1.6

2009-11-09 Thread Nick Kew

skrishnam...@bloomberg.com wrote:

You're on the wrong list: this belongs on users@
(I know you posted there, but your mailer sent a bunch of
pseudo-HTML crap that made it too annoying to read).


I built it with the below two flags that should point it to the same apr and 
apr-util that were used for the
Apache installation I am using:

--with-apr=/bb/web/apache_2.2.0/src/httpd-2.2.0/srclib/apr 
--with-apr-util=/bb/web/apache_2.2.0/src/httpd-2.2.0/srclib/apr-util


That's the source.  And it's old source!  Do you know that httpd
was compiled with that same source? (By default it isn't, unless
configure can't find an installed APR on your system.)

Better to use the installed APR and up-to-date httpd.


Current function is access_checker
 548   authz_svn_config_rec *conf = ap_get_module_config(r->per_dir_config,


Definitely binary-incompatible builds, though not necessarily
anything to do with APR versions.

--
Nick Kew


RE: Apache 2.2 coredumping on Solaris with Subversion 1.6

2009-11-09 Thread skrishnam...@bloomberg.com
Per my knowledge this is the apr source tar ball that was used. How do I find 
the 'installed' apr and use that instead? Posted here because I didn't get any 
response on the users list and this seemed to be a modules issue.
Do let me know and I'll continue the posting there. Thanks.

-Original Message-
From: nicholas@sun.com [mailto:nicholas@sun.com] On Behalf Of Nick Kew
Sent: Monday, November 09, 2009 3:06 PM
To: modules-dev@httpd.apache.org
Subject: Re: Apache 2.2 coredumping on Solaris with Subversion 1.6

skrishnam...@bloomberg.com wrote:

You're on the wrong list: this belongs on users@
(I know you posted there, but your mailer sent a bunch of
pseudo-HTML crap that made it too annoying to read).

 I built it with the below two flags that should point it to the same apr and 
 apr-util that were used for the
 Apache installation I am using:
 
 --with-apr=/bb/web/apache_2.2.0/src/httpd-2.2.0/srclib/apr 
 --with-apr-util=/bb/web/apache_2.2.0/src/httpd-2.2.0/srclib/apr-util

That's the source.  And it's old source!  Do you know that httpd
was compiled with that same source? (By default it isn't, unless
configure can't find an installed APR on your system.)

Better to use the installed APR and up-to-date httpd.

 Current function is access_checker
  548   authz_svn_config_rec *conf = ap_get_module_config(r->per_dir_config,

Definitely binary-incompatible builds, though not necessarily
anything to do with APR versions.

-- 
Nick Kew


RE: TLS renegotiation attack, mod_ssl and OpenSSL

2009-11-09 Thread Boyle Owen
 -Original Message-
 From: Dirk-Willem van Gulik [mailto:di...@webweaving.org] 
 Sent: Saturday, November 07, 2009 12:28 AM
 To: dev@httpd.apache.org
 Subject: Re: TLS renegotiation attack, mod_ssl and OpenSSL
 
 +1 from me. (FreeBSD, Solaris). Test with and without certs (firefox, 
 safari, openssl tool). Tested with the renegotiation break script against openssl.

Can I just verify what is supposed to happen with the break script test?

I have built 2.2.14 with 0.9.8l on Solaris 10. I do:

$ openssl s_client -connect wibble:443
...
GET / HTTP/1.1
Host: wibble
R
RENEGOTIATING

Then the connection hangs and I get no further data back from the
server. On http://wibble/server-status, I see:

6-0 17718 0/1/1 R 0.14 31 90 0.0 0.00 0.00 ? ? ..reading..

Is this the intended behaviour? I thought it was supposed to drop the
connection?

Rgds,
Owen Boyle
Disclaimer: Any disclaimer attached to this message may be ignored. 
 
This message is for the named person's use only. It may contain confidential, 
proprietary or legally privileged information. If you receive this message in 
error, please notify the sender urgently and then immediately delete the 
message and any copies of it from your system. Please also immediately destroy 
any hardcopies of the message. 
The sender's company reserves the right to monitor all e-mail communications 
through their networks.


Re: svn commit: r833738 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_log_config.xml modules/loggers/mod_log_config.c

2009-11-09 Thread Stefan Fritsch
On Sunday 08 November 2009, Ruediger Pluem wrote:
 Just a random thought: Wouldn't it be possible to simplify things
  even further with apr_strtok?
 
Yes. Done in r834006.


Re: TLS renegotiation attack, mod_ssl and OpenSSL

2009-11-09 Thread Ruediger Pluem


On 11/09/2009 10:39 AM, Boyle Owen wrote:
 -Original Message-
 From: Dirk-Willem van Gulik [mailto:di...@webweaving.org] 
 Sent: Saturday, November 07, 2009 12:28 AM
 To: dev@httpd.apache.org
 Subject: Re: TLS renegotiation attack, mod_ssl and OpenSSL

 +1 from me. (FreeBSD, Solaris). Test with and without certs (firefox, 
 safari, openssl tool). Tested with the renegotiation break script against openssl.
 
 Can I just verify what is supposed to happen with the break script test?
 
 I have built 2.2.14 with 0.9.8l on Solaris 10. I do:
 
   $ openssl s_client -connect wibble:443
   ...
   GET / HTTP/1.1
   Host: wibble
   R
   RENEGOTIATING
 
 Then the connection hangs and I get no further data back from the
 server. On http://wibble/server-status, I see:
 
   6-0 17718 0/1/1 R 0.14 31 90 0.0 0.00 0.00 ? ? ..reading..
 
 Is this the intended behaviour? I thought it was supposed to drop the
 connection?

Dirk's tests are about the httpd patch

(http://www.apache.org/dist/httpd/patches/apply_to_2.2.14/CVE-2009-3555-2.2.patch)

which drops the connection. Not sure what openssl 0.9.8l does or what
the intended behaviour is. You might need to ask on the openssl dev list
about that.

Regards

Rüdiger



Re: svn commit: r834006 - /httpd/httpd/trunk/modules/loggers/mod_log_config.c

2009-11-09 Thread Ruediger Pluem


On 11/09/2009 11:00 AM, s...@apache.org wrote:
 Author: sf
 Date: Mon Nov  9 09:59:53 2009
 New Revision: 834006
 
 URL: http://svn.apache.org/viewvc?rev=834006&view=rev
 Log:
 Simplify code by using apr_strtok
 
 Modified:
 httpd/httpd/trunk/modules/loggers/mod_log_config.c
 
 Modified: httpd/httpd/trunk/modules/loggers/mod_log_config.c
 URL: 
 http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/loggers/mod_log_config.c?rev=834006&r1=834005&r2=834006&view=diff
 ==============================================================================
 --- httpd/httpd/trunk/modules/loggers/mod_log_config.c (original)
 +++ httpd/httpd/trunk/modules/loggers/mod_log_config.c Mon Nov  9 09:59:53 
 2009

 @@ -508,31 +508,20 @@
   * - commas to separate cookies
   */
  
 -    if ((cookies = apr_table_get(r->headers_in, "Cookie"))) {
 -        const char *cookie;
 -        const char *cookie_end;
 -        const char *cp;
 -        int a_len = strlen(a);
 -        /*
 -         * Loop over semicolon-separated cookies.
 -         */
 -        for (cookie = cookies; *cookie != '\0';
 -             cookie = cookie_end + strspn(cookie_end, "; \t")) {
 -            /* Loop invariant: cookie always points to start of cookie name */
 -
 -            /* Set cookie_end to ';' that ends this cookie, or '\0' at EOS */
 -            cookie_end = cookie + strcspn(cookie, ";");
 -
 -            cp = cookie + a_len;
 -            if (cp >= cookie_end)
 -                continue;
 -            cp += strspn(cp, " \t");
 -            if (*cp == '=' && !strncasecmp(cookie, a, a_len)) {
 -                char *cookie_value;
 -                cp++;  /* Move past '=' */
 -                cp += strspn(cp, " \t");  /* Move past WS */
 -                cookie_value = apr_pstrmemdup(r->pool, cp, cookie_end - cp);
 -                return ap_escape_logitem(r->pool, cookie_value);
 -            }
 +    if ((cookies_entry = apr_table_get(r->headers_in, "Cookie"))) {
 +        char *cookie, *last1, *last2;
 +        char *cookies = apr_pstrdup(r->pool, cookies_entry);
 +
 +        while ((cookie = apr_strtok(cookies, ";", &last1))) {
 +            char *name = apr_strtok(cookie, "=", &last2);
 +            char *value;
 +            apr_collapse_spaces(name, name);
 +
 +            if (!strcasecmp(name, a)
 +                && (value = apr_strtok(NULL, "=", &last2))) {
 +                value += strspn(value, " \t");  /* Move past WS */

What about trailing spaces in the value?

 +                return ap_escape_logitem(r->pool, value);
 +            }
 +            cookies = NULL;
          }
      }
      return NULL;
 
 

Regards

Rüdiger
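
For illustration, trailing whitespace could be trimmed with a few extra lines
(a sketch against the code quoted above, not the committed fix):

    /* Sketch: also strip trailing spaces/tabs from the cookie value before
     * logging it; 'value' is NUL-terminated by apr_strtok(). */
    char *end = value + strlen(value);
    while (end > value && (end[-1] == ' ' || end[-1] == '\t')) {
        --end;
    }
    *end = '\0';
    return ap_escape_logitem(r->pool, value);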


ssl related test failures

2009-11-09 Thread Stefan Fritsch
Hi,

with openssl 0.9.8k, I currently get a large number of test failures:

Test Summary Report
---
t/ssl/basicauth.t (Wstat: 0 Tests: 3 Failed: 2)
  Failed tests:  2-3
t/ssl/env.t   (Wstat: 0 Tests: 30 Failed: 15)
  Failed tests:  16-30
t/ssl/extlookup.t (Wstat: 0 Tests: 2 Failed: 2)
  Failed tests:  1-2
t/ssl/fakeauth.t  (Wstat: 0 Tests: 3 Failed: 2)
  Failed tests:  2-3
t/ssl/proxy.t (Wstat: 0 Tests: 172 Failed: 118)
  Failed tests:  1-59, 114-172
t/ssl/require.t   (Wstat: 0 Tests: 5 Failed: 2)
  Failed tests:  2, 5
t/ssl/varlookup.t (Wstat: 0 Tests: 72 Failed: 72)
  Failed tests:  1-72
t/ssl/verify.t(Wstat: 0 Tests: 3 Failed: 1)
  Failed test:  2


Can somebody verify that this is a problem in trunk and not with my 
perl-framework setup?

Thanks.

Stefan


Re: ssl related test failures

2009-11-09 Thread Ruediger Pluem


On 11/09/2009 11:25 AM, Stefan Fritsch wrote:
 Hi,
 
 with openssl 0.9.8k, I currently get a large number of test failures:
 
 Test Summary Report
 ---
 t/ssl/basicauth.t (Wstat: 0 Tests: 3 Failed: 2)
   Failed tests:  2-3
 t/ssl/env.t   (Wstat: 0 Tests: 30 Failed: 15)
   Failed tests:  16-30
 t/ssl/extlookup.t (Wstat: 0 Tests: 2 Failed: 2)
   Failed tests:  1-2
 t/ssl/fakeauth.t  (Wstat: 0 Tests: 3 Failed: 2)
   Failed tests:  2-3
 t/ssl/proxy.t (Wstat: 0 Tests: 172 Failed: 118)
   Failed tests:  1-59, 114-172
 t/ssl/require.t   (Wstat: 0 Tests: 5 Failed: 2)
   Failed tests:  2, 5
 t/ssl/varlookup.t (Wstat: 0 Tests: 72 Failed: 72)
   Failed tests:  1-72
 t/ssl/verify.t(Wstat: 0 Tests: 3 Failed: 1)
   Failed test:  2
 
 
 Can somebody verify that this is a problem in trunk and not with my 
 perl-framework setup?

Quick and maybe stupid question: Did you do a 'make clean' before
you compiled httpd against 0.9.8k?

Regards

Rüdiger



Re: ssl related test failures

2009-11-09 Thread Stefan Fritsch
On Monday 09 November 2009, Ruediger Pluem wrote:
 On 11/09/2009 11:25 AM, Stefan Fritsch wrote:
  Hi,
  
  with openssl 0.9.8k, I currently get a large number of test
  failures: 
  Test Summary Report
  ---
  t/ssl/basicauth.t (Wstat: 0 Tests: 3 Failed: 2)
Failed tests:  2-3
  t/ssl/env.t   (Wstat: 0 Tests: 30 Failed: 15)
Failed tests:  16-30
  t/ssl/extlookup.t (Wstat: 0 Tests: 2 Failed: 2)
Failed tests:  1-2
  t/ssl/fakeauth.t  (Wstat: 0 Tests: 3 Failed: 2)
Failed tests:  2-3
  t/ssl/proxy.t (Wstat: 0 Tests: 172 Failed: 118)
Failed tests:  1-59, 114-172
  t/ssl/require.t   (Wstat: 0 Tests: 5 Failed: 2)
Failed tests:  2, 5
  t/ssl/varlookup.t (Wstat: 0 Tests: 72 Failed: 72)
Failed tests:  1-72
  t/ssl/verify.t(Wstat: 0 Tests: 3 Failed: 1)
Failed test:  2
  
  
  Can somebody verify that this is a problem in trunk and not with
  my  perl-framework setup?
 
 Quick and maybe stupid question: Did you do a 'make clean' before
  you compiled httpd against 0.9.8k?
 
Yes, I did 'make distclean' and 'buildconf'


Re: dropping inode keyed locks in mod_dav_fs

2009-11-09 Thread Stefan Fritsch
On Friday 23 October 2009, Stefan Fritsch wrote:
 On Thursday 22 October 2009, Joe Orton wrote:
   Is the performance improvement of inode keyed locking so large
   that it  is worth the hassle? If mod_dav_fs used filename keyed
   locking entirely, there would be an easy way to make file
   replacement by PUT atomic (see PR 39815). The current behaviour
   of deleting the old and the new file when the PUT fails is
   really bad.
 
  I believe the intent of using inode/device-number keyed locks was
   to  ensure that the lock database is independent of the mount
   point - i.e. you could move it around in the filesystem and it'd
   still work.
 
 Interesting. Do you think this feature is actually used?
 
  I certainly agree that the delete-on-PUT-failure behaviour is
  bad; I  think the correct behaviour would be to do the deletion
  only if the resource is newly created by the PUT.
 
 That would still replace the old file with a broken new file. Even
 better would be to save the new file to a temp file and move that
  over the old file if the transfer has completed successfully.
  But this breaks locking with inode keyed locks. Therefore I would
  like to move to filename keyed locks (which are already there for
  systems without inode numbers). Any opinions on this?

Since nobody disagreed, I am going ahead with this: removing inode 
keyed locks and making PUT use temp files.

Using temp files together with inode keyed locks would require either 
copying (instead of moving) the temp file to the target file, or 
extending the dav_hooks_locks interface. I think both solutions are 
worse than just switching to filename keyed locks.
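
For illustration, the temp-file approach could look roughly like this (a
hedged sketch on top of APR's mktemp/rename calls, not the committed code;
the helper name and flags are assumptions):

    #include "apr_file_io.h"
    #include "apr_strings.h"

    /* Sketch: write the PUT body to a temp file next to the target, then
     * rename it over the destination only after a successful transfer. */
    static apr_status_t put_atomically(const char *dst, const void *body,
                                       apr_size_t len, apr_pool_t *p)
    {
        apr_file_t *f;
        /* apr_file_mktemp() needs a writable template ending in XXXXXX */
        char *tmp = apr_pstrcat(p, dst, ".tmp.XXXXXX", NULL);
        apr_status_t rv = apr_file_mktemp(&f, tmp,
                                          APR_CREATE | APR_WRITE | APR_EXCL, p);

        if (rv != APR_SUCCESS) {
            return rv;
        }
        rv = apr_file_write_full(f, body, len, NULL);
        apr_file_close(f);
        if (rv != APR_SUCCESS) {
            apr_file_remove(tmp, p);  /* aborted: old file stays untouched */
            return rv;
        }
        /* atomic when tmp and dst are on the same filesystem */
        return apr_file_rename(tmp, dst, p);
    }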


Re: dropping inode keyed locks in mod_dav_fs

2009-11-09 Thread Greg Stein
Sorry for missing earlier messages; I don't follow httpd as closely as before.

See my replies below:

On Mon, Nov 9, 2009 at 06:28, Stefan Fritsch s...@sfritsch.de wrote:
 On Friday 23 October 2009, Stefan Fritsch wrote:
 On Thursday 22 October 2009, Joe Orton wrote:
   Is the performance improvement of inode keyed locking so large
   that it  is worth the hassle? If mod_dav_fs used filename keyed
   locking entirely, there would be an easy way to make file
   replacement by PUT atomic (see PR 39815). The current behaviour
   of deleting the old and the new file when the PUT fails is
   really bad.
 
  I believe the intent of using inode/device-number keyed locks was
   to  ensure that the lock database is independent of the mount
   point - i.e. you could move it around in the filesystem and it'd
   still work.

No, I don't think so. The database would be movable with or without
inodes, afaik.

That is dim history :-P, but I am pretty sure it is because I was looking
forward to when the BIND protocol was implemented and you could bind
multiple URLs to the same resource. Thus, if you wanted to lock the
*resource*, then the inode was your primary key. Locking one path and
allowing changes through another would be Badness.

... [snip: bad PUT behavior] ...

 Since nobody disagreed, I am going ahead with this and remove inode
 keyed locks and make PUT use temp files.

 Using temp files and inode keyed locks would require to either copy
 (instead of move) the temp file to the target file, or to extend the
 dav_hooks_locks interface. I think both solutions are worse than just
 switching to file name keyed locks.

People aren't running around with symlinks in their repository, I
believe. mod_dav is pretty unaware of them, and Bad Things are
advertised to happen. It already takes some steps to explicitly
disallow symlinks as files. Thus, the notion of one resource in two
locations is probably moot.

Therefore, improving the PUT situation rather than keeping inodes is a
very good tradeoff.

Thanks,
-g


How is this Known Problem in Clients solved?

2009-11-09 Thread dreamice

Trailing CRLF on POSTs
This is a legacy issue. The CERN webserver required POST data to have an
extra CRLF following it. Thus many clients send an extra CRLF that is not
included in the Content-Length of the request. Apache works around this
problem by eating any empty lines which appear before a request.
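
In pseudo-C, the workaround amounts to something like this
(read_request_line() and parse_request_line() are hypothetical helpers for
illustration, not httpd's actual API):

    /* Sketch: 'eat' the stray empty line(s) a legacy client sends after
     * its POST body, before parsing the next request line. */
    char line[8192];
    do {
        read_request_line(conn, line, sizeof(line));
    } while (line[0] == '\0');  /* discard empty lines before the request */
    parse_request_line(r, line);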

Thanks a lot!
-- 
View this message in context: 
http://old.nabble.com/How-does-this-Known-Problem-in-Clients-solve--tp26266536p26266536.html
Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.



Re: svn commit: r834049 - in /httpd/httpd/trunk: CHANGES modules/dav/fs/lock.c modules/dav/fs/repos.c

2009-11-09 Thread Greg Stein
On Mon, Nov 9, 2009 at 08:14,  s...@apache.org wrote:
 Author: sf
 Date: Mon Nov  9 13:14:07 2009
 New Revision: 834049

 URL: http://svn.apache.org/viewvc?rev=834049&view=rev
 Log:
 Make PUT with DAV_MODE_WRITE_TRUNC create a temporary file first and, when the
 transfer has been completed successfully, move it over the old file.

 Since this would break inode keyed locking, switch to filename keyed locking
 exclusively.

 PR: 39815
 Submitted by: Paul Querna, Stefan Fritsch

 Modified:
    httpd/httpd/trunk/CHANGES
    httpd/httpd/trunk/modules/dav/fs/lock.c
    httpd/httpd/trunk/modules/dav/fs/repos.c

 Modified: httpd/httpd/trunk/CHANGES
 URL: 
 http://svn.apache.org/viewvc/httpd/httpd/trunk/CHANGES?rev=834049&r1=834048&r2=834049&view=diff
 ==============================================================================
 --- httpd/httpd/trunk/CHANGES [utf-8] (original)
 +++ httpd/httpd/trunk/CHANGES [utf-8] Mon Nov  9 13:14:07 2009
 @@ -10,6 +10,13 @@
      mod_proxy_ftp: NULL pointer dereference on error paths.
      [Stefan Fritsch sf fritsch.de, Joe Orton]

 +  *) mod_dav_fs: Make PUT create files atomically and no longer destroy the
 +     old file if the transfer aborted. PR 39815. [Paul Querna, Stefan Fritsch]
 +
 +  *) mod_dav_fs: Remove inode keyed locking as this conflicts with atomically
 +     creating files. This is a format change of the DavLockDB. The old
 +     DavLockDB must be deleted on upgrade. [Stefan Fritsch]
 +
   *) mod_log_config: Make ${cookie}C correctly match whole cookie names
      instead of substrings. PR 28037. [Dan Franklin dan dan-franklin.com,
      Stefan Fritsch]
...

Why did you go with a format change of the DAVLockDB? It is quite
possible that people will miss that step during an upgrade. You could
just leave DAV_TYPE_FNAME in there.

Cheers,
-g


Re: svn commit: r834049 - in /httpd/httpd/trunk: CHANGES modules/dav/fs/lock.c modules/dav/fs/repos.c

2009-11-09 Thread Stefan Fritsch
On Monday 09 November 2009, Greg Stein wrote:
 On Mon, Nov 9, 2009 at 08:14,  s...@apache.org wrote:
  Author: sf
  Date: Mon Nov  9 13:14:07 2009
  New Revision: 834049
 
  URL: http://svn.apache.org/viewvc?rev=834049&view=rev
  Log:
  Make PUT with DAV_MODE_WRITE_TRUNC create a temporary file first
  and, when the transfer has been completed successfully, move it
  over the old file.
 
  Since this would break inode keyed locking, switch to filename
  keyed locking exclusively.
 
  PR: 39815
  Submitted by: Paul Querna, Stefan Fritsch
 
  Modified:
 httpd/httpd/trunk/CHANGES
 httpd/httpd/trunk/modules/dav/fs/lock.c
 httpd/httpd/trunk/modules/dav/fs/repos.c
 
  Modified: httpd/httpd/trunk/CHANGES
  URL:
  http://svn.apache.org/viewvc/httpd/httpd/trunk/CHANGES?rev=834049&r1=834048&r2=834049&view=diff
  ==============================================================================
  --- httpd/httpd/trunk/CHANGES [utf-8] (original)
  +++ httpd/httpd/trunk/CHANGES [utf-8] Mon Nov  9 13:14:07 2009
  @@ -10,6 +10,13 @@
   mod_proxy_ftp: NULL pointer dereference on error paths.
   [Stefan Fritsch sf fritsch.de, Joe Orton]
 
  +  *) mod_dav_fs: Make PUT create files atomically and no longer destroy the
  +     old file if the transfer aborted. PR 39815. [Paul Querna, Stefan Fritsch]
  +
  +  *) mod_dav_fs: Remove inode keyed locking as this conflicts with atomically
  +     creating files. This is a format change of the DavLockDB. The old
  +     DavLockDB must be deleted on upgrade. [Stefan Fritsch]
  +
     *) mod_log_config: Make ${cookie}C correctly match whole cookie names
        instead of substrings. PR 28037. [Dan Franklin dan dan-franklin.com,
        Stefan Fritsch]
 ...
 
 Why did you go with a format change of the DAVLockDB? It is quite
 possible that people will miss that step during an upgrade. You
  could just leave DAV_TYPE_FNAME in there.

That wouldn't help because it would still break with DAV_TYPE_INODE 
locks existing in the DAVLockDB. Or am I missing something?


Re: svn commit: r834049 - in /httpd/httpd/trunk: CHANGES modules/dav/fs/lock.c modules/dav/fs/repos.c

2009-11-09 Thread Ruediger Pluem


On 11/09/2009 02:14 PM, s...@apache.org wrote:
 Author: sf
 Date: Mon Nov  9 13:14:07 2009
 New Revision: 834049
 
 URL: http://svn.apache.org/viewvc?rev=834049&view=rev
 Log:
 Make PUT with DAV_MODE_WRITE_TRUNC create a temporary file first and, when the
 transfer has been completed successfully, move it over the old file.
 
 Since this would break inode keyed locking, switch to filename keyed locking
 exclusively.
 
 PR: 39815
 Submitted by: Paul Querna, Stefan Fritsch
 
 Modified:
 httpd/httpd/trunk/CHANGES
 httpd/httpd/trunk/modules/dav/fs/lock.c
 httpd/httpd/trunk/modules/dav/fs/repos.c
 

 Modified: httpd/httpd/trunk/modules/dav/fs/repos.c
 URL: 
 http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/dav/fs/repos.c?rev=834049&r1=834048&r2=834049&view=diff
 ==============================================================================
 --- httpd/httpd/trunk/modules/dav/fs/repos.c (original)
 +++ httpd/httpd/trunk/modules/dav/fs/repos.c Mon Nov  9 13:14:07 2009
 dav_stream **stream)
 @@ -876,7 +890,19 @@
  
       ds->p = p;
       ds->pathname = resource->info->pathname;
  -    rv = apr_file_open(&ds->f, ds->pathname, flags, APR_OS_DEFAULT, ds->p);
  +    ds->temppath = NULL;
  +
  +    if (mode == DAV_MODE_WRITE_TRUNC) {
  +        ds->temppath = apr_pstrcat(p, ap_make_dirstr_parent(p, ds->pathname),
  +                                   DAV_FS_TMP_PREFIX "XXXXXX", NULL);
  +        rv = apr_file_mktemp(&ds->f, ds->temppath, flags, ds->p);

This causes the following warning:

repos.c: In function 'dav_fs_open_stream':
repos.c:900: warning: passing argument 2 of 'apr_file_mktemp' discards qualifiers from pointer target type


Regards

Rüdiger


Re: svn commit: r834049 - in /httpd/httpd/trunk: CHANGES modules/dav/fs/lock.c modules/dav/fs/repos.c

2009-11-09 Thread Stefan Fritsch
On Monday 09 November 2009, Ruediger Pluem wrote:
 This causes the following warning:
 
 repos.c: In function 'dav_fs_open_stream':
 repos.c:900: warning: passing argument 2 of 'apr_file_mktemp'
  discards qualifiers from pointer target type
 
Thanks. Fixed.
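
For reference, one plausible shape for that fix (a sketch; the committed
change may differ): build the template in a non-const buffer, since
apr_file_mktemp() rewrites it in place.

    /* Sketch: apr_file_mktemp() modifies its template argument, so pass a
     * plain char buffer and store it in the const field afterwards. */
    char *template = apr_pstrcat(p, ap_make_dirstr_parent(p, ds->pathname),
                                 DAV_FS_TMP_PREFIX "XXXXXX", NULL);
    rv = apr_file_mktemp(&ds->f, template, flags, ds->p);
    ds->temppath = template;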


Re: svn commit: r834049 - in /httpd/httpd/trunk: CHANGES modules/dav/fs/lock.c modules/dav/fs/repos.c

2009-11-09 Thread Greg Stein
On Mon, Nov 9, 2009 at 08:42, Stefan Fritsch s...@sfritsch.de wrote:
 On Monday 09 November 2009, Greg Stein wrote:
 On Mon, Nov 9, 2009 at 08:14,  s...@apache.org wrote:
  Author: sf
  Date: Mon Nov  9 13:14:07 2009
  New Revision: 834049
 
   URL: http://svn.apache.org/viewvc?rev=834049&view=rev
  Log:
  Make PUT with DAV_MODE_WRITE_TRUNC create a temporary file first
  and, when the transfer has been completed successfully, move it
  over the old file.
 
  Since this would break inode keyed locking, switch to filename
  keyed locking exclusively.
 
  PR: 39815
  Submitted by: Paul Querna, Stefan Fritsch
 
  Modified:
     httpd/httpd/trunk/CHANGES
     httpd/httpd/trunk/modules/dav/fs/lock.c
     httpd/httpd/trunk/modules/dav/fs/repos.c
 
  Modified: httpd/httpd/trunk/CHANGES
  URL:
  http://svn.apache.org/viewvc/httpd/httpd/trunk/CHANGES?rev=834049&r1=834048&r2=834049&view=diff
  ==============================================================================
  --- httpd/httpd/trunk/CHANGES [utf-8] (original)
  +++ httpd/httpd/trunk/CHANGES [utf-8] Mon Nov  9 13:14:07 2009
  @@ -10,6 +10,13 @@
       mod_proxy_ftp: NULL pointer dereference on error paths.
       [Stefan Fritsch sf fritsch.de, Joe Orton]
 
  +  *) mod_dav_fs: Make PUT create files atomically and no longer destroy the
  +     old file if the transfer aborted. PR 39815. [Paul Querna, Stefan Fritsch]
  +
  +  *) mod_dav_fs: Remove inode keyed locking as this conflicts with atomically
  +     creating files. This is a format change of the DavLockDB. The old
  +     DavLockDB must be deleted on upgrade. [Stefan Fritsch]
  +
     *) mod_log_config: Make ${cookie}C correctly match whole cookie names
        instead of substrings. PR 28037. [Dan Franklin dan dan-franklin.com,
        Stefan Fritsch]
 ...

 Why did you go with a format change of the DAVLockDB? It is quite
 possible that people will miss that step during an upgrade. You
  could just leave DAV_TYPE_FNAME in there.

 That wouldn't help because it would still break with DAV_TYPE_INODE
 locks existing in the DAVLockDB. Or am I missing something?

Heh. Yeah. I realized that right *after* I hit the Send button :-P

Tho: mod_dav could error if it sees an unrecognized type, rather than
simply misinterpreting the data and silently unlocking all nodes.

Cheers,
-g


Re: ssl related test failures

2009-11-09 Thread Sander Temme

Hi Stefan,

On Nov 9, 2009, at 2:25 AM, Stefan Fritsch wrote:


Hi,

with openssl 0.9.8k, I currently get a large number of test failures:


These tests do not fail for me.  Can you run a subset in verbose and  
see how they fail?  Like:


t/TEST ... -verbose t/ssl/basicauth.t

should get you some more insight.  Also, which platform?

S.


Test Summary Report
---
t/ssl/basicauth.t (Wstat: 0 Tests: 3 Failed: 2)
 Failed tests:  2-3
t/ssl/env.t   (Wstat: 0 Tests: 30 Failed: 15)
 Failed tests:  16-30
t/ssl/extlookup.t (Wstat: 0 Tests: 2 Failed: 2)
 Failed tests:  1-2
t/ssl/fakeauth.t  (Wstat: 0 Tests: 3 Failed: 2)
 Failed tests:  2-3
t/ssl/proxy.t (Wstat: 0 Tests: 172 Failed: 118)
 Failed tests:  1-59, 114-172
t/ssl/require.t   (Wstat: 0 Tests: 5 Failed: 2)
 Failed tests:  2, 5
t/ssl/varlookup.t (Wstat: 0 Tests: 72 Failed: 72)
 Failed tests:  1-72
t/ssl/verify.t(Wstat: 0 Tests: 3 Failed: 1)
 Failed test:  2


Can somebody verify that this is a problem in trunk and not with my
perl-framework setup?

Thanks.

Stefan






--
Sander Temme
scte...@apache.org
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF







Re: Httpd 3.0 or something else

2009-11-09 Thread Akins, Brian
On 11/9/09 12:32 AM, Brian McCallister bri...@skife.org wrote:

 A 3.0, a fundamental architectural shift, would be interesting to
 discuss, I am not sure there is a ton of value in it, though, to be
 honest.

So I should continue to investigate nginx? ;)

FWIW, nginx delivers on its performance promises, but is a horrible hairball
of code (my opinion).  We (httpd-dev type folks) could do much better - if
we just would. (Easy for the guy with no time to say, I know...)

 
-- 
Brian Akins



Re: ssl related test failures

2009-11-09 Thread Stefan Fritsch
On Monday 09 November 2009, Sander Temme wrote:
 Hi Stefan,
 
 On Nov 9, 2009, at 2:25 AM, Stefan Fritsch wrote:
  Hi,
 
  with openssl 0.9.8k, I currently get a large number of test
  failures:
 
 These tests do not fail for me.  Can you run a subset in verbose
  and see how they fail?  Like:
 
 t/TEST ... -verbose t/ssl/basicauth.t
 
 should get you some more insight.  Also, which platform?

This is Debian unstable with the Debian openssl. It seems to complain
about an expired CRL. AFAICS with tcpdump, it doesn't try to connect
anywhere to get the CRL. Any ideas? If not I will dig deeper later,
no time ATM.

t/ssl/basicauth.t ..
1..3
# Running under perl version 5.010001 for linux
# Current time local: Mon Nov  9 16:36:42 2009
# Current time GMT:   Mon Nov  9 15:36:42 2009
# Using Test.pm version 1.25_02
# Using Apache/Test.pm version 1.31
# testing : Getting /ssl-fakebasicauth/index.html with no cert
# expected: 500
# received: 500
ok 1
# testing : Getting /ssl-fakebasicauth/index.html with client_snakeoil cert
# expected: 200
# received: 500
not ok 2
# Failed test 2 in t/ssl/basicauth.t at line 25
# testing : Getting /ssl-fakebasicauth/index.html with client_ok cert
# expected: 401
# received: 500
not ok 3
# Failed test 3 in t/ssl/basicauth.t at line 30
Failed 2/3 subtests

From the error log:

[Mon Nov 09 16:38:53 2009] [info] Initial (No.1) HTTPS request received for 
child 1 (server localhost:8532)
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(552): [client 127.0.0.1] 
Changed client verification type will force renegotiation
[Mon Nov 09 16:38:53 2009] [info] [client 127.0.0.1] Requesting connection 
re-negotiation
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(728): [client 127.0.0.1] 
Performing full renegotiation: complete handshake protocol
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1831): OpenSSL: 
Handshake: start
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSL renegotiate ciphers
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 write hello request A
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 flush data
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 write hello request C
[Mon Nov 09 16:38:53 2009] [info] [client 127.0.0.1] Awaiting re-negotiation 
handshake
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1831): OpenSSL: 
Handshake: start
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
before accept initialization
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 read client hello A
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 write server hello A
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 write certificate A
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1231): [client 
127.0.0.1] handing out temporary 1024 bit DH key
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 write key exchange A
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 write certificate request A
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1839): OpenSSL: Loop: 
SSLv3 flush data
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1273): [client 
127.0.0.1] Certificate Verification, depth 1 [subject: 
/C=US/ST=California/L=San 
Francisco/O=ASF/OU=httpd-test/CN=ca/emailaddress=test-...@httpd.apache.org, 
issuer: /C=US/ST=California/L=San 
Francisco/O=ASF/OU=httpd-test/CN=ca/emailAddress=test-
dev@httpd.apache.org, serial: D11C47D1766CFD0D]
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1480): CA CRL: Issuer: 
C=US, ST=California, L=San Francisco, O=ASF, OU=httpd-test, 
CN=ca/emailAddress=test-
dev@httpd.apache.org, lastUpdate: Oct  3 12:01:39 2009 GMT, nextUpdate: Nov  2 
12:01:39 2009 GMT
[Mon Nov 09 16:38:53 2009] [warn] Found CRL is expired - revoking all 
certificates until you get updated CRL
[Mon Nov 09 16:38:53 2009] [error] [client 127.0.0.1] Certificate Verification: 
Error (12): CRL has expired
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1849): OpenSSL: Write: 
SSLv3 read client certificate B
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1868): OpenSSL: Exit: 
error in SSLv3 read client certificate B
[Mon Nov 09 16:38:53 2009] [error] [client 127.0.0.1] Re-negotiation handshake 
failed: Not accepted by client!?
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1273): [client 
127.0.0.1] Certificate Verification, depth 1 [subject: 
/C=US/ST=California/L=San 
Francisco/O=ASF/OU=httpd-test/CN=ca/emailaddress=test-...@httpd.apache.org, 
issuer: /C=US/ST=California/L=San 
Francisco/O=ASF/OU=httpd-test/CN=ca/emailAddress=test-
dev@httpd.apache.org, serial: D11C47D1766CFD0D]
[Mon Nov 09 16:38:53 2009] [debug] ssl_engine_kernel.c(1480): CA CRL: Issuer: 
C=US, ST=California, L=San 

Apache 2.2 coredumping on Solaris with Subversion 1.6

2009-11-09 Thread skrishnam...@bloomberg.com
Hi,

I compiled subversion with apache 2.2 on solaris but when I hit the server with 
an svn request, the svn modules produce a core dump. Has anyone faced anything 
similar, or any ideas about how to fix or work around this issue? Any help 
is appreciated.
Running a pstack on the core file produces:

# /bin/pstack /bb/cores/core.httpd.5718.1257279721
core '/bb/cores/core.httpd.5718.1257279721' of 5718:
/bb/web/apache_2_2/bin/httpd -k start
 fdaa2214 access_checker (36aea0, 36c1a0, 640, fefefeff, 80808080, 1010101) + 44
 00069ef8 ap_run_access_checker (36aea0, 0, 0, 0, 0, 0) + 70
 0006aa58 ap_process_request_internal (36aea0, 0, 0, 0, 0, 0) + 360
 0008a378 ap_process_request (36aea0, 4, 36aea0, 364f60, 0, 0) + 60
 00085c50 ap_process_http_connection (364f60, 364e88, 0, 0, 0, 0) + 88
 0007e9e4 ap_run_process_connection (364f60, 364e88, 364e88, 0, 362f98, 368e60) 
+ 74
 0007f0a8 ap_process_connection (364f60, 364e88, 364e88, 0, 362f98, 368e60) + 98
 00091d3c child_main (0, 91200, 0, fefb8000, fef73700, fee42a00) + 60c
 00091fa8 make_child (109e00, 0, 0, 10, 10b4, fef73b00) + 1b8
 00092070 startup_children (5, 2, 5, 108120, 232d20, 0) + 70
 00092784 ap_mpm_run (108120, 1361d8, 109e00, 109e00, 4, 4) + 31c
 0004eb68 main (3, ffbff43c, ffbff44c, f3800, fee40100, fee40140) + f08
 0004d198 _start   (0, 0, 0, 0, 0, 0) + 108

I've seen several other posts from people having the same issue, but no real 
solutions.

e.g :

http://forums.sun.com/thread.jspa?threadID=5360736&messageID=10573946#10573946
http://forums.sun.com/thread.jspa?threadID=5360736
http://markmail.org/message/jwhlkgvrnbmgunsd#query:subversion%20crashes%20at%20access_checker+page:1+mid:5cjtts3co442pxxh+state:results
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=462&viewType=browseAll&dsMessageId=588712

Stepping through the core file with dbx gives me:

Reading libz.so.1.2.3
Reading libm.so.1
Reading libgcc_s.so.1
Reading mod_dav_svn.so
Reading libsvn_repos-1.so.0.0.0
Reading libsvn_fs-1.so.0.0.0
Reading libsvn_delta-1.so.0.0.0
Reading libsvn_subr-1.so.0.0.0
Reading libexpat.so.0.1.0
Reading libsvn_fs_fs-1.so.0.0.0
Reading libsvn_fs_util-1.so.0.0.0
Reading libsqlite3.so.0.8.6
Reading mod_authz_svn.so
Reading mod_jk.so
t...@1 (l...@1) program terminated by signal SEGV (Segmentation Fault)
Current function is access_checker
  548   authz_svn_config_rec *conf = ap_get_module_config(r->per_dir_config,


cheers


Re: ssl related test failures

2009-11-09 Thread Jeff Trawick
On Mon, Nov 9, 2009 at 10:55 AM, Stefan Fritsch s...@sfritsch.de wrote:
 On Monday 09 November 2009, Sander Temme wrote:
 Hi Stefan,

 On Nov 9, 2009, at 2:25 AM, Stefan Fritsch wrote:
  Hi,
 
  with openssl 0.9.8k, I currently get a large number of test
  failures:

 These tests do not fail for me.  Can you run a subset in verbose
  and see how they fail?  Like:

 t/TEST ... -verbose t/ssl/basicauth.t

 should get you some more insight.  Also, which platform?

 This is Debian unstable with the Debian openssl. It seems to complain
 about an expired CRL.

this is a test framework tree you've had for a while?  the certs will
expire after a while (30 days perhaps?)

does t/TEST -clean force the certs to be generated next time you run
the tests?  (you can see the openssl output scroll by)


Re: Making a binary distribution of apache 2.2.14 on Aix 6.1

2009-11-09 Thread Michael Felt
Actually, the reason I started this thread is because I wanted to start
making builds that used IBM's installp format for distribution rather than
RPM - which is the format chosen for most of the AIX toolbox. Imho much of
the difficulty the libtool devs have with the AIX platform (as generally
the solution is playing with libtool, or loading a newer version) is this
mixed install environment.

I have been trying to develop and package without using any of the AIX
toolbox as I do not want any dependencies on it. Instead, I shall, as
suggested towards the beginning of this thread, make my own packages to
fulfill dependencies and/or specify IBM installp packages (i.e. libraries
installed into /usr/lib and maybe /opt/lib).

There are a couple of other repositories out there - my site - when I
finally get it assembled - will be yet another, but with forums behind it so
that people can share experiences.

Michael

On Wed, Oct 28, 2009 at 6:04 PM, Graham Leggett minf...@sharp.fm wrote:

 Ali Halawi wrote:

  Well rpmbuild doesn't exist on my aix..

 What packaging system does aix use? RPM? Or something else?


  Any ideas? I tried a lot of solutions but can't figure it out today! Any
  help would be appreciated!

 APR and APR-util are distributed separately, you would probably need to
 deploy packages of APR and APR-util before you can expect httpd to build.

 (For legacy reasons, the apr and apr-util trees are still included in
 the tarballs, but the packaging scripts ignore them).

 Regards,
 Graham
 --



Re: Httpd 3.0 or something else

2009-11-09 Thread Graham Leggett
Akins, Brian wrote:

 FWIW, nginx delivers on its performance promises, but is a horrible hairball
 of code (my opinion).  We (httpd-dev type folks) could do much better - if
 we just would. (Easy for the guy with no time to say, I know...)

I think it is entirely reasonable for the httpd v3.0 codebase to do this
as a goal:

- Be asynchronous throughout; while
- Supporting prefork as httpd does now; and
- Allow variable levels of event-driven-ness in between.

This gives us the option of prefork reliability, and event driven speed,
as required by the admin.

Regards,
Graham
--


intend to roll 2.3 alpha on Wednesday

2009-11-09 Thread Paul Querna
Hello dev@,

I intend to roll a 2.3 alpha release on Wednesday November 11th.

I will bundle APR from the 1.4.x branch. (APR people should make a
release, but this shouldn't be a blocker for our own alpha releases).

I am almost 90% sure the release might fail due to various issues, but
we need to start cleaning those issues out.

Thanks,

Paul


Re: Httpd 3.0 or something else

2009-11-09 Thread Akins, Brian
On 11/9/09 12:52 PM, Graham Leggett minf...@sharp.fm wrote:

 This gives us the option of prefork reliability, and event driven speed,
 as required by the admin.

I think if we try to do both, we will wind up with the worst of both worlds.
(Or is it worse??)  Blocking/buggy modules should be run out of process
(FastCGI/HTTP/Thrift).

-- 
Brian Akins



Re: Making a binary distribution of apache 2.2.14 on Aix 6.1

2009-11-09 Thread Graham Leggett
Michael Felt wrote:

 Actually, the reason I started this thread is because I wanted to start
 making builds that used IBM's installp format for distribution rather
 than RPM - which is the format chosen for most of the AIX toolbox. Imho
 much of the difficulty the libtool devs have with the AIX platform (as
 generally the solution is playing with libtool, or loading a newer
 version) is this mixed install environment.
 
 I have been trying to develop and package without using any of the AIX
 toolbox as I do not want any dependencies on it. Instead, I shall, as
 suggested towards the beginning of this thread, make my own packages to
 fulfill dependencies and/or specify IBM installp packages (i.e.
 libraries installed into /usr/lib and maybe /opt/lib).
 
 There are a couple of other repositories out there - my site - when I
 finally get it assembled - will be yet another, but with forums behind
 it so that people can share experiences.

One of the things that may be tripping you up is apr - for legacy
reasons, apr is shipped included with httpd v2.2.x. However, for a long
time now, binary distributions have been packaging apr and apr-util as
completely separate packages, and httpd has been typically configured
during these binary builds to use these external apr and apr-util packages.

The apr and apr-util trees in httpd are therefore ignored.

What I suggest you do is to try get AIX packaging to work on apr and
apr-util first, and when that works, attempt to get httpd to work,
depending on apr and apr-util as just-another-external-package.
Completely ignore anything in the srclib directory, and assume those are
system provided.

You'll see similar scripts for producing rpms and solaris pkg files in
the apr and apr-util trees.

Regards,
Graham
--


Re: intend to roll 2.3 alpha on Wednesday

2009-11-09 Thread Graham Leggett
Paul Querna wrote:

 I intend to roll a 2.3 alpha release on Wednesday November 11th.
 
 I will bundle APR from the 1.4.x branch. (APR people should make a
 release, but this shouldn't be a blocker for our own alpha releases).
 
 I am almost 90% sure the release might fail due to various issues, but
 we need to start cleaning those issues out.

Is there a need to bundle APR at all?

Otherwise +1.

Regards,
Graham
--


Re: [PATCH] mod_ssl: improving session caching for SNI configurations

2009-11-09 Thread Kaspar Brand
Dr Stephen Henson wrote:
 Yes that looks better. There is an alternative technique if it is easier to 
 find
 a base SSL_CTX, you can retrieve the auto generated keys using
 SSL_CTX_get_tlsext_ticket_keys() and then copy to the new context as above.

The loop actually iterates over all contexts, so we could just remember
the keys of the first SSL-enabled vhost, we don't have to find the
base context. I.e., simply replace

  RAND_bytes(tlsext_tick_keys, tick_keys_len);

by

  SSL_CTX_get_tlsext_ticket_keys(sc->server->ssl_ctx,
                                 tlsext_tick_keys, tick_keys_len);

I prefer the former, however, because 1) it's shorter, 2) RAND_bytes are
cheap (aren't they? ;-) and 3) ... it would actually need another
workaround, for OpenSSL < 0.9.8l, as I realized in the meantime: I
should have compiled against 0.9.8k for my tests, not 0_9_8-stable -
because this way I missed the TLXEXT_TICKET_KEYS typo :-/ In the
attached patches (v4), I've therefore added a workaround for
SSL_CTRL_SET_TLXEXT_TICKET_KEYS.

And back to the question whether ap_md5_binary should be used or not, I
have now switched to apr_sha1 for the trunk version - maybe that's an
acceptable compromise (use SHA-1 for trunk, stay with MD5 in 2.2.x, for
backward compatibility)?

Could one of the httpd committers take over and make a decision,
therefore...? Help with getting this into the tree would be much
appreciated - thanks!

Kaspar
Index: httpd-trunk/modules/ssl/ssl_private.h
===================================================================
--- httpd-trunk/modules/ssl/ssl_private.h   (revision 833672)
+++ httpd-trunk/modules/ssl/ssl_private.h   (working copy)
@@ -405,6 +405,9 @@ typedef struct {
 #if defined(HAVE_OPENSSL_ENGINE_H) && defined(HAVE_ENGINE_INIT)
     const char *szCryptoDevice;
 #endif
+#ifndef OPENSSL_NO_TLSEXT
+    ssl_enabled_t  session_tickets_enabled;
+#endif
 
 #ifdef HAVE_OCSP_STAPLING
     const ap_socache_provider_t *stapling_cache;
@@ -585,6 +588,7 @@ const char  *ssl_cmd_SSLUserName(cmd_par
 const char  *ssl_cmd_SSLLogLevelDebugDump(cmd_parms *, void *, const char *);
 const char  *ssl_cmd_SSLRenegBufferSize(cmd_parms *cmd, void *dcfg, const char *arg);
 const char  *ssl_cmd_SSLStrictSNIVHostCheck(cmd_parms *cmd, void *dcfg, int flag);
+const char  *ssl_cmd_SSLSessionTicketExtension(cmd_parms *cmd, void *cdfg, int flag);
 
 const char  *ssl_cmd_SSLProxyEngine(cmd_parms *cmd, void *dcfg, int flag);
 const char  *ssl_cmd_SSLProxyProtocol(cmd_parms *, void *, const char *);
Index: httpd-trunk/modules/ssl/ssl_engine_init.c
===================================================================
--- httpd-trunk/modules/ssl/ssl_engine_init.c   (revision 833672)
+++ httpd-trunk/modules/ssl/ssl_engine_init.c   (working copy)
@@ -363,6 +363,15 @@ static void ssl_init_server_check(server
                      "(theoretically shouldn't happen!)");
         ssl_die();
     }
+
+    /*
+     * Session tickets (stateless resumption)
+     */
+    if ((myModConfig(s))->session_tickets_enabled == SSL_ENABLED_FALSE) {
+        ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, s,
+                     "Disabling TLS session ticket support");
+        SSL_CTX_set_options(mctx->ssl_ctx, SSL_OP_NO_TICKET);
+    }
 }
 
 #ifndef OPENSSL_NO_TLSEXT
@@ -1061,6 +1070,11 @@ void ssl_init_CheckServers(server_rec *b
 
     BOOL conflict = FALSE;
 
+#if !defined(OPENSSL_NO_TLSEXT) && OPENSSL_VERSION_NUMBER < 0x009080d0
+    unsigned char *tlsext_tick_keys = NULL;
+    long tick_keys_len;
+#endif
+
     /*
      * Give out warnings when a server has HTTPS configured
      * for the HTTP port or vice versa
@@ -1085,6 +1099,27 @@ void ssl_init_CheckServers(server_rec *b
                          ssl_util_vhostid(p, s),
                          DEFAULT_HTTP_PORT, DEFAULT_HTTPS_PORT);
         }
+
+#if !defined(OPENSSL_NO_TLSEXT) && OPENSSL_VERSION_NUMBER < 0x009080d0
+/* we have to work around a typo in OpenSSL < 0.9.8l, too */
+#define SSL_CTRL_SET_TLXEXT_TICKET_KEYS SSL_CTRL_SET_TLSEXT_TICKET_KEYS
+        /*
+         * When using OpenSSL versions 0.9.8f through 0.9.8l, configure
+         * the same ticket encryption parameters for every SSL_CTX (workaround
+         * for SNI+SessionTicket extension interoperability issue in these versions)
+         */
+        if ((sc->enabled == SSL_ENABLED_TRUE) ||
+            (sc->enabled == SSL_ENABLED_OPTIONAL)) {
+            if (!tlsext_tick_keys) {
+                tick_keys_len = SSL_CTX_set_tlsext_ticket_keys(sc->server->ssl_ctx,
+                                                               NULL, -1);
+                tlsext_tick_keys = (unsigned char *)apr_palloc(p, tick_keys_len);
+                RAND_bytes(tlsext_tick_keys, tick_keys_len);
+            }
+            SSL_CTX_set_tlsext_ticket_keys(sc->server->ssl_ctx,
+                                           tlsext_tick_keys, tick_keys_len);
+        }
+#endif
     }
 
     /*
Index: httpd-trunk/modules/ssl/ssl_engine_config.c

Preventing the use of chunked TF encoding while content-filtering

2009-11-09 Thread Christoph Gröver

Hello list,


I have written a module which filters the content.

It gets the buckets, works on them, and passes on the brigade of
buckets. OK, it works perfectly under nearly all conditions.

Except ...

When the user runs the infamous Internet Explorer and uses a
misconfigured proxy, it doesn't.

With the help of some debugging we have found out what happens: The
proxy is not HTTP/1.1-aware, which means it just changes the
HTTP-response to be HTTP/1.0, but it keeps the 
'chunked transfer encoding' of the original content.

Almost all browsers recognize that it's still chunked although the
header claims to be HTTP/1.0 and do the right thing.

MSIE doesn't. It gives the user the content and displays it with those
hex-encoded chunk lengths embedded in it.

Of course, this breaks website design and often the functionality.

So, what can we do about it?

We cannot change the proxy people are using and we cannot fix their
misconfigured proxies (or their broken proxies).

We cannot tell them to not use MSIE, either.


So we should tell the Apache Webserver to not use 'chunked transfer
encoding'. I thought this might be possible by just saying

r->chunked = FALSE;

But it didn't help.

So after some talking: Is there a way to get rid of CTFE ?

Perhaps if we collect all the chunks, put them in one chunk and set a
Content-Length header?

Or is there another trick to do this?
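
For illustration, once all chunks have been collected into one brigade (say
'bb'), the length can be announced explicitly -- a minimal sketch using
public APR/httpd calls:

    /* Sketch: measure the complete brigade and announce its length, so
     * httpd sends Content-Length instead of chunking the response. */
    apr_off_t len;
    apr_brigade_length(bb, 1 /* read unknown-length buckets */, &len);
    ap_set_content_length(r, len);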

Greetings from Münster, looking forward to your ideas.

-- 
Christoph Gröver, gro...@sitepark.com
Sitepark GmbH, Gesellschaft für Informationsmanagement, AG Münster, HRB
5017 Rothenburg 14-16, D-48143 Münster, Telefon (0251) 48265-50
Geschäftsführer: Dipl.-Phys. Martin Kurze, Dipl.-Des. Thorsten Liebold


Re: Httpd 3.0 or something else

2009-11-09 Thread Graham Leggett
Akins, Brian wrote:

 This gives us the option of prefork reliability, and event driven speed,
 as required by the admin.
 
 I think if we try to do both, we will wind up with the worst of both worlds.
 (Or is it worse??)  Blocking/buggy modules should be ran out of process
 (FactCGI/HTTP/Thrift).

That is exactly what prefork means - to run something out of process, so
that it can leak and crash at will.

I disagree we'll end up with the worst of both worlds. A lot of head
banging in the cache code has been caused because we are doing blocking
reads and blocking writes on the filter stacks.

When I say be asynchronous I mean use non-blocking reads and writes
everywhere, in both prefork, worker and event.

We know from 15+ years of experience that prefork works, and we know
from the same period of experience from others that a pure event driven
model is useful for shipping static data and not much further. But some
people have a need to just ship static data, and there is no reason why
httpd and an event MPM can't do that job well too.

Regards,
Graham
--


Re: Httpd 3.0 or something else

2009-11-09 Thread Akins, Brian
On 11/9/09 1:18 PM, Graham Leggett minf...@sharp.fm wrote:

 and we know
 from the same period of experience from others that a pure event driven
 model is useful for shipping static data and not much further.

It works really well for proxy.

-- 
Brian Akins



Re: intend to roll 2.3 alpha on Wednesday

2009-11-09 Thread Sander Temme


On Nov 9, 2009, at 10:04 AM, Graham Leggett wrote:


Paul Querna wrote:


I intend to roll a 2.3 alpha release on Wednesday November 11th.

I will bundle APR from the 1.4.x branch. (APR people should make a
release, but this shouldn't be a blocker for our own alpha releases).

I am almost 90% sure the release might fail due to various issues,  
but

we need to start cleaning those issues out.


+1


Is there a need to bundle APR at all?


Not sure that we do... we could do as Subversion does, and release a  
dependencies tarball with srclib/{apr,apr-util,pcre} from a known  
release.


S.


Otherwise +1.

Regards,
Graham
--






--
Sander Temme
scte...@apache.org
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF







Re: Httpd 3.0 or something else

2009-11-09 Thread Graham Leggett
Akins, Brian wrote:

 and we know
 from the same period of experience from others that a pure event driven
 model is useful for shipping static data and not much further.
 
 It works really well for proxy.

Aka static data :)

The key advantage to doing both prefork and event behaviour in the same
server is that operationally, it is one beast to feed and care for. You
might deploy them differently in different environments, but it is one
set of skills to manage.

Regards,
Graham
--


Re: Httpd 3.0 or something else

2009-11-09 Thread Akins, Brian
On 11/9/09 1:36 PM, Graham Leggett minf...@sharp.fm wrote:

 It works really well for proxy.
 
 Aka static data :)

Nah, we proxy to fastcgi php stuff, http java stuff, some horrid HTTP perl
stuff, etc (Full disclosure, I wrote the horrid perl stuff.)


-- 
Brian Akins



Re: intend to roll 2.3 alpha on Wednesday

2009-11-09 Thread Lars Eilebrecht
Paul Querna wrote:

 I intend to roll a 2.3 alpha release on Wednesday November 11th.

+1

ciao...
-- 
Lars Eilebrecht
l...@eilebrecht.net


Re: intend to roll 2.3 alpha on Wednesday

2009-11-09 Thread Paul Querna
On Mon, Nov 9, 2009 at 10:23 AM, Sander Temme scte...@apache.org wrote:

 On Nov 9, 2009, at 10:04 AM, Graham Leggett wrote:

 Paul Querna wrote:

 I intend to roll a 2.3 alpha release on Wednesday November 11th.

 I will bundle APR from the 1.4.x branch. (APR people should make a
 release, but this shouldn't be a blocker for our own alpha releases).

 I am almost 90% sure the release might fail due to various issues, but
 we need to start cleaning those issues out.

 +1

 Is there a need to bundle APR at all?

 Not sure that we do... we could do as Subversion does, and release a
 dependencies tarball with srclib/{apr,apr-util,pcre} from a known release.



Yes, we already have a separate -deps tarball. I hacked that into the
last alpha release :-)


Re: Httpd 3.0 or something else

2009-11-09 Thread Akins, Brian
On 11/9/09 1:40 PM, Brian Akins brian.ak...@turner.com wrote:

 On 11/9/09 1:36 PM, Graham Leggett minf...@sharp.fm wrote:
 
 It works really well for proxy.
 
 Aka static data :)
 
 Nah, we proxy to fastcgi php stuff, http java stuff, some horrid HTTP perl
 stuff, etc (Full disclosure, I wrote the horrid perl stuff.)

Replying to my own post:

What we discussed, some on list and some at ApacheCon, was having a really
good and simple process manager.  mod_fcgid is too much work to configure
for mere mortals.  If we just had something like:

AssociateExternal .php /path/to/my/php-cgi

And it did the sensible thing (whether fcgi, http, wscgi, etc.) then all the
config is in one place.  Obviously, we could have some advanced process
management directives.

If your app needed some special config stuff, we could easily pass it across
somehow.

-- 
Brian Akins



Re: Httpd 3.0 or something else

2009-11-09 Thread Graham Leggett
Akins, Brian wrote:

 It works really well for proxy.
 Aka static data :)
 
 Nah, we proxy to fastcgi php stuff, http java stuff, some horrid HTTP perl
 stuff, etc (Full disclosure, I wrote the horrid perl stuff.)

Doesn't matter, once httpd proxy gets hold of it, it's just shifting
static bits.

Something I want to teach httpd to do is buffer up data for output, and
then forget about the output to focus on releasing the backend resources
ASAP, ready for the next request when it (eventually) comes. The fact
that network writes block makes this painful to achieve.

Proxy had an optimisation that released proxied backend resources when
it detected EOS from the backend but before attempting to pass it to the
frontend, but someone refactored that away at some point. It would be
good if such an optimisation was available server wide.

I want to be able to write something to the filter stack, and get an
EWOULDBLOCK (or similar) back if it isn't ready. I could then make
intelligent decisions based on this. For example, if I were a cache, I
would carry on reading from the backend and writing the data to the
cache, while the frontend was saying "not now, slow browser ahead". I
could have long since finished caching and closed the backend connection
and freed the resources, before the frontend returned "cool, ready for
you now", at which point I answer "no worries, have the cached content I
prepared earlier".

Regards,
Graham
--
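
At the plain-APR level, the non-blocking write behaviour described above can
be sketched like this (illustrative only; 'sock', 'buf' and 'len' are assumed
to exist, and the filter stack offered no such API at the time):

    /* Sketch: a non-blocking socket write with APR. A zero timeout puts
     * the socket into non-blocking mode; EAGAIN then means the frontend
     * is not ready to accept more data. */
    apr_socket_timeout_set(sock, 0);
    apr_size_t n = len;
    apr_status_t rv = apr_socket_send(sock, buf, &n);
    if (APR_STATUS_IS_EAGAIN(rv)) {
        /* would block: keep servicing the backend or cache meanwhile,
         * and poll for writability before retrying the remainder */
    }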


Re: Httpd 3.0 or something else

2009-11-09 Thread Graham Leggett
Akins, Brian wrote:

 FWIW, nginx buffers backend stuff to a file, then sendfiles it out -  I
 think this is what perlbal does as well.  Same can be done outside apache
 using X-sendfile like methods.  Seems like we could move this inside
 apache fairly easy.  May can do it with a filter.  I tried once and got it
 to filter most backend stuff to a temp file, but it tended to miss and
 block.  That was a while ago, but I haven't learned anymore about the
 filters since then to think it would work any better.
 
 Maybe a mod_buffer that goes to a file?

mod_disk_cache can be made to do this quite trivially (it's on the list
of things to do When I Have Time(TM)).

In theory, a mod_disk_buffer could do this quite easily, on condition
upstream writes didn't block.

Regards,
Graham
--



Re: Httpd 3.0 or something else

2009-11-09 Thread Akins, Brian
On 11/9/09 2:06 PM, Greg Stein gst...@gmail.com wrote:

 These issues are already solved by moving to a Serf core. It is fully
 asynchronous.

Okay that's one convert, any others? ;)

That's what Paul and I discussed a lot last week.

My ideal httpd 3.0 is:

Libev + serf + lua

-- 
Brian Akins



Re: Httpd 3.0 or something else

2009-11-09 Thread Paul Querna
On Mon, Nov 9, 2009 at 11:06 AM, Greg Stein gst...@gmail.com wrote:
 On Mon, Nov 9, 2009 at 13:59, Graham Leggett minf...@sharp.fm wrote:
 Akins, Brian wrote:

 It works really well for proxy.
 Aka static data :)

 Nah, we proxy to fastcgi php stuff, http java stuff, some horrid HTTP perl
 stuff, etc (Full disclosure, I wrote the horrid perl stuff.)

 Doesn't matter, once httpd proxy gets hold of it, it's just shifting
 static bits.

 Something I want to teach httpd to do is buffer up data for output, and
 then forget about the output to focus on releasing the backend resources
 ASAP, ready for the next request when it (eventually) comes. The fact
 that network writes block makes this painful to achieve.

 Proxy had an optimisation that released proxied backend resources when
 it detected EOS from the backend but before attempting to pass it to the
 frontend, but someone refactored that away at some point. It would be
 good if such an optimisation was available server wide.

 I want to be able to write something to the filter stack, and get an
 EWOULDBLOCK (or similar) back if it isn't ready. I could then make
 intelligent decisions based on this. For example, if I were a cache, I
 would carry on reading from the backend and writing the data to the
 cache, while the frontend was saying not now, slow browser ahead. I
 could have long since finished caching and closed the backend connection
 and freed the resources, before the frontend returned cool, ready for
 you now, at which point I answer no worries, have the cached content I
 prepared earlier.

 These issues are already solved by moving to a Serf core. It is fully
 asynchronous.

 Backend handlers will no longer push bits towards the network. The
 core will pull them from a bucket. *Which* bucket is defined by a
 {URL,Headers}-Bucket mapping system.

I was talking to Aaron about this at ApacheCon.

I agree in general, a serf-based core does give us a good start.

But Serf Buckets and the event loop definitely do need some more work
-- simple things, like if the backend bucket is a socket, how do you
tell the event loop that a "would block" rvalue maps to a file
descriptor talking to an origin server.  You don't want to just keep
looping over it until it returns data; you want to poll on the origin
socket, and only try to read when data is available.

I am also concerned about the patterns of sendfile() in the current
serf bucket architecture, and making a whole pipeline do sendfile
correctly seems quite difficult.

-Paul


Re: svn commit: r834049 - in /httpd/httpd/trunk: CHANGES modules/dav/fs/lock.c modules/dav/fs/repos.c

2009-11-09 Thread Stefan Fritsch
On Monday 09 November 2009, Greg Stein wrote:
  Why did you go with a format change of the DAVLockDB? It is
  quite possible that people will miss that step during an
  upgrade. You could just leave DAV_TYPE_FNAME in there.
 
  That wouldn't help because it would still break with
  DAV_TYPE_INODE locks existing in the DAVLockDB. Or am I missing
  something?
 
 Heh. Yeah. I realized that right after I hit the Send button :-P
 
 Tho: mod_dav could error if it sees an unrecognized type, rather
  than simply misinterpreting the data and silently unlocking all
  nodes.

What do you want to do exactly? Check the db at httpd startup and 
abort if it contains old format entries? I don't think it is possible 
to convert the entries, at least not without traversing the whole dav 
tree.

In any case, I wonder if this is worth the effort. It definitely isn't
for 2.2 -> 2.4 upgrades. And if we backport the changes to 2.2.x, I
would still primarily see it as the responsibility of the distributors to
warn the user/remove the old db in postinst/etc.

And for those people who compile it themselves and run a system
critical enough that they cannot afford to lose the locks during an
httpd upgrade: those people should really read the changelog.

Cheers,
Stefan


Re: ssl related test failures

2009-11-09 Thread Ruediger Pluem


On 11/09/2009 08:34 PM, Stefan Fritsch wrote:
 On Monday 09 November 2009, Jeff Trawick wrote:
  and see how they fail?  Like:

 t/TEST ... -verbose t/ssl/basicauth.t

 should get you some more insight.  Also, which platform?
 This is Debian unstable with the Debian openssl. It seems to
 complain about an expired CRL.
 this is a test framework tree you've had for a while?  the certs
  will expire after a while (30 days perhaps?)

 does t/TEST -clean force the certs to be generated next time you
  run the tests?  (you can see the openssl output scroll by)
 
 Thanks, that was the right hint. With a new svn checkout of the 
 framework, all tests pass and t/TEST -clean or make clean cleans 
 the certs.
 
 For some reason, the cleaning of the certs does not work with the old 
 tree. I don't think I am interested enough in the problem right now to 
 debug it, though.
 

I noticed as well that from time to time for whatever reason t/TEST -clean
doesn't clean the certificates. But as a fresh checkout fixes this I haven't
had the energy so far to look deeply into this.

Regards

Rüdiger



Re: Httpd 3.0 or something else

2009-11-09 Thread Greg Stein
On Mon, Nov 9, 2009 at 14:21, Paul Querna p...@querna.org wrote:
...
 I agree in general, a serf-based core does give us a good start.

 But Serf Buckets and the event loop definitely do need some more work
 -- simple things, like if the backend bucket is a socket, how do you
 tell the event loop, that a would block rvalue maps to a file
 descriptor talking to an origin server.   You don't want to just keep
 looping over it until it returns data, you want to poll on the origin
 socket, and only try to read when data is available.

The goal would be that the handler's (aka content generator, aka serf
bucket) socket would be processed in the same select() as the client
connections. When the bucket has no more data from the backend, then
it returns "done for now". Eventually, all network reads/writes
finalize and control returns to the core loop. If data comes in on the
backend, then the core loop wakes and that bucket can read/return data.

There are two caveats that I can think of, right off hand:

1) Each client connection is associated with one bucket generating the
response. Ideally, you would not bother to read that bucket
unless/until the client connection is ready for writing. But that
could create a deadlock internal to the bucket -- *some* data may need
to be consumed from the backend, processed, and returned to the
backend to unstick the entire flow (think SSL). Even though nothing
pops out the top of the bucket, internal processing may need to
happen.

2) If you have 10,000 client connections, and some number of sockets
in the system ready for read/write... how do you quickly determine
*which* buckets to poll to get those sockets processed? You don't want
to poll idle connections/buckets if only one is ready for
read/write. (note: there are optimizations around this; if the bucket
wants to return data, but wasn't asked to, then next-time-around it
has the same data; no need to drill way down to the source bucket to
attempt to read network data; tho this kinda sets up a busy loop until
that bucket's client is ready for writing)

Are either of these the considerations you were thinking of?

I can certainly see some kind of system to associate buckets and the
sockets that affect their behavior. Though that could get pretty crazy
since it doesn't have to be a 1:1 mapping. One backend socket might
actually service multiple buckets, and vice-versa.

 I am also concerned about the patterns of sendfile() in the current
 serf bucket archittecture, and making a whole pipeline do sendfile
 correctly seems quite difficult.

Well... it generally *is* quite difficult in the presence of SSL,
gzip, and chunking. Invariably, content is mangled before hitting the
network, so sendfile() rarely gets a chance to play ball.

But if you really are just dealing with plain files (maybe prezipped),
then the read_for_sendfile() should be workable. Most buckets can't do
squat with it, and should just use a default function. But the file
bucket can return a proper handle.
(and it is entirely possible/reasonable that the signature should be
adjusted to simplify the process)

Cheers,
-g


Re: intend to roll 2.3 alpha on Wednesday

2009-11-09 Thread Nick Kew

Graham Leggett wrote:


Is there a need to bundle APR at all?


Yep, let's draw a line under that.  APR is a dependency,
not a component.


Otherwise +1.


MeToo.

--
Nick Kew


Re: svn commit: r834049 - in /httpd/httpd/trunk: CHANGES modules/dav/fs/lock.c modules/dav/fs/repos.c

2009-11-09 Thread Greg Stein
On Mon, Nov 9, 2009 at 15:21, Greg Stein gst...@gmail.com wrote:
 On Mon, Nov 9, 2009 at 14:46, Stefan Fritsch s...@sfritsch.de wrote:
 On Monday 09 November 2009, Greg Stein wrote:
  Why did you go with a format change of the DAVLockDB? It is
  quite possible that people will miss that step during an
  upgrade. You could just leave DAV_TYPE_FNAME in there.
 
  That wouldn't help because it would still break with
  DAV_TYPE_INODE locks existing in the DAVLockDB. Or am I missing
  something?

 Heh. Yeah. I realized that right after I hit the Send button :-P

 Tho: mod_dav could error if it sees an unrecognized type, rather
  than simply misinterpreting the data and silently unlocking all
  nodes.

 What do you want to do exactly? Check the db at httpd startup and
 abort if it contains old format entries? I don't think it is possible
 to convert the entries, at least not without traversing the whole dav
 tree.

 Oh dear, no. I was just thinking of dropping a log like, "cannot open
 DAVLockDB at /some/path; it has old-style entries."

 That would effectively shut off DAV and leave a reason for why it
 happened. Quite enough for a sysadmin to figure out how to fix it.

To clarify this comment I made:

 If
 the sysadmin missed the erase your db step, then there will be a
 silent failure (locks will be missed).

I mean: using the *current* code, as checked-in, there will be a
silent failure to recognize old/outstanding locks.

I think it would be best for us to at least say "woah. you didn't read
the instructions. go RTFM." The admin can then decide to blast the
lockdb, or avoid the upgrade. Their choice.

Cheers,
-g


 In any case, I wonder if this is worth the effort. It definitely isn't

 Not suggesting any more work than to revert the removal of the TYPE byte
 from the lock record. Then add a simple check/error if the type is not
 FNAME.

 for 2.2 -> 2.4 upgrades. And if we backport the changes to 2.2.x, I
 would still primarily see it as responsibility of the distributors to
 warn the user/remove the old db in postinst/etc.

 And for those people who compile it themselves and run a system
 critical enough that they cannot afford to lose the locks during an
 httpd upgrade, those people should really read the changelog.

 Sure, but I'd rather make it a bit easier for them. We don't simply
 change directives on people without trying to do something intelligent
 for when we see the old format. We never just say "well, we'll
 silently misinterpret that. them's the breaks."

 Cheers,
 -g



Re: ssl related test failures

2009-11-09 Thread Sander Temme


On Nov 9, 2009, at 11:49 AM, Ruediger Pluem wrote:


Thanks, that was the right hint. With a new svn checkout of the
framework, all tests pass and t/TEST -clean or make clean cleans
the certs.

For some reason, the cleaning of the certs does not work with the old
tree. I don't think I am interested enough in the problem right now to
debug it, though.



I noticed as well that from time to time for whatever reason t/TEST -clean
doesn't clean the certificates. But as a fresh checkout fixes this I haven't
had the energy so far to look deeply into this.


Same here.  perl-framework insists on reconfiguring, recompiling and
re-keying every time I run it on my Mac.  It reconfigures when I don't
want it to, and I can't make it reconfigure when I do want it to.


I don't have the perl-fu, time or energy to figure this out.

S.

--
Sander Temme
scte...@apache.org
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF







Re: Httpd 3.0 or something else

2009-11-09 Thread Graham Leggett
Greg Stein wrote:

 These issues are already solved by moving to a Serf core. It is fully
 asynchronous.
 
 Backend handlers will no longer push bits towards the network. The
 core will pull them from a bucket. *Which* bucket is defined by a
 {URL,Headers}-Bucket mapping system.

How is pull different from push[1]?

Pull, by definition, is blocking behaviour.

You will only run as often as you are pulled, and never more often. And
if the pull is controlled by how quickly the client is accepting the
data, which is typically orders of magnitude slower than the backend can
push, you have no opportunity to try speed up the server in any way.

Push however, gives you a choice: the push either worked (yay! go
browser!), or it didn't (sensible alternative behaviour, like cache it
for later in a connection filter). Push happens as fast as the backend, not
as slow as the frontend.

So far I'm not convinced it is a step forward, will have to think about
it more.

[1] Apart from the obvious.

Regards,
Graham
--


Re: Httpd 3.0 or something else

2009-11-09 Thread Greg Stein
On Mon, Nov 9, 2009 at 16:19, Graham Leggett minf...@sharp.fm wrote:
 Greg Stein wrote:
 These issues are already solved by moving to a Serf core. It is fully
 asynchronous.

 Backend handlers will no longer push bits towards the network. The
 core will pull them from a bucket. *Which* bucket is defined by a
 {URL,Headers}-Bucket mapping system.

 How is pull different from push[1]?

The network loop pulls data from the content-generator.

Apache 1.x and 2.x had a handler that pushed data at the network.
There is no loop, of course, since each worker had direct control of
the socket to push data into.

 Pull, by definition, is blocking behaviour.

You may want to check your definitions.

When you read from a serf bucket, it will return however much you ask
for, or as much as it has without blocking. When it gives you that
data, it can say "I have more", "I'm done", or "This is what I had",
without blocking.

 You will only run as often as you are pulled, and never more often. And
 if the pull is controlled by how quickly the client is accepting the
 data, which is typically orders of magnitude slower than the backend can
 push, you have no opportunity to try speed up the server in any way.

Eh? Are you kidding me?

One single network thread can manage N client connections. As each
becomes writable, the loop reads (pulls) from the bucket and jams it
into the client socket. If you're really fancy, then you know what the
window is, and you ask the bucket for that much data.
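
Sketched in code (serf_bucket_read() and SERF_READ_ALL_AVAIL are real
serf API; the conn bookkeeping and finish_response() are invented for
the example):

    /* Inside the network loop, when poll() reports a client socket
     * writable: pull what the response bucket has without blocking. */
    const char *data;
    apr_size_t len;
    apr_status_t rv = serf_bucket_read(conn->resp_bucket,
                                       SERF_READ_ALL_AVAIL, &data, &len);
    if (len > 0) {
        apr_socket_send(conn->sock, data, &len);  /* may write less */
    }
    if (rv == APR_EOF) {
        finish_response(conn);      /* response fully delivered */
    }
    else if (rv == APR_EAGAIN) {
        /* Bucket had nothing ready; poll until the backend wakes us. */
    }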

 Push however, gives you a choice: the push either worked (yay! go
 browser!), or it didn't (sensible alternative behaviour, like cache it
 for later in a connection filter). Push happens as fast as the backend, not
 as slow as the frontend.

Push means that you have a worker per connection, pushing the response
onto the network. I really would like to see us get away from a worker
per connection.

Once a worker thread determines which bucket to create/build, then it
passes it along to the network thread, and returns for more work. The
network thread can then manage N connections with their associated
response buckets.

If one network thread cannot read/generate the content fast enough,
then you use multiple threads to keep the connections full.

Then you want to add in a bit of control around reading of requests in
order to manage the backlog of responses (and any potential memory
buildup that entails). If the network thread is consuming 100M and 20k
sockets, you may want to stop accepting connections or accept but read
them slowly until the pressure eases. etc...

Cheers,
-g


mod_fcgid: different instances of the same program

2009-11-09 Thread Danny Sadinoff
Here are two details of mod_fcgid process management that I've just
learned after a long debug session and squinting at the mod_fcgid
code.

1) symlinks & you.
It seems that mod_fcgid identifies fcgid programs by inode and device,
not by filename.  So two fcgid programs invoked by the webserver
along different paths will be counted as the same if the two paths are
hardlinks or softlinks to each other.

2) Virtual hosts
The above item holds true even across virtual hosts.   So while
it's possible to adjust the FcgidInitialEnv items on a per-vhost
basis, this is a recipe for disaster if two vhosts point at the same
fcgi executable, because the resulting processes with potentially
different Environments will be inserted into the same pool.  Once that
occurs, we may expect that a server spawned with config defined in
vhost A will be parcelled out to vhost B.


The Apache httpd 2.3 docs do not address the symlink issue at all, and
the virtual host issue only indirectly.
http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html


I'd appreciate it if someone could confirm or deny the above.  If I'm
right, can we add it to the docs?  None of it seems obvious to me.
Apologies in advance if this is the sort of thing that belongs on the
dev list.  I'm happy to throw together a doc patch.

Thanks in advance

P.S. having a lot of trouble getting this message posted to the list.  Not
sure what's up with that.

--
Danny Sadinoff
da...@sadinoff.com


Re: svn commit: r834013 - /httpd/httpd/trunk/modules/loggers/mod_log_config.c

2009-11-09 Thread Jeff Trawick
On Mon, Nov 9, 2009 at 5:43 AM,  s...@apache.org wrote:
 Author: sf
 Date: Mon Nov  9 10:43:16 2009
 New Revision: 834013

 URL: http://svn.apache.org/viewvc?rev=834013&view=rev
 Log:
 Also remove trailing whitespace in the value

 Modified:
    httpd/httpd/trunk/modules/loggers/mod_log_config.c

 Modified: httpd/httpd/trunk/modules/loggers/mod_log_config.c
 URL: 
 http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/loggers/mod_log_config.c?rev=834013&r1=834012&r2=834013&view=diff
 ==============================================================================
 --- httpd/httpd/trunk/modules/loggers/mod_log_config.c (original)
 +++ httpd/httpd/trunk/modules/loggers/mod_log_config.c Mon Nov  9 10:43:16 
 2009
 @@ -502,7 +502,7 @@
      * This supports Netscape version 0 cookies while being tolerant to
      * some properties of RFC2109/2965 version 1 cookies:
      * - case-insensitive match of cookie names
 -     * - white space around the '='
 +     * - white space between the tokens
      * It does not support the following version 1 features:
      * - quoted strings as cookie values
      * - commas to separate cookies
 @@ -518,7 +518,14 @@
             apr_collapse_spaces(name, name);

             if (!strcasecmp(name, a) && (value = apr_strtok(NULL, "=", &last2))) {
  -                value += strspn(value, " \t");  /* Move past WS */
  +                char *last;
  +                value += strspn(value, " \t");  /* Move past leading WS */
  +                last = value + strlen(value);

doesn't this expression set last to point to the trailing '\0' instead
of the last character?

  +                while (last >= value && apr_isspace(*last)) {

such that this loop is never entered?
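
For reference, the trim presumably intended there would step back from
the terminating '\0' before testing, something like this (a sketch, not
the committed fix):

    char *last = value + strlen(value);
    while (last > value && apr_isspace(last[-1])) {
        *--last = '\0';             /* chop trailing WS in place */
    }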


Backport proposal for CVE-2009-3555

2009-11-09 Thread Rainer Jung
I did a first try on backporting the CVE-2009-3555 patch to 2.0:

http://people.apache.org/~rjung/patches/cve-2009-3555_httpd_2_0_x.patch

I hadn't yet time for intensive testing, but first tests looked OK.
I noticed I couldn't log the SSL_SESSION_ID, but maybe that was a
Windows thing. Hadn't yet time and access to test on Unix resp. test on
Windows without patch.

I'll be unfortunately offline for about 10 hours not responding to comments.

Regards,

Rainer



Re: mod_fcgid: different instances of the same program

2009-11-09 Thread Jeff Trawick
On Mon, Nov 9, 2009 at 5:16 PM, Danny Sadinoff danny.sadin...@gmail.com wrote:
 Here are two details of mod_fcgid process management that I've just
 learned after a long debug session and squinting at the mod_fcgid
 code.

 1) symlinks & you.
 It seems that mod_fcgid identifies fcgid programs by inode and device,
 not by filename.  So two fcgid programs invoked by the webserver
 along different paths will be counted as the same if the two paths are
 hardlinks or softlinks to each other.

Mostly yes.

The path to the file doesn't matter; it is the file itself that matters.

There are different requirements for how programs are distinguished.
One possibility is changing from stat() to lstat() (i.e., distinguish
symlinks but not hard links).  Another possibility is looking only at
the basename.  This was discussed in this thread:
http://www.mail-archive.com/dev@httpd.apache.org/msg45516.html
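
The identity being discussed can be seen with plain APR calls
(apr_stat(), apr_lstat() and APR_FINFO_IDENT are real; the key struct
is illustrative, not mod_fcgid's actual layout):

    apr_finfo_t finfo;
    apr_stat(&finfo, path, APR_FINFO_IDENT, pool); /* device + inode */
    key.deviceid = finfo.device;
    key.inode    = finfo.inode;
    /* apr_lstat() would report on a symlink itself, distinguishing
     * links that apr_stat() collapses into one identity. */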

What are you trying to accomplish which is hindered by the current
implementation?


 2) Virtual hosts
 The above item holds true even across virtual hosts.   So while
 it's possible to adjust the FcgidInitialEnv items on a per-vhost
 basis, this is a recipe for disaster if two vhosts point at the same
 fcgi executable, because the resulting processes with potentially
 different Environments will be inserted into the same pool.  Once that
 occurs, we may expect that a server spawned with config defined in
 vhost A will be parcelled out to vhost B.

Where does this occur?  Entries in the process table are distinguished
by virtual host.  (I think the implementation of this check is broken,
in that it requires that ServerName is set in the virtual hosts.  Are
you using a simple test config that doesn't have ServerName set?)



 The Apache httpd 2.3 docs do not address the symlink issue at all, and
 the virtual host issue only indirectly.
 http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html

FWIW, this isn't part of Apache httpd 2.3.  mod_fcgid is released
separately from the web server and only by coincidence has the same
version number (2.3.x) as development levels of the web server.



 I'd appreciate it if someone could confirm or deny the above.  If I'm
 right, can we add it to the docs?  None of it seems obvious to me.
 Apologies in advance if this is the sort of thing that belongs on the
 dev list.  I'm happy to throw together a doc patch.

Let's see what needs to be fixed in the code first ;)


Re: mod_fcgid: different instances of the same program

2009-11-09 Thread Graham Dumpleton
2009/11/10 Jeff Trawick traw...@gmail.com:
 On Mon, Nov 9, 2009 at 5:16 PM, Danny Sadinoff danny.sadin...@gmail.com 
 wrote:
 Here are two details of mod_fcgid process management that I've just
 learned after a long debug session and squinting at the mod_fcgid
 code.

 1) symlinks & you.
 It seems that mod_fcgid identifies fcgid programs by inode and device,
 not by filename.  So two fcgid programs invoked by the webserver
 along different paths will be counted as the same if the two paths are
 hardlinks or softlinks to each other.

 Mostly yes.

 The path to the file doesn't matter; it is the file itself that matters.

 There are different requirements for how programs are distinguished.
 One possibility is changing from stat() to lstat() (i.e., distinguish
 symlinks but not hard links).  Another possibility is looking only at
 the basename.  This was discussed in this thread:
 http://www.mail-archive.com/dev@httpd.apache.org/msg45516.html

FWIW, in the mod_wsgi module for Python applications, by default
applications are distinguished based on mount point and host/port they
are running under. That is, combination of SERVER_HOST, SERVER_PORT
and SCRIPT_NAME values.

Well, actually it is a little bit more complicated than that because
ports 80/443 are treated effectively the same given usage would
normally be paired.

What it means is that you can have one script file mounted multiple times
and have each mount treated as a separate instance of the application.

In mod_wsgi the separation by default is done based on Python sub
interpreters within a process rather than actual processes. This is
because mod_wsgi supports running in embedded mode, ie., within Apache
server child processes, or as distinct daemon processes like with
FASTCGI.

There is the flexibility in mod_wsgi, though, to override this and
manually specify which named application group (Python sub
interpreter within a process) is used, or whether embedded or daemon mode
is used for processes and, if daemon mode, which named daemon process group.

Anyway, thought the strategy of using SERVER_HOST, SERVER_PORT and
SCRIPT_NAME values may be of interest as an alternative to
distinguishing based on the path to the script.
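
As a sketch of that strategy in C terms (apr_psprintf() and the
request_rec fields are real httpd API; script_name is assumed to be
derived elsewhere, e.g. from the handler's mount point):

    /* Key an application instance by where it is mounted, not by
     * which file backs it. */
    const char *app_key = apr_psprintf(r->pool, "%s:%u|%s",
                                       r->server->server_hostname,
                                       (unsigned)r->server->port,
                                       script_name);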

Graham


Re: mod_fcgid: different instances of the same program

2009-11-09 Thread Danny Sadinoff
On Tue, Nov 10, 2009 at 12:53 AM, Jeff Trawick traw...@gmail.com wrote:

 On Mon, Nov 9, 2009 at 5:16 PM, Danny Sadinoff danny.sadin...@gmail.com
wrote:
 ...
  1) symlinks & you.
  It seems that mod_fcgid identifies fcgid programs by inode and device,
  not by filename.  So two fcgid programs invoked by the webserver
  along different paths will be counted as the same if the two paths are
  hardlinks or softlinks to each other.

 Mostly yes.

 The path to the file doesn't matter; it is the file itself that matters.

 There are different requirements for how programs are distinguished.
 One possibility is changing from stat() to lstat() (i.e., distinguish
 symlinks but not hard links).  Another possibility is looking only at
 the basename.  This was discussed in this thread:
 http://www.mail-archive.com/dev@httpd.apache.org/msg45516.html

 What are you trying to accomplish which is hindered by the current
 implementation?

My goal is a fairly simple one-application per vhost setup.  But, I'm seeing
application pools shared amongst virtual hosts with distinct ServerName
declarations, all of which refer to the same file path (and inode) for the
fcgi executable.  From what you're telling me, this is buggy behavior.  I'll
try to boil my config down further and come up with a good testcase.

Whether my config is wrong or the implementation is buggy, I would think
that the mere existence of the dev thread trying to nail down the semantics
ought to be argument enough for documenting the file-path-vs-inode behavior.


 
  2) Virtual hosts
  The above item holds true even across virtual hosts.   So while
  it's possible to adjust the FcgidInitialEnv items on a per-vhost
  basis, this is a recipe for disaster if two vhosts point at the same
  fcgi executable, because the resulting processes with potentially
  different Environments will be inserted into the same pool.  Once that
  occurs, we may expect that a server spawned with config defined in
  vhost A will be parcelled out to vhost B.

 Where does this occur?  Entries in the process table are distinguished
 by virtual host.  (I think the implementation of this check is broken,
 in that it requires that ServerName is set in the virtual hosts.  Are
 you using a simple test config that doesn't have ServerName set?)

My case is not yet simple.  I'll get back to you.


  The Apache httpd 2.3 docs do not address the symlink issue at all, and
  the virtual host issue only indirectly.
  http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html

 FWIW, this isn't part of Apache httpd 2.3.  mod_fcgid is released
 separately from the web server and only by coincidence has the same
 version number (2.3.x) as development levels of the web server.

Well, that's another doc bug, since the page I link to has a big
header that says:
Apache HTTP Server Version 2.3
:)

-danny


Re: Httpd 3.0 or something else

2009-11-09 Thread Graham Leggett
Greg Stein wrote:

 How is pull different from push[1]?
 
 The network loop pulls data from the content-generator.
 
 Apache 1.x and 2.x had a handler that pushed data at the network.
 There is no loop, of course, since each worker had direct control of
 the socket to push data into.

As I said in [1], apart from the obvious ;)

 Pull, by definition, is blocking behaviour.
 
 You may want to check your definitions.
 
 When you read from a serf bucket, it will return however much you ask
 for, or as much as it has without blocking. When it gives you that
 data, it can say I have more, I'm done, or This is what I had
 without blocking.

Who is "you"?

Up till now, my understanding is that "you" is the core, and therefore
not under control of a module writer.

Let me put it another way. Imagine I am a cache module. I want to read
as much as possible as fast as possible from a backend, and I want to
write this data to two places simultaneously: the cache, and the
downstream network. I know the cache is always writable, but the
downstream network I am not sure of, I only want to write to the
downstream network when the downstream network is ready for me.

How would I do this in a serf model?
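
One conceivable answer, sketched below: wrap the backend bucket in a
"tee" bucket that copies whatever flows through it into the cache.
Only serf_bucket_read() is real serf API here; tee_ctx and
cache_write() are invented. Note this alone doesn't solve the
read-ahead problem -- the tee still only sees data when the core pulls:

    static apr_status_t tee_read(serf_bucket_t *bucket,
                                 apr_size_t requested,
                                 const char **data, apr_size_t *len)
    {
        tee_ctx *ctx = bucket->data;
        apr_status_t rv = serf_bucket_read(ctx->wrapped, requested,
                                           data, len);
        if (*len > 0) {
            cache_write(ctx->cache, *data, *len); /* assumed non-blocking */
        }
        return rv;
    }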

 You will only run as often as you are pulled, and never more often. And
 if the pull is controlled by how quickly the client is accepting the
 data, which is typically orders of magnitude slower than the backend can
 push, you have no opportunity to try speed up the server in any way.
 
 Eh? Are you kidding me?
 
 One single network thread can manage N client connections. As each
 becomes writable, the loop reads (pulls) from the bucket and jams it
 into the client socket. If you're really fancy, then you know what the
 window is, and you ask the bucket for that much data.

That I understand, but it makes no difference as I see it - your loop
only reads from the bucket and jams it into the client socket if the
client socket is good and ready to accept data.

If the client socket isn't good and ready, the bucket doesn't get pulled
from, and resources used by the bucket are left in limbo until the
client is done. If the bucket wants to do something clever, like cache,
or release resources early, it can't - because as soon as it returns the
data it has to wait for the client socket to be good and ready all over
again. The server runs as slow as the browser, which in computing terms
is glacially slow.

 Push however, gives you a choice: the push either worked (yay! go
 browser!), or it didn't (sensible alternative behaviour, like cache it
 for later in a connection filter). Push happens as fast the backend, not
 as slow as the frontend.
 
 Push means that you have a worker per connection, pushing the response
 onto the network. I really would like to see us get away from a worker
 per connection.

Only if you write it that way (which we have done till now).

There is no reason why one event loop can't handle many requests at the
same time.

One event loop handling many requests each == event MPM (speed and
resource efficient, but we'd better be bug free).
Many event loops handling many requests each == worker MPM (compromise).
Many event loops handling one request each == prefork (reliable old
workhorse).

In theory if we turn the content handler into a filter and bootstrap the
filter stack with a bucket of some kind, this may work.

In fact, using both "push" and "pull" at the same time might also make
some sense - your event loop creates a bucket from which data is
pulled (serf model), which is in turn pulled by a filter stack
(existing filter stack model) and pushed upstream.

Functions that work better as a pull (proxy and friends) can be
pulled, functions that work better as a push (like caching) can be
filters.

Regards,
Graham
--


Re: mod_fcgid: different instances of the same program

2009-11-09 Thread Jeff Trawick
On Mon, Nov 9, 2009 at 6:47 PM, Danny Sadinoff da...@sadinoff.com wrote:
 On Tue, Nov 10, 2009 at 12:53 AM, Jeff Trawick traw...@gmail.com wrote:

 On Mon, Nov 9, 2009 at 5:16 PM, Danny Sadinoff danny.sadin...@gmail.com
 wrote:
 ...
  1) symlinks & you.
  It seems that mod_fcgid identifies fcgid programs by inode and device,
  not by filename.  So two fcgid programs invoked by the webserver
  along different paths will be counted as the same if the two paths are
  hardlinks or softlinks to each other.

 Mostly yes.

 The path to the file doesn't matter; it is the file itself that matters.

 There are different requirements for how programs are distinguished.
 One possibility is changing from stat() to lstat() (i.e., distinguish
 symlinks but not hard links).  Another possibility is looking only at
 the basename.  This was discussed in this thread:
 http://www.mail-archive.com/dev@httpd.apache.org/msg45516.html

 What are you trying to accomplish which is hindered by the current
 implementation?

 My goal is a fairly simple one-application per vhost setup.  But, I'm seeing
 application pools shared amongst virtual hosts with distinct ServerName
 declarations, all of which refer to the same file path (and inode) for the
 fcgi executable.  From what you're telling me, this is buggy behavior.  I'll
 try to boil my config down further and come up with a good testcase.
 Whether my config is wrong or the implementation is buggy, I would think
 that the mere existence of the dev thread trying to nail down the semantics
 ought to be argument enough for
 documenting the file-path-vs-inode behavior.

sure

does this cover it for you?

http://svn.apache.org/viewvc/httpd/mod_fcgid/trunk/docs/manual/mod/mod_fcgid.xml?r1=823178&r2=834283&diff_format=h

 FWIW, this isn't part of Apache httpd 2.3.  mod_fcgid is released
 separately from the web server and only by coincidence has the same
 version number (2.3.x) as development levels of the web server.

 Well, that's another doc bug, since the page I link to has a big
 header that says:
 Apache HTTP Server Version 2.3

ouch; mod_fcgid uses the same doc framework/settings as httpd

(maybe somebody will figure out how to fold the mod_fcgid docs into
the manual for the level of httpd it is installed with)


Re: mod_fcgid: different instances of the same program

2009-11-09 Thread pqf
Hi,
   Yes, mod_fcgid searches for a process node based on the file's inode and
device id (plus share_group_id and virtual host name). The goal is to create
as few processes as possible. Some administrators like the idea that all
virtual hosts share one PHP process pool. (But some others don't; they can
turn that off anyway. This is what share_group_id is for in the first place:
the administrator can decide who shares whose process pool.)
   But the documentation should provide more detail about this; I missed that
part in my old documents. I am sure some native English speakers will improve
the documents soon.
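
In code terms, the search key amounts to something like this (field and
type names are illustrative, not mod_fcgid's actual structures):

    static int node_matches(const fcgid_node *n, const fcgid_node *want)
    {
        return n->inode == want->inode
            && n->deviceid == want->deviceid
            && n->share_grp_id == want->share_grp_id
            && strcmp(n->virtualhost, want->virtualhost) == 0;
    }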

Thanks


From: Danny Sadinoff 
Sent: Tuesday, November 10, 2009 6:16 AM
To: dev@httpd.apache.org 
Subject: mod_fcgid: different instances of the same program


Here are two details of mod_fcgid process management that I've just
learned after a long debug session and squinting at the mod_fcgid
code.

1) symlinks & you.
It seems that mod_fcgid identifies fcgid programs by inode and device,
not by filename.  So two fcgid programs invoked by the webserver
along different paths will be counted as the same if the two paths are
hardlinks or softlinks to each other.

2) Virtual hosts
The above item holds true even across virtual hosts.   So while
it's possible to adjust the FcgidInitialEnv items on a per-vhost
basis, this is a recipe for disaster if two vhosts point at the same
fcgi executable, because the resulting processes with potentially
different Environments will be inserted into the same pool.  Once that
occurs, we may expect that a server spawned with config defined in
vhost A will be parcelled out to vhost B.


The Apache httpd 2.3 docs do not address the symlink issue at all, and
the virtual host issue only indirectly.
http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html


I'd appreciate it if someone could confirm or deny the above.  If I'm
right, can we add it to the docs?  None of it seems obvious to me.
Apologies in advance if this is the sort of thing that belongs on the
dev list.  I'm happy to throw together a doc patch.

Thanks in advance 


P.S. having a lot of trouble getting this message posted to the list.  Not sure 
what's up with that.

--
Danny Sadinoff
da...@sadinoff.com

mod_fcgid: add mod_status support?

2009-11-09 Thread pqf
Hi, all
I am new to this community. I am thinking of adding mod_status support to
mod_fcgid, which would provide more internal information to administrators.
Is it a good idea? I am working on it now, but if someone thinks it's not a
good idea, please let me know.
BTW, I did test a spin lock on the shared memory process table; the result is
that hits per second were not as good as with the current global mutex, so I
gave up :)

Thanks

Re: mod_fcgid: add mod_status support?

2009-11-09 Thread Sander Temme


On Nov 9, 2009, at 5:51 PM, pqf wrote:


Hi, all
   I am new to this community. I am thinking of adding mod_status support
to mod_fcgid, which would provide more internal information to
administrators. Is it a good idea? I am working on it now, but if
someone thinks it's not a good idea, please let me know.


+1

See mod_ssl.c:314, ssl_scache.c:228 and ssl_scache.c:199 in trunk to
see how mod_ssl does it.
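
The pattern, roughly (status_hook and AP_STATUS_SHORT come from
mod_status.h and are real; the hook body here is only illustrative):

    #include "mod_status.h"

    static int fcgid_status_hook(request_rec *r, int flags)
    {
        if (!(flags & AP_STATUS_SHORT)) {
            ap_rputs("<hr />\n<h2>mod_fcgid status</h2>\n", r);
            /* ... dump the process table here ... */
        }
        return OK;
    }

    static void fcgid_register_status(void) /* from register_hooks */
    {
        APR_OPTIONAL_HOOK(ap, status_hook, fcgid_status_hook,
                          NULL, NULL, APR_HOOK_MIDDLE);
    }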


S.

--
Sander Temme
scte...@apache.org
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF




