Re: [PATCH] Apache::Test 5.005 compatible

2002-05-31 Thread Stas Bekman
Tatsuhiko Miyagawa wrote:
Apache::Test is supposed to be compatible with perl 5.005_03, but currently it's not. Here's a patch:
Thanks Tatsuhiko, committed!

__
Stas BekmanJAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide --- http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com



rewritemap breakage

2002-05-31 Thread Doug MacEachern
seems that the test suite now requires httpd-2.0-cvs from 
HEAD?  server does not start with 1.3.x or 2.0.36:

Syntax error on line 139 of .../t/conf/extra.conf
RewriteMap: map file or program not 
found:/.../t/htdocs/modules/rewrite/append.pl 
foo



Re: rewritemap breakage

2002-05-31 Thread Cliff Woolley
On Fri, 31 May 2002, Doug MacEachern wrote:

 seems that the test suite now requires httpd-2.0-cvs from
 HEAD?  server does not start with 1.3.x or 2.0.36:

 Syntax error on line 139 of .../t/conf/extra.conf RewriteMap: map file
 or program not found:/.../t/htdocs/modules/rewrite/append.pl foo

Hmph.  Well, it definitely won't work with 2.0.36... 2.0.36 was broken in
this regard.  :)  I thought this worked on 1.3.x though.  Oh well, no big
deal.  You can remove that test if you like.

--Cliff



RE: rewritemap breakage

2002-05-31 Thread Ryan Bloom
I fixed this in 2.0 the other day.  Try updating again.

Ryan

--
Ryan Bloom  [EMAIL PROTECTED]
645 Howard St.  [EMAIL PROTECTED]
San Francisco, CA 

 -Original Message-
 From: Doug MacEachern [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 31, 2002 11:49 AM
 To: [EMAIL PROTECTED]
 Subject: rewritemap breakage
 
 seems that the test suite now requires httpd-2.0-cvs from
 HEAD?  server does not start with 1.3.x or 2.0.36:
 
 Syntax error on line 139 of .../t/conf/extra.conf
 RewriteMap: map file or program not
 found:/.../t/htdocs/modules/rewrite/append.pl
 foo




Re: Need a new feature: Listing of CGI-enabled directories.

2002-05-31 Thread Ronald F. Guilmette


In message [EMAIL PROTECTED], 
Zac Stevens [EMAIL PROTECTED] wrote:

Let me preface this by noting that I agree with your goals, however I
believe that you may not have considered the nature of the problem in
sufficient depth.

I'll buy that.  I mean it would be arrogant of me... and possibly also
just plain wrong... not to admit the possibility that I have misjudged
the true nature of the problem.

On Thu, May 30, 2002 at 07:45:42PM -0700, Ronald F. Guilmette wrote:
 The first step in finding all such scripts however may often be the most
 difficult one.  That first step consists of simply gathering into one
 big list a list of all of the CGI-enabled directories on the local web
 server.  Once such a list has been compiled, other standard UNIX tools
 such as `find' and `file' and `grep' can be set to work, plowing through
 all of the files in those (CGI) directories and finding all of the bad
 FormMail scripts.

How are you defining bad FormMail scripts here?

Spammer exploitable.

A more detailed elaboration of that term, as I use it, may be found here:

http://www.monkeys.com/anti-spam/formmail-advisory.pdf

In the simplest case,
you could just run 'find' across the filesystem containing the web content
looking for formmail.cgi or formmail.pl...

Hold on there a moment!

The object here is to do the search _efficiently_, i.e. such that it can
be done even by very large virtual web hosting companies, on all web
servers, and every night.

Searching the entire filesystem is out of the question.

 and checking those found
against a list of known good/bad versions.  This doesn't require any
specific knowledge of the Apache configuration in use, and is an entirely
viable approach even on filesystems of several hundred gigabytes.

I believe that I disagree.

There are two separate problems with what you proposed.  First is the
fact that just searching for _filenames_ with the word formmail or
FormMail in them is not sufficient to find all of the bad Matt Wright
FormMail scripts that are installed on a given server.  End-lusers
often install the scripts using different names, e.g. form.pl or
mail.pl or just fm.pl.  The second problem is the notion of searching
several hundred gigabytes of filesystem.  That just isn't a viable approach,
especially given that some of the parties I'm dealing with on this issue
are already balking, even at the notion of merely scanning _just_ their
CGI-enabled directories.

A more thorough check would involve testing all executable ascii files,
perhaps excluding .html/.php and variants thereof.

Yes, and that is what is needed.

Every plain file that has the execute bit set and that resides in any
directory for which ExecCGI is enabled must be checked (a) to see if
it is a Perl script and then (b) to see if it is a Matt Wright Perl
script.
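
A minimal sketch of that two-step check, assuming the list of CGI-enabled
directories has already been produced by some other means; the "FormMail"
marker string below is only a stand-in for a real signature test:

/* scan one CGI-enabled directory for executable Perl scripts that look
 * like Matt Wright's FormMail; marker string and buffer sizes are
 * assumptions, not a complete signature database */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static void scan_cgi_dir(const char *dir)
{
    DIR *d = opendir(dir);
    struct dirent *de;
    char path[4096], buf[65536];

    if (d == NULL)
        return;
    while ((de = readdir(d)) != NULL) {
        struct stat st;
        FILE *f;
        size_t n;

        snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
        /* (a) plain file with an execute bit set ... */
        if (stat(path, &st) != 0 || !S_ISREG(st.st_mode)
            || !(st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH)))
            continue;
        if ((f = fopen(path, "r")) == NULL)
            continue;
        n = fread(buf, 1, sizeof(buf) - 1, f);
        buf[n] = '\0';
        fclose(f);
        /* ... (b) that starts with a perl shebang and mentions FormMail */
        if (strncmp(buf, "#!", 2) == 0 && strstr(buf, "perl") != NULL
            && strstr(buf, "FormMail") != NULL)
            printf("suspect: %s\n", path);
    }
    closedir(d);
}

int main(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc; i++)
        scan_cgi_dir(argv[i]);   /* e.g. scan /www/vhost1/cgi-bin ... */
    return 0;
}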

 But seriously, is there already a way to do what I need to do with the
 Apache server?  Looking at the command line options on the man page for
 httpd, there doesn't seem to be an option to request httpd to just parse
 the config file, list the CGI directories, and then exit.  But that's
 exactly what I need.

It isn't possible in the generic case.  Apache allows many aspects of the
configuration to be modified by the inclusion of .htaccess files beneath
the DocumentRoot of the web server.  Unless Apache has been configured not
to allow the ExecCGI option to be overridden, you would need to parse both
httpd.conf AND every .htaccess file on the filesystem.  Apache itself does
not do this at startup - it is done only as required.

Make the simplifying assumption that enabling ExecCGI via .htaccess files
has been disallowed within the http.conf file.

_Now_ tell me how to get a list of top-level CGI-enabled directories out
of httpd... please.

You also can't assume that only files in designated CGI directories are
going to be problematic.

Actually, I believe that I can, under a certain set of conditions.

If you would like to help me flesh out what that exact set of conditions
is, then by all means, please do.  I will appreciate it.

Understand that I am _not_ trying to build a solution to this searching
problem that will cover every possible contingency.  I will be satisfied
to build a solution that I can offer to web hosting companies and then
tell them ``This will work if you carefully restrict ExecCGI capability
by doing thus-and-such.''

That will be adequate for my purposes.

There's a long history of people using all sorts
of redirection and other techniques to access/execute things that they
shouldn't be able to.

OK, so where is the ``best practices'' document for Apache that describes,
in detail, exactly what webmasters must do (e.g. in the httpd.conf files)
in order to avoid exactly the kind of ``unexpected execute permission''
problem that you have just mentioned?

If there is no such document, then maybe it is time somebody wrote one.
(I will volunteer to write it, if other folks will provide the necessary
technical input.)


Re: Need a new feature: Listing of CGI-enabled directories.

2002-05-31 Thread Rasmus Lerdorf

mod_info will tell you some of this.  ie. Look for ScriptAlias lines under
mod_alias.c and AddHandler cgi-script lines under mod_mime.c.

But you are fighting a bit of a lost cause here.  If you allow users to
upload arbitrary cgi scripts there really isn't much you can do at the
Apache level.  It becomes a system security issue.  ie. blocking outbound
port 25 connections, for example.

-Rasmus

On Thu, 30 May 2002, Ronald F. Guilmette wrote:


 In message [EMAIL PROTECTED],
 Zac Stevens [EMAIL PROTECTED] wrote:

 Let me preface this by noting that I agree with your goals, however I
 believe that you may not have considered the nature of the problem in
 sufficient depth.

 I'll buy that.  I mean it would be arrogant of me... and possibly also
 just plain wrong... not to admit the possibility that I have misjudged
 the true nature of the problem.

 On Thu, May 30, 2002 at 07:45:42PM -0700, Ronald F. Guilmette wrote:
  The first step in finding all such scripts however may often be the most
  difficult one.  That first step consists of simply gathering into one
  big list a list of all of the CGI-enabled directories on the local web
  server.  Once such a list has been compiled, other standard UNIX tools
  such as `find' and `file' and `grep' can be set to work, plowing through
  all of the files in those (CGI) directories and finding all of the bad
  FormMail scripts.
 
 How are you defining bad FormMail scripts here?

 Spammer exploitable.

 A more detailed elaboration of that term, as I use it, may be found here:

 http://www.monkeys.com/anti-spam/formmail-advisory.pdf

 In the simplest case,
 you could just run 'find' across the filesystem containing the web content
 looking for formmail.cgi or formmail.pl...

 Hold on there a moment!

 The object here is to do the search _efficiently_, i.e. such that it can
 be done even by very large virtual web hosting companies, on all web
 servers, and every night.

 Searching the entire filesystem is out of the question.

  and checking those found
 against a list of known good/bad versions.  This doesn't require any
 specific knowledge of the Apache configuration in use, and is an entirely
 viable approach even on filesystems of several hundred gigabytes.

 I believe that I disagree.

 There are two separate problems with what you proposed.  First is the
 fact that just searching for _filenames_ with the word formmail or
 FormMail in them is not sufficient to find all of the bad Matt Wright
 FormMail scripts that are installed on a given server.  End-lusers
 often install the scripts using different names, e.g. form.pl or
 mail.pl or just fm.pl.  The second problem is the notion of searching
 several hundred gigabytes of filesystem.  That just isn't a viable approach,
 especially given that some of the parties I'm dealing with on this issue
 are already balking, even at the notion of merely scanning _just_ their
 CGI-enabled directories.

 A more thorough check would involve testing all executable ascii files,
 perhaps excluding .html/.php and variants thereof.

 Yes, and that is what is needed.

 Every plain file that has the execute bit set and that resides in any
 directory for which ExecCGI is enabled must be checked (a) to see if
 it is a Perl script and then (b) to see if it is a Matt Wright Perl
 script.

  But seriously, is there already a way to do what I need to do with the
  Apache server?  Looking at the command line options on the man page for
  httpd, there doesn't seem to be an option to request httpd to just parse
  the config file, list the CGI directories, and then exit.  But that's
  exactly what I need.
 
 It isn't possible in the generic case.  Apache allows many aspects of the
 configuration to be modified by the inclusion of .htaccess files beneath
 the DocumentRoot of the web server.  Unless Apache has been configured not
 to allow the ExecCGI option to be overridden, you would need to parse both
 httpd.conf AND every .htaccess file on the filesystem.  Apache itself does
 not do this at startup - it is done only as required.

 Make the simplifying assumption that enabling ExecCGI via .htaccess files
 has been disallowed within the http.conf file.

 _Now_ tell me how to get a list of top-level CGI-enabled directories out
 of httpd... please.

 You also can't assume that only files in designated CGI directories are
 going to be problematic.

 Actually, I believe that I can, under a certain set of conditions.

 If you would like to help me flesh out what that exact set of conditions
 is, then by all means, please do.  I will appreciate it.

 Understand that I am _not_ trying to build a solution to this searching
 problem that will cover every possible contingency.  I will be satisfied
 to build a solution that I can offer to web hosting companies and then
 tell them ``This will work if you carefully restrict ExecCGI capability
 by doing thus-and-such.''

 That will be adequate for my purposes.

 There's a long history of people using 

Re: Need a new feature: Listing of CGI-enabled directories.

2002-05-31 Thread Ronald F. Guilmette


In message [EMAIL PROTECTED], 
Rasmus Lerdorf [EMAIL PROTECTED] wrote:

mod_info will tell you some of this.  ie. Look for ScriptAlias lines under
mod_alias.c and AddHandler cgi-script lines under mod_mime.c.

I was hoping to find a volunteer to actually hack on this for me.

I am _not_ well versed in Apache internals myself.

But you are fighting a bit of a lost cause here.  If you allow users to
upload arbitrary cgi scripts there really isn't much you can do at the
Apache level.  It becomes a system security issue.  ie. blocking outbound
port 25 connections, for example.

I think you miss the point.

Yes, what you say is quite true.  It _is_ a security issue, and it is
not in any sense either (a) Apache's fault or even (b) something that
Apache can do anything directly about.

However this is a little like a chronic disease... You may not be able
to fully cure it, but if you can keep the symptoms in check then at
least that is a lot better than doing nothing.

In the case of FormMail scripts, if the big web hosting companies can
just scan all of their CGI directories for them every night and then
simply `rm' or `chmod ' anything found by the scans of the previous
night every morning, then that will be about 99.9% sufficient to eliminate
the problem.

And that's a lot better than doing nothing.




icarus running HEAD as of 3:46a EDT

2002-05-31 Thread Cliff Woolley


I just built and started up HEAD of httpd on icarus as of 3:46am Eastern.
As far as we know, all the funky error handling stuff is sorted out now...
crosses fingers

--Cliff




Re: [OT] Need a new feature: Listing of CGI-enabled directories.

2002-05-31 Thread Zac Stevens

Hi Ron,

Thanks for your detailed response to my post, I'll reply later this
evening off list.

I do want to jump in on this, though..

On Fri, May 31, 2002 at 12:24:30AM -0700, Ronald F. Guilmette wrote:
 In the case of FormMail scripts, if the big web hosting companies can
 just scan all of their CGI directories for them every night and then
 simply `rm' or `chmod ' anything found by the scans of the previous
 night every morning, then that will be about 99.9% sufficient to eliminate
 the problem.

To provide you with a bit of context, my comments come after having run a
Solaris/Apache-based virtual hosting service in Australia for approximately
5000 sites on around 200GB of disk (on a Network Appliance filer.)  I'm
security conscious, but I'm also pragmatic - as you yourself seem to be
aware, commercial realities do put a stop to best practices some of the
time.

I have tried, and failed, in attempts to correct bad user behaviour by the
means you have described above.  In addition to removing execute
permissions and chown'ing files to root, I have attempted to leave messages 
by way of conspicuously-named files in the relevant directories.

None of this was met with much success.  Typically, the user would
re-upload the script, or delete and re-upload the entire site, and the
problem would begin anew.  Naturally this probably could have been stopped
with more fascist permissioning, but this really isn't the sort of thing
most companies are comfortable doing to their paying customers.

The problem is that none of these things alerts the user to the problem -
it just creates a new problem to grab their attention.  In sites where
valid contact e-mail addresses are available for every customer, this can
be a more effective form of resolving the issue.


In the aforementioned web hosting environment, we did two things to
minimise the amount of spam mail originating from CGI scripts on our
system.

1) Supplied a formmail equivalent as standard.  Our script added a few
   extra niceties, but essentially it was just a safe formmail.

The existence of this script and clear instructions on how to use it were
supplied to every customer when their site was set up.  The support staff
were trained in using the script and diagnosing problems.  Combined with
the fact that support was not offered for 3rd-party scripts (although they
were permitted), this resulted in a high usage rate for our custom script.

2) The sendmail binary was replaced with a script which did sanity checking
   and added identifying details.

This is a simple and extremely effective way to put an end to the problem
of spam mail originating from virtual-hosted customers.  Perhaps due to the
nature of our business (And the fact we supplied a safe CGI mailer), the
majority of our problems came from developers who had written their own PHP
or CGI scripts.  Many did not understand the need to write secure code, nor
knew how to.  By applying checks on the mail leaving the system, you catch
all the problem scripts - not just those you already know about.
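
For what it's worth, one possible shape of such a wrapper, as a sketch only:
the real-sendmail path, the log location and the details logged here are all
assumptions, and the actual sanity checks are omitted.

/* hypothetical sendmail wrapper: the real binary is assumed to have been
 * moved to /usr/lib/sendmail.real and this program installed in its place;
 * it records identifying details, then hands off */
#include <stdio.h>
#include <time.h>
#include <sys/types.h>
#include <unistd.h>

#define REAL_SENDMAIL "/usr/lib/sendmail.real"
#define WRAPPER_LOG   "/var/log/sendmail-wrapper.log"

int main(int argc, char **argv)
{
    char cwd[4096];
    FILE *log = fopen(WRAPPER_LOG, "a");

    (void) argc;   /* unused */
    if (getcwd(cwd, sizeof(cwd)) == NULL)
        cwd[0] = '\0';
    if (log != NULL) {
        /* identifying details: time, uid and working directory of the caller */
        fprintf(log, "%ld uid=%ld cwd=%s\n",
                (long) time(NULL), (long) getuid(), cwd);
        fclose(log);
    }
    /* pass the original arguments straight through to the real binary */
    execv(REAL_SENDMAIL, argv);
    perror("execv");
    return 1;
}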


 And that's a lot better than doing nothing.

Anything is better than nothing - it's an issue of getting the best result
from the most acceptable expenditure of effort.  Perhaps we merely differ
in our opinions of where that effort is best spent.

Cheers,


Zac



[PATCH] 1.3: de-harcoding SHLIB_PREFIX_NAME in Makefile.tmpl

2002-05-31 Thread Stipe Tolj

The attached patch allows non-hardcoded names for the shared
library, which is currently hardcoded to
lib$(TARGET).$${SHLIB_SUFFIX_NAME}, by adding a SHLIB_PREFIX_NAME and
resulting in a new
$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME} value in the
Makefile templates.

This is mainly of use for the Cygwin platform, whose convention is to
name shared system libraries cygfoobar.dll instead of libfoobar.dll,
but other platforms may benefit from it too, and it does not break or
change anything on the other platforms.

The changes against cvs tree of 2002-05-28 are:

  * Makefile.tmpl: substitute lib$(TARGET).$${SHLIB_SUFFIX_NAME} with
$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME} and add
another substitution regexp for Cygwin's .dll extension for shared
libs.

  * src/Configure: added a default SHLIB_PREFIX_NAME=lib before
processing platform-specific blocks. Changed Cygwin's define block to
use the cyg prefix instead of lib.

  * src/Makefile.tmpl: same substitutions as in upper directory.


Please review and apply to cvs head. Thanks.

Stipe

[EMAIL PROTECTED]
---
Wapme Systems AG

Münsterstr. 248
40470 Düsseldorf

Tel: +49-211-74845-0
Fax: +49-211-74845-299

E-Mail: [EMAIL PROTECTED]
Internet: http://www.wapme-systems.de
---
wapme.net - wherever you are

diff -ur apache-1.3/Makefile.tmpl apache-1.3-cygwin/Makefile.tmpl
--- apache-1.3/Makefile.tmplWed Mar 13 20:05:27 2002
+++ apache-1.3-cygwin/Makefile.tmpl Tue May 28 11:15:10 2002
@@ -304,19 +304,20 @@
 	    $(INSTALL_PROGRAM) $(TOP)/$(SRC)/$(TARGET) $(root)$(sbindir)/$(TARGET); \
 	fi
 	-@if [ .`grep 'SUBTARGET=target_shared' $(TOP)/$(SRC)/Makefile` != . ]; then \
+	    SHLIB_PREFIX_NAME=`grep '^SHLIB_PREFIX_NAME=' $(TOP)/$(SRC)/Makefile | sed -e 's:^.*=::'`; \
 	    SHLIB_SUFFIX_NAME=`grep '^SHLIB_SUFFIX_NAME=' $(TOP)/$(SRC)/Makefile | sed -e 's:^.*=::'`; \
 	    SHLIB_SUFFIX_LIST=`grep '^SHLIB_SUFFIX_LIST=' $(TOP)/$(SRC)/Makefile | sed -e 's:^.*=::'`; \
-	    echo $(INSTALL_CORE) $(TOP)/$(SRC)/lib$(TARGET).ep $(root)$(libexecdir)/lib$(TARGET).ep; \
-	    $(INSTALL_CORE) $(TOP)/$(SRC)/lib$(TARGET).ep $(root)$(libexecdir)/lib$(TARGET).ep; \
-	    echo $(INSTALL_DSO) $(TOP)/$(SRC)/lib$(TARGET).$${SHLIB_SUFFIX_NAME} $(root)$(libexecdir)/lib$(TARGET).$${SHLIB_SUFFIX_NAME}; \
-	    $(INSTALL_DSO) $(TOP)/$(SRC)/lib$(TARGET).$${SHLIB_SUFFIX_NAME} $(root)$(libexecdir)/lib$(TARGET).$${SHLIB_SUFFIX_NAME}; \
+	    echo $(INSTALL_CORE) $(TOP)/$(SRC)/$${SHLIB_PREFIX_NAME}$(TARGET).ep $(root)$(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).ep; \
+	    $(INSTALL_CORE) $(TOP)/$(SRC)/$${SHLIB_PREFIX_NAME}$(TARGET).ep $(root)$(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).ep; \
+	    echo $(INSTALL_DSO) $(TOP)/$(SRC)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME} $(root)$(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME}; \
+	    $(INSTALL_DSO) $(TOP)/$(SRC)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME} $(root)$(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME}; \
 	    if [ .$${SHLIB_SUFFIX_LIST} != . ]; then \
-	        echo $(RM) $(libexecdir)/lib$(TARGET).$${SHLIB_SUFFIX_NAME}.*; \
-	        $(RM) $(libexecdir)/lib$(TARGET).$${SHLIB_SUFFIX_NAME}.*; \
+	        echo $(RM) $(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME}.*; \
+	        $(RM) $(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME}.*; \
 	        for suffix in $${SHLIB_SUFFIX_LIST} ; do \
 	            [ .$${suffix} = . ] && continue; \
-	            echo $(LN) $(root)$(libexecdir)/lib$(TARGET).$${SHLIB_SUFFIX_NAME} $(root)$(libexecdir)/lib$(TARGET).$${SHLIB_SUFFIX_NAME}.$${suffix}; \
-	            $(LN) $(root)$(libexecdir)/lib$(TARGET).$${SHLIB_SUFFIX_NAME} $(root)$(libexecdir)/lib$(TARGET).$${SHLIB_SUFFIX_NAME}.$${suffix}; \
+	            echo $(LN) $(root)$(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME} $(root)$(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME}.$${suffix}; \
+	            $(LN) $(root)$(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME} $(root)$(libexecdir)/$${SHLIB_PREFIX_NAME}$(TARGET).$${SHLIB_SUFFIX_NAME}.$${suffix}; \
 	        done; \
 	    fi; \
 	fi
@@ -341,7 +342,7 @@
sed -e 's:SharedModule:AddModule:' \
-e 's:modules/[^/]*/::' \
-e 's:[ ]lib: mod_:' \
-   -e 's:\.[soam].*$$:.c:' $(SRC)/.apaci.install.conf; \
+   -e 

[PATCH] 1.3: Cygwin specific changes to the build process

2002-05-31 Thread Stipe Tolj

Attached is a patch against cvs of 2002-05-28 with changes for the
Cygwin platform build process. Again, everything is designed not to
hurt other platforms.

The changes are:

  * src/helpers/binbuild.sh: adding a MAKERERUN variable to run make
twice on Cygwin platforms while binbuild.sh is executed. This is
necessary because we need to link shared modules against the
shared core library, which is not yet present during the first make
run, so we need to run it twice. Cygwin now has the same configure
flags in binbuild.sh as the other unix flavors.

  * src/helpers/install.sh: this is a long-standing issue. The patch
fixes the make install problem we still see with .exe files. I hope
this is suitable to everyone for committing.

  * src/modules/standard/Makefile.Cygwin: minor text changes to
indicate we use the filename cyghttpd.dll instead of libhttpd.dll
as shared core lib.


Stipe

[EMAIL PROTECTED]
---
Wapme Systems AG

Münsterstr. 248
40470 Düsseldorf

Tel: +49-211-74845-0
Fax: +49-211-74845-299

E-Mail: [EMAIL PROTECTED]
Internet: http://www.wapme-systems.de
---
wapme.net - wherever you are

diff -ur apache-1.3/src/helpers/binbuild.sh apache-1.3-cygwin/src/helpers/binbuild.sh
--- apache-1.3/src/helpers/binbuild.sh  Wed May 15 15:27:55 2002
+++ apache-1.3-cygwin/src/helpers/binbuild.sh   Tue May 28 11:15:10 2002
@@ -7,10 +7,11 @@
 # See http://www.apache.org/docs/LICENSE
 
 OS=`src/helpers/GuessOS`
+MAKERERUN=no
 case "x$OS" in
   x*OS390*) CONFIGPARAM="--with-layout=BinaryDistribution --enable-module=most";;
-  *cygwin*) CONFIGPARAM="--with-layout=BinaryDistribution --enable-module=most \
- --enable-rule=SHARED_CORE --libexecdir=bin";;
+  *cygwin*) CONFIGPARAM="--with-layout=BinaryDistribution --enable-module=most --enable-shared=max"
+MAKERERUN=yes;;
   *) CONFIGPARAM="--with-layout=BinaryDistribution --enable-module=most --enable-shared=max";;
 esac
 APDIR=`pwd`
@@ -55,6 +56,7 @@
   rm -rf bindist install-bindist.sh *.bindist
  echo -- && \
  make && \
+  if [ x$MAKERERUN = xyes ]; then make; fi && \
  echo -- && \
  make install-quiet root=bindist/ && \
  echo -- && \
diff -ur apache-1.3/src/helpers/install.sh apache-1.3-cygwin/src/helpers/install.sh
--- apache-1.3/src/helpers/install.sh   Tue Jun 12 10:24:53 2001
+++ apache-1.3-cygwin/src/helpers/install.shTue May 28 11:15:10 2002
@@ -89,12 +89,8 @@
 
 #  Check if we need to add an executable extension (such as .exe) 
 #  on specific OS to src and dst
-if [ -f $src.exe ]; then
-  if [ -f $src ]; then
-: # Cygwin [ test ] is too stupid to do [ -f $src.exe ] && [ ! -f $src ]
-  else
-ext=.exe
-  fi
+if [ -f $src.exe ] && [ ! -f $src. ]; then
+  ext=.exe
 fi
 src=$src$ext
 dst=$dst$ext
diff -ur apache-1.3/src/modules/standard/Makefile.Cygwin 
apache-1.3-cygwin/src/modules/standard/Makefile.Cygwin
--- apache-1.3/src/modules/standard/Makefile.Cygwin Thu Jan 17 12:20:51 2002
+++ apache-1.3-cygwin/src/modules/standard/Makefile.Cygwin  Tue May 28 11:15:10 
+2002
@@ -4,7 +4,7 @@
 # On Cygwin OS the user needs to run twice make if shared modules have 
 # been requested using the --enable-shared=module configure flag.
 # This is because when we pass the module mod_foo.c we have no import 
-# library, usually src/libhttpd.dll to link against in this case. So the
+# library, usually src/cyghttpd.dll to link against in this case. So the
 # two make runs do the following:
 #
 #   1st run: builds all static modules and links everything to the 
@@ -42,7 +42,7 @@
else \
 if [ ! -f $(SRCDIR)/$(SHCORE_IMPLIB).$$ ]; then \
  echo ++; \
- echo | There is no shared core 'libhttpd.dll' available!  |; \
+ echo | There is no shared core 'cyghttpd.dll' available!  |; \
  echo ||; \
  echo | This is obviously your first 'make' run with configure |; \
  echo | flag SHARED_CORE enabled and shared modules.   |; \



[PATCH] 1.3: Cygwin specifics in http_main.c

2002-05-31 Thread Stipe Tolj

Attached is a patch for allowing user changes on the Cygwin platform
and a #define wrapper for the timeout signal we use to kill off pending
open children that do not react to the usual signals.

The signalling issue seems to be a problem on the Cygwin platform, but
it's abstracted, so other platforms may benefit from it. Again,
nothing else is changed in behaviour.

Changes are:

  * src/include/ap_config.h: added the system uid for Cygwin that is
the root user on Cygwin

  * src/main/http_main.c: some Cygwin-specific #defines around
setpgrp() and getuid() calls. Adding the #define SIG_TIMEOUT_KILL to
define which signal should be used to kill off timed-out children,
defaulting to the known value for all other platforms.

  * src/modules/proxy/proxy_cache.c: cygwin specific #define around
setpgrp()


Please review and apply to cvs if suitable. Thanks in advance.

Stipe

[EMAIL PROTECTED]
---
Wapme Systems AG

Münsterstr. 248
40470 Düsseldorf

Tel: +49-211-74845-0
Fax: +49-211-74845-299

E-Mail: [EMAIL PROTECTED]
Internet: http://www.wapme-systems.de
---
wapme.net - wherever you are

diff -ur apache-1.3/src/include/ap_config.h apache-1.3-cygwin/src/include/ap_config.h
--- apache-1.3/src/include/ap_config.h  Wed Mar 13 20:05:29 2002
+++ apache-1.3-cygwin/src/include/ap_config.h   Tue May 28 11:15:10 2002
@@ -1003,8 +1003,10 @@
 #define NEED_HASHBANG_EMUL
 
 #elif defined(CYGWIN)   /* Cygwin 1.x POSIX layer for Win32 */
+#define SYSTEM_UID 18
 #define JMP_BUF jmp_buf
 #define NO_KILLPG
+#define NO_SETSID
 #define USE_LONGJMP
 #define GDBM_STATIC
 #define HAVE_MMAP 1
diff -ur apache-1.3/src/main/http_main.c apache-1.3-cygwin/src/main/http_main.c
--- apache-1.3/src/main/http_main.c Mon May 27 17:39:24 2002
+++ apache-1.3-cygwin/src/main/http_main.c  Tue May 28 11:15:10 2002
@@ -3409,6 +3409,13 @@
 #elif defined(MPE)
 /* MPE uses negative pid for process group */
 pgrp = -getpid();
+#elif defined(CYGWIN)
+/* Cygwin does not take any argument for setpgrp() */
+if ((pgrp = setpgrp()) == -1) {
+perror("setpgrp");
+fprintf(stderr, "%s: setpgrp failed\n", ap_server_argv0);
+exit(1);
+}
 #else
 if ((pgrp = setpgrp(getpid(), 0)) == -1) {
	perror("setpgrp");
@@ -4225,8 +4232,15 @@
 }
 GETUSERMODE();
 #else
-/* Only try to switch if we're running as root */
+/* 
+ * Only try to switch if we're running as root
+ * In case of Cygwin we have the special super-user named SYSTEM
+ */
+#ifdef CYGWIN
+if (getuid() == SYSTEM_UID && (
+#else
 if (!geteuid() && (
+#endif
 #ifdef _OSD_POSIX
os_init_job_environment(server_conf, ap_user_name, one_process) != 0 || 
 #endif
@@ -4798,13 +4812,16 @@
  * is greater then ap_daemons_max_free. Usually we will use SIGUSR1
  * to gracefully shutdown, but unfortunatly some OS will need other 
  * signals to ensure that the child process is terminated and the 
- * scoreboard pool is not growing to infinity. This effect has been
- * seen at least on Cygwin 1.x. -- Stipe Tolj [EMAIL PROTECTED]
+ * scoreboard pool is not growing to infinity. Also set the signal we
+ * use to kill off children that exceed the timeout. This effect has been
+ * seen at least on Cygwin 1.x. -- Stipe Tolj [EMAIL PROTECTED]
  */
 #if defined(CYGWIN)
 #define SIG_IDLE_KILL SIGKILL
+#define SIG_TIMEOUT_KILL SIGUSR2
 #else
 #define SIG_IDLE_KILL SIGUSR1
+#define SIG_TIMEOUT_KILL SIGALRM
 #endif
 
 static void perform_idle_server_maintenance(void)
@@ -4876,7 +4893,7 @@
	else if (ps->last_rtime + ss->timeout_len < now) {
	/* no progress, and the timeout length has been exceeded */
	ss->timeout_len = 0;
-	kill(ps->pid, SIGALRM);
+	kill(ps->pid, SIG_TIMEOUT_KILL);
}
}
 #endif
@@ -5492,8 +5509,16 @@
}
GETUSERMODE();
 #else
-   /* Only try to switch if we're running as root */
+/* 
+ * Only try to switch if we're running as root
+ * In case of Cygwin we have the special super-user named SYSTEM
+ * with a pre-defined uid.
+ */
+#ifdef CYGWIN
+if ((getuid() == SYSTEM_UID) && setuid(ap_user_id) == -1) {
+#else
	if (!geteuid() && setuid(ap_user_id) == -1) {
+#endif
	ap_log_error(APLOG_MARK, APLOG_ALERT, server_conf,
	"setuid: unable to change to uid: %ld",
(long) ap_user_id);
@@ -7686,7 +7711,7 @@
 #endif
 
 
-int ap_main(int argc, char *argv[]); /* Load time linked from libhttpd.dll */
+int ap_main(int argc, char *argv[]); /* Load time linked from cyghttpd.dll */
 
 int main(int argc, char *argv[])
 {
diff -ur apache-1.3/src/modules/proxy/proxy_cache.c 
apache-1.3-cygwin/src/modules/proxy/proxy_cache.c
--- apache-1.3/src/modules/proxy/proxy_cache.c  Fri Apr 12 12:34:46 2002
+++ 

Re: cvs commit: httpd-2.0/modules/test mod_bucketeer.c

2002-05-31 Thread Ben Laurie

[EMAIL PROTECTED] wrote:
 jwoolley    2002/05/31 00:43:22
 
   Modified:modules/test mod_bucketeer.c
   Log:
   we should be copying over all metadata buckets we don't understand, not
   just error buckets.
   
   Revision  ChangesPath
   1.12  +5 -4  httpd-2.0/modules/test/mod_bucketeer.c
   
   Index: mod_bucketeer.c
   ===
   RCS file: /home/cvs/httpd-2.0/modules/test/mod_bucketeer.c,v
   retrieving revision 1.11
   retrieving revision 1.12
   diff -u -d -u -r1.11 -r1.12
   --- mod_bucketeer.c 30 May 2002 21:10:19 -  1.11
   +++ mod_bucketeer.c 31 May 2002 07:43:22 -  1.12
   @@ -141,10 +141,11 @@
continue;
}

   -if (AP_BUCKET_IS_ERROR(e)) { 
   -apr_bucket *err_copy;
    -apr_bucket_copy(e, &err_copy);
    -APR_BRIGADE_INSERT_TAIL(ctx->bb, err_copy);
    +if (e->length == 0) {

Looks like magic to me - perhaps wrap it in AP_BUCKET_IS_METADATA()?

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




[PATCH] 1.3: mod_proxy

2002-05-31 Thread Matt Kern

There is a feature I have wanted in Apache for a little while now: the
ability to preserve the host header when using mod_proxy.  This is
mostly useful for two (or more) tier systems, where it is desirable to
pass certain virtual server requests through to backend servers.

The attached patch adds an option:

ProxyPreserveHostHeader On|Off

This option, when set, causes the Host: header of the proxy request to
be passed on intact (including port number) to the backend server.  I
know there are other ways of jiggering this, but none I have found are
quite as neat.  My primary goal in implementing this is to leave the
front end server configuration untouched (with a wildcard dns entry
pointed at it) and have unmatched virtual hosts passed through to a
backend server on a different host/port.  The servers running on this
backend server change with time.

Regards,
Matt

-- 
Matt Kern
http://www.undue.org/


diff -ur proxy/mod_proxy.c proxy/mod_proxy.c
--- proxy/mod_proxy.c   Thu May 30 17:11:32 2002
+++ proxy/mod_proxy.c   Thu May 30 17:54:01 2002
@@ -436,6 +436,8 @@
 ps->viaopt_set = 0; /* 0 means default */
 ps->req = 0;
 ps->req_set = 0;
+ps->hheader = 0;
+ps->hheader_set = 0;
 ps->recv_buffer_size = 0;   /* this default was left unset for some
  * reason */
 ps->recv_buffer_size_set = 0;
@@ -483,6 +485,7 @@
 ps->domain = (overrides->domain == NULL) ? base->domain : overrides->domain;
 ps->viaopt = (overrides->viaopt_set == 0) ? base->viaopt : overrides->viaopt;
 ps->req = (overrides->req_set == 0) ? base->req : overrides->req;
+ps->hheader = (overrides->hheader_set == 0) ? base->hheader : overrides->hheader;
 ps->recv_buffer_size = (overrides->recv_buffer_size_set == 0) ? base->recv_buffer_size : overrides->recv_buffer_size;
 ps->io_buffer_size = (overrides->io_buffer_size_set == 0) ? base->io_buffer_size : overrides->io_buffer_size;
 
@@ -703,6 +706,19 @@
 
 
 static const char *
+ set_proxy_preserve_host_header(cmd_parms *parms, void *dummy, int flag)
+{
+proxy_server_conf *psf =
+ap_get_module_config(parms->server->module_config, &proxy_module);
+
+psf->hheader = flag;
+psf->hheader_set = 1;
+
+return NULL;
+}
+
+
+static const char *
  set_cache_size(cmd_parms *parms, char *struct_ptr, char *arg)
 {
 proxy_server_conf *psf =
@@ -936,6 +952,8 @@
 "a virtual path and a URL"},
 {"ProxyPassReverse", add_pass_reverse, NULL, RSRC_CONF, TAKE2,
 "a virtual path and a URL for reverse proxy behaviour"},
+{"ProxyPreserveHostHeader", set_proxy_preserve_host_header, NULL, RSRC_CONF, FLAG,
+"on if the original Host: header should be preserved"},
 {"ProxyBlock", set_proxy_exclude, NULL, RSRC_CONF, ITERATE,
 "A list of names, hosts or domains to which the proxy will not connect"},
 {"ProxyReceiveBufferSize", set_recv_buffer_size, NULL, RSRC_CONF, TAKE1,
diff -ur proxy/mod_proxy.h proxy/mod_proxy.h
--- proxy/mod_proxy.h   Thu May 30 17:11:32 2002
+++ proxy/mod_proxy.h   Thu May 30 17:52:48 2002
@@ -192,6 +192,8 @@
 char *domain;   /* domain name to use in absence of a domain name in the request */
 int req;/* true if proxy requests are enabled */
 char req_set;
+int hheader;   /* true if we want to preserve the host header */
+char hheader_set;
 enum {
   via_off,
   via_on,
diff -ur proxy/proxy_http.c proxy/proxy_http.c
--- proxy/proxy_http.c  Thu May 30 17:11:32 2002
+++ proxy/proxy_http.c  Thu May 30 18:38:03 2002
@@ -177,6 +177,7 @@
 struct noproxy_entry *npent = (struct noproxy_entry *) conf->noproxies->elts;
 struct nocache_entry *ncent = (struct nocache_entry *) conf->nocaches->elts;
 int nocache = 0;
+int incoming_port = htons ((unsigned short)r->connection->local_addr.sin_port);
 
 if (conf->cache.root == NULL)
 nocache = 1;
@@ -312,10 +313,20 @@
 ap_bvputs(f, r->method, " ", proxyhost ? url : urlptr, " HTTP/1.1" CRLF,
   NULL);
 /* Send Host: now, adding it to req_hdrs wouldn't be much better */
-if (destportstr != NULL && destport != DEFAULT_HTTP_PORT)
+if (conf->hheader) {
+  i = ap_get_server_port(r);
+  if (ap_is_default_port(i, r))
+	strcpy(portstr, "");
+  else
+	ap_snprintf (portstr, sizeof portstr, ":%d", i);
+  ap_bvputs(f, "Host: ", r->hostname, portstr, CRLF, NULL);
+}
+else {
+  if (destportstr != NULL && destport != DEFAULT_HTTP_PORT)
 ap_bvputs(f, "Host: ", desthost, ":", destportstr, CRLF, NULL);
-else
+  else
 ap_bvputs(f, "Host: ", desthost, CRLF, NULL);
+}
 
 if (conf->viaopt == via_block) {
 /* Block all outgoing Via: headers */



Re: don't try this at home

2002-05-31 Thread Jeff Trawick

Justin Erenkrantz [EMAIL PROTECTED] writes:

 My suggestion is as follows: if the ap_die() code is one that
 forces us to drop the connection, we don't report the recursive
 error, but instead just report this one.
 
 So the conditional on http_request.c:121 may work as:
 
 if (r->status != HTTP_OK && !ap_status_drops_connection(type)) {
 
 Can you try this?  -- justin

oops, just saw your note... but you fixed it already anyway :)

Thanks!

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: cvs commit: httpd-2.0 STATUS

2002-05-31 Thread Jeff Trawick

Justin Erenkrantz [EMAIL PROTECTED] writes:

 On Thu, May 30, 2002 at 08:20:30PM -, [EMAIL PROTECTED] wrote:
+  2) edit the path to LIBTOOL in config_vars.mk to reflect the
+ place where it was actually installed
 
 In this case, I think it would be best if it used apr-config's
 --apr-libtool option.
 
 % /pkg/httpd-2.0/bin/apr-config --apr-libtool
 /bin/sh /pkg/httpd-2.0/build/libtool
 % srclib/apr/apr-config --apr-libtool
 /bin/sh /home/jerenkrantz/cvs-apache/build/prefork/srclib/apr/libtool
 
 This might be better than relying on config_vars.mk.  -- justin

agreed, assuming it is practical

Note that the makefile template generated from apxs -g uses the
normal Apache build files rules.mk et al, so I'm not sure how
practical it is to do such heavy patching in install-bindist.sh.

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: cvs commit: httpd-2.0/modules/test mod_bucketeer.c

2002-05-31 Thread Jim Jagielski

Justin Erenkrantz wrote:
 
 The fact here is that not all buckets can be read such as EOS,
 flush, and error.  Perhaps these buckets should return an error
 if they are read?
 

Certainly that's not the case... all buckets must be readable.

-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  A society that will trade a little liberty for a little order
 will lose both and deserve neither - T.Jefferson



Re: [PATCH] 1.3: Cygwin specifics in http_main.c

2002-05-31 Thread Jim Jagielski

At 11:20 AM +0200 5/31/02, Stipe Tolj wrote:
Attached is a patch for allowing user changes on the Cygwin platform
and a #define wrapper for the timeout signal we use to kill off pending
open children that do not react to the usual signals.

The signalling issue seems to be a problem on the Cygwin platform, but
it's abstracted, so other platforms may benefit from it. Again,
nothing else is changed in behaviour.

Changes are:

  * src/include/ap_config.h: added the system uid for Cygwin that is
the root user on Cygwin

  * src/main/http_main.c: some Cygwin-specific #defines around
setpgrp() and getuid() calls. Adding the #define SIG_TIMEOUT_KILL to
define which signal should be used to kill off timed-out children,
defaulting to the known value for all other platforms.

  * src/modules/proxy/proxy_cache.c: cygwin specific #define around
setpgrp()

Please review and apply to cvs if suitable. Thanks in advance.


This looks safe to me... I'll give it a day or so for any vetoes
to pop up, but if they don't I'll commit.
-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  A society that will trade a little liberty for a little order
 will lose both and deserve neither - T.Jefferson



Re: [PATCH] 1.3: Cygwin specific changes to the build process

2002-05-31 Thread Jim Jagielski

At 11:12 AM +0200 5/31/02, Stipe Tolj wrote:
diff -ur apache-1.3/src/helpers/install.sh apache-1.3-cygwin/src/helpers/install.sh
--- apache-1.3/src/helpers/install.sh  Tue Jun 12 10:24:53 2001
+++ apache-1.3-cygwin/src/helpers/install.sh   Tue May 28 11:15:10 2002
@@ -89,12 +89,8 @@
 
 #  Check if we need to add an executable extension (such as .exe)
 #  on specific OS to src and dst
-if [ -f $src.exe ]; then
-  if [ -f $src ]; then
-: # Cygwin [ test ] is too stupid to do [ -f $src.exe ] && [ ! -f $src ]
-  else
-ext=.exe
-  fi
+if [ -f $src.exe ] && [ ! -f $src. ]; then
+  ext=.exe
 fi
 src=$src$ext
 dst=$dst$ext

Why the above change?? If [] is fixed, what about backwards compatibility?
-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  A society that will trade a little liberty for a little order
 will lose both and deserve neither - T.Jefferson



[PATCH] Re: don't try this at home

2002-05-31 Thread Jeff Trawick

Jeff Trawick [EMAIL PROTECTED] writes:

 okay, do try it, but (unlike somebody last night) don't try it on daedalus
 
 GET / HTTP/1.1
 Accept: */*
 Host: test
 Content-Type: application/x-www-form-urlencoded
 Transfer-Encoding: chunked
 
 AAA

The situation where I (still) get 200 on the GET followed by 500 for
method AAA... is when GET / is handled by mod_autoindex.

Isn't mod_autoindex responsible for discarding the request body?

With this change I'm getting 413 for the autoindex request:

Index: modules/generators/mod_autoindex.c
===
RCS file: /home/cvs/httpd-2.0/modules/generators/mod_autoindex.c,v
retrieving revision 1.108
diff -u -r1.108 mod_autoindex.c
--- modules/generators/mod_autoindex.c  18 May 2002 04:13:12 -  1.108
+++ modules/generators/mod_autoindex.c  31 May 2002 13:30:52 -
@@ -2133,6 +2133,12 @@
 /* OK, nothing easy.  Trot out the heavy artillery... */
 
 if (allow_opts  OPT_INDEXES) {
+int errstatus;
+
+if ((errstatus = ap_discard_request_body(r)) != OK) {
+return errstatus;
+}
+
 /* KLUDGE --- make the sub_req lookups happen in the right directory.
  * Fixing this in the sub_req_lookup functions themselves is difficult,
  * and would probably break virtual includes...


-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: Repeating Calls to apr_dso_load()

2002-05-31 Thread Eli Marmor

Aaron Bannert wrote:

 [copying the APR dev list]

[I'm not subscribed to APR, and this message will probably be refused,
so please forward it to them]

 On Tue, May 28, 2002 at 04:55:14AM +0300, Eli Marmor wrote:
  Can it be assumed that calling apr_dso_load() twice for the same shared
  object will not re-open that file, but just return the same handle?
 
 LoadLibrary*() on Windows does reference counting, and same goes for
 dlopen().

Thank you very much for the info about LoadLibrary().
Regarding dlopen(), I already mentioned it in my original question
(well, you wouldn't expect me to raise a question without checking it
first...).

I think that assuming that most of the rest are not critical
(NSLinkModule, load_add_on of BeOS, DosLoadModule of OS/2, dllload of
OS/390), we have only to check what happens with shl_load() (HP-UX).
If the behavior of shl_load() is similar, then it will not be wrong
to say that more than 99% of the installations of Apache behave this
way (the sum of dlopen() + shl_load() + LoadLibrary() ).
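
A small stand-alone demonstration of that reference-counting behaviour for
the dlopen() case (the library name and the symbol looked up are just
examples; link with -ldl):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* opening the same object twice yields the same handle ... */
    void *h1 = dlopen("libm.so.6", RTLD_NOW);
    void *h2 = dlopen("libm.so.6", RTLD_NOW);

    if (h1 == NULL || h2 == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    printf("same handle: %s\n", (h1 == h2) ? "yes" : "no");

    /* ... and one dlclose() only drops a reference; the object stays usable */
    dlclose(h2);
    printf("cos still resolvable: %s\n",
           dlsym(h1, "cos") ? "yes" : "no");
    dlclose(h1);                  /* last reference released */
    return 0;
}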

-- 
Eli Marmor
[EMAIL PROTECTED]
CTO, Founder
Netmask (El-Mar) Internet Technologies Ltd.
__
Tel.:   +972-9-766-1020  8 Yad-Harutzim St.
Fax.:   +972-9-766-1314  P.O.B. 7004
Mobile: +972-50-23-7338  Kfar-Saba 44641, Israel



Re: [PATCH] 1.3: mod_proxy

2002-05-31 Thread Daniel Lopez

On Fri, May 31, 2002 at 11:26:55AM +0100, Matt Kern wrote:
 There is a feature I have wanted in Apache for a little while now: the
 ability to preserve the host header when using mod_proxy.  This is
 mostly useful for two (or more) tier systems, where it is desirable to
 pass certain virtual server requests through to backend servers.
 
 The attached patch adds an option:
 
 ProxyPreserveHostHeader On|Off
 
 This option, when set, causes the Host: header of the proxy request to
 be passed on intact (including port number) to the backend server.  I
 know there are other ways of jiggering this, but none I have found are
 quite as neat.  My primary goal in implementing this is to leave the
 front end server configuration untouched (with a wildcard dns entry
 pointed at it) and have unmatched virtual hosts passed through to a
 backend server on a different host/port.  The servers running on this
 backend server change with time.

This is already implemented in 2.0 as ProxyPreserveHost; you may want to backport it
to 1.3:

http://httpd.apache.org/docs-2.0/mod/mod_proxy.html#proxypreservehost




 Regards,
 Matt
 
 -- 
 Matt Kern
 http://www.undue.org/

 diff -ur proxy/mod_proxy.c proxy/mod_proxy.c
 --- proxy/mod_proxy.c Thu May 30 17:11:32 2002
 +++ proxy/mod_proxy.c Thu May 30 17:54:01 2002
 @@ -436,6 +436,8 @@
  ps->viaopt_set = 0; /* 0 means default */
  ps->req = 0;
  ps->req_set = 0;
 +ps->hheader = 0;
 +ps->hheader_set = 0;
  ps->recv_buffer_size = 0;   /* this default was left unset for some
   * reason */
  ps->recv_buffer_size_set = 0;
 @@ -483,6 +485,7 @@
  ps->domain = (overrides->domain == NULL) ? base->domain : overrides->domain;
  ps->viaopt = (overrides->viaopt_set == 0) ? base->viaopt : overrides->viaopt;
  ps->req = (overrides->req_set == 0) ? base->req : overrides->req;
 +ps->hheader = (overrides->hheader_set == 0) ? base->hheader : overrides->hheader;
  ps->recv_buffer_size = (overrides->recv_buffer_size_set == 0) ? base->recv_buffer_size : overrides->recv_buffer_size;
  ps->io_buffer_size = (overrides->io_buffer_size_set == 0) ? base->io_buffer_size : overrides->io_buffer_size;
  
 @@ -703,6 +706,19 @@
  
  
  static const char *
 + set_proxy_preserve_host_header(cmd_parms *parms, void *dummy, int flag)
 +{
 +proxy_server_conf *psf =
 +ap_get_module_config(parms->server->module_config, &proxy_module);
 +
 +psf->hheader = flag;
 +psf->hheader_set = 1;
 +
 +return NULL;
 +}
 +
 +
 +static const char *
   set_cache_size(cmd_parms *parms, char *struct_ptr, char *arg)
  {
  proxy_server_conf *psf =
 @@ -936,6 +952,8 @@
  "a virtual path and a URL"},
  {"ProxyPassReverse", add_pass_reverse, NULL, RSRC_CONF, TAKE2,
  "a virtual path and a URL for reverse proxy behaviour"},
 +{"ProxyPreserveHostHeader", set_proxy_preserve_host_header, NULL, RSRC_CONF, FLAG,
 +"on if the original Host: header should be preserved"},
  {"ProxyBlock", set_proxy_exclude, NULL, RSRC_CONF, ITERATE,
  "A list of names, hosts or domains to which the proxy will not connect"},
  {"ProxyReceiveBufferSize", set_recv_buffer_size, NULL, RSRC_CONF, TAKE1,
 diff -ur proxy/mod_proxy.h proxy/mod_proxy.h
 --- proxy/mod_proxy.h Thu May 30 17:11:32 2002
 +++ proxy/mod_proxy.h Thu May 30 17:52:48 2002
 @@ -192,6 +192,8 @@
  char *domain;   /* domain name to use in absence of a domain name in the request */
  int req;/* true if proxy requests are enabled */
  char req_set;
 +int hheader; /* true if we want to preserve the host header */
 +char hheader_set;
  enum {
    via_off,
    via_on,
 diff -ur proxy/proxy_http.c proxy/proxy_http.c
 --- proxy/proxy_http.c  Thu May 30 17:11:32 2002
 +++ proxy/proxy_http.c  Thu May 30 18:38:03 2002
 @@ -177,6 +177,7 @@
  struct noproxy_entry *npent = (struct noproxy_entry *) conf->noproxies->elts;
  struct nocache_entry *ncent = (struct nocache_entry *) conf->nocaches->elts;
  int nocache = 0;
 +int incoming_port = htons ((unsigned short)r->connection->local_addr.sin_port);
  
  if (conf->cache.root == NULL)
  nocache = 1;
 @@ -312,10 +313,20 @@
  ap_bvputs(f, r->method, " ", proxyhost ? url : urlptr, " HTTP/1.1" CRLF,
    NULL);
  /* Send Host: now, adding it to req_hdrs wouldn't be much better */
 -if (destportstr != NULL && destport != DEFAULT_HTTP_PORT)
 +if (conf->hheader) {
 +  i = ap_get_server_port(r);
 +  if (ap_is_default_port(i, r))
 +	strcpy(portstr, "");
 +  else
 +	ap_snprintf (portstr, sizeof portstr, ":%d", i);
 +  ap_bvputs(f, "Host: ", r->hostname, portstr, CRLF, NULL);
 +}
 +else {
 +  if (destportstr != NULL && destport != DEFAULT_HTTP_PORT)
  ap_bvputs(f, "Host: ", desthost, ":", destportstr, CRLF, NULL);
 -else
 +  else
  ap_bvputs(f, "Host: ", desthost, CRLF, NULL);
 +}
  
  if (conf->viaopt == via_block) {
  

Re: Need a new feature: Listing of CGI-enabled directories.

2002-05-31 Thread Andrew Ho

Hello,

RG> With regards to this it would be most helpful if I could get Apache,
RG> which already has code to parse and analyze Apache configuration files,
RG> to simply spit out a list of all of the CGI-enabled directories that are
RG> specified in a given http.conf file to, say, stdout.

The reason you can't do this easily is because Apache doesn't work this
way. It doesn't ever build up a big list of every directory that has
permission to do foo or bar capability. In real time, it takes requests,
and then it compares those requests against a set of rules to decide
whether foo or bar capability is called for.

I doubt it is a good idea to include this functionality in Apache.

RL> mod_info will tell you some of this. ie. Look for ScriptAlias lines under
RL> mod_alias.c and AddHandler cgi-script lines under mod_mime.c.

RG> I was hoping to find a volunteer to actually hack on this for me. I am
RG> _not_ well versed in Apache internals myself.

So as Rasmus points out, you can parse configuration information either
manually or automatically to achieve a similar goal.

I recommend parsing httpd.conf instead using a script of your own
devising. There are some Perl scripts written by the mod_perl crowd which
will take you 95% of the way to getting good parsing. Try looking on CPAN
for the Apache::ConfigParser or Apache::Admin::Config modules.

But note that the best that you can do is to spit out a big list of files
and directories, which you then must scan the filesystem for using find
or an equivalent anyway. And during that scanning you'll have to worry
about whether to, for example, FollowSymLinks.

RG> In the case of FormMail scripts, if the big web hosting companies can
RG> just scan all of their CGI directories for them every night and then
RG> simply `rm' or `chmod ' anything found by the scans of the previous
RG> night every morning, then that will be about 99.9% sufficient to
RG> eliminate the problem.

I think the question is, if all your VirtualHost DocumentRoots and
ScriptAliases are under one big tree anyway, why not scan the entire tree
and be more confident, rather than scanning a subset of it which may not
be that much smaller?

If it is known that ScriptAliases and directories with ExecCGI enabled are
rare, and always have FollowSymLinks disabled, then I suggest the parsing
approach mentioned above.

Humbly,

Andrew

--
Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
Tellme Networks, Inc.   1-800-555-TELLFax 650-930-9101
--






Re: Links on http://xml.apache.org/axis are incorrect

2002-05-31 Thread Cliff Woolley



Okay, I've bumped icarus back down to the version of httpd it was running
yesterday (2.0.37-dev from a week ago).  No more segfaults.  I'll analyze
the corefiles from the new version.  Thanks for letting me know.

--Cliff



On Fri, 31 May 2002, Sam Ruby wrote:

 What changed?  The ~checkout~ convention used to work (as recently as
 yesterday).

 - Sam Ruby

 -- Forwarded by Sam Ruby/Raleigh/IBM on 05/31/2002
 11:40 AM ---

 Jim Madl [EMAIL PROTECTED] on 05/31/2002 10:32:15 AM

 Please respond to [EMAIL PROTECTED]

 To:[EMAIL PROTECTED]
 cc:
 Subject:Links on http://xml.apache.org/axis are incorrect




 Your links changed overnight to use the following semantics
 http://cvs.apache.org/viewcvs.cgi/~checkout~/xml-axis/java/docs/install.html

 The above link is invalid; it should be as follows

 http://cvs.apache.org/viewcvs.cgi/*checkout*/xml-axis/java/docs/install.html

 The only difference is the ~checkout~ value should be *checkout*

 Then the page is displayed.  This occurs for about 90% of the links on
 this page.

 Thanks,
 Jim Madl
 Home Phone: (610)9701-1139
 Mobile  Phone: (484)919-1263
 E-mail: [EMAIL PROTECTED]







SEGV in head (was Re: Links on http://xml.apache.org/axis areincorrect)

2002-05-31 Thread Cliff Woolley

On Fri, 31 May 2002, Cliff Woolley wrote:

 Okay, I've bumped icarus back down to the version of httpd it was running
 yesterday (2.0.37-dev from a week ago).  No more segfaults.  I'll analyze
 the corefiles from the new version.  Thanks for letting me know.

Okay, so HEAD from last night caused the following segfault on the URL
http://cvs.apache.org/viewcvs.cgi/~checkout~/xml-axis/java/docs/install.html

I have three corefiles, but they're all the same.  They're in
/usr/local/apache2.0.37-dev3/corefiles and are against the
/usr/local/apache2.0.37-dev3/bin/httpd binary.

--Cliff

(gdb) bt
#0  0x281de546 in strchr () from /usr/lib/libc.so.4
#1  0x80c6444 in __DTOR_END__ ()
#2  0x8085b82 in translate_userdir (r=0x816a050) at mod_userdir.c:320
#3  0x80a7594 in ap_run_translate_name (r=0x816a050) at request.c:108
#4  0x80a8384 in ap_process_request_internal (r=0x816a050) at request.c:167
#5  0x80aaa53 in ap_sub_req_method_uri (method=0x80c1471 GET,
new_file=0x816f838 /~checkout~/xml-axis/java/docs/install.html,
r=0x8170050, next_filter=0x0) at request.c:1639
#6  0x80c in ap_sub_req_lookup_uri (
new_file=0x816f838 /~checkout~/xml-axis/java/docs/install.html,
r=0x8170050, next_filter=0x0) at request.c:1651
#7  0x8095fc8 in ap_add_cgi_vars (r=0x8170050) at util_script.c:404
#8  0x807c011 in run_cgi_child (script_out=0xbfbff71c, script_in=0xbfbff718,
script_err=0xbfbff714, command=0x81713f7 viewcvs.cgi, argv=0x816f7e0,
r=0x8170050, p=0x8170018, e_info=0xbfbff6d4) at mod_cgi.c:436
#9  0x807ca85 in cgi_handler (r=0x8170050) at mod_cgi.c:686
#10 0x80895c0 in ap_run_handler (r=0x8170050) at config.c:193
#11 0x8089ec2 in ap_invoke_handler (r=0x8170050) at config.c:373
#12 0x8071cb8 in ap_process_request (r=0x8170050) at http_request.c:268
#13 0x806aef7 in ap_process_http_connection (c=0x815b120) at http_core.c:291
#14 0x8097af4 in ap_run_process_connection (c=0x815b120) at connection.c:85
#15 0x8097f80 in ap_process_connection (c=0x815b120, csd=0x815b050)
at connection.c:207
#16 0x80878de in child_main (child_num_arg=14) at prefork.c:671
#17 0x8087ab9 in make_child (s=0x810e680, slot=14) at prefork.c:765
#18 0x8087d9b in perform_idle_server_maintenance (p=0x80d2018) at prefork.c:900
#19 0x808825b in ap_mpm_run (_pconf=0x80d2018, plog=0x810a018, s=0x810e680)
at prefork.c:1093
#20 0x808fca7 in main (argc=3, argv=0xbfbffb1c) at main.c:646
#21 0x805e9b5 in _start ()





RE: cvs commit: httpd-2.0/modules/test mod_bucketeer.c

2002-05-31 Thread Cliff Woolley

On Fri, 31 May 2002, Ryan Bloom wrote:

 -if (AP_BUCKET_IS_ERROR(e)) {
 -apr_bucket *err_copy;
 -apr_bucket_copy(e, &err_copy);
 -APR_BRIGADE_INSERT_TAIL(ctx->bb, err_copy);
 +if (e->length == 0) {
 
  Looks like magic to me - perhaps wrap it in AP_BUCKET_IS_METADATA()?

 ++1!  This question actually came up on IRC yesterday as well, and I
 told Justin that it was absolutely required.  I hope to do the work
 later today unless somebody beats me to it.   :-)

-1.

e->length == 0 does NOT imply it *has* to be metadata.  It could just be a
data bucket that's empty.  On the other hand, if it's metadata, that DOES
imply e->length == 0.  It's a one-way relationship, not an if-and-only-if.
It turns out that most filters won't much care... if they act on data in a
bucket and the bucket contains no data, they should just pass it on along.
But it could easily be an empty HEAP bucket, and that's definitely not a
metadata bucket.  If you're going to have an APR_BUCKET_IS_METADATA()
macro, it will have to test a new ismetadata flag in the
apr_bucket_type_t.  But I'm -0.5 to that too, because it leads filter
authors to believe that there should be a distinction between metadata
buckets and empty data buckets when that's not the case.

--Cliff




Re: SEGV in head (was Re: Links on http://xml.apache.org/axis are incorrect)

2002-05-31 Thread Jeff Trawick

Cliff Woolley [EMAIL PROTECTED] writes:

 On Fri, 31 May 2002, Cliff Woolley wrote:
 
  Okay, I've bumped icarus back down to the version of httpd it was running
  yesterday (2.0.37-dev from a week ago).  No more segfaults.  I'll analyze
  the corefiles from the new version.  Thanks for letting me know.
 
 Okay, so HEAD from last night caused the following segfault on the URL
 http://cvs.apache.org/viewcvs.cgi/~checkout~/xml-axis/java/docs/install.html
 
 I have three corefiles, but they're all the same.  They're in
 /usr/local/apache2.0.37-dev3/corefiles and are against the
 /usr/local/apache2.0.37-dev3/bin/httpd binary.
 
 --Cliff
 
 (gdb) bt
 #0  0x281de546 in strchr () from /usr/lib/libc.so.4
 #1  0x80c6444 in __DTOR_END__ ()
 #2  0x8085b82 in translate_userdir (r=0x816a050) at mod_userdir.c:320

It looks to me that this code in translate_userdir() is referencing x
whether or not the char was found (by ap_strchr_c()).  I haven't
looked in the coredump to verify that was the problem for this
segfault. 

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: SEGV in head (was Re: Links on http://xml.apache.org/axis are incorrect)

2002-05-31 Thread Justin Erenkrantz

On Fri, May 31, 2002 at 12:40:03PM -0400, Jeff Trawick wrote:
 It looks to me that this code in translate_userdir() is referencing x
 whether or not the char was found (by ap_strchr_c()).  I haven't
 looked in the coredump to verify that was the problem for this
 segfault. 

I agree.  Anyone think this wouldn't do the trick?  -- justin

Index: modules/mappers/mod_userdir.c
===
RCS file: /home/cvs/httpd-2.0/modules/mappers/mod_userdir.c,v
retrieving revision 1.48
diff -u -r1.48 mod_userdir.c
--- modules/mappers/mod_userdir.c   28 May 2002 23:14:15 -  1.48
+++ modules/mappers/mod_userdir.c   31 May 2002 16:43:42 -
@@ -317,7 +317,7 @@
 else
 filename = apr_pstrcat(r->pool, userdir, "/", w, NULL);
 }
-else if (ap_strchr_c(x, ':')) {
+else if (x && ap_strchr_c(x, ':')) {
 redirect = apr_pstrcat(r->pool, x, w, dname, NULL);
 apr_table_setn(r->headers_out, "Location", redirect);
 return HTTP_MOVED_TEMPORARILY;



RE: cvs commit: httpd-2.0/modules/test mod_bucketeer.c

2002-05-31 Thread Cliff Woolley

On Fri, 31 May 2002, Cliff Woolley wrote:

 e->length == 0 does NOT imply it *has* to be metadata.  It could just be a
 data bucket that's empty.  On the other hand, if it's metadata that DOES
 imply e->length == 0.  It's a one-way relationship, not an if-and-only-if.
 It turns out that most filters won't much care... if they act on data in a
 bucket and the bucket contains no data, they should just pass it on along.
 But it could easily be an empty HEAP bucket, and that's definitely not a
 metadata bucket.  If you're going to have an APR_BUCKET_IS_METADATA()
 macro, it will have to test a new ismetadata flag in the
 apr_bucket_type_t.  But I'm -0.5 to that too, because it leads filter
 authors to believe that there should be a distinction between metadata
 buckets and empty data buckets when that's not the case.

Hmmm... okay, on second thought, +1 to the second approach.  One could
make the valid argument that it's safe to remove zero-length buckets if
and only if they contain no metadata.  How can you know which ones those
are without asking the bucket itself?  You can't.  That requires you to
pass on *all* zero-length buckets, which is a waste of time.  So I'll
agree to adding a field to apr_bucket_type_t to allow the bucket to
indicate whether it contains metadata or not [ie, whether or not
zero-length implies no useful information here].
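
For the record, one possible shape for that flag (purely a sketch, with
illustrative names rather than a committed API) would be:

#include "apr.h"

/* Sketch: let the bucket *type*, not the instance, declare whether a
 * zero-length bucket still carries out-of-band meaning (EOS, FLUSH, ...). */
enum bucket_kind { BUCKET_DATA = 0, BUCKET_METADATA };

struct bucket_type_sketch {
    const char *name;
    enum bucket_kind is_metadata;      /* the new field under discussion */
};

struct bucket_sketch {
    const struct bucket_type_sketch *type;
    apr_size_t length;                 /* always 0 when is_metadata is set */
};

#define BUCKET_IS_METADATA(e) ((e)->type->is_metadata == BUCKET_METADATA)

A filter could then safely delete a bucket when e->length == 0 and
BUCKET_IS_METADATA(e) is false, and pass it on otherwise.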

--Cliff




Re: SEGV in head (was Re: Links on http://xml.apache.org/axis areincorrect)

2002-05-31 Thread Cliff Woolley

On Fri, 31 May 2002, Justin Erenkrantz wrote:

 -else if (ap_strchr_c(x, ':')) {
 +else if (x && ap_strchr_c(x, ':')) {

+1.  This looks correct to me.

--Cliff




Re: SEGV in head (was Re: Links on http://xml.apache.org/axis are incorrect)

2002-05-31 Thread Jeff Trawick

Justin Erenkrantz [EMAIL PROTECTED] writes:

 On Fri, May 31, 2002 at 12:40:03PM -0400, Jeff Trawick wrote:
  It looks to me that this code in translate_userdir() is referencing x
  whether or not the char was found (by ap_strchr_c()).  I haven't
  looked in the coredump to verify that was the problem for this
  segfault. 
 
 I agree.  Anyone think this wouldn't do the trick?  -- justin

looks fine... the only difference between that and what I coded up is
that I moved the decl for x down to line 290 to shorten the search for
where x is used :)

 Index: modules/mappers/mod_userdir.c
 ===
 RCS file: /home/cvs/httpd-2.0/modules/mappers/mod_userdir.c,v
 retrieving revision 1.48
 diff -u -r1.48 mod_userdir.c
 --- modules/mappers/mod_userdir.c 28 May 2002 23:14:15 -  1.48
 +++ modules/mappers/mod_userdir.c 31 May 2002 16:43:42 -
 @@ -317,7 +317,7 @@
  else
  filename = apr_pstrcat(r->pool, userdir, "/", w, NULL);
  }
 -else if (ap_strchr_c(x, ':')) {
 +else if (x && ap_strchr_c(x, ':')) {
  redirect = apr_pstrcat(r->pool, x, w, dname, NULL);
  apr_table_setn(r->headers_out, "Location", redirect);
  return HTTP_MOVED_TEMPORARILY;
 

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: SEGV in head (was Re: Links on http://xml.apache.org/axis are incorrect)

2002-05-31 Thread Justin Erenkrantz

On Fri, May 31, 2002 at 01:05:34PM -0400, Jeff Trawick wrote:
 looks fine... the only difference between that and what I coded up is
 that I moved the decl for x down to line 290 to shorten the search for
 where x is used :)

And I caught that too when I went to commit it.  =)  -- justin



Re: cvs commit: httpd-2.0/modules/test mod_bucketeer.c

2002-05-31 Thread Justin Erenkrantz

On Fri, May 31, 2002 at 11:02:15AM +0100, Ben Laurie wrote:
+if (e->length == 0) {
 
 Looks like magic to me - perhaps wrap it in AP_BUCKET_IS_METADATA()?

++1.  (I'd say APR_BUCKET_IS_METADATA.)  -- justin



Re: don't try this at home

2002-05-31 Thread Allard Hoeve


On 30 May 2002, Jeff Trawick wrote:

  GET / HTTP/1.1
  Accept: */*
  Host: test
  Content-Type: application/x-www-form-urlencoded
  Transfer-Encoding: chunked
 
  AAA

 Yesterday afternoon:

   on one build I was getting a 200 followed by a 500 (access_log said
   500 was for method A...)

   on another build I was getting a segfault


Dear Jeff,

I tried it on my server at my home webserver and the server (child)
segfaults with an entry in error_log:

  [Fri May 31 19:52:12 2002] [error] [client 127.0.0.1] Directory index
  forbidden by rule: /usr/local/apache/www/
  [Fri May 31 19:52:19 2002] [notice] child pid 17945 exit signal
  Segmentation fault (11)

My configuration is listed below. Mail me privately for more information
if you need it.

Cheers,

Allard Hoeve




-- 

Self-compiled on Linux 2.4.19-pre6, Debian, gcc 2.95.4,
ld version 2.12.90.0.1

Server: Apache/1.3.20 (Unix) mod_ssl/2.8.4 OpenSSL/0.9.6b mod_perl/1.26
PHP/4.1.2

Compiled-in modules:
  http_core.c
  mod_env.c
  mod_log_config.c
  mod_mime.c
  mod_negotiation.c
  mod_status.c
  mod_include.c
  mod_autoindex.c
  mod_dir.c
  mod_cgi.c
  mod_asis.c
  mod_imap.c
  mod_actions.c
  mod_userdir.c
  mod_alias.c
  mod_rewrite.c
  mod_access.c
  mod_auth.c
  mod_proxy.c
  mod_so.c
  mod_setenvif.c
suexec: enabled; valid wrapper /usr/local/httpd/bin/suexec




Re: [PATCH] 1.3: mod_proxy

2002-05-31 Thread Matt Kern

 This is already implemented in 2.0 as ProxyPreserveHost, you may
 want to backport it to 1.3
 
 http://httpd.apache.org/docs-2.0/mod/mod_proxy.html#proxypreservehost

Ah.  I have fixed up my patch to have the same name and few minor
tweaks to the patch to make it smaller.  Can someone with CVS access
look the patch over and put it in?

Matt

-- 
Matt Kern
http://www.undue.org/


diff -ur apache.orig/htdocs/manual/mod/mod_proxy.html 
apache_1.3.24/htdocs/manual/mod/mod_proxy.html
--- apache.orig/htdocs/manual/mod/mod_proxy.html    Thu Mar 21 17:28:49 2002
+++ apache_1.3.24/htdocs/manual/mod/mod_proxy.html  Fri May 31 18:57:58 2002
@@ -61,6 +61,8 @@
 
   <li><a href="#proxypassreverse">ProxyPassReverse</a></li>
 
+  <li><a href="#proxypreservehost">ProxyPreserveHost</a></li>
+
   <li><a href="#proxyblock">ProxyBlock</a></li>
 
   <li><a href="#allowconnect">AllowCONNECT</a></li>
@@ -453,6 +455,31 @@
 href="mod_rewrite.html#RewriteRule"><tt>mod_rewrite</tt></a>
 because its doesn't depend on a corresponding
 <samp>ProxyPass</samp> directive.</p>
+<hr />
+
+<h2><a id="proxypreservehost"
+name="proxypreservehost">ProxyPreserveHost</a> directive</h2>
+<a href="directive-dict.html#Syntax"
+rel="Help"><strong>Syntax:</strong></a> ProxyPreserveHost
+on|off<br />
+ <a href="directive-dict.html#Default"
+rel="Help"><strong>Default:</strong></a> <code>ProxyPreserveHost Off</code><br />
+ <a href="directive-dict.html#Context"
+rel="Help"><strong>Context:</strong></a> server config, virtual
+host<br />
+ <a href="directive-dict.html#Override"
+rel="Help"><strong>Override:</strong></a> <em>Not
+applicable</em><br />
+ <a href="directive-dict.html#Status"
+rel="Help"><strong>Status:</strong></a> Base<br />
+ <a href="directive-dict.html#Module"
+rel="Help"><strong>Module:</strong></a> mod_proxy<br />
+ <a href="directive-dict.html#Compatibility"
+rel="Help"><strong>Compatibility:</strong></a> ProxyPreserveHost
+is only available in Apache 1.3.25 and later.
+
+<p>This directive tells Apache to preserve the <tt>Host</tt>
+header from incoming requests on outgoing proxy requests.</p>
 <hr />
 
 <h2><a id="allowconnect" name="allowconnect">AllowCONNECT</a>
diff -ur apache.orig/src/modules/proxy/mod_proxy.c 
apache_1.3.24/src/modules/proxy/mod_proxy.c
--- apache.orig/src/modules/proxy/mod_proxy.c   Fri May 31 18:56:10 2002
+++ apache_1.3.24/src/modules/proxy/mod_proxy.c Fri May 31 18:57:58 2002
@@ -436,6 +436,8 @@
 ps->viaopt_set = 0; /* 0 means default */
 ps->req = 0;
 ps->req_set = 0;
+ps->phh = 0;
+ps->phh_set = 0;
 ps->recv_buffer_size = 0;   /* this default was left unset for some
 * reason */
 ps->recv_buffer_size_set = 0;
@@ -483,6 +485,7 @@
 ps->domain = (overrides->domain == NULL) ? base->domain : overrides->domain;
 ps->viaopt = (overrides->viaopt_set == 0) ? base->viaopt : overrides->viaopt;
 ps->req = (overrides->req_set == 0) ? base->req : overrides->req;
+ps->phh = (overrides->phh_set == 0) ? base->phh : overrides->phh;
 ps->recv_buffer_size = (overrides->recv_buffer_size_set == 0) ? 
base->recv_buffer_size : overrides->recv_buffer_size;
 ps->io_buffer_size = (overrides->io_buffer_size_set == 0) ? base->io_buffer_size 
: overrides->io_buffer_size;
 
@@ -703,6 +706,19 @@
 
 
 static const char *
+ set_proxy_preserve_host(cmd_parms *parms, void *dummy, int flag)
+{
+proxy_server_conf *psf =
+ap_get_module_config(parms->server->module_config, &proxy_module);
+
+psf->phh = flag;
+psf->phh_set = 1;
+
+return NULL;
+}
+
+
+static const char *
  set_cache_size(cmd_parms *parms, char *struct_ptr, char *arg)
 {
 proxy_server_conf *psf =
@@ -936,6 +952,8 @@
 "a virtual path and a URL"},
{"ProxyPassReverse", add_pass_reverse, NULL, RSRC_CONF, TAKE2,
"a virtual path and a URL for reverse proxy behaviour"},
+{"ProxyPreserveHost", set_proxy_preserve_host, NULL, RSRC_CONF, FLAG,
+"on if the original Host: header should be preserved"},
{"ProxyBlock", set_proxy_exclude, NULL, RSRC_CONF, ITERATE,
"A list of names, hosts or domains to which the proxy will not connect"},
{"ProxyReceiveBufferSize", set_recv_buffer_size, NULL, RSRC_CONF, TAKE1,
diff -ur apache.orig/src/modules/proxy/mod_proxy.h 
apache_1.3.24/src/modules/proxy/mod_proxy.h
--- apache.orig/src/modules/proxy/mod_proxy.h   Fri May 31 18:56:10 2002
+++ apache_1.3.24/src/modules/proxy/mod_proxy.h Fri May 31 18:57:58 2002
@@ -192,6 +192,8 @@
 char *domain;   /* domain name to use in absence of a domain name in 
the request */
 int req;/* true if proxy requests are enabled */
 char req_set;
+int phh;   /* true if we want to preserve the host header */
+char phh_set;
 enum {
   via_off,
   via_on,
diff -ur apache.orig/src/modules/proxy/proxy_http.c 
apache_1.3.24/src/modules/proxy/proxy_http.c
--- apache.orig/src/modules/proxy/proxy_http.c  Fri May 31 18:56:10 2002
+++ apache_1.3.24/src/modules/proxy/proxy_http.c    Fri May 31 18:57:58 2002
@@ -311,11 +311,14 @@
 

Re: [OT] Need a new feature: Listing of CGI-enabled directories.

2002-05-31 Thread Ronald F. Guilmette


In message [EMAIL PROTECTED], 
Zac Stevens [EMAIL PROTECTED] wrote:

Hi Ron,

Thanks for your detailed response to my post, I'll reply later this
evening off list.

I do want to jump in on this, though..

On Fri, May 31, 2002 at 12:24:30AM -0700, Ronald F. Guilmette wrote:
 In the case of FormMail scripts, if the big web hosting companies can
 just scan all of their CGI directories for them every night and then
 simply `rm' or `chmod ' anything found by the scans of the previous
 night every morning, then that will be about 99.9% sufficent to eliminate
 the problem.

To provide you with a bit of context, my comments come after having run a
Solaris/Apache-based virtual hosting service in Australia for approximately
5000 sites on around 200GB of disk (on a Network Appliance filer.)  I'm
security conscious, but I'm also pragmatic - as you yourself seem to be
aware, commercial realities do put a stop to best practices some of the
time.

I have tried, and failed, in attempts to correct bad user behaviour in the
means you have described above.  In addition to removing execute
permissions and chown'ing files to root, I have attempted to leave messages 
by way of conspicuously-named files in the relevant directories.

None of this was met with much success.  Typically, the user would
re-upload the script, or delete and re-upload the entire site, and the
problem would begin anew...

You have just written a rationale for exactly what I am trying
to develop, i.e. a procedure whereby web hosting firms can scan (rescan?)
all of their CGI-enabled directories for new (or re-installed) problems
every single night.

The problem is that none of these things alerts the user to the problem -
it just creates a new problem to grab their attention.  In sites where
valid contact e-mail addresses are available for every customer, this can
be a more effective form of resolving the issue.

OK, let me clarify that _my_ job at this point _isn't_ to figure out how
to alert the end-users that they have screwed up (by installing bad scripts).
That is the responsibility of the individual web hosting firms to figure
out a procedure/mechanism for doing that.  *My* job is just to give these
web hosting firms a procedure they can use to at least detect 100% of the
bad CGI scripts that their end-lusers have installed, say, over the past
24 hours.  If I can just do that, then I will have at least done my part.

2) The sendmail binary was replaced with a script which did sanity checking
   and added identifying details.

This is a simple and extremely effective way to put an end to the problem
of spam mail originating from virtual-hosted customers...

Please elaborate.  Please describe the ``sanity checking'' you are talking
about and please explain to me how it actually prevented spam from leaving
your web servers.

(I have been down this road already with some web hosting firms, and I
don't know of any kind of ``sanity checking'' that can be applied to
e-mail messages generated by CGI scripts that would still allow end-users
of the web hosting company to, say, have the output of their forms e-mailed
to their @yahoo.com accounts.)




Re: SEGV in head (was Re: Links on http://xml.apache.org/axis are incorrect)

2002-05-31 Thread Greg Stein

On Fri, May 31, 2002 at 01:05:34PM -0400, Jeff Trawick wrote:
 Justin Erenkrantz [EMAIL PROTECTED] writes:
 
  On Fri, May 31, 2002 at 12:40:03PM -0400, Jeff Trawick wrote:
   It looks to me that this code in translate_userdir() is referencing x
   whether or not the char was found (by ap_strchr_c()).  I haven't
   looked in the coredump to verify that was the problem for this
   segfault. 
  
  I agree.  Anyone think this wouldn't do the trick?  -- justin
 
 looks fine... the only difference between that and what I coded up is
 that I moved the decl for x down to line 290 to shorten the search for
 where x is used :)

What the hell is x ?? Damn. That code should have a USEFUL variable name...

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



Re: SEGV in head

2002-05-31 Thread Greg Stein

On Fri, May 31, 2002 at 12:26:36PM -0400, Cliff Woolley wrote:
...
 Okay, so HEAD from last night caused the following segfault on the URL
 http://cvs.apache.org/viewcvs.cgi/~checkout~/xml-axis/java/docs/install.html
 
 I have three corefiles, but they're all the same.  They're in
 /usr/local/apache2.0.37-dev3/corefiles and are against the
 /usr/local/apache2.0.37-dev3/bin/httpd binary.
 
 --Cliff
 
 (gdb) bt
 #0  0x281de546 in strchr () from /usr/lib/libc.so.4
 #1  0x80c6444 in __DTOR_END__ ()
 #2  0x8085b82 in translate_userdir (r=0x816a050) at mod_userdir.c:320

I'd like to know *how* it even got to that code. Right at the top of
translate_userdir(), there are the following lines:

/*
 * If the URI doesn't match our basic pattern, we've nothing to do with
 * it.
 */
if (name[0] != '/' || name[1] != '~') {
return DECLINED;
}

[ where name == r->uri ]

So how does the above URL match that at all? Shouldn't r->uri be equal to
/viewcvs.cgi/~checkout~/...

???

 #3  0x80a7594 in ap_run_translate_name (r=0x816a050) at request.c:108
 #4  0x80a8384 in ap_process_request_internal (r=0x816a050) at request.c:167
 #5  0x80aaa53 in ap_sub_req_method_uri (method=0x80c1471 GET,
 new_file=0x816f838 /~checkout~/xml-axis/java/docs/install.html,
 r=0x8170050, next_filter=0x0) at request.c:1639
 #6  0x80c in ap_sub_req_lookup_uri (
 new_file=0x816f838 /~checkout~/xml-axis/java/docs/install.html,
 r=0x8170050, next_filter=0x0) at request.c:1651
 #7  0x8095fc8 in ap_add_cgi_vars (r=0x8170050) at util_script.c:404

Oh!! Just had to read further down the stack. What the hell is happening
here?

Ah. It wants to take the rest of the path (after the CGI script; the
/~checkout~/... part), assume it is a URI, and try to translate that into
a filename. If it is successful, it stores that into PATH_TRANSLATED.

Wow. What possible utility does that serve?

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



Re: SEGV in head

2002-05-31 Thread Cliff Woolley

On Fri, 31 May 2002, Greg Stein wrote:

 Ah. It wants to take the rest of the path (after the CGI script; the
 /~checkout~/... part), assume it is a URI, and try to translate that into
 a filename. If it is successful, it stores that into PATH_TRANSLATED.
 Wow. What possible utility does that serve?

It allows the script to process the actual file referenced by that URI (as
in PATH_INFO-style) as an action, basically.  So let's say I have a URI:

/cgi-bin/processit.cgi/foo/bar.html

And processit.cgi reads html files and manipulates them in some way.
processit.cgi needs to know the actual location of /foo/bar.html in the
filesystem.  PATH_TRANSLATED is how it does that.

I've actually used this functionality before, it's quite handy.  :)
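
A throwaway CGI in C (purely illustrative; the paths in the comment are
made up) shows the idea:

#include <stdio.h>
#include <stdlib.h>

/* For a request to /cgi-bin/processit.cgi/foo/bar.html the server sets
 * PATH_INFO to /foo/bar.html and PATH_TRANSLATED to wherever that URI
 * maps on disk, e.g. /usr/local/apache/htdocs/foo/bar.html. */
int main(void)
{
    const char *path = getenv("PATH_TRANSLATED");
    FILE *fp;
    int c;

    printf("Content-Type: text/plain\r\n\r\n");

    if (path == NULL || (fp = fopen(path, "r")) == NULL) {
        printf("nothing to process\n");
        return 0;
    }

    /* "process" the file -- here it is just echoed back */
    while ((c = getc(fp)) != EOF) {
        putchar(c);
    }
    fclose(fp);
    return 0;
}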

--Cliff




Re: cvs commit: httpd-2.0/modules/http http_protocol.c

2002-05-31 Thread Justin Erenkrantz

On Fri, May 31, 2002 at 08:41:06PM -, [EMAIL PROTECTED] wrote:
 rbb 2002/05/31 13:41:06
 
   Modified:modules/http http_protocol.c
   Log:
   If the request doesn't have a body, then don't try to read it.  Without
   this, the httpd-test suite was taking five minutes for EVERY test.

This breaks chunk trailers.  Please revert.

What is the exact problem you are seeing?  httpd-test was working
fine for me.  But, I will revert this locally and see if I can
reproduce this.  This may have been related to Jeff's autoindex
problem.  -- justin



RE: cvs commit: httpd-2.0/modules/http http_protocol.c

2002-05-31 Thread Ryan Bloom

 From: Justin Erenkrantz [mailto:[EMAIL PROTECTED]]
 
 On Fri, May 31, 2002 at 08:41:06PM -, [EMAIL PROTECTED] wrote:
  rbb 2002/05/31 13:41:06
 
Modified:modules/http http_protocol.c
Log:
If the request doesn't have a body, then don't try to read it.
 Without
this, the httpd-test suite was taking five minutes for EVERY test.
 
 This breaks chunk trailers.  Please revert.
 
 What is the exact problem you are seeing?  httpd-test was working
 fine for me.  But, I will revert this locally and see if I can
 reproduce this.  This may have been related to Jeff's autoindex
 problem.  -- Justin

Without this fix, the entire test suite fails, because the HTTP_IN
filter is sending requests with 0 Content-Length to the
CORE_INPUT_FILTER to read the body.  This means that every request times
out after some timeout.  It has nothing to do with Jeff's problem,
because EVERY test was taking forever.  I did run the test-suite, so if
this breaks anything, there is no test for it.

Ryan





Re: cvs commit: httpd-2.0/modules/http http_protocol.c

2002-05-31 Thread Justin Erenkrantz

On Fri, May 31, 2002 at 02:21:52PM -0700, Ryan Bloom wrote:
 Without this fix, the entire test suite fails, because the HTTP_IN
 filter is sending requests with 0 Content-Length to the
 CORE_INPUT_FILTER to read the body.  This means that every request times
 out after some timeout.  It has nothing to do with Jeff's problem,
 because EVERY test was taking forever.  I did run the test-suite, so if
 this breaks anything, there is no test for it.

As I said, I didn't see this problem.  I'm currently building an
updated tree to test again.  If I can reproduce it, I'll try to
fix it.  However, we need the BODY_NONE state in order to handle
trailers.  Your commit breaks trailers.
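
For reference, a trailer rides after the last chunk of a chunked body, so
a request with no Content-Length can still carry headers that only a full
body read will see.  A hypothetical example on the wire (trailer name
made up):

POST /foo HTTP/1.1
Host: test
Transfer-Encoding: chunked

5
hello
0
X-Trailer-Checksum: abc123

Skipping the body read entirely means such trailers are never picked up.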

httpd-test has no tests for input filtering.  If I knew how to
get perl to send bogus requests, I would.  But, my perl-fu is
severely lacking.  -- justin



Re: cvs commit: httpd-2.0/modules/http http_protocol.c

2002-05-31 Thread Doug MacEachern

On Fri, 31 May 2002, Justin Erenkrantz wrote:
 
 httpd-test has no tests for input filtering.

mod_input_body_filter.c at least, no?  the protocol/ tests also hit input 
filters.

  If I knew how to
 get perl to send bogus requests, I would.  But, my perl-fu is
 severely lacking.  -- justin

see t/protocol/echo,nntp-like.t, you can send a request in any 

my $module = 'default'; #normally connects to port 8529
my $sock = Apache::TestRequest::vhost_socket($module);

print $sock "SET \ ppp/2.2"
...





Discarding bodies multiple times

2002-05-31 Thread Justin Erenkrantz

On Fri, May 31, 2002 at 02:21:52PM -0700, Ryan Bloom wrote:
 Without this fix, the entire test suite fails, because the HTTP_IN
 filter is sending requests with 0 Content-Length to the
 CORE_INPUT_FILTER to read the body.  This means that every request times
 out after some timeout.  It has nothing to do with Jeff's problem,
 because EVERY test was taking forever.  I did run the test-suite, so if
 this breaks anything, there is no test for it.

Well, it's any request where ap_discard_request_body() is called
more than once.  In the case of apache/404.t, default_handler calls
ap_discard_request_body() and then ap_die() calls it too.

I'm not terribly sure if this sequence is valid.  Why is
default_handler discarding the body if it can't handle the
request?  Shouldn't we only discard the body right before we
send the response?  

Or, we could add an eos_gotten flag to request_rec to indicate
that the input filters have received EOS so that
discard_request_body won't be re-entrant.  I dunno.  -- justin
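
A minimal sketch of that guard (the eos_gotten field is hypothetical,
not an actual patch against request_rec):

#include "httpd.h"
#include "http_protocol.h"

/* Sketch: make sure the request body is only drained once, no matter
 * how many code paths ask for it to be discarded. */
static int discard_body_once(request_rec *r)
{
    int rv;

    if (r->eos_gotten) {                /* hypothetical request_rec field */
        return OK;                      /* input already consumed */
    }

    rv = ap_discard_request_body(r);    /* the existing call */
    if (rv == OK) {
        r->eos_gotten = 1;              /* remember that EOS was reached */
    }
    return rv;
}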



RE: Discarding bodies multiple times

2002-05-31 Thread Cliff Woolley

On Fri, 31 May 2002, Ryan Bloom wrote:

 The default handler discards the body, because it can't handle a body.
 The assumption is that if the request gets to default_handler, then no
 body will be allowed.  There are only two options as I see it.  1)  Keep
 a record of having received an EOS in the request_rec.  2)  Only call
 ap_discard_request_body if the default_handler will succeed.

You two are agreeing.. it's scaring me.  ;)

As for option #2... it seems cleaner, but can't a filter call ap_die()?
What then?

--Cliff