Linking mod_ssl with a specific OpenSSL version (Re: svn commit: r1358167 - in /httpd/httpd/trunk: acinclude.m4 modules/ssl/ssl_engine_init.c)

2012-08-05 Thread Kaspar Brand
On 08.07.2012 10:30, Kaspar Brand wrote:
 On 06.07.2012 14:41, b...@apache.org wrote:
 Author: ben
 Date: Fri Jul  6 12:41:10 2012
 New Revision: 1358167

 URL: http://svn.apache.org/viewvc?rev=1358167&view=rev
 Log:
 Work correctly with a development version of OpenSSL. I suspect
 something similar is needed when there are two OpenSSL installations,
 one in a default location.

I had another look at this, since it has been proposed for backporting
to 2.4 in the meantime, and still think the following is true:

 If I'm understanding correctly, then this
 patch tries to support building against an OpenSSL source tree (or
 perhaps a build directory where only "make libs" has been executed)?

(should have read "make build_libs" instead)

It's a useful enhancement if mod_ssl can be linked with a specific
OpenSSL version in a non-default location, but the current approach has
at least one problem, AFAICT: it will only work if the directory pointed
to by --with-ssl does not include shared libraries for OpenSSL (by
default, OpenSSL only builds libssl.a and libcrypto.a, so the issue
might not be obvious at first sight).

 I would suggest to use a separate
 configure argument to support this build option, e.g. --with-ssl-srcdir.

I gave it a try; see the attached work-in-progress patch. While we're
at it, I think we should also fix a flaw in the handling of the
--with-ssl argument: in
http://svn.apache.org/viewvc?view=revision&revision=730926, acinclude.m4
was modified to always give pkg-config precedence over any argument
specified through --with-ssl. While the rationale for this change
becomes clear from the commit log, I consider it an unfortunate side
effect that pkg-config always trumps any --with-ssl directory argument.

My suggestion would be to handle OpenSSL paths in configure arguments
like this, instead:

1) use --with-ssl-builddir for linking with the static OpenSSL libraries
in that directory (and ignore --with-ssl in this case)

2) use --with-ssl for linking against an installed version of OpenSSL

3) use pkg-config to locate OpenSSL

Does that sound like a reasonable proposal? Comments welcome, and test
feedback would be much appreciated (remember to run buildconf after
applying the patch to acinclude.m4, and before calling configure).
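
For illustration, configure invocations under this proposal would look roughly
like the following (just a sketch: the directory paths are placeholders, and
--with-ssl-builddir only exists with the attached work-in-progress patch applied):

  # 1) link mod_ssl against the static libraries in an OpenSSL build tree
  #    (where at least "make build_libs" has been run)
  ./configure --enable-ssl --with-ssl-builddir=/path/to/openssl-build

  # 2) link against an OpenSSL installation in a non-default location
  ./configure --enable-ssl --with-ssl=/opt/openssl

  # 3) no OpenSSL option at all: let pkg-config locate it
  ./configure --enable-ssl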

Kaspar

Index: acinclude.m4
===================================================================
--- acinclude.m4 (revision 1369535)
+++ acinclude.m4 (working copy)
@@ -467,86 +467,97 @@
 dnl
 dnl APACHE_CHECK_OPENSSL
 dnl
-dnl Configure for OpenSSL, giving preference to
-dnl --with-ssl=path if it was specified.
+dnl Configure for OpenSSL, giving preference to the following options:
+dnl 1) --with-ssl-builddir=path, for linking against static libraries
+dnl    in an OpenSSL build directory where at least "make build_libs"
+dnl    has been executed
+dnl 2) --with-ssl=path, pointing to a path where an installed version
+dnl    of OpenSSL can be found
+dnl 3) the path as determined by pkg-config
 dnl
 AC_DEFUN(APACHE_CHECK_OPENSSL,[
-  AC_CACHE_CHECK([for OpenSSL], [ac_cv_openssl], [
+  AC_CACHE_VAL([ac_cv_openssl], [
     dnl initialise the variables we use
     ac_cv_openssl=no
-    ap_openssl_found=""
     ap_openssl_base=""
-    ap_openssl_libs=""
+    saved_CPPFLAGS="$CPPFLAGS"
+    SSL_LIBS=""

-    dnl Determine the OpenSSL base directory, if any
-    AC_MSG_CHECKING([for user-provided OpenSSL base directory])
-    AC_ARG_WITH(ssl, APACHE_HELP_STRING(--with-ssl=DIR,OpenSSL base directory), [
-      dnl If --with-ssl specifies a directory, we use that directory
-      if test "x$withval" != "xyes" -a "x$withval" != "x"; then
-        dnl This ensures $withval is actually a directory and that it is absolute
+    AC_MSG_NOTICE([checking for OpenSSL...])
+
+    dnl Allow linking against static libraries from an OpenSSL build directory
+    AC_MSG_CHECKING([for user-provided OpenSSL build directory with static libraries])
+    AC_ARG_WITH(ssl-builddir, APACHE_HELP_STRING(--with-ssl-builddir=DIR,OpenSSL build directory with static libraries to link with), [
+      if test "x$withval" != "xyes" -a -d "$withval"; then
+        dnl This ensures $withval is actually a directory
+        dnl and that it is absolute
         ap_openssl_base="`cd $withval ; pwd`"
+        if test "x$ap_openssl_base" != "x"; then
+          AC_MSG_RESULT($ap_openssl_base)
+          CPPFLAGS="-I$ap_openssl_base/include $CPPFLAGS"
+          INCLUDES="-I$ap_openssl_base/include $INCLUDES"
+          if test "x$enable_ssl" = "xstatic"; then
+            APR_ADDTO(LIBS, [$ap_openssl_base/libssl.a $ap_openssl_base/libcrypto.a])
+          else
+            LDFLAGS="-L$ap_openssl_base -Wl,-L$ap_openssl_base $LDFLAGS"
+            dnl force the linker to use libssl.a and libcrypto.a (but only
+            dnl these, i.e. make sure that we are switching back to dynamic
+            dnl mode afterwards - from ld(1): "affects library searching
+            dnl for -l options which follow it")
+            APR_ADDTO(SSL_LIBS,

Re: Linking mod_ssl with a specific OpenSSL version (Re: svn commit: r1358167 - in /httpd/httpd/trunk: acinclude.m4 modules/ssl/ssl_engine_init.c)

2012-08-05 Thread Kaspar Brand
On 05.08.2012 10:10, Kaspar Brand wrote:
 test feedback would be much appreciated (remember to run buildconf after
 applying the patch to acinclude.m4, and before calling configure).

The patch attached to the previous message was missing an important
line, unfortunately (sorry to anybody who already played with it).

Apply the one below on top of mod_ssl-configure-options.patch, and
things should work properly.
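
In other words, something like this (a sketch; ssl-libs-subst.diff is a
made-up name for whatever you save the diff below as, and the OpenSSL path
is a placeholder):

  cd httpd-trunk                               # your httpd working copy
  patch -p0 < mod_ssl-configure-options.patch  # patch from the previous mail
  patch -p0 < ssl-libs-subst.diff              # the one-line fix below
  ./buildconf
  ./configure --enable-ssl --with-ssl-builddir=/path/to/openssl-build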

Kaspar
--- acinclude.m4.orig
+++ acinclude.m4
@@ -574,6 +574,7 @@
   ap_apr_libs=`$apr_config --libs`
   APR_ADDTO(SSL_LIBS, [$ap_apr_libs])
   APR_ADDTO(LIBS, [-lssl -lcrypto $ap_apr_libs])
+  APACHE_SUBST(SSL_LIBS)
 
   dnl Run library and function checks
   liberrors=


Re: Linking mod_ssl with a specific OpenSSL version (Re: svn commit: r1358167 - in /httpd/httpd/trunk: acinclude.m4 modules/ssl/ssl_engine_init.c)

2012-08-05 Thread Guenter Knauf

Hi Kaspar,
Am 05.08.2012 10:10, schrieb Kaspar Brand:

My suggestion would be to handle OpenSSL paths in configure arguments
like this, instead:

1) use --with-ssl-builddir for linking with the static OpenSSL libraries
in that directory (and ignore --with-ssl in this case)

what about splitting this into two arguments:
--with-ssl-include=
--with-ssl-lib=
this would be in line with what many other configure scripts use ...

just a thought ...

Gün.




Why does CacheLockPath default to a directory in /tmp instead of runtime-dir?

2012-08-05 Thread Jeff Trawick




Re: Why does CacheLockPath default to a directory in /tmp instead of runtime-dir?

2012-08-05 Thread Filipe Cifali
Normally /tmp is mounted in RAM (tmpfs), and doesn't that provide better
performance?

2012/8/5 Jeff Trawick traw...@gmail.com





-- 
[]'s

Filipe Cifali Stangler


Re: Why does CacheLockPath default to a directory in /tmp instead of runtime-dir?

2012-08-05 Thread Jeff Trawick
On Sun, Aug 5, 2012 at 9:29 AM, Filipe Cifali cifali.fil...@gmail.com wrote:
 Normally /tmp is mounted at RAM (tmpfs) and it does provide better
 performance?

Sure.  And I guess there's a bit of value in them getting cleaned up
at reboot.

mod_auth_digest is another module that places in /tmp the sort of
artifact that other modules place in the runtimedir (or hard-coded
logs/).

Hopefully in the future you'll be able to configure "DefaultRuntimeDir
/var/run/foo.example.com" and everything will get the same benefit (and
without such a directive everything will be created in a predictable
location).  It appears that a bunch of modules still need to get
smarter, though.
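
To make that concrete, a sketch of the kind of configuration meant here (the
hostname-based path is only an example; CacheLock/CacheLockPath are the
existing mod_cache directives):

  # relocate runtime artifacts for this instance
  DefaultRuntimeDir /var/run/foo.example.com

  # until every module honours DefaultRuntimeDir on its own, the cache
  # lock directory can be pointed there explicitly
  CacheLock on
  CacheLockPath /var/run/foo.example.com/mod_cache-lock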


 2012/8/5 Jeff Trawick traw...@gmail.com





 --
 []'s

 Filipe Cifali Stangler




-- 
Born in Roswell... married an alien...
http://emptyhammock.com/


Re: mpm-itk and upstream Apache, once again

2012-08-05 Thread Steinar H. Gunderson
On Wed, Aug 01, 2012 at 01:58:16PM -0400, Jeff Trawick wrote:
 Your post-perdir-config patch has been committed to trunk with r1368121.

Thanks!

 Attached is a patch to trunk that allows you to hook in to the stat
 calls from directory walk.  Call apr_stat() like core_dirwalk_stat()
 but check for APR_STATUS_IS_EACCES(rv) and decide whether to run
 lingering close and exit.  Let us know how that goes.
 
 You still need the parse-htaccess patch for now.

I backported this to 2.4.2, and changed mpm-itk to hook into that function
with the following hook:

  static apr_status_t itk_dirwalk_stat(apr_finfo_t *finfo, request_rec *r,
                                       apr_int32_t wanted)
  {
      apr_status_t status = apr_stat(finfo, r->filename, wanted, r->pool);
      if (ap_has_irreversibly_setuid && APR_STATUS_IS_EACCES(status)) {
          ap_log_rerror(APLOG_MARK, APLOG_WARNING, status, r,
                        "Couldn't read %s, closing connection.",
                        r->filename);
          ap_lingering_close(r->connection);
          clean_child_exit(0);
      }
      return status;
  }

Seems to work great, from my limited testing. As an extra bonus, I can easily
call clean_child_exit() (which runs more cleanup hooks) instead of exit(),
since this is in the MPM's own .c file.

/* Steinar */
-- 
Homepage: http://www.sesse.net/


Re: mpm-itk and upstream Apache, once again

2012-08-05 Thread Jeff Trawick
On Sun, Aug 5, 2012 at 11:00 AM, Steinar H. Gunderson
sgunder...@bigfoot.com wrote:
 On Wed, Aug 01, 2012 at 01:58:16PM -0400, Jeff Trawick wrote:
 Your post-perdir-config patch has been committed to trunk with r1368121.

 Thanks!

 Attached is a patch to trunk that allows you to hook in to the stat
 calls from directory walk.  Call apr_stat() like core_dirwalk_stat()
 but check for APR_STATUS_IS_EACCES(rv) and decide whether to run
 lingering close and exit.  Let us know how that goes.

 You still need the parse-htaccess patch for now.

 I backported this to 2.4.2, and changed mpm-itk to hook into that function
 with the following hook:

   static apr_status_t itk_dirwalk_stat(apr_finfo_t *finfo, request_rec *r,
                                        apr_int32_t wanted)
   {
       apr_status_t status = apr_stat(finfo, r->filename, wanted, r->pool);
       if (ap_has_irreversibly_setuid && APR_STATUS_IS_EACCES(status)) {
           ap_log_rerror(APLOG_MARK, APLOG_WARNING, status, r,
                         "Couldn't read %s, closing connection.",
                         r->filename);
           ap_lingering_close(r->connection);
           clean_child_exit(0);
       }
       return status;
   }

 Seems to work great, from my limited testing. As an extra bonus, I can easily
 call clean_child_exit() (which runs more cleanup hooks) instead of exit(),
 since this is in the MPM's own .c file.

Great!  I'll do something about the remaining patch before long.


 /* Steinar */
 --
 Homepage: http://www.sesse.net/



-- 
Born in Roswell... married an alien...
http://emptyhammock.com/


Re: mpm-itk and upstream Apache, once again

2012-08-05 Thread Steinar H. Gunderson
On Sun, Aug 05, 2012 at 11:05:59AM -0400, Jeff Trawick wrote:
 Great!  I'll do something about the remaining patch before long.

When the time comes, do we have any hopes of getting this back from trunk to
2.4, or would it need to wait for 2.6/3.0?

FWIW, the mpm-itk security hardening that was discussed (running with uid != 0,
and limiting setuid/setgid ranges through seccomp) is coming along quite
nicely, although the problem of initgroups() remains (a rogue process
with CAP_SETGID can add any supplementary group it pleases, and seccomp is
unable to check it), and there's been very limited user testing so far.
I guess we can't get fully down to the level of prefork, but it can get
pretty close.

/* Steinar */
-- 
Homepage: http://www.sesse.net/


Re: mpm-itk and upstream Apache, once again

2012-08-05 Thread Jeff Trawick
On Sun, Aug 5, 2012 at 11:32 AM, Steinar H. Gunderson
sgunder...@bigfoot.com wrote:
 On Sun, Aug 05, 2012 at 11:05:59AM -0400, Jeff Trawick wrote:
 Great!  I'll do something about the remaining patch before long.

 When the time comes, do we have any hopes of getting this back from trunk to
 2.4, or would it need to wait for 2.6/3.0?

2.4.small-number


 FWIW, the mpm-itk security hardening that was discussed (running with uid != 
 0,
 and limiting setuid/setgid ranges through seccomp) is starting to come quite
 nicely along, although the problem of initgroups() remains (a rogue process
 with CAP_SETGID can add any supplementary group it pleases, and seccomp is
 unable to check it), and there's been very limited user testing so far.
 I guess we can't get fully down to the level of prefork, but it can get
 pretty close.

 /* Steinar */
 --
 Homepage: http://www.sesse.net/



-- 
Born in Roswell... married an alien...
http://emptyhammock.com/


Re: RequireAll: seems to evaluate require lines unnecessarily

2012-08-05 Thread Graham Leggett
On 03 Aug 2012, at 9:25 AM, Stefan Fritsch wrote:

 I have a config like this using httpd v2.4, in an effort to
 password protect each person's userdir:
 
<RequireAll>
  Require valid-user
  Require expr %{note:mod_userdir_user} == %{REMOTE_USER}
</RequireAll>
 
 Hit it with a browser, and instead of 401 Unauthorized I'm getting
 403 Forbidden instead, which prevents the basic authentication
 from kicking in and the user is denied.
 
 The log however shows something odd - despite the RequireAll
 directive being used, which implies AND behaviour, which in turn
 implies that require lines should be parsed until the first one
 fails and then the parsing should stop, both require lines are
 being evaluated even though the first line failed, and the result
 of the second require line is being sent instead.
 
 This works as designed. Authentication will only be triggered if the 
 end result depends on a valid user being present. The reason is to 
 avoid a password dialogue if the access will be denied anyway.

This breaks basic authentication though, because basic auth relies on that
initial 401 Unauthorized to tell the client that a password is required. In
this case, access would have been approved, not denied, but the client never
got the opportunity to try to log in, as it was forbidden from the outset.

Right now, I cannot get aaa to work in either a browser or in the WebDAV client
for Mac OS X with two require lines. In both cases, the user is forbidden
immediately with no opportunity to log in.

 In theory, in the RequireAll situation, require directives should
 be parsed until one fails, and the result of that failure returned
 to the client. All further require lines should be ignored as is
 standard behaviour for AND implementations. In the example above,
 the authorization result of Require valid-user : denied (no
 authenticated user yet) part should prevent the authorization
 result of Require expr %{note:mod_userdir_user} == %{REMOTE_USER}:
 denied part from being attempted at all.
 
 Can someone check whether my thinking is correct?
 
 I guess your approach would have been valid, too. But it causes other 
 problems since the user cannot influence the order in which require 
 directives are evaluated if AuthMerging is "or" or "and".

In theory the evaluation should be done in the order that the directives appear 
in the config file.

 One could 
 have solved that with some additional directives like AuthMerging or 
 prepend, etc., but that would get rather complex if one wants to keep 
 the full flexibility. I could imagine cases where additional require 
 lines would need to be evaluated in the middle of the inherited lines.

How does inheritance work now?

 For 2.4.x, we are now stuck with the current behavior. For 2.6/3.0, we 
 may of course consider to change it, if the alternative is better.

Right now from what I can see the RequireAll directive isn't working at all, so 
to fix it in v2.4 would just be a bugfix.

 2.4.3 will give require expr some special casing for REMOTE_USER 
 that makes your use case work (see r1364266). PR 52892 has a 
 workaround that works with 2.4.2, too.
 
 If the special case for REMOTE_USER is not enough, one could add a 
 trigger_authn function that allows the same behavior for arbitrary 
 variables. E.g.
 
 Require expr %{note:mod_userdir_user} == 
 trigger_authn(%{AUTHZ_VAR_FOO})

I'm not convinced. Both AND and OR have behaviour dictated by the principle of 
least astonishment, and I think it would be better for end users for the 
behaviour to match what they expect, rather than try to second guess what the 
end user wants with special cases.

Regards,
Graham
--





Re: RequireAll: seems to evaluate require lines unnecessarily

2012-08-05 Thread Stefan Fritsch

On Sun, 5 Aug 2012, Graham Leggett wrote:

This works as designed. Authentication will only be triggered if the
end result depends on a valid user being present. The reason is to
avoid a password dialogue if the access will be denied anyway.


This breaks basic authentication though, because basic auth relies on 
that initial 401 Unauthorized to tell the client that a password is 
required. In this case, access would have been approved, not denied, but 
the client never got the opportunity to try log in as it was forbidden 
from the outset.


Right now, I cannot get aaa to work in either a browser or in the webdav 
client for MacOSX with two require lines. In both cases, the user is 
forbidden immediately with no opportunity to log in.


You mean you can't get Require expr to work. All other providers should 
work ok. Or do you have an example that does not involve Require expr?



I guess your approach would have been valid, too. But it causes other
problems since the user cannot influence the order in which require
directives are evaluated if AuthMerging is "or" or "and".


In theory the evaluation should be done in the order that the directives appear 
in the config file.


One could
have solved that with some additional directives like AuthMerging or
prepend, etc., but that would get rather complex if one wants to keep
the full flexibility. I could imagine cases where additional require
lines would need to be evaluated in the middle of the inherited lines.


How does inheritance work now?


The Require lines are evaluated in normal config merge order. This 
means that Require lines from Directory blocks are always evaluated 
before Location, etc.





For 2.4.x, we are now stuck with the current behavior. For 2.6/3.0, we
may of course consider to change it, if the alternative is better.


Right now from what I can see the RequireAll directive isn't working at all, so 
to fix it in v2.4 would just be a bugfix.


I disagree. It works except for Require expr. And fixing it your way 
would cause behavior changes in many cases, not just for Require expr.



2.4.3 will give require expr some special casing for REMOTE_USER
that makes your use case work (see r1364266). PR 52892 has a
workaround that works with 2.4.2, too.

If the special case for REMOTE_USER is not enough, one could add a
trigger_authn function that allows the same behavior for arbitrary
variables. E.g.

Require expr %{note:mod_userdir_user} ==
trigger_authn(%{AUTHZ_VAR_FOO})


I'm not convinced. Both AND and OR have behaviour dictated by the 
principle of least astonishment, and I think it would be better for end 
users for the behaviour to match what they expect, rather than try to 
second guess what the end user wants with special cases.


If the interpretation of Require lines depends on the order, but you 
cannot influence the order in case of inheritance, then inheritance gets 
mostly useless. This is not the kind of change to make in a stable 
release.


Re: Additional core functions for mod_lua

2012-08-05 Thread Daniel Gruno
On 08/03/2012 04:51 PM, Igor Galić wrote:
 
 
 I cannot seem to be able to find this stuff…

I have put together some of the scripts I use myself at
http://httpd.apache.org/docs/trunk/developer/lua.html but it's far from
done (and thus not linked to from any index page). Most of the scripts
are there, but I have yet to add actual explanations to the various
examples as well as add the map handler examples and some other things.

This page also holds all the functions that I have proposed to import
into mod_lua (unless someone objects to specific functions), so this
should answer Eric's question about documenting the functions as well.

I have chosen to add the functions to the existing apache2 library,
since the name makes sense. If there are no objections, I'll consider it
a lazy consensus :)

With regards,
Daniel.



Re: RequireAll: seems to evaluate require lines unnecessarily

2012-08-05 Thread Graham Leggett
On 05 Aug 2012, at 10:39 PM, Stefan Fritsch wrote:

 This works as designed. Authentication will only be triggered if the
 end result depends on a valid user being present. The reason is to
 avoid a password dialogue if the access will be denied anyway.
 
 This breaks basic authentication though, because basic auth relies on that 
 initial 401 Unauthorized to tell the client that a password is required. In 
 this case, access would have been approved, not denied, but the client never 
 got the opportunity to try log in as it was forbidden from the outset.
 
 Right now, I cannot get aaa to work in either a browser or in the webdav 
 client for MacOSX with two require lines. In both cases, the user is 
 forbidden immediately with no opportunity to log in.
 
 You mean you can't get Require expr to work. All other providers should 
 work ok. Or do you have an example that does not involve Require expr?

Most specifically, as per my original mail, I can't get the following to work:

   <RequireAll>
     Require valid-user
     Require expr %{note:mod_userdir_user} == %{REMOTE_USER}
   </RequireAll>

Can you clarify what is special about the expr specifically that triggers 
forbidden instead of unauthorized?

Perhaps this is a bug inside the expr code.

Regards,
Graham
--





Re: RequireAll: seems to evaluate require lines unnecessarily

2012-08-05 Thread Stefan Fritsch
On Sunday 05 August 2012, Graham Leggett wrote:
  You mean you can't get Require expr to work. All other
  providers should work ok. Or do you have an example that does
  not involve Require expr?
 
 Most specifically, as per my original mail, I can't get the
 following to work:
 
<RequireAll>
  Require valid-user
  Require expr %{note:mod_userdir_user} == %{REMOTE_USER}
</RequireAll>
 
 Can you clarify what is special about the expr specifically that
 triggers forbidden instead of unauthorized?
 
 Perhaps this is a bug inside the expr code.

The API is currently such that an authz provider must return 
AUTHZ_DENIED_NO_USER instead of AUTHZ_DENIED if its result may change 
after authentication. Require expr in 2.4.2 does not do that. But it 
will be fixed in 2.4.3 with

http://svn.apache.org/viewvc?view=revision&revision=1364266
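
For reference, a minimal sketch of what that contract looks like for an authz
provider (illustrative only, not the actual Require expr / mod_authz_core code;
the provider name "example-user" and module name are made up):

  #include <string.h>

  #include "httpd.h"
  #include "http_config.h"
  #include "mod_auth.h"

  /* Sketch of an authz provider whose result depends on the authenticated
   * user.  The key point: if no user has been authenticated yet, it must
   * return AUTHZ_DENIED_NO_USER (not AUTHZ_DENIED) so that authentication
   * is triggered and the provider is consulted again afterwards.
   */
  static authz_status example_user_check(request_rec *r,
                                         const char *require_line,
                                         const void *parsed_require_line)
  {
      if (!r->user) {
          return AUTHZ_DENIED_NO_USER;  /* ask for authentication first */
      }
      /* hypothetical check: grant only if the Require argument matches the user */
      if (require_line && strcmp(require_line, r->user) == 0) {
          return AUTHZ_GRANTED;
      }
      return AUTHZ_DENIED;
  }

  static const authz_provider example_user_provider = {
      &example_user_check,
      NULL  /* no parse_require_line function in this sketch */
  };

  static void register_hooks(apr_pool_t *p)
  {
      ap_register_auth_provider(p, AUTHZ_PROVIDER_GROUP, "example-user",
                                AUTHZ_PROVIDER_VERSION, &example_user_provider,
                                AP_AUTH_INTERNAL_PER_CONF);
  }

  module AP_MODULE_DECLARE_DATA authz_example_module = {
      STANDARD20_MODULE_STUFF,
      NULL, NULL, NULL, NULL, NULL,
      register_hooks
  };

With something like that in place, a "Require example-user bob" line inside
<RequireAll> should trigger authentication first and only then do the
comparison.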


Re: Additional core functions for mod_lua

2012-08-05 Thread Stefan Fritsch
On Sunday 05 August 2012, Daniel Gruno wrote:
 On 08/03/2012 04:51 PM, Igor Galić wrote:
  I cannot seem to be able to find this stuff…
 
 I have put together some of the scripts I use myself at
 http://httpd.apache.org/docs/trunk/developer/lua.html but it's far
 from done (and thus not linked to from any index page). Most of
 the scripts are there, but I have yet to add actual explanations
 to the various examples as well as add the map handler examples
 and some other things.
 
 This page also holds all the functions that I have proposed to
 import into mod_lua (unless someone objects to specific
 functions), so this should answer Eric's question about
 documenting the functions as well.
 
 I have chosen to add the functions to the existing apache2 library,
 since the name makes sense. If there are no objections, I'll
 consider it a lazy consensus :)

Nice work. When you talk about the "existing apache2 library", do you mean
it already exists in mod_lua? Or is it an external file?

There is some overlap with the r table: These already exist:

apache2.context_document_root
apache2.context_prefix
apache2.add_output_filter

These can be done with r.subprocess_env:

apache2.getenv
apache2.setenv

(though subprocess_env could use an abbreviation). I may have missed 
some others.

Wouldn't it make sense to add those new functions which are really 
related to the request (as opposed to just using the request pool) to 
the r table, too?

Some other random notes:

apache2.requestbody: This should take a size limit as an argument.

apache2.get_server_name: The example and the synopsis don't agree on whether
this should take an argument.

apache2.get_server_name_for_url: This is missing but would be very 
useful.

apache2.satisfies: This is obsolete, and should IMHO be removed.

apache2.getenv/setenv: call them "request environment variables" in the
docs, to distinguish them from OS environment variables.

Cheers,
Stefan


Re: Additional core functions for mod_lua

2012-08-05 Thread Daniel Gruno
On 08/06/2012 12:17 AM, Stefan Fritsch wrote:
 On Sunday 05 August 2012, Daniel Gruno wrote:
 On 08/03/2012 04:51 PM, Igor Galić wrote:
 I cannot seem to be able to find this stuff…

 I have put together some of the scripts I use myself at
 http://httpd.apache.org/docs/trunk/developer/lua.html but it's far
 from done (and thus not linked to from any index page). Most of
 the scripts are there, but I have yet to add actual explanations
 to the various examples as well as add the map handler examples
 and some other things.

 This page also holds all the functions that I have proposed to
 import into mod_lua (unless someone objects to specific
 functions), so this should answer Eric's question about
 documenting the functions as well.

 I have chosen to add the functions to the existing apache2 library,
 since the name makes sense. If there are no objections, I'll
 consider it a lazy consensus :)
 
 Nice work. If you talk about the existing apache2 library, you mean 
 it is existing in mod_lua? Or is it an external file?
 
 There is some overlap with the r table: These already exist:
 
 apache2.context_document_root
 apache2.context_prefix
 apache2.add_output_filter
 
 These can be done wit r.subprocess_env:
 
 apache2.getenv
 apache2.setenv
 
 (though subprocess_env could use an abbreviation). I may have missed 
 some others.
 
 Wouldn't it make sense to add those new functions which are really 
 related to the request (as opposed to just using the request pool) to 
 the r table, too?
 
 Some other random notes:
 
 apache2.requestbody: This should take a size limit as argument.
 
 apache2.get_server_name: The example and the synopsis don't agree if 
 this should have an argument
 
 apache2.get_server_name_for_url: This is missing but would be very 
 useful.
 
 apache2.satisfies: This is obsolete, and should IMHO be removed.
 
 apache2.getenv/setenv: call them request environment variable in the 
 docs, to distinguish from OS environment variables
 
 Cheers,
 Stefan
 
Yes, you caught some things that I should have mentioned.

The redundant functions you mentioned have been scrapped; it's simply
that I'm generating the source for the xml doc via xslt, and I
haven't gotten around to changing the original xml file just yet (I've
been painting :3). I'll get some sleep and fix up the docs tomorrow,
including some mislabelled return values and parameters. As stated, this
is a work in progress.

As for the requestbody, satisfies etc functions, it's been noted and
I'll see to it that they get changed/removed/fixed before I start
committing anything.

With regards,
Daniel.


Re: Linking mod_ssl with a specific OpenSSL version (Re: svn commit: r1358167 - in /httpd/httpd/trunk: acinclude.m4 modules/ssl/ssl_engine_init.c)

2012-08-05 Thread Kaspar Brand
On 05.08.2012 14:38, Guenter Knauf wrote:
 Am 05.08.2012 10:10, schrieb Kaspar Brand:
 1) use --with-ssl-builddir for linking with the static OpenSSL libraries
 in that directory (and ignore --with-ssl in this case)
 what about splitting into two arguments:
 --with-ssl-include=
 --with-ssl-lib=
 this would be equal to what many other configure also use ...

That's an option, yes, although the way the proposed option would work
doesn't mirror the typical case of --with-xyz-include/--with-xyz-lib
(i.e., add those args with -I and -L): --with-ssl-builddir forces
mod_ssl to always be linked with the static libraries in that directory.
Maybe --with-ssl-static-libdir would be a more appropriate name?

Kaspar