Re: 3.2 beta release today?

2005-08-15 Thread Gregory (Grisha) Trubetskoy


I think the best thing to do is just to go ahead and tag and create a 
tarball. Whether everyone was ready will become apparent during the 
testing/voting.


Grisha


On Mon, 15 Aug 2005, Jim Gallacher wrote:


Are we on track to release a 3.2.0beta tarball today?

Regards,
Jim



Re: ApacheCon Europe and http://httpd.apache.org/test/

2005-08-15 Thread Geoffrey Young


Philip M. Gollucci wrote:
 Jim Martinez wrote:
 
 Who maintains http://httpd.apache.org/test/ ?

 There's an image on it that reads ApacheCon Europe 2005 that links to
 the ApacheCon US 2005 (via a redirect).

 ApacheCon Europe 2005 was, according to the web site, held around July
 18th, 2005.  Most likely the image should be changed to something
 applicable for ApacheCon US 2005 (maybe a nonexistent logo).

looks like that link isn't part of test/ proper, but rather part of the
global httpd site nav - it also shows up on /, for example.  anyway, I think
that the folks who really spend the time on httpd.apache.org design will fix
it when they find some tuits.

 I believe anyone that is an httpd committer can change it.

I think that's right.

speaking of which, the real Apache-Test homepage is here

  http://perl.apache.org/Apache-Test/

anyone looking for something to contribute back might spend some time
sprucing it up - IIRC there is already an infrastructure in place to support

  Apache-Test/api
  Apache-Test/user/perl
  Apache-Test/user/php

etc

--Geoff


Re: ApacheCon Europe and http://httpd.apache.org/test/

2005-08-15 Thread Philip M. Gollucci

Geoffrey Young wrote:

I believe anyone that is an httpd committer can change it.



I think that's right.

speaking of which, the real Apache-Test homepage is here

  http://perl.apache.org/Apache-Test/

anyone looking for something to contribute back might spend some time
sprucing it up - IIRC there is already an infrastructure in place to support

  Apache-Test/api
  Apache-Test/user/perl
  Apache-Test/user/php


I saw that... The link of perl.apache.org is blank though, right?
--
END

What doesn't kill us can only make us stronger.
Nothing is impossible.

Philip M. Gollucci ([EMAIL PROTECTED]) 301.254.5198
Consultant / http://p6m7g8.net/Resume/
Senior Developer / Liquidity Services, Inc.
  http://www.liquidityservicesinc.com
   http://www.liquidation.com
   http://www.uksurplus.com
   http://www.govliquidation.com
   http://www.gowholesale.com



Re: ApacheCon Europe and http://httpd.apache.org/test/

2005-08-15 Thread Geoffrey Young

 I saw that.. The link of perl.apache.org is blank though right ?

I'm not quite with the program yet... what do you mean?
httpd.apache.org/test links to perl.apache.org/Apache-Test.

--Geoff


Re: New mod_smtpd release

2005-08-15 Thread Jem Berkes
 Well there's also another problem. RFC 2821 (SMTP) doesn't define a
 particular message format for SMTP (in wide use there are the RFC 822 and
 MIME message formats). I don't think that mod_smtpd should assume an RFC
 822 or MIME message format since it's strictly an SMTP module, that's why

I agree with this

 I still think header parsing should be in another module.  Of course
 this module is free to register itself as a mod_smtpd filter and do
 what it needs to do, but it shouldn't be part of the main mod_smtpd.

That seems wise. Any weird thing can come through over SMTP; it could look
very much unlike an email, after all. You're handling the protocol in your
module, and that means the SMTP protocol as I understand it, not MIME or
anything.




Re: [PATCH] Make caching hash more deterministic

2005-08-15 Thread Colm MacCarthaigh
On Sat, Aug 13, 2005 at 10:29:54AM +0200, Graham Leggett wrote:
 The idea of canonicalising the name is sound, but munging them into an 
 added :80 and an added ? is really ugly - these are not the kind of URLs 
 that an end user would understand at a glance if they had to see them 
 listed.

An end-user should never see these keys; the only place they are visible
to any user is the semi-binary mod_disk_cache files. An administrator
would have to really know what they're doing to find them, or be using
htcacheadmin - once I finish that, and if it gets accepted.

 Is it possible to remove the :80 if the scheme is http, and remove the
 :443 if the scheme is https? What is the significance of the added ??

The ? isn't me, that's current mod_cache behaviour, so I left it
alone.

It doesn't have any significance except for avoiding an extra condition.
r->args is part of the key as well, it just happens to have been NULL in
those examples.

Either way, doing as you suggest is trivial, but is there really a point
adding more conditions? Any tool which does inspect the cache files can
clean it up for presentation to the administrator.

-- 
Colm MacCárthaigh        Public Key: [EMAIL PROTECTED]
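For readers who haven't looked inside mod_disk_cache, a minimal sketch of the shape of key being discussed - illustrative only, not mod_cache's actual key-generation code; the scheme and port are passed in rather than derived, and r->args is the query string that happened to be NULL in the examples above:

#include "httpd.h"
#include "apr_strings.h"
#include "apr_network_io.h"

/* Illustrative only: build a canonicalised key of the form
 * scheme://host:port/path?args, with an explicit ":80" default port
 * and a trailing "?" even when there is no query string. */
static const char *example_cache_key(request_rec *r, const char *scheme,
                                     apr_port_t port)
{
    return apr_psprintf(r->pool, "%s://%s:%u%s?%s",
                        scheme, r->hostname, (unsigned)port, r->uri,
                        r->args ? r->args : "");
}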


[PATCH] htcacheclean fix-ups

2005-08-15 Thread Colm MacCarthaigh

[EMAIL PROTECTED]:~/svn/httpd-trunk/modules/cache$ grep -e define.*FORMAT *
mod_disk_cache.c:#define VARY_FORMAT_VERSION 3
mod_disk_cache.c:#define DISK_FORMAT_VERSION 4

[EMAIL PROTECTED]:~/svn/httpd-trunk/support$ grep -e define.*FORMAT *
htcacheclean.c:#define VARY_FORMAT_VERSION 1
htcacheclean.c:#define DISK_FORMAT_VERSION 2

Patch attached, also fixes the file_open calls to use APR_FOPEN_BINARY.

-- 
Colm MacCárthaigh        Public Key: [EMAIL PROTECTED]
Index: htcacheclean.c
===================================================================
--- htcacheclean.c  (revision 232769)
+++ htcacheclean.c  (working copy)
@@ -44,8 +44,8 @@
 
 /* mod_disk_cache.c extract start */
 
-#define VARY_FORMAT_VERSION 1
-#define DISK_FORMAT_VERSION 2
+#define VARY_FORMAT_VERSION 3
+#define DISK_FORMAT_VERSION 4
 
 typedef struct {
     /* Indicates the format of the header struct stored on-disk. */
@@ -497,8 +497,8 @@
         case HEADERDATA:
             nextpath = apr_pstrcat(p, path, "/", d->basename,
                                    CACHE_HEADER_SUFFIX, NULL);
-            if (apr_file_open(&fd, nextpath, APR_READ, APR_OS_DEFAULT,
-                              p) == APR_SUCCESS) {
+            if (apr_file_open(&fd, nextpath, APR_FOPEN_READ | APR_FOPEN_BINARY,
+                              APR_OS_DEFAULT, p) == APR_SUCCESS) {
                 len = sizeof(format);
                 if (apr_file_read_full(fd, &format, len,
                                        &len) == APR_SUCCESS) {
@@ -570,8 +570,8 @@
             current = apr_time_now();
             nextpath = apr_pstrcat(p, path, "/", d->basename,
                                    CACHE_HEADER_SUFFIX, NULL);
-            if (apr_file_open(&fd, nextpath, APR_READ, APR_OS_DEFAULT,
-                              p) == APR_SUCCESS) {
+            if (apr_file_open(&fd, nextpath, APR_FOPEN_READ | APR_FOPEN_BINARY,
+                              APR_OS_DEFAULT, p) == APR_SUCCESS) {
                 len = sizeof(format);
                 if (apr_file_read_full(fd, &format, len,
                                        &len) == APR_SUCCESS) {


RFC: can I make mod_cache per-dir?

2005-08-15 Thread Colm MacCarthaigh

mod_cache configurability sucks big-time. CacheEnable adds yet another
location mapping scheme for administrators to deal with, but this scheme
lacks basic flexibility;

It can't reliably disable caching for a directory. 

It's about 99.9% useless for a forward proxy configuration. ;-)

It can't do regex matching, unlike every other part of Apache.

It involves some fairly pants linear searches through the url lists, 
which means not a hope of implementing complex configurations while 
keeping the performance mod_cache is supposed to be for :-/

Unfortunately, I want to do some pretty complex things, including all of
the above, so I've bitten the bullet and achieved a rough implementation
by throwing away the CacheEnable and CacheDisable directives, and
completely changing the basic configuration of mod_cache. *cough*.

I'm guessing that the majority of CacheEnable instances out there in the
world probably take / as their url argument. For this case, the
changes I've made speed things up. For other cases there is some small
potential slowdown, for example if you had only;

CacheEnable disk /wiki/

Previously mod_cache would have done a url match at the handle stage and
if it didn't match, that would have been that. With this patch, it
instead looks up the url with the caching provider directly. This has
two consequences; 

1. It means all requests are hit with the cost of a lookup 
   in the cache provider, but this shouldn't be expensive.
   It's already what most sites are doing. And even with
   mod_disk_cache it's relatively painless, just a hashcalc
   and an attempt at open(). 

   Either way, the url match functionality at this stage can 
   be added back trivially, but I decided not to in my patch
   because it's so confusing to have.

2. If an admin re-configures with caching enabled for fewer
   locations than they had previously, they have to know to
   either clear the cache or to know that the entities will
   still get served from the cache until they have expired.
   The patch includes a new Caching user guide, for this and
   other reasons.

As I was saying; What I've done gets rid of the CacheEnable and
CacheDisable directives, and instead lets you do this;

# Cache everything to memory, or then disk
CacheContent mem disk

# Cache content for /foo/ to disk only
<Location /foo/>
CacheContent disk
</Location>

# Don't cache these files at all
<LocationMatch ~/foo/*.txt$>
CacheContent disk off
</LocationMatch>

<Proxy *>
   # Only cache to disk
   CacheContent disk
</Proxy>

<Proxy http://securityupdates/dist/>
# Don't cache the list of security updates, ever
CacheContent off
</Proxy>

<VirtualHost foobar>
# This vhost should never be cached
CacheContent off
</VirtualHost>

But I'm still not finished, and I'd like some advice on what next. The
per-dir information isn't available at the quick-handler stage, so the
mod_cache handler has to rely on per-server config to decide which
providers to try and use for serving content. (Right now I've simply
hard-coded mem and disk.)

There are two options for doing this;

1. Register any providers used by CacheContent at the config
   stage in the per-server conf. Has the advantage of reducing
   the amount of directives involved and minimising admin
   confusion. Disadvantages: makes using CacheContent in
   htaccess files a bit iffy, as there would have to be a
   CacheContent directive in the base config files first.
   Making the order providers are tried in configurable would
   also be a bit of a pain.

2. Adding another directive. CacheEnable makes the most
   sense as a name, but it would also be a change in its
   behaviour, so CacheServe as a name might be an option.
   This would be a per-server directive, that says;

CacheEnable mem disk

   Which would mean serve from memory, or then disk (ie in
   that order) for this server. 

I vastly prefer 2 myself, but I'd like to know what hope (if any) I have
of getting major changes to directives and the basic configuration of
a module committed. And also, people's thoughts on the trade-off of not
performing a url comparison at the handler stage.

-- 
Colm MacCárthaigh        Public Key: [EMAIL PROTECTED]
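For anyone unfamiliar with what making mod_cache per-dir involves on the code side, here is a minimal sketch of the standard per-directory config plumbing a directive like CacheContent needs - the structure, names and merge policy are hypothetical illustrations of the httpd module API, not Colm's actual patch:

#include "httpd.h"
#include "http_config.h"
#include "apr_strings.h"

typedef struct {
    const char *providers;  /* e.g. "mem disk", or "off" */
    int set;                /* did this scope use the directive? */
} cc_dir_conf;

static void *cc_create_dir_conf(apr_pool_t *p, char *dir)
{
    return apr_pcalloc(p, sizeof(cc_dir_conf));
}

/* Standard per-dir merge: the more specific scope wins if it set anything. */
static void *cc_merge_dir_conf(apr_pool_t *p, void *basev, void *addv)
{
    cc_dir_conf *base = basev, *add = addv;
    cc_dir_conf *merged = apr_pcalloc(p, sizeof(cc_dir_conf));
    *merged = add->set ? *add : *base;
    return merged;
}

static const char *cc_set_content(cmd_parms *cmd, void *conf_, const char *args)
{
    cc_dir_conf *conf = conf_;
    conf->providers = apr_pstrdup(cmd->pool, args);
    conf->set = 1;
    return NULL;
}

static const command_rec cc_cmds[] = {
    AP_INIT_RAW_ARGS("CacheContent", cc_set_content, NULL, OR_FILEINFO,
                     "cache providers for this scope, or 'off'"),
    {NULL}
};

module AP_MODULE_DECLARE_DATA cache_content_example_module = {
    STANDARD20_MODULE_STUFF,
    cc_create_dir_conf,  /* per-directory config creator */
    cc_merge_dir_conf,   /* per-directory config merger */
    NULL, NULL,          /* no per-server config in this sketch */
    cc_cmds,
    NULL                 /* no hooks registered in this sketch */
};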


Re: RFC: can I make mod_cache per-dir?

2005-08-15 Thread Graham Leggett
Colm MacCarthaigh said:

 mod_cache configurability sucks big-time. CacheEnable adds yet another
 location mapping scheme for administrators to deal with, but this scheme
 lacks basic flexibility;

The config scheme for mod_cache mirrors that of mod_proxy, from where the
cache originated, allowing configs like this:

ProxyPass /blah http://some-backend/blah
CacheEnable mem /blah

ScriptAlias /cgi-bin /var/some/cgi-bin
CacheEnable disk /cgi-bin

   It can't reliably disable caching for a directory.

Proxy has a mechanism to do this, cache should have a similar mechanism.
Does CacheDisable not do this?

   It's about 99.9% useless for a forward proxy configuration. ;-)

If so, then this is a bug that should be fixed.

   It can't do regex matching, unlike every other part of Apache.

Ok.

   It involves some fairly pants linear searches through the url lists,
   which means not a hope of implementing complex configurations while
   keeping the performance mod_cache is supposed to be for :-/

These URL lists are not very long (or am I misunderstanding you?), have
you got some profiling numbers to show this?

 Unfortunately, I want to do some pretty complex things, including all of
 the above and I've bitten the bullet have achieved a rough implemention
 by throwing away the CacheEnable and CacheDisable directives, and
 completely changing the basic configuration of mod_cache. *cough*.

I don't think throwing out the current config syntax and redoing it from
scratch is really necessary - rather fix what's broken incrementally.

 I'm guessing that the majority of CacheEnable instances out there in the
 world probably take / as their url argument.

Why would this be the case?

It only makes sense to cache expensive server operations, probably
limited to the /cgi-bin directory, or to cache ProxyPass directives.
Apache is already really good at shipping static data, so caching
everything will probably only serve to slow things down rather than speed
them up.

The case of caching / would only really make sense if an entire site was
dynamically generated, but this is the exception rather than the rule.

 For this case, the
 changes I've made speed things up. For other cases there is some small
 potential slowdown, for example if you had only;

   CacheEnable disk /wiki/

 Previously mod_cache would have done a url match at the handle stage and
 if it didn't match, that would have been that. With this patch, it
 instead looks up the url with the caching provider directly. This has
 two consequences;

   1. It means all requests are hit with the cost of a lookup
  in the cache provider, but this shouldn't be expensive.
  It's already what most sites are doing. And even with
  mod_disk_cache it's relatively painless, just a hashcalc
  and an attempt at open().

An attempt at open() can be very expensive compared to some of the
performance improvements httpd already has (caching file handles, that
sort of thing).

The cost is likely to be greater than comparing lists of URLs, so this
would very likely be a step backwards performance wise.

  Either way, the url match functionality at this stage can
  be added back trivially, but I decided not to in my patch
  because it's so confusing to have.

   2. If an admin re-configures with caching enabled for less
  locations that they had previously, they have to know to
  either clear the cache or to know that the entities will
  still get served from the cache until they have expired.
  The patch includes a new Caching user guide, for this and
  other reasons.

-1.

At no point can an admin be expected to know anything that's not
immediately obvious. If the admin said "I am removing the cache option by
commenting out this directive", then httpd should immediately disable the
cache option.

 As I was saying; What I've done gets rid of the CacheEnable and
 CacheDisable directives, and instead lets you do this;

What you seem to be wanting to do is support the concept where there is no
URL at all, ie cache everything in the current scope, be it
virtual|directory|location|whatever.

In other words, extending the current behavior from:

 CacheEnable disk /blah

(cache everything below /blah) to support something like this:

  <Location /blah>
    CacheEnable disk *
  </Location>

(cache everything regardless of the scope). This * means we don't care
for doing a URL check, cache everything within the current scope.

This is a small incremental change to the current config.

 I vastly prefer 2. myself, but I'd like to know what hope (if any) have
 I of getting major changes to directives and the basic configuration of
 a module committed? And also, people's thoughts on the trade-off of not
 performing a url comparison at the handle stage.

There is going to have to be a very compelling case for fundamentally
changing the current config method when problems can be solved with
some more simple changes, and saving on some URL comparisons doesn't
seem to me to be a good enough reason at this stage, unless some
performance numbers can tell us otherwise?

Re: RFC: can I make mod_cache per-dir?

2005-08-15 Thread Colm MacCarthaigh
On Mon, Aug 15, 2005 at 01:50:14PM +0200, Graham Leggett wrote:
  It can't reliably disable caching for a directory.
 
 Proxy has a mechanism to do this, cache should have a similar mechanism.
 Does CacheDisable not do this?

That's per-location, not per-directory. If multiple URIs map to the
same directory, I have to add per-location directives for each one. But
I'm *really* screwed when multiple vhosts, with different aliases, point
to the same directory.

This is the situation I find myself in on ftp.heanet.ie. Where say

ftp.ie.debian.org/debian/pool/ 

is also;

ftp.heanet.ie/mirrors/ftp.debian.org/pool/
ftp.heanet.ie/pub/pool/
ftp.*.debian.org/debian/pool/
apt.heanet.ie/pool/

and the list goes on, that's just four :/ But you get the idea :) I can
hack it by using mod_expires on a per-directory basis, but I don't want
to prevent remote caching - just local.

  It's about 99.9% useless for a forward proxy configuration. ;-)
 
 If so, then this is a bug that should be fixed.

That behaviour is particularly annoying. mod_cache insists that the URL
string start with a /, and it compares parsed_uri.path, so;

http://www.apache.org/
http://www.PlaceIDontWantToCache.com/

are exactly the same as far as mod_cache's url checking stage goes.
Though that's easy enough to fix as-is.

  It involves some fairly pants linear searches through the url lists,
  which means not a hope of implementing complex configurations while
  keeping the performance mod_cache is supposed to be for :-/
 
 These URL lists are not very long (or am I misunderstanding you?), have
 you got some profiling numbers to show this?

I just gave up on ftp.heanet.ie after I reached over 300 CacheDisable
entries. Didn't run any profiling numbers though.

 The case of caching / would only really make sense if an entire site
 was dynamically generated, but this is the exception rather than the
 rule.

There are other cases in which it makes a lot of sense (it certainly
does for me). But you're right , I guess the majority case isn't going
to be / once mod_cache becomes widely deployed.

 An attempt at open() can be very expensive compared to some of the
 performance improvements httpd already has (caching file handles, that
 sort of thing).

In general open() isn't expensive if it doesn't succeed, and if the
open() succeeds, it gets used by mod_cache anyway - there is no
penalty.

 The cost is likely to be greater than comparing lists of URLs, so this
 would very likely be a step backwards performance wise.

Even for a failed open() this is going to be true.

 At no point can an admin be expected to know anything that's not
 immediately obvious. If the admin said I am removing the cache option
 by commenting out this directive, then httpd should immediately
 disable the cache option.

That's fair enough.

  As I was saying; What I've done gets rid of the CacheEnable and
  CacheDisable directives, and instead lets you do this;
 
 What you seem to be wanting to do is support the concept where there
 is no URL at all, ie cache everything in the current scope, be it
 virtual|directory|location|whatever.

Yes.

 In other words, extending the current behavior from:
 
  CacheEnable disk /blah
 
 (cache everything below /blah) to support something like this:
 
   <Location /blah>
     CacheEnable disk *
   </Location>
 
 (cache everything regardless of the scope). This * means we don't
 care for doing a URL check, cache everything within the current
 scope.
 
 This is a small incremental change to the current config.

But not to the code. It's pretty much impossible to implement in code
without running the handler later, or replicating the entire url mapping
stage.

 There is going to have to be a very compelling case for fundamentally
 changing the current config method when problems can be solved with
 some more simple changes, and saving on some URL comparisons doesn't
 seem to me to be a good enough reason at this stage, unless some
 performance numbers can tell us otherwise?

My main reason is that I can't enable or disable caching on a
per-directory or per-file basis. 

-- 
Colm MacCárthaigh        Public Key: [EMAIL PROTECTED]


Call for PPMC members for mod_ftp

2005-08-15 Thread Jim Jagielski

Now that mod_ftp is entering Incubation, if you are interested in
serving on the PPMC, please contact me directly.

Please recall that you will be required to submit an ASF iCLA
if you do not have one on file.


Re: Update NetWare AP21 build files....

2005-08-15 Thread Brad Nicholes
done.

Brad

 On Saturday, August 13, 2005 at 4:19 pm, in message
[EMAIL PROTECTED], [EMAIL PROTECTED] wrote:
 Greetings All,
 Some kind soul needs to update the NetWare build files for AP2.1 proxy
 modules, to include the recently added 'proxy_hook_load_lbmethods()'.
 
 Presently getting the following error:
 
 Linking Release.o/proxybalancer.nlm
 ### mwldnlm Linker Error:
 #   Undefined symbol: proxy_hook_load_lbmethods in
 #   mod_proxy_balancer.o
 
 Thanks,
 Norm


Re: svn commit: r220307 - in /httpd/httpd/trunk/modules: metadata/mod_setenvif.c ssl/mod_ssl.c ssl/mod_ssl.h ssl/ssl_expr_eval.c

2005-08-15 Thread Joe Orton
On Fri, Aug 05, 2005 at 08:00:01PM +0200, Martin Kraemer wrote:
 On Tue, Aug 02, 2005 at 07:14:10PM +0200, Martin Kraemer wrote:
  I wanted something like
  
SSLRequire committers in SSLPeerExtList(1.3.6.1.4.1.18060.1);
  
  to mean at least one extension with an OID of
  1.3.6.1.4.1.18060.1 with a value of 'committers' exists in the
  client cert.
 
 I'll be on vacation next week, and will send in another patch after
 that.

OK, hope you had a good holiday.  I wasn't trying to argue about the
semantics, just to nitpick the naming.  Having SSL in the SSLRequire
function name is redundant, but not in the context of mod_setenvif.  So, my
preference is:

SSLRequire committers in PeerExtList(1.3.6.1.4.1.18060.1);

SetEnvIf SSL_PeerExtList(etc) ...

I just went to write a test case for the SetEnvIf function, and there
seems to be a rather annoying fundamental problem: the match_headers
hook runs too early to be useful for this when doing per-dir client
cert negotiation.

Attached is the patch I have for mod_setenvif to clean it up and adopt the
naming above; it's untested so far, as I'm blocked by the fact that it doesn't
work for per-dir negotiation.

joe
Index: modules/metadata/mod_setenvif.c
===================================================================
--- modules/metadata/mod_setenvif.c (revision 232271)
+++ modules/metadata/mod_setenvif.c (working copy)
@@ -104,7 +104,7 @@
     SPECIAL_REQUEST_METHOD,
     SPECIAL_REQUEST_PROTOCOL,
     SPECIAL_SERVER_ADDR,
-    SPECIAL_OID_VALUE
+    SPECIAL_SSL_PEEREXTLIST
 };
 typedef struct {
     char *name;                 /* header name */
@@ -349,30 +349,30 @@
     else if (!strcasecmp(fname, "server_addr")) {
         new->special_type = SPECIAL_SERVER_ADDR;
     }
-    else if (!strncasecmp(fname, "oid(", 4)) {
-        ap_regmatch_t match[AP_MAX_REG_MATCH];
+    else if (!strncasecmp(fname, "ssl_peerextlist(",
+                          sizeof("ssl_peerextlist(") - 1)) {
+        const char *oid, *oidend;
 
-        new->special_type = SPECIAL_OID_VALUE;
+        new->special_type = SPECIAL_SSL_PEEREXTLIST;
 
-        /* Syntax check and extraction of the OID as a regex: */
-        new->pnamereg = ap_pregcomp(cmd->pool,
-                                    "^oid\\(\"?([0-9.]+)\"?\\)$",
-                                    (AP_REG_EXTENDED // | AP_REG_NOSUB
-                                     | AP_REG_ICASE));
-        /* this can never happen, as long as pcre works:
-          if (new->pnamereg == NULL)
-            return apr_pstrcat(cmd->pool, cmd->cmd->name,
-                               " OID regex could not be compiled.", NULL);
-         */
-        if (ap_regexec(new->pnamereg, fname, AP_MAX_REG_MATCH, match, 0) == AP_REG_NOMATCH) {
+        oid = fname + strlen("ssl_peerextlist(");
+        oidend = ap_strchr_c(oid, ')');
+
+        /* skip over quotes if present */
+        if (oid[0] == '"') {
+            oid++;
+        }
+        if (oidend && oidend[-1] == '"') {
+            oidend--;
+        }
+
+        if (!oidend || oidend <= oid) {
             return apr_pstrcat(cmd->pool, cmd->cmd->name,
-                               " OID syntax is: oid(\"1.2.3.4.5\"); error in: ",
-                               fname, NULL);
+                               " invalid ssl_peerextlist() syntax in: ",
+                               fname, NULL);
         }
-        new->pnamereg = NULL;
-        /* The name field is used for the stripped oid string */
-        new->name = fname = apr_pstrdup(cmd->pool, fname+match[1].rm_so);
-        fname[match[1].rm_eo - match[1].rm_so] = '\0';
+
+        new->name = apr_pstrmemdup(cmd->pool, oid, oidend - oid);
     }
     else {
         new->special_type = SPECIAL_NOT;
@@ -504,8 +504,10 @@
      * same header.  Remember we don't need to strcmp the two header
      * names because we made sure the pointers were equal during
      * configuration.
-     * In the case of SPECIAL_OID_VALUE values, each oid string is
-     * dynamically allocated, thus there are no duplicates.
+     *
+     * In the case of SPECIAL_SSL_PEEREXTLIST values, each
+     * extension list is dynamically allocated, thus there are no
+     * duplicates.
      */
     if (b->name != last_name) {
         last_name = b->name;
@@ -529,32 +531,19 @@
         case SPECIAL_REQUEST_PROTOCOL:
             val = r->protocol;
             break;
-        case SPECIAL_OID_VALUE:
+        case SPECIAL_SSL_PEEREXTLIST:
             /* If mod_ssl is not loaded, the accessor function is NULL */
-            if (ssl_extlist_by_oid_func != NULL)
-            {
-                apr_array_header_t *oid_array;
-                char **oid_value;
-                int j, len = 0;
-                char 

Re: [PATCH] Make caching hash more deterministic

2005-08-15 Thread Jim Jagielski


On Aug 15, 2005, at 4:10 AM, Colm MacCarthaigh wrote:

 On Sat, Aug 13, 2005 at 10:29:54AM +0200, Graham Leggett wrote:

  The idea of canonicalising the name is sound, but munging them into an
  added :80 and an added ? is really ugly - these are not the kind of URLs
  that an end user would understand at a glance if they had to see them
  listed.

 An end-user should never see these keys, the only place they are visible
 to any user is the semi-binary mod_disk_cache files. An administrator
 would have to really know what they're doing to find them, or be using
 htcacheadmin - once I finish that, and if it gets accepted.

  Is it possible to remove the :80 if the scheme is http, and remove the
  :443 if the scheme is https? What is the significance of the added ??

 The ? isn't me, that's current mod_cache behaviour, so I left it
 alone.

 It doesn't have any significance except for avoiding an extra condition.
 r->args is part of the key as well, it just happens to have been NULL in
 those examples.

 Either way, doing as you suggest is trivial, but is there really a point
 adding more conditions? Any tool which does inspect the cache files can
 clean it up for presentation to the administrator.

I think Colm has a valid point... since these are internal representations
then the cleaning up would best be done by the actual view process.
I would imagine that keeping the internal representations consistent
would streamline the actual functional aspects.


Re: New mod_smtpd release

2005-08-15 Thread Joe Schaefer
Jem Berkes [EMAIL PROTECTED] writes:

 Well there's also another problem. RFC 2821 (SMTP) doesn't define a
 particular message format for SMTP (in wide use there are the RFC 822 and
 MIME message formats). I don't think that mod_smtpd should assume an RFC
 822 or MIME message format since it's strictly an SMTP module, that's why

 I agree with this

Now I'm confused; 2821 S-2.3.1 defines SMTP content as headers + body.
What am I overlooking?

 I still think header parsing should be in another module.  Of course
 this module is free to register itself as an mod_smtpd  filter and do
 what it needs to do, but it shouldn't be part of the  main mod_smtpd.

If you put it into a separate module, that means there will be no hooks
which can expect the headers to be in r->headers_in.  So every hook that
needs them will need to parse them itself, which seems absolutely redundant.

Furthermore it is a requirement (MUST) for a 2821-compliant server
to implement loop detection.  That means at least one hook will *always*
care about the Received: headers, for every SMTP transaction.  Not providing
an API that hook authors can use to inspect headers seems like a very
spartan choice, which IMO is counter to the spirit of httpd.

-- 
Joe Schaefer
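
A minimal sketch of the kind of hook Joe is describing, assuming - as he argues mod_smtpd should guarantee - that the message headers have been parsed into r->headers_in; the MAX_HOPS value and the function itself are illustrative, not mod_smtpd API:

#include "httpd.h"
#include "apr_tables.h"

#define MAX_HOPS 30   /* illustrative limit */

static int count_received(void *ctx, const char *key, const char *val)
{
    int *n = ctx;
    (*n)++;
    return 1;   /* keep iterating */
}

/* Hypothetical per-message hook: reject the transaction if the message
 * has passed through too many hops, judging by its Received: headers. */
static int check_mail_loop(request_rec *r)
{
    int hops = 0;
    apr_table_do(count_received, &hops, r->headers_in, "Received", NULL);
    return (hops > MAX_HOPS) ? 554 : OK;   /* 554: transaction failed */
}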



Re: Memory leak not fixed from 2003

2005-08-15 Thread Sander Temme


On Aug 12, 2005, at 2:07 AM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Bug #25659 is about a memory leak.

 The (quite trivial) patch has been provided in 2003, and the bug is
 still not corrected !!!

 Could somebody include this in the next version?


+1 on the patch.

This leak is triggered once, in the SSL passphrase handler... do you  
run into this leak in deployment? Like, do you restart your server a  
lot?


S.

--
[EMAIL PROTECTED]  http://www.temme.net/sander/
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF





3.2 beta release today?

2005-08-15 Thread Jim Gallacher

Are we on track to release a 3.2.0beta tarball today?

Regards,
Jim


Re: New mod_smtpd release

2005-08-15 Thread Rian Hunter


On Aug 15, 2005, at 10:22 AM, Joe Schaefer wrote:


Jem Berkes [EMAIL PROTECTED] writes:



Well there's also another problem. RFC 2821 (SMTP) doesn't define a
particular message format for SMTP (in wide use there are the RFC 822 and
MIME message formats). I don't think that mod_smtpd should assume an RFC
822 or MIME message format since it's strictly an SMTP module, that's why




I agree with this



Now I'm confused; 2821 S-2.3.1 defines SMTP content as headers + body.
What am I overlooking?


2821 s-2.3.1 says:

If the content conforms to other
contemporary standards, the headers form a collection of field/value
pairs structured as in the message format specification [32]; the
body, if structured, is defined according to MIME [12].

Personally I interpret this to mean that the content may not conform
to other contemporary standards, although I do doubt the existence
of non-RFC 2822 header formats.



I still think header parsing should be in another module.  Of course
this module is free to register itself as an mod_smtpd  filter  
and do
what it needs to do, but it shouldn't be part of the  main  
mod_smtpd.




If you put in into a separate module, that means there will be no  
hooks
which can expect the headers to be in r-headers_in.  So every hook  
that
needs them will need to parse it themselves, which seems absolutely  
redundant.


Furthermore it is a requirement (MUST) for a 2821 compliant server
to implement loop detection.  That means at least one hook will  
*always*

care about the Received: headers, for every smtp transaction.  Why you
wouldn't want to provide an API for hook authors to use for inspecting
headers, seems like a very spartan choice,  which IMO is counter to  
the

spirit of httpd.


Well not exactly. A module that parses headers can register itself as
an input_filter for mod_smtpd. It can parse the headers, and the
headers can be accessed by a get_smtpd_headers(request_rec*) function
or similar exported by the parsing module, and modules that rely on
this module will know about that API. This module can even be
included with the default mod_smtpd package. Do you agree that this
is possible?


The module that implements loop detection could rely on that module
and also be included with the default mod_smtpd package.


Either way, lacking header parsing in mod_smtpd is being
impractically pedantic, since probably 99% of SMTP transfers involve
messages in the RFC 2822/MIME formats. Although I think that maybe
there will be a plugin that wants data from the DATA command
verbatim. I still feel this needs some thought.

-rian


Re: [PATCH] use arrays in smtpd_request_rec (was Re: smtpd_request_rec questions)

2005-08-15 Thread Rian Hunter

On Aug 14, 2005, at 11:08 PM, Garrett Rooney wrote:


Rian Hunter wrote:

This patch looks good but I have some questions. You seem to use the
returned pointers from apr_array_push without checking if they are
NULL. Even in apr_array_push, apr_palloc is used without checking for
NULL, even though apr_palloc can definitely return NULL.
Because of that, I'm not sure whether or not you don't check for
NULL on purpose. Could you explain? Thanks.




Well, it depends on what your general attitude towards checking for
errors in memory allocation is.  In many projects it's generally
considered to be the kind of error you can't effectively recover
from anyway, so cluttering up the code with if (foo == NULL) checks
is kind of pointless: you're likely to have been killed by a kernel
OOM killer before that can do anything useful, or you could be on
an OS that doesn't even return NULL (memory overcommit), so the
checks are pointless anyway.  The only way to be safe is to make
sure that algorithmically your program can't allocate unbounded
amounts of memory, then tune your box and app so that this kind of
problem doesn't happen in practice.


APR generally doesn't bother checking for this kind of error for  
just this reason, same with Subversion and if I'm not mistaken  
Apache HTTPD itself.


-garrett



Thanks for this information! After looking at code in httpd it seems  
this is the case. I'll change the mod_smtpd code to reflect this  
convention.

-rian
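
For reference, the escape hatch APR does provide here is a per-pool abort function, so code can skip per-call NULL checks without silently continuing on OOM; a minimal sketch using only stock APR calls:

#include <stdlib.h>
#include "apr_pools.h"

/* Called by APR if an allocation from this pool fails; returning would
 * let APR carry on, so bail out instead. */
static int die_on_oom(int retcode)
{
    abort();
    return retcode;   /* not reached */
}

static apr_pool_t *make_pool_or_die(void)
{
    apr_pool_t *pool;
    if (apr_pool_create(&pool, NULL) != APR_SUCCESS) {
        abort();
    }
    apr_pool_abort_set(die_on_oom, pool);
    return pool;
}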



Re: New mod_smtpd release

2005-08-15 Thread Joe Schaefer
Rian Hunter [EMAIL PROTECTED] writes:

 Either way, lacking header parsing in mod_smtpd is being  
 impractically pedant since probably 99% of SMTP transfers involve  
 messages in the RFC 2822/MIME formats. Although I think that maybe  
 there will be a plugin that wants data from the DATA command  
 verbatim. I still feel this needs some thought.

IMO a good bit of code to look at is Qpsmtpd::Transaction,
which is what I think should correspond to a request_rec in mod_smtpd.
Any per-transaction extensions you need can go into r->request_config.

-- 
Joe Schaefer



Loading Apache2::Request under CGI on Win32

2005-08-15 Thread Nikolay Ananiev
Hello,
I'd like my application to do the following:

if(eval{require Apache2::Request}) {
use_apreq();
} else {
use_cgi_pm();
}

This works with mod_perl on Win32, but has problems under CGI (again on
Win32).

The problem appears when libapreq2.dll and mod_apreq2.so are not in
$ENV{PATH}.
When eval{require Apache2::Request} is executed, a message appears on the
desktop saying that libapreq2.dll could not be found in the path.
This holds the perl process until the administrator hits OK.
Is there any way to prevent this message from showing up?
Or maybe there should be a warning while installing apreq2 that in order to
use Apache2::Request under CGI, libapreq2.dll and mod_apreq2.so must be in
the PATH?

I know I can just add the missing paths to the environment, but I'm going to
sell my application and I'd like to prevent the problems that may appear on
my clients' servers.





Re: New mod_smtpd release

2005-08-15 Thread Garrett Rooney

Joe Schaefer wrote:

Rian Hunter [EMAIL PROTECTED] writes:


Either way, lacking header parsing in mod_smtpd is being
impractically pedantic, since probably 99% of SMTP transfers involve
messages in the RFC 2822/MIME formats. Although I think that maybe
there will be a plugin that wants data from the DATA command
verbatim. I still feel this needs some thought.



IMO a good bit of code to look at is Qpsmtpd::Transaction,
which is what I think should correspond to a request_rec in mod_smtpd.
Any per-transaction extensions you need can go into r->request_config.



+1, Qpsmtpd has already solved many of these problems, there's little 
reason to spend lots of time doing it ourselves when we can just learn 
from what they've already done.


-garrett


Amit still researching HTTP Splitting protection

2005-08-15 Thread William A. Rowe, Jr.
There is an interesting post by Amit Klein on the methods
that could be employed to detect/ward against splitting
attacks; those interested should review:

http://www.webappsec.org/lists/websecurity/archive/2005-08/msg00044.html

It's an interesting paper, and it opens some prospects for better
handling of these issues before we open the floodgates of httpd-2.2
and HTTP connection pooling.



Re: [PATCH] Make caching hash more deterministic

2005-08-15 Thread Graham Leggett

Colm MacCarthaigh wrote:


An end-user should never see these keys, the only place they are visible
to any user is the semi-binary mod_disk_cache files. An administrator
would have to really know what they're doing to find them, or be using
htcacheadmin - once I finish that, and if it gets accepted. 


Ok cool - I was not sure whether the URL list would be mined for data 
for any reason.



Either way, doing as you suggest is trivial, but is there really a point
adding more conditions? Any tool which does inspect the cache files can
clean it up for presentation to the administrator.


In that case you're right - the less work done on the URL, the faster it 
will be.


Regards,
Graham
--


mod_mbox seg faulting on ajax

2005-08-15 Thread Joshua Slive

[What is the active mod_mbox devel list?  Trying [EMAIL PROTECTED]

As the subject says, mod_mbox has seg faulted about 50 times so far
today on ajax.  There are some core files in ajax:/tmp (after hacking
apachectl to adjust the ulimits).


Here's a backtrace
#0  find_thread (r=0x6021b870,
msgID=0x6021ce60 
[EMAIL PROTECTED], 
   c=0x0) at mod_mbox_file.c:155

#1  0x2100a920 in find_next_thread (r=0x6021b870,
msgID=0x6021ce60 
[EMAIL PROTECTED], 
   c=0x0) at mod_mbox_file.c:216

#2  0x2100ad20 in fetch_relative_message (r=0x6021b870,
f=0x6021ce60, flags=3) at mod_mbox_file.c:298
#3  0x2100da90 in mbox_file_handler (r=0x6021b870)
at mod_mbox_file.c:945
#4  0x400358f0 in ap_run_handler (r=0x6021b870) at 
config.c:153
#5  0x400368d0 in ap_invoke_handler (r=0x6021b870) at 
config.c:317

#6  0x4002f460 in ap_process_request (r=0x6021b870)
at http_request.c:226
#7  0x400249d0 in ap_process_http_connection (c=0x601d9320)
at http_core.c:233
#8  0x4004d1b0 in ap_run_process_connection (c=0x601d9320)
at connection.c:43
#9  0x40032270 in child_main (child_num_arg=23984) at prefork.c:610
#10 0x40032540 in make_child (s=0x60045730, slot=60) at 
prefork.c:704
#11 0x400327c0 in startup_children (number_to_start=89) at 
prefork.c:715

#12 0x40033950 in ap_mpm_run (_pconf=0x60008c4c,
plog=0x60040288, s=0x0) at prefork.c:941
#13 0x40041610 in main (argc=5, argv=0x6fffb0d8) at 
main.c:618


Hash table growth

2005-08-15 Thread Jem Berkes
When I looked at the expand function used by apr_hash.c it looked to me 
like it keeps growing if you keep using 'set' with novel values. I was 
thinking of using apr_hash in order to cache DNSBL queries for my module. 
It would ensure rapid cache search but I am having trouble figuring out how 
I could remove existing entries. I really _want_ collisions to happen but 
I'm not sure if this is possible.

Any tips on how I can overwrite existing entries in the hash table, rather 
than keep expanding the table entries?

e.g. key ABC is already in the table, and it collides with XYZ which I now 
want to add. However, if I apr_hash_get(XYZ) it will tell me correctly that 
this key is not present; and apr_hash_set(XYZ) now expands the table right?




Re: Hash table growth

2005-08-15 Thread Garrett Rooney

Jem Berkes wrote:
When I looked at the expand function used by apr_hash.c it looked to me 
like it keeps growing if you keep using 'set' with novel values. I was 
thinking of using apr_hash in order to cache DNSBL queries for my module. 
It would ensure rapid cache search but I am having trouble figuring out how 
I could remove existing entries. I really _want_ collisions to happen but 
I'm not sure if this is possible.


Any tips on how I can overwrite existing entries in the hash table, rather 
than keep expanding the table entries?


As is documented in apr_hash.h, you can remove entries from an 
apr_hash_t by setting the value for the key in question to NULL.


-garrett
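
In other words, apr_hash_set() with an existing key overwrites in place, and a NULL value deletes; the table only grows with the number of distinct keys. A minimal sketch (the key/value strings are illustrative):

#include "apr_hash.h"
#include "apr_strings.h"

static void dnsbl_cache_demo(apr_pool_t *pool)
{
    apr_hash_t *cache = apr_hash_make(pool);

    /* Insert: the key and value must live at least as long as the hash,
     * hence the pool copies. */
    apr_hash_set(cache, apr_pstrdup(pool, "relay.example.org"),
                 APR_HASH_KEY_STRING, apr_pstrdup(pool, "127.0.0.2"));

    /* Setting the same key again just overwrites the value in place;
     * the table only expands when the number of distinct keys grows. */
    apr_hash_set(cache, "relay.example.org", APR_HASH_KEY_STRING, "127.0.0.4");

    /* Setting the value to NULL removes the entry entirely. */
    apr_hash_set(cache, "relay.example.org", APR_HASH_KEY_STRING, NULL);
}

Note this still isn't a bounded cache: distinct keys accumulate until they are explicitly removed, so a DNSBL cache would still need some eviction policy of its own.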


Performance proxy_ajp vs. mod_jk when TOMCAT integration with Apache

2005-08-15 Thread Xuekun Hu
Hi, 
From a performance point of view, which connector should be used for Tomcat
integration with Apache? proxy_ajp or mod_jk?

I read some docs which said mod_jk should have better performance than
proxying, while proxy_ajp in Apache 2.1 is an addition to mod_proxy
and uses Tomcat's AJP protocol stack. Hence the above question.

Thx, Xuekun


Re: New mod_smtpd release

2005-08-15 Thread Joe Schaefer
Branko Čibej [EMAIL PROTECTED] writes:

 May I suggest you resend this patch, using svn diff instead of diff 
 -pur to create it? You're diffing the SVN administrative directory...

OK, here's a patch against mod_smtpd trunk that replaces the
// comments with /* */ comments:


Property changes on: 
___
Name: svn:ignore
   + smtp_core.slo
smtp_protocol.slo
aclocal.m4
autom4te.cache
config.log
config.status
configure
Makefile
.libs


Index: smtp_protocol.c
===================================================================
--- smtp_protocol.c (revision 232921)
+++ smtp_protocol.c (working copy)
@@ -90,7 +90,7 @@
   handle_func = apr_hash_get(handlers, command, APR_HASH_KEY_STRING);
 
   in_data.msg = NULL;
-  // command not recognized
+  /* command not recognized */
   if (!handle_func)  {
     if (!smtpd_handle_unrecognized_command(r, &in_data, command, buffer))
       break;
@@ -104,7 +104,7 @@
   }
 
  end:
-  // flush any output we may have before disconnecting
+  /* flush any output we may have before disconnecting */
   ap_rflush(r);
   return;
 }
@@ -152,10 +152,10 @@
     break;
   }
 
-  // default behavior:
+  /* default behavior: */
 
-  // RFC 2821 states that when ehlo or helo is received, reset
-  // state 
+  /* RFC 2821 states that when ehlo or helo is received, reset */
+  /* state */
   smtpd_reset_transaction(r);
 
   sr->helo = apr_pstrdup(sr->p, buffer);
@@ -180,7 +180,7 @@
 HANDLER_DECLARE(helo) {
   smtpd_request_rec *sr = smtpd_get_request_rec(r);
 
-  // bad syntax
+  /* bad syntax */
   if (buffer[0] == '\0') {
     ap_rprintf(r, "%d %s\r\n", 501, "Syntax: HELO hostname");
     return 501;
   }
@@ -201,8 +201,8 @@
     break;
   }
 
-  // RFC 2821 states that when ehlo or helo is received, reset
-  // state 
+  /* RFC 2821 states that when ehlo or helo is received, reset */
+  /* state */
   smtpd_reset_transaction(r);
   
   sr->helo = apr_pstrdup(sr->p, buffer);
@@ -216,13 +216,13 @@
   smtpd_request_rec *sr = smtpd_get_request_rec(r);
   char *loc;
 
-  // already got mail
+  /* already got mail */
   if (sr->state_vector == SMTPD_STATE_GOT_MAIL) {
     ap_rprintf(r, "%d %s\r\n", 503, "Error: Nested MAIL command");
     return 503;
   }
 
-  // bad syntax
+  /* bad syntax */
   if ((loc = ap_strcasestr(buffer, "from:")) == NULL) {
     ap_rprintf(r, "%d %s\r\n", 501, "Syntax: MAIL FROM:<address>");
     return 501;
@@ -237,7 +237,7 @@
   case SMTPD_DONE:
     return 1;
   case SMTPD_DONE_DISCONNECT:
-    // zero to disconnect
+    /* zero to disconnect */
     return 0;
   case SMTPD_DENY:
     ap_log_error(APLOG_MARK, APLOG_NOTICE, 0, r->server,
@@ -268,7 +268,7 @@
     } else {
       ap_rprintf(r, "%d %s, denied\r\n", 550, loc);
     }
-    // zero to disconnect
+    /* zero to disconnect */
     return 0;
   case SMTPD_DENYSOFT_DISCONNECT:
     ap_log_error(APLOG_MARK, APLOG_NOTICE, 0, r->server,
@@ -279,13 +279,13 @@
     } else {
       ap_rprintf(r, "%d %s, temporarily denied\r\n", 450, loc);
     }
-    // zero to disconnect
+    /* zero to disconnect */
     return 0;
   default:
     break;
   }
 
-  // default handling
+  /* default handling */
   sr->mail_from = apr_pstrdup(sr->p, loc);
   sr->state_vector = SMTPD_STATE_GOT_MAIL;
   ap_rprintf(r, "%d %s\r\n", 250, "Ok");
@@ -297,14 +297,14 @@
   smtpd_request_rec *sr = smtpd_get_request_rec(r);
   char *loc;
 
-  // need mail first
+  /* need mail first */
   if ((sr->state_vector != SMTPD_STATE_GOT_MAIL) &&
       (sr->state_vector != SMTPD_STATE_GOT_RCPT)) {
     ap_rprintf(r, "%d %s\r\n", 503, "Error: need MAIL command");
     return 503;
   }
 
-  // bad syntax
+  /* bad syntax */
   if ((loc = ap_strcasestr(buffer, "to:")) == NULL) {
     ap_rprintf(r, "%d %s\r\n", 501, "Syntax: RCPT TO:<address>");
     return 501;
@@ -342,7 +342,7 @@
     ap_rprintf(r, "%d %s\r\n", 450, in_data->msg ? in_data->msg :
               "relaying temporarily denied");
     return 0;
-  case SMTPD_OK: // recipient is okay
+  case SMTPD_OK: /* recipient is okay */
     break;
   default:
     ap_rprintf(r, "%d %s\r\n", 450, "No plugin decided if relaying is 
@@ -350,7 +350,7 @@
     return 450;
   }
 
-  // add a recipient
+  /* add a recipient */
   (*((char **)apr_array_push(sr->rcpt_to))) = apr_pstrdup(sr->p, loc);
   sr->state_vector = SMTPD_STATE_GOT_RCPT;
   ap_rprintf(r, "%d %s\r\n", 250, "Ok");
@@ -403,14 +403,14 @@
   case SMTPD_DONE_DISCONNECT:
     return 0;
   case SMTPD_DENY:
-    // REVIEW: should we reset state here?
-    // smtpd_clear_request_rec(sr);
+    /* REVIEW: should we reset state here? */
+    /* smtpd_clear_request_rec(sr); */
    ap_rprintf(r, "%d %s\r\n", 554, in_data->msg ? in_data->msg :
              "Message denied");
    return 554;
   case SMTPD_DENYSOFT:
-    // REVIEW: should we reset state here?
-    // smtpd_clear_request_rec(sr);
+    /* REVIEW: should we reset state here? */
+    /* smtpd_clear_request_rec(sr); */
    ap_rprintf(r, "%d %s\r\n", 451, in_data->msg ? in_data->msg :

Re: [PATCH] use arrays in smtpd_request_rec

2005-08-15 Thread Joe Schaefer
Garrett Rooney [EMAIL PROTECTED] writes:

 Index: smtp_protocol.c
 ===================================================================
 --- smtp_protocol.c   (revision 232680)
 +++ smtp_protocol.c   (working copy)

[...]
 +for (i = 0; i < sr->extensions->nelts; ++i) {
 +  ap_rprintf(r, "%d-%s\r\n", 250, ((char **)sr->extensions->nelts)[i]);
                                                               ^

That should be elts, shouldn't it?

-- 
Joe Schaefer



Re: [PATCH] use arrays in smtpd_request_rec

2005-08-15 Thread Garrett Rooney

Joe Schaefer wrote:

Garrett Rooney [EMAIL PROTECTED] writes:



Index: smtp_protocol.c
===================================================================
--- smtp_protocol.c (revision 232680)
+++ smtp_protocol.c (working copy)

[...]

+for (i = 0; i < sr->extensions->nelts; ++i) {
+  ap_rprintf(r, "%d-%s\r\n", 250, ((char **)sr->extensions->nelts)[i]);
                                                             ^

That should be elts, shouldn't it?



Yes indeed, it should.  One of the problems with data structures that 
require casting in order to access what you've stored in them...  The 
version that Rian committed is slightly different, but still has the 
same problem.  I'd post a patch, but it's probably faster to fix it by 
hand than it is to detach a patch from an email and apply it.


Over in Subversion-land we have a couple of macros defined for dealing
with apr arrays; they make it rather difficult to make this kind of
mistake, and I'd love to see them in the main APR release, but the last
time it was brought up I believe someone objected to their inclusion...


-garrett
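
For illustration, the corrected loop and the kind of helper macros Garrett is referring to (the macro names here are made up; Subversion's equivalents are similar) might look like this:

#include "httpd.h"
#include "http_protocol.h"
#include "apr_tables.h"

/* Helper macros of the kind described above (illustrative names).
 * Keeping the cast in one place makes the elts/nelts slip much harder
 * to make. */
#define EX_ARRAY_IDX(ary, i, type)  (((type *)(ary)->elts)[(i)])
#define EX_ARRAY_PUSH(ary, type)    (*((type *)apr_array_push(ary)))

/* The corrected loop, assuming "extensions" is an apr_array_header_t
 * of (char *), as in the patch under discussion. */
static void list_extensions(request_rec *r, apr_array_header_t *extensions)
{
    int i;
    for (i = 0; i < extensions->nelts; ++i) {
        ap_rprintf(r, "%d-%s\r\n", 250, EX_ARRAY_IDX(extensions, i, char *));
    }
}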


Re: [PATCH] use arrays in smtpd_request_rec

2005-08-15 Thread Paul Querna
Garrett Rooney wrote:
[...]
 +for (i = 0; i < sr->extensions->nelts; ++i) {
 +  ap_rprintf(r, "%d-%s\r\n", 250, ((char **)sr->extensions->nelts)[i]);
                                                              ^

 That should be elts, shouldn't it?

 
 Yes indeed, it should.  One of the problems with data structures that
 require casting in order to access what you've stored in them...  The
 version that Rian committed is slightly different, but still has the
 same problem.  I'd post a patch, but it's probably faster to fix it by
 hand than it is to detach a patch from an email and apply it.
 
 Over in Subversion-land we have a couple of macros defined for dealing
 with apr arrays, they make it rather difficult to make this kind of
 mistake, and I'd love to see them in the main APR release, but the last
 time it was brought up I believe someone objected to their inclusion...

I don't object, it sounds like a bloody good idea to me.

-Paul



New mod_dnsbl_lookup release

2005-08-15 Thread Jem Berkes
I don't have svn access yet, but I have posted the module here:
http://www.sysdesign.ca/archive/mod_dnsbl_lookup-0.91.tar.gz

This is much improved from my earlier 0.90, taking advice from Colm. With 
this new style of configuration the module can be used more flexibly for 
blacklists, whitelists, or other things. Configuration now looks like:

DnsblZone spammers  sbl.spamhaus.org.   any
DnsblZone spammers  dnsbl.sorbs.net.    127.0.0.5
DnsblZone spammers  dnsbl.sorbs.net.    127.0.0.6
DnsblZone whitelist customers.dnsbl     any
RhsblZone spammers  rhsbl.ahbl.org.     127.0.0.2

The README in the above tarball is very thorough and describes how to use 
the module's functions. I'm interested in adding the functionality into 
mod_smtpd of course. Rian and Nick: how should we proceed on that?

Here in brief is a relevant part of my README

===
4. Using from mod_smtpd
===

The function calls work in isolation, without requiring any prior setup 
before using DNSBLs. The server configuration takes care of all 
DNSBL and RHSBL setup, including domains to query and responses to 
interpret as positive.

The important knowledge link between mod_dnsbl_lookup and its user, say 
mod_smtpd, is the chain name that defines the desired DNSBLs. Instead of 
hard coding a chain name, it makes much more sense to have a module such 
as mod_smtpd load during its configuration some chains to work with.

So mod_smtpd might have configuration directives such as:
SmtpBlacklistChain blackchain
SmtpWhitelistChain whitechain

Now mod_smtpd knows which chain to query for blacklisting purposes, and 
which chain to query for whitelisting purposes. The admin may leave either 
chain undefined of course and can easily modify the configuration by 
substituting different chain names (as used by DnsblZone and RhsblZone). 
The pseudo code within mod_smtpd might then be:

Attempt to load optional dnsbl_lookup functions
If functions are available
    If dnsbl_lookup_ip(whitechain, client) == DNSBL_POSITIVE
        return ALLOW_SERVICE    // even if blacklisted
    Else If dnsbl_lookup_ip(blackchain, client) == DNSBL_POSITIVE
        return DENY_SERVICE
return ALLOW_SERVICE    // default action

- Jem
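
A C sketch of the "attempt to load optional dnsbl_lookup functions" step in the pseudo code above, using httpd's optional-function mechanism; the dnsbl_lookup_ip prototype, the DNSBL_POSITIVE value and the chain names are assumptions for illustration, not mod_dnsbl_lookup's published API:

#include "httpd.h"
#include "apr_optional.h"

#define DNSBL_POSITIVE 1   /* assumed value, for illustration */

/* Prototype assumed for illustration; the real mod_dnsbl_lookup
 * signature may differ. */
APR_DECLARE_OPTIONAL_FN(int, dnsbl_lookup_ip,
                        (request_rec *r, const char *chain,
                         const char *client_ip));

static APR_OPTIONAL_FN_TYPE(dnsbl_lookup_ip) *lookup_ip_fn = NULL;

/* "Attempt to load optional dnsbl_lookup functions" -- typically done
 * once, e.g. in a post_config hook.  Stays NULL if mod_dnsbl_lookup
 * isn't loaded. */
static void retrieve_dnsbl_fns(void)
{
    lookup_ip_fn = APR_RETRIEVE_OPTIONAL_FN(dnsbl_lookup_ip);
}

/* Returns 1 to allow service, 0 to deny, mirroring the pseudo code. */
static int smtpd_check_client(request_rec *r, const char *client_ip)
{
    if (lookup_ip_fn == NULL) {
        return 1;                          /* functions not available */
    }
    if (lookup_ip_fn(r, "whitechain", client_ip) == DNSBL_POSITIVE) {
        return 1;                          /* whitelisted, even if blacklisted */
    }
    if (lookup_ip_fn(r, "blackchain", client_ip) == DNSBL_POSITIVE) {
        return 0;                          /* deny service */
    }
    return 1;                              /* default action */
}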




Re: mod_dnsbl_lookup 0.90

2005-08-15 Thread Jem Berkes
 That's super in-efficient for the majority case, and there's no
 application level caching, which tends to be a must for most
 implementations (even if it is only per-request, like Exim's or

We talked about this on IRC, and it seems the preferred approach is to 
delegate the caching responsibility to an entity that is made purely for 
that purpose, for example DJB's local DNS cache software or even rbldnsd 
(an extremely fast DNSBL server) running locally.

I did start to implement software-side caching in mod_dnsbl_lookup but it
raised questions as to whether it's appropriate to have global-scale
caching when we're doing connection- and request-oriented processing.

So I've left caching out of mod_dnsbl_lookup 0.91




Re: Performance proxy_ajp vs. mod_jk when TOMCAT integration with Apache

2005-08-15 Thread Mladen Turk

Xuekun Hu wrote:
Hi, 

 From a performance point of view, which connector should be used for Tomcat
 integration with Apache? proxy_ajp or mod_jk?

 I read some docs which said mod_jk should have better performance than
 proxying, while proxy_ajp in Apache 2.1 is an addition to mod_proxy
 and uses Tomcat's AJP protocol stack. Hence the above question.



Right, both mod_jk and proxy_ajp have almost the same performance,
and they are up to twice as fast compared with HTTP proxying, mostly because
of the constant connection pool. The AJP protocol gives its share too
by transferring less data across the wire.

If you are using Apache 2.1+ there is no need to use mod_jk and
maintain an additional module.

Regards,
Mladen.