Re: [PATCH 39299] - Patch review request

2007-03-01 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 Gesendet: Freitag, 2. März 2007 02:15
 An: dev@httpd.apache.org
 Betreff: Re: [PATCH 39299] - Patch review request
 
 
 Thanks Nick for responding to my request.
 
 My comments are in between.
 
 On Wed, Feb 28, 2007 at 10:49:48PM +, Nick Kew wrote:
  On Wed, 28 Feb 2007 14:31:19 -0800
  Basant Kukreja [EMAIL PROTECTED] wrote:
  
   Hi,
   I am Basant. I work in the web tier group at Sun Microsystems Inc.
   
   I have submitted the patch for bug 39299.
   Summary : Internal Server Error (500) on COPY
   URI : http://issues.apache.org/bugzilla/show_bug.cgi?id=39299
   
   
   Can one of the committers kindly review my patch, please,
   to see if it is acceptable or not?
   Patch is against 2.2.x branch.
  
  409 implies a condition the client can fix.  Your patch tests for
  a particular condition that is likely to be fixable in a server
  with DAV up & running.  But AFAICS it could also give a bogus 409,
  for example in the case of a newly-installed and misconfigured
  server.
 Can you kindly elaborate more? How can a newly misconfigured server
 send a 409? Here is my test case:
 
 DavLockDB /disk/apache/apache2/var/DAVLockFs
 <Directory /disk/apache/apache2/htdocs/DAVtest>
 Options Indexes FollowSymLinks
 AllowOverride None
 order allow,deny
 allow from all
 AuthName "SMA Development server"
 AuthType Basic
 DAV On
 </Directory>
 
 Now assume I misconfigured the server and intended to configure
 /DAVtest1 instead of /DAVtest, and I send this request:
 
 --
 COPY /DAVtest1/litmus/copysrc HTTP/1.1
 Host: myhostname.mydomain:4004
 User-Agent: litmus/0.11 neon/0.25.5
 Connection: TE
 TE: trailers
 Depth: 0
 Destination: 
 http://myhostname.mydomain:4004/DAVtest/litmus/nonesuch/foo
 Overwrite: F
 X-Litmus: copymove: 5 (copy_nodestcoll

I guess Nick's idea was the other way round:

COPY /DAVtest/litmus/copysrc

Destination: http://myhostname.mydomain:4004/DAVtest1/litmus/nonesuch/foo
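
Spelled out with the same headers as the trace above, that would be:

--
COPY /DAVtest/litmus/copysrc HTTP/1.1
Host: myhostname.mydomain:4004
Depth: 0
Destination: http://myhostname.mydomain:4004/DAVtest1/litmus/nonesuch/foo
Overwrite: F
--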

IMHO this direction would also better match the problem described in PR39299.

Regards

Rüdiger




Re: Questions on configuring Apache Server

2007-02-27 Thread Plüm , Rüdiger , VF EITO
Maybe modsecurity (http://www.modsecurity.org/) already does what you need.
Otherwise it gives you an impression of how to write an appropriate module to
do so.
Looking at http://apache.webthing.com/ for mod_accessibility and for
mod_proxy_html also seems to be a good idea, either to find out that what you
need is already there or to get an idea of how to do it yourself.

Regards

Rüdiger

 -Ursprüngliche Nachricht-
 Von: Erica Zhang [mailto:[EMAIL PROTECTED] 
 Gesendet: Dienstag, 27. Februar 2007 17:50
 An: dev@httpd.apache.org
 Betreff: Re: Questions on configuring Apache Server
 
 
 Hi,
 Thanks.
 
 Well, my idea is to analyze all requests from the client before they
 arrive at the user applications, and also to analyze all response HTML
 after it is created by the server applications and before it arrives
 at the client.
 
 To solve this problem, I originally wanted to set up two ports: one
 port for the user applications and the other for my tool. My tool
 would communicate with the user applications through the Apache HTTP
 Server. However, now I do not think that is a good idea.
 
 Now I am considering developing a simple tool, similar to the Apache
 HTTP Server, to catch the request and response. But I am still not
 sure whether this is a good idea, because I am not familiar with web
 application development.
 
 What is your idea ?
 
 Thanks,
 
 Erica
 
 Joshua Slive wrote:
 
  On 2/26/07, Erica Zhang [EMAIL PROTECTED] wrote:
 
  Hi,
 
  I am developing some component which needs Apache to be able to listen
  on two ports, instead of only the one default port. I do not know if
  there is some way to configure the Apache HTTP server to work in this
  way. I do not want to configure it as a virtual host.
 
 
  Listen 80
  Listen 81
  in httpd.conf should do the trick.
 
  Or if not, you need to better specify what you are trying to do.
 
  Joshua.
 
 
 


AW: svn commit: r507955 - /httpd/httpd/branches/2.0.x/STATUS

2007-02-16 Thread Plüm , Rüdiger , VF EITO

 -Ursprüngliche Nachricht-
 Von: Stuart Children 
 Gesendet: Freitag, 16. Februar 2007 12:40
 An: dev@httpd.apache.org
 Betreff: Re: svn commit: r507955 - /httpd/httpd/branches/2.0.x/STATUS
 
 
 William A. Rowe, Jr. wrote:
  -1 as is, Jim would you -please- post the actual patch you 
 are asking
  us to vote on :-?  (veto removal automatic upon availability of the
  link to the specific patch you plan to commit that actually applies,
  and probably my review pretty quickly, too.)
 
 Trunk patch should apply with offset, but to help things 
 along here's a 

It applies to 2.2.x, but this is about 2.0.x where the trunk version does
NOT work cleanly.


Regards

Rüdiger


Re: Regarding graceful restart

2007-02-09 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Henrik Nordstrom 
 Gesendet: Freitag, 9. Februar 2007 16:33
 An: dev@httpd.apache.org
 Betreff: Re: Regarding graceful restart
 
 
 tor 2007-02-08 klockan 17:15 -0800 skrev Devi Krishna:
  Hi, 
  
   Resending this mail, just in case anyone would have
  suggestions/inputs as how to fix this for connections that 
 are in the
  ESTABLISHED state or FIN state or any other TCP state other than
  LISTEN
 
 Maybe change the wake up call to just connect briefly 
 without actually
 sending a full HTTP request? This should be sufficient to wake up any
 processes sleeping in accept() and will not cause anything to get
 processed..

Not if BSD accept filters are in place. In this case the kernel waits until it
sees an HTTP request before it wakes up the process.
And on Linux with TCP_DEFER_ACCEPT enabled you need to send at least one byte
of data.

Regards

Rüdiger



AW: mod_cache & mod_disk_cache

2007-02-01 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Tigges 
 Gesendet: Donnerstag, 1. Februar 2007 12:36
 An: dev@httpd.apache.org
 Betreff: mod_cache & mod_disk_cache
 
 
 Hi,
 I use Apache 2.2 with mod_perl, mod_cache & mod_disk_cache.
 I add a unique string to $r->uri with a perl script, but have to
 save/load the cache files without this string. The URI I build looks
 like /UNIQ123456789/filename, but the cached target should look like
 /filename. Other paths, like /media/pic1.gif, should not be affected.
 The UNIQ is needed for some special operations behind the proxy.
 Changes to this UNIQ technique are impossible!
 Now I need to know how I can set/change the URI which is used to save
 and load the cache file.
 When I change r->uri in mod_cache.c, mod_disk_cache.c does not
 get the edited URI.
 I think I have to use an apr_xxx hook or something like this, but
 I have no idea which one to use.

This may be possible if you register a different ap_cache_generate_key
optional function, i.e. provide your own implementation of it.
I guess you should register your optional function during the register_hooks
processing, roughly as sketched below.
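
A minimal, untested sketch of what I mean (the module name, the key format and
the prefix handling are made up for illustration; mod_cache should pick the
registered function up via APR_RETRIEVE_OPTIONAL_FN at post_config time, so
module load order may matter):

#include <string.h>

#include "httpd.h"
#include "http_config.h"
#include "apr_strings.h"
#include "apr_optional.h"

/* Strip the /UNIQxxxxxxxxx prefix before the URI becomes part of the cache
 * key. The key format here is deliberately simplified; the default key also
 * contains scheme, port and query string, so mimic that as needed. */
static apr_status_t uniq_cache_generate_key(request_rec *r, apr_pool_t *p,
                                            char **key)
{
    const char *uri = r->uri;

    if (strncmp(uri, "/UNIQ", 5) == 0) {
        const char *rest = ap_strchr_c(uri + 1, '/');
        if (rest) {
            uri = rest;   /* "/UNIQ123456789/filename" -> "/filename" */
        }
    }
    *key = apr_pstrcat(p, r->hostname ? r->hostname : "", uri, NULL);
    return APR_SUCCESS;
}

static void uniqkey_register_hooks(apr_pool_t *p)
{
    /* register under the name mod_cache retrieves as an optional function */
    apr_dynamic_fn_register("ap_cache_generate_key",
                            (apr_dynamic_fn_t *)uniq_cache_generate_key);
}

module AP_MODULE_DECLARE_DATA uniqkey_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    uniqkey_register_hooks
};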

Regards

Rüdiger



Re: mod_cache: save filter recalls body to non-empty brigade?

2007-01-24 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Graham Leggett 
 Gesendet: Mittwoch, 24. Januar 2007 16:15
 An: dev@httpd.apache.org
 Cc: dev@httpd.apache.org
 Betreff: Re: mod_cache: save filter recalls body to non-empty brigade?
 
 
 On Wed, January 24, 2007 2:15 pm, Niklas Edmundsson wrote:
 
  In mod_cache, recall_body() is called in the 
 cache_save_filter() when
  revalidating an entity.
 
  However, if I have understood things correctly the brigade 
 is already
  populated when the save filter is called, so calling 
 recall_body() in
  this case would place additional stuff in the bucket brigade.
 
  Wouldn't it be more correct to empty the brigade before calling
  recall_body()? Or am I missing something?
 
 I think the theory is that recall_body() should only be 
 called on a 304
 not modified (with no body), so in theory there is no existing body
 present, so no need to clear the brigade.
 
 Of course practically you don't want to make assumptions about the
 emptiness of the existing brigade, so clearing the brigade as 
 a first step
 makes definite sense.

There is no need to clear the brigade: the brigade passed to the filter
is named "in", while the one where recall_body stores the cached file is "bb".
In the case of a recalled body we pass bb down the chain, not in.
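
To make the distinction concrete, the revalidation path looks roughly like
this (paraphrased from memory, not the literal mod_cache code; the names
follow the provider callback discussed above):

/* inside cache_save_filter(f, in), after a 304 from the backend */
apr_bucket_brigade *bb;

bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);

/* the provider's recall_body() fills the freshly created bb ... */
recall_body(handle, r->pool, bb);

/* ... and bb, not "in", is what gets sent down the filter chain */
return ap_pass_brigade(f->next, bb);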

Regards

Rüdiger



Re: setting request timeout in mod_proxy/mod_proxy_balancer

2007-01-23 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Lucas Brasilino 
 Gesendet: Dienstag, 23. Januar 2007 14:43
 An: dev@httpd.apache.org
 Betreff: Re: setting request timeout in mod_proxy/mod_proxy_balancer

 
  Based on your configuration from above, the following should work:
 
 No, it doesn't. I have already tried that.

It should.

 From docs:
 'Connection timeout in seconds. If not set the Apache
 will wait until the free connection is available. This
 directive is used for limiting the number of connections
 to the backend server together with max  parameter.'

The documentation is misleading and probably wrong, but
the code speaks a clear language.

 
 My understanding is that this option sets the
 TCP connection (SYN -> SYN+ACK -> ACK) timeout,
 not the HTTP response timeout.

It is used for the response timeout.
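
For illustration only (this is not the poster's configuration, which was
snipped above, and I am assuming the parameter in question is the worker's
timeout key): per the above, a timeout set on the worker is what bounds the
wait for the backend's response, e.g.

<Proxy balancer://mycluster>
    BalancerMember ajp://1.2.3.4:8009 route=tomcat1 timeout=30
    BalancerMember ajp://1.2.3.5:8010 route=tomcat2 timeout=30
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass /app balancer://mycluster/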

Regards

Rüdiger



Re: mod_cache+mod_rewrite behaviour

2007-01-19 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Fredrik Widlund 
 Gesendet: Freitag, 19. Januar 2007 10:23
 An: dev@httpd.apache.org
 Betreff: mod_cache+mod_rewrite behaviour
 
 
 I'm trying to get mod_cache to ignore the query_string part of the
 request, since our customers use clicktags in references to static
 banners. I need to cache these requests to improve performance.
 
 My idea was to RewriteRule .* %{REQUEST_URI}?, however I have learned
 that mod_cache is run as a quick_handler. However the actual
 cache_create_entity() call runs _after_ the URL has been rewritten,
 resulting in the behaviour below (some debug calls have been added to
 mod_cache). One can see that create_select uses the unrewritten request
 "http://1.2.3.4:80/index.html?ref=x", and create_select the rewritten
 "http://1.2.3.4:80/index.html?". This clearly makes mod_cache and
 mod_rewrite incompatible.

This is a known issue; it is fixed on trunk and proposed for backport to 2.2.x.
Please have a look at PR40805
(http://issues.apache.org/bugzilla/show_bug.cgi?id=40805).
You will find references to a patch there.
The fix is to use the *unmodified* URL / query string consistently.

Regards

Rüdiger



Re: mod_cache+mod_rewrite behaviour

2007-01-19 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Fredrik Widlund 
 Gesendet: Freitag, 19. Januar 2007 12:30
 An: dev@httpd.apache.org
 Betreff: Re: mod_cache+mod_rewrite behaviour
 
 
 Hi,
 
 Thanks for the information. Tried the patch and it mends the
 behaviour; however, it doesn't really help me of course, since
 I am indeed trying to rewrite the URL before it's cached.
 
 What are the chances of getting a patch that adds a
 CacheIgnoreQueryString option accepted? Who/where do I ask this?

This is the right place for discussion. I would propose the following:

1. Create a bug report describing your problem and mark it as enhancement.
2. If you have a patch for CacheIgnoreQueryString attach it to the report.
3. Come back here with your problem (in this thread), refer to the report
   and attach the patch for convenience.
4. Give some arguments why this is not only useful for you but for everyone
   else, and, if there are any drawbacks as a result of your patch, why it is
   worth the tradeoff.
5. Be PPP (Patient, Polite, Persistent) :-). Keep on bugging us from time to
   time if the reaction to your proposal is mere inactivity rather than an
   explicit decline.

Regards

Rüdiger



Re: Mod_cache expires check

2007-01-18 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Roy T. Fielding 
 Gesendet: Mittwoch, 17. Januar 2007 23:23
 An: Ruediger Pluem
 Cc: dev@httpd.apache.org; dev@apr.apache.org
 Betreff: Re: Mod_cache expires check
 
 
 On Jan 17, 2007, at 12:23 PM, Ruediger Pluem wrote:

 
  I would say 0 is not a bad day. But if this is a bug it is an
  APR(-util) bug.
  Thus I forward it to the apr dev list.
 
 No, it is a bug in the expression.  A date is an unsigned 
 value and any

But apr_time_t is a signed 64-bit integer, so it would be possible to
define -1 as the flag for an invalid date. I agree that a date itself
is an unsigned value, but this does not make 0 an invalid date per se; or
is there a definition somewhere that the first valid date is one second
after the Unix epoch?

 error is specifically forced to 0 so that the comparison

Just curious: is the Unix epoch an invalid date in the Expires header?
(As it lies in the past, it does not really matter for the question of
whether the document is expired, as it would be expired in both cases.)


Regards

Rüdiger



Re: vote on concept of ServerTokens Off

2006-12-06 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Jeff Trawick 
 Gesendet: Mittwoch, 6. Dezember 2006 04:17
 An: dev@httpd.apache.org
 Betreff: Re: vote on concept of ServerTokens Off
 
 
 On 12/5/06, Ruediger Pluem [EMAIL PROTECTED] wrote:
 


  What is the latest patch that should be applied?
 
 I'm pretty darn sure there is no latest ServerTokens Off patch to
 apply, because it needs to be reworked slightly to work with the
 patch/commit you refer to below.
 
  I just did a quick review of
 
  http://issues.apache.org/bugzilla/attachment.cgi?id=18775
 
   and I think we should use the long version (aka
   ap_get_server_description()) in the output of mod_status and mod_info
   instead of the possibly turned-off version ap_get_server_description()
 
 better to review this commit:
 
 http://svn.apache.org/viewvc?view=rev&revision=440337
 or for 2.2.x
 http://svn.apache.org/viewvc?view=rev&revision=446606
 
 which does what you want.
 
 Maybe I was confused earlier ;)  Reviewed by: rpluem, jim

Ahm, good point. This all seemed to be so well known to me :-).
I guess I need to increase my TTL for votes I gave on backports ;-).

Regards

Rüdiger



Re: walk caching to avoid extra authnz

2006-12-06 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Nick Kew 
 Gesendet: Mittwoch, 6. Dezember 2006 14:34
 An: dev@httpd.apache.org


 
 In this instance, we need to work through how this relates to
 relevant updates leading to the CHANGES file entry:
 
  core: Do not allow internal redirects like the DirectoryIndex of
  mod_dir to circumvent the symbolic link checks imposed by
  FollowSymLinks and SymLinksIfOwnerMatch. [Nick Kew, 
 Ruediger Pluem,
  William Rowe]
 
 I'm struggling to find the relevant changes in SVN, and there are
 no pointers in the relevant bug report PR#14206.

I guess

r423886
r425057
r425394

are what you are looking for.
Furthermore I remember from the discussions on these changes that we
should be very, very cautious when changing this code, as it is very
security sensitive.


Regards

Rüdiger



Re: Creating a thread safe module and the problem of calling of 'CRYPTO_set_locking_callback' twice!

2006-12-06 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Nick Kew
 Gesendet: Mittwoch, 6. Dezember 2006 15:06
 An: dev@httpd.apache.org
 Betreff: Re: Creating a thread safe module and the problem of 
 calling of 'CRYPTO_set_locking_callback' twice!

 
 OpenSSL is just one of thousands of libraries a module developer might
 want to use, and isn't one I've drawn on for examples (as you will
 no doubt infer from my ignorance of it:-)

Maybe OpenSSL is a good reason for your 2nd edition :-).

Regards

Rüdiger



AW: vote on concept of ServerTokens Off

2006-12-06 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Mads Toftum 
 Gesendet: Mittwoch, 6. Dezember 2006 15:50
 An: dev@httpd.apache.org
 Betreff: Re: vote on concept of ServerTokens Off
 
 
 On Wed, Dec 06, 2006 at 03:45:54PM +0100, Lars Eilebrecht wrote:
  So, is that a -1 or -0?
  
 A peanut gallery -1. I feel very strongly about pretending to 
 implement
 security measures that does not help one bit.

The original motivation is not about security but about the amount of data,
as every single byte transmitted has to be paid for.
FWIW I think that saving about 20 bytes in the HTTP headers
does not solve that cost problem. Usually you can save much more with
other measures like mod_deflate or tidying your output.
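
For example, a typical mod_deflate setup (an illustration, not taken from the
original mail) saves far more per response than trimming the Server header
ever could:

LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css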


Regards

Rüdiger



Re: Auth in Location: can one basic auth config override another?

2006-11-29 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Graham Leggett 
 Gesendet: Mittwoch, 29. November 2006 12:31
 An: dev@httpd.apache.org
 Betreff: Auth in Location: can one basic auth config override another?
 
 
 Hi all,
 
 After much experimentation with httpd v2.0, where an attempt 
 is made to
 set one basic auth policy for Location /, and a different basic auth
 policy for Location /bugzilla, it seems that regardless of 
 the order of
 the Location in the config file, the most general config always wins.
 
 In other words, it seems currently to be impossible to define 
 a different
 basic auth config in a subdirectory in an urlspace, if an 
 existing more
 general basic auth config exists for a parent directory.
 
 Can anyone who knows the AAA stuff better confirm whether 
 this is true or
 not?

I don't think so. I have the following configuration running successfully:

<Location /somewhere>
   Options None
   AllowOverride None
   DAV On
   AuthName "Access for somewhere"
   AuthType Basic
   AuthUserFile /opt/apache-2.0.55/conf/transfer/passwd.manager

   order allow,deny
   allow from all
   Satisfy all

   Options none
   ForceType application/octet-stream

 <LimitExcept OPTIONS>
   require user manager
 </LimitExcept>
</Location>

<Location /somewhere/deeper/evendeeper>
   AuthName "Access for evendeeper"

 <LimitExcept OPTIONS>
   require user manager somebodyelse
 </LimitExcept>
</Location>

If I go to /somewhere/deeper/evendeeper I get the correct Realm presented in 
the browser.

Regards

Rüdiger



Re: Auth in Location: can one basic auth config override another?

2006-11-29 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Graham Leggett
 Gesendet: Mittwoch, 29. November 2006 13:31
 
 Which version of httpd are you using?

2.0.55

 
 
 In my case, I am trying to define one basic auth config under / using
 mod_authz_ldap, and a second completely separate and independent basic
 auth config /bugzilla using mod_auth_ldap (note the different 
 modules).
 This doesn't want to work.

Ok, this is different from my case. I only have different realms and requires.

Regards

Rüdiger



Re: http://svn.apache.org/viewvc?view=rev&revision=426799

2006-11-16 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Nick Kew 
 Gesendet: Donnerstag, 16. November 2006 02:36
 
 
 On Wed, 15 Nov 2006 21:33:07 +0100
 Ruediger Pluem [EMAIL PROTECTED] wrote:
 
  
  Because of your question I had to rewalk the code path and I think
  I found two other bugs with my code. I fixed them on trunk:
  
  http://svn.apache.org/viewvc?view=rev&revision=475403
 
 Hang on.  It's worse than that.  Or else I'm suffering
 "shouldn't be working in the wee hours" syndrome.
 
 When you first set up the validation buffer, you copy available
 data into it, and set validation_buffer_length.  Now the memcpy
 in this section of code is overwriting validation_buffer,
 when it should be appending to it.  Then you increment the
 buffer_length, and decrement avail_in by the number of bytes
 appended.  At that point, if avail_in is nonzero we might want
 to log a warning of extra junk.

Argh. You are right. Good catch. I currently have no svn access, but
the patch below should fix it:

Index: mod_deflate.c
===================================================================
--- mod_deflate.c   (revision 475613)
+++ mod_deflate.c   (working copy)
@@ -1144,20 +1144,24 @@
             copy_size = VALIDATION_SIZE - ctx->validation_buffer_length;
             if (copy_size > ctx->stream.avail_in)
                 copy_size = ctx->stream.avail_in;
-            memcpy(ctx->validation_buffer, ctx->stream.next_in, copy_size);
-        } else {
+            memcpy(ctx->validation_buffer + ctx->validation_buffer_length,
+                   ctx->stream.next_in, copy_size);
+            /* Saved copy_size bytes */
+            ctx->stream.avail_in -= copy_size;
+        }
+        if (ctx->stream.avail_in) {
             ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
                           "Zlib: %d bytes of garbage at the end of "
                           "compressed stream.", ctx->stream.avail_in);
+            /*
+             * There is nothing worth consuming for zlib left, because it is
+             * either garbage data or the data has been copied to the
+             * validation buffer (processing validation data is no business
+             * for zlib). So set ctx->stream.avail_in to zero to indicate
+             * this to the following while loop.
+             */
+            ctx->stream.avail_in = 0;
         }
-        /*
-         * There is nothing worth consuming for zlib left, because it is
-         * either garbage data or the data has been copied to the
-         * validation buffer (processing validation data is no business for
-         * zlib). So set ctx->stream.avail_in to zero to indicate this to
-         * the following while loop.
-         */
-        ctx->stream.avail_in = 0;
     }
 
     zRC = Z_OK;


 
  http://svn.apache.org/viewvc?view=rev&revision=475406
 
 Why?  I'd like to understand what makes that necessary.

I think I remember cases where two EOS buckets run down the chain. IMHO
this is a bug elsewhere, but I would like to prevent SEGFAULTing in mod_deflate
if this happens. So you might call it defensive programming here.

 
 Edge-cases can be notoriously hard to test.  I wonder if there's
 a compression/zlib test suite we could use?

Regarding the compressed content, I simply used files that I compressed with
gzip and stripped of the .gz extension. I guess the hard part here is splitting
them into buckets and brigades in such a way that all edge cases can be tested.
A filter just after the default handler would be handy for this: it would
allow us to split the brigade from the default handler at certain boundaries and
to split the buckets inside these brigades at certain predefined boundaries.

Regards

Rüdiger



Re: [PATCH] mod_disk_cache fails to compile [Was: svn commit: r468373 - in /httpd/httpd/trunk: CHANGES modules/cache/mod_cache.c modules/cache/mod_cache.h modules/cache/mod_disk_cache.c modules/cache/

2006-11-02 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Martin Kraemer 
 Gesendet: Donnerstag, 2. November 2006 13:22
 An: dev@httpd.apache.org
 Betreff: Re: [PATCH] mod_disk_cache fails to compile [Was: 
 svn commit: r468373 - in /httpd/httpd/trunk: CHANGES 
 modules/cache/mod_cache.c modules/cache/mod_cache.h 
 modules/cache/mod_disk_cache.c modules/cache/mod_disk_cache.h 
 modules/cache/mod_mem_cache.c]
 
 
 On Thu, Nov 02, 2006 at 01:00:29PM +0200, Graham Leggett wrote:
  If you can send more details, I can get to the bottom of it -
  fails to compile doesn't tell me anything useful.
 
 mod_disk_cache.c: In function `open_new_file':
 mod_disk_cache.c:1304: `pdconf' undeclared (first use in this 
 function)
 mod_disk_cache.c:1304: (Each undeclared identifier is 
 reported only once
 mod_disk_cache.c:1304: for each function it appears in.)

So, Martin, you are failing at exactly the same point as I do
(http://mail-archives.apache.org/mod_mbox/httpd-dev/200611.mbox/[EMAIL PROTECTED]).
So this probably convinces Graham that it is not fixed :-).

Regards

Rüdiger



Re: [PATCH] mod_disk_cache fails to compile [Was: svn commit: r468373 - in /httpd/httpd/trunk: CHANGES modules/cache/mod_cache.c modules/cache/mod_cache.h modules/cache/mod_disk_cache.c

2006-11-02 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Graham Leggett [mailto:[EMAIL PROTECTED] 
 Gesendet: Donnerstag, 2. November 2006 13:57
 An: dev@httpd.apache.org
 Cc: dev@httpd.apache.org
 Betreff: Re: [PATCH] mod_disk_cache fails to compile [Was: 
 svn commit: r468373 - in /httpd/httpd/trunk: CHANGES 
 modules/cache/mod_cache.c modules/cache/mod_cache.h 
 modules/cache/mod_disk_cache.c modules/cache/mod_disk_cache.h 
 modules/cache/mod_mem_c
 
 
 On Thu, November 2, 2006 2:40 pm, Plüm, Rüdiger, VF EITO wrote:
 
  So Martin you are failing exactly at the same point as I do
  
 (http://mail-archives.apache.org/mod_mbox/httpd-dev/200611.mbo
x/[EMAIL PROTECTED]).
 So probably this convinces Graham that it is not fixed :-).

 I don't have a FreeBSD machine, or more accurately a dev machine with
 SENDFILE at the moment, so just saying it didn't build doesn't help me
 fix it :)

Sorry for that. I assumed that you have at least one of the following OSes at
your disposal: :-)

FreeBSD
Linux
Solaris

Regards

Rüdiger




Re: cache: the store_body interface

2006-10-31 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Joe Orton  
 Gesendet: Dienstag, 31. Oktober 2006 11:27
 An: dev@httpd.apache.org
 Betreff: Re: cache: the store_body interface
 
 
 On Mon, Oct 30, 2006 at 10:13:09PM +0100, Ruediger Pluem wrote:
   2) keep the interface as-is, but read buckets in 
 mod_cache and partition
   the brigade manually; only pass a small brigade with 
 known-length
   buckets to the provider. (so no morphing and no arbitrary memory
   consumption)
  
  As far as I can see this small brigade would only contain 
 the following bucket
  types (pipe and file buckets would get morphed due to 
 apr_bucket_read):
  
  heap transient mmap
 
 Any bucket type which directly represents mapped memory would 
 remain: so 
 POOL and IMMORTAL too of those shipped in APR-util.

Thanks for pointing that out. I missed those two types. But this changes
nothing about my remark that 2) is effectively 3), just with the buffers
packaged into buckets and brigades. So 3) seems to be the clearer
and more honest interface to me.
Furthermore it does not require provider programmers to understand
buckets and brigades :-).

 
   4) change the interface: pass some abstract flush-me 
 callback in,
   which the provider can call to pass up then delete the bucket.
   (apr_brigade_flush doesn't quite fit the bill unfortunately)
  
  Just curious: Why do you think it does not fit the bill? 
 Because it requires
  a brigade instead of a bucket or because we possibly would 
 need to pass the
  filter to it as ctx?
 
 Yes to both :) it would require the provider to move 
 flushable buckets 
 into a temp brigade before calling the apr_brigade_flush-type 
 callback, 
 and would require passing f-next in somehow as well.  So you get all 
 the downsides/benefits of the provider having to know how to 
 be a Good 
 Filter, but adding complexity to prevent it from being a real filter; 
 smells wrong.

OK, sounds like 4) is 1) in disguise :-).
As I think that 1) is bad and 2) is the same as 3), with 3)
having the better interface, my only remaining vote is 3).

Regards

Rüdiger



Re: cache: the store_body interface

2006-10-31 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Joe Orton 
 Gesendet: Dienstag, 31. Oktober 2006 13:52
 An: dev@httpd.apache.org
 Betreff: Re: cache: the store_body interface
 
 
 On Tue, Oct 31, 2006 at 10:59:49AM +, Joe Orton wrote:
  On Mon, Oct 30, 2006 at 02:56:24PM -0700, Justin Erenkrantz wrote:
   On 10/30/06, Nick Kew [EMAIL PROTECTED] wrote:
   What does that [#1] break?
   
   Seems an easy/low-level solution.  Does the provider return a
   status value to say I have/haven't passed this stuff down the
   chain?  It has the feel of something that pulls down the level
   of abstraction: not sure what that loses.
   
   With #1, you don't have any knowledge as to when the 
 filter died or
   how it died - was it the cache storage that died?  Or was it the
   client?  Who knows.  So, it makes recovering from storage failures
   very problematic or you put a set of 'hidden' constraints on the
   storage functions to try to encourage recovery - but I'd 
 rather make
   such things explicit.  -- justin
  
  I very much sympathise with this argument.  But it does 
 mean that the 
  storage provider cannot break any of the assumptions 
 mentioned in the 
  other thread: it enforces the synchronous store-to-disk and 
  write-to-client model.
 
 I meant to also mention (sorry for all the mail): it prevents 
 mod_mem_cache's fd caching trick too, which relies on being 
 passed FILE 
 buckets.

That seems like an important point to me. Although I have never used
the fd caching of mod_mem_cache, this would mean we would actually
have to drop this feature. That looks bad to me. Isn't it a showstopper
for implementing #3 as the new interface?

Regards

Rüdiger



Re: cache: the store_body interface

2006-10-31 Thread Plüm , Rüdiger , VF EITO


 Von:  
  Justin Erenkrantz
 Gesendet: Dienstag, 31. Oktober 2006 16:15
 An: dev@httpd.apache.org
 Betreff: Re: cache: the store_body interface
 
 
 On 10/31/06, Plüm, Rüdiger, VF EITO 
 [EMAIL PROTECTED] wrote:
  That seems to be an important point to me. Although I never used
  the fd caching of mod_mem_cache this would mean we actually would
  have to dump this feature. This looks bad to me. Isn't this 
 a showstopper
  for implementing #3 as new interface?
 
 Honestly, mod_mem_cache is a complete joke.  It's never worked

I am not quite sure if Bill (not the other Bill) shares this point of view :-).

Regards

Rüdiger



AW: [Fwd: Re: Apache 2.2.3 mod_proxy issue]

2006-10-31 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 Gesendet: Dienstag, 31. Oktober 2006 15:18
 An: dev@httpd.apache.org
 Betreff: Re: [Fwd: Re: Apache 2.2.3 mod_proxy issue]
 

 
  How about
 
  RewriteEngine On
  RewriteRule ^(.*\.jsp|/servlet/.*)$ balancer://mycluster$1 [P]
 
 
  <Proxy balancer://mycluster>
  ProxySet stickysession=JSESSIONID nofailover=On
  BalancerMember ajp://1.2.3.4:8009 route=tomcat1 max=10
  BalancerMember ajp://1.2.3.5:8010 route=tomcat2 max=10
  </Proxy>
 
 
 Seems to be that we should simply make ProxyPass more
 pattern aware... We don't need a full regex for 95% of
 the cases, and so we'd have a nice faster impl.
 
 Needing to switch to (and load in) mod_rewrite for something
 that the proxy module should do itself seems backwards :)

I am a regexp fan and have mod_rewrite loaded on all of our servers anyway
for some standard tasks, so this actually does not bother me :-).
But of course there are other users as well, and using and understanding
regular expressions is not always easy and straightforward. So your proposal
makes sense to me. OTOH, AFAIK we currently only have two types of matching
throughout httpd (examples below):

1. Prefix matching, as in ProxyPass, Location, ...
2. Regular expressions, as in mod_rewrite, LocationMatch, FilesMatch, ...

So this wildcard matching type would be something new.
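
To illustrate the two kinds with made-up examples (hostnames and paths are
invented):

# 1. Prefix matching
ProxyPass /app/ http://backend.example.com/app/

# 2. Regular expressions
RewriteRule ^/(.*\.jsp)$ balancer://mycluster/$1 [P]
<LocationMatch "^/servlet/">
    # directives that should apply to all matching URLs
</LocationMatch>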

Regards

Rüdiger


Re: AW: [Fwd: Re: Apache 2.2.3 mod_proxy issue]

2006-10-31 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 Gesendet: Dienstag, 31. Oktober 2006 17:05
 An: dev@httpd.apache.org
 Betreff: Re: AW: [Fwd: Re: Apache 2.2.3 mod_proxy issue]
 
 
  
  I am a regexp fan and have mod_rewrite loaded in any of our 
 servers =
  anyway
  for some standard tasks, so this actually does not bother me :-).
 
 My concern is that the proxy-selection and operation should be
 fast, as fast a possible, and mod_rewrite simply isn't the fastest
 puppy in the world. Avoiding full-blown regex in ProxyPass allows
 for nice pattern matching (globbing) without the overhead of
 a full regex engine... fast fast fast should be the name of
 the game.

That depends on how the proxy is used. In the example given we are talking
about a connection to a JSP on the backend. I am pretty sure that mod_rewrite's
share of the total response time / resource consumption (CPU / memory) here
is fairly low. So honestly, this is not my concern here.

Regards

Rüdiger



Re: svn commit: r467655 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_cache.xml modules/cache/mod_cache.c modules/cache/mod_cache.h

2006-10-26 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Graham Leggett 
 Gesendet: Mittwoch, 25. Oktober 2006 22:21
 An: dev@httpd.apache.org
 Betreff: Re: svn commit: r467655 - in /httpd/httpd/trunk: 
 CHANGES docs/manual/mod/mod_cache.xml 
 modules/cache/mod_cache.c modules/cache/mod_cache.h
 

 
 I managed to solve this problem last night.
 
 Took a while and a lot of digging to figure it out, but in 
 the end it is 
 relatively simple.
 
 The ap_core_output_filter helps us out:
 
  /* Scan through the brigade and decide whether to 
 attempt a write,
   * based on the following rules:
   *
   *  1) The new_bb is null: Do a nonblocking write of as much as
   * possible: do a nonblocking write of as much data 
 as possible,
   * then save the rest in ctx-buffered_bb.  (If 
 new_bb == NULL,
   * it probably means that the MPM is doing asynchronous write
   * completion and has just determined that this connection
   * is writable.)
   *
  [snip]
 

AFAIK this is only true on trunk, due to Brian's async write patches there.
They have not been backported to 2.2.x and I think they are unlikely to
be backported. Thus this solution for mod_disk_cache only solves
the problem on trunk. This is not necessarily bad, but I guess people should
be aware of it.

Regards

Rüdiger




Re: svn commit: r467655 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_cache.xml modules/cache/mod_cache.c modules/cache/mod_cache.h

2006-10-26 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Davi Arnaut 
 Gesendet: Donnerstag, 26. Oktober 2006 13:47
 An: dev@httpd.apache.org
 Betreff: Re: svn commit: r467655 - in /httpd/httpd/trunk: 
 CHANGES docs/manual/mod/mod_cache.xml 
 modules/cache/mod_cache.c modules/cache/mod_cache.h
 
 
 Graham Leggett wrote:
  On Wed, October 25, 2006 11:48 pm, Davi Arnaut wrote:
  

  What Joe's patch does is remove this first implicitly 
 created bucket
  from the brigade, placing it on the brigade on a temporary 
 brigade for
  sending it to the client.
  
  Ok, this makes more sense, but in its current form the 
 temporary brigade
  should not be necessary.
 
 But it would be better. We don't need to keep creating brigades with
 apr_brigade_split(), we could only move the buckets to a temporary

+1 on this. Creating brigades over and over again consumes memory that
is only freed once the request pool gets cleared. So if we create a lot
of brigades we have a temporary memory leak here.
AFAIK the (data only?) memory allocated by buckets is freed immediately when 
they get destroyed.

Regards

Rüdiger



Re: svn commit: r467655 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_cache.xml modules/cache/mod_cache.c modules/cache/mod_cache.h

2006-10-26 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Joe Orton 
 Gesendet: Mittwoch, 25. Oktober 2006 17:59
 An: dev@httpd.apache.org
 Betreff: Re: svn commit: r467655 - in /httpd/httpd/trunk: 
 CHANGES docs/manual/mod/mod_cache.xml 
 modules/cache/mod_cache.c modules/cache/mod_cache.h
 
 

 Index: modules/cache/mod_disk_cache.c
 ===================================================================
 --- modules/cache/mod_disk_cache.c    (revision 450104)
 +++ modules/cache/mod_disk_cache.c    (working copy)

 @@ -998,12 +998,16 @@
          dobj->file_size = 0;
      }
 
 -    for (e = APR_BRIGADE_FIRST(bb);
 -         e != APR_BRIGADE_SENTINEL(bb);
 -         e = APR_BUCKET_NEXT(e))
 -    {
 +    e = APR_BRIGADE_FIRST(bb);
 +    while (e != APR_BRIGADE_SENTINEL(bb) && !APR_BUCKET_IS_EOS(e)) {
          const char *str;
          apr_size_t length, written;
 +
 +        if (APR_BUCKET_IS_METADATA(e)) {
 +            e = APR_BUCKET_NEXT(e);
 +            continue;
 +        }
 +

Why ignore the metadata buckets? This means that flush buckets do not get passed
up the chain via the temporary brigade tmpbb. This is bad IMHO.
I guess the best thing we can do here is to add them to tmpbb before doing the
continue. Of course this means that all additions need to be done to the tail
of tmpbb.
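
What I have in mind is roughly the following (an untested sketch against the
hunk quoted above; tmpbb is the temporary brigade from the patch):

if (APR_BUCKET_IS_METADATA(e)) {
    apr_bucket *next = APR_BUCKET_NEXT(e);
    /* do not drop FLUSH (or other metadata) buckets: move them to the
     * tail of the temporary brigade that is passed on, instead of
     * silently skipping them */
    APR_BUCKET_REMOVE(e);
    APR_BRIGADE_INSERT_TAIL(tmpbb, e);
    e = next;
    continue;
}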

Regards

Rüdiger



AW: mod_deflate and flush?

2006-10-25 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: news 
 Gesendet: Mittwoch, 25. Oktober 2006 00:36
 An: dev@httpd.apache.org
 Betreff: mod_deflate and flush?
 
 
 Hi,
 
 JSP (via mod_jk) and maybe other plugins sometimes flush the 
 connection,
 so that the browsers receive everything that's stuck in some internal
 buffer. Here's a quote from mod_jk's docs:
 
 
 JkOptions +FlushPackets
 JkOptions FlushPackets, you ask mod_jk to flush Apache's connection
 buffer after each AJP packet chunk received from Tomcat.
 
 
 mod_deflate breaks that. I know the issue from ssh already. 

I know. There are some patches to fix that. These are proposed for backport to 
2.2.x:

* mod_deflate: Rework the inflate output and deflate output filter to fix
  several issues: incorrect handling of flush buckets, potential memory
  leaks, excessive memory usage in the inflate output filter for large
  compressed content
  PR: 39854
  Trunk version of patch:
    Changes on deflate output filter:
      http://svn.apache.org/viewvc?view=rev&revision=422731
      http://svn.apache.org/viewvc?view=rev&revision=422736
      http://svn.apache.org/viewvc?view=rev&revision=422739
      http://svn.apache.org/viewvc?view=rev&revision=423940
      http://svn.apache.org/viewvc?view=rev&revision=424759
      http://svn.apache.org/viewvc?view=rev&revision=424950
      http://svn.apache.org/viewvc?view=rev&revision=425109
      http://svn.apache.org/viewvc?view=rev&revision=426790
      http://svn.apache.org/viewvc?view=rev&revision=426791
      http://svn.apache.org/viewvc?view=rev&revision=426793
      http://svn.apache.org/viewvc?view=rev&revision=426795

    Changes on inflate output filter:
      http://svn.apache.org/viewvc?view=rev&revision=416165
      http://svn.apache.org/viewvc?view=rev&revision=426799

    Changelog entry:
      http://svn.apache.org/viewvc?view=rev&revision=437668

  2.2.x version of patch:
    Trunk versions work. For convenience of review I merged the patches
    for the deflate output filter and the inflate output filter:
    Changes on deflate output filter:
      Patch:
        http://people.apache.org/~rpluem/patches/mod_deflate_rework/deflate_output.diff
      Merged subversion comments:
        http://people.apache.org/~rpluem/patches/mod_deflate_rework/deflate_output.log
    Changes on inflate output filter:
      Patch:
        http://people.apache.org/~rpluem/patches/mod_deflate_rework/inflate_output.diff
      Merged subversion comments:
        http://people.apache.org/~rpluem/patches/mod_deflate_rework/inflate_output.log
    The patch for the inflate output filter requires the patch for the
    deflate output filter.

Keep in mind that people.apache.org is currently down. I would appreciate it if
you could test the patches and report back here.


Regards

Rüdiger


Re: mod_cache responsibilities vs mod_xxx_cache provider responsibilities

2006-09-21 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Niklas Edmundsson 
 Gesendet: Donnerstag, 21. September 2006 11:38
 An: dev@httpd.apache.org
 Betreff: Re: mod_cache responsibilities vs mod_xxx_cache 
 provider responsibilities
 
 
 On Thu, 21 Sep 2006, Graham Leggett wrote:
 
 
  This means the backend server will still see a spike of traffic while the
  object is being cached, but the cache will not try to cache multiple
  entities until the first one wins, which happens now.
 
 Our patch solves this by pausing read-threads while the object is 
 being cached until there is a known length of the body, with 
 a timeout 
 to detect if the caching thread has died. Drawback is that 

IMHO this is a DoS waiting to happen if the requester can trick the backend
into getting stuck with the request. So one request of this type would be
sufficient to DoS the whole server if the timeout is not very short. BTW: what
do you do once the timeout is reached? Do you start a request to the backend of
your own (so a timeout of 0 would be more or less the same as today) or do you
send an error back to the client?

Regards

Rüdiger



Re: load balancer and http(s) sticky sessions

2006-09-15 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Sander Temme
 Gesendet: Freitag, 15. September 2006 01:02
 An: dev@httpd.apache.org
 Betreff: Re: load balancer and http(s) sticky sessions
 
 
 
 On Sep 14, 2006, at 3:49 PM, Ruediger Pluem wrote:
 
 
  On 09/14/2006 06:14 PM, Jim Jagielski wrote:
 
 
  That's what I'm thinking, sort of like an 'autostickysession'  
  attribute.
  We could even have it default to cookies but add something like
  'autostickysession=url' to force URL rewriting and adding a tag
  to the end of the URL (for sites that don't like cookies)
 
  I guess URL rewriting can be tricky when you have no idea of the  
  application on the
  backend. How do you handle POST requests?
 
 Wouldn't we get the pathinfo back that we tack onto the action  
 attribute of a form?

But what if you need to switch the backend due to a failure there and adjust
the routing information?

Regards

Rüdiger



Re: load balancer and http(s) sticky sessions

2006-09-15 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Sander Temme 
 Gesendet: Freitag, 15. September 2006 17:41
 An: dev@httpd.apache.org
 Betreff: Re: load balancer and http(s) sticky sessions
 
 
 
 On Sep 15, 2006, at 12:29 AM, Plüm, Rüdiger, VF wrote:
 

 
  But what if you need to switch the backend due to a failure there  
  and adjust
  the routing information?
 
 Same as when the LB session ID comes in as a cookie? The pathinfo  
 would contain the same or similar information as the cookie and be  
 used for the same purpose. I don't know if we want to put explicit  
 route information in the tag in the way we do now 
 (cookievalue.route)  
 or keep that state on the server, but the result would be the same.

Yes, but if we keep the state at the server we need to build up a different
infrastructure there. We need to have a session cache there that is shared
across the processes, and we need to do locking on it. To be honest
I wouldn't like that. I like the approach we currently have.

 
 What happens today when a request arrives with cookie value.tomcat1  
 and tomcat1 is out of commission? It gets routed to tomcat2, right?

Yes, and tomcat2 updates the cookie in its response such that the next
request goes to tomcat2. This is easy with cookies, as you can also do it
in the response to a POST request, whereas POST requests and URL
rewriting get hard in this situation.

Regards

Rüdiger



Re: svn commit: r442758 - in /httpd/httpd/trunk/modules/generators: mod_cgi.c mod_cgid.c

2006-09-14 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Jeff Trawick 

 
 Yes, I am supporting Rüdiger's proposition.  Don't make up some HTTP
 status code for the aborted-connection condition.  We already have a
 way to record this issue (%c).

I guess by %c you mean what is now %X in mod_log_config, right?
As said previously, it seems to make sense to me to log a 408 when we know that
a timeout caused this trouble. Honestly, I currently do not know how we can
detect this (maybe the return value from the filter can be examined).


Regards

Rüdiger



AW: why does ap_invoke_handler init input filters?

2006-09-12 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Issac Goldstand 
 Gesendet: Dienstag, 12. September 2006 12:04
 An: dev@httpd.apache.org
 Betreff: why does ap_invoke_handler init input filters?
 
 
 Hi all,
   I've been trying to solve my confusion on the exact order 
 of hooks and
 filters being invoked inside an HTTP request, and was wondering why
 ap_invoke_filter_init(r-input_filters) is called inside of
 ap_invoke_handler (server/config.c:338) ?
 
 I can understand initializing output filters at this point, but input
 filters that want to work with the request headers would have been

The purpose of input filters is to process the request body (if present, of
course).
If you want to do something with the request headers you need to do it in a
different hook (e.g. header_parser). See mod_setenvif as an example, or the
minimal sketch below.
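
A minimal, hypothetical sketch of that approach (the module and environment
variable names are invented for illustration):

#include <string.h>

#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"

/* Inspect a request header in the header_parser hook, the way
 * mod_setenvif does, instead of in an input filter. */
static int example_header_parser(request_rec *r)
{
    const char *ua = apr_table_get(r->headers_in, "User-Agent");

    if (ua && strstr(ua, "litmus") != NULL) {
        apr_table_setn(r->subprocess_env, "is-litmus-client", "1");
    }
    return DECLINED;   /* let other header_parser hooks run as well */
}

static void example_register_hooks(apr_pool_t *p)
{
    ap_hook_header_parser(example_header_parser, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA example_headers_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    example_register_hooks
};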

 inserted at the create_request hook (and invoked by the time we got to
 post_read_request and the quick-handler, let alone the response
 handler), so what are we really trying to initialize at this point?
 
 Thanks for the insight,
   Issac
 

Regards

Rüdiger


Re: [Announcement] Apache HTTP Server 2.2.3 (2.0.59, 1.3.37) Released

2006-08-03 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Philip M. Gollucci
 Is a release in the 2.0.x (2.0.59) soon to follow ?

It is already there.

Regards

Rüdiger


Re: svn commit: r427172 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy.c

2006-08-01 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Ruediger Pluem 
 
 BTW: Don't we need to reset checked_standby and 
 checking_standby to zero in
  the outer while loop?

What about this? I guess we don't enter the inner while loop a second time
because checked_standby > 0 once we have checked the standby workers
the first time through.

Regards

Rüdiger



Re: Backport PCKS#7 patch to 2.2?

2006-07-31 Thread Plüm , Rüdiger , VF EITO
Please add it to the STATUS file of 2.2.x for voting.

Regards

Rüdiger

 -Ursprüngliche Nachricht-
 Von: Ben Laurie 
 Gesendet: Montag, 31. Juli 2006 16:13
 An: Apache List
 Betreff: Backport PCKS#7 patch to 2.2?
 
 
 Will it be OK to do this?
 
 Cheers,
 
 Ben.
 
 -- 
 http://www.apache-ssl.org/ben.html   http://www.links.org/
 
 There is no limit to what a man can do or how far he can go if he
 doesn't mind who gets the credit. - Robert Woodruff
 


Re: load balancer cluster set

2006-07-31 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 


 In other words, let's assume members a, b and c are in
 set 0, d, e and f are in set 1, and g, h and i are in
 set 2. We check a, b and c and they are not usable, so
 we now start checking set 1. Should we re-check the
 members in set 0 (maybe they are usable now) or
 just check members of set 1 (logically, the question
 is whether we are doing a >= set# or == set#). I have
 both methods coded and am flip-flopping on which
 makes the most sense. I'm leaning towards #1 (>= set#).

I would also lean towards #1, as this means that once cluster set 0
has failed and is back again we use it again, which seems
natural to me. OTOH I guess we need to consider session stickiness
in this case. So sessions that have been migrated to set 1 should
stay there until they vanish or someone knocks them out by disabling
this cluster set (BTW:
<feature-creep>
 will it be possible to disable complete cluster sets via the manager?
</feature-creep>
) and thus forcing them back to cluster set 0.

Regards

Rüdiger



Re: hot standby in proxy added

2006-07-12 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 Gesendet: Mittwoch, 12. Juli 2006 00:55
 An: dev@httpd.apache.org
 Betreff: Re: hot standby in proxy added
 
 

 
 My thoughts are how to make this better; overloading the concept
 of disabled workers to mean multiple things depending on
 different situations seems glocky to me. Stopped should mean
 stopped, disabled should mean disabled.

But what should be the difference between stopped and disabled in this case?

 
 I do like the distance attribute and am looking at the best
 way to impl something similar while still keeping things as
 abstracted out as possible. Almost a sort of balancer set
 would be nice, where, for example, you define a balancer
 set with some members as set #1, and they would always be
 chosen (and looked at for LB) and set #2 (and later) would only
 be considered if no one in set #1 is available.

...or if no one matches the given route in #1?

Regards

Rüdiger



AW: ap_proxy_get_worker

2006-07-12 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Jean-frederic Clere 
 Gesendet: Mittwoch, 12. Juli 2006 14:21
 An: dev@httpd.apache.org
 Betreff: ap_proxy_get_worker
 
 
 Hi,
 
 Why does ap_proxy_get_worker() give the best-matched worker?
 Shouldn't it give the exact match?

Normally you do not have an exact match between the request URL and
the worker name (which is the last parameter of ProxyPass).
So longest match is the best thing you can do here.

Regards

Rüdiger



Re: AW: ap_proxy_get_worker

2006-07-12 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Jean-frederic Clere 
   
 
 Ok.
 What happends in a configuration like:
 +++
 ProxyPass /foo http://foo.example.com/bar
 ProxyPass /bar http://foo.example.com/bar/foo
 +++
 Only one worker will be created.

Yes, this is true. The question is: is this wrong?

If it is considered wrong, we need to split ap_proxy_get_worker
into two functions, or we need to add a boolean parameter longest_match to it.
In the case of request processing we need the longest match, whereas in the
configuration case we might not need it.
Keep in mind that the above configuration should work today, but requests
for /foo and /bar will use the same worker.


Regards

Rüdiger



Re: writing 2.x module thaty makes SSL connection

2006-06-12 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Graham Leggett 
 
 Use mod_proxy_http to request the document, and configure mod_ssl to
 handle the SSL - there should be no need to write any code as 
 far as I can
 see.

I guess a subrequest with subrequest->filename starting with "proxy:" should do
the trick.
So something like "proxy:https://www.somewhere.com/". Furthermore you should set
subrequest->proxyreq to PROXYREQ_REVERSE.
subrequest->handler can be set to "proxy-server", but I do not think that this
is strictly needed.
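
As a rough, untested sketch (the URL and the surrounding function are made up
for illustration):

#include "httpd.h"
#include "http_request.h"
#include "apr_strings.h"

static int fetch_via_proxy(request_rec *r)
{
    int status;
    /* the looked-up URI is not important, we overwrite filename below */
    request_rec *subr = ap_sub_req_lookup_uri("/", r, NULL);

    subr->filename = apr_pstrcat(subr->pool, "proxy:",
                                 "https://www.somewhere.com/", NULL);
    subr->proxyreq = PROXYREQ_REVERSE;
    subr->handler  = "proxy-server";   /* probably not strictly needed */

    status = ap_run_sub_req(subr);
    ap_destroy_sub_req(subr);
    return status;
}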


Regards

Rüdiger



Re: restructuring mod_ssl as an overlay

2006-06-09 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Colm MacCarthaigh 
 
 After that, based on your excellent summary, I'm begining to see the
 wisdom of a subproject - despite the overhead, maximising developer
 involvement and the potential community size is much more important.

Just for my clarification: Keeping mod_ssl inside the tree would ban
developer involvement (and downloads) from the T-8 countries, right?
What is the actual list of T-8 countries? I found older lists that consist
of Cuba, Syria, North Korea, Sudan, Iran, Libya, Iraq.

I guess that Libya and Iraq have fallen off this list in the meantime. So that
would mean that 5 countries remain where we need to impose these
restrictions, correct?


Regards

Rüdiger



Re: restructuring mod_ssl as an overlay

2006-06-09 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Roy T. Fielding 

 The sane solution would be to convince the US government to remove
 encryption from the export control list, since that regulation has
 been totally ineffective.  That is not likely to happen during this

I totally agree, but I fear that this approach will not bring us to a solution 
any time
soon :-). Regrettably we have to deal with the legal situation as it is and
find the best way out.

Regards

Rüdiger



Re: restructuring mod_ssl as an overlay

2006-06-09 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Joe Orton [
 
 Would only committers count as participating in the project 
 for this 
 purpose, do you think?  Random people submitting patches would not?

Stupid question: how can someone who is not allowed to download the sources
submit patches? :-)

Regards

Rüdiger



AW: restructuring mod_ssl as an overlay

2006-06-08 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Joe Orton [
 
 Thanks for doing the research, Roy.

Yep, thanks from me too.

 
 On Wed, Jun 07, 2006 at 02:03:33PM -0700, Roy T. Fielding wrote:
  Okay, let me put it in a different way.  The alternatives are
  
   1) retain the status quo, forbid distributing ssl binaries, and 
  include in our documentation that people in banned 
 countries are not 
  allowed to download httpd 2.x.
 
 This gets my vote.  I don't see why it's necessary for the 
 ASF to be in 
 the business of distributing binaries; letting other people 
 assume the 
 technical and legal responsibilites for doing that seems reasonable.
 
 The documentation work necessary would be greater if mod_ssl is split 
 into a separate package, and having mod_ssl in the tree is one of the 
 compelling features of 2.x anyway.

Provided that we do not find a solution that allows us to keep mod_ssl inside
the httpd tree, I agree with Joe. (An additional non-SSL source tarball that has
the modules/ssl directory removed while rolling the package seems acceptable to
me, not knowing whether this would solve our legal pains.)

Regards

Rüdiger



Re: AW: restructuring mod_ssl as an overlay

2006-06-08 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Colm MacCarthaigh 
 
 
 On Thu, Jun 08, 2006 at 08:16:48AM -0500, William A. Rowe, Jr. wrote:
  The group of people who concern me are not those in T-8, 
 they are those who
  live in jurisdictions where *they* would be breaking local 
 law by possessing
  crypto.  Leave them a) in the backwaters / b) in fear / c) 
 in violation, or
  give them a silly httpd-2.2.x-no-ssl.tar.gz?  The later 
 seems sane to me.
 
 The personal liabilites of users are not our concern. That's 
 strictly a
 matter for users and their legal counsel. What's next, do we start

And that would be a hard target to hit. US export controls are one fixed
(complex) issue, but trying to deliver packages (whether source or binary) that
do not violate laws in (almost) all countries of the world seems undoable to
me.
Apart from that, I do not think this should be our task.

Regards

Rüdiger



AW: mod_mbox: incorrect parsing of MIME part-names

2006-06-02 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Paul Querna
 
 Currently, it might be more, since the server hosting the 
 mail archives, 
 ajax, has an rsync that can take longer than an hour to 
 complete... so, 
 it might be up to several hours behind, but it should never 
 be more than 
 that.

Sadly it is. See also:

http://issues.apache.org/jira/browse/INFRA-826

There had already been a problem with that shortly after the server migration.
One of the infrastructure guys fixed it somehow and it worked for a while, but
it has stopped working again. Depending on the list, they seem to be days behind
(e.g. cvs@);
bugs@ seems to be about a day behind.

Regards

Rüdiger



Re: svn commit: r395211 - /httpd/httpd/trunk/configure.in

2006-06-01 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Joe Orton 
  
  Can you replace them with a variable, such that later on 
 only this variable
  needs to be adjusted? Then I would be +1 for backport immediately.
 
 Sure, done, r410828.

Thanks. +1 on backport to 2.2.x

Regards

Rüdiger



Re: There should be a filter spec

2006-06-01 Thread Plüm , Rüdiger , VF EITO


 von Garrett Rooney
 Gesendet: Donnerstag, 1. Juni 2006 16:32
 An: dev@httpd.apache.org; [EMAIL PROTECTED]
 Betreff: Re: There should be a filter spec
 

 
 I've had similar issues lately; it's very unclear how a filter setting
 f->r->status or f->r->status_line should act.  Depending on what
 modules you're working with, that may be enough to get an error out to
 the browser, and it may not.  Some specific rules about that sort of
 thing would be quite useful...

As far as I remember there has also been a discussion about who owns a brigade,
i.e. who has to call / must not call apr_brigade_destroy / apr_brigade_cleanup
in the filter chain. I think rules for this would also be useful.

Regards

Rüdiger



Re: [PATCH] mod_disk_cache early size-check

2006-05-30 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Brian Akins 
 Niklas Edmundsson wrote:
  
  This patch takes advantage of the possibility to do the 
 size-check of 
  the file to be cached early.
   obj->vobj = dobj = apr_pcalloc(r->pool, sizeof(*dobj));
 
 
 Shouldn't this be in mod_cache so that all providers do not have to 
 duplicate this logic?

Possibly, but right now there is no mod_cache directive regarding the object
sizes.
All object size directives are provider specific. Thus this check has to happen
in the
provider code, as mod_cache does not know anything about object size limitations.
And yes, I think that having them provider specific is a good thing, because if
you use
multiple providers in parallel you might want provider-specific max and min
values
for the objects (see the example below).
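
For illustration, the kind of per-provider limits meant here (the values are
made up):

# mod_disk_cache
CacheRoot /var/cache/apache2
CacheMinFileSize 1
CacheMaxFileSize 1000000

# mod_mem_cache
MCacheMinObjectSize 1
MCacheMaxObjectSize 100000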

Regards

Rüdiger


Re: mod_disk_cache read-while-caching patch (try2)

2006-05-30 Thread Plüm , Rüdiger , VF EITO


 -Ursprüngliche Nachricht-
 Von: Paul Querna 
 Brian Akins wrote:

  Does it make more sens at this time to do all these changes 
 in another 
  module and leave mod_disk_cache as stable and useable?  
 call it disk2 or 
  something...
 
 If its in trunk, thats fine, I wouldn't support backporting these 
 changes to 2.2.x anytime soon however :)

Maybe some simple improvements could be backported sooner. So let's wait for
the split patches and see :-).

 
 Worst case, we can always create a devel-branch in subversion, but I 
 don't see any reason to make a mod_disk_cache2 module.

+1

Regards

Rüdiger


Re: mod_disk_cache read-while-caching patch (try2)

2006-05-29 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Niklas Edmundsson 
 Sent: Monday, 29 May 2006 10:37
 To: dev@httpd.apache.org
 Subject: mod_disk_cache read-while-caching patch (try2)
 
 
 
 Since I haven't got any reply to the mail I sent on Thursday I assume
 that it has been buried in the mod_cache thread and no one has bothered
 reading it.

I read it, but I have not had time to study it yet.

 
 I'll start breaking this down into smaller patches, but given the 
 recent cache-cleanup-effort I'd like to know how to do it.
 
 * Should I provide patches against httpd-2.2.2 or trunk?

Against trunk please.

 * Should I just attach them to bug #39380 or post them here first?

Post them here directly.


Regards

Rüdiger



mail-archives.apache.org not refreshed

2006-05-19 Thread Plüm , Rüdiger , VF EITO
Does anyone have an idea why at least the httpd lists are no longer refreshed
in the mod_mbox archive on mail-archives.apache.org? The latest entries, e.g.
for bugs@httpd.apache.org, are rather old.

Regards

Rüdiger



Re: mail-archives.apache.org not refreshed

2006-05-19 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: 
 on behalf of Garrett Rooney
 Sent: Friday, 19 May 2006 17:34
 To: dev@httpd.apache.org
 Subject: Re: mail-archives.apache.org not refreshed
 
 

  Does anyone have an idea why at least the httpd lists are no longer
  refreshed in the mod_mbox archive on mail-archives.apache.org? The latest
  entries, e.g. for bugs@httpd.apache.org, are rather old.
 
 Yeah, that was noticed yesterday, and we thought it was a cron job
 that wasn't being run, but apparently getting the cron job going again
 hasn't fixed things.  Will get people to look into it again today.

Thanks for that. My thought was also that it has something to do with the
changes currently being made to the infrastructure (e.g. splitting
svn.apache.org and people.apache.org onto different boxes and so on).

BTW: Thanks from here to all the infrastructure folks for doing a great job.

Regards 

Rüdiger


Re: svn commit: r407357 - in /httpd/httpd/trunk: CHANGES modules/cache/cache_storage.c

2006-05-18 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Joe Orton 
 ...
 @@ -375,15 +380,18 @@
          }
      }
      else {
 -        scheme = "http";
 +        scheme = ap_http_scheme(r);
      }
 
 cache_storage.c: In function `cache_generate_key_default':
 cache_storage.c:383: warning: assignment discards qualifiers from pointer
 target type
 
 The below fixes the warning but I've no test setup for the cache handy,
 does it still work? (the lower-casing code just above here could be
 simplified like this too)


I am currently away from my dev environment, but I will try to have a look at
this this evening. Thanks for noticing.

At first glance I tend to say that this should work, as it does not change the
logic from my point of view. Your lower-casing is easier, but as far as I can
see the current solution saves a few cycles by avoiding a copy of the string
first. It is the old trade-off between simpler code and a few cycles :-).
In the end either approach is fine with me.


 
 Index: modules/cache/cache_storage.c
 ===================================================================
 --- modules/cache/cache_storage.c    (revision 407503)
 +++ modules/cache/cache_storage.c    (working copy)
 @@ -320,8 +320,8 @@
  apr_status_t cache_generate_key_default(request_rec *r, apr_pool_t* p,
                                          char**key)
  {
 -    char *port_str, *scheme, *hn;
 -    const char * hostname;
 +    char *port_str, *hn;
 +    const char *hostname, *scheme;
      int i;
 
      /*
 @@ -373,11 +373,10 @@
       * manner (see above why this is needed).
       */
      if (r->proxyreq && r->parsed_uri.scheme) {
 -        /* Copy the scheme */
 -        scheme = apr_pcalloc(p, strlen(r->parsed_uri.scheme) + 1);
 -        for (i = 0; r->parsed_uri.scheme[i]; i++) {
 -            scheme[i] = apr_tolower(r->parsed_uri.scheme[i]);
 -        }
 +        /* Copy the scheme and lower-case it. */
 +        char *lcs = apr_pstrdup(p, r->parsed_uri.scheme);
 +        ap_str_tolower(lcs);
 +        scheme = lcs;
      }
      else {
          scheme = ap_http_scheme(r);
 

Regards

Rüdiger


Re: Possible new cache architecture

2006-05-04 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Joe Orton
  
  1. This is an API change which might be hard to backport.
  2. I do not really like the close tie between the storage provider
 and the filter chain. It forces the provider to do things it
 should not care about from my point of view.
 
 At least this much could be solved I suppose by passing in a callback of
 type apr_brigade_flush which does the pass to f->next; the storage

Sorry, but I guess that I do not understand this completely. So instead
of passing f->next to store_body and making it call ap_pass_brigade with
the small brigade and f->next, you propose to create a callback function
of type apr_brigade_flush inside mod_cache and pass the pointer to this
function and f->next to store_body, such that it can call this function
with the small brigade and f->next as the ctx parameter of apr_brigade_flush?
This function then calls ap_pass_brigade, of course.

 provider could remain filter-agnostic then.  No idea about your other 
 issues, sorry.

I will keep on thinking about this. Thanks for your help.
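
To make the idea concrete, here is a minimal sketch of such a callback as I
understand the proposal; the function name and the choice of f->next as the
ctx value are my assumptions, not an agreed design:

/* Callback matching the apr_brigade_flush typedef; mod_cache would pass a
 * pointer to it, together with f->next as ctx, into store_body. */
static apr_status_t cache_pass_cb(apr_bucket_brigade *bb, void *ctx)
{
    ap_filter_t *next = ctx;        /* f->next, supplied by mod_cache */
    return ap_pass_brigade(next, bb);
}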

Regards

Rüdiger


Re: Possible new cache architecture

2006-05-03 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Joe Orton 
 
 The way I would expect it to work would be by passing f->next into the
 store_body callback; it looks doomed to eat RAM as currently designed.
 mod_disk_cache's store_body implementation can then do:
 
  1. read bucket(s) from brigade, appending to some temp brigade
  2. write bucket(s) in temp brigade to cache file
  3. pass temp brigade on to f->next
  4. clear temp brigade to ensure memory is released
  5. goto 1

Yes, this was also my idea, but I would like to avoid this, because:

1. This is an API change which might be hard to backport.
2. I do not really like the close tie between the storage provider
   and the filter chain. It forces the provider to do things it
   should not care about from my point of view.
   Furthermore: What about mod_cache in this case? Do you want to
   skip ap_pass_brigade there, or do you want to clean up the original
   brigade inside store_body of mod_disk_cache and let mod_cache pass
   an empty brigade up the chain?
   If we decide to skip ap_pass_brigade inside mod_cache, all storage
   providers need to ensure that they pass the data up the chain,
   which seems like duplicated code to me and does not seem to belong to
   their core tasks.
   OTOH, doing this in mod_cache and only passing the small brigade to
   store_body of the provider has the drawback that mod_mem_cache wants
   to see the original file buckets in order to save the file descriptors
   of the files.
   To be honest, currently I have no solution at hand that I really like,
   but I agree that this really needs to be changed.
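
For illustration, a rough sketch of the read/write/pass loop described in the
quoted list above, under the assumption that the provider somehow gets hold of
f->next (or a callback wrapping it) and of its open cache file; none of the
names below come from the actual patches under discussion:

static apr_status_t store_body_sketch(ap_filter_t *next, apr_file_t *fd,
                                      apr_bucket_brigade *in, apr_pool_t *p,
                                      apr_bucket_alloc_t *ba)
{
    apr_bucket_brigade *tmp = apr_brigade_create(p, ba);
    apr_status_t rv;

    while (!APR_BRIGADE_EMPTY(in)) {
        apr_bucket *e = APR_BRIGADE_FIRST(in);
        const char *data;
        apr_size_t len;

        /* 1. move one bucket over to the temp brigade */
        APR_BUCKET_REMOVE(e);
        APR_BRIGADE_INSERT_TAIL(tmp, e);

        /* 2. write its data to the cache file */
        if (!APR_BUCKET_IS_METADATA(e)) {
            rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
            if (rv != APR_SUCCESS) {
                return rv;
            }
            rv = apr_file_write_full(fd, data, len, NULL);
            if (rv != APR_SUCCESS) {
                return rv;
            }
        }

        /* 3. pass it up the chain, 4. release the memory again */
        rv = ap_pass_brigade(next, tmp);
        apr_brigade_cleanup(tmp);
        if (rv != APR_SUCCESS) {
            return rv;
        }
    }
    return APR_SUCCESS;
}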

Regards

Rüdiger



Re: Possible new cache architecture

2006-05-02 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Graham Leggett 
  * Don't block the requesting thread when requesting a large uncached
    item, cache in the background and reply while caching (currently it
    stalls).
 
  This is great, in doing this you've been solving a proxy bug that was
  first reported in 1998 :).

This already works when the data comes from the proxy backend. It does not
work for local files that get cached (the scenario Niklas uses the cache for).
The reason it does not work currently is that a local file is usually delivered
in one brigade with, depending on the size of the file, one or more file
buckets.
For Niklas' purposes, Colm's ideas regarding the use of the new Linux system
calls tee and splice will come in handy
(http://mail-archives.apache.org/mod_mbox/apr-dev/200604.mbox/[EMAIL PROTECTED])
as they should speed up such things.

Regards

Rüdiger



Re: Possible new cache architecture

2006-05-02 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Graham Leggett 

  The reason it does not work currently is that that a local file
  usually is
  delivered in one brigade with, depending on the size of the 
 file, one or
  more
  file buckets.
 
 Hmmm - ok, this makes sense.
 
 Something I've never checked for, do output filters support 
 asynchronous
 writes?

I don't think so. Of course this would be a nice feature. Maybe it is somehow
possible with Colm's ideas.
Another thing: I guess on systems with no mmap support the current
implementation of mod_disk_cache will eat up a lot of memory if you cache a
large local file, because it transforms the file bucket(s) into heap buckets
in this case.
Even if mmap is present, I think that mod_disk_cache causes the file buckets
to be transformed into many mmap buckets if the file is large. Thus we do not
use sendfile when we cache the file.
In the case that a brigade only contains file buckets it might be possible to
copy this brigade, send it up the chain and process the copy of the brigade
for disk storage afterwards. Of course this opens a race if the file gets
changed between these operations.
This approach does not work with socket or pipe buckets for obvious reasons.
Even heap buckets seem to be a somewhat critical idea because of the added
memory usage.
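
To sketch what "copy the brigade and process the copy afterwards" could look
like (purely illustrative; the helper name is mine, and the race mentioned
above is not addressed here):

/* Duplicate a brigade that is expected to contain only FILE and EOS buckets,
 * so the original can be passed to the client untouched while the copy is
 * written to disk afterwards. */
static apr_status_t copy_file_brigade(apr_bucket_brigade *in,
                                      apr_bucket_brigade *out)
{
    apr_bucket *e;

    for (e = APR_BRIGADE_FIRST(in);
         e != APR_BRIGADE_SENTINEL(in);
         e = APR_BUCKET_NEXT(e)) {
        apr_bucket *copy;
        apr_status_t rv;

        if (!APR_BUCKET_IS_FILE(e) && !APR_BUCKET_IS_EOS(e)) {
            return APR_ENOTIMPL;    /* bail out for socket, pipe, heap, ... */
        }
        rv = apr_bucket_copy(e, &copy);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        APR_BRIGADE_INSERT_TAIL(out, copy);
    }
    return APR_SUCCESS;
}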


Regards

Rüdiger



Re: Possible new cache architecture

2006-05-02 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Niklas Edmundsson 

 
 Correct. When caching a 4.3GB file on a 32bit arch it gets so bad that
 mmap eats all your address space and the thing segfaults. I initially
 thought it was eating memory, but that's only if you have mmap disabled.

Ah, good point. So I guess the mmap buckets need to be removed from the
brigade in the loop.

Regards

Rüdiger


Re: [VOTE] 2.0.57 candidate

2006-04-21 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Plüm, Rüdiger,
 Also +1 (compiled and started) on
 
 Solaris 8, gcc 3.3.2
 Solaris 9, gcc 3.3.2

Forgot to mention: Both Solaris SPARC

Regards

Rüdiger


Re: [VOTE] 2.0.57 candidate

2006-04-20 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Colm MacCarthaigh 

 Candidate tarballs for 2.0.57 are now available for testing/voting at;
 
   http://httpd.apache.org/dev/dist/
 
 This doesn't include a changed notice-of-license text though, which is a
 potential open issue.

Compiled and tested on RedHat AS4 :

waiting 60 seconds for server to start: ...
waiting 60 seconds for server to start: ok (waited 1 secs)
server localhost.localdomain:8529 started
server localhost.localdomain:8530 listening (mod_nntp_like)
server localhost.localdomain:8531 listening (mod_nntp_like_ssl)
server localhost.localdomain:8532 listening (mod_ssl)
server localhost.localdomain:8533 listening (ssl_optional_cc)
server localhost.localdomain:8534 listening (ssl_pr33791)
server localhost.localdomain:8535 listening (mod_cache)
server localhost.localdomain:8536 listening (mod_include)
server localhost.localdomain:8537 listening (mod_proxy)
server localhost.localdomain:8538 listening (proxy_http_bal1)
server localhost.localdomain:8539 listening (proxy_http_bal2)
server localhost.localdomain:8540 listening (proxy_http_balancer)
server localhost.localdomain:8541 listening (proxy_http_reverse)
server localhost.localdomain:8542 listening (mod_headers)
server localhost.localdomain:8543 listening (error_document)
server localhost.localdomain:8544 listening (mod_vhost_alias)
server localhost.localdomain:8545 listening (proxy_http_https)
server localhost.localdomain:8546 listening (proxy_https_https)
server localhost.localdomain:8547 listening (proxy_https_http)
[   info] adding source lib /tmp/httpd-test/perl-framework/Apache-Test/lib to 
@INC
[   info] adding source lib /tmp/httpd-test/perl-framework/Apache-Test/lib to 
@INC
[   info] adding source lib /tmp/httpd-test/perl-framework/Apache-Test/lib to 
@INC
t/apache/404ok   
t/apache/acceptpathinfo.ok   
t/apache/byterange..ok   
t/apache/byterange2.ok   
t/apache/chunkinput.ok   
t/apache/contentlength..ok   
2/20 skipped: skipping tests with empty C-L for httpd < 2.1.0
t/apache/errordoc...ok   
t/apache/etags..ok   
t/apache/getfileok   
t/apache/headersok   
t/apache/limits.ok   
t/apache/optionsok   
t/apache/passbrigadeok   
t/apache/post...ok   
t/apache/pr18757skipped
all skipped: apache version 2.2.1 or higher is required, this is 
version 2.0.57, cannot find module 'cgi'
t/apache/pr35292skipped
all skipped: apache version 2.1.8 or higher is required, this is 
version 2.0.57
t/apache/pr35330ok   
t/apache/rwrite.ok   
t/apr/uri...ok   
t/filter/case...skipped
all skipped: cannot find module 'case_filter'
t/filter/case_inskipped
all skipped: cannot find module 'case_filter_in'
t/filter/input_body.ok   
t/http11/basicauth..ok   
t/http11/chunkedok   
t/http11/chunked2...skipped
all skipped: cannot find module 'bucketeer'
t/http11/post...ok   
t/modules/accessok   
t/modules/alias.ok   
t/modules/asis..ok   
t/modules/autoindex.ok   
t/modules/autoindex2ok   
t/modules/cache.skipped
all skipped: apache version 2.1.9 or higher is required, this is 
version 2.0.57
t/modules/cgi...ok   
t/modules/dav...skipped
all skipped: cannot find module 'HTTP::DAV'
t/modules/deflate...ok   
3/7 skipped: skipping 304/deflate tests without mod_cgi and httpd >= 2.1.0
t/modules/digestok   

Re: [VOTE] 2.0.57 candidate

2006-04-20 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Colm MacCarthaigh 
 
 
 Candidate tarballs for 2.0.57 are now available for testing/voting at;
 
   http://httpd.apache.org/dev/dist/
 
 This doesn't include a changed notice-of-license text though, which is a
 potential open issue.

Also +1 (compiled and started) on

Solaris 8, gcc 3.3.2
Solaris 9, gcc 3.3.2

Regards

Rüdiger




Re: Fold mod_proxy_fcgi into trunk (and maybe 2.2...)

2006-04-19 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Jim Jagielski 
 
 
 I think that the Proxy FastCGI module is at a point where
 we should consider folding it into trunk, with the hope
 of it being backported to 2.2.x in some not-too-distant
 future.
 
 Comments?


Questions:

I am a lazy guy :-).
Would it be possible for you to provide the changes as a diff that needs
to be applied to the *existing* sources on trunk?
This would make it easier to understand what changes in the existing files
as a result of this merge.

Are there any test cases for the test framework to check the FastCGI module?


Regards

Rüdiger



Re: Large file support in 2.0.56?

2006-04-19 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Brandon Fosdick 
 
 At this point I'm not sure if I should bother trying the 
 large file hack for 2.0.55 or just start migrating to 2.2.x. 
 This no longer seems to be a large file problem, but I'm not 
 sure what kind of problem it is. Judging by the mod_security 
 audit log, the client doesn't appear to be doing anything 
 odd. That is, it's just issuing a LOCK and a PUT for each 
 file upload. Naturally, my own code is suspect as well, but 
 it never sees anything bigger than 64K, and it can handle 
 larger files when I haven't taken half the RAM out.
 
 I'm stumped.

Have you checked whether you can write the files to disk with the default
mod_dav_fs provider?

Maybe this is a pool issue in your provider. Pool issues can cause large
memory growth.
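
One generic pattern that avoids this kind of growth, shown here purely as an
illustration and not as a statement about the code in question: allocate
per-chunk data from a subpool and clear it after each chunk.

/* Hypothetical helper: process an upload in fixed-size chunks without
 * letting the allocations pile up in the request pool. */
static apr_status_t process_chunks(request_rec *r, apr_size_t chunk_size,
                                   int nchunks)
{
    apr_pool_t *chunk_pool;
    apr_status_t rv = apr_pool_create(&chunk_pool, r->pool);
    int i;

    if (rv != APR_SUCCESS) {
        return rv;
    }
    for (i = 0; i < nchunks; i++) {
        char *buf = apr_palloc(chunk_pool, chunk_size);

        /* ... read chunk i into buf and hand it to the DAV provider ... */

        apr_pool_clear(chunk_pool);   /* release the per-chunk memory */
    }
    apr_pool_destroy(chunk_pool);
    return APR_SUCCESS;
}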

Regards

Rüdiger


Re: It's that time of the year again

2006-04-18 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Ian Holsman 

 - mod_cache_requestor (which i don't think really took off)

I guess one of the reasons why mod_cache_requestor did not take off was its
dependency on an external HTTP client library. So what about creating an
HTTP client library and adding it to apr-util / creating apr-http-client
as a SoC project?

The httpd proxy code would also benefit from such a library, as the current
way of making HTTP requests via fake connection and request records has some
drawbacks.

Regards

Rüdiger



Re: [VOTE] 2.0.56 candidate

2006-04-18 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Colm MacCarthaigh 

 
 There are some 2.0.56 candidate tarballs now at;
 
   http://httpd.apache.org/dev/dist/
 
 available for review/voting.

Compiled and started on the following environments:

Solaris 8, gcc 3.3.2  
Solaris 9, gcc 3.3.2

Compiled and Test framework run on Linux RedHat AS 4:

using Apache/2.0.56 (worker MPM)
 
waiting 60 seconds for server to start: 
waiting 60 seconds for server to start: ok (waited 3 secs)
server localhost.localdomain:8529 started
server localhost.localdomain:8530 listening (mod_nntp_like)
server localhost.localdomain:8531 listening (mod_nntp_like_ssl)
server localhost.localdomain:8532 listening (mod_ssl)
server localhost.localdomain:8533 listening (ssl_optional_cc)
server localhost.localdomain:8534 listening (ssl_pr33791)
server localhost.localdomain:8535 listening (mod_cache)
server localhost.localdomain:8536 listening (mod_include)
server localhost.localdomain:8537 listening (mod_proxy)
server localhost.localdomain:8538 listening (proxy_http_bal1)
server localhost.localdomain:8539 listening (proxy_http_bal2)
server localhost.localdomain:8540 listening (proxy_http_balancer)
server localhost.localdomain:8541 listening (proxy_http_reverse)
server localhost.localdomain:8542 listening (mod_headers)
server localhost.localdomain:8543 listening (error_document)
server localhost.localdomain:8544 listening (mod_vhost_alias)
server localhost.localdomain:8545 listening (proxy_http_https)
server localhost.localdomain:8546 listening (proxy_https_https)
server localhost.localdomain:8547 listening (proxy_https_http)
[   info] adding source lib /tmp/httpd-test/perl-framework/Apache-Test/lib to 
@INC
[   info] adding source lib /tmp/httpd-test/perl-framework/Apache-Test/lib to 
@INC
[   info] adding source lib /tmp/httpd-test/perl-framework/Apache-Test/lib to 
@INC
t/apache/404ok   
t/apache/acceptpathinfo.ok   
t/apache/byterange..ok   
t/apache/byterange2.ok   
t/apache/chunkinput.ok   
t/apache/contentlength..ok   
2/20 skipped: skipping tests with empty C-L for httpd < 2.1.0
t/apache/errordoc...ok   
t/apache/etags..ok   
t/apache/getfileok   
t/apache/headersok   
t/apache/limits.ok   
t/apache/optionsok   
t/apache/passbrigadeok   
t/apache/post...ok   
t/apache/pr18757skipped
all skipped: apache version 2.2.1 or higher is required, this is 
version 2.0.56, cannot find module 'cgi'
t/apache/pr35292skipped
all skipped: apache version 2.1.8 or higher is required, this is 
version 2.0.56
t/apache/pr35330ok   
t/apache/rwrite.ok   
t/apr/uri...ok   
t/filter/case...skipped
all skipped: cannot find module 'case_filter'
t/filter/case_inskipped
all skipped: cannot find module 'case_filter_in'
t/filter/input_body.ok   
t/http11/basicauth..ok   
t/http11/chunkedok   
t/http11/chunked2...skipped
all skipped: cannot find module 'bucketeer'
t/http11/post...ok   
t/modules/accessok   
t/modules/alias.ok   
t/modules/asis..ok   
t/modules/autoindex.ok   
t/modules/autoindex2ok   
t/modules/cache.skipped
all skipped: apache version 2.1.9 or higher is required, this is 
version 2.0.56
t/modules/cgi...ok   
t/modules/dav...skipped
all skipped: cannot find module 'HTTP::DAV'
t/modules/deflate...ok   
3/7 skipped: skipping 304/deflate tests without mod_cgi and httpd >= 2.1.0
t/modules/digestok  

Re: It's that time of the year again

2006-04-18 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: Paul Querna 
 
 *cough* serf *cough*
 
 http://svn.webdav.org/repos/projects/serf/trunk


I know :-).
But I would like to see this inside the apr framework as it
seems to me that serf has some acceptance problems here in the httpd
community. Please correct me if this impression is wrong.

Regards

Rüdiger


Re: mod_echo problem on Solaris

2006-04-13 Thread Plüm , Rüdiger , VF EITO


 -Original Message-
 From: William A. Rowe, Jr. 
 
 It's fixed in head and the coming release of 2.2, and 
 probably 2.0 as well
 although I have to look at my notes if the backport was committed.

Yes, it was backported (Changes with 2.0.56):

  *) Eliminated the NET_TIME filter, restructuring the timeout logic.
     This provides a working mod_echo on all platforms, and ensures any
     custom protocol module is at least given an initial timeout value
     based on the <VirtualHost > context's Timeout directive.
 [William Rowe]

Regards

Rüdiger