AW: prefork mpm in linux: ap_process_connection isn't called on connection

2006-04-04 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Rian Hunter 
 
  Is there a kernel accept filter enabled?
 
 No, it is a default clean install. Are you suggesting that, for me to
 get the behavior I want, there should be one enabled?

No, but on Linux I think TCP_DEFER_ACCEPT is set by default, which makes
the accept call return only once data is available. That would be wrong
in your situation. So I guess you need to add something like

AcceptFilter smtp none

to your configuration.
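For reference, what the accept filter setting works around can be sketched directly on a socket. This is a minimal illustration, assuming Linux; `disable_defer_accept` is a hypothetical helper, not httpd code:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch: turn off Linux's TCP_DEFER_ACCEPT on a listening socket so that
 * accept() returns as soon as the handshake completes, even before any
 * client data arrives, which is what a "server speaks first" protocol
 * like SMTP needs. The option value is a timeout in seconds; 0 disables
 * the deferring behaviour. */
static int disable_defer_accept(int listen_fd)
{
    int timeout = 0; /* 0 = do not wait for data before accept() returns */
    return setsockopt(listen_fd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
                      &timeout, sizeof(timeout));
}
```

This is roughly the effect `AcceptFilter smtp none` asks httpd to apply to the affected listener.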

Regards

Rüdiger


Re: AW: mod_http_proxy bug?

2006-04-04 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Brian Akins 
 
  Proxy sends up an error_bucket with HTTP_BAD_GATEWAY if the 
  connection to the backend broke in the middle.
 
 So should every module that reads the brigade check for an error 
 bucket? 

yes

 It does not appear that any of the cache modules do that.

They don't need to since r->no_cache is also set by the proxy
in this case.

Regards

Rüdiger


Re: Mod_proxy_http ProxyErrorOverride eating cookies

2006-04-03 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Bart van der Schans
 
 Hi,
 
 The ProxyErrorOverride On setting is correctly catching the errors
 from the (reverse) proxied server. Only, it overrides too much IMHO.
 Right now it overrides anything that's not in the 2xx range,
 but I think
 it should also allow the 3xx range for redirects etc.

I had a quick look into this and noticed the following:

1. It may make sense to add ap_is_HTTP_INFO to this also.
2. ProxyErrorOverride is currently only honoured by mod_proxy_http,
   mod_proxy_ajp ignores it. Is this intended?
3. This is a change in behaviour for people who use customized redirect
   pages for browsers that do not support redirects (are there any?)
4. 304 not modified responses from the backend are currently not supported
   without this patch.
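To make the proposed check concrete, here is a minimal sketch of the status-class logic under discussion. The `is_http_*` macros are simplified local stand-ins for httpd's `ap_is_HTTP_INFO` / `ap_is_HTTP_SUCCESS` / `ap_is_HTTP_REDIRECT` macros, and `should_override` is a hypothetical helper, not the actual mod_proxy_http code:

```c
/* Simplified stand-ins for httpd's ap_is_HTTP_* range macros. */
#define is_http_info(s)     ((s) >= 100 && (s) < 200)
#define is_http_success(s)  ((s) >= 200 && (s) < 300)
#define is_http_redirect(s) ((s) >= 300 && (s) < 400)

/* ProxyErrorOverride would then replace the backend response only for
 * real error classes (4xx/5xx); 1xx, 2xx and 3xx responses (including
 * 304 Not Modified) would pass through untouched. */
static int should_override(int status)
{
    return !(is_http_info(status) ||
             is_http_success(status) ||
             is_http_redirect(status));
}
```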

Thoughts?

Regards

Rüdiger



AW: mod_http_proxy bug?

2006-04-03 Thread Plüm , Rüdiger , VIS

 -----Original Message-----
 From: Brian Akins 
 
 It looks like I'm getting some weirdness with http_proxy. It is not 
 returning any error, but is giving back a 0 length bucket from 
 apr_bucket_read when it should have some length.

This description requires some clarification IMO. Is this 0-length
bucket really a data bucket? Is it the last bucket that
comes through your filter, or do you see non-zero-length buckets afterwards
for this request?

 
 is there yet another way to check in a filter (ie, mod_cache 
 stuff) if 
 proxy had an error?  doesn't look like it's getting bubbled up..

Proxy sends up an error_bucket with HTTP_BAD_GATEWAY if the connection
to the backend broke in the middle.
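The brigade scan a consuming filter would need can be sketched with a simplified stand-in. In httpd itself you would walk the brigade and test each bucket's type (e.g. via the error-bucket check in the filter API); the struct below is illustrative only, not httpd's bucket type:

```c
#include <stddef.h>

/* Simplified stand-in for a bucket brigade; illustrative only. */
enum bucket_type { BUCKET_DATA, BUCKET_ERROR, BUCKET_EOS };

struct bucket {
    enum bucket_type type;
    int status;                  /* e.g. 502 for HTTP_BAD_GATEWAY */
    const struct bucket *next;
};

/* Walk the list and return the status of the first error bucket,
 * or 0 if the brigade contains none. */
static int find_error_bucket(const struct bucket *b)
{
    for (; b != NULL; b = b->next) {
        if (b->type == BUCKET_ERROR)
            return b->status;
    }
    return 0;
}
```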

Regards

Rüdiger



AW: apu_version mess

2006-04-03 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Paul Querna 
 
 To resolve the problems we have with calling apu_version from 
 httpd, we 
 have three main options:
 
 [ ] Remove the new code that outputted the versions.
 [X] Make the code only present on systems that didn't have a 
 broken build.
 [ ] Wait for APR-Util 1.2.7 to be released.
 
 Votes/Thoughts?
 
 All of the options pretty much mean scrapping the 2.2.1 release, and 
 moving on to 2.2.2.
 

I think that this is wise anyway because I forgot to backport r379237.
Meanwhile r379237 has been backported. Without this patch SSL to proxy
backends does not work and nearly all SSL proxy tests from the test framework
fail.
Version numbers are cheap :-).


Regards

Rüdiger


AW: svn commit: r389697 - /httpd/httpd/trunk/modules/cache/mod_disk_cache.c

2006-03-29 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
 

  +        file_cache_errorcleanup(dobj, r);
  +        return APR_EGENERAL;
  +    }
 
 Why don't we return rv ?

Because we also return APR_EGENERAL in the cases below. I think the behaviour
should be consistent. So if we think that returning rv is better we should
also do this below. I am fine with either decision.

Regards

Rüdiger


Re: AW: svn commit: r389697 - /httpd/httpd/trunk/modules/cache/mod_disk_cache.c

2006-03-29 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
  
 
 That seems to be the case when we have a general error. In other
 places where we have a valid 'rv', we tend to return that.
 Look at file_cache_recall_mydata() for example...
 
 In the above, I think the return status may be useful, so
 we shouldn't mask it, imo.

As said, I am fine with either way, but currently we also return
APR_EGENERAL if apr_file_write_full fails below. So in this case we
should return rv too.

Regards

Rüdiger



Re: AW: Config Bug in proxy_balancer?

2006-03-27 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski
   to do here.
  
  Ok, but this actually works already without your patch.
 
 I never even bothered to check... Brian's initial
 Email said that it didn't. Are you saying that his Email
 is wrong and that balancers defined in the main server
 conf via Proxy, as well as their workers, ARE fully
 inherited by Vhosts?

As far as I saw in my very limited tests: yes.
This also matches the code analysis I did in one
of my previous mails (the one with the 'correct me if I am wrong').
Of course other weird things remain that are not nice
(e.g. the empty balancer created by the vhost, and the fact that this
empty balancer is not used because it comes later in the array).

Regards

Rüdiger



AW: Config Bug in proxy_balancer?

2006-03-23 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
 
  I want to be able to use same balancer in multiple vhosts.
 
 
 This is actually that way by design, iirc. I've no
 real issues with it being Vhost specific or inheritable.
 So if others think it's worthwhile having the above
 functionality, I'm +1 for it and would work on something
 to implement that.

+1 makes sense.

Regards

Rüdiger
 


AW: svn commit: r384580 - in /httpd/httpd/trunk/modules/proxy: mod_proxy.c mod_proxy.h mod_proxy_ajp.c proxy_util.c

2006-03-09 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Garrett Rooney

 
 Sticking per-backend info like ajp_flush_wait into the worker 
 object and the code to configure it in mod_proxy.c itself 
 seems very wrong to me.  There should be a per-backend 
 context pointer to hold per-backend information, and the work 
 of handling worker parameters really should be pushed to a 
 per-backend callback or something like that.

I agree in general, but I guess in this case things are different.
Although Jim chose names containing ajp_, I think we can also
use this to teach mod_proxy_http flushing. Not quite sure when I will have
time for this, but I want to give it a serious look, as people have also
complained frequently about this problem with mod_proxy_http.


Regards

Rüdiger


Re: svn commit: r384580 - in /httpd/httpd/trunk/modules/proxy: mod_proxy.c mod_proxy.h mod_proxy_ajp.c proxy_util.c

2006-03-09 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
 
 Agreed. I'm +1 for making it non-AJP specific to handle
 the other issues. But I wanted it to crawl before walk :)

That's a very good idea :-).

Regards

Rüdiger



Re: mod_proxy_ajp - The purpose of FLUSHING_BANDAID

2006-03-08 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Justin Erenkrantz

 
 Until the protocol is fixed, we should do the right thing - 
 and that means we shouldn't ever allow the entire response to 
 be spooled in memory.  -- justin

Actually we do not do this. The original code did this, which led to a problem,
but this was fixed in 2.1.9. See PR37100 for the gory details
(http://issues.apache.org/bugzilla/show_bug.cgi?id=37100).
BTW: After a quick review of this PR I noticed that we already had some
discussion about flushing there :-).
What we do is send the data up the chain *after each* SEND_BODY_CHUNK.
So the only buffering we do is that done by the core output filter (max 8KB?)
if we do not send a flush bucket.
As we recycle the AJP msg struct and work with transient buffers, the memory
usage should be fairly constant.

Regards

Rüdiger




AW: mod_proxy_ajp - The purpose of FLUSHING_BANDAID

2006-03-07 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Mladen Turk 
 
 

First: I am the author.

 Hi,
 
 I would love that we remove the FLUSHING_BANDAID from the code
 because its concept breaks the AJP protocol specification.

I do not understand how this breaks the spec. There might be reasons
to handle this differently, but I see no violation of the specs.
The flushing bandaid simply tries to detect whether it needs to add
a flush bucket or not after the data of *one* packet has been added
to the brigade.
So if buffering the data in the core output filter without the flushing
bandaid (or with flush=off) does not break the spec, and if setting
flush=on does not break the spec, how does the flushing bandaid
break it?
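The decision the bandaid makes can be sketched as a small pure function. The name and the threshold below are illustrative; the real code in mod_proxy_ajp differs:

```c
/* Time-based flush heuristic: after forwarding one AJP packet, add a
 * flush bucket only if no further packet is immediately readable and we
 * have been idle past a short threshold. Illustrative values and names. */
#define FLUSH_WAIT_USEC 10000L

static int need_flush(long now_usec, long last_send_usec, int more_data_ready)
{
    if (more_data_ready)
        return 0; /* another packet is pending: keep filling the buffer */
    return (now_usec - last_send_usec) > FLUSH_WAIT_USEC;
}
```

The point of the heuristic is that an explicit flush on the application server usually shows up as a pause between packets, which this detects indirectly.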

 
 Instead FLUSHING_BANDAID I propose that we introduce a new
 directive 'flush=on' that would behave like the most recent
 mod_jk directive 'JkOptions +FlushPackets'.

The drawback of this solution from my point of view is that

- The user has to configure it
- It is a bang-bang switch: either you flush after each packet
  or never do explicit flushing (apart from the case that the
  core buffer is filled).

BTW: mod_proxy_http also tries to be that intelligent, but it does not
work as the EAGAIN handling does not work as expected and httpd
always reads in blocking mode from the backend.

 
 The point is that the AJP protocol is packet based, so trying
 to mimic the 'stream' behavior is bogus.
 Furthermore, this (FLUSHING_BANDAID) does not resolve the
 explicit flush from the application server, because it takes

Ok, it does not do this exactly, but in most cases it works, because
if you flush data explicitly on the application server it usually takes
some time until you send the next data.

 care only of the transport rather than the spec.
 
 I know that for such cases we would need to extend the AJP
 protocol with explicit flushing, but for now the only solution is
 to have a directive that will flush on each packet.

That seems to be the final solution to me. Something like a
SEND_BODY_FLUSH AJP message.

 
 So, since FLUSHING_BANDAID has no particular use I'm asking
 the author to remove that code, so we can work on packet
 flushing rather than a time-based one.
 

I am happy to discuss a better solution. As the name says it is
a BANDAID :-).
So I am keen on additional proposals / comments on this.

As a summary from your side I see:

1. Extend the AJP protocol [the desired target from my view].
2. Add an option to flush after every AJP packet [has some drawbacks from my
   point of view (see above)].


Regards

Rüdiger 


AW: SSL enabled name virtual hosts

2006-03-06 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: William A. Rowe, Jr. 
 Stop bitching about a 10 year old spec.  It's trivial, use a 
 modern browser (beyond today - none exist yet) that can do 
 Connection-Upgrade and agree about the text of the headers 
 before the ssl handshake is performed.  The browser people 
 haven't caught up, because it's a non-trivial problem to 
 represent that the agreed-upon connection is secure to the 
 user, or that a secure connection is available to be toggled, 
 or whatever.  These aren't https:// requests, they are 
 http:// with extra semantics.  Modern clients such as remote 
 printing over http and neon/curl libraries already support it 
 now, IIUC.  As does httpd 2.2.

Or wait for RFC3546 (ftp://ftp.rfc-editor.org/in-notes/rfc3546.txt)
to be implemented in the browsers and servers. IE 7 beta is said to
support it, and the upcoming openssl 0.9.9 is likely to support it.
After that we can start implementing it in httpd.

Regards

Rüdiger




WG: SSL enabled name virtual hosts

2006-03-06 Thread Plüm , Rüdiger , VIS
As this is public information and in everybody's interest, I forward this.

-----Original Message-----
From: Yusuf Goolamabbas 


 Or wait for RFC3546 (ftp://ftp.rfc-editor.org/in-notes/rfc3546.txt)
 to be implemented in the browsers and servers. IE 7 beta is said to 
 support it and upcoming openssl 0.9.9 is likely to support it. After 
 that we can start implementing it in httpd.

Hi, Rudiger (sorry, can't enter the accent character)

I've blogged a bit about this

http://blog.goolamabbas.org/?p=34



AW: AW: SSL enabled name virtual hosts

2006-03-06 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Paul Querna 
 
 I didn't know that openssl 0.9.9 is likely to include support for SNI.
 
 If it does, that is great.
 
 I would be happy to write the code for mod_ssl to also support SNI.

See also http://www.mail-archive.com/openssl-dev@openssl.org/msg20834.html


Regards

Rüdiger


AW: Should fastcgi be a proxy backend?

2006-03-06 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: [EMAIL PROTECTED] 
  
   Isn't that very unreliable?
 
  Why should Unix domain sockets be unreliable?
 
 Yeah, that's my question as well.  Quite a few people seem to 
 use them...

Maybe he is working on an unpatched Solaris 10 ;-).

Regards

Rüdiger



AW: Should fastcgi be a proxy backend?

2006-03-06 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: [EMAIL PROTECTED] 
 
 We actually have a way to do that, it's the close_on_recycle flag, and
 I had to turn it on in order to get anything approaching reliability
 for fastcgi.  The problem with just using that is that without some
 coordination between worker processes you're still going to end up
 with collisions where more than one connection is made to a given
 fastcgi process, and the majority of those don't know how to handle

I think the problem is that we only manage connection pools that are local
to the httpd processes.

Regards

Rüdiger


AW: Should fastcgi be a proxy backend?

2006-03-06 Thread Plüm , Rüdiger , VIS


 From: Garrett Rooney

 
 Exactly, the pool of available backends needs to be managed globally,
 which we don't currently have and it's not clear if that ability would
 be useful outside of fastcgi.

But as connection pools are per worker and not per cluster,
this problem should also appear in an unbalanced environment.

Regards

Rüdiger



AW: Limiting CGIs in 2.2.0

2006-03-01 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Graham Leggett 
 
 Is there a mechanism within v2.2.0 to put resource limits onto CGI 
 programs (maximum running simultaneously, longest time in seconds to 
 run, that sort of thing)?

What about

RLimitCPU
RLimitMEM
RLimitNPROC

But I remember that they do not work with one of mod_cgi and mod_cgid;
I cannot remember which one it was.

Regards

Rüdiger



AW: AW: Limiting CGIs in 2.2.0

2006-03-01 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Graham Leggett 
 
 Hmmm... the docs explain both a soft limit and a maximum 
 limit, but don't describe the difference between the two.

It is the same thing as the soft and hard limit on Unix OSes.
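A minimal sketch of that distinction via `getrlimit`/`setrlimit`, which is the mechanism underneath the RLimit* directives (the helper name here is hypothetical):

```c
#include <sys/resource.h>

/* The soft limit (rlim_cur) is what the kernel enforces; the hard limit
 * (rlim_max) is the ceiling an unprivileged process may raise the soft
 * limit back up to. Here we lower only the soft CPU limit. */
static int lower_cpu_soft_limit(rlim_t seconds)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CPU, &rl) != 0)
        return -1;
    if (rl.rlim_max != RLIM_INFINITY && seconds > rl.rlim_max)
        seconds = rl.rlim_max; /* may not exceed the hard limit */
    rl.rlim_cur = seconds;
    return setrlimit(RLIMIT_CPU, &rl);
}
```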

 
  But I remember that they do not work either with mod_cgi or 
 mod_cgid. 
  I cannot remember which one it was.
 
 The docs don't mention this either.

Yes, it's not in the docs, but I am pretty sure that this issue came up on
dev@ some time ago and that one of them does not work.

Regards

Rüdiger



AW: httpd-trunk with MSIE (async read broken?) and AJP problems was Re: httpd-trunk sucking CPU on ajax

2006-02-28 Thread Plüm , Rüdiger , VIS

 From: (on behalf of) Justin Erenkrantz
 Sent: Monday, 27 February 2006 22:37


 BTW: Justin's idea to exchange the transient buckets with heap buckets was 
 also correct, as it seems that the transient buckets might get overwritten 
 in some situations.
 So Justin, could you please commit this to the trunk?

 Sure, I will do so tonight.  -- justin

Something that just came to my mind: the problem with the transient bucket
should only happen if the bucket is not consumed during the pass_brigade.
But if a filter down the chain decides not to consume this bucket for
whatever reason, it should set it aside and thus transform the transient
bucket into a heap bucket.
Provided that all filters in the chain work correctly, the patch should
not be needed.
Sorry for the confusion created :-)

Regards

Rüdiger


Serf, WAS: Re: AW: AW: svn commit: r378032 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy_http.c modules/proxy/proxy_util.c

2006-02-21 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: 
 
 Serf's mailing list is at: 
 http://mailman.webdav.org/mailman/listinfo/serf-dev/
 
 (You'll find some familiar faces posting there.   *duck*)


Thanks for the hints. I see it is a low-traffic mailing list :-).
I hope to find time to have a look into serf.
So many interesting things and so little time *sigh*

Regards

Rüdiger



AW: svn commit: r378032 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy_http.c modules/proxy/proxy_util.c

2006-02-21 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Joe Orton 
 
 The regression test runs last night had between zero and 
 three failures 
 in t/ssl/proxy.t in various builds (timing dependent, I guess); the 
 build which had three failures was:

So that seems to be related to the other issue we discussed before and
where you made some proposals.

 
 t/ssl/proxy.t # Failed test 62 in t/ssl/proxy.t at line 112 fail #2
 # Failed test 92 in /tmp/regressm31826/pf-trunk-prefork/Apache-Test/lib/Apache/TestCommonPost.pm at line 131 fail #81
 # Failed test 109 in /tmp/regressm31826/pf-trunk-prefork/Apache-Test/lib/Apache/TestCommonPost.pm at line 131 fail #98
 FAILED tests 62, 92, 109
   Failed 3/172 tests, 98.26% okay


Thanks for tests and feedback.

Regards

Rüdiger



AW: svn commit: r378032 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy_http.c modules/proxy/proxy_util.c

2006-02-20 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Joe Orton 
  New Revision: 378032
  
  URL: http://svn.apache.org/viewcvs?rev=378032&view=rev
  Log:
*) mod_proxy: Fix KeepAlives not being allowed and set to
   backend servers. PR38602. [Ruediger Pluem, Jim Jagielski]
  
  Also, document previous patch:
*) Correctly initialize mod_proxy workers, which use a
   combination of local and shared datasets. Adjust logging
   to better trace usage. PR38403. [Jim Jagielski]
 
 This one seems to have broken the proxy tests again, failed tests are:
 
 t/ssl/proxy.t   172   58  33.72%  3 8 10 12 14 16 18 20 22 24 26 28 30
   32 34 36 38 40 42 44 46 48 50 52 54 56 58 115-116 118-120 122 124 126
   128 130 132 134 136 139-140 143-144 147-148 151-152 155-156 159-160
   163-164 167-168 171-172
 
 if I svn up -r378031 modules/proxy this goes away.  This 

Hmm. Given
http://mail-archives.apache.org/mod_mbox/httpd-dev/200602.mbox/ajax/[EMAIL PROTECTED]
this is really weird. Maybe another change that happened after r378032?

Regards

Rüdiger



AW: svn commit: r378032 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy_http.c modules/proxy/proxy_util.c

2006-02-20 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Joe Orton 
  
 http://mail-archives.apache.org/mod_mbox/httpd-dev/200602.mbox/ajax/[EMAIL PROTECTED]
 this is really weird. Maybe another change that happened after r378032?

Maybe Jim's build didn't include mod_ssl?  r378032 is the most recent 
change to modules/proxy.

Maybe. Got another idea in the meantime: maybe it is because we use the same
socket to a backend, but we create a new conn_rec each time for this backend
and thus try to reinit the existing SSL connection. Could you please give the
following patch a try?

Index: proxy_util.c
===================================================================
--- proxy_util.c    (revision 379071)
+++ proxy_util.c    (working copy)
@@ -1511,7 +1511,7 @@
         return APR_SUCCESS;
 
     /* determine if the connection need to be closed */
-    if (conn->close_on_recycle || conn->close) {
+    if (conn->close_on_recycle || conn->close || conn->is_ssl) {
         apr_pool_t *p = conn->pool;
         apr_pool_clear(conn->pool);
         memset(conn, 0, sizeof(proxy_conn_rec));

It closes an SSL backend connection once it is returned to the connection pool,
so that we have no persistent SSL connections to the backend.
Obviously this is only a workaround.


Regards

Rüdiger


AW: svn commit: r378032 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy_http.c modules/proxy/proxy_util.c

2006-02-20 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Joe Orton 
  Could you please give the following patch a try?
 
 That fixed most of the failures, there are still two left:

OK. That gives a better idea what possibly goes wrong.

 
 t/ssl/proxy...ok 61/172 # Failed test 62 in t/ssl/proxy.t at line 112 fail #2
 t/ssl/proxy...ok 112/172 # Failed test 113 in /local/httpd/pf-trunk/blib/lib/Apache/TestCommonPost.pm at line 131 fail #102
 t/ssl/proxy...FAILED tests 62, 113
 Failed 2/172 tests, 98.84% okay
 
 I found only one pertinent error message in the log:
 
 [Mon Feb 20 10:40:08 2006] [debug] proxy_util.c(2118): proxy: HTTP: 
 connection complete to 127.0.0.1:8529 (localhost.localdomain) 

But this is HTTP to the backend, isn't it?


 [Mon Feb 20 10:40:08 2006] [error] (32)Broken pipe: proxy: 
 pass request 
 body failed to 127.0.0.1:8529 (localhost.localdomain)
 [Mon Feb 20 10:40:08 2006] [error] (32)Broken pipe: proxy: 
 pass request 
 body failed to 127.0.0.1:8529 (localhost.localdomain) from 
 127.0.0.1 ()
 
 ...perhaps a persistent connection being closed by the 
 backend, and the 
 proxy only finding out about this half way through sending a request 

Also my guess. See also point 2 in
http://mail-archives.apache.org/mod_mbox/httpd-dev/200602.mbox/ajax/[EMAIL PROTECTED]
Is it possible to increase the keepalive timeout temporarily for a test run?
That would give a valuable hint.

 body?  Hard to handle that in the proxy - probably best to just 
 ungracefully terminate the connection to the client in that case, it 
 should then resend appropriately too.

Yes, if we get into this situation I guess we have no better choice. But that
should already be the reaction to this kind of situation.
Ok, I see that this is currently not the case (at least not if we fail during
sending the response). The following patch should send
a 502 to the client in such cases and close the connection to the client:

Index: mod_proxy_http.c
===================================================================
--- mod_proxy_http.c    (revision 379071)
+++ mod_proxy_http.c    (working copy)
@@ -1711,8 +1711,17 @@
 
 cleanup:
     if (backend) {
-        if (status != OK)
+        if (status != OK) {
             backend->close = 1;
+            if (!r->eos_sent) {
+                apr_bucket_brigade *bb;
+
+                bb = apr_brigade_create(p, c->bucket_alloc);
+                ap_proxy_backend_broke(r, bb);
+                ap_pass_brigade(r->output_filters, bb);
+                apr_brigade_destroy(bb);
+            }
+        }
         ap_proxy_http_cleanup(proxy_function, r, backend);
     }
     return status;

But I had the idea to use possible keepalive header feedback from the
backend to get a feeling for the point in time at which the backend will
close the connection.
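For the idea in the last paragraph, extracting the timeout from a backend's Keep-Alive response header could look roughly like this. This is a hypothetical helper for illustration, not something httpd currently does:

```c
#include <stdlib.h>
#include <string.h>

/* Parse the timeout value out of a Keep-Alive header value such as
 * "timeout=5, max=100". Returns the timeout in seconds, or -1 if the
 * header carries no timeout parameter. */
static int keepalive_timeout(const char *header)
{
    const char *p = strstr(header, "timeout=");

    if (p == NULL)
        return -1;
    return atoi(p + strlen("timeout="));
}
```

The proxy could then proactively close or refresh a pooled backend connection shortly before that deadline instead of discovering the close mid-request.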

Regards

Rüdiger


AW: AW: svn commit: r378032 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy_http.c modules/proxy/proxy_util.c

2006-02-20 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
 
 Jim Jagielski wrote:
  
  Hmmm... Possibly SSL requires close_on_recycle. Or, at
  least, using that flag as required for SSL.
  
 
 I don't have time to explain in more detail, but the more
 I look over the old way, it was to maintain some sort
 of local state-of-health on the socket pre-and-post
 each request... As such, I'm *thinking* that the
 code patch should be reverted to maintain that
 logic, and extend it, rather than remove it...

I think the SSL problem is caused by throwing away the conn_rec
entry for the backend and creating a new one for each request.
That does not sound right, but I admit that keeping it must be
carefully examined due to several possible issues. Two
that I can see immediately are:

1. memory pools
2. filters

For me that puts on the table the question of whether using fake request_rec
and conn_rec structures for the backend connections is really a good
idea. This misuse already leads to problems in other areas.
But reworking this will take much time and work and is only mid to
long term. It might be easier if we had an http / https client library
as part of httpd or apr-util.

The other problem Joe mentioned is a problem that I feared.
See 2. in
http://mail-archives.apache.org/mod_mbox/httpd-dev/200602.mbox/ajax/[EMAIL PROTECTED]
I also need to think about Joe's ideas in more detail before
responding to them. They might be a good solution to this.

Regarding a revert: the old version is really bad for performance.
So I would like to keep the patch, provided that we can fix the regressions
within a short timeframe (fixes do not need to provide optimal performance,
so non-persistence for SSL backend connections would be ok for me
as a fix in this context). If not, I would propose to revert on trunk
and create a branch to work this out.

Regards

Rüdiger


AW: AW: AW: svn commit: r378032 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy_http.c modules/proxy/proxy_util.c

2006-02-20 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
  term. Might be easier if we have a http / https client 
 library as part 
  of httpd or apr-util.
 
 This only helps http, not ajp/fcgi or other protocols.

True, but ajp already does not use the fake conn_rec / request_rec stuff.
Instead it works directly on the socket; or better, the code in ajp* works
directly on the socket and mod_proxy_ajp calls high-level functions from
these files.
Currently I have no idea about fcgi.

Regards

Rüdiger


AW: [Patch] Keep Alive not workwing with mod_proxy (PR38602)

2006-02-15 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
 
 
 No regressions...
 

Now that this has passed successfully, do you see the need for
any further discussion / changes before I commit it, or should
I commit to the trunk and we continue our further changes / discussions
there?

Regards

Rüdiger


AW: AW: [Patch] Keep Alive not workwing with mod_proxy (PR38602)

2006-02-15 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
  
 
 I vote to commit and use that as the continue point for
 more development :)

Excellent. I will do so tonight, German time. Currently I am away from my
development env, as you may have noticed from my nicely formatted
Outlook mails :-).

Regards

Rüdiger



AW: [Patch] Keep Alive not workwing with mod_proxy (PR38602)

2006-02-14 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
 
 
 On Feb 13, 2006, at 6:25 PM, Ruediger Pluem wrote:
 

 
 
 Although it's not really documented anyplace, it really is
 good practice for people who submit large changes to run

That seems a reasonably good idea.

 them through the test framework first; it might be good
 to take same time and try to get it up and running,

I will do so.

 since it really does help track some things down... In the
 meantime, if you can send the latest patch, I'll test
 it here.
 

Currently I am away from my development environment, but as soon
as I get there (tonight, German time) I will send an updated version.
Thanks in advance for running the tests.

Regards

Rüdiger


AW: AW: [Patch] Keep Alive not workwing with mod_proxy (PR38602)

2006-02-14 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski
 I'm currently trying to trace through exactly how the code is 
 trying to pool connections. Of course, we're only using 
 reslist if we're a threaded MPM...

Really? I thought APR_HAS_THREADS is set when the OS supports threads,
and that this is independent of the MPM we choose.


Regards

Rüdiger


AW: AW: AW: [Patch] Keep Alive not workwing with mod_proxy (PR38602)

2006-02-14 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
  
 
 Yeah, but we check to see if we're > 1 thread, so in prefork,
 we drop to single connection workers.

Which makes sense to me. Why have more than one connection per worker
in a prefork process that can only handle one request at a time?
As the current connection pool mechanism is limited to sharing connections
within the same process and not across all processes, it is quite clear
that prefork suffers a lot from this approach, and the real benefit only
shows up with a threaded MPM that does not have too many processes. Whatever
'too many' means.

Regards

Rüdiger



AW: lb_score

2006-02-14 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
 
 
 Off the top of my head, I have no idea why we even have lb_score
 rather than just using proxy_worker_stat as we should.
 This is easy to fix except for the fact that ap_get_scoreboard_lb()
 is AP_DECLARE... Of course, adjusting in HEAD is fine, but
 this is something that really should be fixed in 2.2, which
 means we have an API change.
 
 Comments?

The problem is that we also need to change the scoreboard struct
in scoreboard.h for this. So I guess we have an even bigger API
change here.
And if we change this, every future change to the proxy_worker_stat
struct requires an API change that is not limited to the proxy.
So I tend to say that we should change it on trunk, but not on
2.2.

Regards

Rüdiger


AW: ap_proxy_initialize_worker_share

2006-02-14 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
 quite right, since it means that the *shared* data
 has been initialized, but that this worker may not
 have been (if you catch my drift). Furthermore,
 this means that the ->cp stuff isn't being
 fully init'ed...

Yep, see http://issues.apache.org/bugzilla/show_bug.cgi?id=38403#c15
as an example of uninitialized cp / reslist stuff.

Regards

Rüdiger



AW: lb_score

2006-02-14 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
  
 
 That was the reason I added the 'context' struct member, to 
 allow for some reasonable extensions without adjusting the 
 actual API. :)

Yes, but it is not possible to share this data across processes
easily, as it is only a pointer. Of course it makes perfect sense
for data that stays local to the process.
So if we want to add further shareable data to this struct we need
to adjust the scoreboard API. Maybe Nick's ideas
on an extended scoreboard API could be helpful to address this.

Regards

Rüdiger



AW: AW: lb_score

2006-02-14 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Jim Jagielski 
  
  Yes, but it is not possible to share this data over 
 processes easily 
  as it is only a pointer.
 
 But it could be a pointer to a shared memory segment :)

Yes of course, but I would have to write more code to manage this
than for an element of the struct. Ok, I am way too lazy :-).

Regards

Rüdiger



AW: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Graham Leggett 
 
 Gregor J. Rothfuss wrote:
 
  i am trying to use mod_proxy_balancer with a backend that is in turn
  using name-based virtual hosts.
  
  it seems that mod_proxy_balancer doesn't honor 
 ProxyPreserveHost (both
  2.2.0 and trunk), and does not send the Host: header to the backend.
  
  would there be interest in a patch for that or am i attempting 
  something
  dumb?
 
 The Host header should always be sent to the backend, not doing so 
 violates HTTP/1.1. This sounds like a bug of some kind.

This is not the problem. A Host header *is* sent to the backend, but
it is the *wrong* one (the hostname of the worker, I assume, and not
the hostname of the reverse proxy).

Regards

Rüdiger



AW: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Gregor J. Rothfuss 
 
 
 hi,
 
 i am trying to use mod_proxy_balancer with a backend that is in turn 
 using name-based virtual hosts.
 
 it seems that mod_proxy_balancer doesn't honor 
 ProxyPreserveHost (both 
 2.2.0 and trunk), and does not send the Host: header to the backend.

After a first quick view in the code on trunk I cannot see a problem there.
Can you please post your config here, such that we can rule out a config
problem?

Furthermore could you please open a bug in bugzilla for this? This makes
things easier to track and to reference.


Regards

Rüdiger


AW: mod_proxy buffering small chunks

2006-02-06 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Alan Gutierrez 
 
 The proposed solution is to poll for chunks using 
 non-blocking I/O. When the socket returns EAGAIN, the 8K 
 buffer is flushed, and the socket is read with blocking I/O.

That's the way the code is already designed in 2.2.x.

 
 However, in the http code, regardless of the block mode flag, 
 the headers are read using blocking I/O. Furthermore, 
 somewhere deeper in the chain, when EAGAIN is detected the 
 success flag is returned instead. I'll provide more detail, 
 if this is not already well known.

This is the problem. The EAGAIN handling, or rather the mode handling, in this
code path is somewhat broken.

 
 Looks like this is going to be messy to fix. Any patches?

Not so far. I guess it requires some general changes that must
be carefully designed. But anyway, patch proposals are welcome :-).

Actually, the thing you want works with mod_proxy_ajp in 2.2.x.

Regards

Rüdiger


AW: Apache proxy behaviour...

2006-02-02 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Matthieu Estrade 
 
 The reverse proxy read a brigade, then forward it to the 
 client. It should not 
 buffer the response but forward block of data. Maybe it's 
 because of deflate

mod_deflate definitely buffers. You need to turn it off for
such pages, or you need to write a lot of garbage data to make
it flush.
 
 or mod_security...

Maybe if you check the outgoing responses. If not I would say no.

 
 But this behaviour is not normal imho.

It is :-(.

This has something to do with the improper returning of the
EAGAIN status code in the filter chain.

BTW: It currently works with mod_proxy_ajp in httpd 2.2.x, but
not with mod_proxy_http.

Regards

Rüdiger




AW: Apache proxy behaviour...

2006-02-02 Thread Plüm , Rüdiger , VIS

-Ursprüngliche Nachricht-
Von: CASTELLE Thomas .
 
 Anyway, even if the Apache timeout is increased, Firewalls or browsers don't 
 like idle TCP/IP session either... without speaking of the users ;-)
 Regarding my problem, I tried to disable every modules (except mod_proxy of 
 course), and it still doesn't work...
 It seems to confirm what Rüdiger said...
 
In general mod_proxy_http of httpd 2.2.x is prepared for this task,
but it does not work because of the EAGAIN problem.
mod_proxy_http of httpd 2.0.x is not prepared at all for such things.

Regards

Rüdiger


AW: AW: proxy failover/load balance

2006-02-01 Thread Plüm , Rüdiger , VIS

-Ursprüngliche Nachricht-
Von: Robby Pedrica 

 Hi Rudiger,

 I've applied patches and recompiled. My results are as follows:
 
 1. apache starts up with the member 'b' disabled now
 2. if I shutdown the working member 'a' httpd, then manager shows the change 
 only when you try and access the web site. Moves from state ok to err
 3. If I bring the member 'a' back up again ( start httpd ) then manager shows 
 the move from err to ok only after trying to access the site.

2. and 3. work as designed. httpd only changes the status for a worker to err 
if a request failed in the middle or if it tried to assign a request
to this worker and that failed. Once the worker is in error state httpd will 
not try to assign a new request to this worker for retry seconds
(see parameter explanation at 
http://httpd.apache.org/docs/2.2/mod/mod_proxy.html#proxypass). After that 
grace period the worker gets new requests
assigned again and will switch back to ok once it has processed the assigned 
request successfully.
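The grace period described above maps to the per-member retry parameter of mod_proxy; a minimal sketch (the value of 30 seconds is a hypothetical example, not taken from the thread):

```
BalancerMember http://192.168.4.2:80 retry=30
```

With this, a worker that went into error state is not offered new requests for 30 seconds before httpd attempts to use it again.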

 
 This is definitely an improvement even if failover isn't working because at 
 least we can manually failover ...

Considering my explanation above, what do you mean by failover isn't working?

Regards

Rüdiger


AW: AW: AW: proxy failover/load balance

2006-02-01 Thread Plüm , Rüdiger , VIS

-Ursprüngliche Nachricht-
Von: Robby Pedrica 


  
 Thanks Rudiger,
 
 I've set redirect and route values - not sure if these are correct:
 
 <Proxy balancer://mycluster>
 BalancerMember http://192.168.4.2:80 redirect=1
 BalancerMember http://192.168.4.3:80 route=1 status=D
 </Proxy>
 
 I'm assuming that if member a. fails then requests will be redirected to 
 member b. - this is not happening currently.

This only works with session stickiness. So

ProxyPass / balancer://mycluster stickysession=SESSION_COOKIE
<Proxy balancer://mycluster>
BalancerMember http://192.168.4.2:80 route=a redirect=b
BalancerMember http://192.168.4.3:80 route=b status=Disabled
</Proxy>

Furthermore your backend should deliver a session cookie SESSION_COOKIE
which has .a / .b added to the session id.
If you don't have an application on the backend that does this (from previous 
statements
I assume that you are using a httpd as backend) you can use
mod_headers on the backend servers to set these cookies.

Header add Set-Cookie SESSION_COOKIE=anything.a;Path=/
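On the second backend the matching directive would accordingly carry the .b route suffix (following the .a / .b scheme described above):

```
Header add Set-Cookie SESSION_COOKIE=anything.b;Path=/
```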


Regards

Rüdiger


AW: AW: AW: proxy failover/load balance

2006-02-01 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Plüm, Rüdiger,
 
 This only works with session stickiness. So
 
 ProxyPass / balancer://mycluster stickysession=SESSION_COOKIE
 <Proxy balancer://mycluster>
 BalancerMember http://192.168.4.2:80 route=a redirect=b
 BalancerMember http://192.168.4.3:80 route=b status=Disabled
 </Proxy>
 
 Furthermore your backend should deliver a session cookie
 
 SESSION_COOKIE which has .a / .b added to the session id.
 If you don't have an application on the backend that does 
 this (from previous statements I assume that you are using a 
 httpd as backend) you can use mod_headers on the backend 
 servers to set these cookies.
 
 Header add Set-Cookie SESSION_COOKIE=anything.a;Path=/

I fear that the configuration above is not sufficient without the attached
patch.

Regards

Rüdiger



disabled_worker.diff
Description: disabled_worker.diff


AW: AW: AW: proxy failover/load balance

2006-02-01 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 
 I think we're in agreement that the current failover does
 not work as it should with HTTP, and is quite
 cumbersome to get it to work. :)

Apart from the fact that it currently does not even work without
patches :-).
So I am keen on feedback by Robby. I hope to find time to commit these
changes to the trunk tonight, so that it works at least in the cumbersome way 
:-).

 
 I hope to later on this week work on code that has
 a real hot standby status, and avoids the requirement
 for sticky sessions. It won't replace what's in
 there now (for AJP) but will make it easier
 to implement failover for simple tasks.
 

Sounds good.

Regards

Rüdiger


AW: AW: proxy failover/load balance

2006-02-01 Thread Plüm , Rüdiger , VIS
 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 
 Why the breaks? Certainly we still want to continue the
 for loop even if we see a valid setting. For example,
 to set a worker in DISABLED and STOPPED mode.

1. Currently there is no clear separator character.
2. Setting status=disabled will result in an 'Unknown status parameter option' error.

But maybe I just understand the syntax of status wrong and status should
not be composed of clear-text words but rather of something cryptic like

[eEdDsS\+-]+ 

where 

e,E is error
d,D is disabled
s,S is stopped

But as it is not documented I don't know :-)

Regards

Rüdiger



AW: AW: AW: AW: proxy failover/load balance

2006-02-01 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 
 On Feb 1, 2006, at 9:02 AM, Plüm, Rüdiger, VIS wrote:


  So I am keen on feedback by Robby. I hope to find time to 
 commit these 
  changes to the trunk tonight, so that it works at least in the
  cumbersome way :-).
 

I will cut the breaks from the patch to keep the current syntax alive.
The correct syntax should be defined and discussed later.
And of course documented, once there is a decision on that :-).


Regards

Rüdiger



AW: proxy failover/load balance

2006-01-31 Thread Plüm , Rüdiger , VIS

-Ursprüngliche Nachricht-
Von: Robby Pedrica 
Gesendet: Montag, 30. Januar 2006 09:27
Betreff: proxy failover/load balance


 wrt information provided by Mladen Turk, Jim Jagielski and Andreas Wieczorek:
 
 I'm having issues when using mod_proxy/mod_proxy_balancer and it appears 
 these have been alluded to on the dev mailing list:
 
 1. apache 220 doesn't appear to use the status=d value for a balancer member 
 when loading ( status=disabled doesn't work as apache complains about the

This seems to be a bug in set_worker_param of mod_proxy.c. It should exit the 
for loop once it found a valid thing.

 syntax ); when starting with status=d, apache starts fine but balancer 
 manager still indicates that host you set as disabled in config is actually 
 available

This seems to be a bug in init_balancer_members of mod_proxy_balancer.c which 
seems to overwrite the status of the balancer members with
PROXY_WORKER_INITIALIZED.

 2. when manually disabling a member in the balancer manager, it comes back 
 online automatically after about a minute or 2

What do you mean by online? Is it enabled again? If yes, this might be 
related to the bug above, as init_balancer_members is called
from the child_init hook. So I think every time httpd
creates a new child process this status gets reset.

Regards

Rüdiger


AW: Please Help!: Problem with Java Plug-in

2006-01-30 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Noranis Aris 

This is a user specific question. Please post it
to users@httpd.apache.org

Regards

Rüdiger



AW: mod_disk_cache behaviour with Vary: header

2006-01-25 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Laurent Perez 

 Sorry to up this thread again, but another wrong behaviour 
 appeared after I switched to 2.2 (source from
 http://apache.crihan.fr/dist/httpd/httpd-2.2.0.tar.gz) :-(
 
 The varying now works fine, with the .vary folder filled with 
 variations. But now, the cached files do not seem to be 
 checked on the disk anymore, so the cache is always being 
 rewritten from backend webapp queries. I have apache2.0 and 

This is a known issue with reverse proxies in 2.2.0. Please have
a look at http://issues.apache.org/bugzilla/show_bug.cgi?id=38017
and the attached patch
http://issues.apache.org/bugzilla/attachment.cgi?id=17342

Regards

Rüdiger



AW: [PATCH] mod_disk_cache: store/read array table

2006-01-24 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Ian Holsman 
 
 
 Brian Akins wrote:
  This is a rather nasty patch (sorry).  Basically, it changes the way
  arrays and tables are stored on disk.  This allows us to do a much 
  cleaner and quicker read_array and read_table.  It cuts down 
  significantly on the number of disk reads for header files 
 (one big one) 
  and the number of strdup's (it used apr_table_addn, for example).
  
  Bottom line is alot fewer system calls and allocations.  It 
 gives me a
  5% increase in mod_disk_cache across the board.
  
 
 does anyone have any objections to this patch?
 5% is a pretty nice gain imho.


To be honest I do not understand why Brian creates his own buffered
input / output reading. I think that header files are rarely larger than 4K.
And the current code already buffers 4K when APR_BUFFERED is set as a flag for
apr_file_open (which is set). So I currently cannot believe that we really
save disk io / syscalls. Of course the apr methods have more overhead for
buffering than Brian's code because they are thread safe, which is not needed
in this case IMHO. So I guess some cycles are saved. OTOH we tried to use apr to
abstract such things as buffered input away from the httpd code. As always
such solutions are generalized and create some overhead.

So I am currently -1 (vote not veto) on the buffering aspects of the patch.

Regarding the alternate method of saving the arrays and tables itself
I have not had the time to dig in deeper, so I cannot comment on
this aspect. The only thing I understood so far is that cycles should be saved
by dumping / reading the data structures directly instead of using the
apr access methods.

Regards

Rüdiger

[..cut..]



AW: mod_proxy_balancer: how to define failover (only)/hot standby behavior?

2006-01-24 Thread Plüm , Rüdiger , VIS

 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 

[..cut..]

 
 
 A sort of warm standby is something that I had planned to
 work into the balancer code post 2.2.1.

+1

AFAIK local_worker_only mentioned below does not exist any longer
in recent versions of mod_jk, but the recent version of mod_jk
knows the concept of disabled workers which are just doing
a hot standby (in contrast to a stopped worker which does not
get any requests at all). Plus you can change this status via
the management application.
Maybe also an interesting concept for mod_proxy_balancer.

Regards

Rüdiger



AW: mod_proxy_balancer: how to define failover (only)/hot standby behavior?

2006-01-24 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Mladen Turk 
 
 Like said earlier.
 
 Hot standby already works with mod_proxy balancer.
 The 'hot standby' BalancerMember must be initially 
 'disabled'. Other members must have 'redirect' option set to 

Maybe it is just me being blind. But I haven't found in the documentation
how to disable a BalancerMember. Is this a 'hidden' feature? :-)

 the name of 'hot standby' member.
 
 The thing we would need to backport from mod_jk 1.2.14+ is 
 hot-standby to 'domain'.

Do we currently have domain support at all with mod_proxy_balancer?

Regards

Rüdiger


AW: AW: mod_proxy_balancer: how to define failover (only)/hot standby behavior?

2006-01-24 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
  
  Maybe it is just me being blind. But I haven't found in the 
  documentation how to disable a BalancerMember. Is this a 'hidden' 
  feature? :-)
  
 
 The balancer-manager does that... well, lets the admin do that.

+1

Otherwise it is lost after httpd restart. So it should be made persistent
in the configuration.

Regards

Rüdiger



AW: Time for 2.0.56 ?

2006-01-23 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Justin Erenkrantz 
 
 What SVN rev are we talking about here?  If it's the Solaris mod_cgid
 thread check in modules/generators/config.m4, then yes, I 
 believe I have a
 +1 to 2.0.x; if not, well, here you go: +1.  =)  -- justin

I am talking about

http://svn.apache.org/viewcvs?rev=326018&view=rev

Regards

Rüdiger


AW: AW: PR#38123

2006-01-20 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Nick Kew

 
 That's just about what I had in mind.  But I'd hesitate to 
 use it without knowing whether someone had a reason for 
 putting it at the end of the function originally.  It would 
 affect the semantics of post_read_request, but is that fixing 
 a bug or damaging a feature?
 
 Anyone recollect the origins of that code?

The filter was added in r91192 
(http://svn.apache.org/viewcvs.cgi?rev=91192&view=rev)
4.25 years ago by Justin. Maybe he can remember.

Interestingly, the original position of the filter had been *before* the ap_die 
call, but
*after* the ap_send_error_response calls (which causes trouble in 38123).

The position of the filter was changed as a side effect of r97765
(http://svn.apache.org/viewcvs.cgi?rev=97765&view=rev)

I like svn praise :-)


Regards

Rüdiger



AW: PR#38123

2006-01-19 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Nick Kew

[..cut..]

 
 
 That's the same bug and fix as PR#37790!
 
 Which leads me to wonder, is there some good reason not to 
 insert the input filter unconditionally somewhere earlier in 
 request_post_read?  As it stands, it looks as if your fix has 
 the same problem as mine: namely, it fixes the immediate 
 problem but leaves the bug waiting to manifest itself anew in 
 other early error conditions.

What about the following patch instead (currently untested)?

Index: server/protocol.c
===
--- server/protocol.c   (revision 370457)
+++ server/protocol.c   (working copy)
@@ -902,6 +902,9 @@
                       "(see RFC2616 section 14.23): %s", r->uri);
     }
 
+    ap_add_input_filter_handle(ap_http_input_filter_handle,
+                               NULL, r, r->connection);
+
     if (r->status != HTTP_OK) {
         ap_send_error_response(r, 0);
         ap_update_child_status(conn->sbh, SERVER_BUSY_LOG, r);
@@ -910,8 +913,6 @@
     }
 
     if ((access_status = ap_run_post_read_request(r))) {
-        ap_add_input_filter_handle(ap_http_input_filter_handle,
-                                   NULL, r, r->connection);
         ap_die(access_status, r);
         ap_update_child_status(conn->sbh, SERVER_BUSY_LOG, r);
         ap_run_log_transaction(r);
@@ -934,8 +935,6 @@
         ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, r,
                       "client sent an unrecognized expectation value of "
                       "Expect: %s", expect);
-        ap_add_input_filter_handle(ap_http_input_filter_handle,
-                                   NULL, r, r->connection);
         ap_send_error_response(r, 0);
         ap_update_child_status(conn->sbh, SERVER_BUSY_LOG, r);
         ap_run_log_transaction(r);
@@ -943,8 +942,6 @@
         }
     }
 
-    ap_add_input_filter_handle(ap_http_input_filter_handle,
-                               NULL, r, r->connection);
     return r;
 }
 

Regards

Rüdiger



add_input_filter.diff
Description: add_input_filter.diff


AW: Problem: compiling mod_tidy with Apache 2.2

2006-01-19 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Sierk Bornemann 
 

[..cut..]

 --
 
 APR_BRIGADE_FOREACH does not longer exist, so there must be a 
 short fix 
 reflecting this.

The macro APR_BRIGADE_FOREACH has been marked deprecated in apr-util 0.9.x,
which is used by Apache 2.0.x, for quite a long time (3.5 years,
see http://svn.apache.org/viewcvs.cgi?rev=58680&view=rev).
Thus it is not contained in apr-util 1.2.2 which is used by Apache
httpd 2.2.0. So please use different code to iterate over a brigade,
as shown in the comments above the definition of APR_BRIGADE_FOREACH
in apr-util 0.9.x.
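For reference, the replacement idiom uses the APR ring accessors directly; a sketch (a code fragment assuming the usual APR headers and an existing brigade `bb`, not a complete program):

```c
#include "apr_buckets.h"

/* Iterate over a brigade without the deprecated APR_BRIGADE_FOREACH. */
apr_bucket *e;
for (e = APR_BRIGADE_FIRST(bb);
     e != APR_BRIGADE_SENTINEL(bb);
     e = APR_BUCKET_NEXT(e)) {
    /* process bucket e here; if a bucket may be deleted during
     * iteration, save APR_BUCKET_NEXT(e) before removing it */
}
```

This is exactly why the macro was deprecated: the old macro's expansion was unsafe when buckets were removed inside the loop body.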

Regards

Rüdiger


[..cut..]


AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-10 Thread Plüm , Rüdiger , VIS

 
 William A. Rowe, Jr. wrote:
  Ruediger Pluem wrote:

[..cut..]

  
  Quick consideration;
  
  Rather than look for HTTP_BAD_GATEWAY error bucket, we can actually
  generalize the problem.  ANY metadata bucket that isn't 
 recognized and
  handled by an intermediate filter probably indicates a problem; and
  therefore the result is a non-cacheable, broken response.
 
 Actually two cases.  In the error bucket case, it's 
 non-cacheable, and broken.

So what do you think should be done in this case with respect to the 
'brokenness'?

1. set r->connection->keepalive to AP_CONN_CLOSE
2. Do not send the last chunk marker in the case of a chunked encoding response
3. Do 1. and 2.

Next question is: Do we need to stop sending further data to the client
immediately?

 In the unrecognized bucket type case, it's non-cacheable (a 
 'complex' response),
 but it is likely serveable to the front end client.  In both 
 cases, if mod_cache
 doesn't grok what it sees, then something 'interesting' is 
 going on and we would
 not want to deposit into the cache.

I agree with the goals, but making it non cacheable is not easy to add to the
current patch, because the HTTP_OUTERROR filter is a protocol filter that is
run after the CACHE_SAVE filter. So setting r->no_cache there may be too
late in the case that the error bucket and eos bucket are in the same
brigade. This is the reason why we actually set r->no_cache on the proxy
side in ap_proxy_backend_broke which is called from the scheme handlers.
From my current perspective this would mean that the CACHE_SAVE filter must
be taught to deal with these buckets.
But apart from the case that no content-length is present the CACHE_SAVE
filter itself does not iterate over the brigade.
So we would need to add an additional loop over the brigade inside
of CACHE_SAVE filter to scan for these meta buckets.
Furthermore I think we need to keep in mind that, if we think that this
response is not worth caching, we should make any upstream proxies
think the same. In the case of a broken backend this is achieved (depending
on the transfer encoding) by

1. sending less content than the content-length header announces
2. not sending the last-chunk marker.

But in the case of an unrecognized bucket type we must let the upstream
proxies know that it is not cacheable via headers.
But this could be impossible if the headers have already been sent.

Regards

Rüdiger


AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-16 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Justin Erenkrantz 
 Gesendet: Freitag, 16. Dezember 2005 09:19
 An: dev@httpd.apache.org
 Betreff: Re: AW: AW: 2.2 mod_http_proxy and partial pages
 
 
 On Thu, Dec 15, 2005 at 10:12:57PM +0100, Ruediger Pluem wrote:
  I think we have to simulate to the client what happened to 
 us on the backend:
  A broken connection.
 
 I mostly agree.
 
 However, Roy's veto is predicated on us not doing anything 
 that would cause
 a hypothetical (*duck*) Waka protocol filter from having the 
 underlying
 connection closed.  The point Roy's made is that Waka will 

I do not intend to close the connection myself. Currently it
will be closed because c->keepalive is set to AP_CONN_CLOSE
(an approach also suggested in Roy's patch).
The only addition I want to make is that in the chunked case
the chunked filter should not send the closing chunk, to make
it clear to the client that something has broken.
The question that remains to me: does it hurt that the core output
filter removes the error bucket once it has seen it?
Does this address this point?

Currently I am away from my development env. I hope I can post
a complete patch with all my ideas by tomorrow.

Regards

Rüdiger


AW: [PATCH] Rename to Apache D

2005-12-15 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Brad Nicholes 
 Gesendet: Donnerstag, 15. Dezember 2005 16:39
 An: 
 Betreff: Re: [PATCH] Rename to Apache D
 
 
 
   You're not really serious about this are you?  It is a 
 little premature to rename something  to 'd' that is still 
 very much 'httpd'. 
 Get the code in place first and then see if it makes sense to 
 worry about trivial things like renaming the binary.

+1

Regards

Rüdiger

[..cut..]


AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-15 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 Gesendet: Donnerstag, 15. Dezember 2005 17:02
 An: dev@httpd.apache.org
 Betreff: Re: AW: 2.2 mod_http_proxy and partial pages
 
 

[..cut..]

 
  Sorry, but I think I have to disagree.
  There is nothing that can be handled anymore since the headers had
  been sent to the client.
  The only part of the chain that handles error buckets so 
 far is the  
  http header filter which is gone at
  this point of time.
 
 IMO, that's the problem... The core output filter should be 
 aware of this error. Not sure if magically noticing this 
 when removing empty buckets is the right solution... 
 

No problem. Let's discuss where to place this. I just placed it
into the remove_empty_buckets function as I wanted to avoid running
a loop twice over the brigade. I think I need some kind of loop
because otherwise I might miss this bucket (e.g. in remove_empty_buckets,
if there are other meta buckets before the error bucket).
Having the check only in writev_nonblocking might lead to a miss of
this bucket.

Anyway I detected another problem that is also there with my current
patch proposal. I think we need to make the ap_http_chunk_filter aware of this
error bucket. Otherwise it will add the closing zero length chunk to
the response once it sees the eos bucket. This would give the client the
impression that the response had been correct and complete (provided
that the response was in chunked encoding). If the client is a proxy
this could lead to a cache poisoning. So my proposal is that we do
*not* insert the closing zero length chunk to signal the client
that the response is not complete and broke in the middle.


Regards

Rüdiger


Re: 2.2 mod_http_proxy and partial pages

2005-12-09 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Justin Erenkrantz 
 Gesendet: Freitag, 9. Dezember 2005 06:22
 An: dev@httpd.apache.org
 Betreff: Re: AW: 2.2 mod_http_proxy and partial pages
 

[..cut..]

 Even with an EOS bucket, how will we indicate that the 
 connection should be aborted?  (i.e. don't try to do any 
 further writes?)
 
 (See below as to why an EOS doesn't do anything.)
 
  If you have another idea how to specify via the handler that the
  connection
  needs to be dropped, I'm all ears.  But, I couldn't see one.  --  
  justin
  
  I would extend the EOS bucket data to be an errno and then have 
  mod_cache check for that data != 0 when it does its EOS check.
 
 For httpd's filters, an EOS bucket data doesn't attempt a close of the
 stream: in fact, EOS doesn't do anything to the socket.  By 
 the time we start writing the body, all of the filters that 

Sorry for possibly getting back to a dead horse, but what if we set
c->keepalive to AP_CONN_CLOSE and send an EOS bucket?
Of course currently this would only cause the connection to the client to be 
closed.
For the notification of the filters on the stack that something has gone
wrong we need additional measures, such as the error code addition to the
EOS bucket.

[..cut..]

Regards

Rüdiger



Re: 2.2 mod_http_proxy and partial pages

2005-12-07 Thread Plüm , Rüdiger , VIS
 -On December 7, 2005 2:00:19 AM +0100 Ruediger Pluem [EMAIL PROTECTED] 
 wrote:

 The patches to mod_proxy_http we identified here on list do indeed work
 and are in as r354628.

 Sorry for stepping in that late into the discussion, but wouldn't it be
 better to fix that after the return from proxy_run_scheme_handler in
 mod_proxy?

 The error has to be flagged inside the HTTP scheme before the error is 
 lost.  Without this patch, mod_proxy_http returns 'success' 
 unconditionally.  That is clearly wrong and that's what I changed.

Yes, of course the scheme handler must signal the proxy handler that the backend
broke. Just returning 'success' in this case is of course plain wrong.


 I fear that mod_proxy_ajp is affected by the same problem that
 is now fixed in mod_proxy_http. This means we put the burden of handling
 this in a unified way on each proxy backend module. How about letting the
 schema_handler simply return a specific value (DONE or whatever) to
 signal that the backend broke in the middle of sending the response and
 let mod_proxy handle the dirty work.

 That's what it does right now.  What would you like to change?

I would like to set the c-aborted in mod_proxy's proxy_handler after the
run_scheme_handler.

Reason:

1. We can define a clear interface for the scheme handlers here:
   If the backend broke before you sent headers just return BAD_GATEWAY
   and send nothing, if it broke afterwards just return BROKEN_BACKEND
   (or whatever you like that should be defined for this case).
   The proxy handler would handle this BROKEN_BACKEND return code and
   do the 'right' thing (currently setting c-aborted).
   Thus we do not have the need to put the burden of the details on
   the schema handler (why I regard it as a burden see 2.)

2. I am not 100% percent happy with the c-aborted approach as the original
   intention of c-aborted was another one (The connection to the *client* broke
   not to the *backend*). I admit that I do not see any other approach
   currently, so we should stick with this, but if we decide to change this
   later on and we follow 1. then it is much easier to change as we have this
   code only in *one* location and not in every scheme handler.

[..cut..]

 An error bucket is already sent down the chain when the specific connection 
 error I hit with the chunked line occurs through HTTP_IN, but that 
 accomplishes little because the HTTP filters which understand the error 
 buckets have already gone as the headers have been sent.

 FWIW, an error bucket, by itself, would not be enough; the connection close 
 logic is only implemented well outside of the filter logic.  At best, it 
 has to be an error bucket combined with a returned status code that can be 
 returned all the way up.  -- justin

Ahh, ok. Thanks for clarification.

Regards

Rüdiger



RE: 2.2 mod_http_proxy and partial pages

2005-12-07 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Justin Erenkrantz 
 Gesendet: Mittwoch, 7. Dezember 2005 17:08
 An: dev@httpd.apache.org
 Betreff: Re: 2.2 mod_http_proxy and partial pages
 

[..cut..]

 
 Feel free to commit a patch.  =)

I will do so :).

 
  2. I am not 100% percent happy with the c-aborted approach 
 as the original
 intention of c-aborted was another one (The connection 
 to the *client* broke
 not to the *backend*). I admit that I do not see any 
 other approach
 currently, so we should stick with this, but if we 
 decide to change this
 later on and we follow 1. then it is much easier to 
 change as we have this
 code only in *one* location and not in every scheme handler.
 
 But, that's exactly what we want: we must abort the 
 connection to the client because we can't complete the 
 response.  -- justin

Yes, I know. Maybe this is nitpicking, but my original understanding is that
c->aborted is set if the connection to the client has broken for whatever 
external
reason on the network route between client and server, not if we decide that we
need to / should break this connection to the client because something has 
gone
wrong on the backend. But as said, this is possibly just nitpicking :-).

Regards

Rüdiger

 


AW: 2.2 mod_http_proxy and partial pages

2005-12-07 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Justin Erenkrantz 
 Gesendet: Mittwoch, 7. Dezember 2005 17:30
 An: dev@httpd.apache.org
 Betreff: Re: 2.2 mod_http_proxy and partial pages
 
 
  On Wed, Dec 07, 2005 at 05:24:46PM +0100, Plüm, Rüdiger, VIS wrote:

[..cut..]

 
 Nope, that's the flag we set when we want the core to drop 
 the connection. I thought that it would be set by the filters 
 when a connection was dropped, but, as I said earlier in this 
 thread, I'm wrong.  The filters will never ever set it.  -- justin
 

Ok. Then I withdraw my objections against the setting of c->aborted.
I just understood its purpose wrong. Thanks for clarification.
Regarding the question where to set this (in scheme_handler vs after
run_scheme_handler hook in proxy handler) I will decide once I have had
a closer look at mod_proxy_ajp.

Regards

Rüdiger


AW: svn commit: r354779 - /httpd/httpd/branches/2.2.x/STATUS

2005-12-07 Thread Plüm , Rüdiger , VIS


 -Ursprüngliche Nachricht-
 Von: Jim Jagielski 
 Gesendet: Mittwoch, 7. Dezember 2005 17:43
 An: Justin Erenkrantz
 Cc: dev@httpd.apache.org; [EMAIL PROTECTED]
 Betreff: Re: svn commit: r354779 - /httpd/httpd/branches/2.2.x/STATUS

 
 Sure... Right now, there appears to be some questions on
 why we are treating some requests differently and how
 that affects the pool. Ruediger was looking into this,
 with the end result that some areas of this, such as
 what warrants a reusable connection, may be changed.
 I did not want to rework what was the original logic
 layout too much, just simply fix a quick problem.
 The old code has the same if-else-if structure, and
 I didn't want to disturb too much since Ruediger stated
 that this was the exact section he was playing
 around with.

Yes, that was the area I wanted to play in, but I have not found time so far :-(.
So if you already have ideas on this feel free to move forward
as I do not want to block progress due to this temporary lack of time
on my side. I am sure it is possible for both of us to work on
this as we share our ideas on the list anyway.

Regards

Rüdiger

[..cut..]


Re: 2.2 mod_http_proxy and partial pages

2005-12-07 Thread Plüm , Rüdiger , VIS


 -Original Message-
 From: Roy T. Fielding 
 Sent: Thursday, December 8, 2005 03:17

[..cut..]

  Ok. Then I withdraw my objections against the setting of
 c->aborted. I just understood its purpose wrong. Thanks for clarification.
 
 No, you understood its purpose correctly.  I have no idea 
 what Justin is talking about -- maybe the proxy's connection 
 to the outbound server?
 c->aborted is only set on the inbound connection when a previous write
 to that connection has failed.

Ok. Any further opinions on the purpose of c->aborted that support either
Roy's or Justin's point of view on this?

 
 Setting the inbound c->aborted within a filter just to
 indicate that the outbound connection has died is wrong -- it
 prevents other parts of the server (that are working fine)
 from correctly handling the bad gateway response.

Just to make sure that we are talking about the same thing here:
the client will never become aware of the bad gateway, as it has already
received the headers. So we are only talking about handling
this bad gateway situation correctly internally, so that for example
mod_cache does not cache an incomplete and thus broken response.
The client is doomed anyway.
As far as I can see, everybody agrees that the connection to the
client must be closed in this situation.
Stupid question: can't we enforce at least this by setting
c->keepalive to AP_CONN_CLOSE? Of course this does not solve the problem
of making the remaining parts of the code aware of the bad gateway situation.
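[Editorial sketch] The "at least close the client connection" idea can be made concrete as below. This is a hedged illustration, not actual httpd code: the `conn_rec`/`request_rec` typedefs are simplified stand-ins for the real structures from httpd.h, kept just large enough to show the two fields under discussion (`keepalive` with `AP_CONN_CLOSE`, and `no_cache`); `mark_backend_failure` is a hypothetical helper name.

```c
/* Stand-in types mirroring a tiny slice of httpd's conn_rec and
 * request_rec (normally declared in httpd.h). */
typedef enum {
    AP_CONN_UNKNOWN,
    AP_CONN_CLOSE,
    AP_CONN_KEEPALIVE
} ap_conn_keepalive_e;

typedef struct {
    ap_conn_keepalive_e keepalive;  /* connection reuse policy */
    int aborted;                    /* set when a write to the client failed */
} conn_rec;

typedef struct {
    conn_rec *connection;
    int no_cache;                   /* tells caching layers not to store this */
} request_rec;

/* On a mid-response backend failure: never reuse the client
 * connection, and veto caching of the truncated body. */
static void mark_backend_failure(request_rec *r)
{
    r->connection->keepalive = AP_CONN_CLOSE;
    r->no_cache = 1;
}
```

This closes the client connection after the current response and keeps mod_cache from storing the partial entity, but, as noted above, it does not by itself propagate the bad-gateway condition to other interested code.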

Regards

Rüdiger

[..cut..]



Re: [PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-11 Thread Plüm , Rüdiger , VIS
[..cut..]

Thanks!  I've committed this in r231486, r231487, and r231488.  I re-split 
them up to make it easier for people to review the commits.

However, there remains an issue in mod_disk_cache's remove_url: I don't think 
it properly handles removing the Vary condition files.  I went ahead and 
committed it because it gets us closer to the desired solution.

The code will remove the header file and the disk file; but it also likely 
needs to go up a 'level' and remove all variants.  Because if we get a 404 on 
a varied entity, it also means that all variants should be removed, no?

Should they really be removed?
In the case that you are caching a response from a backend app server or
a CGI script, I can imagine situations where one variant is a 404 and another
one is not. Dw also pointed that out.
From my personal point of view we should keep them and let the next
revalidation on them, triggered by a client, decide whether they should be
removed or not.


I think what this means is that we need to recompute the hash - verify that it 
either meets the 'base' URL (we're done then), or we need to then go through 
and whack all of the varied headers/bodies and such.  Now, I *think* it's 
possible after Paul's latest rewrite to just whack some subdirectories - but 
I'm fuzzy on this.  (Paul??)

Yes, I think we just need to remove the .vary subdirectory with all its
subdirectories and files.
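[Editorial sketch] Removing the .vary subtree amounts to a depth-first delete: unlink each file, then rmdir each directory once its children are gone. mod_disk_cache would use the APR equivalents (apr_file_remove / apr_dir_remove); the plain-POSIX sketch below, with the hypothetical name `remove_tree`, only illustrates the recursion.

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Recursively delete a directory tree, e.g. a cache entry's ".vary"
 * subdirectory together with all its variant headers and bodies.
 * Returns 0 on success, -1 on the first failure. */
static int remove_tree(const char *path)
{
    struct stat st;
    if (lstat(path, &st) != 0)
        return -1;

    if (S_ISDIR(st.st_mode)) {
        DIR *d = opendir(path);
        struct dirent *e;
        if (!d)
            return -1;
        while ((e = readdir(d)) != NULL) {
            char child[4096];
            if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                continue;
            snprintf(child, sizeof(child), "%s/%s", path, e->d_name);
            if (remove_tree(child) != 0) {   /* depth first */
                closedir(d);
                return -1;
            }
        }
        closedir(d);
        return rmdir(path);   /* directory is empty by now */
    }
    return unlink(path);
}
```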


Does this make sense?  Or, am I missing something?  -- justin

Regards

Rüdiger


Re: [PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-10 Thread Plüm , Rüdiger , VIS
--On August 8, 2005 1:25:46 PM +0100 Colm MacCarthaigh [EMAIL PROTECTED] 
wrote:

 O.k., I've merged our two patches, but I've changed a few things, tell
 me if there's anything you think is wrong;

Would you mind writing up a log message for this patch?

I've lost track of what it's supposed to do.  ;-)


I'll try to sort things out. Colm, please correct me if I mix things up:

Colm merged his and my patch, which solved the following issues:

Colm's patch: if the headers for a revalidated cache entry cannot be stored
for whatever reason, then remove this cache entry from the cache rather than
trying to revalidate it each time. Quote from the comments of Colm's patch:

/* Before returning we need to handle the possible case of an
 * unwritable cache. Rather than leaving the entity in the cache
 * and having it constantly re-validated, now that we have recalled 
 * the body it is safe to try and remove the url from the cache.
 */

My patch: it tries to delete cache entries for which a non-cacheable
response (mostly focused on 404s) was delivered during revalidation.
Basically this was already implemented, but

- the remove_url function of mod_disk_cache was empty
- it turned out that error responses sometimes bypass the CACHE_SAVE filter
  (canned error responses) or do not call it with the original request.

Those things are fixed by my patch, which I developed during ApacheCon
after the discussion with you, Paul and Sander.

Colm's patch relies on the implementation of remove_url in mod_disk_cache
to get the file really deleted.

Later on I sent a patch on top of the merged patch which

- moves some logging messages that produced misleading output
- also removes the directory structure of the cached files as
  far as possible. As Colm correctly pointed out, leaving the empty
  directories on the disk is a pain for the filesystem, so they
  should be removed.
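[Editorial sketch] The "remove the directory structure as far as possible" step can be illustrated as follows. This is plain POSIX rather than the APR calls the actual patch would use, and `unlink_and_prune` / `cache_root` are illustrative names: after unlinking the cache file, each now-empty parent hash directory is rmdir'ed, stopping at the cache root or at the first non-empty directory (where rmdir fails).

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Delete a cached file, then walk back up its hash-directory chain
 * and remove every parent directory that is now empty.  Pruning
 * stops at cache_root, or at the first directory rmdir() refuses
 * to remove (typically ENOTEMPTY). */
static void unlink_and_prune(const char *file, const char *cache_root)
{
    char path[4096];
    size_t rootlen = strlen(cache_root);

    if (unlink(file) != 0)
        return;

    snprintf(path, sizeof(path), "%s", file);
    for (;;) {
        char *slash = strrchr(path, '/');
        if (!slash)
            break;
        *slash = '\0';                 /* step up one directory level */
        if (strlen(path) <= rootlen)
            break;                     /* never remove the cache root */
        if (rmdir(path) != 0)
            break;                     /* directory not empty: stop */
    }
}
```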

Don't hesitate to ask if you have further questions or would like to have this
patch broken into different chunks for easier reviewing / committing :-).


Regards

Rüdiger