Re: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Graham Leggett

Gregor J. Rothfuss wrote:

i am trying to use mod_proxy_balancer with a backend that is in turn 
using name-based virtual hosts.


it seems that mod_proxy_balancer doesn't honor ProxyPreserveHost (both 
2.2.0 and trunk), and does not send the Host: header to the backend.


would there be interest in a patch for that or am i attempting something 
dumb?


The Host header should always be sent to the backend; not doing so 
violates HTTP/1.1. This sounds like a bug of some kind.


Regards,
Graham
--




AW: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Graham Leggett 
 
 Gregor J. Rothfuss wrote:
 
  i am trying to use mod_proxy_balancer with a backend that is in turn
  using name-based virtual hosts.
  
  it seems that mod_proxy_balancer doesn't honor 
 ProxyPreserveHost (both
  2.2.0 and trunk), and does not send the Host: header to the backend.
  
  would there be interest in a patch for that or am i attempting 
  something
  dumb?
 
 The Host header should always be sent to the backend; not doing so 
 violates HTTP/1.1. This sounds like a bug of some kind.

This is not the problem. A host header *is* sent to the backend, but
it is the *wrong* one (the hostname of the worker I assume and not
the hostname of the reverse proxy).

Regards

Rüdiger



AW: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Plüm , Rüdiger , VIS


 -----Original Message-----
 From: Gregor J. Rothfuss 
 
 
 hi,
 
 i am trying to use mod_proxy_balancer with a backend that is in turn 
 using name-based virtual hosts.
 
 it seems that mod_proxy_balancer doesn't honor 
 ProxyPreserveHost (both 
 2.2.0 and trunk), and does not send the Host: header to the backend.

After a first quick look at the code on trunk I cannot see a problem there.
Can you please post your config here, so that we can rule out a config
problem?

Furthermore could you please open a bug in bugzilla for this? This makes
things easier to track and to reference.


Regards

Rüdiger


Re: AW: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Gregor J. Rothfuss

Plüm wrote:


After a first quick look at the code on trunk I cannot see a problem there.
Can you please post your config here, so that we can rule out a config
problem?


let me know if this snip is enough:

NameVirtualHost *:8080
ProxyRequests Off
ProxyPreserveHost On

<Proxy balancer://tiles_live_cluster/>
   BalancerMember 192.168.1.54:8080
   BalancerMember 192.168.1.55:8080
   BalancerMember 192.168.1.56:8080
   BalancerMember 192.168.1.57:8080
   BalancerMember 192.168.1.58:8080
   BalancerMember 192.168.1.59:8080
   BalancerMember 192.168.1.60:8080
   BalancerMember 192.168.1.61:8080
   BalancerMember 192.168.1.62:8080
   BalancerMember 192.168.1.63:8080
</Proxy>

#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for all requests that do not
# match a ServerName or ServerAlias in any VirtualHost block.
#
<VirtualHost *:8080>
ServerName t0.tiles.com
ServerAlias t1.tiles.com
ServerAlias t2.tiles.com
ServerAlias t3.tiles.com

ProxyPreserveHost On
ProxyPass / balancer://tiles_live_cluster/ nofailover=On
ProxyPassReverse / balancer://tiles_live_cluster/
</VirtualHost>

<Proxy balancer://live_cluster>
   BalancerMember 192.168.1.54:8080
   BalancerMember 192.168.1.55:8080
   BalancerMember 192.168.1.56:8080
   BalancerMember 192.168.1.57:8080
   BalancerMember 192.168.1.58:8080
   BalancerMember 192.168.1.59:8080
   BalancerMember 192.168.1.60:8080
   BalancerMember 192.168.1.61:8080
   BalancerMember 192.168.1.62:8080
   BalancerMember 192.168.1.63:8080
</Proxy>

<VirtualHost *:8080>
ServerName customer.server.com
ServerAlias tiles.server.com
ServerAlias tiles0.server.com
ServerAlias tiles1.server.com
ServerAlias tiles2.server.com
ServerAlias tiles3.server.com

ProxyPreserveHost On
ProxyPass / balancer://live_cluster/ nofailover=On
ProxyPassReverse / balancer://live_cluster/
</VirtualHost>



Furthermore could you please open a bug in bugzilla for this? This makes
things easier to track and to reference.


will do if it is not a PEBKAC issue (problem exists between keyboard and 
chair) ;)


--
http://43folders.com/2005/09/19/writing-sensible-email-messages/


Re: BalancerMembers are doubled workers

2006-02-13 Thread Jim Jagielski


On Feb 7, 2006, at 3:50 PM, Ruediger Pluem wrote:

During my work on PR 38403 (http://issues.apache.org/bugzilla/show_bug.cgi?id=38403)
I noticed that BalancerMembers appear twice in the worker list. First they get
created by ap_proxy_add_worker in add_member of mod_proxy.c and afterwards they
are cloned by ap_proxy_add_worker_to_balancer, also called from add_member.
Can anybody remember the rationale for this?
These worker twins share the same reslist, but use different scoreboard slots.
This is at least confusing without comment.



I can't totally recall, but I think it was initially set up
that way to allow for some sort of parent-child/master-slave
setup... I need to look over the old emails.


Re: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Jim Jagielski

I can't recreate that here... Can you provide more info?

On Feb 13, 2006, at 12:43 AM, Gregor J. Rothfuss wrote:


hi,

i am trying to use mod_proxy_balancer with a backend that is in  
turn using name-based virtual hosts.


it seems that mod_proxy_balancer doesn't honor ProxyPreserveHost  
(both 2.2.0 and trunk), and does not send the Host: header to the  
backend.


would there be interest in a patch for that or am i attempting  
something dumb?


thanks,

-gregor





[RELEASE CANDIDATE] Apache-Test 1.28

2006-02-13 Thread Geoffrey Young
we are pleased to announce a new release candidate for the Apache-Test
distribution.

  http://people.apache.org/~geoff/Apache-Test-1.28-dev.tar.gz

please give it a whirl and report back success or failure.

prior to the 1.26 release it was discovered that Apache-Test didn't play
nice with boxes that had both mp1 and mp2 installed.  we thought we had the
problem licked but apparently we didn't, so this release contains another
try.  if you're running mod_perl and can't do both

  $ perl Makefile.PL -httpd /path/to/httpd-1.X/bin/httpd

and

  $ perl Makefile.PL -httpd /path/to/httpd-2.X/bin/httpd

and have 'make test' choose and run against the proper httpd version
something is very wrong.  hopefully it's all fixed now, but if you
experience any problems please report them.

--Geoff


Changes since 1.27:

add need_imagemap() and have_imagemap() to check for mod_imap
or mod_imagemap [ Colm MacCarthaigh ]

shortcuts like need_cgi() and need_php() no longer spit out
bogus skip messages  [Geoffrey Young]

Adjust Apache::TestConfig::untaint_path() to handle relative paths
that don't start with /.  [Stas]

If perlpath is longer than 62 chars, some shells on certain platforms
won't be able to run the shebang line, so when seeing a long perlpath
use the eval workaround [Mike Smith [EMAIL PROTECTED]]

Location of the pid file is now configurable via the command line
-t_pid_file option [Joe Orton]

remove the mod_perl.pm entry from %INC after Apache::Test finishes
initializing itself.  because both mp1 and mp2 share the entry,
leaving it around means that Apache::Test might prevent later modules
from loading the real mod_perl module they're interested in, leading
to bad things  [Geoffrey Young]

use which(cover) to find the cover utility from Devel::Cover and run
it only if found. [Stas]

Devel::Cover magic is now fully integrated.  no more modperl_extra.pl
or extra.conf.in fiddling - 'make testcover' should be all you need
to do now [Geoffrey Young]

Implemented a magic @NextAvailablePort@ to be used in config files to
automatically allocate the next available port [Stas] (see the example
after this list)

Adjust Apache::TestConfig::add_inc to add lib/ in separate call to
lib::->import at the very end of @INC manipulation to ensure it'll be
on top of @INC. For some reason lib has changed to add directories in
a different order than it did before. [Stas]
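
For the @NextAvailablePort@ entry above, a minimal hypothetical fragment of a
test configuration template might look like this; Apache-Test substitutes the
token with a free port when it generates the real configuration:

  # extra.conf.in (hypothetical fragment)
  Listen @NextAvailablePort@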


Re: [RELEASE CANDIDATE] Apache-Test 1.28

2006-02-13 Thread Geoffrey Young


Mark Galbreath wrote:
 I'm drawing a blank

for those following only dev@httpd.apache.org, and thus may be unaware of
what Apache-Test is, here's the deal...

Apache-Test

  http://perl.apache.org/Apache-Test/

is the engine that drives the perl-framework

  http://svn.apache.org/viewcvs.cgi/httpd/test/trunk/perl-framework/

which is part of the httpd project.  the perl-framework used to have its own
mailing list that received these release-based announcements, but recently
that list was dissolved and perl-framework discussion brought to [EMAIL PROTECTED].
I thought it was appropriate to continue announcing releases here since
many of the httpd developers use Apache-Test and without some kind of
announcement here they wouldn't know of new releases.  but if the
perl-framework savvy folks on [EMAIL PROTECTED] don't want to receive A-T
release-type announcements in the future that's fine by me, too.

--Geoff


Re: Getting Started on mod_python 3.3.

2006-02-13 Thread Jorey Bump

Jim Gallacher wrote:


This is how I would set priorities:


Try and assign most of the issues to someone. This is a bit of PR spin, 
but I think it looks bad when there are a large number of open issues 
with no assignee. To the public it may look like the project is not 
being actively maintained.


I think the same can be said for the lack of Apache 2.2 support. 
Personally, I would put this (as well as backporting 2.2 support to 
mod_python 3.2) as the number one priority, for both PR and pragmatic 
reasons.


The need for compatibility with Apache 2.0 & 2.2 is going to be an issue 
for quite a while, and should be addressed before mod_python undergoes 
some of the significant changes that have been discussed.





Re: Change in how to configure authorization

2006-02-13 Thread Brad Nicholes
 On 2/10/2006 at 5:58:43 pm, in message
[EMAIL PROTECTED],
[EMAIL PROTECTED] wrote:
 Joshua Slive wrote:
 On 1/26/06, Ian Holsman [EMAIL PROTECTED] wrote:
 
Hi Joshua:

httpd.conf.in has the new structure
httpd-std.conf (the one I was looking at) didn't ;(
 
 
 Hmmm... httpd-std.conf doesn't exist in trunk.
 
 Just ran into this and couldn't quite believe what I was seeing.
 
 I have a similar config on a server and basically unless you're very
 careful you end up shutting people out! This change in auth seems to
 make no sense to me.
 
 It's adding a lot of complexity to config files. Do we really need to
 make this change? Really?
 
 At the very least can someone please document how config files need to
 be changed... And no, I don't think having it in a sample config file is
 enough.
 
 davi

Yes, we do need to make this change.  With the provider based
rearchitecting of authentication in httpd 2.2, this left authorization
in an unpredictable state especially when using multiple authorization
types.  You were never quite sure which one was going to happen first
and had no way to order them or control them.  With that, there was also
a growing demand to be able to apply AND/OR logic to the way in which
authorization is applied.  So basically this change brings authorization
up to the same level of power and flexibility that currently exists in
httpd 2.2 for authentication.  Hence being new functionality, there
are bound to be bugs that need to be fixed, especially with backwards
compatibility.  So let's get the bugs identified and fixed.

Brad 
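
(For readers trying to picture the AND/OR logic Brad describes, here is a hedged
sketch. It uses the <RequireAll>/<RequireAny> container syntax that this work
eventually became in httpd 2.4; the directive names on 2006-era trunk may have
differed, and the path, group and user names are made up for illustration.)

<Directory /www/private>
    AuthType Basic
    AuthName "private area"
    AuthBasicProvider file
    AuthUserFile conf/passwords
    # allow: (member of staff AND on the internal net) OR the auditor account
    <RequireAny>
        <RequireAll>
            Require group staff
            Require ip 10.0.0.0/8
        </RequireAll>
        Require user auditor
    </RequireAny>
</Directory>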


Re: svn commit: r377053 - /httpd/httpd/trunk/modules/proxy/mod_proxy_http.c

2006-02-13 Thread Joe Orton
On Sat, Feb 11, 2006 at 08:57:14PM -, [EMAIL PROTECTED] wrote:
 Author: rpluem
 Date: Sat Feb 11 12:57:12 2006
 New Revision: 377053
 
 URL: http://svn.apache.org/viewcvs?rev=377053&view=rev
 Log:
 * Do not remove the connection headers from r->headers_in. They are needed
   by the http output filter to create the correct connection response headers.
   Instead work on a copy of r->headers_in.
 
 PR: 38524

This change (I think) is triggering the bad pool ancestry abort() in the 
tables code: the proxy tests in the test suite are all dumping core in 
APR_POOL_DEBUG builds since yesterday.  Here's a sample backtrace:

#0  0x002a96514745 in raise () from /lib64/tls/libc.so.6
No symbol table info available.
#1  0x002a96515eb3 in abort () from /lib64/tls/libc.so.6
No symbol table info available.
#2  0x002a95e629e4 in apr_table_copy (p=0x6c8980, t=0x6c54d0)
at tables/apr_tables.c:403
new = (apr_table_t *) 0x79f960
#3  0x002a9a735b05 in ap_proxy_http_request (p=0x6c8980, r=0x6f1e70, 
p_conn=0x705ca0, origin=0x6d2e80, conf=0x6a6ff0, uri=0x0, 
url=0x6edce0 /, server_portstr=0x7fbfff7940 :8541)
at mod_proxy_http.c:729
here = (struct apr_bucket *) 0x6
c = (conn_rec *) 0x6c4e20
bucket_alloc = (apr_bucket_alloc_t *) 0x834d08
header_brigade = (apr_bucket_brigade *) 0x6dc840
input_brigade = (apr_bucket_brigade *) 0x2a9a737e2c
temp_brigade = (apr_bucket_brigade *) 0x6ffc40
e = (apr_bucket *) 0x
buf = 0x633ed0 Host: localhost.localdomain:8529\r\n
headers_in_array = (const apr_array_header_t *) 0x43a203
headers_in = (const apr_table_entry_t *) 0x677ef0
counter = 0
status = 6504144
rb_method = RB_INIT
old_cl_val = 0x0
old_te_val = 0x0
bytes_read = 0
bytes = 6229376
force10 = 0
headers_in_copy = (apr_table_t *) 0x633ed0
...


Re: Change in how to configure authorization

2006-02-13 Thread Joe Orton
On Mon, Feb 13, 2006 at 08:26:39AM -0700, Brad Nicholes wrote:
 Yes, we do need to make this change.  With the provider based 
 rearchitecting of authentication in httpd 2.2, this left authorization 
 in an unpredictable state especially when using multiple authorization 
 types.  You were never quite sure which one was going to happen first 
 and had no way to order them or control them.  With that, there was 
 also a growing demand to be able to apply AND/OR logic to the way in 
 which authorization is applied.  So basically this change brings 
 authorization up to the same level of power and flexibility that 
 currently exists in httpd 2.2 for authentication.  Hence being new 
 functionality, there are bound to be bugs that need to be fixed, 
 especially with backwards compatibility.  So let's get the bugs 
 identified and fixed.

Could you have a look at making the test suite pass again, to that end?

I tried to port mod_authany (c-modules/authany/mod_authany.c) to the 
trunk authz API, but to no avail.  The tests which fail are:

t/http11/basicauth..# Failed test 2 in t/http11/basicauth.t at 
line 24
FAILED test 2
Failed 1/3 tests, 66.67% okay
t/security/CVE-2004-0811# Failed test 1 in 
t/security/CVE-2004-0811.t at line 14
# Failed test 2 in t/security/CVE-2004-0811.t at line 14 fail #2
# Failed test 3 in t/security/CVE-2004-0811.t at line 14 fail #3
# Failed test 4 in t/security/CVE-2004-0811.t at line 14 fail #4
FAILED tests 1-4

joe


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Jim Jagielski

This looks like a big change, and my only concern is
that the behavior changes, although it appears that
we don't know why the current behavior is the
way it is...

Anyway:

On Feb 12, 2006, at 3:53 PM, Ruediger Pluem wrote:


The real problem is that we actually *close* our connection to the backend
after each request (see line 1512 of mod_proxy_http.c) and if it would have
survived we would *close* it on the reuse of this connection:



Again, this appears to be specifically done that way
for a reason, but I have no idea what it would
have been :)


@@ -1504,12 +1513,6 @@

 /* found the last brigade? */
 if (APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb))) {
-/* if this is the last brigade, cleanup the
- * backend connection first to prevent the
- * backend server from hanging around waiting
- * for a slow client to eat these bytes
- */
-backend->close = 1;
 /* signal that we must leave */
 finish = TRUE;
 }


This, I think provides a clue: I'm guessing we are trying
to optimize the client-Apache link, at the expense of
opening/closing sockets to the backend, or wasting
those sockets. If we had a nice connection pool, then
it would be different...


@@ -1667,26 +1659,15 @@
                      "proxy: HTTP: serving URL %s", url);

-    /* only use stored info for top-level pages. Sub requests don't share
-     * in keepalives
-     */
-    if (!r->main) {
-        backend = (proxy_conn_rec *) ap_get_module_config(c->conn_config,
-                                                          proxy_http_module);
-    }
     /* create space for state information */
     if (!backend) {
         if ((status = ap_proxy_acquire_connection(proxy_function, &backend,
                                                   worker, r->server)) != OK)
             goto cleanup;

-        if (!r->main) {
-            ap_set_module_config(c->conn_config, proxy_http_module, backend);
-        }
     }




Not sure why we would bother still having that !backend
check, since we know it's NULL. We set it to NULL :)
And this also seems to allude to the fact that the
present framework is to support pooled connections.
Not sure how the above would conflict with subrequests

Does the patched version pass the test framework?


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Jim Jagielski


On Feb 13, 2006, at 11:22 AM, Jim Jagielski wrote:


This, I think provides a clue: I'm guessing we are trying
to optimize the client-Apache link, at the expense of
opening/closing sockets to the backend, or wasting
those sockets. If we had a nice connection pool, then
it would be different...



Giving this some very quick thought, I wonder if
this is associated with that bug you and I squashed
awhile ago: there is no guarantee that the next
kept-alive connection will go to the same backend;
as such, keeping it open is wasteful and re-using
it is downright wrong. We need to look to make sure
that there are sufficient checks that this doesn't
happen, and that might be the reason the current
behavior exists...


Re: Getting Started on mod_python 3.3.

2006-02-13 Thread Jim Gallacher

Jorey Bump wrote:

Jim Gallacher wrote:


This is how I would set priorities:



Try and assign most of the issues to someone. This is a bit of PR 
spin, but I think it looks bad when there are a large number of open 
issues with no assignee. To the public it may look like the project is 
not being actively maintained.



I think the same can be said for the lack of Apache 2.2 support. 
Personally, I would put this (as well as backporting 2.2 support to 
mod_python 3.2) as the number one priority, for both PR and pragmatic 
reasons.


The need for compatibility with Apache 2.0 & 2.2 is going to be an issue 
for quite a while, and should be addressed before mod_python undergoes 
some of the significant changes that have been discussed.


Apache 2.2 support has already been checked into svn trunk. It's just a 
question of doing the backport to the 3.2.x branch once we've seen some 
testing. I think we should plan on doing regular 3.2.x bugfix releases 
so that the 3.3 dev branch can mature without the pressure making a 
release just to fix bugs.


Jim


Re: AW: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Joost de Heer

ProxyPassReverse / balancer://tiles_live_cluster/


This looks wrong, shouldn't this be http://reverse.proxy.host/ ?

Joost


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Brian Akins

Jim Jagielski wrote:


 there is no guarantee that the next
kept-alive connection will go to the same backend;
as such, keeping it open is wasteful and re-using
it is downright wrong. 


Why?  Why would we care which backend a request goes to, in general. 
And, do we not want to use keepalives as much as possible to the backends?


--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Jim Jagielski


On Feb 13, 2006, at 12:57 PM, Brian Akins wrote:


Jim Jagielski wrote:


 there is no guarantee that the next
kept-alive connection will go to the same backend;
as such, keeping it open is wasteful and re-using
it is downright wrong.


Why?  Why would we care which backend a request goes to, in  
general. And, do we not want to use keepalives as much as possible  
to the backends?




Let's assume that you have Apache setup as a proxy and furthermore
it's configured so that /html goes to foo1 and /images goes
to /foo2.

A request comes in for /html/index.htm, and gets proxied to
foo1, as it should; the connection is kept-alive, and a
request for /images/blarf.gif is requested; this should not
be sent to the just kept-alive server, but instead to
foo2...

So we need to ensure that this doesn't happen. We had a similar
type bug when picking workers out, which was patched in
349723...
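
(In configuration terms, the scenario Jim sketches corresponds to something like
the following; the backend names foo1 and foo2 come from his example, while the
URLs and paths are illustrative only.)

# /html is served by foo1, /images by foo2
ProxyPass        /html   http://foo1/html
ProxyPassReverse /html   http://foo1/html
ProxyPass        /images http://foo2/images
ProxyPassReverse /images http://foo2/images

A client connection that is kept alive after fetching /html/index.htm and then
requests /images/blarf.gif must therefore be routed to a different backend, which
is why the frontend keep-alive cannot simply be tied to one backend socket.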


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread William A. Rowe, Jr.

Ruediger Pluem wrote:

Currently I work on PR 38602 
(http://issues.apache.org/bugzilla/show_bug.cgi?id=38602).
First of all the reporter is correct that we do not send the
Connection: Keep-Alive
header on our HTTP/1.1 keep-alive connections to the backend.
But this is only the small part of the problem since 8.1.2 of the RFC says:

   A significant difference between HTTP/1.1 and earlier versions of
   HTTP is that persistent connections are the default behavior of any
   HTTP connection. That is, unless otherwise indicated, the client
   SHOULD assume that the server will maintain a persistent connection,


The real problem is that we've never paid attention to the backend server.
If speaking to a backend http/1.0 server, we can try connection: keep-alive
if the server pays attention to it.  That header is invalid for http/1.1
backends, and we should choose connection: close where appropriate.

To a backend http/1.0 server, connection: close is meaningless (and wrong).




Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Jim Jagielski


On Feb 13, 2006, at 1:28 PM, William A. Rowe, Jr. wrote:


Ruediger Pluem wrote:
Currently I work on PR 38602 (http://issues.apache.org/bugzilla/ 
show_bug.cgi?id=38602).

First of all the reporter is correct that we do not send the
Connection: Keep-Alive
header on our HTTP/1.1 keep-alive connections to the backend.
But this is only the small part of the problem since 8.1.2 of the  
RFC says:

   A significant difference between HTTP/1.1 and earlier versions of
   HTTP is that persistent connections are the default behavior of  
any

   HTTP connection. That is, unless otherwise indicated, the client
   SHOULD assume that the server will maintain a persistent  
connection,


The real problem is that we've never paid attention to the backend  
server.
If speaking to a backend http/1.0 server, we can try connection:  
keep-alive
if the server pays attention to it.  That header is invalid for  
http/1.1

backends, and we should choose connection: close where appropriate.

To a backend http/1.0 server, connection: close is meaningless (and  
wrong).





Some sort of proxy-connection pool would help here, as
that would be part of the meta-data. I was always wondering
how much of DBD could be re-used for this, but wanted to
wait until DBD was at a stable stage...


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Brian Akins

Jim Jagielski wrote:


Let's assume that you have Apache setup as a proxy and furthermore
it's configured so that /html goes to foo1 and /images goes
to /foo2.

A request comes in for /html/index.htm, and gets proxied to
foo1, as it should; the connection is kept-alive, and a
request for /images/blarf.gif is requested; this should not
be sent to the just kept-alive server, but instead to
foo2...


I see now.

Does this apply even when using balancer? I mean, do we break the keep 
alive to backend?  We should need to...



--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Jim Jagielski
Brian Akins wrote:
 
 Jim Jagielski wrote:
 
  Let's assume that you have Apache setup as a proxy and furthermore
  it's configured so that /html goes to foo1 and /images goes
  to /foo2.
  
  A request comes in for /html/index.htm, and gets proxied to
  foo1, as it should; the connection is kept-alive, and a
  request for /images/blarf.gif is requested; this should not
  be sent to the just kept-alive server, but instead to
  foo2...
 
 I see now.
 
 Does this apply even when using balancer? I mean, do we break the keep 
 alive to backend?  We should need to...
 

Yep, and that's why I think we close the connection each time;
we can't be assured that the next request will be for that
backend, so we have a held-open socket for who knows how long,
and without real connection pooling, we're wasting sockets.

-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
If you can dodge a wrench, you can dodge a ball.


Re: Cool feature from mod_perl : Configure Apache with Perl

2006-02-13 Thread Jim Gallacher

Nicolas Lehuen wrote:

Hi,

I'm currently reading the feature section from mod_perl. Initially, I
was trying to find information about how they cope with
multithreading, multiple interpreter instantiation and code reloading,
but I stumbled upon this :

http://perl.apache.org/start/tips/config.html

Now, I can't stand Perl, but this feature is quite cool, isn't it ?
Would it be difficult to implement, say, in mod_python 4.0 ?


Wow, that is cool! And yes, mp 4.0 is likely a pretty good time frame. :)

Jim
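
(For anyone who has not seen the mod_perl feature being referenced: a <Perl>
section embeds Perl directly in httpd.conf and exposes configuration directives
as Perl variables, so the configuration can be computed at server startup. A
minimal hedged sketch; the values are invented and only simple directives are
shown.)

<Perl>
    # plain directives map to Perl variables of the same name
    $ServerAdmin = 'webmaster@example.com';
    $MaxClients  = 50;
    # lists of modules to load can be built programmatically
    push @PerlModule, 'Apache2::Status';
</Perl>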


Re: Change in how to configure authorization

2006-02-13 Thread Brad Nicholes
 On 2/13/2006 at 8:39:41 am, in message
[EMAIL PROTECTED],
[EMAIL PROTECTED] wrote:
 On Mon, Feb 13, 2006 at 08:26:39AM -0700, Brad Nicholes wrote:
 Yes, we do need to make this change.  With the provider based
 rearchitecting of authentication in httpd 2.2, this left authorization
 in an unpredictable state especially when using multiple authorization
 types.  You were never quite sure which one was going to happen first
 and had no way to order them or control them.  With that, there was
 also a growing demand to be able to apply AND/OR logic to the way in
 which authorization is applied.  So basically this change brings
 authorization up to the same level of power and flexibility that
 currently exists in httpd 2.2 for authentication.  Hence being new
 functionality, there are bound to be bugs that need to be fixed,
 especially with backwards compatibility.  So let's get the bugs
 identified and fixed.
 
 Could you have a look at making the test suite pass again, to that end?
 
 I tried to port mod_authany (c-modules/authany/mod_authany.c) to the
 trunk authz API, but to no avail.  The tests which fail are:
 
 t/http11/basicauth..# Failed test 2 in t/http11/basicauth.t at
 line 24
 FAILED test 2
   Failed 1/3 tests, 66.67% okay
 t/security/CVE-2004-0811# Failed test 1 in
 t/security/CVE-2004-0811.t at line 14
 # Failed test 2 in t/security/CVE-2004-0811.t at line 14 fail #2
 # Failed test 3 in t/security/CVE-2004-0811.t at line 14 fail #3
 # Failed test 4 in t/security/CVE-2004-0811.t at line 14 fail #4
 FAILED tests 1-4
 
 joe

The problem that I see with mod_authany is that it is trying to
re-register the 'user' authorization provider.  All of the authorization
types must be unique.  So in this case, the provider should probably be
called 'any-user' or something like that.  Then, according to the code,
the whole thing looks a lot like 'valid-user'.  Is there a reason why
the test configuration doesn't just use 'valid-user'?

Brad


Re: mod_proxy buffering small chunks

2006-02-13 Thread Alan Gutierrez
* Plüm, Rüdiger, VIS [EMAIL PROTECTED] [2006-02-06 09:29]:
 
 
  -----Original Message-----
  From: Alan Gutierrez 
  
  The proposed solution is to poll for chunks using 
  non-blocking I/O. When the socket returns EAGAIN, the 8K 
  buffer is flushed, and the socket is read with blocking I/O.
 
 That's the way the code is already designed in 2.2.x.

I am aware of this, actually. I should have noted this myself.

I've traced through the code to see where the EAGAIN return value is
lost, and to note that the non-blocking mode is not applied to the
header. I've not tested, but I suspect that the logic is correctly
implemented should the socket block during the transfer of the body
of a chunk. It fails when the next chunk has not been sent.


  However, in the http code, regardless of the block mode flag, 
  the headers are read using blocking I/O. Furthermore, 
  somewhere deeper in the chain, when EAGAIN is detected the 
  success flag is returned instead. I'll provide more detail, 
  if this is not already well known.

 This is the problem. The EAGAIN or better the mode handling in this
 code path is somewhat broken.

Looks that way. The EAGAIN is never sent back out, and in addition
the headers are read with a blocking read. It would require changing
the return values of the code path, which may well cause some breakage.

Maybe not, if EAGAIN is handled correctly in all places, but it's
never been tested, since it's never returned, right?

  Looks like this is going to be messy to fix. Any patches?
 
 Not so far. I guess it requires some general changes that must
 be carefully designed. But anyway, patch proposals are welcome :-).

Can I suggest a flag? Something like

ProxyChunkedAsIs On

If this could be specified for a path or directory, then the
upstream chunks are sent immediately, no buffering whatsoever.
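
(To make the proposal concrete: the directive does not exist, it is only being
suggested here, but a per-path use might look like the following; the backend
URL is made up.)

<Location /ticker>
    ProxyPass http://backend.example.com/ticker
    # proposed: forward each upstream chunk immediately, without buffering
    ProxyChunkedAsIs On
</Location>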

This is a patch I've applied partially; I still need to add the flag. Adding
the behavior fixes my problem.

I'm proxying a stock ticker. The chunk size is always going to be
smaller than 8k. Even when the EAGAIN logic is implemented
correctly, buffering is liable to delay the receipt of the chunk.

 Actually, the thing you want works with mod_proxy_ajp in 2.2.x.

AJP? Confusing. Wish I could keep it HTTP.

--
Alan Gutierrez - 504 717 1428 - [EMAIL PROTECTED] - http://blogometer.com/


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Brian Akins

Jim Jagielski wrote:


Yep, and that's why I think we close the connection each time;


umm, I thought the balancer would try to keep the connection open to 
backends?  A single client may wind up talking to multiple backend pools 
over the course of a connection (/css -> A, /images -> B, etc.).



we're wasting sockets.



we'd be saving start up time on each socket.

So just to be clear, there is no connection pooling in proxy_balancer, 
or is there?  Did I imagine that it was supposed to be?



--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Ruediger Pluem


On 02/13/2006 07:28 PM, William A. Rowe, Jr. wrote:
 Ruediger Pluem wrote:
 
  The real problem is that we've never paid attention to the backend server.
 If speaking to a backend http/1.0 server, we can try connection: keep-alive
 if the server pays attention to it.  That header is invalid for http/1.1

Out of interest, which part of which rfc (2616 / 2068) says that connection:
keep-alive is invalid for http/1.1?
I see this header sent frequently by browsers that speak http/1.1 (of course
this does not mean that this is rfc compliant :-)

 backends, and we should choose connection: close where appropriate.
 
 To a backend http/1.0 server, connection: close is meaningless (and wrong).

Agreed. But I actually only do this when I send an HTTP/1.1 request.
BTW: This is how the code currently works and this is not changed by my patch.

Another question: Do you see the need for some kind of detection code on the first
request to a backend server to determine whether it speaks HTTP/1.1 or only HTTP/1.0?

Regards

Rüdiger




Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Ruediger Pluem


On 02/13/2006 09:12 PM, Jim Jagielski wrote:
 Brian Akins wrote:
 
Jim Jagielski wrote:


Let's assume that you have Apache setup as a proxy and furthermore
it's configured so that /html goes to foo1 and /images goes
to /foo2.

A request comes in for /html/index.htm, and gets proxied to
foo1, as it should; the connection is kept-alive, and a
request for /images/blarf.gif is requested; this should not
be sent to the just kept-alive server, but instead to
foo2...

I see now.

Does this apply even when using balancer? I mean, do we break the keep 
alive to backend?  We should need to...

 
 
 Yep, and that's why I think we close the connection each time;
 we can't be assured that the next request will be for that
 backend, so we have a held-open socket for who knows how long,
 and without real connection pooling, we're wasting sockets.

What do you mean by real connection pooling? We actually have connection pooling
via the apr reslist. The patch ensures that we return this connection to the pool
such that it can be used by other clients that use this worker.

Furthermore we have to keep the following two things in mind:

1. Closing the connection in proxy_util.c is badness as other protocols (e.g. ajp)
   are designed to have very long living connections to the backend. If we keep on
   closing this, it would mean that mod_proxy_ajp would be no real alternative to
   mod_jk.

2. To be honest my view on the proxy code might be too reverse proxy driven, but on
   a busy reverse proxy the connection will be quickly reused by another client, so
   it makes perfect sense to me to keep it open. One critical thing I currently see
   on this path is a race condition with the keep-alive timeout timer on the backend.
   The backend could close the connection after we checked it in
   ap_proxy_connect_backend and before we send the request in ap_proxy_http_request.
   That might also be an argument to send a Connection: Keep-Alive header and to
   evaluate the timeout token of a Keep-Alive header possibly returned by the
   backend. E.g. we could re-establish the connection once we know that there is
   less than or equal to one second left before the timeout happens on the backend.


Regards

Rüdiger




Re: AW: Support for ProxyPreserveHost with mod_proxy_balancer?

2006-02-13 Thread Ruediger Pluem


On 02/13/2006 06:31 PM, Joost de Heer wrote:
 ProxyPassReverse / balancer://tiles_live_cluster/
 
 
 This looks wrong, shouldn't this be http://reverse.proxy.host/ ?

Yes, this also looks wrong to me. I think he needs a separate ProxyPassReverse
line for *each* of the backend servers he configured in the cluster.
The balancer://tiles_live_cluster/ is not used any longer once an appropriate
worker has been chosen from the cluster.

Regards

Rüdiger
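
(Applied to the posted configuration, that suggestion would look roughly like the
sketch below, i.e. one ProxyPassReverse per balancer member instead of the single
balancer:// line; only the first few members are shown.)

ProxyPassReverse / http://192.168.1.54:8080/
ProxyPassReverse / http://192.168.1.55:8080/
ProxyPassReverse / http://192.168.1.56:8080/
# ... and so on, one line per BalancerMember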


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Jim Jagielski
Brian Akins wrote:
 
 Jim Jagielski wrote:
 
  Yep, and that's why I think we close the connection each time;
 
 umm, I thought the balancer would try to keep the connection open to 
 backends?  A single client may wind up talking to multiple backend pools 
 over the course of a connection (/css -> A, /images -> B, etc.).
 
  we're wasting sockets.
  
 
 we'd be saving start up time on each socket.
 
 So just to be clear, there is no connection pooling in proxy_balancer, 
 or is there?  Did I imagine that it was supposed to be?
 

It's *kind* of there, but not really fully fleshed out... I think of
a connection pool as one shared among all entities and with
longer longevity than what we currently do (tuck them away
in proxy_conn_rec).

-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
If you can dodge a wrench, you can dodge a ball.


Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Jim Jagielski
Ruediger Pluem wrote:
 
 What do you mean by real connection pooling? We actually have connection 
 pooling
 via the apr reslist. The patch ensures that we return this connection to the 
 pool
 such that it can be used by other clients that use this worker.
 

That's what I mean by real... it's not really being used as
real connection pools would be. :)

 Furthermore we have to keep the following two things in mind:
 
 1. Closing the connection in proxy_util.c is badness as other protocols (e.g. ajp)
    are designed to have very long living connections to the backend. If we keep on
    closing this, it would mean that mod_proxy_ajp would be no real alternative to
    mod_jk.
 

+1

-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
If you can dodge a wrench, you can dodge a ball.


Re: Troubles debugging module

2006-02-13 Thread Matty


On Thu, 9 Feb 2006, Maxime Petazzoni wrote:


Hi,

* Miroslav Maiksnar [EMAIL PROTECTED] [2006-02-09 07:05:48]:


I'm trying to debug mod_spnego in gdb, but it looks like apache2 does not load debug
info for the module (I can't see the source). What is the proper way to debug apache
modules using a debugger?


I typically add --enable-maintainer-mode to the configure line to enable 
debugging symbols.


Hope this helps,
- Ryan
--
UNIX Administrator
http://daemons.net/~matty


Re: Cool feature from mod_perl : Configure Apache with Perl

2006-02-13 Thread Graham Dumpleton
Nicolas Lehuen wrote ..
 Hi,
 
 I'm currently reading the feature section from mod_perl. Initially, I
 was trying to find information about how they cope with
 multithreading, multiple interpreter instantiation and code reloading,
 but I stumbled upon this :
 
 http://perl.apache.org/start/tips/config.html
 
 Now, I can't stand Perl, but this feature is quite cool, isn't it ?
 Would it be difficult to implement, say, in mod_python 4.0 ?

I had seen that feature as well. I also thought it would be interesting to
look at it but also figured it would be much later rather than sooner. As
you can see from all the JIRA reports lately I think there is enough work
to do to fill out even the current stuff with little bits that are missing and
making mod_python work properly in the Apache way.

In general I think there are probably a lot of ideas we can get from looking
at mod_perl. BTW, if you haven't stumbled across it already, someone
has a full copy of Writing Apache Modules with Perl and C online at:

  http://162.105.203.19/apache-doc/1.htm

I'm probably still going to go and buy the dead tree version today if I
can get the time. Easier to flick through then.

Anyway, there is a really good description of the basic Apache life cycle
for requests in there which even people only using mod_python may
find worthwhile to read at some point to help understand the different
request phases. Pity that mod_python doesn't quite get it right.

Graham


shutdown and linux poll()

2006-02-13 Thread Chris Darroch
Hi --

   This may be an old topic of conversation, in which case I apologize.
I Googled and searched marc.theaimslist.com and Apache Bugzilla but
didn't see anything, so here I am with a question.

   In brief, on Linux, when doing an ungraceful stop of httpd, any
 worker threads that are poll()ing on Keep-Alive connections don't get
awoken by close_worker_sockets() and that can lead to the process
getting the SIGKILL signal without ever getting the chance to run
apr_pool_destroy(pchild) in clean_child_exit().  This seems to
relate to this particular choice by the Linux and/or glibc folks:

http://bugme.osdl.org/show_bug.cgi?id=546


   The backstory goes like this: I spent a chunk of last week trying
to figure out why my module wasn't shutting down properly.  First I
found some places in my code where I'd failed to anticipate the order
in which memory pool cleanup functions would be called, especially
those registered by apr_thread_cond_create().

   However, after fixing that, I found that when connections were still
in the 15 second timeout for Keep-Alives, a child process could get the
SIGKILL before it finished cleaning up.  (I'm using httpd 2.2.0 with the
worker MPM on Linux 2.6.9 [RHEL 4] with APR 1.2.2.)  The worker threads
are poll()ing and, if I'm reading my strace files correctly, they don't
get an EBADF until after the timeout completes.  That means that
join_workers() is waiting for those threads to exit, so child_main()
can't finish up and call clean_child_exit() and thus apr_pool_destroy()
on the pchild memory pool.

   This is a bit of a problem for me because I really need
join_workers() to finish up and the cleanups I've registered
against pchild in my module's child_init handler to be run if
at all possible.

   It was while researching all this that I stumbled on the amazing
new graceful-stop feature and submitted #38621, which I see has
already been merged ... thank you!

   However, if I need to do an ungraceful stop of the server --
either manually or because the GracefulShutdownTimeout has
expired without a chance to gracefully stop -- I'd still like my
cleanups to run.


   My solution at the moment is a pure hack -- I threw in
apr_sleep(apr_time_from_sec(15)) right before
ap_reclaim_child_processes(1) in ap_mpm_run() in worker.c.
That way it lets all the Keep-Alive timeouts expire before
applying the SIGTERM/SIGKILL hammer.  But that doesn't seem
ideal, and moreover, doesn't take into account the fact that
KeepAliveTimeouts > 15 seconds may have been assigned.  Even
if I expand my hack to wait for the maximum possible Keep-Alive
timeout, it's still clearly a hack.


   Does anyone have any advice?  Does this seem like a problem
to be addressed?  I tried to think through how one could signal
the poll()ing worker threads with pthread_kill(), but it seems
to me that not only would you have to have a signal handler
in the worker threads (not hard), you'd somehow have to break
out of whatever APR wrappers are abstracting the poll() once
the handler set its flag or whatever and returned -- the APR
functions can't just loop on EINTR anymore.  (Is it
socket_bucket_read() in the socket bucket code and then
apr_socket_recv()?  I can't quite tell yet.)  Anyway, it seemed
complex and likely to break the abstraction across OSes.

   Still, I imagine I'm not the only one who would really like
those worker threads to cleanly exit so everything else does ...
after all, they're not doing anything critical, just waiting
for the Keep-Alive timeout to expire, after which they notice
their socket is borked and exit.

   FWIW, I tested httpd 2.2.0 with the worker MPM on a Solaris
2.9 box and it does indeed do what the Linux bug report says;
poll() returns immediately if another thread closes the socket
and thus the whole httpd server exits right away.

   Thoughts, advice?  Any comments appreciated.

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B



Re: shutdown and linux poll()

2006-02-13 Thread Paul Querna

To clarify, are you sure it's not using EPoll instead of Poll?


Chris Darroch wrote:

Hi --

   This may be an old topic of conversation, in which case I apologize.
I Googled and searched marc.theaimslist.com and Apache Bugzilla but
didn't see anything, so here I am with a question.

   In brief, on Linux, when doing an ungraceful stop of httpd, any
 worker threads that are poll()ing on Keep-Alive connections don't get
awoken by close_worker_sockets() and that can lead to the process
getting the SIGKILL signal without ever getting the chance to run
apr_pool_destroy(pchild) in clean_child_exit().  This seems to
relate to this particular choice by the Linux and/or glibc folks:

http://bugme.osdl.org/show_bug.cgi?id=546


   The backstory goes like this: I spent a chunk of last week trying
to figure out why my module wasn't shutting down properly.  First I
found some places in my code where I'd failed to anticipate the order
in which memory pool cleanup functions would be called, especially
those registered by apr_thread_cond_create().

   However, after fixing that, I found that when connections were still
in the 15 second timeout for Keep-Alives, a child process could get the
SIGKILL before it finished cleaning up.  (I'm using httpd 2.2.0 with the
worker MPM on Linux 2.6.9 [RHEL 4] with APR 1.2.2.)  The worker threads
are poll()ing and, if I'm reading my strace files correctly, they don't
get an EBADF until after the timeout completes.  That means that
join_workers() is waiting for those threads to exit, so child_main()
can't finish up and call clean_child_exit() and thus apr_pool_destroy()
on the pchild memory pool.

   This is a bit of a problem for me because I really need
join_workers() to finish up and the cleanups I've registered
against pchild in my module's child_init handler to be run if
at all possible.

   It was while researching all this that I stumbled on the amazing
new graceful-stop feature and submitted #38621, which I see has
already been merged ... thank you!

   However, if I need to do an ungraceful stop of the server --
either manually or because the GracefulShutdownTimeout has
expired without a chance to gracefully stop -- I'd still like my
cleanups to run.


   My solution at the moment is a pure hack -- I threw in
apr_sleep(apr_time_from_sec(15)) right before
ap_reclaim_child_processes(1) in ap_mpm_run() in worker.c.
That way it lets all the Keep-Alive timeouts expire before
applying the SIGTERM/SIGKILL hammer.  But that doesn't seem
ideal, and moreover, doesn't take into account the fact that
KeepAliveTimeouts > 15 seconds may have been assigned.  Even
if I expand my hack to wait for the maximum possible Keep-Alive
timeout, it's still clearly a hack.


   Does anyone have any advice?  Does this seem like a problem
to be addressed?  I tried to think through how one could signal
the poll()ing worker threads with pthread_kill(), but it seems
to me that not only would you have to have a signal handler
in the worker threads (not hard), you'd somehow have to break
out of whatever APR wrappers are abstracting the poll() once
the handler set its flag or whatever and returned -- the APR
functions can't just loop on EINTR anymore.  (Is it
socket_bucket_read() in the socket bucket code and then
apr_socket_recv()?  I can't quite tell yet.)  Anyway, it seemed
complex and likely to break the abstraction across OSes.

   Still, I imagine I'm not the only one who would really like
those worker threads to cleanly exit so everything else does ...
after all, they're not doing anything critical, just waiting
for the Keep-Alive timeout to expire, after which they notice
their socket is borked and exit.

   FWIW, I tested httpd 2.2.0 with the worker MPM on a Solaris
2.9 box and it does indeed do what the Linux bug report says;
poll() returns immediately if another thread closes the socket
and thus the whole httpd server exits right away.

   Thoughts, advice?  Any comments appreciated.

Chris.





Re: Change in how to configure authorization

2006-02-13 Thread Brad Nicholes
 On 2/13/2006 at 8:39:41 am, in message
[EMAIL PROTECTED],
[EMAIL PROTECTED] wrote:
 On Mon, Feb 13, 2006 at 08:26:39AM -0700, Brad Nicholes wrote:
 Yes, we do need to make this change.  With the provider based
 rearchitecting of authentication in httpd 2.2, this left authorization
 in an unpredictable state especially when using multiple authorization
 types.  You were never quite sure which one was going to happen first
 and had no way to order them or control them.  With that, there was
 also a growing demand to be able to apply AND/OR logic to the way in
 which authorization is applied.  So basically this change brings
 authorization up to the same level of power and flexibility that
 currently exists in httpd 2.2 for authentication.  Hence being new
 functionality, there are bound to be bugs that need to be fixed,
 especially with backwards compatibility.  So let's get the bugs
 identified and fixed.
 
 Could you have a look at making the test suite pass again, to that end?
 
 I tried to port mod_authany (c-modules/authany/mod_authany.c) to the
 trunk authz API, but to no avail.  The tests which fail are:
 
 t/http11/basicauth..# Failed test 2 in t/http11/basicauth.t at
 line 24
 FAILED test 2
   Failed 1/3 tests, 66.67% okay
 t/security/CVE-2004-0811# Failed test 1 in
 t/security/CVE-2004-0811.t at line 14
 # Failed test 2 in t/security/CVE-2004-0811.t at line 14 fail #2
 # Failed test 3 in t/security/CVE-2004-0811.t at line 14 fail #3
 # Failed test 4 in t/security/CVE-2004-0811.t at line 14 fail #4
 FAILED tests 1-4
 
 joe

The other problem that I see in the configuration is that the <Location
/authany> block defines an AuthType and AuthName but no authentication
provider.  This means that the authentication provider will default to
'file'.  But since there hasn't been a password file specified either,
the result is an AUTH_GENERAL_ERROR.  This scenario would occur with
either 2.2 or trunk.

Brad


Re: POST_MAX

2006-02-13 Thread Jonathan Vanasco


Anyone?  This is killing me.

The only thing I've been able to figure out is this:

my  $error = $apacheRequest->body_status();
if ( $error eq 'Exceeds configured maximum limit' )
{
$self->RESULT_FINAL__general( undef , 'Your file is too big.' );
return;
}


matching against an error text message is pretty bad, IMO, and makes  
me worry what happens if someone changes the error message or if  
there are multiple errors.





On Feb 10, 2006, at 10:50 AM, Jonathan wrote:


how do i catch the error when POST_MAX has been passed?

ie-

sub handler
{
my  $r   = shift;
	my 	$apr = Apache2::Request->new( $r , DISABLE_UPLOADS => 0 ,
POST_MAX => 100_000 );

}

and someone tries to upload something > 200_000 ?

it's unclear in the docs
	http://httpd.apache.org/apreq/docs/libapreq2/group__apreq__xs__request.html


---
* POST_MAX, MAX_BODY
Limit the size of POST data (in bytes).
---

i'd love to know how to catch this, and even more for someone to  
update the docs with that info :)






Re: svn commit: r377053 - /httpd/httpd/trunk/modules/proxy/mod_proxy_http.c

2006-02-13 Thread Ruediger Pluem


On 02/13/2006 04:37 PM, Joe Orton wrote:
 On Sat, Feb 11, 2006 at 08:57:14PM -, [EMAIL PROTECTED] wrote:
 
 
 
 This change (I think) is triggering the bad pool ancestry abort() in the 
 tables code: the proxy tests in the test suite are all dumping core in 
 APR_POOL_DEBUG builds since yesterday.  Here's a sample backtrace:

Thanks for spotting this. You are correct: I used the wrong pool.
p is actually the connection pool whereas the key / value pairs in r->headers_in
get created from r->pool which lives shorter than the connection pool.
Hopefully fixed by r377525.
Could you please give it a try again?

Regards

Rüdiger



Re: [Patch] Keep Alive not working with mod_proxy (PR38602)

2006-02-13 Thread Ruediger Pluem


On 02/13/2006 05:22 PM, Jim Jagielski wrote:
 This looks like a big change, and my only concern is

This is why I discuss it first, before I commit it :-)

 that the behavior changes, although it appears that
 we don't know why the current behavior is the
 way it is...

Then we should either find out or adjust it to the behaviour
that we think is correct as the current behaviour doesn't seem to be.


 
 Not sure why we would bother still having that !backend
 check, since we know it's NULL. We set it to NULL :)

Well spotted :-). I missed that. I'll keep this in mind and will add
it to the patch once we have discussed and cleared the real hard stuff.

 And this also seems to allude to the fact that the
 present framework is to support pooled connections.
 Not sure how the above would conflict with subrequests

Good question. Does anybody remember why the old code insisted on having a
fresh connection by all means for a subrequest?

 
 Does the patched version pass the test framework?

Have not checked so far. I did not manage to get the test framework running on
my box so far. Can someone who has it running give it a try? That would be
very nice.

Regards

Rüdiger


Re: POST_MAX

2006-02-13 Thread Joe Schaefer
Jonathan Vanasco [EMAIL PROTECTED] writes:

 Anyone?  This is killing me.

 The only thing I've been able to figure out is this:

   my  $error = $apacheRequest->body_status();
   if ( $error eq 'Exceeds configured maximum limit' )

That looks ok to me for now; but what we really need to do 
is export the constant in Error.xs so you can write 

   if ($error == APR::Request::Error::OVERLIMIT)

-- 
Joe Schaefer



Re: shutdown and linux poll()

2006-02-13 Thread Chris Darroch
Paul:

This may be an old topic of conversation, in which case I apologize.
 I Googled and searched marc.theaimslist.com and Apache Bugzilla but
 didn't see anything, so here I am with a question.

In brief, on Linux, when doing an ungraceful stop of httpd, any
  worker threads that are poll()ing on Keep-Alive connections don't get
 awoken by close_worker_sockets() and that can lead to the process
 getting the SIGKILL signal without ever getting the chance to run
 apr_pool_destroy(pchild) in clean_child_exit().  This seems to
 relate to this particular choice by the Linux and/or glibc folks:

 http://bugme.osdl.org/show_bug.cgi?id=546

 To clarify, are you sure it's not using EPoll instead of Poll?

   Well, I'll probe more deeply tomorrow, and while I'm no expert
on this stuff, I don't think so.  Here are the last two lines from
an strace on one of the worker threads:

21:39:30.955670 poll([{fd=13, events=POLLIN, revents=POLLNVAL}], 1,
15000) = 1
21:39:42.257615 +++ killed by SIGKILL +++

   That's the poll() on descriptor 13, for 15 keep-alive seconds,
during which the main process decides to do the SIGKILL.  Here,
I think, is the accept() that opens fd 13:

21:38:51.017764 accept(3, {sa_family=AF_INET, sin_port=htons(63612),
  sin_addr=inet_addr(xxx.xxx.xxx.xxx)}, [16]) = 13

and while I do see some epoll stuff, it's on another descriptor:

21:38:43.012242 epoll_create(1) = 12

   Now, the caveat here is that I'm learning as I go; sockets
are not really my strong point.  But it's fairly easy to reproduce
this behaviour with a stock Apache 2.0 or 2.2 on a RedHat system;
I've tried both.  I can certainly provide more details if requested;
let me know!  Thanks,

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B



Re: Refactoring of the test suite

2006-02-13 Thread Jim Gallacher

Mike Looijmans wrote:

Oh and if we are refactoring the tests, I want a make tests rule. I'm
 tired of doing: ./configure; make; sudo make install; make tests; DOH!
cd test; python test.py. :)



Make that make check (like autotools), to not confuse old-skool 
autoconfers like myself.


That works for me.

Jim




Re: Getting Started on mod_python 3.3.

2006-02-13 Thread Jim Gallacher

Graham Dumpleton wrote:

As Jim pointed out a while back, we need to get going on mod_python 3.3
before I fill up JIRA with another page of bug reports or suggestions.


I think you already *have* filled another page since I made that comment. ;)


That said, how do we want to proceed on this? Do we want to draw up an
initial list of things to do with priorities, discuss them to make sure
all are okay with the fix or what is proposed, possibly assign them to
individuals, and then get started? Or even before that, do we want to
state general aims about what we want to address in mod_python 3.3?
Do we want to focus only on addressing bugs again, or look at some new
features as well?


This is how I would set priorities:

Mark resolved bugs in JIRA as closed, just to clean things up.

Try and assign most of the issues to someone. This is a bit of PR spin, 
but I think it looks bad when there are a large number of open issues 
with no assignee. To the public it may look like the project is not 
being actively maintained.


Fix the easy bugs first so we can backport to 3.2, and be ready for a 
bugfix release. This does not include the various importer issues. I'd 
say that there are not many things in this category, but we should 
review JIRA to be sure.


Refactor the unit tests. If we are going to do this, we should do it 
early in the development cycle so we have lots of time to test the test 
suite.


Review JIRA and collect related issues.

Improve documentation.

New features.

Grand Unified Import Theory.

The order does not necessarily suggest the importance of the various 
issues, and of course we can work in parallel on the last 3 items.



By then I might have got my SVN access sorted out. Have account, but
haven't as yet tried a check out using it.

BTW, I still don't have privileges in JIRA to administer entries, i.e.,
assign etc. Do I need/want that? How do I get that set up?


It's handy to be able to assign, close and resolve issues, so I would 
say yes. I believe Grisha can change your privileges.


Jim



ANNOUNCE: Mod_python 3.2.7

2006-02-13 Thread Gregory (Grisha) Trubetskoy


The Apache Software Foundation and The Apache HTTP Server Project are
pleased to announce the 3.2.7 release of mod_python. Mod_python 3.2.7
is considered a stable release, suitable for production use.

Mod_python is an Apache HTTP Server module that embeds the Python
language interpreter within the server. With mod_python you can write
web-based applications in Python that will run many times faster than
traditional CGI and will have access to advanced features such as
ability to maintain objects between requests, access to httpd
internals, content filters and connection handlers.

The 3.2.7 release has many new features, feature enhancements, fixed
bugs and other improvements over the previous version. See Appendix A
of mod_python documentation for more details.

Mod_python 3.2.7 is released under Apache License version 2.0.

Mod_python 3.2.7 is available for download from:

http://httpd.apache.org/modules/python-download.cgi

More information about mod_python is available at:

http://httpd.apache.org/modules/

Many thanks to Jim Gallacher, Graham Dumpleton, Nicolas Lehuen and
everyone else who contributed to and helped test this release; without
your help it would not have been possible!

Regards,

Grisha Trubetskoy


Cool feature from mod_perl : Configure Apache with Perl

2006-02-13 Thread Nicolas Lehuen
Hi,

I'm currently reading the feature section from mod_perl. Initially, I
was trying to find information about how they cope with
multithreading, multiple interpreter instantiation and code reloading,
but I stumbled upon this :

http://perl.apache.org/start/tips/config.html

Now, I can't stand Perl, but this feature is quite cool, isn't it?
Would it be difficult to implement, say, in mod_python 4.0?
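
Purely as an illustration of the idea (mod_python has no such hook today):
the mod_perl feature lets a <Perl> section in httpd.conf run code at server
startup that emits ordinary configuration directives. A rough Python
analogue, with made-up names, might generate directives programmatically:

# Hypothetical sketch only; mod_python provides no equivalent directive.
# The idea, borrowed from mod_perl's <Perl> sections, is that code runs at
# startup and produces configuration text.
def virtual_hosts(names, port=80):
    blocks = []
    for name in names:
        blocks.append("<VirtualHost *:%d>\n"
                      "    ServerName %s\n"
                      "</VirtualHost>" % (port, name))
    return "\n".join(blocks)

print virtual_hosts(["www.example.com", "dev.example.com"])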

Regards,
Nicolas


For the curious : how mod_perl handles threading

2006-02-13 Thread Nicolas Lehuen
http://perl.apache.org/docs/2.0/user/intro/overview.html#Threads_Support

Regards,
Nicolas


[jira] Commented: (MODPYTHON-29) mod_python.publisher and inbound content_type

2006-02-13 Thread Graham Dumpleton (JIRA)
[ http://issues.apache.org/jira/browse/MODPYTHON-29?page=comments#action_12366267 ]

Graham Dumpleton commented on MODPYTHON-29:
---

Whether one allows this comes down to a policy decision about what the
purpose of mod_python.publisher is. If it is meant as a more generalised
dispatcher from URLs to published functions, then it should probably allow
inbound content types for POST other than those appropriate to forms. If one
allows that, though, it opens up a whole can of worms as to whether it
should still restrict the method type to HEAD, GET and POST as it does now.
Ie., should it start with:

def handler(req):

    req.allow_methods(["GET", "POST", "HEAD"])
    if req.method not in ["GET", "POST", "HEAD"]:
        raise apache.SERVER_RETURN, apache.HTTP_METHOD_NOT_ALLOWED

Making it more generalised may be more trouble than it's worth, so it may well
be best to constrain the purpose of mod_python.publisher and say that if you
want to do anything else, such as handle other inbound content types or other
method types, you do it using some other mechanism.

 mod_python.publisher and inbound content_type
 -

  Key: MODPYTHON-29
  URL: http://issues.apache.org/jira/browse/MODPYTHON-29
  Project: mod_python
 Type: Improvement
   Components: publisher
 Versions: 3.1.4
 Reporter: Graham Dumpleton
 Assignee: Nicolas Lehuen


 The mod_python.publisher implementation always decodes form parameters
 regardless of whether the method being called can accept any and also ignores
 whether the content type is even of a type where form parameters can be
 decoded in the first place. This means that it is not possible using the
 mod_python.publisher module to implement a method which accepts POST
 requests with an inbound content type such as text/xml. Attempting to do
 so yields an error such as:
 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
 <html><head>
 <title>501 Method Not Implemented</title>
 </head><body>
 <h1>Method Not Implemented</h1>
 <p>POST to /xmlrpc/service not supported.<br />
 </p>
 <hr />
 <address>Apache/2.0.51 (Unix) mod_python/3.1.4 Python/2.3 Server at localhost
 Port 8080</address>
 </body></html>
 The Method Not Implemented error in this case is due to FieldStorage
 rejecting the content type of text/xml.
 A check could be added to ensure that FieldStorage is only used to decode
 form parameters if the content type is appropriate. Ie.,
 if not req.headers_in.has_key('content-type'):
     content_type = 'application/x-www-form-urlencoded'
 else:
     content_type = req.headers_in['content-type']
 if content_type == 'application/x-www-form-urlencoded' or \
    content_type[:10] == 'multipart/':
     req.form = util.FieldStorage(req, keep_blank_values=1)
 Because req.form is passed to util.apply_fs_data(), the code where it is
 called would need to be changed to pass None instead, and util.apply_fs_data()
 would need to be modified to detect that the argument is None and skip the
 field conversion step altogether.
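
A rough, purely illustrative sketch of that second change (not the actual
patch, and simplified from whatever util.apply_fs_data() really does): the
None case could short-circuit the form-field expansion.

def apply_fs_data(object, fs, **args):
    # Sketch only: when no form was decoded, call the published object
    # with just the arguments supplied by the caller.
    if fs is None:
        return object(**args)
    # ... existing behaviour: expand the decoded form fields into keyword
    #     arguments and then call object(**args) as before ...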

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Getting Started on mod_python 3.3.

2006-02-13 Thread Graham Dumpleton
Jim Gallacher wrote ..
 Jorey Bump wrote:
  Jim Gallacher wrote:
  
  This is how I would set priorities:
  
  
  Try and assign most of the issues to someone. This is a bit of PR 
  spin, but I think it looks bad when there are a large number of open
  issues with no assignee. To the public it may look like the project is
  not being actively maintained.
  
  
  I think the same can be said for the lack of Apache 2.2 support. 
  Personally, I would put this (as well as backporting 2.2 support to 
  mod_python 3.2) as the number one priority, for both PR and pragmatic
  reasons.
  
  The need for compatibility with Apache 2.0  2.2 is going to be an issue
  for quite a while, and should be addressed before mod_python undergoes
  some of the significant changes that have been discussed.
 
 Apache 2.2 support has already been checked into svn trunk. It's just a
 question of doing the backport to the 3.2.x branch once we've seen some
 testing. I think we should plan on doing regular 3.2.x bugfix releases
 so that the 3.3 dev branch can mature without the pressure of making a
 release just to fix bugs.

If we want to go down the path of having interim 3.2 bug rollup releases
while 3.3 is being developed, might I suggest that we target the following
for such a release in the near future.

MODPYTHON-77

  The Simplified GIL Acquisition patches.

MODPYTHON-78

  Apache 2.2 patches.

MODPYTHON-94

  Support for optional mod_ssl functions on request object.

MODPYTHON-113

  Make PythonImport use apache.import_module() via CallBack method.

MODPYTHON-119

  DBM Session test patches.

MODPYTHON-122

  Bash 3.1.X configure patches.

I know that MODPYTHON-94 isn't a bug fix, but a few people have been after
this one. Also, MODPYTHON-113 may not seem important, but it will mean
that any test package I make available for the new importer will work properly
in all cases where module imports occur.

Anyway, after trolling through JIRA, these seemed to be the important ones
to me, but others might have other suggestions.

Now, the question is how we manage this. Do we concentrate on these only
in the trunk and get them out of the way first as a 3.2.X release, holding back
any changes to the test framework? Or do we merge such changes from trunk on
a case-by-case basis into the 3.2.X branch?

Graham


Re: For the curious : how mod_perl handles threading

2006-02-13 Thread Nicolas Lehuen
It seems that by default Perl is not thread safe, and that they have
to jump through all those hoops to ensure thread safety. There is no
real lesson for mod_python, I just wanted to know how they solved this
rather difficult problem.

Not instantiating one interpreter per name per thread, and instead drawing
from an interpreter pool, may be an interesting optimisation, though. But I
guess it's a rather complicated one.
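
Purely for illustration of the general shape being described (nothing like
mod_python's actual C-level interpreter handling, and the class name is made
up), such a pool would let a request check an instance out rather than
binding one to every thread:

import Queue

# Illustrative sketch only: a fixed-size pool of pre-built instances that a
# request acquires and later releases, instead of one instance per thread.
class InterpreterPool:
    def __init__(self, factory, size=4):
        self._pool = Queue.Queue(size)
        for i in range(size):
            self._pool.put(factory())

    def acquire(self):
        return self._pool.get()      # blocks until an instance is free

    def release(self, instance):
        self._pool.put(instance)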

Regards,
Nicolas

2006/2/13, Gregory (Grisha) Trubetskoy [EMAIL PROTECTED]:

 On Mon, 13 Feb 2006, Nicolas Lehuen wrote:

  http://perl.apache.org/docs/2.0/user/intro/overview.html#Threads_Support

 Which part of it - the pool of interpreters? Are they doing it out of
 necessity, i.e. there is no way to run multiple threads in Perl like we do
 in Python because of Python's GIL?

 Grisha