Re: store_headers in memcache and diskcache

2008-02-05 Thread Garrett Rooney
On Feb 5, 2008 1:45 PM, Dirk-Willem van Gulik [EMAIL PROTECTED] wrote:
 Caching experts -- why do memcache and diskcache have seemingly quite
 different caching strategies when it comes to storing the headers?
 E.g. the cache_object_t * is populated with the status/date/etc data
 in memcache - but not in disk-cache. Is this work in progress or
 subtle design?

 I am trying to understand (got  a working mod_memcached_cache.c* --
 and cannot quite get the right VARY behaviour).

If I had to guess I'd say it's because people have actually been
working on disk cache, while mem cache has been largely ignored for a
while.

-garrett


Re: Testing frameworks [was: mod_atom]

2007-06-27 Thread Garrett Rooney

On 6/27/07, Issac Goldstand [EMAIL PROTECTED] wrote:

Paul, do you know offhand what the difference is between the
perl-framework, and perl.apache.org's Apache::Test framework?  I'm
familiar with the latter, and have found it to be an amazing tool for
testing Apache modules written in all languages (and web applications of
any sort running on Apache), but don't have any real familiarity with
the former...


They're the same thing.

-garrett


Re: [PATCH] mod_wombat separate out connection and server

2007-05-17 Thread Garrett Rooney

On 5/17/07, Akins, Brian [EMAIL PROTECTED] wrote:

Had a couple hours while on vacation after reading PiL.  This makes
connection, server, and apr_table into real lua modules.  I also separated
out the code and started playing with getters and setters.

I like the idea of doing the function tables in plain C, rather than Lua
C.  Mostly because I understand it more :)  Also, it may be faster as it
avoids using a bunch of Lua tables to keep track of callbacks and functions.
Would be interesting to have a performance bake-off.

Haven't found a good way to rework request.c into this arrangement, but I
ran out of time.


I'm not a fan of the way the pools and hash tables are lazily
initialized, as it isn't thread safe and one of the nice things about
mod_wombat is its thread safety.  Perhaps something that's initialized
during server startup instead?

Also the new files all need license headers.

-garrett


Re: Question about httpd / APR version relationship

2007-05-10 Thread Garrett Rooney

On 5/9/07, Guenter Knauf [EMAIL PROTECTED] wrote:

Hi all,
currently from what I see we use:

Apache 2.0.x - has to use APR 0.9.x
Apache 2.2.x - has to use APR 1.2.x
Apache 2.3.x - has to use APR 1.3.x

is this now a mandatory relationship, or is it valid to:

build Apache 2.2.x with APR 1.3.x


This would likely work, but I wouldn't recommend it for official
builds.  You wouldn't want module authors to start depending on new
functionality in APR 1.3.x when most versions of Apache 2.2.x don't
have that.


build Apache 2.3.x with APR 1.2.x


That /might/ work, unless Apache is depending on new functionality in
APR 1.3.x, which it very well might be.  One of those "YMMV, if it
breaks you get to keep both pieces" kinds of situations.

-garrett


Re: [mod_wombat] Patch to improve docs

2007-05-04 Thread Garrett Rooney

On 5/2/07, Brian McCallister [EMAIL PROTECTED] wrote:


On Apr 30, 2007, at 11:15 AM, Joe Schaefer wrote:

 Brian McCallister [EMAIL PROTECTED] writes:

 +If compiling (make) reports an error that it cannot find the
 +libapreq2 header file, please tell me ( [EMAIL PROTECTED] )
 +as this occurs under some configurations but we haven't
 +hammered down the weird things libapreq2 does with its
 +install. If you build libapreq2 with a --prefix configuration
 +option, it always seems to work.

 By default, libapreq2 tries to install itself alongside libaprutil.
 This is the relevant part of acinclude.m4:

 dnl Reset the default installation prefix to be the same as
 apu's
 ac_default_prefix=`$APU_CONFIG --prefix`

 Does mod_wombat use the apreq2-config script for getting at
 apreq2's installation data?

I don't know, Garrett did that part. Autoconf makes me crawl under my
desk and hide.


IIRC the apreq2 detection is particularly stupid, so it almost
certainly doesn't do the right thing here.  Patches welcome, of course
;-)

-garrett


Re: [mod_wombat] Patch to improve docs

2007-04-30 Thread Garrett Rooney

On 4/30/07, Brian McCallister [EMAIL PROTECTED] wrote:

Patch to add information on building, running tests, and organize the
README into some actual docu.


+1, looks like a big improvement.

-garrett


Re: 3.0 - Proposed Requirements

2007-02-14 Thread Garrett Rooney

On 2/14/07, Paul Querna [EMAIL PROTECTED] wrote:

This is a proposed list of requirements for a 3.0 platform. This list
enables a 'base' level of performance and design decisions to be made.
If others can make designs work with 'lesser' requirements, all the
better, but I'm not worried about it.

Proposed Requirements:
- C99 Compiler.


Are there any C99 compilers?  I was under the impression that GCC was
close, but nobody else really seemed to be pushing for it (i.e.
Microsoft doesn't seem to care).


- High Performance Event System Calls (KQueue, Event Ports, EPoll, I/O
Completion Ports).
- Async Socket and Disk IO available. (POSIX AIO?)


What kind of async I/O are you thinking of?  Does anyone actually use
posix aio?  I'm not all that thrilled with the idea of being the
canary in that coal mine ;-)


- Good Kernel Threading.

Based on this list, the following operating systems would be supported
TODAY:
- FreeBSD >= 6.x
- Solaris >= 10
- Linux >= 2.6
- Windows >= XP? (Maybe even 2003 or Vista -- I don't know this one well
enough)

Operating systems that would likely have problems with these
requirements today:
- AIX?
- NetWare?
- HP-UX?
- Many other older Unixy systems.

The key part to me for all of these is that this is "Today".  If you view any
3.0 project on a 1-3 year timeline, if we start pushing things like high
performance event system calls, there is time for these other operating
systems to pick them up.

Today, we have all of the major platforms with a good level of async IO,
events and threading support, so it makes sense to me to set these as
the base requirements.


I do think that providing a higher level of base requirements makes
sense, but I also expect that the devotees of systems that can't/don't
support those sort of things should be allowed to make things work on
their systems, so long as they don't require invasive changes in the
rest of the system.

-garrett


Re: 3.0 - Proposed Goals

2007-02-14 Thread Garrett Rooney

On 2/14/07, Paul Querna [EMAIL PROTECTED] wrote:


- Rewrite how Brigades, Buckets and filters work.  Possibly replace them
with other models. I haven't been able to personally consolidate my
thoughts on how to 'fix' filters, but I am sure we can have plenty of long
threads about it :-)


I think a big part of this should be documenting how filters are
supposed to interact with the rest of the system.  Right now it seems
to be very much a "well, I looked at this other module and did what it
did" approach, and it's quite easy to start depending on behavior in the
system that isn't actually documented to work that way.


- Build a cleaner configuration system, enabling runtime
reconfiguration. Today's system basically requires a complete restart of
everything to change configurations.  I would like to move to an
internal DOM like representation of the configuration, but preserve the
current file format as the 'default'. (Modules could easily write an XML
config file format, or pull from LDAP).


This seems like a rather invasive change.  Virtually every module
currently caches configuration info into global variables.  Are we
expecting these modules to dynamically query the core config system
whenever they want to access this sort of information?  What will the
performance implications of this sort of thing be?


- Experiment with embedding scripting languages or something like
Varnish'es VCL if and where it makes sense. (Cache Rules, Rewrite Rules,
Require Rules, and the like).


This seems like a Good Idea (tm).


- Promote and include a external-process communication method in the
core.  This could be used to communicate with PHP, a JVM, Ruby or many
other things that do not wish to be run inside a highly-threaded and
async core.  The place for large dynamic languages is not in the core
'data router' process. Choices include AJP, FastCGI, or just HTTP.  We
should optionally include a process management framework, to spawn these
as needed, to make configuration for server administrators easier.


+1

-garrett


Re: 3.0 - Proposed Requirements

2007-02-14 Thread Garrett Rooney

On 2/14/07, Greg Marr [EMAIL PROTECTED] wrote:

At 08:33 AM 2/14/2007, Garrett Rooney wrote:
On 2/14/07, Paul Querna [EMAIL PROTECTED] wrote:
This is a proposed list of requirements for a 3.0 platform. This list
enables a 'base' level of performance and design decisions to be made.
If others can make designs work with 'lesser' requirements, all the
better, but I'm not worried about it.

Proposed Requirements:
- C99 Compiler.

Are there any C99 compilers?  I was under the impression that GCC was
close, but nobody else really seemed to be pushing for it (i.e.
Microsoft doesn't seem to care).

According to all the information I've found, the only C99 features
that Visual Studio 2005 supports are long long, variadic macros,
and C++ style comments (which they've supported for years because of
requests from C++ customers).


Well, variadic macros are one of the really nice features of C99, but
I'm not sure that's really enough justification for requiring it...

-garrett


Re: 3.0 - Introduction

2007-02-14 Thread Garrett Rooney

On 2/14/07, Davanum Srinivas [EMAIL PROTECTED] wrote:

Dumb Question: Would all this mean a total(?) rewrite of APR as well?


A total rewrite of APR seems unlikely, but if there are changes people
want made to APR in order to better support new functionality in HTTPD
I don't see why it wouldn't be possible.

-garrett


Re: mod_memcache??

2007-02-05 Thread Garrett Rooney

On 2/2/07, Brian Akins [EMAIL PROTECTED] wrote:

I have a need to write a generic way to integrate apr_memcache into httpd.
Basically, I have several other modules that use memcached as a backend and
want to combine the boring stuff into a central place, ie configuration,
stats, etc.  We talked a little on list about this a few months ago, but no
one ever did anything.  Is anyone else interested in this?  Has anyone done
this?

Basically I was thinking there would be a single function:

apr_status_t ap_memcache_client(apr_memcache_t **mc)

which would simply give the user a client to use with normal apr_memcache
functions.  The module could create the underlying mc at post_config.

Basically, mod_memcache could have this config:

MemCacheServer memcache1.turner.com:9020 min=8 smax=16 max=64 ttl=5
MemCacheServer memcache4.turner.com:9020 min=8 smax=16 max=64 ttl=5
MemCacheServer memcache10.turner.com:9020 min=8 smax=16 max=64 ttl=5

or whatever.  This would end the config duplication between various modules.
This module could also add memcache stats to /server-status

Comments?


Seems useful to me.

-garrett


Re: svn commit: r490083 - /httpd/httpd/trunk/README

2006-12-24 Thread Garrett Rooney

On 12/24/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Author: fielding
Date: Sun Dec 24 14:54:49 2006
New Revision: 490083

URL: http://svn.apache.org/viewvc?view=rev&rev=490083
Log:
Follow Garrett's example and provide a crypto notice in the README,
with specific details for removing the crypto and for nossl packages.


Nice!  I was actually just thinking that we needed to do this for httpd.

-garrett


Re: svn commit: r490086 - in /httpd/site/trunk: docs/bis.rdf xdocs/bis.rdf

2006-12-24 Thread Garrett Rooney

On 12/24/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Author: fielding
Date: Sun Dec 24 15:32:15 2006
New Revision: 490086

URL: http://svn.apache.org/viewvc?view=rev&rev=490086
Log:
BIS Notice for TSU exception


Note that this still needs to be registered with export-registry.xml
and the crypto notification web page needs to be regenerated along
with the notification email to the US government.  See instructions on
http://www.apache.org/dev/crypto.html.

-garrett


Re: [VOTE] Import mod_wombat

2006-11-27 Thread Garrett Rooney

On 11/26/06, Paul Querna [EMAIL PROTECTED] wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

This is a vote to import mod_wombat under the httpd pmc.

mod_wombat is currently located at:
  http://svn.i-want-a-pony.com/repos/wombat/trunk/

If the vote passes, mod_wombat will fill out the incubation paperwork at
  http://incubator.apache.org/ip-clearance/index.html
And we would then import mod_wombat under:
  https://svn.apache.org/repos/asf/httpd/mod_wombat

The votes will be tallied 5 days from now on Thursday at 8:00pm PST.


+1

-garrett


Re: [VOTE] Re: mod_wombat

2006-11-21 Thread Garrett Rooney

On 11/20/06, Justin Erenkrantz [EMAIL PROTECTED] wrote:

On 11/17/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
 I'm happy to see wombat enter the ASF, but as an httpd-sponsored incubation
 project.  My question is, if we are punting mod_python out to a TLP, and
 mod_perl is already a TLP - is this a fit as a subproject of httpd, or it's
 own TLP?

Given that Paul wants to use it as a core module (i.e. to replace the
config) and I want it to do some crazy other stuff in the core
(partially config, but more run-time), it doesn't make sense as a TLP.
 So, I want it directly imported into our tree - not as a standalone
or separate module.


Regardless of any plans people have to use this stuff in the core, it
seems like the reason that mod_perl and mod_python either are separate
TLPs or are pushing to be one is because their communities don't
really overlap with that of core httpd.  In this case, other than
Brian the majority of people who are interested in working on this
stuff are httpd committers, and are intending on doing design
work/discussions on httpd lists, so IMO there's no reason to jump to
the "there won't be any overlap and thus the community will
essentially be separate and thus call for its own TLP" conclusion at
this time.

-garrett


Re: mod_wombat

2006-11-08 Thread Garrett Rooney

On 11/8/06, Paul Querna [EMAIL PROTECTED] wrote:


Thoughts?


Big +1 from me, although you probably saw that coming ;-)

Seriously though, I want to see mod_lua here at the ASF eventually.  I
had originally thought of it as a good example of a labs type project
(assuming that labs.apache.org gets off the ground), but it certainly
has the potential to grow into something more than that (with an
actual community, releases, etc), and if that's the path that Brian
wants to head down, the best home for it seems to be in the HTTP
Server PMC.

-garrett


Re: mod_wombat

2006-11-08 Thread Garrett Rooney

On 11/8/06, Brian McCallister [EMAIL PROTECTED] wrote:


Yes. I had thought it would be a good labs project, but as there is
already outside interest, I think a lab wouldn't be the right path
for it.


I figured as much.  If there are people who aren't yet ASF committers
of some sort who are interested in the project, then clearly it's
already beyond the labs stage ;-)

-garrett


Re: Coding style

2006-10-02 Thread Garrett Rooney

On 10/2/06, Nick Kew [EMAIL PROTECTED] wrote:

We have a bunch of new bug reports[1], detailing bugs of the form

   if ((rv = do_something(args) == APR_SUCCESS))

for

if ((rv = do_something(args)) == APR_SUCCESS)

Of course, that's a C classic, and can be a *** to spot.

We can avoid this by adopting an alternative coding style
that doesn't rely on confusing parentheses:

  if (rv = do_something(args), rv == APR_SUCCESS)

Can I suggest adopting this as a guideline for new code,
to avoid this kind of bug?


Or the even more readable:

rv = do_something(args);
if (rv == APR_SUCCESS) {

}

-garrett


Re: svn commit: r451006 - in /httpd/httpd/trunk/modules/generators: mod_cgi.c mod_cgid.c

2006-09-29 Thread Garrett Rooney

On 9/29/06, Joe Orton [EMAIL PROTECTED] wrote:

On Thu, Sep 28, 2006 at 08:15:44PM -, [EMAIL PROTECTED] wrote:
 --- httpd/httpd/trunk/modules/generators/mod_cgi.c (original)
 +++ httpd/httpd/trunk/modules/generators/mod_cgi.c Thu Sep 28 13:15:42 2006
 @@ -837,6 +837,11 @@
  APR_BLOCK_READ, HUGE_STRING_LEN);

  if (rv != APR_SUCCESS) {
 +if (rv == APR_TIMEUP) {

These should use APR_STATUS_IS_TIMEUP() rather than checking against
APR_TIMEUP directly.


Fixed in 451289.

-garrett


Re: Regexp-based rewriting for mod_headers?

2006-09-29 Thread Garrett Rooney

On 9/29/06, Nick Kew [EMAIL PROTECTED] wrote:

On Thursday 28 September 2006 18:29, Garrett Rooney wrote:
 On 9/28/06, Nick Kew [EMAIL PROTECTED] wrote:
  We have a problem with DAV + SSL hardware.
  It appears to be the issue described in
  http://svn.haxx.se/users/archive-2006-03/0549.shtml
 
  It seems to me that the ability to rewrite a request header
  will fix that.  As a generic fix, I've patched mod_headers
  to support regexp-based rewriting of arbitrary headers.
 
  Please review.  If people like this (or if no one objects and I
  find the time), I'll document it and commit to /trunk/.

 The patch seems reasonable, but I've never messed with mod_headers, so
 bear with me.  It should let you do something like:

 Header edit header regex replace-string

Exactly (and support mod_headers's optional cond var).

 Right?  Could you provide an example so I can confirm that it's
 working correctly?

The example that fixes the DAV problem above can be stated as

  RequestHeader edit Destination ^https: http:

Otherwise, one can dream up mod_rewrite like cases; e.g.
(modulo line wrap)

  Header edit Location ^http://([a-z0-9-]+)\.example\.com/
http://example.com/$1/

(not sure if that makes sense, but you get my meaning:-)


Cool.  I just tested it here, and it seems to work fine.  +1 to commit
from me, assuming suitable docs and so forth.

My only comment about the patch itself is I'm not overly thrilled
about recursively processing matches, simply because every time we
recurse we allocate an entire new string to hold the entire results,
which seems wasteful.  But on the other hand, any other solution would
probably be much more complex, and we're not talking about that many
levels of recursion in any sane configuration, so hey, whatever.

-garrett


Re: Regexp-based rewriting for mod_headers?

2006-09-28 Thread Garrett Rooney

On 9/28/06, Nick Kew [EMAIL PROTECTED] wrote:


We have a problem with DAV + SSL hardware.
It appears to be the issue described in
http://svn.haxx.se/users/archive-2006-03/0549.shtml

It seems to me that the ability to rewrite a request header
will fix that.  As a generic fix, I've patched mod_headers
to support regexp-based rewriting of arbitrary headers.

Please review.  If people like this (or if no one objects and I
find the time), I'll document it and commit to /trunk/.


The patch seems reasonable, but I've never messed with mod_headers, so
bear with me.  It should let you do something like:

Header edit header regex replace-string

Right?  Could you provide an example so I can confirm that it's
working correctly?

-garrett


Re: creating new lbmethod for mod_proxy_balancer

2006-09-25 Thread Garrett Rooney

On 9/25/06, snacktime [EMAIL PROTECTED] wrote:

I have very little C programming experience, but I've decided to
tackle adding another load balancing method to mod_proxy_balancer.
The reason for a new lbmethod is to have something that works nicely
with ruby on rails.  Both ruby and rails are not thread safe, which
poses certain challenges.  Right now the most popular way of hosting
rails apps is using Mongrel http://mongrel.rubyforge.org/.  Mongrel is
a simple http server which loads and runs rails, but it can only
process one request at a time due to rails not being thread safe.  So
a fairly good way to serve up rails is to have a cluster of mongrels
behind apache using balancer.  The problem is when you have a mix of
short requests with longer requests.  Mongrel will queue up
connections but can only service one at a time, so one mongrel might
have several slow requests queued up, and another mongrel might only
be serving one request.

So, what I'm trying to do is implement a new lbmethod that will only
proxy a request to a balance member that has no requests currently
being processed.  If there are no more balance members available, I'm
thinking the best thing is to implement a short wait cycle until a
free mongrel is available, and possibly log a warning so you know that
you need to add more mongrels to the cluster, and more balance members
to apache.

Any advice or hints are more than welcome, especially if for whatever
reason this is going to get really complicated.


The major problem you're going to have is that you'll have to use some
sort of shared memory (or a similar technique) to coordinate
information between the various worker processes.  Since Apache can
run in multiprocess mode in addition to multithreaded modes you can't
easily tell other workers "hey guys, I've got this one, don't use it
till I'm done".  It'd be a useful problem to solve in a generic manner
though, as the mod_proxy_fcgi module has a similar problem.

-garrett


Re: creating new lbmethod for mod_proxy_balancer

2006-09-25 Thread Garrett Rooney

On 9/25/06, Jim Jagielski [EMAIL PROTECTED] wrote:


Actually, I've added the 'busy' struct element which
could be used for that... The orig intent was to add
the mod_jk busyness LB method, but it would also
serve as a flag that the member is busy ;)

As of now, neither trunk nor 2.2.x does anything with busy,
but that will change soon :)


How is that flag shared across worker processes?  Is that structure
stored in shared memory or something?

-garrett


Re: creating new lbmethod for mod_proxy_balancer

2006-09-25 Thread Garrett Rooney

On 9/25/06, Ruediger Pluem [EMAIL PROTECTED] wrote:


After having a look in the code I am just wondering why we do not have any
locks around when changing this shared data / do not use atomics when
increasing e.g. the value for the number of read bytes (worker->s->read).
Is this correct?


That's a good question...  To be entirely correct, I believe atomics
should be used (and it would have to be actual atomics, not the bogus
fallbacks based on a cache of mutexes we use if we don't have real
atomics on the platform).

-garrett


Re: [patch 16/16] remove duplicated defines

2006-09-20 Thread Garrett Rooney

On 9/19/06, Davi Arnaut [EMAIL PROTECTED] wrote:

Remove duplicated defines.


Applied in r448226.  Thanks,

-garrett


Re: fcgi proxy module to 2.2.x?

2006-09-07 Thread Garrett Rooney

On 9/7/06, Jim Jagielski [EMAIL PROTECTED] wrote:

Topic for discussion: Add the FCGI proxy module to
the 2.2.x distro?


I'm split on the issue.  On one hand, I'd like to have some evidence
that someone has actually used it in anger and it didn't blow up on
them.  On the other hand I doubt anyone will do so without it being in
a release branch ;-)

It also lacks any documentation, so that seems like a prereq for
getting into 2.2.x.

-garrett


Re: fcgi proxy module to 2.2.x?

2006-09-07 Thread Garrett Rooney

On 9/7/06, Paul Querna [EMAIL PROTECTED] wrote:

Jim Jagielski wrote:
 Topic for discussion: Add the FCGI proxy module to
 the 2.2.x distro?

I personally would like to get the local-process spawning done first, or
has everyone pretty much given up on ever doing that?


I don't personally plan on working on it anytime soon, although I'd be
more than happy to look at patches implementing it if they were to
magically show up on this list someday.

-garrett


Re: load balancer cluster set

2006-07-31 Thread Garrett Rooney

On 7/31/06, Guy Hulbert [EMAIL PROTECTED] wrote:

On Mon, 2006-31-07 at 13:54 -0400, Brian Akins wrote:
 Guy Hulbert wrote:
  That's the ultimate case, after all :-)

 Not necessarily.  Google's answer is to throw tons of hardware at
 stuff.

The point of contention was scalability ... from a human point of view
it is really annoying to have to solve a problem twice but from the
business pov, outgrowing your load balancer might only be a good thing.


Oh please, 99.% of users have nowhere near the scalability
constraints that google operates under.  Are you saying that because
some do we shouldn't provide solutions that work for the rest?

-garrett


Re: svn commit: r423886 - in /httpd/httpd/trunk: CHANGES server/request.c

2006-07-20 Thread Garrett Rooney

On 7/20/06, Ruediger Pluem [EMAIL PROTECTED] wrote:


I guess I can't change the log entry anymore. All I can do is adjust the CHANGES
entry. Would that address your concerns?


Actually you can change the log entry.  Try 'svn pedit svn:log --revprop -r REVISION'

-garrett


Re: svn commit: r421686 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_proxy.xml modules/proxy/mod_proxy.c modules/proxy/mod_proxy.h modules/proxy/proxy_util.c

2006-07-14 Thread Garrett Rooney

On 7/14/06, Nick Kew [EMAIL PROTECTED] wrote:

On Friday 14 July 2006 14:16, Joe Orton wrote:

 This introduced compiler warnings:

 cc1: warnings being treated as errors
 mod_proxy.c: In function `proxy_interpolate':
 mod_proxy.c:427: warning: passing arg 1 of `ap_strstr' discards qualifiers from pointer target type
 mod_proxy.c:431: warning: passing arg 1 of `ap_strchr' discards qualifiers from pointer target type
 make[4]: *** [mod_proxy.slo] Error 1

Ugh.

It's being used in those lines with const char* arguments.
That's what the string.h strstr and strchr take.  Depending
on the AP_DEBUG setting, ap_strstr may be #defined to strstr
and ap_strchr to strchr.

So the fact that that *can* generate those warnings looks like
an over-engineered and inconsistent httpd.h.


You can avoid those warnings by using ap_strchr_c.

-garrett


Re: APR-UTIL build breakage

2006-07-12 Thread Garrett Rooney

On 7/12/06, Jim Jagielski [EMAIL PROTECTED] wrote:

Since this morning, apr-util (trunk) is refusing to build:

Making all in apr-util
make[3]: *** No rule to make target `.make.dirs', needed by
`buckets/apr_brigade.lo'.  Stop.
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1

As a stopgap, I copied over the .make.dirs from apr to apr-util.


This really should be discussed on the apr dev list, but FWIW, I
believe Joe Orton already committed a fix for this.

-garrett


Re: atom feeds for projects

2006-07-04 Thread Garrett Rooney

On 7/4/06, Sam Ruby [EMAIL PROTECTED] wrote:


 To be clear, AFAIK, there was never a patch for mod_mbox -- it was a
 Ruby file that only solved part of the problem. Again, AFAIK, no one
 ever wrote a patch in C for mod_mbox to attempt to resolve this issue.

I offered.  The response was, and I quote, "Erm, no".


The "Erm, no" was in response to the approach, not the offer to help, IIRC.

If you're willing to fix the problem the right way, by adding real
support for character sets to mod_mbox, I'm sure nobody would have a
problem with  that.

-garrett


Re: atom feeds for projects

2006-07-04 Thread Garrett Rooney

On 7/4/06, Sam Ruby [EMAIL PROTECTED] wrote:

Garrett Rooney wrote:
 On 7/4/06, Sam Ruby [EMAIL PROTECTED] wrote:

  To be clear, AFAIK, there was never a patch for mod_mbox -- it was a
  Ruby file that only solved part of the problem. Again, AFAIK, no one
  ever wrote a patch in C for mod_mbox to attempt to resolve this issue.

 I offered.  The response was, and I quote, "Erm, no".

 The "Erm, no" was in response to the approach, not the offer to help, IIRC.

 If you're willing to fix the problem the right way, by adding real
 support for character sets to mod_mbox, I'm sure nobody would have a
 problem with  that.

You chose to snip the portion where I argue that the approach I outlined
is necessary, at least as a fall-back/safety net.  Care to explain why
such a fall-back/safety net isn't necessary or appropriate?


No argument that it's necessary, but it seems kind of pointless to fix
that part without fixing the underlying fact that mod_mbox is totally
ignorant of character sets.  You'll get "perfectly valid" junk in the
vast majority of cases, and that doesn't seem like a real step forward
to me.

-garrett


Re: [PATCH] Compilation on Solaris

2006-06-19 Thread Garrett Rooney

On 6/16/06, Shanti Subramanyam [EMAIL PROTECTED] wrote:


Thanks Mads.
I've re-generated the PATCH with the Studio URL :

--- README.platformsFri Jun 16 13:58:10 2006
+++ README.platforms.orig   Thu Jun 15 13:13:50 2006
@@ -95,12 +95,4 @@

http://www.apache.org/dist/httpd/patches/apply_to_2.0.49/aix_xlc_optimization.patch

  (That patch works with many recent levels of Apache 2+.)
-
-  Solaris:
-On Solaris, much better performance can be achieved by using the
Sun Studio compiler
-instead of gcc. Download the compiler from
-
-   http://developers.sun.com/prodtech/cc/downloads/index.jsp
-
-Use the following compiler flags: -XO4 -xchip=generic


For what it's worth, you're also generating the patch backwards...
Those lines should start with +, not - ;-)

-garrett


Re: httpd-win.conf broken on trunk

2006-06-08 Thread Garrett Rooney

On 6/1/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:

Garrett Rooney wrote:

 One thing that seems odd, it looks like Makefile.win is still copying
 docs/conf/httpd-win.conf to conf/httpd.conf.default, isn't the goal of
 the previous changes to get a massaged version of httpd-std.conf.in
 there?

future tense, yes.  These were just glaring things I tripped over and
streamlining the extras copy that were already committed.

I think the one issue with using httpd-std.conf.in is the list of modules
to load; netware has an awk script which accomplishes this - perhaps we
piggyback on the same script?


Seems reasonable to me.

-garrett


Re: There should be a filter spec

2006-06-05 Thread Garrett Rooney

On 6/5/06, Joachim Zobel [EMAIL PROTECTED] wrote:

On Thursday, 01.06.2006, at 16:36 +0200, Plüm, Rüdiger, VF
EITO wrote:
 As far as I remember there had been also a discussion on who owns a brigade.
 So who has to call / should not call apr_brigade_destroy / apr_brigade_cleanup
 in the filter chain. I think rules for this would be also useful.

Maybe I can start such a spec by finding such discussions and writing
down the conclusions. Questions on correct filter behaviour can then be
discussed on the list and put into the this document if resolved.

Do the people here have a way to decide about what to put into such a
document?


Well, someone writing some stuff down so we can have a starting point
for the discussion seems like a good place to start.

-garrett


Re: httpd-win.conf broken on trunk

2006-06-01 Thread Garrett Rooney

On 6/1/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:

Garrett Rooney wrote:
 It looks like the trunk version of httpd has been busted on win32 ever
 since the big authz refactoring in r368027.  I'd be happy to make the
 changes to get it working again, if someone would be so kind as to
 point me to some sort of documentation on how exactly one goes from
 "The old way" to "The new way"...

Garrett please take a look at my commits a couple hrs ago, see if they
are in sync with the direction you want to go.  Need to look to backporting
as these changes are needed on 2.2 IIRC.


They seem reasonable to me, although I haven't tried them out.

One thing that seems odd, it looks like Makefile.win is still copying
docs/conf/httpd-win.conf to conf/httpd.conf.default, isn't the goal of
the previous changes to get a massaged version of httpd-std.conf.in
there?

-garrett


Re: svn commit: r410761 - /httpd/httpd/trunk/docs/conf/extra/httpd-mpm.conf.in

2006-06-01 Thread Garrett Rooney

On 6/1/06, Joshua Slive [EMAIL PROTECTED] wrote:

On 6/1/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Author: wrowe
 Date: Wed May 31 22:42:13 2006
 New Revision: 410761

 URL: http://svn.apache.org/viewvc?rev=410761&view=rev
 Log:

   That's the point, isn't it?  All mpm's in one basket?

Sure, but windows has its own config file where we left the mpm_winnt
stuff in.  So putting it here is duplication.


Considering that the httpd-win.conf file is so out of date that it
doesn't even work, I think Bill's goal here is to make it use the same
configuration files as the rest of the platforms...

-garrett


Re: There should be a filter spec

2006-06-01 Thread Garrett Rooney

On 5/26/06, Joachim Zobel [EMAIL PROTECTED] wrote:

Hi.

I need a specification that tells me what a filter must/should and must
not/should not do. Is there something alike?

My actual problem is that I have a filter that if nothing was modified
drops all content, sets f->r->status to 304 and sends an eos bucket.
This works. If I pass the output through mod_transform the browser does
get a 200 without content instead.

So probably mod_transform is broken. But it is difficult to decide what
is broken without such a spec.


I've had similar issues lately, it's very unclear how a filter setting
f->r->status or f->r->status_line should act.  Depending on what
modules you're working with, that may be enough to get an error out to
the browser, and it may not.  Some specific rules about that sort of
thing would be quite useful...

-garrett


httpd-win.conf broken on trunk

2006-05-31 Thread Garrett Rooney

It looks like the trunk version of httpd has been busted on win32 ever
since the big authz refactoring in r368027.  I'd be happy to make the
changes to get it working again, if someone would be so kind as to
point me to some sort of documentation on how exactly one goes from
The old way to The new way...

-garrett


Re: Apache 2.0 - 2.2 Module upgrade...errors

2006-05-25 Thread Garrett Rooney

On 5/25/06, Schwenker, Stephen [EMAIL PROTECTED] wrote:



Hello,

I'm trying to upgrade a 3rd party module from 2.0 to 2.2 and I'm getting the
following errors.  Can anyone help me figure out what the issue is?  I'm not
sure where to start.

Thank you,


Steve.






gcc -DHAVE_CONFIG_H -g -O2 -pthread -fPIC -DUSING_MIBII_SYSORTABLE_MODULE
-I..
-I/opt/jboss/agent-jboss-1.2.29/product_connectors/snmp-jboss-1.2.29/snmp_common/ucd-snmp
-I/opt/jboss/agent-jboss-1.2.29/product_connectors/snmp-jboss-1.2.29/snmp_common/ucd-snmp/snmplib
-I/opt/jboss/agent-jboss-1.2.29/product_connectors/snmp-jboss-1.2.29/snmp_common/ucd-snmp/agent
-I/opt/jboss/agent-jboss-1.2.29/product_connectors/snmp-jboss-1.2.29/snmp_common/sdbm
-I/usr/local/apache222/include -I/usr/local/apache222/include -c
apache-status-process.c -fPIC -DPIC -o .libs/apache-status-process.o

In file included from
/usr/local/apache222/include/ap_config.h:25,

from /usr/local/apache222/include/httpd.h:43,

from apache-status-process.c:10:

/usr/local/apache222/include/apr.h:270: error: syntax error
before apr_off_t


This implies that you're not passing the correct CFLAGS for the APR
headers.  apr-1-config --cflags will show them to you, although if
you're building with apxs they should already be included for you.

-garrett


Re: mod_proxy_fcgi and php

2006-05-21 Thread Garrett Rooney

On 5/13/06, Markus Schiegl [EMAIL PROTECTED] wrote:


Because of r->uri == r->path_info, ap_add_cgi_vars sets SCRIPT_NAME to

PHP needs this one for backreference and http://cgi-spec.golux.com states
it must be set.
An empty r->path_info (manually patched) would give me a SCRIPT_NAME
but removes PATH_INFO, ergo no solution. Ideas?


What do other fcgi implementations send?  When in doubt, I imagine the
best thing to do is whatever mod_fastcgi or mod_fcgid do.


Each fcgi-request is logged (mod_proxy_fcgi.c, line 918) as error?
What about replacing APLOG_ERR with APLOG_DEBUG?


Done, along with changing a couple of APLOG_DEBUGs that are really
errors to APLOG_ERR.

-garrett


Re: mail-archives.apache.org not refreshed

2006-05-19 Thread Garrett Rooney

On 5/19/06, Plüm, Rüdiger, VF EITO [EMAIL PROTECTED] wrote:

Someone an idea why at least the httpd lists do not get refreshed any more in 
the mod_mbox
archive on mail-archives.apache.org? The latest entries e.g. for 
bugs@httpd.apache.org are
rather old.


Yeah, that was noticed yesterday, and we thought it was a cron job
that wasn't being run, but apparently getting the cron job going again
hasn't fixed things.  Will get people to look into it again today.

-garrett


mod_dav spamming over r->status

2006-05-19 Thread Garrett Rooney

So I've got this module that works as an input filter.  It sits in
front of mod_dav_svn and parses incoming REPORT requests, and when it
determines that something is wrong it sets r->status and
r->status_line and errors out by returning APR_EGENERAL.

In HTTPD 2.0.x this works as I expect, the status line I set is sent
back to the user.  In HTTPD 2.2.x I just get a generic error (400 Bad
Request).  This seems kind of odd, so I looked into it.

It turns out that the source of the difference is that in 2.2.1 we
added code that sanity checks r-status and r-status_line, and only
uses r-status_line if the number at the beginning matches r-status.
It turns out that despite the fact that I set both r-status and
r-status_line at the same time, by the time it hits
basic_http_header_check in http_filters.c they're no longer the same.

So where is r->status getting spammed?  Well, it's actually deep in
mod_dav.  We call ap_xml_parse_input, which ends up causing our filter
to get called and  the underlying error gets returned.  This means
that HTTP_BAD_REQUEST gets returned up the stack.  That eventually
gets passed into ap_die, and in ap_die, which does the actual setting
of r-status to HTTP_BAD_REQUEST, which means our r-status and
r-status_line are out of sync, which means the status line gets
ignored.

So my first instinct is to fix this in the XML parsing code.  If we
get to the error case and r->status is already set to something
interesting, we should return that instead of our default
HTTP_BAD_REQUEST.  Does that make sense?

-garrett


Re: [PATCH] aborting on OOM

2006-05-10 Thread Garrett Rooney

On 5/10/06, Joe Orton [EMAIL PROTECTED] wrote:

There are a few choices for what to do in the oom handler: 1.3 fprintf's to
stderr, then does exit(1), which doesn't seem particularly wise since
fprintf can itself malloc; could do similarly, could just exit(1) or
even just exit(APEXIT_CHILDSICK); but then nothing gets logged.  With
abort() at least something is logged, and you can get core dumps with a
suitably configured environment/server, for further diagnosis.  Any
opinions?


I would personally prefer abort to exit...

-garrett


Re: [PATCH] aborting on OOM

2006-05-10 Thread Garrett Rooney

On 5/10/06, Colm MacCarthaigh [EMAIL PROTECTED] wrote:

On Wed, May 10, 2006 at 10:53:50AM -0700, Garrett Rooney wrote:
 I would personally prefer abort to exit...

is write()'ing a static error message an option too?


Perhaps, but where would you write() to?

-garrett


Re: [PATCH] aborting on OOM

2006-05-10 Thread Garrett Rooney

On 5/10/06, Colm MacCarthaigh [EMAIL PROTECTED] wrote:

On Wed, May 10, 2006 at 11:11:27AM -0700, Garrett Rooney wrote:
 On 5/10/06, Colm MacCarthaigh [EMAIL PROTECTED] wrote:
 On Wed, May 10, 2006 at 10:53:50AM -0700, Garrett Rooney wrote:
  I would personally prefer abort to exit...
 
 is write()'ing a static error message an option too?

 Perhaps, but where would you write() to?

STDERR_FILENO :)


Which is likely to be redirected to /dev/null in most cases...

-garrett


Re: [PATCH] aborting on OOM

2006-05-10 Thread Garrett Rooney

On 5/10/06, Colm MacCarthaigh [EMAIL PROTECTED] wrote:

On Wed, May 10, 2006 at 11:22:25AM -0700, Garrett Rooney wrote:
 Which is likely to be redirected to /dev/null in most cases...

We redirect standard error to the main error log  :) See ap_open_logs in
server/log.c :-)  httpd -E also causes stderr redirection for the
start-up phase, /dev/null is the usually the exception :)


Ahh, I stand corrected.  Sounds good to me.

-garrett


Re: mod_proxy_fcgi and php

2006-05-09 Thread Garrett Rooney

On 4/22/06, Markus Schiegl [EMAIL PROTECTED] wrote:

Sorry it took me so long to get back to this.  Got distracted with
other things, etc.


From my limited perspective r->filename should be set to
/opt/www/html/i.php
Any ideas?


mod_proxy_fcgi is talking to an arbitrary socket that could correspond
to any file on disk, how would it figure out what to set r->filename
to?

The fact that PHP has settings you can tweak to make this work implies
to me that it's not a problem we need to fix...


While playing with mod_rewrite I realized it does not recognize fcgi as a
scheme yet (1)
The following patch should solve this.

Index: httpd-trunk/modules/mappers/mod_rewrite.c
===
--- httpd-trunk/modules/mappers/mod_rewrite.c   (revision 396157)
+++ httpd-trunk/modules/mappers/mod_rewrite.c   (working copy)
@@ -577,6 +577,9 @@
 if (!strncasecmp(uri, "tp://", 5)) {    /* ftp://  */
     return 6;
 }
+if (!strncasecmp(uri, "cgi://", 6)) {   /* fcgi:// */
+    return 7;
+}
 break;

 case 'g':


I'll look at getting this checked in, thanks!

-garrett


Re: test/zb.c

2006-05-08 Thread Garrett Rooney

On 5/8/06, Thom May [EMAIL PROTECTED] wrote:

Hey,
just had a report in debian that test/zb.c's license doesn't necessarily
allow you to modify and redistribute the code. A quick grep around doesn't
reveal any uses of this code in our tree, and given that we have
support/ab.c it seems strange to carry both.
Can we just drop it?


Isn't ab.c derived from zb.c?  If so, isn't that kind of problematic
with regard to this licensing issue?

-garrett


Re: test/zb.c

2006-05-08 Thread Garrett Rooney

On 5/8/06, Sander Temme [EMAIL PROTECTED] wrote:


Found on http://svn.apache.org/viewcvs.cgi?rev=80572&view=rev

Does an archive of that apache-core mailing list mentioned above exist?


Yes, it does.  The first few years of archives of the httpd pmc
mailing list are actually the archives of the old apache-core list.
It's not public, but you're a member, so you should be able to read it.


Do we need zb.c to be in our tree? Or can we declare it superseded by
ab.c? If only to help out our friends over at Debian?


I can't see why we'd need it...

-garrett


Re: [PATCH 5/6] hard restart on Linux #38737

2006-05-07 Thread Garrett Rooney

On 5/7/06, Nick Kew [EMAIL PROTECTED] wrote:


Now, what about a platform that HAS_PTHREAD_KILL, but which uses some
other form of threading in its APR (isn't that at least an option on some
FreeBSD versions?)  Wouldn't this break horribly when it pthread_kills a
non-pthread?  Couldn't it even happen on Linux, in principle at least?


On FreeBSD you basically need to pick one threading library, if you're
linked against more than one of them bad things happen, since they all
implement the same pthread functions.  On Solaris, which does have
multiple threading implementations with different APIs, I don't think
it would matter, since pthreads is implemented on top of the lower
level solaris threads.  I suspect that's the common case, if there is
another threading library it's lower level and pthreads is generally
implemented on top of it.  The only other case I can think of is if
you're using a user level threading library but the system has its own
pthreads library.

-garrett


Re: [PATCH 5/6] hard restart on Linux #38737

2006-05-07 Thread Garrett Rooney

On 5/7/06, Nick Kew [EMAIL PROTECTED] wrote:

On Sunday 07 May 2006 23:07, Garrett Rooney wrote:
 On 5/7/06, Nick Kew [EMAIL PROTECTED] wrote:
  Now, what about a platform that HAS_PTHREAD_KILL, but which uses some
  other form of threading in its APR (isn't that at least an option on some
  FreeBSD versions?)  Wouldn't this break horribly when it pthread_kills a
  non-pthread?  Couldn't it even happen on Linux, in principle at least?

 On FreeBSD you basically need to pick one threading library, if you're
 linked against more than one of them bad things happen, since they all
 implement the same pthread functions.

So there's no thread API that isn't pthreads?


Correct, there are user level pthreads implementations (libc_r, GNU
pth), and kernel or mixed userland/kernel implementations (libpthread,
libthread), but they all expose the same pthreads API, and you can't
mix them.


 On Solaris, which does have
 multiple threading implementations with different APIs, I don't think
 it would matter, since pthreads is implemented on top of the lower
 level solaris threads.

But if an APR was built on the lower-level threads, then we might
be pthread_kill()ing something that isn't a pthread?  Even if it's
only a theoretical possibility, we should avoid it.


Does APR actually do this on any platform?


If we had an APR #define for pthread vs non-pthread threading
implementations, we should test that.  As it stands, if we take
the patch as-is, then IMO we need to document the MPMs
explicitly as requiring APR threads to be built on pthreads.

 I suspect that's the common case, if there is
 another threading library it's lower level and pthreads is generally
 implemented on top of it. The only other case I can think of is if
 you're using a user level threading library but the system has its own
 pthreads library.

Exactly.  What if someone out there has a super-optimised APR
built on their lower-level threads, on a platform that also has pthreads?


Well, if they do then it's not in the actual APR distribution, so I
have trouble caring...  If we're on unix systems, I think it's safe to
assume that it's either pthreads, or something that interoperates well
with pthreads (i.e. solaris threads, where you can use either the
pthreads functions or the solaris threads functions and they play
reasonably nicely together).

-garrett


Re: svn commit: r396063 - in /httpd/httpd/trunk: modules/proxy/config.m4 modules/proxy/fcgi_protocol.h modules/proxy/mod_proxy_balancer.c modules/proxy/mod_proxy_fcgi.c support/ support/Makefile.in su

2006-04-28 Thread Garrett Rooney

On 4/28/06, Joe Orton [EMAIL PROTECTED] wrote:

On Sat, Apr 22, 2006 at 03:44:07AM -, [EMAIL PROTECTED] wrote:
 Author: rooneg
 Date: Fri Apr 21 20:44:05 2006
 New Revision: 396063

 URL: http://svn.apache.org/viewcvs?rev=396063&view=rev
 Log:
 Merge the fcgi-proxy-dev branch to trunk, adding a FastCGI back end for
 mod_proxy.  This log message is just a summary of the changes, for the
 full original log messages see r357431:393955 in branches/fcgi-proxy-dev.

This code has the following gcc (4.1 on x86_64) warnings:

mod_proxy_fcgi.c: In function 'send_environment':
mod_proxy_fcgi.c:269: warning: implicit declaration of function 
'ap_add_common_vars'
mod_proxy_fcgi.c:270: warning: implicit declaration of function 
'ap_add_cgi_vars'
mod_proxy_fcgi.c: In function 'handle_headers':
mod_proxy_fcgi.c:467: warning: implicit declaration of function 
'ap_scan_script_header_err_brigade'
mod_proxy_fcgi.c: In function 'dispatch':
mod_proxy_fcgi.c:677: warning: format '%d' expects type 'int', but argument 7 
has type 'apr_size_t'
mod_proxy_fcgi.c: In function 'proxy_fcgi_handler':
mod_proxy_fcgi.c:62: warning: 'brb.reserved[0]' is used uninitialized in this 
function
mod_proxy_fcgi.c:63: warning: 'brb.reserved[1]' is used uninitialized in this 
function
mod_proxy_fcgi.c:64: warning: 'brb.reserved[2]' is used uninitialized in this 
function
mod_proxy_fcgi.c:65: warning: 'brb.reserved[3]' is used uninitialized in this 
function
mod_proxy_fcgi.c:66: warning: 'brb.reserved[4]' is used uninitialized in this 
function


Thanks!  Should be fixed in r397968.

-garrett


Re: Fold mod_proxy_fcgi into trunk (and maybe 2.2...)

2006-04-21 Thread Garrett Rooney

On 4/19/06, Jim Jagielski [EMAIL PROTECTED] wrote:

I think that the Proxy FastCGI module is at a point where
we should consider folding it into trunk, with the hope
of it being backported to 2.2.x and some not-too-distant
future.


Since everyone seems to be in favor of merging it I went ahead and did
it, see r396063 for details.

-garrett


Re: Fold mod_proxy_fcgi into trunk (and maybe 2.2...)

2006-04-19 Thread Garrett Rooney
On 4/19/06, Jim Jagielski [EMAIL PROTECTED] wrote:
 I think that the Proxy FastCGI module is at a point where
 we should consider folding it into trunk, with the hope
 of it being backported to 2.2.x and some not-too-distant
 future.

 Comments?

+1 on merging to trunk, +0 on 2.2.x.  I'd love to see someone actually
using it for something real before it goes into any release, and at
this point I'm not sure it has...

-garrett


Re: Fold mod_proxy_fcgi into trunk (and maybe 2.2...)

2006-04-19 Thread Garrett Rooney
On 4/19/06, Plüm, Rüdiger, VF EITO [EMAIL PROTECTED] wrote:


  -Ursprüngliche Nachricht-
  Von: Jim Jagielski
 
 
  I think that the Proxy FastCGI module is at a point where
  we should consider folding it into trunk, with the hope
  of it being backported to 2.2.x and some not-too-distant
  future.
 
  Comments?
 

 Questions:

 I am a lazy guy :-).
 Would it be possible for you to provide the changes as a diff that
 need to be applied to the *existing* sources on the trunk?
 This would help to understand what changes in the existing files as a
 result of this merge.

The diff for existing files is very small.  Just the build changes to
build the new code and a tweak to mod_proxy_balancer if I recall
correctly.  Just doing a diff between the revision we created the
branch in and the tip of the branch should give it to you.

 Are there any test cases for the test framework to check the FastCGI module?

At the moment no, although contributions of them would be more than welcome.

-garrett


Re: Fold mod_proxy_fcgi into trunk (and maybe 2.2...)

2006-04-19 Thread Garrett Rooney
On 4/19/06, Colm MacCarthaigh [EMAIL PROTECTED] wrote:
 On Wed, Apr 19, 2006 at 11:06:56AM -0400, Jim Jagielski wrote:
   +1 on merging to trunk, +0 on 2.2.x.  I'd love to see someone actually
   using it for something real before it goes into any release, and at
   this point I'm not sure it has...
 
  Hence my desire to get it into a branch that people are actively
  playing with :)

 Would an alpha 2.3 release solve that problem?

Sure, assuming we actually merge it to trunk ;-)

-garrett


Re: It's that time of the year again

2006-04-17 Thread Garrett Rooney
On 4/16/06, Ian Holsman [EMAIL PROTECTED] wrote:
 Google is about to start its summer of code project

 what does this mean for HTTP/APR ?

 we need:
 - mentors

I'd be willing to help mentor.

 and
 - project ideas.

A few ideas:

in APR:

  - Improve the build system so that it can generate win32 project
files automatically, instead of requiring us to maintain them by hand.
 It also might be nice to allow generation of makefiles on win32, so
we can build via command line tools instead of requiring visual
studio.
  - Add a logging API, abstracting the differences between syslog,
win32 event logs, and basic file backed logs.  This project has the
potential to involve working with the Subversion project as well,
since it has a need for such an API.

in HTTPD:

  - Extend the mod_proxy_fcgi code so that it can manage its own
worker processes, rather than requiring them to be managed externally.
 Would most likely require a bit of refactoring inside of mod_proxy as
well.

-garrett


Re: It's that time of the year again

2006-04-17 Thread Garrett Rooney
On 4/17/06, Colm MacCarthaigh [EMAIL PROTECTED] wrote:
 On Mon, Apr 17, 2006 at 12:34:29PM -0400, Rian A Hunter wrote:
  I think a SoC project that profiles Apache (and finds out where we
  fall short) so that we are able to compete with other lightweight HTTP
  servers popping up these days would be a good endeavor for any CS
  student.

 Right now, I'm getting 22k reqs/sec from Apache httpd, and 18k/sec from
 lighttpd. Simple things like using epoll, or the way the worker
 balancing is done have huge effects compared to the tiny improvements
 refactoring code can have.

  This seems to be more viable for our threaded MPMs. For the prefork
  MPM, maybe a goal for 10,000 connections might be impractical.

 With prefork I can generally push about 27,000 concurrent connections
 before things get hairy. With worker, I have a usable system up to
 83,000 concurrent connections, without much effort.

  I haven't done any benchmarks myself, I've just read results so anyone
  correct me if I'm wrong.

  Dan Kegel's page is years out of date, and was uninformed even when it
 wasn't :)

I suspect that a significant problem with this sort of project will be
lack of proper hardware for benchmarking purposes.  From everything
I've heard it's not all that hard to totally saturate the kind of
networks you're likely to have sitting around your house with
commodity hardware and no real effort.  To really benchmark it's going
to require more stuff than your average college student has lying
around the house.

-garrett


Re: It's that time of the year again

2006-04-17 Thread Garrett Rooney
On 4/17/06, Brian McCallister [EMAIL PROTECTED] wrote:

 On Apr 17, 2006, at 10:04 AM, Garrett Rooney wrote:

  I suspect that a significant problem with this sort of project will be
  lack of proper hardware for benchmarking purposes.  From everything
  I've heard it's not all that hard to totally saturate the kind of
  networks you're likely to have sitting around your house with
  commodity hardware and no real effort.  To really benchmark it's going
  to require more stuff than your average college student has lying
  around the house.

 That would be one of the advantages of it being with the ASF -- in
 theory we can get access to some network gear which might be harder
 for a student to lay hands on.

Perhaps, but AFAICT infra@ doesn't have this kind of thing lying
around at the moment, so unless someone is going to step up with
hardware people can use it's kind of a showstopper.

-garrett


Re: svn commit: r391080 - in /httpd/mod_smtpd/trunk/src: Makefile.am smtp_bouncer.c smtp_protocol.c smtp_util.c

2006-04-03 Thread Garrett Rooney
On 4/3/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 --- httpd/mod_smtpd/trunk/src/smtp_util.c (original)
 +++ httpd/mod_smtpd/trunk/src/smtp_util.c Mon Apr  3 09:40:17 2006
 @@ -76,15 +76,17 @@
  smtpd_run_on_reset_envelope(scr);
  smtpd_clear_envelope_rec(scr);
  }
 -
 +#if 0
  #include <netinet/in.h>
  #include <arpa/nameser.h>
  #include <resolv.h>
 +#endif

  SMTPD_DECLARE_NONSTD(apr_status_t)
  smtpd_get_mailex(apr_pool_t *pool, /* out */ char **resolved_host,
   char *original_host)
  {
 +#if 0
  /* res_search hack, too dependent */
  unsigned char buf[NS_PACKETSZ];
  ns_msg handle;
 @@ -130,6 +132,8 @@
 ns_rr_rdata(rr) + 2,
 *resolved_host,
 MAXDNAME);
 +#endif
 +*resolved_host = apr_pstrdup(pool, original_host);

  return APR_SUCCESS;
  }

If you must #if 0 out code like this, at least leave some indication
of why it's been done.  Ideally though, I'd suggest just removing it,
you can always get it back from the subversion repository if you need
it again later.

-garrett


Re: fcgi branch

2006-04-01 Thread Garrett Rooney
On 4/1/06, Jim Jagielski [EMAIL PROTECTED] wrote:
 Topic for discussion: merge mod_proxy_fcgi into trunk...

 I think we're pretty close...

+1

There are a number of improvements I'd like to make eventually
(fcgistarter needs a good way to let you signal the running fcgi
processes to restart or shut down, we need a way to use unix domain
sockets instead of tcp sockets, mod_proxy_fcgi eventually needs a way
to manage its own worker processes, etc), but it is at that sort of
minimal acceptable functionality stage, and there's no reason to
continue to keep it sequestered on the branch.

Thank you for bringing this up, I've been meaning to do so for a while
now and just haven't gotten around to it yet.

-garrett


Re: When did this break?

2006-03-31 Thread Garrett Rooney
On 3/31/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
 It seems this can be trivially solved by ensuring we defer this test until
 after we invoke the sub-configure of apr and apr-util.

 Comments?

Makes sense to me.

-garrett


Re: [PATCH] (resend) Re: mod_authz_core:check_provider_list bug?

2006-03-31 Thread Garrett Rooney
On 3/31/06, Max Bowsher [EMAIL PROTECTED] wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Could someone commit?

Committed in r390506, thanks.

-garrett


Re: svn commit: r390511 - /httpd/httpd/trunk/support/ab.c

2006-03-31 Thread Garrett Rooney
On 3/31/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 +   **
 +   ** Version 2.3
 +   ** SIGINT now triggers output_results().
 +   ** Conributed by colm, March 30, 2006

You're missing a t in contributed.

-garrett


Re: APR resolver?

2006-03-28 Thread Garrett Rooney
On 3/28/06, Rian A Hunter [EMAIL PROTECTED] wrote:

  I know there is apr_sockaddr_info_get but this doesn't handle getting mx
  records (or other DNS record types). Should a resolver API be added to APR?
  Actually I think yes and I think I'm going to implement this. Any objections?

A resolver API would be great to have.  Some people have talked in the
past about implementing an asynchronous DNS resolver in APR, since the
existing Async resolver libraries out there are not especially easy to
use with an APR app, if you felt like taking that additional step I'm
sure it would be welcome.

That said, this should really be discussed on [EMAIL PROTECTED], not [EMAIL PROTECTED] ;-)

-garrett


Re: svn commit: r384580 - in /httpd/httpd/trunk/modules/proxy: mod_proxy.c mod_proxy.h mod_proxy_ajp.c proxy_util.c

2006-03-09 Thread Garrett Rooney
On 3/9/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 --- httpd/httpd/trunk/modules/proxy/mod_proxy.c (original)
 +++ httpd/httpd/trunk/modules/proxy/mod_proxy.c Thu Mar  9 10:39:16 2006
 @@ -218,6 +218,26 @@
  }
  }
  }
 +else if (!strcasecmp(key, "ajpflushpackets")) {
 +    if (!strcasecmp(val, "on"))
 +        worker->ajp_flush_packets = ajp_flush_on;
 +    else if (!strcasecmp(val, "off"))
 +        worker->ajp_flush_packets = ajp_flush_off;
 +    else if (!strcasecmp(val, "auto"))
 +        worker->ajp_flush_packets = ajp_flush_auto;
 +    else
 +        return "FlushPackets must be On|Off|Auto";
 +}
 +else if (!strcasecmp(key, "ajpflushwait")) {
 +    ival = atoi(val);
 +    if (ival > 1000 || ival < 0) {
 +        return "AJPFlushWait must be <= 1000, or 0 for system default of 10 milliseconds.";
 +    }
 +    if (ival == 0)
 +        worker->ajp_flush_wait = AJP_FLUSH_WAIT;
 +    else
 +        worker->ajp_flush_wait = ival * 1000;    /* change to microseconds */
 +}
  else {
      return "unknown Worker parameter";
  }

This isn't really a complaint about this particular change, more about
the way the worker parameter stuff works at the moment.

Sticking per-backend info like ajp_flush_wait into the worker object
and the code to configure it in mod_proxy.c itself seems very wrong to
me.  There should be a per-backend context pointer to hold per-backend
information, and the work of handling worker parameters really should
be pushed to a per-backend callback or something like that.

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-07 Thread Garrett Rooney
On 3/7/06, Brian Candler [EMAIL PROTECTED] wrote:

 I'm not sure what you mean there, in particular what you mean by 'assumes
 that you can make multiple connections to back end fastcgi processes'

 What I'm familiar with is apache 1.x with mod_fcgi. In that case, the
 typical fastcgi program does indeed handle a single request at once, but you
 have a pool of them, analagous to httpd with its pool of worker processes.

 Multiple front-end processes can open sockets to the pool, and each
 connection is assigned to a different worker. In this regard, a fastcgi
 process is just like a httpd process. I don't see why mod_proxy_foo can't
 open multiple fastcgi connections to the pool, in the same way as it could
 open multiple http connections to an apache 1.x server.

 (You could think of the fastcgi protocol as just a bastardised form of HTTP.
 I wonder why they didn't just use HTTP as the protocol, and add some
 X-CGI-Environment: headers or somesuch)

Oh, believe me, I'm fully aware of how fastcgi typically works ;-)

The problem is meshing that concept with how mod_proxy works.  When
you give ProxyPass a URL it really wants a host + port combo, you
can't (now anyway) have N different backend processes (listening on
different ports) all associated with the same URL.  To do that you
need to wrap them up in a balancer group and use mod_proxy_balancer.

Balancer then has the problem that the worker processes don't
coordinate things with each other, so even if you're dropping the
connection to the back end process as soon as you're done with it you
can still get into situations where there are free back end processes,
but you're sitting there waiting because you tried to connect to one
that's already in use.

  Second, mod_proxy_balancer doesn't (seem to) have any real mechanism
  for adding back end processes on the fly, which is something that
  would be really nice to be able to do.  I'd eventually love to be able
  to tell mod_proxy_fcgi that it should start up N back end processes at
  startup, and create up to M more if needed at any given time.
  Processes should be able to be killed off if they become nonresponsive
  (or probably after processing a certain number of requests)

 ... sounds very similar to httpd worker process management (for non-threaded
 workers)

  , and they
  should NOT be bound up to a single httpd worker process.

 In that case, is the underlying problem that mod_proxy_foo shouldn't really
 hold open a *persistent* connection to the fastcgi worker pool, otherwise it
 will tie up a fastcgi worker without good reason, preventing it from doing
 work for someone else?

Just keeping it from having a persistent connection helps, but it's
not sufficient to solve the entire problem.

  So is there some reason I'm missing that justifies staying within the
  proxy framework

 Maybe. You might want to consider the case where the fastcgi server is a
 *remote* pool of workers, where the fastcgi messages are sent over a TCP/IP
 socket, rather than a local Unix domain socket. In that case, some remote
 process is responsible for managing the pool, and this is arguably very
 similar to the proxy case.

That's actually the case we support now.  There is no support for unix
domain sockets yet, it's all TCP.

 OTOH, the typical approach when using such a remote pool is to have a
 different port number for each fastcgi application, since I'm not sure that
 the fastcgi protocol itself has some way of passing down a URL or partial
 URL which could identify the particular worker of interest. If it did, a
 single process listening on a single socket could manage a number of
 different applications, each with a different pool of workers. In any case,
 though, it probably needs a configured list of applications, as it will need
 some parameters for each one (e.g. minimum and maximum size of pool, as you
 state)

That's what you'd be doing with the current state of things, you use
mod_proxy_balancer to wrap up the different back end processes.

Alternatively, you can just use the new -N flag I gave to fcgistarter,
which lets you start N workers that all listen on the same port, then
your problem has sort of gone away.  Unfortunately, at this point you
don't really have a good solution for managing the processes, since
you can't easily start new ones (fcgistarter would have to persist for
that) and you can't easily (from mod_proxy_fcgi's point of view) tell
the difference between the various back end processes because they all
listen on the same port, so when one of them times out you can't say
kill that off and start a new one because you don't know which one
to kill off.

Anyway, as you can see there are a number of issues at this point.

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-06 Thread Garrett Rooney
On 3/6/06, Jim Jagielski [EMAIL PROTECTED] wrote:
 I think the whole issue revolves around whether the balancer
 should, or should not, pre-open connections and manage them
 internally, or whether it should be one-shot. The real
 power is being able to load balance, and implement
 that in a central location.

 So it seems to me that some sort of Balancer member
 option that determines whether or not the connection
 is persistent or not would alleviate some of
 the issues you raise.

We actually have a way to do that, it's the close_on_recycle flag, and
I had to turn it on in order to get anything approaching reliability
for fastcgi.  The problem with just using that is that without some
coordination between worker processes you're still going to end up
with collisions where more than one connection is made to a given
fastcgi process, and the majority of those don't know how to handle
more than one connection at a time, so requests will simply hang.

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-06 Thread Garrett Rooney
On 3/6/06, Brian Akins [EMAIL PROTECTED] wrote:
 Garrett Rooney wrote:
 [snip]


 Also, we tend to run most of our fastcgi's using a domain socket.  I'm
 sure others do that as well.

True, but that's actually fairly simple to implement.  I've got a
scheme for making that work under proxy already, just haven't actually
implemented it yet.

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-06 Thread Garrett Rooney
On 3/6/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
 Jim Jagielski wrote:
  I think the whole issue revolves around whether the balancer
  should, or should not, pre-open connections and manage them
  internally, or whether it should be one-shot. The real
  power is being able to load balance, and implement
  that in a central location.
 
  So it seems to me that some sort of Balancer member
  option that determines whether or not the connection
  is persistent or not would alleviate some of
  the issues you raise.

 That would be the ideal model for any remoted ASP.NET container as well.
 Some persistance flag to indicate that a backed should be persistant,
 and pooled, and the pool constraints (for this client side, not the actual
 backend's true constraints), would be ideal.

See, the issue for fastcgi isn't controlling persistence, persistent
connections are fine as long as you're actually making use of the
backend process, the problem is avoiding having more than one
connection to a backend process that simply cannot handle multiple
concurrent connections.

This seems to be a problem unique (so far anyway) to fastcgi.

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-06 Thread Garrett Rooney
On 3/6/06, Sascha Schumann [EMAIL PROTECTED] wrote:
   Also, we tend to run most of our fastcgi's using a domain socket.  I'm
   sure others do that as well.
  
 
  Isn't that very unreliable?

 Why should Unix domain sockets be unreliable?

Yeah, that's my question as well.  Quite a few people seem to use them...

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-06 Thread Garrett Rooney
On 3/6/06, Plüm, Rüdiger, VIS [EMAIL PROTECTED] wrote:


  -Ursprüngliche Nachricht-
  Von: [EMAIL PROTECTED]
 
  We actually have a way to do that, it's the close_on_recycle flag, and
  I had to turn it on in order to get anything approaching reliability
  for fastcgi.  The problem with just using that is that without some
  coordination between worker processes you're still going to end up
  with collisions where more than one connection is made to a given
  fastcgi process, and the majority of those don't know how to handle

 I think the problem is that we only manage connection pools that are local
 to the httpd processes.

Exactly, the pool of available backends needs to be managed globally,
which we don't currently have and it's not clear if that ability would
be useful outside of fastcgi.

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-06 Thread Garrett Rooney
On 3/6/06, Jim Jagielski [EMAIL PROTECTED] wrote:
 Garrett Rooney wrote:
 
  On 3/6/06, Jim Jagielski [EMAIL PROTECTED] wrote:
   I think the whole issue revolves around whether the balancer
   should, or should not, pre-open connections and manage them
   internally, or whether it should be one-shot. The real
   power is being able to load balance, and implement
   that in a central location.
  
   So it seems to me that some sort of Balancer member
   option that determines whether or not the connection
   is persistent or not would alleviate some of
   the issues you raise.
 
  We actually have a way to do that, it's the close_on_recycle flag, and
  I had to turn it on in order to get anything approaching reliability
  for fastcgi.
 

 That's how we do it *internally* and not what I meant. I meant
 more a parameter to be set in httpd.conf...

Sure, we could add a parameter to control that behavior, but it still
won't solve the real underlying problem, you'll still get collisions
under load, they just won't be as common.

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-06 Thread Garrett Rooney
On 3/6/06, Plüm, Rüdiger, VIS [EMAIL PROTECTED] wrote:

  von Garrett Rooney
 
  Exactly, the pool of available backends needs to be managed globally,
  which we don't currently have and it's not clear if that ability would
  be useful outside of fastcgi.

 But as connection pools are per worker and not per cluster
 this problem should also appear in the unbalanced environment.

Oh sure, right now what we've got only kinda sorta works, if you don't
put it under load.  I've only done very limited testing, and it
certainly seems logical that we'd see this problem even without
balancer in the equation.

-garrett


Re: Should fastcgi be a proxy backend?

2006-03-06 Thread Garrett Rooney
On 3/6/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
 Garrett Rooney wrote:
  On 3/6/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
 
 Jim Jagielski wrote:
 
  See, the issue for fastcgi isn't controlling persistence, persistent
  connections are fine as long as you're actually making use of the
  backend process, the problem is avoiding having more than one
  connection to a backend process that simply cannot handle multiple
  concurrent connections.
 
  This seems to be a problem unique (so far anyway) to fastcgi.

 So the issue is that mod_proxy_fastcgi needs to create a pool of single
 process workers, and ensure that each has only one concurrent request,
 right?  That's an issue for the proxy_fastcgi module, to mutex them all.

The problem is that with the way mod_proxy currently works there isn't
any way to do that, at least as far as I can tell.  It seems like it
will require us to move away from having mod_proxy manage the back end
connections, and if we do that then we're back to the "what exactly is
the advantage to using mod_proxy again?" question.

-garrett


Should fastcgi be a proxy backend?

2006-03-05 Thread Garrett Rooney
So, predictably, now that we've gotten mod_proxy_fcgi to the point
where it's actually able to run real applications I'm starting to
question some basic assumptions we made when we started out along this
course.

The general idea was that we want to be able to get content from some
fastcgi processes.  That seems pretty similar to what mod_proxy_http
does with other http servers, and mod_proxy_ajp with java app servers,
and heck, since we're probably going to have lots of back end fastcgi
processes it sure is cool that we've got that mod_proxy_balancer stuff
to handle that part of the equation.

It sure seems like a good idea, doesn't it?  And at first glance it
is, I mean it basically works, I can set up a balancer group with a
bunch of back end fastcgi processes that I started up with the new
fcgistarter program, and it'll pretty much do what we want.

But there are some issues looming on the horizon.

First of all, mod_proxy_balancer really assumes that you can make
multiple connections to back end fastcgi processes at once.  This may
be true for some things that speak fastcgi (python programs that use
flup to do it sure look like they'd work for that sort of thing, but I
haven't really tried it yet), but in general the vast majority of
fastcgi programs are single threaded, non-asynchronous, and are
designed to process exactly one connection at a time.  This is sort of
a problem because mod_proxy_balancer doesn't actually have any
mechanism for coordinating between the various httpd processes about
who is using what backend process.

Second, mod_proxy_balancer doesn't (seem to) have any real mechanism
for adding back end processes on the fly, which is something that
would be really nice to be able to do.  I'd eventually love to be able
to tell mod_proxy_fcgi that it should start up N back end processes at
startup, and create up to M more if needed at any given time. 
Processes should be able to be killed off if they become nonresponsive
(or probably after processing a certain number of requests), and they
should NOT be bound up to a single httpd worker process.

This all means that some kind of mechanism for coordinating access to
and creation of back end processes needs to be created, and as it
moves on it starts to feel less and less like this sort of
functionality is generically useful to other back end fastcgi
processes.  Maybe I'm wrong about that though.

Oh, and in order to do any of the really cool stuff we'll also have to
rework the way mod_proxy handles arguments that are given to ProxyPass
statements, so that they can be passed down to something other than
either mod_proxy or mod_proxy_balancer.  And even after we do that,
we'll still be stuck in this situation where you end up with like a
bazillion options on the end of each fastcgi ProxyPass, when really
we'd want them to be per-balancer or per-directory or something like
that.  It just feels kinda clunky.

Finally, I have to say that I'm starting to wonder what we're actually
getting out of using the proxy framework for this.  I mean all it's
doing is creating some sockets for us, all the other stuff I just
talked about pretty much needs to be implemented itself, and it's
questionable whether any of it would be useful for something other
than the fastcgi code.

So is there some reason I'm missing that justifies staying within the
proxy framework, cause I'm really tempted to just create a handler
module that reuses most of the mod_proxy_fcgi code, since it sure
feels like it'd be easier to write this stuff if I didn't have to
shoehorn it into mod_proxy.

-garrett


Rails and mod_proxy_fcgi, a match made in... well... someplace anyway.

2006-03-04 Thread Garrett Rooney
So I've been trying to make Rails work with mod_proxy_fcgi (since the
whole point of writing mod_proxy_fcgi was to make it easier to use
rails/django type web apps with httpd, and I'm more of a Ruby person
than a Python person), and I think I've made enough progress that it's
worth sharing with the world.

Note that this doesn't completely work yet.  It'll render some pages
in a basic Rails application, but for reasons I don't completely
understand it occasionally just hangs.  YMMV.

My current httpd.conf configuration looks something like this:

# create a balancer group for the app
<Proxy balancer://helloworld>
  BalancerMember fcgi://localhost:4747
</Proxy>

# NOTE: the name dispatch.fcgi is IMPORTANT, Rails uses it internally
#   to determine the relative root url.
ProxyPass /helloworld/dispatch.fcgi balancer://helloworld

# you need a symlink in your htdocs dir for this, named helloworld
# pointing to the public dir in your app
<Location /helloworld>
  RewriteEngine On

  # don't loop
  RewriteCond %{REQUEST_URI} !^/helloworld/dispatch.fcgi

  # let files pass through normally
  RewriteCond %{REQUEST_FILENAME} !-f

  # other stuff gets passed through the proxy
  RewriteRule /helloworld/(.*)$ /helloworld/dispatch.fcgi/$1 [QSA,L]
</Location>

I then start the Rails dispatcher.fcgi with the spawn-fcgi program
from lighttpd, like this:

RAILS_ENV=production ./spawn-fcgi -f `pwd`/public/dispatch.fcgi -p 4747

Eventually we'll want to ship something like this with mod_proxy_fcgi,
but swiping the one from lighttpd will do for now.

Now to get from this to basic functionality you need to hack
action_controller/request.rb's relative_url_root method:

--- /usr/lib/ruby/gems/1.8/gems/actionpack-1.11.2/lib/action_controller/request.rb.orig  2006-03-04 15:40:43.0 -0800
+++ /usr/lib/ruby/gems/1.8/gems/actionpack-1.11.2/lib/action_controller/request.rb  2006-03-04 13:53:18.0 -0800
@@ -169,7 +169,8 @@
 # Returns the path minus the web server relative installation directory.
 # This method returns nil unless the web server is apache.
 def relative_url_root
-  @@relative_url_root ||= server_software == 'apache' ? env["SCRIPT_NAME"].to_s.sub(/\/dispatch\.(fcgi|rb|cgi)/, '') : ''
+  # XXX hack
+  @@relative_url_root ||= server_software == 'apache' ? env["SCRIPT_NAME"].to_s.sub(/\/dispatch\.(fcgi|rb|cgi).*$/, '') : ''
 end

 # Returns the port number of this request as an integer.

The original implementation assumes that you're using a handler type
module that routes through dispatch.fcgi, thus it has a SCRIPT_NAME
that ends in dispatch.fcgi.  For mod_proxy_fcgi that's not the case
though, the SCRIPT_NAME ends up getting set to
/helloworld/dispatch.fcgi/more/stuff, and we need to rip off
everything all the way to the end.  I'm not sure who's wrong in this
case, mod_proxy_fcgi or Rails, but in any event this is a quick
solution to the problem.

Finally, just add a symlink from 'helloworld' in your htdocs directory
to the 'public' directory of your rails application.

At this point you should be far enough along to view the default
index.html you get when you create a new rails application, and
clicking on the "About your application's environment" link will do
its ajax magic and hit the backend fcgi process to pull down info
about the various libraries you're using.

Doing more than this may or may not work.  For example, if you use
rails scaffolding to create a model, view, and controller for some
object you'll be able to view some pages, but some will hang.  Oddly,
clicking on links that direct you to another part of the app often
seem to hang, but cutting and pasting the same URL into the address
bar of the browser will work fine.  I still need to figure out what's
causing this, but I suspect a problem in mod_proxy_fcgi, because when
you break into httpd with a debugger you find it sitting in apr_poll.

Anyway, just wanted to share what I've come up with so far, in case
anyone else wants to give this stuff a shot.  Once I manage to get a
Rails app (or Django or any other significant fcgi based web framework
for that matter, but I'm most concerned about Rails) working reliably
I think mod_proxy_fcgi will be suitable for merging back into trunk.

-garrett


Re: Rails and mod_proxy_fcgi, a match made in... well... someplace anyway.

2006-03-04 Thread Garrett Rooney
On 3/4/06, Garrett Rooney [EMAIL PROTECTED] wrote:

 Doing more than this may or may not work.  For example, if you use
 rails scaffolding to create a model, view, and controller for some
 object you'll be able to view some pages, but some will hang.  Oddly,
 clicking on links that direct you to another part of the app often
 seem to hang, but cutting and pasting the same URL into the address
 bar of the browser will work fine.  I still need to figure out what's
 causing this, but I suspect a problem in mod_proxy_fcgi, because when
 you break into httpd with a debugger you find it sitting in apr_poll.

And it turns out that the underlying problem was simply that we were
holding open connections to the back end fastcgi processes, and it was
very easy to end up in a situation where all your connections were
already being used up.

I'm sure this won't be the final word on the subject, but for now I've
set the default to closing the connection after each request, which
kinda sucks, but at least lets things function reliably.  We'll
probably want to come up with a better solution in the future anyway,
especially when we're actually managing the fcgi processes ourselves
instead of requiring them to be managed externally as we are now.

-garrett


Re: Recompiling modules for Apache 2.2.0

2006-02-14 Thread Garrett Rooney
On 2/14/06, System Support [EMAIL PROTECTED] wrote:
 I am trying to recompile some modules from Apache 2.0 to use under 2.2.0,
 and get the following:

 In file included from /usr/local/apache2/include/ap_config.h:25,
  from /usr/local/apache2/include/httpd.h:43,
  from mod_rexx.h:72,
  from mod_rexx.c:25:
 /usr/local/apache2/include/apr.h:270: error: syntax error before 'apr_off_t'
 /usr/local/apache2/include/apr.h:270: warning: type defaults to 'int' in
 declaration of 'apr_off_t'
 /usr/local/apache2/include/apr.h:270: warning: data definition has no type
 or storage class
 In file included from /usr/local/apache2/include/apr_file_io.h:29,
  from /usr/local/apache2/include/apr_network_io.h:26,
  from /usr/local/apache2/include/httpd.h:53,
  from mod_rexx.h:72,
  from mod_rexx.c:25:
 /usr/local/apache2/include/apr_file_info.h:204: error: syntax error before
 'apr_off_t'
 /usr/local/apache2/include/apr_file_info.h:204: warning: no semicolon at end
 of struct or union
 /usr/local/apache2/include/apr_file_info.h:206: warning: type defaults to
 'int' in declaration of 'csize'

  and so on


 This appears to be caused by a missing typedef for off64_t and others, but  I
 have run out of ideas on how to fix it.  Any suggestions?

However you are compiling these modules you are most likely not
passing the correct CFLAGS to the compiler.  To use APR (as HTTPD
does) you need to pass APR's CFLAGS, which you can get from
apr-1-config --cflags.  That'll allow off64_t to be correctly picked
up from the system headers.

-garrett


Re: APR cross-platform signal semantics

2006-02-04 Thread Garrett Rooney
On 2/4/06, Nick Kew [EMAIL PROTECTED] wrote:
 Historically, different platforms have different signal semantics.

 I need to set up a signal handler.  The primary targets are
 Linux and Solaris, but I'd much prefer cross-platform.  And I'd
 like it to be MPM-agnostic in httpd, though the prime target
 is Worker.

 Is the following correct and complete for APR signals
 to function across platforms?

 static void my_handler(int signum) {
   apr_signal_block(signum) ;
   /* do things */
   apr_signal_unblock(signum) ;
 }

 static void my_child_init(args) {
   apr_signal(MY_SIGNAL, my_handler);
 }

That seems reasonable to me, although I suspect the behavior with
regard to how signals interact with threads will vary from system to
system; apr_setup_signal_thread seems relevant to that part of the
problem...

-garrett


Re: Outch, a few problems for 2.0/2.2

2006-02-03 Thread Garrett Rooney
On 2/2/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:

 Finally vpath + symlink builds were broken, there is a set of patches
 over on http://people.apache.org/~wrowe/ named fixbuild-n.n.patch where
 -n.n is -2.0  -0.9, -2.2  -1.2, and -2.3  1.3 for the corresponding
 httpd and apr-util versions.  The patches;

   * ensure we don't look for srclib/apr*** directories, but simply the
 file contained within.

Seems reasonable to me...

   * avoid looking from apr-util to ../apr, since on symlinked environments
 in solaris this can be erroneous.

+1

   * ensure we don't bomb on vpath builds looking for .h files in both the
 source and vpath target trees (because they don't exist in both).

Consistently looking in one place seems reasonable.

   * properly check if we are vpath'ing for apr-util/xml/expat, creating that
 directory in the vpath target, and introduce the syntax 
 --with-expat=builtin
 to resolve the ambiguity that vpath builds of the builtin expat 
 introduces.

Again, working in vpath builds is a good thing ;-)

   * never configure apr-iconv from apr-util.  Since we won't configure apr
 from apr-util this was inconsistent.

Assuming you mean buildconf, not configure, +1, I was surprised we did
this at all.  AFAICT the patches don't have anything to do with
running apr-iconv's configure.

In general the APR part of these patches looks reasonable to me,
although I haven't actually tested them.  Didn't look at the httpd
side though.

-garrett


Re: svn commit: r374754 - in /httpd/site/trunk: docs/ docs/mod_smtpd/ docs/modules/ xdocs/ xdocs/mod_smtpd/ xdocs/modules/

2006-02-03 Thread Garrett Rooney
On 2/3/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 +<tt>mod_smtpd</tt> started it's life as a 2005 Google Summer of Code project
 +taken on by <strong>Rian Hunter</strong> and <strong>Jem Berkes</strong> with
 +mentors <strong>Nick Kew</strong> and <strong>Paul Querna</strong>. It continues
 +its life being developed and maintained by Rian Hunter with help from the httpd
 +developers.

Ok, now we're seeing this both in the mod_mbox and mod_smtpd web
pages.  This is supposed to be a community project, ASF projects don't
have lead developers, and they don't generally go out of their way to
assign credit like this, that's normally kept to the CHANGES file and
the svn commit logs.  Putting this sort of thing on the web pages
seems quite inappropriate to me.

-garrett


Re: Bouncing messages in mod_smtpd

2006-02-01 Thread Garrett Rooney
On 2/1/06, Rian Hunter [EMAIL PROTECTED] wrote:

 mod_smtpd needs a bouncing mechanism! I need some help with this
 because I am not sure how to approach this. Should I implement an
 entire SMTP client in mod_smtpd to bounce messages? Should I relegate
 this responsibility to a sendmail command or what platform specific
 mailer there is? Are there other options?

Well, we already have a small SMTP implementation in the SMTP queue
module, that could be made somewhat more generic and used for this
sort of thing.

 I want to dedicate the rest of this week to writing some drafts of
 documentation and a website for mod_smtpd,

Yay!

-garrett


Re: mod_proxy_fcgi TODO list

2006-01-22 Thread Garrett Rooney
On 1/14/06, Garrett Rooney [EMAIL PROTECTED] wrote:

 Fix handle_headers so that it works when the \r\n\r\n is split into
 two of the FastCGI records, or when a FastCGI uses \n\n without the
 \r's.

This is done as of r371428.

-garrett


Re: httpd and locales

2006-01-19 Thread Garrett Rooney
On 1/19/06, André Malo [EMAIL PROTECTED] wrote:
 * Branko Čibej wrote:

  You're confusing the content of the SVN repository and hook scripts
  stored on the local filesystem. Paths in the first are always encoded in
  UTF-8. The latter naturally have to obey the server's locale.

 I don't think so. The task was to pass the name of a file stored in the
 repository to a hook script via the command line. Otherwise I must have
 misunderstood something quite heavily.

That is correct, it's an argument to the hook script that happens to
contain the path of a file in the repository.  Currently all arguments
are transcoded from utf8 to native before we execute the hook script.

-garrett


httpd and locales

2006-01-18 Thread Garrett Rooney
Is there any particular reason that httpd never does the
'setlocale(LC_ALL, "");' magic necessary to get libc to respect the
various locale related environment variables?  As far as I can tell,
despite system settings for locale (i.e. /etc/sysconfig/i18n on RHEL)
httpd always runs with a locale of C, which is fine for most things,
but pretty irritating if you have a need to do stuff with multibyte
strings in a module.

Just adding a call to setlocale with a "" locale in httpd's main makes
my particular problem go away, but I'm kind of hesitant to propose
actually doing so since I don't know what kind of fallout there would
be from having httpd all of a sudden start respecting the environment
variables...

-garrett


Re: httpd and locales

2006-01-18 Thread Garrett Rooney
On 1/18/06, Joe Orton [EMAIL PROTECTED] wrote:
 On Wed, Jan 18, 2006 at 11:17:30AM -0800, Garrett Rooney wrote:
  Is there any particular reason that httpd never does the
  'setlocale(LC_ALL, "");' magic necessary to get libc to respect the
  various locale related environment variables?  As far as I can tell,
  despite system settings for locale (i.e. /etc/sysconfig/i18n on RHEL)
  httpd always runs with a locale of C, which is fine for most things,
  but pretty irritating if you have a need to do stuff with multibyte
  strings in a module.
 
  Just adding a call to setlocale with a "" locale in httpd's main makes
  my particular problem go away, but I'm kind of hesitant to propose
  actually doing so since I don't know what kind of fallout there would
  be from having httpd all of a sudden start respecting the environment
  variables...

 Ideally the locale shouldn't matter, but in practice it does: notably
 strcasecmp() and the is* functions behave differently.  This can cause
 things to fail in surprising ways, so it's generally to be avoided.

 Various modules will do it at startup anyway, so it's hard to avoid
 completely, but it's not something that I'd really advise propagating.

The specific problem I'm trying to fix is that mod_dav_svn fails to
run a pre-lock hook script when you try to lock a filename with double
byte characters.  It never even gets to the point of trying to run the
script, it fails trying to build the command line because it can't
convert the filename from utf8 to the native encoding because the
locale is C and thus the native encoding is 7 bit ascii.  I'm having
trouble finding a work around for this that doesn't involve setting
the locale, although if there's anything obvious I'm missing I'd love
to hear it.

-garrett


Re: httpd and locales

2006-01-18 Thread Garrett Rooney
On 1/18/06, André Malo [EMAIL PROTECTED] wrote:
 * Garrett Rooney wrote:

  The specific problem I'm trying to fix is that mod_dav_svn fails to
  run a pre-lock hook script when you try to lock a filename with double
  byte characters.  It never even gets to the point of trying to run the
  script, it fails trying to build the command line because it can't
  convert the filename from utf8 to the native encoding because the
  locale is C and thus the native encoding is 7 bit ascii.  I'm having
  trouble finding a work around for this that doesn't involve setting
  the locale, although if there's anything obvious I'm missing I'd love
  to hear it.

 It doesn't belong here, but... I'm wondering why the path isn't passed as
 UTF-8. Why is it translated to the locale at all? It's all happening within
 the svn file system, so I'd really expect to get utf-8 and would consider
 locale translation as a bug.

Well, I imagine that the assumption is that any hook script is going
to be using the actual locale specified in LANG/LC_ALL/etc env
variables, so if we don't translate to that locale it'll get rather
confused by utf8 data in its command line.  As a general rule svn
translates from native -> utf8 on input and from utf8 -> native for
output.  Ironically, if the LANG/LC_ALL/etc env vars were being
followed by httpd this translation would be a noop, since the system
uses a utf8 locale...

-garrett


Re: svn commit: r369283 - in /httpd/httpd/trunk: ./ build/win32/ modules/aaa/ modules/arch/win32/ modules/cache/ modules/database/ modules/dav/fs/ modules/dav/main/ modules/debug/ modules/echo/ module

2006-01-16 Thread Garrett Rooney
On 1/16/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
  Author: rooneg
  Date: Sun Jan 15 15:47:19 2006
  New Revision: 369283
 
  URL: http://svn.apache.org/viewcvs?rev=369283view=rev
  Log:
  Update svn:ignore properties so that generated files from a win32 VC++
  Express build are ignored.
 
  --- svn:ignore (original)
  +++ svn:ignore Sun Jan 15 15:47:19 2006
  @@ -37,5 +37,6 @@
   *.stt
   *.sto
   *.vcproj
  +*.vcproj.*

 Quick observation, wouldn't a single;

 *.vcproj*

 be a bit more efficient :-?

I suppose, but I don't want to block out foo.vcprojbar, since Visual
Studio doesn't create files like that, it's always either foo.vcproj
or foo.vcproj.someotherstuff.

-garrett


Re: mod_proxy_fcgi TODO list

2006-01-15 Thread Garrett Rooney
On 1/15/06, Graham Leggett [EMAIL PROTECTED] wrote:
 Garrett Rooney wrote:

  Need a way to use a unix domain socket or a pipe instead of a TCP socket.

 Isn't this a filter problem?

 mod_proxy in theory should not care that the source is a socket beyond
 maybe knowing to insert the right filter into the stack.

Well, mod_proxy currently creates the socket itself via
apr_socket_create based on the host and port it's been given via
ProxyPass.  We're going to need a way to tell it to use a unix domain
socket located at a certain path, that's all I mean by that change.  I
have some patches hacked together to make it use the apr_portable.h
stuff to do it, it isn't that hard, it's really just a matter of
figuring out how it should be configured to do so.

-garrett


[PATCH] fix warning in mod_usertrack.c

2006-01-15 Thread Garrett Rooney
While playing around with httpd on win32 today, I noticed a small
warning in mod_usertrack.c.


cls->expires = modifier;


The problem here is that modifier is a time_t, and cls->expires is an
int, and Visual C++ Express 2005 is unthrilled about the possibility
for data loss there.

I threw together a patch that changes expires to a time_t, and then
adjusted the only place in the file that really cares about that (an
apr_psprintf that uses expires, and thus needs to be cast up to an
unsigned 64 bit int and printf'd via APR_UINT64_T_FMT, since there's
no portable printf format string for time_t), but I don't actually
have any way to test it, so I figured I'd throw it at this list and
see if people thought it looked reasonable.  It does fix the warning,
no promises about what it does at runtime though...

-garrett

Fix warning in mod_usertrack.c

* modules/metadata/mod_usertrack.c
  (cookie_log_state): Make expires a time_t instead of an int.
  (make_cookie): Adjust apr_psprintf call to take into account new type
   for expires.


mod_usertrack.diff
Description: Binary data


The Win32 Build, Visual Studio Versions, etc.

2006-01-15 Thread Garrett Rooney
So I played around with getting HTTPD to build on a windows machine
today, using only the freely available Express version of Visual C++
that Microsoft released a little while back.  It works, basically, but
it's not nearly as easy as it should be, for a few reasons.

The major problem is that the conversion from the existing .dsp files
over to VS.Net .sln and .vcproj files doesn't seem to work perfectly. 
Specifically, the libs that various targets are linked against don't
seem to get migrated perfectly, so you have to go add a few .libs to
libapr, libhttpd, a bunch of the command line tools, the win32
specific programs, and probably some more places I can't recall.

Aside from this hurdle, the major problem with getting this stuff to
build was simply that the Makefile.win targets describe in the build
instructions include targets that run _build, which tries to run
visual studio using the vc6 (I think) command line tool, which doesn't
work with newer visual studio versions.  It's possible to work around
this by running the build by hand, then calling _install directly to
build the dist tree, but it's a bit of a pain.

I didn't actually go so far as to try building the APR tests, but I
imagine they'd probably suffer similar problems to the httpd command
line tools.

So I guess what I'm asking is has anyone else spent any time working
with these compilers?  Is there any way we can make the use of them
easier?  I'd love for it to be a reasonably well defined set of steps
for an APR or HTTPD developer to actually get things running on Win32,
and if those instructions start with "go download this free thing from
Microsoft" as opposed to "go shell out a few hundred bucks for Visual
Studio" it'll probably encourage people to actually do some work on
the win32 code, which is a bit underloved lately IMO.

If there isn't an easy way we can make our current system of .dsp
files convert easily into VC.Net Express projects, then I'm curious if
we couldn't start thinking about either maintaining parallel build
files for newer Visual Studio versions, or even better just switching
to it as the default on win32.  I'm sure that I recall various issues
that make it desirable to use earlier versions of visual studio, but
are these issues really insurmountable?

-garrett


mod_proxy_fcgi TODO list

2006-01-14 Thread Garrett Rooney
Just in case other people want to jump in and fix things, here's a
list of stuff I think needs to be done in order to make httpd +
mod_proxy_fcgi a good replacement for the current popular fcgi
solutions (httpd 1.3.x and mod_fcgi and lighttpd + its fcgi module).

Note that this is split up into things that need to be fixed in
mod_proxy_fcgi and things that will probably need changes in mod_proxy
itself.  I'm also of the opinion that not all of this stuff needs to
be fixed in order for us to merge the current fcgi proxy branch back
into trunk.  If we could fix the majority of the mod_proxy_fcgi
specific stuff that would be enough to justify merging it into trunk.

-garrett

* Stuff that needs to be fixed in mod_proxy_fcgi itself

Figure out what to do about the TZ env variable.

Deal with the (very remote) possibility that the environment is large
enough that it won't fit in the available 32 bits of the length field
in the header.

Fix handle_headers so that it works when the \r\n\r\n is split into
two of the FastCGI records, or when a FastCGI uses \n\n without the
\r's.

Investigate the possibility of making our network I/O nonblocking.

Update the various proxy statistics, so that balancer:// stuff actually works.

Look at cleaning up the FCGI_STDERR logging; after testing with
real-world users we may be able to make it look nicer.

Once suitable work in mod_proxy and execd has been done, look at using
unix domain sockets and execd to start FastCGI processes.

Testing needs to be done with large users of FastCGI like Rails,
Django, Catalyst, and other such frameworks and instructions need to
be written as to how to make them use mod_proxy_fcgi.

* Stuff that needs to be fixed in the mod_proxy framework

Need a way to use a unix domain socket or a pipe instead of a TCP socket.

Need to generalize the ProxyPass options stuff, so that back end
modules can have config options without having to hack it into
mod_proxy itself.


Re: charset in SVN repository

2006-01-13 Thread Garrett Rooney
On 1/13/06, Nick Kew [EMAIL PROTECTED] wrote:
 r368730 encodes Rüdiger's name using UTF-8 in CHANGES.

 When I look at it, I see junk in my default tools.  I also see junk
 when viewing it in viewcvs.cgi, which reports it as iso-8859-1.

 Is that just a viewcvs bug and my local setup, or is there
 something deeper in svn and/or our repository?

Likely a viewcvs bug/bug in your local setup, if I had to guess. 
There can't possibly be an svn bug here, since svn really doesn't give
a damn about the encoding of the contents of a file in the repository,
it's all just bytes to it.

-garrett


Re: svn commit: r368929 - in /httpd/httpd/trunk/modules/aaa: config.m4 mod_auth.h mod_authz_core.c mod_authz_default.c

2006-01-13 Thread Garrett Rooney
On 1/13/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 --- httpd/httpd/trunk/modules/aaa/config.m4 (original)
 +++ httpd/httpd/trunk/modules/aaa/config.m4 Fri Jan 13 16:13:22 2006
 @@ -48,6 +48,10 @@
  dnl keep the bad guys out.
  APACHE_MODULE(authz_default, authorization control backstopper, , , yes)

 +dnl - and just in case all of the above punt; a default handler to
 +dnl keep the bad guys out.
 +APACHE_MODULE(access_compat, mod_access compatibility, , , most)
 +

This comment seems wrong...

 --- httpd/httpd/trunk/modules/aaa/mod_authz_core.c (original)
 +++ httpd/httpd/trunk/modules/aaa/mod_authz_core.c Fri Jan 13 16:13:22 2006
 @@ -101,6 +101,8 @@
  authz_provider_list *providers;
  authz_request_state req_state;
  int req_state_level;
 +//int some_authz;
 +//char *path;

And the C++-style comments in this file won't work everywhere...
Actually, I'm curious why this stuff is being commented out anyway,
instead of just deleted.  We do have version control, after all, we
can get them back if we want to ;-)

-garrett


Re: [PATCH] clarify return value of hook functions used in request processing

2006-01-12 Thread Garrett Rooney
On 9/9/05, Daniel L. Rall [EMAIL PROTECTED] wrote:
 This is a code-clarity improvement only.


 * server/request.c
   (ap_process_request_internal): Check the return value of hook
    functions against the constant OK -- defined in httpd.h as
    "Module has handled this stage" -- instead of against the magic
    number 0 to improve clarity.

Thanks, committed in r368505.

-garrett


Re: [PATCH] mod_proxy_fcgi - s/fcgi-tcp:/fcgi:/

2006-01-10 Thread Garrett Rooney
On 1/9/06, Garrett Rooney [EMAIL PROTECTED] wrote:

 With that in mind, does anyone object to the following patch?

Well, since nobody jumped up and down screaming "NO, NO, DON'T DO IT",
I committed this in r367906.

-garrett


[PATCH] mod_proxy_fcgi - s/fcgi-tcp:/fcgi:/

2006-01-09 Thread Garrett Rooney
As we get further into implementing this stuff, it seems more and more
silly to have more than one scheme for fastcgi.  Any non-tcp mechanism
is going to require more info than we can easily get out of the URL
anyway, since we're already using the path portion of the URL for
calculating the path_info, so we might as well just use fcgi:// URLs
and save people some typing.  When the time comes to implement fcgi
over unix domain sockets or pipes we'll be able to infer the socket
type needed from the ProxyPass parameters or some other similar means.

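For illustration, with this patch applied a FastCGI backend would be
wired up with a plain fcgi:// URL; the address below is hypothetical:

```apache
# Forward requests under /app/ to a FastCGI daemon on localhost:8000;
# the remainder of the request URL becomes the backend path_info.
ProxyPass /app/ fcgi://127.0.0.1:8000/
```
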
With that in mind, does anyone object to the following patch?

-garrett

Change the FastCGI URL scheme to fcgi://.

* modules/proxy/mod_proxy_fcgi.c
  (proxy_fcgi_canon): Stop pretending unix domain sockets will need their
   own url scheme.
  (FCGI_SCHEME): New constant to describe the FastCGI proxy backend.
  (proxy_fcgi_handler): Drop the fcgi-local stuff, use FCGI_SCHEME now that
   we aren't worrying about multiple types of FastCGI workers.
Index: modules/proxy/mod_proxy_fcgi.c
===================================================================
--- modules/proxy/mod_proxy_fcgi.c	(revision 367477)
+++ modules/proxy/mod_proxy_fcgi.c	(working copy)
@@ -75,10 +75,10 @@
 static int proxy_fcgi_canon(request_rec *r, char *url)
 {
     char *host, sport[7];
-    const char *err, *scheme, *path;
+    const char *err, *path;
     apr_port_t port = 8000;
 
-    if (strncasecmp(url, "fcgi-", 5) == 0) {
+    if (strncasecmp(url, "fcgi://", 7) == 0) {
         url += 5;
     }
     else {
@@ -88,48 +88,29 @@
     ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
                  "proxy: FCGI: canonicalising URL %s", url);
 
-    if (strncmp(url, "tcp://", 6) == 0) {
-        url += 4;
+    err = ap_proxy_canon_netloc(r->pool, &url, NULL, NULL, &host, &port);
+    if (err) {
+        ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
+                      "error parsing URL %s: %s", url, err);
+        return HTTP_BAD_REQUEST;
+    }
 
-        scheme = "fcgi-tcp://";
-
-        err = ap_proxy_canon_netloc(r->pool, &url, NULL, NULL, &host, &port);
-        if (err) {
-            ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
-                          "error parsing URL %s: %s",
-                          url, err);
-            return HTTP_BAD_REQUEST;
-        }
+    apr_snprintf(sport, sizeof(sport), ":%d", port);
 
-        apr_snprintf(sport, sizeof(sport), ":%d", port);
-
-        if (ap_strchr_c(host, ':')) {
-            /* if literal IPv6 address */
-            host = apr_pstrcat(r->pool, "[", host, "]", NULL);
-        }
+    if (ap_strchr_c(host, ':')) {
+        /* if literal IPv6 address */
+        host = apr_pstrcat(r->pool, "[", host, "]", NULL);
+    }
 
-        path = ap_proxy_canonenc(r->pool, url, strlen(url), enc_path, 0,
-                                 r->proxyreq);
-        if (path == NULL)
-            return HTTP_BAD_REQUEST;
+    path = ap_proxy_canonenc(r->pool, url, strlen(url), enc_path, 0,
+                             r->proxyreq);
+    if (path == NULL)
+        return HTTP_BAD_REQUEST;
 
-        r->filename = apr_pstrcat(r->pool, "proxy:", scheme, host, sport, "/",
-                                  path, NULL);
+    r->filename = apr_pstrcat(r->pool, "proxy:fcgi://", host, sport, "/",
+                              path, NULL);
 
-        r->path_info = apr_pstrcat(r->pool, "/", path, NULL);
-    }
-    else if (strncmp(url, "local://", 8) == 0) {
-        url += 6;
-        scheme = "fcgi-local:";
-        ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server,
-                     "proxy: FCGI: Local FastCGI not supported.");
-        return HTTP_INTERNAL_SERVER_ERROR;
-    }
-    else {
-        ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server,
-                     "proxy: FCGI: mallformed destination: %s", url);
-        return HTTP_INTERNAL_SERVER_ERROR;
-    }
+    r->path_info = apr_pstrcat(r->pool, "/", path, NULL);
 
     return OK;
 }
@@ -745,6 +726,8 @@
     return OK;
 }
 
+#define FCGI_SCHEME "FCGI"
+
 /*
  * This handles fcgi:(type):(dest) URLs
  */
@@ -757,7 +740,7 @@
     char server_portstr[32];
     conn_rec *origin = NULL;
     proxy_conn_rec *backend = NULL;
-    const char *scheme;
+
     proxy_dir_conf *dconf = ap_get_module_config(r->per_dir_config,
                                                  &proxy_module);
 
@@ -770,7 +753,7 @@
                  "proxy: FCGI: url: %s proxyname: %s proxyport: %d",
                  url, proxyname, proxyport);
 
-    if (strncasecmp(url, "fcgi-", 5) == 0) {
+    if (strncasecmp(url, "fcgi://", 7) == 0) {
         url += 5;
     }
    else {
@@ -779,32 +762,17 @@
         return DECLINED;
     }
 
-    if (strncmp(url, "tcp://", 6) == 0) {
-        scheme = FCGI_TCP;
-    }
-    else if (strncmp(url, "local://", 8) == 0) {
-        scheme = FCGI_LOCAL;
-        ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server,
-                     "proxy: FCGI: local FastCGI not supported.");
-        return HTTP_INTERNAL_SERVER_ERROR;
-    }
-    else {
-
