Re: mod_proxy_html

2011-10-10 Thread Aaron Bannert

On Oct 10, 2011, at 3:02 PM, Nick Kew wrote:

 Following a chat with a fellow developer, the subject of my
 donating mod_proxy_html to ASF arose again.  This would
 (presumably) sit best as an HTTPD subproject in the manner
 of mod_ftp, mod_fcgid, etc, so that it doesn't pull in
 libxml2 as a dependency for httpd.
 
 mod_xml2enc, which manages charsets for libxml2-based
 modules including mod_proxy_html, would also be included
 in the subproject.
 
 I would of course relicense mod_proxy_html at the point
 where it is donated.
 
 The obvious advantage all round is that we get the benefit
 of Apache infrastructure for collaborative development,
 including patches such as those my correspondent has worked
 on, and opening the way for others such as the derived
 module mod_proxy_content[1] to collaborate more easily
 rather than fork.
 
 Any interest?

+1

I've been working with mod_proxy_html for the last few months, and I think this 
would be a good addition to HTTPD. mod_proxy_html could also benefit from some 
better collaboration, and I personally have some bug fixes and improvements 
lined up that I could provide. I don't know what the tradeoffs are between
having it be a normal included module or a subproject module; does anyone
have any insight?

-aaron

Re: svn commit: r923712 - in /httpd/httpd/trunk/docs/manual: ./ mod/

2010-03-25 Thread Aaron Bannert
On Fri, Mar 19, 2010 at 12:28:46AM +0000, Justin Erenkrantz wrote:
 On Wed, Mar 17, 2010 at 12:17 PM, Noirin Shirley noi...@apache.org wrote:
  On Wed, Mar 17, 2010 at 12:54 PM, Dan Poirier poir...@pobox.com wrote:
 
  How about Apache Web Server?  Httpd is just the name of one of the
  files, and not even the one people run to start it most of the time.
  Apache HTTP Server is fine, but Apache Web Server is equally correct,
  easier to pronounce, and less geeky :-)
 
  I also like this option.
 
 From the peanut gallery, eww.  =P
 
 +1 to Apache HTTP Server (long name) and httpd (short name).
 
 I don't see a compelling reason to rebrand now - we would only want to
 do so if we wanted to do that as a major 'publicity' push which I
 doubt is on our collective radar screen.
 
 (BTW, the InConSisteNt capitalization always bugged me to no end...)  -- 
 justin

Another +1 from the peanut gallery for Apache HTTP Server and httpd.

-aaron


Re: mod_proxy_http ignores errors from ap_pass_brigade(r->output_filters)

2010-02-11 Thread Aaron Bannert


On Feb 10, 2010, at 11:34 PM, Plüm, Rüdiger, VF-Group wrote:



-Original Message-
From: Aaron Bannert
Sent: Donnerstag, 11. Februar 2010 00:04
To: dev@httpd.apache.org
Subject: mod_proxy_http ignores errors from
ap_pass_brigade(r->output_filters)

mod_proxy_http appears to be ignoring errors returned from ap_pass_brigade
(when passing down the output_filter stack). This is a problem for any
output filter that has a fatal error and wishes to signal that error
to the client (e.g. HTTP_INTERNAL_SERVER_ERROR). I'm not familiar enough
with the current mod_proxy code to know if I did this correctly, but
this patch works for me. I also went ahead and marked the places where
we (kind of hackishly) ignore the return values intentionally.

Could someone review this and let me know if I'm on the right track
(against the 2.2.x branch)?


IMHO this is unneeded and indicates a bug in the filter in question.
Filters return APR error codes and not HTTP status codes. If they wish
to set a specific status code they should pass an error bucket down the
chain or set r->status appropriately.


You are correct that filters should be returning apr status codes and
not http status codes, thanks for pointing that out. I'm still a little
concerned about how this is supposed to work though, since it seems like
the mod_proxy_http handler behaves differently than the default handler.

The default handler will return HTTP_INTERNAL_SERVER_ERROR if the
output filter stack returns an error (unless r->status or c->aborted
are set), while the proxy handler will return OK when this happens.
The problem here is that when this happens, if there were no buckets
passed down the output filter stack, then Apache just hangs up on the
client and produces nothing, when it should probably be producing a
500 error. Is this supposed to happen?
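
(For reference, the tail end of the default handler looks roughly like
the sketch below. This is paraphrased from memory rather than pasted
from the tree, so check server/core.c on the 2.2.x branch for the
exact logic.)

    status = ap_pass_brigade(r->output_filters, bb);
    if (status == APR_SUCCESS
        || r->status != HTTP_OK
        || c->aborted) {
        return OK;
    }
    else {
        /* no way to know what type of error occurred */
        return HTTP_INTERNAL_SERVER_ERROR;
    }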

Also, I don't see many modules that are passing error buckets when they
have errors in their filters. Most appear to return errors which will
probably be ignored if run from underneath mod_proxy. There are only
a few that create error buckets and pass them, though (I only found
mod_ext_filter, byterange_filter, proxy_util, and the core http_filter).
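
(For the archives, here is a minimal, untested sketch of the
error-bucket approach Ruediger describes, with the usual httpd filter
headers assumed. The filter context type and its fatal_error flag are
made-up placeholders; ap_bucket_error_create() is the standard httpd
call for this.)

    typedef struct { int fatal_error; } my_ctx;   /* hypothetical */

    static apr_status_t my_output_filter(ap_filter_t *f,
                                         apr_bucket_brigade *bb)
    {
        my_ctx *ctx = f->ctx;       /* hypothetical filter context */

        if (ctx->fatal_error) {     /* hypothetical error condition */
            /* Drop pending output, send a 500 down the chain as an
             * error bucket, and terminate the response with EOS. */
            apr_bucket *e = ap_bucket_error_create(
                HTTP_INTERNAL_SERVER_ERROR, NULL, f->r->pool,
                f->c->bucket_alloc);
            apr_brigade_cleanup(bb);
            APR_BRIGADE_INSERT_TAIL(bb, e);
            APR_BRIGADE_INSERT_TAIL(
                bb, apr_bucket_eos_create(f->c->bucket_alloc));
            ap_remove_output_filter(f);
        }
        return ap_pass_brigade(f->next, bb);
    }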

Thanks,
-aaron



mod_proxy_http ignores errors from ap_pass_brigade(r->output_filters)

2010-02-10 Thread Aaron Bannert
mod_proxy_http appears to be ignoring errors returned from ap_pass_brigade
(when passing down the output_filter stack). This is a problem for any
output filter that has a fatal error and wishes to signal that error
to the client (e.g. HTTP_INTERNAL_SERVER_ERROR). I'm not familiar enough
with the current mod_proxy code to know if I did this correctly, but
this patch works for me. I also went ahead and marked the places where
we (kind of hackishly) ignore the return values intentionally.

Could someone review this and let me know if I'm on the right track
(against the 2.2.x branch)?

Thanks,
-aaron


Index: modules/proxy/mod_proxy_http.c
===================================================================
--- modules/proxy/mod_proxy_http.c  (revision 908672)
+++ modules/proxy/mod_proxy_http.c  (working copy)
@@ -1444,7 +1444,7 @@
         else {
             APR_BUCKET_INSERT_BEFORE(eos, e);
         }
-        ap_pass_brigade(r->output_filters, bb);
+        (void)ap_pass_brigade(r->output_filters, bb);
         /* Need to return OK to avoid sending an error message */
         return OK;
     }
@@ -1767,7 +1767,7 @@
                 ap_log_cerror(APLOG_MARK, APLOG_ERR, rv, c,
                               "proxy: error reading response");
                 ap_proxy_backend_broke(r, bb);
-                ap_pass_brigade(r->output_filters, bb);
+                (void)ap_pass_brigade(r->output_filters, bb);
                 backend_broke = 1;
                 backend->close = 1;
                 break;
@@ -1800,36 +1800,40 @@
             }
 
                 /* try send what we read */
-                if (ap_pass_brigade(r->output_filters, pass_bb) != APR_SUCCESS
-                    || c->aborted) {
-                    /* Ack! Phbtt! Die! User aborted! */
-                    backend->close = 1;  /* this causes socket close below */
-                    finish = TRUE;
-                }
+                rv = ap_pass_brigade(r->output_filters, pass_bb);
 
                 /* make sure we always clean up after ourselves */
                 apr_brigade_cleanup(bb);
                 apr_brigade_cleanup(pass_bb);
 
+                /* check if output filter failed or user aborted */
+                if (rv != APR_SUCCESS || c->aborted) {
+                    backend->close = 1; /* this causes socket close below */
+                    if (!rv) return DONE; /* return DONE for aborts */
+                    return rv;
+                }
+
             } while (!finish);
         }
         ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
                      "proxy: end body send");
     }
     else if (!interim_response) {
+        apr_status_t rv;
         ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
                      "proxy: header only");
 
         /* Pass EOS bucket down the filter chain. */
         e = apr_bucket_eos_create(c->bucket_alloc);
         APR_BRIGADE_INSERT_TAIL(bb, e);
-        if (ap_pass_brigade(r->output_filters, bb) != APR_SUCCESS
-            || c->aborted) {
-            /* Ack! Phbtt! Die! User aborted! */
+        rv = ap_pass_brigade(r->output_filters, bb);
+        apr_brigade_cleanup(bb);
+        /* check if output filter failed or user aborted */
+        if (rv != APR_SUCCESS || c->aborted) {
             backend->close = 1;  /* this causes socket close below */
+            if (!rv) return DONE; /* return DONE for aborts */
+            return rv;
         }
-
-        apr_brigade_cleanup(bb);
     }
 } while (interim_response && (interim_response < AP_MAX_INTERIM_RESPONSES));
 


Re: 3.0 - Proposed Requirements

2007-03-14 Thread Aaron Bannert
On Mon, Mar 12, 2007 at 02:53:14PM -0700, Shankar Unni wrote:
 Paul Querna wrote:
 
 - High Performance Event System Calls (KQueue, Event Ports, EPoll, I/O
 Completion Ports).
 
 This is a tricky area. You definitely don't want to tie yourself to a 
 small subset of OSes.  The real magic trick, however, would be to come 
 up with an abstraction that can take advantage of these if available, 
 but still be able to fall back to conventional I/O if none of these are 
 available..

This is what libevent does. It has a clean and well-designed API, and
was released under a 3-clause BSD license, which I hope is compatible
with our Apache License (v2).
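
(A rough sketch of the libevent 1.x API style, from memory: the library
picks epoll/kqueue/event ports at runtime and falls back to poll() or
select() when nothing better is available.)

    #include <event.h>

    static void on_readable(int fd, short what, void *arg)
    {
        /* read from fd and dispatch the work here */
    }

    int main(void)
    {
        struct event ev;
        int fd = 0;  /* stand-in for a listening or connected socket */

        event_init();                    /* selects the best backend */
        event_set(&ev, fd, EV_READ | EV_PERSIST, on_readable, NULL);
        event_add(&ev, NULL);            /* NULL timeout: wait forever */
        return event_dispatch();         /* run the event loop */
    }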

-aaron


Re: Small patch to ab apr_socket_recv error handling

2007-02-27 Thread Aaron Bannert
Apache shouldn't be prematurely disconnecting sockets in the middle
of a response unless there is a serious problem (e.g. the Apache
child process is crashing). Could you describe how to reproduce this?

As for the patch, could you make this configurable with a command-line
option? That way the current functionality can stay default (meaning,
all recv() errors are fatal) and for those circumstances where the
user knows that there is some network-level or Apache-level problem
causing intermittent recv() errors, they can still get performance
results out of AB.

-aaron


On Mon, Feb 26, 2007 at 01:06:14PM -0700, Filip Hanik - Dev Lists wrote:
 I've created a small patch that lets ab continue even if it encounters
 an error on apr_socket_recv.
 
 Quite commonly, when servers are overloaded they disconnect the socket,
 ab receives a 104 (connection reset by peer) and the ab test exits.
 
 This patch logs the error, bumps both counters correctly, cleans up the
 connection and continues.
 
 thoughts?
 
 Filip

 Index: ab.c
 ===================================================================
 --- ab.c  (revision 511976)
 +++ ab.c  (working copy)
 @@ -1332,7 +1332,10 @@
              err_except++; /* XXX: is this the right error counter? */
              /* XXX: Should errors here be fatal, or should we allow a
               * certain number of them before completely failing? -aaron */
 -            apr_err("apr_socket_recv", status);
 +            //apr_err("apr_socket_recv", status);
 +            bad++;
 +            close_connection(c);
 +            return;
          }
      }
  



Re: 3.0 - Introduction

2007-02-14 Thread Aaron Bannert
On Wed, Feb 14, 2007 at 02:10:19PM -0800, Roy T. Fielding wrote:
 
 But do we really want to start by calling it 3.0?  How about if we
 work off of a few code names first?  Say, for example, amsterdam.
 The reason is because there will be some overlap between ideas of
 how to do certain things, with a variety of overlapping breakage
 that can get pretty annoying if you just want to get one part working
 first.
 
 I want people to be able to break things in one tree without blocking
 others.  And then, say once a month, we all agree on what parts are
 finished enough to merge into all sandbox trees.

I prefer this rather than going straight to 3.0. Would each sandbox
correspond to a single new feature prototype?


 The reason I was about to start the sandbox thing is because I've
 been thinking about moving away from the MPM design.  To be precise,
 I mean that we should get closer to the kernels on the more modern
 platforms and find a way to stay in kernel-land until a valid
 request is received (with load restrictions tested and ipfw applied
 automatically), transform the request into a waka message, and then
 dispatch that request to a process running under a userid that matches
 a prefix on the URI. That's pretty far out, though, and I wouldn't
 want it to stand in the way of any shorter term goals.

This may be too early to jump into design details, but the first thing
that I like about this abstraction is a direct mapping between URI-space
and handlers. The second thing that's nice is multi-user support for
any vhost or any portion of a URL path. I don't know how we would pass
a request message containing a large body though. Also, how would this
model gracefully fall back on older syscalls for legacy systems? Would
we simply use a different kernel adapter (kind of like what we have now
with the WinNT and BeOS MPMs)? Really we need to decouple low-level I/O
(disk and network and pipes) from concurrency models (multi-process,
multi-threaded, event-driven async) and also from our protocol handlers.

-aaron


Re: 3.0 - Proposed Goals

2007-02-14 Thread Aaron Bannert
On Wed, Feb 14, 2007 at 07:08:32PM +, Colm MacCarthaigh wrote:
 On Wed, Feb 14, 2007 at 01:57:27PM -0500, Brian Akins wrote:
  Would be nice if we could do HTTP over unix domain sockets, for example.  
  No need for full TCP stack just to pass things back and forth between 
  Apache and back-end processes.
 
 Or over standard input, so that we can have an admin debug mode. Type
 HTTP on standard in, see corresponding log messages on standard out.
 Exim has this feature and it is very useful.

For this you just need telnet or netcat, and then to tail the error log
in another window. I do this all the time to debug requests/responses.

-aaron


Re: Large Resource Consumption by One User Access to mp3 File

2007-02-08 Thread Aaron Bannert
Hi Greg,

According to your logs, each of the responses is a 206 (Partial Content)
code, which means the client requested only a range of the resource,
not the entire resource (see
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.7
for details).
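
(An illustrative exchange, with made-up byte offsets: each such request
asks for one slice of the file, and each slice shows up as its own 206
entry in the access log.)

    GET /mp3/4204.mp3 HTTP/1.1
    Host: raystedman.org
    Range: bytes=1048576-2097151

    HTTP/1.1 206 Partial Content
    Content-Range: bytes 1048576-2097151/7340032
    Content-Length: 1048576
    Content-Type: audio/mpeg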

It is unclear from your message how the Apache Service is stopping, or
whether it recovers or crashes. If it is crashing, then a bug report would
be greatly appreciated. If it is simply becoming inaccessible for a
brief period, that sounds more likely to be underallocation of resources,
and the user support list will be the best place to address that.

-aaron



On Thu, Feb 08, 2007 at 08:17:09AM -0800, Greg Sims wrote:
 I originally posted this to the User Support forum and received no replies
 in over two weeks.  I appreciate your help with this in advance!
  
 The Apache Service is stopping as a result of a single user accessing one or
 two mp3 files.  I gathered some log data to detail the problem:
  
 http://raystedman.org/4204Ref.txt contains a log sequence showing one
 user accessing one mp3 file.
  
 http://raystedman.org/mp3/4204.mp3 is the mp3 being accessed, which is
 almost 7 MB large.
  
 Normally I will only see one record in the access_log for a transfer like
 this one.  This transfer contains 30+ records over 12 minutes from the same
 user to the same file.  It may be that this user is on a system with
 limited buffer resources and/or a modem connection to the internet.
  
 http://raystedman.org/22Stat.txt is a server-status snapshot showing the
 aftermath of all this.  Notice the number of workers that were allocated
 to this one user.
  
 This consumption of server resource by one user is unfair to everyone else
 trying to use http at the same time.  Is it possible to control resource
 allocation so that it is fair to all users?
  
 Thanks! Greg
  
 PS. The user is working on transferring a group of .mp3 files at the same
 time. This can be seen at http://raystedman.org/mp3Ref.txt.  The server
 doesn't have much else going on which can be seen here
 http://raystedman.org/22Ref.txt.
 
  
 


Re: Interprocess read/write locks

2007-02-08 Thread Aaron Bannert
If you don't care about portability, pthread_rwlocks stored in shmem
should work fine. The reason we didn't implement a cross-process
rwlock in APR is because we couldn't guarantee a proper implementation
(at the time) on all supported platforms.
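
(An untested sketch of what I mean, assuming Linux as in your PS: the
rwlock lives in an anonymous shared mapping and is marked
PTHREAD_PROCESS_SHARED so that forked children can use it.)

    #include <pthread.h>
    #include <sys/mman.h>

    static pthread_rwlock_t *make_shared_rwlock(void)
    {
        pthread_rwlockattr_t attr;
        pthread_rwlock_t *lock = mmap(NULL, sizeof(*lock),
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (lock == MAP_FAILED)
            return NULL;
        pthread_rwlockattr_init(&attr);
        pthread_rwlockattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_rwlock_init(lock, &attr);
        pthread_rwlockattr_destroy(&attr);
        /* readers take pthread_rwlock_rdlock(), writers wrlock() */
        return lock;
    }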

-aaron


On Mon, Feb 05, 2007 at 03:44:16PM +, Christiaan Lamprecht wrote:
 Last Nov 22 I asked a question about writable shared memory,
 everything's working now so thanks!
 
 Locking the shared memory is turning out to be quite costly. I have
 many shared-memory readers and only a few writers so using a rwlock
 seems appropriate but APR provides only apr_thread_rwlock routines.
 
 Is there a way to use rwlocks between processes rather than threads?
 Maybe using the standard POSIX pthread_rwlock routines (storing the
 lock in the shm space...)?
 
 Many thanks again
 Christiaan
 
 PS:
 Apache 2 on Linux
 apr_shm (for shared memory)
 prefork MPM (it's non threaded!)
 


Re: How the PHP works?

2006-11-21 Thread Aaron Bannert
On Mon, Nov 20, 2006 at 09:51:44PM +, Andy Armstrong wrote:
 On 13 Nov 2006, at 17:09, Andy Armstrong wrote:
 Toni - this isn't really the right list for your question - this  
 list is concerned with the development of Apache itself.
 
 The majority of the PHP code is not Apache specific and then  
 there's an Apache loadable module that interfaces with the PHP engine.
 
 Better a week late than never. I wonder where that's been.

In the moderator queue, sorry for the delay.

-aaron


Re: [PATCH] ProxyVia Full honor ServerTokens

2005-11-02 Thread Aaron Bannert

Makes sense, +1 in concept.
-a


On Oct 28, 2005, at 6:40 AM, Brian Akins wrote:


Can we get a vote on this?


--- mod_proxy_http.c.orig   2005-09-26 11:43:45.893872108 -0400
+++ mod_proxy_http.c        2005-09-26 12:06:48.390005516 -0400
@@ -641,7 +641,7 @@
                           HTTP_VERSION_MAJOR(r->proto_num),
                           HTTP_VERSION_MINOR(r->proto_num),
                           server_name, server_portstr,
-                          AP_SERVER_BASEVERSION)
+                          ap_get_server_version())
            : apr_psprintf(p, "%d.%d %s%s",
                           HTTP_VERSION_MAJOR(r->proto_num),
                           HTTP_VERSION_MINOR(r->proto_num),

@@ -1296,7 +1296,7 @@
                    HTTP_VERSION_MINOR(r->proto_num),
                    server_name,
                    server_portstr,
-                   AP_SERVER_BASEVERSION)
+                   ap_get_server_version())
     : apr_psprintf(p, "%d.%d %s%s",
                    HTTP_VERSION_MAJOR(r->proto_num),
                    HTTP_VERSION_MINOR(r->proto_num),




--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies






Re: RTC killed the open source project

2005-08-09 Thread Aaron Bannert

I've been trying to speed up the release cycles for years, but
it's only gotten slower with all the red tape.

The slow release cycles are just another symptom of a broken process,
they are not the cause.

-aaron


On Aug 9, 2005, at 7:00 AM, Paul Querna wrote:


Aaron Bannert wrote:


*** Look at the writing on the wall: RTC killed this project.

This year there have only been 3 tarballs released:
 - 2.1.3 (alpha)
 - 2.0.53 and 2.0.54
 - no releases of 1.3



Missing:
2.1.4 (alpha)
2.1.6 (alpha)

I don't believe changing RTC/CTR policy is the solution to the
(relatively) slow development.  I think pushing for faster stable/dev
version cycles is.

I would rather release 2.2 'soon', and have to do 2.4 in 6-12 months
than change the CTR/RTC policy.  I think these faster cycles address
many of your concerns.

-Paul






Re: RTC killed the open source project

2005-08-09 Thread Aaron Bannert

I definitely agree that RTC requires a lengthy time commitment
that many of us simply can't give, while CTR allows for a much more
fluid development process. This is the difference between a full-time
day job and a hobby.

How many of you only spend less than an hour a day doing httpd dev work?

-aaron


On Aug 9, 2005, at 6:11 AM, Jim Jagielski wrote:



On Aug 9, 2005, at 1:55 AM, Aaron Bannert wrote:


I can't believe you guys are still debating the merits of RTC over CTR
after all this time. RTC killed the momentum in this project a long
time ago.


The RTC experiment was tried and has failed. Can we please
go back to the way things were, back when this project was fun?




I think that RTC has a place, but too often RTC is used as a club
to slow down development. Small changes that could easily
be made once the code has been committed instead result in
cycles of "Wouldn't it be best to do this?" and another
round of patch-vote commences.

CTR is better suited towards more volunteer-oriented
processes, where someone may have a lot of time today
and tomorrow, but little next week; in RTC, stuff will
stagnate during their offline times; with CTR they can
add their code and next week, when unavailable,
people can add upon it, improve it, etc... RTC requires
a longer time commitment and availability to see
things through to the end, a tough thing given the
bursty nature of volunteers.






Re: RTC killed the open source project

2005-08-09 Thread Aaron Bannert

No, 2.0 was a moving target because there was lots of active
development and things were getting rapidly fixed and rolled into
tarballs for our beta testers to pound on. There were easily 3 times
as many developers working on 2.0 than there are now.

Moving Target != Stagnation

Bill's changes were numerous but each was individually reviewable.
The RTC alternative is to post the patches, find 2 other buddies to
hold hands with while crossing the street, look both ways, hold your
breath, then commit the big megapatch. Version control is your friend,
use it.

-aaron


On Aug 9, 2005, at 7:46 AM, Mads Toftum wrote:


On Tue, Aug 09, 2005 at 09:11:56AM -0400, Jim Jagielski wrote:


I think that RTC has a place, but too often RTC is used as a club
to slow down development. Small changes that could easily
be made once the code has been committed instead result in
cycles of Wouldn't it be best to do this? and another
round of patch-vote commences.


All fine in theory, but wrowe who is the one suggesting CTR on those two
modules certainly wasn't suggesting small changes - there was what - 20
commits in the proxy/cache code the other day?
The real trick is how you define the difference between small,
relatively safe changes and complete rewrites of large chunks of code.
2.0 took very long to settle because it was very much a moving target, I
would hate for the same to keep happening with a 2.2 release.

vh

Mads Toftum
--
`Darn it, who spiked my coffee with water?!' - lwall






Re: RTC killed the open source project

2005-08-09 Thread Aaron Bannert


On Aug 9, 2005, at 7:40 AM, Nick Kew wrote:


Jim Jagielski wrote:



I think that RTC has a place, but too often RTC is used as a club
to slow down development. Small changes that could easily
be made once the code has been committed instead result in
cycles of "Wouldn't it be best to do this?" and another
round of patch-vote commences.



That kind of discussion is a Good Thing.  Well, except when it
gets bogged down and leads to stalemate.  For example, my most
recent contribution (ap_hook_monitor) benefitted from Jeff's
improvements to my original suggestion.


That kind of discussion happened MORE with CTR. I used to
read every commit message to see if it touched code that I cared
about, but now every commit message has the same subject (svn
commit: <useless number> [end of title]) and I have to be
online to update my repository to read STATUS to find out what
needs reviewing. It's all just pointless bureaucracy.


The trouble with RTC is when things simply languish and
nothing happens.

Or when someone fixes a simple bug in trunk but can't be arsed
to backport because RTC is more hassle.  I plead guilty to far
too much of that myself.


It seems to me that the entire project is languishing.


 in RTC, stuff will
stagnate during their offline times; with CTR they can
add their code and next week, when unavailable,
people can add upon it, improve it, etc... RTC requires
a longer time commitment and availability to see
things through to the end, a tough thing given the
bursty nature of volunteers.



Yep.  But mostly it's finding the time to review other people's
contributions, and making sufficient nuisance of oneself to
get one's own things reviewed.

I think there could be some mileage in a hybrid policy, with
lazy consensus on simple bugfixes and RTC on any new functionality
or substantial changes.  There's a problem of definition in there,
but if lazy consensus requires (say) a month in STATUS it gives ample
time for someone to shout "that's too big - needs proper RTC".


Amusing that you should bring this up. This hybrid approach is
EXACTLY how it was before.

1) Small patches are simply committed to HEAD/trunk
2) Large changes are posted to the mailing list and given a few days
for review and discussion.
3) Everyone trusts everyone else to be reasonable about how they
define large and small, and we err on the side of large.
4) Vetoes (only with a valid technical reason) cause code to be reverted
with a followup discussion about how to solve the veto.

- no more having discussion in STATUS files
- no more red tape for simple backports
- no restrictions on what people are allowed to work on (if they want to
   add features to an older version of apache, don't stop them)

-aaron



Re: RTC killed the open source project

2005-08-09 Thread Aaron Bannert

What other branches are CTR? 1.3? 2.0?

Dependency on any outside library, APR included, is going to
cause dependency on that library's release cycle. Stagnation
happened after 2.0 was released, however.

On another note, if APR is the problem, then why are we even
talking about branching 2.2 before APR is fixed?

-aaron



On Aug 8, 2005, at 11:31 PM, Roy T. Fielding wrote:


On Aug 8, 2005, at 10:55 PM, Aaron Bannert wrote:


I can't believe you guys are still debating the merits of RTC over CTR
after all this time. RTC killed the momentum in this project a long
time ago.



That doesn't make sense, Aaron.  The other branches are CTR and
I don't see anyone making releases of those either.  We were
operating in RTC mode for the first year of Apache and that was
our most productive time in terms of number of releases.

Personally, I think what is killing momentum is the number of
single-developer platforms we claim to support that end up dragging
the release cycle beyond the capacity of any single RM.  That's why
I haven't driven a release since 1.3.  APR just makes it worse.
Please, if you want to help, don't bicker about RTC -- just
create a working APR release so that we can do an httpd release.

Roy






Re: RTC killed the open source project

2005-08-09 Thread Aaron Bannert

I'm not talking about 2.2. I'm talking about a severe slowdown in the
pace of development on this entire project.

And it sounds like you've been feeling the effects of all the red tape
too, with being fed up trying to follow through with a release.

There are two separate issues:

* The difficulty in *producing* releases is due to RMs being too afraid
  to make releases that aren't perfect. This phobia is a disease, because
  it prevents the release-early-release-often motto from happening.

* The difficulty in *making progress toward a releasable repository* is
  being caused by the STATUS file bottleneck. The STATUS file bottleneck
  is caused because we have Review-Then-Commit rules set up, which
  means everyone has to serialize EVERY CHANGE on that file. (You're
  talking to the thread programming guy here, remember? That's a point
  of contention. :)

  (It's interesting to note that SVN is a parallel development version
  control engine, but RTC forces us to serialize all our changes. Maybe
  we should start using SCCS.)

RTC came with good intentions (higher quality code), but it's clear now
that it's not working and it needs to change.

-aaron


On Aug 9, 2005, at 9:47 AM, Justin Erenkrantz wrote:


--On August 9, 2005 8:17:37 AM -0700 Aaron Bannert  
[EMAIL PROTECTED] wrote:





I've been trying to speed up the release cycles for years, but
it's only gotten slower with all the red tape.




Where have you been when we've done releases in the last few years?




The slow release cycles are just another symptom of a broken process,
they are not the cause.




If you want to place a cause on why 2.2 hasn't been released, I feel
that there is only one real reason.  I gave up on trying to do 2.1/2.2
releases because I got fed up with having every release blocked for
illegitimate reasons.  Paul ran into the exact same roadblocks; but
he's probably too nice to admit it.

The 'slow release cycle' has *nothing* to do with CTR or RTC.  -- justin









RTC killed the open source project

2005-08-08 Thread Aaron Bannert

I can't believe you guys are still debating the merits of RTC over CTR
after all this time. RTC killed the momentum in this project a long time
ago.

* Quality and stability are emergent properties, not processes:

Making releases is a natural step in the bug-fixing cycle. However,
the STATUS file adds a whole bunch of unnecessary red tape
to the change review process which makes it very difficult to
release early and release often. This slows the wheels of
development and the rate of bug finding.

* RTC reduces the number of eyes looking for bugs:

The STATUS file does not impart perfection. It keeps track of each
patch's reviewers, but doesn't guarantee thorough reviews. Due to
all the hoops that a reviewer must jump through in order to review
a patch these days, it is inevitable that fewer people make the effort.

* RTC means we don't trust our committers.

Why do we have committers if we don't trust them to commit code?
We trust them enough to vote, but only the buddy system for commits?
I already trust everyone here to review changes to code they care
about. Forgotten code still suffers bitrot regardless of RTC. The only
difference between now and what we had before is that there are
fewer eyeballs looking at patches and more wasted energy.

* RTC isn't necessary with a proper version control system.

Bugs happen, and with version control they can be easily fixed,
shared with others, and tracked. Use it!

*** Look at the writing on the wall: RTC killed this project.

This year there have only been 3 tarballs released:
 - 2.1.3 (alpha)
 - 2.0.53 and 2.0.54
 - no releases of 1.3


The RTC experiment was tried and has failed. Can we please
go back to the way things were, back when this project was fun?

-aaron



Re: svn commit: r164752 - /httpd/httpd/trunk/STATUS

2005-05-24 Thread Aaron Bannert
I don't see this backport in 1.3, but I did provide a patch at one point.

I'll update my patch and repost along with the magic number bumps
that were talked about a month or so ago.

-aaron, catching up on really old messages


On Apr 25, 2005, at 11:45 PM, [EMAIL PROTECTED] wrote:


Author: pquerna
Date: Mon Apr 25 23:45:44 2005
New Revision: 164752

URL: http://svn.apache.org/viewcvs?rev=164752view=rev
Log:
AllowEncodedSlashes fix was backported in 2.0.52.  Remove this STATUS
file entry.


Modified:
httpd/httpd/trunk/STATUS

Modified: httpd/httpd/trunk/STATUS
URL: http://svn.apache.org/viewcvs/httpd/httpd/trunk/STATUS?rev=164752&r1=164751&r2=164752&view=diff
==============================================================================
--- httpd/httpd/trunk/STATUS (original)
+++ httpd/httpd/trunk/STATUS Mon Apr 25 23:45:44 2005
@@ -358,9 +358,6 @@
   Message-ID: Pine.LNX. 
[EMAIL PROTECTED]

   .cs.virginia.edu

-* When sufficiently tested, the AllowEncodedSlashes/%2f patch
-  needs to be backported to 2.0 and 1.3.
-
 * APXS either needs to be fixed completely for use when apr is out of tree,
   or it should drop query mode altogether, and we just grow an
   httpd-config or similar arrangement.







Re: Backport of AllowEncodedSlashes to Apache 1.3

2005-03-29 Thread Aaron Bannert
If there's no objection, shall I just go ahead and commit this?
-aaron
On Mar 24, 2005, at 4:38 PM, Aaron Bannert wrote:
I've attached a patch against the trunk of Apache 1.3 that backports
support for the AllowEncodedSlashes directive. It should behave
identically to the way it works in 2.0. By default Apache will disallow
any request that includes a %-encoded slash ('/') character (which
is '%2F'), but by enabling this directive an administrator can override
this prevention and allow %2Fs in request URLs. If this is an 
acceptable
backport, and I can get some +1s for it, I'll be happy to commit it and
update the documentation (at least the English :).

-aaron
Index: src/include/httpd.h
===================================================================
--- src/include/httpd.h (revision 158971)
+++ src/include/httpd.h (working copy)
@@ -976,6 +976,7 @@
 API_EXPORT(int) ap_is_url(const char *u);
 API_EXPORT(int) ap_unescape_url(char *url);
+API_EXPORT(int) ap_unescape_url_keep2f(char *url);
 API_EXPORT(void) ap_no2slash(char *name);
 API_EXPORT(void) ap_getparents(char *name);
 API_EXPORT(char *) ap_escape_path_segment(pool *p, const char *s);
Index: src/include/http_core.h
===================================================================
--- src/include/http_core.h (revision 158971)
+++ src/include/http_core.h (working copy)
@@ -318,6 +318,8 @@
     /* Digest auth. */
     char *ap_auth_nonce;
+    unsigned int allow_encoded_slashes : 1; /* URLs may contain %2f w/o being
+                                             * pitched indiscriminately */
 } core_dir_config;
 
 /* Per-server core configuration */
Index: src/main/util.c
===================================================================
--- src/main/util.c (revision 158971)
+++ src/main/util.c (working copy)
@@ -1635,6 +1635,53 @@
     return OK;
 }
+API_EXPORT(int) ap_unescape_url_keep2f(char *url)
+{
+    register int badesc, badpath;
+    char *x, *y;
+
+    badesc = 0;
+    badpath = 0;
+    /* Initial scan for first '%'. Don't bother writing values before
+     * seeing a '%' */
+    y = strchr(url, '%');
+    if (y == NULL) {
+        return OK;
+    }
+    for (x = y; *y; ++x, ++y) {
+        if (*y != '%') {
+            *x = *y;
+        }
+        else {
+            if (!ap_isxdigit(*(y + 1)) || !ap_isxdigit(*(y + 2))) {
+                badesc = 1;
+                *x = '%';
+            }
+            else {
+                char decoded;
+                decoded = x2c(y + 1);
+                if (decoded == '\0') {
+                    badpath = 1;
+                }
+                else {
+                    *x = decoded;
+                    y += 2;
+                }
+            }
+        }
+    }
+    *x = '\0';
+    if (badesc) {
+        return BAD_REQUEST;
+    }
+    else if (badpath) {
+        return NOT_FOUND;
+    }
+    else {
+        return OK;
+    }
+}
+
 API_EXPORT(char *) ap_construct_server(pool *p, const char *hostname,
                                        unsigned port, const request_rec *r)
 {
Index: src/main/http_request.c
===================================================================
--- src/main/http_request.c (revision 158971)
+++ src/main/http_request.c (working copy)
@@ -1175,8 +1175,21 @@
     /* Ignore embedded %2F's in path for proxy requests */
     if (r->proxyreq == NOT_PROXY && r->parsed_uri.path) {
-        access_status = ap_unescape_url(r->parsed_uri.path);
+        core_dir_config *d;
+        d = ap_get_module_config(r->per_dir_config, &core_module);
+        if (d->allow_encoded_slashes) {
+            access_status = ap_unescape_url_keep2f(r->parsed_uri.path);
+        }
+        else {
+            access_status = ap_unescape_url(r->parsed_uri.path);
+        }
         if (access_status) {
+            if (! d->allow_encoded_slashes) {
+                ap_log_rerror(APLOG_MARK, APLOG_NOERRNO|APLOG_INFO, r,
+                              "found %%2f (encoded '/') in URI "
+                              "(decoded='%s'), returning 404",
+                              r->parsed_uri.path);
+            }
             ap_die(access_status, r);
             return;
         }
Index: src/main/http_core.c
===================================================================
--- src/main/http_core.c    (revision 158971)
+++ src/main/http_core.c    (working copy)
@@ -143,6 +143,9 @@
     conf->etag_add = ETAG_UNSET;
     conf->etag_remove = ETAG_UNSET;
+    /* disallow %2f (encoded '/') by default */
+    conf->allow_encoded_slashes = 0;
+
     return (void *)conf;
 }
@@ -319,6 +322,8 @@
         conf->cgi_command_args = new->cgi_command_args;
     }
+    conf->allow_encoded_slashes = new->allow_encoded_slashes;
+
     return (void*)conf;
 }
@@ -2309,6 +2314,18 @@
 }
 #endif /* AP_ENABLE_EXCEPTION_HOOK */
+static const char *set_allow2f(cmd_parms *cmd, core_dir_config *d, int arg)
+{
+    const char *err = ap_check_cmd_context(cmd, NOT_IN_LIMIT);
+
+    if (err != NULL) {
+        return err;
+    }
+
+    d->allow_encoded_slashes = (arg != 0);

Re: Backport of AllowEncodedSlashes to Apache 1.3

2005-03-29 Thread Aaron Bannert
On Mar 29, 2005, at 8:47 AM, Jim Jagielski wrote:
Since we're extending core_dir_config, we should document the
change in core_dir_config
Should I elaborate more in my core_dir_config from what I already have?
Index: src/include/http_core.h
===================================================================
--- src/include/http_core.h (revision 158971)
+++ src/include/http_core.h (working copy)
@@ -318,6 +318,8 @@
     /* Digest auth. */
     char *ap_auth_nonce;
+    unsigned int allow_encoded_slashes : 1; /* URLs may contain %2f w/o being
+                                             * pitched indiscriminately */
 } core_dir_config;
 
 /* Per-server core configuration */



Backport of AllowEncodedSlashes to Apache 1.3

2005-03-24 Thread Aaron Bannert
I've attached a patch against the trunk of Apache 1.3 that backports
support for the AllowEncodedSlashes directive. It should behave
identically to the way it works in 2.0. By default Apache will disallow
any request that includes a %-encoded slash ('/') character (which
is '%2F'), but by enabling this directive an administrator can override
this prevention and allow %2Fs in request URLs. If this is an acceptable
backport, and I can get some +1s for it, I'll be happy to commit it and
update the documentation (at least the English :).
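
(For the record, usage would match the existing 2.0 directive; a
hypothetical httpd.conf fragment:)

    # Default is Off: requests containing an encoded slash (%2F)
    # in the URL path are rejected.
    AllowEncodedSlashes On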
-aaron
Index: src/include/httpd.h
===================================================================
--- src/include/httpd.h (revision 158971)
+++ src/include/httpd.h (working copy)
@@ -976,6 +976,7 @@
 API_EXPORT(int) ap_is_url(const char *u);
 API_EXPORT(int) ap_unescape_url(char *url);
+API_EXPORT(int) ap_unescape_url_keep2f(char *url);
 API_EXPORT(void) ap_no2slash(char *name);
 API_EXPORT(void) ap_getparents(char *name);
 API_EXPORT(char *) ap_escape_path_segment(pool *p, const char *s);
Index: src/include/http_core.h
===================================================================
--- src/include/http_core.h (revision 158971)
+++ src/include/http_core.h (working copy)
@@ -318,6 +318,8 @@
     /* Digest auth. */
     char *ap_auth_nonce;
+    unsigned int allow_encoded_slashes : 1; /* URLs may contain %2f w/o being
+                                             * pitched indiscriminately */
 } core_dir_config;
 
 /* Per-server core configuration */
Index: src/main/util.c
===================================================================
--- src/main/util.c (revision 158971)
+++ src/main/util.c (working copy)
@@ -1635,6 +1635,53 @@
     return OK;
 }
+API_EXPORT(int) ap_unescape_url_keep2f(char *url)
+{
+    register int badesc, badpath;
+    char *x, *y;
+
+    badesc = 0;
+    badpath = 0;
+    /* Initial scan for first '%'. Don't bother writing values before
+     * seeing a '%' */
+    y = strchr(url, '%');
+    if (y == NULL) {
+        return OK;
+    }
+    for (x = y; *y; ++x, ++y) {
+        if (*y != '%') {
+            *x = *y;
+        }
+        else {
+            if (!ap_isxdigit(*(y + 1)) || !ap_isxdigit(*(y + 2))) {
+                badesc = 1;
+                *x = '%';
+            }
+            else {
+                char decoded;
+                decoded = x2c(y + 1);
+                if (decoded == '\0') {
+                    badpath = 1;
+                }
+                else {
+                    *x = decoded;
+                    y += 2;
+                }
+            }
+        }
+    }
+    *x = '\0';
+    if (badesc) {
+        return BAD_REQUEST;
+    }
+    else if (badpath) {
+        return NOT_FOUND;
+    }
+    else {
+        return OK;
+    }
+}
+
 API_EXPORT(char *) ap_construct_server(pool *p, const char *hostname,
                                        unsigned port, const request_rec *r)
 {
Index: src/main/http_request.c
===================================================================
--- src/main/http_request.c (revision 158971)
+++ src/main/http_request.c (working copy)
@@ -1175,8 +1175,21 @@
     /* Ignore embedded %2F's in path for proxy requests */
     if (r->proxyreq == NOT_PROXY && r->parsed_uri.path) {
-        access_status = ap_unescape_url(r->parsed_uri.path);
+        core_dir_config *d;
+        d = ap_get_module_config(r->per_dir_config, &core_module);
+        if (d->allow_encoded_slashes) {
+            access_status = ap_unescape_url_keep2f(r->parsed_uri.path);
+        }
+        else {
+            access_status = ap_unescape_url(r->parsed_uri.path);
+        }
         if (access_status) {
+            if (! d->allow_encoded_slashes) {
+                ap_log_rerror(APLOG_MARK, APLOG_NOERRNO|APLOG_INFO, r,
+                              "found %%2f (encoded '/') in URI "
+                              "(decoded='%s'), returning 404",
+                              r->parsed_uri.path);
+            }
             ap_die(access_status, r);
             return;
         }
Index: src/main/http_core.c
===================================================================
--- src/main/http_core.c    (revision 158971)
+++ src/main/http_core.c    (working copy)
@@ -143,6 +143,9 @@
     conf->etag_add = ETAG_UNSET;
     conf->etag_remove = ETAG_UNSET;
+    /* disallow %2f (encoded '/') by default */
+    conf->allow_encoded_slashes = 0;
+
     return (void *)conf;
 }
@@ -319,6 +322,8 @@
         conf->cgi_command_args = new->cgi_command_args;
     }
+    conf->allow_encoded_slashes = new->allow_encoded_slashes;
+
     return (void*)conf;
 }
@@ -2309,6 +2314,18 @@
 }
 #endif /* AP_ENABLE_EXCEPTION_HOOK */
+static const char *set_allow2f(cmd_parms *cmd, core_dir_config *d, int arg)
+{
+    const char *err = ap_check_cmd_context(cmd, NOT_IN_LIMIT);
+
+    if (err != NULL) {
+        return err;
+    }
+
+    d->allow_encoded_slashes = (arg != 0);
+    return NULL;
+}
+
 static const char *set_pidfile(cmd_parms *cmd, void *dummy, char *arg)
 {
     const

Re: worker MPM: it sucks to have minimal MaxSpareThreads

2005-03-07 Thread Aaron Bannert
On Mar 4, 2005, at 12:08 PM, Jeff Trawick wrote:
Any comments on these two separate proposals?
b) tweak worker MPM to automatically bump the value of MaxSpareThreads
to at least 15% of MaxClients, with a warning written to the error log
I like this best, because it requires no action on the user's part
to take advantage of the change.
Just so I understand the problem correctly, you're saying that
when Worker is trying hard to stay near that MaxSpareThreads
setting, and under a condition that pushes the server constantly
up near that threshold (e.g. when you have a sustained connection
rate that is higher than MaxSpareThreads) then the turnover of
connections causes Worker to kill and respawn children quickly,
but that since the turnover is so quick you end up having children
lingering around with one or two thread slots and essentially
we approach the prefork scenario in terms of number of child
processes. Is this correct?
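
(To make proposal (b) concrete, here is a hedged example: with a worker
config like the hypothetical one below, MaxSpareThreads would be bumped
from 25 to at least 90, i.e. 15% of MaxClients 600, with a warning
written to the error log.)

    <IfModule worker.c>
        StartServers         2
        MaxClients         600
        MinSpareThreads     25
        MaxSpareThreads     25    # would be raised to 90
        ThreadsPerChild     25
    </IfModule>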
-aaron


Re: Puzzling News

2005-03-07 Thread Aaron Bannert
On Feb 28, 2005, at 1:17 PM, Paul A. Houle wrote:
	Honestly,  I don't see a huge advantage in going to worker.  On Linux 
performance is about the same as prefork,  although I haven't done 
benchmarking on Solaris.
Under low-load conditions prefork often out-performs worker. Under
high-concurrency scenarios, worker tends to degrade gracefully while
under prefork you run the risk of running out of memory. Another benefit
of worker that I've seen is that it can respond to requests with lower 
latency,
which may have a positive impact on a user's experience (pages snap in
more quickly). It would be nice to get some updated benchmarks on the
relative metrics, like requests/second, concurrency, latency, etc...

-aaron


Re: End of Life Policy

2004-11-28 Thread Aaron Bannert
On Nov 20, 2004, at 12:11 PM, Paul Querna wrote:
No, I do not want to make it forbidden.  Rather, I would like a set 
date  where we do not provide _any_ implied support as the HTTPd 
project.
We don't provide any implied support anyway. Sure, we'd like to release
perfect software, but we make no warranties[1], and we definitely
shouldn't be making any implied warranties that might contradict
our license. In other words, setting dates like this goes against our
license and in my opinion goes against our philosophy.
-aaron
[1] Excerpt from the Apache License 2.0:
   7. Disclaimer of Warranty. Unless required by applicable law or
  agreed to in writing, Licensor provides the Work (and each
   Contributor provides its Contributions) on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
   implied, including, without limitation, any warranties or conditions
   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
   PARTICULAR PURPOSE. You are solely responsible for determining the
  PARTICULAR PURPOSE. You are solely responsible for determining the
  appropriateness of using or redistributing the Work and assume any
  risks associated with Your exercise of permissions under this 
License.



Re: End of Life Policy

2004-11-28 Thread Aaron Bannert
On Nov 20, 2004, at 10:32 AM, Paul Querna wrote:
I would like to have a semi-official policy on how long we will 
provide security backports for 2.0 releases.

I suggest a value between 6 and 12 months.
Many distributions will provide their own security updates anyway, so
this would be a service to only a portion of our users.

As always, this is open source, and I would not stop anyone from 
continuing support for the 2.0.x branch. My goal is to help set our 
end user's expectations for how long they have to upgrade to 2.2.
As long as there are people willing to work on bug fixes or other
improvements, who are we to stop them? They can of course always fork,
but I'd rather have them contribute their bug fixes back to us.
Official policies are just red tape.

-aaron


Re: Apache 1.3.31 RC Tarballs available

2004-05-08 Thread Aaron Bannert
On May 8, 2004, at 4:05 AM, Jim Jagielski wrote:
I don't consider us a closely held ivory-tower QA and I would
say that if anyone knows of a talented pool of users who would
like to test RCs, then we should have a mechanism to use them.
That was the intent for the current/stable-testers list, but
we've never really used that as we should have.
The problem is really 2 fold:

   1. The tarballs were being mistakenly described as the
  official release. It's not released until we say so.
  I think it's our responsibility to ensure that people
  aren't mistakenly running pre-release s/w under the
   impression that it is a release.
I agree that it was mistakenly described as an official
release, but I think we're confusing the idea of an official
release with the concept of quality. It's difficult for us to
state the quality of a release when our license[1] clearly says
that we make no guarantees or warranties. Even if we say it's
perfect, it's still a use at your own risk type of thing.
   2. That when all goes well, and the RC tarballs are approved,
  they aren't changed at all... We are testing, really,
   the accuracy of the tarball itself. This adds some
   complexity to the whole process.
I don't think it needs to add to the complexity. I see us wanting
to do simple sanity checking on the tarball, but even that
lends itself to scalability. Assuming the tarballs are labeled
as release candidates, people should be free to do whatever
they want with them. If the signatures don't check out, they'll
tell us. If it doesn't build, they'll tell us. If it works out for a
few days we'll bless it. When it breaks we'll fix it and roll another
tarball.
I've been thinking over changing the 1.3 release process and
us actually tagging a tree as RC, creating actual 1.3.x-rc1
tarballs and people testing that, and having those very,
very open, but having the actual *release* tarballs
somewhat closed (again, to test the viability of the tarball,
not the code).
I think this would be a step in the right direction. I still don't see
why any stage in the release process should be closed, though.
We don't make any guarantees about any of our code at any time,
so as long as we make it totally clear when we want sanity checking
(testing a release candidate) or when we want normal bug testing
then I think we may see much greater participation by our users in the
QA process, and as a result we will all have much higher quality code.
-aaron

[1] - Excerpt from http://www.apache.org/LICENSE.txt

 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED
 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED.  IN NO EVENT SHALL THE APACHE SOFTWARE FOUNDATION OR
 * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
 * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
 * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
 * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.


Re: Apache 1.3.31 RC Tarballs available

2004-05-07 Thread Aaron Bannert
Why is it bad if people download the RC version and
test it? Frankly, I really don't mind if slashdot or anyone
else broadcasts that we have an RC tarball available.
If anything it's a good thing. We don't make any guarantees
about our code anyway, so whether or not we call it a GA
release is just a courtesy to our users. Sheesh, this just
seems like we're turning down would-be beta-testers!
Please put the tarballs back up, and please ignore the press.

-aaron

On May 7, 2004, at 12:28 PM, Jim Jagielski wrote:

I have made the tarballs unavailable from the below URL. People
should contact me directly to obtain the correct URL...
Sander Temme wrote:
On May 7, 2004, at 8:15 AM, Jim Jagielski wrote:

Via:

   http://httpd.apache.org/dev/dist/

I'd like to announce and release the 11th.
Except Slashdot beat you to the punch: http://apache.slashdot.org/.



Re: Apache 1.3.31 RC Tarballs available

2004-05-07 Thread Aaron Bannert
I believe that a strict QA process actually hurts the quality
of OSS projects like Apache. We have a gigantic pool of
talented users who would love to give us a hand by testing
our latest and greatest in every contorted way imaginable.
But we're holding out on them. We're saying that we know
better than they do. I don't think we do. Sure, we should be
testing our code, but there's absolutely no way that we can
be perfect. Closely held ivory-tower QA doesn't scale any
better at the ASF than it does at a proprietary company. But
the QA that comes out of widely distributed release-candidates
_does_ scale. Why don't we let the teeming masses have their
fill?
-aaron

On May 7, 2004, at 6:40 PM, Jim Jagielski wrote:

The trouble is that we need to perform *some* sort of quality
control out there... The alternative is: as soon as we have a tarball
out, it's immediately released, in which case why even bother
with a test or RC candidate. We need to, IMO, impose some
sort of order and process on how we release s/w, and the simple
fact that we have RC tarballs out there doesn't cut it.
It's certainly a problem that we've had for some time, especially
when you consider the times when releases are really security
related... I would hate having some sort of private release
mailing list where we can really test RC tarballs in a
semi secure environment before they are released in general,
but I can't see us simply saying our release procedure is
we throw a tarball out there and 2-3 days after we do that
we Announce it :)
Aaron Bannert wrote:
Why is it bad if people download the RC version and
test it? Frankly, I really don't mind if slashdot or anyone
else broadcasts that we have an RC tarball available.
If anything it's a good thing. We don't make any guarantees
about our code anyway, so whether or not we call it a GA
release is just a courtesy to our users. Sheesh, this just
seems like we're turning down would-be beta-testers!
Please put the tarballs back up, and please ignore the press.

-aaron

On May 7, 2004, at 12:28 PM, Jim Jagielski wrote:

I have made the tarballs unavailable from the below URL. People
should contact me directly to obtain the correct URL...
Sander Temme wrote:
On May 7, 2004, at 8:15 AM, Jim Jagielski wrote:

Via:

   http://httpd.apache.org/dev/dist/

I'd like to announce and release the 11th.
Except Slashdot beat you to the punch:  
http://apache.slashdot.org/.



--  
=== 

   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]
http://www.jaguNET.com/
  A society that will trade a little liberty for a little order
 will lose both and deserve neither - T.Jefferson




Re: Apache 1.3.31 RC Tarballs available

2004-05-07 Thread Aaron Bannert
FWIW, we're currently only using half of our allocated bandwidth.
If RC distributions become a bandwidth problem, we can think
about mirroring then (wouldn't that be a great problem to have
though?)
-aaron

On May 7, 2004, at 7:05 PM, André Malo wrote:

* Aaron Bannert [EMAIL PROTECTED] wrote:

Why is it bad if people download the RC version and
test it?
Frankly, I really don't mind if slashdot or anyone
else broadcasts that we have an RC tarball available.
Our traffic fee does anyway. RC stuff in /dev/dist/ is not mirrored.

nd
--
Winnetous Erbe: http://pub.perlig.de/books.html#apache2



Re: error during make all

2004-04-13 Thread Aaron Bannert
You shouldn't have to run buildconf. Start from a clean tarball again and
just run ./configure with your args and then make.

-aaron
On Apr 4, 2004, at 9:41 PM, Navneetha wrote:
I am new to apache flood. I have downloaded a copy of flood. After a
successful download I am able to successfully execute ./buildconf and
./configure --disable-shared.



Re: [PATCH] flood and content-type setting

2004-03-18 Thread Aaron Bannert
On Mon, Mar 15, 2004 at 09:13:45AM +0100, Philippe Marzouk wrote:
 On Sun, Mar 14, 2004 at 05:27:16PM -0800, Aaron Bannert wrote:
  This sounds reasonable to me, did it ever get committed?
 
 I imagine you asked the flood developers but I just did a cvs diff and
 this patch was not committed.

Nope, I was just catching up on old mailings and wanted to see if
this was ever addressed.


I've committed it, thanks for the patch.

Could you provide an example xml input file that I could stick
in the examples/ directory that shows usage of this new parameter?

-aaron


Re: [PROPOSAL] Move httpd to the subversion repository

2004-03-16 Thread Aaron Bannert
On Tue, Mar 16, 2004 at 09:52:49PM +0100, Sander Striker wrote:
 Can we please move this discussion to [EMAIL PROTECTED]
 
 A lot of the points discussed aren't about technical problems of httpd
 moving over, but overall topics concerning our setup.  Most of the
 concerns that have come up are things that people not directly
 involved with Infrastructure are likely never having to deal with.

This discussion is about the version control needs of the HTTPD
Server Project. Please keep the discussion on this list with the
users who will be most affected by the proposed change.

 PKI, integrated with/on top of, Subversion, can be a joint effort
 between the Infrastructure and Security Team.  If a good, practical
 solution can be put together we can start looking how to roll that
 out.

If having a tamper-resistant code repository is a new requirement of
the HTTPD Server Project then we should discuss this in terms of
abstract requirements and not assume a particular implementation.

Keep in mind that although the infrastructure team may be charged
with managing the infrastructure, it shouldn't be pushing projects
to use tools that they don't want to use.

-aaron


Re: [PROPOSAL] Move httpd to the subversion repository

2004-03-14 Thread Aaron Bannert
On Sat, Mar 13, 2004 at 02:04:09PM +0100, Sander Striker wrote:
 I hereby would like to propose that we move the HTTP Server project
 codebase to the Subversion repository at:
   http://svn.apache.org/repos/asf/.

-1

This will, at least for now, raise the bar to entry for contributors.

-aaron


Re: Failure in child_init when doing graceful with flock()

2004-03-14 Thread Aaron Bannert
Ok, after wading through the code for awhile I have a working theory:

1) Parent creates a child
2) Parent gets graceful-restart signal
3) Parent returns from ap_run_mpm, pconf is cleared, cross-process lock file
   is closed and removed.
4) Child finally gets scheduled to run the apr_proc_mutex_child_init for
   fcntl(). Oops, apr_file_open fails since step #3 above removed the file.
   Child errors out (ENOENT is returned from apr_file_open()) and dies.
5) Parent notices that child has died, errors out and dies completely.

Note that the 2.0 branch likely has this same problem (which will only
show up very rarely under graceful restarts while using fcntl() for the
accept mutex). Both the www.apache.org build (2.0) and the cvs.apache.org
build (2.1-dev) are seeing accept.lock.pid turds being left behind,
and I think it is likely that one is left behind each time we hit this
bug.

One way to recreate this might be to pummel the server with graceful
restarts while also pummeling it with requests (enough requests to get
the parent to need to create new children).

In any case, can anyone else confirm that this race condition exists, and
maybe suggest a way to synchronize a parent's shutdown with the starting
up of an old-generation child? (Eg. the parent shouldn't remove the
lockfile until all children are successfully started.)

-aaron


On Sun, Mar 14, 2004 at 10:15:43AM -0800, Justin Erenkrantz wrote:
 This morning, when we did a graceful to the httpd serving cvs.apache.org 
 (which runs HEAD not APACHE_2_0_BRANCH), it failed and gave us:
 
 [Sun Mar 14 00:00:00 2004] [emerg] (2)No such file or directory: Couldn't 
 initialize cross-process lock in child
 [Sun Mar 14 00:00:00 2004] [emerg] (2)No such file or directory: Couldn't 
 initialize cross-process lock in child
 [Sun Mar 14 00:00:00 2004] [alert] Child 10485 returned a Fatal error... 
 server is exiting!
 
 It subsequently brought down the entire server with it.  (That's sort of 
 bad, too.)
 
 This error lines up with prefork.c around line 485:
 
  status = apr_proc_mutex_child_init(&accept_mutex, ap_lock_fname, pchild);
  if (status != APR_SUCCESS) {
      ap_log_error(APLOG_MARK, APLOG_EMERG, status, ap_server_conf,
                   "Couldn't initialize cross-process lock in child");
      clean_child_exit(APEXIT_CHILDFATAL);
  }
 
 We don't have a LockFile or an AcceptMutex directive, so it should be using 
 the default, which is flock() on FreeBSD.
 
 Anyone else seen this?  Should we switch the AcceptMutex directive to 
 fcntl()?
 (If this does fail with flock(), should we just remove support for flock()?)
 
 Thanks!  -- justin
 


Re: 2.0.49 (rc1) tarballs available for testing

2004-03-10 Thread Aaron Bannert
On Wed, Mar 10, 2004 at 07:53:03AM +, Joe Orton wrote:
 There was an httpd-test issue causing segfaults in t/http11/chunked.t
 (fixed yesterday) - was that it or was there something else?

That wasn't it, it was a more general problem getting the test suite
to run (and get all its dependencies installed), which was probably my
own fault. I'll pound on it some more when I get a minute.

-aaron


Re: 2.0.49 (rc1) tarballs available for testing

2004-03-09 Thread Aaron Bannert
On Tue, Mar 09, 2004 at 06:02:03PM +0100, Sander Striker wrote:
 There are 2.0.49-rc1 tarballs available for testing...

+1

Looks good over here (though I had trouble running the testsuite on x86_64).

FWIW, x86_64/Linux on my 1.2Ghz Opteron with NPTL enabled runs worker
about 40% faster than prefork for static pages (prefork does approx.
1050 req/sec while worker does approx. 1480 req/sec). Woohoo!

-aaron


Re: [PATCH] raise MAX_SERVER_LIMIT

2004-01-26 Thread Aaron Bannert
On Thu, Jan 15, 2004 at 04:04:38PM +, Colm MacCarthaigh wrote:
 There were other changes co-incidental to that, like going to 12Gb
 of RAM, which certainly helped, so it's hard to narrow it down too
 much.

Ok with 18,000 or so child processes (all in the run queue) what does
your load look like? Also, what kind of memory footprint are you seeing?

 I don't use worker because it still dumps an un-backtracable corefile
 within about 5 minutes for me. I still have no idea why, though plenty
 of corefiles. I haven't tried a serious analysis yet, because I've been
 moving house, but I hope to get to it soon. Moving to worker would be
 a good thing :)

I'd love to find out what's causing your worker failures. Are you using
any thread-unsafe modules or libraries?

-aaron


Re: Proposal: Allow ServerTokens to specify Server header completely

2004-01-26 Thread Aaron Bannert
On Tue, Jan 13, 2004 at 02:04:06PM +, Ivan Ristic wrote:
 Jim Jagielski wrote:
 
 I'd like to get some sort of feedback concerning the idea
 of having ServerTokens not only adjust what Apache
 sends in the Server header, but also allow the directive
 to fully set that info.
 
 For example: ServerTokens Set Aporche/3.5
 would cause Apache to send Aporche/3.5 as the
 Server header. Some people want to be able to totally
 obscure the server type.
 
   I like the idea. Right now you either have to
   change the source code or use mod_security to achieve
   this, but I think the feature belongs to the server core.
 
   But I think a new server directive is a better solution.

I think one should have to change the source code in order to
have this level of control over the Server: header.

-aaron


Flood's --with-capath mandatory?

2003-12-30 Thread Aaron Bannert
Could someone please remind me why --with-capath is mandatory when
--enable-ssl is used? The default is only useful if you actually
use --with-ssl=some/path. I have a patch that changes the default to
$sysconfdir/certs, but in the long run I think this should be something
configured through the XML input (eg. you enable CA checking for some
set of URLs and you give the path to the trusted certs). 

-aaron


Re: filtering huge request bodies (like 650MB files)

2003-12-12 Thread Aaron Bannert
[we really should move this to the [EMAIL PROTECTED] list]

On Fri, Dec 12, 2003 at 11:53:53AM +, Ben Laurie wrote:
 This was exactly the conversation we were having at the hackathon. As 
 always, Windows was the problem, but I thought Bill had it licked?

Well, there are two things we have to solve. I think we know how to solve
the first one: portable IPC that works on Windows. This is not easy to
solve in a portable way, but given enough energy I think this is solvable.

The second part is getting all the different I/O types to work within
the same poll() or poll-like mechanism. This seems like a much more
difficult task to me, but it all depends on how it works under Windows.

-aaron


Re: filtering huge request bodies (like 650MB files)

2003-12-11 Thread Aaron Bannert
On Thu, Dec 11, 2003 at 01:50:46PM -0600, William A. Rowe, Jr. wrote:
 But the 2.0 architecture is entirely different.  We need a poll but it's not entirely
 obvious where to put one...
 
 One suggestion raised in a poll bucket: when a connection level filter cannot
 read anything more, it passes back a bucket containing a poll descriptor as
 metadata.  Each filter passes this metadata bucket back up.  Some filters
 like mod_ssl would move it from the connection brigade to the data brigade.

At one level we'll have to fit whatever I/O multiplexer we come
up with in the filters. I'm going to stay out of that discussion.

At a lower level, ignoring filters for a moment, we still need a
way for applications to be able to multiplex I/O between different
I/O types: pipes, files, sockets, IPC, etc... I think this is the
root of the problem (and something we should probably move over
to the [EMAIL PROTECTED] list, and also something we might want to take up
after APR 1.0 is released).

-aaron


Re: [PATCH 25137] atomics in worker mpm

2003-12-11 Thread Aaron Bannert
On Thu, Dec 11, 2003 at 08:39:27AM -0500, Brian Akins wrote:
 I wonder if this
 binary would run on an older processor (running a modern version of linux).
 
 AFAIK, yes.  It's standard x86 assembly.
 
 
 All:  Please correct me if I am wrong.  I'm sure you will ;)

I'm no x86 asm expert, so maybe someone else can comment on the
portability of this code.

-aaron


Re: cvs commit: httpd-2.0 CHANGES

2003-12-10 Thread Aaron Bannert
On Wed, Dec 10, 2003 at 01:42:54PM +0100, Sander Striker wrote:
 It's a public recorded thing, so I'd say: that surely is more than
 sufficient.  I was getting at the fact that phonecalls or irc sessions
 aren't logged, so there is no way to know there was approval without it
 being summarized somewhere.

Reviews don't need to be recorded, we only do it to give recognition
to the reviewer. Recall that we make no warranty about the quality
of the code we release, so there is no legal requirement to record
our QA process.

FWIW, I've never liked this whole r-t-c thing for any branch of
httpd, development or stable. I trust every single other committer
on this project to commit good code and to catch when someone else
commits something bad. Everything else is red tape.

-aaron


Re: filtering huge request bodies (like 650MB files)

2003-12-10 Thread Aaron Bannert
On Wed, Dec 10, 2003 at 06:29:28PM -0500, Glenn wrote:
 On Wed, Dec 10, 2003 at 03:18:44PM -0800, Stas Bekman wrote:
  Are you saying that if I POST N MBytes of data to the server and just have 
  the server send it back to me, it won't grow by that N MBytes of memory for 
  the duration of that request? Can you pipe the data out as it comes in? I 
  thought that you must read the data in before you can send it out (at least 
  if it's the same client who sends and receives the data).
 
 Well, in the case of CGI, mod_cgi and mod_cgid currently require that
 the CGI read in the entire body before mod_cgi(d?) will read the
 response from the CGI.  So a CGI echo program must buffer the whole
 response before mod_cgi(d?) will read the CGI output and send it back
 to the client.  If the CGI buffers to disk, no problem, but if the
 CGI buffers in memory, it will take a lot of memory (but not in Apache).
 
 Obviously :-), that's a shortcoming of mod_cgi(d?), but might also be
 a problem for modules such as mod_php which preprocesses the CGI POST
 info before running the PHP script.

[slightly off-topic]

Actually, I believe that mod_cgi and mod_cgid are currently broken
WRT the CGI spec. The spec says that a CGI may read as much of an
incoming request body as it wishes and may return data as soon as
it wishes (all AIUI). That means that right now if you send a big
body to a CGI script that does not read the request body (which
is perfectly valid according to the CGI spec) then mod_cgi[d] will
deadlock trying to write the rest of the body to the script.

The best way to fix this would be to support a poll()-like multiplexing
I/O scheme where data could be written to and read from the CGI process
at the same time. Unfortunately, that's not currently supported by
the Apache filter code.
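
Roughly what I have in mind, as an untested sketch (the pipe arguments
and buffer handling are made up; a real fix would live in mod_cgi's
handler and feed r->output_filters instead of a comment):

static apr_status_t pump_cgi(apr_file_t *cgi_in, apr_file_t *cgi_out,
                             const char *body, apr_size_t body_len,
                             apr_pool_t *p)
{
    apr_pollset_t *ps;
    apr_pollfd_t pfd;
    apr_size_t written = 0;
    apr_status_t rv;

    apr_pollset_create(&ps, 2, p, 0);

    /* watch the script's stdin for writability... */
    pfd.p = p;
    pfd.desc_type = APR_POLL_FILE;
    pfd.reqevents = APR_POLLOUT;
    pfd.desc.f = cgi_in;
    pfd.client_data = NULL;
    apr_pollset_add(ps, &pfd);

    /* ...and its stdout for readability, at the same time */
    pfd.reqevents = APR_POLLIN;
    pfd.desc.f = cgi_out;
    apr_pollset_add(ps, &pfd);

    for (;;) {
        apr_int32_t n, i;
        const apr_pollfd_t *ret;

        rv = apr_pollset_poll(ps, -1, &n, &ret);
        if (rv != APR_SUCCESS)
            return rv;

        for (i = 0; i < n; i++) {
            if (ret[i].rtnevents & APR_POLLOUT) {
                apr_size_t len = body_len - written;
                apr_file_write(cgi_in, body + written, &len);
                written += len;
                if (written == body_len) {
                    /* done feeding the script; a real version would
                     * also apr_pollset_remove() cgi_in here */
                    apr_file_close(cgi_in);
                }
            }
            if (ret[i].rtnevents & APR_POLLIN) {
                char buf[8192];
                apr_size_t len = sizeof(buf);
                rv = apr_file_read(cgi_out, buf, &len);
                if (rv == APR_EOF)
                    return APR_SUCCESS;
                /* pass buf/len down the output filter chain here */
            }
        }
    }
}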

-aaron


Re: [PATCH 25137] atomics in worker mpm

2003-12-10 Thread Aaron Bannert
On Tue, Dec 09, 2003 at 03:24:15PM -0500, Brian Akins wrote:
 I was testing on x86 Linux which appears to do the apr_atomics in assembly.

Does it use this atomics implementation by default? I wonder if this
binary would run on an older processor (running a modern version of linux).

-aaron


Re: Regarding worker MPM and queue_push/pop

2003-12-05 Thread Aaron Bannert
On Wed, Dec 03, 2003 at 11:38:25PM -0800, MATHIHALLI,MADHUSUDAN (HP-Cupertino,ex1) 
wrote:
 A first guess is that I'm using SysV semaphores, and a semlock can bring
 the entire httpd down to a crawl. I'm re-compiling using pthread mutexes
 whenever possible.

Depending on the implementation in your libc/kernel, pthread may be
better suited for Apache. I have seen noticable improvements on Solaris
using pthread, for example.

 I took a gprof on the system, and noticed that httpd was sleeping on a
 condition - the first guess is ap_queue_pop() (anything else?). 
 Question : Has anybody done some sort of profiling on the ap_queue_* stuff -
 if so, can you please share the data ? 
 
 I had another dumb question - (was this already considered and rejected ?)
 instead of having the worker threads compete for the incoming connections
 (using ap_queue_pop .. and hence mutex_lock), assign the connection to the
 next free thread on a round-robin basis - if I'm not wrong, zeus does
 something similar.

We don't sleep on a mutex_lock, since that is not a good way to do
predictive scheduling and makes no guarantees about sharing, etc.
As you noticed above, when a thread is idle it waits on a condition
in the ap_queue. There is one thread feeding that queue, and when
it accept()s a connection it pushes it into that queue, effectively
triggering the condition on a single thread which then wakes up
and does its work.
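
In simplified form the handoff looks like this (modeled loosely on
server/mpm/worker/fdqueue.c, with names shortened and all bounds
checks and error handling omitted):

typedef struct {
    apr_thread_mutex_t *lock;
    apr_thread_cond_t  *not_empty;
    apr_socket_t      **data;
    unsigned int        nelts;
} demo_queue;

/* worker side: sleep (no CPU burned) until the listener signals */
static apr_status_t demo_queue_pop(demo_queue *q, apr_socket_t **sd)
{
    apr_thread_mutex_lock(q->lock);
    while (q->nelts == 0) {
        apr_thread_cond_wait(q->not_empty, q->lock);
    }
    *sd = q->data[--q->nelts];
    apr_thread_mutex_unlock(q->lock);
    return APR_SUCCESS;
}

/* listener side: push the accepted socket, wake exactly one worker */
static apr_status_t demo_queue_push(demo_queue *q, apr_socket_t *sd)
{
    apr_thread_mutex_lock(q->lock);
    q->data[q->nelts++] = sd;
    apr_thread_cond_signal(q->not_empty);
    apr_thread_mutex_unlock(q->lock);
    return APR_SUCCESS;
}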

I can't imagine why sleeping would consume 20% of your CPU when the
server is idle. Perhaps you should look into the scalability of your
thread library. On what kind of system are you seeing this problem
(hardware/OS/rev/etc)?

Are you seeing big load spikes when only a trickle of requests are coming
in?

-aaron



Re: [PATCH 25137] atomics in worker mpm

2003-12-02 Thread Aaron Bannert
On Tue, Dec 02, 2003 at 08:40:05AM -0500, Brian Akins wrote:
 Backported from 2.1.  Stable for me in various loads.

Cool! What OS/arch are you using? Also, any idea how well it performs
compared to before the patch?

-aaron


Re: Where is mod_access??

2003-11-26 Thread Aaron Bannert
On Wed, Nov 26, 2003 at 03:51:23PM -0500, Christopher Jastram wrote:
 My boss found subversion+webdav, and wants it implemented for use with 
 Adobe FrameMaker.  So, I need dav_lock (without it, framemaker can load 
 from dav, but cannot checkin or checkout).

Cool!

 I checked nagoya, there doesn't seem to be any way to submit bug reports 
 (and patches) for httpd2.1.  Do I submit it under 2.0 or post to the list?

Feel free to post to the list at any time, even after we get an httpd-2.1
section into the bug system.

-aaron


Re: Release Frequency and Testing

2003-11-24 Thread Aaron Bannert
On Mon, Nov 24, 2003 at 12:54:05AM -0500, Robert La Ferla wrote:
 I have no problem with running release candidates and contributing.  I 
 have contributed in the past by the way...  In fact, I wouldn't object 
 to trying nightly or weekly builds.  The problem is that I don't see 
 those as easily available from the httpd.apache.org website.  There is a 
 link to the latest source tree but the source code there does not have a 
 configure script.  Yes, I can build the script (autoconf?) but if you 
 want people to test software on a regular basis, it would be better to 
 have a ready to go source release on a nightly basis.  Perhaps, you have 
 this but I couldn't find it.
 
 There's also the issue of release frequency.  I remember seeing some 
 discussion about it several releases ago but no action was taken.  More 
 frequent releases would be much welcomed.  Take this recent user_track 
 bug.  A lot of sites use cookie tracking.  It would be great if there 
 was a release that fixed it.  In the interim, it would be nice to see 
 some mention of a workaround on the site for users.

Take a look here:

http://www.apache.org/~aaron/httpd-2.1.0-rc1/

I made these tarballs early last week. Once I get some feedback on
this (whether it works for anyone else, since it works for me on
linux 2.4 i386, linux 2.4 amd64 (opteron), and Mac OS X 10.3
(Panther)) I'll incorporate any new changes and cut another tarball.
Your feedback would be greatly appreciated. :)

-aaron


Re: Release Frequency and Testing

2003-11-24 Thread Aaron Bannert
On Sun, Nov 23, 2003 at 10:13:09PM -0500, Robert La Ferla wrote:
 What testing gets performed prior to an official httpd 2.x release?  I 
 think whatever test suite is used, needs some updating to include 
 configurations that utilize more features like user tracking, caching 
 and multi-views.  The last release (2.0.48) crashes on startup for my 
 configuration which includes those features.  I will submit a bug report 
 later.  However, I have seen some previous releases that had other crash 
 bugs on startup as well.  2.0.48 was supposed to be a bug fix release 
 but some other bugs were introduced.  If the frequency of the 2.x 
 releases was greater (shorter time between releases), it wouldn't be as 
 a big of a problem.  I have noticed that 2.x releases seem to take much 
 longer than ones in the 1.x days.  I guess what I am saying here is that 
 I hope there can be some discussion about the frequency of releases 
 (making a conscious effort to make them more frequent) as well as a 
 review of what testing gets done prior to a release.

The Apache Software Foundation makes no warranty about the products
it releases. We do try and release software with high quality, but we
make no guarantees.

Having said that, there is an effort underway to produce more timely
releases, and to do more frequent pre-releases. I recently posted
a 2.1.0-rc1 build that you may download and test. Please feel free
to report bugs on pre-release code directly to this mailing list.
Crash-on-startup bugs are particularly interesting. :)

-aaron


Re: MaxClients as shared memory

2003-11-22 Thread Aaron Bannert

Use apr_shm_create in the post_config hook (which gets run in the parent)
and then attach to it in the child_init hook (which runs in the child).
See the scoreboard.c for an example.
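
Skeleton of the pattern (illustrative names, error handling omitted;
with anonymous shm the mapping is simply inherited across fork(),
while name-based shm needs apr_shm_attach() in the child):

#include "httpd.h"
#include "http_config.h"
#include "apr_shm.h"

static apr_shm_t *my_shm;
static int *my_counter;

static int my_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                          apr_pool_t *ptemp, server_rec *s)
{
    /* runs in the parent: NULL filename means anonymous shm */
    apr_shm_create(&my_shm, sizeof(int), NULL, pconf);
    my_counter = apr_shm_baseaddr_get(my_shm);
    *my_counter = 0;
    return OK;
}

static void my_child_init(apr_pool_t *pchild, server_rec *s)
{
    /* runs in each child: re-fetch the base address (or call
     * apr_shm_attach() here if you created name-based shm) */
    my_counter = apr_shm_baseaddr_get(my_shm);
}

static void my_register_hooks(apr_pool_t *p)
{
    ap_hook_post_config(my_post_config, NULL, NULL, APR_HOOK_MIDDLE);
    ap_hook_child_init(my_child_init, NULL, NULL, APR_HOOK_MIDDLE);
}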

-aaron



On Sat, Nov 22, 2003 at 02:28:03PM +0100, David Herrero wrote:
 Thanks. In which part of the source can I reserve my own shared memory
 so that the child processes can access it?


 -Original Message-
 From: Aaron Bannert [mailto:[EMAIL PROTECTED] 
 Sent: Saturday, November 22, 2003 7:41
 To: [EMAIL PROTECTED]
 Subject: Re: MaxClients as shared memory
 
 
 On Sat, Nov 22, 2003 at 01:56:11AM +0100, David Herrero wrote:
  
  Hello, I need to make a global variable into shared memory that can
 be modified at run time by the child processes. I want to know which
 is the better way: including MaxClients in the scoreboard, or
 creating a shared memory segment for this variable.
 
 You can't do this. MaxClients actually affects the size of the
 scoreboard, so it wouldn't make sense to store it in the scoreboard.
 
 -aaron
 


Re: child connection ?

2003-11-22 Thread Aaron Bannert
At one time? The Prefork MPM (Apache 1.3+) can only handle one at
a time. The Worker MPM (in Apache 2.0+ only) can handle multiple
at a time, since it is multithreaded.

-aaron


On Sat, Nov 22, 2003 at 02:06:18PM +, Haskell Curry wrote:
 How many different connections can a child handle?
 
 From: Aaron Bannert [EMAIL PROTECTED]
 Reply-To: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: Re: Fwd: request scheduling ?
 Date: Fri, 21 Nov 2003 22:42:29 -0800
 
 On Sat, Nov 22, 2003 at 01:23:24AM +, Haskell Curry wrote:
 How does Apache do the request scheduling?
 Does the child do the scheduling, or does Apache leave it to the threads?
 
 Scheduling is currently always left up to the operating system,
 whether you are on a prefork MPM or the worker (threaded) MPM.
 
 -aaron
 
 _
 MSN Messenger: converse com os seus amigos online.  
 http://messenger.msn.com.br
 


Re: MaxClients as shared memory

2003-11-21 Thread Aaron Bannert
On Sat, Nov 22, 2003 at 01:56:11AM +0100, David Herrero wrote:
 
   Hello, I need to make a global variable into shared memory that can
 be modified at run time by the child processes. I want to know which
 is the better way: including MaxClients in the scoreboard, or
 creating a shared memory segment for this variable.

You can't do this. MaxClients actually affects the size of the scoreboard,
so it wouldn't make sense to store it in the scoreboard.

-aaron


Re: patch to add style sheet support to mod_autoindex

2003-11-19 Thread Aaron Bannert
On Wed, Nov 19, 2003 at 04:09:12PM -0800, Tyler Riddle wrote:
 I made a simple modification to mod_autoindex that
 allows one to specify a style sheet to be used for the
 generated page in the server config file. This was
 done with a new configuration item IndexStyleSheet.
 This item specifies a string that is used to reference
 an external style sheet.
 
 The patch is available at
 http://dhamma.homeunix.net/patch/mod_autoindex-style_sheet.diff
 and is a unified diff against apache 1.3.28. If
 something is wrong with this patch or it is not up to
 the apache code standards I would love to know why so
 I can improve my coding skills.

Neat patch, short and sweet. I haven't compiled it but it looks good.

+1 in concept

-aaron


Re: Scoreboard

2003-11-17 Thread Aaron Bannert
On Tue, Nov 18, 2003 at 01:29:50AM +0100, David Herrero wrote:
 I need to implement a control module that creates a child process to
 receive requests to modify a global variable of Apache; if it creates a
 child that has a copy of this variable, the other processes don't see
 the change. Can I insert this global variable into the scoreboard
 structure so that all the processes can see the changes in this
 variable?

No, but you can create your own shared memory segment in one of the
hooks that gets run in the parent process, and then it will be inherited
in your child processes.

-aaron


Re: 1.3 Wishlist: (Was: Re: consider reopening 1.3)

2003-11-16 Thread Aaron Bannert
On Sun, Nov 16, 2003 at 03:54:59PM -0500, Jim Jagielski wrote:
 I'm also curious about what a 1.4/1.5 would do that the current 1.3
 does not which would provide a seamless upgrade. Are you talking
 API or what? As someone who's preformed numerous such migrations,
 the actual mechanics of doing so are meager, especially if
 you stay prefork.

sendfile?

-aaron


Creating HTTPD Tarballs

2003-11-16 Thread Aaron Bannert
I've updated the tools/release.sh script in the httpd-dist CVS
repository to make it easier for anyone to create HTTPD tarballs.
Previously it was necessary for a tag to exist before a tarball could
be created. This made it very difficult to release
experimental/developmental tarballs to a set of users for testing,
since we only like to make tags for releases that we believe will
be voted GA.

The new script takes away the VERSION parameter and replaces it
with two new parameters:
  TAG is the CVS tag that we will pull from.
  RELEASE-VERSION is the name that we'll give the tarball.

For example, running

   tools/release.sh httpd HEAD 2.1.0-rc1

creates a tarball named httpd-2.1.0-rc1.tar.gz from HEAD.

-aaron



HTTPD 2.1.0-rc1 tarballs up

2003-11-16 Thread Aaron Bannert
I've made some tarballs of the httpd-2.1 tree. I just pulled HEAD of
both httpd and apr (as of about an hour ago, just before greg's pollset
changes). They're here:

http://www.apache.org/~aaron/httpd-2.1.0-rc1/

This seems to work fine on my Mac OS X (10.3 Panther) box, my linux
2.4 x86 box, and my linux x86_64 (amd64 opteron) box. Let me know
if there are any problems (and fixes) so I can incorporate the fixes
and reroll in a few days. My goal is to churn out updated -rc tarballs
every few days until we get one that we like, at which point we'll call
it 2.1.0 GA.

-aaron


Re: FreeBSD threads was RE: consider reopening 1.3

2003-11-16 Thread Aaron Bannert
On Sun, Nov 16, 2003 at 02:34:47PM -0800, Justin Erenkrantz wrote:
 --On Sunday, November 16, 2003 5:20 PM -0400 Marc G. Fournier 
 
 'k, maybe expand the comment in the INSTALL file to address this?
 
 Well, we've asked for confirmation of FreeBSD threading 'working' on the 
 [EMAIL PROTECTED] - which as a platform-specific issue is the 'right' 
 mailing list for this topic - not [EMAIL PROTECTED]
 
 FWIW, the FreeBSD port maintainer has asked us to enable threads on all 
 versions, but we've (so far) yet to receive a reply that indicates that 
 FreeBSD's threading works properly.  The only reply we received so far was 
 that it isn't fixed with FreeBSD - even 5.1.  So, if you have something to 
 add, please contribute on the right list.

I compiled it on a 4.9-CURRENT machine two days ago and it failed
(even after working around some problems with atomics) by deadlocking.
Connections were established but no responses ever returned. I wasn't
even able to knock a request out of the blocked state by hitting it
from another client.

-aaron


Re: HTTPD 2.1.0-rc1 tarballs up

2003-11-16 Thread Aaron Bannert
On Sat, Nov 15, 2003 at 05:20:33PM -0800, Sander Striker wrote:
 On Sun, 2003-11-16 at 15:36, Aaron Bannert wrote:
  I've made some tarballs of the httpd-2.1 tree. I just pulled HEAD of
  both httpd and apr (as of about an hour ago, just before greg's pollset
  changes). They're here:
  
  http://www.apache.org/~aaron/httpd-2.1.0-rc1/
 
 Ok, I'll leave you to the RM task then.  One more thing off my list
 of things to do ;).

Nobody ever said there need only be one RM. :) Do you feel I am stepping
on your toes or something?

-aaron


Re: FreeBSD threads was RE: consider reopening 1.3

2003-11-16 Thread Aaron Bannert
On Sun, Nov 16, 2003 at 09:43:03PM -0400, Marc G. Fournier wrote:
 
 Yup, this is what I tend to see ...
 
 One question, what does 'ps auxwl' show, primarily the WCHAN column?

I don't have access to the machine right now, but I can check later.

-aaron


Re: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-14 Thread Aaron Bannert
On Wed, Nov 12, 2003 at 05:51:46PM -0800, Justin Erenkrantz wrote:
 Creating roles like this is just as bad as having chunks of httpd owned
 by one particular developer. (See below)
 
 I don't think you understand the role of 'patch manager.'  On the projects 
 I'm involved with, that person's only responsibility is for ensuring that 
 the patches posted to the mailing list are entered into the right tool.
 
 I think that can easily be a volunteer, rotating job as you don't need to 
 do anything more than that.

I don't know what this hypothetical tool looks like, but why wouldn't
the person submitting the patch just use the tool directly?

Seriously though, until we actually have a tool that we want to use,
this part of the discussion is moot.

 In the patch manager role, nothing would preclude others from adding 
 patches to the right tool.  But, having someone whose responsibility it is 
 to ensure that patches get inputted into the tool on a timely basis is 
 goodness.  On other projects I'm involved with, that person isn't even a 
 committer.  They don't have to understand what the patch does - just make 
 sure it is recorded and allow for annotation.
 
 And, it doesn't have to be on a 'timely' basis - once a week, you go 
 through and ensure that all [PATCH] messages are in the right patch 
 management tool.  This lessens the number of dropped patches that we'd 
 have.  (Bugzilla sucks at this, so I think we'd need a new tool.)

Bugzilla wasn't designed for this, so yes, it sucks at it.

I still maintain that the person submitting the patch should simply
enter it into the patch system directly.

Even easier would be a patch system that monitored the list for [PATCH]
emails and tracked those automatically. We might come up with some standard
syntax to help the patch tracking system know what version of httpd (or
apr or whatever) the patch applies to.

 Woah woah woah. Are you discouraging people from thinking about big
 changes for the 2.2 timeframe? If someone has a revolutionary idea for
 the next generation of Apache HTTPD, who will stand in their way? Not I.
 
 Frankly, yes.
 
 I think our changes that we already have in the tree is about 'right' for 
 2.2 (no big architecture changes, but lots of modules have been rewritten 
 and improved).  It's just that no one has time or desire to shepherd 2.2 to 
 a release.  And, I think we need APR 1.0 out the door before 2.2 can be 
 rationally discussed.

[tangential issue] Why does it take so much time and desire to make a
release? Shouldn't that be one of the easier things to do around here?

 If we did any major changes from this point that require modules to rewrite 
 themselves, we'd need to go to 3.0 per our versioning rules.  And, I'd 
 *strongly* discourage that.  I don't think it's in anyone's best interest 
 to revamp our architecture at this stage.
 
 We don't need to give a moving target to our poor module developers.  Only 
 by producing a series of high-quality releases can we ensure that module 
 writers will be confident in writing against 2.0 or 2.2.  If we come out 
 with 3.x now, I think we'll just end up hurting ourselves in the long run 
 even worse than we are now.

Let me pose a different angle: If our module API is so broken that we need
to change it in an incompatible way, then don't you think the module authors
would rather have the improved interface rather than being stuck with the
broken one?

In other words, the only time we actually talk about making drastic changes
is when they are warranted. Therefore, being conservative about API changes
merely serves to discourage creative development and that is _a bad thing_.

-aaron


Re: the wheel of httpd-dev life is surely slowing down, solutionsplease

2003-11-12 Thread Aaron Bannert
On Tue, Nov 11, 2003 at 09:55:24AM -0700, Brad Nicholes wrote:
 One question: Have we set the barrier too high? This was discussed at
 length last year when the 2_0_BRANCH was created and I think that it is
 worth reviewing.  My personal feeling is that the barrier may be doing
 more to discourage development and patch submission than it is to avoid
 breaking the code.  Should the barrier be relaxed to a certain extent? 
 When a patch is submitted to the 2_0_BRANCH, isn't the most important
 question to be asked, does it break backward compatibility?  If the
 answer is no, then why not commit it?  Is there any other reason to hold
 a patch (bug fix or enhancement) over in the 2.1 branch?  There is a
 growing list of backports in the 2.0 STATUS file and although I haven't
 reviewed them all personally, my guess is that the majority of them
 won't break backward compatibility otherwise they wouldn't have been
 proposed.  
 
 People like to see the results of their work.  This is true whether it
 be software development or fixing the kitchen sink.  To my knowledge
 there has never been a release of the 2.1 branch therefore much of the
 work that is going on there is going unnoticed.  This includes bug fixes
 as well as new features.  Yes, you can say that people can just pull the
 code from CVS, but what percentage of potential testers pull the code
 from CVS vs. download a tar ball?  If developers are allowed to see the
 results of a patch that they submitted, they are much more likely to
 submit another one.  
 
 The current R-T-C policy in place over the 2_0_BRANCH prevents
 otherwise good patches from making it into the code in a timely fashion.
  I have a proposed backport that has been sitting there for weeks with
 only a 0 vote placed on it.  I understand that relaxing the R-T-C policy
 may allow for a certain degree of destabilization, but isn't it faster
 to find a bug in code than it is to find it in a description of proposed
 patch.  Also, the people that seem to be reviewing the patches, appears
 to be the same set of people which tends to say that there are fewer
 eyeballs on a patch rather than more.
 
 IMO, the faster we get patches in front of the masses, the faster bugs
 are fixed and stabilization is achieved.

+1

I feel we should bring the barrier to entry as low as possible, and
right now the bar is too high. I would be in favor of going back
to CTR on 2.0, and letting the incompatibilities and bugs sort
themselves out with our beta testers (who, mind you, are completely
willing to play with the latest and greatest and would be happy to
submit bug reports).

-aaron


Re: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-12 Thread Aaron Bannert
On Mon, Nov 10, 2003 at 05:14:35PM -0800, Stas Bekman wrote:
 ==
 1) Bugs
 
 searching for NEW and REOPENED bugs in httpd-2.0 returns: 420 entries
http://nagoya.apache.org/bugzilla/buglist.cgi?bug_status=NEW&bug_status=REOPENED&product=Apache+httpd-2.0
 
 Suggestion: make bugzilla CC bug-reports to the dev list. Most
 developers won't just go and check the bugs database to see if there
 is something to fix, both because they don't have time and because
 there are too many things to fix. Posting the reports to the list
 raises a chance for the developer to have a free minute and may be to
 resolve the bug immediately or close it.

Would anyone else be in favor of exploring other forms of bug-tracking?
The old system (gnats) was completely email-oriented, which was good
and bad. Bugzilla may be more usable for web-oriented users, but it
it definitely more difficult to use offline (ie. impossible). Perhaps
we could try out the bug tracking system used by the debian project
(http://www.debian.org/Bugs/).

 ==
 2) Lack of Design
 
 In my personal opinion, the move to CTR from RTC had very bad
 implications on the design of httpd 2.1.
 
 2a). Design of new on-going features (and changes/fixes) of the
 existing features is not discussed before it's put to the code. When
 it's committed it's usually too late to have any incentive to give a
 design feedback, after all it's already working and we are too busy
 with other things, so whatever.

I agree with the premise but not your conclusion. Design is being
adversely affected when people make large design-changing commits.
These people should be posting their design changes (code and all)
to the mailing list before committing.

However, if people are making commits and others aren't reviewing
those commits, then the responsibility falls to the reviewer. Nobody
should be obligated to hold on to a patch because other people don't have
the time to review.

 The worst part is that it's now easy to sneak in code which otherwise
 would never be accepted (and backport it to 2.0). I don't have any
 examples, but I think the danger is there.

I have not seen any evidence of this.

 2b). As a side-effect when someone asks a design question (e.g. me)
 it's not being answered because people say: we are in CTR mode,
 go ahead, code it and commit it. But the poster doesn't know what
 the right design is; if she did, she wouldn't have asked in the first place.

There is no clear line between the types of changes that warrant
discussion and those that can simply be committed, so this is a difficult
issue to address.

It is not uncommon for design questions to go unanswered on [EMAIL PROTECTED],
and it has been that way as long as I can remember. Patches, on the other
hand, speak loudest.

 2c). Future design. There is no clear roadmap of where do we go with
 Apache 2.1. People scratch their itches with new ideas which is cool,
 but if there was a plan to work on things, this may have invited new
 developers to scratch their itches.

Make proposals (or better yet, add them to the 2.1 STATUS file). :)

I would personally like to see:

a) httpd.conf rewrite
  - normalized syntax (structured, maybe even *gasp* XML)
  - normalized parsing in apache (defined passes, hooks, restart semantics, etc)
  - other ways to retrieve a config (not just from a file, eg. LDAP)
b) sendfile improvements for 64bit memory addressing machines
   (eg. can we mmap/sendfile a bunch of DVD images at the same time and
not crash? Do we see improvements?)
c) simplified filters
   (it's been too long since I've thought about this, but basically writing
filters is too difficult and should be easier).


 2d). CTR seemed to come as a replacement for design discussions. It's
 very easy to observe from the traffic numbers:

I think it's actually the opposite. The amount of design discussions in
general has dramatically decreased since RTC went into effect in 2.0.
I recall very few discussions around 2.1 in general.

 ==
 3). Contributions
 
 I don't have numbers to support my clause, but I have a strong feeling
 that nowadays we see a much smaller number of posts with contributions
 from non-developers, since most contributions are simply ignored and
 it's a known thing among Apache community that posting fixes and
 suggestions to httpd-dev list is usually a waste of time. (This is
 based on my personal experience and discussions with other developers
 who aren't httpd-core coders). I realize that I'm not entirely correct
 saying that, since some things are picked up, so I apologize to those
 folks who do try hard to pick those things.
 
 The obvious problem is that most of those things are unsexy, usually
 don't fit someones itch and they sometimes require a lot of
 communication overhead with the 

Re: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-12 Thread Aaron Bannert
On Mon, Nov 10, 2003 at 05:50:46PM -0800, Justin Erenkrantz wrote:
 --On Monday, November 10, 2003 17:14:35 -0800 Stas Bekman [EMAIL PROTECTED] 
 wrote:
 I believe Sander's suggested that we start a patch manager role (discussion 
 at AC Hackathon!).  A few other projects I'm involved with do this.  It'd 
 be goodness, I think.  But, the problem is finding a volunteer willing to 
 shepherd patches.

-1

Creating roles like this is just as bad as having chunks of httpd owned
by one particular developer. (See below)

Having a patch tracking tool would be a much better solution, in my opinion.

 
 a. certain (most?) chunks of httpd lack owners. If httpd was to have
 
 I really dislike this.  The least maintained modules are those that had 
 'owners.'  When those owners left, they didn't send a notice saying, 'we're 
 leaving.'  The code just rotted.
 
 I firmly believe that httpd's group ownership is the right model.  Only 
 recently have those modules that had 'owners' been cleaned up.  (Yay for 
 nd!)

+1

 b. similar to the root rotation we have on the infrastructure, I
 suggest to have a voluntary weekly rotation of httpd-dev list champion
 
 +1

-1

How is this different than owners above? You can't expect developers
on this list to think ahead of time when they might have extra time
to contribute to this project. Most of the time it is on a whim
late at night when all of the important chores are taken care of.
If you ask for people to sign up for a slotted amount of time,
you'll only end up excluding people who don't work on Apache at
their day job.

 Additionally, I'd say that we're best off *slowing* down 2.x development to 
 let module authors write modules against 2.0.  Changing the world in 2.2 
 would be detrimental, I think.

Woah woah woah. Are you discouraging people from thinking about big changes
for the 2.2 timeframe? If someone has a revolutionary idea for the next
generation of Apache HTTPD, who will stand in their way? Not I.


-aaron


Re: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-12 Thread Aaron Bannert
On Thu, Nov 13, 2003 at 02:04:31AM +0100, Erik Abele wrote:
 I'd be very in favour of exploring other forms of bug-tracking. For  
 example
 we'll have a full replication of BugZilla in Jira (/me hides) in the  
 near future
 here on ASF hardware (see http://nagoya.apache.org/jira/ for a preview).
 
 I've to admit that I really like it and especially as a patch-tracking  
 tool
 it could be useful, at least IMHO.
 
 Btw, Scarab (http://nagoya.apache.org/scarab/issues) is also available  
 but I
 doubt that it is used by a single project ;-)

My main requirement is that the bug tracking system be fully-accessible
through email. Having a full web interface is great, but not at the
expense of usable offline replies to bug reports.

(Do either of these bug tracking systems support this?)

-aaron


Re: the wheel of httpd-dev life is surely slowing down, solutions please

2003-11-12 Thread Aaron Bannert
On Thu, Nov 13, 2003 at 02:17:01AM +0100, Erik Abele wrote:
 My main requirement is that the bug tracking system be fully-accessible
 through email. Having a full web interface is great, but not at the
 expense of usable offline replies to bug reports.
 
 Okay, I can understand that.
 
 (Do either of these bug tracking systems support this?)
 
 Hmm, don't know, but I don't think so...
 Is bugzilla able to be fully controlled through email?

Nope, but it sucks for other reasons too. ;)

We used to use gnats until someone suggested we try Bugzilla.
I suggested today that we might look into using the homegrown
bug tracking system used by debian. It is email based with a web
interface.

-aaron


Re: Segfaults in 2.0.47 (worker)

2003-10-15 Thread Aaron Bannert
On Wed, Oct 15, 2003 at 07:44:57PM +0100, Colm MacCarthaigh wrote:
 gdb backtrace;
 
 #0  0x080872ab in server_main_loop (remaining_children_to_start=Cannot
 access memory at address 0x9
 ) at worker.c:1579
1579        ap_wait_or_timeout(&exitwhy, &status, &pid, pconf);
 (gdb) bt full
 #0  0x080872ab in server_main_loop (remaining_children_to_start=Cannot
 access memory at address 0x9
 ) at worker.c:1579
 child_slot = 135027216
 exitwhy = Cannot access memory at address 0xffe9
 (gdb) 
 
 can't trace back any further. Can't see anything immediately
 obvious in the source, or the diffs from HEAD. Not running any crazy 
 patches, mine or otherwise ;)
 
 Running on Linux, i386. I'm about to go delving into this (starting
 with trying HEAD from 2.0), but in the meantime is there something
 I'm missing ? Are there any outstanding known issues ? 

It looks like your stack is getting blown away somehow, since gdb
is unable to read local variables.

What modules are you running? Is there anything in the error logs?
What version of linux are you running (uname -a)? Are you running
a stock kernel?

-aaron


Re: Segfaults in 2.0.47 (worker)

2003-10-15 Thread Aaron Bannert
On Wed, Oct 15, 2003 at 07:44:57PM +0100, Colm MacCarthaigh wrote:
 
 One of our servers is getting truly hammered today (5,000 simultaneous
 clients is not unusual right now), so I've been tinkering with worker
 instead of prefork. It's not doing nice things for me :(
 
 The master (running-as-root) httpd is segfaulting after a few minutes, but 
 the children are staying around (for all the good that does ;) :
 
 wait4(-1, 0xb620, WNOHANG|WUNTRACED, NULL) = 0
 select(0, NULL, NULL, NULL, {1, 0}) = 0 (Timeout)
 wait4(-1, 0xb620, WNOHANG|WUNTRACED, NULL) = 0
 select(0, NULL, NULL, NULL, {1, 0}) = 0 (Timeout)
 wait4(-1, 0xb620, WNOHANG|WUNTRACED, NULL) = 0
 select(0, NULL, NULL, NULL, {1, 0}) = 0 (Timeout)
 fork()  = 29289
 wait4(-1, 0xb620, WNOHANG|WUNTRACED, NULL) = 0
 select(0, NULL, NULL, NULL, {1, 0}) = 0 (Timeout)
 --- SIGSEGV (Segmentation fault) ---

You don't by any chance have NEED_WAITPID defined somewhere, do you?

I found a place that looks like a bug that could definitely cause
this, but only if you have NEED_WAITPID defined. Take a look
at server/mpm_common.c around line 233 (search for the call to
reap_children()). The return value is overwriting the pointer to the
apr_proc_t struct, and then right after we call apr_sleep() (which lines
up with the call to select() in your trace above) we get a segfault.

Could you send me your httpd binary and the core file?

In the mean time, if it turns out you do have NEED_WAITPID defined,
try this patch:

Index: server/mpm_common.c
===
RCS file: /home/cvs/httpd-2.0/server/mpm_common.c,v
retrieving revision 1.102.2.4
diff -u -u -r1.102.2.4 mpm_common.c
--- server/mpm_common.c 15 May 2003 20:28:18 -  1.102.2.4
+++ server/mpm_common.c 15 Oct 2003 20:27:53 -
@@ -230,7 +230,7 @@
 }
 
 #ifdef NEED_WAITPID
-    if ((ret = reap_children(exitcode, status)) > 0) {
+    if ((rv = reap_children(exitcode, status)) > 0) {
 return;
 }
 #endif


-aaron


Re: running on embedded devices

2003-10-06 Thread Aaron Bannert
On Mon, Oct 06, 2003 at 03:06:26PM -0500, Wood, Bryan wrote:
 Has anyone used this on the arm platform? 

Used what?

-aaron


Re: Apache won't start - shared memory problem

2003-10-05 Thread Aaron Bannert
On Fri, Oct 03, 2003 at 06:47:16PM +0200, Graham Leggett wrote:
 Is there no way that some cleanup process (whether it involves deleting 
 a file, or talking to the kernel, doesn't matter) can be done before an 
 attempt is made to create the shared memory segment?

The problem with doing that is you don't know if there already is an
apache instance running on the box when you blow away the memory segment.

-aaron


Re: FLOOD_1_1_RC1 tagged.

2003-09-12 Thread Aaron Bannert
On Tuesday, September 9, 2003, at 04:18  PM, Jacek Prucia wrote:
I have just tagged the tree with FLOOD_1_1_RC1. Release tarballs (made 
with
apr and apr-util HEAD) are available for testing here:

http://cvs.apache.org/~jacekp/release/
RC1 builds cleanly on my Gentoo box (with openssl and without) and all 
example
files report no errors at all. Will do a few more builds (Solaris, 
FreeBSD)
tomorrow before casting a vote. Anyway looks like a strong candidate 
for GA.

I would like to ask other RM's to take a closer look at RC1 tarballs. 
 I might have
goofed something up and have absolutely no idea about it :)
Did you mean to include the full source to apr and apr-util in the flood
source tree?
-aaron


Re: FLOOD_1_1_RC1 tagged.

2003-09-12 Thread Aaron Bannert
On Tuesday, September 9, 2003, at 04:18  PM, Jacek Prucia wrote:
I would like to ask other RM's to take a closer look at RC1 tarballs. 
 I might have
goofed something up and have absolutely no idea about it :)
The tarball builds and runs great on Darwin 6.6 (Mac OS X 10.2.6), good 
work!

Here's my +1
(I'm fine with distributing the apr and apr-util source until apr 1.0 
gets
released, but after that we should just say which officially released 
version
of APR we depend on.)

-aaron


Re: cvs commit: httpd-test/flood/build rules.mk.in

2003-09-12 Thread Aaron Bannert
I think I wrote that stuff, and the first time you run make it
will complain about a missing .deps file, but once that file is
built you won't get the complaints any longer. I just didn't
take the time to figure out how to do it in a way that didn't
complain about the missing file.
-aaron
On Thursday, September 11, 2003, at 11:57  PM, [EMAIL PROTECTED] 
wrote:

jerenkrantz2003/09/11 23:57:29
  Modified:floodCHANGES
   flood/build rules.mk.in
  Log:
  Axe the .deps stuff for now.  I can't figure out how I intended this 
to
  work.

  Revision  ChangesPath
  1.51  +2 -0  httpd-test/flood/CHANGES
  Index: CHANGES
  ===
  RCS file: /home/cvs/httpd-test/flood/CHANGES,v
  retrieving revision 1.50
  retrieving revision 1.51
  diff -u -u -r1.50 -r1.51
  --- CHANGES   8 Sep 2003 00:04:49 -   1.50
  +++ CHANGES   12 Sep 2003 06:57:29 -  1.51
  @@ -1,5 +1,7 @@
   Changes since 1.0:
  +* Remove .deps code that is just broken for now.  [Justin 
Erenkrantz]
  +
   * Added configversion attribute (root element flood) which ties 
config
 file structure with certain flood version and warns user if there 
is a
 conflict.  [Jacek Prucia]


  1.4   +2 -2  httpd-test/flood/build/rules.mk.in
  Index: rules.mk.in
  ===
  RCS file: /home/cvs/httpd-test/flood/build/rules.mk.in,v
  retrieving revision 1.3
  retrieving revision 1.4
  diff -u -u -r1.3 -r1.4
  --- rules.mk.in   3 Feb 2003 17:11:00 -   1.3
  +++ rules.mk.in   12 Sep 2003 06:57:29 -  1.4
  @@ -179,7 +179,7 @@
   local-depend: x-local-depend
   	if test `echo $(srcdir)/*.c` != $(srcdir)'/*.c'; then \
  -		$(CC) -MM $(ALL_CPPFLAGS) $(ALL_INCLUDES) $(srcdir)/*.c | sed 's/\.o:/.lo:/' > $(builddir)/.deps || true;   \
  +		$(CC) -MM $(ALL_CPPFLAGS) $(ALL_INCLUDES) $(srcdir)/*.c | sed 's/\.o:/.lo:/' > $(top_builddir)/.deps || true;   \
   	fi

   local-clean: x-local-clean
  @@ -252,7 +252,7 @@
   #
   # Dependencies
   #
  -include $(builddir)/.deps
  +#include $(top_builddir)/.deps
   .PHONY: all all-recursive install-recursive local-all 
$(PHONY_TARGETS) \
   	shared-build shared-build-recursive local-shared-build \





Re: Apache 2.1 Alpha Release

2003-08-23 Thread Aaron Bannert
If making a release wasn't so complicated, I'd do one right now.

Where can one find the most recently updated RM instructions these days?

-aaron

On Friday, August 22, 2003, at 11:07  PM, Paul Querna wrote:

An Alpha Type Release from HEAD has been discussed several times in 
the past
months.  It seems no one is against such a release.  Just a Quick 
Refresh:

June 20:
Once mod_ssl works again, I'd support doing a 2.1 release. - Justin
Erenkrantz  -
http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=105606838827549&w=2
May 14: All replies in this thread supported a 2.1 release -
http://marc.theaimsgroup.com/?t=10528898783&r=1&w=2
There are even a couple more before those.

So. Are we going to do a 2.1 release anytime soon?  Everyone can agree, 
but no
one does it.



Re: Semaphore bug

2003-08-23 Thread Aaron Bannert
You've simply exhausted your systemwide (or per-user soft) limits for
that resource. Find out what type of lock you're using and increase
the limit.
-aaron

On Friday, August 22, 2003, at 05:04  AM, Pablo Yaggi wrote:

Hi,
	I noticed the semaphore bug last night; is anybody working on it? I saw
it is the last one in the bug reports.
	The problem is apache leaves too many semaphores open, so it refuses to
start with this kind of message (in my case)

[crit] (28)No space left on device: mod_rewrite: could not create 
rewrite_log_lock

	There is plenty of space left on disk, and if you check ipcs -s you'll
notice a lot of apache's semaphores.
	

cu Pablo

	



Re: Strangely long poll() times

2003-08-22 Thread Aaron Bannert
Can you tell us more about the operating systems and the hardware 
(drivers,
network cards, etc)?

-aaron

On Friday, August 22, 2003, at 04:59  PM, Jim Whitehead wrote:
Teng Xu, a graduate student working with me, has been developing a 
WebDAV
performance testing client. Some of the results he received were a bit
puzzling -- small PROPFIND operations were taking far longer (appx. 
40-45ms)
than we had expected. So, we started looking into the components of 
this
time on the server side. We initially suspected the problem might be 
XML
parsing, but quickly found this not to be the case.
[...]



Re: httpd 1.3 running processes mem usage

2003-08-14 Thread Aaron Bannert
On Wednesday, August 6, 2003, at 10:33  AM, Austin Gonyou wrote:

If I have a Linux server using httpd 1.3 and has 1GB ram and has 1012
httpd processes. When I look at that process's ram stats, I have a
vmSize of say 7032KB, vmRSS of 2056KB, and  vmEXE of 2072KB.
Is the size I'm most concerned with when looking at per/process memory
utilization going to be vmSize? We're trying to tune our servers a bit
better until we're ready to move to httpd2, which will sometime at the
beginning of 2004. But help around this diagnosis would be appreciated.
This is one of the main features of httpd-2.0's worker MPM. That is,
significantly reduced memory usage. It is likely that when you switch
to 2.0 you'll see an order of magnitude reduction is memory usage for
the same number of concurrent requests.
-aaron



Re: flood: responsescript patch

2003-08-12 Thread Aaron Bannert
On Monday, August 11, 2003, at 04:26  PM, Justin Erenkrantz wrote:
--On Sunday, August 10, 2003 23:24:04 +0200 Jacek Prucia 
[EMAIL PROTECTED] wrote:

This probably belongs in contrib/patches. It is a quick'n'dirty hack I
did a few days ago, to simulate an applet making a network connection.
Basically it allows for something like this:
<url responsescript="/path/to/script.py">http://www.example.com/</url>
I wouldn't see an issue just placing this in the main code rather than 
as a separate patch.  Just place in the documentation that the use of 
this attribute may distort the timings.
I agree. +1
This patch looks cool, but there's one little problem. If the response 
body is bigger
than the internal IO buffers on your system, you'll block trying to 
write. If the
script fails to read or fails to read enough of the body, then flood 
and the script will
deadlock. A better (but more complicated) way to do this is with a 
pollset, so flood
can poll on reading and writing from the script's stdout and stdin.

Hmm.  If we did this *after* the close is done, would we still have a 
problem with the timing?  I don't think we would, but I could be 
wrong. (Perhaps we're freeing the response buffer at that time.  I 
forget.)
There may be cases where we'd want to keep the connection to the server
open while the script processes, but I can't think of any. We could 
have it default
to run the script after close, and then later if someone comes up with 
a use
case, we could add a flag or something.

-aaron


Re: config.nice should be installed

2003-07-24 Thread Aaron Bannert
On Thursday, July 24, 2003, at 01:31  PM, Astrid Keßler wrote:

It would be a big help to our users if config.nice was installed by a 
make
install.
This is a really good idea. +1
I like this too. +1

Where should it be installed? $prefix/lib maybe?

-aaron



Re: flood docs -- take 2

2003-06-30 Thread Aaron Bannert
On Sunday, June 29, 2003, at 06:18  PM, Jacek Prucia wrote:
Please have another look at:
http://cvs.apache.org/~jacekp/manual/
This is actually what I'm going to commit tomorrow. It has bugs, empty 
places,
but at least mentions every element/attribute available (at least I 
hope so).
Looks like it is a good starting point.
From the parts I looked at, it looks great. :)
while preparing documentation I was stuck with one simple thing: What 
is HTTP
Server project policy with CHANGES file? Is it:

a) hand edited by separate cvs commits (changed the code? edit 
CHANGES!)
b) autogenerated from cvs log

Some of my useful commits (auth support, baseurl) aren't in CHANGES. 
Gotta fix
that before release.
We hand edit the CHANGES file, so go ahead and add anything you feel
the user should know about. (Big awesome doc additions like this 
definitely
belong in there.)

-aaron


Re: Finding race conditions/deadlock using Stanford Checker

2003-06-29 Thread Aaron Bannert
On Friday, June 27, 2003, at 11:08  AM, Ken Ashcraft wrote:
Have race conditions and deadlock been a problem in the past?  How
likely is it that there are race condition and deadlock bugs hiding in
the current source?
Race Conditions and Deadlocks are an issue both in the server and
in modules. It's still possible that some exist in the server, but
if we've done our job the big ones are gone.
Who are the developers who could answer my is this a race condition
questions?
Anyone on this list (or on the [EMAIL PROTECTED] list). If you think
you've found a race condition, please just post it to the appropriate
list so that we can all discuss it. Any help you can provide would
be appreciated.
Is there any documentation about locks in the server?  Where they are
used?  How they are used?  What do they protect?
They are implemented in APR. There are different types, depending on
what you want to protect and how you want to protect them. Take a
look at the locks/ subdirectory (srclib/apr/locks in the httpd tarball)
and in the 
srclib/apr/include/apr_{thread,process,global}_{mutex,rwlock,cond}.h
header files for the best documentation. (I gave a talk on this at
ApacheCon last year, but I haven't put up the materials yet. One of
these days I'll get around to it though...)

What files should I be looking at?  Which use locks?  Which contain the
locking functions?
Many files in the httpd source tree call the various APR locking 
functions.
Just run grep over the whole tree.

Are there any absolute rules about locks (i.e. all global variables 
must
be protected by locks, orderings of lock acquisition)?
In general there aren't rules like that. We try to architect the system
in such a way to avoid locks at all (for example, we have a shared 
memory
scoreboard that contains the status of each child process, but because
of the way that shmem segment is accessed we don't need locks.) The 
times
when they are necessary are when data will be lost or corrupted if some
form of mutual exclusion weren't used.
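
For example, the scoreboard trick boils down to a single-writer rule
(toy sketch, slot layout hypothetical): each child writes only to its
own slot, so readers may see a slightly stale value but never a torn
one, and no mutex is needed:

typedef struct {
    volatile int status;  /* written only by the owning child */
} demo_slot;

static demo_slot *slots;  /* points into the shared memory segment */

/* child side: single writer for its own slot, so no lock is needed */
static void child_set_status(int my_slot, int new_status)
{
    slots[my_slot].status = new_status;
}

/* parent/reader side: tolerates a momentarily stale value */
static int read_status(int slot)
{
    return slots[slot].status;
}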

-aaron



Re: flood - Can it go faster?

2003-06-27 Thread Aaron Bannert
On Tuesday, June 24, 2003, at 12:35  PM, Jeremy Brown wrote:
[...]
generators running FreeBSD 5.1
You might try other operating systems. I don't know how well FreeBSD
compares, but I have had good luck on both Solaris (x86) and Linux 2.4.
Are there any tips to get more throughput on flood?
Depends on what you want to model. Do you want keepalive enabled? Do you
want to simulate 7000 simultaneous connections, 7000 requests, or 7000
users (depending on your definition of a user)?
Or any methods to optimize flood to run faster?
There are probably many things in flood that could be better tuned.
We'll happily accept patches if you have any. :)
Also is there a way to turn off logging completely? The new server will
have logging enable to catch all the information. My job is to hit it 
as
hard as I possibly can with my resources. And to only show something if
there is a failure?
You could implement your own hooks in flood that simply disable output 
(or count
and don't display the results until the end).

I am using shell scripts of flood since megaconglomerate is not 
operating
at this time from what I have tested. Are there any recommendations for
doing this more effectively? Or is megaconglomerate really working?
Getting megaconglomerate working will probably have to wait until
someone who needs it implements it.
-aaron


Re: Volunteering to be RM

2003-06-26 Thread Aaron Bannert
Anyone can RM, and they don't even have to announce it before
they have a tarball made.
-aaron

On Wednesday, June 25, 2003, at 03:40  PM, Sander Striker wrote:

 I'm volunteering to be RM for 2.0.47.  If no one objects
 I'm going to try to get out a stable tag within the next two
 weeks.  My aim is to release on July 7th, a Monday.



Re: cvs commit: httpd-2.0/docs/manual/vhosts details.html.en details.html.ko.euc-kr examples.html.en examples.html.ko.euc-kr fd-limits.html.en fd-limits.html.ja.jis fd-limits.html.ko.euc-kr index.html.de index.html.en index.html.ja.jis index.html.ko.euc-kr ip-based.html.en ip-based.html.ko.euc-kr mass.html.en mass.html.ko.euc-kr name-based.html.en name-based.html.ko.euc-kr

2003-05-30 Thread Aaron Bannert
On Thursday, May 29, 2003, at 12:30  PM, [EMAIL PROTECTED] wrote:

  Modified:docs/manual Tag: APACHE_2_0_BRANCH bind.html.en
bind.html.ja.jis bind.html.ko.euc-kr
cgi_path.html.en cgi_path.html.ja.jis
cgi_path.html.ko.euc-kr configuring.html.en
[...]

I've been trying to hold my tongue on this, but I can't any longer.
Is there something we can do about gigantic commits like this? This
commit generated a 660KB email! Nobody is reading these emails. I'm
already opposed to having automatically generated content committed
to CVS, but when I'm receiving gigantic emails like this it's too much.
-aaron



Re: Removing Server: header

2003-03-27 Thread Aaron Bannert
On Thursday, March 27, 2003, at 01:36  AM, Sander Striker wrote:
People, why, oh why, do we need to muck with the Server header?  Who 
cares?  Attacks will
be run regardless of Server headers (and if not, they will as soon as 
we start removing them).
So, in the end, what good does it do?
I totally agree. Apache is proud to be Apache. If you don't want to
display the Server header, guess what, you have the source!!
-aaron



Re: 2 off-topic questions

2003-03-27 Thread Aaron Bannert
On Thursday, March 27, 2003, at 12:55  PM, Ian Holsman wrote:

1. does anyone know of a tool which can replay http traffic caught via 
tcpdump, and possibly
change the hostname/ip# of the host.
tcptrace (www.tcptrace.org I think) can take tcpdump output
and produce a file for each direction of each TCP session
found in the dump. You can use netcat to shove it at whomever
you want at that point.
-aaron



Re: cvs commit: httpd-2.0/server/mpm/worker pod.c

2003-03-20 Thread Aaron Bannert
On Thursday, March 20, 2003, at 01:50  PM, [EMAIL PROTECTED] wrote:

wrowe   2003/03/20 13:50:41

  Modified:.CHANGES
   modules/loggers mod_log_config.c
   modules/mappers mod_rewrite.c
   server   log.c mpm_common.c
   server/mpm/worker pod.c
  Log:
SECURITY:  Eliminated leaks of several file descriptors to child
processes, such as CGI scripts.
[...]

    apr_sockaddr_info_get(&(*pod)->sa, ap_listeners->bind_addr->hostname,
                          APR_UNSPEC, ap_listeners->bind_addr->port, 0, p);

   +/* close these before exec. */
   +apr_file_unset_inherit((*pod)->pod_in);
   +apr_file_unset_inherit((*pod)->pod_out);
  +
   return APR_SUCCESS;
The PODs in the worker MPM are getting closed and the parent is then
unable to kill its children when it needs to (don't you love how
morbid that sounds?). I see one of these every second in the error log:
[Thu Mar 20 18:09:25 2003] [warn] (32)Broken pipe: write pipe_of_death

Since the unset_inherit() is being called from the open_logs hook, it's
happening in the parent process, which means the descriptors get closed
out from under the children when they fork. We need to unset the inherit
*after* we are running in the child.
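Something like this rough sketch is what I mean (hypothetical code; it
assumes the pod structure with the pod_in/pod_out members from the patch
above, and that it gets called from the child's init path rather than
from open_logs):

    /* Run after the fork, in the child, so the parent's copies of the
     * pipe of death stay open and usable for killing children. */
    static void pod_child_init(ap_pod_t *pod)
    {
        /* keep these fds from leaking into exec'd children (CGIs) */
        apr_file_unset_inherit(pod->pod_in);
        apr_file_unset_inherit(pod->pod_out);
    }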
-aaron



Re: flood proxy (was: [STATUS] (flood))

2003-03-13 Thread Aaron Bannert
On Thursday, March 13, 2003, at 01:46  AM, Jacek Prucia wrote:
* Write robust tool (using tethereal perhaps) to take network dumps
  and convert them to flood's XML format.
  Status: Justin volunteers.  Aaron had a script somewhere that is
  a start.
Wouldn't it be better if we used a proxy instead of all-purpose network
software? I was thinking about a mod_proxy_flood.so with some function
attached to request forwarding, and a simple response handler, which
could allow users to:

1. enable/disable flood proxy
2. edit gathered urls (only delete for now, later full edit)
3. dump flood file
Not a bad idea. Things like tethereal and tcptrace are, like you say,
definitely all-purpose, but for just collecting URLs and timestamps
that sounds like a good idea to me.
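Something like this hypothetical skeleton would be the URL-gathering
half (a sketch only: the module name and log format are made up, and a
real version would append to its own dump file rather than the error
log):

    #include "httpd.h"
    #include "http_config.h"
    #include "http_log.h"

    static int flood_log_transaction(request_rec *r)
    {
        /* only record requests that actually went through the proxy */
        if (r->proxyreq != PROXYREQ_NONE) {
            ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, r,
                          "flood-url %" APR_TIME_T_FMT " %s",
                          r->request_time, r->unparsed_uri);
        }
        return DECLINED;
    }

    static void flood_register_hooks(apr_pool_t *p)
    {
        ap_hook_log_transaction(flood_log_transaction, NULL, NULL,
                                APR_HOOK_MIDDLE);
    }

    module AP_MODULE_DECLARE_DATA proxy_flood_module =
    {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, /* no per-dir/per-server config */
        NULL,                   /* no directives yet */
        flood_register_hooks
    };

The dump-to-flood-XML and enable/disable parts would sit on top of
that.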
-aaron


Re: cvs commit: httpd-2.0 CHANGES

2003-03-12 Thread Aaron Bannert
On Wednesday, March 12, 2003, at 12:51  PM, Oden Eriksson wrote:
Anyway..., despite this enormous disregard, or whatever it is called,
Mandrake Linux will be the first distribution shipping apache2 (my
packaging), _plus_ a whole bunch of other server-like stuff I have
packaged.
Um, hasn't RH8.0 been shipping Apache 2 for a few months in the
standard distro?
-aaron



Re: cvs commit: httpd-2.0 CHANGES

2003-02-28 Thread Aaron Bannert
On Thursday, February 27, 2003, at 07:56  AM, Greg Ames wrote:
Most of us have committed bug fixes with the best of intentions which 
were not quite complete or had unintended side effects.  I certainly 
have - the deferred write pool stuff in core_output_filter comes to 
mind.  Letting the fixes age a bit in the unstable tree reduces the 
probability of unpleasant surprises happening in the stable tree, at 
least for mainline code.  We can be extra diligent about 
reviewing/testing changes that we know are not mainline.
I see the need for letting patches age in the unstable tree, but
I think we could do that without having to vote on each and
every change.
-aaron



Re: mod_authn_mysql

2003-02-19 Thread Aaron Bannert
On Wednesday, February 19, 2003, at 11:32  AM, Cliff Woolley wrote:


On Wed, 19 Feb 2003, Dietz, Phil E. wrote:


For 2.1 and beyond, I'd rather see something more generic.  Like a
mod_authn_odbc or a mod_authn_soap.


Ironic, since I was just about to say I'm not so keen on adding more
modules to 2.0, and that if it's going in I'd rather have it in 2.1.


Yeah, I'd rather see fewer core modules rather than more.

-0 for inclusion.

-aaron




Re: cvs commit: httpd-2.0/modules/filters config.m4

2003-02-17 Thread Aaron Bannert
Being enabled or disabled by default isn't solely based on
being stable. I personally don't think mod_deflate should
be enabled by default, especially given its track record
of browser incompatibility/bugs.

-aaron


On Sunday, February 16, 2003, at 06:47  PM, [EMAIL PROTECTED] 
wrote:

jerenkrantz    2003/02/16 18:47:11

  Modified: modules/filters config.m4
  Log:
  Switch mod_deflate to 'most' status now.  It's stable enough and will
  disable itself if zlib isn't found.




Re: cvs commit: httpd-2.0/modules/filters config.m4

2003-02-17 Thread Aaron Bannert
On Monday, February 17, 2003, at 06:45  AM, Joshua Slive wrote:



On Mon, 17 Feb 2003, Aaron Bannert wrote:


Being enabled or disabled by default isn't solely based on
being stable. I personally don't think mod_deflate should
be enabled by default, especially given its track record
of browser incompatibility/bugs.


In my opinion, --enable-modules=most should give you essentially all
the modules that can be compiled unless there is a very good reason to
exclude one.  That is what people expect.  But just because the module
is compiled doesn't mean it should be activated in the default config
file.

The current build/install system will add a LoadModule line for
each DSO that it installs.

-aaron




Re: cvs commit: httpd-2.0/modules/filters config.m4

2003-02-17 Thread Aaron Bannert

On Monday, February 17, 2003, at 07:30  AM, Joshua Slive wrote:

The current build/install system will add a LoadModule line for
each DSO that it installs.


But it won't actually activate the filter, will it?


Does mod_deflate need something special to be enabled,
other than the LoadModule line?
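(From a quick look I believe the answer is yes: the LoadModule line
only makes the filter available, it doesn't insert it. Something along
these lines, as a sketch:

    LoadModule deflate_module modules/mod_deflate.so
    # compress, for example, HTML responses on the way out
    AddOutputFilterByType DEFLATE text/html

An AddOutputFilterByType, or an explicit SetOutputFilter DEFLATE, is
what actually activates it.)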

-aaron




Re: Strange Behavior of Apache 2.0.43 on SPARC MP system

2003-02-12 Thread Aaron Bannert

On Wednesday, February 12, 2003, at 01:08  PM, Min Xu wrote:

We will soon have two new 8P Sun servers equipped with
Gigabit ethernet coming to our lab. With that, I should be able to
experiment with separate machines.


I'd be very interested in seeing updated results from a multi-machine
networked test. Feel free to post them here once you have them.

-aaron




Re: cvs commit: apr configure.in

2003-02-11 Thread Aaron Bannert
Sometimes it's useful to have comments in the configure cruft, but yeah
the dnl's should stay.
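To spell out the difference (this is plain m4 behavior, nothing
httpd-specific):

    dnl this line is discarded by m4 and never reaches configure
    # this line is copied through into the generated configure script

So # is only worth using when you actually want the comment to survive
into the configure cruft; everything else should stay dnl.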

-aaron


On Tuesday, February 11, 2003, at 08:51  AM, Justin Erenkrantz wrote:

--On Tuesday, February 11, 2003 3:36 PM + [EMAIL PROTECTED] wrote:


jorton  2003/02/11 07:36:56

  Modified: .        configure.in
  Log:
  Capitalize some Posixes and s/dnl/# a comment.


Why are you removing the dnl's?  Comments should only be dnl, not #.
Adding the #'s doesn't help anyone.  They'll just be added to the
unintelligible cruft of configure.  -- justin




Re: Strange Behavior of Apache 2.0.43 on SPARC MP system

2003-02-11 Thread Aaron Bannert
A couple questions and a couple observations:

1) How many network cards are in the server? (How many unique
   interrupt handlers for the network?)

2) How much context switching was going on, and how impacted
   were the mutexes (see mpstat)?

3) Was the workload uniformly distributed across the CPUs?

I've seen large MP systems completely fail to distribute the
workload, and suffer because of it. My current theory for why
this occurs is that the interrupt load is overwhelming the CPU
where that interrupt is being serviced. This, combined with the
relatively small amount of userspace work that must be done to
push a small static file out, is wreaking havoc on the scheduler
(and it's probably more dramatic if your system has sendfile
support enabled).
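(For question 2, on Solaris I would start with something like:

    mpstat 5

and compare the intr/ithr columns across CPUs for interrupt load, smtx
for hot mutexes, and csw/icsw for context switching. One CPU with intr
far above the rest would fit the interrupt-overload theory. Column
names are from memory, so check the mpstat man page.)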

-aaron


On Tuesday, February 11, 2003, at 09:34  AM, Min Xu wrote:


Thanks to Owen Garrett, who reminded me that I should have
mentioned a little more details about the client configuration.

My modified SURGE client fetches web pages on an object basis;
each object contains multiple pages. For each object the client
uses HTTP/1.1 keepalive, but not pipelining. After an object
has been fetched completely, the client closes the connection
to the server and reopens a new one for the next object.

The delay time I added was between web pages, so the
client goes to sleep for a little while with the connection
still open.

FYI, I have attached the client code. Anyone have a wild guess
on what's going on? ;-) Thanks a lot!




