Re: run_tests and test_clean

2004-09-16 Thread Stas Bekman
Geoffrey Young wrote:
hi all...
in TestMM.pm we have this:
  test_clean :
...
  run_tests : test_clean
...
test :: pure_all run_tests test_clean
in a test suite with *lots* of files, test_clean takes forever.  however,
experience has shown me that test_clean is only really required when you
make changes to a configuration file or have autogenerated files you want to
regenerate, making it a waste in common development circumstances where a
single test file is run over and over again.
I'd like to suggest that test_clean _not_ be a prerequisite for run_tests,
but instead shuffle the 'test' target around a bit.  with the attached
patch, you can save *lots* of cycles by doing something similar to the
following:
$ make test TEST_FILES=t/foo.t
$ make run_tests TEST_FILES=t/foo.t
$ make run_tests TEST_FILES=t/foo.t
...
in my current situation, the first (with t/TEST -clean) runs in 45 seconds,
while the second (and subsequent) iteration runs in 29 seconds.  definitely
a plus for me :)
Are you sure that there will be no side effects? If yes, then +1
Why not just run t/TEST directly instead of using make? I never use make 
when developing tests. So it was never an issue for me.

--
__
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com


Re: run_tests and test_clean

2004-09-16 Thread Geoffrey Young

 Are you sure that there will be no side effects? 

I'm pretty sure.  it's not like test_clean won't be called under normal
'make test' circumstances, it's just that now there would be a way around it
if the user thinks it is beneficial.

 If yes, then +1

ok, cool.  if things blow up then I'll revert it, but I'm not seeing any
problems on my end.

 
 Why not just run t/TEST directly instead of using make? I never use make
 when developing tests. So it was never an issue for me.

t/TEST works fine for smallish things, but make is nice for automatically
computing and including library directories spread out all over the place.

--Geoff


Re: cvs commit: httpd-test/perl-framework/Apache-Test/lib/Apache TestMM.pm

2004-09-16 Thread Geoffrey Young


[EMAIL PROTECTED] wrote:
 geoff   2004/09/15 16:55:31
 
   Modified:perl-framework/Apache-Test Changes
perl-framework/Apache-Test/lib/Apache TestMM.pm
   Log:
   run_tests make target no longer invokes t/TEST -clean, making it
   possible to save a few development cycles when a full cleanup is
   not required between runs.

TestMB.pm probably should be updated to be consistent as well.

David, I think the attached patch is right but I'll leave TestMB entirely up
to you.

--Geoff
? mb.patch
? quick.patch
Index: lib/Apache/TestMB.pm
===
RCS file: /home/cvs/httpd-test/perl-framework/Apache-Test/lib/Apache/TestMB.pm,v
retrieving revision 1.7
diff -u -r1.7 TestMB.pm
--- lib/Apache/TestMB.pm	5 Sep 2004 00:11:30 -	1.7
+++ lib/Apache/TestMB.pm	15 Sep 2004 23:56:48 -
@@ -53,7 +53,6 @@
 
 sub ACTION_run_tests {
 my $self = shift;
-$self->depends_on('test_clean');
 # XXX I'd love to do this without t/TEST.
 $self->do_system($self->perl, $self->_bliblib,
  $self->localize_file_path($self->apache_test_script),
@@ -71,8 +70,8 @@
 sub ACTION_test {
 my $self = shift;
 $self->depends_on('code');
-$self->depends_on('run_tests');
 $self->depends_on('test_clean');
+$self->depends_on('run_tests');
 }
 
 sub _cmodules {


Re: run_tests and test_clean

2004-09-16 Thread Stas Bekman
Geoffrey Young wrote:
Are you sure that there will be no side effects? 

I'm pretty sure.  it's not like test_clean won't be called under normal
'make test' circumstances, it's just that now there would be a way around it
if the user thinks it is beneficial.
Oh, now I remember where it was used. In the httpd-dev test suite: 'make
test' was always nuking .so files and ssl stuff, which was a pain in the
arse. But I don't know why Doug programmed it this way. It wasn't us who
introduced that change.

If yes, then +1

ok, cool.  if things blow up then I'll revert it, but I'm not seeing any
problems on my end.
sure
Why not just run t/TEST directly instead of using make? I never use make
when developing tests. So it was never an issue for me.

t/TEST works fine for smallish things, but make is nice for automatically
computing and including library directories spread out all over the place.
+1


[STATUS] (perl-framework) Wed Sep 15 23:45:40 EDT 2004

2004-09-16 Thread Rodent of Unusual Size
httpd-test/perl-framework STATUS:   -*-text-*-
Last modified at [$Date: 2002/03/09 05:22:48 $]

Stuff to do:
* finish the t/TEST exit code issue (ORed with 0x2C if
  framework failed)

* change existing tests that frob the DocumentRoot (e.g.,
  t/modules/access.t) to *not* do that; instead, have
  Makefile.PL prepare appropriate subdirectory configs
  for them.  Why?  So t/TEST can be used to test a
  remote server.

* problems with -d perl mode, doesn't work as documented
  Message-ID: [EMAIL PROTECTED]
  Date: Sat, 20 Oct 2001 12:58:33 +0800
  Subject: Re: perldb

Tests to be written:

* t/apache
  - simulations of network failures (incomplete POST bodies,
chunked and unchunked; missing POST bodies; slooow
client connexions, such as taking 1 minute to send
1KiB; ...)

* t/modules/autoindex
  - something seems possibly broken with inheritance on 2.0

* t/ssl
  - SSLPassPhraseDialog exec:
  - SSLRandomSeed exec:


[STATUS] (flood) Wed Sep 15 23:45:35 EDT 2004

2004-09-16 Thread Rodent of Unusual Size
flood STATUS:   -*-text-*-
Last modified at [$Date: 2003/07/01 20:55:12 $]

Release:

1.0:   Released July 23, 2002
milestone-03:  Tagged January 16, 2002
ASF-transfer:  Released July 17, 2001
milestone-02:  Tagged August 13, 2001
milestone-01:  Tagged July 11, 2001 (tag lost during transfer)

RELEASE SHOWSTOPPERS:

* Everything needs to work perfectly

Other bugs that need fixing:

* I get a SIGBUS on Darwin with our examples/round-robin-ssl.xml
  config, on the second URL. I'm using OpenSSL 0.9.6c 21 dec 2001.
  
* iPlanet sends Content-length - there is a hack in there now
  to recognize it.  However, all HTTP headers need to be normalized
  before checking their values.  This isn't easy to do.  Grr.

* OpenSSL 0.9.6
  Segfaults under high load.  Upgrade to OpenSSL 0.9.6b.
   Aaron says: I just found a big bug that might have been causing
   this all along (we weren't closing ssl sockets).
   How can I reproduce the problem you were seeing
   to verify if this was the fix?

* SEGVs when /tmp/.rnd doesn't exist are bad. Make it configurable
  and at least bomb with a good error message. (See Doug's patch.)
   Status: This is fixed, no?

* If APR has disabled threads, flood should as well. We might want
  to have an enable/disable parameter that does this also, providing
  an error if threads are desired but not available.

* flood needs to clear pools more often. With a long running test
  it can chew up memory very quickly. We should just bite the bullet
  and create/destroy/clear pools for each level of our model:
  farm, farmer, profile, url/request-cycle, etc.

* APR needs to have a unified interface for ephemeral port
  exhaustion, but apparently Solaris and Linux return different
  errors at the moment. Fix this in APR then take advantage of
  it in flood.

* The examples/analyze-relative scripts fail when there are fewer
  than 5 unique URLs.

Other features that need writing:

* More analysis and graphing scripts are needed

* Write robust tool (using tethereal perhaps) to take network dumps 
  and convert them to flood's XML format.
Status: Justin volunteers.  Aaron had a script somewhere that is
a start. Jacek is working on a Mozilla application, codename
Flood URL bag (much like Live HTTP Headers) and small
HTTP proxy.

* Get chunked encoding support working.
Status: Justin volunteers.  He got sidetracked by the httpd
implementation of input filtering and never finished 
this.  This is a stopgap until apr-serf is completed.

* Maybe we should make randfile and capath runtime directives that
  come out of the XML, instead of autoconf parameters.

* We are using apr_os_thread_current() and getpid() in some places
  when what we really want is a GUID. The GUID will be used to
  correlate raw output data with each farmer. We may wish to print
  a unique ID for each of farm, farmer, profile, and url to help in
  postprocessing.

* We are using strtol() in some places and strtoll() in others.
  Pick one (Aaron says strtol(), but he's not sure).

* Validation of responses (known C-L, specific strings in response)
   Status: Justin volunteers

* HTTP error codes (i.e. teach it about 302s)
   Justin says: Yeah, this won't be with round_robin as implemented.  
Need a linked list-based profile where we can insert 
new URLs into the sequence.

* Farmer (Single-thread, multiple profiles)
   Status: Aaron says: If you have threads, then any Farmer can be
   run as part of any Farm. If you don't have threads, you can
   currently only run one Farmer named Joe right now (this will
   be changed so that if you don't have threads, flood will attempt
   to run all Farmers in serial under one process).

* Collective (Single-host, multiple farms)
  This is a number of Farms that have been fork()ed into child processes.

* Megaconglomerate (Multiple hosts each running a collective)
  This is a number of Collectives running on a number of hosts, invoked
  via RSH/SSH or maybe even some proprietary mechanism.

* Other types of urllists
a) Random / Random-weighted
b) Sequenced (useful with cookie propagation)
c) Round-robin
d) Chaining of the above strategies
  Status: Round-robin is complete.

* Other types of reports
  Status: Aaron says: simple reports are functional. Justin added
  a new type that simply prints the approx. timestamp when
  the test was run, and the result as OK/FAIL; it is called
  easy reports (see flood_easy_reports.h).
  Furthermore, 

[Patch] Caching apxs queries

2004-09-16 Thread Philippe M. Chiasson
I have been investigating ways to make test runs faster, and I found that
one of the biggest bottlenecks on my test machine was useless repetitive calls
to apxs. The following patch caches apxs queries.
For instance, timings for a mod_perl test run:
+ Before:
Files=218, Tests=2512, 1032 wallclock secs (594.94 cusr + 431.33 csys = 1026.27 CPU)
+ After:
Files=218, Tests=2512, 246 wallclock secs (196.69 cusr + 33.55 csys = 230.24 CPU)
--

Philippe M. Chiasson m/gozer\@(apache|cpan|ectoplasm)\.org/ GPG KeyID : 88C3A5A5
http://gozer.ectoplasm.org/ F9BF E0C2 480E 7680 1AE5 3631 CB32 A107 88C3A5A5
Index: perl-framework/Apache-Test/lib/Apache/TestConfig.pm
===
RCS file: /home/cvs/httpd-test/perl-framework/Apache-Test/lib/Apache/TestConfig.pm,v
retrieving revision 1.244
diff -u -I$Id -r1.244 TestConfig.pm
--- perl-framework/Apache-Test/lib/Apache/TestConfig.pm	5 Sep 2004 16:30:30 -	1.244
+++ perl-framework/Apache-Test/lib/Apache/TestConfig.pm	16 Sep 2004 17:46:54 -
@@ -1620,20 +1620,27 @@
 sub apxs {
 my($self, $q, $ok_fail) = @_;
 return unless $self->{APXS};
-local @ENV{ qw(PATH IFS CDPATH ENV BASH_ENV) };
-my $devnull = devnull();
-my $apxs = shell_ready($self->{APXS});
-my $val = qx($apxs -q $q 2>$devnull);
-chomp $val if defined $val; # apxs post-2.0.40 adds a new line
-unless ($val) {
-if ($ok_fail) {
-return '';
+my $val;
+unless (exists $self->{_apxs}{$q}) {
+local @ENV{ qw(PATH IFS CDPATH ENV BASH_ENV) };
+my $devnull = devnull();
+my $apxs = shell_ready($self->{APXS});
+$val = qx($apxs -q $q 2>$devnull);
+chomp $val if defined $val; # apxs post-2.0.40 adds a new line
+if ($val) {
+$self->{_apxs}{$q} = $val;
 }
-else {
-warn "APXS ($self->{APXS}) query for $q failed\n";
+unless ($val) {
+if ($ok_fail) {
+return '';
+}
+else {
+warn "APXS ($self->{APXS}) query for $q failed\n";
+return $val;
+}
 }
 }
-$val;
+$self->{_apxs}{$q};
 }
 
 sub pop_dir {


Re: [Patch] Caching apxs queries

2004-09-16 Thread Stas Bekman
Philippe M. Chiasson wrote:
I have been investigating ways to make test runs faster, and I found that
one of the biggest bottlenecks on my test machine was useless repetitive calls
to apxs. The following patch caches apxs queries.

For instance, timings for a mod_perl test run:
+ Before:
Files=218, Tests=2512, 1032 wallclock secs (594.94 cusr + 431.33 csys = 1026.27 CPU)

+ After:
Files=218, Tests=2512, 246 wallclock secs (196.69 cusr + 33.55 csys = 230.24 CPU)
gozer++, +1


Re: [PATCH] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Greg Ames
looks good.
Bill Stoddard wrote:
 static int remove_url(const char *key) 
[...]
-if (obj) {
-obj->cleanup = 1;
+if (!apr_atomic_dec(&obj->refcount)) {
+/* For performance, cleanup cache object after releasing the lock */
+cleanup = 1;
this could be:
  cleanup = !apr_atomic_dec(&obj->refcount);
might save a conditional branch.  Is it as clear?  IMO yes, because you have to 
understand what !apr_atomic_dec() means either way.  no strong opinion.

Greg


Re: [PATCH] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Greg Ames
Bill Stoddard wrote:
+ * memcacache_cache_free is a callback that is only invoked during by a 
thread
s/memcacache/memcache/
during ?? by
+ * has been ejected from the cache. decrement the refcount and if the 
refcount drop
+ * to 0, cleanup the cache object.
s/drop/drops/
Greg



Re: [PATCH] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Greg Ames
Bill Stoddard wrote:
-/* If obj->complete is not set, the cache update failed and the
- * object needs to be removed from the cache then cleaned up.
- */
-if (!obj->complete) {
-if (sconf->lock) {
-apr_thread_mutex_lock(sconf->lock);
-}
-/* Remember, objects marked for cleanup are, by design, already
- * removed from the cache. remove_url() could have already
- * removed the object from the cache (and set obj->cleanup)
- */
-if (!obj->cleanup) {
-cache_remove(sconf->cache_cache, obj);
-obj->cleanup = 1;
-}
-if (sconf->lock) {
-apr_thread_mutex_unlock(sconf->lock);
-}
-}
-
I don't understand why the equivalent of this section is no longer needed.  Was 
this dead code before?

Greg


[PATCH] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Jean-Jacques Clar


Are there any reasons why that part of decrement_refcount() is not 
needed anymore?

@@ -267,31 +271,9 @@
 {
     cache_object_t *obj = (cache_object_t *) arg;
-    /* If obj->complete is not set, the cache update failed and the
-     * object needs to be removed from the cache then cleaned up.
-     */
-    if (!obj->complete) {
-        if (sconf->lock) {
-            apr_thread_mutex_lock(sconf->lock);
-        }
-        /* Remember, objects marked for cleanup are, by design, already
-         * removed from the cache. remove_url() could have already
-         * removed the object from the cache (and set obj->cleanup)
-         */
-        if (!obj->cleanup) {
-            cache_remove(sconf->cache_cache, obj);
-            obj->cleanup = 1;
-        }
-        if (sconf->lock) {
-            apr_thread_mutex_unlock(sconf->lock);
-        }
-    }
-


Re: AddOutputFilterByType oddness

2004-09-16 Thread Joe Orton
On Wed, Aug 25, 2004 at 02:40:39PM +0200, Graham Leggett wrote:
 Justin Erenkrantz wrote:
 Ultimately, all that is needed is a call to ap_set_content_type() before 
 any bytes are written to the client to get AddOutputFilterByType to 
 work. Perhaps with the recent momentum behind mod_proxy work, someone 
 could investigate that and get mod_proxy fixed.
 
 ap_set_content_type() is called on line 769 of proxy_http.c:

But ap_add_output_filters_by_type() explicitly does nothing for a
proxied request.  Anyone know why?  AddOutputFilterByType DEFLATE
text/plain text/html seems to work as expected here for a forward proxy
with this applied: maybe I'm missing something fundamental...

--- server/core.c~  2004-08-31 09:16:56.0 +0100
+++ server/core.c   2004-09-16 16:48:09.0 +0100
@@ -2875,11 +2875,10 @@
 conf = (core_dir_config *)ap_get_module_config(r->per_dir_config,
&core_module);
 
-/* We can't do anything with proxy requests, no content-types or if
- * we don't have a filter configured.
+/* We can't do anything with no content-type or if we don't have a
+ * filter configured.
  */
-if (r->proxyreq != PROXYREQ_NONE || !r->content_type ||
-!conf->ct_output_filters) {
+if (!r->content_type || !conf->ct_output_filters) {
 return;
 }
 



Re: cvs commit: apache-1.3/src/os/tpf/samples uri_delims.txt test_char.txt sample_env.txt sample_mak.txt linkhttp.jcl loadset.jcl

2004-09-16 Thread David McCreedy




 mccreedy 2004/09/15 16:45:18

   src/modules/proxy mod_proxy.h

   Index: mod_proxy.h
   ===
   RCS file: /home/cvs/apache-1.3/src/modules/proxy/mod_proxy.h,v
   retrieving revision 1.62
   retrieving revision 1.63
   diff -u -r1.62 -r1.63
   --- mod_proxy.h	17 Feb 2004 21:52:22 -	1.62
   +++ mod_proxy.h	15 Sep 2004 23:45:18 -	1.63
   @@ -213,7 +213,11 @@
    struct per_thread_data {
        struct hostent hpbuf;
   +#ifdef TPF
   +    u_int ipaddr;
   +#else
        u_long ipaddr;
   +#endif
        char *charpbuf[2];
    };

 64-bit fix for TPF?  looks like u_long is a hold-over from the very old
 days and we should use in_addr_t wherever it exists, and define
 in_addr_t appropriately on other platforms

Since I can't #ifdef for the typedef of in_addr_t, I can only think of
three approaches to make this code generic:

1) Change the "u_int ipaddr;" line to "in_addr_t ipaddr;" and let other
platforms with in_addr_t add their platforms to the #ifdef TPF.

2) Ditch the #ifdef TPF and change the original "u_long ipaddr;" line to
"in_addr_t ipaddr;" for all platforms. This would force the ones that
don't have in_addr_t to change (by either typedef'ing in_addr_t to a
u_long or by adding an #ifdef in mod_proxy.h themselves).

3) Add a flag like "AP_HAVE_IN_ADDR_T" to ap_config.h or some other
header file and use that to determine if "in_addr_t ipaddr;" should be
used in mod_proxy.h.

I'd prefer #1, unless there's a 4th option.

-David McCreedy


Re: [PATCH] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Jean-Jacques Clar


Oops, sorry I was a few minutes slower than you, Greg.
The overall patch looks good and much cleaner.
I would love to move my office next to both of you...

I will put the new code on my mp box and make sure
it does not break.
Thanks,
JJ

 [EMAIL PROTECTED] 09/16/04 9:50 AM 
Bill Stoddard wrote:
 -/* If obj->complete is not set, the cache update failed and the
 - * object needs to be removed from the cache then cleaned up.
 - */
 -if (!obj->complete) {
 -if (sconf->lock) {
 -apr_thread_mutex_lock(sconf->lock);
 -}
 -/* Remember, objects marked for cleanup are, by design, already
 - * removed from the cache. remove_url() could have already
 - * removed the object from the cache (and set obj->cleanup)
 - */
 -if (!obj->cleanup) {
 -cache_remove(sconf->cache_cache, obj);
 -obj->cleanup = 1;
 -}
 -if (sconf->lock) {
 -apr_thread_mutex_unlock(sconf->lock);
 -}
 -}
 -
I don't understand why the equivalent of this section is no longer
needed.  Was this dead code before?
Greg


Bundling APR in 2.2

2004-09-16 Thread Paul Querna
In most of the Apache 2.0.XX releases, we have been using a CVS snapshot
of APR and APR-Util.

I would like to make it an official policy that for the 2.2 cycle, we
will never use a CVS snapshot of APR.

I believe we should still bundle APR and APR-Util with HTTPd, but we
should only use the released versions of each.

This will keep it easy for end users, because APR will still be bundled
in our Source Distribution.

It will also make life much easier for System Packagers.  If we only use
released versions, APR and APR-Util can be easily placed into separate
packages.  This will become more important as more standalone
applications use APR.  

Comments/Objections?

-Paul Querna



Re: [PATCH] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Bill Stoddard

  [EMAIL PROTECTED] 09/16/04 9:50 AM 
Bill Stoddard wrote:
  -/* If obj->complete is not set, the cache update failed and the
  - * object needs to be removed from the cache then cleaned up.
  - */
  -if (!obj->complete) {
  -if (sconf->lock) {
  -apr_thread_mutex_lock(sconf->lock);
  -}
  -/* Remember, objects marked for cleanup are, by design, already
  - * removed from the cache. remove_url() could have already
  - * removed the object from the cache (and set obj->cleanup)
  - */
  -if (!obj->cleanup) {
  -cache_remove(sconf->cache_cache, obj);
  -obj->cleanup = 1;
  -}
  -if (sconf->lock) {
  -apr_thread_mutex_unlock(sconf->lock);
  -}
  -}
  -
I don't understand why the equivalent of this section is no longer 
needed.  Was
this dead code before?
Ooops, still need this code (or something very similar).
Bill


Re: Bundling APR in 2.2

2004-09-16 Thread Nick Kew
On Thu, 16 Sep 2004, Paul Querna wrote:

 In most of the Apache 2.0.XX releases, we have been using a CVS snapshot
 of APR and APR-Util.

 I would like to make it an official policy that for the 2.2 cycle, we
 will never use a CVS snapshot of APR.

That makes httpd releases (relatively frequent) hostage to APR releases
(extremely infrequent) when we need a bugfix in CVS.  Is that acceptable?

 I believe we should still bundle APR and APR-Util with HTTPd, but we
 should only use the released versions of each.

Release version ABI yes.  Release version - only if that dependency
can be fixed (i.e. APR folks can be hurried along where necessary).


 It will also make life much easier for System Packagers.  If we only use
 released versions, APR and APR-Util can be easily placed into separate
 packages.  This will become more important as more standalone
 applications use APR.

Keeping binary-compatibility (ABI) is sufficient for that, innit?

-- 
Nick Kew


Re: Bundling APR in 2.2

2004-09-16 Thread Garrett Rooney
Nick Kew wrote:
On Thu, 16 Sep 2004, Paul Querna wrote:

In most of the Apache 2.0.XX releases, we have been using a CVS snapshot
of APR and APR-Util.
I would like to make it an official policy that for the 2.2 cycle, we
will never use a CVS snapshot of APR.

That makes httpd releases (relatively frequent) hostage to APR releases
(extremely infrequent) when we need a bugfix in CVS.  Is that acceptable?
So have one of the many APR developers on the HTTPD team roll an APR 
release if you need a fix.  There is absolutely zero reason that APR 
releases need to be infrequent.  Don't make one problem (totally random 
versions of APR being distributed out into the wild) worse in an attempt 
to avoid another problem (infrequent APR releases).  Just fix the second 
problem and be done with it.

Having HTTPD ship CVS snapshots of APR is inexcusable.  There's no 
reason that HTTPD is special, it should be using released versions just 
like everyone else.

-garrett


Re: Bundling APR in 2.2

2004-09-16 Thread Paul Querna
On Thu, 2004-09-16 at 18:13 +0100, Nick Kew wrote:
 On Thu, 16 Sep 2004, Paul Querna wrote:
 
  In most of the Apache 2.0.XX releases, we have been using a CVS snapshot
  of APR and APR-Util.
 
  I would like to make it an official policy that for the 2.2 cycle, we
  will never use a CVS snapshot of APR.
 
 That makes httpd releases (relatively frequent) hostage to APR releases
 (extremely infrequent) when we need a bugfix in CVS.  Is that acceptable?

If there is a significant bug in APR, it should be fixed in a release by
them.  Regarding APR's release frequency, it is my hope that it will
become more frequent as a possible side effect of only bundling released
versions.

However, APR is still their own project, and it is up to them when they
do a release.

  I believe we should still bundle APR and APR-Util with HTTPd, but we
  should only use the released versions of each.
 
 Release version ABI yes.  Release version - only if that dependency
 can be fixed (i.e. APR folks can be hurried along where necessary).

I guess we should ask them if they can make releases more often :-)

 
  It will also make life much easier for System Packagers.  If we only use
  released versions, APR and APR-Util can be easily placed into separate
  packages.  This will become more important as more standalone
  applications use APR.
 
 Keeping binary-compatibility (ABI) is sufficient for that, innit?

No.  We have a backwards binary compatibility policy.  This means
existing functions and public structures will not change.  It does allow
new functions, structures or interfaces to be added.  When these things
are added, and then used in Apache, we must then wait for them to appear
in an APR Release.

-Paul Querna





[PATCH - update ] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Bill Stoddard
Adds back the obj->complete check in decrement_refcount(). Makes changes recommended by Greg.

Index: mod_mem_cache.c
===
RCS file: /home/cvs/httpd-2.0/modules/experimental/mod_mem_cache.c,v
retrieving revision 1.88.2.9
diff -u -r1.88.2.9 mod_mem_cache.c
--- mod_mem_cache.c	26 Aug 2004 16:59:44 -	1.88.2.9
+++ mod_mem_cache.c	16 Sep 2004 17:38:42 -
@@ -13,6 +13,18 @@
  * limitations under the License.
  */
 
+/*
+ * obj->refcount should only be incremented under protection of the sconf->lock
+ * obj->refcount is decremented under protection of sconf->lock when the object
+ *   is removed from the cache.
+ * obj->refcount can be atomically decremented w/o protection of the sconf->lock
+ *   by worker threads.
+ * obj->refcount is incremented by one when the object is in the cache and once
+ *   per thread referencing the object. An object in the cache with no threads
+ *   accessing it will have a refcount of 1. When refcount drops to 0, the object
+ *   should be garbage collected by whichever thread detects the refcount dropping
+ *   to 0.
+ */
 #define CORE_PRIVATE
 #include "mod_cache.h"
 #include "cache_pqueue.h"
@@ -142,24 +154,20 @@
 return obj->key;
 }
 /** 
- * callback to free an entry 
- * There is way too much magic in this code. Right now, this callback
- * is only called out of cache_insert() which is called under protection
- * of the sconf->lock, which means that we do not (and should not)
- * attempt to obtain the lock recursively. 
+ * memcache_cache_free()
+ * memcache_cache_free is a callback that is only invoked by a thread 
+ * running in cache_insert(). cache_insert() runs under protection
+ * of sconf->lock.  By the time this function has been entered, the cache_object
+ * has been ejected from the cache. decrement the refcount and if the refcount drops
+ * to 0, cleanup the cache object.
  */
 static void memcache_cache_free(void *a)
 {
 cache_object_t *obj = (cache_object_t *)a;
 
-/* Cleanup the cache object. Object should be removed from the cache
- * now. Increment the refcount before setting cleanup to avoid a race 
- * condition. A similar pattern is used in remove_url()
+/* Decrement the refcount to account for the object being ejected
+ * from the cache. If the refcount is 0, free the object.
  */
-apr_atomic_inc(&obj->refcount);
-
-obj->cleanup = 1;
-
 if (!apr_atomic_dec(&obj->refcount)) {
 cleanup_cache_object(obj);
 }
@@ -269,29 +277,28 @@
 
 /* If obj->complete is not set, the cache update failed and the
  * object needs to be removed from the cache then cleaned up.
+ * The garbage collector may have ejected the object from the
+ * cache already, so make sure it is really still in the cache
+ * before attempting to remove it.
  */
 if (!obj->complete) {
+cache_object_t *tobj = NULL;
 if (sconf->lock) {
 apr_thread_mutex_lock(sconf->lock);
 }
-/* Remember, objects marked for cleanup are, by design, already
- * removed from the cache. remove_url() could have already
- * removed the object from the cache (and set obj->cleanup)
- */
-if (!obj->cleanup) {
+tobj = cache_find(sconf->cache_cache, obj->key);
+if (tobj == obj) {
 cache_remove(sconf->cache_cache, obj);
-obj->cleanup = 1;
+apr_atomic_dec(&obj->refcount);
 }
 if (sconf->lock) {
 apr_thread_mutex_unlock(sconf->lock);
 }
-}
+} 
 
-/* Cleanup the cache object */
+/* If the refcount drops to 0, cleanup the cache object */
 if (!apr_atomic_dec(&obj->refcount)) {
-if (obj->cleanup) {
-cleanup_cache_object(obj);
-}
+cleanup_cache_object(obj);
 }
 return APR_SUCCESS;
 }
@@ -312,10 +319,7 @@
 }
 obj = cache_pop(co->cache_cache);
 while (obj) { 
-/* Iterate over the cache and clean up each entry */  
-/* Free the object if the recount == 0 */
-apr_atomic_inc(&obj->refcount);
-obj->cleanup = 1;
+/* Iterate over the cache and clean up each unreferenced entry */
 if (!apr_atomic_dec(&obj->refcount)) {
 cleanup_cache_object(obj);
 }
@@ -415,7 +419,6 @@
 apr_atomic_set(&obj->refcount, 1);
 mobj->total_refs = 1;
 obj->complete = 0;
-obj->cleanup = 0;
 obj->vobj = mobj;
 /* Safe cast: We tested sconf->max_cache_object_size above */
 mobj->m_len = (apr_size_t)len;
@@ -425,7 +428,7 @@
  * Note: Perhaps we should wait to put the object in the
  * hash table when the object is complete?  I add the object here to
  * avoid multiple threads attempting to cache the same content only
- * to discover at the very end that only one of them will suceed.
+ * to discover at the very end that only one of them will succeed.
  * Furthermore, adding the cache object to the table at the end could

Re: Bundling APR in 2.2

2004-09-16 Thread Graham Leggett
Paul Querna wrote:
In most of the Apache 2.0.XX releases, we have been using a CVS snapshot
of APR and APR-Util.
I would like to make it an official policy that for the 2.2 cycle, we
will never use a CVS snapshot of APR.
+1.
I believe we should still bundle APR and APR-Util with HTTPd, but we
should only use the released versions of each.
This will keep it easy for end users, because APR will still be bundled
in our Source Distribution.
I think this will cause more problems than it solves, I would prefer to 
see APR no longer bundled with httpd v2.2 onwards.

APR is now GA, and is available as a standalone package. It is also used 
by other packages, like subversion etc. If APR is installed as standard 
on a system, which APR does httpd use? The system one? The bundled one?

Getting httpd v2.1 built as an RPM is currently broken, due to the 
bundled APR clashing with a system APR. APR is its own project; there 
is no longer any need to bundle it with httpd.

Regards,
Graham
--




Re: [PATCH2] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Greg Ames
Jean-Jacques Clar wrote:
To remove the 2 noted possible race situations, in addition to another
possible one in store_body(), 
ooops!  I missed that one
  we could add the following 2 recursive functions,
that have to be called under the protection of the lock (it is now for
all 3 cases of atomic_set32()):
That would work I think.  But I prefer the idea of using an extra increment to 
the refcount to represent the reference from the hash table rather than having a 
separate cleanup bit.  Then all the former cleanup bit manipulations become 
apr_atomic_inc() or _dec() and the code is simpler.

I'm going to see if I can make 2.0.51 blow up, then try Bill's patch with 2 
small tweaks - one to eliminate an if statement, the other to restore the code 
in decrement_refcount() that removes an incomplete object from the cache 
(reworked of course).

Greg



Re: Bundling APR in 2.2

2004-09-16 Thread Joe Orton
On Thu, Sep 16, 2004 at 07:41:00PM +0200, Graham Leggett wrote:
 Paul Querna wrote:
 I believe we should still bundle APR and APR-Util with HTTPd, but we
 should only use the released versions of each.
 
 This will keep it easy for end users, because APR will still be bundled
 in our Source Distribution.

+1 to shipping the sources in the tarball.  In principle I like the idea
of only shipping apr from a release tag, but it adds a significant
burden if there needs to be an httpd security release for an apr issue. 
I think we need to demonstrate that we can ship APR releases more often
than once a year first.

 I think this will cause more problems than it solves, I would prefer to 
 see APR no longer bundled with httpd v2.2 onwards.
 
 APR is now GA, and is available as a standalone package. It is also used 
 by other packages, like subversion etc. If APR is installed as standard 
 on a system, which APR does httpd use? The system one? The bundled one?

This is precisely dictated by APR_FIND_APR and how it's used.  Not
bundling the apr and apr-util sources just takes away user choice, and I
don't see any justification for doing that.

 Getting httpd v2.1 built as an RPM is currently broken, due to the 
 bundled APR clashing with a system APR. APR is its own project; there 
 is no longer any need to bundle it with httpd.

That is purely a packaging issue and need not influence what goes in the
tarball.  You can easily make httpd.spec work with either an installed
or bundled apr based on an rpmbuild --with-blah flag, if you like. Or
just pick one behaviour and make it work.
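Joe's rpmbuild suggestion could look roughly like the following spec fragment. This is a hypothetical sketch: the flag name "system_apr" and the %configure paths are illustrative assumptions, not taken from the thread (Joe's own placeholder was "--with-blah"):

```spec
# Hypothetical httpd.spec fragment: select system vs bundled APR
# via "rpmbuild --with system_apr" (flag name is an illustration).
%bcond_with system_apr

%build
%if %{with system_apr}
# link against the APR already installed on the system
%configure --with-apr=%{_prefix}
%else
# fall back to the copy bundled in srclib/apr
%configure --with-apr=./srclib/apr
%endif
```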

joe


[PATCH - update ] Re: Seg fault: race conditions in mod_mem_cache.c

2004-09-16 Thread Jean-Jacques Clar


+1
Tried different configurations in the lab and the changes are acting as expected.

Thanks to Greg and Bill S. for fixing the problem.
JJ [EMAIL PROTECTED] 09/16/04 11:39 AM 
Adds back the obj->complete check in decrement_refcount. Makes changes recommended by Greg.


Re: Bundling APR in 2.2

2004-09-16 Thread Graham Leggett
Joe Orton wrote:
+1 to shipping the sources in the tarball.  In principle I like the idea
of only shipping apr from a release tag, but it adds a significant
burden if there needs to be an httpd security release for an apr issue. 
I think we need to demonstrate that we can ship APR releases more often
than once a year first.
I don't see any reason why APR would not ship more than once a year. The 
big step was v0.9 to v1.0; the step to v1.0.1 and onwards is a 
no-brainer, especially for a security release.

This is precisely dictated by APR_FIND_APR and how it's used.  Not
bundling the apr and apr-util sources just takes away user choice, and I
don't see any justification for doing that.
APR_FIND_APR allows you to install v0.x and v1.x simultaneously and 
choose between the two, but it will not help if APR v1.x.bar is 
installed and used by subversion, while APR v1.y.foo is used by httpd.

Regards,
Graham
--




Moving httpd-2.0 to Subversion

2004-09-16 Thread Paul Querna
The Original Proposal was in March of this year:
http://marc.theaimsgroup.com/?t=10791831443&r=2&w=2

+1 Votes:
  Tom May
  Justin Erenkrantz
  André Malo
  Erik Abele
  Jim Jagielski
  Bill Stoddard
  Joe Orton

-1 Votes:
  Aaron Bannert

Aaron said:
 This will, at least for now, raise the bar to entry for contributors.

While Subversion had recently become 1.0.0 back in March, I believe
that Subversion Clients are available for most platforms now, and that
the barrier to entry for subversion is not any higher than CVS for new
contributors.

Is this still a valid concern?

A large portion of the thread was taken up by discussions about the
various security features of subversion.  I believe that it was
generally agreed that Subversion is better than CVS in this respect, and
further concerns should be taken to [EMAIL PROTECTED]

What are the barriers to moving to Subversion? I believe we might as
well do it now, before we start on a new stable branch (2.2).  Time for
another Vote?

-Paul Querna



Re: Moving httpd-2.0 to Subversion

2004-09-16 Thread Sander Striker
From: Paul Querna [EMAIL PROTECTED]
Sent: Friday, September 17, 2004 7:34 AM

Hi,

 The Original Proposal was in March of this year:
 http://marc.theaimsgroup.com/?t=10791831443&r=2&w=2

 +1 Votes:
   Tom May
   Justin Erenkrantz
   André Malo
   Erik Abele
   Jim Jagielski
   Bill Stoddard
   Joe Orton

I can't believe I am not in this list...

 -1 Votes:
   Aaron Bannert

 Aaron said:
 This will, at least for now, raise the bar to entry for contributors.
[...]

 What are the barriers to moving to Subversion?

Other than making the actual decision?  None.

 I believe we might as
 well do it now, before we start on a new stable branch (2.2).  Time for
 another Vote?

Here's my +1.  I should note though that we voted to move 1.3
ages ago, and I managed to drop the ball on that one.  When I
finally got around to processing the 1.3 converted CVS
repository, Fitz had removed it from his homedir.

cvs2svn has seen a release as well, and has been improved a lot
since March.  Infrastructure is now totally able to run the
cvs2svn conversion itself.  I fully intend to start on the 1.3
conversion tonight; this time without dropping the ball...


Sander