[STATUS] (flood) Wed Mar 27 23:45:24 EST 2002

2002-03-28 Thread Rodent of Unusual Size
flood STATUS:   -*-text-*-
Last modified at [$Date: 2002/01/17 01:09:45 $]

Release:

milestone-04:  In development
milestone-03:  Tagged January 16, 2002
ASF-transfer:  Released July 17, 2001
milestone-02:  Tagged August 13, 2001
milestone-01:  Tagged July 11, 2001 (tag lost during transfer)

RELEASE SHOWSTOPPERS:

* Everything needs to work perfectly

Other bugs that need fixing:

* DNS lookup failures in any urllist cause segfault.
   Justin says: Wow.  Why?
   Aaron says: Is this still happening? It's been awhile.

* Malformed URLs cause a segfault. When we fix this, we should
  make sure to print out a helpful message when this occurs.
  (This is not something that can be detected with a validating
  XML parser, unfortunately.)

* iPlanet sends Content-length - there is a hack in there now
  to recognize it.  However, all HTTP headers need to be normalized
  before checking their values.  This isn't easy to do.  Grr.

* OpenSSL 0.9.6
  Segfaults under high load.  Upgrade to OpenSSL 0.9.6b.
   Aaron says: I just found a big bug that might have been causing
   this all along (we weren't closing ssl sockets).
   How can I reproduce the problem you were seeing
   to verify if this was the fix?

* SEGVs when /tmp/.rnd doesn't exist are bad. Make it configurable
  and at least bomb with a good error message. (See Doug's patch.)
   Status: This is fixed, no?

* If APR has disabled threads, flood should as well. We might want
  to have an enable/disable parameter that does this also, providing
  an error if threads are desired but not available.

* flood needs to clear pools more often. With a long running test
  it can chew up memory very quickly. We should just bite the bullet
  and create/destroy/clear pools for each level of our model:
  farm, farmer, profile, url/request-cycle, etc.

Other features that need writing:

* Write robust tool (using tethereal perhaps) to take network dumps 
  and convert them to flood's XML format.
   Status: Justin volunteers.  Aaron had a script somewhere that is
   a start.

* Get chunked encoding support working.
   Status: Justin volunteers.  He got sidetracked by the httpd
   implementation of input filtering and never finished this.
   This is a stopgap until apr-serf is completed.

* Maybe we should make randfile and capath runtime directives that
  come out of the XML, instead of autoconf parameters.

* Migrate all locks to APR's new lock API.

* We are using apr_os_thread_current() and getpid() in some places
  when what we really want is a GUID. The GUID will be used to
  correlate raw output data with each farmer. We may wish to print
  a unique ID for each of farm, farmer, profile, and url to help in
  postprocessing.

* We are using strtol() in some places and strtoll() in others.
  Pick one (Aaron says strtol(), but he's not sure).

* Validation of responses (known C-L, specific strings in response)
   Status: Justin volunteers

* HTTP error codes (ie. teach it about 302s)
   Justin says: Yeah, this won't be with round_robin as implemented.
   Need a linked list-based profile where we can insert
   new URLs into the sequence.

* Farmer (Single-thread, multiple profiles)
   Status: Aaron says: If you have threads, then any Farmer can be
   run as part of any Farm. If you don't have threads, you can
   currently only run one Farmer named Joe right now (this will
   be changed so that if you don't have threads, flood will attempt
   to run all Farmers in serial under one process).

* Collective (Single-host, multiple farms)
  This is a number of Farms that have been fork()ed into child processes.

* Megaconglomerate (Multiple hosts each running a collective)
  This is a number of Collectives running on a number of hosts, invoked
  via RSH/SSH or maybe even some proprietary mechanism.

* Other types of urllists
a) Random / Random-weighted
b) Sequenced (useful with cookie propagation)
c) Round-robin
d) Chaining of the above strategies
  Status: Round-robin is complete.

* Other types of reports
  Status: Aaron says: simple reports are functional. Justin added
  a new type that simply prints the approx. timestamp when
  the test was run, and the result as OK/FAIL; it is called
  easy reports (see flood_easy_reports.h).
  Furthermore, simple_reports and easy_reports both print
  out the current requesting URI line.

Documentation that needs writing:

* Documentation?  What documentation? RTFS?
Status: Justin volunteers.  He'll probably use Anakia 

[STATUS] (perl-framework) Wed Mar 27 23:45:29 EST 2002

2002-03-28 Thread Rodent of Unusual Size
httpd-test/perl-framework STATUS:   -*-text-*-
Last modified at [$Date: 2002/03/09 05:22:48 $]

Stuff to do:
* finish the t/TEST exit code issue (ORed with 0x2C if
  framework failed)

* change existing tests that frob the DocumentRoot (e.g.,
  t/modules/access.t) to *not* do that; instead, have
  Makefile.PL prepare appropriate subdirectory configs
  for them.  Why?  So t/TEST can be used to test a
  remote server.

* problems with -d perl mode, doesn't work as documented
  Message-ID: [EMAIL PROTECTED]
  Date: Sat, 20 Oct 2001 12:58:33 +0800
  Subject: Re: perldb

Tests to be written:

* t/apache
  - simulations of network failures (incomplete POST bodies,
chunked and unchunked; missing POST bodies; slooow
client connexions, such as taking 1 minute to send
1KiB; ...)

* t/modules/autoindex
  - something seems possibly broken with inheritance on 2.0

* t/ssl
  - SSLPassPhraseDialog exec:
  - SSLRandomSeed exec:


Re: 1.3.24 mod_proxy patch: multiple set-cookies fix

2002-03-28 Thread Graham Leggett

Michael Best wrote:

 I'm pretty sure it's not the syntax and it's somehow related to this
 multiple cookie issue, as when testing I get one cookie but not the other.
 
 I have tested 1.3.23 and 1.3.24.
 
 I am going to go test 1.3 CVS now.

The original fix was based on the idea that all duplicated headers were
merged into a single header of comma separated values. Trouble was that
cookies contain stray commas, so separating them out again was broken.

The new fix was to switch back in a routine that was designed to handle
multiple headers without killing duplicates, but which had been switched
with another function in the core that did something similar.

This new fix is in 1.3.25-dev.

Regards,
Graham
-- 
-
[EMAIL PROTECTED]          There's a moon over Bourbon Street tonight...




[PATCH] HTTP proxy, ab, and Host: as a hop-by-hop header?

2002-03-28 Thread Taisuke Yamada


Hi.

I do believe it's been discussed at least once, but I have a
question on Host: header generated by ab(1).

The problem I'm encountering is that ab(1) generates a Host: header
pointing to the proxy server instead of the real destination host.
Due to this behavior, the proxy server (not mod_proxy, BTW) fails
to send a valid HTTP request to a destination webserver using a
name-based virtualhost, as it simply passes a Host: header with its
(the proxy server's) hostname in it.

After some experiments, I found that the ab(1) that comes with 2.0.32
does not behave this way, and simply puts the destination hostname
in the Host: header even when an HTTP proxy is in use. This seems to
be the correct behavior.

# Current ab-2.0.32 cannot interoperate with proxy servers,
# but that's another story... (patch attached below)

I think this is a bug, but I'm not yet completely certain of that,
as a comment left in ab.c shows that this code was included with
some intent. I suspect this was done to test mod_proxy running
on a name-based virtualhost.

# mod_proxy does not have problem with above Host: header, because
# mod_proxy always drops Host: header client had sent.

As it is stated

  - ab is a tool for benchmarking the performance of your
Apache HyperText Transfer Protocol (HTTP) server.

  - Ab does not implement HTTP/1.x fully; instead, it only
accepts some 'expected' forms of responses.

in the manual, I may be plain wrong to expect ab to behave like
other HTTP clients. But is there any possibility of having a
patch accepted if I added a new option to ab so it will behave
more like a standard client?

Since I first made a quick fix to ab.c to make it work with
other proxy servers, I'm attaching it in this email anyway.
There are two patches, one for ab-1.3 and the other for ab-2.0.32.

If there's any chance of getting a new option (like -rfc, analogous to
-ansi in gcc) into ab, I'll probably make one and submit it also.

Best Regards,
--
Taisuke Yamada [EMAIL PROTECTED]
Internet Initiative Japan Inc., Technical Planning Division

* Quick fix to make latest ab-1.3d (or CVS version) generate Host:
  header using destination hostname. Also adds port number.

--- for ab-1.3d --- for ab-1.3d --- for ab-1.3d --- for ab-1.3d ---
--- ab.c.orig   Thu Mar 28 14:32:50 2002
+++ ab.c        Thu Mar 28 14:38:14 2002
@@ -1186,7 +1186,7 @@
          * the proxy - whilst quoting the
          * full URL in the GET/POST line.
          */
-        host = proxyhost;
+        host = hostname;
         connectport = proxyport;
         url_on_request = fullurl;
     }
@@ -1234,7 +1234,7 @@
         sprintf(request, "%s %s HTTP/1.0\r\n"
                 "User-Agent: ApacheBench/%s\r\n"
                 "%s" "%s" "%s"
-                "Host: %s\r\n"
+                "Host: %s:%d\r\n"
                 "Accept: */*\r\n"
                 "%s" "\r\n",
                 (posting == 0) ? "GET" : "HEAD",
@@ -1242,13 +1242,13 @@
                 VERSION,
                 keepalive ? "Connection: Keep-Alive\r\n" : "",
                 cookie, auth,
-                host, hdrs);
+                host, port, hdrs);
     }
     else {
         sprintf(request, "POST %s HTTP/1.0\r\n"
                 "User-Agent: ApacheBench/%s\r\n"
                 "%s" "%s" "%s"
-                "Host: %s\r\n"
+                "Host: %s:%d\r\n"
                 "Accept: */*\r\n"
                 "Content-length: %d\r\n"
                 "Content-type: %s\r\n"
@@ -1258,7 +1258,7 @@
                 VERSION,
                 keepalive ? "Connection: Keep-Alive\r\n" : "",
                 cookie, auth,
-                host, port, postlen,
                 (content_type[0]) ? content_type : "text/plain", hdrs);
     }

--- for ab-1.3d --- for ab-1.3d --- for ab-1.3d --- for ab-1.3d ---

* Quick fix to make latest ab-2.0.32 (or CVS version) work with
  an HTTP proxy. This prevents ab from dropping the URL in the HTTP
  request line, which resulted in an invalid HTTP request. Also adds
  the port number.

--- for ab-2.0.32 --- ab-2.0.32 --- ab-2.0.32 --- ab-2.0.32 ---
--- ab.c.orig   Thu Mar 28 14:44:10 2002
+++ ab.c        Thu Mar 28 14:57:21 2002
@@ -274,8 +274,7 @@
 apr_port_t connectport;
 char *gnuplot;          /* GNUplot file */
 char *csvperc;          /* CSV Percentile file */
-char url[1024];
-char fullurl[1024];
+char *fullurl;
 int isproxy = 0;
 apr_short_interval_time_t aprtimeout = 30 * APR_USEC_PER_SEC;  /* timeout value */
  /*
@@ -1150,20 +1149,20 @@
         sprintf(request, "%s %s HTTP/1.0\r\n"
                 "User-Agent: ApacheBench/%s\r\n"
                 "%s" "%s" "%s"
-                "Host: %s\r\n"
+                "Host: %s:%d\r\n"
                 "Accept: */*\r\n"
                 "%s" "\r\n",
                 (posting == 0) ? "GET" : "HEAD",
                 (isproxy) ? fullurl : path,
                 AP_SERVER_BASEREVISION,
                 keepalive ? "Connection: Keep-Alive\r\n" : "",
-                cookie, auth, host_field, hdrs);
+                cookie, auth, host_field, port, hdrs);
     }
     else {
         sprintf(request, "POST %s HTTP/1.0\r\n"
                 "User-Agent: 

FreeBSD sendfile

2002-03-28 Thread Igor Sysoev

Hi,

apr_sendfile() for FreeBSD has a workaround for the nbytes != 0 bug,
but this bug has been fixed in CURRENT:

http://www.FreeBSD.org/cgi/cvsweb.cgi/src/sys/kern/uipc_syscalls.c#rev1.103

So I think the code should be the following:

#if __FreeBSD_version < 500029
    for (i = 0; i < hdtr->numheaders; i++) {
        bytes_to_send += hdtr->headers[i].iov_len;
    }
#endif

But this corrects the problem at build time only.
Suppose that someone has built Apache 2 on FreeBSD 4.x. Then he will
upgrade FreeBSD to 5.1 or higher. Sometimes it's not possible
to rebuild Apache, so he would encounter the problem.

So I think that the better way is not to use the FreeBSD 4.x sendfile()
capability to send headers, but to use emulation of header
transmission instead.


Igor Sysoev




connect to listener warning message

2002-03-28 Thread David Hill



Hi all,
 When I run 2.0.32 on Compaq Tru64 and give it a bit of a load
(ok, I whack it good :-), I get warnings similar to the following looping
into the error log file. I am getting one per second, even well after the
load is removed. I do not get any of these until after the load has been
applied for 10 seconds or so.

[Thu Mar 28 13:11:43 2002] [warn] (49)Can't assign requested address: connect to listener

I am guessing that this is coming from the dummy connection in
server/mpm_common.c, but I don't understand enough about what else is
going on to know where to look further.

The server does seem to continue to respond to 
requests.

I tried bumping up the thread count but that did 
not seem to make any difference.

I have not tried the nightly yet to see if that 
makes any difference.

Any suggestions ?

thanks,
 Dave Hill



requesting guidance on converting a mod to 2.0

2002-03-28 Thread David Hill



Hi all,

I am in the process of converting an Apache module
that I have working with Apache 1.3 to work with 2.0. This crufty bit of code
handles the dynamic content portion of Specweb99. I must admit my knowledge of
Apache modules is less than it should be, as my first module work was
converting this one from ISAPI to be a 1.3 Apache module. (BTW, anyone else
playing with Specweb99 out there?)

Things that confuse me in the 2.0 API

1) In my handler routine, I am trying to push a
file, aka ap_send_fd or ap_send_fd_len in Apache 1.3.

I found a modified ap_send_fd
and am using it. What is unclear is that it seems that I should not call
apr_file_close() on a successful send, which seems real strange. Can I assume
that the bucket brigade stuff closes the file for me? The code in mod_asis
(which mirrors ap_send_fd) seems to suggest that is the case.
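For what it's worth, the brigade version of pushing a file looks roughly like this (a sketch only: `fd` and `len` stand for an already-open apr_file_t and its length, and the exact bucket signatures were still shifting during 2.0 development):

```c
/* Sketch: serve an open file through the output filter chain.
 * The file bucket owns fd from here on; the core closes it after
 * the brigade is sent, which is why an explicit apr_file_close()
 * after a successful send would be wrong. */
apr_bucket_brigade *bb = apr_brigade_create(r->pool,
                                            r->connection->bucket_alloc);
apr_bucket *b = apr_bucket_file_create(fd, 0, len, r->pool,
                                       r->connection->bucket_alloc);
APR_BRIGADE_INSERT_TAIL(bb, b);
APR_BRIGADE_INSERT_TAIL(bb,
    apr_bucket_eos_create(r->connection->bucket_alloc));
return ap_pass_brigade(r->output_filters, bb);
```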

2) At the top of the handler routine, I copied from
mod_example a

    if (strcmp(r->handler, "specweb-handler")) {
        return DECLINED;
    }

Without this, I am finding that the handler gets
called for every request, even requests which are outside of the
<Location> directive that enables the handler. Am I missing something,
or is that just how it is now?

thanks,
 Dave Hill





RE: requesting guidance on converting a mod to 2.0

2002-03-28 Thread Ryan Bloom

 I am in the process of converting an apache module that I
 have working with Apache 1.3 to work with 2.0. This crufty bit of code
 handles the dynamic content portion of Specweb99. I must admit my knowledge
 of Apache modules is less than it should be, as my first module work was
 converting this one from ISAPI to be a 1.3 Apache module. (BTW, anyone
 else playing with Specweb99 out there?)

 Things that confuse me in the 2.0 API:

 1) In my handler routine, I am trying to push a file, aka
 ap_send_fd or ap_send_fd_len in Apache 1.3.

 I found a modified ap_send_fd and am
 using it. What is unclear is that it seems that I should not call
 apr_file_close() on a successful send, which seems real strange. Can I
 assume that the bucket brigade stuff closes the file for me? The code in
 mod_asis (which mirrors ap_send_fd) seems to suggest that is the case.

The bucket brigade code will handle closing the file for you, yes.

 2) At the top of the handler routine, I copied from
 mod_example a

     if (strcmp(r->handler, "specweb-handler")) {
         return DECLINED;
     }

 Without this, I am finding that the handler gets called for
 every request, even requests which are outside of the <Location>
 directive that enables the handler. Am I missing something, or is that
 just how it is now?

That's just how it is now.

Ryan


[PATCH] eliminate warning in http_config.h

2002-03-28 Thread sterling

When building with -Werror -Wall, it seems http_config.h bjorks my module
build.  This patch should fix it - any objections?

sterling


Index: include/http_config.h
===
RCS file: /home/cvspublic/httpd-2.0/include/http_config.h,v
retrieving revision 1.95
diff -u -r1.95 http_config.h
--- include/http_config.h   15 Mar 2002 07:37:21 -  1.95
+++ include/http_config.h   28 Mar 2002 21:30:02 -
@@ -184,7 +184,7 @@
 
 #else /* AP_HAVE_DESIGNATED_INITIALIZER */
 
-typedef const char *(*cmd_func) ();
+typedef const char *(*cmd_func) (void);
 
 # define AP_NO_ARGS  func
 # define AP_RAW_ARGS func




Re: [PATCH] Re: mod_include bug(s)?

2002-03-28 Thread Paul J. Reder

That patch seems to solve at least one of the problems that I am seeing,
but I have at least one other problem and a core dump inside
send_parsed_content. I'm currently stepping through, trying to find the
source of the core dump.

I'll let you know what I find.

Paul J. Reder

Brian Pane wrote:

 Here's a patch (against the current CVS head) that addresses the two
 problems I know about:
  * The ctx->tag_length computation in find_end_sequence() was a bit
    broken in cases where there was a false-alarm match on a partial
    "-->"
  * The ap_ssi_get_tag_and_value() function needs to avoid walking off
    the end of the string.  After debugging this some more, I ended up
    using Cliff's original patch.
 
 With this patch, both the current CVS head and the bucket_allocator-patched
 httpd pass all but one of the mod_include tests in httpd-test.  The test
 that reports a failure is the if6.shtml one.  It's expecting three
 instances of [an error occurred... but only two are output.  I believe
 the third one that the test expects is due to an imbalanced <!--#endif-->.
 But based on the code, I wouldn't expect to see the [an error occurred...
 message for this situation.
 
 Cliff and Paul (and anyone else with good test cases for this stuff), can
 you test this patch against your test cases?
 
 Thanks,
 --Brian
 
 
 Paul J. Reder wrote:
 
 Brian,

 Please give me a chance to fix this. I indicated that I was looking
 at this problem. There is no reason to duplicate work. I have identified
 several problems and am working on fixes for them. I should have
 something tested and ready by the end of day on Thursday or Friday
 during the day at the latest.

 Paul J. Reder


 Paul J. Reder wrote:

 Okay, I have recreated at least two problems in include processing, one
 of which results in a core dump. I am in the process of tracking them
 down. It might be tomorrow before I have a patch.

 Paul J. Reder

 Paul J. Reder wrote:

 Brian,

 I'm looking into this right now. I'll let you all know what I find out.

 I have some concerns about the suggested fix. I hope to have a fix
 by this afternoon.

 Paul J. Reder

 Brian Pane wrote:

 Cliff Woolley wrote:

 I've spent the entire evening chasing some wacky mod_include bugs that
 surfaced as I was doing final testing on the bucket API patch.  At first
 I assumed they were my fault, but upon further investigation I think the
 fact that they haven't surfaced until now is a coincidence.  There are
 two problems that I can see -- the if6.shtml and if7.shtml files I
 committed to httpd-test last week to check for the mod_include 1.3 bug
 have turned up quasi-related problems in mod_include 2.0 as well.

 Problem 1:

 When in an #if or #elif or several other kinds of tags,
 ap_ssi_get_tag_and_value() is called from within a while(1) loop that
 continues until that function returns with tag==NULL.  But in the case
 of if6.shtml, ap_ssi_get_tag_and_value() steps right past the end of the
 buffer and never bothers to check and see how long the thing it's
 supposed to be processing actually is.  The following patch fixes it,
 but there could be a better way to do it.  I'm hoping someone out there
 who knows this code better will come up with a better way to do it.

 Index: mod_include.c
 ===================================================================
 RCS file: /home/cvs/httpd-2.0/modules/filters/mod_include.c,v
 retrieving revision 1.205
 diff -u -d -r1.205 mod_include.c
 --- mod_include.c   24 Mar 2002 06:42:14 -  1.205
 +++ mod_include.c   27 Mar 2002 06:41:55 -
 @@ -866,6 +866,11 @@
      int   shift_val = 0;
      char  term = '\0';
 
 +    if (ctx->curr_tag_pos - ctx->combined_tag > ctx->tag_length) {
 +        *tag = NULL;
 +        return;
 +    }
 +
 +


 My only objection to this is that ctx->curr_tag_pos is supposed
 to point to a null-terminated copy of the directive, and all the
 subsequent looping logic in ap_ssi_get_tag_and_value() depends on
 that.  Are we hitting a case where this string isn't null-terminated
 (meaning that the root cause of the problem is somewhere else)?



 *tag_val = NULL;
 SKIP_TAG_WHITESPACE(c);
 *tag = c; /* First non-whitespace character (could be NULL). */


 Problem 2:

 In the case of if7.shtml, for some reason, the null-terminating
 character is placed one character too far forward.  So instead of
 "#endif" you get "#endif\b" or some such garbage:

 ...

 Note the trailing \b in curr_tag_pos that shouldn't be there.

 I'm at a bit of a loss on this one.  I would think the problem must
 be in get_combined_directive(), but I could be wrong.  Again, more
 eyes would be appreciated.


 I'm willing to take a look at this later today.  The only problem
 is that I can't recreate this problem (or the first one) with the
 latest CVS head of httpd-test and httpd-2.0.  Is there any special
 configuration needed to trigger the bug?

 --Brian










 
 
 
 

Re: [PATCH] Re: mod_include bug(s)?

2002-03-28 Thread Cliff Woolley

On Thu, 28 Mar 2002, Paul J. Reder wrote:

 That patch seems to solve at least one of the problems that I am seeing,
 but I have at least one other problem and a core dump inside
 send_parsed_content. I'm currently stepping through, trying to find the
 source of the core dump.
 I'll let you know what I find.

Thanks guys!

--Cliff


--
   Cliff Woolley
   [EMAIL PROTECTED]
   Charlottesville, VA





Re: [PATCH] Re: mod_include bug(s)?

2002-03-28 Thread Brad Nicholes

I don't know if this has anything to do with the problems that you are
seeing, but there is a while loop in is_only_below() that is running off
the end of the string.

while (*path && *(path++) != '/')
    ++path;

The while loop increments path twice in one iteration which means
that any string with an odd number of characters will skip past the
check for NULL and start walking through whatever memory exists at that
point.  This is causing NetWare to fault.  I'm sure it is causing other
platforms problems also.

Brad

Brad Nicholes
Senior Software Engineer
Novell, Inc., a leading provider of Net business solutions
http://www.novell.com 

 [EMAIL PROTECTED] Thursday, March 28, 2002 2:49:10 PM 
On Thu, 28 Mar 2002, Paul J. Reder wrote:

 That patch seems to solve at least one of the problems that I am
 seeing, but I have at least one other problem and a core dump inside
 send_parsed_content. I'm currently stepping through, trying to find
 the source of the core dump.
 I'll let you know what I find.
Thanks guys!

--Cliff


--
   Cliff Woolley
   [EMAIL PROTECTED] 
   Charlottesville, VA





Re: [PATCH] Re: mod_include bug(s)?

2002-03-28 Thread Brian Pane

Paul J. Reder wrote:

 That patch seems to solve at least one of the problems that I am seeing,
 but I have at least one other problem and a core dump inside
 send_parsed_content. I'm currently stepping through, trying to find the
 source of the core dump.


Thanks, I'll commit this patch for now.

--Brian






Re: cvs commit: httpd-2.0/modules/filters mod_include.c

2002-03-28 Thread Brian Pane

[EMAIL PROTECTED] wrote:

bnicholes    02/03/28 16:39:56

  Modified:modules/filters mod_include.c
  Log:
  Stop the while loop from incrementing twice per iteration before checking for
  the NULL terminator.  This was causing the while loop to walk off the end of any
  string with an odd number of characters.


With this change, the is_only_below() function is getting stuck in
an infinite loop when I run httpd-test.  I'm gdb'ing it now...

--Brian






Re: cvs commit: httpd-2.0/modules/filters mod_include.c

2002-03-28 Thread Cliff Woolley

On 29 Mar 2002 [EMAIL PROTECTED] wrote:

   -while (*path && *(path++) != '/')
   +while (*path && (*path != '/')) {
   +    ++path;
   +}
   +if (*path == '/') {
   +    ++path;
   +}

Alternatively:

while (*path && *(path++) != '/')
    continue;

--Cliff



--
   Cliff Woolley
   [EMAIL PROTECTED]
   Charlottesville, VA






start of possible replacement for mod-include

2002-03-28 Thread Ian Holsman

I started working on this a while ago, and seeing how
we've had a spate of issues with mod-include recently I thought
I'd bring it up.

The code implements an alternative parser
(http://webperf.org/a2/include/) and is 90% complete. It passes most of
the tests, but I haven't had any time to follow it through to completion
(I've been dragged in 90 different directions at the moment).

So... should I commit it under experimental?
I believe the code in parser.c is much easier to understand than the
current method used in mod-include.

..Ian