RE: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

  It would be nice
  if there was an apxs flag that would return the MPM type.
 
 +1

There is.  -q will query for any value in config_vars.mk, and MPM_NAME
is in that file.  So `apxs -q MPM_NAME` will return the configured MPM
type.

Ryan




RE: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

I think we should leave it alone.  This is the difference between
benchmarks and the real world.  How often do people have 8 requests in a
row that total less than 8K?

As a compromise, there are two other options.  You could have the
core_output_filter refuse to buffer more than 2 requests, or you could
have the core_output_filter not buffer if the full request is in the
buffer.

Removing the buffering is not the correct solution, because it does have
a negative impact in the real world.
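The two compromises can be sketched as a single flush predicate (illustrative names only; the real core_output_filter works on bucket brigades, not plain byte counts):

```c
#include <assert.h>

#define FLUSH_THRESHOLD 8192   /* core_output_filter's existing 8KB limit */

/* Hypothetical distillation of the options above: flush when the 8KB
 * threshold is hit (current behavior), when two responses are already
 * buffered (option 1), or when the buffer holds one complete response
 * (option 2).  Not the actual httpd code. */
static int should_flush(int bytes_buffered, int responses_buffered,
                        int response_complete)
{
    if (bytes_buffered >= FLUSH_THRESHOLD)
        return 1;                  /* current behavior: 8KB reached */
    if (responses_buffered >= 2)
        return 1;                  /* option 1: cap buffered responses */
    if (response_complete)
        return 1;                  /* option 2: full response already here */
    return 0;
}
```

Note that option 2, taken alone, flushes on every completed response, which is exactly the pipelining trade-off argued below.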

Ryan

--
Ryan Bloom  [EMAIL PROTECTED]
645 Howard St.  [EMAIL PROTECTED]
San Francisco, CA 

 -Original Message-
 From: Brian Pane [mailto:[EMAIL PROTECTED]]
 Sent: Sunday, June 23, 2002 9:38 PM
 To: [EMAIL PROTECTED]
 Subject: core_output_filter buffering for keepalives? Re: Apache 2.0
 Numbers
 
 On Sun, 2002-06-23 at 20:58, Brian Pane wrote:
 
  For what it's worth, I just tried the test case that you posted.  On my
  test system, 2.0 is faster when I run ab without -k, and 1.3 is faster
  when I run with -k.
 
 I studied this test case and found out why 2.0 runs faster in the
 non-keepalive case and slower in the keepalive case.  It's
 because of the logic in core_output_filter() that tries to avoid
 small writes when c->keepalive is set.  In Rasmus's test, the file
 size is only 1KB, so core_output_filter reads in and buffers the
 contents of 8 requests before it finally reaches the 8KB threshold
 and starts calling writev.  1.3, in contrast, writes out each
 request's response immediately, with no buffering.
 
 I'm somewhat inclined to remove the buffering in this case, and
 let core_output_filter send the response as soon as it sees EOS
 for each request, even if that means sending a small packet.  I
 just tried this in my working tree, and it does speed up 2.0 for
 this particular test case.  On the other hand, I'm not a fan of
 small write calls in general.
 
 Anyone else care to offer an opinion on whether we should remove
 the buffering in this situation?
 
 --Brian
 





Re: Apache 2.0 Numbers

2002-06-24 Thread Andi Gutmans

On Sun, 23 Jun 2002, Brian Pane wrote:

 On Sun, 2002-06-23 at 18:58, Rasmus Lerdorf wrote:
 
  Someone asked me for numbers when I mentioned the other day that Apache
  2-prefork was really not a viable drop-in replacement for Apache 1.3 when
  it comes to running a PHP-enabled server.
  
  Apache 1.3 is still significantly faster than Apache2-prefork for both
  static and dynamic content.
 
 Most of the static benchmarks that I've seen posted to dev@httpd
 (including my own tests on Solaris and Linux) indicate otherwise.
 
 And for dynamic content, it's tough to make any generalization that
 one httpd release is faster than another, because the performance
 depends heavily on one's choice of content generation engine.
 
  Now, part of the blame goes to PHP here for
  the dynamic case. We are compiling PHP in threadsafe mode when building
  the PHP DSO for Apache2-prefork which is not necessary.
 
 You'll definitely see slow performance with PHP and httpd-2.0.
 I know of two major factors that contribute to this:
 
   * mod_php is using malloc and free quite a bit.

This shouldn't make a difference with the pre-fork MPM. It should be the
same speed as with 1.3. In any case, I'm in the middle of writing a
per-thread memory manager for the threaded MPM which doesn't use any
locks (except for allocating huge chunks) and allows frees.
The APR memory pools don't do this. It should improve performance under
OS's which don't have native per-thread pools (Win32 has them). But again,
it has nothing to do with Apache 1.3 vs. Apache 2.
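The allocator Andi describes can be sketched with a per-thread free list (my illustration, not Zend's actual code): alloc and free touch only thread-local state via a pthread TLS key, so the common path takes no lock; only the fallback to the system allocator, standing in for the locked "huge chunk" path, would contend.

```c
#include <pthread.h>
#include <stdlib.h>

#define BLOCK_SIZE 64           /* must be >= sizeof(struct block) */

struct block { struct block *next; };

static pthread_key_t freelist_key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void make_key(void) { pthread_key_create(&freelist_key, NULL); }

static void *thread_alloc(void)
{
    pthread_once(&key_once, make_key);
    struct block *head = pthread_getspecific(freelist_key);
    if (head) {                                  /* lock-free reuse */
        pthread_setspecific(freelist_key, head->next);
        return head;
    }
    return malloc(BLOCK_SIZE);   /* stand-in for the locked chunk path */
}

static void thread_free(void *p)
{
    pthread_once(&key_once, make_key);
    struct block *b = p;
    b->next = pthread_getspecific(freelist_key); /* push onto own list */
    pthread_setspecific(freelist_key, b);
}
```

One caveat this sketch shares with any scheme like it: a block freed by a thread other than its allocator lands on the freeing thread's list.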


   * PHP's nonbuffered output mode produces very small socket writes
 with Apache 2.0.  With 1.3, the httpd's own output buffering
 alleviated the problem.  In 2.0, where the PHP module splits
 long blocks of static text into 400-byte segments and inserts
 a flush bucket after every bucket of data that it sends to the
 next filter, the result is a stream of rather small packets.

You should test this with PHP's internal output buffering enabled. You can
set it there to something like 4096.

Andi




RE: Apache 2.0 Numbers

2002-06-24 Thread Rasmus Lerdorf

   It would be nice
   if there was an apxs flag that would return the MPM type.
 
  +1

 There is.  -q will query for any value in config_vars.mk, and MPM_NAME
 is in that file.  So `apxs -q MPM_NAME` will return the configured MPM
 type.

Ah right.  Is there a way to check at runtime as well?  I've added a PHP
configure check now to the apache2filter sapi module so it will come up
non-threaded by default if it sees Apache2-prefork.  Just a bit worried
about someone changing their MPM after the fact, so perhaps a runtime
check is needed as well.

-Rasmus




RE: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

 From: Rasmus Lerdorf [mailto:[EMAIL PROTECTED]]
 
It would be nice
if there was an apxs flag that would return the MPM type.
  
   +1
 
  There is.  -q will query for any value in config_vars.mk, and MPM_NAME
  is in that file.  So `apxs -q MPM_NAME` will return the configured MPM
  type.
 
 Ah right.  Is there a way to check at runtime as well?  I've added a PHP
 configure check now to the apache2filter sapi module so it will come up
 non-threaded by default if it sees Apache2-prefork.  Just a bit worried
 about someone changing their MPM after the fact, so perhaps a runtime
 check is needed as well.

Runtime is harder, but you can just use ap_mpm_query to get the MPM's
characteristics.  This won't give you the MPM name, but it will let you
know if the MPM is threaded or not.

Ryan





RE: Apache 2.0 Numbers

2002-06-24 Thread Rasmus Lerdorf

 Runtime is harder, but you can just use ap_mpm_query to get the MPM's
 characteristics.  This won't give you the MPM name, but it will let you
 know if the MPM is threaded or not.

What is the correct way to fail in a filter post_config?  Do I return -1
from it if my filter finds a fatal error?  I can't use ap_log_rerror() at
this point, right?  How would I log the reason for the failure?

-Rasmus




RE: Apache 2.0 Numbers

2002-06-24 Thread Rasmus Lerdorf

  What is the correct way to fail in a filter post_config?  Do I return -1
  from it if my filter finds a fatal error?  I can't use ap_log_rerror() at
  this point, right?  How would I log the reason for the failure?

 I'm confused by the question, but I'll try to answer.  If you mean the
 post_config phase, then you can use ap_log_error or ap_log_perror.  If
 you want to stop the server from starting, just return DECLINED.

Right, I found ap_log_error.  It was the return value I was looking for.
None of the example filter modules had a fatal error check at the config
phase.  So returning a -1 is the correct way to stop the server from
starting.  Thanks.

-Rasmus




RE: Apache 2.0 Numbers

2002-06-24 Thread Rasmus Lerdorf



On Mon, 24 Jun 2002, Rasmus Lerdorf wrote:

   What is the correct way to fail in a filter post_config?  Do I return -1
   from it if my filter finds a fatal error?  I can't use ap_log_rerror() at
   this point, right?  How would I log the reason for the failure?
 
  I'm confused by the question, but I'll try to answer.  If you mean the
  post_config phase, then you can use ap_log_error or ap_log_perror.  If
  you want to stop the server from starting, just return DECLINED.

 Right, I found ap_log_error.  It was the return value I was looking for.
 None of the example filter modules had a fatal error check at the config
 phase.  So returning a -1 is the correct way to stop the server from
 starting.  Thanks.

Hrm..  Nope.  doing 'return DECLINED' from the post_config phase does not
stop the server from starting.  I have this:

static int
php_apache_server_startup(apr_pool_t *pconf, apr_pool_t *plog,
                          apr_pool_t *ptemp, server_rec *s)
{
    void *data = NULL;
    const char *userdata_key = "apache2filter_post_config";
#ifndef ZTS
    int threaded_mpm;

    ap_mpm_query(AP_MPMQ_IS_THREADED, &threaded_mpm);
    if (threaded_mpm) {
        ap_log_error(APLOG_MARK, APLOG_CRIT, 0, s,
                     "Apache is running a threaded MPM, but your PHP Module "
                     "is not compiled to be threadsafe.  You need to "
                     "recompile PHP.");
        return DECLINED;
    }
#endif
    ...
}

...

ap_hook_pre_config(php_pre_config, NULL, NULL, APR_HOOK_MIDDLE);
ap_hook_post_config(php_apache_server_startup, NULL, NULL, APR_HOOK_MIDDLE);

And in my log I get:

[Mon Jun 24 08:27:23 2002] [crit] Apache is running a threaded MPM, but your PHP 
Module is not compiled to be threadsafe.  You need to recompile PHP.
[Mon Jun 24 08:27:23 2002] [crit] Apache is running a threaded MPM, but your PHP 
Module is not compiled to be threadsafe.  You need to recompile PHP.
[Mon Jun 24 08:27:23 2002] [notice] Apache/2.0.40-dev (Unix) configured -- resuming 
normal operations




RE: Apache 2.0 Numbers

2002-06-24 Thread Cliff Woolley

On Mon, 24 Jun 2002, Rasmus Lerdorf wrote:

 Hrm..  Nope.  doing 'return DECLINED' from the post_config phase does not
 stop the server from starting.  I have this:

I thought you were supposed to return HTTP_INTERNAL_SERVER_ERROR.

--Cliff




RE: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

My bad.  Post_config is a run_all.  If you return DONE the server won't
start.  This is what the MPMs do if the socket is already taken.

Ryan

--
Ryan Bloom  [EMAIL PROTECTED]
645 Howard St.  [EMAIL PROTECTED]
San Francisco, CA 

 -Original Message-
 From: Rasmus Lerdorf [mailto:[EMAIL PROTECTED]]
 Sent: Monday, June 24, 2002 8:34 AM
 To: Ryan Bloom
 Cc: [EMAIL PROTECTED]
 Subject: RE: Apache 2.0 Numbers
 
 
 
 On Mon, 24 Jun 2002, Rasmus Lerdorf wrote:
 
    What is the correct way to fail in a filter post_config?  Do I return -1
    from it if my filter finds a fatal error?  I can't use ap_log_rerror() at
    this point, right?  How would I log the reason for the failure?
  
   I'm confused by the question, but I'll try to answer.  If you mean the
   post_config phase, then you can use ap_log_error or ap_log_perror.  If
   you want to stop the server from starting, just return DECLINED.
  
  Right, I found ap_log_error.  It was the return value I was looking for.
  None of the example filter modules had a fatal error check at the config
  phase.  So returning a -1 is the correct way to stop the server from
  starting.  Thanks.
 
 Hrm..  Nope.  doing 'return DECLINED' from the post_config phase does not
 stop the server from starting.  I have this:
 
 static int
 php_apache_server_startup(apr_pool_t *pconf, apr_pool_t *plog,
                           apr_pool_t *ptemp, server_rec *s)
 {
     void *data = NULL;
     const char *userdata_key = "apache2filter_post_config";
 #ifndef ZTS
     int threaded_mpm;
 
     ap_mpm_query(AP_MPMQ_IS_THREADED, &threaded_mpm);
     if (threaded_mpm) {
         ap_log_error(APLOG_MARK, APLOG_CRIT, 0, s,
                      "Apache is running a threaded MPM, but your PHP Module "
                      "is not compiled to be threadsafe.  You need to "
                      "recompile PHP.");
         return DECLINED;
     }
 #endif
     ...
 }
 
 ...
 
 ap_hook_pre_config(php_pre_config, NULL, NULL, APR_HOOK_MIDDLE);
 ap_hook_post_config(php_apache_server_startup, NULL, NULL,
 APR_HOOK_MIDDLE);
 
 And in my log I get:
 
 [Mon Jun 24 08:27:23 2002] [crit] Apache is running a threaded MPM, but
 your PHP Module is not compiled to be threadsafe.  You need to recompile
 PHP.
 [Mon Jun 24 08:27:23 2002] [crit] Apache is running a threaded MPM, but
 your PHP Module is not compiled to be threadsafe.  You need to recompile
 PHP.
 [Mon Jun 24 08:27:23 2002] [notice] Apache/2.0.40-dev (Unix) configured --
 resuming normal operations





RE: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

 From: Cliff Woolley [mailto:[EMAIL PROTECTED]]
 
 On Mon, 24 Jun 2002, Rasmus Lerdorf wrote:
 
  Hrm..  Nope.  doing 'return DECLINED' from the post_config phase does not
  stop the server from starting.  I have this:
 
 I thought you were supposed to return HTTP_INTERNAL_SERVER_ERROR.

No.  That implies that you have an actual HTTP error.  You don't, this
is during config processing, not request processing.  Yes, that value
will work, but it is incorrect semantically.

Ryan





RE: Apache 2.0 Numbers

2002-06-24 Thread Rasmus Lerdorf

On Mon, 24 Jun 2002, Cliff Woolley wrote:
 On Mon, 24 Jun 2002, Rasmus Lerdorf wrote:

  Hrm..  Nope.  doing 'return DECLINED' from the post_config phase does not
  stop the server from starting.  I have this:

 I thought you were supposed to return HTTP_INTERNAL_SERVER_ERROR.

In include/http_config.h it says:

/**
 * Run the post_config function for each module
 * @param pconf The config pool
 * @param plog The logging streams pool
 * @param ptemp The temporary pool
 * @param s The list of server_recs
 * @return OK or DECLINED on success, anything else is an error
 */

So I guess I need to return 'anything else'.

Trying this, i.e. returning -2, does the job.  But this seems a little
vague.  Should we perhaps have a #define FATAL -2 or something similar so
I don't get stepped on later on if someone decides to use -2 for something
else?

And no, I don't think it makes sense to overload INTERNAL_SERVER_ERROR for
this.  To me that is a 500 error which is very much tied to a
request-level error.

-Rasmus




RE: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

 From: Rasmus Lerdorf [mailto:[EMAIL PROTECTED]]
 
 On Mon, 24 Jun 2002, Cliff Woolley wrote:
  On Mon, 24 Jun 2002, Rasmus Lerdorf wrote:
 
   Hrm..  Nope.  doing 'return DECLINED' from the post_config phase does
   not stop the server from starting.  I have this:
 
  I thought you were supposed to return HTTP_INTERNAL_SERVER_ERROR.
 
 In include/http_config.h it says:
 
 /**
  * Run the post_config function for each module
  * @param pconf The config pool
  * @param plog The logging streams pool
  * @param ptemp The temporary pool
  * @param s The list of server_recs
  * @return OK or DECLINED on success, anything else is an error
  */
 
 So I guess I need to return 'anything else'.
 
 Trying this, i.e. returning -2, does the job.  But this seems a little
 vague.  Should we perhaps have a #define FATAL -2 or something similar
 so I don't get stepped on later on if someone decides to use -2 for
 something else?

As it happens, DONE is defined to be -2.   :-)

Ryan





RE: Apache 2.0 Numbers

2002-06-24 Thread Rasmus Lerdorf

 As it happens, DONE is defined to be -2.   :-)

Ok, I will use that, but 'DONE' doesn't really give the impression of
being a fatal error return value.

-Rasmus




RE: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

 From: Rasmus Lerdorf [mailto:[EMAIL PROTECTED]]
 
  As it happens, DONE is defined to be -2.   :-)
 
 Ok, I will use that, but 'DONE' doesn't really give the impression of
 being a fatal error return value.

I know.  Its original use was during request processing, when a
module wanted to be sure that it was the last function run for a
specific hook.  Basically this value ensured that no other functions
were run for that hook.
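So the pattern the thread settled on distills to the following sketch. The OK/DECLINED/DONE values mirror httpd's definitions; everything else here is a stand-in, since the real hook takes pool arguments and a server_rec and logs with ap_log_error before returning:

```c
/* Values as defined by httpd; reproduced here so the sketch is
 * self-contained. */
#define OK        0
#define DECLINED -1   /* run_all hooks keep running; server still starts */
#define DONE     -2   /* terminates the hook run and stops server startup */

/* Hypothetical distillation of php_apache_server_startup's decision. */
static int post_config_check(int threaded_mpm, int module_threadsafe)
{
    if (threaded_mpm && !module_threadsafe) {
        /* real module: ap_log_error(APLOG_MARK, APLOG_CRIT, 0, s, "...") */
        return DONE;          /* fatal: refuse to let the server start */
    }
    return OK;
}
```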

Ryan





Re: Apache 2.0 Numbers

2002-06-24 Thread Ian Holsman

Ryan Bloom wrote:
It would be nice
if there was an apxs flag that would return the MPM type.

+1
 
 
 There is.  -q will query for any value in config_vars.mk, and MPM_NAME
 is in that file.  So `apxs -q MPM_NAME` will return the configured MPM
 type.
 
 Ryan
 
This is the wrong approach IMHO. We should have a flag to determine if
threading is enabled, and use that to base the decision on. I can see in
the future where we might have 2 non-threaded MPMs, and then all the
people using this flag will break.

--Ian







Re: Apache 2.0 Numbers

2002-06-24 Thread Brian Pane

On Mon, 2002-06-24 at 02:16, Andi Gutmans wrote:

* PHP's nonbuffered output mode produces very small socket writes
  with Apache 2.0.  With 1.3, the httpd's own output buffering
  alleviated the problem.  In 2.0, where the PHP module splits
  long blocks of static text into 400-byte segments and inserts
  a flush bucket after every bucket of data that it sends to the
  next filter, the result is a stream of rather small packets.
 
 You should test this with PHP's internal output buffering enabled. You can
 set it there to something like 4096.

That definitely will improve the numbers, but I'd rather not spend the
next few years saying "turn on buffering in mod_php" every time another
user posts a benchmark claiming that "Apache 2.0 sucks because it runs
my PHP scripts ten times slower than 1.3 did." :-)

I have two proposals for this:

* Saying "turn on buffering" is, IMHO, a reasonable solution if you
  can make buffering the default in PHP under httpd-2.0.  Otherwise,
  you'll surprise a lot of users who have been running with the default
  non-buffered output using 1.3 and find that all their applications
  are far slower with 2.0.

* A better solution, though, would be to have the PHP filter generate
  flush buckets (in nonbuffered mode) only when it reaches a <? or
  ?>.  I.e., if the input file has 20KB of static text before the
  first embedded script, send that entire 20KB in a bucket, and don't
  try to split it into 400-byte segments.  If mod_php is in nonbuffered
  mode, send an apr_bucket_flush right after it.  (There's a precedent
  for this approach: one of the ways in which we managed to get good
  performance from mod_include in 2.0 was to stop trying to split large
  static blocks into small chunks.  We were originally concerned about
  the amount of time it would take for the mod_include lexer to run
  through large blocks of static content, but it hasn't been a problem
  in practice.)
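The arithmetic behind the second proposal can be made concrete (a hypothetical counting helper, not mod_php code; the real filter deals in buckets and flush buckets, not byte counts):

```c
/* Buckets emitted for one static text block: ceil(bytes/segment) under
 * the current 400-byte splitting, or a single bucket when the block is
 * sent whole (segment == 0 models the proposed behavior).  In
 * nonbuffered mode each data bucket is followed by a flush, so this is
 * also roughly the number of small writes reaching the network. */
static int buckets_for_static_block(int bytes, int segment)
{
    if (segment <= 0)
        return 1;                            /* proposed: whole block */
    return (bytes + segment - 1) / segment;  /* current: small pieces */
}
```

For the 20KB example above, 400-byte splitting yields 52 buckets where the proposal yields one.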

From a mod_php perspective, would either of those be a viable solution?

--Brian





Re: Apache 2.0 Numbers

2002-06-24 Thread Rasmus Lerdorf

 * Saying "turn on buffering" is, IMHO, a reasonable solution if you
   can make buffering the default in PHP under httpd-2.0.  Otherwise,
   you'll surprise a lot of users who have been running with the default
   non-buffered output using 1.3 and find that all their applications
   are far slower with 2.0.

We could turn on buffering for 2.0.  I just verified that this does indeed
create a single 1024-byte bucket for my 1024-byte file test case.  And
combined with compiling PHP non-threaded for the prefork mpm the result
is:

Concurrency Level:  5
Time taken for tests:   115.406395 seconds
Complete requests:  5
Failed requests:0
Write errors:   0
Keep-Alive requests:0
Total transferred:  6325 bytes
HTML transferred:   5120 bytes
Requests per second:433.25 [#/sec] (mean)
Time per request:   11.541 [ms] (mean)
Time per request:   2.308 [ms] (mean, across all concurrent requests)
Transfer rate:  535.21 [Kbytes/sec] received

Up from 397 requests/second but still nowhere near the 615 requests/second
for Apache 1.3.  But, doing this buffering internally in PHP and then
again in Apache doesn't seem efficient to me, and the numbers would seem
to reflect this inefficiency.

 * A better solution, though, would be to have the PHP filter generate
   flush buckets (in nonbuffered mode) only when it reaches a <? or
   ?>.  I.e., if the input file has 20KB of static text before the
   first embedded script, send that entire 20KB in a bucket, and don't
   try to split it into 400-byte segments.  If mod_php is in nonbuffered
   mode, send an apr_bucket_flush right after it.  (There's a precedent
   for this approach: one of the ways in  which we managed to get good
   performance from mod_include in 2.0 was to stop trying to split large
   static blocks into small chunks.  We were originally concerned about
   the amount of time it would take for the mod_include lexer to run
   through large blocks of static content, but it hasn't been a problem
   in practice.)

 From a mod_php perspective, would either of those be a viable solution?

I think Andi is working on this.  But, just to test the theory, I modified
the PHP lexer to use larger chunks.  1024 in this case.  So, the 1k.php
test case which looks like this:

<html>
<head><title>Test Document.</title>
<body>
<h1>Test Document.</h1>
<p>
<?='This is a 1024 byte HTML file.'?><br />
aa<br />
bb<br />
cc<br />
dd<br />
ee<br />
ff<br />
gg<br />
hh<br />
ii<br />
jj<br />
kk<br />
ll<br />
mm<br />
nn<br />
oo<br />
pp<br />
qq<br />
rr<br />
ss<br />
tt<br />
uu<br />
vv<br />
ww<br />
xx<br />
</p>
</body>
</html>

Was split up into 3 buckets.

1. 78 bytes containing:

<html>
<head><title>Test Document.</title>
<body>
<h1>Test Document.</h1>
<p>

2. 30 bytes containing (because this was dynamically generated)
This is a 1024 byte HTML file.

3. A 916 byte bucket containing the rest of the static text.

Result:

Concurrency Level:  5
Time taken for tests:   124.456357 seconds
Complete requests:  5
Failed requests:0
Write errors:   0
Keep-Alive requests:0
Total transferred:  6325 bytes
HTML transferred:   5120 bytes
Requests per second:401.75 [#/sec] (mean)
Time per request:   12.446 [ms] (mean)
Time per request:   2.489 [ms] (mean, across all concurrent requests)
Transfer rate:  496.29 [Kbytes/sec] received


So slower than the single 1024 byte bucket and actually also slower than
the 400-byte case, so an invalid test.  There are probably some other
memory-allocation changes I would need to make to make this a valid test.

-Rasmus




Re: Apache 2.0 Numbers

2002-06-24 Thread Sebastian Bergmann

Brian Pane wrote:
 That definitely will improve the numbers, but I'd rather not spend the
 next few years saying turn on buffering in mod_php every time another
 user posts a benchmark claiming that Apache 2.0 sucks because it runs
 my PHP scripts ten times slower than 1.3 did. :-)

  Well, it's already turned on in php.ini-recommended:

; Output buffering allows you to send header lines (including
; cookies) even after you send body content, at the price of slowing
; PHP's output layer a bit. You can enable output buffering during 
; runtime by calling the output buffering functions.  You can also
; enable output buffering for all files by setting this directive to 
; On. If you wish to limit the size of the buffer to a certain size -
; you can use a maximum number of bytes instead of 'On', as a value
; for this directive (e.g., output_buffering=4096).
output_buffering = 4096

-- 
  Sebastian Bergmann
  http://sebastian-bergmann.de/ http://phpOpenTracker.de/

  Did I help you? Consider a gift: http://wishlist.sebastian-bergmann.de/



Re: Apache 2.0 Numbers

2002-06-24 Thread Pier Fumagalli

Rasmus Lerdorf [EMAIL PROTECTED] wrote:

 Up from 397 requests/second but still nowhere near the 615 requests/second
 for Apache 1.3.  But, doing this buffering internally in PHP and then
 again in Apache doesn't seem efficient to me, and the numbers would seem
 to reflect this inefficiency.

Rasmus... I was chatting with Ryan on IRC today, and in my case (I have a
document which is approx 11 Kb long), Apache 2.0.39/worker on
Solaris/8-intel actually outperforms Apache 1.3.26 with the same PHP 4.2.1
(well, the one for Apache 2.0 uses Apache2Filter from PHP HEAD)...

It's a good 10% faster than 1.3...

(If people are interested in numbers, I'll rerun the tests. When RBB told me
he expected those results, I /dev/null'ed 'em...)

Pier




RE: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom


 From: Brian Pane [mailto:[EMAIL PROTECTED]]
 
 Ryan Bloom wrote:
 
 I think we should leave it alone.  This is the difference between
 benchmarks and the real world.  How often do people have 8 requests in a
 row that total less than 8K?
 
 As a compromise, there are two other options.  You could have the
 core_output_filter refuse to buffer more than 2 requests, or you could
 have the core_output_filter not buffer if the full request is in the
 buffer.
 
 
 In your second option, do you mean full response rather than full
 request?  If so, it's equivalent to what I was proposing yesterday:
 send the response for each request as soon as we see EOS for that
 request.  I like that approach a lot because it keeps us from doing an
 extra mmap/memcpy/memunmap write before the write in the real-world case
 where a client sends a non-pipelined request for a small (<8KB) file
 over a keepalive connection.

I do mean full response, and that is NOT equivalent to sending the
response as soon as you see the EOS for that request.  EVERY request
gets its own EOS.  If you send the response when you see the EOS, then
you will have removed all of the buffering for pipelined requests.

You are trying to solve a problem that doesn't exist in the real world
IIUIC.  Think it through.  The problem is that if the page is 1k and
you request that page 8 times on the same connection, then Apache will
buffer all of the responses.  How often are those conditions going to
happen in the real world?

Now, take it one step further please.  The real problem is how AB is
measuring the results.  If I send 3 requests, and Apache buffers the
response, how is AB measuring the time?  Does it start counting at the
start of every request, or is the time just started at start of the
first request?  Perhaps a picture:

Start of request                response received
0 5 10 15 20 25 30 35
--- request 1
--- request 2
  - request 3
 -- request 4

Did the 4 requests take 35 seconds total, or 85 seconds?  I believe that
AB is counting this as 85 seconds for 4 requests, but Apache is only
taking 35 seconds for the 4 requests.

Ryan





Re: Apache 2.0 Numbers

2002-06-24 Thread Andi Gutmans

On Mon, 24 Jun 2002, Brian Pane wrote:

 On Mon, 2002-06-24 at 02:16, Andi Gutmans wrote:
 
 * PHP's nonbuffered output mode produces very small socket writes
   with Apache 2.0.  With 1.3, the httpd's own output buffering
   alleviated the problem.  In 2.0, where the PHP module splits
   long blocks of static text into 400-byte segments and inserts
   a flush bucket after every bucket of data that it sends to the
   next filter, the result is a stream of rather small packets.
  
  You should test this with PHP's internal output buffering enabled. You can
  set it there to something like 4096.
 
 That definitely will improve the numbers, but I'd rather not spend the
 next few years saying "turn on buffering in mod_php" every time another
 user posts a benchmark claiming that "Apache 2.0 sucks because it runs
 my PHP scripts ten times slower than 1.3 did." :-)
 
 I have two proposals for this:
 
 * Saying "turn on buffering" is, IMHO, a reasonable solution if you
   can make buffering the default in PHP under httpd-2.0.  Otherwise,
   you'll surprise a lot of users who have been running with the default
   non-buffered output using 1.3 and find that all their applications
   are far slower with 2.0.

It is the default in the recommended INI file.
 
 * A better solution, though, would be to have the PHP filter generate
   flush buckets (in nonbuffered mode) only when it reaches a <? or
   ?>.  I.e., if the input file has 20KB of static text before the
   first embedded script, send that entire 20KB in a bucket, and don't
   try to split it into 400-byte segments.  If mod_php is in nonbuffered
   mode, send an apr_bucket_flush right after it.  (There's a precedent
   for this approach: one of the ways in  which we managed to get good
   performance from mod_include in 2.0 was to stop trying to split large
   static blocks into small chunks.  We were originally concerned about
   the amount of time it would take for the mod_include lexer to run
   through large blocks of static content, but it hasn't been a problem
   in practice.)

From tests we did a long time ago, making the lexer queue up a 20KB
buffer just to send it out in one piece was actually a problem. I think
the buffer needed reallocating, which made it really slow, but it was
about 3 years ago so I can't remember the exact reason we limited it to
400. I'm pretty sure it was a good reason though :)

I bet that the performance difference Rasmus is describing is not really
due to PHP's added output buffering.

Andi






RE: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

 From: Brian Pane [mailto:[EMAIL PROTECTED]]
 Ryan Bloom wrote:
 
 From: Brian Pane [mailto:[EMAIL PROTECTED]]
 
 Ryan Bloom wrote:
 
 I think we should leave it alone.  This is the difference between
 benchmarks and the real world.  How often do people have 8 requests
 in a row that total less than 8K?
 
 As a compromise, there are two other options.  You could have the
 core_output_filter refuse to buffer more than 2 requests, or you
 could have the core_output_filter not buffer if the full request is
 in the buffer.
 
 In your second option, do you mean full response rather than full
 request?  If so, it's equivalent to what I was proposing yesterday:
 send the response for each request as soon as we see EOS for that
 request.  I like that approach a lot because it keeps us from doing
 an extra mmap/memcpy/memunmap write before the write in the
 real-world case where a client sends a non-pipelined request for a
 small (<8KB) file over a keepalive connection.
 
 I do mean full response, and that is NOT equivalent to sending the
 response as soon as you see the EOS for that request.  EVERY request
 gets its own EOS.  If you send the response when you see the EOS,
 then you will have removed all of the buffering for pipelined
 requests.
 
 -1 on buffering across requests, because the performance problems
 caused by the extra mmap+munmap will offset the gain you're trying
 to achieve with pipelining.

Wait a second.  Now you want to stop buffering to fix a completely
different bug.  The idea that we can't keep a file_bucket in the brigade
across requests is only partially true.  Take a closer look at what we
are doing and why when we convert the file to an mmap.  That logic was
added so that we do not leak file descriptors across requests.

However, there are multiple options to fix that. The first, and easiest
one is to just have the core_output_filter call apr_file_close() after
it has sent the file.  The second is to migrate the apr_file_t to the
conn_rec's pool if and only if the file needs to survive the request's
pool being killed.  Because we are only migrating file descriptors in
the edge case, this shouldn't cause a big enough leak to cause a
problem.
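The second option can be pictured with a toy model of pool cleanups (an illustration under loose assumptions: real APR pools run registered cleanup callbacks at destruction, such as closing an apr_file_t, and "migrating" means moving that cleanup from the request pool to the connection pool):

```c
#include <stdio.h>

/* Toy pool: just a list of FILE*s it will fclose when destroyed. */
typedef struct { FILE *files[8]; int n; } toy_pool;

static void pool_register(toy_pool *p, FILE *f) { p->files[p->n++] = f; }

static void pool_destroy(toy_pool *p)
{
    for (int i = 0; i < p->n; i++)
        if (p->files[i])
            fclose(p->files[i]);         /* run the "cleanup" */
    p->n = 0;
}

/* Migrate an open file from the request pool to the connection pool so
 * that destroying the request pool no longer closes it. */
static void pool_migrate(toy_pool *from, toy_pool *to, FILE *f)
{
    for (int i = 0; i < from->n; i++)
        if (from->files[i] == f)
            from->files[i] = NULL;       /* cancel the old cleanup */
    pool_register(to, f);                /* close later, with the conn */
}
```

After migration, tearing down the request pool leaves the descriptor open until the connection pool is destroyed, which is the lifetime the file bucket needs.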

 You are trying to solve a problem that doesn't exist in the real world
 IIUIC.  Think it through.  The problem is that if the page is 1k and
 you request that page 8 times on the same connection, then Apache will
 buffer all of the responses.  How often are those conditions going to
 happen in the real world?
 
 
 That's not the problem that I care about.  The problem that matters
 is the one that happens in the real world, as a side-effect of the
 core_output_filter() code trying to be too clever:
   - Client opens a keepalive connection to httpd
   - Client requests a file smaller than 8KB
   - core_output_filter(), upon seeing the file bucket followed by EOS,
     decides to buffer the output because it has less than 8KB total.
   - There isn't another request ready to be read (read returns EAGAIN)
     because the client isn't pipelining its connections.  So we then
     do the writev of the file that we've just finished buffering.

But, this case only happens if the headers + the file are less than 8k.
If the file is 10k, then this problem doesn't actually exist at all.
As I said above, there are better ways to fix this than removing all
ability to pipeline responses.

 Aside from slowing things down for the user, this hurts the scalability
 of the httpd (mmap and munmap don't scale well on multiprocessor boxes).
 
 What we should be doing in this case is just doing the write immediately
 upon seeing the EOS, rather than penalizing both the client and the
 server.

By doing that, you are removing ANY benefit to using pipelined requests
when serving files.  Multiple research projects have all found that
pipelined requests show a performance benefit.  In other words, you are
fixing a performance problem by removing another performance enhancer.

Ryan





Re: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers

2002-06-24 Thread Bill Stoddard



  From: Brian Pane [mailto:[EMAIL PROTECTED]]
  Ryan Bloom wrote:
 
  From: Brian Pane [mailto:[EMAIL PROTECTED]]
  
  Ryan Bloom wrote:
  
  
  
  I think we should leave it alone.  This is the difference between
  benchmarks and the real world.  How often do people have 8 requests
  
  
  in a
  
  
  row that total less than 8K?
  
  As a compromise, there are two other options.  You could have the
  core_output_filter refuse to buffer more than 2 requests, or you
  
  
  could
  
  
  have the core_output_filter not buffer if the full request is in
 the
  buffer.
  
  
  
  In your second option, do you mean full response rather than full
  request?  If so, it's equivalent to what I was proposing yesterday:
  send the response for each request as soon as we see EOS for that
  
  
  request.
  
  
  I like that approach a lot because it keeps us from doing an extra
  mmap/memcpy/memunmap write before the write in the real-world case
  where a client sends a non-pipelined request for a small (<8KB) file
  over a keepalive connection.
  
  
  
  I do mean full response, and that is NOT equivalent to sending the
  response as soon as you see the EOS for that request.  EVERY request
  gets its own EOS.  If you send the response when you see the EOS,
 then
  you will have removed all of the buffering for pipelined requests.
  
 
  -1 on buffering across requests, because the performance problems
  caused by the extra mmap+munmap will offset the gain you're trying
  to achieve with pipelining.

 Wait a second.  Now you want to stop buffering to fix a completely
 different bug.  The idea that we can't keep a file_bucket in the brigade
 across requests is only partially true.  Take a closer look at what we
 are doing and why when we convert the file to an mmap.  That logic was
 added so that we do not leak file descriptors across requests.

 However, there are multiple options to fix that. The first, and easiest
 one is to just have the core_output_filter call apr_file_close() after
 it has sent the file.  The second is to migrate the apr_file_t to the
 conn_rec's pool if and only if the file needs to survive the request's
 pool being killed.  Because we are only migrating file descriptors in
 the edge case, this shouldn't cause a big enough leak to cause a
 problem.

  You are trying to solve a problem that doesn't exist in the real
 world
  IIUIC.  Think it through.  The problem is that if the page is 1k and
  you request that page 8 times on the same connection, then Apache
 will
  buffer all of the responses.  How often are those conditions going to
  happen in the real world?
  
 
  That's not the problem that I care about.  The problem that matters
  is the one that happens in the real world, as a side-effect of the
  core_output_filter() code trying to be too clever:
- Client opens a keepalive connection to httpd
- Cclient requests a file smaller than 8KB
    - Client requests a file smaller than 8KB
- core_output_filter(), upon seeing the file bucket followed by EOS,
  decides to buffer the output because it has less than 8KB total.
- There isn't another request ready to be read (read returns EAGAIN)
  because the client isn't pipelining its connections.  So we then
  do the writev of the file that we've just finished buffering.

 But, this case only happens if the headers + the file are less than 8k.
 If the file is 10k, then this problem doesn't actually exist at all.
 As I said above, there are better ways to fix this than removing all
 ability to pipeline responses.

  Aside from slowing things down for the user, this hurts the
 scalability
  of the httpd (mmap and munmap don't scale well on multiprocessor
 boxes).
 
  What we should be doing in this case is just doing the write
 immediately
  upon seeing the EOS, rather than penalizing both the client and the
  server.

 By doing that, you are removing ANY benefit to using pipelined requests
 when serving files.  Multiple research projects have all found that
 pipelined requests show a performance benefit.  In other words, you are
 fixing a performance problem by removing another performance enhancer.

 Ryan


Ryan,
Solving the problem of setting aside the open fd just long enough to check for a
pipelined request will nearly completely solve the worst part (the mmap/munmap) of this
problem.  On systems with expensive syscalls, we can do browser detection and dynamically
determine whether we should attempt the pipelined optimization or not.  Not many browsers
today support pipelining requests, FWIW.

Bill




RE: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom

 From: Bill Stoddard [mailto:[EMAIL PROTECTED]]
   From: Brian Pane [mailto:[EMAIL PROTECTED]]
   Ryan Bloom wrote:
   From: Brian Pane [mailto:[EMAIL PROTECTED]]
   Ryan Bloom wrote:

  Wait a second.  Now you want to stop buffering to fix a completely
  different bug.  The idea that we can't keep a file_bucket in the
brigade
  across requests is only partially true.  Take a closer look at what
we
  are doing and why when we convert the file to an mmap.  That logic
was
  added so that we do not leak file descriptors across requests.
 
  However, there are multiple options to fix that. The first, and
easiest
  one is to just have the core_output_filter call apr_file_close()
after
  it has sent the file.  The second is to migrate the apr_file_t to
the
  conn_rec's pool if and only if the file needs to survive the
request's
  pool being killed.  Because we are only migrating file descriptors
in
  the edge case, this shouldn't cause a big enough leak to cause a
  problem.
 
   You are trying to solve a problem that doesn't exist in the real
  world
   IIUIC.  Think it through.  The problem is that if the page is 1k
and
   you request that page 8 times on the same connection, then Apache
  will
   buffer all of the responses.  How often are those conditions
going to
   happen in the real world?
   
  
   That's not the problem that I care about.  The problem that
matters
   is the one that happens in the real world, as a side-effect of the
   core_output_filter() code trying to be too clever:
 - Client opens a keepalive connection to httpd
  - Client requests a file smaller than 8KB
 - core_output_filter(), upon seeing the file bucket followed by
EOS,
   decides to buffer the output because it has less than 8KB
total.
 - There isn't another request ready to be read (read returns
EAGAIN)
   because the client isn't pipelining its connections.  So we
then
   do the writev of the file that we've just finished buffering.
 
  But, this case only happens if the headers + the file are less than
8k.
  If the file is 10k, then this problem doesn't actually exist at
all.
  As I said above, there are better ways to fix this than removing all
  ability to pipeline responses.
 
   Aside from slowing things down for the user, this hurts the
  scalability
   of the httpd (mmap and munmap don't scale well on multiprocessor
  boxes).
  
   What we should be doing in this case is just doing the write
  immediately
   upon seeing the EOS, rather than penalizing both the client and
the
   server.
 
  By doing that, you are removing ANY benefit to using pipelined
requests
  when serving files.  Multiple research projects have all found that
  pipelined requests show a performance benefit.  In other words, you
are
  fixing a performance problem by removing another performance
enhancer.
 
  Ryan
 
 
 Ryan,
  Solving the problem of setting aside the open fd just long enough to
  check for a pipelined request will nearly completely solve the worst
  part (the mmap/munmap) of this problem.  On systems with expensive
  syscalls, we can do browser detection and dynamically determine whether
  we should attempt the pipelined optimization or not.  Not many browsers
  today support pipelining requests, FWIW.

That would be a trivial change.  I'll have a patch posted for testing
later today.

Ryan





RE: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers

2002-06-24 Thread Ryan Bloom


 -1 on buffering across requests, because the performance problems
 caused by the extra mmap+munmap will offset the gain you're trying
 to achieve with pipelining.
 
 
 
 Wait a second.  Now you want to stop buffering to fix a completely
 different bug.  The idea that we can't keep a file_bucket in the
brigade
 across requests is only partially true.  Take a closer look at what
we
 are doing and why when we convert the file to an mmap.  That logic
was
 added so that we do not leak file descriptors across requests.
 
 However, there are multiple options to fix that. The first, and
easiest
 one is to just have the core_output_filter call apr_file_close()
after
 it has sent the file.  The second is to migrate the apr_file_t to the
 conn_rec's pool if and only if the file needs to survive the
request's
 pool being killed.  Because we are only migrating file descriptors in
 the edge case, this shouldn't cause a big enough leak to cause a
 problem.
 
 
 Migrating the apr_file_t to the conn_rec's pool is an appealing
 solution, but it's quite dangerous.  With that logic added to the
 current 8KB threshold, it would be too easy to make an httpd run
 out of file descriptors: Send a pipeline of a few hundred requests
 for some tiny file (e.g., a small image).  They all get setaside
 into the conn_rec's pool.  Then send a request for something
 that takes a long time to process, like a CGI.  Run multiple
 copies of this against the target httpd at once, and you'll be
 able to exhaust the file descriptors for a threaded httpd all
 too easily.

That is why we allow people to control how many requests can be sent on
the same connection.  Or, you can just have a limit on the number of
file descriptors that you are willing to buffer.  And, the pipe_read
function should be smart enough that if we don't get any data off of the
pipe, for say 30 seconds, then we flush whatever data we currently have.
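
A rough sketch of the "limit on the number of file descriptors you are willing to buffer" idea (Python toy model; the cap of 16 and all names here are invented for illustration):

```python
MAX_SETASIDE_FDS = 16   # invented cap; the mail only says "a limit"

def buffer_response(setaside_fds, completed_writes, fd):
    """Set a file bucket aside across keepalive requests, but bound the
    number of descriptors held open; at the cap, flush what we have
    (one big writev, then close the flushed descriptors)."""
    if len(setaside_fds) >= MAX_SETASIDE_FDS:
        completed_writes.append(list(setaside_fds))
        setaside_fds.clear()
    setaside_fds.append(fd)

fds, writes = [], []
for fd in range(40):                  # pipeline of 40 tiny file responses
    buffer_response(fds, writes, fd)
assert [len(w) for w in writes] == [16, 16]   # flushed at the cap, twice
assert len(fds) == 8                  # remainder waits for EAGAIN/timeout
```

The timed flush mentioned above would be a second trigger on the same buffer, so a stalled pipe never holds data (or descriptors) indefinitely.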

 That's not the problem that I care about.  The problem that matters
 is the one that happens in the real world, as a side-effect of the
 core_output_filter() code trying to be too clever:
   - Client opens a keepalive connection to httpd
   - Client requests a file smaller than 8KB
   - core_output_filter(), upon seeing the file bucket followed by
EOS,
 decides to buffer the output because it has less than 8KB total.
   - There isn't another request ready to be read (read returns
EAGAIN)
 because the client isn't pipelining its connections.  So we then
 do the writev of the file that we've just finished buffering.
 
 But, this case only happens if the headers + the file are less than
8k.
 If the file is 10k, then this problem doesn't actually exist at all.
 As I said above, there are better ways to fix this than removing all
 ability to pipeline responses.
 
 
 We're not removing the ability to pipeline responses.

You are removing a perfectly valid optimization to stop us from sending
a lot of small packets across pipelined responses.

Ryan




Re: Apache 2.0 Numbers

2002-06-24 Thread Paul J. Reder

...as in stick a fork in it, it's 'DONE'. ;)

Rasmus Lerdorf wrote:

As it happens, DONE is defined to be -2.   :-)

 
 Ok, I will use that, but 'DONE' doesn't really give the impression of
 being a fatal error return value.
 
 -Rasmus
 
 
 


-- 
Paul J. Reder
---
The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure.
-- Albert Einstein





PHP as a filter (was: Apache 2.0 Numbers)

2002-06-24 Thread Greg Stein

On Sun, Jun 23, 2002 at 08:52:09PM -0700, Ian Holsman wrote:
...
 The main difference that I can see is that php is using a filter.
 i'd say that php's performance would increase to 1.3 numbers when
 they write their sapi interface as handler NOT a filter.
 until then php's performance will always be worse in 2.0, as they
 are using the wrong set of hooks (filters) to communicate with 2.0.

No frickin' way. PHP is a filter, not a handler. Something else generates
the content, then PHP parses it and executes it.

If you make PHP a handler, then you will *only* be able to run PHP on
content that *it* comes up with. You'll never be able to run PHP on custom
data sources.

Concrete example? Sure. Subversion stores its content in a custom, versioned
repository. When a request is made, SVN generates the content from its data
store. Since PHP is a filter this means that we can serve up .php files
right out of a Subversion repository! How frickin' cool is that?!

[ I also know some guys are working on a MySQL backend; it would kick ass to
  store .php files in the MySQL database, and PHP-process them as they get
  yanked out of there. ]

It would be a sad, sad, day if PHP ever reverted back to a handler rather
than operated as a filter.

If it was a handler, it would have to know about every single data store out
there that somebody might want to use for producing Apache content. I doubt
they want to take that on :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



Re: PHP as a filter (was: Apache 2.0 Numbers)

2002-06-24 Thread Ian Holsman

Greg Stein wrote:
 On Sun, Jun 23, 2002 at 08:52:09PM -0700, Ian Holsman wrote:
 
...
The main difference that I can see is that php is using a filter.
i'd say that php's performance would increase to 1.3 numbers when
they write their sapi interface as handler NOT a filter.
until then php's performance will always be worse in 2.0, as they
are using the wrong set of hooks (filters) to communicate with 2.0.
 
 
 No frickin' way. PHP is a filter, not a handler. Something else generates
 the content, then PHP parses it and executes it.
 

It should also be a handler because in the normal case it just reads a
flat file off the disk, so that in the general case you won't get the
slowdown penalty for the people who are going to be using it on top of
something else.

I'm not saying that you shouldn't have a filter mode.. just that you 
should have a handler mode as well. (and so should mod-include IMHO)

--Ian

 If you make PHP a handler, then you will *only* be able to run PHP on
 content that *it* comes up with. You'll never be able to run PHP on custom
 data sources.


 
 Concrete example? Sure. Subversion stores its content in a custom, versioned
 repository. When a request is made, SVN generates the content from its data
 store. Since PHP is a filter this means that we can serve up .php files
 right out of a Subversion repository! How frickin' cool is that?!
 
 [ I also know some guys are working on a MySQL backend; it would kick ass to
   store .php files in the MySQL database, and PHP-process them as they get
   yanked out of there. ]
 
 It would be a sad, sad, day if PHP ever reverted back to a handler rather
 than operated as a filter.
 
 If it was a handler, it would have to know about every single data store out
 there that somebody might want to use for producing Apache content. I doubt
 they want to take that on :-)
 
 Cheers,
 -g
 






Re: PHP as a filter (was: Apache 2.0 Numbers)

2002-06-24 Thread Aaron Bannert

On Mon, Jun 24, 2002 at 09:15:41PM -0700, Ian Holsman wrote:
 No frickin' way. PHP is a filter, not a handler. Something else generates
 the content, then PHP parses it and executes it.
 
 It should also be a handler because in the normal case it just reads a 
 flat file off the disk.
 so that in the general case you won't get the slowdown penalty for the
 people who are going to be using it on top of something else.
 
 I'm not saying that you shouldn't have a filter mode.. just that you 
 should have a handler mode as well. (and so should mod-include IMHO)

In an ideal world, the filter system would make the distinction
transparent, both in terms of the API and the performance. I think
we should make this our long-term goal.

-aaron



Re: PHP as a filter (was: Apache 2.0 Numbers)

2002-06-24 Thread William A. Rowe, Jr.

At 11:15 PM 6/24/2002, Ian Holsman wrote:
Greg Stein wrote:

No frickin' way. PHP is a filter, not a handler. Something else generates
the content, then PHP parses it and executes it.

It should also be a handler because in the normal case it just reads a 
flat file off the disk.
so that in the general case you won't get the slowdown penalty for the
people who are going to be using it on top of something else.

You miss the point altogether.

The handler is the core FILESYSTEM handler itself.  Why on earth should
php worry about the distinction?  Whether it resides in the filesystem, the
sql database, the .tar archive or an SVN repository is all beside the point.

To the extent that 2.0 provides faux-handlers that make setup simpler for
old users to understand, fine.  We have that.  By 2.1 I sure as heck hope
we rip out those kludges.

The problem today is that the zend engine parses a file, not a stream.  That's
actually only partly true: zend is set up for streaming, and IIUC there are
just a few bits missing to actually serve the content straight into PHP's zend
engine from the brigade.  There should be no significant slowdown if the
brigade mechanics interact correctly.

Creating both a filter and handler gives you two opportunities for bugs,
security holes, exploits and the rest.  Code that works in a single modality
for multiple uses should be written in the single cleanest possible way.

Bill




Apache 2.0 Numbers

2002-06-23 Thread Rasmus Lerdorf

Someone asked me for numbers when I mentioned the other day that Apache
2-prefork was really not a viable drop-in replacement for Apache 1.3 when
it comes to running a PHP-enabled server.

Apache 1.3 is still significantly faster than Apache2-prefork for both
static and dynamic content.  Now, part of the blame goes to PHP here for
the dynamic case. We are compiling PHP in threadsafe mode when building
the PHP DSO for Apache2-prefork which is not necessary. It would be nice
if there was an apxs flag that would return the MPM type. Right now we
would need to parse the output of httpd -l or -V to figure out which MPM
is being used. Being able to go non-threadsafe in PHP does speed us up a
bit. But none of this has anything to do with the fact that Apache 1.3 is
faster for static files.  It's going to be very hard to convince people to
switch to Apache2-prefork if we can't get it to go at least as fast as 1.3
for simple static files.

Platform: Linux 2.4.19-pre8, glibc 2.2.5, gcc-2.96, P3-800, 128M
Tested using ab from the httpd-2.0 tree with these flags: -c 5 -n 5 -k

1024-byte file which looked like this:

<html>
<head><title>Test Document.</title>
<body>
<h1>Test Document.</h1>
<p>
This is a 1024 byte HTML file.<br />
aa<br />
bb<br />
cc<br />
dd<br />
ee<br />
ff<br />
gg<br />
hh<br />
ii<br />
jj<br />
kk<br />
ll<br />
mm<br />
nn<br />
oo<br />
pp<br />
qq<br />
rr<br />
ss<br />
tt<br />
uu<br />
vv<br />
ww<br />
xx<br />
</p>
</body>
</html>

The PHP version was:

<html>
<head><title>Test Document.</title>
<body>
<h1>Test Document.</h1>
<p>
<?='This is a 1024 byte HTML file.'?><br />
aa<br />
bb<br />
cc<br />
dd<br />
ee<br />
ff<br />
gg<br />
hh<br />
ii<br />
jj<br />
kk<br />
ll<br />
mm<br />
nn<br />
oo<br />
pp<br />
qq<br />
rr<br />
ss<br />
tt<br />
uu<br />
vv<br />
ww<br />
xx<br />
</p>
</body>
</html>

Note the fact that the Apache2 static test produced the wrong number of
total bytes.  3072 bytes too many???  Where in the world did they come
from?  The PHP test on Apache2 produced the correct number as did both
static and PHP on Apache1.


Apache 2 PreFork

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15
StartServers 5
MinSpareServers  5
MaxSpareServers 10
MaxClients  15
MaxRequestsPerChild  0

STATIC

Concurrency Level:  5
Time taken for tests:   23.793270 seconds
Complete requests:  5
Failed requests:0
Write errors:   0
Keep-Alive requests:49511
Total transferred:  66681859 bytes
HTML transferred:   51203072 bytes  <=== Uh?
Requests per second:2101.43 [#/sec] (mean)
Time per request:   2.379 [ms] (mean)
Time per request:   0.476 [ms] (mean, across all concurrent requests)
Transfer rate:  2736.87 [Kbytes/sec] received

PHP

Concurrency Level:  5
Time taken for tests:   125.831896 seconds
Complete requests:  5
Failed requests:0
Write errors:   0
Keep-Alive requests:0
Total transferred:  6325 bytes
HTML transferred:   5120 bytes
Requests per second:397.36 [#/sec] (mean)
Time per request:   12.583 [ms] (mean)
Time per request:   2.517 [ms] (mean, across all concurrent requests)
Transfer rate:  490.87 [Kbytes/sec] received


Apache 1.3
--
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout   15
MinSpareServers 5
MaxSpareServers10
StartServers5
MaxClients 15
MaxRequestsPerChild 0
---
STATIC
---
Concurrency Level:  5
Time taken for tests:   19.735772 seconds
Complete requests:  5
Failed requests:0
Write errors:   0
Keep-Alive requests:49507
Total transferred:  

Re: Apache 2.0 Numbers

2002-06-23 Thread Brian Pane

On Sun, 2002-06-23 at 18:58, Rasmus Lerdorf wrote:

 Someone asked me for numbers when I mentioned the other day that Apache
 2-prefork was really not a viable drop-in replacement for Apache 1.3 when
 it comes to running a PHP-enabled server.
 
 Apache 1.3 is still significantly faster than Apache2-prefork for both
 static and dynamic content.

Most of the static benchmarks that I've seen posted to dev@httpd
(including my own tests on Solaris and Linux) indicate otherwise.

And for dynamic content, it's tough to make any generalization that
one httpd release is faster than another, because the performance
depends heavily on one's choice of content generation engine.

 Now, part of the blame goes to PHP here for
 the dynamic case. We are compiling PHP in threadsafe mode when building
 the PHP DSO for Apache2-prefork which is not necessary.

You'll definitely see slow performance with PHP and httpd-2.0.
I know of two major factors that contribute to this:

  * mod_php is using malloc and free quite a bit.

  * PHP's nonbuffered output mode produces very small socket writes
with Apache 2.0.  With 1.3, the httpd's own output buffering
alleviated the problem.  In 2.0, where the PHP module splits
long blocks of static text into 400-byte segments and inserts
a flush bucket after every bucket of data that it sends to the
next filter, the result is a stream of rather small packets.
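That effect is easy to see with a toy model (Python; the 400-byte segment size comes from the paragraph above and the 8KB coalescing threshold from core_output_filter — the function name is invented):

```python
def socket_writes(chunks, flush_after_each):
    """Toy model: sizes of the writes that reach the socket.  A flush
    bucket forces the pending buffer out immediately; otherwise chunks
    coalesce until the (illustrative) 8KB threshold is reached."""
    writes, buffered = [], 0
    for n in chunks:
        buffered += n
        if flush_after_each or buffered >= 8192:
            writes.append(buffered)
            buffered = 0
    if buffered:
        writes.append(buffered)
    return writes

page = [400] * 40                       # 16KB page in 400-byte segments
assert socket_writes(page, flush_after_each=True) == [400] * 40   # 40 tiny packets
assert len(socket_writes(page, flush_after_each=False)) == 2      # coalesced
```

Same bytes on the wire either way; flush buckets just dictate how many packets carry them.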

 It would be nice
 if there was an apxs flag that would return the MPM type.

+1

 Right now we
 would need to parse the output of httpd -l or -V to figure out which MPM
 is being used. Being able to go non-threadsafe in PHP does speed us up a
 bit. But none of this has anything to do with the fact that Apache 1.3 is
 faster for static files.  It's going to be very hard to convince people to
 switch to Apache2-prefork if we can't get it to go at least as fast as 1.3
 for simple static files.

For what it's worth, I just tried the test case that you posted.  On my
test system, 2.0 is faster when I run ab without -k, and 1.3 is faster
when I run with -k.

--Brian





Re: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers

2002-06-23 Thread Aaron Bannert

On Mon, Jun 24, 2002 at 01:07:48AM -0400, Cliff Woolley wrote:
 Anyway, what I'm saying is: don't make design decisions of this type based
 only on the results of an ab run.

+1

I think at this point ab should have the ability to interleave issuing
new connections, handling current requests, and closing finished requests,
but it would be nice if someone else could make sure.

If I get a chance I'll try to run flood against the two scenarios --
it tends to get around some of the concurrency problems we see with ab
but at the expense of scalability.

-aaron




Re: core_output_filter buffering for keepalives? Re: Apache 2.0 Numbers

2002-06-23 Thread Bill Stoddard



 Bill Stoddard wrote:
 .

  So changing the AP_MIN_BYTES_TO_WRITE just moves the relative position of
  the write() and the check for a pipelined read.
 

 It has one other side-effect, though, and that's what's bothering me:
 In the case where core_output_filter() decides to buffer a response because
 it's smaller than 8KB, the end result is to turn:
 sendfile
 into:
 mmap
 memcpy
 munmap
 ... buffer some more requests' output until we have 8KB ...
 writev

 ...


Yack... just noticed this too. This renders the fd cache (in mod_mem_cache) virtually
useless.  Not sure why we cannot set aside an fd.

Bill