Well, I'm not a web developer and have never used mod_perl, but there is
an article on Perl.com describing how they built e-toys with mod_perl.
I think it was the 3rd-busiest web site at the time.  I don't know if the
hardware even compares, but you can take a look if you haven't already.


On 24-Oct-01 Scott R. Godin wrote:
> here's a missive fired off by the site admin after he "benchmarked" two 
> scripts, one written in php and one written in perl/cgi
> 
>> 
>> First of all.
>> 
>> Dude, you're out of your mind.  I'm serious. 
>> 
>> The WHOLE point about why PHP is faster than Perl is because the
>> interpreter is compiled into Apache. 
> 
> he's not running a perl interpreter compiled into Apache.
> 
>> Also, the fact that your script caused MySQL to use up all of its
>> connections has -nothing- to do with PHP being compiled into Apache. 
> 
> I find this terribly difficult to believe, but I'm willing to post my 
> cgi for review both here and in the DBI list
> 
>> We simply do -not- have the hardware to run mod_perl.   With our level of
>> traffic, we would need a load balanced cluster to handle this.  
>> 
>> We skewed nothing.  We ran the same apache bench command for both
>> scripts.  Same number of concurrent requests, same number of times. Do not
>> confuse ApacheBench with some useless little tool. This is for serious
>> server benchmarking.  The fact that we're using gemini table types and
>> your database tables are indexed just further shows the limitations of
>> Perl.
>> 
>> You simply don't get it.  PHP, in all of its forms, blows perl out of the
>> water.  I've been writing both since early 1993, and in every case, in
>> every instance, PHP crushes perl for speed.  That's -why- it was created
>> (built-in interpreter).  That's -why- Zend released the PHP4 engine.  
>> That's -why- we're running Zend Optimizer. If perl was the shit for doing
>> CGI, why would anyone even bother creating things like PHP?  That's like
>> the Chewbacca website.  It makes no sense.
>> 
>> mod_perl is a resource pig. I refuse to install something on a server that
>> will make life miserable for everyone else. I've seen GHz machines hauled
>> off of their foundations because of mod_perl, while the same server
>> running PHP code has no problems whatsoever.
> 
> I responded with certain information along these lines: 
> 
> -=-
>> > If you use CGI.pm in many of your mod_perl scripts, you may want to
>> > preload CGI.pm and its methods at server startup time. To do this,
>> > add the following line to httpd.conf:
>> > 
>> > PerlRequire /home/httpd/conf/startup.pl
>> > 
>> > Create the file /home/httpd/conf/startup.pl and put in it all the
>> > modules you want to load. Include CGI.pm among them and call its
>> > compile() method to precompile its autoloaded methods.
>> > 
>> > #!/usr/local/bin/perl
>> > 
>> > use CGI ();
>> > CGI->compile(':all');
>> > 
>> > Change the path to the startup script according to your preferences. 
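>> 
>> For what it's worth, a fuller startup.pl along those lines might look
>> roughly like this (the module list is only an example; Apache::DBI is the
>> usual way to get persistent database handles under mod_perl):
>> 
>>     #!/usr/local/bin/perl
>> 
>>     use strict;
>> 
>>     # preload the heavyweight modules once, in the parent httpd,
>>     # so every child starts with them already compiled
>>     use Apache::DBI ();   # load before DBI so connect()s get cached/reused
>>     use DBI ();
>> 
>>     use CGI ();
>>     CGI->compile(':all'); # precompile CGI.pm's autoloaded methods
>> 
>>     1;                    # PerlRequire expects the file to return true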
>> 
>> if you're gonna benchmark, at least do it right. 
>> 
>> don't flap statistics in my face when you've tied sandbags around the feet
>> of all my peasants and shot each one in the foot as well, please.
>> 
>> Also, Yoda's script is not performing (and from what I can see, cannot
>> perform) the same query mine is (again skewing the 'benchmark')
>> 
>> searching for 'c' with his script does not return even the same list of
>> maps mine does. I feel that a certain degree of *accuracy* is also
>> important in a benchmark. 
>> 
>> I've also gone to the trouble of doing things such as this:
>> 
>>     my( $type, $id, $filename, $title, $size, $reviewfile, $rating,
>>         $rated, $oldtype );
>>     $sth->bind_columns( \$type, \$id, \$filename, \$title, \$size,
>>                         \$reviewfile, \$rating );
>> 
>> which binds each fetched row directly into the same set of scalars, to
>> save memory and processing while looping through the fetch instead of
>> thrashing the symbol table.
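>> 
>> the fetch loop itself then comes down to something along the lines of
>> 
>>     while ( $sth->fetch ) {
>>         # the bound scalars are refreshed in place for every row
>>         # (illustrative body -- the real script builds its map list here)
>>         print qq{<a href="$filename">$title</a> ($size, rated $rating)\n};
>>     }
>> 
>> rather than copying each row out into fresh variables every pass.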
>> 
>> and other things like this
>> 
>> # return an error status if CGI.pm itself reported an error
>> if ( !param() && cgi_error() ) {
>>     print header( -status => cgi_error() );
>>     goto FINISH;   # don't call exit 0; !!! (unless you LIKE killing your
>>                    # perl process over and over, ass-hat) :P
>> }
>> 
>> and
>> 
>> if ( $search_obj eq '' ) {
>>     # skip the database query
>>     print end_html;
>>     goto FINISH;   # don't call exit 0!
>> }
>> 
>> to prevent stupidity like killing the mod_perl process 
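>> 
>> (the FINISH label is nothing exotic -- it's just the tail end of the
>> script, roughly:
>> 
>>     FINISH:
>>     $sth->finish     if $sth;
>>     $dbh->disconnect if $dbh;
>> 
>> so the request falls off the end cleanly instead of exit()ing and killing
>> the perl process.)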
>> 
>> so, 
>> A> the 'benchmark' is invalidated by virtue of not performing the exact
>> same query against the MySQL server. (That shoots the benchmark down right
>> there. =:P If the query string isn't exactly the same for both tests, how
>> can you call it a benchmark?)
>> 
>> B> the 'benchmark' is additionally character-assassinated by not running a
>> persistent perl, CGI.pm, and DBI process the way you DO have a persistent
>> php and php-with-mysql process, which forces perl, CGI.pm, and DBI to
>> re-launch and recompile themselves on every request. (what kind of results
>> DID you expect from something like that? =:P) See the httpd.conf sketch
>> after point C.
>> 
>> C> the accuracy of the results each script's query returns is also in
>> question. (compare the result counts the two scripts report -- how many
>> maps each one says the query returned -- and tell me something isn't wrong
>> with one of them. =:P)
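>> 
>> (and by "a persistent perl process" in point B I mean nothing more exotic
>> than running the script under Apache::Registry -- roughly this in
>> httpd.conf, with paths only as an example:
>> 
>>     Alias /perl/ /home/httpd/perl/
>>     <Location /perl>
>>         SetHandler perl-script
>>         PerlHandler Apache::Registry
>>         Options +ExecCGI
>>         PerlSendHeader On
>>     </Location>
>> 
>> so perl, CGI.pm, DBI, and the script itself stay loaded in the apache
>> children between requests, the same way your in-process PHP does.)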
>> 
>> try again.
> -=-
> 
> here's the "benchmark" result he returned to me
> 
> We benched your script using ApacheBench.  We let it run with 100
> concurrent connections.  A couple of things happened.
> 
> 1.  MySQL died with a "too many connections" error.  Our forums, with 150
> users on them, can't even do that.
> 
> 2.  The load average on the box jumped to 11.  Not 1 or 2.  11.  
> 
> 3.  Yoda has written a PHP search engine which already incorporates all of
> the advanced features for NC.  His script ran 229 times faster than yours,
> and the load average never moved above 0.5.  MySQL was also perfectly
> fine after being benched under the same conditions.
> 
> I strongly recommend at this point that you do not use Perl for
> Nalicity.  I have included the results of our benchmarks (performed by
> Chris), so you can see for yourself.  This sort of load would be
> unacceptable in the BU environment.
> 
> Begin ab log:
> 
> Yoda's version:
> [root@beyondunreal bin]# ./ab -n100 -c10 -k http://nalicity.beyondunreal.com/testbed/news2.php?executesearch=1&searchby_titles=on&tSearchText=c&sortby=2&sorttype=1
> This is ApacheBench, Version 1.3c <$Revision: 1.45 $> apache-1.3
> Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
> Copyright (c) 1998-2000 The Apache Group, http://www.apache.org/
> 
> Server Software:        Apache/1.3.22
> Server Hostname:        nalicity.beyondunreal.com
> Server Port:            80
> 
> Document Path:          /testbed/news2.php?executesearch=1
> Document Length:        769 bytes
> 
> Concurrency Level:      10
> Time taken for tests:   0.574 seconds
> Complete requests:      100
> Failed requests:        0
> Keep-Alive requests:    0
> Total transferred:      97206 bytes
> HTML transferred:       78438 bytes
> Requests per second:    174.22
> Transfer rate:          169.35 kb/s received
> 
> Connnection Times (ms)
>               min   avg   max
> Connect:        1     5    17
> Processing:    20    44   244
> Total:         21    49    261
> 
> Fuzzbuster's version:
> [root@beyondunreal bin]# ./ab -n100 -c10 -k http://nalicity.beyondunreal.com/cgi-bin/simplesearch.cgi?searchfor=c
> This is ApacheBench, Version 1.3c <$Revision: 1.45 $> apache-1.3
> Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
> Copyright (c) 1998-2000 The Apache Group, http://www.apache.org/
> 
> Server Software:        Apache/1.3.22
> Server Hostname:        nalicity.beyondunreal.com
> Server Port:            80
> 
> Document Path:          /cgi-bin/simplesearch.cgi?searchfor=c
> Document Length:        698276 bytes
> 
> Concurrency Level:      10
> Time taken for tests:   115.382 seconds
> Complete requests:      100
> Failed requests:        0
> Keep-Alive requests:    0
> Total transferred:      71242100 bytes
> HTML transferred:       71215208 bytes
> Requests per second:    0.87
> Transfer rate:          617.45 kb/s received
> 
> Connnection Times (ms)
>               min   avg   max
> Connect:        1   122  3017
> Processing:  9771 11087 11502
> Total:       9772 11209 14519
> 
> I'm willing to post my script here to see if any of you can tell me what,
> if anything, I did wrong with MY script that could have caused MySQL to
> die with "too many connections", or whether this is a problem with DBI
> and DBD::mysql in their present form. 
> 
> I await your response. (with heavy sighs and a great deal of frustration)
> 
> -- 
> Scott R. Godin            | e-mail : [EMAIL PROTECTED]
> Laughing Dragon Services  |    web : http://www.webdragon.net/

----------------------------------
E-Mail: Scott T. Hildreth <[EMAIL PROTECTED]>
Date: 24-Oct-01
Time: 16:53:03
----------------------------------
