Brian Shire wrote:
> 
> On Dec 3, 2007, at 2:17 PM, Stanislav Malyshev wrote:
> 
>>> I am a developer on a CMS also which uses the auto-include
>>> functionality to
>>> include many classes over many files. Each request can include up to 30
>>> different files.  The speed increase is around the 15% mark when
>>> combining
>>> the files.  This is with APC installed too.
>>
>> Can you provide some benchmark setups that this could be researched -
>> i.e. describe what was benchmarked and how to reproduce it?
>>
> 
> I've seen this come up before internally at Facebook.  Many people do a
> microtime() test within their code and consider this a definitive
> benchmark of how fast their script runs.  Unfortunately this excludes a
> lot of work that's done prior to execution.  Typically we see people
> claiming gains from combining files when in actuality they were just
> excluding the compilation time from their benchmark by moving the
> compilation done via include() to before the initial script begins
> executing.  When measuring this type of optimization one really must
> measure from outside PHP, using something like Apache Bench, to get an
> idea of the big picture.  I think trying to optimize these also
> presumes that you're already running a bytecode cache etc.

Hi Brian and Stas,

I hate to say it, but it is somewhat condescending to assume that the
benchmarks were done with microtime().  I spent about 15 hours of my
time designing a very complex, carefully constructed benchmark, and yes,
I ran it with Apache Bench.  In addition, I ran the benchmark with no
APC, with APC, and with APC and apc.stat=0.  The benchmark in question
compared require_once and include with full paths against a single
combined file.  In the best case, I got a 12% performance difference
between include with full paths under apc.stat=0 and a single file.
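For anyone who wants to reproduce this, a whole-request measurement with
Apache Bench looks something like the following (the URL, request count,
and concurrency are placeholders, not my actual test setup):

```shell
# ab times the full request - parse, compile, include resolution, and
# execution - which an in-script microtime() timer would partly miss.
# -n is the total number of requests, -c the concurrency level.
ab -n 1000 -c 10 http://localhost/index.php
```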

An earlier benchmark compared a single file to using both require_once
and dirname(__FILE__) - a real performance killer that resulted in a 19%
difference without APC, and a 30% difference with APC.
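To be concrete, this is the pattern in question - a sketch only, with
hypothetical paths:

```php
<?php
// dirname() and the string concatenation run on every single request,
// and the computed path has to be resolved before the include can hit
// the cache - this is the expensive form:
require_once dirname(__FILE__) . '/lib/foo.php';

// versus an include with a hard-coded absolute path, which APC with
// apc.stat=0 can serve without extra path resolution or stat() calls:
include '/var/www/app/lib/foo.php';
```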

Oh, and before anyone gets any ideas about my competence: Stas tried the
same benchmark in Zend's ultra-high-tech lab and got the same results.
These are not some loser's microtime() benchmarks.

What is particularly irksome about this whole nightmare is the
combination of a "prove it, you little peon" attitude and the fact that
it doesn't really matter what evidence this little peon presents - the
decision appears to have already been made without any debate or
interest in the work.  At first I thought it was the annoyance of having
to come up with a patch, but I have also provided patches complete with
.phpt tests.  If the decision is to ignore input, I would really rather
someone just say "piss off" instead of letting me waste several months
patiently proving that there *is* a performance difference that can
matter, just so it can be dismissed without consideration, with vague
references to "it's probably really only a 5% difference."

Then I wouldn't have to waste more time writing messages like this one
that say: I've already proven there's a performance difference; the ball
is in *your* court to prove (with a benchmark) that I am wrong.

Greg

-- 
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php