Larry Garfield wrote:
> On Sunday 20 December 2009 10:45:45 am Daniel Kolbo wrote:
>> Hello PHPers,
>> This is a two part question:
>> 1) Is it faster to include one file with lots of code, or many separate
>> smaller individual files?  Assume the one massive file is merely the
>> concatenation of all the smaller individual files.  (I am assuming the
>> one massive file would be faster..., but I wanted to get confirmation).
> Conventional wisdom is that the one big file is faster, since it requires one 
> disk I/O hit instead of several.  HOWEVER, if you're only using a small 
> portion of that code then it could be faster to load only the code you really 
> need.  Where the trade-off lies varies with your architecture, the amount of 
> code, and how good the disk caching of your OS is.
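To make that trade-off concrete, here's a rough microbenchmark sketch (the file names, counts, and temp-dir layout are all made up for illustration; real results depend heavily on OS disk caching and whether an opcode cache is loaded):

```php
<?php
// Sketch: compare including one concatenated file against many small files.
// Each small file just appends its index to a global array, so the big file
// can be a straight concatenation without redeclaration errors.
$dir = sys_get_temp_dir() . '/inc_bench_' . getmypid();
mkdir($dir);

$parts = [];
for ($i = 0; $i < 50; $i++) {
    $body = "\$GLOBALS['acc'][] = $i;";
    file_put_contents("$dir/part$i.php", "<?php $body\n");
    $parts[] = $body;
}
// The "one massive file" is merely the concatenation of the small ones.
file_put_contents("$dir/big.php", "<?php\n" . implode("\n", $parts) . "\n");

$GLOBALS['acc'] = [];
$t0 = microtime(true);
include "$dir/big.php";                 // one open, one parse
$tBig = microtime(true) - $t0;

$GLOBALS['acc'] = [];
$t0 = microtime(true);
for ($i = 0; $i < 50; $i++) {
    include "$dir/part$i.php";          // fifty separate opens and parses
}
$tSmall = microtime(true) - $t0;

printf("big: %.6fs  small: %.6fs\n", $tBig, $tSmall);

// Clean up the temporary files.
array_map('unlink', glob("$dir/*.php"));
rmdir($dir);
```

Run it a few times warm vs. cold; on a warm OS cache the gap tends to shrink, which is exactly the "how good the disk caching of your OS is" point above.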
>> 2) Suppose php has to invoke the include function 100 times.  Suppose
>> all files are on average the same size and contain the same number of
>> instructions.  Would it be faster to include the same exact file 100
>> times as opposed to 100 different file names?  Basically, does the
>> engine/parser take any shortcuts if it notices that the file name has
>> already been included once?
> I'm pretty sure that PHP will recognize that it's already parsed that file 
> and 
> keep the opcode caches in memory, so it needn't hit disk again.  I've not 
> checked into that part of the engine, though, so I may be wrong there.

Thanks for the reply.

For 2): I've often searched for PHP parsing documentation. I love the
manual, but I have yet to find an excellent source documenting the PHP
parser/engine. My searches always lead to the Zend website, and I can't
seem to get very far from that page. Any suggestions on where I could
learn more of the nitty-gritty details of PHP/Zend behaviour?


PHP General Mailing List