Re: [PHP] How does the Zend engine behave?

2006-10-27 Thread Rasmus Lerdorf
Sean Pringle wrote:
 The Caching systems such as Zend Cache (not the Optimizer), MMCache,
 APC, etc are expressly designed to store the tokenized version of the
 PHP script to be executed.

 Note that their REAL performance savings is actually in loading from
 the hard drive into RAM, not actually the PHP tokenization.

 Skipping a hard drive seek and read is probably at least 95% of the
 savings, even in the longest real-world scripts.
 
 Interesting.  If PHP tokenization is such a small cost, what makes a
 special caching system better than the OS for caching files in memory
 after the first disk read?

APC actually executes the opcodes directly in shared memory, so unlike a
disk cache, you are not copying lots of stuff around.  APC does need to
copy some things down into process-local memory, but most can be left
where it is.  Disk caches also tend to expire pretty fast because
everything your OS does tends to touch the disk, and when your application
starts eating memory your disk cache is the first to go when things get
tight.  With a dedicated shared memory segment that won't happen.
Things will stay put.

It is also possible to run APC in no-stat mode, which means it will never
touch the disk at all.  If you are on an Intel cpu with an older OS like
freebsd4, disk-touching syscalls are massively slow and you can gain a
lot by skipping the up-to-date stat check on each request and include file.
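
For illustration, a no-stat setup looks roughly like this in php.ini
(the shm_size value is just a guess; tune it to your own code base):

extension=apc.so
apc.enabled=1
apc.shm_size=64   ; MB of shared memory for the cache (illustrative)
apc.stat=0        ; skip the per-include stat(); restart to pick up code changes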

Finally the compiler does more than just tokenize your script.  It will
cache functions and classes when it can and rewrite the opcodes so these
functions and classes won't have to be created at execution time.  Of
course, that assumes you haven't gone all framework-crazy and made
everything dynamic with autoload or weird conditional function and class
declarations.  If it becomes a runtime decision whether a class or a
function is declared, or heaven forbid, the same class takes on
different signatures based on some runtime condition, then there isn't
much the compiler can do to speed that up.
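
For example, the second function below is the kind of thing that can't
be bound at compile time (hypothetical snippet):

<?php
// Unconditional, top-level: bound once at compile time, cacheable as-is.
function add($a, $b) { return $a + $b; }

// Conditional: whether (and which) log_msg() exists is only known at
// run time, so the declaration has to be executed on every request.
if (isset($_GET['debug'])) {
    function log_msg($m) { error_log('DEBUG: ' . $m); }
} else {
    function log_msg($m) { /* no-op in production */ }
}

echo add(2, 3), "\n";
log_msg('hello');
?>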

-Rasmus

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-27 Thread Richard Lynch
On Fri, October 27, 2006 12:11 am, Sean Pringle wrote:
 The Caching systems such as Zend Cache (not the Optimizer), MMCache,
 APC, etc are expressly designed to store the tokenized version of
 the
 PHP script to be executed.

 Note that their REAL performance savings is actually in loading from
 the hard drive into RAM, not actually the PHP tokenization.

 Skipping a hard drive seek and read is probably at least 95% of the
 savings, even in the longest real-world scripts.

 Interesting.  If PHP tokenization is such a small cost, what makes a
 special caching system better than the OS for caching files in memory
 after the first disk read?

Your OS caching system doesn't know the difference between data files
that may or may not get re-used, and PHP scripts, like your homepage,
that are getting the crap beat out of them when you get slash-dotted.

The OS cache can build up a history of oft-used files, and if you
have, say, a hit-song MP3 that's getting downloaded 1000 times a
second, it's probably the way to go.

But if your goal is to get your codebase into RAM so that it can load
up what seems like a random distribution of data, a code cache is the
way to go.

If you're not sure what the hell you're doing, use both and play with
the parameters until it's fast enough, then focus on something else.

Disclaimer:
I am making this answer up with zero real-world experience, no
benchmarks, and little theoretical proof.  Test it yourself if you
want to be certain.

-- 
Some people have a gift link here.
Know what I want?
I want you to buy a CD from some starving artist.
http://cdbaby.com/browse/from/lynch
Yeah, I get a buck. So?

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-27 Thread Richard Lynch
On Thu, October 26, 2006 8:28 pm, [EMAIL PROTECTED] wrote:
 Richard Lynch wrote:
 On Wed, October 25, 2006 11:58 am, [EMAIL PROTECTED] wrote:
 Are included files ever unloaded? For instance if I had 3 include
 files
 and no loops, once execution had passed from the first include file
 to
 the second, the engine might be able to unload the first file. Or
 at
 least the code, if not the data.


 I doubt that the code is unloaded -- What if you called a function
 from the first file while you were in the second?

 I agree it's unlikely, but it's feasible if code is loaded whenever
 required. Especially if data and code are separated by the engine, and
 that's quite likely because of the garbage collection.

It's really not that practical to do that separation and unload code
objects, when the user could call any function at any time.

Particularly with eval (http://php.net/eval) and variable functions:

$function = 'myprint';                   // function name only known at run time
function myprint ($foo) { print $foo; }
$function('Hello World');                // prints Hello World

 Thanks - that's really useful - I didn't realise that the bulk of the
 saving wasn't in tokenising.

It is a very common misconception.

I think MOST people actually get this wrong on their first encounter
with a code cache.

 Yes, without a cache, each HTTP request will load a different
 script.

 Do you know if, when a cache is used, whether requests in the same
 thread use the same in-memory object. I.e. Is the script persistent in
 the thread?

Almost for sure, the in-memory tokenized version is not only shared
within a thread, but across all threads.

Otherwise, your cache would be loading hundreds of copies of each
script for all the Apache children.

The dispatcher may copy the tokenized script in order to run it with
a clean slate, but the source it uses is probably shared RAM.

At least, so I presume...

 Fifthly, if a script takes 4MB, given point 4, does the webserver
 demand
 8MB if it is simultaneously servicing 2 requests?


 If you have a PHP script that is 4M in length, you've done something
 horribly wrong. :-)

 Sort of. I'm using Drupal with lots of modules loaded. PHP
 memory_limit
 is set to 20MB, and at times 20MB is used. I think that works per
 request. All the evidence points to that. So with 10 concurrent requests,
 which is not unrealistic, it could use 200MB + webserver overhead. And
 I
 still want to combine it with another bit of software that will use 10
 to 15MB per request. It's time to think about memory usage and whether
 there are any strategies to disengage memory usage from request rate.

Almost for sure, that 20MB is never actually about the Drupal code
itself.

Loading in and manipulating an IMAGE using GD, or sucking down a
monster record set from a database, or slurping down a bloated
web-page for web-scraping and converting that to XML nodes and ...

It's not the CODE that is chewing up the bulk of your 20MB.  It's the
data.
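
If you want to see for yourself where the memory goes, a rough
(untested) check along these lines works; the include, the credentials
and the query are all made up:

<?php
// Hypothetical sketch: memory taken by loading code vs. loading data.
$start = memory_get_usage();

require_once 'all_my_modules.php';            // stand-in for your code base
$after_code = memory_get_usage();

$db = mysql_connect('localhost', 'user', 'pass');   // made-up credentials
mysql_select_db('drupal', $db);
$rows = array();
$result = mysql_query('SELECT * FROM node');  // made-up big result set
while ($row = mysql_fetch_assoc($result)) {
    $rows[] = $row;
}
$after_data = memory_get_usage();

printf("code: %d KB, data: %d KB\n",
       ($after_code - $start) / 1024,
       ($after_data - $after_code) / 1024);
?>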

I may rant about OOP code-bloat, but I don't think it's THAT bad :-)

-- 
Some people have a gift link here.
Know what I want?
I want you to buy a CD from some starving artist.
http://cdbaby.com/browse/from/lynch
Yeah, I get a buck. So?

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-26 Thread jeff . phplist



Jon Anderson wrote:
Take this with a grain of salt. I develop with PHP, but I am not an 
internals guy...


[EMAIL PROTECTED] wrote:
Are the include files only compiled when execution hits them, or are 
all include files compiled when the script is first compiled, which 
would mean a cascade through all statically linked include files. By 
statically linked files I mean ones like include ('bob.php') - i.e 
the filename isn't in a variable.
Compiled when execution hits them. You can prove this by trying to 
conditionally include a file with a syntax error: if (false) 
include('script_with_syntax_error.php'); won't cause an error.


Good idea.



Secondly, are include files that are referenced, but not used, loaded 
into memory? I.e Are statically included files automatically loaded 
into memory at the start of a request? (Of course those where the 
name is variable can only be loaded once the name has been 
determined.) And when are they loaded into memory? When the 
instruction pointer hits the include? Or when the script is initially 
loaded?
If your include file is actually included, it will use memory. If it 
is not included because of some condition, then it won't use memory.


I wonder if that's the same when a cache/optimiser is used. Probably. 
Maybe I'll check.


Are included files ever unloaded? For instance if I had 3 include 
files and no loops, once execution had passed from the first include 
file to the second, the engine might be able to unload the first 
file. Or at least the code, if not the data.
If you define a global variable in an included file and don't unset it 
anywhere, then it isn't automatically unloaded, nor are 
function/class definitions unloaded when execution is finished.


Once you include a file, it isn't unloaded later though - even 
included files that have just executed statements (no definitions 
saved for later) seem to eat a little memory once, but it's so minimal 
that you wouldn't run into problems unless you were including many 
thousand files. Including the same file again doesn't eat further 
memory. I assume the eaten memory is for something to do with 
compilation or caching in the ZE.
Thirdly, I understand that when a request arrives, the script it 
requests is compiled before execution. Now suppose a second request 
arrives for the same script, from a different requester, am I right 
in assuming that the uncompiled form is loaded? I.e the script is 
tokenized for each request, and the compiled version is not loaded 
unless you have engine level caching installed - e.g. MMCache or Zend 
Optimiser.
I think that's correct. If you don't have an opcode cache, the script 
is compiled again for every request, regardless of who requests it.


IMO, you're probably better off with PECL/APC or eAccelerator rather 
than MMCache or Zend Optimizer. I use APC personally, and find it 
exceptional - rock solid + fast. (eAccelerator had a slight 
performance edge for my app up until APC's most recent release, where 
APC now has a significant edge.)

Thanks - that's useful to know.


Fourthly, am I right in understanding that scripts do NOT share 
memory, even for the portions that are simply instructions? That is, 
when the second request arrives, the script is loaded again in full. 
(As opposed to each request sharing the executed/compiled code, but 
holding data separately.)

Yep, I think that's also correct.
Fifthly, if a script takes 4MB, given point 4, does the webserver 
demand 8MB if it is simultaneously servicing 2 requests?

Yep. Usually more, once you add webserver/PHP overhead.
Lastly, are there differences in these behaviors for PHP4 and PHP5? 
Significant differences between 4 and 5, but with regards to the 
above, I think they're more or less the same.


Thanks Jon. So in summary: use a cache if possible. Late load files to 
save memory. Buy more memory to handle more sessions. And it's 
conceivable that when caches are used, different rules may apply.
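
For the late loading part, a class autoloader is the usual trick;
something like this defers each class file until it's first used
(sketch; the directory layout and class name are made up):

<?php
// Hypothetical: classes/ReportBuilder.php is only read (and compiled)
// when ReportBuilder is first used, not at the top of every request.
function __autoload($class) {
    require_once 'classes/' . $class . '.php';
}

$report = new ReportBuilder();   // file loaded here
$report->run();                  // run() is a made-up method
?>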


Jeff

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-26 Thread Richard Lynch
On Wed, October 25, 2006 11:58 am, [EMAIL PROTECTED] wrote:
 Are the include files only compiled when execution hits them, or are
 all
 include files compiled when the script is first compiled, which would
 mean a cascade through all statically linked include files. By
 statically linked files I mean ones like include ('bob.php') - i.e
 the
 filename isn't in a variable.

As far as I know, the files are only loaded as execution hits them.

If your code contains:

<?php
  if (0){
    require 'foo.inc';
  }
?>

Then foo.inc will never ever be read from the hard drive.

You realize you could have tested this in less time than it took you
to post, right?
:-)

 Are included files ever unloaded? For instance if I had 3 include
 files
 and no loops, once execution had passed from the first include file to
 the second, the engine might be able to unload the first file. Or at
 least the code, if not the data.

I doubt that the code is unloaded -- What if you called a function
from the first file while you were in the second?

 Thirdly, I understand that when a request arrives, the script it
 requests is compiled before execution. Now suppose a second request
 arrives for the same script, from a different requester, am I right in
 assuming that the uncompiled form is loaded? I.e the script is
 tokenized
 for each request, and the compiled version is not loaded unless you
 have
 engine level caching installed - e.g. MMCache or Zend Optimiser.

You are correct.

The Caching systems such as Zend Cache (not the Optimizer), MMCache,
APC, etc are expressly designed to store the tokenized version of the
PHP script to be executed.

Note that their REAL performance savings is actually in loading from
the hard drive into RAM, not actually the PHP tokenization.

Skipping a hard drive seek and read is probably at least 95% of the
savings, even in the longest real-world scripts.

The tokenizer/compiler thingie is basically easy chump change they
didn't want to leave on the table, rather than the bulk of the
performance win.

I'm sure somebody out there has a perfectly reasonable million-line
PHP script, for a valid reason, where the tokenization is more than 5%
of the savings, but that's going to be a real rarity.

 Fourthly, am I right in understanding that scripts do NOT share
 memory,
 even for the portions that are simply instructions? That is, when the
 second request arrives, the script is loaded again in full. (As
 opposed
 to each request sharing the executed/compiled code, but holding data
 separately.)

Yes, without a cache, each HTTP request will load its own copy of the
script.

 Fifthly, if a script takes 4MB, given point 4, does the webserver
 demand
 8MB if it is simultaneously servicing 2 requests?

If you have a PHP script that is 4M in length, you've done something
horribly wrong. :-)

Of course, if it loads a 4M image file, then, yes, 2 at once needs 8M
etc.

 Lastly, are there differences in these behaviors for PHP4 and PHP5?

I doubt it.

I think APC is maybe going to be installed by default in PHP6 or
something like that, but I dunno if it will be on by default or
not...

At any rate, not from 4 to 5.


Note that if you NEED a monster body of code to be resident, you can
prototype it in simple PHP, port it to C, and have it be a PHP
extension.

This should be relatively easy to do, if you plan fairly carefully.

If a village idiot like me can write a PHP extension (albeit a
dirt-simple one) then anybody can. :-)

-- 
Some people have a gift link here.
Know what I want?
I want you to buy a CD from some starving artist.
http://cdbaby.com/browse/from/lynch
Yeah, I get a buck. So?

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-26 Thread Richard Lynch
On Thu, October 26, 2006 9:33 am, [EMAIL PROTECTED] wrote:
 If your include file is actually included, it will use memory. If it
 is not included because of some condition, then it won't use memory.

 I wonder if that's the same when a cache/optimiser is used. Probably.
 Maybe I'll check.

Almost for sure no PHP opcode cache attempts to load files that
*might* be included later.

That would be daft. :-)

They load the file, compile it, and store the resulting opcodes for
later re-execution.
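
If you're curious what the tokenizing stage actually produces, PHP
exposes it through token_get_all(); the cache stores compiled op
arrays, which are a step past this, but it gives the flavour:

<?php
// Dump the token stream for a one-liner.
$tokens = token_get_all('<?php echo 1 + 2; ?>');
foreach ($tokens as $t) {
    if (is_array($t)) {
        // array tokens are (id, text); token_name() maps id to T_ECHO etc.
        echo token_name($t[0]), ': ', trim($t[1]), "\n";
    } else {
        // single-character tokens like '+' and ';'
        echo $t, "\n";
    }
}
?>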

 Thanks Jon. So in summary: use a cache if possible. Late load files to
 save memory. Buy more memory to handle more sessions. And it's
 conceivable that when caches are used, different rules may apply.

If your PHP source scripts themselves are chewing up large chunks of
RAM, that's pretty unusual...

-- 
Some people have a gift link here.
Know what I want?
I want you to buy a CD from some starving artist.
http://cdbaby.com/browse/from/lynch
Yeah, I get a buck. So?

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-26 Thread jeff . phplist



Richard Lynch wrote:

On Wed, October 25, 2006 11:58 am, [EMAIL PROTECTED] wrote:
  

Are the include files only compiled when execution hits them, or are
all
include files compiled when the script is first compiled, which would
mean a cascade through all statically linked include files. By
statically linked files I mean ones like include ('bob.php') - i.e
the
filename isn't in a variable.



As far as I know, the files are only loaded as execution hits them.

If your code contains:

<?php
  if (0){
    require 'foo.inc';
  }
?>

Then foo.inc will never ever be read from the hard drive.

You realize you could have tested this in less time than it took you
to post, right?
:-)
  
I don't know the extent to which the engine optimises performance, and I 
know very little about how different versions of the engine deal with 
the issue, but I guessed it depended on the behaviour of the engine, 
cache and maybe optimiser, and I know/knew I don't know enough...  My 
thinking: one speed optimisation is to load an include file before it's 
needed, at the expense of memory, while continuing execution of the 
already-loaded portions. That's easy to do if the filename is static. 
The code may never be executed, but it still takes up space. Some data 
structures could also be pre-loaded in this way.



  

Are included files ever unloaded? For instance if I had 3 include
files
and no loops, once execution had passed from the first include file to
the second, the engine might be able to unload the first file. Or at
least the code, if not the data.



I doubt that the code is unloaded -- What if you called a function
from the first file while you were in the second?
  
I agree it's unlikely, but it's feasible if code is loaded whenever 
required. Especially if data and code are separated by the engine, and 
that's quite likely because of the garbage collection.


  

Thirdly, I understand that when a request arrives, the script it
requests is compiled before execution. Now suppose a second request
arrives for the same script, from a different requester, am I right in
assuming that the uncompiled form is loaded? I.e the script is
tokenized
for each request, and the compiled version is not loaded unless you
have
engine level caching installed - e.g. MMCache or Zend Optimiser.



You are correct.

The Caching systems such as Zend Cache (not the Optimizer), MMCache,
APC, etc are expressly designed to store the tokenized version of the
PHP script to be executed.

Note that their REAL performance savings is actually in loading from
the hard drive into RAM, not actually the PHP tokenization.

Skipping a hard drive seek and read is probably at least 95% of the
savings, even in the longest real-world scripts.

The tokenizer/compiler thingie is basically easy chump change they
didn't want to leave on the table, rather than the bulk of the
performance win.

I'm sure somebody out there has a perfectly reasonable million-line PHP
script, for a valid reason, where the tokenization is more than 5% of the
savings, but that's going to be a real rarity.

  
Thanks - that's really useful - I didn't realise that the bulk of the 
saving wasn't in tokenising.

Fourthly, am I right in understanding that scripts do NOT share
memory,
even for the portions that are simply instructions? That is, when the
second request arrives, the script is loaded again in full. (As
opposed
to each request sharing the executed/compiled code, but holding data
separately.)



Yes, without a cache, each HTTP request will load a different script.
  
Do you know if, when a cache is used, whether requests in the same 
thread use the same in-memory object. I.e. Is the script persistent in 
the thread?


  

Fifthly, if a script takes 4MB, given point 4, does the webserver
demand
8MB if it is simultaneously servicing 2 requests?



If you have a PHP script that is 4M in length, you've done something
horribly wrong. :-)
  
Sort of. I'm using Drupal with lots of modules loaded. PHP memory_limit 
is set to 20MB, and at times 20MB is used. I think that works per 
request. All the evidence points to that. So with 10 concurrent requests, 
which is not unrealistic, it could use 200MB + webserver overhead. And I 
still want to combine it with another bit of software that will use 10 
to 15MB per request. It's time to think about memory usage and whether 
there are any strategies to disengage memory usage from request rate.




Of course, if it loads a 4M image file, then, yes, 2 at once needs 8M
etc.

  

Lastly, are there differences in these behaviors for PHP4 and PHP5?



I doubt it.

I think APC is maybe going to be installed by default in PHP6 or
something like that, but I dunno if it will be on by default or
not...

At any rate, not from 4 to 5.


  

Thanks.


Note that if you NEED a monster body of code to be resident, you can
prototype it in simple PHP, port it to C, and have it be a PHP
extension.
  

A good idea, but not feasible in this 

Re: [PHP] How does the Zend engine behave?

2006-10-26 Thread Larry Garfield
On Thursday 26 October 2006 20:28, [EMAIL PROTECTED] wrote:

  If you have a PHP script that is 4M in length, you've done something
  horribly wrong. :-)

 Sort of. I'm using Drupal with lots of modules loaded. PHP memory_limit
 is set to 20MB, and at times 20MB is used. I think that works per
  request. All the evidence points to that. So with 10 concurrent requests,
  which is not unrealistic, it could use 200MB + webserver overhead. And I
 still want to combine it with another bit of software that will use 10
 to 15MB per request. It's time to think about memory usage and whether
 there are any strategies to disengage memory usage from request rate.

Drupal tends to use about 10 MB of memory in normal usage for a reasonable set 
of modules in my experience, but a crapload more on the admin/modules page 
because it has to load everything in order to do so.  Normally you won't hit 
that page very often. :-)

However, Drupal is deliberately friendly toward APC.  I don't recall the stats 
(I know someone made some pretty graphs at one point, but I can't find them), 
but simply throwing APC at Drupal should give you a hefty hefty performance 
boost.  I believe APC does cache-one-run-many, so the code, at least, will 
only be stored in RAM once rather than n times.  (Richard is correct, though, 
that data is generally larger than code in most apps.)

Also, search the Drupal forums for something called Split mode.  It was 
something chx was putting together a while back.  I don't know what its 
status is, but he claimed to get a nice performance boost out of it.

Cheers.

-- 
Larry Garfield  AIM: LOLG42
[EMAIL PROTECTED]   ICQ: 6817012

If nature has made any one thing less susceptible than all others of 
exclusive property, it is the action of the thinking power called an idea, 
which an individual may exclusively possess as long as he keeps it to 
himself; but the moment it is divulged, it forces itself into the possession 
of every one, and the receiver cannot dispossess himself of it.  -- Thomas 
Jefferson

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-26 Thread Sean Pringle

The Caching systems such as Zend Cache (not the Optimizer), MMCache,
APC, etc are expressly designed to store the tokenized version of the
PHP script to be executed.

Note that their REAL performance savings is actually in loading from
the hard drive into RAM, not actually the PHP tokenization.

Skipping a hard drive seek and read is probably at least 95% of the
savings, even in the longest real-world scripts.


Interesting.  If PHP tokenization is such a small cost, what makes a
special caching system better than the OS for caching files in memory
after the first disk read?

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-25 Thread Jon Anderson
Take this with a grain of salt. I develop with PHP, but I am not an 
internals guy...


[EMAIL PROTECTED] wrote:
Are the include files only compiled when execution hits them, or are 
all include files compiled when the script is first compiled, which 
would mean a cascade through all statically linked include files. By 
statically linked files I mean ones like include ('bob.php') - i.e 
the filename isn't in a variable.
Compiled when execution hits them. You can prove this by trying to 
conditionally include a file with a syntax error: if (false) 
include('script_with_syntax_error.php'); won't cause an error.
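
Spelled out as a quick test (you create script_with_syntax_error.php
yourself, with a deliberate parse error in it):

<?php
if (false) {
    include 'script_with_syntax_error.php';  // branch never runs, file never compiled: no error
}
echo "still here\n";
// Flip the condition to true and the same request dies with a parse
// error, which shows the include is only compiled when execution
// reaches it.
?>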


Secondly, are include files that are referenced, but not used, loaded 
into memory? I.e Are statically included files automatically loaded 
into memory at the start of a request? (Of course those where the name 
is variable can only be loaded once the name has been determined.) And 
when are they loaded into memory? When the instruction pointer hits 
the include? Or when the script is initially loaded?
If your include file is actually included, it will use memory. If it is 
not included because of some condition, then it won't use memory.
Are included files ever unloaded? For instance if I had 3 include 
files and no loops, once execution had passed from the first include 
file to the second, the engine might be able to unload the first file. 
Or at least the code, if not the data.
If you define a global variable in an included file and don't unset it 
anywhere, then it isn't automatically unloaded, nor are function/class 
definitions unloaded when execution is finished.


Once you include a file, it isn't unloaded later though - even included 
files that have just executed statements (no definitions saved for 
later) seem to eat a little memory once, but it's so minimal that you 
wouldn't run into problems unless you were including many thousand 
files. Including the same file again doesn't eat further memory. I 
assume the eaten memory is for something to do with compilation or 
caching in the ZE.
Thirdly, I understand that when a request arrives, the script it 
requests is compiled before execution. Now suppose a second request 
arrives for the same script, from a different requester, am I right in 
assuming that the uncompiled form is loaded? I.e the script is 
tokenized for each request, and the compiled version is not loaded 
unless you have engine level caching installed - e.g. MMCache or Zend 
Optimiser.
I think that's correct. If you don't have an opcode cache, the script is 
compiled again for every request, regardless of who requests it.


IMO, you're probably better off with PECL/APC or eAccelerator rather 
than MMCache or Zend Optimizer. I use APC personally, and find it 
exceptional - rock solid + fast. (eAccelerator had a slight performance 
edge for my app up until APC's most recent release, where APC now has a 
significant edge.)
Fourthly, am I right in understanding that scripts do NOT share 
memory, even for the portions that are simply instructions? That is, 
when the second request arrives, the script is loaded again in full. 
(As opposed to each request sharing the executed/compiled code, but 
holding data separately.)

Yep, I think that's also correct.
Fifthly, if a script takes 4MB, given point 4, does the webserver 
demand 8MB if it is simultaneously servicing 2 requests?

Yep. Usually more, once you add webserver/PHP overhead.
Lastly, are there differences in these behaviors for PHP4 and PHP5? 
Significant differences between 4 and 5, but with regards to the above, 
I think they're more or less the same.


--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] How does the Zend engine behave?

2006-10-25 Thread Larry Garfield
On Wednesday 25 October 2006 14:48, Jon Anderson wrote:
 Take this with a grain of salt. I develop with PHP, but I am not an
 internals guy...

 [EMAIL PROTECTED] wrote:
  Are the include files only compiled when execution hits them, or are
  all include files compiled when the script is first compiled, which
  would mean a cascade through all statically linked include files. By
  statically linked files I mean ones like include ('bob.php') - i.e
  the filename isn't in a variable.

 Compiled when execution hits them. You can prove this by trying to
 conditionally include a file with a syntax error: if (false)
 include('script_with_syntax_error.php'); won't cause an error.

  Secondly, are include files that are referenced, but not used, loaded
  into memory? I.e Are statically included files automatically loaded
  into memory at the start of a request? (Of course those where the name
  is variable can only be loaded once the name has been determined.) And
  when are they loaded into memory? When the instruction pointer hits
  the include? Or when the script is initially loaded?

 If your include file is actually included, it will use memory. If it is
 not included because of some condition, then it won't use memory.

  Are included files ever unloaded? For instance if I had 3 include
  files and no loops, once execution had passed from the first include
  file to the second, the engine might be able to unload the first file.
  Or at least the code, if not the data.

 If you define a global variable in an included file and don't unset it
 anywhere, then it isn't automatically unloaded, nor are function/class
 definitions unloaded when execution is finished.

 Once you include a file, it isn't unloaded later though - even included
 files that have just executed statements (no definitions saved for
 later) seem to eat a little memory once, but it's so minimal that you
 wouldn't run into problems unless you were including many thousand
 files. Including the same file again doesn't eat further memory. I
 assume the eaten memory is for something to do with compilation or
 caching in the ZE.

  Thirdly, I understand that when a request arrives, the script it
  requests is compiled before execution. Now suppose a second request
  arrives for the same script, from a different requester, am I right in
  assuming that the uncompiled form is loaded? I.e the script is
  tokenized for each request, and the compiled version is not loaded
  unless you have engine level caching installed - e.g. MMCache or Zend
  Optimiser.

 I think that's correct. If you don't have an opcode cache, the script is
 compiled again for every request, regardless of who requests it.

 IMO, you're probably better off with PECL/APC or eAccelerator rather
 than MMCache or Zend Optimizer. I use APC personally, and find it
 exceptional - rock solid + fast. (eAccelerator had a slight performance
 edge for my app up until APC's most recent release, where APC now has a
 significant edge.)

  Fourthly, am I right in understanding that scripts do NOT share

  Fifthly, if a script takes 4MB, given point 4, does the webserver
  demand 8MB if it is simultaneously servicing 2 requests?

 Yep. Usually more, once you add webserver/PHP overhead.

Unless you're using an opcode cache, as I believe some of them are smarter 
about that.  Which ones and how, I don't know. :-)

-- 
Larry Garfield  AIM: LOLG42
[EMAIL PROTECTED]   ICQ: 6817012

If nature has made any one thing less susceptible than all others of 
exclusive property, it is the action of the thinking power called an idea, 
which an individual may exclusively possess as long as he keeps it to 
himself; but the moment it is divulged, it forces itself into the possession 
of every one, and the receiver cannot dispossess himself of it.  -- Thomas 
Jefferson

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php