Larry Garfield wrote:
> There are many things that everybody "knows" about optimizing PHP
> code. One of them is that one of the most expensive parts of the
> process is loading code off of disk and compiling it, which is why
> opcode caches are such a big performance boost. The corollary to
> that, of course, is that more files = more IO and therefore more of a
> performance hit.
> So... does anyone have any actual, hard data here? I don't mean "I
> think" or "in my experience". I am looking for hard benchmarks,
> profiling, or writeups of how OS (Linux specifically if it matters)
> file caching works in 2010, not in 1998.
The principle hasn't changed since the 50s - read the file into memory,
and keep it there (and keep reading it from there) as long as it isn't
written to. The implementation has changed many times over, but it's
not something we as regular application programmers ought to be much
concerned with. Leave it to the smart operating system.
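You can see this for yourself without any special tooling. A rough sketch (filename and sizes are arbitrary; truly cold-cache timings would need root to drop caches via /proc/sys/vm/drop_caches, which this deliberately avoids):

```shell
# Illustration: a repeat read of the same file is served from the OS
# page cache, so it is typically much faster than the first read.
# /tmp/cachetest.bin is a throwaway file created just for this test.
dd if=/dev/urandom of=/tmp/cachetest.bin bs=1M count=64 2>/dev/null

time cat /tmp/cachetest.bin > /dev/null   # first read (may hit disk)
time cat /tmp/cachetest.bin > /dev/null   # repeat read, from page cache

rm /tmp/cachetest.bin
```

On a machine with free RAM, the second `time` usually reports a fraction of the first - the kernel never goes back to disk for unmodified pages it already holds.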
Per Jessen, Zürich (1.6°C)
PHP General Mailing List (http://www.php.net/)