On Fri, May 9, 2008 at 2:34 PM, Chris Chabot <[EMAIL PROTECTED]> wrote:
> The main problem is the overhead of loading all the features' javascript
> on each request; this consumes tons of memory and takes loads of IO,
> so the overly obvious solution is to not do this anymore :)
<snip>
>  I'll play a bit with how the cache is set up, and whether loading the javascript
> off disk, while caching the features xml and dependency graph, is more
> efficient in the end, or see if I can't think of some other solutions..
>
>  If some other ideas bubble up for anyone, please let me know :-)

(Warning: not a PHP user, likely to be speaking nonsense.)  It sounds
like something you're doing is screwing up the kernel's I/O cache.
What you *really* want is for each process to read from a file and
stream to a socket.  The kernel will do a good job of figuring out
that those files ought to be kept in memory, so the individual
processes don't pay for any disk I/O as they work.  Because the files
are streamed in chunks, it doesn't take much memory per process
either.
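A minimal sketch of that read-and-stream loop (Python standing in for
whatever the server runs; the chunk size and temp file are illustrative,
and a socketpair stands in for a real client connection):

```python
import os
import socket
import tempfile

CHUNK_SIZE = 64 * 1024  # per-process memory stays at roughly this size


def stream_file(path, sock, chunk_size=CHUNK_SIZE):
    """Stream a file to a socket in fixed-size chunks.

    The process never holds more than one chunk in memory at a time;
    repeated reads of a hot file are served out of the kernel page
    cache, so the per-request disk I/O cost disappears once it's warm.
    """
    sent = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            sock.sendall(chunk)
            sent += len(chunk)
    return sent


# Demo: pretend the temp file is the concatenated features javascript.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 100_000)

server_side, client_side = socket.socketpair()
total = stream_file(tmp.name, server_side)
server_side.close()

received = b""
while True:
    data = client_side.recv(65536)
    if not data:
        break
    received += data
client_side.close()
os.unlink(tmp.name)
```

The point is that the working-set memory is one chunk per in-flight
request, not one full copy of the javascript per process.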

Even better would be to have access to something like the sendfile
syscall, which skips user space entirely.  Again, to make that
possible you'd need the content written out to a file on disk so the
kernel can work with it.  It is not uncommon for the kernel to do a
better job of caching file data than an HTTP server can.
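For illustration, here is a hedged sketch of that approach using
Python's os.sendfile wrapper around the syscall (the loopback TCP
pair and reader thread exist only so the demo is self-contained; a
real server would already hold the client socket):

```python
import os
import socket
import tempfile
import threading


def send_with_sendfile(path, sock):
    """Ask the kernel to copy the file straight to the socket.

    The bytes move from the page cache to the socket buffer inside the
    kernel; they are never copied into this process's address space.
    """
    size = os.path.getsize(path)
    offset = 0
    with open(path, "rb") as f:
        while offset < size:
            sent = os.sendfile(sock.fileno(), f.fileno(), offset, size - offset)
            if sent == 0:  # peer closed the connection early
                break
            offset += sent
    return offset


# Demo over a loopback TCP connection.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"y" * 100_000)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.create_connection(listener.getsockname())
conn, _ = listener.accept()

received = bytearray()


def drain():
    # Read everything the kernel pushes to the client side.
    while True:
        data = client.recv(65536)
        if not data:
            break
        received.extend(data)


reader = threading.Thread(target=drain)
reader.start()
total = send_with_sendfile(tmp.name, conn)
conn.close()
reader.join()
client.close()
listener.close()
os.unlink(tmp.name)
```

Whether PHP exposes anything equivalent I don't know, but fronting the
on-disk file with a server that does (Apache, nginx, etc.) gets the
same effect.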

About the 420 pages/sec vs 630 pages/sec number... what are you
counting as a page?  Does each page load include the requests to
download the extra js, or are you assuming you get those for free?
