I am writing a large web application, and on every request my pages must
load the environment, which takes a significant amount of time (about 0.1
seconds on a 1 GHz Athlon).

Much of that time seems to be spent loading the environment from disk and
processing it into a usable form.

So I thought of running a PHP script that is always on: it accepts socket
connections and transfers all the data, in serialized form, through a port
to the requesting script. That always-on script would only have to load the
data files once.
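To make the idea concrete, here is a minimal sketch of such a daemon. It is
an assumption-laden illustration, not your actual code: the file name
'environment.dat', port 9000, and the loopback address are all placeholders.
The key point is that the environment is loaded and serialized once, outside
the accept loop.

```php
<?php
// Hypothetical always-on daemon: load the environment once,
// then hand each client the serialized payload.
// ('environment.dat' and port 9000 are placeholder names.)

$env = unserialize(file_get_contents('environment.dat')); // load once
$payload = serialize($env);                               // serialize once

$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, '127.0.0.1', 9000);
socket_listen($server);

while (true) {
    $client = socket_accept($server);
    socket_write($client, $payload, strlen($payload));
    socket_close($client);   // closing signals end-of-data to the reader
}
```

Closing the client socket after the write is what lets the requesting
script's read loop terminate cleanly.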

Sounds great, right? But it is actually slower to transfer data over a
TCP/IP connection between two ports on the same machine if you do it in
small chunks (<100 KB at a time) -- though I noticed that if I cranked up
the chunk size, it was up to 100 times faster:

// read until the daemon closes the connection
while ($out = socket_read($sock, 1000000000)) {
    $EncodedString .= $out;
}

Now, what I am wondering is:

a) Is there a faster way to send data between two processes that works with
PHP and is supported on both Windows and *nix?
b) Are there any drawbacks to using data chunks of 1,000,000,000 bytes?
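For comparison, here is a hedged sketch of the client side using a moderate
buffer instead of a gigabyte-sized one (the port and the 64 KB figure are
assumptions, not measurements). socket_read() returns at most the requested
number of bytes per call regardless, so a large argument mainly serves to
reduce the number of calls; a mid-sized buffer in a loop keeps the call count
low without asking for an enormous allocation in one go:

```php
<?php
// Hypothetical client sketch: connect to the always-on daemon
// (placeholder address/port) and read the serialized environment
// in 64 KB chunks until the daemon closes the connection.

$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($sock, '127.0.0.1', 9000);

$EncodedString = '';
while (($out = socket_read($sock, 65536)) !== false && $out !== '') {
    $EncodedString .= $out;
}
socket_close($sock);

$env = unserialize($EncodedString);
```

Whether 64 KB or something larger wins in practice would need benchmarking
on the target machine.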

I'd ask this on the other list, but nobody there is going to have as good a
handle on this as you guys.

-Thanks in advance, sorry for the intrusion.

-Brian Tanner


-- 
PHP Development Mailing List <http://www.php.net/>
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
To contact the list administrators, e-mail: [EMAIL PROTECTED]
