On October 11, 2002 10:55 pm, you wrote:
> I've committed a probable fix for the first issue.

I'll test the patch in the morning and let you know if it solved the problem.
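In the meantime, a defensive loop that retries short writes should work around the truncation on the sending side. A rough sketch only — write_all() is a made-up helper name, and tmpfile() stands in for the fsockopen() handle so the snippet is self-contained:

```php
<?php
// Sketch of a short-write-safe send loop: keep calling fwrite() until the
// whole buffer is out, instead of trusting a single call's return value.
function write_all($fp, $data) {
    $total = 0;
    $len = strlen($data);
    while ($total < $len) {
        $written = fwrite($fp, substr($data, $total));
        if ($written === false || $written === 0) {
            break; // hard error: give up rather than spin
        }
        $total += $written;
    }
    return $total;
}

$fp = tmpfile(); // stand-in for the socket from the report
$payload = str_repeat("\x00\x01", 10000); // 20000 bytes of binary data
$sent = write_all($fp, $payload);
echo $sent, "\n"; // 20000
fclose($fp);
?>
```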

> The second issue is not really an issue at all AFAICT; the delay is the
> remote server remaining open after serving your request because you have
> not closed the connection.
> This similar script does not "hang" as you described:
>
>  <?php
>  $fp = fopen("http://host", "r");
>  while ( ($data = fgets($fp, 1024)) ) {
>      echo $data;
>  }
>  fclose($fp);
>  ?>

You are correct on this point; after further testing it seems that this is 
identical to the behaviour in older PHPs. For the 'hang' to occur you 
actually need to send an HTTP/1.1 request, which defaults to a keep-alive 
connection.
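For what it's worth, asking the server to close the connection avoids the wait even with HTTP/1.1. A minimal sketch — the build_close_request() helper and the $argv guard are mine, not from the scripts above, and "host" stays a placeholder:

```php
<?php
// Hypothetical helper: builds a request that asks the server to close the
// connection once it has responded, so the final fgets() hits EOF instead
// of waiting out the keep-alive timeout.
function build_close_request($host) {
    return "GET / HTTP/1.1\r\n"
         . "Host: $host\r\n"
         . "Connection: close\r\n\r\n";
}

// Pass a real host name on the command line to actually run the request.
if (!empty($argv[1])) {
    $host = $argv[1];
    $fp = fsockopen($host, 80);
    if ($fp) {
        fwrite($fp, build_close_request($host));
        while (($data = fgets($fp, 1024))) {
            echo $data;
        }
        fclose($fp);
    }
}
?>
```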

> As for the last point... we do have that memory limit check in emalloc,
> but anyone allocating lines of that length deserves to die().
> I'll bet that just about any other code that allows the user to specify a
> buffer size will die equally unpleasant deaths, so it's not really a
> streams issue.

I've just tested the very same code on PHP 4.2.3: not only did the code not 
crash, it actually read all the available data from the socket correctly. 
So, I think this is indeed a problem. Allocating a huge buffer just because 
the user says so is not right; we might as well allow people direct memory 
access while we are at it. It would be one thing if this failed because 
there really were 1000000000 bytes to read and PHP could not allocate that 
much memory to store all the data. Even then, a graceful exit would've been 
nicer than a segmentation fault, but that's already arguable.
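For comparison, reading in fixed-size chunks never allocates anywhere near the size the user asked for. A rough sketch — read_all() is a made-up helper, and tmpfile() stands in for the socket so the snippet is self-contained:

```php
<?php
// Sketch: read an arbitrarily large stream in bounded chunks instead of
// handing fgets()/fread() one enormous buffer size.
function read_all($fp, $chunk = 8192) {
    $data = '';
    while (!feof($fp)) {
        $buf = fread($fp, $chunk); // never allocates more than $chunk bytes
        if ($buf === false || $buf === '') {
            break;
        }
        $data .= $buf;
    }
    return $data;
}

$fp = tmpfile(); // stand-in for the fsockopen() handle
fwrite($fp, str_repeat("x", 100000));
rewind($fp);
$data = read_all($fp);
echo strlen($data), "\n"; // 100000
fclose($fp);
?>
```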

One more issue that I've come across since the last email is the following 
leak in the streams code:
/home/rei/PHP_CVS/php4/main/streams.c(986) :  Freeing 0x084DA69C (1 bytes), 
script=/home/rei/PHP_CVS/php4/run-tests.php
/home/rei/PHP_CVS/php4/ext/standard/file.c(438) : Actual location (location 
was relayed)

Unfortunately, I am unable to reproduce the leak with a small test script, 
so I don't have much detail about it at the moment, I'm afraid.

Ilia


> --Wez.
>
> On 12/10/02, "Ilia A." <[EMAIL PROTECTED]> wrote:
> > There are 3 streams related problems I've come across today while using
> > CVS code. First it seems that sending of binary data in excess of 8k
> > causes some of the data not to be sent. This can be easily replicated
> > with the following test code:
> > <?php
> > $test_binary_file = file_get_contents("file_name");
> > $length = strlen($test_binary_file);
> >
> > $fs = fsockopen("host", "port");
> > $written = fwrite($fs, $test_binary_file);
> > fclose($fs);
> > ?>
> >
> > According to fwrite()'s output, all the data has been written, since
> > $length equals $written. However, using a network monitoring tool
> > (Ethereal), I was able to see that only about 8k worth of data was sent.
> > This was confirmed by another script that sat on the receiving end.
> >
> > Another problem I've come across concerns reading from sockets that do
> > not terminate the connection. Using the following script I've connected
> > to a webserver; in PHP 4.2.3 the script returns immediately after the
> > last byte has been read. In CVS the script waits about 10 seconds after
> > all the data has been read before finally terminating (timing out?). The
> > example script is below:
> > <?php
> > $fp = fsockopen("host", 80);
> > fwrite($fp, "GET / HTTP/1.1\r\nHost: host\r\n\r\n");
> > while ( ($data = fgets($fp, 1024)) ) {
> >     echo $data;
> > }
> > fclose($fp);
> > ?>
> >
> > The third problem is a coredump that is fairly easily replicated with
> > the script below:
> > <?php
> > $fp = fsockopen("localhost", 80);
> > fputs($fp, "GET / HTTP/1.0\r\n\r\n");
> > fgets($fp, 1000000000);
> > ?>
> >
> > Ilia


-- 
PHP Development Mailing List <http://www.php.net/>
To unsubscribe, visit: http://www.php.net/unsub.php
