RE: [PHP-DEV] Stream chunk size

2009-03-06 Thread Jonathan Bond-Caron
On Mon Mar 2 11:10 PM, Andi Gutmans wrote:
 I don't see a fundamental issue why it could not be arbitrary.
 The only challenge which may be an issue is that this code clearly 
 allocates the buffer on the stack for what are probably performance 
 reasons. If you allow arbitrary chunk size and use alloca()
 (do_alloca()) for stack allocation you might kill the stack.
 
 I suggest you do some performance tests and if you need to keep it on 
 the stack then create some arbitrary limit like 8K and use stack below 
 that and use heap above that (code will be uglier).
 

Thanks for the tips, it's on my todo list.


-- 
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php



[PHP-DEV] Stream chunk size

2009-03-02 Thread Jonathan Bond-Caron
Hi everyone, I have a question about streams and the maximum 'chunk size' of
8192.

 

I’ve read README.STREAMS and found these slides by Wez:

http://netevil.org/blog/2008/07/slides-php-streams

 

While trying to write an Amazon S3 stream wrapper, I ran into an issue
with large files:

 

$fp = fopen('s3://mvtest/large.html', 'r'); // 30 mb

 

// This is OK

fseek($fp, 10);

echo fread($fp, 100) . "\n"; // 100 bytes

echo fread($fp, 100) . "\n"; // 100 bytes

 

// This is OK (according to documentation, max 8192 bytes) 

echo fread($fp, 65536) . "\n"; // 8192 bytes

 

My issue is that I would like to request larger 'chunks', something like:

stream_set_chunk_size($fp, 65536);

 

echo fread($fp, 65536) . "\n"; // 65536 bytes

echo fread($fp, 10) . "\n"; // 65536 bytes

echo fread($fp, 15) . "\n"; // 15 bytes

 

Then copying to a file and avoiding memory issues:

 

$wfp = fopen('/tmp/large.html', 'w');

stream_copy_to_stream($fp, $wfp); // read 65536 byte chunks, write default
8192 byte chunks

 

stream_set_chunk_size($wfp, 65536);

stream_copy_to_stream($fp, $wfp); // read & write 65536 byte chunks

copy('s3://mvtest/large.html', '/tmp/large.html'); // read & write default
8192 byte chunks

 

Going through the PHP 5.2 source, it looks like there's support for it, but
in some places the 8192 'chunk' is hardcoded:

 

#define CHUNK_SIZE 8192

 

PHPAPI size_t _php_stream_copy_to_stream(php_stream *src, php_stream *dest,
size_t maxlen STREAMS_DC TSRMLS_DC)

{

char buf[CHUNK_SIZE]; <-- Is there any reason the php_stream
src->chunk_size isn't used?

 

stream_set_chunk_size($fp, 65536); // Would mean src->chunk_size = 65536;

 

I'd like to try to write a patch for it. Is there anything I should know
about streams, and why is the limit there?



RE: [PHP-DEV] Stream chunk size

2009-03-02 Thread Andi Gutmans
I don't see a fundamental issue why it could not be arbitrary.
The only challenge which may be an issue is that this code clearly allocates 
the buffer on the stack for what are probably performance reasons. If you allow 
arbitrary chunk size and use alloca() (do_alloca()) for stack allocation you 
might kill the stack.
I suggest you do some performance tests and if you need to keep it on the stack 
then create some arbitrary limit like 8K and use stack below that and use heap 
above that (code will be uglier).

Andi

 -Original Message-
 From: Jonathan Bond-Caron [mailto:jbo...@openmv.com]
 Sent: Monday, March 02, 2009 8:48 AM
 To: 'PHP Developers Mailing List'
 Subject: [PHP-DEV] Stream chunk size
 

