Rasmus Lerdorf wrote:
Tim Starling wrote:
<?php
class C {
    var $v1, $v2, $v3, $v4, $v5, $v6, $v7, $v8, $v9, $v10;
}

$m = memory_get_usage();
$a = array();
for ( $i = 0; $i < 10000; $i++ ) {
    $a[] = new C;
}
print ((memory_get_usage() - $m) / 10000) . "\n";
?>

1927 bytes (I'll use 64-bit from now on since it gives the most shocking
numbers)

PHP 5.3.3-dev (cli) (built: Jan 11 2010 11:26:25)
Linux colo 2.6.31-1-amd64 #1 SMP Sat Oct 24 17:50:31 UTC 2009 x86_64

php > class C {
php {     var $v1, $v2, $v3, $v4, $v5, $v6, $v7, $v8, $v9, $v10;
php { }
php >
php > $m = memory_get_usage();
php > $a = array();
php > for ( $i = 0; $i < 10000; $i++ ) {
php {     $a[] = new C;
php { }
php > print ((memory_get_usage() - $m) / 10000) . "\n";
1479.5632

So you need 1500 bytes per object in your array.  I still fail to see
the problem for a web request.  Maybe I am just old-fashioned in the way
I look at this stuff, but if you have more than 1000 objects loaded on a
single request, you are doing something wrong as far as I am concerned.

This is why we do things like unbuffered mysql queries, zero-copy stream
passing, etc.  We never want entire result sets or entire files in
memory because even if we optimize the crap out of it, it is still going
to be way faster to simply not do that.
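
For illustration, a minimal sketch of that streaming style with mysqli and PHP streams; the connection details, query, and file names are made up:

<?php
// Unbuffered query: rows are read from the server one at a time instead of
// the whole result set being buffered in PHP memory.
$db  = new mysqli('localhost', 'user', 'pass', 'app');
$res = $db->query('SELECT id, title FROM articles', MYSQLI_USE_RESULT);
while ($row = $res->fetch_assoc()) {
    echo $row['id'], "\t", $row['title'], "\n";
}
$res->free();

// Stream passing: the file is copied between streams in chunks rather than
// being read into a PHP string first.
$in  = fopen('/tmp/big.dat', 'rb');
$out = fopen('php://output', 'wb');
stream_copy_to_stream($in, $out);
fclose($in);
fclose($out);
?>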

Actually, with mysqlnd a buffered set might be faster, if you know what you are doing, because the data won't be copied one more time. With unbuffered sets the data is copied from the network buffer into the zval, whereas with buffered sets the zval just points into the network buffer. If you have the RAM, buffered should be faster. Of course you should work with the set and close it when you are finished, not fetch-close-process, because that forces a copy.
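
A rough sketch of that buffered pattern, assuming mysqlnd; the connection details, query, and the process() helper are illustrative only:

<?php
// With mysqlnd the fetched zvals can reference the buffered result data
// directly, so process the rows while the result set is still open and
// free it only afterwards.
function process(array $row) {
    echo $row['id'], "\n";   // placeholder for real per-row work
}

$db  = new mysqli('localhost', 'user', 'pass', 'app');
$res = $db->query('SELECT id, name FROM users');  // buffered (MYSQLI_STORE_RESULT) is the default

while ($row = $res->fetch_assoc()) {
    process($row);           // work on the row while the result is open
}

$res->free();                // free only after processing; fetch-close-process
                             // would force the data to be copied
?>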

Best,
Andrey
