At 05:56 PM 8/30/01 -0400, Doug Lentz wrote:
>I've been using <FILEHANDLE> to read an entire text file into an array.
>
>@buffer = <MY_BIG_FILE>;
>
>I string-manipulate the individual array elements and then sometime
>later, do a
>
>$buffer = join "", @buffer;
>
>...and this worked OK for an 80M text file. I couldn't resist and tried it
>out on a gigabyte monster.
>
>The script aborted with "Out of memory during request for 26 bytes
>during sbrk()".
>
>26 bytes, coincidentally enough :), is the record size.
>
>I realize this is a pretty extreme test. Now, my question is, is this an
>operating system (SCO unix in this case) virtual memory problem? There
>is enough physical disk space to double the size of the input file and
>still have a gig left over. Our sysadm thinks the box was configured
>with 500M of swap space (but is not 100% sure).
>
>Any way to persuade perl to use less memory if this is the case? Thanks!
Check out 'perldoc perldebguts', and look for the section which starts:
Debugging Perl memory usage
Perl is a profligate wastrel when it comes to memory use. There is a saying
that to estimate memory usage of Perl, assume a reasonable algorithm for
memory allocation, multiply that estimate by 10, and while you still may
miss the mark, at least you won't be quite so astonished. This is not
absolutely true, but may provide a good grasp of what happens.
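To see why a gigabyte file blows through 500M of swap, here's a quick
back-of-the-envelope calculation (the ~50 bytes of per-scalar overhead is an
assumed ballpark; the real figure depends on your platform and Perl build):

# Rough estimate only: the per-scalar (SV) bookkeeping figure is assumed.
my $file_size   = 1_000_000_000;                    # the gigabyte input
my $record_size = 26;                               # bytes per record, per your report
my $records     = int($file_size / $record_size);   # roughly 38 million lines
my $sv_overhead = 50;                               # assumed per-scalar overhead in bytes
my $array_cost  = $records * ($record_size + $sv_overhead);
my $join_cost   = $file_size;                       # a second full copy made by the join
printf "array \@buffer: ~%.1f GB, joined copy: ~%.1f GB more\n",
       $array_cost / 1e9, $join_cost / 1e9;

Even before the join makes its copy, the per-line bookkeeping alone puts
@buffer well past 500M of swap.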
If you read the whole file into a single scalar to begin with, you avoid
reading all the lines into a list of scalars, each with its own overhead,
and then joining them, which requires roughly double the space in the
interim:
local $/;                   # undefine the input record separator: slurp mode
$buffer = <MY_BIG_FILE>;    # the whole file arrives as one scalar
You'll be able to read bigger files that way.
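A minimal self-contained version of that, in case it helps (the filename and
the s/// are only placeholders for your own file and manipulation):

#!/usr/bin/perl -w
use strict;

my $buffer;
{
    local $/;                                 # undef $/ => slurp mode, scoped to this block
    open BIG, "my_big_file.txt"               # placeholder filename
        or die "Can't open: $!";
    $buffer = <BIG>;                          # read the whole file into one scalar
    close BIG;
}

# Work on the single scalar instead of looping over an array of lines;
# the s///g here is only a stand-in for your real manipulation.
$buffer =~ s/foo/bar/g;

The surrounding block keeps the undefined $/ from leaking into any later
line-by-line reads elsewhere in the script.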
--
Peter Scott
Pacific Systems Design Technologies
http://www.perldebugged.com