Bryan R Harris wrote:
> I figured the OS would load the file in blocks, but I thought the blocks
> might only be 12k or something like that.
Block sizes are often chosen based on average file sizes. Where there
will be lots of small files, smaller block sizes (like the 12k you
mention) are used, at the cost of taking longer to read large files.
With larger blocks, it takes less time to read large files, but the
space wasted on small files is much greater (who wants a 4KB file
taking up a 4MB block?).
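
If you're curious how much of that slack a given file costs on your
own machine, stat() will show the logical size next to the space
actually allocated for it. This is only a sketch, and it assumes a
Unix-like filesystem that reports block counts (usually in 512-byte
units):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Compare a file's logical size with the space allocated for it
    # on disk (assumes a Unix-like filesystem; the block count is
    # usually in 512-byte units).
    my $file = shift or die "Usage: $0 <file>\n";

    my @st = stat($file) or die "Can't stat $file: $!\n";
    my ($size, $blksize, $blocks) = @st[7, 11, 12];

    my $allocated = $blocks * 512;

    printf "%s: %d bytes of data, %d bytes allocated (block size %d)\n",
        $file, $size, $allocated, $blksize;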
> I am surprised at the level of concern over memory when reading in
> <20 MB files. Don't most people have 1+ GB now? I've got 2... I'm just
> surprised that using at most 1% of my total ram would be a concern.
The most common amount of RAM on a prebuilt midrange computer today is
512MB. Some of that is sometimes claimed for shared video memory and
BIOS caching (maybe 128MB). The OS itself might easily use another
128MB, so that leaves about 256MB for programs to share.
The reason people get more RAM is usually because they use it. In the
other room, I have a desktop system with 2GB RAM and it hardly ever has
512MB to spare.
If you don't need to look at the same line multiple times, it almost
always makes sense to read the file line by line. It may even be
faster, because you won't have to wait for the entire file to be
slurped before you can do anything.
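
A minimal sketch of that (the filename 'data.txt' and the per-line
work are just placeholders) looks like this:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Line-by-line reading: only the current line is ever held in
    # memory, so even a very large file stays cheap to walk through.
    open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!\n";

    my ($lines, $bytes) = (0, 0);
    while (my $line = <$fh>) {
        $lines++;
        $bytes += length $line;
        # ... do whatever per-line work you need here ...
    }
    close $fh;

    print "$lines lines, $bytes bytes\n";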
If you do need to read the file non-sequentially or to look at the
same lines multiple times, it might make sense to slurp the entire
file. But if you just 'undef $/' and read the whole thing with <FH>,
the read can be horribly slow - copying large amounts of memory
multiple times as the buffer grows can slow things down a lot.
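
One way around that (again just a sketch, with 'data.txt' standing in
for your file) is to ask for the file's size up front and pull
everything in with a single read(), so the buffer only gets allocated
once:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Slurp a file with one read() call, sized up front, instead of
    # letting the buffer grow (and get copied) piece by piece.
    open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!\n";
    binmode $fh;                      # byte-for-byte read

    my $size     = -s $fh;            # file size in bytes
    my $contents = '';
    my $got      = read $fh, $contents, $size;
    defined $got or die "Read failed: $!\n";
    close $fh;

    print "Slurped $got of $size bytes\n";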