With the line:

@text = <FILE>;

you are reading the ENTIRE file into
an array.  This is rarely necessary.
Instead, read the file line by line,
processing each line as it passes through
your program:

for $file (@logfiles) {
        open (FILE, $file) or die "Can't open $file: $!";
        while ($line = <FILE>) {
                # (do some analysing on the $line, building hashes with data)
        }
        close FILE;
}
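To make the per-line analysis concrete, here is a minimal sketch that
counts HTTP status codes across the logfiles.  It assumes the usual
access-log layout in which the status code is the second-to-last
whitespace-separated field; adjust the parsing to whatever your logs
actually contain:

my %status_count;

for my $file (@logfiles) {
        open (LOG, $file) or die "Can't open $file: $!";
        while (my $line = <LOG>) {
                chomp $line;
                # Common Log Format ends in "... status bytes",
                # so the status code is the second-to-last field
                my @fields = split ' ', $line;
                $status_count{$fields[-2]}++ if @fields >= 2;
        }
        close LOG;
}

# only the hash of counts lives in memory, never a whole file
print "$_: $status_count{$_}\n" for sort keys %status_count;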



"Kamphuys, ing. K.G." wrote:
> 
> Hi all,
> 
> I have a question regarding opening very large webserver logfiles.  They are
> about a gigabyte each, and I have seven of them, so I run out of memory.
> 
> This is what I do now:
> 
> for $file (@logfiles) {
>  open (FILE, "$file");
>  @text = <FILE>;
>  close FILE;
>  while ($#text > -1) {
>   $line = shift @text;
>   #(do some analysing on the $line, building hashes with data)
>   }
>  }
> 
> Though I try to save memory by deleting each line that isn't needed
> anymore (the shift on line 6), I still swallow up a complete file at a
> time, so I need memory for both the logfile and the growing data
> structure of analysed results.
> 
> Now I've seen somewhere that there is a more efficient way to read large
> files, but I cannot remember it anymore.  Who can help me out?
> 
> Thanks in advance,
> 
> Koen Kamphuys
> Webmaster of http://www.minlnv.nl/, the most successful Dutch government
> site during the foot and mouth disease crisis