> Hi all,
>
> I have a question regarding opening very large webserver logfiles.  They
> are about a gigabyte each and I have seven of them, so I run out of memory.
>
> This is what I do now:
>
> for $file (@logfiles) {
>  open (FILE, "$file");
>  @text = <FILE>;
>  close FILE;
>  while ($#text > -1) {
>   $line = shift @text;
>   #(do some analysing on the $line, building hashes with data)
>   }
>  }
>
> Though I try to save memory by deleting each line that is no longer
> needed (line 6), I still swallow a complete file at a time, so I need
> memory for both the logfile and the growing data structure of analyzed
> results.
>
> Now I've seen somewhere that there is a more efficient way to read large
> files, but I cannot remember it anymore.  Who can help me out?

I always use one of these two methods.  Both read the file one line at a
time, so only the current line is held in memory rather than the whole file:

# - Method 1: read each line into $_
open FILE, "file" or die "Cannot open file: $!";
while (<FILE>)
{
    # do work on $_
}
close FILE;

# - Method 2: read each line into a named variable
open FILE, "file" or die "Cannot open file: $!";
while ($line=<FILE>)
{
    # do work on $line
}
close FILE;
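
Applied to your loop over @logfiles, a minimal sketch could look like this
(the filenames and the placeholder analysis are only assumptions; the
three-argument open and lexical filehandle are just a more defensive
spelling of the same line-by-line idea):

use strict;
use warnings;

# Assumed: @logfiles holds the paths to your seven logfiles (names here are hypothetical).
my @logfiles = ('access1.log', 'access2.log');
my %stats;

foreach my $file (@logfiles) {
    open my $fh, '<', $file or die "Cannot open $file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        # (do your analysing on $line, building hashes with data)
        $stats{length $line}++;    # placeholder analysis so the sketch runs
    }
    close $fh;
}

Only one line of the current logfile is in memory at any moment, so the
only thing that grows is your own result hash.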

Good luck!

Michael
