This seems to work. The idea is one pass through the file to record the byte
offset where each line starts (newest offset first), then seek back to each
offset and print that line. I hacked it together pretty quickly, and tested
it on exactly one input file, so you might want to gussy it up with more
error checking and so on.


use strict;

my @lines = ();       # byte offset where each line starts, newest first
my $totalLength = 0;  # bytes read so far == offset of the next line

my $file = shift;
open(IN, "< $file") or die "Can't open $file: $!\n";

my $inputLine;
while (<IN>) {
    $inputLine = $_;                # save it in case it's the last line
    unshift(@lines, $totalLength);  # record this line's starting offset
    $totalLength += length($inputLine);
}

# We've got the last line in $inputLine, so don't process
# it in the seek loop below (especially since it may not end in \n).

defined $inputLine or die "$file is empty\n";
chomp $inputLine;
print "$inputLine\n";
shift @lines; # discard the last line's offset (it's first in @lines) so we don't print it twice

for my $i (@lines) {
    seek(IN, $i, 0);  # jump to the start of this line
    my $line = <IN>;
    print $line;
}

close(IN);
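
For example, if you save this as revlines.pl (a made-up name, call it
whatever you like), you'd run it from a Unix shell like so:

perl revlines.pl input.txt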

If the file is small enough to fit in RAM, just do something like the above,
but store each whole line in the array instead of just its offset, then print
the array in reverse.
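
Something like this (a minimal, untested sketch of that in-memory version):

use strict;

my $file = shift;
open(IN, "< $file") or die "Can't open $file: $!\n";
my @lines = <IN>;              # slurp the whole file, one line per element
close(IN);

die "$file is empty\n" unless @lines;
chomp(my $last = pop @lines);  # the last line may not end in \n
print "$last\n";               # print it first, newline restored
print reverse @lines;          # then the rest, last to first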

I tested this on Mac OS X, i.e. standard Unix perl. I'm counting on seek()
behaving the same way in MacPerl.

Hope this is right, and helps.
-Chris

On Tuesday, February 18, 2003, at 04:24 PM, Detlef Lindenthal wrote:
How can I read and process 60 MB of text,
line by line, but from the end to the beginning
(last line first)? I thought of doing this
with the Unix command "tail", but instead of
returning one single line, "tail" would return
all the rest of the file.

What possibilities would work with MacPerl?
Would DB_File be a way?

It would be sufficient if there is a way to reverse
the line sequence of that file.
(Number the lines and put them into a SQL database?)

Help is welcome.

TIA, Detlef Lindenthal


