On Mon, 14 Feb 2005 22:06:51 +0530, Vishal Vasan <[EMAIL PROTECTED]> wrote:
> Hi,
>
> This is the perl script that I had written...
> -----------------------------------------------------------
> #!/usr/bin/perl -w
>
> @a = <>;
> @b = sort @a;
> for ($i = 0; $i < @a; $i = $i + 1) {
>     print "$b[$i]";
> }
> -------------------------------------------------------------
>
> But this runs out of memory. I think there is not enough memory to hold
> 36 million lines with 52 characters in each line.
> So I am doing it the following way (completely Unix):
>
> sort -u -T /filesystem file.dat > file_out.dat
>
> where file.dat is my input file
> and file_out.dat is my output file.
>
> The version of Perl is 5.8.
>
> I am going to try the DB_File module as well.
>
> Hope this helps....
>
> Thanks and Regards,
> Vishal
>
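[A minimal runnable sketch of Vishal's external-sort route, with a few inline
sample lines standing in for the real file.dat; the -T scratch directory and
file names are site-specific placeholders. Setting LC_ALL=C forces plain byte
ordering, which is typically much faster than locale-aware collation on large
ASCII files.]

```shell
# Create a small stand-in for file.dat (the real file has 36M lines).
printf 'banana\napple\napple\ncherry\n' > file.dat

# -u drops duplicate lines; -T points sort's temp files at a filesystem
# with enough free space, so the data never has to fit in RAM at once.
LC_ALL=C sort -u -T /tmp file.dat > file_out.dat

cat file_out.dat
```

For a one-shot job on a 36-million-line file this is usually the pragmatic
choice: sort(1) does an external merge sort on disk, so memory use stays
bounded no matter how big the input is.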
Vishal,

DB_File would certainly be an option, especially if you are going to do
this regularly with possibly even larger files.

For a one-off solution, though, insert some debugging code (sent to STDERR
so it doesn't mix with the sorted output):

#!/usr/bin/perl -w

@a = <>;
print STDERR "file loaded\n";
@b = sort @a;
print STDERR "file sorted\n";
for ($i = 0; $i < @a; $i = $i + 1) {
    print "$b[$i]";
}

You're doubling your memory usage here: you load the file into one array,
and then you copy the sorted result into a second array. So you're not
holding 36,000,000 lines, you're holding 72,000,000.

If your initial @a = <> fits into memory, try this:

#!/usr/bin/perl

use warnings;
use strict;

my @a = <>;
@a = sort @a;      # reuse the same array -- no second copy
print foreach @a;  # lines read from <> already end in "\n"

HTH,

--jay

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>