----- Original Message ----- 
From: Paul <[EMAIL PROTECTED]>
To: cherukuwada subrahmanyam <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, May 02, 2001 7:08 PM
Subject: Re: eliminating duplicate lines in a file


> 
> --- cherukuwada subrahmanyam <[EMAIL PROTECTED]> wrote:
> > Hi,
> > I am reading a flat text file of 100000 lines. Each line has data of
> > at most 10 characters.
> > I want to eliminate duplicate lines and blank lines from that file,
> > i.e. something like sort -u in unix.
> 
> Got plenty of memory? =o)
> 
> open IN, $file or die $!;
> my %uniq;
> while (<IN>) {
>   next if /^\s*$/;   # skip blank lines
>   $uniq{$_}++;
> }
> print sort keys %uniq;
> 
 How about this one?

# Note: "or", not "||" -- with "||" the die binds to the filename
# and never runs if the open fails.
open FH, "lines.txt" or die $!;
my %uniq;
print grep { !/^\s*$/ && !$uniq{$_}++ } <FH>;

:o))
