On Wed, May 02, 2001 at 07:39:03PM +0200, M.W. Koskamp wrote:
: 
: ----- Original Message ----- 
: From: Paul <[EMAIL PROTECTED]>
: To: cherukuwada subrahmanyam <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
: Sent: Wednesday, May 02, 2001 7:08 PM
: Subject: Re: eliminating duplicate lines in a file
: 
: 
: > 
: > --- cherukuwada subrahmanyam <[EMAIL PROTECTED]> wrote:
: > > Hi,
: > > I am reading a flat text file of 100,000 lines. Each line has
: > > data of at most 10 characters.
: > > I want to eliminate duplicate lines and blank lines from that file,
: > > i.e. something like sort -u in Unix.
: > 
: > Got plenty of memory? =o)
: > 
: > open IN, $file or die $!;
: > my %uniq;
: > while(<IN>) {
: >   $uniq{$_}++;
: > }
: > print sort keys %uniq;
: > 
:  how about you?
: 
: open FH, "lines.txt" or die $!;
: my %uniq;
: map{$uniq{$_}=1 and print $_ unless $uniq{$_} }<FH>;
: 
: :o))

While this is fun and amusing, it's not being very helpful.  At least
provide an easy-to-understand explanation of what your code is doing.
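In that spirit, here's a commented sketch of the hash-based approach both
posts are using. Unlike the `sort keys %uniq` version it preserves input
order, and it also drops blank lines, which the original poster asked for.
It reads from the __DATA__ section purely so the example is self-contained;
a real run would use a three-argument open on the actual file instead.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Print each unique, non-blank line the first time it is seen,
# preserving input order. %seen counts occurrences per line; the
# postfix ++ returns the old count, which is 0 (false) only on the
# first sighting, so "unless" fires exactly once per distinct line.
my %seen;
while ( my $line = <DATA> ) {
    next if $line =~ /^\s*$/;           # skip blank (whitespace-only) lines
    print $line unless $seen{$line}++;  # print only on first occurrence
}

__DATA__
foo
bar

foo
baz
```

Memory use is one hash entry per distinct line, so 100,000 lines of at
most 10 characters is no problem on any modern machine.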

Casey West

-- 
"Is forbitten to steal hotel towels please. If you are not person to
do such thing is please not to read notis."
 --In a Tokyo Hotel
