Yeah, why not? If I had to do this here, or if the list were very long (over
1000 entries, say), I'd write a perl program to read the file into a sorted
array, discarding duplicates along the way, then write out a new, clean file. 

If the list were short and I were feeling lazy (or didn't know perl), I'd use
"sort" to process the file and output a new copy, then use vi (or whatever
text editor you prefer) to delete the dups by hand.

The relevant sort line ("man sort" for details) is something like:

sort -o outfile.name infile.name
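Worth noting: plain sort only groups the duplicate lines together, which is
why the hand-editing step follows. On most systems, sort's -u flag discards
the duplicates itself, so you can skip vi entirely:

```shell
# -u keeps only the first of each run of identical lines;
# -o writes the sorted, deduplicated result to outfile.name.
sort -u -o outfile.name infile.name
```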

At 05:46 AM 2/17/99 +0500, Omer Ansari wrote [excerpt]:

>this is the scenario: i have a file in which there are multiple entries
>of the same email address...e.g.:

>excerpt from the file shows that there are two entries of
>[EMAIL PROTECTED] and [EMAIL PROTECTED] I want to remove all the rest
>as i want only a single entry for each email address (1
>[EMAIL PROTECTED] and 1 [EMAIL PROTECTED] in this case).

------------------------------------"Never tell me the odds!"---
Ray Olszewski                                        -- Han Solo
762 Garland Drive
Palo Alto, CA  94303-3603
650.321.3561 voice     650.322.1209 fax          [EMAIL PROTECTED]        
----------------------------------------------------------------
