>I can get the sorted file easy enough with
>  history | sort > sortedhistory.txt
>and regularly update it with
> history | sort >> sortedhistory.txt
>
>So the main problem is getting the duplicates out of the sortedhistory.txt
>file.  What do you reckon, a good job for a Perl program?

perhaps, but I'd do it with sed, since I like sed.

give this a try

history | sed 's/^ *[^ ]* *//' | sort -u > sortedhistory.txt

The sed command anchors at the start of the line "^", matches any number
of spaces " *", then a run of non-spaces "[^ ]*" (the history number),
then any number of spaces " *" again, and replaces the whole match with
nothing "//". That strips the leading numbering, so identical commands
compare equal. Then sort -u will take care of uniqueness for you.
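To see what the pipeline does step by step, here's a small sketch that
simulates history output with printf (an interactive shell's real
history isn't available inside a script, so the sample lines below are
made up for illustration):

```shell
# Fake "history" output: number, spaces, then the command.
# Note "ls -l" appears twice, under numbers 101 and 103.
printf '  101  ls -l\n  102  cd /tmp\n  103  ls -l\n' \
  | sed 's/^ *[^ ]* *//' \
  | sort -u
# sed strips the leading "  101  " numbering from each line,
# then sort -u sorts the bare commands and drops the duplicate.
# Output:
#   cd /tmp
#   ls -l
```

Without the sed step, sort -u would keep both copies of "ls -l",
because the differing history numbers make the lines unequal.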

have fun,

greg


-
To unsubscribe from this list: send the line "unsubscribe linux-newbie" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.linux-learn.org/faqs