I don't know if this is the optimal solution, but there *is* a way you could 
use sort to speed things up: first number the items, then sort them by their 
initial value, remove dups, then sort again by their number. Finally, strip 
the numbers. Not the simplest, but it should be faster for large data than 
checking every item against every other.
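The steps above can be sketched in Python (the list's own language is MetaTalk, but Python shows the idea compactly; function and variable names here are just illustrative):

```python
def remove_duplicates(items):
    """Remove duplicates while keeping the first occurrence of each item
    in its original position -- O(n log n) instead of comparing every
    item against every other (O(n^2))."""
    # 1. Number the items so the original order can be restored later.
    numbered = list(enumerate(items))
    # 2. Sort by value so duplicate items become adjacent.
    #    The sort is stable, so the first occurrence keeps the lowest number.
    numbered.sort(key=lambda pair: pair[1])
    # 3. Walk the sorted list, keeping only the first copy of each value.
    deduped = []
    prev = object()  # sentinel that compares unequal to everything
    for index, value in numbered:
        if value != prev:
            deduped.append((index, value))
            prev = value
    # 4. Sort again by the original number to restore the sequence.
    deduped.sort(key=lambda pair: pair[0])
    # 5. Strip the numbers.
    return [value for _, value in deduped]
```

For example, `remove_duplicates(["b", "a", "b", "c", "a"])` gives `["b", "a", "c"]` -- duplicates gone, original order intact.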

HTH,
Brian

<< Here is a function that I wrote quite some time ago. It takes a list of
text, theList, separated by theitemDel, removes duplicate items from the
list, and returns a new list without duplicates.
Unfortunately, this function runs rather slowly on a large list (a few
thousand records) - that is why I am posting it here as a little "open
source" Request For Comments: make it faster, guys!
Remember, the sequence of the records cannot be changed, which means you
cannot simply use sort. >>


Archives: http://www.mail-archive.com/[email protected]/
Info: http://www.xworlds.com/metacard/mailinglist.htm
Please send bug reports to <[EMAIL PROTECTED]>, not this list.
