Steven Bethard wrote:
> I'm sorry, I assume this has been discussed somewhere already, but I
> found only a few hits in Google Groups... If you know where there's a
> good summary, please feel free to direct me there.
>
> I have a list[1] of objects from which I need to remove duplicates. I
> have to maintain the list order though, so solutions like set(lst),
> etc. will not work for me. What are my options? So far, I can see:
>
> def filterdups(iterable):
>     result = []
>     for item in iterable:
>         if item not in result:
>             result.append(item)
>     return result
>
> def filterdups(iterable):
>     result = []
>     seen = set()
>     for item in iterable:
>         if item not in seen:
>             result.append(item)
>             seen.add(item)
>     return result
>
> def filterdups(iterable):
>     seen = set()
>     for item in iterable:
>         if item not in seen:
>             seen.add(item)
>             yield item
>
> Does anyone have a better[2] solution?
>
> STeve
>
> [1] Well, actually it's an iterable of objects, but I can convert it
> to a list if that's helpful.
>
> [2] Yes I know, "better" is ambiguous. If it helps any, for my
> particular situation, speed is probably more important than memory, so
> I'm leaning towards the second or third implementation.
from itertools import *

[x for (x, s) in izip(iterable, repeat(set()))
     if (x not in s, s.add(x))[0]]

that's-one-ambiguously-better-solution-ly yr's,

--
CARL BANKS
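[Editor's note: unwound into a plain generator, Banks's one-liner does the same bookkeeping as Steve's third filterdups: repeat(set()) hands every iteration the *same* set object, and the tuple trick evaluates the membership test before the add. A sketch of both, translated to Python 3 (izip is now zip); the dict.fromkeys variant at the end assumes Python 3.7+, where plain dicts preserve insertion order.]

```python
from itertools import repeat

def filterdups(iterable):
    # The one-liner's logic, unwound: one shared 'seen' set;
    # yield an item only the first time it appears.
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item

def filterdups_oneliner(iterable):
    # Banks's version: repeat(set()) yields the same set each time,
    # and the tuple checks membership before s.add(x) mutates the set.
    return [x for (x, s) in zip(iterable, repeat(set()))
            if (x not in s, s.add(x))[0]]

data = [3, 1, 3, 2, 1, 2]
print(list(filterdups(data)))        # [3, 1, 2]
print(filterdups_oneliner(data))     # [3, 1, 2]
# Assumes Python 3.7+: dicts keep insertion order, so for hashable
# items this one-liner also removes duplicates in order.
print(list(dict.fromkeys(data)))     # [3, 1, 2]
```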