> Many times when I am writing a program in Python, I notice that I
> could transform my list into a set and then use set methods like
> union, intersection, set equality etc., and that would solve my
> problem easily. But then I realize that transforming my list into a
> set would remove duplicate elements, so I would lose information from
> my original list.
> 
> For example, I was writing a program to detect whether two strings are
> anagrams of each other. I had to write it like this:
> 
> def isAnagram(w1, w2):
>   # Work on a mutable copy of w2 so each matched character is consumed once.
>   w2 = list(w2)
>   for c in w1:
>     if c not in w2:
>       return False
>     w2.remove(c)
>   # They are anagrams only if every character of w2 was matched as well.
>   return not w2
> 
> But if there were a data structure in Python that supported duplicate
> elements (let's call it dset), then I could just write:
> 
> def isAnagram(w1, w2):
>   return dset(w1) == dset(w2)
> 
> Examples of some dset methods:
> {1,2,3,3} intersection {4,1,2,3,3,3}  == {1,2,3,3}
> {1,2,3,3} union {4,1,2,3,3,3} == {1,2,3,3,3,4}
> {4,1,2,3,3,3} difference {1,2,3,3} == {4,3}
> 
> Do you think that it would be a good idea to add this kind of data
> structure to Python? Or did I overlook some other easy way to solve
> this kind of problem?

I think the collections.Counter object may be useful for your purpose.

http://docs.python.org/py3k/library/collections.html#collections.Counter
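
For example, a minimal sketch of the anagram check using Counter (same
function name as yours, kept just for illustration):

from collections import Counter

def isAnagram(w1, w2):
  # Counter keeps a count per character, so duplicates are not lost;
  # two strings are anagrams exactly when their character counts match.
  return Counter(w1) == Counter(w2)

Counter also behaves much like your proposed dset: its &, | and -
operators take the element-wise minimum, maximum and positive
difference of the counts, so your three examples roughly become (a and
b are just illustrative names):

a = Counter([1, 2, 3, 3])
b = Counter([4, 1, 2, 3, 3, 3])

print(a & b)  # intersection -> elements {1, 2, 3, 3}
print(a | b)  # union        -> elements {1, 2, 3, 3, 3, 4}
print(b - a)  # difference   -> elements {4, 3}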