On Fri, Oct 31, 2008 at 8:34 AM, Dinesh B Vadhia <[EMAIL PROTECTED]> wrote:
> Hi Kent
>
> The code is very simple:
>
> dict_long_lists = defaultdict(list)
> for long_list in dict_long_lists.itervalues():
>     for element in long_list:
>         array_a[element] = m + n + p    # m, n, p are numbers
>
> The long_list's are read from a defaultdict(list) dictionary and so don't
> need initializing. The elements of each long_list are integers, ordered
> (sorted before being placed in the dictionary), and immutable (i.e. they
> don't change). There are > 20,000 long_list's, each with a variable number
> of elements (> 5,000).
I don't see a lot of potential for optimization. How long does it take now?

If m + n + p doesn't change within the loop, you should hoist the addition
out of the loop. If the code is running at module level, put it into a
function or method and make sure all the names used in the loop are local -
name lookup is faster for local names inside a function.

You could also try replacing the inner loop with

    list(itertools.imap(array_a.__setitem__, long_list, itertools.repeat(m + n + p)))

Note that imap() is lazy, so the list() call is needed to force it to run;
imap() stops when the shortest iterable - here long_list - is exhausted, so
the unbounded repeat() is safe.

Perhaps there is a way to do this with numpy that would be faster, I don't
know. You might want to ask on comp.lang.python; there are some optimization
gurus who hang out there.

Kent
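
P.S. Here is an untested sketch putting those suggestions together. It
assumes array_a is a plain Python list (or anything else supporting
__setitem__) and that dict_long_lists has already been populated:

    from itertools import imap, repeat

    def fill_array(array_a, dict_long_lists, m, n, p):
        value = m + n + p                 # hoisted: computed once, not per element
        setitem = array_a.__setitem__     # bound to a local name for fast lookup
        for long_list in dict_long_lists.itervalues():
            # same effect as: for element in long_list: array_a[element] = value
            list(imap(setitem, long_list, repeat(value)))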
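
And if numpy is an option, fancy indexing should beat the Python-level loop
by a wide margin. This sketch assumes array_a is a numpy array and that the
elements of each long_list are valid indices into it:

    import numpy as np

    def fill_array_np(array_a, dict_long_lists, m, n, p):
        value = m + n + p
        for long_list in dict_long_lists.itervalues():
            # one C-level assignment per list instead of one Python-level
            # assignment per element
            array_a[np.asarray(long_list)] = value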