Hello!

I'm about to parallelize an algorithm that has turned out to be too
slow. Before I start, I'd like to hear some suggestions/hints from
you.

The algorithm essentially works like this: there is an iterator
function "foo" yielding a special kind of permutation of [1, ..., n].
The main program then iterates over these permutations, calculating
some properties. Each time a calculation finishes, a counter is
incremented, and each time the counter is divisible by 100, the
current progress is printed.
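
In (simplified) code, where compute_properties() stands in for the
real calculation:

====
count = 0
for perm in foo():             # foo() yields the permutations one by one
    compute_properties(perm)   # the expensive part
    count += 1
    if count % 100 == 0:
        print "%d permutations done" % count
====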

The classical idea is to spawn m threads and use one global lock
around calls to the shared iterator, plus another global lock for
incrementing the progress counter. Is there any better way? I'm
especially concerned about performance degradation due to locking -
is there any way to somehow avoid it?
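
To make sure we're talking about the same scheme, here is roughly
what I have in mind (foo, compute_properties and m as above):

====
import threading

permutation = foo()
iter_lock = threading.Lock()
count_lock = threading.Lock()
counter = [0]

def worker():
    while True:
        with iter_lock:            # serialize access to the shared iterator
            try:
                perm = permutation.next()
            except StopIteration:
                return
        compute_properties(perm)   # done outside the locks
        with count_lock:           # serialize the progress counter
            counter[0] += 1
            if counter[0] % 100 == 0:
                print "%d permutations done" % counter[0]

threads = [threading.Thread(target=worker) for i in xrange(m)]
for t in threads:
    t.start()
for t in threads:
    t.join()
====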

I've also read about the `multiprocessing' module, and as far as
I've understood:

====
from multiprocessing import Process

permutation = foo()
threadlst = []
for i in xrange(m):
    # each child process calls the iterator's next() once
    p = Process(target=permutation.next)
    threadlst.append(p)
    p.start()
for p in threadlst:
    p.join()
====

should do the trick. Am I right? Is there any better way than this?
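
One alternative I've been wondering about is multiprocessing.Pool,
which hands the permutations to the workers itself, so no explicit
locks would be needed. A rough sketch of what I'm picturing (again
with compute_properties() as a placeholder, and assuming it is
importable by the worker processes):

====
from multiprocessing import Pool

def work(perm):
    compute_properties(perm)
    return 1                 # just a completion signal

pool = Pool(processes=m)
count = 0
# imap_unordered streams results back as the workers finish
for _ in pool.imap_unordered(work, foo(), chunksize=100):
    count += 1
    if count % 100 == 0:
        print "%d permutations done" % count
pool.close()
pool.join()
====

Would that be the idiomatic way, or is there something better still?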