>>>>> ma3mju <matt.u...@googlemail.com> (m) wrote:

>m> Hi all,
>m> I'm having trouble with multiprocessing. I'm using it to speed up some
>m> simulations. I find that for large queues, when the process reaches the
>m> poison pill it does not exit, whereas for smaller queues it works
>m> without any problems. Has anyone else had this trouble? Can anyone
>m> tell me a way around it? The code is in two files below.
How do you know it doesn't exit? You haven't shown any of your output.

I did discover a problem in your code, but it should cause an exception:

>m> #set off some of the easy workers on the hard work (maybe double
>m> #number of hard)
>m> for i in range(0, num_hard_workers):
>m>     hard_work_queue.put(None)
>m>     hard_workers.append(multiprocessing.Process(
>m>         target=GP.RandomWalkGeneralizationErrorParallel,
>m>         args=(hard_work_queue, result_queue,)))
>m> #wait for all hard workers to finish
>m> for worker in hard_workers:
>m>     worker.join()

Here you create new hard workers, but you never start them. The join should
then raise an exception when it reaches these.
-- 
Piet van Oostrum <p...@cs.uu.nl>
URL: http://pietvanoostrum.com [PGP 8DAE142BE17999C4]
Private email: p...@vanoostrum.org
-- 
http://mail.python.org/mailman/listinfo/python-list
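[Editor's note: for readers following the thread, here is a minimal, self-contained
sketch of the poison-pill pattern being discussed, with Process.start() called
before Process.join() as the reply points out. The names (worker, work_queue,
result_queue, run) are illustrative, not from the original poster's code.]

```python
# Minimal poison-pill sketch: workers consume a shared queue until they
# receive None, then exit. Illustrative names, not the original code.
import multiprocessing


def worker(work_queue, result_queue):
    """Consume items until the poison pill (None) arrives, then exit."""
    while True:
        item = work_queue.get()
        if item is None:        # poison pill: this worker is done
            break
        result_queue.put(item * item)


def run(num_workers=2, num_items=10):
    work_queue = multiprocessing.Queue()
    result_queue = multiprocessing.Queue()

    workers = []
    for _ in range(num_workers):
        p = multiprocessing.Process(target=worker,
                                    args=(work_queue, result_queue))
        p.start()               # the step missing in the quoted code
        workers.append(p)

    for i in range(num_items):
        work_queue.put(i)
    for _ in range(num_workers):  # one pill per worker, or some never exit
        work_queue.put(None)

    # Drain results before joining: joining a process that still has
    # undelivered queue data can block.
    results = sorted(result_queue.get() for _ in range(num_items))
    for p in workers:
        p.join()                # safe now: every worker was started
    return results


if __name__ == "__main__":
    print(run())
```

Note that each worker needs its own pill, and that results are drained before
join() is called; both are common sources of hangs with this pattern.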