On Jun 22, 2010, at 11:54 AM, Kevin Hunter wrote:

> Man, I'm late to this convo as I'm getting these emails all out of
> thread order and at random times ... corporate filters rock.

You could just use a private email address, such as one from Gmail, to 
subscribe to this list.  (P.S., presumably sandia.gov means that you work at 
the Sandia labs, which means you are located in Albuquerque?  Just wondering, 
because that is where I am.)
> 
> On Jun 22, 12:25 am, William Ratcliff wrote:
>> I would go with multiprocessing rather than multithreading--processes are
>> "weightier" but there are no side effects....
> 
> To add on to this, if you're worried about performance, 
> multi/processing/ usually gets you farther with Python than threading.  
> The reason is the Python GIL (Global Interpreter Lock), which is very 
> much oriented around a single-threaded mindset.  There was some work a 
> couple of years ago to try to remove it (and more work may be happening 
> as we speak, but I can't say), but the removal proved to be even more 
> of a performance drag for threaded applications.
> 
> If you'd like to "see" the performance degradation, write yourself a 
> multi-threaded and an analogous multi-process test application, and 
> time them with a standard utility (e.g. /usr/bin/time).  You should 
> notice a *lot* more time spent doing system calls (i.e. pure overhead) 
> in the threaded version as compared to the multiple-process version.
> 
> A further argument for multiple processes is that, because memory is 
> not shared, you're forced into a (hopefully) more rigorous passing of 
> information, which has huge potential to reduce bugs.
> 
> Please take all of the above with a grain of salt, and also recognize
> that I'm *not* a Windows person: my analysis may be incorrect for that
> platform (especially since I'm aware that new processes are *much*
> heavier there than with, at least, Linux).
> 
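As a concrete version of the timing experiment suggested above, here is a 
minimal sketch (the busy-loop, loop count, and worker count are made up for 
illustration; absolute numbers will vary a lot by platform):

```python
import time
from threading import Thread
from multiprocessing import Process

def busy(n=2_000_000):
    # CPU-bound work: the GIL serializes this across threads,
    # while separate processes each get their own interpreter.
    total = 0
    for i in range(n):
        total += i * i

def run(worker_cls, count=4):
    # worker_cls is either Thread or Process; both share the
    # same start()/join() interface in the standard library.
    workers = [worker_cls(target=busy) for _ in range(count)]
    start = time.time()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.time() - start

if __name__ == '__main__':
    print('threads:   %.2fs' % run(Thread))
    print('processes: %.2fs' % run(Process))
```

On a multicore Linux box you would typically see the process version finish 
faster for this kind of CPU-bound loop; for I/O-bound work the difference 
largely disappears.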

I still would like to know what exactly one should use to accomplish this. 
Do you have to manage the processes manually, or is there a good tool for it 
(I'm not even sure how to do it manually)?  I know a little about IPython's 
multiprocessing abilities, but that's about it.  Also, am I mistaken, or does 
the multiprocess approach require completely rewriting a sequential algorithm, 
splitting apart the things that can be done independently?  If so, it might 
not be worth it except for a few very intensive operations.
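To make the question concrete, my understanding is that the rewrite can be 
fairly small when the work items are already independent, using the standard 
library's multiprocessing.Pool (the `square` function below is just a 
placeholder for real work):

```python
from multiprocessing import Pool

def square(x):
    # Stand-in for an expensive, independent computation.
    return x * x

if __name__ == '__main__':
    data = range(10)
    # Sequential version:
    sequential = [square(x) for x in data]
    # Parallel version: Pool.map splits the iterable across worker
    # processes and reassembles the results in the original order.
    with Pool(processes=4) as pool:
        parallel = pool.map(square, data)
    assert sequential == parallel
```

The catch, as far as I can tell, is exactly the splitting: you have to 
identify the independent pieces yourself, and anything with shared state 
needs more restructuring than this.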

I know other languages have ways of marking what should be run concurrently 
directly in the code, such as Objective-C blocks with Grand Central Dispatch 
(see [0]).  Maybe ten years from now Python will have removed the GIL and 
Guido will have come up with some awesome Pythonic version of that.

Aaron Meurer

[0] - http://arstechnica.com/apple/reviews/2009/08/mac-os-x-10-6.ars/10

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.
