Hi,

I'll try to explain how I understand your question, and how I would
approach this issue. Please correct me if I am wrong. There are many
possible ways of using parallel computational power, and the best solution
heavily depends upon your specific problem. I am far from an expert, but
from the little that I've learned over the past years I know that there is
no easy answer.

1) Spyder uses multi-threading at the application level (via QThread, if I
understand correctly), so things like code completion, the monitor, and
documentation lookup don't freeze the whole program while running.
However, this is not related to running your Python scripts: Spyder's
threading is, as far as I understand it, not related at all to the scripts
you want to run. Python/IPython consoles run in a process that is separate
from the main Spyder process.

When you want to use the parallel power of your computer cluster, you need
to launch a Python or IPython console in Spyder that is aware of all those
CPUs/GPUs. From there you can use the built-in modules threading and
multiprocessing to explicitly start using all that computational power.
Note that, depending on what you are trying to achieve, programming parallel
applications can be challenging. The summer school "Advanced Scientific
Programming in Python" [1] has some very nice lectures on this, and I would
recommend having a look at those informative slides.
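To make that concrete, here is a minimal sketch of the multiprocessing
approach; the function and the input numbers are just placeholders for your
actual number crunching:

```python
from multiprocessing import Pool

def cpu_heavy(n):
    """Placeholder for real number crunching: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    pool = Pool()  # one worker process per CPU core by default
    results = pool.map(cpu_heavy, [10000, 20000, 30000, 40000])
    pool.close()
    pool.join()
    print(results)
```

Because each worker is a separate process, this sidesteps the GIL entirely,
at the cost of pickling the inputs and results between processes.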

2) I don't think Spyder uses HTTP to connect to other Python consoles; I'm
not sure how HTTP is related to the problem at hand.

3) "Interactive Python console" refers to an interactive workflow and does
not necessarily have anything to do with qsub (I assume you refer to your
cluster's mechanism to submit and queue jobs?).

When I look at how we have our clusters configured, I could theoretically
imagine the following workflow:
* launch an IPython instance on the cluster with qsub and let it use as
many CPUs as you see fit (for instance, specify #PBS -lnodes=xxx:ppn=yyy in
your PBS script), and give it as much walltime as you think you need.
* connect to that IPython console using the notebook/web interface (I know
people do that, I just don't know how; there has to be some documentation
available somewhere, or ask on the IPython mailing list, or see [8]).
* within Spyder, connect to the IPython console running on the cluster, but
I don't think that is implemented yet, and I'm not sure whether it's planned.
* or check out [8]: Using IPython for parallel computing.
* or rent an IPython instance on a cluster configured for you at [11].
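The first step above might look roughly like the job script below. I'm
guessing at the exact directives here: the node/core counts, the walltime,
and the number of engines are all placeholders for whatever your cluster
and problem actually require:

```
#!/bin/bash
#PBS -l nodes=2:ppn=8        # placeholder: 2 nodes with 8 cores each
#PBS -l walltime=04:00:00    # placeholder: adjust to what your job needs

cd $PBS_O_WORKDIR
# start IPython's parallel machinery with one engine per requested core,
# see [8] for the details
ipcluster start -n 16
```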

You can release the GIL when using Cython; see for example the slides on
Cython [1]. You cannot release the GIL in a pure Python script. For explicit
concurrency directly in your Python script, use the built-in modules
threading and multiprocessing.
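For completeness, the threading variant looks like this. Keep in mind that
under the GIL, CPU-bound work like the placeholder below will not actually
run in parallel; the same pattern does pay off for I/O-bound tasks:

```python
import threading

results = {}

def worker(name, n):
    # CPU-bound placeholder work; under the GIL only one thread at a
    # time executes Python bytecode, so this won't run in parallel.
    results[name] = sum(range(n))

threads = [threading.Thread(target=worker, args=("t%d" % i, 1000 * (i + 1)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

That GIL limitation is exactly why multiprocessing (separate processes) is
usually the better fit for CPU-bound work in pure Python.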

Numpy/Scipy/Numba and other Python modules already use, in some cases, the
parallel power of your machine. Some examples on Numba can be found here
[2] and here [3]. For Numpy/Scipy, this depends on your BLAS/LAPACK
implementation (such as MKL, OpenBLAS, ACML): the low-level
number-crunching routines on which they are built. Building NumPy from
source against a BLAS/LAPACK implementation optimized for your machine can
be challenging depending on your experience/skills [9] [10], but
performance can increase significantly [4].
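If you are unsure which BLAS/LAPACK your NumPy was built against, you can
ask NumPy itself; the matrix sizes below are arbitrary:

```python
import numpy as np

# Print which BLAS/LAPACK libraries this NumPy build is linked against
np.show_config()

# Calls like np.dot delegate to that BLAS, which may itself use multiple
# threads (e.g. with MKL or OpenBLAS)
a = np.random.rand(200, 200)
b = np.random.rand(200, 200)
c = np.dot(a, b)
print(c.shape)
```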

Other, more exotic libraries that can help unleash the parallel power of
CPUs/GPUs (of which I only know that they exist): Magma [5], Plasma [6],
CUBLAS [7].

[1] https://python.g-node.org/python-summerschool-2012/schedule
[2] http://jakevdp.github.io/blog/2012/08/24/numba-vs-cython/
[3]
http://www.continuum.io/blog/simple-wave-simulation-with-numba-and-pygame
[4]
https://dpinte.wordpress.com/2010/01/15/numpy-performance-improvement-with-the-mkl/
[5] http://icl.cs.utk.edu/magma/index.html
[6] http://icl.cs.utk.edu/plasma/index.html
[7] https://developer.nvidia.com/cublas
[8] http://ipython.org/ipython-doc/dev/parallel/
[9]
http://osdf.github.io/blog/numpyscipy-with-openblas-for-ubuntu-1204-second-try.html
[10]
http://www.der-schnorz.de/2012/06/optimized-linear-algebra-and-numpyscipy/
[11] http://continuum.io/wakari.html

OK, this reply exploded in length and I cannot vouch for its quality or
its usefulness... I'd better stop here :-)

Regards,
David


On 28 May 2013 20:47, <[email protected]> wrote:

> Hi Carols,
>
> thank you so much for this kind answers. I have just few more questions:
> 1. how different is multiprocessing from spyder in terms of implementation?
> 2. Does it mean that spyder uses http?
> 3. ipython stand for interactive python? How are I'm going to deal with
> that and qsub?
>
> Appreciate your help,
> Ana
>
>
> On Wednesday, May 22, 2013 1:28:46 PM UTC-5, [email protected] wrote:
>>
>> Hello,
>>
>> please help me answer those questions, along with: do numpy and scipy
>> thread or spyder does?
>> I am planing to install spyder on Cray XE6 machine, with intention do so
>> some python MPI and threading, and debugging
>> Would spyder be of use for me?
>>
>> Thanks
>> Ana
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "spyder" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/spyderlib?hl=en.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>
>
