Re: [OMPI users] MPI process hangs if OpenMPI is compiled with --enable-thread-multiple -- part II

2013-12-02 Thread Ralph Castain
No surprise there - that's known behavior. As has been said, we hope to extend the thread-multiple support in the 1.9 series. On Mon, Dec 2, 2013 at 6:33 PM, Eric Chamberland <eric.chamberl...@giref.ulaval.ca> wrote: > Hi, > > I just opened a new "chapter" with the same subject. ;-) > > We are

[OMPI users] MPI process hangs if OpenMPI is compiled with --enable-thread-multiple -- part II

2013-12-02 Thread Eric Chamberland
Hi, I just opened a new "chapter" with the same subject. ;-) We are using OpenMPI 1.6.5 (compiled with --enable-thread-multiple) with PETSc 3.4.3 (on the Colosse supercomputer: http://www.calculquebec.ca/en/resources/compute-servers/colosse). We observed a deadlock with threads within the openib
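
For context, an application opts into the thread-multiple support being discussed here through MPI_Init_thread; a minimal C sketch of that request (the file name and the printed check are illustrative, not taken from the original report):

    /* check_thread_multiple.c -- hypothetical example: request MPI_THREAD_MULTIPLE
       and report what the library actually granted. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE not granted (provided = %d)\n", provided);
        MPI_Finalize();
        return 0;
    }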

Re: [OMPI users] [EXTERNAL] Re: (OpenMPI for Cray XE6 ) How to set mca parameters through aprun?

2013-12-02 Thread Ralph Castain
FWIW: that has been fixed with the current head of the 1.7 branch (will be in 1.7.4 release) On Mon, Dec 2, 2013 at 2:28 PM, Nathan Hjelm wrote: > Ack, forgot about that. There is a bug in 1.7.3 that breaks one of LANL's > default > settings. Just change the line in >

Re: [OMPI users] [EXTERNAL] Re: (OpenMPI for Cray XE6 ) How to set mca parameters through aprun?

2013-12-02 Thread Teranishi, Keita
Nathan, It is working! Thanks, -- Keita Teranishi, Principal Member of Technical Staff, Scalable Modeling and Analysis Systems, Sandia National Laboratories, Livermore, CA 94551, +1 (925) 294-3738 On 12/2/13 2:28 PM,
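
For reference, since aprun does not take mpirun's --mca command-line options, one common way to set MCA parameters under it is Open MPI's OMPI_MCA_<name> environment-variable convention; a hedged sketch (the parameter and application name are illustrative only, not taken from this thread):

    export OMPI_MCA_mpi_show_mca_params=all   # any MCA parameter can be exported as OMPI_MCA_<name>=<value>
    aprun -n 16 ./my_app                      # hypothetical application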

Re: [OMPI users] MPI process hangs if OpenMPI is compiled with --enable-thread-multiple

2013-12-02 Thread Jeff Squyres (jsquyres)
I'm joining this thread late, but I think I know what is going on: - I am able to replicate the hang with 1.7.3 on Mavericks (with threading enabled, etc.) - I notice that the hang has disappeared at the 1.7.x branch head (also on Mavericks) Meaning: can you try with the latest 1.7.x nightly

Re: [OMPI users] [EXTERNAL] Re: (OpenMPI for Cray XE6 ) How to set mca parameters through aprun?

2013-12-02 Thread Nathan Hjelm
Ack, forgot about that. There is a bug in 1.7.3 that breaks one of LANL's default settings. Just change the line in contrib/platform/lanl/cray_xe6/optimized-common from: enable_orte_static_ports=no to: enable_orte_static_ports=yes That should work. -Nathan On Wed, Nov 27, 2013 at
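
A hedged one-liner for making that change in a source tree, assuming GNU sed and the platform-file path quoted above:

    sed -i 's/^enable_orte_static_ports=no$/enable_orte_static_ports=yes/' \
        contrib/platform/lanl/cray_xe6/optimized-common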

Re: [OMPI users] configure: error: Could not run a simple Fortran program. Aborting.

2013-12-02 Thread Jeff Squyres (jsquyres)
I did notice that you have an oddity: - I see /usr/local/opt/gfortran/bin in your PATH (line 41 in config.log) - I see that configure is invoking /usr/local/bin/gfortran (line 7630 and elsewhere in config.log) That implies that you have 2 different gfortrans installed on your machine, one of
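
One hedged way to list every gfortran on the search path and confirm which one configure is picking up (standard shell commands; the two paths are the ones cited from config.log above):

    type -a gfortran                                  # lists every gfortran found on PATH, in search order
    /usr/local/bin/gfortran --version                 # the one configure invoked
    /usr/local/opt/gfortran/bin/gfortran --version    # the one implied by PATH in config.log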

Re: [OMPI users] configure: error: Could not run a simple Fortran program. Aborting.

2013-12-02 Thread Raiden Hasegawa
Yes, what I meant is that when running: /usr/local/bin/gfortran -o conftest conftest.f outside of configure it does work. I don't think I have DYLD_LIBRARY_PATH set, but I will check when I get back to my home computer. On Mon, Dec 2, 2013 at 3:47 PM, Jeff Squyres (jsquyres)
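
A quick hedged check for the variables mentioned in this exchange (plain shell; empty output means the variable is not set):

    echo "DYLD_LIBRARY_PATH=${DYLD_LIBRARY_PATH}"
    echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}"
    env | grep -E '^(FC|FCFLAGS)='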

Re: [OMPI users] configure: error: Could not run a simple Fortran program. Aborting.

2013-12-02 Thread Jeff Squyres (jsquyres)
On Dec 2, 2013, at 3:00 PM, Raiden Hasegawa wrote: > Thanks, Jeff. The compiler does in fact work when running the troublesome > line in ./configure. Errr... I'm not sure how to parse that. The config.log you cited shows that the compiler does *not* work in

Re: [OMPI users] configure: error: Could not run a simple Fortran program. Aborting.

2013-12-02 Thread Raiden Hasegawa
Thanks, Jeff. The compiler does in fact work when running the troublesome line in ./configure. I haven't set either FC or FCFLAGS, nor do I have LD_LIBRARY_PATH set in my .bashrc. Do you have any thoughts on what environment variable may trip this up? Raiden On Mon, Dec 2, 2013 at 11:23 AM,
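
If a second gfortran turns out to be the culprit, one hedged workaround is to tell configure exactly which Fortran compiler to use instead of relying on PATH (the path is the one cited elsewhere in this thread):

    ./configure FC=/usr/local/bin/gfortran    # add --prefix and any other usual options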

Re: [OMPI users] [EXTERNAL] Re: open-mpi on Mac OS 10.9 (Mavericks)

2013-12-02 Thread Jeff Squyres (jsquyres)
Ah -- sorry, I missed this mail before I replied to the other thread (OS X Mail threaded them separately somehow...). Sorry to ask you to dive deeper, but can you find out where in orte_ess.init() it's failing? orte_ess.init is actually a function pointer; it's a jump-off point into a
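
A hedged sketch of one way to narrow that down on OS X, assuming lldb is available and the test program is a simple MPI hello world; orte_init is the ORTE entry point that ends up calling the orte_ess.init function pointer mentioned above:

    lldb -- ./hello_c
    (lldb) breakpoint set -n orte_init   # orte_init eventually calls the orte_ess.init pointer
    (lldb) run                           # stops at orte_init
    (lldb) step                          # keep stepping toward the failure point
    (lldb) thread backtrace              # shows exactly where the init died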

Re: [OMPI users] open-mpi on Mac OS 10.9 (Mavericks)

2013-12-02 Thread Jeff Squyres (jsquyres)
Karl -- Can you force the use of just the shared memory transport -- i.e., disable the TCP BTL? For example: mpirun -np 2 --mca btl sm,self hello_c If that also hangs, can you attach a debugger and see *where* it is hanging inside MPI_Init()? (In OMPI, MPI::Init() simply invokes
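
A hedged sketch of attaching to one of the hung ranks to get that backtrace, assuming lldb on OS X (the PID placeholder is whatever ps reports for a hello_c process):

    mpirun -np 2 --mca btl sm,self hello_c &   # the command suggested above
    ps aux | grep hello_c                      # find the PID of one of the ranks
    lldb -p <PID>                              # attach to that rank
    (lldb) thread backtrace all                # shows where MPI_Init is blocked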

Re: [OMPI users] configure: error: Could not run a simple Fortran program. Aborting.

2013-12-02 Thread Jeff Squyres (jsquyres)
It looks like your Fortran compiler installation is borked. Have you tested with the same test program that configure used? program main end Put that in a simple "conftest.f" file, and try the same invocation line that configure used: /usr/local/bin/gfortran -o conftest
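
Spelled out, the test that configure runs amounts to this (the two-line Fortran program is the one quoted above; the trailing ./conftest run is just an added sanity check):

    printf '      program main\n      end\n' > conftest.f
    /usr/local/bin/gfortran -o conftest conftest.f && ./conftest && echo "compiler works"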

Re: [OMPI users] Bug in MPI_REDUCE in CUDA-aware MPI

2013-12-02 Thread Rolf vandeVaart
Hi Peter: The reason behind not having the reduction support (I believe) was just the complexity of adding it to the code. I will at least submit a ticket so we can look at it again. Here is a link to the FAQ which lists the APIs that are CUDA-aware.

Re: [OMPI users] Bug in MPI_REDUCE in CUDA-aware MPI

2013-12-02 Thread Peter Zaspel
Hi Rolf, OK, I didn't know that. Sorry. Yes, it would be a pretty important feature in cases when you are doing reduction operations on many, many entries in parallel. Therefore, each reduction is not very complex or time-consuming but potentially

Re: [OMPI users] Bug in MPI_REDUCE in CUDA-aware MPI

2013-12-02 Thread Rolf vandeVaart
Thanks for the report. CUDA-aware Open MPI does not currently support doing reduction operations on GPU memory. Is this a feature you would be interested in? Rolf
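
Until such support exists, a common workaround is to stage the data through host memory around the reduction; a minimal hedged C sketch (function and buffer names are illustrative, and error checking is omitted):

    /* Hypothetical helper: copy device data to the host, reduce there,
       and copy the result back to the device on the root rank. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdlib.h>

    void reduce_gpu_buffer(const double *d_local, double *d_result, int n, MPI_Comm comm)
    {
        int rank;
        double *h_local  = malloc(n * sizeof(double));
        double *h_result = malloc(n * sizeof(double));

        MPI_Comm_rank(comm, &rank);
        cudaMemcpy(h_local, d_local, n * sizeof(double), cudaMemcpyDeviceToHost);
        MPI_Reduce(h_local, h_result, n, MPI_DOUBLE, MPI_SUM, 0, comm);
        if (rank == 0)   /* MPI_Reduce only defines the result on the root */
            cudaMemcpy(d_result, h_result, n * sizeof(double), cudaMemcpyHostToDevice);

        free(h_local);
        free(h_result);
    }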