Re: [OMPI users] Processes unable to communicate when using MPI_Comm_spawn on Windows

2016-06-08 Thread Gilles Gouaillardet
Christopher, the sm BTL does not work with intercommunicators and hence disqualifies itself; I guess this is what you interpreted as 'partially working'. I am surprised you are using a privileged port (260 < 1024); are you running as an admin? Open MPI is no longer supported on Windows,
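A minimal sketch of the workaround implied here: disable the sm BTL (or, equivalently, allow only tcp and self) when launching; 'spawner.exe' is a hypothetical executable name, not from the original thread:

    # exclude the shared-memory BTL ('^' negates the list)
    mpirun -np 1 -mca btl ^sm spawner.exe

    # equivalent: restrict to the TCP and self BTLs
    mpirun -np 1 -mca btl tcp,self spawner.exe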

Re: [OMPI users] Processes unable to communicate when using MPI_Comm_spawn on Windows

2016-06-08 Thread Roth, Christopher
Well, that obvious error message states the basic problem - I was hoping you had noticed a detail in the ompi_info output that would point to a reason for it. Further test runs with the options '-mca btl tcp,self' (excluding 'sm' from the mix) and '-mca btl_base_verbose 100' supply some more
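For reference, a sketch of the full command line being described; 'scheduler.exe' stands in for the actual executable, which is not named in the excerpt:

    mpirun -np 1 -mca btl tcp,self -mca btl_base_verbose 100 scheduler.exe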

Re: [OMPI users] Processes unable to communicate when using MPI_Comm_spawn on Windows

2016-06-08 Thread Ralph Castain
> On Jun 8, 2016, at 4:30 AM, Roth, Christopher wrote: > > What part of this output indicates this non-communicative configuration? -- At least one pair of MPI processes are unable to reach each other for

Re: [OMPI users] openmpi-dev-4221-gb707d13: referenced symbol

2016-06-08 Thread Jeff Squyres (jsquyres)
Filed https://github.com/open-mpi/ompi/issues/1771 to track the issue. > On Jun 8, 2016, at 1:47 AM, George Bosilca wrote: > > Apparently Solaris 10 lacks support for strnlen. We should add it to our > configure and provide a replacement where needed. > > George.

[OMPI users] cuda-memcheck reports errors for MPI functions after use of cudaSetDevice

2016-06-08 Thread Kristina Tesch
Hello everyone, in my application I use CUDA-aware Open MPI 1.10.2 together with CUDA 7.5. If I call cudaSetDevice(), cuda-memcheck reports this error for all subsequent MPI function calls: ========= CUDA-MEMCHECK ========= Program hit CUDA_ERROR_INVALID_VALUE (error 1) due to "invalid
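A minimal sketch of the pattern being described, assuming one device buffer and a CUDA-aware build that accepts device pointers; selecting the device by rank is an illustration, not taken from the original post:

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* the call reported to trigger the cuda-memcheck errors */
        cudaSetDevice(rank % 2);

        double *buf;
        cudaMalloc((void **)&buf, sizeof(double));

        /* CUDA-aware Open MPI can take the device pointer directly */
        MPI_Bcast(buf, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        cudaFree(buf);
        MPI_Finalize();
        return 0;
    }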

Re: [OMPI users] Processes unable to communicate when using MPI_Comm_spawn on Windows

2016-06-08 Thread Roth, Christopher
What part of this output indicates this non-communicative configuration? Please recall, this is using the precompiled Open MPI Windows installation. When the 'verbose' option is added, I see this sequence of output for the scheduler and each of the executor processes: -- [sweet1:06412] mca:

Re: [OMPI users] openmpi-dev-4221-gb707d13: referenced symbol

2016-06-08 Thread George Bosilca
Apparently Solaris 10 lacks support for strnlen. We should add it to our configure and provide a replacement where needed. George. On Wed, Jun 8, 2016 at 4:30 PM, Siegmar Gross <siegmar.gr...@informatik.hs-fulda.de> wrote: > Hi, > > I have built openmpi-dev-4221-gb707d13 on my machines
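A minimal sketch of such a replacement; the function name and the configure wiring are assumptions, not what Open MPI actually adopted (see the issue filed above):

    #include <stddef.h>

    /* fallback strnlen() for platforms such as Solaris 10 whose libc
     * lacks it: returns the length of s, but never more than maxlen */
    size_t opal_strnlen(const char *s, size_t maxlen)
    {
        size_t i;
        for (i = 0; i < maxlen && s[i] != '\0'; ++i)
            ;
        return i;
    }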

Re: [OMPI users] hybrid MPI/OpenMP C++ code without acceleration in OpenMP mode

2016-06-08 Thread Gilles Gouaillardet
Note this is still suboptimal. For example, if you run a job with two MPI tasks with two OpenMP threads each on the same node, it is likely the OpenMP runtime will bind both thread 0s to core 0 and both thread 1s to core 1, which once again means time sharing. Cheers, Gilles On
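A sketch of one way to avoid that overlap with Open MPI's processing-element mapping, so each rank is bound to its own pair of cores; the exact option spelling varies across Open MPI versions, so treat this as an assumption:

    # give each of the two ranks two dedicated cores
    mpirun -np 2 --map-by slot:pe=2 a.out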

[OMPI users] openmpi-dev-4221-gb707d13: referenced symbol

2016-06-08 Thread Siegmar Gross
Hi, I have built openmpi-dev-4221-gb707d13 on my machines (Solaris 10 Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with gcc-5.1.0 and Sun C 5.13. Unfortunately I get an error for a small program. tyr hello_1 109 ompi_info | grep -e "OPAL repo revision:" -e "C compiler absolute:"

[OMPI users] openmpi-v2.x-dev-1468-g6011906 : problem with "--host"

2016-06-08 Thread Siegmar Gross
Hi, I have built openmpi-v2.x-dev-1468-g6011906 on my machines (Solaris 10 Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with gcc-5.1.0 and Sun C 5.13. Unfortunately I have a problem with "--host" for an MPMD program. The behaviour is different on different machines. Why do I need two
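For context, an MPMD launch with "--host" typically looks like the sketch below; the host names and executables are placeholders, since the original command is not shown in the excerpt:

    # one instance each of two different executables, on two named hosts
    mpirun -np 1 --host hostA a.out : -np 1 --host hostB b.out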

[OMPI users] openmpi-v1.10.3rc4: another problem with "--slot-list"

2016-06-08 Thread Siegmar Gross
Hi, I have built openmpi-v1.10.3rc4 on my machines (Solaris 10 Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with gcc-5.1.0 and Sun C 5.13. Unfortunately I once more have a problem with "--slot-list". This time a small program breaks on my Sparc machine while it works as expected on
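For context, a "--slot-list" invocation of the kind being tested might look like this sketch; the socket:core values are illustrative, not the ones from the original report:

    # restrict the job to cores 0-1 of socket 0
    mpirun -np 1 --slot-list 0:0-1 a.out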

[OMPI users] hybrid MPI/OpenMP C++ code without acceleration in OpenMP mode

2016-06-08 Thread Maxim Reshetnyak
Thank you! mpirun --bind-to none ... gives what I need: echo " run 1 " ; export OMP_NUM_THREADS=1 ; time mpirun -np 1 --bind-to none a.out ; echo " run 2 " ; export OMP_NUM_THREADS=2 ; time mpirun -np 1 --bind-to none a.out run 1 0 0 0 0 real 0m43.593s user 0m43.282s sys

Re: [OMPI users] hybrid MPI/OpenMP C++ code without acceleration in OpenMP mode

2016-06-08 Thread Gilles Gouaillardet
mpirun binds a.out to a single core, so when you run OMP_NUM_THREADS=2 mpirun -np 1 a.out, the two OpenMP threads end up time sharing. You can confirm that by comparing grep Cpus_allowed_list /proc/self/status with mpirun -np 1 grep Cpus_allowed_list /proc/self/status. Here is what I get:

[OMPI users] hybrid MPI/OpenMP C++ code without acceleration in OpenMP mode

2016-06-08 Thread Maxim Reshetnyak
Hello! I have a problem with a hybrid MPI/OpenMP C++ code, which shows no speedup in OpenMP mode on my local 4-core home computer. Open MPI was downloaded from www.open-mpi.org/ mpirun -V mpirun (Open MPI) 1.8.1. Compiled from source. Ubuntu 14.04 // === //main.c #include
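The original main.c is truncated above; a minimal sketch of the hybrid pattern under discussion, where each MPI rank spawns OMP_NUM_THREADS OpenMP threads:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* each rank runs this region with OMP_NUM_THREADS threads */
        #pragma omp parallel
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }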

Re: [OMPI users] mpirun and Torque

2016-06-08 Thread Ralph Castain
I can confirm that mpirun will not direct-launch the applications under Torque. This is done for wireup support; if/when Torque natively supports PMIx, we could revisit that design. Gilles: the benefit is twofold: * Torque has direct visibility of the application procs. When we launch