Hi all,
I am using openmpi-1.10.3.
When openmpi-1.10.3 is cross-compiled for ARM (on x86_64, targeting OpenWrt
Linux), libmpi.so.12.0.3 is 2.4 MB, but when I compile it natively on x86_64
(Linux), libmpi.so.12.0.3 is only 990.2 KB.
Can anyone tell me how to reduce the size of libmpi.so.12.0.3 when it is
compiled for ARM?
Tha
Tom,
Regardless of the (lack of a) memory model in Fortran, there is an error in
testmpi3.f90:
shar_mem is declared as an integer, and hence is not in the shared memory.
I attached my version of testmpi3.f90, which behaves just like the C
version,
at least when compiled with -g -O0 and with Ope
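For reference, here is a minimal C sketch of the pattern under discussion (this is not the actual testmpi3 source, and it assumes the test exercises MPI_Win_allocate_shared, as the shar_mem discussion suggests): the base address of the shared segment comes back as a pointer, which is why the Fortran counterpart must be a TYPE(C_PTR) rather than an INTEGER.

/* Hedged sketch, not testmpi3.c: allocate a shared-memory window and
 * note that the base address is returned as a pointer. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm shm_comm;
    MPI_Win win;
    int rank, disp_unit;
    int *shar_mem = NULL;       /* pointer into the shared segment */
    MPI_Aint size, qsize;

    MPI_Init(&argc, &argv);
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shm_comm);
    MPI_Comm_rank(shm_comm, &rank);

    size = (rank == 0) ? sizeof(int) : 0;   /* rank 0 owns the segment */
    MPI_Win_allocate_shared(size, sizeof(int), MPI_INFO_NULL, shm_comm,
                            &shar_mem, &win);
    /* every rank queries rank 0's segment so they all see the same int */
    MPI_Win_shared_query(win, 0, &qsize, &disp_unit, &shar_mem);

    MPI_Win_fence(0, win);
    if (rank == 0)
        *shar_mem = 42;
    MPI_Win_fence(0, win);
    printf("rank %d sees %d\n", rank, *shar_mem);

    MPI_Win_free(&win);
    MPI_Comm_free(&shm_comm);
    MPI_Finalize();
    return 0;
}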
I guess --with-cuda is disabling the default CUDA path, which is
/usr/local/cuda. So you should either not set --with-cuda at all, or set
--with-cuda=$CUDA_HOME (without the trailing /include).
Sylvain
On 10/27/2016 03:23 PM, Craig tierney wrote:
Hello,
I am trying to build OpenMPI 1.10.3 with CUDA but I am unable to b
Hi Ralph,
I haven't played around in this code, so I'll flip the question
over to the Slurm list, and report back here when I learn
anything.
Cheers
Andy
On 10/27/2016 01:44 PM,
r...@open-mpi.org wrote:
Sigh - of course it wouldn’t be simple :-(
Sigh - of course it wouldn’t be simple :-(
All right, let’s suppose we look for SLURM_CPU_BIND:
* if it includes the word “none”, then we know the user specified that they
don’t want us to bind
* if it includes the word mask_cpu, then we have to check the value of that
option.
* If it is all
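For what it's worth, the check described above might look roughly like the sketch below. This is only a sketch, not Open MPI's actual code; the SLURM_CPU_BIND name comes from the srun output quoted in this thread, and as noted later the variable can be set even when no --cpu_bind was given, so the value rather than the mere presence has to be inspected.

/* Rough sketch of the check described above -- not Open MPI's code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *bind = getenv("SLURM_CPU_BIND");

    if (bind == NULL) {
        printf("SLURM_CPU_BIND not set: srun gave us no binding hint\n");
    } else if (strstr(bind, "none") != NULL) {
        printf("user asked for no binding: do not bind\n");
    } else if (strstr(bind, "mask_cpu") != NULL) {
        printf("explicit mask (%s): already bound, leave it alone\n", bind);
    } else {
        printf("other binding option (%s): treat as user-specified\n", bind);
    }
    return 0;
}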
Yes, they still exist:
$ srun --ntasks-per-node=2 -N1 env | grep BIND | sort -u
SLURM_CPU_BIND_LIST=0x
SLURM_CPU_BIND=quiet,mask_cpu:0x
SLURM_CPU_BIND_TYPE=mask_cpu:
SLURM_CPU_BIND_VERBOSE=quiet
Here are the relevant Slurm configuration option
Hello,
Given MPI nodes 0..N-1, with 0 being the root (master) node,
and trying to determine the maximum value of a function over a large
range of values of its parameters:
What are the differences, if any, between:
1. At node i:
evaluate f for each of the values of the parameter space assigned to i
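The question is cut off in the archive, but as a point of reference, option 1 usually amounts to something like the following sketch, where f, the parameter range, and the sample count are all placeholders: each rank evaluates its slice of the parameter space and the root obtains the global maximum with MPI_Reduce/MPI_MAX.

/* Hedged sketch of option 1: each rank scans its own slice of the
 * parameter space; the root gathers the global maximum. */
#include <mpi.h>
#include <stdio.h>

static double f(double x) { return -(x - 3.0) * (x - 3.0); }  /* toy objective */

int main(int argc, char **argv)
{
    int rank, nprocs;
    const long npoints = 1000000;          /* total parameter samples */
    const double lo = 0.0, hi = 10.0;      /* parameter range */
    double local_max = -1.0e300, global_max;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* rank i handles points i, i+nprocs, i+2*nprocs, ... */
    for (long k = rank; k < npoints; k += nprocs) {
        double x = lo + (hi - lo) * (double)k / (double)(npoints - 1);
        double v = f(x);
        if (v > local_max)
            local_max = v;
    }

    MPI_Reduce(&local_max, &global_max, 1, MPI_DOUBLE, MPI_MAX, 0,
               MPI_COMM_WORLD);
    if (rank == 0)
        printf("max f = %g\n", global_max);

    MPI_Finalize();
    return 0;
}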
And if there is no --cpu_bind on the cmd line? Do these not exist?
> On Oct 27, 2016, at 10:14 AM, Andy Riebs wrote:
>
> Hi Ralph,
>
> I think I've found the magic keys...
>
> $ srun --ntasks-per-node=2 -N1 --cpu_bind=none env | grep BIND
> SLURM_CPU_BIND_VERBOSE=quiet
> SLURM_CPU_BIND_TYPE=none
Hi Ralph,
I think I've found the magic keys...
$ srun --ntasks-per-node=2 -N1 --cpu_bind=none env | grep BIND
SLURM_CPU_BIND_VERBOSE=quiet
SLURM_CPU_BIND_TYPE=none
SLURM_CPU_BIND_LIST=
SLURM_CPU_BIND=quiet,none
SLURM_CPU_BIND_VERBOSE=quiet
SLURM_CPU_BIND_TYPE=none
SLURM_CPU_BIND_LIST=
SLURM_CPU_BIND=quiet,none
Hey Andy
Is there a SLURM envar that would tell us the binding option from the srun cmd
line? We automatically bind when direct launched due to user complaints of poor
performance if we don’t. If the user specifies a binding option, then we detect
that we were already bound and don’t do it.
Ho
Hi All,
We are running Open MPI version 1.10.2, built with support for Slurm
version 16.05.0. When a user specifies "--cpu_bind=none", MPI tries to
bind by core, which segv's if there are more processes than cores.
The user reports:
What I found is that
% srun --ntasks-per-node=8 --cpu_bind
The fix for this was just merged (we had previously fixed it in the v2.x
branch, but neglected to also put it on the v2.0.x branch) -- it should be in
tonight's tarball:
https://github.com/open-mpi/ompi/pull/2295
> On Oct 27, 2016, at 6:45 AM, Siegmar Gross
> wrote:
>
> Hi,
>
> I tri
Yes, I tried -O0 and -O3. But VOLATILE is going to thwart a wide range of
optimizations that would break this code.
Jeff
On Thu, Oct 27, 2016 at 2:19 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> Jeff,
>
> Out of curiosity, did you compile the Fortran test program with -O0 ?
Siegmar,
The fix is in the pipe.
Meanwhile, you can download it at
https://github.com/open-mpi/ompi/pull/2295.patch
Cheers,
Gilles
Siegmar Gross wrote:
>Hi,
>
>I tried to install openmpi-v2.0.1-130-gb3a367d on my "SUSE Linux
>Enterprise Server 12.1 (x86_64)" with Sun C 5.14 beta. Unfortunately,
Hi,
I tried to install openmpi-v2.0.1-130-gb3a367d on my "SUSE Linux
Enterprise Server 12.1 (x86_64)" with Sun C 5.14 beta. Unfortunately,
I get the following error. I was able to build it with gcc-6.2.0.
loki openmpi-v2.0.1-130-gb3a367d-Linux.x86_64.64_cc 124 tail -18
log.make.Linux.x86_64.64
Jeff,
Out of curiosity, did you compile the Fortran test program with -O0 ?
Cheers,
Gilles
Tom Rosmond wrote:
>Jeff,
>
>Thanks for looking at this. I know it isn't specific to Open-MPI, but it is a
>frustrating issue vis-a-vis MPI and Fortran. There are many very large MPI
>applications ar
> Rationale. The C bindings of MPI_ALLOC_MEM and MPI_FREE_MEM are similar
> to the bindings for the malloc and free C library calls: a call to
> MPI_Alloc_mem(..., &base) should be paired with a call to
> MPI_Free_mem(base) (one less level of indirection).
> Both arguments are declared to be of
From the MPI 3.1 standard (page 338):
Rationale. The C bindings of MPI_ALLOC_MEM and MPI_FREE_MEM are similar
to the bindings for the malloc and free C library calls: a call to
MPI_Alloc_mem(..., &base) should be paired with a call to
MPI_Free_mem(base) (one less level of indirection). Both
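In C, the pairing the rationale describes looks like the minimal sketch below (the buffer size and its use are placeholders): MPI_Alloc_mem receives &base, while MPI_Free_mem receives base itself, exactly as with malloc and free.

/* Sketch of the pairing described in the rationale above: MPI_Alloc_mem
 * takes &base, MPI_Free_mem takes base (one less level of indirection),
 * mirroring malloc/free. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    double *base = NULL;

    MPI_Init(&argc, &argv);

    MPI_Alloc_mem(100 * sizeof(double), MPI_INFO_NULL, &base);
    base[0] = 1.0;                 /* use the memory like any other buffer */
    printf("base[0] = %g\n", base[0]);
    MPI_Free_mem(base);            /* note: base, not &base */

    MPI_Finalize();
    return 0;
}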
I've had a look at the OpenMPI 1.10.3 sources, and the trouble appears to me to
be that the MPI wrappers declare
the argument
TYPE(C_PTR), INTENT(OUT) :: baseptr
inside the BIND(C) interface on the Fortran side (for OpenMPI this would, for
example be ompi_win_allocate_f), and
the C implement