Well, I managed to get a successful mpirun at a slot count of 132 using --mca
btl ^sm. However, when I increased the slot count to 160, mpirun crashed
without any error output:
mpirun -np 160 -display-devel-map --prefix /hpc/apps/mpi/openmpi/1.8.6/
--hostfile hostfile-noslots --mca btl ^sm --heter
That's strange. Are you sure the btl MCA variable is not being set through
an environment variable or through an MCA parameter file? You should be
able to tell from the output of ompi_info -a.
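A quick way to check the usual places (a sketch; the grep patterns and the per-user file path ~/.openmpi/mca-params.conf are the common defaults, but your install may differ):

```shell
# MCA settings exported through the environment show up as OMPI_MCA_* variables:
env | grep OMPI_MCA || echo "no OMPI_MCA environment overrides"
# The per-user MCA parameter file, if present, lives here by default:
cat ~/.openmpi/mca-params.conf 2>/dev/null || echo "no per-user mca-params.conf"
# ompi_info -a lists every MCA parameter, its current value, and its source:
# ompi_info -a | grep -i " btl"
```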
BTW, you do not need to specify both sm and vader. vader is a newer
shared memory btl that will likely replace sm.
Starting in the 1.7 series, Open MPI automatically binds application
processes. By default, we bind to core if np <= 2, otherwise we bind to
socket. So your proc, and all its threads, are being bound to a single
core.
What you probably want to do is add either "--bind-to none" or "--bind-to
socket".
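For example (a sketch; ./a.out stands in for your hybrid binary):

```shell
# Unbind entirely so the OpenMP threads can float across all cores:
mpirun -np 2 --bind-to none -x OMP_NUM_THREADS=4 ./a.out
# Or confine each rank (and its threads) to one socket; --report-bindings
# prints where each rank was bound so you can verify:
mpirun -np 2 --bind-to socket --report-bindings -x OMP_NUM_THREADS=4 ./a.out
```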
Hi,
I'm trying hybrid programming and I have this strange issue:
Running the Fortran code listed below, it uses only 200% of CPU on each
node, even though I request 4 threads with the command
mpirun -n 2 -npernode 1 -x OMP_NUM_THREADS=4 ./pi_parallel_do.f.exe
I'll explain: four threads
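One way to check whether the requested thread count actually reaches each rank is the standard OMP_DISPLAY_ENV variable (OpenMP 4.0 and later), which makes the OpenMP runtime print its settings at startup. A sketch using the original command:

```shell
# Each rank's OpenMP runtime will print OMP_NUM_THREADS and related
# settings to stderr as it initializes:
mpirun -n 2 -npernode 1 -x OMP_NUM_THREADS=4 -x OMP_DISPLAY_ENV=true \
    ./pi_parallel_do.f.exe
```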
Hi Nick,
I will endeavor to put together a wiki for the master/v2.x series specific
to Cray systems
(sans those customers who choose neither to 1) use the Cray-supported eslogin
setup nor to 2) permit users to directly log in to and build apps on service
nodes) that explains best practices for
using Open MPI