Re: [OMPI users] OpenMPI 1.8.6, CentOS 6.3, too many slots = crash

2015-06-26 Thread Lane, William
Well, I managed to get a successful mpirun at a slot count of 132 using --mca btl ^sm; however, when I increased the slot count to 160, mpirun crashed without any error output: mpirun -np 160 -display-devel-map --prefix /hpc/apps/mpi/openmpi/1.8.6/ --hostfile hostfile-noslots --mca btl ^sm --heter
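
When mpirun dies with no output like this, one way to get more diagnostics is to raise the BTL verbosity and keep the daemons attached; a minimal sketch, reusing the hostfile from the quoted command and assuming a hypothetical application binary ./a.out:

    # Re-run with verbose BTL selection and daemon debugging to see where the launch fails
    mpirun -np 160 --hostfile hostfile-noslots --mca btl ^sm \
           --mca btl_base_verbose 100 --debug-daemons ./a.out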

Re: [OMPI users] vader/sm not being picked up

2015-06-26 Thread Nathan Hjelm
That's strange. Are you sure the btl MCA variable is not being set through an environment variable or through an MCA parameter file? You should be able to tell from the output of ompi_info -a. BTW, you do not need to specify both sm and vader. vader is a newer shared memory btl that will likely replace sm
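
A minimal sketch of how to check for such an override, assuming a standard Open MPI install layout (the grep patterns are illustrative, not exact output formats; PREFIX stands for the install prefix):

    # Look for the btl selection parameter and where it was set
    ompi_info -a | grep -i btl
    # Check for an environment-variable override
    env | grep OMPI_MCA_btl
    # Check the user and system MCA parameter files
    cat ~/.openmpi/mca-params.conf
    cat $PREFIX/etc/openmpi-mca-params.conf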

Re: [OMPI users] hybrid programming: cpu load issues

2015-06-26 Thread Ralph Castain
Starting in the 1.7 series, Open MPI automatically binds application processes. By default, we bind to core if np <= 2; otherwise we bind to socket. So your proc, and all its threads, are being bound to a single core. What you probably want to do is add either "--bind-to none" or "--bind-to socket"
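
To see where ranks actually end up, mpirun's --report-bindings option prints each process's binding at launch; a minimal sketch (the application name ./a.out is hypothetical):

    # Each rank reports the cores/sockets it is bound to on stderr before the job starts
    mpirun -np 2 --report-bindings ./a.out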

[OMPI users] hybrid programming: cpu load issues

2015-06-26 Thread Fedele Stabile
Hi, I'm trying hybrid programming and I have this strange issue: running the Fortran code listed below, it uses only 200% of CPU on each node even though I request 4 threads with the command mpirun -n 2 -npernode 1 -x OMP_NUM_THREADS=4 ./pi_parallel_do.f.exe. I'll explain: four threads
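
Following the advice in the reply above, a sketch of the adjusted invocation, keeping the executable from the quoted post and assuming the suggested --bind-to setting:

    # Bind each rank to a socket so its 4 OpenMP threads can spread over 4 cores instead of 1
    mpirun -n 2 -npernode 1 --bind-to socket -x OMP_NUM_THREADS=4 ./pi_parallel_do.f.exe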

Re: [OMPI users] Running with native ugni on a Cray XC

2015-06-26 Thread Howard Pritchard
Hi Nick, I will endeavor to put together a wiki for the master/v2.x series specific to Cray systems (sans those customers who choose neither to 1) use the Cray-supported eslogin setup nor 2) permit users to log in directly to and build apps on service nodes) that explains best practices for using Open MPI