Re: [slurm-users] srun and mpirun

2018-04-13 Thread Chris Samuel
On Saturday, 14 April 2018 1:33:13 AM AEST Mahmood Naderan wrote: > I tried with one of the NAS benchmarks (BT) with 121 threads since the > number of cores should be square. That's an IO benchmark, not going to help you for this. You need something that is compute bound & comms intensive to see…
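For context, the classic NPB-MPI benchmarks referenced above (e.g. NPB 3.3) are compiled for a fixed rank count, and BT in particular requires a square number of ranks such as 121 = 11². A minimal sketch of such a build-and-run, with the problem class and paths as placeholder assumptions:

```shell
# Sketch only: assumes a classic NPB-MPI source tree (e.g. NPB 3.3),
# where the rank count is baked in at compile time.
# CLASS and paths are placeholders; adjust for your installation.
make bt NPROCS=121 CLASS=C     # builds bin/bt.C.121
srun -n 121 bin/bt.C.121       # launch under Slurm with exactly 121 ranks
```

Running with a rank count other than the compiled-in one produces warnings like the "compiled for 121 processes" message quoted later in this thread.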

Re: [slurm-users] srun and mpirun

2018-04-13 Thread Artem Polyakov
The output is certainly not enough to judge, but my first guess would be that your MPI (what is it, btw?) does not support the PMI that is enabled in Slurm. Note also that Slurm now supports 3 ways of doing PMI, and from the info that you have provided it is not clear which one you are using. To judge with…
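A quick way to check which PMI flavors a given Slurm installation offers, and to select one explicitly, is sketched below; the application name is a placeholder and the available plugins vary by site:

```shell
# List the MPI/PMI plugins this Slurm installation supports
# (typical entries include none, pmi2, and pmix).
srun --mpi=list

# Launch with an explicit PMI type rather than the site default,
# e.g. PMIx for an Open MPI build with PMIx support:
srun --mpi=pmix -n 8 ./my_mpi_app   # ./my_mpi_app is a placeholder
```

If the MPI library was built without support for the PMI type Slurm uses, ranks fail to wire up and each process runs as if it were MPI rank 0 of a 1-process job, which matches the symptom reported in this thread.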

Re: [slurm-users] srun and mpirun

2018-04-13 Thread Mahmood Naderan
I tried with one of the NAS benchmarks (BT) with 121 threads since the number of cores should be square. With srun, I get: WARNING: compiled for 121 processes / Number of active processes: 1 / 0 1 408 408 408 / Problem size too big for compiled arra…

Re: [slurm-users] srun and mpirun

2018-04-13 Thread Chris Samuel
On 13/4/18 7:19 pm, Mahmood Naderan wrote: > I see some old posts on the web about performance comparison of srun > vs. mpirun. Is that still an issue? Just running an MPI hello world program is not going to test that. You need to run an actual application that is doing a lot of computation and communication…

Re: [slurm-users] srun and mpirun

2018-04-13 Thread Peter Kjellström
On Fri, 13 Apr 2018 13:49:56 +0430 Mahmood Naderan wrote: > Hi, > I see some old posts on the web about performance comparison of srun > vs. mpirun. Is that still an issue? Both of the following scripts work > for test programs, and surely the performance concern is not visible > here. ... > #SBAT…

[slurm-users] srun and mpirun

2018-04-13 Thread Mahmood Naderan
Hi, I see some old posts on the web about performance comparison of srun vs. mpirun. Is that still an issue? Both of the following scripts work for test programs, and surely the performance concern is not visible here.
#!/bin/bash
#SBATCH --job-name=hello_mpi
#SBATCH --output=hellompi.log
#SBATCH --…
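The batch script quoted in this post is truncated in the archive preview. A minimal sketch of the kind of script being compared, with task counts and the binary name as placeholder assumptions, might look like:

```shell
#!/bin/bash
# Sketch only: directives beyond the ones quoted in the post are assumed,
# and ./hello_mpi is a placeholder binary name.
#SBATCH --job-name=hello_mpi
#SBATCH --output=hellompi.log
#SBATCH --ntasks=8

# Variant 1: srun launches the MPI ranks directly via Slurm's PMI.
srun ./hello_mpi

# Variant 2: mpirun launches the ranks; a Slurm-aware MPI (e.g. Open MPI
# built with Slurm support) detects the allocation automatically.
# mpirun ./hello_mpi
```

As the replies in this thread note, a hello-world program cannot distinguish the two launchers; any performance difference only shows up in applications that are compute bound and communication intensive.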