[slurm-dev] Re: scancel cannot successfully cancel the MPI job created by running salloc

2016-06-04 Thread Sourav Chakraborty
Hi Yingdi,
Perhaps you can use the Slurm Epilog to clean up the user's processes once the job is completed or aborted. You can add something like this to your epilog:
    if [ "$SLURM_JOB_USER" != "root" ]; then
        /usr/bin/pkill -9 -u $SLURM_JOB_USER
    fi
Hope that helps.
Thanks,
Sourav
On Sat, May 28,
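[Editor's note: a minimal standalone epilog along these lines might look like the sketch below. The squeue guard is an added assumption, not part of the original message, so the pkill only fires when the user has no other jobs still running on the node; the node-name handling and exit code are likewise illustrative.]
    #!/bin/bash
    # Hypothetical epilog sketch: remove a user's leftover processes after a job ends.
    # Assumption: only kill if the user has no other RUNNING jobs on this node.
    if [ "$SLURM_JOB_USER" != "root" ]; then
        other=$(squeue --noheader --states=RUNNING \
                       --user="$SLURM_JOB_USER" --nodelist="$(hostname -s)" | wc -l)
        if [ "$other" -eq 0 ]; then
            /usr/bin/pkill -9 -u "$SLURM_JOB_USER"
        fi
    fi
    exit 0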

[slurm-dev] Re: slurm-dev summary, was Re: What follows PMI-2?

2015-09-25 Thread Sourav Chakraborty
Hi All, To clarify things, we have had similar goals and have been working on improving the PMI-2 plugin for some time. We evaluated several designs and strategies: 1. Designs and detailed performance evaluations (up to 16K cores) for on-demand PMI gets (similar to instant startup if I

[slurm-dev] Re: What follows PMI-2?

2015-09-24 Thread Sourav Chakraborty
Hi Andy, These two projects were being developed in parallel. They are quite different in terms of the API, design and performance characteristics. The APIs were prefixed by PMIX only to differentiate the proposed extensions from the existing PMI2 functions. Some of our proposed designs

[slurm-dev] Re: MVAPICH2

2015-06-28 Thread Sourav Chakraborty
Hi Trevor,
Using PMI2 is recommended, as it provides better job startup performance compared to PMI1. To configure MVAPICH2 to use PMI2 and Slurm, use:
    ./configure --with-pmi=pmi2 --with-pm=slurm
To run using PMI2, you should use something like:
    srun --mpi=pmi2 -N 2 -n 4 ./a.out
For more
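[Editor's note: put together, the build-and-run sequence suggested here would look roughly like the sketch below. The install prefix and the node/task counts are placeholders, not values from the thread.]
    # Build MVAPICH2 against Slurm's PMI2 support (prefix is a placeholder)
    ./configure --with-pmi=pmi2 --with-pm=slurm --prefix=$HOME/mvapich2-pmi2
    make -j && make install

    # Launch through Slurm's PMI2 interface instead of mpirun
    srun --mpi=pmi2 -N 2 -n 4 ./a.out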

[slurm-dev] Re: MVAPICH2 2.1 and SLURM docs

2015-04-20 Thread Sourav Chakraborty
Hi Trey,
To use SLURM+PMI2 with MVAPICH2, you should configure it using:
    ./configure --with-pmi=pmi2 --with-pm=slurm
and run it like:
    srun --mpi=pmi2 -n 2 ./a.out
Thanks,
Sourav
On Mon, Apr 20, 2015 at 2:13 PM, Trey Dockendorf treyd...@tamu.edu wrote: Just a heads up to anyone who uses

[slurm-dev] Re: MVAPICH2 2.1 and SLURM docs

2015-04-20 Thread Sourav Chakraborty
Hi Trey,
Thanks for the clarification. We will update the user guide with some more details on using SLURM+PMI2 with MVAPICH2 soon.
Thanks,
Sourav
On Mon, Apr 20, 2015 at 3:19 PM, Trey Dockendorf treyd...@tamu.edu wrote: That's a really unfortunate typo. MpiDefault=pmi2 will just work with
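[Editor's note: the MpiDefault setting Trey refers to lives in slurm.conf; a minimal sketch of the relevant line follows. The comment and surrounding context are illustrative, not taken from the thread.]
    # slurm.conf (excerpt, illustrative)
    # Make PMI2 the default MPI plugin so a plain "srun ./a.out" works
    # without passing --mpi=pmi2 on every launch.
    MpiDefault=pmi2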

[slurm-dev] Re: Multi-thread support for MVAPICH2

2014-11-27 Thread Sourav Chakraborty
/to/hostfile MV2_USE_CUDA=1 MV2_ENABLE_AFFINITY=0 ./mpi app
Thanks,
Sourav Chakraborty
The Ohio State University
On Thu, Nov 27, 2014 at 4:17 AM, zxia...@mail.ustc.edu.cn wrote: Has anyone run multi-threaded MVAPICH2 programs with SLURM? I wonder if it is true that only single-threaded mvapich2
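[Editor's note: when launching through Slurm rather than mpirun_rsh, the equivalent is to export the environment variables and give each rank several CPUs. The sketch below assumes an OpenMP-threaded binary and uses placeholder task/thread counts and a hypothetical binary name.]
    # Disable MVAPICH2 core pinning so each rank's threads can use all allocated cores
    export MV2_ENABLE_AFFINITY=0
    export MV2_USE_CUDA=1          # only needed when passing CUDA device buffers to MPI
    export OMP_NUM_THREADS=8       # assumption: OpenMP threads inside each rank

    # 2 nodes, 4 ranks total, 8 CPUs reserved per rank (values are placeholders)
    srun --mpi=pmi2 -N 2 -n 4 -c 8 ./mpi_threaded_app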