On 07/08/13 16:59, Janne Blomqvist wrote:
> That is, the memory accounting is per task, and when launching
> using mpirun the number of tasks does not correspond to the number
> of MPI processes, but rather to the number of "orted" processes (1
> per node).

That appears to be correct: I am seeing 1 task in the batch step and
68 orted tasks when I use mpirun, whilst I see 1 task in the batch
step and 1104 namd2 tasks when I use srun.

I can see how that might result in Slurm (wrongly) thinking that a
single task is using more than its allowed memory per task, but I
don't understand how it could lead to Slurm thinking the job is
using vastly more memory than it actually is.

cheers,
Chris
-- 
 Christopher Samuel        Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/      http://twitter.com/vlsci
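
PS: for anyone wanting to compare the per-step task counts and memory
figures themselves, the accounting records should show them with
something along these lines (the job id 12345 is just a placeholder):

  # per-step task counts plus peak and average RSS from Slurm accounting
  sacct -j 12345 --format=JobID,JobName,NTasks,MaxRSS,AveRSS

With mpirun the launcher step shows up with one orted task per node,
whereas with srun the step shows one task per MPI rank (namd2 in this
case), which matches the counts above.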