On 23/07/13 17:06, Christopher Samuel wrote:

> Bringing up a new IBM SandyBridge cluster I'm running a NAMD test 
> case and noticed that if I run it with srun rather than mpirun it 
> goes over 20% slower.

Following on from this issue, we've found that whilst mpirun gives
acceptable performance, the memory accounting doesn't appear to be correct.

Anyone seen anything similar, or any ideas on what could be going on?
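
For reference, the two jobs were launched along these lines - the same
batch script with only the launcher line changed. A rough sketch (the
NAMD binary and config names are placeholders, and I'm assuming
Open-MPI's Slurm integration picks up the task count, so mpirun needs
no explicit -np):

  #!/bin/bash
  #SBATCH --nodes=69
  #SBATCH --ntasks-per-node=16

  # mpirun case (Open-MPI 1.6.5 built against Slurm):
  mpirun namd2 test.namd

  # srun case:
  srun namd2 test.namd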

Here are two identical NAMD jobs running across 69 nodes with 16 cores
per node, this one launched with mpirun (Open-MPI 1.6.5):


==> slurm-94491.out <==
WallClock: 101.176193  CPUTime: 101.176193  Memory: 1268.554688 MB
End of program

[samuel@barcoo-test Mem]$ sacct -j 94491 -o JobID,MaxRSS,MaxVMSize
       JobID     MaxRSS  MaxVMSize
------------ ---------- ----------
94491
94491.batch    6504068K  11167820K
94491.0        5952048K   9028060K


This one launched with srun (about 60% slower):

==> slurm-94505.out <==
WallClock: 163.314163  CPUTime: 163.314163  Memory: 1253.511719 MB
End of program

[samuel@barcoo-test Mem]$ sacct -j 94505 -o JobID,MaxRSS,MaxVMSize
       JobID     MaxRSS  MaxVMSize
------------ ---------- ----------
94505
94505.batch       7248K   1582692K
94505.0        1022744K   1307112K
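
If anyone wants to dig further, some extra sacct fields show which task
and node each maximum was recorded on (a sketch - exact field
availability depends on your Slurm version):

  sacct -j 94491,94505 -o JobID,NNodes,NTasks,MaxRSS,MaxRSSTask,MaxRSSNode,AveRSS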



cheers!
Chris
-- 
 Christopher Samuel        Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/      http://twitter.com/vlsci

