Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-22 Thread Sander Pronk
Hi Carsten, I've been thinking a bit about this issue, and for now a relatively easy fix would be to enable thread affinity when all cores on a machine are used. When fewer threads are started, I don't want to turn on thread affinity, because any combination might either interfere with …
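
To make the proposal concrete, here is a minimal sketch (not the actual Gromacs/thread_MPI code; the function name and the Linux-only calls are assumptions for illustration) of pinning a thread only when the run uses every core on the machine:

    /* Hypothetical illustration of the fix proposed above: pin each
     * thread to one core, but only when all cores are in use.
     * Linux-specific. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <unistd.h>

    static void maybe_pin_thread(int thread_id, int nthreads)
    {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);

        if (nthreads == ncores) {
            /* all cores in use: pinning cannot hurt other jobs */
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(thread_id % ncores, &set);
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        }
        /* fewer threads than cores: leave placement to the scheduler */
    }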

RE: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Berk Hess
Hi, we haven't observed any problems running with threads on 24-core AMD nodes (4x6 cores). Berk. From: ckut...@gwdg.de Date: Thu, 21 Oct 2010 12:03:00 +0200 To: gmx-users@gromacs.org Subject: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs Hi, does anyone have experience …

Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Sander Pronk
Hi Carsten, as Berk noted, we haven't had problems on 24-core machines, but quite frankly I haven't looked at thread migration. Currently, the wait states actively yield to the scheduler, which gives the scheduler an opportunity to re-assign threads to different cores. I could set harder …
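
A sketch of the kind of wait state described (illustrative only, using C11 atomics; not the actual thread_MPI source):

    /* Illustrative wait loop: yielding invites the scheduler to
     * re-assign the thread to another core when it resumes. */
    #include <sched.h>
    #include <stdatomic.h>

    static void wait_for_flag(atomic_int *flag)
    {
        while (atomic_load(flag) == 0) {
            sched_yield();  /* scheduler may migrate us on resume */
        }
    }

A "harder" alternative would spin without yielding, or combine yielding with an explicit affinity mask so migration cannot happen at all.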

Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Carsten Kutzner
Hi Sander, On Oct 21, 2010, at 12:27 PM, Sander Pronk wrote: Hi Carsten, as Berk noted, we haven't had problems on 24-core machines, but quite frankly I haven't looked at thread migration. I did not have any problems on 32-core machines either, only on 48-core ones. Currently, the …
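
One hypothetical way to check whether threads actually migrate on the 48-core machines (not from the thread; Linux-specific, sched_getcpu() needs _GNU_SOURCE):

    /* Log whenever the calling thread changes cores; call this from
     * the hot loop and watch stderr. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static void report_migration(int thread_id)
    {
        static __thread int last_cpu = -1;
        int cpu = sched_getcpu();
        if (cpu != last_cpu) {
            fprintf(stderr, "thread %d now on cpu %d\n", thread_id, cpu);
            last_cpu = cpu;
        }
    }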

Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Sander Pronk
Thanks for the information; the OpenMPI recommendation is probably because OpenMPI goes to great lengths to avoid process migration. numactl doesn't prevent migration as far as I can tell: it controls where memory gets allocated if the machine is NUMA. For Gromacs the setting should of …
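
The distinction shows up directly in code. A minimal sketch using libnuma (assumed installed; link with -lnuma), where memory policy and CPU binding are separate calls:

    /* NUMA policy (what numactl --membind sets) only decides where
     * memory is allocated; preventing migration takes an explicit
     * CPU affinity mask. Linux-specific, error handling omitted. */
    #define _GNU_SOURCE
    #include <numa.h>
    #include <sched.h>

    static void place_and_pin(int node, int cpu)
    {
        if (numa_available() < 0)
            return;

        /* memory side: allocate 1 MB on a specific NUMA node */
        void *buf = numa_alloc_onnode(1 << 20, node);

        /* cpu side: only this keeps the thread from migrating */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        sched_setaffinity(0, sizeof(set), &set);

        numa_free(buf, 1 << 20);
    }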

Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Carsten Kutzner
On Oct 21, 2010, at 4:44 PM, Sander Pronk wrote: Thanks for the information; the OpenMPI recommendation is probably because OpenMPI goes to great lengths to avoid process migration. numactl doesn't prevent migration as far as I can tell: it controls where memory gets allocated …

Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Sander Pronk
On 21 Oct 2010, at 16:50, Carsten Kutzner wrote: On Oct 21, 2010, at 4:44 PM, Sander Pronk wrote: Thanks for the information; the OpenMPI recommendation is probably because OpenMPI goes to great lengths to avoid process migration. numactl doesn't prevent migration as far as …

Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Esztermann, Ansgar
Thanks for the information; the OpenMPI recommendation is probably because OpenMPI goes to great lengths to avoid process migration. numactl doesn't prevent migration as far as I can tell: it controls where memory gets allocated if the machine is NUMA. My understanding is that processes …

Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Ondrej Marsalek
Hi, FWIW, I have recently asked about this on the hwloc mailing list: http://www.open-mpi.org/community/lists/hwloc-users/2010/10/0232.php In general, hwloc is a useful tool for these things: http://www.open-mpi.org/projects/hwloc/ Best, Ondrej On Thu, Oct 21, 2010 at 12:03, Carsten Kutzner …
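
For reference, binding a thread with the hwloc library recommended above looks roughly like this (hwloc 1.x API; function name is an assumption, error handling omitted; link with -lhwloc):

    /* Bind the calling thread to one core using hwloc. */
    #include <hwloc.h>

    static int bind_to_core(unsigned core_index)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE,
                                                 core_index);
        int rc = -1;
        if (core) {
            /* singlify so we bind to a single PU of that core */
            hwloc_cpuset_t set = hwloc_bitmap_dup(core->cpuset);
            hwloc_bitmap_singlify(set);
            rc = hwloc_set_cpubind(topo, set, HWLOC_CPUBIND_THREAD);
            hwloc_bitmap_free(set);
        }
        hwloc_topology_destroy(topo);
        return rc;
    }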