Hi Carsten,
I've been thinking a bit about this issue, and for now a relatively easy fix
would be to enable thread affinity when all cores on a machine are used. When
fewer threads are started, I don't want to turn on thread affinity, because
any combination might either
- interfere with
Hi,
We haven't observed any problems running with threads on 24-core AMD nodes
(4x6 cores).
Berk
From: ckut...@gwdg.de
Date: Thu, 21 Oct 2010 12:03:00 +0200
To: gmx-users@gromacs.org
Subject: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs
Hi,
does anyone have experience
Hi Carsten,
As Berk noted, we haven't had problems on 24-core machines, but quite frankly I
haven't looked at thread migration.
Currently, the wait states actively yield to the scheduler, which is an
opportunity for the scheduler to re-assign threads to different cores. I could
set harder
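As an aside, "setting thread affinity" here means fixing each thread to a CPU mask so the scheduler cannot migrate it when the thread yields. GROMACS' thread code is C (where this would go through `pthread_setaffinity_np`/`sched_setaffinity(2)`); as a minimal Linux-only sketch of the same mechanism, Python exposes the identical syscall via `os.sched_setaffinity`:

```python
# Sketch of CPU pinning on Linux. os.sched_setaffinity wraps the same
# sched_setaffinity(2) call a C implementation would use; this is an
# illustration of the mechanism, not GROMACS code.
import os

pid = 0  # 0 means "the calling process/thread"

# Remember the original mask so it can be restored afterwards.
original = os.sched_getaffinity(pid)

# Pin to a single core: from now on the scheduler may only run us there.
os.sched_setaffinity(pid, {0})
assert os.sched_getaffinity(pid) == {0}

# Yielding (what the wait states do) lets the scheduler run other work,
# but with affinity set it cannot move us off core 0.
os.sched_yield()
assert os.sched_getaffinity(pid) == {0}

# Restore the original mask.
os.sched_setaffinity(pid, original)
```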
Hi Sander,
On Oct 21, 2010, at 12:27 PM, Sander Pronk wrote:
Hi Carsten,
As Berk noted, we haven't had problems on 24-core machines, but quite frankly
I haven't looked at thread migration.
I did not have any problems on 32-core machines either, only on 48-core ones.
Currently, the
Thanks for the information; the OpenMPI recommendation is probably because
OpenMPI goes to great lengths trying to avoid process migration. The numactl
doesn't prevent migration as far as I can tell: it controls where memory gets
allocated if it's NUMA.
For gromacs the setting should of
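The distinction Sander draws is that the kernel tracks CPU affinity and NUMA memory placement as two separate per-task attributes: numactl's memory options (`--membind`, `--interleave`) set only the memory policy, while migration is governed by the scheduler's affinity mask (what `taskset` or `numactl --physcpubind` set). A small Linux-only sketch reading both masks from `/proc/self/status`:

```python
# Linux keeps a task's allowed CPUs and allowed NUMA memory nodes as two
# separate fields; numactl's memory options affect only the latter.
# Linux-only sketch: read both from /proc/self/status.
def allowed(field):
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(field + ":"):
                return line.split(":", 1)[1].strip()
    return None

print("CPUs allowed:", allowed("Cpus_allowed_list"))        # e.g. 0-47
print("NUMA nodes allowed:", allowed("Mems_allowed_list"))  # e.g. 0-7
```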
On 21 Oct 2010, at 16:50 , Carsten Kutzner wrote:
On Oct 21, 2010, at 4:44 PM, Sander Pronk wrote:
Thanks for the information; the OpenMPI recommendation is probably because
OpenMPI goes to great lengths trying to avoid process migration. The
numactl doesn't prevent migration as far as I can tell: it controls where
memory gets allocated if it's NUMA.
My understanding is that processes
Hi,
FWIW, I have recently asked about this on the hwloc mailing list:
http://www.open-mpi.org/community/lists/hwloc-users/2010/10/0232.php
In general, hwloc is a useful tool for these things.
http://www.open-mpi.org/projects/hwloc/
Best,
Ondrej
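hwloc itself is a C library with CLI tools (`hwloc-ls` to print the topology, `hwloc-bind` to bind processes). As a rough Linux-only stand-in for what it discovers, the kernel exposes the same package/core layout under sysfs; a sketch:

```python
# Minimal Linux-only topology dump from sysfs, as a stand-in for hwloc-ls.
# hwloc is the proper tool; this only shows where the raw data lives.
import glob, os

cpu_dirs = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*"),
                  key=lambda p: int(p.rsplit("cpu", 1)[1]))
for cpu_dir in cpu_dirs:
    topo = os.path.join(cpu_dir, "topology")
    if not os.path.isdir(topo):
        continue  # e.g. an offline CPU
    with open(os.path.join(topo, "physical_package_id")) as f:
        pkg = f.read().strip()
    with open(os.path.join(topo, "core_id")) as f:
        core = f.read().strip()
    print(f"{os.path.basename(cpu_dir)}: package {pkg}, core {core}")
```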
On Thu, Oct 21, 2010 at 12:03, Carsten Kutzner