Christoph
- Original Message -
From: "Open MPI Users" <users@lists.open-mpi.org>
To: "Open MPI Users" <users@lists.open-mpi.org>
Cc: "Carlo Nervi" <carlo.ne...@unito.it>
Sent: Thursday, 20 August, 2020 12:17:21
Subject: [OMPI users] OMPI 4.0.4 how to use mpirun properly in numa architecture
I'm using VASP, Quantum Espresso, DFTB+, Gulp, Tinker, Crystal and Gaussian.
VASP, QE and G16 are not a problem (the latter uses threads, up to 48 cores).
QE sometimes slows down, but nothing to worry much about. DFTB+ is often run
as several jobs with MPI. In any case, jobs × MPI ranks <= 48.
I'm
>> If you want to start all your simulations in a single script this would look
>> like
>>
>> mpirun -n 6 --cpu-list "$(seq -s, 0 5)" --bind-to cpu-list:ordered $app
>> mpirun -n 6 --cpu-list "$(seq -s, 6 11)" --bind-to cpu-list:ordered $app
>> ...
>> mpirun -n 6 --cpu-list "$(seq -s, 42 47)" --bind-to cpu-list:ordered $app
On Thu, Aug 20, 2020 at 3:22 AM Carlo Nervi via users
<users@lists.open-mpi.org> wrote:
> Dear OMPI community,
> I'm a simple end-user with no particular experience.
> I compile quantum chemical programs and use them in parallel.
>
Which code? Some QC codes behave differently than traditional
> Best
> Christoph
>
>
> - Original Message -
> From: "Open MPI Users"
> To: "Open MPI Users"
> Cc: "Carlo Nervi"
> Sent: Thursday, 20 August, 2020 12:17:21
> Subject: [OMPI users] OMPI 4.0.4 how to use mpirun properly in numa
> architecture
mpirun -n 6 --cpu-list "$(seq -s, 6 11)" --bind-to cpu-list:ordered $app
...
mpirun -n 6 --cpu-list "$(seq -s, 42 47)" --bind-to cpu-list:ordered $app
Best
Christoph
- Original Message -
From: "Open MPI Users" <users@lists.open-mpi.org>
To: "Open MPI Users" <users@lists.open-mpi.org>
Cc: "Carlo Nervi" <carlo.ne...@unito.it>
Sent: Thursday, 20 August, 2020 12:17:21
Subject: [OMPI users] OMPI 4.0.4 how to use mpirun properly in numa architecture
Dear OMPI community,
I'm a simple end-user with no particular experience.
I compile quantum chemical programs and use them in parallel.
My system is a 4-socket, 12-core-per-socket Opteron 6168 system, for a total
of 48 cores and 64 GB of RAM. It has 8 NUMA nodes:
openmpi $ hwloc-info
depth 0: