Stupid answer from me. If latency/bandwidth numbers are bad then check that
you are really running over the interface that you think you should be. You
could be falling back to running over Ethernet.
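As a sketch of that check (flag and variable names assume a UCX-enabled Open MPI build, and ./osu_latency stands in for whatever benchmark you are running):

```shell
# Print BTL selection details while the job runs, to see which
# transport Open MPI actually picked:
mpirun --mca btl_base_verbose 100 -np 2 ./osu_latency

# Rule out the TCP BTL entirely -- the job will then fail instead of
# silently falling back to Ethernet:
mpirun --mca btl ^tcp -np 2 ./osu_latency

# With UCX, list the transports/devices it can see, and pin one explicitly:
ucx_info -d
UCX_NET_DEVICES=mlx5_0:1 mpirun -np 2 ./osu_latency
```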
On Mon, 28 Feb 2022 at 20:10, Angel de Vicente via users <
users@lists.open-mpi.org> wrote:
Hello,
Usually you would rather allocate and bind at the same time so that the
memory doesn't need to be migrated when bound. However, if you do not touch
the memory after allocation, pages are not actually physically allocated,
hence there's nothing to migrate. Might work but keep this in mind.
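A minimal sketch of that allocate-and-bind-in-one-step approach, assuming hwloc 2.x (compile with -lhwloc; NUMA node 0 is just a placeholder index):

```c
#include <hwloc.h>

int main(void) {
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    /* First NUMA node, purely for illustration. */
    hwloc_obj_t node = hwloc_get_obj_by_type(topo, HWLOC_OBJ_NUMANODE, 0);
    size_t len = 1 << 20;

    /* Allocate and bind in one call: pages land on this node when first
     * touched, so nothing ever needs to be migrated. */
    void *buf = hwloc_alloc_membind(topo, len, node->nodeset,
                                    HWLOC_MEMBIND_BIND,
                                    HWLOC_MEMBIND_BYNODESET);
    if (buf)
        hwloc_free(topo, buf, len);
    hwloc_topology_destroy(topo);
    return 0;
}
```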
Dear list,
I have a program that uses Open MPI + multithreading, and I want the
freedom to decide on which hardware cores my threads should run. Using
hwloc_set_cpubind() that already works, so now I also want to bind memory
to the hardware cores. But I just can't get it to work.
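For binding memory that is already allocated, one sketch alongside hwloc_set_cpubind() would be hwloc_set_area_membind(), again assuming hwloc 2.x (compile with -lhwloc; core index 0 is a placeholder):

```c
#include <hwloc.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    /* The core this thread should run on (core 0 here for illustration). */
    hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE, 0);
    if (core) {
        /* Bind the calling thread to that core. */
        hwloc_set_cpubind(topo, core->cpuset, HWLOC_CPUBIND_THREAD);

        /* Page-aligned allocation, then bind the area to the core's
         * local NUMA node(s). */
        size_t len = 1 << 20;
        void *buf = NULL;
        if (posix_memalign(&buf, (size_t)sysconf(_SC_PAGESIZE), len) == 0) {
            hwloc_set_area_membind(topo, buf, len, core->nodeset,
                                   HWLOC_MEMBIND_BIND,
                                   HWLOC_MEMBIND_BYNODESET);
            /* Touch the memory so pages are physically allocated there. */
            memset(buf, 0, len);
            free(buf);
        }
    }
    hwloc_topology_destroy(topo);
    return 0;
}
```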
Basically,
On 01/03/2022 at 17:34, Mike wrote:
Hello,
Usually you would rather allocate and bind at the same time so
that the memory doesn't need to be migrated when bound. However,
if you do not touch the memory after allocation, pages are not
actually physically allocated, hence there's nothing to migrate.
These are very, very old versions of UCX and HCOLL installed in your
environment. Also, MXM was deprecated years ago in favor of UCX. What
version of MOFED is installed (run ofed_info -s)? What HCA generation is
present (run ibstat)?
Josh
On Tue, Mar 1, 2022 at 6:42 AM Angel de Vicente via users wrote:
On 01/03/2022 at 15:17, Mike wrote:
Dear list,
I have a program that uses Open MPI + multithreading, and I want the
freedom to decide on which hardware cores my threads should run. By
using hwloc_set_cpubind() that already works, so now I also want to
bind memory to the hardware cores.
Hello,
John Hearns via users writes:
> Stupid answer from me. If latency/bandwidth numbers are bad then check
> that you are really running over the interface that you think you
> should be. You could be falling back to running over Ethernet.
I'm quite out of my depth here, so all answers are