Re: [OMPI users] Hardware topology influence

2022-09-15 Thread Matias Vara via users
Hello Lucas,

On Tue, Sep 13, 2022 at 2:23 PM Lucas Chaloyard via users <
users@lists.open-mpi.org> wrote:

> Hello,
>
> I'm working as a research intern in a lab where we're studying
> virtualization.
> And I've been working with several benchmarks using OpenMPI 4.1.0 (ASKAP,
> GPAW and Incompact3d from the Phoronix Test Suite).
>
> To briefly explain my experiments, I'm running those benchmarks on several
> virtual machines using different topologies.
> During one experiment I've been comparing those two topologies:
> - Topology1: 96 vCPUs divided into 96 sockets containing 1 thread each
> - Topology2: 96 vCPUs divided into 48 sockets containing 2 threads each
> (using hyperthreading)
>
> For the ASKAP Benchmark:
> - While using Topology2, 2306 processes will be created by the application
> to do its work.
> - While using Topology1, 4612 processes will be created by the application
> to do its work.
> This is also happening when running GPAW and Incompact3d benchmarks.
>
> What I've been wondering (and looking for) is: does OpenMPI take the
> topology into account and reduce the number of processes created to
> execute its work, in order to avoid using hyperthreading?
> Or is it something done by the application itself?
>

I would like to add that the VMM (Virtual Machine Monitor) may never
completely expose the physical topology to a guest. This varies from one
hypervisor to another, so the VM topology may never exactly match the
physical topology; I am not even sure whether you can tweak the VMM so that
the virtual and physical topologies match perfectly. There was an
interesting talk about this at the KVM Forum a few years ago; you can watch
it at https://youtu.be/hHPuEF7qP_Q. That said, I am experimenting with
running MPI applications in a unikernel. The unikernel is deployed in a
single VM with the same number of vCPUs as the host. In this deployment, I
use one thread per vCPU and the communication goes over shared memory,
i.e., virtio. The deployment aims at leveraging the NUMA topology by using
dynamic memory that is allocated per core; in other words, threads allocate
only local memory. For the moment, I have not been able to benchmark this
deployment, but I will do so soon.
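
For what it is worth, a quick way to check what topology the guest actually
sees (a sketch only, assuming the hwloc and util-linux tools are available
in both the guest and on the host; Open MPI itself uses hwloc for topology
discovery) is to compare the output of lstopo or lscpu on both sides:

  # on the physical host
  lstopo --no-io
  lscpu | grep -E 'Socket|Core|Thread'

  # inside the guest (i.e., what the VMM actually exposes)
  lstopo --no-io
  lscpu | grep -E 'Socket|Core|Thread'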

Matias



> I was looking at the source code, trying to find how and when the
> information about the MPI_COMM_WORLD communicator is filled in, to see
> whether the 'num_procs' field depends on the topology, but I haven't had
> any luck so far.
>
> Respectfully, Chaloyard Lucas.
>


Re: [OMPI users] Hardware topology influence

2022-09-14 Thread Jeff Squyres (jsquyres) via users
It was pointed out to me off-list that I should update my worldview on HPC in 
VMs.  :-)

So let me clarify my remarks about VMs: yes, many organizations run bare-metal 
HPC environments.  However, it is no longer unusual to run HPC in VMs.  Using 
modern VM technology, especially when tuned for HPC workloads (e.g., binding 
each vCPU to a physical CPU), VMs can achieve quite low overhead these days.  There 
are many benefits to running virtualized environments, and those are no longer 
off-limits to HPC workloads.  Indeed, VM overheads may be outweighed by other 
benefits of running in VM-based environments.

That being said, I'm not encouraging you to run 96 VMs on a single host, for 
example.  I have not done any VM testing myself, but I imagine that the same 
adage that applies to HPC bare metal environments also applies to HPC VM 
environments: let Open MPI use shared memory to communicate (vs. a network) 
whenever possible.  In your environment, this likely translates to having a 
single VM per host (encompassing all the physical CPUs that you want to use on 
that host) and launching N_x MPI processes in each VM (where N_x is the number 
of vCPU/physical CPUs available in VM x).  This will allow the MPI processes to 
use shared memory for on-node communication.
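
As a rough sketch of that layout (host names, slot counts, and the
application name below are placeholders, not values from this thread):

  # hostfile: one line per VM, one VM per physical host,
  # slots = number of vCPUs to use in that VM
  vm-node0 slots=96
  vm-node1 slots=96

  # 192 ranks total; ranks within the same VM use shared memory,
  # ranks on different VMs go over the network
  mpirun --hostfile hostfile -np 192 ./my_mpi_app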

--
Jeff Squyres
jsquy...@cisco.com

From: Jeff Squyres (jsquyres) 
Sent: Tuesday, September 13, 2022 10:08 AM
To: Open MPI Users 
Cc: Gilles Gouaillardet 
Subject: Re: [OMPI users] Hardware topology influence

Let me add a little more color on what Gilles stated.

First, you should probably upgrade to the latest v4.1.x release: v4.1.4.  It 
has a bunch of bug fixes compared to v4.1.0.

Second, you should know that it is relatively uncommon to run HPC/MPI apps 
inside VMs because the virtualization infrastructure will -- by definition -- 
decrease your overall performance.  This is usually counter to the goal of 
writing/running HPC applications.  If you do run HPC/MPI applications in VMs, 
it is strongly recommended that you bind the cores in the VM to physical cores 
to attempt to minimize the performance loss.

By default, Open MPI maps MPI processes by core when deciding how many 
processes to place on each machine (and also deciding how to bind them).  For 
example, Open MPI looks at a machine and sees that it has N cores, and (by 
default) maps N MPI processes to that machine.  You can change Open MPI's 
defaults to map by hardware thread ("Hyperthread" in Intel parlance) instead 
of by core, but conventional wisdom is that math-heavy processes don't 
perform well with the limited resources of a single hardware thread, and 
benefit from the full resources of the core (this depends on your specific app, 
of course -- YMMV).  Intel's and AMD's hardware threads have gotten better over 
the years, but I think they still represent a division of resources in the 
core, and will likely still be performance-detrimental to at least some classes 
of HPC applications.  It's a surprisingly complicated topic.

In the v4.x series, note that you can use "mpirun --report-bindings ..." to see 
exactly where Open MPI thinks it has bound each process.  Note that this 
binding occurs before each MPI process starts; it's nothing that the 
application itself needs to do.

--
Jeff Squyres
jsquy...@cisco.com

From: users  on behalf of Gilles Gouaillardet 
via users 
Sent: Tuesday, September 13, 2022 9:07 AM
To: Open MPI Users 
Cc: Gilles Gouaillardet 
Subject: Re: [OMPI users] Hardware topology influence

Lucas,

the number of MPI tasks started by mpirun is either
 - explicitly passed via the command line (e.g. mpirun -np 2306 ...)
 - equal to the number of available slots, and this value is either
 a) retrieved from the resource manager (such as a SLURM allocation)
 b) explicitly set in a machine file (e.g. mpirun -machinefile ... )
 or on the command line (e.g. mpirun --host host0:96,host1:96 ...)
 c) if none of the above is set, the number of detected cores on the system

Cheers,

Gilles

On Tue, Sep 13, 2022 at 9:23 PM Lucas Chaloyard via users 
<users@lists.open-mpi.org> wrote:
Hello,

I'm working as a research intern in a lab where we're studying virtualization.
And I've been working with several benchmarks using OpenMPI 4.1.0 (ASKAP, GPAW 
and Incompact3d from the Phoronix Test Suite).

To briefly explain my experiments, I'm running those benchmarks on several 
virtual machines using different topologies.
During one experiment I've been comparing those two topologies:
- Topology1: 96 vCPUs divided into 96 sockets containing 1 thread each
- Topology2: 96 vCPUs divided into 48 sockets containing 2 threads each 
(using hyperthreading)

For the ASKAP Benchmark:
- While using Topology2, 2306 processes will be created by the application to 
do its work.
- While using Topology1, 4612 processes will be created by the application to 
do its work.
This is also happening when running GPAW and Incompact3d benchmarks.

Re: [OMPI users] Hardware topology influence

2022-09-13 Thread Jeff Squyres (jsquyres) via users
Let me add a little more color on what Gilles stated.

First, you should probably upgrade to the latest v4.1.x release: v4.1.4.  It 
has a bunch of bug fixes compared to v4.1.0.

Second, you should know that it is relatively uncommon to run HPC/MPI apps 
inside VMs because the virtualization infrastructure will -- by definition -- 
decrease your overall performance.  This is usually counter to the goal of 
writing/running HPC applications.  If you do run HPC/MPI applications in VMs, 
it is strongly recommended that you bind the cores in the VM to physical cores 
to attempt to minimize the performance loss.

By default, Open MPI maps MPI processes by core when deciding how many 
processes to place on each machine (and also deciding how to bind them).  For 
example, Open MPI looks at a machine and sees that it has N cores, and (by 
default) maps N MPI processes to that machine.  You can change Open MPI's 
defaults to map by hardware thread ("Hyperthread" in Intel parlance) instead 
of by core, but conventional wisdom is that math-heavy processes don't 
perform well with the limited resources of a single hardware thread, and 
benefit from the full resources of the core (this depends on your specific app, 
of course -- YMMV).  Intel's and AMD's hardware threads have gotten better over 
the years, but I think they still represent a division of resources in the 
core, and will likely still be performance-detrimental to at least some classes 
of HPC applications.  It's a surprisingly complicated topic.
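
For example (just a sketch; the executable name is a placeholder), the
difference between the default per-core mapping and per-hardware-thread
mapping with the v4.1.x mpirun looks roughly like this:

  # default: one MPI process per core
  mpirun ./my_mpi_app

  # treat each hardware thread as a slot instead of each core
  mpirun --use-hwthread-cpus ./my_mpi_app

  # or change the mapping/binding policy explicitly
  mpirun --map-by hwthread --bind-to hwthread ./my_mpi_app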

In the v4.x series, note that you can use "mpirun --report-bindings ..." to see 
exactly where Open MPI thinks it has bound each process.  Note that this 
binding occurs before each MPI process starts; it's nothing that the 
application itself needs to do.
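
For instance (again just a sketch, with a placeholder application name):

  # print how each MPI process was mapped and bound before it starts
  mpirun --report-bindings -np 4 ./my_mpi_app

One binding line is printed per rank, so you can quickly confirm whether the
processes ended up on distinct cores or on hardware threads of the same core.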

--
Jeff Squyres
jsquy...@cisco.com

From: users  on behalf of Gilles Gouaillardet 
via users 
Sent: Tuesday, September 13, 2022 9:07 AM
To: Open MPI Users 
Cc: Gilles Gouaillardet 
Subject: Re: [OMPI users] Hardware topology influence

Lucas,

the number of MPI tasks started by mpirun is either
 - explicitly passed via the command line (e.g. mpirun -np 2306 ...)
 - equal to the number of available slots, and this value is either
 a) retrieved from the resource manager (such as a SLURM allocation)
 b) explicitly set in a machine file (e.g. mpirun -machinefile ... )
 or on the command line (e.g. mpirun --host host0:96,host1:96 ...)
 c) if none of the above is set, the number of detected cores on the system

Cheers,

Gilles

On Tue, Sep 13, 2022 at 9:23 PM Lucas Chaloyard via users 
<users@lists.open-mpi.org> wrote:
Hello,

I'm working as a research intern in a lab where we're studying virtualization.
And I've been working with several benchmarks using OpenMPI 4.1.0 (ASKAP, GPAW 
and Incompact3d from the Phoronix Test Suite).

To briefly explain my experiments, I'm running those benchmarks on several 
virtual machines using different topologies.
During one experiment I've been comparing those two topologies:
- Topology1: 96 vCPUs divided into 96 sockets containing 1 thread each
- Topology2: 96 vCPUs divided into 48 sockets containing 2 threads each 
(using hyperthreading)

For the ASKAP Benchmark:
- While using Topology2, 2306 processes will be created by the application to 
do its work.
- While using Topology1, 4612 processes will be created by the application to 
do its work.
This is also happening when running GPAW and Incompact3d benchmarks.

What I've been wondering (and looking for) is: does OpenMPI take the topology 
into account and reduce the number of processes created to execute its work, 
in order to avoid using hyperthreading?
Or is it something done by the application itself?

I was looking at the source code, trying to find how and when the information 
about the MPI_COMM_WORLD communicator is filled in, to see whether the 
'num_procs' field depends on the topology, but I haven't had any luck so far.

Respectfully, Chaloyard Lucas.


Re: [OMPI users] Hardware topology influence

2022-09-13 Thread Gilles Gouaillardet via users
Lucas,

the number of MPI tasks started by mpirun is either (each case is sketched
below the list):
 - explicitly passed via the command line (e.g. mpirun -np 2306 ...)
 - equal to the number of available slots, and this value is either
 a) retrieved from the resource manager (such as a SLURM allocation)
 b) explicitly set in a machine file (e.g. mpirun -machinefile ... )
 or on the command line (e.g. mpirun --host host0:96,host1:96 ...)
 c) if none of the above is set, the number of detected cores on the
system
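
A sketch of each case (host names, slot counts, and the application name are
placeholders; case a assumes Open MPI was built with SLURM support):

  # explicit count on the command line
  mpirun -np 2306 ./my_mpi_app

  # a) slots taken from a resource manager allocation (e.g. SLURM)
  salloc -n 96 mpirun ./my_mpi_app

  # b) slots set in a machine file, or on the command line
  printf 'host0 slots=96\nhost1 slots=96\n' > my_hostfile
  mpirun -machinefile my_hostfile ./my_mpi_app
  mpirun --host host0:96,host1:96 ./my_mpi_app

  # c) nothing specified: one slot per core detected on the system
  mpirun ./my_mpi_app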

Cheers,

Gilles

On Tue, Sep 13, 2022 at 9:23 PM Lucas Chaloyard via users <
users@lists.open-mpi.org> wrote:

> Hello,
>
> I'm working as a research intern in a lab where we're studying
> virtualization.
> And I've been working with several benchmarks using OpenMPI 4.1.0 (ASKAP,
> GPAW and Incompact3d from the Phoronix Test Suite).
>
> To briefly explain my experiments, I'm running those benchmarks on several
> virtual machines using different topologies.
> During one experiment I've been comparing those two topologies:
> - Topology1: 96 vCPUs divided into 96 sockets containing 1 thread each
> - Topology2: 96 vCPUs divided into 48 sockets containing 2 threads each
> (using hyperthreading)
>
> For the ASKAP Benchmark:
> - While using Topology2, 2306 processes will be created by the application
> to do its work.
> - While using Topology1, 4612 processes will be created by the application
> to do its work.
> This is also happening when running GPAW and Incompact3d benchmarks.
>
> What I've been wondering (and looking for) is: does OpenMPI take the
> topology into account and reduce the number of processes created to
> execute its work, in order to avoid using hyperthreading?
> Or is it something done by the application itself?
>
> I was looking at the source code, trying to find how and when the
> information about the MPI_COMM_WORLD communicator is filled in, to see
> whether the 'num_procs' field depends on the topology, but I haven't had
> any luck so far.
>
> Respectfully, Chaloyard Lucas.
>


[OMPI users] Hardware topology influence

2022-09-13 Thread Lucas Chaloyard via users
Hello, 

I'm working as a research intern in a lab where we're studying virtualization. 
And I've been working with several benchmarks using OpenMPI 4.1.0 (ASKAP, GPAW 
and Incompact3d from the Phoronix Test Suite).

To briefly explain my experiments, I'm running those benchmarks on several 
virtual machines using different topologies. 
During one experiment I've been comparing those two topologies:
- Topology1: 96 vCPUs divided into 96 sockets containing 1 thread each
- Topology2: 96 vCPUs divided into 48 sockets containing 2 threads each 
(using hyperthreading)

For the ASKAP Benchmark:
- While using Topology2, 2306 processes will be created by the application to 
do its work. 
- While using Topology1, 4612 processes will be created by the application to 
do its work. 
This is also happening when running GPAW and Incompact3d benchmarks. 

What I've been wondering (and looking for) is: does OpenMPI take the topology 
into account and reduce the number of processes created to execute its work, 
in order to avoid using hyperthreading?
Or is it something done by the application itself?

I was looking at the source code, trying to find how and when the information 
about the MPI_COMM_WORLD communicator is filled in, to see whether the 
'num_procs' field depends on the topology, but I haven't had any luck so far.

Respectfully, Chaloyard Lucas.