It means that your underlying network transport supports RDMA.

To be clear, if you built Open MPI with UCX support, and you run on a system 
with UCX-enabled network interfaces (such as IB), Open MPI should automatically 
default to using those UCX interfaces.  This means you'll get all the benefits 
of an HPC-class networking transport (low latency, hardware offload, ... etc.).

For any given send/receive in your MPI application, in the right circumstances 
(message size, memory map, ... etc.), Open MPI will use RDMA to effect a 
network transfer.  There are many different run-time issues that will drive the 
choice of whether any individual network transfer actually uses RDMA or not.
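If you want to confirm which transport Open MPI actually selected, the usual knobs look something like this (component availability and output vary by version, and `./your_app` is a placeholder for your own binary — treat this as a sketch):

```shell
# Was this Open MPI build compiled with UCX support?
ompi_info | grep -i ucx

# Force the UCX PML and make the selection verbose so you can see it chosen
mpirun --mca pml ucx --mca pml_base_verbose 100 -np 2 ./your_app
```

If the PML selection logic falls back to something other than `ucx`, the verbose output will say so, which is usually the first thing to check before worrying about RDMA per-message.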

Jeff Squyres

From: Masoud Hemmatpour <>
Sent: Thursday, April 21, 2022 2:38 AM
To: Open MPI Developers
Cc: Jeff Squyres (jsquyres)
Subject: Re: [OMPI devel] RDMA and OMPI implementation

Thank you very much for your description! Actually, I read this issue on github:

Is OpenMPI supporting RDMA?<>

If I have IB and I install and use UCX, does that guarantee that I am using 
RDMA, or is it still not guaranteed?

Thanks again,

On Thu, Apr 21, 2022 at 12:34 AM Jeff Squyres (jsquyres) via devel 
<<>> wrote:
Let me add a little more color to William's response.  The general theme is: it 
depends on what the underlying network provides.

Some underlying networks natively support one-sided operations like PUT / WRITE 
and GET / READ (e.g., IB/RDMA, RoCE/RDMA, ... etc.).  Some don't (like TCP).

Open MPI will adapt to use whatever transports the underlying network supports.

Additionally, the determination of whether Open MPI uses a "two sided" or "one 
sided" type of network transport operation depends on a bunch of other factors. 
 The most efficient method to get a message from sender to receiver may depend 
on issues such as the size of the message, the memory map of the message, the 
current network resource utilization, the specific MPI operation, ... etc.

Also, be aware that "RDMA" commonly refers to InfiniBand-style one-sided 
operations.  So if you want to use "RDMA", you may need to use an NVIDIA-based 
network (e.g., IB or RoCE).  That's not the only type of network one-sided 
operations available, but it's common.
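For reference, a one-sided transfer at the MPI level looks like the sketch below: the target rank exposes a window and makes no matching receive call. Whether the `MPI_Put` is serviced by true RDMA underneath depends on the transport, as discussed above; this is just a minimal illustration, not something specific to any one network.

```c
/* Minimal one-sided example: rank 0 puts a value into rank 1's window.
 * Compile with: mpicc put_example.c -o put_example */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int buf = -1;                      /* window memory on every rank */
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);             /* open an access epoch */
    if (rank == 0) {
        int value = 42;
        /* One-sided write: the target rank does not call MPI_Recv */
        MPI_Put(&value, 1, MPI_INT, 1 /* target rank */,
                0 /* target displacement */, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);             /* close the epoch; the put is complete */

    if (rank == 1)
        printf("rank 1 now holds %d\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Note that even this program may be implemented internally with send/receive on a network that lacks native one-sided support, which is exactly William's point below.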

Jeff Squyres<>

From: devel 
<<>> on 
behalf of Zhang, William via devel 
Sent: Wednesday, April 20, 2022 6:12 PM
To: Open MPI Developers
Cc: Zhang, William
Subject: Re: [OMPI devel] RDMA and OMPI implementation

Hello Masoud,

Responded inline


From: devel 
<<>> on 
behalf of Masoud Hemmatpour via devel 
Reply-To: Open MPI Developers 
Date: Wednesday, April 20, 2022 at 5:29 AM
To: Open MPI Developers 
Cc: Masoud Hemmatpour <<>>
Subject: [EXTERNAL] [OMPI devel] RDMA and OMPI implementation


Hello Everyone,

Sorry, MPI is quite new to me, in particular its implementation. If you don't 
mind, I have some very basic questions regarding the OMPI implementation.

If I use one-sided MPI operations (Get and Put), am I necessarily using RDMA? – 
It depends, but it’s not guaranteed. For example, Open MPI 4.0.x had an 
osc/pt2pt component that implemented one-sided (osc) operations using 
send/receive. Likewise, with calls through libfabric’s one-sided API, it 
depends on the implementation of the underlying provider.
Is it possible to have one-sided without RDMA? - Yes

In general, are other types of MPI operations, like Send/Receive or collective 
operations, implemented using RDMA? – Not necessarily. For example, the TCP 
transport won’t use RDMA. That said, the underlying communication protocol 
could very well implement send/receive using RDMA.

How can I be sure that I am using RDMA for a specific operation? – I’m not sure 
there’s an easy way to do this; I think you have to have some understanding of 
what communication protocol you’re using and what that protocol is doing.
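One practical approach (my suggestion, not something the thread confirms) is to ask UCX itself to report which protocol it selects per transfer, assuming you are running over the UCX PML; exact variable support depends on your UCX release:

```shell
# Newer UCX releases can print the protocol selected for each message size
UCX_PROTO_INFO=y mpirun -np 2 ./your_app

# Alternatively, raise the UCX log level and look for rendezvous/RDMA lines
UCX_LOG_LEVEL=info mpirun -np 2 ./your_app
```

The output will show eager vs. rendezvous choices and the underlying transport, which lets you see whether a given message size actually went over RDMA.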

Thank you very much in advance for your help!
Best Regards,
Masoud Hemmatpour, PhD
