Thanks again for your answer, and I hope I'm not bothering you with my questions! If I may, I'd like to ask one last question here: how can I see a complete list of such factors (*message size, memory map, etc.*)? Is there any documentation I should read, or should I look at the code? If the latter, could you please give me a starting point? In the case of UCX and UCX-enabled network interfaces (such as IB), is it UCX's decision or Open MPI's decision whether to use RDMA?
Sorry for my long question, and thank you again!

On Thu, Apr 21, 2022 at 1:09 PM Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:

> It means that your underlying network transport supports RDMA.
>
> To be clear, if you built Open MPI with UCX support, and you run on a
> system with UCX-enabled network interfaces (such as IB), Open MPI should
> automatically default to using those UCX interfaces. This means you'll get
> all the benefits of an HPC-class networking transport (low latency,
> hardware offload, ... etc.).
>
> For any given send/receive in your MPI application, in the right
> circumstances (message size, memory map, ... etc.), Open MPI will use RDMA
> to effect a network transfer. There are many different run-time issues
> that will drive the choice of whether any individual network transfer
> actually uses RDMA or not.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ________________________________________
> From: Masoud Hemmatpour <mashe...@gmail.com>
> Sent: Thursday, April 21, 2022 2:38 AM
> To: Open MPI Developers
> Cc: Jeff Squyres (jsquyres)
> Subject: Re: [OMPI devel] RDMA and OMPI implementation
>
> Thank you very much for your description! Actually, I read this issue on
> GitHub:
>
> Is OpenMPI supporting RDMA? <https://github.com/open-mpi/ompi/issues/5789>
>
> If I have IB and I install and use UCX, does this guarantee that I am
> using RDMA, or is it still not guaranteed?
>
> Thanks again,
>
> On Thu, Apr 21, 2022 at 12:34 AM Jeff Squyres (jsquyres) via devel <
> devel@lists.open-mpi.org> wrote:
> Let me add a little more color to William's response. The general theme
> is: it depends on what the underlying network provides.
>
> Some underlying networks natively support one-sided operations like PUT /
> WRITE and GET / READ (e.g., IB/RDMA, RoCE/RDMA, ... etc.). Some don't
> (like TCP).
>
> Open MPI will adapt to use whatever transports the underlying network
> supports.
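On the question of where to see these tunables: one starting point is Open MPI's `ompi_info` and UCX's `ucx_info` tools, which list the run-time parameters (protocol thresholds, transport selection, etc.) that drive these decisions. A sketch of commands — exact parameter names vary across Open MPI and UCX versions, and `./my_app` is a hypothetical application name:

```shell
# List every MCA parameter Open MPI knows about, at maximum verbosity;
# message-size thresholds and protocol switches show up here:
ompi_info --all --level 9

# Show the UCX transports/devices available on this node; IB transports
# such as rc/ud/dc imply RDMA-capable paths, tcp does not:
ucx_info -d

# Explicitly select the UCX PML so point-to-point and one-sided traffic
# goes through UCX rather than another component:
mpirun --mca pml ucx -np 2 ./my_app

# UCX also has its own tunables, e.g. the eager-to-rendezvous (RDMA)
# switchover threshold:
UCX_RNDV_THRESH=8192 mpirun --mca pml ucx -np 2 ./my_app
```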
> Additionally, the determination of whether Open MPI uses a "two sided" or
> "one sided" type of network transport operation depends on a bunch of other
> factors. The most efficient method to get a message from sender to receiver
> may depend on issues such as the size of the message, the memory map of the
> message, the current network resource utilization, the specific MPI
> operation, ... etc.
>
> Also, be aware that "RDMA" commonly refers to InfiniBand-style one-sided
> operations. So if you want to use "RDMA", you may need to use an
> NVIDIA-based network (e.g., IB or RoCE). That's not the only type of
> network one-sided operations available, but it's common.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ________________________________________
> From: devel <devel-boun...@lists.open-mpi.org> on behalf of Zhang,
> William via devel <devel@lists.open-mpi.org>
> Sent: Wednesday, April 20, 2022 6:12 PM
> To: Open MPI Developers
> Cc: Zhang, William
> Subject: Re: [OMPI devel] RDMA and OMPI implementation
>
> Hello Masoud,
>
> Responded inline.
>
> Thanks,
> William
>
> From: devel <devel-boun...@lists.open-mpi.org> on behalf of Masoud
> Hemmatpour via devel <devel@lists.open-mpi.org>
> Reply-To: Open MPI Developers <devel@lists.open-mpi.org>
> Date: Wednesday, April 20, 2022 at 5:29 AM
> To: Open MPI Developers <devel@lists.open-mpi.org>
> Cc: Masoud Hemmatpour <mashe...@gmail.com>
> Subject: [EXTERNAL] [OMPI devel] RDMA and OMPI implementation
> Hello Everyone,
>
> Sorry, MPI is quite new for me, in particular the implementation. If you
> don't mind, I have some very basic questions regarding the OMPI
> implementation.
>
> If I use one-sided MPI operations (Get and Put), am I necessarily using
> RDMA? – It depends, but it's not guaranteed. For example, in Open MPI
> 4.0.x, there was the osc/pt2pt component that implemented osc operations
> using send/receive. Or, for example, with calls to libfabric's osc API, it
> depends on the implementation of the underlying provider.
>
> Is it possible to have one-sided without RDMA? – Yes.
>
> In general, are other types of MPI operations, like send/receive or
> collective operations, implemented using RDMA? – Not necessarily. For
> example, using TCP won't use RDMA. The underlying communication protocol
> could very well implement send/receive using RDMA, though.
>
> How can I be sure that I am using RDMA for a specific operation? – I'm not
> sure there's an easy way to do this; I think you have to have some
> understanding of what communication protocol you're using and what that
> protocol is doing.
>
> Thank you very much in advance for your help!
>
> Best Regards,
>
> --
> Best Regards,
> Masoud Hemmatpour, PhD

--
Best Regards,
Masoud Hemmatpour, PhD
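To make the one-sided discussion above concrete, here is a minimal MPI one-sided sketch: rank 0 writes into rank 1's window with MPI_Put while rank 1 makes no matching call. As the thread explains, whether this Put actually travels as a hardware RDMA write depends on the transport Open MPI selects (e.g., UCX over IB vs. TCP) — the program itself cannot tell. It requires an MPI installation: compile with `mpicc` and run under `mpirun -np 2`.

```c
/* One-sided example: rank 0 puts a value into rank 1's window.
 * The MPI semantics are identical whether or not the underlying
 * transport uses RDMA. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int buf = -1;                 /* window memory, exposed on every rank */
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);        /* open the access epoch */
    if (rank == 0) {
        int value = 42;
        /* One-sided write into rank 1's window at displacement 0;
         * rank 1 issues no receive. */
        MPI_Put(&value, 1, MPI_INT, 1 /* target rank */,
                0 /* target disp */, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);        /* close the epoch; the transfer is
                                     complete on both sides afterwards */

    if (rank == 1)
        printf("rank 1's window now holds %d\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Note that the fence calls are what give the transfer well-defined completion semantics; with a transport that supports it, the Put between the fences is exactly the kind of operation Open MPI can map to an RDMA write.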