Re: [OMPI devel] How to progress MPI_Recv using custom BTL for NIC under development

2022-08-03 Thread Nathan Hjelm via devel
Kind of sounds to me like they are using the wrong proc when receiving. Here is an example of what a modex receive should look like: https://github.com/open-mpi/ompi/blob/main/opal/mca/btl/ugni/btl_ugni_endpoint.c#L44 -Nathan On Aug 3, 2022, at 11:29 AM, "Jeff Squyres (jsquyres) via devel"

Re: [OMPI devel] C style rules / reformatting

2021-05-18 Thread Nathan Hjelm via devel
It really is a shame that this could not go forward. There are really three end goals in mind: 1) Consistency. We all have different coding styles, and following a common coding style is more and more considered a best practice. The number of projects using clang-format grows continuously. I find

Re: [OMPI devel] Intel OPA and Open MPI

2019-04-24 Thread Nathan Hjelm via devel
for that at this time? >> -----Original Message----- >> From: devel [mailto:devel-boun...@lists.open-mpi.org] On Behalf Of Nathan Hjelm via devel >> Sent: Friday, April 12, 2019 11:19 AM >> To: Open MPI Developers >> Cc: Nathan Hjelm; Castain, Ralph H; Y

Re: [OMPI devel] MPI Reduce Without a Barrier

2019-04-16 Thread Nathan Hjelm via devel
d but it'll use more resources. > > On Mon, Apr 15, 2019 at 9:00 AM Nathan Hjelm via devel > wrote: > If you do that it may run out of resources and deadlock or crash. I recommend > either 1) adding a barrier every 100 iterations, 2) using allreduce, or 3) > enable co

Re: [OMPI devel] MPI Reduce Without a Barrier

2019-04-15 Thread Nathan Hjelm via devel
If you do that it may run out of resources and deadlock or crash. I recommend either 1) adding a barrier every 100 iterations, 2) using allreduce, or 3) enabling coll/sync (which essentially does 1). Honestly, 2 is probably the easiest option, and depending on how large you run it may not be any
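Option 3 above can be enabled entirely from the command line. A minimal sketch, assuming a build that includes the coll/sync component; the process count and program name (`./reduce_loop`) are placeholders, not from the original message:

```shell
# Raise coll/sync's priority so it wraps the underlying collectives, and
# have it inject a barrier every 100 operations -- effectively automating
# the "barrier every 100 iterations" fix without touching the application.
mpirun -n 128 \
    --mca coll_sync_priority 100 \
    --mca coll_sync_barrier_before 100 \
    ./reduce_loop
```

The same parameters can also be set in an MCA params file or via `OMPI_MCA_*` environment variables.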

Re: [OMPI devel] Intel OPA and Open MPI

2019-04-12 Thread Nathan Hjelm via devel
That is accurate. We expect to support OPA with the btl/ofi component. It should give much better performance than osc/pt2pt + mtl/ofi. What would be good for you to do on your end is verify everything works as expected and that the performance is on par for what you expect. -Nathan > On Apr
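The two paths being compared can be selected explicitly at launch time. Both commands are illustrative sketches; which components are actually available depends on how Open MPI was configured, and `./app` is a placeholder:

```shell
# Force the OFI BTL path (used by the ob1 PML):
mpirun --mca pml ob1 --mca btl self,ofi ./app

# ...versus the OFI MTL path (used by the cm PML):
mpirun --mca pml cm --mca mtl ofi ./app
```

Running a benchmark such as osu_latency/osu_bw under each selection is a straightforward way to do the performance comparison Nathan asks for.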

Re: [OMPI devel] IBM CI re-enabled.

2018-10-18 Thread Nathan Hjelm via devel
Appears to be broken. It's failing and simply saying: Testing in progress.. -Nathan On Oct 18, 2018, at 11:34 AM, Geoffrey Paulsen wrote: I've re-enabled IBM CI for PRs. ___ devel mailing list devel@lists.open-mpi.org

Re: [OMPI devel] Bug on branch v2.x since october 3

2018-10-17 Thread Nathan Hjelm via devel
Ah yes, 18f23724a broke things so we had to fix the fix. Didn't apply it to the v2.x branch. Will open a PR to bring it over. -Nathan On Oct 17, 2018, at 11:28 AM, Eric Chamberland wrote: Hi, since commit 18f23724a, our nightly base test is broken on v2.x branch. Strangely, on branch

Re: [OMPI devel] Patcher on MacOS

2018-09-28 Thread Nathan Hjelm via devel
Nope. We just never bothered to disable it on osx. I think Jeff was working on a patch. -Nathan > On Sep 28, 2018, at 3:21 PM, Barrett, Brian via devel > wrote: > > Is there any practical reason to have the memory patcher component enabled > for MacOS? As far as I know, we don’t have any

Re: [OMPI devel] Test mail

2018-08-28 Thread Nathan Hjelm via devel
no Sent from my iPhone > On Aug 27, 2018, at 8:51 AM, Jeff Squyres (jsquyres) via devel > wrote: > > Will this get through? > > -- > Jeff Squyres > jsquy...@cisco.com

Re: [OMPI devel] openmpi 3.1.x examples

2018-07-17 Thread Nathan Hjelm via devel
> On Jul 16, 2018, at 11:18 PM, Marco Atzeri wrote: > >> Am 16.07.2018 um 23:05 schrieb Jeff Squyres (jsquyres) via devel: >>> On Jul 13, 2018, at 4:35 PM, Marco Atzeri wrote: >>> For one. The C++ bindings are no longer part of the standard and they are not built by default in

Re: [OMPI devel] openmpi 3.1.x examples

2018-07-13 Thread Nathan Hjelm via devel
For one. The C++ bindings are no longer part of the standard and they are not built by default in v3.1.x. They will be removed entirely in Open MPI v5.0.0. Not sure why the Fortran one is not building. -Nathan > On Jul 13, 2018, at 2:02 PM, Marco Atzeri wrote: > > Hi, > maybe I am missing
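For the 3.1.x series the deprecated bindings can still be switched back on at configure time. A sketch of the relevant v3.1.x-era configure flags (installation prefix and build details are up to the local setup):

```shell
# Re-enable the deprecated C++ bindings (removed entirely in v5.0.0)
# and make sure the Fortran bindings are requested explicitly:
./configure --enable-mpi-cxx --enable-mpi-fortran
make all install
```

If Fortran still does not build after this, the configure log should show which Fortran compiler (if any) was detected.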

Re: [OMPI devel] Odd warning in OMPI v3.0.x

2018-07-06 Thread Nathan Hjelm via devel
Looks like a bug to me. The second argument should be a value in v3.x.x. -Nathan > On Jul 6, 2018, at 4:00 PM, r...@open-mpi.org wrote: > > I’m seeing this when building the v3.0.x branch: > > runtime/ompi_mpi_init.c:395:49: warning: passing argument 2 of > ‘opal_atomic_cmpset_32’ makes