>> On Friday, April 12, 2019, at 11:19 AM, Nathan Hjelm via devel wrote:
On Mon, Apr 15, 2019 at 9:00 AM, Nathan Hjelm via devel wrote:
If you do that it may run out of resources and deadlock or crash. I recommend
either 1) adding a barrier every 100 iterations, 2) using allreduce, or 3)
enabling coll/sync (which essentially does 1). Honestly, 2 is probably the
easiest option and, depending on how large you run, may not be any slower,
though it'll use more resources.
That is accurate. We expect to support OPA with the btl/ofi component, which
should give much better performance than osc/pt2pt + mtl/ofi. What would be
good on your end is to verify that everything works as expected and that the
performance is on par with what you expect.
Appears to be broken. It's failing and simply saying:
Testing in progress..
On Oct 18, 2018, at 11:34 AM, Geoffrey Paulsen wrote:
I've re-enabled IBM CI for PRs.
Ah yes, 18f23724a broke things so we had to fix the fix. Didn't apply it to the
v2.x branch. Will open a PR to bring it over.
On Oct 17, 2018, at 11:28 AM, Eric Chamberland wrote:
Since commit 18f23724a, our nightly base test is broken on the v2.x branch.
Strangely, on branch
Nope. We just never bothered to disable it on macOS. I think Jeff was working on
> On Sep 28, 2018, at 3:21 PM, Barrett, Brian via devel wrote:
> Is there any practical reason to have the memory patcher component enabled
> for macOS? As far as I know, we don’t have any
> On Aug 27, 2018, at 8:51 AM, Jeff Squyres (jsquyres) via devel wrote:
> Will this get through?
> Jeff Squyres
> On Jul 16, 2018, at 11:18 PM, Marco Atzeri wrote:
>> On 16.07.2018 at 23:05, Jeff Squyres (jsquyres) via devel wrote:
>>> On Jul 13, 2018, at 4:35 PM, Marco Atzeri wrote:
For one, the C++ bindings are no longer part of the standard and they are not
built by default in v3.1.x. They will be removed entirely in Open MPI v5.0.0.
Not sure why the Fortran one is not building.
> On Jul 13, 2018, at 2:02 PM, Marco Atzeri wrote:
> Maybe I am missing
Looks like a bug to me. The second argument should be a value in v3.x.x.
> On Jul 6, 2018, at 4:00 PM, r...@open-mpi.org wrote:
> I’m seeing this when building the v3.0.x branch:
> runtime/ompi_mpi_init.c:395:49: warning: passing argument 2 of
> ‘opal_atomic_cmpset_32’ makes