On Fri, May 5, 2017 at 10:41 AM, Josh Hursey wrote:
> To Dahai's last point - The second MPI_Reduce will fail with this error:
> *** An error occurred in MPI_Reduce
> *** reported by process [2212691969,0]
> *** on communicator MPI_COMM_WORLD
> *** MPI_ERR_ARG: invalid argument of some other kind
The following code causes a memory fault. An initial check shows that it
seems to be caused by *ompi_comm_peer_lookup* with MPI_ANY_SOURCE, which
somehow corrupts the temporary buffer allocated in MPI_Sendrecv_replace.
Any ideas?
Dahai
#include <stdio.h>   /* header names were stripped by the list archive; */
#include <stdlib.h>  /* this is an assumed, typical set for an MPI test */
#include <string.h>
#include <unistd.h>
#include <assert.h>
#include <mpi.h>
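(The rest of Dahai's program is cut off in the archive. A minimal sketch
that exercises the same pattern, assuming a simple ring exchange; the
variable names below are mine, not Dahai's:)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv)
{
    int rank, size, buf;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    buf = rank;
    /* In-place exchange around a ring, receiving from MPI_ANY_SOURCE:
       the combination Dahai reports as faulting. */
    MPI_Sendrecv_replace(&buf, 1, MPI_INT,
                         (rank + 1) % size, 0,  /* dest, sendtag   */
                         MPI_ANY_SOURCE, 0,     /* source, recvtag */
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d now holds %d\n", rank, buf);
    MPI_Finalize();
    return 0;
}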
To Dahai's last point - The second MPI_Reduce will fail with this error:
*** An error occurred in MPI_Reduce
*** reported by process [2212691969,0]
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_ARG: invalid argument of some other kind
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
Hi all -
Howard’s out of office this week and I got swamped with a couple of internal
issues, so we’ve been behind in getting merges pulled into 3.x. I merged a
batch this morning and am going to let Jenkins / MTT catch up with testing.
Assuming testing looks good, I’ll do another batch
Thanks, George. It works! In addition, the following code also causes a
problem. The check for count == 0 should be moved to the beginning of
ompi/mpi/c/reduce.c and ireduce.c, or the issue should be fixed some other way.
Dahai
#include <stdio.h>   /* header names were stripped by the list archive; */
#include <stdlib.h>  /* assuming the usual set for an MPI test program  */
#include <mpi.h>
int main(int argc, char** argv)
{
int
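(The archive truncates the program here. A minimal sketch of a zero-count
reduce, assuming that is the failing case Dahai describes:)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv)
{
    int in = 1, out = 0;
    MPI_Init(&argc, &argv);
    /* count == 0 is a legal no-op per the MPI standard, but it trips
       the argument checks in ompi/mpi/c/reduce.c described above. */
    MPI_Reduce(&in, &out, 0, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}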
Indeed, our current implementation of MPI_Sendrecv_replace prohibits the
use of MPI_ANY_SOURCE. I will work on a patch later today.
George.
On Fri, May 5, 2017 at 11:49 AM, Dahai Guo wrote:
> The following code causes a memory fault. An initial check shows
> that
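(Until such a patch lands, one possible workaround, not from the thread,
is to stage the wildcard receive manually. The helper below is a
hypothetical sketch and assumes a contiguous datatype:)

#include <stdlib.h>
#include <string.h>
#include <mpi.h>

/* Hypothetical stand-in for MPI_Sendrecv_replace with MPI_ANY_SOURCE:
   receive into a temporary buffer, send the original contents, then
   copy the received data back. Contiguous datatypes only. */
static int sendrecv_replace_anysrc(void *buf, int count, MPI_Datatype type,
                                   int dest, int sendtag, int recvtag,
                                   MPI_Comm comm, MPI_Status *status)
{
    int size;
    void *tmp;
    MPI_Request req;

    MPI_Type_size(type, &size);
    tmp = malloc((size_t)count * size);
    if (tmp == NULL) return MPI_ERR_NO_MEM;
    /* Pre-post the wildcard receive so a symmetric exchange cannot
       deadlock, then send and wait. */
    MPI_Irecv(tmp, count, type, MPI_ANY_SOURCE, recvtag, comm, &req);
    MPI_Send(buf, count, type, dest, sendtag, comm);
    MPI_Wait(&req, status);
    memcpy(buf, tmp, (size_t)count * size);
    free(tmp);
    return MPI_SUCCESS;
}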
Hi everyone -
We’ve been having discussions among the release managers about the choice of
naming the branch for Open MPI 3.0.0 as v3.x (as opposed to v3.0.x). Because
the current plan is that each “major” release (in the sense of the three
release points from master per year, not necessarily
Hi,
I am running some tests on a PPC platform that is using LSF and I see the
following problem every time I launch a job that runs on 2 nodes or more:
[crest1:49998] *** Process received signal ***
[crest1:49998] Signal: Segmentation fault (11)
[crest1:49998] Signal code: Address not mapped
+1 Go for it :-)
> On May 5, 2017, at 2:34 PM, Barrett, Brian via devel
> wrote:
>
> To be clear, we’d do the move all at once on Saturday morning. Things that
> would change:
>
> 1) nightly tarballs would be renamed from openmpi-v3.x--.tar.gz
> to
If we rebranch from master for every "major" release, it makes sense to
rename the branch. In the long term renaming seems like the way to go, and
thus the pain of altering everything that depends on the naming will exist
at some point. I'm in favor of doing it asap (but I have no stakes in the
I would suggest not bringing it over in isolation - we planned to do an update
that contains a lot of related changes, including the PMIx update. Probably
need to do that pretty soon given the June target.
> On May 5, 2017, at 3:04 PM, Vallee, Geoffroy R. wrote:
>
> Hi,
>
As a maintainer of non-MTT scripts that need to know the layout of the
directories containing nightly and RC tarballs, I also think that all the
changes should be done soon (and all together, not spread over months).
-Paul
On Fri, May 5, 2017 at 2:16 PM, George Bosilca wrote:
To be clear, we’d do the move all at once on Saturday morning. Things that
would change:
1) nightly tarballs would be renamed from openmpi-v3.x--.tar.gz to
openmpi-v3.0.x--.tar.gz
2) nightly tarballs would build from v3.0.x, not v3.x branch
3) PRs would need to be filed against v3.0.x
4) Both
Successful builds: []
Skipped builds: ['v1.11', 'master']
Failed builds: []
=== Build output ===
Branches: ['v1.11', 'master']
Starting build for v1.11
Build for revision bf116d3 already exists, skipping.
Starting build for master
Build for revision 74bcc8d already exists, skipping.
Your