On Tue, 2016-10-11 at 10:10 +0900, Gilles Gouaillardet wrote:
> George,
> At first, I recommend you try the latest v1.10 (1.10.4) or even
> 2.0.1.
>
Dear Gilles et al,
Can this coexist with the version of Open MPI on my Red Hat
system (1.10.2)? Is this a matter of downloading the source and running
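A newer Open MPI built from source can generally coexist with the distribution package as long as it is installed under its own prefix. A minimal sketch under that assumption (the version matches the one suggested above; the paths are placeholders, not a recommendation):

```shell
# Build Open MPI 1.10.4 from source into a private prefix so the
# system package (1.10.2) is left untouched.
tar xf openmpi-1.10.4.tar.bz2
cd openmpi-1.10.4
./configure --prefix=$HOME/opt/openmpi-1.10.4
make -j4 && make install

# Put the new build first on the search paths for the current shell only,
# so mpicc/mpiexec resolve to 1.10.4 without disturbing the system install.
export PATH=$HOME/opt/openmpi-1.10.4/bin:$PATH
export LD_LIBRARY_PATH=$HOME/opt/openmpi-1.10.4/lib:$LD_LIBRARY_PATH
```

Remember to use the same prefixed mpicc and mpiexec consistently on every node, otherwise the two installations can get mixed at run time.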
Hello everyone,
I am attempting to run a single program on 32 cores split across 4
computers (so each computer has 8 cores). I am attempting to use MPICH for
this. I am currently just testing on 2 computers; I have the program
installed on both, as well as MPICH installed on both. I have created a hosts file
On 14.10.2016 at 23:10, Mahdi, Sam wrote:
> Hello everyone,
>
> I am attempting to run a single program on 32 cores split across 4 computers
> (so each computer has 8 cores). I am attempting to use MPICH for this. I
> am currently just testing on 2 computers; I have the program installed on
On Fri, 2016-10-14 at 14:10 -0700, Mahdi, Sam wrote:
>
>
> along with other errors such as "unable to parse hostfile", "match handler",
> etc. I assume this is all due to it being unable to read the host
> file. Is there any specific place I should save my hosts file? I have
> it saved directly on my Desktop.
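With MPICH's default Hydra launcher, the hosts file can live anywhere; it is just a plain text file whose path is passed explicitly to mpiexec with -f, so the Desktop is fine as long as the path given matches. A minimal sketch (the hostnames node1/node2 and the program name are placeholders):

```shell
# hosts: one machine per line, with the number of processes each
# should run after the colon.
cat > hosts <<'EOF'
node1:8
node2:8
EOF

# Launch 16 processes spread across the two machines; mpiexec reads
# the file named by -f, wherever it is saved.
mpiexec -f hosts -n 16 ./my_program
```

Parse errors usually mean the file contains stray characters (for example Windows line endings) or that the path after -f does not point at the file actually created.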
Folks,
I have the following code setup. The sensorList is an array of
ints of size 1. The value it contains is 1. My comm world size is 5. The call
to MPI_Barrier fails every time with the error "invalid communicator". This code is
pretty much
Rick,
Let's assume that you have started 2 processes, and that your sensorList is
{1}. The worldgroup will then be {P0, P1}, which, trimmed via the sensorList,
will give the sensorgroup MPI_GROUP_EMPTY on P0 and the sensorgroup {P1}
on P1. As a result, on P0 you will create an MPI_COMM_NULL communicator, and
calling MPI_Barrier on MPI_COMM_NULL is what produces the "invalid
communicator" error.
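The pattern described above can be sketched as follows. This is an assumption about Rick's setup reconstructed from the thread (the original code was not included); the names sensorList and sensorgroup come from the discussion, the rest is illustrative. The key point is the guard: every rank not listed in sensorList receives MPI_COMM_NULL from MPI_Comm_create and must not call MPI_Barrier on it.

```c
/* Sketch: build a sub-communicator from a rank list and guard
 * against MPI_COMM_NULL on the ranks outside the group. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int sensorList[1] = { 1 };   /* ranks that belong to the sensor group */

    MPI_Group worldgroup, sensorgroup;
    MPI_Comm  sensorcomm;

    MPI_Comm_group(MPI_COMM_WORLD, &worldgroup);
    MPI_Group_incl(worldgroup, 1, sensorList, &sensorgroup);
    MPI_Comm_create(MPI_COMM_WORLD, sensorgroup, &sensorcomm);

    /* On every rank not listed in sensorList, sensorcomm is
     * MPI_COMM_NULL; calling MPI_Barrier on it is invalid. */
    if (sensorcomm != MPI_COMM_NULL) {
        MPI_Barrier(sensorcomm);
        printf("rank %d passed the sensor barrier\n", rank);
        MPI_Comm_free(&sensorcomm);
    }

    MPI_Group_free(&sensorgroup);
    MPI_Group_free(&worldgroup);
    MPI_Finalize();
    return 0;
}
```

An alternative is MPI_Comm_split with a color per rank, which avoids the group bookkeeping but has the same MPI_COMM_NULL behavior for ranks passed MPI_UNDEFINED.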