Re: [OMPI users] MPI_Comm_Spawn failure: All nodes already filled

2019-08-07 Thread Ralph Castain via users
Yeah, we do currently require that to be true. Process mapping is distributed 
across the daemons - i.e., the daemon on each node independently computes the 
map. We have talked about picking up the hostfile on the head node and sending 
out the contents, but haven't implemented that yet.
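A minimal sketch of one way to keep the hostfile identical everywhere, assuming passwordless ssh and that MyNodeFile lives at the same path on the head node and on each node it names (node names and paths are illustrative):

    # push the head node's hostfile to every node listed in it, so each
    # daemon computes its map from the same contents
    for node in $(awk '{print $1}' MyNodeFile); do
        scp MyNodeFile "$node:$PWD/MyNodeFile"
    done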


On Aug 7, 2019, at 2:46 PM, Mccall, Kurt E. (MSFC-EV41) <kurt.e.mcc...@nasa.gov> wrote:

Ralph,

I got MPI_Comm_spawn to work by making sure that the hostfiles on the head
node (where mpiexec is called) and the remote node are identical. I had
assumed that only the one on the head node was read by Open MPI. Is this
correct?

Thanks,
Kurt

[OMPI users] MPI_Comm_Spawn failure: All nodes already filled

2019-08-07 Thread Mccall, Kurt E. (MSFC-EV41) via users
Ralph,

I got MPI_Comm_spawn to work by making sure that the hostfiles on the head
node (where mpiexec is called) and the remote node are identical. I had
assumed that only the one on the head node was read by Open MPI. Is this
correct?

Thanks,
Kurt


Re: [OMPI users] MPI_Comm_Spawn failure: All nodes already filled

2019-08-06 Thread Ralph Castain via users
I'm afraid I cannot replicate this problem on OMPI master, so it could be 
something different about OMPI 4.0.1 or your environment. Can you download and 
test one of the nightly tarballs from the "master" branch and see if it works 
for you?

https://www.open-mpi.org/nightly/master/
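A rough sketch of trying a nightly master tarball, assuming a date-stamped tarball name (the real file name on the download page will differ) and a throwaway install prefix:

    tar xf openmpi-master-<datestamp>.tar.bz2     # placeholder file name
    cd openmpi-master-<datestamp>
    ./configure --prefix=$HOME/ompi-master-test
    make -j 8 all install
    export PATH=$HOME/ompi-master-test/bin:$PATH
    mpiexec --version                             # confirm the test build is picked up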

Ralph



[OMPI users] MPI_Comm_Spawn failure: All nodes already filled

2019-08-06 Thread Mccall, Kurt E. (MSFC-EV41) via users
Hi,

MPI_Comm_spawn() is failing with the error message "All nodes which are
allocated for this job are already filled". I compiled Open MPI 4.0.1 with the
Portland Group C++ compiler, v. 19.5.0, both with and without Torque/Maui
support. I thought that not using Torque/Maui support would give me finer
control over where MPI_Comm_spawn() places the processes, but the failure
message was the same in either case. Perhaps Torque is interfering with
process creation somehow?

For the pared-down test code, I am following the instructions here to make
mpiexec create exactly one manager process on a remote node, and then to force
that manager to spawn one worker process on the same remote node:

https://stackoverflow.com/questions/47743425/controlling-node-mapping-of-mpi-comm-spawn




Here is the full error message. Note the "Max slots: 0" value therein (?):

Data for JOB [39020,1] offset 0 Total slots allocated 22

   JOB MAP   

Data for node: n001    Num slots: 2    Max slots: 2    Num procs: 1
Process OMPI jobid: [39020,1] App: 0 Process rank: 0 Bound: N/A

=
Data for JOB [39020,1] offset 0 Total slots allocated 22

   JOB MAP   

Data for node: n001    Num slots: 2    Max slots: 0    Num procs: 1
Process OMPI jobid: [39020,1] App: 0 Process rank: 0 Bound: socket 
0[core 0[hwt 0]]:[B/././././././././.][./././././././././.]

=
--
All nodes which are allocated for this job are already filled.
--
[n001:08114] *** An error occurred in MPI_Comm_spawn
[n001:08114] *** reported by process [2557214721,0]
[n001:08114] *** on communicator MPI_COMM_SELF
[n001:08114] *** MPI_ERR_SPAWN: could not spawn processes
[n001:08114] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now 
abort,
[n001:08114] ***    and potentially your MPI job)




Here is my mpiexec command:

mpiexec --display-map --v --x DISPLAY -hostfile MyNodeFile --np 1 -map-by 
ppr:1:node SpawnTestManager




Here is my hostfile "MyNodeFile":

n001.cluster.com slots=2 max_slots=2




Here is my SpawnTestManager code:


#include <string>
#include <iostream>
#include <cstdio>

#ifdef SUCCESS
#undef SUCCESS
#endif
#include "/opt/openmpi_pgc_tm/include/mpi.h"

using std::string;
using std::cout;
using std::endl;

int main(int argc, char *argv[])
{
    int rank, world_size;
    char *argv2[2];
    MPI_Comm mpi_comm;
    MPI_Info info;
    char host[MPI_MAX_PROCESSOR_NAME + 1];
    int host_name_len;

    string worker_cmd = "SpawnTestWorker";
    string host_name = "n001.cluster.com";

    argv2[0] = "dummy_arg";
    argv2[1] = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    MPI_Get_processor_name(host, &host_name_len);
    cout << "Host name from MPI_Get_processor_name is " << host << endl;

    // ask the runtime to place the spawned worker on n001, one process per node
    char info_str[64];
    sprintf(info_str, "ppr:%d:node", 1);
    MPI_Info_create(&info);
    MPI_Info_set(info, "host", host_name.c_str());
    MPI_Info_set(info, "map-by", info_str);

    MPI_Comm_spawn(worker_cmd.c_str(), argv2, 1, info, rank, MPI_COMM_SELF,
                   &mpi_comm, MPI_ERRCODES_IGNORE);
    MPI_Comm_set_errhandler(mpi_comm, MPI::ERRORS_THROW_EXCEPTIONS);

    std::cout << "Manager success!" << std::endl;

    MPI_Finalize();
    return 0;
}
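One debugging tweak, not in the original code: switch MPI_COMM_SELF to MPI_ERRORS_RETURN before the spawn so that a failure comes back as per-process error codes instead of aborting the job:

    // sketch only: report spawn failures instead of aborting
    MPI_Comm_set_errhandler(MPI_COMM_SELF, MPI_ERRORS_RETURN);
    int spawn_errs[1];
    int rc = MPI_Comm_spawn(worker_cmd.c_str(), argv2, 1, info, 0, MPI_COMM_SELF,
                            &mpi_comm, spawn_errs);
    if (rc != MPI_SUCCESS || spawn_errs[0] != MPI_SUCCESS)
        cout << "spawn failed: rc=" << rc << ", err[0]=" << spawn_errs[0] << endl;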




Here is my SpawnTestWorker code:


#include "/opt/openmpi_pgc_tm/include/mpi.h"
#include <iostream>

int main(int argc, char *argv[])
{
    int world_size, rank;
    MPI_Comm manager_intercom;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    MPI_Comm_get_parent(&manager_intercom);
    MPI_Comm_set_errhandler(manager_intercom, MPI::ERRORS_THROW_EXCEPTIONS);

    std::cout << "Worker success!" << std::endl;

    MPI_Finalize();
    return 0;
}
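For what it's worth, a small sketch (not part of the original programs) of exercising the intercommunicator: in an intercommunicator, ranks address the remote group, so the single manager is remote rank 0 from the worker's side and the spawned worker is remote rank 0 from the manager's side:

    // manager side, after MPI_Comm_spawn: send a token to the worker (remote rank 0)
    int token = 42;
    MPI_Send(&token, 1, MPI_INT, 0, 0, mpi_comm);

    // worker side, after MPI_Comm_get_parent: receive the token from the manager
    int token = 0;
    MPI_Recv(&token, 1, MPI_INT, 0, 0, manager_intercom, MPI_STATUS_IGNORE);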


My config.log can be found here:  
https://gist.github.com/kmccall882/e26bc2ea58c9328162e8959b614a6fce.js

I've attached the other info requested on the help page, except the output
of "ompi_info -v ompi full --parsable". My version of ompi_info doesn't
accept the "ompi full" arguments, and the "-all" arg doesn't produce much
output.

Thanks for your help,
Kurt









___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users