I'm not sure we support what you want to do.

You can direct mpiexec to use a specified script to launch its daemons on 
remote nodes. The daemons will need to connect back via TCP to mpiexec. The 
daemons are responsible for fork/exec'ing the local MPI application procs on 
each node. Those procs connect back to their daemon via TCP, but only locally 
on the node.
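
As a minimal sketch of how you would point mpiexec at such a script, assuming the rsh/ssh launcher is in use (the MCA parameter name can differ between releases, so check ompi_info on your install; the script path here is hypothetical):

    # Tell the rsh launcher to use a site-specific script instead of ssh/rsh
    # to start the Open MPI daemon (orted) on each remote node.
    mpiexec --mca plm_rsh_agent /opt/site/bin/remote_launch.sh \
            --host node01,node02 -np 8 ./my_mpi_app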

mpiexec cannot launch application procs directly on another node. It needs the 
daemon to support it.
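
For illustration, the launch agent itself can be quite small. Under the rsh launcher it is typically invoked with the target hostname as its first argument, followed by the daemon command line to run there; verify the exact arguments your version passes (e.g. with --mca plm_base_verbose turned up). A hypothetical remote_launch.sh along those lines:

    #!/bin/sh
    # remote_launch.sh (hypothetical) - custom launch agent for mpiexec.
    # $1   = hostname of the remote node chosen by mpiexec
    # rest = the orted daemon command line to execute on that node
    host="$1"
    shift
    # Substitute whatever site mechanism reaches your nodes; plain ssh is
    # only a stand-in here.
    exec ssh "$host" "$@"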

If that fits into your work environment, then you should be okay.

On Apr 20, 2021, at 12:22 AM, hihijo07 via users <users@lists.open-mpi.org> wrote:


Hello everyone,

At my workplace, we use an in-house tool, similar to a scheduler, to launch users' jobs.

Recently we have run into a new requirement: our users need to launch MPI jobs, as additional use cases from new users are coming into our computing environment.

The solution we are looking for is this: mpiexec launches a script on a master host, and then launches the worker processes on some of the other hosts through a script or binary that we made internally. That executable would use a socket to contact a process on each remote host.

In the case of LSF, mpiexec can launch processes via blaunch if we build Open MPI with LSF support, so I think we need that sort of behavior.

Can I launch processes on remote hosts from a master host by calling mpiexec 
with our executable?

Thanks.


