On Sep 18, 2007, at 1:17 AM, Josh Hursey wrote:

What version of Open MPI are you using?

This feature is not in any release at the moment, but is scheduled
for v1.3. It is available in the subversion development trunk which

Aha - thanks. I had only looked at the latest release, 1.2.3. - Reuti

can be downloaded either from subversion or from a nightly snapshot
tarball on the Open MPI website:
http://www.open-mpi.org/svn/
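
For illustration, a rough sketch of building a nightly snapshot tarball (the actual tarball name and download location are listed on the Open MPI website; the file name below is only a placeholder, and the build steps are the standard configure/make flow, not anything specific to this thread):

    tar xzf openmpi-<snapshot-version>.tar.gz     # placeholder snapshot name
    cd openmpi-<snapshot-version>
    ./configure --prefix=$HOME/openmpi-trunk
    make all install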

-- Josh

On Sep 17, 2007, at 4:03 PM, Reuti wrote:

Josh:

On Sep 17, 2007, at 9:33 PM, Josh Hursey wrote:

Sorry for catching up to this thread a bit late.

In the Open MPI trunk there is an mpirun option, '--preload-binary' (or '-s'),
that preloads the binary on the remote nodes before execution.
It has been tested with many of the resource managers (and should be
fairly resource-manager agnostic), but I don't believe it has been
tested in the Xgrid environment. It does not use the Xgrid
distribution mechanism, but rather rsh/ssh transfer mechanisms. Native Xgrid
support could most likely be added, but I do not know of any
developer planning to do so at the moment.

It has been a while since I tested the '--preload-binary' option to
mpirun, so please let us know if you experience any problems. There is
also a '--preload-files=' option to mpirun that allows arbitrary
files to be distributed to remote machines before a job starts. Both
options are useful when working with non-shared file systems.
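
For the archives, a minimal usage sketch with the two options named above (the host names, file name, and process count are made up, and the exact syntax may differ in your trunk build - check "mpirun --help" there):

    # preload the executable onto the remote nodes over rsh/ssh before the run
    mpirun -np 4 --host node1,node2 --preload-binary ./my_app

    # additionally stage an arbitrary input file on the remote nodes
    mpirun -np 4 --host node1,node2 --preload-binary --preload-files=input.dat ./my_app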

this is fantastic - but is this a hidden feature, a compile-time option,
or just a lack of documentation? When I issue "mpirun -help" I don't get
these options listed, hence I wasn't aware of them.

-- Reuti


Cheers,
Josh


On Sep 17, 2007, at 3:17 PM, Reuti wrote:

On Sep 17, 2007, at 4:34 PM, Brian Barrett wrote:

On Sep 10, 2007, at 1:35 PM, Lev Givon wrote:

When launching an MPI program with mpirun on an Xgrid cluster, is
there a way to have the program being run temporarily copied to
the compute nodes in the cluster when executed (i.e., similar to
what the xgrid command-line tool does)? Or is it necessary to make
the program available on every compute node (e.g., using NFS data
partitions)?

This is functionality we never added to our XGrid support.  It
certainly could be added, but we have an extremely limited
supply of
developer cycles for the XGrid support at the moment.

I think this should be implemented for all platforms, if it has to
be part of Open MPI at all (the parallel library Linda offers such a
feature). Otherwise the option would be to submit the job using
Xgrid, or any other queuing system, where you can set up such file
transfers in a prolog script (and an epilog to remove the programs
again) - or copy the binary to the created $TMPDIR, which I would
suggest if you decide to use e.g. Sun Grid Engine, as this directory
will be erased automatically after the job.
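
To illustrate the $TMPDIR idea, a rough Grid Engine job-script sketch (the binary name, PE name, and the assumption that $TMPDIR resolves to the same path on every node of the job are mine, not from this thread):

    #!/bin/sh
    #$ -pe mpi 4
    # stage the binary into the job-private scratch directory on every node
    for node in `cut -d" " -f1 $PE_HOSTFILE | sort -u`; do
        scp ./my_app $node:$TMPDIR/
    done
    mpirun -np $NSLOTS $TMPDIR/my_app
    # Grid Engine removes $TMPDIR on each node automatically when the job ends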

But just out of curiosity: how does Xgrid handle this, since you refer
to the command-line tool? If you have a job script with three mpirun
commands for three different programs, will Xgrid transfer all three
programs to the nodes for this job, or is it limited to one mpirun per
Xgrid job?

-- Reuti


Brian

--
   Brian W. Barrett
   Networking Team, CCS-1
   Los Alamos National Laboratory

