Hi Jeff,
This is really helpful. I'm looking into the examples and will ask again on
the list if I have more questions.
Tony
On 02/24/11 08:01, Jeff Squyres wrote:
Sure, we can do this. Open MPI's support of various run-time systems is based
on a series of plugins to our run-time layer (the Open MPI Run-Time
Environment, frequently abbreviated ORTE).
You can find details about creating new plugins (aka "components", in Open MPI
parlance) on this wiki page:
https://svn.open-mpi.org/trac/ompi/wiki/devel/CreateComponent
Open MPI v1.4 is a continuation of the v1.3 series; you might want to upgrade
to the latest-n-greatest, but it's not strictly necessary for this work.
You probably need to create 2 or 3 components in the 1.3/1.4 series:
- orte/mca/ess: Environment-Specific Services
- orte/mca/plm: Process Launch Manager
- orte/mca/ras: Resource Allocation Subsystem
Look at the corresponding plugin API header file for each framework (i.e.,
orte/mca/ess/ess.h, orte/mca/plm/plm.h, and orte/mca/ras/ras.h) for the details
of what the components of each type need to provide.
Then look at examples provided by the other components -- e.g., if your
internal job scheduler is something like the SLURM model, look at
orte/mca/[ess,plm,ras]/slurm. If it's something like rsh, then look at
orte/mca/plm/rsh and (I think?) orte/mca/ess/env (there is no RAS module for
rsh because there's no back-end run-time system API that tells OMPI what
hostnames to use).
Does that help?
On Feb 22, 2011, at 7:10 PM, Tony Lam wrote:
Hi,
I'm looking into supporting running OMPI jobs on our internal compute farms.
Specifically, we'd like to schedule and launch the jobs under the control of an
internal resource manager that we developed. My reading so far indicates this
can be achieved with some orted/plm plug-in (preferred over rsh/ssh). I'd
appreciate it if someone could suggest exactly which plug-ins need to be
provided and what interfaces are expected from them; I didn't find much
info on this in my search of the mailing list and OMPI docs.
Currently we have ompi v1.3, we can upgrade to a newer version if needed.
Thanks.
Tony
_______________________________________________
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel