The Open MPI developer community might be interested in this discussion
(below is the message I sent to the PMIx list a few weeks ago).

If you are interested in participating, please register so we can properly
scope resource needs. Information is available at the link below.
 https://groups.google.com/d/msg/hpc-runtime-wg/LGaHyZ0jRvE/n2t9MeSkDgAJ

Thanks,
Josh


---------- Forwarded message ---------
From: Josh Hursey <jjhur...@open-mpi.org>
Date: Fri, Jan 18, 2019 at 6:55 PM
Subject: System Runtime Interfaces: What’s the Best Way Forward?
To: <p...@googlegroups.com>


I'd like to share this meeting announcement with the PMIx community.

I am co-facilitating this meeting to help champion the exceptional effort
of the PMIx community towards innovation, adoption, and standardization in
this domain. I intend to bring forward the PMIx standardization effort as
one such path towards the goals described in this announcement.

I hope for a meaningful discussion, both on the group's mailing list and in
the face-to-face meeting. If you are interested and able to lend your voice
to the conversation, it would be appreciated.

-- Josh


--------------------------------------------

The entire HPC community (and beyond!) benefits from having a standardized
API specification between applications/tools and system runtime
environments. Such a standard interface should focus on supporting HPC
application launch and wire-up; tools that wish to inspect, steer, and/or
debug parallel applications; interfaces to manage dynamic workload
applications; interfaces to support fault tolerance and cross-library
coordination; and communication across container boundaries.

Beyond the proprietary interfaces, the HPC community has seen the evolution
from PMI-1 to PMI-2 to the current PMIx interfaces. We would like to
discuss how to move the current state of practice forward towards greater
stability and wider adoption. What is the best way to achieve this goal
without hindering current progress in the broader HPC community? There is a
wide range of possible directions, and which path is best is the question
we seek to discuss in this group.

This effort seeks broad community participation in this discussion. The
community should represent folks working on parallel libraries (e.g., MPI,
UPC, OpenSHMEM), runtime environments (e.g., SLURM, LSF, Torque), container
runtimes and orchestration environments (e.g., Singularity, CharlieCloud,
Docker, Kubernetes), tools (e.g., TotalView, DDT, STAT), and the broader
research community.

We will have a face-to-face meeting on March 4, 2019, from 9:00 am - 2:00
pm in Chattanooga, TN, to discuss these questions.  The meeting will be
co-located with (but separate from) the MPI Forum meeting (See this page
for logistics information: https://www.mpi-forum.org/meetings/). The
meeting is co-located with the MPI Forum to facilitate organization and
because of the overlap in the communities.

A mailing list has been created at the link below to facilitate
conversation on this topic:
https://groups.google.com/forum/#!forum/hpc-runtime-wg

Feel free to forward this information to others in your community who
might be interested in this discussion.

Kathryn Mohror (LLNL) and Josh Hursey (IBM)



-- 
Josh Hursey
IBM Spectrum MPI Developer


