Simple reason, Chris - Slurm's PMI library is licensed under GPL 2.0, so anything 
built against it automatically becomes GPL as well. That means OpenHPC cannot 
distribute Slurm with those libraries.

Instead, we are looking to use the new PMIx library to provide wireup support, 
which includes backward compatibility with PMI-1 and PMI-2. I'm supposed to 
complete that backport in my copious free time :-)
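
For context, the PMI-1 client API that has to keep working is tiny. A minimal 
sketch of a client, using the standard pmi.h calls (error checking omitted for 
brevity):

    #include <stdio.h>
    #include <pmi.h>

    int main(void)
    {
        int spawned = 0, rank = 0, size = 0;

        /* connect to the resource manager's PMI server */
        PMI_Init(&spawned);

        /* wireup: learn our rank and the size of the job */
        PMI_Get_rank(&rank);
        PMI_Get_size(&size);
        printf("rank %d of %d\n", rank, size);

        PMI_Finalize();
        return 0;
    }

The point of the PMIx backward-compat layer is that client code like this keeps 
running unchanged while the server side moves to PMIx.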

Until then, you can only launch via mpirun - which is just as fast, actually, 
but it does have different command-line options.
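
For example (assuming an Open MPI-style mpirun - exact flags vary by MPI stack, 
and ./my_app is just a placeholder):

    # with Slurm's PMI support you would do:
    srun -n 16 ./my_app

    # with the OpenHPC build you do this instead:
    mpirun -np 16 ./my_app

The prun wrapper mentioned in the OpenHPC guide quoted below exists precisely to 
paper over those per-stack differences with a single launch command.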


> On Jan 5, 2016, at 9:22 PM, Christopher Samuel <[email protected]> wrote:
> 
> 
> On 06/01/16 01:46, David Carlet wrote:
> 
>> Depending on where you are in the design/development phase for your
>> project, you might also consider switching to using the OpenHPC build.
> 
> Caution: for reasons that are unclear, OpenHPC disables Slurm's PMI support:
> 
> https://github.com/openhpc/ohpc/releases/download/v1.0.GA/Install_guide-CentOS7.1-1.0.pdf
> 
> # At present, OpenHPC is unable to include the PMI process
> # management server normally included within Slurm which
> # implies that srun cannot be used for MPI job launch. Instead,
> # native job launch mechanisms provided by the MPI stacks are
> # utilized and prun abstracts this process for the various
> # stacks to retain a single launch command.
> 
> Their spec file does:
> 
> # 6/16/15 [email protected] - do not package Slurm's version of libpmi with OpenHPC.
> %if 0%{?OHPC_BUILD}
>   rm -f $RPM_BUILD_ROOT/%{_libdir}/libpmi*
>   rm -f $RPM_BUILD_ROOT/%{_libdir}/mpi_pmi2*
> %endif
> 
> -- 
> Christopher Samuel        Senior Systems Administrator
> VLSCI - Victorian Life Sciences Computation Initiative
> Email: [email protected] Phone: +61 (0)3 903 55545
> http://www.vlsci.org.au/      http://twitter.com/vlsci
