Dear Damian,

I think that this is a very good approach.
In principle, it should work for MPI implementations that are part of the MPICH ABI initiative (https://www.mpich.org/abi/).

We make use of this ABI compatibility for container usage at Piz Daint (https://user.cscs.ch/tools/containers/advanced_shifter/#native-mpi-support).


Best wishes,
 
CSCS Swiss National Supercomputing Centre
Victor Holanda | Computational Scientist
ETH/CSCS | Via Trevano 131 | 6900 Lugano | Switzerland
victor.hola...@cscs.ch | Phone +41 91 610 82 65

On 22.08.18, 16:37, "easybuild-requ...@lists.ugent.be on behalf of Alvarez, Damian" <d.alva...@fz-juelich.de> wrote:

    Dear EasyBuilders and lmod users,
    
    I have a question for the community. Currently EasyBuild supports deploying its software stack in a hierarchical manner, as intended and supported by Lmod (i.e. load a compiler, which expands $MODULEPATH so that the MPIs become visible; load an MPI, which expands $MODULEPATH again).
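    
    For concreteness, the usual hierarchical layout behaves roughly like the sketch below (the paths, compiler and MPI versions are illustrative assumptions, not EasyBuild's exact output):
    
    # Rough sketch of how each level of the Lmod hierarchy extends $MODULEPATH.
    # Paths and module names are assumptions for illustration only.
    MODULE_ROOT = "/apps/modules/all"   # hypothetical module install root
    
    def modulepath_after(loaded):
        """Return the $MODULEPATH entries visible after loading the given modules."""
        paths = [f"{MODULE_ROOT}/Core"]                                 # always visible
        if "GCC/7.3.0" in loaded:                                       # compiler module
            paths.append(f"{MODULE_ROOT}/Compiler/GCC/7.3.0")
        if "OpenMPI/3.1.1" in loaded:                                   # MPI module
            paths.append(f"{MODULE_ROOT}/MPI/GCC/7.3.0/OpenMPI/3.1.1")
        return paths
    
    print(modulepath_after(["GCC/7.3.0"]))
    print(modulepath_after(["GCC/7.3.0", "OpenMPI/3.1.1"]))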
    
    A significant number of MPI implementations are ABI compatible (MPICH, MVAPICH, Intel MPI, ParaStationMPI, and possibly one or two more). I don't know whether OpenMPI-based MPI runtimes are also ABI compatible among themselves, but I would guess there is a good chance that they are.
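    
    (A quick way to see this ABI compatibility in practice: the members of the MPICH ABI initiative all ship a libmpi.so.12 with the same ABI, so a check like the rough sketch below keeps working when $LD_LIBRARY_PATH is pointed at a different member's libmpi.so.12, without recompiling anything. The soname and buffer size below are assumptions taken from the MPICH side of things.)
    
    # Print which MPICH-ABI runtime is actually picked up at run time.
    # Assumes an MPICH-ABI-compatible libmpi.so.12 is on the loader search path.
    import ctypes
    
    libmpi = ctypes.CDLL("libmpi.so.12")      # shared soname used by the MPICH ABI initiative
    
    # MPI_Get_library_version(char *version, int *resultlen) may be called before MPI_Init
    version = ctypes.create_string_buffer(8192)   # MPI_MAX_LIBRARY_VERSION_STRING in MPICH
    resultlen = ctypes.c_int()
    libmpi.MPI_Get_library_version(version, ctypes.byref(resultlen))
    
    print(version.value.decode())             # e.g. "MPICH Version: ..." or "Intel(R) MPI Library ..."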
    
    My question is: how would you feel about expanding the MPI level of $MODULEPATH based on ABI compatibility, rather than on MPI_vendor/version? That way one could offer many MPIs without needing to mirror the whole SW stack in every MPI branch, which could simplify SW management significantly.
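    
    To make that concrete, here is a rough sketch of what "expand by ABI family instead of MPI_vendor/version" could look like (plain Python on purpose, not tied to EasyBuild's actual module naming scheme code; the family names and paths are assumptions):
    
    # Sketch: key the MPI level of the hierarchy on an ABI family rather than
    # on MPI_vendor/version. Names and paths are illustrative assumptions.
    MPI_ABI_FAMILY = {
        "MPICH": "mpich-abi",
        "MVAPICH2": "mpich-abi",
        "impi": "mpich-abi",       # Intel MPI
        "psmpi": "mpich-abi",      # ParaStationMPI
        "OpenMPI": "openmpi",      # own branch until ABI compatibility is established
    }
    
    def mpi_modpath_extension(comp, comp_ver, mpi, mpi_ver, root="/apps/modules/all"):
        """$MODULEPATH extension an MPI module would prepend in this scheme."""
        # All ABI-compatible MPIs map to the same branch, so software built with
        # one member of the family stays visible whichever member is loaded.
        family = MPI_ABI_FAMILY.get(mpi, f"{mpi}/{mpi_ver}")   # fallback: vendor/version
        return f"{root}/MPI/{comp}/{comp_ver}/{family}"
    
    # Both of these point at the same software branch:
    print(mpi_modpath_extension("GCC", "7.3.0", "MPICH", "3.2.1"))
    print(mpi_modpath_extension("GCC", "7.3.0", "impi", "2018.2"))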
    
    Caveats I can think of are:
    -One would have to be careful regarding which MPI is used to compile the stack: the MPI compiler wrappers differ, and might add different compiler and/or linker flags (see the sketch after this list).
    -Some MPIs, despite being ABI compatible, might offer different capabilities (e.g. CUDA-awareness). In those cases it probably makes sense to ensure that loading a package which depends on particular MPI capabilities actually pulls in the correct MPI runtime as a dependency, instead of relying on vague assumptions like "the correct MPI is already loaded, because otherwise the package wouldn't be visible in the environment".
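    
    Regarding the first caveat, the wrapper differences are at least easy to inspect: MPICH-derived wrappers (and Intel MPI's) accept -show to print the underlying compiler command without running it, so one can diff what each wrapper would actually add. A rough sketch, assuming the wrapper names are available in $PATH and that hello.c is just a placeholder file name:
    
    # Compare what different MPI compiler wrappers would pass to the underlying
    # compiler. The wrapper names are assumptions; adjust to whatever is installed.
    import shutil
    import subprocess
    
    WRAPPERS = ["mpicc", "mpiicc"]    # e.g. MPICH/MVAPICH/ParaStationMPI vs. Intel MPI
    
    for wrapper in WRAPPERS:
        if shutil.which(wrapper) is None:
            print(f"{wrapper}: not found in $PATH, skipping")
            continue
        result = subprocess.run([wrapper, "-show", "hello.c"],
                                capture_output=True, text=True)
        print(f"{wrapper}: {result.stdout.strip()}")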
    
    On a related note, one could think of an analogous approach for compilers, to allow drop-in compiler replacements. Let's say icc 2018.2 is used to compile a given branch of the hierarchy. If that compiler has a bug that is fixed in 2018.3, right now the whole SW stack needs to be recompiled in a separate branch of the hierarchy. With a drop-in replacement, one could install the latest version of the compiler but still reuse the hierarchy compiled with 2018.2. Needless to say, this would have to be done carefully, but the possibility seems interesting.
    
    Am I missing something? How do you feel about this?
    
    Best,
    Damian
    
    --
    Dr. Damian Alvarez
    Juelich Supercomputing Centre
    Forschungszentrum Juelich GmbH
    52425 Juelich, Germany
    
    
    
    
    
