Hi all,
I've been compiling the latest ET on MareNostrum 4, and there are a couple of
issues I'd like to ask about:
1. On MareNostrum, a lot of MPI-related tools are bundled as modules. In
particular, one of the default modules that gets loaded is called 'impi'. This
module sets the global environment variable $MPI to 'impi', i.e.
$ echo $MPI
impi
This seems to severely interfere with the Cactus building process. At the
configuration stage I get the following:
Configuring with flesh MPI
Warning: use of flesh MPI via MPI option is deprecated and should be replaced
with the thorn ExternalLibraries/MPI and its MPI_DIR option
MPI selected, but no known MPI method - what is "impi" ?
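As far as I understand that warning, the old code path reads the flesh MPI
option (which is where the module's $MPI leaks in), while the recommended path
goes through the ExternalLibraries/MPI thorn. Roughly, in an option list it
would look something like this (the directory below is just a placeholder, not
the actual MareNostrum path):

# deprecated flesh option; this is what the environment's $MPI=impi gets picked up as
#MPI = impi
# recommended: ExternalLibraries/MPI thorn, pointing at the Intel MPI installation
MPI_DIR = /path/to/intel/impi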
To work around this problem, I unloaded the module (module unload impi), which
undefines the $MPI variable. After this, the configuration stage works fine.
This seems like a bug to me, though. After all, a cluster-wide global variable
called $MPI seems like a natural thing to exist on many of these machines...
should the building of Cactus rely on the non-existence of such a variable?
The other inconvenient thing is that unloading the impi module also removes the
mpirun command from $PATH, so one has to unload the module to compile the code
and then load it back in order to run it.
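A possibly less disruptive workaround (just a sketch, I haven't tested it
thoroughly; the configuration name and option file below are placeholders)
would be to unset only the variable for the configuration step, keeping the
module, and hence mpirun, in place:

$ unset MPI
$ make <config>-config options=my-options.cfg   # configuration no longer sees $MPI
$ module list                                   # impi still loaded, mpirun still on $PATH

But it still seems fragile for the build to depend on that variable not being set.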
2. I was not able to compile with any of the provided Intel compilers (with GCC
it worked fine). I know there are known issues with some versions of the Intel
compilers; is there some sort of list of Intel compiler versions that are known
to work? I could then ask the technical support whether they could make those
specific versions available...
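In case it helps, by "compiling with the Intel compilers" I just mean the usual
compiler selection in the option list, roughly like this (names only, with any
version-specific flags omitted, so I may well be missing something there):

CC  = icc
CXX = icpc
F77 = ifort
F90 = ifort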
Thanks,
Miguel
_______________________________________________
Users mailing list
[email protected]
http://lists.einsteintoolkit.org/mailman/listinfo/users