I -think- that is correct, but you may need the verbs library as well - I
honestly don’t remember if the configury checks for functions in the library or
not. If so, then you’ll need that wherever you build OMPI, but everything else
is accurate.
Good luck - and let us know how it goes!
Ralph
Ralph,
I will be building from the Master branch at github.com for testing
purposes. We are not 'supporting' Singularity container creation, but
we do hope to be able to offer some guidance, so I think we can
finesse the PMIx version, yes?
That is good to know about the verbs headers being the o…
The embedded Singularity support hasn’t made it into the OMPI 2.x release
series yet, though OMPI will still work within a Singularity container anyway.
Compatibility across the container boundary is always a problem, as your
examples illustrate. If the system is using one OMPI version and the c…
I would like to follow the instructions on the Singularity web site,
http://singularity.lbl.gov/docs-hpc
to test Singularity and OMPI on our cluster. My usual configure line
for the 1.x series looked like this:
./configure --prefix=/usr/local \
--mandir=${PREFIX}/share/man \
-…
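Since the configure line above is cut off, here is a hedged sketch of what a comparable 2.x-era invocation might look like. The extra flags are assumptions for illustration (verbs support and MPI_THREAD_MULTIPLE both come up elsewhere in this thread); check `./configure --help` on your own source tree before relying on any of them:

```shell
# Sketch only: flag choices are assumptions, not a tested recipe.
# --with-verbs                  pulls in InfiniBand (verbs) support
# --enable-mpi-thread-multiple  builds in MPI_THREAD_MULTIPLE (2.x-era flag)
./configure --prefix=/usr/local \
            --mandir=${PREFIX}/share/man \
            --with-verbs \
            --enable-mpi-thread-multiple
```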
On Fri, 17 Feb 2017, r...@open-mpi.org wrote:
Mark - this is now available in master. Will look at what might be
required to bring it to 2.0
Thanks Ralph,
To be honest, since you've given me an alternative, there's no rush from
my point of view.
The logic's embedded in a script and it's be…
Mark - this is now available in master. Will look at what might be required to
bring it to 2.0
> On Feb 15, 2017, at 5:49 AM, r...@open-mpi.org wrote:
>
>
>> On Feb 15, 2017, at 5:45 AM, Mark Dixon wrote:
>>
>> On Wed, 15 Feb 2017, r...@open-mpi.org wrote:
>>
>>> Ah, yes - I know what the pr…
Thanks Gilles!
> On Feb 15, 2017, at 10:24 PM, Gilles Gouaillardet wrote:
>
> Ralph,
>
>
> I was able to rewrite some macros to make Oracle compilers happy, and filed
> https://github.com/pmix/master/pull/309 for that
>
>
> Siegmar,
>
>
> Meanwhile, feel free to manually apply the attached…
Depends on the version, but if you are using something in the v2.x range, you
should be okay with just one installed version.
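One quick sanity check when deciding whether a single installed version will do is to compare the Open MPI version on the host with the one inside the image. A sketch, assuming `ompi_info` is on the PATH in both places; the image name is a placeholder:

```shell
# Report the host-side Open MPI version.
ompi_info --version

# Report the version baked into the container image
# ("mycontainer.img" is a hypothetical name).
singularity exec mycontainer.img ompi_info --version
```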
> On Feb 17, 2017, at 4:41 AM, Mark Dixon wrote:
>
> Hi,
>
> We have some users who would like to try out openmpi MPI_THREAD_MULTIPLE
> support on our InfiniBand cluster…
Hi,
We have some users who would like to try out openmpi MPI_THREAD_MULTIPLE
support on our InfiniBand cluster. I am wondering if we should enable it
on our production cluster-wide version, or install it as a separate "here
be dragons" copy.
I seem to recall openmpi folk cautioning that MPI_THREAD_MULTIPLE…
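For anyone wanting to check what a given build actually provides at run time, a minimal C sketch using the standard MPI_Init_thread call (this is plain MPI-2 API, nothing Open MPI-specific; compile with mpicc and run under mpirun):

```c
/* Minimal sketch: request MPI_THREAD_MULTIPLE and verify what the
 * library actually grants in the "provided" output argument. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        /* The library was built without full thread support. */
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    printf("MPI_THREAD_MULTIPLE supported\n");
    MPI_Finalize();
    return 0;
}
```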