On 9 December 2014 at 03:29, Howard Pritchard wrote:
> Hello Kevin,
>
> Could you try testing with Open MPI 1.8.3? There was a bug in 1.8.1
> that you are likely hitting in your testing.
>
> Thanks,
>
> Howard
Bingo!
Seems to have got rid of those messages.
Thanks.
Watcha,
we recently updated the OpenMPI installation on our School's ArchLinux
machines, where OpenMPI is built as a PkgSrc package, to 1.10.0.
In running through the build, we were told that PkgSrc wasn't too keen on
the use of `==` within a single "if test" construct, and so I needed to apply
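For what it's worth, the underlying portability issue is that `==` inside test(1) is a bash/ksh extension; POSIX test only defines `=` for string comparison. A minimal sketch of the kind of change such a patch makes (the variable and value here are made up):

```shell
# POSIX test(1) only defines '=' for string equality; '==' is a
# bash/ksh extension that stricter /bin/sh implementations reject.
value="yes"

# Non-portable (bashism):   if test "$value" == "yes"; then ...
# Portable replacement:
if test "$value" = "yes"; then
  echo "matched"
fi
```

The stricter /bin/sh implementations shipped by the BSDs (and dash on some Linux distributions) reject the `==` form, which is presumably what PkgSrc's checks were flagging.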
> Which libltdl version is that NetBSD ltdl.h from? Which version is
> in opal/libltdl? Have you tried not doing the above change?
>
> libltdl 2.2.x has incompatible changes over 1.5.x, both in the library
> as well as in the header, as well as (I think) in preloaded modules.
Hey Ralf,
The
Something I have just noticed on the NetBSD platform build
that I think goes further than just that platform:
there is a NetBSD packaging clash between the libtrace.la
from ompi/contrib/libtrace/ and that from an already
existing package, libtrace-3.0.6.
> That contribution needs to be
>
> a) brought under the control of --enable-contrib-no-build=
>
> b) possibly renamed (it would seem to be an MPI specific thing)
> so maybe, libmpitrace ?
I'd like to qualify that, in the light of some more digging,
though (b) is still an issue.
It seems
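For context, suggestion (a) refers to Open MPI's configure-time switch for skipping contrib packages; a build that avoids the clash might be configured along these lines (the prefix here is just an example):

```shell
# Sketch: exclude the libtrace contrib so its libtrace.la does not
# clash with the file installed by the existing libtrace-3.0.6 package.
./configure --enable-contrib-no-build=libtrace --prefix=/usr/pkg
make all install
```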
> 5. ompi_mca.m4 has been cleaned up a bit, allowing autogen.pl to be a
> little dumber than autogen.sh
So you are dumbing down in search of improvements?
Hello again OpenMPI folk, been a while.
Have just come to build OpenMPI 1.8.1 within a PkgSrc environment for
our ArchLinux machines (yes, we used to be NetBSD, yes).
Latest PkgSrc build was for 1.6.4.
The 1.6.4 PkgSrc build required 4 patches, 3 of which were PkgSrc-specific
and just defined
Hi there,
Firstly, I should say I am not a NetBSD developer at this (or any)
level; however, I can usually find my way around a system's internals
by judicious use of the man 3 content and can type code.
Jeff Squyres suggested we, ECS VUW, try and come up with a
src/topology-netbsd.c
as we seem
On 23 March 2017 at 23:41, Jeff Squyres (jsquyres) wrote:
> Yoinks. Looks like this was an oversight. :-(
>
> Yes, I agree that install_in_opt should put the modulefile in /opt as well.
Actually, I have since read the SPEC file from top to bottom and seen a
Changelog entry
Another thing that occurred to me whilst looking around this
was whether the OpenMPI SRPM might benefit from
being given proper "Software Collections" package
capability, as opposed to having the "install in opt"
option.
I don't claim to have enough insight to say either way
here; however, the Software
Just came to rehash some old attempts to build previous OpenMPIs
for an RPM-based system and noticed that, despite specifying
--define 'install_in_opt 1' \
as part of this full "config" rpmbuild stage
(Note: SPEC-file Release tag is altered so as not to have the RPM clash with
any system
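For completeness, the shape of the rpmbuild stage in question is roughly the following (the SRPM filename below is a placeholder, not the exact one used):

```shell
# Hedged sketch: rebuild the Open MPI SRPM with the relocatable
# /opt install enabled; the filename is illustrative only.
rpmbuild --rebuild \
  --define 'install_in_opt 1' \
  openmpi-2.0.2-1.src.rpm
# The spec file's Release: tag is edited beforehand so the resulting
# RPM does not clash with any system-supplied openmpi package.
```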
Just in case anyone is interested in following this, I'll
try and document what I'm doing here
I have a forked repo and added a branch here
https://github.com/vuw-ecs-kevin/ompi/tree/make-specfile-scl-capable
and have applied a series of small changes that allow for the building
of an RPM that
On 5 April 2017 at 13:01, Kevin Buckley
<kevin.buckley.ecs.vuw.ac...@gmail.com> wrote:
> I also note that as things stand, the Relocation is used for all
> files except the Environment Module file, resulting from the
> rpmbuild being done as follows
>
> --define 'in
On 29 March 2017 at 13:49, Jeff Squyres (jsquyres) wrote:
> I have no objections to this.
>
> Unfortunately, I don't have the time to work on it, but we'd be glad to look
> at pull requests to introduce this functionality. :-)
Yes, yes, alright.
I am though slightly
On 31 March 2017 at 23:35, Jeff Squyres (jsquyres) wrote:
and Gilles, who said,
>> you should only use the tarballs from www.open-mpi.org
> The GitHub tarballs are simple tars of the git repo at a given hash (e.g.,
> the v2.0.2 tag in git). ...
Yep, I'm aware of the way
On 19 April 2017 at 18:35, Kevin Buckley
<kevin.buckley.ecs.vuw.ac...@gmail.com> wrote:
> If I compile against 2.0.2 the same command works at the command line
> but not in the "SGE" job submission, where I see a complaint about
>
> =
On 20 April 2017 at 12:58, r...@open-mpi.org wrote:
> Fully expected - if ORTE can’t start one or more daemons, then the MPI job
> itself will never be executed.
>
> There was an SGE integration issue in the 2.0 series - I fixed it, but IIRC
> it didn’t quite make the 2.0.2
I have source code for MrBayes.
If I compile against OpenMPI 1.8.3, then an
mpirun -np 4 mb < somefile.txt
works at both the command line and in an "SGE" job submission where
I'm targeting 4 cores on the same node.
If I compile against 2.0.2 the same command works at the command line
but not in