Note, too, that these variables are likely only changeable before MPI_INIT.
I didn't check these specific variables, but at least the btl_self_eager_limit
variable is likely only read/used for setup during MPI_INIT.
coll_tuned_bcast_algorithm may only be used before a communicator is set up.
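For example, such variables are typically set on the mpirun command line before
the job starts (values here are only illustrative; coll_tuned_bcast_algorithm
also generally requires coll_tuned_use_dynamic_rules to be enabled):
$ mpirun --mca btl_self_eager_limit 4096 --mca coll_tuned_use_dynamic_rules 1 \
    --mca coll_tuned_bcast_algorithm 1 -np 4 ./a.out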
Hmm. Oscar's not around to ask any more, but I'd be greatly surprised if he
had InfiniPath on his systems where he ran into this segv issue...?
> On Aug 14, 2015, at 1:08 PM, Howard Pritchard wrote:
>
> Hi Gilles,
>
> Good catch! Nate, we hadn't been testing on a
hmca_coll_ml_comm_query() ??:0
>>>>>>>> 9 0x0006ace9 hcoll_create_context() ??:0
>>>>>>>> 10 0x000fa626 mca_coll_hcoll_comm_query() ??:0
>>>>>>>> 11 0x0000000f776e mca_coll_base_comm_select() ??:0
>>>>
>>>>>> 5 0x000b91eb base_bcol_basesmuma_setup_library_buffers() ??:0
>>>>>> 6 0x000969e3 hmca_bcol_basesmuma_comm_query() ??:0
>>>>>> 7 0x00032ee3 hmca_coll_ml_tree_hierarchy_discovery()
>>>>>> coll_ml_module.c:
This is likely because you installed Open MPI 1.8.7 into the same directory as
a prior Open MPI installation.
You probably want to uninstall the old version first (e.g., run "make
uninstall" from the old version's build tree), or just install 1.8.7 into a new
tree.
> On Aug 11, 2015, at
I think Dave's point is that numactl-devel (and numactl) is only needed for
*building* Open MPI. Users only need numactl to *run* Open MPI.
Specifically, numactl-devel contains the .h files we need to compile OMPI
against libnumactl:
$ rpm -ql numactl-devel
/usr/include/numa.h
On Aug 11, 2015, at 1:39 AM, Åke Sandgren wrote:
>
> Please fix the hcoll test (and code) to be correct.
>
> Any configure test that adds /usr/lib and/or /usr/include to any compile
> flags is broken.
+1
Gilles filed https://github.com/open-mpi/ompi/pull/796; I
Abhisek --
You are having two problems:
1. In the first "orted not found" problem, Open MPI was not finding its "orted"
helper executable on the remote nodes in your cluster. When you "module load
..." something, it just loads the relevant PATH / LD_LIBRARY_PATH / etc. on the
local node; it
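A common workaround (assuming Open MPI is installed in the same location on
every node; the path below is only an example) is to have mpirun set up the
remote environment itself:
$ mpirun --prefix /opt/openmpi-1.8 --hostfile myhosts -np 16 ./a.out
Invoking mpirun by its absolute path has the same effect as --prefix.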
On Jul 30, 2015, at 4:10 PM, Nathan Hjelm wrote:
>
> I agree with Ralph. Please run again with --enable-debug. That will give
> more information (line number) on where the error is occurring.
>
> Looking at the function in question the only place I see that could be
> causing
was
> less picky, because it must have worked for the developer.
>
> -- Ted
>
>
> On Jul 27, 2015, at 2:19 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
>
>>> On Jul 27, 2015, at 3:15 PM, Ted Mansell <ted.mans...@noaa.gov> wrote:
>>>
> On Jul 27, 2015, at 3:15 PM, Ted Mansell wrote:
>
> Hi,
>
> I'm getting a compile-time error on MPI_WAITALL and MPI_WAITANY when using
> "use mpi":
>
> parcel.f90(555): error #6285: There is no matching specific subroutine for
> this generic subroutine call.
Well, it looks like the formatting on that mail did not translate well. Sorry
about that -- but at least it gives you an idea of what the new text looks like.
It will render better in a browser than it did in that email. :-)
> On Jul 2, 2015, at 1:05 PM, Jeff Squyres (jsquyres) <
On Jul 2, 2015, at 12:20 PM, Tom Coles >
wrote:
I see, thanks for the quick reply.
In light of this, I'd like to suggest an update to the FAQ entry at
http://www.open-mpi.org/faq/?category=tuning#available-mca-params
I just noticed that this is covered in
Greetings Tom.
This was a change we made to the default behavior in ompi_info in the v1.7/v1.8
series. See the --level option and the "Levels" section in the ompi_info(1)
man page (i.e., http://www.open-mpi.org/doc/v1.8/man1/ompi_info.1.php).
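For example, to show every level of the TCP BTL's parameters (btl/tcp is just
an illustration):
$ ompi_info --param btl tcp --level 9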
> On Jul 2, 2015, at 10:10 AM, Tom Coles
lstopo will tell you -- if there is more than one "PU" (hwloc terminology for
"processing unit") per core, then hyper threading is enabled. If there's only
one PU per core, then hyper threading is disabled.
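For example (abridged, hypothetical output; the P# physical numbering will
differ on your machine):
$ lstopo --of console
  Core L#0
    PU L#0 (P#0)
    PU L#1 (P#8)
Two PUs under one core, as here, means hyper threading is enabled.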
> On Jun 29, 2015, at 4:42 PM, Lane, William wrote:
>
> Would
Good catch; fixed.
Thanks!
> On Jun 29, 2015, at 7:28 AM, Åke Sandgren wrote:
>
> Hi!
>
> static inline int ompi_mpi_errnum_is_class ( int errnum )
> {
>     ompi_mpi_errcode_t *err;
>
>     if (errno < 0) {
>         return false;
>     }
>
> I assume it should be "errnum", not "errno".
Greetings Open MPI users and system administrators.
In response to user feedback, Open MPI is changing how its releases will be
numbered.
In short, Open MPI will no longer be released using an "odd/even" cadence
corresponding to "feature development" and "super stable" releases. Instead,
Specifically, it means that Open MPI could not find the "orted" executable on
some nodes ("orted" is the Open MPI helper daemon). Hence, your Open MPI
install is either not in your PATH / LD_LIBRARY_PATH on those nodes, or, as
Gilles mentioned, Open MPI is not installed on those nodes.
Check
Do you have different IB subnet IDs? That would be the only way for Open MPI
to tell the two IB subnets apart.
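One way to check (device and port names below are just examples) is to read
GID 0 from sysfs; its first 64 bits are the subnet prefix:
$ cat /sys/class/infiniband/mlx4_0/ports/1/gids/0
fe80:0000:0000:0000:xxxx:xxxx:xxxx:xxxx
fe80:0000:0000:0000 is the default prefix; if both fabrics show the same value,
Open MPI can't distinguish them.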
> On Jun 16, 2015, at 1:25 PM, Tim Miller wrote:
>
> Hi All,
>
> We have a set of nodes which are all connected via InfiniBand, but all are
> mutually
We just recently started showing these common symbol warnings -- they're really
meant to motivate us (the developers) to reduce the number of common symbols. :-)
> On Jun 16, 2015, at 11:17 AM, Siegmar Gross
> wrote:
>
> Hi Gilles,
>
>> these are just warnings, you
Can you send all the information listed here:
http://www.open-mpi.org/community/help/
> On Jun 11, 2015, at 2:17 AM, Filippo Spiga wrote:
>
> Dear OpenMPI experts,
>
> I am rebuilding IPM (https://github.com/nerscadmin/ipm) based on OpenMPI
> 1.8.5. However,
that is any better. Maybe the whole alignment thing is
> busted, and leaving it undefined (which usually defaults to zero, but not
> always) causes it to be turned “off”?
>
> I don’t really care, mind you - but it is clearly an error the way it was
> before.
>
>
Ralph --
This change was not correct
(https://github.com/open-mpi/ompi/commit/ce915b5757d428d3e914dcef50bd4b2636561bca).
It is causing memory corruption in the openib BTL.
> On May 25, 2015, at 11:56 AM, Ralph Castain wrote:
>
> I don’t see a problem with it. FWIW: I’m
On Jun 10, 2015, at 6:36 AM, Cian Davis wrote:
>
> An old version of software I use (FDS 5.5.3 in this case - a CFD solver for
> fire) was compiled against LAM-MPI. It looks for the particular executables
> that came with LAM-MPI and looks for a running lamd.
Ah, bummer.
> On Jun 10, 2015, at 6:27 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
>
>> I just found another issue with this scenario:
>> symlinks in the profile directories (ompi/mpi/c/profile,
>> ompi/mpi/fortran/mpif-h/profile, oshmem/shmem/c/profile)
On Jun 10, 2015, at 12:00 AM, Gilles Gouaillardet wrote:
>
> that can happen indeed, in a complex but legitimate environment:
>
> mkdir ~/src
> cd ~/src
> tar xvfj openmpi-1.8.tar.bz2
> mkdir ~/build/openmpi-v1.8
> cd ~/build/openmpi-v1.8
> ~/src/openmpi-1.8/configure
> make
On Jun 10, 2015, at 1:35 AM, Siegmar Gross
wrote:
>
>> I don't see any reason why this should be happening to you only
>> sometimes; this code has been unchanged in *forever*. :-(
>
> It only happens since openmpi-v1.8.5-40-g7b9e672 which I tried to
>
Just curious -- is there a reason you can't upgrade?
I ask because:
- LAM/MPI has been dead for nearly a decade.
- Open MPI is source compatible with LAM/MPI.
> On Jun 10, 2015, at 5:07 AM, Cian Davis wrote:
>
>
> Hi All,
> While OpenMPI is the way forward, there is a
Siegmar --
I don't see any reason why this should be happening to you only sometimes; this
code has been unchanged in *forever*. :-(
Did your NFS server drift out of time sync with your build machine, perchance?
Regardless, I just pushed what should be a workaround to master and I'll PR it
On Jun 8, 2015, at 11:27 AM, Dave Goodell (dgoodell) wrote:
>
> My suggestion is to try to create a small reproducer program that we can send
> to the GCC folks with the claim that we believe it to be a buggy
> optimization. Then we can see whether they agree and if not,
Is there a reason you're specifying OMPI_CXX=g++48?
I.e., did you compile OMPI with a different C++ compiler? If so, that could be
your issue -- try compiling OMPI with g++48 and then see if your app compiles
and runs properly.
> On Jun 5, 2015, at 3:30 PM, rhan...@gmx.de wrote:
>
> Hi,
>
On Jun 4, 2015, at 5:48 AM, René Oertel wrote:
>
> Problem description:
> ===
>
> The critical code in question is in
> opal/mca/memory/linux/memory_linux_ptmalloc2.c:
> #
> 92 #if HAVE_POSIX_MEMALIGN
> 93 /* Double check for
vendor_id = 0x1425
>> vendor_part_id =
>> 0xb000,0xb001,0x5400,0x5401,0x5402,0x5403,0x5404,0x5405,0x5406,0x5407,0x5408,0x5409,0x540a,0x540b,0x540c,0x540d,0x540e,0x540f,0x5410,0x5411,0x5412,0x5413
>> use_eager_rdma = 1
>> mtu = 2048
>> receive_queues = P,65536,64
On May 30, 2015, at 9:42 AM, Jeff Layton wrote:
>
> The error happens during the configure step before compiling.
Hmm -- I'm confused. You show output from "make" in your previous mails...?
> However, I ran the make command as you indicated and I'm
> attaching the output to
Can you send the output of "make V=1"?
That will show the exact command line that is being used to build that file.
> On May 29, 2015, at 2:17 PM, Jeff Layton wrote:
>
> George,
>
> I changed my configure command to be:
>
> ./configure CCASFLAGS=-march=native
>
> and I
On May 29, 2015, at 11:19 AM, Timothy Brown
wrote:
>
> I've built Openmpi 1.8.5 with the following configure line:
>
> ./configure \
> --prefix=/curc/tools/x86_64/rh6/software/openmpi/1.8.5/pgi/15.3 \
> --with-threads=posix \
> --enable-mpi-thread-multiple \
>
On May 29, 2015, at 12:05 PM, Blosch, Edwin L wrote:
>
> I’ve tried ompi_info --param all but no matter what string I
> give for framework, I get no output at all.
Keep in mind that starting sometime in the v1.7 series, ompi_info grew another
command line option
On May 29, 2015, at 6:54 AM, Bruno Queiros wrote:
>
> I understand that using the Portland compiler isn't "advised" by Open MPI; I
> was just wondering if there's a way of doing it, since I need Open MPI
> compiled with PGI Fortran and not gfortran, for example.
A further
Sounds like your pgcc compiler installation is busted. You'll need to get that
fixed to compile/install Open MPI.
> On May 28, 2015, at 5:29 AM, Bruno Queiros wrote:
>
> Hello David
>
> $> pgf90 hello.f90
>
> Works OK.
>
> $> pgcc hello.c
>
> Gives that license
A few points:
- Just to clarify: Open MPI and MPICH are entirely different code bases /
entirely different MPI implementations. They both implement the same C and
Fortran APIs that can be used by applications (i.e., they're *source code
compatible*), but they are otherwise not compatible at
I agree with Gilles -- when you compile with one MPI implementation, but then
accidentally use the mpirun/mpiexec from a different MPI implementation to
launch it, it's quite a common symptom to see an MPI_COMM_WORLD size of 1
(i.e., each MPI process is rank 0 in MPI_COMM_WORLD).
Make sure
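A quick way to check for this (a minimal sketch; compile with the mpicc that
matches the mpirun you intend to use):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    /* with a mismatched mpirun, every process typically reports rank 0 of 1 */
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

If "mpirun -np 4 ./check" prints "rank 0 of 1" four times, the launcher and the
library don't match.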
> >
> > but in this line:
> >
> > https://github.com/open-mpi/ompi/blob/master/config/ompi_check_mxm.m4#L36
> >
> > ompi_check_mxm_libdir gets value if with_mxm was passed
> >
> >
> >
> > On Tue, May 26, 2015 at 6:59 PM, Jeff Squ
@dev.mellanox.co.il> wrote:
>
> Thanks Jeff!
>
> but in this line:
>
> https://github.com/open-mpi/ompi/blob/master/config/ompi_check_mxm.m4#L36
>
> ompi_check_mxm_libdir gets value if with_mxm was passed
>
>
>
> On Tue, May 26, 2015 at 6:59 PM, J
On May 14, 2015, at 11:29 PM, Chaitra Kumar wrote:
>
> We are developing an application which can handle only POSIX threads and
> POSIX synchronization APIs. I would like to know if the OpenMPI runtime
> implementation makes use of the POSIX threads and POSIX
Can you provide a bit more detail? Is the seg fault in your code or in Open
MPI?
Note that Open MPI uses hwloc (which likely uses libnuma) internally, too.
> On May 11, 2015, at 2:17 AM, Chaitra Kumar wrote:
>
> Hi Team,
>
> I am trying to test an application with
We can't capture the exact configure command line, but you can look at the
output from ompi_info to check specific characteristics of your Open MPI
installation.
ompi_info with no CLI options tells you a bunch of stuff; "ompi_info --all"
tells you (a lot) more.
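For example, to see whether your install was built with MPI_THREAD_MULTIPLE
support (just one illustrative check):
$ ompi_info | grep -i thread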
> On May 5, 2015, at 2:54 PM,
hines.
>> 2. I have specified user names in host file.
>> 3. I have successfully run the program on two machines with two different
>> accounts.
>>
>> But when I tried on other machines with the two accounts, openmpi stuck at
>> the very beginning. There is no er
Per Satish's last mail
(http://www.open-mpi.org/community/lists/users/2015/04/26823.php), George is
looking at a followup issue...
> On Apr 30, 2015, at 2:57 PM, Ralph Castain wrote:
>
> Thanks! The patch wasn’t quite correct, but we have a good one now and it is
> going
<knep...@gmail.com> wrote:
>
> On Fri, May 1, 2015 at 4:55 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
> Thank you!
>
> George reviewed your patch and adjusted it a bit. We applied it to master
> and it's pending to the release series (v1.8
Thank you!
George reviewed your patch and adjusted it a bit. We applied it to master and
it's pending to the release series (v1.8.x).
Would you mind testing a nightly master snapshot? It should be in tonight's
build:
http://www.open-mpi.org/nightly/master/
> On Apr 30, 2015, at 12:50
On Apr 27, 2015, at 5:02 PM, Walt Brainerd wrote:
>
> CC constants.lo
> In file included from ../../../../opal/include/opal_config_bottom.h:256:0,
> from ../../../../opal/include/opal_config.h:2797,
> from
Marco --
Have you run into this?
The m4 line in question that seems to be the problem is:
[AS_VAR_SET(type_var, [`cat conftestval`])]
Does `cat foo` in cygwin result in a ^M in the resulting shell string? If so,
is there a standard way to get rid of it?
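(If so, one guess -- untested on cygwin -- would be to strip the carriage
return before using the value, e.g.:
[AS_VAR_SET(type_var, [`cat conftestval | tr -d '\r'`])]
)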
> On Apr 27, 2015, at 2:17 PM,
f internally if you think the behavior is different.
>
>
> On Fri, Apr 24, 2015 at 1:41 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
> Mike --
>
> What happens when you do this?
>
>
> ibv_fork_init();
>
> int *buffer = malloc(
ompi master now calls ibv_fork_init() before initializing btl/mtl/oob
> frameworks and all fork fears should be addressed.
>
>
> On Fri, Apr 24, 2015 at 4:37 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
> Disable the memory manager / don't use leave pinned.
Disable the memory manager / don't use leave pinned. Then you can fork/exec
without fear (because only MPI will have registered memory -- it'll never leave
user buffers registered after MPI communications finish).
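For example (a sketch; parameter names are from the v1.8-era openib BTL):
$ mpirun --mca mpi_leave_pinned 0 --mca mpi_leave_pinned_pipeline 0 -np 4 ./a.out
Configuring Open MPI with --without-memory-manager removes the memory manager
altogether.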
> On Apr 23, 2015, at 9:25 PM, Howard Pritchard wrote:
>
On Apr 22, 2015, at 1:57 PM, Jerome Vienne wrote:
>
> While looking at performance and control variables provided by the MPI_T
> interface, I was surprised by the impressive number of control variables
> (1,087 if I am right (with 1.8.4)) but I was also disappointed to
Can you send your full Fortran test program?
> On Apr 22, 2015, at 6:24 PM, Galloway, Jack D wrote:
>
> I have an MPI program that is fairly straight forward, essentially
> "initialize, 2 sends from master to slaves, 2 receives on slaves, do a bunch
> of system calls for
Thanks!
> On Apr 21, 2015, at 11:32 AM, Siegmar Gross
> wrote:
>
> Hi,
>
>> @siegmargross Can you give master a whirl and see if the problem
>> is fixed for you now? Thanks.
>
> The problem is fixed. At least I can build everything once more.
>
Thanks for reporting, Siegmar. I've filed
https://github.com/open-mpi/ompi/issues/540.
> On Apr 18, 2015, at 2:34 AM, Siegmar Gross
> wrote:
>
> Hi,
>
> yesterday I tried to build openmpi-dev-1527-g97444d8 on my machines
> (Solaris 10 Sparc, Solaris 10
Thomas: this was queued up for inclusion in the upcoming v1.8.5 release
(https://github.com/open-mpi/ompi-release/pull/245).
Specifically: the v1.8.4 version of the man page is automatically generated, so
we won't fix it. But it'll be fixed in the v1.8.5 version of the man page.
Thank you!
I know I'm quite late to this thread, but Edgar is correct: the arguments in
collective calls -- including the lengths in sendcounts and recvcounts in
alltoallv -- must match at all processes.
This is different than point-to-point MPI calls, where a sender can send a
smaller count than the
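"Match" means pairwise consistency: recvcounts[j] on rank i must equal
sendcounts[i] on rank j. A hypothetical 2-process illustration:

rank 0: sendcounts = {1, 2}   recvcounts = {1, 3}
rank 1: sendcounts = {3, 4}   recvcounts = {2, 4}

(Rank 0 receives 3 elements from rank 1 because rank 1 sends 3 elements to
rank 0, and so on.)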
As Ralph mentioned, the 1.4.x series is very old.
Any chance you can upgrade to 1.8.x?
> On Apr 15, 2015, at 7:12 AM, cristian wrote:
>
> Hello,
>
> I noticed when performing a profiling of an application that the MPI_init()
> function takes a considerable amount of
+1
Please try upgrading to Open MPI v1.8.x and see if that solves your problem.
> On Apr 15, 2015, at 12:06 AM, Christopher Samuel
> wrote:
>
> On 15/04/15 12:19, Li Li wrote:
>
>> I installed Open MPI 1.5 and tested it with a simple program
>
> Umm, Open-MPI 1.5
Stop using mpif.h.
Switching to the "mpi" module is a very simple upgrade, and gives you at
least some measure of type safety and argument checking (e.g., it would have
failed to compile with that missing ierror argument).
If you have a modern enough Fortran compiler (e.g., gfortran >=
Open MPI no longer supports Microsoft Windows, sorry.
> On Apr 11, 2015, at 5:02 PM, Ahmed Salama wrote:
>
> I have network consists of different platform(Redhat 6 & windows 7), how can
> configure openmpi 1.8.4 to run on both theses different platform
>
You can also specify per-machine usernames in $HOME/.ssh/config.
See ssh_config(5).
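For example, in $HOME/.ssh/config (host pattern and user name hypothetical):

Host node*
    User mpiuser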
> On Apr 13, 2015, at 9:19 PM, Ralph Castain wrote:
>
>
>> On Apr 13, 2015, at 5:47 PM, XingFENG wrote:
>>
>> Thanks for all who joined the discussion.
>> Yes,
On Apr 3, 2015, at 12:50 PM, Lei Shi wrote:
>
> P.S. Pavan suggests me to use MPI_Request_free. I will give it a try.
Keep in mind that you have zero indication of when a send or receive completes
if you MPI_Request_free (Pavan implied this, too). You could be reading half a
It's not communicator safe. That is, if I attach a buffer,
> some other library I linked with might chew up my buffer space. So the
> nonblocking guarantee is kind of bogus at that point.
>
> -- Pavan
>
>> On Apr 3, 2015, at 5:30 AM, Jeff Squyres (jsquyres) <
In the general case, MPI defines that you *have* to call one of the MPI_Test or
MPI_Wait functions to finish the communication request. If you don't do so,
you're basically leaking resources (e.g., memory).
In a single-threaded MPI implementation, the call to MPI_Test/MPI_Wait/etc. may
be
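The usual pattern is (a minimal sketch):

MPI_Request req;
MPI_Isend(buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD, &req);
/* ... do other work while the send progresses ... */
MPI_Wait(&req, MPI_STATUS_IGNORE); /* completes the request, releases resources */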
Yes. I think the blog post gives 10 excellent reasons why. :-)
> On Apr 3, 2015, at 2:40 AM, Lei Shi wrote:
>
> Hello,
>
> I want to use buffered sends. Read a blog said it is evil,
> http://blogs.cisco.com/performance/top-10-reasons-why-buffered-sends-are-evil
>
> Is it
You might want to check out these blog entries about the tree-based launcher in
Open MPI for a little background:
http://blogs.cisco.com/performance/tree-based-launch-in-open-mpi
http://blogs.cisco.com/performance/tree-based-launch-in-open-mpi-part-2
Your mail describes several issues;
On Mar 30, 2015, at 6:12 AM, LOTFIFAR F. wrote:
>
> Thank you very much guys. The problem has just been resolved.
Great!
> The problem was in the security groups rules when one create VMs. Openstack
> pushes the security groups into iptables rules and it is not
My $0.02:
- building under your $HOME is recommended in cases like this, but it's not
going to change the functionality of how OMPI works. I.e., rebuilding under
your $HOME will likely not change the result.
- you have 3 MPI implementations telling you that TCP connections between your
VMs
It might be helpful to send all the information listed here:
http://www.open-mpi.org/community/help/
> On Mar 26, 2015, at 10:55 PM, Ralph Castain wrote:
>
> Could you please send us your configure line?
>
>> On Mar 26, 2015, at 4:47 PM, Hammond, Simon David (-EXP)
Also remove the btl openib.la file in the same dir.
Sent from my phone. No type good.
> On Mar 19, 2015, at 9:05 AM, Tus wrote:
>
> I removed openib.so. I get this error with --mca btl_tcp_if_include p2p1:
>
> [compute-0-7.local:04799] mca: base: component_find: unable to
f moving this thread over to the devel list and
following up there.
> On Mar 18, 2015, at 10:55 AM, Peter Gottesman <mygames1...@gmail.com> wrote:
>
> Okay, the config.log and configure output are attached.
>
>
> On Wed, Mar 18, 2015 at 11:38 AM, Jeff Squyres (jsquyres)
Can you send all the info listed here:
http://www.open-mpi.org/community/help/
> On Mar 17, 2015, at 6:30 PM, Peter Gottesman wrote:
>
> Hey all,
> I am trying to compile Open MPI on a 32bit laptop running debian wheezy
> 7.8.0. When I
> ../ompi-master/configure
Can you send the information listed here:
http://www.open-mpi.org/community/help/
> On Mar 17, 2015, at 1:57 PM, Ahmed Salama wrote:
>
> when I configure openmpi-1.8.2 e.g. $./configure --enable-mpi-java
> --with-jdk-bindir=/usr/jdk6/bin
Ahmed --
Did you have a further question? I didn't see any additional / new text in the
reply below.
> On Mar 17, 2015, at 1:58 PM, Ahmed Salama <ah_salama_eng...@yahoo.com> wrote:
>
>
> From: Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> To: "t...@riseup.
be using va args differently than my example and be
exposing a problem with va args in your compiler.
Can you upgrade your compiler?
Sent from my phone. No type good.
> On Mar 14, 2015, at 9:44 PM, Fabricio Cannini <fcann...@gmail.com> wrote:
>
>> On 12-03-2015 20:44, Jeff
> wrote:
>
> On 12-03-2015 20:24, Jeff Squyres (jsquyres) wrote:
>> #include <stdarg.h>
>> #include <stdio.h>
>>
>> static void foo(const char *fmt, ...)
>> {
>> va_list list;
>> va_start(list, fmt);
>> vprintf(fmt, list);
>> va_end(list);
>> }
oo.c -o foo
./foo
> On Mar 12, 2015, at 6:49 PM, Fabricio Cannini <fcann...@gmail.com> wrote:
>
> On 12-03-2015 18:23, Jeff Squyres (jsquyres) wrote:
>> Do you have the latest version of the Intel 12.x compiler installed?
>
>> Are you able to compile/install any
Do you have the latest version of the Intel 12.x compiler installed?
Are you able to compile/install any other C source code that uses varargs?
I ask because we've seen busted / buggy Intel compiler installs before. It may
be that you need to update to the latest version of the Intel 12.x
Two quick observations:
1. Open MPI 1.6.2 is pretty old, and it's not the last release in the 1.6.x
series. If you can, update to Open MPI 1.6.5, which has lots of bug fixes over
1.6.2. Even better, upgrade to Open MPI 1.8.4 (i.e., the latest stable
release), which has oodles of bug fixes
On Mar 9, 2015, at 12:19 PM, Tus wrote:
>
> I configured and installed 1.8.4 on my system. I was getting openfabric
> errors and started to specify -mca btl ^openib which is working but very
> slow.
>
> I would like to compile again excluding openfabric or ib support. I do
>
On Feb 27, 2015, at 9:42 AM, Sasso, John (GE Power & Water, Non-GE)
> wrote:
Unfortunately, we have a few apps which use LAM/MPI instead of OpenMPI (and
this is something I have NO control over).
Bummer!
I have been making an effort
If you had an older Open MPI installed into /usr/local before you installed
Open MPI 1.8.4 into /usr/local, it's quite possible that some of the older
plugins are still there (and will not play nicely with the 1.8.4 install).
Specifically: installing a new Open MPI does not uninstall an older
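One quick sanity check (assuming a /usr/local prefix) is to list the component
directory and look for plugins left over from the old version:
$ ls -l /usr/local/lib/openmpi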
On Feb 24, 2015, at 4:09 PM, Tom Wurgler wrote:
>
> Did you mean --disable-shared instead of --disable-dlopen?
Ah, sorry -- my eyes read one thing, and my brain read another. :-)
> And I am still confused. With "--disable-shared" I get a bigger static
> library than
The --disable-dlopen option actually snips out some code from the Open MPI code
base: it disables a feature (and the code that goes along with it).
Hence, it makes sense that the resulting library would be a different size:
there's actually less code compiled in it.
> On Feb 24, 2015, at 2:45
Harald --
Many thanks for noticing and sending a patch. I have committed the fix, and
slated it for the next Open MPI release:
https://github.com/open-mpi/ompi-release/pull/188
> On Feb 20, 2015, at 9:36 AM, Harald Servat wrote:
>
> Hello,
>
> may I suggest you
Siegmar --
This looks like an error in ROMIO that we should report upstream.
Thanks for the heads-up!
> On Feb 19, 2015, at 8:00 AM, Siegmar Gross
> wrote:
>
> Hi,
>
> today I tried to build openmpi-dev-1031-g008755a on my machines
> (Solaris 10 Sparc,
Looks like --enable-heterogeneous builds are broken on master. I filed
https://github.com/open-mpi/ompi/issues/403.
Thanks for the heads-up!
> On Feb 19, 2015, at 7:50 AM, Siegmar Gross
> wrote:
>
> Hi,
>
> today I tried to build
Siegmar --
This one looks like a seg fault in your compiler. I don't know if there's much
we can do about that.
> On Feb 19, 2015, at 7:50 AM, Siegmar Gross
> wrote:
>
> Hi,
>
> today I tried to build openmpi-dev-1031-g008755a on my machines
>
Siegmar --
This file (opal/mca/rcache/base/static-components.h) is generated during
configure.
I just downloaded the dev-1031 tarball from last night and ran configure on it,
and the opal/mca/rcache/base/static-components.h file is there for me.
Did something go wrong during your configure?
On Feb 17, 2015, at 11:36 AM, Tarandeep Kalra wrote:
>
> It is a 2011 MacBook Pro. Depends on what you think is old.
:-)
I can't say we've tested Open MPI 1.8.x on OS X 10.6.x -- there may well be
some kind of weirdness there.
Can you try Open MPI 1.6.5? You'll likely
Sorry for not replying earlier.
Yes, you hit on a solution.
Note, too, that the MTLs are only used by the "cm" PML, and the BTLs are only
used by the "ob1" PML.
So if you're using the openib BTL for IB support (vs. the MXM MTL), you can
effectively choose one or the other by setting the pml MCA parameter.
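For example (a sketch; component availability depends on how your Open MPI was
built):
$ mpirun --mca pml ob1 ...   # use the BTLs (e.g., openib)
$ mpirun --mca pml cm ...    # use the MTLs (e.g., mxm)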
This may well be related to:
https://github.com/open-mpi/ompi/issues/369
> On Feb 10, 2015, at 9:24 AM, Riccardo Zese wrote:
>
> Hi,
> I'm trying to modify an old algorithm of mine in order to exploit
> parallelization and I would like to use MPI. My algorithm is
We'll follow up on this github issue.
Alexander -- thanks for the bug report. If you'd like to follow the progress
of this issue, comment on https://github.com/open-mpi/ompi/issues/369.
> On Feb 1, 2015, at 5:08 PM, Oscar Vega-Gisbert wrote:
>
> Hi,
>
> I created an
my own
> machines I usually build OpenMPI/MPICH from source and haven't had any
> problems in the past but in this case I was simply trying out Cloud9 with all
> packages installed from official repository.
>
> Thanks,
>
> Tabrez
>
> On 02/06/2015 09:09 AM, Jeff Squyr