On Apr 7, 2007, at 12:59 AM, Brian Powell wrote:
Greetings,
I turn to the assistance of the OpenMPI wizards. I have compiled
v1.2 using gcc and ifort (see the attached config.log) with a
variety of options. The compilation finishes (side note: I had to
define NM otherwise the configure
That's very odd. The usual cause for this is /tmp being unwritable
by the user or full. Can you check to see if either of those
conditions is true?
Thanks,
Brian
On Apr 13, 2007, at 2:44 AM, Christine Kreuzer wrote:
Hi,
I run openmpi on an AMD Opteron with two dual-core processors an
Yup, it does. There's nothing in the standard that says it isn't
allowed to. Given the number of system/libc calls involved in doing
communication, pretty much every MPI function is going to change the
value of errno. If you expect otherwise, I'd modify your
application. Most
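For illustration, a minimal C++ sketch of the defensive pattern suggested above -- save errno before the MPI call and restore it afterwards. The broadcast and the variable names are invented for the example, not taken from the original application:

#include <cerrno>
#include <cstdio>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int value = 42;
    int saved_errno = errno;  // save errno before the MPI call
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    errno = saved_errno;      // restore it; the MPI call may have changed it

    std::printf("value = %d, errno = %d\n", value, errno);
    MPI_Finalize();
    return 0;
}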
Thanks for the bug report. I'm able to replicate your problem, and
it will be fixed in the 1.2.2 release.
Brian
On May 7, 2007, at 6:10 AM, livelfs wrote:
Hi all
I have observed a regression between 1.2 and 1.2.1
if CC is assigned an absolute path (i.e. export
This was a regression in Open MPI 1.2.1. We improperly handled the
situation where CC has a path in it. We will have this fixed in Open
MPI 1.2.2. For now, your options are to use Open MPI 1.2 or to
specify a $CC without a path, such as CC=icc, and make sure $PATH is
set properly.
Brian
On May 14, 2007, at 10:21 AM, Nym wrote:
I am trying to use MPI_TYPE_STRUCT in a 64 bit Fortran 90 program. I'm
using the Intel Fortran Compiler 9.1.040 (and C/C++ compilers
9.1.045).
If I try to call MPI_TYPE_STRUCT with the array of displacements that
are of type
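For what it's worth, the usual cause of this class of failure is passing default-sized integers where address-sized displacements are required; in Fortran the portable form is MPI_TYPE_CREATE_STRUCT with INTEGER(KIND=MPI_ADDRESS_KIND) displacements. A minimal sketch of the equivalent C++ pattern using MPI_Aint follows; the struct and its fields are invented for the example:

#include <cstddef>
#include <mpi.h>

struct Particle {   // example struct, not from the original report
    double x;
    int    id;
};

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    // Displacements must be address-sized (MPI_Aint); on 64-bit
    // platforms a default int is too small to hold them safely.
    int          blocklens[2] = {1, 1};
    MPI_Aint     disps[2]     = {offsetof(Particle, x),
                                 offsetof(Particle, id)};
    MPI_Datatype types[2]     = {MPI_DOUBLE, MPI_INT};
    MPI_Datatype particle_type;

    MPI_Type_create_struct(2, blocklens, disps, types, &particle_type);
    MPI_Type_commit(&particle_type);

    // ... use particle_type in communication ...

    MPI_Type_free(&particle_type);
    MPI_Finalize();
    return 0;
}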
I fixed the OOB. I also mucked some things up with it interface wise
that I need to undo :). Anyway, I'll have a look at fixing up the
TCP component in the next day or two.
Brian
On May 10, 2007, at 6:07 PM, Jeff Squyres wrote:
Brian --
Didn't you add something to fix exactly this
On May 13, 2007, at 6:23 AM, Bert Wesarg wrote:
Even better: is there a patch available to fix this in the 1.2.1
tarball, so that
I can set the full path again with CC?
The patch is quite trivial, but requires a rebuild of the build
system
(autoheader, autoconf, automake,...)
see here:
On May 21, 2007, at 7:40 PM, Tom Clune wrote:
Executive summary: mpirun hangs when laptop is connected via
cellular modem.
Longer description: Under ordinary circumstances mpirun behaves as
expected on my OS X (Intel-duo) laptop. I only want to be using
the shared-memory mechanism -
On May 22, 2007, at 7:52 PM, Tom Clune wrote:
For example, if it is ppp0, try:
mpirun -np 1 -mca oob_tcp_exclude ppp0 uptime
This seems to at least produce a bit of output before hanging:
LM000953070:~ tlclune$ mpirun -np 1 -mca oob_tcp_exclude ppp0 uptime
On May 29, 2007, at 12:25 PM, smai...@ksu.edu wrote:
I am doing research on parallel computing on shared memory with a
NUMA architecture. The system is a 4-node AMD Opteron with each node
being a dual-core. I am testing an OpenMPI program with MPI-nodes <=
MAX cores available on system (in
Bill -
This is a known issue in all released versions of Open MPI. I have a
patch that will hopefully fix this issue in 1.2.3. It's currently
waiting on people in the Open MPI team to verify I didn't do
something stupid.
Brian
On May 29, 2007, at 9:59 PM, Bill Saphir wrote:
George,
On Jun 1, 2007, at 12:15 PM, Bert Wesarg wrote:
Hello,
Is the 'EGREP' a typo in the first hunk of r14829:
https://svn.open-mpi.org/trac/ompi/changeset/14829/trunk/config/cxx_find_template_repository.m4
Gah! Yes, it is. It should be $GREP. I'll fix it this evening.
Thanks,
Brian
Or tell Open MPI not to build torque support, which can be done at
configure time with the --without-tm option.
Open MPI tries to build support for whatever it finds in the default
search paths, plus whatever things you specify the location of. Most
of the time, this is what the user
On Jun 7, 2007, at 9:04 PM, Code Master wrote:
function `_int_malloc':
: multiple definition of `_int_malloc'
/usr/lib/libopen-pal.a(lt1-malloc.o)(.text+0x18a0):openmpi-1.2.2/opal/mca/memory/ptmalloc2/malloc.c:3954: first defined here
/usr/bin/ld: Warning: size of symbol `_int_malloc' changed
On Jul 4, 2007, at 8:21 PM, Graham Jenkins wrote:
I'm using the openmpi-1.1.1-5.el5.x86_64 RPM on a Scientific Linux 5
cluster, with no installed HCAs. And a simple MPI job submitted to
that
cluster runs OK .. except that it issues messages for each node
like the
one shown below. Is there
On Jul 10, 2007, at 11:40 AM, Scott Atchley wrote:
On Jul 10, 2007, at 1:14 PM, Christopher D. Maestas wrote:
Has anyone seen the following message with Open MPI:
---
warning:regcache incompatible with malloc
---
---
We don't see this message with mpich-mx-1.2.7..4
MX has an internal
What Ralph said is generally true. If your application completed,
this is nothing to worry about. It means that an error occurred on
the socket between mpirun and some other process. However, combined
with the travor0 errors in the log files, it could mean that your
IPoIB network is
architecture: i386-apple-darwin8.10.1
Hi Brian,
1.2.3 downloaded and built from source.
Tim
On 12/07/2007, at 12:50 AM, Brian Barrett wrote:
Which version of Open MPI are you using?
Thanks,
Brian
On Jul 11, 2007, at 3:32 AM, Tim Cornwell wrote:
I have a problem running openmpi under OS 10.4.10. My
On Jul 15, 2007, at 10:05 PM, Isaac Huang wrote:
Hello, I read from the FAQ that current Open MPI releases don't
support end-to-end data reliability. But I still have some confusion
that can't be resolved by googling or reading the FAQ:
1. I read from "MPI - The Complete Reference" that "MPI
Jody -
I usually update the ROMIO package before each major release (1.0,
1.1, 1.2, etc.) and then only within a major release series when a
bug is found that requires an update. This seems to be one of those
times ;). Just to make sure we're all on the same page, which
version of Open
I wouldn't worry about it. 1.2.3 has no ROMIO fixes over 1.2.2.
Brian
On Jul 16, 2007, at 9:42 AM, jody wrote:
Brian,
I am using OpenMPI 1.2.2, so I am lagging a bit behind.
Should I update to 1.2.3 and do the test again?
Thanks for the info
Jody
On 7/16/07, Brian Barrett <bb
checking -enable-werror --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib --build=powerpc-apple-darwin8 --with-arch=nocona --with-tune=generic --program-pr
On Jul 19, 2007, at 3:24 PM, Moreland, Kenneth wrote:
I've run into a problem with the File I/O with openmpi version 1.2.3.
It is not possible to call MPI_File_set_view with a datatype created
from a subarray. Instead of letting me set a view of this type, it
gives an invalid datatype error.
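A minimal C++ sketch of the pattern being described, assuming a 2D subarray filetype; the array sizes and the filename are invented for the example:

#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    // A 2D subarray filetype: a 4x4 block starting at (2,2) of a 10x10 array.
    int sizes[2]    = {10, 10};
    int subsizes[2] = {4, 4};
    int starts[2]   = {2, 2};
    MPI_Datatype filetype;
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_INT, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, (char *)"view.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
    // The call that reportedly fails with an invalid datatype error:
    MPI_File_set_view(fh, 0, MPI_INT, filetype, (char *)"native",
                      MPI_INFO_NULL);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}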
On Jul 26, 2007, at 7:43 PM, Mathew Binkley wrote:
../../libtool: line 460: CDPATH: command not found
libtool: Version mismatch error. This is libtool 2.1a, but the
libtool: definition of this LT_INIT comes from an older release.
libtool: You should recreate aclocal.m4 with macros from libtool
On Aug 2, 2007, at 4:22 PM, Glenn Carver wrote:
Hopefully an easy question to answer... is it possible to get at the
values of mca parameters whilst a program is running? What I had in
mind was either an open-mpi function to call which would print the
current values of mca parameters or a
On Aug 21, 2007, at 3:32 PM, Lev Givon wrote:
configure: WARNING: *** Shared libraries have been disabled (--disable-shared)
configure: WARNING: *** Building MCA components as DSOs
automatically disabled
checking which components should be static... none
checking for projects containing MCA
On Aug 21, 2007, at 10:52 PM, Lev Givon wrote:
(Running ompi_info after installing the build confirms the absence of
said components). My concern, unsurprisingly, is motivated by a desire
to use OpenMPI on an xgrid cluster (i.e., not with rsh/ssh); unless I
am misconstruing the above
On Aug 22, 2007, at 2:35 PM, Higor de Padua Vieira Neto wrote:
At the end of the output file, it just shows this:
" (...lot of output ...)
config.status: creating opal/include/opal_config.h
config.status: creating orte/include/orte_config.h
config.status: orte/include/orte_config.h is unchanged
On Aug 23, 2007, at 4:33 AM, Bernd Schubert wrote:
I need to compile a benchmarking program and so far have absolutely
no experience with any MPI.
However, this looks like a general open-mpi problem, doesn't it?
bschubert@lanczos MPI_IO> make
cp ../globals.f90 ./; mpif90 -O2 -c
On Aug 24, 2007, at 10:57 AM, Marwan Darwish wrote:
I keep getting the following link error when compiling lam-mpi
on Mac OS X (in release mode).
Would moving to open-mpi resolve such issues? Anybody with
experience in this
Moving to Open MPI will work around this issue. Another
On Aug 27, 2007, at 3:14 PM, Lev Givon wrote:
I have OpenMPI 1.2.3 installed on an XGrid cluster and a separate Mac
client that I am using to submit jobs to the head (controller) node of
the cluster. The cluster's compute nodes are all connected to the head
node via a private network and are
On Aug 28, 2007, at 10:59 AM, Lev Givon wrote:
Received from Brian Barrett on Tue, Aug 28, 2007 at 12:22:29PM EDT:
On Aug 27, 2007, at 3:14 PM, Lev Givon wrote:
I have OpenMPI 1.2.3 installed on an XGrid cluster and a separate
Mac
client that I am using to submit jobs to the head
On Sep 10, 2007, at 1:35 PM, Lev Givon wrote:
When launching an MPI program with mpirun on an xgrid cluster, is
there a way to cause the program being run to be temporarily copied to
the compute nodes in the cluster when executed (i.e., similar to
what the
xgrid command line tool does)? Or
On Sep 25, 2007, at 1:37 PM, Richard Graham wrote:
Josh Hursey did the port of Open MPI to CNL. Here is the config
line I have used to build
on the Cray XT4:
./configure CC=/opt/xt-pe/default/bin/snos64/linux-pgcc CXX=/opt/xt-pe/default/bin/snos64/linux-pgCC
On Sep 25, 2007, at 4:25 AM, Rayne wrote:
Hi all, I'm using the SGE system on my school network,
and would like to know if the errors I received below
means there's something wrong with my MPI_Recv
function.
[0,1,3][btl_tcp_frag.c:202:mca_btl_tcp_frag_recv]
mca_btl_tcp_frag_recv: readv failed
On Sep 28, 2007, at 4:56 AM, Massimo Cafaro wrote:
Dear all,
when I try to compile my MPI code on 64-bit Intel Mac OS X, the
build fails since the Open MPI library has been compiled in 32-bit
mode. Can you please provide in the next version the ability to
choose at configure time between
, you can not use
recent CVS copies of Libtool, you'll have to use the same version
specified here:
http://www.open-mpi.org/svn/building.php
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Oct 10, 2007, at 1:27 PM, Dirk Eddelbuettel wrote:
| Does this happen for all MPI programs (potentially only those that
| use the MPI-2 one-sided stuff), or just your R environment?
This is the likely winner.
It seems indeed due to R's Rmpi package. Running a simple mpitest.c
shows no
On Oct 16, 2007, at 11:56 AM, Jeff Squyres wrote:
On Oct 16, 2007, at 11:20 AM, Brian Granger wrote:
Wow, that is quite a study of the different options. I will spend
some time looking over things to better understand the (complex)
situation. I will also talk with Lisandro Dalcin about what
e, but could you send all the compile/failure information?
http://www.open-mpi.org/community/help/
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
t include mpi.h from an extern "C"
block. It will fail, as you've noted. The proper solution is to not
be in an extern "C" block when including mpi.h.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
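A minimal C++ sketch of the point above, with the broken form shown in comments:

// Broken: wrapping mpi.h in an extern "C" block forces C linkage onto
// declarations (including the C++ bindings) that are not valid under it:
//
//   extern "C" {
//   #include <mpi.h>
//   }
//
// Correct: include mpi.h at file scope; the header handles C linkage
// for the C API internally.
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}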
ir code
from C++...
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Jan 1, 2008, at 12:47 AM, Adam C Powell IV wrote:
On Mon, 2007-12-31 at 20:01 -0700, Brian Barrett wrote:
Yeah, this is a complicated example, mostly because HDF5 should
really be covering this problem for you. I think your only option at
that point would be to use the #define
case. The
first question that needs to be asked is: for the AIX / PowerPC
machine you're running on, what is the right answer? (As an IBM
employee, you're certainly more qualified to answer that than I am...)
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
configure?
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
:~Op() in ccQqJJlF.o
"MPI::FinalizeIntercepts()", referenced from:
MPI::Finalize() in ccQqJJlF.o
"MPI::COMM_WORLD", referenced from:
__ZN3MPI10COMM_WORLDE$non_lazy_ptr in ccQqJJlF.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
memory management code.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
are either to
increase the max stack size or (more portably) just allocate
everything on the heap with malloc/new.
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
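A minimal C++ sketch of the heap option; the array size is invented for the example:

#include <cstddef>

int main() {
    // A large automatic array like this can overflow the default stack:
    //   double work[8 * 1024 * 1024];
    // Allocating it on the heap instead works under any stack limit.
    const std::size_t n = 8 * 1024 * 1024;
    double *work = new double[n];

    // ... use work ...

    delete[] work;
    return 0;
}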
rge memory grabs.
Could it be that this vmem is being grabbed by the OpenMPI memory
manager rather than directly by the app?
Ciao
Terry
--
Brian Barr
library. Hopefully, this will resolve
some of these headaches.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Sorry I haven't jumped in this thread earlier -- I've been a bit behind.
The multi-lib support worked at one time, and I can't think of why it
would have changed. The one condition is that libdir, includedir,
etc. *MUST* be specified relative to $prefix for it to work. It looks
like you
Neither the CM PML nor the MX MTL has been looked at for thread
safety. There's not much code to cause problems in the CM PML. The
MX MTL would likely need some work to ensure the restrictions Scott
mentioned are met (currently, there's no such guarantee in the MX MTL).
Brian
On Jun 11,
the MX MTL or BTL? Can you send a
small program that reproduces this abort?
Scott
On Jun 11, 2009, at 12:25 PM, Brian Barrett wrote:
Neither the CM PML nor the MX MTL has been looked at for thread
safety. There's not much code to cause problems in the CM PML.
The MX MTL would likely need some
On Jun 15, 2005, at 12:23 PM, Bogdan Costescu wrote:
On Tue, 14 Jun 2005, Brian Barrett wrote:
It would be nice if the c++ compiler wrapper were
installed under mpicxx, mpiCC, and mpic++ instead of
just the latter 2.
Yeah, we can do that, no problem.
Sorry for the silly question
future release plans beyond the beta, you might want to take a look
at the mailing list archives from last month.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
t. Should be in
tonight's nightly build.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Sep 28, 2005, at 3:46 PM, Borenstein, Bernard S wrote:
I posted an issue with the NASA Overflow 1.8 code and have traced
it further to a program failure in the malloc
areas of the code (data in these areas gets corrupted). Overflow
is mostly Fortran, but since it is an old program,
it
and
linker. Fixing the CFLAGS (it may actually be FFLAGS, but I think
it's the CFLAGS) should fix the problem.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
of
IRIX was this on?
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
. As for not
being able to find HPL.dat, I'm not sure why that would be a problem
- are you sure the file exists in the same directory as the xhpl
binary (on all nodes)?
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
y
tarballs that will be available tomorrow morning. Release candidates
and betas will be available at the URL below:
http://www.open-mpi.org/software/
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
for transmission.
I'm not aware of any packages that make using the STL and MPI easier,
but it's possible I've missed them.
Brian
--
Brian Barrett
Graduate Student, Open Systems Lab, Indiana University
http://www.osl.iu.edu/~brbarret/
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Daryl -
I'm unable to replicate your problem. I was testing on a Fedora Core
3 system with Clustermatic 5. Is it possible that you have a random
dso from a previous build in your installation path? How are you
running mpirun -- maybe I'm just not hitting the same code path you
are...
On Nov 17, 2005, at 9:20 AM, Brian Barrett wrote:
I'm unable to replicate your problem. I was testing on a Fedora Core
3 system with Clustermatic 5. Is it possible that you have a random
dso from a previous build in your installation path? How are you
running mpirun -- maybe I'm just
Wl,-rpath,pathB"
will do essentially the same thing and get you past the OMPI_UNIQ
bug. I believe (again, could be wrong) that most compilers will
parse that into the correct number of options to pass to the linker.
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Nov 18, 2005, at 9:37 AM, Brian Barrett wrote:
On Nov 18, 2005, at 2:54 AM, Dries Kimpe wrote:
I have a question about the --with-wrapper-ldflags option;
I need to pass 2 different rpaths to the wrapper compilers,
so I tried
- --with-wrapper-ldflags="-Wl,-rpath -Wl,pathA -Wl,-rpat
ca btl ^mvapi
They are essentially equivalent. The first will load the mvapi
component, but never schedule any fragments on it. The second will
just not load the mvapi component. Sometimes we actually anticipate
user requests - not often, but sometimes ;).
Brian
--
Brian Barrett
L
to time constraints it will likely not
be in the 1.0.1 release. It should be in the 1.0.2 release, although
I can't give you a time table as to when we will have a 1.0.2 release.
Thanks for the report!
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
irix_header_fi
he process to select which
packages to install).
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
?
http://www.open-mpi.org/~brbarret/download/openmpi-1.1a1r8384.tar.gz
If that works for you, we'll push the change into Open MPI 1.0.1
(it's a very small change).
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
tried the following version as you suggested
http://www.open-mpi.org/~brbarret/download/openmpi-1.1a1r8384.tar.gz
Things go a little further but the make still fails.
Please find the logs attached.
Pierre.
Brian Barrett wrote:
On Dec 5, 2005, at 4:05 PM, Pierre Valiron wrote:
I tried
PI. We do not currently
have a time table for releasing this work, but will announce when we
are ready for users to start testing our work.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
probably possible to use Bonjour / Zero Config for resource
discovery for Open MPI, but it really only helps in resource
discovery -- scheduling and allocating resources would still have to
be done. We do, however, support the use of Apple's XGrid system for
job startup.
Brian
--
Brian
n the OMPI build that fails). Again, doing this with a build of
Open MPI that contains debugging symbols would greatly increase the
usefulness to us.
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
I don't think it gets me off the hook with my boss ;), but
the more resources, the better for the Mac community.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
fix should be in the nightly builds in the next
couple of days, and will be part of the upcoming 1.0.2 release.
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
compilers,
can you run "mpicc -showme" and send the results to me? If you
aren't using the wrapper compilers, try adding the following to your
link flags:
-Wl,-u,_munmap -Wl,-multiply_defined,suppress
That should do the right magic to make the linker happier.
Brian
--
Bri
unately, you can not configure the firewall using the System
Preferences GUI to do this).
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
de 1
make: Fatal error: Command failed for target `all-recursive'
Any clue what went wrong this time? Thanks a lot!
David
* Correspondence *
From: Brian Barrett <brbar...@open-mpi.org>
Reply-To: Open MPI Users <us...@open-mpi.org>
Date: Tue, 14 Feb 2006 13:54:21 -0500
To
asier just to install in a shared
filesystem, if you have one.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
cc.
Hope this makes sense,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
(although it
sounds like you're going to be using native IB for communicate, it
never hurts to make sure TCP has a chance of working).
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
the cracks during
the runup to 1.0, since only a couple of functions in the MPI
interface deal with Fortran LOGICALs directly (other than sending
them around, that is).
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
either an
UltraSparc running Solaris or an Opteron running Linux. Could you
compile Open MPI with CFLAGS set to
"-g -O -xtarget=opteron -xarch=amd64"? Hopefully being able to see
the callstack with some line numbers will help a bit.
Brian
--
Brian Barrett
Open MPI developer
a Subversion checkout of the trunk -
it's much easier to feed patches back (and they're much more likely
to be accepted) if you are working from the same source as all the
core developers. More information is available at this URL:
http://www.open-mpi.org/svn/
Good luck!
Brian
--
Brian
se :-) ?
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
/MPI, for example), this is not the case and
they have slightly different semantics.
I will run it thru the debugger tomorrow and let you know of the
outcome.
Hopefully that will shed some light on the problem.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
http://www.open-mpi.org/faq/?category=running
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
hrc? Most .tcshrc files do, and the end is only evaluated
for interactive shells (which the one to start the orted is not).
This is probably why moving it to the top helped.
Anyway, glad to hear things are working for you.
Brian
From: Brian Barrett <brbar...@open-mpi.org>
Reply-To
to that version:
http://www.open-mpi.org/software/ompi/v1.0/
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
elease.
Thanks for reporting the issue and for the account. Let me know if
you have any further problems.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
11 info.si_errno:0(Error 0) si_code:1(SEGV_MAPERR)
Failing at addr:40
*** End of error message ***
Compiling with -g adds no more information.
Doh, that probably shouldn't be happening. I'll try to investigate
further once I have the pty issues sorted out.
Brian
--
Brian Barrett
Hi Julian -
Can you send me the top-level config.log file? The code you're
running into problems with shouldn't be compiled on a G4-based
machine. It exists for Mac OS X 10.3 on the G5, which could emit a
strange mixed mode assembly that supported 64 bit operations in a 32
bit
the 1.0.2 pre-releases if you run into trouble with the 1.0.1 release.
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
an option outside of channel bonding.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
em down, but it looks like an invalid request is
cropping up in a call to MPI_Wait and that is causing the
segmentation fault. There may be another issue, but I need to do a
bit more testing to be sure.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/