On Wed, 27 Feb 2008, David Gunter wrote:
We are trying to build OMPI-1.2.4 for a BProc/Ethernet-based cluster.
Here are the configure options:
./configure --prefix=${PREFIX} \
--libdir=${LIBDIR} \
--enable-shared \
--with-bproc \
--with-tm=/opt/PBS \
--with-io_romio_flags=--with-file-
Hi, I am running Red Hat Linux at school.
I am trying to compile Open MPI and it gives me
this error:
make[3]: Entering directory `/home/acct2/babinsk3/research/openmpi-1.2.5/Albert/ompi/mpi/cxx'
/bin/sh ../../../libtool --tag=CXX --mode=link g++ -O3 -DNDEBUG -m64 -finline-functions -pthread -e
Hello Jenny and David,
On Wednesday 27 February 2008 17:42, David Gunter wrote:
> We are trying to build OMPI-1.2.4 for a BProc/Ethernet-based cluster.
> Here are the configure options:
>
> ./configure --prefix=${PREFIX} \
> --libdir=${LIBDIR} \
> --enable-shared \
> --with-bproc \
> --
Scott,
I can replicate this on Big Red. It seems to be a libtool problem. I'll
investigate...
Thanks,
Tim
Teige, Scott W wrote:
Hi all,
Attempting a build of 1.2.5 on a ppc machine, particulars:
uname -a
Linux s10c2b2 2.6.5-7.286-pseries64-lustre-1.4.10.1 #2 SMP Tue Jun 26 11:36:04 EDT 200
Hi all,
Attempting a build of 1.2.5 on a ppc machine, particulars:
uname -a
Linux s10c2b2 2.6.5-7.286-pseries64-lustre-1.4.10.1 #2 SMP Tue Jun 26 11:36:04 EDT 2007 ppc64 ppc64 ppc64 GNU/Linux
Error message (repeated many times):
../../../opal/.libs/libopen-pal.a(dlopen.o)(.opd+0x0): In function `__a
Brian is completely right. Here is a more detailed description of this
problem.
Upon receiving a fragment from the BTL (the lower layer), we try to match
it against an MPI request. If the match fails, then we get a fragment
from the free_list (via the blocking call to FREE_LIST_WAIT) and copy
the
This error indicates that the open of a shared library failed. It is
generated by the dynamic loader, which Open MPI uses to load its
components. The specific error you get is clear about this: one of the
dependencies of the TM RAS component is missing.
If you log on to one of the compute nodes an
We are trying to build OMPI-1.2.4 for a BProc/Ethernet-based cluster.
Here are the configure options:
./configure --prefix=${PREFIX} \
--libdir=${LIBDIR} \
--enable-shared \
--with-bproc \
--with-tm=/opt/PBS \
--with-io_romio_flags=--with-file-system=ufs+nfs \
--disable-pty_support
Bummer; ok.
On Feb 27, 2008, at 11:01 AM, Brian W. Barrett wrote:
I played with this to fix some things in ORTE at one point, and it's
a very dangerous slope -- you're essentially guaranteeing you have a
deadlock case. Now instead of running off the stack, you'll
deadlock. The issue is th
I played with this to fix some things in ORTE at one point, and it's a
very dangerous slope -- you're essentially guaranteeing you have a
deadlock case. Now instead of running off the stack, you'll deadlock.
The issue is that we call opal_progress to wait for something to happen
deep in the bo
On Feb 23, 2008, at 10:05 AM, Mathias PUETZ wrote:
1. Could you please fix the bug above in the configure script?
Thanks for the detailed analysis. Can you confirm that this patch
works for you before I commit it:
Index: config/ompi_config_asm.m4
Gleb / George --
Is there an easy way for us to put a cap on max recursion down in
opal_progress? Just put a counter in opal_progress() such that if
it exceeds some max value, return success without doing anything (if
opal_progress_event_flag indicates that nothing *needs* to be done)?