Hi,
I have installed openmpi on my machine and tested with some simple
programs such as ring and fpi. Everything works. When I tried to compile
my application, I got the following:
/work/wdx/ptmp/openmpi/openmpi-1.3.2/lib/libotf.a(OTF_File.o): In
function `OTF_File_open_zlevel':
Hmmm... those appear to be VampirTrace functions. I suspect the
VampirTrace folks will have to fix it.
For now, you can work around the problem by configuring with this:
--enable-contrib-no-build=vt
That will turn the offending code off.
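For reference, a reconfigure along those lines might look like this (the install prefix is illustrative, not from the original message):

```shell
# Rebuild Open MPI with the VampirTrace contrib code disabled
./configure --prefix=/usr/local --enable-contrib-no-build=vt
make all install
```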
Ralph
On Fri, May 1, 2009 at 9:07 AM, David Wong
So far, I'm unable to reproduce this problem. I haven't exactly
reproduced your test conditions, but then I can't. At a minimum, I
don't have exactly the code you ran (and I'm not convinced I want to!). So:
*) Can you reproduce the problem with the stand-alone test case I sent out?
*) Does the
Hi OpenMPI and HPC experts
This may or may not be the right forum to post this,
and I am sorry to bother those that think it is not.
I am trying to run the HPL benchmark on our cluster,
compiling it with Gnu and linking to
GotoBLAS (1.26) and OpenMPI (1.3.1),
both also Gnu-compiled.
I have got
If you are running on a single node, then btl=openib,sm,self would be
equivalent to btl=sm,self. OMPI is smart enough to know not to use IB if you
are on a single node, and instead uses the shared memory subsystem.
Are you saying that the inclusion of openib is causing a difference in
behavior,
Hi Gus,
For single-node runs, don't bother specifying the btl. Open MPI should
select the best option.
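A sketch of what that advice looks like in practice (process count and binary name are examples, not from the original message):

```shell
# Let Open MPI choose the best transports on its own:
mpirun -np 4 ./xhpl

# On a single node this is effectively equivalent to forcing shared memory:
mpirun -np 4 -mca btl sm,self ./xhpl
```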
Beyond that, the "80% of total RAM" recommendation is misleading. Base your N
on MemFree rather than MemTotal; InfiniBand can reserve quite a bit of memory.
Verify your /etc/security/limits.conf limits
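One quick way to check the locked-memory limits mentioned here (the memlock entries are the ones InfiniBand cares about; this is a diagnostic sketch, not from the original message):

```shell
# Locked-memory limit for the current shell; IB setups usually want "unlimited"
ulimit -l

# Look for memlock entries in the limits configuration
grep memlock /etc/security/limits.conf
```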
Dear all,
I am trying to install openmpi 1.3 on my laptop. I successfully
installed BLCR in /usr/local.
When installing openmpi using the following options:
./configure --prefix=/usr/local --with-ft=cr --enable-ft-thread
--enable-MPI-thread --with-blcr=/usr/local
I got the
Try replacing "--enable-MPI-thread" with "--enable-mpi-threads". That
should fix it.
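Putting the correction into the original configure line (prefix and BLCR path as given in the original message):

```shell
# Same configure invocation, with the misspelled flag corrected
./configure --prefix=/usr/local --with-ft=cr --enable-ft-thread \
    --enable-mpi-threads --with-blcr=/usr/local
```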
-- Josh
On May 1, 2009, at 4:17 PM, Kritiraj Sajadah wrote:
Dear all,
I am trying to install openmpi 1.3 on my laptop. I
successfully installed BLCR in /usr/local.
When installing openmpi
You might want to consider --enable-mpi-threads=yes
Regards
Yaakoub El Khamra
On Fri, May 1, 2009 at 3:17 PM, Kritiraj Sajadah wrote:
>
> Dear all,
> I am trying to install openmpi 1.3 on my laptop. I successfully
> installed BLCR in /usr/local.
>
> When
This typically means that one or more of the rcp/scp or rsh/ssh
commands failed. FileM should print an error message when one of
the copy commands fails. Try turning up the verbose level to 10 to
see if it indicates any problems:
-mca filem_rsh_verbose 10
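For example, passing that MCA parameter on the command line (application name and process count are placeholders):

```shell
# Run with FileM rsh verbosity turned up to see which copy command fails
mpirun -np 2 -mca filem_rsh_verbose 10 ./my_app
```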
Can you send me the MCA
Hi Ralph
Thank you very much for the prompt answer.
Sorry for being confusing in my original message.
Yes, I am saying that the inclusion of openib is causing the difference
in behavior.
It runs with "sm,self", it fails with "openib,sm,self".
I am as puzzled as you are, because I thought the
Gus -
Open MPI 1.3.0 & 1.3.1 attempted to use some controls in the glibc malloc
implementation to handle memory registration caching for InfiniBand.
Unfortunately, it was not only buggy in that it didn't work, but it also
had the side effect that certain memory usage patterns can cause the
Hi Jacob
Thank you very much for the suggestions and insight.
On an idle node MemFree is about 15599152 kB (14.8GB).
Applying the "80%" rule to it, I get a problem size N=38,440.
However, the HPL run fails with the memory leak problem
even if I use N=35,000,
with openib among the MCA btl
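As an aside, the "80%" arithmetic above can be reproduced from MemFree (the MemFree value is the one quoted above; the 0.8 fraction and 8-byte doubles are the usual HPL rule of thumb, and the exact figure depends on rounding conventions):

```shell
#!/bin/sh
# Estimate an HPL problem size N from free memory:
# the matrix needs N*N*8 bytes, so N = sqrt(fraction * mem_bytes / 8).
mem_kb=15599152   # MemFree in kB on an idle node (value quoted above)
n=$(awk -v m="$mem_kb" 'BEGIN { printf "%d", sqrt(0.8 * m * 1024 / 8) }')
echo "$n"   # in the same ballpark as the N=38,440 quoted above
```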
Hi Brian
Thank you very much for the instant help!
I just tried "-mca btl openib,sm,self" and
"-mca mpi_leave_pinned 0" together (still with OpenMPI 1.3.1).
So far so good, it passed through two NB cases/linear system solutions,
it is running the third NB, and the memory use hasn't increased.
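For anyone hitting the same problem, the combination being tested here would look like this on the command line (process count and binary name are examples):

```shell
# Keep the openib BTL but disable registration caching (the 1.3.x workaround)
mpirun -np 8 -mca btl openib,sm,self -mca mpi_leave_pinned 0 ./xhpl
```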
On Apr 30, 2009, at 5:17 PM, Barrett, Brian W wrote:
I have what's probably a stupid question, but I couldn't find the
answer on
the wiki.
The wiki has a lot of info, but it is probably incomplete. :-\
I've currently been building OMPI and the tests, then running the
tests, all in the
At ORNL, I do this (when I have time to run MTT and time
to check the results). I set up my script to check whether
it is running inside a batch job. If so, it runs the tests,
like so:
mtt --verbose \
    --print-time