Hello all,
I am unable to get past ./configure as ICC fails on C++ tests (see
attached ompi-output.tar.gz). Configure was called without and then with
sourcing `/opt/intel/cc/10.1.xxx/bin/iccvars.sh`, as per one of the
invocation options in icc's doc. I was unable to find the relevant
(well
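For reference, a hedged sketch of the two invocations being compared (the
CC=icc / CXX=icpc settings are my assumption, not from the original post):

  # attempt 1: plain configure
  ./configure CC=icc CXX=icpc

  # attempt 2: source the Intel environment script first, per icc's docs
  . /opt/intel/cc/10.1.xxx/bin/iccvars.sh
  ./configure CC=icc CXX=icpc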
Jeff,
Thanks for the detailed discussion. It certainly makes things a lot clearer,
just as I was giving up hope of a reply.
The app is fairly heavy on communication (~10k messages per minute) and is
also embarrassingly parallel. Taking this into account, I think I'll
readjust my resilience ex
On Dec 6, 2007, at 10:14 AM, Sajjad Tabib wrote:
Is it possible to disable ptmalloc2 at runtime by disabling the
component?
Nope -- this one has to be compiled and linked in ahead of time.
Sorry. :-\
--
Jeff Squyres
Cisco Systems
Is there a way to disable this at runtime? Also, can a user app use
mallopt options without interfering with the memory managers?
We have these options set but are getting memory corruption that moves
around realloc calls in the program.
mallopt(M_MMAP_MAX, 0);
mallopt(M_TRIM_THRESHOLD, -1);
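A minimal compilable sketch of those calls in context (glibc's <malloc.h>
interface; the surrounding main() is illustrative, not from the original
app):

  #include <malloc.h>

  int main(void)
  {
      /* never satisfy allocations via mmap(); keep everything on the heap */
      mallopt(M_MMAP_MAX, 0);
      /* never give heap memory back to the kernel by trimming */
      mallopt(M_TRIM_THRESHOLD, -1);
      /* ... application code ... */
      return 0;
  }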
Hi,
Is it possible to disable ptmalloc2 at runtime by disabling the component?
Thanks,
Sajjad Tabib
Jeff Squyres
Sent by: users-boun...@open-mpi.org
12/06/07 07:44 AM
Please respond to: Open MPI Users
To: Open MPI Users
cc:
Subject: Re: [OMPI users] Using mtrace with openmpi segfaults
On Dec 6, 2007, at 9:54 AM, Durga Choudhury wrote:
Automatically striping large messages across multiple NICs is
certainly a very nice feature; I was not aware that OpenMPI does
this transparently. (I wonder if other MPI implementations do this
or not). However, I have the following concern
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> On Behalf Of Jeff Squyres
>
> If Intel is telling you that they don't support your glibc version, I
> wouldn't be surprised by random segv's in any application that you
> build on that platform (i
Automatically striping large messages across multiple NICs is certainly a
very nice feature; I was not aware that OpenMPI does this transparently. (I
wonder if other MPI implementations do this or not). However, I have the
following concern: since the communication over an Ethernet NIC is most
like
Wow, that's quite a .sig. :-)
Open MPI will automatically stripe large messages across however many
NICs you have. So you shouldn't need to use multiple threads.
The threading support in the OMPI v1.2 series is broken; it's not
worth using. There's a big warning in configure when you enab
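For what it's worth, a hedged example of steering the striping: the TCP BTL
uses all usable interfaces by default, and the btl_tcp_if_include MCA
parameter restricts it to a given list (the interface names here are
assumptions):

  mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0,eth1 -np 4 ./app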
On 12/5/07 8:47 AM, "Brian Dobbins" wrote:
> Hi Josh,
>
>> I believe the problem is that you are only applying the MCA
>> parameters to the first app context instead of all of them:
>
> Thank you very much... applying the parameters with -gmca works fine with the
> test case (and I'll try t
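A hedged sketch of the distinction (application names are placeholders):

  # -mca before the first app context applies only to that context
  mpirun -mca btl tcp,self -np 2 ./app1 : -np 2 ./app2

  # -gmca applies the MCA parameter to every app context
  mpirun -gmca btl tcp,self -np 2 ./app1 : -np 2 ./app2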
If Intel is telling you that they don't support your glibc version, I
wouldn't be surprised by random segv's in any application that you
build on that platform (including Open MPI).
On Dec 5, 2007, at 9:59 AM, de Almeida, Valmor F. wrote:
Attached is the config.log file for intel 9.1.05
It certainly does make sense to use MPI for such a setup. But there
are some important things to consider:
1. MPI, at its heart, is a communications system. There's lots of
other bells and whistles (e.g., starting up a whole bunch of processes
in tandem), but at the core: it's all about p
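To make that concrete, a minimal hedged sketch of the core send/receive
pattern (illustrative, not from this thread):

  #include <stdio.h>
  #include "mpi.h"

  int main(int argc, char **argv)
  {
      int rank, value = 0;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 0) {
          value = 42;
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          printf("rank 1 received %d\n", value);
      }
      MPI_Finalize();
      return 0;
  }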
On Dec 5, 2007, at 9:57 PM, Tee Wen Kai wrote:
I have installed openmpi-1.2.3. My system has two ethernet ports.
Thus, I am trying to make use of both ports to speed up the
communication process by using OpenMP to split into two threads.
Why not use Ethernet Bonding at the system level, it
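A hedged sketch of what such a bonding setup looked like in that era (device
names and addresses are assumptions):

  # load the bonding driver in round-robin (striping) mode
  modprobe bonding mode=balance-rr miimon=100
  ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
  # enslave both physical NICs to the bond (names are assumptions)
  ifenslave bond0 eth0 eth1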
I have not tried to use mtrace myself. But I can see how it would be
problematic with OMPI's internal use of ptmalloc2. If you are not
using InfiniBand or Myrinet over GM, you don't need OMPI to have an
internal copy of ptmalloc2. You can disable OMPI's ptmalloc2 by
configuring with:
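(The flag itself is truncated in the archive; the Open MPI configure option
documented for this in the 1.2 era is --without-memory-manager, so the
invocation was presumably along these lines:)

  ./configure --without-memory-manager ...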
Hello,
Working with different MPI flavours, I have encountered limits when using
MPI_spawn and threads.
The limit is that only a certain number of spawns can be made; beyond that
limit the application crashes.
I am trying to overcome the limitation by launching a new process that will
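A hedged sketch of the kind of loop that hits such a limit ("./slave" and
the iteration count are placeholders):

  #include <stdio.h>
  #include "mpi.h"

  int main(int argc, char **argv)
  {
      int i;
      MPI_Init(&argc, &argv);
      for (i = 0; i < 1000; i++) {   /* crashes once the limit is reached */
          MPI_Comm child;
          MPI_Comm_spawn("./slave", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                         0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
          MPI_Comm_disconnect(&child);  /* release the intercommunicator */
      }
      MPI_Finalize();
      return 0;
  }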
To add more info, here is a backtrace of the spawned (hung) program.
(gdb) bt
#0 0xe410 in __kernel_vsyscall ()
#1 0x402cdaec in sched_yield () from /lib/tls/libc.so.6
#2 0x4016360c in opal_progress () at runtime/opal_progress.c:301
#3 0x403a9b29 in mca_oob_tcp_msg_wait (msg=0x805cc70, rc
Hi Edgar,
I changed the spawned program from /bin/hostname to a very simple MPI
program, as below. But now the slave hangs right at the MPI_Init line.
What could the issue be?
slave.c
#include
#include
#include
#include "mpi.h"
#include <sys/types.h> /* standard system types */
#include /*
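(Most of the header names above were eaten by the archive's HTML handling.
A hedged reconstruction of a minimal slave that only initializes and
finalizes:)

  #include <stdio.h>
  #include "mpi.h"

  int main(int argc, char **argv)
  {
      MPI_Comm parent;
      MPI_Init(&argc, &argv);        /* the reported hang point */
      MPI_Comm_get_parent(&parent);  /* intercomm back to the spawning job */
      printf("slave: past MPI_Init\n");
      MPI_Finalize();
      return 0;
  }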