Hello all,
I am unable to get past ./configure as ICC fails on C++ tests (see
attached ompi-output.tar.gz). Configure was called both without and with
sourcing `/opt/intel/cc/10.1.xxx/bin/iccvars.sh`, as per one of the
invocation options in icc's documentation. I was unable to find the relevant
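For reference, the environment-then-configure sequence described above would look roughly like this (a sketch only; the `10.1.xxx` path component is a placeholder copied from the post, and the compiler variable names assume the usual Intel icc/icpc/ifort setup):

```shell
# hypothetical sketch: load the Intel compiler environment first,
# then point Open MPI's configure at the Intel compilers
source /opt/intel/cc/10.1.xxx/bin/iccvars.sh
./configure CC=icc CXX=icpc F77=ifort FC=ifort
```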
Jeff,
Thanks for the detailed discussion. It certainly makes things a lot clearer,
just as I was giving up my hopes for a reply.
The app is fairly heavy on communication (~10k messages per minute) and is
also embarrassingly parallel. Taking this into account, I think I'll
readjust my resilience
On Dec 6, 2007, at 10:14 AM, Sajjad Tabib wrote:
Is it possible to disable ptmalloc2 at runtime by disabling the
component?
Nope -- this one has to be compiled and linked in ahead of time.
Sorry. :-\
--
Jeff Squyres
Cisco Systems
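Since it has to be decided at build time, the usual route is a configure flag; a hedged sketch (check `./configure --help` for your Open MPI version, as the exact spelling of the option has varied across release series):

```shell
# sketch: build Open MPI without the ptmalloc2 memory manager compiled in
./configure --without-memory-manager
make all install
```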
Is there a way to disable this at runtime? Also, can a user app use
mallopt options without interfering with the memory managers?
We have these options set but are still getting memory corruption that
moves around realloc calls in the program.
mallopt(M_MMAP_MAX, 0);
mallopt(M_TRIM_THRESHOLD, -1);
Hi,
Is it possible to disable ptmalloc2 at runtime by disabling the component?
Thanks,
Sajjad Tabib
From: Jeff Squyres (sent by users-boun...@open-mpi.org)
Date: 12/06/07 07:44 AM
To: Open MPI Users
On Dec 6, 2007, at 9:54 AM, Durga Choudhury wrote:
Automatically striping large messages across multiple NICs is
certainly a very nice feature; I was not aware that OpenMPI does
this transparently. (I wonder if other MPI implementations do this
or not). However, I have the following
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> On Behalf Of Jeff Squyres
>
> If Intel is telling you that they don't support your glibc version, I
> wouldn't be surprised by random segv's in any application that you
> build on that platform
Automatically striping large messages across multiple NICs is certainly a
very nice feature; I was not aware that OpenMPI does this transparently. (I
wonder if other MPI implementations do this or not). However, I have the
following concern: Since the communication over an ethernet NIC is most
Wow, that's quite a .sig. :-)
Open MPI will automatically stripe large messages across however many
NICs you have. So you shouldn't need to use multiple threads.
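As an illustration of that striping behavior, the set of TCP interfaces Open MPI uses can be steered with MCA parameters; a hedged sketch (the interface names eth0/eth1 and the application name are assumptions):

```shell
# hypothetical example: restrict the TCP BTL to two NICs; Open MPI then
# stripes large messages across both of them transparently
mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0,eth1 -np 4 ./my_app
```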
The threading support in the OMPI v1.2 series is broken; it's not
worth using. There's a big warning in configure when you
On 12/5/07 8:47 AM, "Brian Dobbins" wrote:
> Hi Josh,
>
>> I believe the problem is that you are only applying the MCA
>> parameters to the first app context instead of all of them:
>
> Thank you very much... applying the parameters with -gmca works fine with the
> test
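For the record, the distinction being made is that a plain -mca applies only to the app context it precedes, while -gmca applies globally to every app context on the command line; a sketch with hypothetical executables a.out and b.out:

```shell
# -gmca: the MCA parameter applies to both app contexts
mpirun -gmca btl tcp,self -np 2 ./a.out : -np 2 ./b.out

# plain -mca: only the first app context (./a.out) gets the parameter
mpirun -mca btl tcp,self -np 2 ./a.out : -np 2 ./b.out
```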
If Intel is telling you that they don't support your glibc version, I
wouldn't be surprised by random segv's in any application that you
build on that platform (including Open MPI).
On Dec 5, 2007, at 9:59 AM, de Almeida, Valmor F. wrote:
Attached is the config.log file for intel
On Dec 5, 2007, at 1:42 PM, Karol Mroz wrote:
Removal of .ompi_ignore should not create build problems for anyone who
is running without some form of SCTP support. To test this claim, we
built Open MPI with .ompi_ignore removed and no SCTP support on both an
Ubuntu Linux and an OS X machine.
To add more info, here is a backtrace of the spawned (hung) program.
(gdb) bt
#0 0xe410 in __kernel_vsyscall ()
#1 0x402cdaec in sched_yield () from /lib/tls/libc.so.6
#2 0x4016360c in opal_progress () at runtime/opal_progress.c:301
#3 0x403a9b29 in mca_oob_tcp_msg_wait (msg=0x805cc70,
Hi Edgar,
I changed the spawned program from /bin/hostname to a very simple MPI
program as below. But now, the slave hangs right at MPI_Init line.
What could the issue be?
slave.c
#include <stdio.h>      /* header names were stripped in the archive; */
#include <stdlib.h>     /* typical choices reconstructed here */
#include <string.h>
#include "mpi.h"
#include <sys/types.h>  /* standard system types */
#include <unistd.h>
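For comparison, a minimal spawned-child ("slave") program typically checks for a parent communicator right after MPI_Init; a sketch under the assumption of a standard MPI_Comm_spawn setup (this is not the poster's actual slave.c):

```c
/* sketch of a minimal MPI_Comm_spawn child program */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Comm parent;
    int rank;

    MPI_Init(&argc, &argv);   /* the call reported to hang */
    MPI_Comm_get_parent(&parent);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (parent == MPI_COMM_NULL)
        printf("rank %d: not started via MPI_Comm_spawn\n", rank);
    else
        printf("rank %d: spawned child, parent communicator present\n", rank);

    MPI_Finalize();
    return 0;
}
```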