Our scenario is that we are running python, then importing a module
written in Fortran.
We run via:
mpiexec -n 8 -x PYTHONPATH -x SIDL_DLL_PATH python tokHsmNP8.py
where the script calls into Fortran to call MPI_Init.
On 8 procs (but not on one) we get hangs in the code (on some machines but not
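For reference, a library that initializes MPI on behalf of the interpreter usually guards against double initialization. A minimal C-level sketch of that guard (the entry-point name is hypothetical; the same check is available from Fortran via MPI_INITIALIZED):

#include <mpi.h>

/* Hypothetical init entry point for a module loaded from Python/Fortran.
 * Calling MPI_Init twice is erroneous and can hang or abort, so check first. */
void module_mpi_init(void)
{
    int initialized = 0;
    MPI_Initialized(&initialized);
    if (!initialized) {
        MPI_Init(NULL, NULL);   /* NULL argc/argv is permitted since MPI-2 */
    }
}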
- Original Message -
From: "Jeff Squyres"
To: "Open MPI Users"
Sent: Thursday, July 9, 2009 10:10:30 AM (GMT-0500) America/New_York
Subject: Re: [OMPI users] bulding rpm
On Jul 9, 2009, at 10:22 AM, rahmani wrote:
> yes, they are Intel libraries and all are in LD_LIBRARY_PATH
> /usr/loca
I was able to get rid of the segfaults/invalid reads by disabling the
shared memory path. They still reported an error with uninitialized memory
in the same spot, which I believe is due to the struct being padded for
alignment. I added a suppression and was able to get past this part just
fine.
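The padding explanation above can be reproduced with a small, generic C example (not the actual struct from the report): any struct with internal padding that is written or sent as raw bytes will make a memory checker flag the padding as uninitialized, even though every field was assigned. Zeroing the whole struct first, or adding a suppression as described, silences the report.

#include <string.h>

/* Illustrative struct: 'tag' is followed by padding bytes on most ABIs. */
struct msg {
    char tag;
    int  value;
};

void fill(struct msg *m)
{
    memset(m, 0, sizeof *m);   /* zero the padding so every byte is defined */
    m->tag   = 'x';
    m->value = 42;
    /* transmitting sizeof(*m) raw bytes of *m now carries no undefined bytes */
}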
T
Although I have perhaps the least experience on the topic in this
list, I will take a shot; more experienced people, please correct me:
The MPI standard specifies communication mechanisms, not fault tolerance at
any level. You may achieve network tolerance at the IP level by
implementing 'equal cost mul
On Jul 9, 2009, at 10:22 AM, rahmani wrote:
yes, they are Intel libraries and all are in LD_LIBRARY_PATH
/usr/local/openmpi/intel/1.3.2/bin/mpif90 --showme
gfortran -I/usr/local/include -pthread -I/usr/local/lib -L/usr/local/lib -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export
- Original Message -
From: "Jeff Squyres"
To: "Open MPI Users"
Sent: Thursday, July 9, 2009 7:34:49 AM (GMT-0500) America/New_York
Subject: Re: [OMPI users] bulding rpm
On Jul 7, 2009, at 1:32 AM, rahmani wrote:
> it creates openmpi-1.3.2-1.x86_64.rpm with no error, but when I
> inst
On Jul 7, 2009, at 1:32 AM, rahmani wrote:
It creates openmpi-1.3.2-1.x86_64.rpm with no error, but when I
install it with rpm -ivh I see:
error: Failed dependencies:
libifcoremt.so.5()(64bit) is needed by openmpi-1.3.2-1.x86_64
libifport.so.5()(64bit) is needed by openmpi-1.3.
On Jul 7, 2009, at 11:47 AM, Justin wrote:
(Sorry if this is posted twice, I sent the same email yesterday but it
never appeared on the list).
Sorry for the delay in replying. FWIW, I got your original message as
well.
Hi, I am attempting to debug a memory corruption in an MPI program
Open MPI includes VampirTrace to generate tracing info. In addition
to Vampir (a commercial product), there are a few free tools that can
read VT's Open Trace Format (OTF) output -- I'll leave this up to
the VT guys to describe.
Alternatively, you could also use MPE (http://www.mcs.anl
Hello,
Some weeks ago, I reported a problem using MPI IO in OpenMPI 1.3,
which did not occur with OpenMPI 1.2 or MPICH2.
The bug was encountered with the Code_Saturne CFD tool
(http://www.code-saturne.org),
and seemed to be an issue with individual file pointers, as another mode using
explicit o
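The truncated sentence above appears to contrast the two MPI-IO access modes (individual file pointers versus, presumably, explicit offsets); roughly, the two look like this minimal C sketch:

#include <mpi.h>

/* Individual file pointer: each rank's pointer advances with its own writes. */
void write_with_pointer(MPI_File fh, int *buf, int count)
{
    MPI_Status st;
    MPI_File_write(fh, buf, count, MPI_INT, &st);
}

/* Explicit offset: the file position is passed on every call instead. */
void write_at_offset(MPI_File fh, MPI_Offset off, int *buf, int count)
{
    MPI_Status st;
    MPI_File_write_at(fh, off, buf, count, MPI_INT, &st);
}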
Hi Jeff,
I tried your suggestion of inserting MPI_Barrier every few iterations, but it
doesn't work; in fact, it became even slower.
I want to try tracing the communication activity; can you give me some more
details about how to use mpitrace?
Thank you for your attention.
regards
Lin
-Origin
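The experiment being discussed above, dropping an MPI_Barrier in every few iterations to keep the ranks loosely synchronized, is roughly the following sketch; the interval of 10 is an arbitrary placeholder:

#include <mpi.h>

void iterate(MPI_Comm comm, int niter)
{
    for (int i = 0; i < niter; i++) {
        /* ... per-iteration computation and communication ... */

        if (i % 10 == 0) {       /* resynchronize every few iterations */
            MPI_Barrier(comm);
        }
    }
}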
Hi all,
I want to know whether Open MPI supports network and process fault tolerance.
If there is an example demonstrating these features, that would be great.
Regards,
--
Vipin K.
Research Engineer,
C-DOTB, India
I guess this question has come up before:
https://svn.open-mpi.org/trac/ompi/ticket/1367
On Thu, Jul 9, 2009 at 10:35 AM, Lenny Verkhovsky <lenny.verkhov...@gmail.com> wrote:
> BTW, What kind of threads Open MPI supports ?
> I found in the https://svn.open-mpi.org/trac/ompi/browser/trunk/README tha
BTW, what kind of threads does Open MPI support?
I found in the https://svn.open-mpi.org/trac/ompi/browser/trunk/README that
we support MPI_THREAD_MULTIPLE,
and found a few unclear mails about MPI_THREAD_FUNNELED and
MPI_THREAD_SERIALIZED.
I also found nothing in the FAQ :(.
Thanks, Lenny.
On Thu, Jul 2, 2
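For context, the three levels being asked about are requested and reported through MPI_Init_thread; a minimal C check, independent of what any particular Open MPI build actually grants:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Request the highest level; 'provided' reports what the library grants:
     * MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED,
     * or MPI_THREAD_MULTIPLE. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        printf("requested MPI_THREAD_MULTIPLE, got level %d\n", provided);
    }
    MPI_Finalize();
    return 0;
}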