Here is an example of my data measured in seconds:
communication overhead = commuT + migraT + print,
compuT is the computational cost,
totalT = compuT + communication overhead,
overhead% denotes the percentage of communication overhead.
intelmpi (walltime=00:03:51)
iter [commuT migraT
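A minimal sketch (not from the original post) of how these quantities relate, assuming the per-iteration timings commuT, migraT, print, and compuT are already collected in seconds and overhead% is taken relative to totalT:

#include <stdio.h>

int main(void)
{
    /* hypothetical per-iteration timings, in seconds (illustrative values only) */
    double commuT = 0.42, migraT = 0.13, print = 0.05, compuT = 3.10;

    double comm_overhead = commuT + migraT + print;        /* communication overhead */
    double totalT        = compuT + comm_overhead;         /* totalT */
    double overhead_pct  = 100.0 * comm_overhead / totalT; /* overhead%, assumed relative to totalT */

    printf("overhead = %.3f s, totalT = %.3f s, overhead%% = %.2f\n",
           comm_overhead, totalT, overhead_pct);
    return 0;
}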
Hi Ralph, congratulations on releasing the new openmpi-1.7.5.
By the way, openmpi-1.7.5rc3 has been slowing down our application
with smaller testing data sizes, where the time-consuming part
of our application is the so-called sparse solver. It's negligible
with medium or large data sizes - more practi
MPI_Testsome seems to have returned successfully, with a positive
outcount, and yet given me a negative index, -4432. Can anyone help me
understand what's going on?
The call is from R, so there might be a translation issue. My first
thought was that it might be 32- vs. 64-bit integers, but
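For reference, a hedged sketch (not the poster's actual code, which is called from R) of how MPI_Testsome is typically used on the C side: valid entries of the index array lie in [0, incount-1], and outcount may also be MPI_UNDEFINED, so a value like -4432 points at memory corruption or an integer-size mismatch in the binding rather than normal MPI behavior:

#include <mpi.h>
#include <stdio.h>

/* sketch: poll an array of outstanding requests and sanity-check the
 * indices that MPI_Testsome hands back */
void poll_requests(int incount, MPI_Request reqs[])
{
    int outcount;
    int indices[incount];           /* must be C int, not a 64-bit type */
    MPI_Status statuses[incount];

    MPI_Testsome(incount, reqs, &outcount, indices, statuses);

    if (outcount == MPI_UNDEFINED)  /* no active requests left */
        return;

    for (int i = 0; i < outcount; i++) {
        if (indices[i] < 0 || indices[i] >= incount)
            fprintf(stderr, "bogus index %d from MPI_Testsome\n", indices[i]);
    }
}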
As for performance, my 4-node (64-process), 3-hour job indicates Intel MPI
and Open MPI have comparable overall runtimes: Intel MPI takes 2:53 while Open
MPI takes 3:10.
It is interesting that all my MPI_Wtime calls show Open MPI is faster (up to
twice or even more) than Intel MPI in communication for
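For context, commuT-style numbers are presumably collected by bracketing each communication phase with MPI_Wtime, along these lines (illustrative sketch; the names are not from the original code):

#include <mpi.h>

double commu_time = 0.0;   /* accumulated communication seconds on this rank */

/* sketch: time one communication phase and add it to the running total */
void exchange_sums(double *sendbuf, double *recvbuf, int n, MPI_Comm comm)
{
    double t0 = MPI_Wtime();
    MPI_Allreduce(sendbuf, recvbuf, n, MPI_DOUBLE, MPI_SUM, comm);
    commu_time += MPI_Wtime() - t0;
}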
On 03/20/2014 04:48 PM, Beichuan Yan wrote:
Ralph and Noam,
Thanks for the clarifications; they are important.
I could be wrong in my understanding of the filesystem.
Spirit appears to use a scratch directory for
shared memory backing, which is mounted on Lustre,
and does not seem to have local dir
Good for me to read it.
-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gus Correa
Sent: Thursday, March 20, 2014 15:00
To: Open MPI Users
Subject: Re: [OMPI users] OpenMPI job initializing problem
On 03/20/2014 02:13 PM, Ralph Castain wrote:
>
> On Mar 20,
On 03/20/2014 02:13 PM, Ralph Castain wrote:
On Mar 20, 2014, at 9:48 AM, Beichuan Yan wrote:
Hi,
Today I tested OMPI v1.7.5rc5 and surprisingly, it works like a charm!
I found discussions related to this issue:
1. http://www.open-mpi.org/community/lists/users/2011/11/17688.php
The correct
Yeah, those are local - just in RAM, which is fine and quite typical. In 1.7.5,
we replaced some of the rendezvous logic, which may be why the problem went
away.
On Mar 20, 2014, at 1:48 PM, Beichuan Yan wrote:
> Ralph and Noam,
>
> Thanks for the clarifications; they are important. I could
Ralph and Noam,
Thanks for the clarifications; they are important. I could be wrong in
my understanding of the filesystem.
Spirit appears to use a scratch directory for shared memory backing, which is
mounted on Lustre, and does not seem to have local directories or does not
allow users to change TEMP
On Mar 20, 2014, at 2:13 PM, Ralph Castain wrote:
>
> On Mar 20, 2014, at 9:48 AM, Beichuan Yan wrote:
>
>> Hi,
>>
>> Today I tested OMPI v1.7.5rc5 and surprisingly, it works like a charm!
>>
>> I found discussions related to this issue:
>>
>> 1. http://www.open-mpi.org/community/lists/user
On Mar 20, 2014, at 9:48 AM, Beichuan Yan wrote:
> Hi,
>
> Today I tested OMPI v1.7.5rc5 and surprisingly, it works like a charm!
>
> I found discussions related to this issue:
>
> 1. http://www.open-mpi.org/community/lists/users/2011/11/17688.php
> The correct solution here is to get your sys a
Jeff, here you go:
(3) $ cd ompi/mpi/fortran/use-mpi-ignore-tkr
total 2888
-rw-r--r--  1 fortran  staff  1.7K Apr 13  2013 Makefile.am
-rw-r--r--  1 fortran  staff  215K Dec 17 21:09 mpi-ignore-tkr-interfaces.h.in
-rw-r--r--  1 fortran  staff   39K Dec 17 21:09 mpi-ignore-tkr-file-interfaces.h.
On Mar 20, 2014, at 12:48 PM, Beichuan Yan wrote:
> 2. http://www.open-mpi.org/community/lists/users/2011/11/17684.php
> In the upcoming OMPI v1.7, we revamped the shared memory setup code such that
> it'll actually use /dev/shm properly, or use some mechanism other than
> a mmap file bac
Hi,
Today I tested OMPI v1.7.5rc5 and surprisingly, it works like a charm!
I found discussions related to this issue:
1. http://www.open-mpi.org/community/lists/users/2011/11/17688.php
The correct solution here is to get your sys admin to make /tmp local. Making
/tmp NFS-mounted across multiple no
Very odd. Your logfiles indicate that OMPI's configure found the right ignore
TKR syntax and decided to build the ignore TKR mpi module.
-
checking Fortran compiler ignore TKR syntax... not cached; checking variants
checking for Fortran compiler support of TYPE(*), DIMENSION(*)... no
check
Jeff,
It does not:
Directory:
/Users/fortran/MPI/src/openmpi-1.7.4/ompi/mpi/fortran/use-mpi-ignore-tkr/.libs
(106) $ ls -ltr
total 1560
-rw-r--r--  1 fortran  staff  784824 Mar 18 20:47 mpi-ignore-tkr.o
-rw-r--r--  1 fortran  staff    1021 Mar 18 20:47 libmpi_usempi_ignore_tkr.lai
lrwxr-xr-x  1 f
Sorry for the delay; we're working on releasing 1.7.5 and that's consuming all
my time...
That's a strange error. Can you confirm whether
ompi_build_dir/ompi/mpi/fortran/use-mpi-ignore-tkr/.libs/libmpi_usempi_ignore_tkr.0.dylib
exists or not?
Can you send all the info listed here:
http://