[OMPI users] try to understand heat equation 2D mpi version

2010-10-21 Thread christophe petit
Hello,

I'm studying the parallelized version of a 2D heat equation solver in
order to understand Cartesian topology and the famous "MPI_CART_SHIFT".
Here's my problem at this part of the code:


---
call MPI_INIT(infompi)
  comm = MPI_COMM_WORLD
  call MPI_COMM_SIZE(comm,nproc,infompi)
  call MPI_COMM_RANK(comm,me,infompi)
!

..


! Create 2D cartesian grid
  periods(:) = .false.

  ndims = 2
  dims(1)=x_domains
  dims(2)=y_domains
  CALL MPI_CART_CREATE(MPI_COMM_WORLD, ndims, dims, periods, &
reorganisation,comm2d,infompi)
!
! Identify neighbors
!
  NeighBor(:) = MPI_PROC_NULL
! Left/West and right/East neighbors
  CALL MPI_CART_SHIFT(comm2d,0,1,NeighBor(W),NeighBor(E),infompi)

  print *,'rank=', me
  print *, 'here first mpi_cart_shift : neighbor(w)=',NeighBor(W)
  print *, 'here first mpi_cart_shift : neighbor(e)=',NeighBor(E)

...

---

with x_domains = y_domains = 2,

and at execution ("mpirun -np 4 ./explicitPar") I get:

 rank=   0
here first mpi_cart_shift : neighbor(w)=  -1
 here first  mpi_cart_shift : neighbor(e)=   2
rank=   3
 here first mpi_cart_shift : neighbor(w)=   1
 here first mpi_cart_shift : neighbor(e)=  -1
 rank=   2
 here first mpi_cart_shift : neighbor(w)=   0
 here first mpi_cart_shift : neighbor(e)=  -1
 rank=   1
 here first mpi_cart_shift : neighbor(w)=  -1
 here first mpi_cart_shift : neighbor(e)=   3

I saw that if the rank is outside the topology and there is no periodicity, the
rank should be equal to MPI_UNDEFINED, which is assigned to -32766 in "mpif.h".
So why did I get the value "-1"?
On my MacBook Pro, I get the value "-2".

Thanks in advance.
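For reference: with a non-periodic dimension, MPI_CART_SHIFT returns MPI_PROC_NULL
(not MPI_UNDEFINED) for a neighbor that falls off the edge of the grid, and the
numeric value of MPI_PROC_NULL is implementation-defined, so it is safer to compare
against the named constant than against a literal such as -1 or -2. Below is a
minimal C sketch of the same neighbor lookup; it is only an illustration (the 2x2
non-periodic grid and the name comm2d are taken from the Fortran code above, not
from the actual program):

---
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, west, east;
    int dims[2]    = {2, 2};   /* x_domains = y_domains = 2 */
    int periods[2] = {0, 0};   /* no periodicity in either dimension */
    MPI_Comm comm2d;

    MPI_Init(&argc, &argv);
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &comm2d);
    MPI_Comm_rank(comm2d, &rank);

    /* Shift by one along dimension 0: west is the source, east the destination. */
    MPI_Cart_shift(comm2d, 0, 1, &west, &east);

    /* Compare against the named constant, not against -1 or -2. */
    if (west == MPI_PROC_NULL)
        printf("rank %d: no west neighbor\n", rank);
    if (east == MPI_PROC_NULL)
        printf("rank %d: no east neighbor\n", rank);

    MPI_Comm_free(&comm2d);
    MPI_Finalize();
    return 0;
}
---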


Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Storm Zhang
I got it. You're right, it might not be related to MPI. I need to figure out
the possible reason for it.
Again, thanks for your help.

Linbao

On Thu, Oct 21, 2010 at 12:06 PM, Eugene Loh  wrote:

>  My main point was that, while what Jeff said about the shortcomings of
> calling timers after Barriers was true, I wanted to come in defense of this
> timing strategy.  Otherwise, I was just agreeing with him that it seems
> implausible that commenting out B should influence the timing of A, but I'm
> equally clueless as to what the real issue is.  I have seen cases where the
> presence or absence of code that isn't executed can influence timings
> (perhaps because code will come out of the instruction cache differently),
> but all that is speculation.  It's all a guess that what you're really
> seeing isn't really MPI related at all.
>
> Storm Zhang wrote:
>
> Hi, Eugene, You said:
> " The bottom line here is that from a causal point of view it would seem
> that B should not impact the timings.  Presumably, some other variable is
> actually responsible here."  Could you explain the second sentence in more
> detail? Thanks a lot.
>
> On Thu, Oct 21, 2010 at 9:58 AM, Eugene Loh  wrote:
>
>
>> Jeff Squyres wrote:
>>
>> MPI::COMM_WORLD.Barrier();
>>> if(rank == master) t1 = clock();
>>> "code A";
>>> MPI::COMM_WORLD.Barrier();
>>> if(rank == master) t2 = clock();
>>> "code B";
>>>
>>> Remember that the time that individual processes exit barrier is not
>>> guaranteed to be uniform (indeed, it most likely *won't* be the same).  MPI
>>> only guarantees that a process will not exit until after all processes have
>>> entered.  So taking t2 after the barrier might be a bit misleading, and may
>>> cause unexpected skew.
>>>
>>>
>>  The barrier exit times are not guaranteed to be uniform, but in practice
>> this style of timing is often the best (or only practical) tool one has for
>> measuring the collective performance of a group of processes.
>>
>>  Code B *probably* has no effect on time spent between t1 and t2.  But
>>> extraneous effects might cause it to do so -- e.g., are you running in an
>>> oversubscribed scenario?  And so on.
>>>
>>>
>>  Right.  The bottom line here is that from a causal point of view it would
>> seem that B should not impact the timings.  Presumably, some other variable
>> is actually responsible here.
>
>


Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Eugene Loh




My main point was that, while what Jeff said about the shortcomings of
calling timers after Barriers was true, I wanted to come in defense of
this timing strategy.  Otherwise, I was just agreeing with him that it
seems implausible that commenting out B should influence the timing of
A, but I'm equally clueless as to what the real issue is.  I have seen cases
where the presence or absence of code that isn't executed can influence
timings (perhaps because code will come out of the instruction cache
differently), but all that is speculation.  It's all a guess that what
you're really seeing isn't really MPI related at all.

Storm Zhang wrote:
Hi, Eugene, You said:
" The bottom line here is that from a causal point of view it would
seem that B should not impact the timings.  Presumably, some other
variable is actually responsible here."  Could you explain the second
sentence in more detail? Thanks a lot.
  
  On Thu, Oct 21, 2010 at 9:58 AM, Eugene Loh 
wrote: 
  
Jeff Squyres wrote:

MPI::COMM_WORLD.Barrier();
if(rank == master) t1 = clock();
"code A";
MPI::COMM_WORLD.Barrier();
if(rank == master) t2 = clock();
"code B";
  
Remember that the time that individual processes exit barrier is not
guaranteed to be uniform (indeed, it most likely *won't* be the same).
 MPI only guarantees that a process will not exit until after all
processes have entered.  So taking t2 after the barrier might be a bit
misleading, and may cause unexpected skew.
 


The barrier exit times are not guaranteed to be uniform, but in
practice this style of timing is often the best (or only practical)
tool one has for measuring the collective performance of a group of
processes.


Code B *probably* has no effect on time spent between t1 and t2.  But
extraneous effects might cause it to do so -- e.g., are you running in
an oversubscribed scenario?  And so on.
 


Right.  The bottom line here is that from a causal point of view it
would seem that B should not impact the timings.  Presumably, some
other variable is actually responsible here.
  





Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Storm Zhang
 Hi, Eugene,
You said:
" The bottom line here is that from a causal point of view it would seem
that B should not impact the timings.  Presumably, some other variable is
actually responsible here."
Could you explain the second sentence in more detail? Thanks a lot.

Linbao

On Thu, Oct 21, 2010 at 9:58 AM, Eugene Loh  wrote:

> Jeff Squyres wrote:
>
>  Ah.  The original code snippet you sent was:
>>
>> MPI::COMM_WORLD.Barrier();
>> if(rank == master) t1 = clock();
>> "code A";
>> MPI::COMM_WORLD.Barrier();
>> if(rank == master) t2 = clock();
>> "code B";
>>
>> Remember that the time that individual processes exit barrier is not
>> guaranteed to be uniform (indeed, it most likely *won't* be the same).  MPI
>> only guarantees that a process will not exit until after all processes have
>> entered.  So taking t2 after the barrier might be a bit misleading, and may
>> cause unexpected skew.
>>
>>
> The barrier exit times are not guaranteed to be uniform, but in practice
> this style of timing is often the best (or only practical) tool one has for
> measuring the collective performance of a group of processes.
>
>
>  Code B *probably* has no effect on time spent between t1 and t2.  But
>> extraneous effects might cause it to do so -- e.g., are you running in an
>> oversubscribed scenario?  And so on.
>>
>>
> Right.  The bottom line here is that from a causal point of view it would
> seem that B should not impact the timings.  Presumably, some other variable
> is actually responsible here.
>


Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Storm Zhang
Thanks a lot.

On Thu, Oct 21, 2010 at 9:21 AM, Jeff Squyres  wrote:

> Ah.  The original code snippet you sent was:
>
> MPI::COMM_WORLD.Barrier();
> if(rank == master) t1 = clock();
> "code A";
> MPI::COMM_WORLD.Barrier();
> if(rank == master) t2 = clock();
> "code B";
>
> Remember that the time that individual processes exit barrier is not
> guaranteed to be uniform (indeed, it most likely *won't* be the same).  MPI
> only guarantees that a process will not exit until after all processes have
> entered.  So taking t2 after the barrier might be a bit misleading, and may
> cause unexpected skew.
>
Yes, it makes sense, but I have no idea how big the time difference for
running the barrier function is. I don't expect it to be big either, since all
our compute nodes have the same configuration.

> Code B *probably* has no effect on time spent between t1 and t2.  But
> extraneous effects might cause it to do so -- e.g., are you running in an
> oversubscribed scenario?  And so on.
>
No. We have 1024 nodes available and I'm using 500.

>
> On Oct 21, 2010, at 9:24 AM, Storm Zhang wrote:
>
> >
> > Thanks a lot for your reply. By commenting code B, I mean if I remove the
> code B part, then code A seems to run faster. I do have a
> lot of communications in code B too. It involves 500 procs. I had thought
> code B should have no effect on the time spent on code A if I use
> MPI_Barrier.
> >
> > Linbao
> > On Thu, Oct 21, 2010 at 5:17 AM, Jeff Squyres 
> wrote:
> > On Oct 20, 2010, at 5:51 PM, Storm Zhang wrote:
> >
> > > I need to measure t2-t1 to see the time spent on the code A between
> these two MPI_Barriers. I notice that if I comment code B, the time seems
> much less than the original time (almost half). How does this happen? What is a
> possible reason for it? I have no idea.
> >
> > I'm not sure what you're asking here -- do you mean that if you put some
> comments in code B, it takes much less time than if you don't put comments?
>  If so, then the comments have nothing to do with the execution run-time --
> something else is going on that is causing the delay.  Some questions:
> >
> > - how long does it take to execute code B -- microseconds, or seconds, or
> ...?
> > - how many processes are involved?
> > - what are you doing in code B; is it communication intensive and/or do
> you synchronize with other processes?
> > - are you doing your timings on otherwise-empty machines?
> >
> > --
> > Jeff Squyres
> > jsquy...@cisco.com
> > For corporate legal information go to:
> > http://www.cisco.com/web/about/doing_business/legal/cri/
> >
> >
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>


Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Eugene Loh

Jeff Squyres wrote:


Ah.  The original code snippet you sent was:

MPI::COMM_WORLD.Barrier();
if(rank == master) t1 = clock();
"code A";
MPI::COMM_WORLD.Barrier();
if(rank == master) t2 = clock();
"code B";

Remember that the time that individual processes exit barrier is not guaranteed 
to be uniform (indeed, it most likely *won't* be the same).  MPI only 
guarantees that a process will not exit until after all processes have entered. 
 So taking t2 after the barrier might be a bit misleading, and may cause 
unexpected skew.
 

The barrier exit times are not guaranteed to be uniform, but in practice 
this style of timing is often the best (or only practical) tool one has 
for measuring the collective performance of a group of processes.



Code B *probably* has no effect on time spent between t1 and t2.  But 
extraneous effects might cause it to do so -- e.g., are you running in an 
oversubscribed scenario?  And so on.
 

Right.  The bottom line here is that from a causal point of view it 
would seem that B should not impact the timings.  Presumably, some other 
variable is actually responsible here.


Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Terry Dontje
When you do a make, can you add V=1 to have the actual compile lines
printed out?  That will probably show you the line with
-fno-directives-only in it.  Which is odd, because I think that option is
a gcc-ism, and I don't know why it would show up in a Studio build (note my
build doesn't show it).


Maybe a copy of config.log and config.status would be helpful.  Have
you tried to start from square one?  It really seems like configure
or libtool might be setting things up for gcc, which is odd given the
configure line you show.


--td

On 10/21/2010 09:41 AM, Siegmar Gross wrote:

   I wonder if the error below could be due to crap being left over in the
source tree.  Can you do a "make clean"?  Note that on a new checkout from
the v1.5 svn branch I was able to build 64 bit with the following
configure line:

linpc4 openmpi-1.5-Linux.x86_64.32_cc 123 make clean
Making clean in test
make[1]: Entering directory
...

../openmpi-1.5/configure \
   FC=f95 F77=f77 CC=cc CXX=CC --without-openib --without-udapl \
   --enable-heterogeneous --enable-cxx-exceptions --enable-shared \
   --enable-orterun-prefix-by-default --with-sge --disable-mpi-threads \
   --enable-mpi-f90 --with-mpi-f90-size=small --disable-progress-threads \
   --prefix=/usr/local/openmpi-1.5_32_cc CFLAGS=-m64 CXXFLAGS=-m64 \
   FFLAGS=-m64 FCFLAGS=-m64

make |&  tee log.make.$SYSTEM_ENV.$MACHINE_ENV.32_cc


...
make[3]: Leaving directory
`/export2/src/openmpi-1.5/openmpi-1.5-Linux.x86_64.32_cc/opal/libltdl'
make[2]: Leaving directory
`/export2/src/openmpi-1.5/openmpi-1.5-Linux.x86_64.32_cc/opal/libltdl'
Making all in asm
make[2]: Entering directory
`/export2/src/openmpi-1.5/openmpi-1.5-Linux.x86_64.32_cc/opal/asm'
   CC asm.lo
rm -f atomic-asm.S
ln -s ".../opal/asm/generated/atomic-ia32-linux-nongas.s" atomic-asm.S
   CPPAS  atomic-asm.lo
cc1: error: unrecognized command line option "-fno-directives-only"
cc: cpp failed for atomic-asm.S
make[2]: *** [atomic-asm.lo] Error 1
make[2]: Leaving directory `.../opal/asm'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/.../opal'
make: *** [all-recursive] Error 1


Do you know where I can find "-fno-directives-only"? "grep" didn't
show any results. I tried to rebuild the package with my original
settings and didn't succeed (same error as above), so something
must have changed in the last two days on "linpc4". The operator told
me that he hasn't changed anything, so I have no idea why I cannot
build the package today. The log-files from "configure" are identical,
but the log-files from "make" differ (I changed the language with
"setenv LC_ALL C" because I have some errors on other machines as well
and wanted English messages so that you can read them).


tyr openmpi-1.5 198 diff
   openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
   openmpi-1.5-Linux.x86_64.32_cc/log.configure.Linux.x86_64.32_cc |more

tyr openmpi-1.5 199 diff
   openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
   openmpi-1.5-Linux.x86_64.32_cc/log.make.Linux.x86_64.32_cc | more
3c3
< make[1]: Für das Ziel »all« ist nichts zu tun.
---
> make[1]: Nothing to be done for `all'.
7c7
< make[1]: Für das Ziel »all« ist nichts zu tun.
---
> make[1]: Nothing to be done for `all'.
74,76c74,76
< :19:0: Warnung: »__FLT_EVAL_METHOD__« redefiniert
< :93:0: Anmerkung: dies ist die Stelle der vorherigen Definition


Re: [OMPI users] OpenMPI 1.4.2 with Myrinet MX, mpirun seg faults

2010-10-21 Thread Raymond Muno

On 10/20/2010 8:30 PM, Scott Atchley wrote:

We have fixed this bug in the most recent 1.4.x and 1.5.x releases.

Scott

OK, a few more tests.  I was using PGI 10.4 as the compiler.

I have now tried OpenMPI 1.4.3 with PGI 10.8 and Intel 11.1.  I get the 
same results in each case, mpirun seg faults. (I really did not expect 
that to change anything).


I tried OpenMPI 1.5.  Under PGI, I could not get it to compile.   With 
Intel 11.1, it compiles. When I try to run a simple test, mpirun just 
seems to hang and I never see anything start on the nodes.  I would 
rather stick with 1.4.x for now since that is what we are running on our 
other production cluster.  I will leave this for a later day.


I grabbed the 1.4.3 version from this page.

http://www.open-mpi.org/software/ompi/v1.4/

When you say this bug is fixed in recent  1.4.x releases,  should I try 
one from here?


http://www.open-mpi.org/nightly/v1.4/

For grins, I compiled the OpenMPI 1.4.1 tree.  This is what Myricom
supplied with the MX roll. Same result.  I can still run with their 
compiled version of mpirun, even when I compile with the other build 
trees and compilers.  I just do not know what options they compiled with.


Any insight would be appreciated.

-Ray Muno
 University of Minnesota


Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Terry Dontje

 On 10/21/2010 10:18 AM, Jeff Squyres wrote:

Terry --

Can you file relevant ticket(s) for v1.5 on Trac?

Once I have more information and have proven it isn't due to us using 
old compilers or a compiler error itself.


--td

On Oct 21, 2010, at 10:10 AM, Terry Dontje wrote:


I've reproduced Siegmar's issue when I have the threads options on but it does 
not show up when they are off.  It is actually segv'ing in 
mca_btl_sm_component_close on an access at address 0 (obviously not a good 
thing).  I am going to compile things with debug on and see if I can track this
further but I think I am smelling the smoke of a bug...

Siegmar, I was able to get stuff working with 32 bits when I removed -with-threads=posix 
and replaced "-enable-mpi-threads" with --disable-mpi-threads in your configure 
line.  I think your previous issue with things not building must be left over cruft.

Note, my compiler hang disappeared on me.  So maybe there was an environmental 
issue on my side.

--td


On 10/21/2010 06:47 AM, Terry Dontje wrote:

On 10/21/2010 06:43 AM, Jeff Squyres (jsquyres) wrote:

Also, I'm not entirely sure what all the commands are that you are showing.
Some of those warnings (eg in config.log) are normal.

The 32 bit test failure is not, though. Terry - any idea there?

The test program is failing in MPI_Finalize which seems odd and the code itself 
looks pretty dead simple.  I am rebuilding a v1.5 workspace without the 
different thread options.  Once that is done I'll try the test program.

BTW, when I tried to build with the original options Siegmar used the compiles 
looked like they hung, doh.

--td


Sent from my PDA. No type good.

On Oct 21, 2010, at 6:25 AM, "Terry Dontje"  wrote:


I wonder if the error below be due to crap being left over in the source tree.  Can you 
do a "make clean".  Note on a new checkout from the v1.5 svn branch I was able 
to build 64 bit with the following configure line:

../configure FC=f95 F77=f77 CC=cc CXX=CC --without-openib --without-udapl 
-enable-heterogeneous --enable-cxx-exceptions --enable-shared 
--enable-orterun-prefix-by-default --with-sge --disable-mpi-threads 
--enable-mpi-f90 --with-mpi-f90-size=small --disable-progress-threads 
--prefix=/workspace/tdd/ctnext/v15 CFLAGS=-m64 CXXFLAGS=-m64 
FFLAGS=-m64 FCFLAGS=-m64

--td
On 10/21/2010 05:38 AM, Siegmar Gross wrote:

Hi,

thank you very much for your reply.



   Can you remove the -with-threads and -enable-mpi-threads options from
the configure line and see if that helps your 32 bit problem any?


I cannot build the package when I remove these options.

linpc4 openmpi-1.5-Linux.x86_64.32_cc 189 head -8 config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
   --without-udapl --enable-shared --enable-heterogeneous
   --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 190 head -8 ../*.old/config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
   --without-udapl --with-threads=posix --enable-mpi-threads
   --enable-shared --enable-heterogeneous --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 194 dir log.* ../*.old/log.*
... 132406 Oct 19 13:01
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
... 195587 Oct 19 16:09
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-check.Linux.x86_64.32_cc
... 356672 Oct 19 16:07
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-install.Linux.x86_64.32_cc
... 280596 Oct 19 13:42
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
... 132265 Oct 21 10:51 log.configure.Linux.x86_64.32_cc
...  10890 Oct 21 10:51 log.make.Linux.x86_64.32_cc


linpc4 openmpi-1.5-Linux.x86_64.32_cc 195 grep -i warning:
   log.configure.Linux.x86_64.32_cc
configure: WARNING: *** Did not find corresponding C type
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
configure: WARNING: valgrind.h not found
configure: WARNING:

Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Jeff Squyres
Ah.  The original code snippet you sent was:

MPI::COMM_WORLD.Barrier();
if(rank == master) t1 = clock();
"code A";
MPI::COMM_WORLD.Barrier();
if(rank == master) t2 = clock();
"code B";

Remember that the time that individual processes exit barrier is not guaranteed 
to be uniform (indeed, it most likely *won't* be the same).  MPI only 
guarantees that a process will not exit until after all processes have entered. 
 So taking t2 after the barrier might be a bit misleading, and may cause 
unexpected skew.

Code B *probably* has no effect on time spent between t1 and t2.  But 
extraneous effects might cause it to do so -- e.g., are you running in an 
oversubscribed scenario?  And so on.


On Oct 21, 2010, at 9:24 AM, Storm Zhang wrote:

> 
> Thanks a lot for your reply. By commenting code B, I mean if I remove the 
> code B part, then code A seems to run faster. I do have a
> lot of communications in code B too. It involves 500 procs. I had thought 
> code B should have no effect on the time spent on code A if I use MPI_Barrier.
> 
> Linbao
> On Thu, Oct 21, 2010 at 5:17 AM, Jeff Squyres  wrote:
> On Oct 20, 2010, at 5:51 PM, Storm Zhang wrote:
> 
> > I need to measure t2-t1 to see the time spent on the code A between these 
> > two MPI_Barriers. I notice that if I comment code B, the time seems much 
> > less than the original time (almost half). How does this happen? What is a
> > possible reason for it? I have no idea.
> 
> I'm not sure what you're asking here -- do you mean that if you put some 
> comments in code B, it takes much less time than if you don't put comments?  
> If so, then the comments have nothing to do with the execution run-time -- 
> something else is going on that is causing the delay.  Some questions:
> 
> - how long does it take to execute code B -- microseconds, or seconds, or ...?
> - how many processes are involved?
> - what are you doing in code B; is it communication intensive and/or do you 
> synchronize with other processes?
> - are you doing your timings on otherwise-empty machines?
> 
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Siegmar Gross
>   I wonder if the error below could be due to crap being left over in the
> source tree.  Can you do a "make clean"?  Note that on a new checkout from
> the v1.5 svn branch I was able to build 64 bit with the following
> configure line:

linpc4 openmpi-1.5-Linux.x86_64.32_cc 123 make clean
Making clean in test
make[1]: Entering directory 
...

../openmpi-1.5/configure \
  FC=f95 F77=f77 CC=cc CXX=CC --without-openib --without-udapl \
  --enable-heterogeneous --enable-cxx-exceptions --enable-shared \
  --enable-orterun-prefix-by-default --with-sge --disable-mpi-threads \
  --enable-mpi-f90 --with-mpi-f90-size=small --disable-progress-threads \
  --prefix=/usr/local/openmpi-1.5_32_cc CFLAGS=-m64 CXXFLAGS=-m64 \
  FFLAGS=-m64 FCFLAGS=-m64

make |& tee log.make.$SYSTEM_ENV.$MACHINE_ENV.32_cc


...
make[3]: Leaving directory 
`/export2/src/openmpi-1.5/openmpi-1.5-Linux.x86_64.32_cc/opal/libltdl'
make[2]: Leaving directory 
`/export2/src/openmpi-1.5/openmpi-1.5-Linux.x86_64.32_cc/opal/libltdl'
Making all in asm
make[2]: Entering directory 
`/export2/src/openmpi-1.5/openmpi-1.5-Linux.x86_64.32_cc/opal/asm'
  CC asm.lo
rm -f atomic-asm.S
ln -s ".../opal/asm/generated/atomic-ia32-linux-nongas.s" atomic-asm.S
  CPPAS  atomic-asm.lo
cc1: error: unrecognized command line option "-fno-directives-only"
cc: cpp failed for atomic-asm.S
make[2]: *** [atomic-asm.lo] Error 1
make[2]: Leaving directory `.../opal/asm'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/.../opal'
make: *** [all-recursive] Error 1


Do you know where I can find "-fno-directives-only"? "grep" didn't
show any results. I tried to rebuild the package with my original
settings and didn't succeed (same error as above), so something
must have changed in the last two days on "linpc4". The operator told
me that he hasn't changed anything, so I have no idea why I cannot
build the package today. The log-files from "configure" are identical,
but the log-files from "make" differ (I changed the language with
"setenv LC_ALL C" because I have some errors on other machines as well
and wanted English messages so that you can read them).


tyr openmpi-1.5 198 diff 
  openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
  openmpi-1.5-Linux.x86_64.32_cc/log.configure.Linux.x86_64.32_cc |more

tyr openmpi-1.5 199 diff 
  openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
  openmpi-1.5-Linux.x86_64.32_cc/log.make.Linux.x86_64.32_cc | more
3c3
< make[1]: Für das Ziel »all« ist nichts zu tun.
---
> make[1]: Nothing to be done for `all'.
7c7
< make[1]: Für das Ziel »all« ist nichts zu tun.
---
> make[1]: Nothing to be done for `all'.
74,76c74,76
< :19:0: Warnung: »__FLT_EVAL_METHOD__« redefiniert
< :93:0: Anmerkung: dies ist die Stelle der vorherigen Definition
<   CCLD   libasm.la
---
> cc1: error: unrecognized command line option "-fno-directives-only"
> cc: cpp failed for atomic-asm.S
> make[2]: *** [atomic-asm.lo] Error 1
78,426c78
< Making all in datatype
< make[2]: Entering directory `/.../opal/datatype'
<   CC libdatatype_reliable_la-opal_datatype_pack.lo
<   CC libdatatype_reliable_la-opal_datatype_unpack.lo
<   CCLD   libdatatype_reliable.la
<   CC opal_convertor.lo
...


The difference is that two days ago "__FLT_EVAL_METHOD__" was redefined
and today it isn't. Obviously the package cannot be built without that
redefinition.

...
make[3]: Leaving directory `/.../opal/libltdl'
make[2]: Leaving directory `/.../opal/libltdl'
Making all in asm
make[2]: Entering directory `/.../opal/asm'
  CC asm.lo
rm -f atomic-asm.S
ln -s "../../../openmpi-1.5/opal/asm/generated/atomic-ia32-linux-nongas.s"
  atomic-asm.S
  CPPAS  atomic-asm.lo
:19:0: Warnung: »__FLT_EVAL_METHOD__« redefiniert
:93:0: Anmerkung: dies ist die Stelle der vorherigen Definition
  CCLD   libasm.la
make[2]: Leaving directory `/.../opal/asm'
Making all in datatype
make[2]: Entering directory `/.../opal/datatype'
  CC libdatatype_reliable_la-opal_datatype_pack.lo
  CC libdatatype_reliable_la-opal_datatype_unpack.lo
...


Therefore I removed "setenv LC_ALL C" from my environment and logged in
into linpc4 once more. But still no success. The messages are once more
in german but "__FLT_EVAL_METHOD__" wasn't redefined.

tyr openmpi-1.5 205 diff
  openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
  openmpi-1.5-Linux.x86_64.32_cc/log.make.Linux.x86_64.32_cc | more
74,76c74,76
< :19:0: Warnung: »__FLT_EVAL_METHOD__« redefiniert
< :93:0: Anmerkung: dies ist die Stelle der vorherigen Definition
<   CCLD   libasm.la
---
> cc1: Fehler: nicht erkannte Kommandozeilenoption »-fno-directives-only«
> cc: cpp failed for atomic-asm.S
> make[2]: *** [atomic-asm.lo] Fehler 1
78,426c78
< Making all in datatype
< make[2]: Entering directory `/.../opal/datatype'
<   CC libdatatype_reliable_la-opal_datatype_pack.lo
<   CC libdatatype_reliable_la-opal_datatype_unpack.lo
<   CCLD   libdatatype_reliable.la
...


I have no idea why it

Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Jeff Squyres
Terry --

Can you file relevant ticket(s) for v1.5 on Trac?


On Oct 21, 2010, at 10:10 AM, Terry Dontje wrote:

> I've reproduced Siegmar's issue when I have the threads options on but it 
> does not show up when they are off.  It is actually segv'ing in 
> mca_btl_sm_component_close on an access at address 0 (obviously not a good 
> thing).  I am going to compile things with debug on and see if I can track this
> further but I think I am smelling the smoke of a bug...
> 
> Siegmar, I was able to get stuff working with 32 bits when I removed 
> -with-threads=posix and replaced "-enable-mpi-threads" with 
> --disable-mpi-threads in your configure line.  I think your previous issue 
> with things not building must be left over cruft.
> 
> Note, my compiler hang disappeared on me.  So maybe there was an 
> environmental issue on my side.
> 
> --td
> 
> 
> On 10/21/2010 06:47 AM, Terry Dontje wrote:
>> On 10/21/2010 06:43 AM, Jeff Squyres (jsquyres) wrote:
>>> Also, I'm not entirely sure what all the commands are that you are showing.
>>> Some of those warnings (eg in config.log) are normal. 
>>> 
>>> The 32 bit test failure is not, though. Terry - any idea there?
>> The test program is failing in MPI_Finalize which seems odd and the code 
>> itself looks pretty dead simple.  I am rebuilding a v1.5 workspace without 
>> the different thread options.  Once that is done I'll try the test program.
>> 
>> BTW, when I tried to build with the original options Siegmar used the 
>> compiles looked like they hung, doh.
>> 
>> --td
>> 
>>> 
>>> Sent from my PDA. No type good. 
>>> 
>>> On Oct 21, 2010, at 6:25 AM, "Terry Dontje"  wrote:
>>> 
 I wonder if the error below be due to crap being left over in the source 
 tree.  Can you do a "make clean".  Note on a new checkout from the v1.5 
 svn branch I was able to build 64 bit with the following 
 configure line:
 
 ../configure FC=f95 F77=f77 CC=cc CXX=CC --without-openib --without-udapl 
 -enable-heterogeneous --enable-cxx-exceptions --enable-shared 
 --enable-orterun-prefix-by-default --with-sge --disable-mpi-threads 
 --enable-mpi-f90 --with-mpi-f90-size=small --disable-progress-threads 
 --prefix=/workspace/tdd/ctnext/v15 CFLAGS=-m64 CXXFLAGS=-m64 
 FFLAGS=-m64 FCFLAGS=-m64
 
 --td
 On 10/21/2010 05:38 AM, Siegmar Gross wrote:
> Hi,
> 
> thank you very much for your reply.
> 
> 
>>   Can you remove the -with-threads and -enable-mpi-threads options from 
>> the configure line and see if that helps your 32 bit problem any?
>> 
> I cannot build the package when I remove these options.
> 
> linpc4 openmpi-1.5-Linux.x86_64.32_cc 189 head -8 config.log
> This file contains any messages produced by compilers while
> running configure, to aid debugging if configure makes a mistake.
> 
> It was created by Open MPI configure 1.5, which was
> generated by GNU Autoconf 2.65.  Invocation command line was
> 
>   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
>   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
>   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
>   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
>   --without-udapl --enable-shared --enable-heterogeneous
>   --enable-cxx-exceptions
> 
> 
> linpc4 openmpi-1.5-Linux.x86_64.32_cc 190 head -8 ../*.old/config.log
> This file contains any messages produced by compilers while
> running configure, to aid debugging if configure makes a mistake.
> 
> It was created by Open MPI configure 1.5, which was
> generated by GNU Autoconf 2.65.  Invocation command line was
> 
>   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
>   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
>   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
>   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
>   --without-udapl --with-threads=posix --enable-mpi-threads
>   --enable-shared --enable-heterogeneous --enable-cxx-exceptions
> 
> 
> linpc4 openmpi-1.5-Linux.x86_64.32_cc 194 dir log.* ../*.old/log.*
> ... 132406 Oct 19 13:01
>   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
> ... 195587 Oct 19 16:09
>   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-check.Linux.x86_64.32_cc
> ... 356672 Oct 19 16:07
>   
> ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-install.Linux.x86_64.32_cc
> ... 280596 Oct 19 13:42
>   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
> ... 132265 Oct 21 10:51 log.configure.Linux.x86_64.32_cc
> ...  10890 Oct 21 10:51 log.make.Linux.x86_64.32_cc
> 
> 
> linpc4 openmpi-1.5-Linux.x86_64.32_cc 195 grep -i warning:
>   log.configure.Linux.x86_64.32_cc 
> 

Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Terry Dontje
 I've reproduced Siegmar's issue when I have the threads options on but 
it does not show up when they are off.  It is actually segv'ing in 
mca_btl_sm_component_close on an access at address 0 (obviously not a 
good thing).  I am going to compile things with debug on and see if I can
track this further but I think I am smelling the smoke of a bug...


Siegmar, I was able to get stuff working with 32 bits when I removed 
-with-threads=posix and replaced "-enable-mpi-threads" with 
--disable-mpi-threads in your configure line.  I think your previous 
issue with things not building must be left over cruft.


Note, my compiler hang disappeared on me.  So maybe there was an 
environmental issue on my side.


--td


On 10/21/2010 06:47 AM, Terry Dontje wrote:

On 10/21/2010 06:43 AM, Jeff Squyres (jsquyres) wrote:
Also, I'm not entirely sure what all the commands are that you are
showing. Some of those warnings (eg in config.log) are normal.


The 32 bit test failure is not, though. Terry - any idea there?
The test program is failing in MPI_Finalize which seems odd and the 
code itself looks pretty dead simple.  I am rebuilding a v1.5 
workspace without the different thread options.  Once that is done 
I'll try the test program.


BTW, when I tried to build with the original options Siegmar used the 
compiles looked like they hung, doh.


--td



Sent from my PDA. No type good.

On Oct 21, 2010, at 6:25 AM, "Terry Dontje" > wrote:


I wonder if the error below be due to crap being left over in the 
source tree.  Can you do a "make clean".  Note on a new checkout 
from the v1.5 svn branch I was able to build 64 bit with the 
following configure line:


../configure FC=f95 F77=f77 CC=cc CXX=CC --without-openib 
--without-udapl -enable-heterogeneous --enable-cxx-exceptions 
--enable-shared --enable-orterun-prefix-by-default --with-sge 
--disable-mpi-threads --enable-mpi-f90 --with-mpi-f90-size=small 
--disable-progress-threads --prefix=/workspace/tdd/ctnext/v15 
CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64


--td
On 10/21/2010 05:38 AM, Siegmar Gross wrote:

Hi,

thank you very much for your reply.


   Can you remove the -with-threads and -enable-mpi-threads options from
the configure line and see if that helps your 32 bit problem any?

I cannot build the package when I remove these options.

linpc4 openmpi-1.5-Linux.x86_64.32_cc 189 head -8 config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
   --without-udapl --enable-shared --enable-heterogeneous
   --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 190 head -8 ../*.old/config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
   --without-udapl --with-threads=posix --enable-mpi-threads
   --enable-shared --enable-heterogeneous --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 194 dir log.* ../*.old/log.*
... 132406 Oct 19 13:01
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
... 195587 Oct 19 16:09
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-check.Linux.x86_64.32_cc
... 356672 Oct 19 16:07
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-install.Linux.x86_64.32_cc
... 280596 Oct 19 13:42
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
... 132265 Oct 21 10:51 log.configure.Linux.x86_64.32_cc
...  10890 Oct 21 10:51 log.make.Linux.x86_64.32_cc


linpc4 openmpi-1.5-Linux.x86_64.32_cc 195 grep -i warning:
   log.configure.Linux.x86_64.32_cc
configure: WARNING: *** Did not find corresponding C type
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
configure: WARNING: valgrind.h not found
configure: WARNING: Unknown architecture ... proceeding anyway
configure: WARNING: File locks may not work with NFS.  See the Installation and
configure: WARNING:  -xldscope=hidden has been added to CFLAGS

linpc4 openmpi-1.5-Linux.x86_64.32_cc 196 grep -i warning:
   ../*.old/log.

Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Storm Zhang
Thanks a lot for your reply. By commenting code B, I mean if I remove the
code B part, then code A seems to run faster. I do have a
lot of communications in code B too. It involves 500 procs. I had thought
code B should have no effect on the time spent on code A if I use
MPI_Barrier.

Linbao
On Thu, Oct 21, 2010 at 5:17 AM, Jeff Squyres  wrote:

> On Oct 20, 2010, at 5:51 PM, Storm Zhang wrote:
>
> > I need to measure t2-t1 to see the time spent on the code A between these
> two MPI_Barriers. I notice that if I comment code B, the time seems much
> less than the original time (almost half). How does this happen? What is a possible
> reason for it? I have no idea.
>
> I'm not sure what you're asking here -- do you mean that if you put some
> comments in code B, it takes much less time than if you don't put comments?
>  If so, then the comments have nothing to do with the execution run-time --
> something else is going on that is causing the delay.  Some questions:
>
> - how long does it take to execute code B -- microseconds, or seconds, or
> ...?
> - how many processes are involved?
> - what are you doing in code B; is it communication intensive and/or do you
> synchronize with other processes?
> - are you doing your timings on otherwise-empty machines?
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>


Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Storm Zhang
Thanks for your suggestion. I am trying MPI_Wtime to see if there is any
difference.

Linbao

On Thu, Oct 21, 2010 at 1:37 AM, jody  wrote:

> Hi
>
> I don't know the reason for the strange behaviour, but anyway,
> to measure time in an MPI application you should use MPI_Wtime(), not
> clock()
>
> regards
>  jody
>
> On Wed, Oct 20, 2010 at 11:51 PM, Storm Zhang  wrote:
> > Dear all,
> >
> > I got confused with my recent C++ MPI program's behavior. I have an MPI
> > program in which I use clock() to measure the time spent between two
> > MPI_Barriers, just like this:
> >
> > MPI::COMM_WORLD.Barrier();
> > if(rank == master) t1 = clock();
> > "code A";
> > MPI::COMM_WORLD.Barrier();
> > if(rank == master) t2 = clock();
> > "code B";
> >
> > I need to measure t2-t1 to see the time spent on the code A between these
> > two MPI_Barriers. I notice that if I comment code B, the time seems much
> > less than the original time (almost half). How does this happen? What is a
> possible
> > reason for it? I have no idea.
> >
> > Thanks for your help.
> >
> > Linbao
>
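As an aside on the pattern discussed in this thread, here is a minimal C sketch of
the barrier-bracketed timing, using MPI_Wtime() as jody suggests instead of clock().
The function code_a() is only a hypothetical placeholder for the timed work, and the
earlier caveat still applies: processes do not leave a barrier at exactly the same
instant.

#include <stdio.h>
#include <mpi.h>

static void code_a(void)
{
    /* placeholder for "code A", the region being timed */
}

int main(int argc, char **argv)
{
    const int master = 0;
    int rank;
    double t1 = 0.0, t2 = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);          /* line everyone up before the timed region */
    if (rank == master) t1 = MPI_Wtime();

    code_a();                             /* "code A" */

    MPI_Barrier(MPI_COMM_WORLD);          /* wait until every rank has finished code A */
    if (rank == master) t2 = MPI_Wtime();

    if (rank == master)
        printf("code A took %f seconds\n", t2 - t1);

    MPI_Finalize();
    return 0;
}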


Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Terry Dontje

 On 10/21/2010 06:43 AM, Jeff Squyres (jsquyres) wrote:
Also, I'm not entirely sure what all the commands are that you are
showing. Some of those warnings (eg in config.log) are normal.


The 32 bit test failure is not, though. Terry - any idea there?
The test program is failing in MPI_Finalize which seems odd and the code 
itself looks pretty dead simple.  I am rebuilding a v1.5 workspace 
without the different thread options.  Once that is done I'll try the 
test program.


BTW, when I tried to build with the original options Siegmar used the 
compiles looked like they hung, doh.


--td



Sent from my PDA. No type good.

On Oct 21, 2010, at 6:25 AM, "Terry Dontje" > wrote:


I wonder if the error below be due to crap being left over in the 
source tree.  Can you do a "make clean".  Note on a new checkout from 
the v1.5 svn branch I was able to build 64 bit with the following 
configure line:


../configure FC=f95 F77=f77 CC=cc CXX=CC --without-openib 
--without-udapl -enable-heterogeneous --enable-cxx-exceptions 
--enable-shared --enable-orterun-prefix-by-default --with-sge 
--disable-mpi-threads --enable-mpi-f90 --with-mpi-f90-size=small 
--disable-progress-threads --prefix=/workspace/tdd/ctnext/v15 
CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64


--td
On 10/21/2010 05:38 AM, Siegmar Gross wrote:

Hi,

thank you very much for your reply.


   Can you remove the -with-threads and -enable-mpi-threads options from
the configure line and see if that helps your 32 bit problem any?

I cannot build the package when I remove these options.

linpc4 openmpi-1.5-Linux.x86_64.32_cc 189 head -8 config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
   --without-udapl --enable-shared --enable-heterogeneous
   --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 190 head -8 ../*.old/config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
   --without-udapl --with-threads=posix --enable-mpi-threads
   --enable-shared --enable-heterogeneous --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 194 dir log.* ../*.old/log.*
... 132406 Oct 19 13:01
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
... 195587 Oct 19 16:09
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-check.Linux.x86_64.32_cc
... 356672 Oct 19 16:07
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-install.Linux.x86_64.32_cc
... 280596 Oct 19 13:42
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
... 132265 Oct 21 10:51 log.configure.Linux.x86_64.32_cc
...  10890 Oct 21 10:51 log.make.Linux.x86_64.32_cc


linpc4 openmpi-1.5-Linux.x86_64.32_cc 195 grep -i warning:
   log.configure.Linux.x86_64.32_cc
configure: WARNING: *** Did not find corresponding C type
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
configure: WARNING: valgrind.h not found
configure: WARNING: Unknown architecture ... proceeding anyway
configure: WARNING: File locks may not work with NFS.  See the Installation and
configure: WARNING:  -xldscope=hidden has been added to CFLAGS

linpc4 openmpi-1.5-Linux.x86_64.32_cc 196 grep -i warning:
   ../*.old/log.configure.Linux.x86_64.32_cc
configure: WARNING: *** Did not find corresponding C type
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
configure: WARNING: valgrind.h not found
configure: WARNING: Unknown architecture ... proceeding anyway
configure: WARNING: File locks may not work with NFS.  See the Installation and
configure: WARNING:  -xldscope=hidden has been added to CFLAGS

linpc4 openmpi-1.5-Linux.x86_64.32_cc 197 grep -i error:
   log.configure.Linux.x86_64.32_cc
configure: error: no libz found; check path for ZLIB package first...
configure: error: no vtf3.h found; check path for V

Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Jeff Squyres (jsquyres)
Also, I'm not entirely sure what all the commands are that you are showing.
Some of those warnings (eg in config.log) are normal. 

The 32 bit test failure is not, though. Terry - any idea there?

Sent from my PDA. No type good. 

On Oct 21, 2010, at 6:25 AM, "Terry Dontje"  wrote:

> I wonder if the error below be due to crap being left over in the source 
> tree.  Can you do a "make clean".  Note on a new checkout from the v1.5 svn 
> branch I was able to build 64 bit with the following configure line:
> 
> ../configure FC=f95 F77=f77 CC=cc CXX=CC --without-openib --without-udapl 
> -enable-heterogeneous --enable-cxx-exceptions --enable-shared 
> --enable-orterun-prefix-by-default --with-sge --disable-mpi-threads 
> --enable-mpi-f90 --with-mpi-f90-size=small --disable-progress-threads 
> --prefix=/workspace/tdd/ctnext/v15 CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 
> FCFLAGS=-m64
> 
> --td
> On 10/21/2010 05:38 AM, Siegmar Gross wrote:
>> 
>> Hi,
>> 
>> thank you very much for your reply.
>> 
>>>   Can you remove the -with-threads and -enable-mpi-threads options from 
>>> the configure line and see if that helps your 32 bit problem any?
>> I cannot build the package when I remove these options.
>> 
>> linpc4 openmpi-1.5-Linux.x86_64.32_cc 189 head -8 config.log
>> This file contains any messages produced by compilers while
>> running configure, to aid debugging if configure makes a mistake.
>> 
>> It was created by Open MPI configure 1.5, which was
>> generated by GNU Autoconf 2.65.  Invocation command line was
>> 
>>   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
>>   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
>>   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
>>   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
>>   --without-udapl --enable-shared --enable-heterogeneous
>>   --enable-cxx-exceptions
>> 
>> 
>> linpc4 openmpi-1.5-Linux.x86_64.32_cc 190 head -8 ../*.old/config.log
>> This file contains any messages produced by compilers while
>> running configure, to aid debugging if configure makes a mistake.
>> 
>> It was created by Open MPI configure 1.5, which was
>> generated by GNU Autoconf 2.65.  Invocation command line was
>> 
>>   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
>>   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
>>   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
>>   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
>>   --without-udapl --with-threads=posix --enable-mpi-threads
>>   --enable-shared --enable-heterogeneous --enable-cxx-exceptions
>> 
>> 
>> linpc4 openmpi-1.5-Linux.x86_64.32_cc 194 dir log.* ../*.old/log.*
>> ... 132406 Oct 19 13:01
>>   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
>> ... 195587 Oct 19 16:09
>>   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-check.Linux.x86_64.32_cc
>> ... 356672 Oct 19 16:07
>>   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-install.Linux.x86_64.32_cc
>> ... 280596 Oct 19 13:42
>>   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
>> ... 132265 Oct 21 10:51 log.configure.Linux.x86_64.32_cc
>> ...  10890 Oct 21 10:51 log.make.Linux.x86_64.32_cc
>> 
>> 
>> linpc4 openmpi-1.5-Linux.x86_64.32_cc 195 grep -i warning:
>>   log.configure.Linux.x86_64.32_cc 
>> configure: WARNING: *** Did not find corresponding C type
>> configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
>> configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
>> configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
>> configure: WARNING: valgrind.h not found
>> configure: WARNING: Unknown architecture ... proceeding anyway
>> configure: WARNING: File locks may not work with NFS.  See the Installation 
>> and
>> configure: WARNING:  -xldscope=hidden has been added to CFLAGS
>> 
>> linpc4 openmpi-1.5-Linux.x86_64.32_cc 196 grep -i warning:
>>   ../*.old/log.configure.Linux.x86_64.32_cc
>> configure: WARNING: *** Did not find corresponding C type
>> configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
>> configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
>> configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
>> configure: WARNING: valgrind.h not found
>> configure: WARNING: Unknown architecture ... proceeding anyway
>> configure: WARNING: File locks may not work with NFS.  See the Installation 
>> and
>> configure: WARNING:  -xldscope=hidden has been added to CFLAGS
>> 
>> linpc4 openmpi-1.5-Linux.x86_64.32_cc 197 grep -i error:
>>   log.configure.Linux.x86_64.32_cc
>> configure: error: no libz found; check path for ZLIB package first...
>> configure: error: no vtf3.h found; check path for VTF3 package first...
>> configure: error: no BPatch.h found; check path for Dyninst package first...
>> configure: error: no f2c.h found; check path for CLAPACK package first...
>> configur

Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Terry Dontje
 I wonder if the error below be due to crap being left over in the 
source tree.  Can you do a "make clean".  Note on a new checkout from 
the v1.5 svn branch I was able to build 64 bit with the following 
configure line:


../configure FC=f95 F77=f77 CC=cc CXX=CC --without-openib 
--without-udapl -enable-heterogeneous --enable-cxx-exceptions 
--enable-shared --enable-orterun-prefix-by-default --with-sge 
--disable-mpi-threads --enable-mpi-f90 --with-mpi-f90-size=small 
--disable-progress-threads --prefix=/workspace/tdd/ctnext/v15 
CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64


--td
On 10/21/2010 05:38 AM, Siegmar Gross wrote:

Hi,

thank you very much for your reply.


   Can you remove the -with-threads and -enable-mpi-threads options from
the configure line and see if that helps your 32 bit problem any?

I cannot build the package when I remove these options.

linpc4 openmpi-1.5-Linux.x86_64.32_cc 189 head -8 config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
   --without-udapl --enable-shared --enable-heterogeneous
   --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 190 head -8 ../*.old/config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

   $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
   CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
   CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
   OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
   --without-udapl --with-threads=posix --enable-mpi-threads
   --enable-shared --enable-heterogeneous --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 194 dir log.* ../*.old/log.*
... 132406 Oct 19 13:01
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
... 195587 Oct 19 16:09
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-check.Linux.x86_64.32_cc
... 356672 Oct 19 16:07
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-install.Linux.x86_64.32_cc
... 280596 Oct 19 13:42
   ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
... 132265 Oct 21 10:51 log.configure.Linux.x86_64.32_cc
...  10890 Oct 21 10:51 log.make.Linux.x86_64.32_cc


linpc4 openmpi-1.5-Linux.x86_64.32_cc 195 grep -i warning:
   log.configure.Linux.x86_64.32_cc
configure: WARNING: *** Did not find corresponding C type
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
configure: WARNING: valgrind.h not found
configure: WARNING: Unknown architecture ... proceeding anyway
configure: WARNING: File locks may not work with NFS.  See the Installation and
configure: WARNING:  -xldscope=hidden has been added to CFLAGS

linpc4 openmpi-1.5-Linux.x86_64.32_cc 196 grep -i warning:
   ../*.old/log.configure.Linux.x86_64.32_cc
configure: WARNING: *** Did not find corresponding C type
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
configure: WARNING: valgrind.h not found
configure: WARNING: Unknown architecture ... proceeding anyway
configure: WARNING: File locks may not work with NFS.  See the Installation and
configure: WARNING:  -xldscope=hidden has been added to CFLAGS

linpc4 openmpi-1.5-Linux.x86_64.32_cc 197 grep -i error:
   log.configure.Linux.x86_64.32_cc
configure: error: no libz found; check path for ZLIB package first...
configure: error: no vtf3.h found; check path for VTF3 package first...
configure: error: no BPatch.h found; check path for Dyninst package first...
configure: error: no f2c.h found; check path for CLAPACK package first...
configure: error: MPI Correctness Checking support cannot be built inside Open MPI
configure: error: no papi.h found; check path for PAPI package first...
configure: error: no libcpc.h found; check path for CPC package first...
configure: error: no ctool/ctool.h found; check path for CTool package first...

linpc4 openmpi-1.5-Linux.x86_64.32_cc 198 grep -i error:
   ../*.old/log.configure.Linux.x86_64.32_cc
configure: error: no libz found; check path for ZLIB package first...
configure: error: no vtf3.h found; check path for VTF3 package first...

Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread Jeff Squyres
On Oct 20, 2010, at 5:51 PM, Storm Zhang wrote:

> I need to measure t2-t1 to see the time spent on code A between these two
> MPI_Barriers. I notice that if I comment out code B, the time seems much less
> than the original time (almost half). How does that happen? What is a possible
> reason for it? I have no idea.

I'm not sure what you're asking here -- do you mean that if you put some 
comments in code B, it takes much less time than if you don't put comments?  If 
so, then the comments have nothing to do with the execution run-time -- 
something else is going on that is causing the delay.  Some questions:

- how long does it take to execute code B -- microseconds, or seconds, or ...?
- how many processes are involved?
- what are you doing in code B; is it communication intensive and/or do you 
synchronize with other processes?
- are you doing your timings on otherwise-empty machines?

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
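
One way to answer the first two of those questions is to time both sections on
every rank with MPI_Wtime() and reduce with MPI_MAX, so it is the slowest rank
that gets reported. A minimal sketch, assuming the C++ bindings used elsewhere
in the thread and rank 0 as the master; this is not code from the thread itself:

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
  MPI::Init(argc, argv);
  int rank = MPI::COMM_WORLD.Get_rank();

  MPI::COMM_WORLD.Barrier();
  double ta0 = MPI::Wtime();
  /* "code A" would go here */
  MPI::COMM_WORLD.Barrier();
  double ta1 = MPI::Wtime();

  double tb0 = MPI::Wtime();
  /* "code B" would go here */
  double tb1 = MPI::Wtime();

  /* report the slowest rank's elapsed time for each section */
  double local[2]  = { ta1 - ta0, tb1 - tb0 };
  double global[2] = { 0.0, 0.0 };
  MPI::COMM_WORLD.Reduce(local, global, 2, MPI::DOUBLE, MPI::MAX, 0);
  if (rank == 0)
    std::printf("code A: %g s   code B: %g s\n", global[0], global[1]);

  MPI::Finalize();
  return 0;
}
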




[OMPI users] some warnings and failures building and testing openmpi-1.5

2010-10-21 Thread Siegmar Gross
Hi,

I have built Open MPI 1.5 on SunOS Sparc with the Oracle/Sun Studio C
compiler and gcc-4.2.0 in 32- and 64-bit mode. A small test program
works, but I got some warnings and errors building and checking the
installation as you can see below. Perhaps somebody knows how to fix
these things and has the time to do it.

tyr small_prog 105 cc -V
cc: Sun C 5.9 SunOS_sparc Patch 124867-16 2010/08/11
usage: cc [ options] files.  Use 'cc -flags' for details

tyr small_prog 108 gcc -v
Using built-in specs.
Target: sparc-sun-solaris2.10
Configured with: /.../configure --prefix=/usr/local/gcc-4.2.0
  --enable-languages=c,c++,java,fortran,objc --enable-java-gc=boehm
  --enable-nls --enable-libgcj --enable-threads=posix
Thread model: posix
gcc version 4.2.0

tyr small_prog 106 uname -a
SunOS tyr.informatik.hs-fulda.de 5.10 Generic_141444-09 sun4u sparc
  SUNW,A70 Solaris


cc, 32-bit:
---

tyr small_prog 107 mpicc -show
cc -I/usr/local/openmpi-1.5_32_cc/include -mt
  -L/usr/local/openmpi-1.5_32_cc/lib -lmpi -lsocket -lnsl -lrt -lm

A small test program works. "make" returns some warnings and
"make check" returns a failure.


tyr openmpi-1.5-SunOS.sparc.32_cc 113 grep -i warning:
  log.configure.SunOS.sparc.32_cc
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: netinet/sctp.h: present but cannot be compiled
configure: WARNING: netinet/sctp.h: check for missing prerequisite headers?
configure: WARNING: netinet/sctp.h: see the Autoconf documentation
configure: WARNING: netinet/sctp.h: section "Present But Cannot Be Compiled"
configure: WARNING: netinet/sctp.h: proceeding with the compiler's result
configure: WARNING: ## ------------------------------------------------------- ##
configure: WARNING: ## Report this to http://www.open-mpi.org/community/help/  ##
configure: WARNING: ## ------------------------------------------------------- ##
configure: WARNING: Unknown architecture ... proceeding anyway
configure: WARNING: File locks may not work with NFS.  See the Installation and

tyr openmpi-1.5-SunOS.sparc.32_cc 114 grep -i error:
  log.configure.SunOS.sparc.32_cc
configure: error: no vtf3.h found; check path for VTF3 package first...
configure: error: no BPatch.h found; check path for Dyninst package first...
configure: error: no f2c.h found; check path for CLAPACK package first...
configure: error: MPI Correctness Checking support cannot be built inside Open MPI
configure: error: PAPI version could not be determined and/or is incompatible (< 3)
configure: error: no ctool/ctool.h found; check path for CTool package first...

tyr openmpi-1.5-SunOS.sparc.32_cc 115 grep -i warning:
  log.make.SunOS.sparc.32_cc 
"../opal/mca/crs/none/crs_none_module.c", line 136:
  warning: statement not reached
"../orte/mca/rmcast/tcp/rmcast_tcp.c", line 982:
  warning: assignment type mismatch:
"../orte/mca/rmcast/tcp/rmcast_tcp.c", line 1023:
  warning: assignment type mismatch:
"../orte/mca/rmcast/udp/rmcast_udp.c", line 877:
  warning: assignment type mismatch:
"../orte/mca/rmcast/udp/rmcast_udp.c", line 918:
  warning: assignment type mismatch:
"../orte/tools/orte-ps/orte-ps.c", line 288: warning: initializer
  does not fit or is out of range: 0xfffe
"../orte/tools/orte-ps/orte-ps.c", line 289: warning: initializer
  does not fit or is out of range: 0xfffe
CC: Warning: Option -pthread passed to ld, if ld is invoked,
  ignored otherwise
... (some more)
CC: Warning: Specify a supported level of optimization when
  using -xopenmp, -xopenmp will not set an optimization level
  in a future release. Optimization level changed to 3 to support
  -xopenmp.
... (very often)

tyr openmpi-1.5-SunOS.sparc.32_cc 116 grep -i error:
  log.make.SunOS.sparc.32_cc
tyr openmpi-1.5-SunOS.sparc.32_cc 117 

tyr openmpi-1.5-SunOS.sparc.32_cc 107 grep -i warning:
  log.make-install.SunOS.sparc.32_cc 
libtool: install: warning: relinking `libmpi_cxx.la'
CC: Warning: Option -pthread passed to ld, if ld is invoked,
  ignored otherwise
CC: Warning: Option -pthread passed to ld, if ld is invoked,
  ignored otherwise
libtool: install: warning: relinking `libmpi_f77.la'
libtool: install: warning: relinking `libmpi_f90.la'
libtool: install: warning: relinking `mca_btl_sm.la'
libtool: install: warning: relinking `mca_coll_sm.la'
libtool: install: warning: relinking `mca_mpool_sm.la'
libtool: install: warning: relinking `libvt.la'
libtool: install: warning: relinking `libvt-mpi.la'
libtool: install: warning: relinking `libvt-mt.la'
libtool: install: warning: relinking `libvt-hyb.la'
libtool: install: warning: relinking `libvt-java.la'
tyr openmpi-1.5-SunOS.sparc.32_cc 108 grep -i error:
  log.make-install.SunOS.sparc.32_cc
tyr openmpi-1.5-SunOS.sparc.32_cc 109 


tyr openmpi-1.5-SunOS.sparc.32_cc 120 grep FAIL
  log.make-check.SunOS.sparc.32_cc
FAIL: atomic_cmpset

tyr openmpi-1.5-SunOS.sparc.32_cc 121 grep PASS
  log.make-check.SunOS.sparc.32_cc
PASS: predefined_gap_test
PASS: dlo

Re: [OMPI users] segmentation fault in mpiexec (Linux, Oracle/Sun C)

2010-10-21 Thread Siegmar Gross
Hi,

thank you very much for your reply.

>   Can you remove the -with-threads and -enable-mpi-threads options from 
> the configure line and see if that helps your 32 bit problem any?

I cannot build the package when I remove these options.

linpc4 openmpi-1.5-Linux.x86_64.32_cc 189 head -8 config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

  $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
  CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
  CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
  OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
  --without-udapl --enable-shared --enable-heterogeneous
  --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 190 head -8 ../*.old/config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.5, which was
generated by GNU Autoconf 2.65.  Invocation command line was

  $ ../openmpi-1.5/configure --prefix=/usr/local/openmpi-1.5_32_cc
  CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 CXXLDFLAGS=-m32
  CPPFLAGS= LDFLAGS=-m32 C_INCL_PATH= C_INCLUDE_PATH= CPLUS_INCLUDE_PATH=
  OBJC_INCLUDE_PATH= MPICHHOME= CC=cc CXX=CC F77=f95 FC=f95
  --without-udapl --with-threads=posix --enable-mpi-threads
  --enable-shared --enable-heterogeneous --enable-cxx-exceptions


linpc4 openmpi-1.5-Linux.x86_64.32_cc 194 dir log.* ../*.old/log.*
... 132406 Oct 19 13:01
  ../openmpi-1.5-Linux.x86_64.32_cc.old/log.configure.Linux.x86_64.32_cc
... 195587 Oct 19 16:09
  ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-check.Linux.x86_64.32_cc
... 356672 Oct 19 16:07
  ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make-install.Linux.x86_64.32_cc
... 280596 Oct 19 13:42
  ../openmpi-1.5-Linux.x86_64.32_cc.old/log.make.Linux.x86_64.32_cc
... 132265 Oct 21 10:51 log.configure.Linux.x86_64.32_cc
...  10890 Oct 21 10:51 log.make.Linux.x86_64.32_cc


linpc4 openmpi-1.5-Linux.x86_64.32_cc 195 grep -i warning:
  log.configure.Linux.x86_64.32_cc 
configure: WARNING: *** Did not find corresponding C type
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
configure: WARNING: valgrind.h not found
configure: WARNING: Unknown architecture ... proceeding anyway
configure: WARNING: File locks may not work with NFS.  See the Installation and
configure: WARNING:  -xldscope=hidden has been added to CFLAGS

linpc4 openmpi-1.5-Linux.x86_64.32_cc 196 grep -i warning:
  ../*.old/log.configure.Linux.x86_64.32_cc
configure: WARNING: *** Did not find corresponding C type
configure: WARNING: MPI_REAL16 and MPI_COMPLEX32 support have been disabled
configure: WARNING: *** Corresponding Fortran 77 type (REAL*16) not supported
configure: WARNING: *** Skipping Fortran 90 type (REAL*16)
configure: WARNING: valgrind.h not found
configure: WARNING: Unknown architecture ... proceeding anyway
configure: WARNING: File locks may not work with NFS.  See the Installation and
configure: WARNING:  -xldscope=hidden has been added to CFLAGS

linpc4 openmpi-1.5-Linux.x86_64.32_cc 197 grep -i error:
  log.configure.Linux.x86_64.32_cc
configure: error: no libz found; check path for ZLIB package first...
configure: error: no vtf3.h found; check path for VTF3 package first...
configure: error: no BPatch.h found; check path for Dyninst package first...
configure: error: no f2c.h found; check path for CLAPACK package first...
configure: error: MPI Correctness Checking support cannot be built inside Open MPI
configure: error: no papi.h found; check path for PAPI package first...
configure: error: no libcpc.h found; check path for CPC package first...
configure: error: no ctool/ctool.h found; check path for CTool package first...

linpc4 openmpi-1.5-Linux.x86_64.32_cc 198 grep -i error:
  ../*.old/log.configure.Linux.x86_64.32_cc
configure: error: no libz found; check path for ZLIB package first...
configure: error: no vtf3.h found; check path for VTF3 package first...
configure: error: no BPatch.h found; check path for Dyninst package first...
configure: error: no f2c.h found; check path for CLAPACK package first...
configure: error: MPI Correctness Checking support cannot be built inside Open MPI
configure: error: no papi.h found; check path for PAPI package first...
configure: error: no libcpc.h found; check path for CPC package first...
configure: error: no ctool/ctool.h found; check path for CTool package first...
linpc4 openmpi-1.5-Linux.x86_64.32_cc 199 


linpc4 openmpi-1.5-Linux.x86_64.32_cc 199 grep -i warning:
  log.make.Linux.x86_64.32_cc  
linpc4 openmpi-1.5-Linux.x86_64.32_cc 200 grep -i warnin

Re: [OMPI users] Question about MPI_Barrier

2010-10-21 Thread jody
Hi

I don't know the reason for the strange behaviour, but anyway,
to measure time in an MPI application you should use MPI_Wtime(), not clock()

regards
  jody

On Wed, Oct 20, 2010 at 11:51 PM, Storm Zhang  wrote:
> Dear all,
>
> I got confused with my recent C++ MPI program's behavior. I have an MPI
> program in which I use clock() to measure the time spent between two
> MPI_Barriers, just like this:
>
> MPI::COMM_WORLD.Barrier();
> if(rank == master) t1 = clock();
> "code A";
> MPI::COMM_WORLD.Barrier();
> if(rank == master) t2 = clock();
> "code B";
>
> I need to measure t2-t1 to see the time spent on code A between these
> two MPI_Barriers. I notice that if I comment out code B, the time seems much
> less than the original time (almost half). How does that happen? What is a
> possible reason for it? I have no idea.
>
> Thanks for your help.
>
> Linbao
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
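
A minimal sketch of jody's suggestion, replacing clock() with MPI_Wtime()
around the barriers (assuming, as in the original snippet, the C++ bindings
and a designated master rank; rank 0 is used here):

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv)
{
  MPI::Init(argc, argv);
  const int master = 0;
  int rank = MPI::COMM_WORLD.Get_rank();

  MPI::COMM_WORLD.Barrier();
  double t1 = MPI::Wtime();      /* wall-clock seconds, unlike clock() */
  /* "code A" would go here */
  MPI::COMM_WORLD.Barrier();
  double t2 = MPI::Wtime();
  if (rank == master)
    std::cout << "code A: " << (t2 - t1) << " s" << std::endl;
  /* "code B" would go here */

  MPI::Finalize();
  return 0;
}
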