[OMPI users] Problems with MPI_Iprobe

2011-07-22 Thread Rodrigo Oliveira
Hi there.

I have an application in which I need to terminate a process at any time due to an
external command. In order to maintain the consistency of the processes, I
need to receive the messages that were already sent to the terminating
process. I use MPI_Iprobe to check whether there are messages to be
received, but I noticed that I have to call this function twice; otherwise
it does not work properly. The code below exemplifies what happens. Can
anyone help me? Is there another way to do what I need?

Thanks in advance.


#include "mpi.h"
#include 

int main(int argc, char *argv[]) {
int rank, size, i;
MPI_Status status;

MPI_Init(, );
MPI_Comm_size(MPI_COMM_WORLD, );
MPI_Comm_rank(MPI_COMM_WORLD, );
 if (size < 2) {
printf("Please run with two processes.\n"); fflush(stdout);
MPI_Finalize();
return 0;
}
if (rank == 0) {
for (i=0; i<10; i++) {
MPI_Send(, 1, MPI_INT, 1, 123, MPI_COMM_WORLD);
}
}
if (rank == 1) {
int value, has_message;
MPI_Status status;
sleep (2);
 *// Code bellow does not work properly*
MPI_Iprobe(0, 123, MPI_COMM_WORLD, _message, );
while (has_message) {
MPI_Recv(, 1, MPI_INT, 0, 123, MPI_COMM_WORLD, );
printf("Process %d received message %d.\n", rank, value);
MPI_Iprobe(0, 123, MPI_COMM_WORLD, _message, );
}

*// Calling MPI_Iprobe twice for each incoming message makes the code work.*
/*
MPI_Iprobe(0, 123, MPI_COMM_WORLD, _message, );
MPI_Iprobe(0, 123, MPI_COMM_WORLD, _message, );
while (has_message) {
MPI_Recv(, 1, MPI_INT, 0, 123, MPI_COMM_WORLD, );
printf("Process %d received message %d.\n", rank, value);
MPI_Iprobe(0, 123, MPI_COMM_WORLD, _message, );
MPI_Iprobe(0, 123, MPI_COMM_WORLD, _message, );
}
*/
 fflush(stdout);
}
MPI_Finalize();
return 0;
}
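
For comparison, here is a minimal sketch of a drain pattern that avoids polling
MPI_Iprobe altogether, assuming the sender can be changed to follow its data
messages with an empty sentinel message on a second tag (DONE_TAG and
drain_messages below are illustrative names, not part of the program above):

#include <mpi.h>
#include <stdio.h>

#define DATA_TAG 123
#define DONE_TAG 124   /* assumed sentinel tag, agreed on by both ranks */

/* Receive every data message still in flight from 'src'; the blocking
 * MPI_Probe guarantees the MPI progress engine runs, so no pending
 * message can be overlooked the way a single MPI_Iprobe call can be. */
static void drain_messages(int src, int rank) {
    MPI_Status status;
    int value;
    for (;;) {
        MPI_Probe(src, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_TAG == DONE_TAG) {
            /* zero-length sentinel: nothing follows it */
            MPI_Recv(&value, 0, MPI_INT, src, DONE_TAG, MPI_COMM_WORLD, &status);
            break;
        }
        MPI_Recv(&value, 1, MPI_INT, src, DATA_TAG, MPI_COMM_WORLD, &status);
        printf("Process %d received message %d.\n", rank, value);
    }
}

On the sender side, rank 0 would add
MPI_Send(&i, 0, MPI_INT, 1, DONE_TAG, MPI_COMM_WORLD); after its send loop.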


Re: [OMPI users] and the next one (3rd today!) PGI+OpenMPI issue

2011-07-22 Thread Jeff Squyres
Could you try compiling a trunk nightly tarball of Open MPI?

We recently upgraded the version of Libtool that is used to bootstrap that 
tarball (compared to the v1.4 and v1.5 tarballs):

http://www.open-mpi.org/nightly/trunk/
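
For example (a sketch; the tarball name is a placeholder for whatever the
latest nightly on that page is):

wget http://www.open-mpi.org/nightly/trunk/openmpi-<latest>.tar.bz2
tar xf openmpi-<latest>.tar.bz2
cd openmpi-<latest>
./configure --prefix=$HOME/ompi-trunk && make -j 4 && make install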

On Jul 22, 2011, at 12:13 PM, Paul Kapinos wrote:

> ... just did almost the same thing: tried to install Open MPI 1.4.3 using the 11.7 
> PGI compiler on Scientific Linux 6.0. It fails at the same place, but with a different error message:
> --
> /usr/lib64/crt1.o: In function `_start':
> (.text+0x20): undefined reference to `main'
> gmake[2]: *** [libmpi_cxx.la] Error 2
> gmake[2]: Leaving directory 
> `/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux64_pgi/ompi/mpi/cxx'
> --
> 
> and then the compilation aborts. The configure string is below. With the Intel, gcc, 
> and Studio compilers, the very same installations went through without trouble.
> 
> Maybe someone can give me a hint as to whether this is an issue with Open MPI, PGI, or 
> something else...
> 
> Best wishes,
> 
> Paul
> 
> P.S.
> 
> again, more logs downloadable:
> https://gigamove.rz.rwth-aachen.de/d/id/WNk69nPr4w7svT
> 
> 
> -- 
> Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241/80-24915


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Configure(?) problem building 1.5.3 on Scientific Linux 6.0

2011-07-22 Thread Gus Correa

Hi

Would "cp -rp" help?
(To preserve time stamps, instead of "cp -r".)

Anyway, since 1.2.8 I have built 5, sometimes more, versions here,
all from the same tarball, but in separate build directories,
as Jeff suggests.
[VPATH] Works for me.

My two cents.
Gus Correa

Jeff Squyres wrote:

Ah -- Ralph pointed out the relevant line to me in your first mail that I 
initially missed:


In each case I build 16 versions in total (4 compilers * 32-bit/64-bit *
multithreading support ON/OFF). The same error arises in all 16 versions.


Perhaps you should just expand the tarball once and then do VPATH builds...?

Something like this:

tar xf openmpi-1.5.3.tar.bz2
cd openmpi-1.5.3

mkdir build-gcc
cd build-gcc
../configure blah..
make -j 4
make install
cd ..

mkdir build-icc
cd build-icc
../configure CC=icc CXX=icpc FC=ifort F77=ifort ..blah.
make -j 4
make install
cd .. 


etc.

This allows you to have one set of source and have N different builds from it.  
Open MPI uses the GNU Autotools correctly to support this kind of build pattern.




On Jul 22, 2011, at 2:37 PM, Jeff Squyres wrote:


Your RUNME script is a *very* strange way to build Open MPI.  It starts with a 
massive copy:

cp -r /home/pk224850/OpenMPI/openmpi-1.5.3/AUTHORS 
/home/pk224850/OpenMPI/openmpi-1.5.3/CMakeLists.txt <...much snipped...> .

Why are you doing this kind of copy?  I suspect that the GNU autotools' timestamps are 
getting all out of whack when you do this kind of copy, and therefore when you run 
"configure", it tries to re-autogen itself.

To be clear: when you expand OMPI from a tarball, you shouldn't need the GNU 
Autotools installed at all -- the tarball is pre-bootstrapped exactly to avoid 
you needing to use the Autotools (much less any specific version of the 
Autotools).

I suspect that if you do this:

-
tar xf openmpi-1.5.3.tar.bz2
cd openmpi-1.5.3
./configure etc.
-

everything will work just fine.


On Jul 22, 2011, at 11:12 AM, Paul Kapinos wrote:


Dear Open MPI folks,
currently I have a problem building version 1.5.3 of Open MPI on
Scientific Linux 6.0 systems, which seems to me to be a configuration
problem.

After the configure run (which seems to terminate without an error code),
the "gmake all" stage produces errors and exits.

Typical output is below.

Funny: the 1.4.3 version can be built on the same computer with no special
trouble. Both the 1.4.3 and 1.5.3 versions can be built on another
computer running CentOS 5.6.

In each case I build 16 versions in total (4 compilers * 32-bit/64-bit *
multithreading support ON/OFF). The same error arises in all 16 versions.

Can someone give a hint about how to avoid this issue? Thanks!

Best wishes,

Paul


Some logs and configure are downloadable here:
https://gigamove.rz.rwth-aachen.de/d/id/2jM6MEa2nveJJD

The configure line is in RUNME.sh, the
logs of configure and build stage in log_* files; I also attached the
config.log file and the configure itself (which is the standard from the
1.5.3 release).


##


CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh
/tmp/pk224850/linuxc2_11254/openmpi-1.5.3mt_linux64_gcc/config/missing
--run aclocal-1.11 -I config
sh: config/ompi_get_version.sh: No such file or directory
/usr/bin/m4: esyscmd subprocess failed



configure.ac:953: warning: OMPI_CONFIGURE_SETUP is m4_require'd but not
m4_defun'd
config/ompi_mca.m4:37: OMPI_MCA is expanded from...
configure.ac:953: the top level
configure.ac:953: warning: AC_COMPILE_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS
../../lib/autoconf/specific.m4:386: AC_USE_SYSTEM_EXTENSIONS is expanded
from...
opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:152:
HWLOC_SETUP_CORE_AFTER_C99 is expanded from...
../../lib/m4sugar/m4sh.m4:505: AS_IF is expanded from...
opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:22: HWLOC_SETUP_CORE is
expanded from...
opal/mca/paffinity/hwloc/configure.m4:40: MCA_paffinity_hwloc_CONFIG is
expanded from...
config/ompi_mca.m4:540: MCA_CONFIGURE_M4_CONFIG_COMPONENT is expanded
from...
config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from...
config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from...
configure.ac:953: warning: AC_RUN_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS




--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915




--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/









Re: [OMPI users] Configure(?) problem building 1.5.3 on Scientific Linux 6.0

2011-07-22 Thread Jeff Squyres
Ah -- Ralph pointed out the relevant line to me in your first mail that I 
initially missed:

> In each case I build 16 versions in total (4 compilers * 32-bit/64-bit *
> multithreading support ON/OFF). The same error arises in all 16 versions.

Perhaps you should just expand the tarball once and then do VPATH builds...?

Something like this:

tar xf openmpi-1.5.3.tar.bz2
cd openmpi-1.5.3

mkdir build-gcc
cd build-gcc
../configure blah..
make -j 4
make install
cd ..

mkdir build-icc
cd build-icc
../configure CC=icc CXX=icpc FC=ifort F77=ifort ..blah.
make -j 4
make install
cd .. 

etc.

This allows you to have one set of source and have N different builds from it.  
Open MPI uses the GNU Autotools correctly to support this kind of build pattern.
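
For many builds, the same pattern can be scripted; a rough sketch, where the
compiler list, configure flags, and install prefixes are placeholders rather
than a tested recipe:

tar xf openmpi-1.5.3.tar.bz2
cd openmpi-1.5.3
# one VPATH build directory per compiler; all share one pristine source tree
for comp in gcc icc; do
    mkdir build-$comp
    ( cd build-$comp && \
      ../configure CC=$comp --prefix=$HOME/openmpi-1.5.3-$comp && \
      make -j 4 && make install )
done

Because each configure runs in its own subdirectory, the source tree (and its
Autotools timestamps) is never touched, which is exactly what avoids the
re-autogen problem described below.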




On Jul 22, 2011, at 2:37 PM, Jeff Squyres wrote:

> Your RUNME script is a *very* strange way to build Open MPI.  It starts with 
> a massive copy:
> 
> cp -r /home/pk224850/OpenMPI/openmpi-1.5.3/AUTHORS 
> /home/pk224850/OpenMPI/openmpi-1.5.3/CMakeLists.txt <...much snipped...> .
> 
> Why are you doing this kind of copy?  I suspect that the GNU autotools' 
> timestamps are getting all out of whack when you do this kind of copy, and 
> therefore when you run "configure", it tries to re-autogen itself.
> 
> To be clear: when you expand OMPI from a tarball, you shouldn't need the GNU 
> Autotools installed at all -- the tarball is pre-bootstrapped exactly to 
> avoid you needing to use the Autotools (much less any specific version of the 
> Autotools).
> 
> I suspect that if you do this:
> 
> -
> tar xf openmpi-1.5.3.tar.bz2
> cd openmpi-1.5.3
> ./configure etc.
> -
> 
> everything will work just fine.
> 
> 
> On Jul 22, 2011, at 11:12 AM, Paul Kapinos wrote:
> 
>> Dear Open MPI folks,
>> currently I have a problem building version 1.5.3 of Open MPI on
>> Scientific Linux 6.0 systems, which seems to me to be a configuration
>> problem.
>> 
>> After the configure run (which seems to terminate without an error code),
>> the "gmake all" stage produces errors and exits.
>> 
>> Typical output is below.
>> 
>> Funny: the 1.4.3 version can be built on the same computer with no special
>> trouble. Both the 1.4.3 and 1.5.3 versions can be built on another
>> computer running CentOS 5.6.
>> 
>> In each case I build 16 versions in total (4 compilers * 32-bit/64-bit *
>> multithreading support ON/OFF). The same error arises in all 16 versions.
>> 
>> Can someone give a hint about how to avoid this issue? Thanks!
>> 
>> Best wishes,
>> 
>> Paul
>> 
>> 
>> Some logs and configure are downloadable here:
>> https://gigamove.rz.rwth-aachen.de/d/id/2jM6MEa2nveJJD
>> 
>> The configure line is in RUNME.sh, the
>> logs of configure and build stage in log_* files; I also attached the
>> config.log file and the configure itself (which is the standard from the
>> 1.5.3 release).
>> 
>> 
>> ##
>> 
>> 
>> CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh
>> /tmp/pk224850/linuxc2_11254/openmpi-1.5.3mt_linux64_gcc/config/missing
>> --run aclocal-1.11 -I config
>> sh: config/ompi_get_version.sh: No such file or directory
>> /usr/bin/m4: esyscmd subprocess failed
>> 
>> 
>> 
>> configure.ac:953: warning: OMPI_CONFIGURE_SETUP is m4_require'd but not
>> m4_defun'd
>> config/ompi_mca.m4:37: OMPI_MCA is expanded from...
>> configure.ac:953: the top level
>> configure.ac:953: warning: AC_COMPILE_IFELSE was called before
>> AC_USE_SYSTEM_EXTENSIONS
>> ../../lib/autoconf/specific.m4:386: AC_USE_SYSTEM_EXTENSIONS is expanded
>> from...
>> opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:152:
>> HWLOC_SETUP_CORE_AFTER_C99 is expanded from...
>> ../../lib/m4sugar/m4sh.m4:505: AS_IF is expanded from...
>> opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:22: HWLOC_SETUP_CORE is
>> expanded from...
>> opal/mca/paffinity/hwloc/configure.m4:40: MCA_paffinity_hwloc_CONFIG is
>> expanded from...
>> config/ompi_mca.m4:540: MCA_CONFIGURE_M4_CONFIG_COMPONENT is expanded
>> from...
>> config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from...
>> config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from...
>> configure.ac:953: warning: AC_RUN_IFELSE was called before
>> AC_USE_SYSTEM_EXTENSIONS
>> 
>> 
>> 
>> 
>> -- 
>> Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
>> RWTH Aachen University, Center for Computing and Communication
>> Seffenter Weg 23,  D 52074  Aachen (Germany)
>> Tel: +49 241/80-24915
>> 
>> 
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

Re: [OMPI users] Configure(?) problem building 1.5.3 on Scientific Linux 6.0

2011-07-22 Thread Jeff Squyres
Your RUNME script is a *very* strange way to build Open MPI.  It starts with a 
massive copy:

cp -r /home/pk224850/OpenMPI/openmpi-1.5.3/AUTHORS 
/home/pk224850/OpenMPI/openmpi-1.5.3/CMakeLists.txt <...much snipped...> .

Why are you doing this kind of copy?  I suspect that the GNU autotools' 
timestamps are getting all out of whack when you do this kind of copy, and 
therefore when you run "configure", it tries to re-autogen itself.

To be clear: when you expand OMPI from a tarball, you shouldn't need the GNU 
Autotools installed at all -- the tarball is pre-bootstrapped exactly to avoid 
you needing to use the Autotools (much less any specific version of the 
Autotools).

I suspect that if you do this:

-
tar xf openmpi-1.5.3.tar.bz2
cd openmpi-1.5.3
./configure etc.
-

everything will work just fine.


On Jul 22, 2011, at 11:12 AM, Paul Kapinos wrote:

> Dear Open MPI folks,
> currently I have a problem building version 1.5.3 of Open MPI on
> Scientific Linux 6.0 systems, which seems to me to be a configuration
> problem.
> 
> After the configure run (which seems to terminate without an error code),
> the "gmake all" stage produces errors and exits.
> 
> Typical output is below.
> 
> Funny: the 1.4.3 version can be built on the same computer with no special
> trouble. Both the 1.4.3 and 1.5.3 versions can be built on another
> computer running CentOS 5.6.
> 
> In each case I build 16 versions in total (4 compilers * 32-bit/64-bit *
> multithreading support ON/OFF). The same error arises in all 16 versions.
> 
> Can someone give a hint about how to avoid this issue? Thanks!
> 
> Best wishes,
> 
> Paul
> 
> 
> Some logs and configure are downloadable here:
> https://gigamove.rz.rwth-aachen.de/d/id/2jM6MEa2nveJJD
> 
>  The configure line is in RUNME.sh, the
> logs of configure and build stage in log_* files; I also attached the
> config.log file and the configure itself (which is the standard from the
> 1.5.3 release).
> 
> 
> ##
> 
> 
> CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh
> /tmp/pk224850/linuxc2_11254/openmpi-1.5.3mt_linux64_gcc/config/missing
> --run aclocal-1.11 -I config
> sh: config/ompi_get_version.sh: No such file or directory
> /usr/bin/m4: esyscmd subprocess failed
> 
> 
> 
> configure.ac:953: warning: OMPI_CONFIGURE_SETUP is m4_require'd but not
> m4_defun'd
> config/ompi_mca.m4:37: OMPI_MCA is expanded from...
> configure.ac:953: the top level
> configure.ac:953: warning: AC_COMPILE_IFELSE was called before
> AC_USE_SYSTEM_EXTENSIONS
> ../../lib/autoconf/specific.m4:386: AC_USE_SYSTEM_EXTENSIONS is expanded
> from...
> opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:152:
> HWLOC_SETUP_CORE_AFTER_C99 is expanded from...
> ../../lib/m4sugar/m4sh.m4:505: AS_IF is expanded from...
> opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:22: HWLOC_SETUP_CORE is
> expanded from...
> opal/mca/paffinity/hwloc/configure.m4:40: MCA_paffinity_hwloc_CONFIG is
> expanded from...
> config/ompi_mca.m4:540: MCA_CONFIGURE_M4_CONFIG_COMPONENT is expanded
> from...
> config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from...
> config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from...
> configure.ac:953: warning: AC_RUN_IFELSE was called before
> AC_USE_SYSTEM_EXTENSIONS
> 
> 
> 
> 
> -- 
> Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241/80-24915
> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Configure(?) problem building 1.5.3 on Scientific Linux 6.0

2011-07-22 Thread Paul Kapinos

Hi Ralph,

> Higher rev levels of the autotools are required for the 1.5 series - are you at
> the right ones? See
> http://www.open-mpi.org/svn/building.php


Many thanks for the link.
A short test, and it's clear: the autoconf version in our distribution is too
old. We have 2.63 and the required one is 2.65.


I will prod our admins...

Best wishes,

Paul




m4 (GNU M4) 1.4.13 (OK)
autoconf (GNU Autoconf) 2.63 (Need: 2.65, NOK)
automake (GNU automake) 1.11.1 (OK)
ltmain.sh (GNU libtool) 2.2.6b (OK)





On Jul 22, 2011, at 9:12 AM, Paul Kapinos wrote:


Dear Open MPI folks,
currently I have a problem building version 1.5.3 of Open MPI on
Scientific Linux 6.0 systems, which seems to me to be a configuration
problem.

After the configure run (which seems to terminate without an error code),
the "gmake all" stage produces errors and exits.

Typical output is below.

Funny: the 1.4.3 version can be built on the same computer with no special
trouble. Both the 1.4.3 and 1.5.3 versions can be built on another
computer running CentOS 5.6.

In each case I build 16 versions in total (4 compilers * 32-bit/64-bit *
multithreading support ON/OFF). The same error arises in all 16 versions.

Can someone give a hint about how to avoid this issue? Thanks!

Best wishes,

Paul


Some logs and configure are downloadable here:
https://gigamove.rz.rwth-aachen.de/d/id/2jM6MEa2nveJJD

 The configure line is in RUNME.sh, the
logs of configure and build stage in log_* files; I also attached the
config.log file and the configure itself (which is the standard from the
1.5.3 release).


##


CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh
/tmp/pk224850/linuxc2_11254/openmpi-1.5.3mt_linux64_gcc/config/missing
--run aclocal-1.11 -I config
sh: config/ompi_get_version.sh: No such file or directory
/usr/bin/m4: esyscmd subprocess failed



configure.ac:953: warning: OMPI_CONFIGURE_SETUP is m4_require'd but not
m4_defun'd
config/ompi_mca.m4:37: OMPI_MCA is expanded from...
configure.ac:953: the top level
configure.ac:953: warning: AC_COMPILE_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS
../../lib/autoconf/specific.m4:386: AC_USE_SYSTEM_EXTENSIONS is expanded
from...
opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:152:
HWLOC_SETUP_CORE_AFTER_C99 is expanded from...
../../lib/m4sugar/m4sh.m4:505: AS_IF is expanded from...
opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:22: HWLOC_SETUP_CORE is
expanded from...
opal/mca/paffinity/hwloc/configure.m4:40: MCA_paffinity_hwloc_CONFIG is
expanded from...
config/ompi_mca.m4:540: MCA_CONFIGURE_M4_CONFIG_COMPONENT is expanded
from...
config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from...
config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from...
configure.ac:953: warning: AC_RUN_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS




--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915






--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915




Re: [OMPI users] Configure(?) problem building 1.5.3 on Scientific Linux 6.0

2011-07-22 Thread Ralph Castain
Higher rev levels of the autotools are required for the 1.5 series - are you at 
the right ones? See

http://www.open-mpi.org/svn/building.php
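
A quick way to check what is installed against that page (a plain sketch; the
required version numbers are listed on the page itself):

m4 --version | head -n 1
autoconf --version | head -n 1
automake --version | head -n 1
libtool --version | head -n 1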


On Jul 22, 2011, at 9:12 AM, Paul Kapinos wrote:

> Dear Open MPI folks,
> currently I have a problem building version 1.5.3 of Open MPI on
> Scientific Linux 6.0 systems, which seems to me to be a configuration
> problem.
> 
> After the configure run (which seems to terminate without an error code),
> the "gmake all" stage produces errors and exits.
> 
> Typical output is below.
> 
> Funny: the 1.4.3 version can be built on the same computer with no special
> trouble. Both the 1.4.3 and 1.5.3 versions can be built on another
> computer running CentOS 5.6.
> 
> In each case I build 16 versions in total (4 compilers * 32-bit/64-bit *
> multithreading support ON/OFF). The same error arises in all 16 versions.
> 
> Can someone give a hint about how to avoid this issue? Thanks!
> 
> Best wishes,
> 
> Paul
> 
> 
> Some logs and configure are downloadable here:
> https://gigamove.rz.rwth-aachen.de/d/id/2jM6MEa2nveJJD
> 
>  The configure line is in RUNME.sh, the
> logs of configure and build stage in log_* files; I also attached the
> config.log file and the configure itself (which is the standard from the
> 1.5.3 release).
> 
> 
> ##
> 
> 
> CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh
> /tmp/pk224850/linuxc2_11254/openmpi-1.5.3mt_linux64_gcc/config/missing
> --run aclocal-1.11 -I config
> sh: config/ompi_get_version.sh: No such file or directory
> /usr/bin/m4: esyscmd subprocess failed
> 
> 
> 
> configure.ac:953: warning: OMPI_CONFIGURE_SETUP is m4_require'd but not
> m4_defun'd
> config/ompi_mca.m4:37: OMPI_MCA is expanded from...
> configure.ac:953: the top level
> configure.ac:953: warning: AC_COMPILE_IFELSE was called before
> AC_USE_SYSTEM_EXTENSIONS
> ../../lib/autoconf/specific.m4:386: AC_USE_SYSTEM_EXTENSIONS is expanded
> from...
> opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:152:
> HWLOC_SETUP_CORE_AFTER_C99 is expanded from...
> ../../lib/m4sugar/m4sh.m4:505: AS_IF is expanded from...
> opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:22: HWLOC_SETUP_CORE is
> expanded from...
> opal/mca/paffinity/hwloc/configure.m4:40: MCA_paffinity_hwloc_CONFIG is
> expanded from...
> config/ompi_mca.m4:540: MCA_CONFIGURE_M4_CONFIG_COMPONENT is expanded
> from...
> config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from...
> config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from...
> configure.ac:953: warning: AC_RUN_IFELSE was called before
> AC_USE_SYSTEM_EXTENSIONS
> 
> 
> 
> 
> -- 
> Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241/80-24915
> 
> 




[OMPI users] and the next one (3rd today!) PGI+OpenMPI issue

2011-07-22 Thread Paul Kapinos
... just did almost the same thing: tried to install Open MPI 1.4.3 using 
the 11.7 PGI compiler on Scientific Linux 6.0. It fails at the same place, but 
with a different error message:

--
/usr/lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
gmake[2]: *** [libmpi_cxx.la] Error 2
gmake[2]: Leaving directory 
`/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux64_pgi/ompi/mpi/cxx'

--

and then the compilation aborts. The configure string is below. With the 
Intel, gcc, and Studio compilers, the very same installations went through 
without trouble.


Maybe someone can give me a hint as to whether this is an issue with Open MPI, 
PGI, or something else...


Best wishes,

Paul

P.S.

again, more logs downloadable:
https://gigamove.rz.rwth-aachen.de/d/id/WNk69nPr4w7svT


--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915




[OMPI users] Usage of PGI compilers (Libtool or OpenMPI issue?)

2011-07-22 Thread Paul Kapinos

Hi,

just found out: the --instantiation_dir, --one_instantiation_per_object, 
and --template_info_file flags are deprecated in the newer versions of 
the PGI compilers, cf. http://www.pgroup.com/support/release_tprs_2010.htm


But, compiling Open MPI 1.4.3 with the 11.7 PGI compilers, I see the warnings:

pgCC-Warning-prelink_objects switch is deprecated
pgCC-Warning-instantiation_dir switch is deprecated

coming from the call shown below.

I do not know whether this is a Libtool issue or a libtool-usage (= Open MPI) 
issue, but I did not want to keep it a secret...


Best wishes
Paul Kapinos






libtool: link:  pgCC --prelink_objects --instantiation_dir Template.dir 
  .libs/mpicxx.o .libs/intercepts.o .libs/comm.o .libs/datatype.o 
.libs/win.o .libs/file.o   -Wl,--rpath 
-Wl,/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux32_pgi/ompi/.libs 
-Wl,--rpath 
-Wl,/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux32_pgi/orte/.libs 
-Wl,--rpath 
-Wl,/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux32_pgi/opal/.libs 
-Wl,--rpath -Wl,/opt/MPI/openmpi-1.4.3/linux/pgi/lib/lib32 
-L/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux32_pgi/orte/.libs 
-L/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux32_pgi/opal/.libs 
-L/opt/lsf/8.0/linux2.6-glibc2.3-x86/lib ../../../ompi/.libs/libmpi.so 
/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux32_pgi/orte/.libs/libopen-rte.so 
/tmp/pk224850/linuxc2_11254/openmpi-1.4.3_linux32_pgi/opal/.libs/libopen-pal.so 
-ldl -lnsl -lutil



--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915




[OMPI users] stuck after IMB calling MPI_Finalize in Open MPI trunk

2011-07-22 Thread tma

Hi:

I tried to run IMB's Bcast test with 144 processes on 6
nodes (24 cores/node) with the Open MPI trunk, like:
mpirun --hostfile ~/host --bynode -np 144  ./IMB-MPI1 Bcast -npmin 144

Most of the time, it gets stuck there after IMB calls MPI_Finalize. When I
quit with Ctrl+C, it throws a complaint like:
[frennes.rennes.grid5000.fr:32047] [[38098,0],0,0]:route_callback trying
to get message from [[38098,0],0,0] to [[38098,0],4,0]:1, routing loop
[0]
func:/home/tma/opt/ompitrunk/lib/libopen-rte.so.0(opal_backtrace_print+0x1f)
[0x7fb46a06d06f]
[1] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_rml_oob.so [0x7fb467d69e41]
[2] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_oob_tcp.so [0x7fb467b606de]
[3] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_oob_tcp.so [0x7fb467b641a7]
[4] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_rml_oob.so [0x7fb467d6c23a]
[5]
func:/home/tma/opt/ompitrunk/lib/libopen-rte.so.0(orte_plm_base_orted_exit+0x182)
[0x7fb46a02a472]
[6] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_plm_rsh.so [0x7fb467f7124b]
[7] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_errmgr_hnp.so
[0x7fb46693abf7]
[8] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_errmgr_hnp.so
[0x7fb46693c531]
[9] func:/home/tma/opt/ompitrunk/lib/libopen-rte.so.0 [0x7fb46a026b3f]
[10]
func:/home/tma/opt/ompitrunk/lib/libopen-rte.so.0(opal_libevent207_event_base_loop+0x3f5)
[0x7fb46a07b7f5]
[11] func:/home/tma/opt/mpi/bin/mpirun [0x403b45]
[12] func:/home/tma/opt/mpi/bin/mpirun [0x402fd7]
[13] func:/lib/libc.so.6(__libc_start_main+0xe6) [0x7fb468fc81a6]
[14] func:/home/tma/opt/mpi/bin/mpirun [0x402ef9]



Thanks for the help
Teng Ma


[OMPI users] Configure(?) problem building 1.5.3 on Scientific Linux 6.0

2011-07-22 Thread Paul Kapinos

Dear Open MPI folks,
currently I have a problem building version 1.5.3 of Open MPI on
Scientific Linux 6.0 systems, which seems to me to be a configuration
problem.

After the configure run (which seems to terminate without an error code),
the "gmake all" stage produces errors and exits.

Typical output is below.

Funny: the 1.4.3 version can be built on the same computer with no special
trouble. Both the 1.4.3 and 1.5.3 versions can be built on another
computer running CentOS 5.6.

In each case I build 16 versions in total (4 compilers * 32-bit/64-bit *
multithreading support ON/OFF). The same error arises in all 16 versions.

Can someone give a hint about how to avoid this issue? Thanks!

Best wishes,

Paul


Some logs and configure are downloadable here:
https://gigamove.rz.rwth-aachen.de/d/id/2jM6MEa2nveJJD

  The configure line is in RUNME.sh, the
logs of configure and build stage in log_* files; I also attached the
config.log file and the configure itself (which is the standard from the
1.5.3 release).


##


CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh
/tmp/pk224850/linuxc2_11254/openmpi-1.5.3mt_linux64_gcc/config/missing
--run aclocal-1.11 -I config
sh: config/ompi_get_version.sh: No such file or directory
/usr/bin/m4: esyscmd subprocess failed



configure.ac:953: warning: OMPI_CONFIGURE_SETUP is m4_require'd but not
m4_defun'd
config/ompi_mca.m4:37: OMPI_MCA is expanded from...
configure.ac:953: the top level
configure.ac:953: warning: AC_COMPILE_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS
../../lib/autoconf/specific.m4:386: AC_USE_SYSTEM_EXTENSIONS is expanded
from...
opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:152:
HWLOC_SETUP_CORE_AFTER_C99 is expanded from...
../../lib/m4sugar/m4sh.m4:505: AS_IF is expanded from...
opal/mca/paffinity/hwloc/hwloc/config/hwloc.m4:22: HWLOC_SETUP_CORE is
expanded from...
opal/mca/paffinity/hwloc/configure.m4:40: MCA_paffinity_hwloc_CONFIG is
expanded from...
config/ompi_mca.m4:540: MCA_CONFIGURE_M4_CONFIG_COMPONENT is expanded
from...
config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from...
config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from...
configure.ac:953: warning: AC_RUN_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS




--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915



