Re: [OMPI users] [Fwd: MPI question/problem] including code attachments

2007-06-20 Thread George Bosilca

Jeff,

With the proper MPI_Finalize added at the end of the main function,
your program works fine with the current version of Open MPI on up to
32 processors. Here is the output I got for 4 processors:


I am 2 of 4 WORLD procesors
I am 3 of 4 WORLD procesors
I am 0 of 4 WORLD procesors
I am 1 of 4 WORLD procesors
Initial inttemp 1
Initial inttemp 0
final inttemp 0,0
0, WORLD barrier leaving routine
final inttemp 1,0
1, WORLD barrier leaving routine
Initial inttemp 2
final inttemp 2,0
2, WORLD barrier leaving routine
SERVER Got a DONE flag
Initial inttemp 3
final inttemp 3,0
3, WORLD barrier leaving routine

This output seems to indicate that the program is running to  
completion and it does what you expect it to do.


Btw, what version of Open MPI are you using, and on what kind of
hardware?


  george.

On Jun 20, 2007, at 6:31 PM, Jeffrey L. Tilson wrote:


Hi,
ANL suggested I post this question to you. This is my second
posting, but now with the proper attachments.


From: Jeffrey Tilson 
Date: June 20, 2007 5:17:50 PM PDT
To: mpich2-ma...@mcs.anl.gov, Jeffrey Tilson 
Subject: MPI question/problem


Hello All,
This will probably turn out to be my fault as I haven't used MPI in  
a few years.


I am attempting to use an MPI implementation of a "nxtval" (see the  
MPI book). I am using the client-server scenario. The MPI book  
specifies the three functions required. Two are collective and one  
is not. Only the  two collectives are tested in the supplied code.  
All three of the MPI functions are reproduced in the attached code,  
however.  I wrote a tiny application to create and free a counter  
object and it fails.


I need to know whether this is a bug in the MPI book or a
misunderstanding on my part.


The complete code is attached. I was using openMPI/intel to compile  
and run.


The error I get is:


[compute-0-1.local:22637] *** An error occurred in MPI_Comm_rank
[compute-0-1.local:22637] *** on communicator MPI_COMM_WORLD
[compute-0-1.local:22637] *** MPI_ERR_COMM: invalid communicator
[compute-0-1.local:22637] *** MPI_ERRORS_ARE_FATAL (goodbye)
mpirun noticed that job rank 0 with PID 22635 on node  
"compute-0-1.local" exited on signal 15.


I've attempted to google my way to understanding but with little  
success. If someone could point me to
a sample application that actually uses these functions, I would  
appreciate it.


Sorry if this is the wrong list, it is not an MPICH question and I  
wasn't sure where to turn.


Thanks,
--jeff

[OMPI users] [Fwd: MPI question/problem] including code attachments

2007-06-20 Thread Jeffrey L. Tilson

Hi,
ANL suggested I post this question to you. This is my second
posting, but now with the proper attachments.
--- Begin Message ---

Hello All,
This will probably turn out to be my fault as I haven't used MPI in a 
few years.


I am attempting to use an MPI implementation of a "nxtval" (see the MPI 
book). I am using the client-server scenario. The MPI book specifies the 
three functions required. Two are collective and one is not. Only the  
two collectives are tested in the supplied code. All three of the MPI 
functions are reproduced in the attached code, however.  I wrote a tiny 
application to create and free a counter object and it fails.


I need to know whether this is a bug in the MPI book or a misunderstanding
on my part.


The complete code is attached. I was using openMPI/intel to compile and 
run.


The error I get is:


[compute-0-1.local:22637] *** An error occurred in MPI_Comm_rank
[compute-0-1.local:22637] *** on communicator MPI_COMM_WORLD
[compute-0-1.local:22637] *** MPI_ERR_COMM: invalid communicator
[compute-0-1.local:22637] *** MPI_ERRORS_ARE_FATAL (goodbye)
mpirun noticed that job rank 0 with PID 22635 on node 
"compute-0-1.local" exited on signal 15.


I've attempted to google my way to understanding but with little 
success. If someone could point me to
a sample application that actually uses these functions, I would 
appreciate it.


Sorry if this is the wrong list, it is not an MPICH question and I 
wasn't sure where to turn.


Thanks,
--jeff



/* A beginning piece of code to perform large-scale web construction.  */ 
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "mpi.h"

typedef struct {
char description[1024];
double startwtime;
double endwtime;
double difftime;
} Timer;

/* prototypes */
int MPE_Counter_nxtval( MPI_Comm, int * );
int MPE_Counter_free( MPI_Comm *, MPI_Comm * );
void MPE_Counter_create( MPI_Comm, MPI_Comm *, MPI_Comm * );
/* End prototypes */

/* Globals */
int  rank,numsize;

int main( int argc, char **argv )
{

int i,j;
MPI_Status   status;
MPI_Request  r;
MPI_Comm  smaller_comm,  counter_comm;


int numtimings=0;
int inttemp;
int value=-1;
int server;

//Init parallel environment

MPI_Init( &argc, &argv );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );
MPI_Comm_size( MPI_COMM_WORLD, &numsize );

printf("I am %i of %i WORLD procesors\n",rank,numsize);
server = numsize -1;

MPE_Counter_create( MPI_COMM_WORLD, &smaller_comm, &counter_comm );
printf("Initial inttemp %i\n",rank);

inttemp = MPE_Counter_free( &smaller_comm, &counter_comm );
printf("final inttemp %i,%i\n",rank,inttemp);

printf("%i, WORLD barrier leaving routine\n",rank);
MPI_Barrier( MPI_COMM_WORLD );
MPI_Finalize();   /* missing from the original attachment; see the replies in this thread */
return 0;
}

/* Add new MPICH based shared counter.
   grabbed from
   http://www-unix.mcs.anl.gov/mpi/usingmpi/examples/advanced/nxtval_create_c.htm */

/* tag values */
#define REQUEST 0
#define GOAWAY  1
#define VALUE   2
#define MPE_SUCCESS 0 

void MPE_Counter_create( MPI_Comm oldcomm, MPI_Comm * smaller_comm, MPI_Comm * 
counter_comm )
{
int counter = 0;
int message, done = 0, myid, numprocs, server, color,ranks[1];
MPI_Status status;
MPI_Group oldgroup, smaller_group;

MPI_Comm_size( oldcomm, &numprocs );
MPI_Comm_rank( oldcomm, &myid );
server = numprocs-1; /*   last proc is server */
MPI_Comm_dup( oldcomm, counter_comm ); /* make one new comm */
if (myid == server) color = MPI_UNDEFINED;
else color =0;
MPI_Comm_split( oldcomm, color, myid, smaller_comm);

if (myid == server) {   /* I am the server */
while (!done) {
MPI_Recv( &message, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
          *counter_comm, &status );
if (status.MPI_TAG == REQUEST) {
MPI_Send( &counter, 1, MPI_INT, status.MPI_SOURCE, VALUE,
          *counter_comm );
counter++;
}
else if (status.MPI_TAG == GOAWAY) {
printf("SERVER Got a DONE flag\n");
done = 1;
}
else {
fprintf(stderr, "bad tag sent to MPE counter\n");
MPI_Abort(*counter_comm, 1);
   }
}
MPE_Counter_free( smaller_comm, counter_comm );
}
}

/***/
int MPE_Counter_free( MPI_Comm *smaller_comm, MPI_Comm * counter_comm )  
{
int myid, numprocs;

MPI_Comm_rank( *counter_comm, &myid );
MPI_Comm_size( *counter_comm, &numprocs );

if (myid == 0)
MPI_Send(NULL, 0, MPI_INT, numprocs-1, GOAWAY, *counter_comm);

MPI_Comm_free( counter_comm );

if (*smaller_comm != MPI_COMM_NULL) {
MPI_Comm_free( smaller_comm );
}
return 0;
}

//
int MPE_Counter_nxtval(MPI_Comm counter_comm, int * value)
{
int server,numprocs;
MPI_Status status;

MPI_Comm_size( counter_comm, &numprocs );
server = numprocs - 1;
MPI_Send( NULL, 0, MPI_INT, server, REQUEST, counter_comm );
MPI_Recv( value, 1, MPI_INT, server, VALUE, counter_comm, &status );
return 0;
}

[OMPI users] [Fwd: Re: [MPICH2 Req #3480] MPI question/problem]

2007-06-20 Thread Jeffrey L. Tilson

Hello openMPI users,
ANL suggested I post to you. BTW, the missing MPI_Finalize is not the
problem.

--jeff



--- Begin Message ---


On Wed, 20 Jun 2007, Jeffrey Tilson wrote:

> Hello All,
> This will probably turn out to be my fault as I haven't used MPI in a
> few years.
>
> I am attempting to use an MPI implementation of a "nxtval" (see the MPI
> book). I am using the client-server scenario. The MPI book specifies the
> three functions required. Two are collective and one is not. Only the
> two collectives are tested in the supplied code. All three of the MPI
> functions are reproduced in the attached code, however.  I wrote a tiny
> application to create and free a counter object and it fails.
>
> I need to know if this is a bug in the MPI book and a misunderstanding
> on my part.
>
> The complete code is attached. I was using openMPI/intel to compile and
> run.

If you are using OpenMPI (which is a different MPI implementation) to
compile your code, you may want to contact OpenMPI folks at
us...@open-mpi.org or www.open-mpi.org for support.  Our current MPI
release is mpich2-1.0.5p4 at http://www.mcs.anl.gov/mpi/mpich

However, from a quick look at your program, I noticed that your code does
not have MPI_Finalize.  Without it, your program isn't a valid MPI program.

A.Chan

>
> The error I get is:
>
> > [compute-0-1.local:22637] *** An error occurred in MPI_Comm_rank
> > [compute-0-1.local:22637] *** on communicator MPI_COMM_WORLD
> > [compute-0-1.local:22637] *** MPI_ERR_COMM: invalid communicator
> > [compute-0-1.local:22637] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > mpirun noticed that job rank 0 with PID 22635 on node
> > "compute-0-1.local" exited on signal 15.
>
> I've attempted to google my way to understanding but with little
> success. If someone could point me to
> a sample application that actually uses these functions, I would
> appreciate it.
>
> Sorry if this is the wrong list, it is not an MPICH question and I
> wasn't sure where to turn.
>
> Thanks,
> --jeff
>
> 
>
>
--- End Message ---


[OMPI users] mpi python recommendation

2007-06-20 Thread de Almeida, Valmor F.

Hello list,

I would appreciate recommendations on what to use for developing MPI
python codes. I've seen several packages in the public domain: mympi, pypar,
mpi python, and mpi4py, and it would be helpful to start in the right
direction.

Thanks,

--
Valmor de Almeida
ORNL

PS. I apologize if this message appears twice in the list but something
weird happened with my first send; probably a local problem on my side.




[OMPI users] Open MPI v1.2.3 released

2007-06-20 Thread Tim Mattox

The Open MPI Team, representing a consortium of research, academic,
and industry partners, is pleased to announce the release of Open MPI
version 1.2.3. This release is mainly a bug fix release over the v1.2.2
release, but there are a few minor new features.  We strongly
recommend that all users upgrade to version 1.2.3 if possible.

Version 1.2.3 can be downloaded from the main Open MPI web site or
any of its mirrors (mirrors will be updating shortly).

Here is a list of changes in v1.2.3 as compared to v1.2.2:

- Fix a regression in comm_spawn functionality that inadvertently
 caused the mapping of child processes to always start at the same
 place.  Thanks to Prakash Velayutham for helping discover the
 problem.
- Fix segfault when a user's home directory is unavailable on a remote
 node.  Thanks to Guillaume Thomas-Collignon for bringing the issue
 to our attention.
- Fix MPI_IPROBE to properly handle MPI_STATUS_IGNORE on mx and
 psm MTLs. Thanks to Sophia Corwell for finding this and supplying a
 reproducer.
- Fix some error messages in the tcp BTL.
- Use _NSGetEnviron instead of environ on Mac OS X so that there
 are no undefined symbols in the shared libraries.
- On OS X, when MACOSX_DEPLOYMENT_TARGET is 10.3 or higher,
 support building the Fortran 90 bindings as a shared library.  Thanks to
 Jack Howarth for his advice on making this work.
- No longer require extra include flag for the C++ bindings.
- Fix detection of weak symbols support with Intel compilers.
- Fix issue found by Josh England: ompi_info would not show framework
 MCA parameters set in the environment properly.
- Rename the oob_tcp_include/exclude MCA params to oob_tcp_if_include/exclude
 so that they match the naming convention of the btl_tcp_if_include/exclude
 params.  The old names are deprecated, but will still work.
- Add -wd as a synonym for the -wdir orterun/mpirun option.
- Fix the mvapi BTL to compile properly with compilers that do not support
 anonymous unions.  Thanks to Luis Kornblueh for reporting the bug.
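For example, under the new naming the OOB and BTL TCP interface selections line up (the interface name and application below are illustrative, not from the announcement):

```shell
# v1.2.3: oob_tcp_if_* now matches the btl_tcp_if_* convention
mpirun --mca oob_tcp_if_include eth0 --mca btl_tcp_if_include eth0 -np 4 ./app
# Pre-1.2.3 spelling, now deprecated but still accepted:
#   --mca oob_tcp_include eth0
```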

--
Tim Mattox
Open Systems Lab
Indiana University


Re: [OMPI users] error -- libtool unsupported hardcode properties

2007-06-20 Thread Jeff Squyres

On Jun 20, 2007, at 11:41 AM, Andrew Friedley wrote:


I'm not seeing anything particularly relevant in the libtool
documentation.  I think this might be referring to hardcoding paths in
shared libraries?

Using pathf90 for both FC and F77 does not change anything.  Should have
been more clear in my first email -- gcc 3.4.5 using pathf90 for FC
works fine.

Even more interesting -- I just tried configuring with --disable-mpi-f77
--disable-mpi-f90 and still see the exact same error.  So I'm pretty
convinced this is not a fortran issue.  I've tried disabling debugging
and using -O0 as well, but still get the same error.


Bummer.  :-(


Is there anywhere else I should be looking?  I have to admit, I'm
stumped here..


It could be a gcc bug...?  What I would do is try to replicate this  
with a small / non-OMPI example using a trivial AC/AM/LT-based  
project.  If you can replicate (which I hope you would be able to; we  
do some weird things in building OMPI, but not that weird), send it  
on to the libtool list.



Andrew


Jeff Squyres wrote:

It could be; I didn't mention it because this is building ompi_info,
a C++ application that should have no fortran issues with it.

But then again, who knows?  Maybe you're right :-) -- perhaps libtool
is just getting confused because you used g77 and pathf90 -- why not
use pathf90 for both FC and F77?  pathf90 is capable of compiling
both Fortran 77 and 90 applications.


On Jun 20, 2007, at 5:58 AM, Terry Frankcombe wrote:


Isn't this another case of trying to use two different Fortran
compilers
at the same time?


On Tue, 2007-06-19 at 20:04 -0400, Jeff Squyres wrote:

I have not seen this before -- did you look in the libtool
documentation?  ("See the libtool documentation for more
information.")

On Jun 19, 2007, at 6:46 PM, Andrew Friedley wrote:


I'm trying to build Open MPI v1.2.2 with gcc/g++/g77 3.4.4 and
pathf90 v2.4 on a linux system, and see this error when compiling
ompi_info:

/bin/sh ../../../libtool --tag=CXX --mode=link g++ -g -O2 -finline-functions \
  -pthread -export-dynamic -o ompi_info components.o ompi_info.o output.o \
  param.o version.o ../../../ompi/libmpi.la -lnsl -lutil -lm
libtool: link: unsupported hardcode properties
libtool: link: See the libtool documentation for more information.
libtool: link: Fatal configuration error.
make[2]: *** [ompi_info] Error 1
make[2]: Leaving directory `/g/g21/afriedle/work/ompibuild/openmpi-1.2.2/ompi/tools/ompi_info'
make[1]: *** [all-recursive] Error 1

Google didn't turn anything up.  Strange thing is, gcc 3.4.5 works
just fine on this system.  I'm using this to build:

export CC=gcc
export CXX=g++
export F77=g77
export FC=pathf90
export CFLAGS="-g -O2"
export CXXFLAGS="-g -O2"
export FFLAGS="-fno-second-underscore -g -O2"
export FCFLAGS="-fno-second-underscore -g -O2"
export PREFIX=$ROOT/gnudbg

./configure --prefix=$PREFIX --enable-debug --enable-mpi-f77 --enable-mpi-f90 --with-openib=/usr

I've attached the config.log.. any ideas?

Andrew





___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] error -- libtool unsupported hardcode properties

2007-06-20 Thread Andrew Friedley
I'm not seeing anything particularly relevant in the libtool 
documentation.  I think this might be referring to hardcoding paths in 
shared libraries?


Using pathf90 for both FC and F77 does not change anything.  Should have 
been more clear in my first email -- gcc 3.4.5 using pathf90 for FC 
works fine.


Even more interesting -- I just tried configuring with --disable-mpi-f77 
--disable-mpi-f90 and still see the exact same error.  So I'm pretty 
convinced this is not a fortran issue.  I've tried disabling debugging 
and using -O0 as well, but still get the same error.


Is there anywhere else I should be looking?  I have to admit, I'm 
stumped here..


Andrew


Jeff Squyres wrote:
It could be; I didn't mention it because this is building ompi_info,  
a C++ application that should have no fortran issues with it.


But then again, who knows?  Maybe you're right :-) -- perhaps libtool  
is just getting confused because you used g77 and pathf90 -- why not  
use pathf90 for both FC and F77?  pathf90 is capable of compiling  
both Fortran 77 and 90 applications.



On Jun 20, 2007, at 5:58 AM, Terry Frankcombe wrote:

Isn't this another case of trying to use two different Fortran compilers
at the same time?


On Tue, 2007-06-19 at 20:04 -0400, Jeff Squyres wrote:

I have not seen this before -- did you look in the libtool
documentation?  ("See the libtool documentation for more  
information.")


On Jun 19, 2007, at 6:46 PM, Andrew Friedley wrote:


I'm trying to build Open MPI v1.2.2 with gcc/g++/g77 3.4.4 and
pathf90 v2.4 on a linux system, and see this error when compiling
ompi_info:

/bin/sh ../../../libtool --tag=CXX --mode=link g++  -g -O2 -finline-
functions -pthread  -export-dynamic   -o ompi_info components.o
ompi_info.o output.o param.o version.o ../../../ompi/libmpi.la -
lnsl -lutil  -lm
libtool: link: unsupported hardcode properties
libtool: link: See the libtool documentation for more information.
libtool: link: Fatal configuration error.
make[2]: *** [ompi_info] Error 1
make[2]: Leaving directory `/g/g21/afriedle/work/ompibuild/openmpi-1.2.2/ompi/tools/ompi_info'
make[1]: *** [all-recursive] Error 1

Google didn't turn anything up.  Strange thing is, gcc 3.4.5 works
just fine on this system.  I'm using this to build:

export CC=gcc
export CXX=g++
export F77=g77
export FC=pathf90
export CFLAGS="-g -O2"
export CXXFLAGS="-g -O2"
export FFLAGS="-fno-second-underscore -g -O2"
export FCFLAGS="-fno-second-underscore -g -O2"
export PREFIX=$ROOT/gnudbg

./configure --prefix=$PREFIX --enable-debug --enable-mpi-f77 --enable-mpi-f90 --with-openib=/usr

I've attached the config.log.. any ideas?

Andrew





___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users





Re: [OMPI users] Problem with user shell and mpirun-prefix

2007-06-20 Thread Jeff Squyres

On Jun 19, 2007, at 11:35 AM, Alf Wachsmann wrote:


In line 568 of openmpi-1.2.2/orte/mca/pls/rsh/pls_rsh_module.c the call
"p = getpwuid(getuid());" returns an invalid shell on our compute nodes.
This leads to "pls:rsh: local csh: 0, local sh: 0", i.e. the local shell
is not defined and only the user's ~/.profile gets executed in lines 649ff.
This forces users to set their LD_LIBRARY_PATH instead of having
OpenMPI do this for them in lines 981ff.


Wow -- neat!  We certainly didn't think of this case.  :-)

Before LSF starts a user job, it sets their complete environment including
the SHELL environment variable. I am wondering whether OpenMPI could look
at that env. variable in lines 567ff in addition to or instead of the
getpwuid() call.


A good idea; yes, we can do this.  I'm sorry that it won't make the  
1.2.3 release, though.  I'll file a ticket.


FWIW: we're actively working on native LSF support in Open MPI such  
that this kind of rsh tomfoolery won't be necessary in the future.   
We hope to have it ready for the 1.3 series.


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] error -- libtool unsupported hardcode properties

2007-06-20 Thread Jeff Squyres
It could be; I didn't mention it because this is building ompi_info,  
a C++ application that should have no fortran issues with it.


But then again, who knows?  Maybe you're right :-) -- perhaps libtool  
is just getting confused because you used g77 and pathf90 -- why not  
use pathf90 for both FC and F77?  pathf90 is capable of compiling  
both Fortran 77 and 90 applications.
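Applied to the build environment from the original report, that suggestion would look like this (flags copied from the thread; a sketch only -- note that elsewhere in the thread Andrew reports it did not resolve the error on his system):

```shell
export CC=gcc
export CXX=g++
export F77=pathf90      # was g77
export FC=pathf90
export FFLAGS="-fno-second-underscore -g -O2"
export FCFLAGS="-fno-second-underscore -g -O2"
./configure --prefix=$PREFIX --enable-debug --enable-mpi-f77 --enable-mpi-f90 --with-openib=/usr
```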



On Jun 20, 2007, at 5:58 AM, Terry Frankcombe wrote:

Isn't this another case of trying to use two different Fortran compilers
at the same time?


On Tue, 2007-06-19 at 20:04 -0400, Jeff Squyres wrote:

I have not seen this before -- did you look in the libtool
documentation?  ("See the libtool documentation for more  
information.")


On Jun 19, 2007, at 6:46 PM, Andrew Friedley wrote:


I'm trying to build Open MPI v1.2.2 with gcc/g++/g77 3.4.4 and
pathf90 v2.4 on a linux system, and see this error when compiling
ompi_info:

/bin/sh ../../../libtool --tag=CXX --mode=link g++  -g -O2 -finline-
functions -pthread  -export-dynamic   -o ompi_info components.o
ompi_info.o output.o param.o version.o ../../../ompi/libmpi.la -
lnsl -lutil  -lm
libtool: link: unsupported hardcode properties
libtool: link: See the libtool documentation for more information.
libtool: link: Fatal configuration error.
make[2]: *** [ompi_info] Error 1
make[2]: Leaving directory `/g/g21/afriedle/work/ompibuild/
openmpi-1.2.2/ompi/tools/ompi_info'
make[1]: *** [all-recursive] Error 1

Google didn't turn anything up.  Strange thing is, gcc 3.4.5 works
just fine on this system.  I'm using this to build:

export CC=gcc
export CXX=g++
export F77=g77
export FC=pathf90
export CFLAGS="-g -O2"
export CXXFLAGS="-g -O2"
export FFLAGS="-fno-second-underscore -g -O2"
export FCFLAGS="-fno-second-underscore -g -O2"
export PREFIX=$ROOT/gnudbg

./configure --prefix=$PREFIX --enable-debug --enable-mpi-f77 --
enable-mpi-f90 --with-openib=/usr

I've attached the config.log.. any ideas?

Andrew







___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] error -- libtool unsupported hardcode properties

2007-06-20 Thread Terry Frankcombe
Isn't this another case of trying to use two different Fortran compilers
at the same time?


On Tue, 2007-06-19 at 20:04 -0400, Jeff Squyres wrote:
> I have not seen this before -- did you look in the libtool  
> documentation?  ("See the libtool documentation for more information.")
> 
> On Jun 19, 2007, at 6:46 PM, Andrew Friedley wrote:
> 
> > I'm trying to build Open MPI v1.2.2 with gcc/g++/g77 3.4.4 and  
> > pathf90 v2.4 on a linux system, and see this error when compiling  
> > ompi_info:
> >
> > /bin/sh ../../../libtool --tag=CXX --mode=link g++  -g -O2 -finline- 
> > functions -pthread  -export-dynamic   -o ompi_info components.o  
> > ompi_info.o output.o param.o version.o ../../../ompi/libmpi.la - 
> > lnsl -lutil  -lm
> > libtool: link: unsupported hardcode properties
> > libtool: link: See the libtool documentation for more information.
> > libtool: link: Fatal configuration error.
> > make[2]: *** [ompi_info] Error 1
> > make[2]: Leaving directory `/g/g21/afriedle/work/ompibuild/ 
> > openmpi-1.2.2/ompi/tools/ompi_info'
> > make[1]: *** [all-recursive] Error 1
> >
> > Google didn't turn anything up.  Strange thing is, gcc 3.4.5 works  
> > just fine on this system.  I'm using this to build:
> >
> > export CC=gcc
> > export CXX=g++
> > export F77=g77
> > export FC=pathf90
> > export CFLAGS="-g -O2"
> > export CXXFLAGS="-g -O2"
> > export FFLAGS="-fno-second-underscore -g -O2"
> > export FCFLAGS="-fno-second-underscore -g -O2"
> > export PREFIX=$ROOT/gnudbg
> >
> > ./configure --prefix=$PREFIX --enable-debug --enable-mpi-f77 -- 
> > enable-mpi-f90 --with-openib=/usr
> >
> > I've attached the config.log.. any ideas?
> >
> > Andrew
> > 
> > 
> 
> 



Re: [OMPI users] Processes stuck in MPI_BARRIER

2007-06-20 Thread Gleb Natapov
On Tue, Jun 19, 2007 at 11:24:24AM -0700, George Bosilca wrote:
> 1. I don't believe the OS to release the binding when we close the  
> socket. As an example on Linux the kernel sockets are release at a  
> later moment. That means the socket might be still in use for the  
> next run.
>
This is not Linux specific; it is required by the TCP RFC. The socket that
initiated the close remains in the TIME_WAIT state for twice the maximum
segment lifetime before the address can be reused.

--
Gleb.