Re: [OMPI users] [Open MPI] #2681: ompi-server publish name broken in 1.5.x

2011-01-11 Thread Bernard Secher - SFME/LGLS
Hello, this feature is very important for my project, which manages the coupling of parallel codes. Please correct this bug as soon as possible. Best, Bernard. Open MPI wrote: #2681: ompi-server publish name broken in 1.5.x

Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-07 Thread Bernard Secher - SFME/LGLS
The accept and connect tests are OK with openmpi version 1.4.1. I think there is a bug in version 1.5.1. Best, Bernard. Bernard Secher - SFME/LGLS wrote: I get the same deadlock with the openmpi tests: pubsub, accept and connect with version 1.5.1. Bernard Secher - SFME/LGLS wrote: Jeff

Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-07 Thread Bernard Secher - SFME/LGLS
I get the same deadlock with the openmpi tests: pubsub, accept and connect with version 1.5.1. Bernard Secher - SFME/LGLS wrote: Jeff, The deadlock is not in MPI_Comm_accept and MPI_Comm_connect, but earlier, in MPI_Publish_name and MPI_Lookup_name. So the broadcast of srv is not involved

Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-07 Thread Bernard Secher - SFME/LGLS
Jeff, The deadlock is not in MPI_Comm_accept and MPI_Comm_connect, but earlier, in MPI_Publish_name and MPI_Lookup_name. So the broadcast of srv is not involved in the deadlock. Best, Bernard. Bernard Secher - SFME/LGLS wrote: Jeff, Only the processes of the program where process 0

Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-07 Thread Bernard Secher - SFME/LGLS
. Is it different with openmpi 1.5.1? Best, Bernard. Jeff Squyres wrote: On Jan 5, 2011, at 10:36 AM, Bernard Secher - SFME/LGLS wrote: MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char* service) { int clt=0; MPI_Request request; /* request for non-blocking communication

Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-06 Thread Bernard Secher - SFME/LGLS
Is it a bug in openmpi V1.5.1? Bernard. Bernard Secher - SFME/LGLS wrote: Hello, What changed between openMPI 1.4.1 and 1.5.1 in the MPI2 name-publishing service? I have 2 programs which connect to each other via the MPI_Publish_name and MPI_Lookup_name subroutines and ompi-server. That's

[OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-05 Thread Bernard Secher - SFME/LGLS
Hello, What changed between openMPI 1.4.1 and 1.5.1 in the MPI2 name-publishing service? I have 2 programs which connect to each other via the MPI_Publish_name and MPI_Lookup_name subroutines and ompi-server. That's OK with the 1.4.1 version, but I have a deadlock with the 1.5.1 version inside the
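For context, the MPI-2 handshake this thread is about looks roughly like the sketch below. This is a minimal illustration, not Bernard's actual code: the service name "coupling" and the command-line role selection are assumptions, and running it requires an ompi-server (or equivalent rendezvous daemon) plus an MPI launcher.

```c
/* Minimal sketch of the MPI-2 name-publishing handshake.
 * Per the thread, with 1.5.1 the hang is reported inside
 * MPI_Publish_name / MPI_Lookup_name, before accept/connect. */
#include <mpi.h>
#include <string.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;
    int is_server = (argc > 1 && strcmp(argv[1], "server") == 0);

    MPI_Init(&argc, &argv);

    if (is_server) {
        MPI_Open_port(MPI_INFO_NULL, port);
        /* "coupling" is a placeholder service name */
        MPI_Publish_name("coupling", MPI_INFO_NULL, port);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Unpublish_name("coupling", MPI_INFO_NULL, port);
        MPI_Close_port(port);
    } else {
        MPI_Lookup_name("coupling", MPI_INFO_NULL, port);
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
    }

    /* Disconnect the intercommunicator before finalizing */
    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}
```

Both sides must point at the same ompi-server for the published name to be visible across the two independently launched jobs.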

[OMPI users] Re: problem when compiling openmpi V1.5.1

2010-12-16 Thread Bernard Secher - SFME/LGLS
en you can compile it with the compiler option -fopenmp (in gcc) Jody On Thu, Dec 16, 2010 at 11:56 AM, Bernard Secher - SFME/LGLS <bernard.sec...@cea.fr> wrote: I get the following error message when I compile openmpi V1.5.1: CXXotfprofile-otfprofile.o ../../../../../../../../../o

[OMPI users] problem when compiling openmpi V1.5.1

2010-12-16 Thread Bernard Secher - SFME/LGLS
I get the following error message when I compile openmpi V1.5.1: CXX otfprofile-otfprofile.o ../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:11:18: error: omp.h: No such file or directory
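The missing omp.h comes from the VampirTrace/OTF contrib, which is built with OpenMP, as Jody's reply suggests. Two possible workarounds are sketched below; the flags and the configure option are as documented for the 1.5 series, but verify them against your own installation.

```shell
# Workaround 1: compile with a compiler that ships OpenMP support,
# passing -fopenmp (gcc) so omp.h is found:
./configure CFLAGS=-fopenmp CXXFLAGS=-fopenmp
make all install

# Workaround 2: skip building the VampirTrace contrib entirely,
# if tracing support is not needed:
./configure --enable-contrib-no-build=vt
make all install
```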

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-26 Thread Bernard Secher - SFME/LGLS
, Bernard Secher - SFME/LGLS <bernard.sec...@cea.fr> wrote: Thanks Jody for your answer. I launch 2 instances of my program on 2 processes each instance, on the same machine. I use MPI_Publish_name, MPI_Lookup_name to create a global communicator on the 4 processes. Then the 4 processes ex

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-26 Thread Bernard Secher - SFME/LGLS
should solve the problem. george. On Jan 23, 2009, at 06:00, Bernard Secher - SFME/LGLS wrote: No, I didn't run this program with Open-MPI 1.2.X because someone told me there were many changes between the 1.2.X and 1.3 versions regarding MPI_publish_name, MPI_Lookup_name (new ompi

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-23 Thread Bernard Secher - SFME/LGLS
MPI_Sends are matched by corresponding MPI_Recvs. Jody On Fri, Jan 23, 2009 at 11:08 AM, Bernard Secher - SFME/LGLS <bernard.sec...@cea.fr> wrote: Thanks Jody for your answer. I launch 2 instances of my program on 2 processes each instance, on the same machine. I use MPI_Publis

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-23 Thread Bernard Secher - SFME/LGLS
esn't work", nobody can give you any help whatsoever. Jody On Fri, Jan 23, 2009 at 9:33 AM, Bernard Secher - SFME/LGLS <bernard.sec...@cea.fr> wrote: Hello Jeff, I don't understand what you mean by "A _detailed_ description of what is failing". The problem is a deadlock

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-23 Thread Bernard Secher - SFME/LGLS
please include as much information detailed in your initial e-mail as possible." Additionally: "The best way to get help is to provide a "recipe" for reproducing the problem." Thanks! On Jan 22, 2009, at 8:53 AM, Bernard Secher - SFME/LGLS wrote: Hello Tim, I

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-22 Thread Bernard Secher - SFME/LGLS
Hello Tim, I am sending you the information in the attached files. Bernard. Tim Mattox wrote: Can you send all the information listed here: http://www.open-mpi.org/community/help/ On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS <bernard.sec...@cea.fr> wrote: Hello, I have

[OMPI users] dead lock in MPI_Finalize

2009-01-21 Thread Bernard Secher - SFME/LGLS
Hello, I have a case where I get a deadlock in the MPI_Finalize() function with openMPI v1.3. Can somebody help me, please? Bernard
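As a general note for readers hitting this symptom (not a diagnosis of Bernard's specific case): a frequent cause of hangs in MPI_Finalize with dynamically connected jobs is an intercommunicator from MPI_Comm_connect/MPI_Comm_accept or MPI_Comm_spawn that was never disconnected, since MPI_Finalize is collective over all connected processes. A minimal self-contained sketch:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm parent;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);
    if (parent != MPI_COMM_NULL) {
        /* Any communicator spanning two independently started (or
         * spawned) jobs must be disconnected before MPI_Finalize;
         * MPI_Comm_disconnect (not MPI_Comm_free) is the collective
         * call that severs the connection between the jobs. */
        MPI_Comm_disconnect(&parent);
    }
    MPI_Finalize();
    return 0;
}
```

When launched normally (no parent), this is a no-op; when spawned, it disconnects cleanly before finalizing.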

[OMPI users] ORTE_ERROR_LOG

2009-01-16 Thread Bernard Secher - SFME/LGLS
Hello, I get the following error at the beginning of my MPI code: [is124684:07869] [[38040,0],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file orted/orted_comm.c at line 448 Can anybody help me solve this problem? Bernard

Re: [OMPI users] default hostfile with 1.3 version

2009-01-06 Thread Bernard Secher - SFME/LGLS
-params.conf file, if you want. Ralph On Jan 6, 2009, at 4:36 AM, Bernard Secher - SFME/LGLS wrote: Hello, I took the 1.3 version from the svn repository. The default hostfile in etc/openmpi-default-hostfile is not used; I must pass the -hostfile option to mpirun to use this file. Is there any change

[OMPI users] default hostfile with 1.3 version

2009-01-06 Thread Bernard Secher - SFME/LGLS
Hello, I took the 1.3 version from the svn repository. The default hostfile in etc/openmpi-default-hostfile is not used; I must pass the -hostfile option to mpirun to use this file. Has anything changed in the 1.3 version? Regards, Bernard
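For readers following this thread: in the 1.3 series the default hostfile is controlled by an MCA parameter, which is what Ralph's reply about the params.conf file refers to. A sketch of both options (the install prefix /opt/openmpi is illustrative):

```shell
# Either pass the hostfile explicitly on each run:
mpirun --hostfile /opt/openmpi/etc/openmpi-default-hostfile -np 4 ./a.out

# ...or set the orte_default_hostfile MCA parameter once, system-wide,
# in the MCA parameter file:
echo "orte_default_hostfile = /opt/openmpi/etc/openmpi-default-hostfile" \
    >> /opt/openmpi/etc/openmpi-mca-params.conf
```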

Re: [OMPI users] using of MPI_Publish_name with openmpi

2008-12-11 Thread Bernard Secher - SFME/LGLS
I have the same problem with the 1.2.9rc1 version. I don't see any orte-clean utility in this version. But the best option is for me to use the 1.3 version. Please give me more details about ompi-server in the 1.3 version. Regards, Bernard. Bernard Secher - SFME/LGLS wrote: I used first the 1.2.5 version, then 1.2.8

Re: [OMPI users] using of MPI_Publish_name with openmpi

2008-12-11 Thread Bernard Secher - SFME/LGLS
. Regards, Aurelien -- * Dr. Aurélien Bouteiller * Sr. Research Associate at Innovative Computing Laboratory * University of Tennessee * 1122 Volunteer Boulevard, suite 350 * Knoxville, TN 37996 * 865 974 6321 On Dec 10, 2008, at 10:28, Bernard Secher - SFME/LGLS wrote: Hi everybody I want

[OMPI users] using of MPI_Publish_name with openmpi

2008-12-10 Thread Bernard Secher - SFME/LGLS
Hi everybody, I want to use the MPI_Publish_name function for communication between two independent codes. I saw on the web that I must use the orted daemon with the following command: orted --persistent --seed --scope public --universe foo The communication succeeds, but when I close the
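For readers following the thread: from the 1.3 series onward, the persistent orted "universe" shown above was replaced by the ompi-server rendezvous daemon that later messages in this archive discuss. A sketch of the replacement workflow (the URI file path and program names are illustrative):

```shell
# Start the rendezvous daemon once, writing its contact URI to a file:
ompi-server -r /tmp/ompi-server.uri

# Point both independently launched jobs at the same server so that
# MPI_Publish_name in one is visible to MPI_Lookup_name in the other:
mpirun -np 2 --ompi-server file:/tmp/ompi-server.uri ./serverprog
mpirun -np 2 --ompi-server file:/tmp/ompi-server.uri ./clientprog
```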