[OMPI users] replication in Open MPI

2012-05-22 Thread Thomas Ropars
regards, Thomas Ropars

[OMPI users] segfault on finalize

2009-09-25 Thread Thomas Ropars
Hi, I'm using r21970 of the trunk on Linux 2.6.18-3-amd64 and gcc version 4.2.3 (Debian 4.2.3-2). When I compile Open MPI with the default options, it works. But if I use the --with-platform=optimized option, then I get a segfault for every program I run. ==3073== Access not within mapped

Re: [OMPI users] segfault on finalize

2009-09-28 Thread Thomas Ropars
25, 2009, at 9:49 AM, Thomas Ropars wrote: Hi, I'm using r21970 of the trunk on Linux 2.6.18-3-amd64 and gcc version 4.2.3 (Debian 4.2.3-2). When I compile Open MPI with the default options, it works. But if I use the --with-platform=optimized option, then I get a segfault for every program I run

[OMPI users] error with Vprotocol pessimist

2007-12-11 Thread Thomas Ropars
Hi, I've tried to test the message logging component vprotocol pessimist. (svn checkout revision 16926) When I run an mpi application, I get the following error : mca: base: component_find: unable to open vprotocol pessimist: /local/openmpi/lib/openmpi/mca_vprotocol_pessimist.so: undefined

Re: [OMPI users] error with Vprotocol pessimist

2007-12-13 Thread Thomas Ropars
you removed .ompi_ignore. If this does not fix the problem, please let me know your command-line options to mpirun. Aurelien On Dec 11, 2007, at 14:36, Aurelien Bouteiller wrote: Mmm, I'll investigate this today. Aurelien On Dec 11, 2007, at 08:46, Thomas Ropars wrote: Hi, I've

Re: [OMPI users] error with Vprotocol pessimist

2007-12-19 Thread Thomas Ropars
Aurelien On Dec 13, 2007, at 07:58, Thomas Ropars wrote: I still have the same error after updating (r16951). I have the lib/openmpi/mca_pml_v.so file in my build and the command line I use is: mpirun -np 4 my_application Thomas Aurelien Bouteiller wrote: I could reproduce and f

Re: [OMPI users] error with Vprotocol pessimist

2008-01-29 Thread Thomas Ropars
g for argz bugfix in libtool 1.5 -- your libtool doesn't need this! yay! ++ patching 64-bit OS X bug in ltmain.sh -- your libtool doesn't need this! yay! ++ RTLD_GLOBAL in libltdl -- your libltdl doesn't need this! yay! Thomas Thomas Ropars wrote: Hi, I have the same error message

Re: [OMPI users] error with Vprotocol pessimist

2008-01-30 Thread Thomas Ropars
nt in your dlopen.c, then it doesn't find it and therefore autogen.sh doesn't patch it. Did you already patch dlopen.c, perchance, or is your original dlopen.c different from this? On Jan 29, 2008, at 9:35 AM, Thomas Ropars wrote: I've solved the problem by adding the flag RTLD_GLOBAL i

Re: [OMPI users] error with Vprotocol pessimist

2008-01-30 Thread Thomas Ropars
Jeff Squyres wrote: On Jan 30, 2008, at 4:43 AM, Thomas Ropars wrote: After running autogen.sh, the file opal/libltdl/loaders/dlopen.c doesn't exist and, more generally, the directory opal/libltdl/loaders/ doesn't exist. That's why I need to add the RTLD_GLOBAL flag after running
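
The fix discussed in this thread amounts to forcing RTLD_GLOBAL when libltdl dlopens a component, so that symbols from one plugin are visible to plugins loaded afterwards. Below is a minimal sketch of why the flag matters; the component names are taken from the thread, but this is only an illustration of the loader flag, not Open MPI's actual loading code.

    /* Sketch: symbol visibility when loading dependent plugins.
     * Compile with -ldl.  Library names are from the thread but the
     * dependency between them is assumed for illustration. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* RTLD_GLOBAL exports this library's symbols to later dlopen calls. */
        void *base = dlopen("mca_pml_v.so", RTLD_NOW | RTLD_GLOBAL);
        if (!base) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* If the first library had been opened with RTLD_LOCAL instead,
         * this load could fail with an "undefined symbol" error like the
         * one reported earlier in the thread. */
        void *plugin = dlopen("mca_vprotocol_pessimist.so", RTLD_NOW);
        if (!plugin) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        puts("both components loaded");
        return 0;
    }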

Re: [OMPI users] MPI piggyback mechanism

2008-02-01 Thread Thomas Ropars
Hi, I'm currently working on optimistic message logging and I would like to implement an optimistic message logging protocol in OpenMPI. Optimistic message logging protocols piggyback information about dependencies between processes on the application messages to be able to find a
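
As a rough illustration of the piggybacking idea described above, the sketch below wraps MPI_Send at the application level and packs a hypothetical logical clock in front of the payload with MPI_Pack. It is only a sketch of the concept, not the mechanism discussed for Open MPI's internals.

    /* Hedged sketch: piggyback a logical clock on each message by packing
     * it in front of the user payload.  Application-level illustration only. */
    #include <mpi.h>
    #include <stdlib.h>

    static int my_clock = 0;   /* hypothetical dependency information */

    int send_with_piggyback(void *buf, int count, MPI_Datatype dtype,
                            int dest, int tag, MPI_Comm comm)
    {
        int dsize, csize, pos = 0;
        MPI_Pack_size(count, dtype, comm, &dsize);   /* payload upper bound */
        MPI_Pack_size(1, MPI_INT, comm, &csize);     /* clock upper bound   */

        char *tmp = malloc(dsize + csize);
        my_clock++;                                  /* update dependency info */
        MPI_Pack(&my_clock, 1, MPI_INT, tmp, dsize + csize, &pos, comm);
        MPI_Pack(buf, count, dtype, tmp, dsize + csize, &pos, comm);

        int rc = MPI_Send(tmp, pos, MPI_PACKED, dest, tag, comm);
        free(tmp);
        return rc;
    }

The receiver would post a matching MPI_Recv of type MPI_PACKED and use MPI_Unpack to split the clock from the payload; the cost of the intermediate copy this implies is what the later MPI_Type_struct() thread revisits.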

Re: [OMPI users] MPI piggyback mechanism

2008-02-18 Thread Thomas Ropars
Sorry for my late reply, and thank you all for your answers and comments. Oleg, same question as Aurélien: you mentioned that you have implemented some piggyback mechanisms in Open MPI. Are these mechanisms available? Would it be possible to use them? Regards. Thomas Ropars Aurélien

[OMPI users] Problem using VampirTrace

2008-08-11 Thread Thomas Ropars
Hi all, I'm trying to use VampirTrace. I'm working with r19234 of the svn trunk. When I try to run a simple application with 4 processes on the same computer, it works well. But if I try to run the same application with the 4 processes executed on 4 different computers, I never get the .otf file.

Re: [OMPI users] Problem using VampirTrace

2008-09-15 Thread Thomas Ropars
Hello, I don't have a common file system for all cluster nodes. I've tried to run the application again with VT_UNIFY=no and to call vtunify manually. It works well. I managed to get the .otf file. Thank you. Thomas Ropars Andreas Knüpfer wrote: Hello Thomas, sorry for the delay. My

Re: [OMPI users] mpirun, paths and xterm again (xserver problem solved; library problem still there)

2008-09-24 Thread Thomas Ropars
Hi, I'm trying to use gdb and xterm with Open MPI on my computer (Ubuntu 8.04). When I run an application without gdb on my computer it works fine, but if I try to use gdb in xterm I get the following error: mpirun -n 2 -x DISPLAY=:0.0 xterm -e gdb ./ring.out (gdb) run Starting program:

Re: [OMPI users] mpirun, paths and xterm again (xserver problem solved; library problem still there)

2008-09-24 Thread Thomas Ropars
LD_LIBRARY_PATH= or use -Wl,-rpath= to compile the search path into the executable. Best regards, Samuel P.S.: This xterm behavior causes us a lot of problems as well. Other terminals like konsole don't have that problem. Thomas Ropars wrote: Hi, I'm trying to use gdb and xterm with Open MPI

[OMPI users] using ompi-server on a single node

2009-01-05 Thread Thomas Ropars
Hi, I've tried to use ompi-server to connect 2 processes belonging to different jobs but running on the same computer. It works when the computer has a network interface up, but if the only active network interface is the local loopback, it doesn't work. According to what I understood reading the
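
For context, connecting two independently launched jobs goes through the MPI-2 port and name-publishing interface, which ompi-server backs with a standalone name service. A minimal sketch of the calls involved follows; the service name is made up and error handling is omitted.

    /* Sketch of connecting two separately launched MPI jobs.
     * The service name "thomas_test" is hypothetical. */
    #include <mpi.h>

    /* job A: publish a port and accept a connection */
    void accept_side(MPI_Comm *inter)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Open_port(MPI_INFO_NULL, port);
        MPI_Publish_name("thomas_test", MPI_INFO_NULL, port);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, inter);
        MPI_Unpublish_name("thomas_test", MPI_INFO_NULL, port);
        MPI_Close_port(port);
    }

    /* job B: look the port up and connect */
    void connect_side(MPI_Comm *inter)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Lookup_name("thomas_test", MPI_INFO_NULL, port);
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, inter);
    }

Both jobs would typically be started with mpirun's --ompi-server option pointing at the URI of the running ompi-server, so that MPI_Publish_name and MPI_Lookup_name resolve across jobs.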

Re: [OMPI users] Error message when using MPI_Type_struct()

2009-01-08 Thread Thomas Ropars
of data 8 My question is: what does this message mean? Is there an error in my code? And what can I do to avoid this message? Regards, Thomas Thomas Ropars wrote: Hi, I'm currently implementing a mechanism to piggyback information on messages. On message sending, I dynamically cr

Re: [OMPI users] Error message when using MPI_Type_struct()

2009-01-12 Thread Thomas Ropars
, you will be able to use the mpool instead of malloc, which should moderate the overhead of creating the intermediate buffer. I will have a look at that. Thomas Aurelien On Jan 8, 2009, at 05:13, Thomas Ropars wrote: Hi, I'm submitting this old question again because I didn't get any answer last
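
The alternative weighed in this exchange is to describe the piggyback data and the payload with one derived datatype instead of copying both into an intermediate buffer. A hedged sketch of that approach, using absolute addresses and MPI_BOTTOM; function and variable names are illustrative, not the code from the thread.

    /* Hedged sketch: send payload plus piggyback data with one derived
     * datatype built from absolute addresses, avoiding an intermediate copy. */
    #include <mpi.h>

    int send_with_struct(void *buf, int count, MPI_Datatype dtype,
                         int *piggyback, int dest, int tag, MPI_Comm comm)
    {
        int          blocklens[2] = { 1, count };
        MPI_Aint     displs[2];
        MPI_Datatype types[2] = { MPI_INT, dtype };
        MPI_Datatype combined;

        /* Absolute addresses: the message is then sent from MPI_BOTTOM. */
        MPI_Get_address(piggyback, &displs[0]);
        MPI_Get_address(buf, &displs[1]);

        MPI_Type_create_struct(2, blocklens, displs, types, &combined);
        MPI_Type_commit(&combined);

        int rc = MPI_Send(MPI_BOTTOM, 1, combined, dest, tag, comm);

        MPI_Type_free(&combined);
        return rc;
    }

Sender and receiver must build matching datatypes, and committing a new datatype on every send has its own cost, which is the trade-off behind the mpool suggestion quoted above.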

[OMPI users] implementation of a message logging protocol

2007-03-22 Thread Thomas Ropars
wondering if in the actual state of Open MPI it is possible to do the same kind of work in this library ? Is there somebody currently working on the same subject ? Best regards, Thomas Ropars.