[OMPI users] TCP btl misbehaves if btl_tcp_port_min_v4 is not set.

2009-07-23 Thread Eric Thibodeau
Hello all, (this _might_ be related to https://svn.open-mpi.org/trac/ompi/ticket/1505) I just compiled and installed 1.3.3 in a CentOS 5 environment and we noticed the processes would deadlock as soon as they would start using TCP communications. The test program is one that has been
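For readers hitting the same symptom, the workaround implied by the subject line is to pin the TCP BTL to an explicit port range via MCA parameters. The parameter names below are the standard Open MPI TCP BTL knobs; the port values are purely illustrative, not recommendations:

```
# $HOME/.openmpi/mca-params.conf -- values are illustrative only
btl_tcp_port_min_v4   = 2000   # lowest TCP port the BTL may bind
btl_tcp_port_range_v4 = 100    # number of ports above the minimum to try
```

The same parameters can be passed per-run, e.g. `mpirun --mca btl_tcp_port_min_v4 2000 ...`.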

Re: [OMPI users] Can 2 IB HCAs give twice the bandwidth?

2008-10-19 Thread Eric Thibodeau
Jeff Squyres wrote: On Oct 18, 2008, at 9:19 PM, Mostyn Lewis wrote: Can OpenMPI do like Scali and MVAPICH2 and utilize 2 IB HCAs per machine to approach double the bandwidth on simple tests such as IMB PingPong? Yes. OMPI will automatically (and aggressively) use as many active ports as

[OMPI users] Tuned Collective MCA params

2008-10-03 Thread Eric Thibodeau
Hello all, I am currently profiling a simple case where I replace multiple S/R calls with Allgather calls and it would _seem_ the simple S/R calls are faster. Now, *before* I come to any conclusion on this, one of the pieces I am missing is more details on how/if/when the tuned coll MCA
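One way to experiment with the tuned collective module being asked about is through its MCA parameters. The parameter names below exist in the Open MPI 1.2/1.3 series; the algorithm ID shown is only an assumed example, and `ompi_info --param coll tuned` lists the actual choices:

```
# Force a specific allgather algorithm in the tuned coll module
# (the algorithm ID is illustrative; consult ompi_info for valid values)
coll_tuned_use_dynamic_rules   = 1
coll_tuned_allgather_algorithm = 2
```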

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-15 Thread Eric Thibodeau
lengthy since the entire system (or at least all libs openmpi links to) needs to be rebuilt. Eric Eric Thibodeau wrote: Prasanna, Please send me your /etc/make.conf and the contents of /var/db/pkg/sys-cluster/openmpi-1.2.7/ You can package this with the following command line: tar -cjf

Re: [OMPI users] MPI_sendrecv = MPI_Send+ MPI_RECV ?

2008-09-15 Thread Eric Thibodeau
Sorry about that, I had misinterpreted your original post as being the pair of send-receive. The example you give below does seem correct indeed, which means you might have to show us the code that doesn't work. Note that I am in no way a Fortran expert, I'm more versed in C. The only hint I'd
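The send-before-receive pitfall under discussion can be illustrated without MPI at all. The sketch below (plain Python standing in for two MPI ranks; every name in it is made up for illustration) models a rendezvous-style blocking send: when both ranks send first, both block, which is exactly the deadlock `MPI_Sendrecv` avoids by progressing the send and the receive concurrently.

```python
import threading
import queue

class RendezvousChannel:
    """Models an MPI rendezvous send: send() blocks until recv() matches."""
    def __init__(self):
        self._box = queue.Queue(maxsize=1)
        self._taken = threading.Semaphore(0)

    def send(self, msg, timeout=None):
        self._box.put(msg)
        if not self._taken.acquire(timeout=timeout):
            raise TimeoutError("send never matched")

    def recv(self):
        msg = self._box.get()
        self._taken.release()
        return msg

def sendrecv(out_ch, in_ch, data):
    """Like MPI_Sendrecv: progress the send and the receive concurrently."""
    sender = threading.Thread(target=out_ch.send, args=(data,))
    sender.start()
    msg = in_ch.recv()
    sender.join()
    return msg

def run_pair(rank_fn):
    """Run two 'ranks' joined by a pair of channels; return their results."""
    ch01, ch10 = RendezvousChannel(), RendezvousChannel()
    results = [None, None]
    def wrap(i, out_ch, in_ch, data):
        try:
            results[i] = rank_fn(out_ch, in_ch, data)
        except TimeoutError:
            results[i] = "deadlock"
    t0 = threading.Thread(target=wrap, args=(0, ch01, ch10, "from-0"))
    t1 = threading.Thread(target=wrap, args=(1, ch10, ch01, "from-1"))
    t0.start(); t1.start(); t0.join(); t1.join()
    return results

# Unsafe pattern: both ranks send first, then receive -> mutual block.
def send_then_recv(out_ch, in_ch, data):
    out_ch.send(data, timeout=0.2)  # short timeout stands in for a hang
    return in_ch.recv()

print(run_pair(send_then_recv))                     # ['deadlock', 'deadlock']
print(run_pair(lambda o, i, d: sendrecv(o, i, d)))  # ['from-1', 'from-0']
```

The Fortran case in the thread is the same logic: if both processes issue their blocking send before their receive, neither can make progress once the message is too large for eager buffering.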

Re: [OMPI users] MPI_sendrecv = MPI_Send+ MPI_RECV ?

2008-09-13 Thread Eric Thibodeau
Enrico Barausse wrote: Hello, I apologize in advance if my question is naive, but I started to use open-mpi only one week ago. I have a complicated fortran 90 code which is giving me a segmentation fault (address not mapped). I tracked down the problem to the following lines:

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-12 Thread Eric Thibodeau
Prasanna, Please send me your /etc/make.conf and the contents of /var/db/pkg/sys-cluster/openmpi-1.2.7/ You can package this with the following command line: tar -cjf data.tbz /etc/make.conf /var/db/pkg/sys-cluster/openmpi-1.2.7/ And simply send me the data.tbz file. Thanks, Eric

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-11 Thread Eric Thibodeau
Prasanna, I opened up a bug report to enable a better control over the threading options (http://bugs.gentoo.org/show_bug.cgi?id=237435). In the meanwhile, if your helloWorld isn't too fluffy, could you send it over (off list if you prefer) so I can take a look at it, the Segmentation

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-11 Thread Eric Thibodeau
Jeff Squyres wrote: On Sep 11, 2008, at 3:27 PM, Eric Thibodeau wrote: Ok, added to the information from the README, I'm thinking none of the 3 configure options have an impact on the said 'threaded TCP listener' and the MCA option you suggested should still work, is this correct

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-11 Thread Eric Thibodeau
Jeff Squyres wrote: On Sep 11, 2008, at 2:38 PM, Eric Thibodeau wrote: In short: Which of the 3 options is the one known to be unstable in the following: --enable-mpi-threads Enable threads for MPI applications (default: disabled) --enable-progress-threads

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-11 Thread Eric Thibodeau
Jeff, In short: Which of the 3 options is the one known to be unstable in the following: --enable-mpi-threads Enable threads for MPI applications (default: disabled) --enable-progress-threads Enable threads asynchronous communication

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-11 Thread Eric Thibodeau
ed and logged ;) On Sep 10, 2008, at 7:52 PM, Eric Thibodeau wrote: Prasanna, also make sure you try with USE=-threads ...as the ebuild states, it's _experimental_ ;) Keep your eye on: https://svn.open-mpi.org/trac/ompi/wiki/ThreadSafetySupport Eric Prasanna Ranganathan wrote: H

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-10 Thread Eric Thibodeau
Prasanna Ranganathan wrote: Hi Eric, Thanks a lot for the reply. I am currently working on upgrading to 1.2.7 I do not quite follow your directions; What do you refer to when you say "try with USE=-threads..." I am referring to the USE variable which is used to set global package

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-10 Thread Eric Thibodeau
Prasanna, also make sure you try with USE=-threads ...as the ebuild states, it's _experimental_ ;) Keep your eye on: https://svn.open-mpi.org/trac/ompi/wiki/ThreadSafetySupport Eric Prasanna Ranganathan wrote: Hi, I have upgraded my openMPI to 1.2.6 (We have gentoo and emerge showed

Re: [OMPI users] Need help resolving No route to host error with OpenMPI 1.1.2

2008-09-10 Thread Eric Thibodeau
Prasanna Ranganathan wrote: Hi, I have upgraded my openMPI to 1.2.6 (We have gentoo and emerge showed 1.2.6-r1 to be the latest stable version of openMPI). Prasanna, do a sync, 1.2.7 is in portage and report back. Eric I do still get the following error message when running my test

Re: [OMPI users] Configure fails with icc 10.1.008

2007-12-07 Thread Eric Thibodeau
; return 0; } You should probably check with Intel support for more details. On Dec 6, 2007, at 11:25 PM, Eric Thibodeau wrote: Hello all, I am unable to get past ./configure as ICC fails on C++ tests (see attached ompi-output.tar.gz). Configure was called without and then with sourcing `

[OMPI users] Configure fails with icc 10.1.008

2007-12-06 Thread Eric Thibodeau
(well..intelligible for me that is ;P ) cause of the failure in config.log. Any help would be appreciated. Thanks, Eric Thibodeau ompi-output.tar.gz Description: application/gzip

Re: [OMPI users] Performance of MPI_Isend() worse than MPI_Send() and even MPI_Ssend()

2007-10-15 Thread Eric Thibodeau
blocking send, where the library does not return until the data is pushed onto the network buffers, i.e. the library is the one in control until the send is completed. Thanks, george. On Oct 15, 2007, at 2:23 PM, Eric Thibodeau wrote: Hello Georg

Re: [OMPI users] "Address not mapped" error on user defined MPI_OP function

2007-04-04 Thread Eric Thibodeau
I'm attaching the functional code so that others can maybe see this one as an example ;) On Wednesday, April 4, 2007 at 11:47, Eric Thibodeau wrote: Hello all, First off, please excuse the attached code as I may be naïve in my attempts to implement my own MPI_OP.

Re: [OMPI users] "Address not mapped" error on user defined MPI_OP function

2007-04-04 Thread Eric Thibodeau
OPAL: 1.2 OPAL SVN revision: r14027 Prefix: /home/kyron/openmpi_i686 Configured architecture: i686-pc-linux-gnu Configured by: kyron Configured on: Wed Apr 4 10:21:34 EDT 2007 On Wednesday, April 4, 2007 at 11:47, Eric Thibodeau wrote: He

[OMPI users] "Address not mapped" error on user defined MPI_OP function

2007-04-04 Thread Eric Thibodeau
ron:14074] [ 5] /lib/libc.so.6(__libc_start_main+0xe3) [0x6fcbd823] [kyron:14074] *** End of error message *** Eric Thibodeau #include #include #include #include #define V_LEN 10 //Vector Length #define E_CNT 10 //Element count MPI_Op MPI_MySum; //Custom Sum function MPI_Datatype MPI_MyTyp

Re: [OMPI users] Compiling HPCC with OpenMPI

2007-02-27 Thread Eric Thibodeau
v1.2 because someone else out in the Linux community uses "libopal". I typically prefer using "mpicc" as CC and LINKER and therefore letting the OMPI wrapper handle everything for exactly this reason. On Feb 21, 2007, at 12:39 PM, Eric Th

Re: [OMPI users] Compiling HPCC with OpenMPI

2007-02-21 Thread Eric Thibodeau
Batiment 506 BP 167 F - 91403 ORSAY Cedex Site Web: http://www.idris.fr Eric Thibodeau wrote: Hello all, As we all know, compiling OpenMPI is not a matter of adding -lmpi

Re: [OMPI users] compiling mpptest using OpenMPI

2007-02-19 Thread Eric Thibodeau
fixed a shared memory race condition, for example: http://www.open-mpi.org/nightly/v1.2/ On Feb 16, 2007, at 12:12 AM, Eric Thibodeau wrote: Hello devs, Thought I would let you know there seems to be a problem with

Re: [OMPI users] compiling mpptest using OpenMPI

2007-02-16 Thread Eric Thibodeau
build process! Eric On Thursday, February 15, 2007 at 19:51, Anthony Chan wrote: As long as mpicc is working, try configuring mpptest as mpptest/configure MPICC=/bin/mpicc or mpptest/configure --with-mpich= A.Chan On Thu, 15 Feb 2007, Er

Re: [OMPI users] x86_64 head with x86 diskless nodes, Node execution fails with SEGV_MAPERR

2006-07-16 Thread Eric Thibodeau
Thanks, now it all makes more sense to me. I'll try the hard way, multiple builds for multiple envs ;) Eric On Sunday, July 16, 2006 at 18:21, Brian Barrett wrote: On Jul 16, 2006, at 4:13 PM, Eric Thibodeau wrote: Now that I have that out of the way, I'd like to know

Re: [OMPI users] x86_64 head with x86 diskless nodes, Node execution fails with SEGV_MAPERR

2006-07-16 Thread Eric Thibodeau
14:31, Brian Barrett wrote: On Jul 15, 2006, at 2:58 PM, Eric Thibodeau wrote: But, for some reason, on the Athlon node (in their image on the server I should say) OpenMPI still doesn't seem to be built correctly since it crashes as follows:

[OMPI users] x86_64 head with x86 diskless nodes, Node execution fails with SEGV_MAPERR

2006-07-15 Thread Eric Thibodeau
<--config log for the Opteron build (works locally) config.log_node0 <--config log for the Athlon build (on the node) ompi_info.i686 <--ompi_info on the Athlon node ompi_info.x86_64 <--ompi_info on the Opteron Master Thanks, -- Eric Thibodeau Neural Bucket Solutions

Re: [OMPI users] Tutorial

2006-07-11 Thread Eric Thibodeau
l on open-mpi? Thank you ;) ___ users mailing list us...@open-mpi.org http://www.open-mpi.org/mailman/listinfo.cgi/users -- Eric Thibodeau Neural Bucket Solutions Inc. T. (514) 736-1436 C. (514) 710-0517

Re: [OMPI users] MPI_Recv, is it possible to switch on/off aggresive mode during runtime?

2006-07-07 Thread Eric Thibodeau
-- Eric Thibodeau Neural Bucket Solutions Inc. T. (514) 736-1436 C. (514) 710-0517

Re: [OMPI users] Can I install OpenMPI on a machine where I have mpich2

2006-07-04 Thread Eric Thibodeau
specific points to add, Thanks again, I appreciate it, Manal On Mon, 2006-07-03 at 23:17 -0400, Eric Thibodeau wrote: See comments below: On Monday, July 3, 2006 at 23:01, Manal Helal wrote: Hi

Re: [OMPI users] Re : OpenMPI 1.1: Signal:10, info.si_errno:0(Unknown, error: 0), si_code:1(BUS_ADRALN)

2006-06-28 Thread Eric Thibodeau
running the same thing, as of yet. I have a cluster of two v440 that have 4 cpus each running Solaris 10. The tests I am running are np=2, one process on each node. --td Eric Thibodeau wrote: Terry, I was a

Re: [OMPI users] users Digest, Vol 317, Issue 4

2006-06-28 Thread Eric Thibodeau
ment issues on Solaris 64 bit platforms, but thought that we might have had a pretty good handle on it in 1.1. Obviously we didn't solve everything. Bonk. Did you get a corefile, perchance? If you could send a stack trace, that woul

Re: [OMPI users] Installing OpenMPI on a solaris

2006-06-28 Thread Eric Thibodeau
Yeah bummers, but something tells me it might not be OpenMPI's fault. Here's why: 1- The tech that takes care of these machines told me that he gets RTC errors on bootup (the cpu boards are apparently "out of sync" since the clocks aren't set correctly). 2- There is also a possibility that the

Re: [OMPI users] Installing OpenMPI on a solaris

2006-06-20 Thread Eric Thibodeau
MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1) MCA sds: seed (MCA v1.0, API v1.0, Component v1.1) MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1) On Tuesday, June 20, 2006 at 17:06, Eric Thibodeau wrote: Thanks for the pointer, it WORKS!!

Re: [OMPI users] pls:rsh: execv failed with errno=2

2006-06-17 Thread Eric Thibodeau
Hello Jeff, Firstly, don't worry about jumping in late, I'll send you a skid rope ;) Secondly, thanks for your nice little articles on clustermonkey.net (good refresher on MPI). And finally, down to my issues, thanks for clearing up the --prefix LD_LIBRARY_PATH and all. The ebuild I

Re: [OMPI users] pls:rsh: execv failed with errno=2

2006-06-16 Thread Eric Thibodeau
your prefix set to the lib dir, can you try without the lib64 part and rerun? Eric Thibodeau wrote: Hello everyone, Well, first off, I hope this problem I am reporting is of some validity, I tried finding similar situations off Google and the mai