Re: [OMPI users] enable-mpi-threads

2009-07-02 Thread rahmani
Hi, Many thanks for your discussion. - Original Message - From: "Jeff Squyres" To: "Open MPI Users" Sent: Tuesday, June 30, 2009 7:23:13 AM (GMT-0500) America/New_York Subject: Re: [OMPI users] enable-mpi-threads On Jun 30, 2009, at 1:29 AM,

[OMPI users] Error connecting to nodes ?

2009-07-02 Thread Ashika Umanga Umagiliya
Greetings all, In my MPI environment I have 3 Debian machines, all set up with Open MPI in /usr/local/openMPI, with PATH and LD_LIBRARY_PATH configured correctly. I have also configured passwordless SSH login on each node. But when I execute my application, it gives the following error, which seems to

Re: [OMPI users] Spawning processes through MPI::Intracomm::Spawn_multiple

2009-07-02 Thread vipin kumar
Hi all, I got the solution, but it's not flexible. I have to provide two host files, "chfile" and "dhfile". The contents of the host files are as follows: $ cat chfile #This file contains all slaves as well as master node localhost 200.40.70.193 $ cat dhfile #This file contains all slave nodes 200.40.70.193
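The two-hostfile workaround above can be sketched as a shell session. The file names, comments, and the IP address mirror the post; the master binary name ./master is a placeholder, and the mpirun call is guarded because it only makes sense on a machine with Open MPI installed:

```shell
# Recreate the two host files described in the post.
cat > chfile <<'EOF'
#This file contains all slaves as well as master node
localhost
200.40.70.193
EOF

cat > dhfile <<'EOF'
#This file contains all slave nodes
200.40.70.193
EOF

# Launch the master with the full host list; the program is then expected
# to spawn its children on the hosts listed in dhfile via Spawn_multiple.
if command -v mpirun >/dev/null 2>&1; then
  mpirun --hostfile chfile -np 1 ./master
fi
```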

Re: [OMPI users] Spawning processes through MPI::Intracomm::Spawn_multiple

2009-07-02 Thread Ralph Castain
At the moment, the answer is "no". :-/ However, we do have a "ticket" in our plans to add an "addhost" and "addhostfile" capability to the system. I haven't implemented it yet because of other priorities and the fact that nobody has asked for it before now. Well... actually, people -did-

Re: [OMPI users] Spawning processes through MPI::Intracomm::Spawn_multiple

2009-07-02 Thread vipin kumar
Hi Ralph, Thank you for your reply on this matter; however, to carry my work forward, it would be a great help to know where Open MPI/mpirun holds the contents of the "hostfile", so that I can dynamically add/alter the values until such a feature is officially included in

Re: [OMPI users] Spawning processes through MPI::Intracomm::Spawn_multiple

2009-07-02 Thread vipin kumar
Hi Ralph, To add a few more points to my queries: as you said earlier, the "addhost" and "addhostfile" features will come soon. Can you please tell us how we are going to use those features? Will there be an API to call from inside the program, or will we have to execute a command to use those

Re: [OMPI users] Error connecting to nodes ?

2009-07-02 Thread Raymond Wan
Hi Ashika, Ashika Umanga Umagiliya wrote: In my MPI environment I have 3 Debian machines, all set up with Open MPI in /usr/local/openMPI, with PATH and LD_LIBRARY_PATH configured correctly. I have also configured passwordless SSH login on each node. But when I execute my application, it gives

Re: [OMPI users] Error connecting to nodes ?

2009-07-02 Thread Ashika Umanga Umagiliya
Hi Raymond, Thanks for the tips. I figured out the problem; it's with the .bashrc on the nodes. When logged in to Bash in 'non-interactive' mode, I found that the "$MPI_HOME/bin" folder was missing from the PATH. I edited .bashrc on every node so that "$MPI_HOME/bin" is added to PATH.
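A minimal sketch of the fix described above: export the Open MPI paths near the top of each node's ~/.bashrc, before any early exit for non-interactive shells. The /usr/local/openMPI prefix matches the original post; the exact .bashrc layout is an assumption:

```shell
# Lines to place near the top of ~/.bashrc on every node, so they run
# even for non-interactive (ssh remote-command) shells.
MPI_HOME=/usr/local/openMPI
export PATH="$MPI_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_HOME/lib:$LD_LIBRARY_PATH"
```

Verify with a non-interactive login, e.g. `ssh node1 'which mpirun'`.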

Re: [OMPI users] Checkpointing automatically at regular intervals

2009-07-02 Thread Josh Hursey
I created a feature ticket for this if you want to track it: https://svn.open-mpi.org/trac/ompi/ticket/1961 I do not know when I will have time to look at implementing this (of course, patches from the community are always welcome), but hopefully in the next couple of months. Cheers, Josh On
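Until that ticket lands, periodic checkpointing can be driven from outside the job. The sketch below assumes Open MPI 1.3 built with checkpoint/restart support (e.g. BLCR); the interval and the pgrep-based PID lookup are assumptions, not part of the thread:

```shell
# Checkpoint a running mpirun job at fixed intervals.
INTERVAL=600   # seconds between checkpoints (an arbitrary choice)
if command -v ompi-checkpoint >/dev/null 2>&1; then
  MPIRUN_PID=$(pgrep -n mpirun)          # most recently started mpirun
  while kill -0 "$MPIRUN_PID" 2>/dev/null; do
    sleep "$INTERVAL"
    ompi-checkpoint "$MPIRUN_PID"        # write a global snapshot
  done
fi
```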

Re: [OMPI users] quadrics support?

2009-07-02 Thread Michael Di Domenico
Jeff, Thanks. Honestly, though, if the patches haven't been pulled into mainline, we are not likely to bring it in internally. I was hoping that Quadrics support was mainline, but the documentation was out of date. On Thu, Jul 2, 2009 at 8:08 AM, Jeff Squyres wrote: > George -- > > I

Re: [OMPI users] quadrics support?

2009-07-02 Thread Jeff Squyres
I see ompi/mca/btl/elan in the OMPI SVN development trunk and in the 1.3 tree (where elan = the Quadrics interface). So actually, looking at the 1.3.x README, I see configure switches like "--with-elan" that specify where the Elan (Quadrics) headers and libraries live. I have no
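Based on that README switch, a build might look like the following. This is a guess at usage, not a tested recipe; in particular, the /usr/lib/qsnet/elan4 path is hypothetical, so point --with-elan at wherever the Elan headers and libraries actually live:

```shell
# Hypothetical build of the Open MPI 1.3 series with Quadrics Elan support.
if [ -x ./configure ]; then
  ./configure --with-elan=/usr/lib/qsnet/elan4 --prefix=/usr/local/openmpi
  make all install
fi
```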

Re: [OMPI users] quadrics support?

2009-07-02 Thread Michael Di Domenico
Jeff, Okay, thanks. I'll give it a shot and report back. I can't contribute any code, but I can certainly do testing... On Thu, Jul 2, 2009 at 9:23 AM, Jeff Squyres wrote: > I see ompi/mca/btl/elan in the OMPI SVN development trunk and in the 1.3 > tree (where elan = the

Re: [OMPI users] OpenMPI vs Intel MPI

2009-07-02 Thread Swamy Kandadai
Jeff: I am running on a 2.66 GHz Nehalem node. On this node, turbo mode and hyperthreading are enabled. When I run LINPACK with Intel MPI, I get 82.68 GFlops without much trouble. When I ran with Open MPI (I have Open MPI 1.2.8, but my colleague was using 1.3.2), I was using the same MKL

Re: [OMPI users] OpenMPI vs Intel MPI

2009-07-02 Thread Lenny Verkhovsky
Hi, I am not an HPL expert, but this might help. 1. The rankfile mapper is available only from Open MPI version 1.3; if you are using Open MPI 1.2.8, try -mca mpi_paffinity_alone 1. 2. If you are using Open MPI 1.3, you don't have to use mpi_leave_pinned 1, since it's the default value. Lenny. On Thu,
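Lenny's two suggestions as concrete command lines. The binary name ./xhpl, the process count, and the rankfile name are placeholders, and the calls are guarded since they need an Open MPI install:

```shell
if command -v mpirun >/dev/null 2>&1; then
  # Open MPI 1.2.8: no rankfile mapper; pin processes with paffinity instead.
  mpirun -np 8 -mca mpi_paffinity_alone 1 ./xhpl

  # Open MPI 1.3: rankfile mapping is available, and mpi_leave_pinned is
  # already the default, so it need not be set explicitly.
  mpirun -np 8 -rf my_rankfile ./xhpl
fi
```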

Re: [OMPI users] OpenMPI vs Intel MPI

2009-07-02 Thread Eugene Loh
Swamy Kandadai wrote: Jeff: I'm not Jeff, but... Linpack has different characteristics at different problem sizes. At small problem sizes, any number of different overheads could be the problem. At large problem sizes, one should approach the peak floating-point performance of the

Re: [OMPI users] OpenMPI vs Intel MPI

2009-07-02 Thread Eugene Loh
Lenny Verkhovsky wrote: 2. if you are using Open MPI 1.3 you don't have to use mpi_leave_pinned 1, since it's a default value And if you're using "-mca btl self,sm" on a single node, I think mpi_leave_pinned is immaterial (since it's for openib). On Thu, Jul 2, 2009 at 4:47 PM, Swamy

Re: [OMPI users] quadrics support?

2009-07-02 Thread Ashley Pittman
On Thu, 2009-07-02 at 09:34 -0400, Michael Di Domenico wrote: > Jeff, > > Okay, thanks. I'll give it a shot and report back. I can't > contribute any code, but I can certainly do testing... I'm from the Quadrics stable so could certainly support a port should you require it, but I don't have

[OMPI users] Problems with MPI_Issend and MX

2009-07-02 Thread 8mj6tc902
Hi. I've now spent many, many hours tracking down a bug that was causing my program to die, as though either its memory were getting corrupted or messages were getting clobbered while going through the network; I couldn't tell which. I really wish the checksum flag on btl_mx_flags were working. But

Re: [OMPI users] Problems with MPI_Issend and MX

2009-07-02 Thread Scott Atchley
Hi Kris, I have not run your code yet, but I will try to this weekend. You can have MX checksum its messages if you set MX_CSUM=1 and use the MX debug library (e.g., point LD_LIBRARY_PATH at /opt/mx/lib/debug). Do you have the problem if you use the MX MTL? To test it, modify your mpirun as
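Scott's two suggestions, spelled out as shell commands. This assumes an MX build of Open MPI; /opt/mx/lib/debug is the path from the post, and ./issend_test is a placeholder for the reproducer binary:

```shell
# Enable MX checksums and use the MX debug library.
export MX_CSUM=1
export LD_LIBRARY_PATH=/opt/mx/lib/debug:$LD_LIBRARY_PATH

# Re-run the reproducer over the MX MTL instead of the MX BTL.
if command -v mpirun >/dev/null 2>&1; then
  mpirun -np 2 --mca pml cm --mca mtl mx ./issend_test
fi
```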