Hi,
Thank you very much for your discussion.
- Original Message -
From: "Jeff Squyres"
To: "Open MPI Users"
Sent: Tuesday, June 30, 2009 7:23:13 AM (GMT-0500) America/New_York
Subject: Re: [OMPI users] enable-mpi-threads
On Jun 30, 2009, at 1:29 AM,
Greetings all,
In my MPI environment I have 3 Debian machines, all with Open MPI set up in
/usr/local/openMPI,
and PATH and LD_LIBRARY_PATH configured correctly.
I have also configured passwordless SSH login on each node.
But when I execute my application, it gives the following error, which
seems to
Hi all,
I found a solution, but it's not flexible. I have to provide two host files,
"chfile" and "dhfile". The contents of the host files are as follows:
$ cat chfile
#This file contains all slaves as well as master node
localhost
200.40.70.193
$ cat dhfile
#This file contains all slave nodes
200.40.70.193
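As an illustration of how such host files are supplied to mpirun (the
application name "myapp" and the process counts here are only placeholders):
# run across the master and all slaves
$ mpirun --hostfile chfile -np 2 ./myapp
# run on the slave nodes only
$ mpirun --hostfile dhfile -np 1 ./myapp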
At the moment, the answer is "no". :-/
However, we do have a "ticket" in our plans to add an "addhost" and
"addhostfile" capability to the system. I haven't implemented it yet
because of other priorities and the fact that nobody has asked for it
before now.
Well...actually, people -did-
Hi Ralph,
Thank you for your reply regarding the matter. However, to carry forward
with my activities it would be of great help if I could know where
Open MPI/mpirun holds the contents of the "hostfile", so that I can dynamically
add/alter the values until such a feature is officially included in
Hi Ralph,
To add a few more points to my queries: as you said earlier, the "addhost" and
"addhostfile" features will come soon. So can you please tell us how we are
going to use those features? Will there be an API for that to call from
inside the program, or will we have to execute a command to use those
Hi Ashika,
Ashika Umanga Umagiliya wrote:
In my MPI environment I have 3 Debian machines, all with Open MPI set up in
/usr/local/openMPI,
and PATH and LD_LIBRARY_PATH configured correctly.
I have also configured passwordless SSH login on each node.
But when I execute my application, it gives
Hi Raymond ,
Thanks for the tips,
I figured out the problem; it's with the .bashrc on the nodes.
When logged in to Bash in 'non-interactive' mode, I found that the
"$MPI_HOME/bin" folder was missing from the PATH.
I edited .bashrc on every node so that "$MPI_HOME/bin" is added to the PATH.
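As a sketch, assuming the /usr/local/openMPI prefix mentioned earlier (the
MPI_HOME variable name is just a convention), the .bashrc entries would look
something like:
# Open MPI environment; keep this above the early "return" that Debian's
# default .bashrc does for non-interactive shells
export MPI_HOME=/usr/local/openMPI
export PATH=$MPI_HOME/bin:$PATH
export LD_LIBRARY_PATH=$MPI_HOME/lib:$LD_LIBRARY_PATH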
I created a feature ticket for this if you wanted to track it:
https://svn.open-mpi.org/trac/ompi/ticket/1961
I do not know when I will have time to look at implementing this (of
course, patches from the community are always welcome). But hopefully
in the next couple of months.
Cheers,
Josh
On
Jeff,
Thanks. Honestly, though, if the patches haven't been pulled into the mainline,
we are not likely to bring it in internally. I was hoping that Quadrics
support was mainline, but the documentation was out of date.
On Thu, Jul 2, 2009 at 8:08 AM, Jeff Squyres wrote:
> George --
>
> I
I see ompi/mca/btl/elan in the OMPI SVN development trunk and in the
1.3 tree (where elan = the quadrics interface).
So actually, looking at the 1.3.x README, I see configure switches
like "--with-elan" that specify where the Elan (Quadrics) headers
and libraries live. I have no
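For example, a build against Elan might be configured roughly as follows
(the prefix and the Elan install path are only illustrative):
$ ./configure --prefix=/usr/local/openMPI --with-elan=/path/to/elan
$ make all install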
Jeff,
Okay, thanks. I'll give it a shot and report back. I can't
contribute any code, but I can certainly do testing...
On Thu, Jul 2, 2009 at 9:23 AM, Jeff Squyres wrote:
> I see ompi/mca/btl/elan in the OMPI SVN development trunk and in the 1.3
> tree (where elan = the
Jeff:
I am running on a 2.66 GHz Nehalem node. On this node, turbo mode and
hyperthreading are enabled.
When I run LINPACK with Intel MPI, I get 82.68 GFlops without much
trouble.
When I ran with Open MPI (I have Open MPI 1.2.8, but my colleague was using
1.3.2), I was using the same MKL
Hi,
I am not an HPL expert, but this might help.
1. The rankfile mapper is available only from Open MPI 1.3; if you are
using Open MPI 1.2.8, try -mca mpi_paffinity_alone 1 (see the example below).
2. If you are using Open MPI 1.3, you don't have to use mpi_leave_pinned 1,
since it is the default value.
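For example, with Open MPI 1.2.8 an HPL run might be launched along these
lines (the host file, process count and xhpl binary name are placeholders):
$ mpirun -np 8 --hostfile myhosts -mca mpi_paffinity_alone 1 ./xhpl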
Lenny.
On Thu,
Swamy Kandadai wrote:
Jeff:
I'm not Jeff, but...
Linpack has different characteristics at different problem sizes. At
small problem sizes, any number of different overheads could be the
problem. At large problem sizes, one should approach the peak
floating-point performance of the
Lenny Verkhovsky wrote:
2. If you are using Open MPI 1.3, you don't have to
use mpi_leave_pinned 1, since it is the default value.
And if you're using "-mca btl self,sm" on a single node, I think
mpi_leave_pinned is immaterial (since it's for openib).
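For example, a single-node run restricted to the shared-memory and self
transports might look like this (process count and binary name are
placeholders):
$ mpirun -np 8 -mca btl self,sm ./xhpl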
On Thu, Jul 2, 2009 at 4:47 PM, Swamy
On Thu, 2009-07-02 at 09:34 -0400, Michael Di Domenico wrote:
> Jeff,
>
> Okay, thanks. I'll give it a shot and report back. I can't
> contribute any code, but I can certainly do testing...
I'm from the Quadrics stable, so I could certainly support a port should
you require it, but I don't have
Hi. I've now spent many, many hours tracking down a bug that was causing
my program to die, as though either its memory were getting corrupted or
messages were getting clobbered while going through the network; I
couldn't tell which. I really wish the checksum flag on btl_mx_flags
were working. But
Hi Kris,
I have not run your code yet, but I will try to this weekend.
You can have MX checksum its messages if you set MX_CSUM=1 and use the
MX debug library (e.g., point LD_LIBRARY_PATH to /opt/mx/lib/debug).
Do you have the problem if you use the MX MTL? To test it, modify your
mpirun as
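A sketch of what that might look like (the application name and the exact
MTL-selection flags here are assumptions):
# enable MX checksumming via the MX debug library
$ export MX_CSUM=1
$ export LD_LIBRARY_PATH=/opt/mx/lib/debug:$LD_LIBRARY_PATH
# select the cm PML so the MX MTL is used instead of the MX BTL
$ mpirun -np 2 -mca pml cm -mca mtl mx ./your_app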