On Jul 12, 2011, at 2:34 PM, Paul Kapinos wrote:
> Hi OpenMPI folks,
>
> Using version 1.4.3 of Open MPI, I want to wrap the 'ssh' calls that are
> made by Open MPI's 'mpiexec'. For this purpose, at least two ways seem
> possible to me:
>
> 1. let the wrapper have the name 's
On 7/12/2011 11:06 PM, Mohan, Ashwin wrote:
Tim,
Thanks for your message. However, I was not clear on your suggestions and would
appreciate it if you could clarify.
You say, "So, if you want a sane comparison but aren't willing to study the compiler
manuals, you might use (if your source code doe
Tim,
Thanks for your message. However, I was not clear on your suggestions and would
appreciate it if you could clarify.
You say, "So, if you want a sane comparison but aren't willing to study the
compiler manuals, you might use (if your source code doesn't violate the
aliasing rules) mpiicpc -pr
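Tim's exact flags are cut off above, but the general idea of a "sane
comparison" is to hand both MPI wrappers the same back-end optimization
flags; for example (the flags and file names here are only placeholders,
not necessarily what Tim suggested):

  mpiicpc -O3 -o prog_intel prog.cpp   # Intel MPI C++ wrapper (drives icpc)
  mpicxx  -O3 -o prog_ompi  prog.cpp   # Open MPI C++ wrapper

Keep in mind that Open MPI's mpicxx invokes whatever compiler Open MPI
itself was built with, so the timing comparison is only apples-to-apples
when both wrappers drive the same underlying compiler.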
On 7/12/2011 7:45 PM, Mohan, Ashwin wrote:
Hi,
I noticed that the exact same code took 50% more time to run with Open MPI
than with Intel MPI. I use the following syntax to compile and run:
Intel MPI compiler: (Redhat Fedora Core release 3 (Heidelberg), kernel
version: Linux 2.6.9-1.667smp x86_64)
I believe we responded to this before...you might check your spam or inbox.
On Jul 12, 2011, at 7:39 PM, zhuangchao wrote:
> Hello all:
>
> I run the following command:
>
> /data1/cluster/openmpi/bin/mpirun -d -machinefile /tmp/nodes.10515.txt
> -np 3 /data1/cluster
On 7/12/2011 4:45 PM, Mohan, Ashwin wrote:
I noticed that the exact same code took 50% more time to run with Open MPI
than with Intel MPI.
It would be good to know if that extra time is spent inside MPI calls or
not. There is a discussion of how you might do this here:
http://www.open-mpi.org/faq/?catego
Hello all:
I run the following command:
/data1/cluster/openmpi/bin/mpirun -d -machinefile /tmp/nodes.10515.txt -np
3 /data1/cluster/mpiblast-pio-1.6/bin/mpiblast -p blastn -i
/data1/cluster/sequences/seq_4.txt -d Baculo_Nucleotide -o
/data1/cluster/blast.out/blast.out.
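The contents of /tmp/nodes.10515.txt are not shown above; an Open MPI
machinefile is simply one host per line, optionally with a slot count,
along these lines (the hostnames are placeholders):

  node01 slots=2
  node02 slots=1
  node03 slots=1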
Hi,
I noticed that the exact same code took 50% more time to run with Open MPI
than with Intel MPI. I use the following syntax to compile and run:
Intel MPI compiler: (Redhat Fedora Core release 3 (Heidelberg), kernel
version: Linux 2.6.9-1.667smp x86_64)
mpiicpc -o <name> <name>.cpp -lmpi
OpenMPI 1.4.3: (
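The Open MPI command line is cut off above; with Open MPI 1.4.3 the
equivalent build and run usually look roughly like this (program name
and process count are placeholders):

  mpicxx -o myprog myprog.cpp
  mpirun -np 4 ./myprog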
Hi OpenMPI folks,
Using version 1.4.3 of Open MPI, I want to wrap the 'ssh' calls that are
made by Open MPI's 'mpiexec'. For this purpose, at least two ways seem
possible to me:
1. let the wrapper have the name 'ssh' and add the directory where it lives
to the PATH envvar *befor
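A minimal sketch of option 1, assuming the wrapper lives in a hypothetical
directory $HOME/sshwrap (the logging line is just an example of what the
wrapper might do):

  #!/bin/sh
  # $HOME/sshwrap/ssh -- wrapper that mpiexec will find first on PATH
  logger -t ssh-wrapper "wrapped ssh call: $*"   # example side effect
  exec /usr/bin/ssh "$@"                          # hand off to the real ssh

and then prepend that directory before launching:

  export PATH="$HOME/sshwrap:$PATH"
  mpiexec -np 4 ./a.out

Alternatively, the 1.4 series lets you point the launcher at an arbitrary
agent via an MCA parameter, e.g. mpiexec --mca plm_rsh_agent
$HOME/sshwrap/ssh ..., which avoids touching PATH at all.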
On Tue, Jul 12, 2011 at 11:03:42AM -0700, Steve Kargl wrote:
> On Tue, Jul 12, 2011 at 10:37:14AM -0700, Steve Kargl wrote:
> > On Fri, Jul 08, 2011 at 07:03:13PM -0400, Jeff Squyres wrote:
> > > Sorry -- I got distracted all afternoon...
> > >
> > > In addition to what Ralph said (i.e., I'm not s
I wonder if someone might have ideas to explore as to why this program
might not be working correctly under TotalView. Essentially, a user is
running a very simple hello-world-like program that does this:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
MPI_Init( &argc
On Tue, Jul 12, 2011 at 10:37:14AM -0700, Steve Kargl wrote:
> On Fri, Jul 08, 2011 at 07:03:13PM -0400, Jeff Squyres wrote:
> > Sorry -- I got distracted all afternoon...
> >
> > In addition to what Ralph said (i.e., I'm not sure if the CIDR
> > notation stuff made it over to the v1.5 branch or n
On Fri, Jul 08, 2011 at 07:03:13PM -0400, Jeff Squyres wrote:
> Sorry -- I got distracted all afternoon...
>
> In addition to what Ralph said (i.e., I'm not sure if the CIDR
> notation stuff made it over to the v1.5 branch or not, but it
> is available from the nightly SVN trunk tarballs:
> http:/
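For reference, the intended use of the CIDR notation is to select the TCP
interfaces by subnet rather than by name, roughly like this (the subnet is
a placeholder, and this only works on builds that already contain the CIDR
support):

  mpirun --mca btl_tcp_if_include 192.168.1.0/24 -np 4 ./a.out

On builds without it, interfaces have to be listed by name instead:

  mpirun --mca btl_tcp_if_include eth0 -np 4 ./a.out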
I can't quite parse your output.
paddress.c should be a sym link in ompi/mpi/c/profile back to
ompi/mpi/c/address.c. I'm not sure why "ls paddress.c" shows a whole directory
of files...?
You might want to whack your OMPI source tree, re-expand the tarball, and try
again. If it still fails, p
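A quick sanity check from the top of the source tree is something like:

  ls -l ompi/mpi/c/profile/paddress.c
  # a healthy tree should show a symlink, e.g. paddress.c -> ../address.c

If that path turns out to be a real directory or a plain file instead of a
link, the tree is damaged and re-expanding the tarball is the right move.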
On Jul 11, 2011, at 11:31 AM, Randolph Pullen wrote:
> There are no firewalls by default. I can ssh between both nodes without a
> password so I assumed that all is good with the comms.
FWIW, ssh'ing is different than "comms" (which I assume you mean opening random
TCP sockets between two serv
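A quick way to test raw TCP connectivity (as opposed to ssh) between two
nodes is netcat, assuming it is installed; the host name and port below
are placeholders:

  # on node B (older netcat variants need 'nc -l -p 12345' instead):
  nc -l 12345
  # on node A: type a line and it should appear on node B
  nc nodeB 12345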
Hello, buddies, I am trying to build Open MPI with icc and it's not
working. I've tried versions 1.4.3 and 1.4.2; the error is the same
but for different source files (directories).
I am using the latest icc version and have compiled the same version of
Open MPI with GNU before.
Is there any switch that n
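Without the actual error output it is hard to say, but the usual way to
point Open MPI's configure at the Intel compilers is roughly this (the
install prefix is a placeholder):

  ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=$HOME/openmpi-intel
  make all 2>&1 | tee make.log
  make install

The tail of make.log (plus config.log) is what people will need in order
to help diagnose the failure.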