That didn't come from OMPI - that error message is from LAM/MPI, which
is no longer supported.
I suggest you check the default PATH being set by Torque - it looks like
it is picking up an old LAM install.
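A quick way to confirm is to run "which mpiexec" inside the job and, if it
resolves to the LAM install, put Open MPI's bin directory first on the PATH.
A minimal sketch, assuming Open MPI lives under /opt/openmpi (adjust the
prefix to your own install):

# inside the PBS script, before the mpiexec line
which mpiexec                        # shows whose mpiexec Torque is finding
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH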
On Jun 30, 2011, at 8:24 PM, zhuangchao wrote:
> hello all ,
>
> I submitted the following Torque/PBS script.
hello all ,
I submitted the following Torque/PBS script.
#PBS -e /tmp/blast_19297.err
#PBS -o /tmp/blast.output
mpiexec -d -machinefile /tmp/nodes.19297.txt -np 3 \
  /data1/bin/mpiblast -p tblastx -i /data1/cluster/sequences/seq_4.txt \
  -d nt -o /data1/cluster/blast.ou
Hi folks,
I installed Open MPI with the libnuma library using the --with-libnuma
configure option. Everything installed fine. When I do
./ompi_info |grep maffinity
MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4.2)
I
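For reference, a minimal sketch of such a build and of selecting the
component explicitly at run time; the libnuma location, install prefix,
and application name below are placeholders, not taken from the original
post:

./configure --with-libnuma=/usr --prefix=/opt/openmpi
make all install
# force the libnuma maffinity component for one run
mpirun --mca maffinity libnuma -np 4 ./app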
Thanks, Joe.
I did say that, but I meant it in a different way. For program 'foo',
I need to tell Visual Studio that when I click the 'Run' button, it
should execute
mpiexec -np X foo
instead of just
foo
I know what I *need* to do to the VS environment; I just don't know
*how* to do it.
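In case a concrete recipe helps: in Visual Studio this is normally done
through the project's Debugging properties, roughly as below. The mpiexec
path is a guess - point it at your own MPI installation:

Project -> Properties -> Configuration Properties -> Debugging
Command:           C:\OpenMPI\bin\mpiexec.exe
Command Arguments: -np 4 "$(TargetPath)"

The Run button then launches mpiexec, which in turn starts the project's
own executable ($(TargetPath)).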
I have a cluster with mostly Mellanox ConnectX hardware and a few nodes
with QLogic QLE7340s. After looking through the web, FAQs, etc., I built
openmpi-1.5.3 with psm and openib. If I run within the same hardware, it
is fast and works fine. If I run between the two without specifying an MTL (e.g.
mpirun -np 2
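The sort of selection in question, forcing a single transport that both
adapter types support, looks roughly like this (./app is a placeholder
for the real binary):

# use the verbs BTL everywhere (both adapters speak IB verbs)
mpirun --mca pml ob1 --mca btl openib,sm,self -np 2 ./app
# or, on the QLogic nodes only, use PSM
mpirun --mca pml cm --mca mtl psm -np 2 ./app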
Prentice,
It might or might not matter, but on your older system you
may have used "LD_LIBRARY_PATH", whereas on Windows you need "PATH"
to contain the library directories.
I only mention this because you said it runs in one environment,
but not the other.
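For example, from a command prompt (the install directory is only a
guess - use wherever your Open MPI lives):

set PATH=C:\OpenMPI\bin;%PATH%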
Joe
Does anyone on this list have experience using MS Visual Studio for MPI
development? I'm supporting a Windows user who has been doing Fortran
programming on Windows using an ANCIENT version of Digital Visual
Fortran (I know, I know - using "ancient" and "Digital" in the same
sentence is redundant.)
Oops - I did not intend to cause any heart attacks =:)
Perhaps my reaction was a bit exaggerated, but I spent quite some time
figuring out why I didn't receive the same numbers I sent off.
And, after reading section 3.1 of the MPI Complete Reference, I must say
that I would have been warned if