I believe this -should- work, but can't verify it myself. The most important
thing is to be sure you built with --enable-heterogeneous or else it will
definitely fail.
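For reference, something along these lines at configure time turns it on (the
install prefix here is only a placeholder):

./configure --prefix=/opt/openmpi --enable-heterogeneous
make all install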
Ralph
On 4/10/08 7:17 AM, "Rolf Vandevaart" wrote:
>
> On a CentOS Linux box, I see the following:
Hi Jody
Simple answer - the 1.2.x series does not support multiple hostfiles. I
believe you will find that documented in the FAQ section.
What you have to do here is have -one- hostfile that includes all the hosts,
and then use -host with each app-context to indicate which of those hosts are
to be used.
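For example (hostnames taken from your mails; the slot counts are just a
guess), a single hostfile such as

aim-plankton slots=3
aim-fanta4 slots=3

used roughly like this:

mpirun --hostfile myhosts -np 3 -host aim-plankton ./MPITest : -np 3 -host aim-fanta4 ./MPITest64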
Rolf,
I was able to run hostname on the two nodes that way,
and a simplified version of my test program (without a barrier) also works.
Only MPI_Barrier shows bad behaviour.
Do you know what this message means?
[aim-plankton][0,1,2][btl_tcp_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect]
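For what it's worth, a minimal barrier-only reproducer (just a sketch, not the
actual MPITest/QHGLauncher code) would exercise only that call:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: before barrier\n", rank);
    fflush(stdout);
    MPI_Barrier(MPI_COMM_WORLD);   /* the call that shows the bad behaviour */
    printf("rank %d: after barrier\n", rank);
    MPI_Finalize();
    return 0;
}

Compiled once 32-bit and once 64-bit, and launched with the same two-host
command line as the hostname test.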
This worked for me, although I am not sure how extensive our 32/64
interoperability support is. I tested on Solaris using the TCP
interconnect and a 1.2.5 version of Open MPI. Also, we configure with
the --enable-heterogeneous flag, which may make a difference here. Also,
this did not work
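As a separate check (not the command referred to above), you can restrict a
run to the TCP BTL explicitly with the standard MCA parameter; hostnames and
binaries here are the ones from Jody's mails:

mpirun --mca btl tcp,self -np 3 -host aim-plankton ./MPITest : -np 3 -host aim-fanta4 ./MPITest64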
Hi
Using a more realistic application than a simple "Hello, world",
even the --host version doesn't work correctly.
Called this way:
mpirun -np 3 --host aim-plankton ./QHGLauncher
--read-config=pureveg_new.cfg -o output.txt : -np 3 --host aim-fanta4
./QHGLauncher_64 --read-config=pureveg_new.cfg -o
Hi
In my network I have some 32-bit machines and some 64-bit machines.
With --host I successfully call my application:
mpirun -np 3 --host aim-plankton -x DISPLAY ./run_gdb.sh ./MPITest :
-np 3 --host aim-fanta4 -x DISPLAY ./run_gdb.sh ./MPITest64
(MPITest64 has the same code as MPITest, but was