What version of OSCAR are you using, and on what platform?

Also, could you send us a copy of hello++.cpp?  It looks like there may
be an error in it.  And did all of the OSCAR tests pass?
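
In case it helps you compare, a minimal MPI hello world that prints the
expected "i of n" looks roughly like the following.  This is only a sketch
of what we'd expect hello++.cpp to contain, not necessarily your code; it
uses the C bindings (which work fine from C++) rather than LAM's C++
bindings.

```cpp
// Minimal MPI "hello world" sketch.  Compile with the LAM wrapper:
//   /opt/lam-7.0.6/bin/mpicc hello++.cpp -o hello++
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    int rank = 0, size = 0;

    MPI_Init(&argc, &argv);               // must be the first MPI call
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); // this process's id: 0..size-1
    MPI_Comm_size(MPI_COMM_WORLD, &size); // total number of processes

    std::printf("Hello World! I am %d of %d\n", rank, size);

    MPI_Finalize();                       // required before exiting
    return 0;
}
```

If your program already calls MPI_Init/MPI_Finalize like this and still
prints "0 of 1", the problem is more likely in how it was compiled or
launched than in the code itself.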

LAM itself appears to be booting correctly, at least on the surface.
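
For what it's worth, every process reporting "I am 0 of 1" is the classic
symptom of compiling with one MPI implementation's mpicc but launching
with a different implementation's mpirun (each process then runs as a
singleton).  A quick sanity check -- the paths in the comments are just
the ones from your post:

```shell
# Verify that mpicc and mpirun resolve to the same MPI installation.
mpicc_path=$(command -v mpicc)    # you reported /opt/lam-7.0.6/bin/mpicc
mpirun_path=$(command -v mpirun)  # should also be under /opt/lam-7.0.6/bin

if [ "$(dirname "$mpicc_path")" = "$(dirname "$mpirun_path")" ]; then
    echo "mpicc and mpirun come from the same installation"
else
    echo "MISMATCH: fix your PATH so both come from the same MPI, then recompile"
fi
```

If "which mpirun" points somewhere other than /opt/lam-7.0.6/bin (e.g. an
MPICH install), that alone would explain the output you're seeing.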



On 4/3/06, Michelle Chu <[EMAIL PROTECTED]> wrote:
> Hello, there,
>
>  When I mpirun a simple hello MPI program on all eight of my nodes as
> shown below, I get a sequence of "Hello World! I am 0 of 1" instead of
> "1 of 8", "2 of 8", "3 of 8", and so on. There is also a problem with
> MPI_INIT. Thank you for your help.
>
>  which mpicc
>  /opt/lam-7.0.6/bin/mpicc
>  lamboot -v my_hostfile
>  my_hostfile is:
>  **************************************
>  athena.cs.xxx.edu
>  oscarnode1.cs.xxx.edu
>  oscarnode2.cs.xxx.edu
>  oscarnode3.cs.xxx.edu
>  oscarnode4.cs.xxx.edu
>  oscarnode5.cs.xxx.edu
>  oscarnode6.cs.xxx.edu
>  oscarnode7.cs.xxx.edu
>  oscarnode8.cs.xxx.edu
>
> *****************************************************************************
>  LAM 7.0.6/MPI 2 C++/ROMIO - Indiana University
>
>  n-1<16365> ssi:boot:base:linear: booting n0 (athena.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: booting n1 (oscarnode1.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: booting n2 (oscarnode2.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: booting n3 (oscarnode3.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: booting n4 (oscarnode4.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: booting n5 (oscarnode5.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: booting n6 (oscarnode6.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: booting n7 (oscarnode7.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: booting n8 (oscarnode8.cs.xxx.edu)
>  n-1<16365> ssi:boot:base:linear: finished
>
>  mpirun N hello++
> *****************************************************************
>  Hello World! I am 0 of 1
>  Hello World! I am 0 of 1
>  Hello World! I am 0 of 1
>  Hello World! I am 0 of 1
>  Hello World! I am 0 of 1
>  Hello World! I am 0 of 1
> -----------------------------------------------------------------------------
>  It seems that [at least] one of the processes that was started with
>  mpirun did not invoke MPI_INIT before quitting (it is possible that
>  more than one process did not invoke MPI_INIT -- mpirun was only
>  notified of the first one, which was on node n0).
>
>  mpirun can *only* be used with MPI programs (i.e., programs that
>  invoke MPI_INIT and MPI_FINALIZE).  You can use the "lamexec" program
>  to run non-MPI programs over the lambooted nodes.
> -----------------------------------------------------------------------------
>  Hello World! I am 0 of 1
> ****************************************************************************************************
>


_______________________________________________
Oscar-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/oscar-users
