It is 1.4.3, as in the subject line. Is that the version you are asking about?

Otherwise, I should indeed have attached the ompi_info output; here it is (edited):

                 Package: Open MPI hicham.mouline@hostname
                          Distribution
                Open MPI: 1.4.3
   Open MPI SVN revision: r23834
   Open MPI release date: Oct 05, 2010
                Open RTE: 1.4.3
   Open RTE SVN revision: r23834
   Open RTE release date: Oct 05, 2010
                    OPAL: 1.4.3
       OPAL SVN revision: r23834
       OPAL release date: Oct 05, 2010
            Ident string: 1.4.3
                  Prefix: C:/Program Files/openmpi
 Configured architecture: x86 Windows-5.1
          Configure host: hostname
           Configured by: hicham.mouline
           Configured on: 18:07 19/11/2010
                Built by: hicham.mouline
                Built on: 18:07 19/11/2010
              Built host: hostname
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: no
      Fortran90 bindings: no
 Fortran90 bindings size: na
              C compiler: C:/Program Files/Microsoft Visual Studio
                          9.0/VC/bin/cl.exe
     C compiler absolute: C:/Program Files/Microsoft Visual Studio
                          9.0/VC/bin/cl.exe
            C++ compiler: C:/Program Files/Microsoft Visual Studio
                          9.0/VC/bin/cl.exe
   C++ compiler absolute: C:/Program Files/Microsoft Visual Studio
                          9.0/VC/bin/cl.exe
      Fortran77 compiler: CMAKE_Fortran_COMPILER-NOTFOUND
  Fortran77 compiler abs: none
      Fortran90 compiler:
  Fortran90 compiler abs: none
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: no
     Fortran90 profiling: no
          C++ exceptions: no
          Thread support: no
           Sparse Groups: no
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: no
   Heterogeneous support: no
 mpirun default --prefix: yes
         MPI I/O support: yes
       MPI_WTIME support: gettimeofday
Symbol visibility support: yes
   FT Checkpoint support: yes  (checkpoint thread: no)
           MCA backtrace: none (MCA v2.0, API v2.0, Component v1.4.3)
           MCA paffinity: windows (MCA v2.0, API v2.0, Component v1.4.3)
               MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.3)
           MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.3)
               MCA timer: windows (MCA v2.0, API v2.0, Component v1.4.3)
         MCA installdirs: windows (MCA v2.0, API v2.0, Component v1.4.3)
         MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.3)
         MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA crs: none (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.3)
              MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.3)
           MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.3)
           MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.3)
                MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.3)
                MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.3)
                MCA coll: self (MCA v2.0, API v2.0, Component v1.4.3)
                MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.3)
                MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.3)
               MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.3)
               MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA btl: self (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA btl: sm (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4.3)
                MCA topo: unity (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA osc: rdma (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA iof: hnp (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA iof: orted (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA iof: tool (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA oob: tcp (MCA v2.0, API v2.0, Component v1.4.3)
                MCA odls: process (MCA v2.0, API v2.0, Component v1.4.3)
               MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.4.3)
               MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA rml: ftrm (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA rml: oob (MCA v2.0, API v2.0, Component v1.4.3)
              MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4.3)
              MCA routed: linear (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA plm: process (MCA v2.0, API v2.0, Component v1.4.3)
              MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA ess: env (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4.3)
                 MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.3)
             MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.3)
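In case it helps to rule out boost::mpi, below is a minimal plain-MPI sketch of the same test (my assumption being that boost::mpi's environment and communicator simply wrap MPI_Init/MPI_Finalize and MPI_COMM_WORLD). It prints what I expect from the MPMD run:

#include <iostream>
#include <mpi.h>

int main(int argc, char* argv[])
{
    // Plain-MPI equivalent of the boost::mpi test quoted below (a sketch,
    // not yet tried on this machine)
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // With "mpirun -np 3 .\test.exe : -np 3 .\test2.exe" I would expect
    // size == 6 and ranks 0..5 spread across the two executables.
    std::cout << "Process #" << rank << " of " << size << std::endl;

    MPI_Finalize();
    return 0;
}

If it is useful, I can also try the appfile form of the same MPMD launch (assuming --app behaves the same on the Windows build as elsewhere):

    mpirun --app my_appfile

where my_appfile contains one app context per line:

    -np 3 .\test.exe
    -np 3 .\test2.exe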

Any help is appreciated,

regards,
-----Original Message-----
From: "Ralph Castain" [r...@open-mpi.org]
Date: 30/11/2010 02:47 PM
To: "Open MPI Users" 
Subject: Re: [OMPI users] failure to launch MPMD program on win32 w 1.4.3

It truly does help to know what version of OMPI you are using - otherwise,
there is little we can do to help.

On Nov 30, 2010, at 4:05 AM, Hicham Mouline wrote:

> Hello,
>
> I have successfully run
>
> mpirun -np 3 .\test.exe
>
> when I try MPMD
>
> mpirun -np 3 .\test.exe : -np 3 .\test2.exe
>
> where test and test2 are identical (just for a trial), I get this error:
>
> [hostname:04960] [[47427,1],0]-[[47427,0],0] mca_oob_tcp_peer_send_blocking: 
> send() failed: Unknown error (10057)
> [hostname:04960] [[47427,1],0] routed:binomial: Connection to lifeline 
> [[47427,0],0] lost
>
> Granted, this uses boost::mpi, but it worked for SPMD, and the source for the
> main function is trivial:
>
> #include <iostream>
> #include <boost/mpi.hpp>
>
> namespace mpi = boost::mpi;
>
> int main(int argc, char* argv[])
> {
>     mpi::environment env(argc, argv);
>     mpi::communicator world;
>
>     std::cout << "Process #" << world.rank() << " says " << std::endl;
>     return 0;
> }
>
>
> As far as I understand, there should be one world (MPI_COMM_WORLD) with 6
> processes, ranked 0 1 2, 3 4 5.
>
> regards,
