Re: [OMPI users] init_thread + spawn error

2007-10-01 Thread Tim Prins
Hi Joao,

Unfortunately, Comm_spawn is a bit broken right now on the Open MPI trunk. We
are currently working on some major changes to the runtime system, so I would 
rather not dig into this until these changes have made it onto the trunk.

I do not know of a timeline for when these changes will be put into the 
trunk and Comm_spawn (especially with threads) can be expected to work 
correctly again.
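
In the meantime, one way to narrow the problem down is to request 
MPI_THREAD_SINGLE in the same test; if that variant spawns cleanly, the 
failure is specific to the threaded path. A minimal sketch (an illustration, 
not code from this thread):

/* Same spawn test, but requesting the lowest thread level. */
#include "mpi.h"

int main (int argc, char **argv)
{
  int provided;
  MPI_Comm slave;
  char *arg[] = {"spawn1", (char *)0};

  MPI_Init_thread (&argc, &argv, MPI_THREAD_SINGLE, &provided);
  MPI_Comm_spawn ("./spawn_slave", arg, 1, MPI_INFO_NULL, 0,
                  MPI_COMM_SELF, &slave, MPI_ERRCODES_IGNORE);
  MPI_Finalize ();
  return 0;
}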

Tim

On Monday 01 October 2007 03:40:46 pm Joao Vicente Lima wrote:
> Hi all!
> I'm getting an error when calling MPI_Init_thread and MPI_Comm_spawn.
> Am I doing something wrong?
> The attachments contain my ompi_info output and source ...
>
> thanks!
> Joao
>
> 
>   char *arg[]= {"spawn1", (char *)0};
>
>   MPI_Init_thread (&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
>   MPI_Comm_spawn ("./spawn_slave", arg, 1,
>   MPI_INFO_NULL, 0, MPI_COMM_SELF, &slave,
>   MPI_ERRCODES_IGNORE);
> .
>
> and the error:
>
> opal_mutex_lock(): Resource deadlock avoided
> [c8:13335] *** Process received signal ***
> [c8:13335] Signal: Aborted (6)
> [c8:13335] Signal code:  (-6)
> [c8:13335] [ 0] [0xb7fbf440]
> [c8:13335] [ 1] /lib/libc.so.6(abort+0x101) [0xb7abd5b1]
> [c8:13335] [ 2] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e2933c]
> [c8:13335] [ 3] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e2923a]
> [c8:13335] [ 4] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e292e3]
> [c8:13335] [ 5] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e29fa7]
> [c8:13335] [ 6] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e29eda]
> [c8:13335] [ 7] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e2adec]
> [c8:13335] [ 8] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0(ompi_proc_unpack+0x181) [0xb7e2b142]
> [c8:13335] [ 9] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0(ompi_comm_connect_accept+0x57c) [0xb7e0fb70]
> [c8:13335] [10] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0(PMPI_Comm_spawn+0x395) [0xb7e5e285]
> [c8:13335] [11] ./spawn(main+0x7f) [0x80486ef]
> [c8:13335] [12] /lib/libc.so.6(__libc_start_main+0xdc) [0xb7aa7ebc]
> [c8:13335] [13] ./spawn [0x80485e1]
> [c8:13335] *** End of error message ***
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 13335 on
> node c8 calling "abort". This will have caused other processes
> in the application to be terminated by signals sent by mpirun
> (as reported here).
> --------------------------------------------------------------------------




[OMPI users] init_thread + spawn error

2007-10-01 Thread Joao Vicente Lima
Hi all!
I'm getting an error when calling MPI_Init_thread and MPI_Comm_spawn.
Am I doing something wrong?
The attachments contain my ompi_info output and source ...

thanks!
Joao


  char *arg[]= {"spawn1", (char *)0};

  MPI_Init_thread (&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  MPI_Comm_spawn ("./spawn_slave", arg, 1,
  MPI_INFO_NULL, 0, MPI_COMM_SELF, &slave,
  MPI_ERRCODES_IGNORE);
.

and the error:

opal_mutex_lock(): Resource deadlock avoided
[c8:13335] *** Process received signal ***
[c8:13335] Signal: Aborted (6)
[c8:13335] Signal code:  (-6)
[c8:13335] [ 0] [0xb7fbf440]
[c8:13335] [ 1] /lib/libc.so.6(abort+0x101) [0xb7abd5b1]
[c8:13335] [ 2] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e2933c]
[c8:13335] [ 3] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e2923a]
[c8:13335] [ 4] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e292e3]
[c8:13335] [ 5] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e29fa7]
[c8:13335] [ 6] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e29eda]
[c8:13335] [ 7] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0 [0xb7e2adec]
[c8:13335] [ 8] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0(ompi_proc_unpack+0x181) [0xb7e2b142]
[c8:13335] [ 9] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0(ompi_comm_connect_accept+0x57c) [0xb7e0fb70]
[c8:13335] [10] /usr/local/openmpi/openmpi-svn/lib/libmpi.so.0(PMPI_Comm_spawn+0x395) [0xb7e5e285]
[c8:13335] [11] ./spawn(main+0x7f) [0x80486ef]
[c8:13335] [12] /lib/libc.so.6(__libc_start_main+0xdc) [0xb7aa7ebc]
[c8:13335] [13] ./spawn [0x80485e1]
[c8:13335] *** End of error message ***
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 13335 on
node c8 calling "abort". This will have caused other processes
in the application to be terminated by signals sent by mpirun
(as reported here).
--------------------------------------------------------------------------
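
For context: "Resource deadlock avoided" is strerror(EDEADLK), which an 
error-checking POSIX mutex returns when the thread that already holds it 
tries to lock it a second time; a debug build of Open MPI evidently checks 
for this inside opal_mutex_lock(). A standalone sketch (illustrative only, 
not Open MPI code) that reproduces the message:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main (void)
{
  pthread_mutexattr_t attr;
  pthread_mutex_t m;
  int rc;

  /* An error-checking mutex reports EDEADLK instead of hanging
     when the owning thread locks it a second time. */
  pthread_mutexattr_init (&attr);
  pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_ERRORCHECK);
  pthread_mutex_init (&m, &attr);

  pthread_mutex_lock (&m);
  rc = pthread_mutex_lock (&m);   /* second lock by the same thread */
  printf ("%s\n", strerror (rc)); /* "Resource deadlock avoided" */

  pthread_mutex_unlock (&m);
  pthread_mutex_destroy (&m);
  pthread_mutexattr_destroy (&attr);
  return 0;
}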

#include "mpi.h"
#include <stdio.h>

int main (int argc, char **argv)
{
  int provided;
  MPI_Comm slave;
  char *arg[]= {"spawn1", (char *)0};

  MPI_Init_thread (&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  MPI_Comm_spawn ("./spawn_slave", arg, 1, 
  MPI_INFO_NULL, 0, MPI_COMM_SELF, &slave,
  MPI_ERRCODES_IGNORE);

  MPI_Finalize ();
  return 0;
}
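
The spawn_slave program itself was not among the attachments. A minimal 
sketch of what a matching slave could look like (hypothetical; spawn_slave.c 
is a stand-in name, not the poster's file):

/* Hypothetical spawn_slave.c (not part of the original post): it only
   obtains the intercommunicator to the parent and shuts down cleanly. */
#include <stdio.h>
#include "mpi.h"

int main (int argc, char **argv)
{
  MPI_Comm parent;

  MPI_Init (&argc, &argv);
  MPI_Comm_get_parent (&parent);   /* MPI_COMM_NULL if not spawned */
  if (parent != MPI_COMM_NULL)
    MPI_Comm_disconnect (&parent);
  MPI_Finalize ();
  return 0;
}

Both binaries build with the Open MPI wrapper compiler, and the master runs 
as a single process:

mpicc spawn.c -o spawn
mpicc spawn_slave.c -o spawn_slave
mpirun -np 1 ./spawn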
                Open MPI: 1.3a1r16236
   Open MPI SVN revision: r16236
                Open RTE: 1.3a1r16236
   Open RTE SVN revision: r16236
                    OPAL: 1.3a1r16236
       OPAL SVN revision: r16236
                  Prefix: /usr/local/openmpi/openmpi-svn
 Configured architecture: i686-pc-linux-gnu
          Configure host: corisco
           Configured by: lima
           Configured on: Wed Sep 26 11:37:04 BRT 2007
          Configure host: corisco
                Built by: lima
                Built on: Wed Sep 26 12:07:13 BRT 2007
              Built host: corisco
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: no
 Fortran90 bindings size: na
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: g77
  Fortran77 compiler abs: /usr/bin/g77
      Fortran90 compiler: none
  Fortran90 compiler abs: none
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: no
          C++ exceptions: no
          Thread support: posix (mpi: yes, progress: no)
           Sparse Groups: no
  Internal debug support: yes
     MPI parameter check: runtime
Memory profiling support: yes
Memory debugging support: yes
         libltdl support: yes
   Heterogeneous support: yes
 mpirun default --prefix: no
         MPI I/O support: yes
           MCA backtrace: execinfo (MCA v1.0, API v1.0, Component v1.3)
              MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.3)
           MCA paffinity: linux (MCA v1.0, API v1.1, Component v1.3)
           MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.3)
               MCA timer: linux (MCA v1.0, API v1.0, Component v1.3)
         MCA installdirs: env (MCA v1.0, API v1.0, Component v1.3)
         MCA installdirs: config (MCA v1.0, API v1.0, Component v1.3)
           MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
           MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: basic (MCA v1.0, API v1.0, Component v1.3)
                MCA coll: inter (MCA v1.0, API v1.0, Component v1.3)
                MCA coll: self (MCA v1.0, API v1.0, Component v1.3)
                MCA coll: sm (MCA v1.0, API v1.0, Component v1.3)
                MCA coll: tuned (MCA v1.0, API v1.0, Component v1.3)
                  MCA io: romio (MCA v1.0, API v1.0, Component v1.3)
               MCA mpool: rdma (MCA v1.0, API v1.0, Component v1.3)
               MCA mpool: sm (MCA v1.0, API v1.0, Component v1.3)
                 MCA pml: cm (MCA v1.0, API v1.0,