Hi Brian,

I have installed OpenMPI-1.1a1r9260 on my SunOS machines, and it has solved
the problems. However, there is one more issue that I found in my testing
but failed to report earlier. It affects my Linux machines too.

My host file is

hosts.txt
---------
csultra06
csultra02
csultra05
csultra08

My app file is 

mpiinit_appfile
---------------
-np 1 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit
-np 1 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit
-np 1 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit
-np 1 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit
-np 1 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit
-np 1 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit
-np 1 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit
-np 1 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit
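
(If I understand the app file format correctly, it should be equivalent to
launching eight copies of mpiinit in a single command, something like:

mpirun --hostfile hosts.txt -np 8 /home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit

I mention this only in case it is easier for reproducing the problem; all of
the results below were obtained with the app file form.)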

My application program is

mpiinit.c
---------

#include <stdio.h>   /* needed for printf */
#include <mpi.h>

int main(int argc, char** argv)
{
    int rc, me;
    char pname[MPI_MAX_PROCESSOR_NAME];
    int plen;

    MPI_Init(&argc, &argv);

    /* rank of this process in MPI_COMM_WORLD */
    rc = MPI_Comm_rank(MPI_COMM_WORLD, &me);
    if (rc != MPI_SUCCESS)
    {
       return rc;
    }

    /* name of the host this rank is running on */
    MPI_Get_processor_name(pname, &plen);

    printf("%s:Hello world from %d\n", pname, me);

    MPI_Finalize();

    return 0;
}

Compilation succeeds:

csultra06$ mpicc -o mpiinit mpiinit.c

However, mpirun prints only 6 "Hello world" lines instead of 8:

csultra06$ mpirun --hostfile hosts.txt --app mpiinit_appfile
csultra02:Hello world from 5
csultra06:Hello world from 0
csultra06:Hello world from 4
csultra02:Hello world from 1
csultra08:Hello world from 3
csultra05:Hello world from 2

The following two statements are never printed:

csultra05:Hello world from 6
csultra08:Hello world from 7

I observed this behavior on my Linux cluster too.
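
In case the missing lines are just a stdout buffering/forwarding artifact on
the remote nodes, I can also try a variant that flushes stdout explicitly.
This is only a minimal, untested sketch (not what I ran above):

mpiinit_flush.c
---------------

/* Variant of mpiinit.c that flushes stdout explicitly; untested sketch. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv)
{
    int me, plen;
    char pname[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Get_processor_name(pname, &plen);

    printf("%s:Hello world from %d\n", pname, me);
    fflush(stdout);   /* force the line out before MPI_Finalize */

    MPI_Finalize();
    return 0;
}

If you think that test is useful, I will run it and report back.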

I have included the log from running mpirun with the "-d" option at the end
of this message for your debugging purposes.

Regards,
Ravi.

----- Original Message -----
From: Brian Barrett <brbar...@open-mpi.org>
Date: Monday, March 13, 2006 7:56 pm
Subject: Re: [OMPI users] problems with OpenMPI-1.0.1 on SunOS 5.9;
problems on heterogeneous cluster
To: Open MPI Users <us...@open-mpi.org>

> Hi Ravi -
> 
> With the help of another Open MPI user, I spent the weekend finding a
> couple of issues with Open MPI on Solaris.  I believe you are running
> into the same problems.  We're in the process of certifying the changes
> for release as part of 1.0.2, but it's Monday morning and the release
> manager hasn't gotten them into the release branch just yet.  Could you
> give the nightly tarball from our development trunk a try and let us
> know if it solves your problems on Solaris?  You probably want last
> night's 1.1a1r9260 release.
> 
>     http://www.open-mpi.org/nightly/trunk/
> 
> Thanks,
> 
> Brian
> 
> 
> On Mar 12, 2006, at 11:23 PM, Ravi Manumachu wrote:
> 
> >
> >  Hi Brian,
> >
> >  Thank you for your help. I have attached all the files you have
> >  asked for in a tar file.
> >
> >  Please find attached the 'config.log' and 'libmpi.la' for my
> >  Solaris installation.
> >
> >  The output from 'mpicc -showme' is
> >
> >  sunos$ mpicc -showme
> >  gcc -I/home/cs/manredd/OpenMPI/openmpi-1.0.1/OpenMPI-SunOS-5.9/include
> >  -I/home/cs/manredd/OpenMPI/openmpi-1.0.1/OpenMPI-SunOS-5.9/include/openmpi/ompi
> >  -L/home/cs/manredd/OpenMPI/openmpi-1.0.1/OpenMPI-SunOS-5.9/lib -lmpi
> >  -lorte -lopal -lnsl -lsocket -lthread -laio -lm -lnsl -lsocket -lthread -ldl
> >
> >  There are serious issues when running on just Solaris machines.
> >
> >  I am using the host file and app file shown below. Both machines
> >  are SunOS and are similar.
> >
> >  hosts.txt
> >  ---------
> >  csultra01 slots=1
> >  csultra02 slots=1
> >
> >  mpiinit_appfile
> >  ---------------
> >  -np 1 /home/cs/manredd/OpenMPI/openmpi-1.0.1/MPITESTS/mpiinit_sunos
> >  -np 1 /home/cs/manredd/OpenMPI/openmpi-1.0.1/MPITESTS/mpiinit_sunos
> >
> >  Running mpirun without the -d option hangs:
> >
> >  csultra01$ mpirun --hostfile hosts.txt --app mpiinit_appfile
> >  hangs
> >
> >  Running mpirun with the -d option dumps core, with output in the file
> >  "mpirun_output_d_option.txt", which is attached. The core is also
> >  attached.
> >
> >  Running on only one host does not work either. The output from
> >  mpirun using the "-d" option for this scenario is attached in the file
> >  "mpirun_output_d_option_one_host.txt".
> >
> >  I have also attached the list of packages installed on my Solaris
> >  machine in "pkginfo.txt".
> >
> >  I hope these will help you to resolve the issue.
> >
> >  Regards,
> >  Ravi.
> >
> >> ----- Original Message -----
> >> From: Brian Barrett <brbar...@open-mpi.org>
> >> Date: Friday, March 10, 2006 7:09 pm
> >> Subject: Re: [OMPI users] problems with OpenMPI-1.0.1 on SunOS 5.9;
> >> problems on heterogeneous cluster
> >> To: Open MPI Users <us...@open-mpi.org>
> >>
> >>> On Mar 10, 2006, at 12:09 AM, Ravi Manumachu wrote:
> >>>
> >>>> I am facing problems running OpenMPI-1.0.1 on a heterogeneous
> >>>> cluster. I have a Linux machine and a SunOS machine in this cluster.
> >>>>
> >>>> linux$ uname -a
> >>>> Linux pg1cluster01 2.6.8-1.521smp #1 SMP Mon Aug 16 09:25:06 EDT 2004
> >>>> i686 i686 i386 GNU/Linux
> >>>>
> >>>> sunos$ uname -a
> >>>> SunOS csultra01 5.9 Generic_112233-10 sun4u sparc SUNW,Ultra-5_10
> >>>
> >>> Unfortunately, this will not work with Open MPI at present.  Open MPI
> >>> 1.0.x does not have any support for running across platforms with
> >>> different endianness.  Open MPI 1.1.x has much better support for
> >>> such situations, but is far from complete, as the MPI datatype engine
> >>> does not properly fix up endian issues.  We're working on the issue,
> >>> but can not give a timetable for completion.
> >>>
> >>> Also note that (while not a problem here) Open MPI also does not
> >>> support running in a mixed 32 bit / 64 bit environment.  All
> >>> processes must be 32 or 64 bit, but not a mix.
> >>>
> >>>> $ mpirun --hostfile hosts.txt --app mpiinit_appfile
> >>>> ld.so.1: /home/cs/manredd/OpenMPI/openmpi-1.0.1/MPITESTS/mpiinit_sunos:
> >>>> fatal: relocation error: file
> >>>> /home/cs/manredd/OpenMPI/openmpi-1.0.1/OpenMPI-SunOS-5.9/lib/libmca_common_sm.so.0:
> >>>> symbol nanosleep: referenced symbol not found
> >>>> ld.so.1: /home/cs/manredd/OpenMPI/openmpi-1.0.1/MPITESTS/mpiinit_sunos:
> >>>> fatal: relocation error: file
> >>>> /home/cs/manredd/OpenMPI/openmpi-1.0.1/OpenMPI-SunOS-5.9/lib/libmca_common_sm.so.0:
> >>>> symbol nanosleep: referenced symbol not found
> >>>>
> >>>> I have fixed this by compiling with the "-lrt" option to the linker.
> >>>
> >>> You shouldn't have to do this...  Could you send me the config.log
> >>> file from configure for Open MPI, the installed $prefix/lib/libmpi.la
> >>> file, and the output of mpicc -showme?
> >>>
> >>>> sunos$ mpicc -o mpiinit_sunos mpiinit.c -lrt
> >>>>
> >>>> However, when I run this again, I get the error:
> >>>>
> >>>> $ mpirun --hostfile hosts.txt --app mpiinit_appfile
> >>>> [pg1cluster01:19858] ERROR: A daemon on node csultra01 failed to start
> >>>> as expected.
> >>>> [pg1cluster01:19858] ERROR: There may be more information available from
> >>>> [pg1cluster01:19858] ERROR: the remote shell (see above).
> >>>> [pg1cluster01:19858] ERROR: The daemon exited unexpectedly with
> >>>> status 255.
> >>>> 2 processes killed (possibly by Open MPI)
> >>>
> >>> Both of these are quite unexpected.  It looks like there is something
> >>> wrong with your Solaris build.  Can you run on *just* the Solaris
> >>> machine?  We only have limited resources for testing on Solaris, but
> >>> have not run into this issue before.  What happens if you run mpirun
> >>> on just the Solaris machine with the -d option to mpirun?
> >>>
> >>>> Sometimes I get the error:
> >>>>
> >>>> $ mpirun --hostfile hosts.txt --app mpiinit_appfile
> >>>> [csultra01:06256] mca_common_sm_mmap_init: ftruncate failed with
> >>>> errno=28
> >>>> [csultra01:06256] mca_mpool_sm_init: unable to create shared memory
> >>>> mapping
> >>>> --------------------------------------------------------------------------
> >>>> It looks like MPI_INIT failed for some reason; your parallel process is
> >>>> likely to abort.  There are many reasons that a parallel process can
> >>>> fail during MPI_INIT; some of which are due to configuration or
> >>>> environment problems.  This failure appears to be an internal failure;
> >>>> here's some additional information (which may only be relevant to an
> >>>> Open MPI developer):
> >>>>
> >>>>   PML add procs failed
> >>>>   --> Returned value -2 instead of OMPI_SUCCESS
> >>>> --------------------------------------------------------------------------
> >>>> *** An error occurred in MPI_Init
> >>>> *** before MPI was initialized
> >>>> *** MPI_ERRORS_ARE_FATAL (goodbye)
> >>>
> >>> This looks like you got far enough along that you ran into our
> >>> endianness issues, so this is about the best case you can hope for
> >>> in your configuration.  The ftruncate error worries me, however.
> >>> But I think this is another symptom of something wrong with your
> >>> Sun Sparc build.
> >>>
> >>> Brian
> >>>
> >>> -- 
> >>>   Brian Barrett
> >>>   Open MPI developer
> >>>   http://www.open-mpi.org/
> >>>
> >>>
> >>
> >> <OpenMPI-1.0.1-SunOS-5.9.tar.gz>
> 
> -- 
>   Brian Barrett
>   Open MPI developer
>   http://www.open-mpi.org/
> 
> 
[csultra06:00526] [0,0,0] setting up session dir with
[csultra06:00526]       universe default-universe
[csultra06:00526]       user manredd
[csultra06:00526]       host csultra06
[csultra06:00526]       jobid 0
[csultra06:00526]       procid 0
[csultra06:00526] procdir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/0/0
[csultra06:00526] jobdir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/0
[csultra06:00526] unidir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe
[csultra06:00526] top: openmpi-sessions-manredd@csultra06_0
[csultra06:00526] tmp: /tmp
[csultra06:00526] [0,0,0] contact_file
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/universe-setup.txt
[csultra06:00526] [0,0,0] wrote setup file
[csultra06:00526] pls:rsh: local csh: 0, local bash: 1
[csultra06:00526] pls:rsh: assuming same remote shell as local shell
[csultra06:00526] pls:rsh: remote csh: 0, remote bash: 1
[csultra06:00526] pls:rsh: final template argv:
[csultra06:00526] pls:rsh:     /bin/ssh <template> orted --debug --bootproxy 1
--name <template> --num_procs 5 --vpid_start 0 --nodename <template>
--universe manredd@csultra06:default-universe --nsreplica
"0.0.0;tcp://193.1.132.62:51629" --gprreplica "0.0.0;tcp://193.1.132.62:51629"
--mpi-call-yield 0
[csultra06:00526] pls:rsh: launching on node csultra08
[csultra06:00526] pls:rsh: not oversubscribed -- setting mpi_yield_when_idle
to 0
[csultra06:00526] pls:rsh: csultra08 is a REMOTE node
[csultra06:00526] pls:rsh: executing: /bin/ssh csultra08 orted --debug
--bootproxy 1 --name 0.0.1 --num_procs 5 --vpid_start 0 --nodename csultra08
--universe manredd@csultra06:default-universe --nsreplica
"0.0.0;tcp://193.1.132.62:51629" --gprreplica "0.0.0;tcp://193.1.132.62:51629"
--mpi-call-yield 0
[csultra06:00526] pls:rsh: launching on node csultra05
[csultra06:00526] pls:rsh: oversubscribed -- setting mpi_yield_when_idle to 1
(1 2)
[csultra06:00526] pls:rsh: csultra05 is a REMOTE node
[csultra06:00526] pls:rsh: executing: /bin/ssh csultra05 orted --debug
--bootproxy 1 --name 0.0.2 --num_procs 5 --vpid_start 0 --nodename csultra05
--universe manredd@csultra06:default-universe --nsreplica
"0.0.0;tcp://193.1.132.62:51629" --gprreplica "0.0.0;tcp://193.1.132.62:51629"
--mpi-call-yield 1
[csultra08:04400] [0,0,1] setting up session dir with
[csultra08:04400]       universe default-universe
[csultra08:04400]       user manredd
[csultra08:04400]       host csultra08
[csultra08:04400]       jobid 0
[csultra08:04400]       procid 1
[csultra08:04400] procdir:
/tmp/openmpi-sessions-manredd@csultra08_0/default-universe/0/1
[csultra08:04400] jobdir:
/tmp/openmpi-sessions-manredd@csultra08_0/default-universe/0
[csultra08:04400] unidir:
/tmp/openmpi-sessions-manredd@csultra08_0/default-universe
[csultra08:04400] top: openmpi-sessions-manredd@csultra08_0
[csultra08:04400] tmp: /tmp
[csultra06:00526] pls:rsh: launching on node csultra02
[csultra06:00526] pls:rsh: oversubscribed -- setting mpi_yield_when_idle to 1
(1 2)
[csultra06:00526] pls:rsh: csultra02 is a REMOTE node
[csultra06:00526] pls:rsh: executing: /bin/ssh csultra02 orted --debug
--bootproxy 1 --name 0.0.3 --num_procs 5 --vpid_start 0 --nodename csultra02
--universe manredd@csultra06:default-universe --nsreplica
"0.0.0;tcp://193.1.132.62:51629" --gprreplica "0.0.0;tcp://193.1.132.62:51629"
--mpi-call-yield 1
[csultra05:02884] [0,0,2] setting up session dir with
[csultra05:02884]       universe default-universe
[csultra05:02884]       user manredd
[csultra05:02884]       host csultra05
[csultra05:02884]       jobid 0
[csultra05:02884]       procid 2
[csultra05:02884] procdir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe/0/2
[csultra05:02884] jobdir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe/0
[csultra05:02884] unidir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe
[csultra05:02884] top: openmpi-sessions-manredd@csultra05_0
[csultra05:02884] tmp: /tmp
[csultra06:00526] pls:rsh: launching on node csultra06
[csultra06:00526] pls:rsh: oversubscribed -- setting mpi_yield_when_idle to 1
(1 2)
[csultra06:00526] pls:rsh: csultra06 is a LOCAL node
[csultra06:00526] pls:rsh: changing to directory /home/cs/manredd
[csultra06:00526] pls:rsh: executing: orted --debug --bootproxy 1 --name 0.0.4
--num_procs 5 --vpid_start 0 --nodename csultra06 --universe
manredd@csultra06:default-universe --nsreplica
"0.0.0;tcp://193.1.132.62:51629" --gprreplica "0.0.0;tcp://193.1.132.62:51629"
--mpi-call-yield 1
[csultra06:00530] [0,0,4] setting up session dir with
[csultra06:00530]       universe default-universe
[csultra06:00530]       user manredd
[csultra06:00530]       host csultra06
[csultra06:00530]       jobid 0
[csultra06:00530]       procid 4
[csultra06:00530] procdir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/0/4
[csultra06:00530] jobdir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/0
[csultra06:00530] unidir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe
[csultra06:00530] top: openmpi-sessions-manredd@csultra06_0
[csultra06:00530] tmp: /tmp
[csultra02:28730] [0,0,3] setting up session dir with
[csultra02:28730]       universe default-universe
[csultra02:28730]       user manredd
[csultra02:28730]       host csultra02
[csultra02:28730]       jobid 0
[csultra02:28730]       procid 3
[csultra02:28730] procdir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe/0/3
[csultra02:28730] jobdir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe/0
[csultra02:28730] unidir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe
[csultra02:28730] top: openmpi-sessions-manredd@csultra02_0
[csultra02:28730] tmp: /tmp
[csultra08:04452] [0,1,3] setting up session dir with
[csultra08:04452]       universe default-universe
[csultra08:04452]       user manredd
[csultra08:04452]       host csultra08
[csultra08:04452]       jobid 1
[csultra08:04452]       procid 3
[csultra08:04452] procdir:
/tmp/openmpi-sessions-manredd@csultra08_0/default-universe/1/3
[csultra08:04452] jobdir:
/tmp/openmpi-sessions-manredd@csultra08_0/default-universe/1
[csultra08:04452] unidir:
/tmp/openmpi-sessions-manredd@csultra08_0/default-universe
[csultra08:04452] top: openmpi-sessions-manredd@csultra08_0
[csultra08:04452] tmp: /tmp
[csultra02:28782] [0,1,1] setting up session dir with
[csultra02:28782]       universe default-universe
[csultra02:28782]       user manredd
[csultra02:28782]       host csultra02
[csultra02:28782]       jobid 1
[csultra02:28782]       procid 1
[csultra02:28782] procdir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe/1/1
[csultra02:28782] jobdir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe/1
[csultra02:28782] unidir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe
[csultra02:28782] top: openmpi-sessions-manredd@csultra02_0
[csultra02:28782] tmp: /tmp
[csultra05:02936] [0,1,2] setting up session dir with
[csultra05:02936]       universe default-universe
[csultra05:02936]       user manredd
[csultra05:02936]       host csultra05
[csultra05:02936]       jobid 1
[csultra05:02936]       procid 2
[csultra05:02936] procdir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe/1/2
[csultra05:02936] jobdir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe/1
[csultra05:02936] unidir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe
[csultra05:02936] top: openmpi-sessions-manredd@csultra05_0
[csultra05:02936] tmp: /tmp
[csultra06:00534] [0,1,4] setting up session dir with
[csultra06:00534]       universe default-universe
[csultra06:00534]       user manredd
[csultra06:00534]       host csultra06
[csultra06:00534]       jobid 1
[csultra06:00534]       procid 4
[csultra06:00534] procdir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/1/4
[csultra06:00534] jobdir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/1
[csultra06:00534] unidir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe
[csultra06:00534] top: openmpi-sessions-manredd@csultra06_0
[csultra06:00534] tmp: /tmp
[csultra05:02938] [0,1,6] setting up session dir with
[csultra05:02938]       universe default-universe
[csultra05:02938]       user manredd
[csultra05:02938]       host csultra05
[csultra05:02938]       jobid 1
[csultra05:02938]       procid 6
[csultra05:02938] procdir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe/1/6
[csultra05:02938] jobdir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe/1
[csultra05:02938] unidir:
/tmp/openmpi-sessions-manredd@csultra05_0/default-universe
[csultra02:28784] [0,1,5] setting up session dir with
[csultra05:02938] top: openmpi-sessions-manredd@csultra05_0
[csultra02:28784]       universe default-universe
[csultra05:02938] tmp: /tmp
[csultra02:28784]       user manredd
[csultra02:28784]       host csultra02
[csultra02:28784]       jobid 1
[csultra02:28784]       procid 5
[csultra02:28784] procdir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe/1/5
[csultra02:28784] jobdir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe/1
[csultra02:28784] unidir:
/tmp/openmpi-sessions-manredd@csultra02_0/default-universe
[csultra02:28784] top: openmpi-sessions-manredd@csultra02_0
[csultra02:28784] tmp: /tmp
[csultra06:00532] [0,1,0] setting up session dir with
[csultra06:00532]       universe default-universe
[csultra06:00532]       user manredd
[csultra06:00532]       host csultra06
[csultra06:00532]       jobid 1
[csultra06:00532]       procid 0
[csultra06:00532] procdir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/1/0
[csultra06:00532] jobdir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe/1
[csultra06:00532] unidir:
/tmp/openmpi-sessions-manredd@csultra06_0/default-universe
[csultra06:00532] top: openmpi-sessions-manredd@csultra06_0
[csultra06:00532] tmp: /tmp
[csultra06:00526] spawn: in job_state_callback(jobid = 1, state = 0x4)
[csultra06:00526] Info: Setting up debugger process table for applications
  MPIR_being_debugged = 0
  MPIR_debug_gate = 0
  MPIR_debug_state = 1
  MPIR_acquired_pre_main = 0
  MPIR_i_am_starter = 0
  MPIR_proctable_size = 7
  MPIR_proctable:
    (i, host, exe, pid) = (0, csultra06,
/home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit, 532)
    (i, host, exe, pid) = (1, csultra02,
/home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit, 28782)
    (i, host, exe, pid) = (2, csultra05,
/home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit, 2936)
    (i, host, exe, pid) = (3, csultra08,
/home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit, 4452)
    (i, host, exe, pid) = (4, csultra06,
/home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit, 534)
    (i, host, exe, pid) = (5, csultra02,
/home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit, 28784)
    (i, host, exe, pid) = (6, csultra05,
/home/cs/manredd/OpenMPI/openmpi-1.1a1r9260/MPITESTS/mpiinit, 2938)
[csultra06:00532] [0,1,0] ompi_mpi_init completed
csultra08:Hello world from 3
[csultra05:02936] [0,1,2] ompi_mpi_init completed
csultra06:Hello world from 0
[csultra08:04452] [0,1,3] ompi_mpi_init completed
csultra05:Hello world from 2
[csultra06:00534] [0,1,4] ompi_mpi_init completed
[csultra05:02938] [0,1,6] ompi_mpi_init completed
csultra06:Hello world from 4
csultra02:Hello world from 1
csultra05:Hello world from 6
[csultra02:28782] [0,1,1] ompi_mpi_init completed
csultra02:Hello world from 5
[csultra02:28784] [0,1,5] ompi_mpi_init completed
[csultra06:00530] sess_dir_finalize: proc session dir not empty - leaving
[csultra06:00530] sess_dir_finalize: proc session dir not empty - leaving
[csultra02:28730] sess_dir_finalize: proc session dir not empty - leaving
[csultra05:02884] sess_dir_finalize: proc session dir not empty - leaving
[csultra08:04400] sess_dir_finalize: proc session dir not empty - leaving
[csultra08:04452] sess_dir_finalize: found proc session dir empty - deleting
[csultra08:04452] sess_dir_finalize: found job session dir empty - deleting
[csultra08:04452] sess_dir_finalize: univ session dir not empty - leaving
[csultra05:02936] sess_dir_finalize: found proc session dir empty - deleting
[csultra06:00532] sess_dir_finalize: found proc session dir empty - deleting
[csultra02:28782] sess_dir_finalize: found proc session dir empty - deleting
[csultra02:28782] sess_dir_finalize: job session dir not empty - leaving
[csultra06:00530] orted: job_state_callback(jobid = 1, state =
ORTE_PROC_STATE_TERMINATED)
[csultra06:00530] sess_dir_finalize: job session dir not empty - leaving
[csultra06:00530] sess_dir_finalize: found proc session dir empty - deleting
[csultra06:00530] sess_dir_finalize: job session dir not empty - leaving
[csultra05:02884] sess_dir_finalize: proc session dir not empty - leaving
[csultra02:28730] sess_dir_finalize: proc session dir not empty - leaving
