George --

I've confirmed that it works with 1.6.4 and am awaiting additional information 
from this user.


On May 29, 2013, at 8:08 AM, George Bosilca <bosi...@icl.utk.edu> wrote:

> I can't check the 1.6.4 posted on the web, but I can confirm that this test 
> works as expected on the current 1.6 branch (which will become 1.6.5), so this 
> might have been fixed along the way.
> 
>  George.
> 
> 
> On May 27, 2013, at 07:05 , Hayato KUNIIE <kuni...@oita.email.ne.jp> wrote:
> 
>> Hello
>> 
>> I posted about this topic last week, but I gave only a little information
>> about the problem, so I am posting again with more details.
>> 
>> I built a Beowulf-type PC cluster (CentOS release 6.4) and am studying
>> MPI (Open MPI version 1.6.4). I tried the following sample, which uses
>> MPI_REDUCE (Fortran).
>> 
>> Then the following error occurred:
>> --------------------------------------
>> [bwslv01:30793] *** An error occurred in MPI_Reduce: the reduction
>> operation MPI_SUM is not defined on the MPI_INTEGER datatype
>> [bwslv01:30793] *** on communicator MPI_COMM_WORLD
>> [bwslv01:30793] *** MPI_ERR_OP: invalid reduce operation
>> [bwslv01:30793] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
>> --------------------------------------
>> All of the information is shown in the attached file err.log, and the source
>> file is attached as main.f.
>> 
>> This cluster system consists of one head node and two slave nodes. The home
>> directory on the head node is shared via NFS, and Open MPI is installed
>> separately on each node.
>> 
>> When I run this program on the head node only, it runs correctly and outputs
>> the result. But when I run it on a slave node only, the same error occurs.
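>> 
>> For reference, the two runs look roughly like this (./a.out and the -np value
>> are placeholders, not my exact commands):
>> 
>>   # head node only
>>   mpirun -np 2 ./a.out
>>   # slave node only
>>   mpirun --host bwslv01 -np 2 ./a.out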
>> 
>> Any ideas or advice would be appreciated.
>> 
>> Other information is included in the attached file.
>> The directory structure is as follows:
>> ompiReport
>> ├── head
>> │   ├── config.log            // Item 3 on help page
>> │   ├── ifocnfig.txt          // Item 8 on help page
>> │   ├── lstopo.txt            // Item 5 on help page
>> │   ├── PATH.txt              // Item 7 on help page
>> │   ├── LD_LIBRARY_PATH.txt   // Item 7 on help page
>> │   └── ompi_info_all.txt     // Item 4 on help page
>> ├── ompi_info_full.txt        // Item 6 on help page
>> ├── main.f                    // source file
>> ├── err.log                   // error message
>> ├── slv01
>> │   ├── config.log            // Item 3 on help page
>> │   ├── ifconfig.txt          // Item 8 on help page
>> │   ├── lstopo.txt            // Item 5 on help page
>> │   ├── PATH.txt              // Item 7 on help page
>> │   ├── LD_LIBRARY_PATH.txt   // Item 7 on help page
>> │   └── ompi_info_all.txt     // Item 4 on help page
>> └── slv02
>>     ├── config.log            // Item 3 on help page
>>     ├── ifconfig.txt          // Item 8 on help page
>>     ├── lstopo.txt            // Item 5 on help page
>>     ├── PATH.txt              // Item 7 on help page
>>     ├── LD_LIBRARY_PATH.txt   // Item 7 on help page
>>     └── ompi_info_all.txt     // Item 4 on help page
>> 
>> 3 directories, 13 files
>> 
>> Best regards
>> 
>> 
>> 
>> (2013/05/16 23:24), Jeff Squyres (jsquyres) wrote:
>>> (OFF LIST)
>>> 
>>> Let's figure this out off-list and post the final resolution back to the 
>>> list.
>>> 
>>> This is quite odd.
>>> 
>>> You launched this mpirun from a single node, right?  I'm trying to make 
>>> sure that you're doing non-interactive logins on the remote nodes to find 
>>> the ompi_info's, because sometimes there's a difference between paths that 
>>> are set for interactive and non-interactive logins.
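>>> 
>>> For example, something along these lines (assuming passwordless ssh to the 
>>> slave nodes; hostnames taken from the attached report) would show what a 
>>> non-interactive login actually picks up:
>>> 
>>>    ssh bwslv01 'echo $PATH; which ompi_info'
>>>    ssh bwslv02 'echo $PATH; which ompi_info'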
>>> 
>>> Can you send all the information listed here:
>>> 
>>>    http://www.open-mpi.org/community/help/
>>> 
>>> 
>>> 
>>> On May 16, 2013, at 9:53 AM, Hayato KUNIIE <kuni...@oita.email.ne.jp> wrote:
>>> 
>>>> The following is the result of running mpirun ompi_info across the three nodes.
>>>> 
>>>> The version is the same on all three nodes.
>>>> 
>>>> Per-node fields that differ (bwhead.clnet / bwslv01 / bwslv02):
>>>> 
>>>> Package: Open MPI root@bwhead.clnet Distribution / Open MPI root@bwslv01 Distribution / Open MPI root@bwslv02 Distribution
>>>> Configure host: bwhead.clnet / bwslv01 / bwslv02
>>>> Configured on: Wed May  8 20:38:14 JST 2013 / Wed May  8 20:56:45 JST 2013 / Wed May  8 20:56:29 JST 2013
>>>> Built host: bwhead.clnet / bwslv01 / bwslv02
>>>> Built on: Wed May  8 20:48:44 JST 2013 / Wed May  8 21:05:43 JST 2013 / Wed May  8 21:05:38 JST 2013
>>>> 
>>>> All remaining fields are identical on the three nodes:
>>>> 
>>>> Open MPI: 1.6.4
>>>> Open MPI SVN revision: r28081
>>>> Open MPI release date: Feb 19, 2013
>>>> Open RTE: 1.6.4
>>>> Open RTE SVN revision: r28081
>>>> Open RTE release date: Feb 19, 2013
>>>> OPAL: 1.6.4
>>>> OPAL SVN revision: r28081
>>>> OPAL release date: Feb 19, 2013
>>>> MPI API: 2.1
>>>> Ident string: 1.6.4
>>>> Prefix: /usr/local
>>>> Configured architecture: x86_64-unknown-linux-gnu
>>>> Configured by: root
>>>> Built by: root
>>>> C bindings: yes
>>>> C++ bindings: yes
>>>> Fortran77 bindings: yes (all)
>>>> Fortran90 bindings: yes
>>>> Fortran90 bindings size: small
>>>> C compiler: gcc
>>>> C compiler absolute: /usr/bin/gcc
>>>> C compiler family name: GNU
>>>> C compiler version: 4.4.7
>>>> C++ compiler: g++
>>>> C++ compiler absolute: /usr/bin/g++
>>>> Fortran77 compiler: gfortran
>>>> Fortran77 compiler abs: /usr/bin/gfortran
>>>> Fortran90 compiler: gfortran
>>>> Fortran90 compiler abs: /usr/bin/gfortran
>>>> C profiling: yes
>>>> C++ profiling: yes
>>>> Fortran77 profiling: yes
>>>> Fortran90 profiling: yes
>>>> C++ exceptions: no
>>>> Thread support: posix (MPI_THREAD_MULTIPLE: no, progress: no)
>>>> Sparse Groups: no
>>>> Internal debug support: no
>>>> MPI interface warnings: no
>>>> MPI parameter check: runtime
>>>> Memory profiling support: no
>>>> Memory debugging support: no
>>>> libltdl support: yes
>>>> Heterogeneous support: no
>>>> mpirun default --prefix: no
>>>> MPI I/O support: yes
>>>> MPI_WTIME support: gettimeofday
>>>> Symbol vis. support: yes
>>>> Host topology support: yes
>>>> MPI extensions: affinity example
>>>> FT Checkpoint support: no (checkpoint thread: no)
>>>> VampirTrace support: yes
>>>> MPI_MAX_PROCESSOR_NAME: 256
>>>> MPI_MAX_ERROR_STRING: 256
>>>> MPI_MAX_OBJECT_NAME: 64
>>>> MPI_MAX_INFO_KEY: 36
>>>> MPI_MAX_INFO_VAL: 256
>>>> MPI_MAX_PORT_NAME: 1024
>>>> MPI_MAX_DATAREP_STRING: 128
>>>> 
>>>> Each node also reports the same set of MCA components:
>>>> MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA memory: linux (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA paffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA carto: file (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA shmem: posix (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA maffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA timer: linux (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA installdirs: env (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA installdirs: config (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA sysinfo: linux (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA hwloc: hwloc132 (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA dpm: orte (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA allocator: basic (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA coll: basic (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA coll: inter (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA coll: self (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA coll: sm (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA coll: sync (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA coll: tuned (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA io: romio (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA mpool: fake (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA mpool: sm (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA pml: bfo (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA pml: csum (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA pml: v (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA bml: r2 (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA rcache: vma (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA btl: self (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA btl: sm (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA btl: tcp (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA topo: unity (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA osc: rdma (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA iof: hnp (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA iof: orted (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA iof: tool (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA oob: tcp (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA odls: default (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ras: cm (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ras: loadleveler (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ras: slurm (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA rmaps: topo (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA rml: oob (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA routed: binomial (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA routed: cm (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA routed: direct (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA routed: linear (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA routed: radix (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA routed: slave (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA plm: rsh (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA plm: slurm (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA filem: rsh (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA errmgr: default (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ess: env (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ess: hnp (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ess: singleton (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ess: slave (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ess: slurm (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ess: slurmd (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA ess: tool (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA grpcomm: hier (MCA v2.0, API v2.0, Component v1.6.4)
>>>> MCA notifier: command (MCA v2.0, API v1.0, Component v1.6.4)
>>>> MCA notifier: syslog (MCA v2.0, API v1.0, Component v1.6.4)
>>>> 
>>>> 
>>>> 
>>>> 
>>>> (2013/05/16 9:12), Jeff Squyres (jsquyres) wrote:
>>>>> I am unable to replicate your error -- in 1.6.4, MPI_SUM is properly defined 
>>>>> on MPI_INTEGER for MPI_Reduce.
>>>>> 
>>>>> Are you absolutely sure you're using OMPI 1.6.4 on all nodes?
>>>>> 
>>>>> Try this:
>>>>> 
>>>>>    mpirun ... ompi_info
>>>>> 
>>>>> (insert whatever hostfile and -np value you're using for your Fortran 
>>>>> test) and see what is reported as the OMPI version on the other nodes.
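>>>>> 
>>>>> For example, something like this (the hostfile name and -np value here are 
>>>>> just placeholders):
>>>>> 
>>>>>    mpirun --hostfile my_hosts -np 3 ompi_info | grep "Open MPI:"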
>>>>> 
>>>>> 
>>>>> On May 15, 2013, at 7:46 AM, Hayato KUNIIE <kuni...@oita.email.ne.jp> 
>>>>> wrote:
>>>>> 
>>>>>> I am using version 1.6.4 on all nodes.
>>>>>> 
>>>>>> (2013/05/15 7:10), Jeff Squyres (jsquyres) wrote:
>>>>>>> Are you sure that you have exactly the same version of Open MPI on all 
>>>>>>> your nodes?
>>>>>>> 
>>>>>>> 
>>>>>>> On May 14, 2013, at 11:39 AM, Hayato KUNIIE <kuni...@oita.email.ne.jp> 
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> Hello I'm kuni255
>>>>>>>> 
>>>>>>>> I built a Beowulf-type PC cluster (CentOS release 6.4) and am studying
>>>>>>>> MPI (Open MPI version 1.6.4). I tried the following sample, which uses
>>>>>>>> MPI_REDUCE.
>>>>>>>> 
>>>>>>>> Then an error occurred.
>>>>>>>> 
>>>>>>>> This cluster system consists of one head node and two slave nodes. The
>>>>>>>> home directory on the head node is shared via NFS, and Open MPI is
>>>>>>>> installed separately on each node.
>>>>>>>> 
>>>>>>>> When I run this program on the head node only, it runs correctly and
>>>>>>>> outputs the result. But when I run it on a slave node only, the same
>>>>>>>> error occurs.
>>>>>>>> 
>>>>>>>> Any ideas would be appreciated. :)
>>>>>>>> 
>>>>>>>> Error message
>>>>>>>> [bwslv01:30793] *** An error occurred in MPI_Reduce: the reduction
>>>>>>>> operation MPI_SUM is not defined on the MPI_INTEGER datatype
>>>>>>>> [bwslv01:30793] *** on communicator MPI_COMM_WORLD
>>>>>>>> [bwslv01:30793] *** MPI_ERR_OP: invalid reduce operation
>>>>>>>> [bwslv01:30793] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
>>>>>>>> --------------------------------------------------------------------------
>>>>>>>> mpirun has exited due to process rank 1 with PID 30793 on
>>>>>>>> node bwslv01 exiting improperly. There are two reasons this could 
>>>>>>>> occur:
>>>>>>>> 
>>>>>>>> 1. this process did not call "init" before exiting, but others in
>>>>>>>> the job did. This can cause a job to hang indefinitely while it waits
>>>>>>>> for all processes to call "init". By rule, if one process calls "init",
>>>>>>>> then ALL processes must call "init" prior to termination.
>>>>>>>> 
>>>>>>>> 2. this process called "init", but exited without calling "finalize".
>>>>>>>> By rule, all processes that call "init" MUST call "finalize" prior to
>>>>>>>> exiting or it will be considered an "abnormal termination"
>>>>>>>> 
>>>>>>>> This may have caused other processes in the application to be
>>>>>>>> terminated by signals sent by mpirun (as reported here).
>>>>>>>> --------------------------------------------------------------------------
>>>>>>>> [bwhead.clnet:02147] 1 more process has sent help message
>>>>>>>> help-mpi-errors.txt / mpi_errors_are_fatal
>>>>>>>> [bwhead.clnet:02147] Set MCA parameter "orte_base_help_aggregate" to 0
>>>>>>>> to see all help / error messages
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Fortran90 source code (main.f, fixed form):
>>>>>>>>       include 'mpif.h'
>>>>>>>>       parameter(nmax=12)
>>>>>>>>       integer n(nmax)
>>>>>>>> 
>>>>>>>>       call mpi_init(ierr)
>>>>>>>>       call mpi_comm_size(MPI_COMM_WORLD, isize, ierr)
>>>>>>>>       call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)
>>>>>>>> c     per-rank index range (computed but not used below)
>>>>>>>>       ista=irank*(nmax/isize) + 1
>>>>>>>>       iend=ista+(nmax/isize-1)
>>>>>>>> c     every rank sums 1..nmax locally
>>>>>>>>       isum=0
>>>>>>>>       do i=1,nmax
>>>>>>>>         n(i) = i
>>>>>>>>         isum = isum + n(i)
>>>>>>>>       end do
>>>>>>>> c     reduce the local sums onto rank 0
>>>>>>>>       call mpi_reduce(isum, itmp, 1, MPI_INTEGER, MPI_SUM,
>>>>>>>>      &                0, MPI_COMM_WORLD, ierr)
>>>>>>>> 
>>>>>>>>       if (irank == 0) then
>>>>>>>>         isum=itmp
>>>>>>>>         WRITE(*,*) isum
>>>>>>>>       endif
>>>>>>>>       call mpi_finalize(ierr)
>>>>>>>>       end
>>> 
>> 
>> 
>> <ompiReport.tar.xz>
> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/

