Re: [OMPI users] Configuring openib on openmpi 1.8.1

2014-07-30 Thread Chaitra Kumar
Yes, both machines have the same rpm's installed.
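
For reference, I compared the two machines with something along these lines
(the package-name patterns are just the usual verbs/RDMA ones; adjust them
for your distribution), then diffed the resulting lists:

    rpm -qa | grep -i -E 'libibverbs|librdmacm|ofed' | sort > /tmp/ib-rpms.txt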

When I added --level 9 to ompi_info, it listed many openib components.
Thanks.

Should I add any other flag to 'mpirun --mca btl self,sm,openib' to make
the openib components visible to mpirun?
I set PATH and LD_LIBRARY_PATH before running these commands, so it is not
an environment issue.
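
For completeness, the environment is set roughly like this before running
(bin/ and lib/ are the usual subdirectories under the install prefix
/home/padmanac/openmpi181 that ompi_info reports):

    export PATH=/home/padmanac/openmpi181/bin:$PATH
    export LD_LIBRARY_PATH=/home/padmanac/openmpi181/lib:$LD_LIBRARY_PATH
    which mpirun      # should resolve to .../openmpi181/bin/mpirun
    which ompi_info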


On Wed, Jul 30, 2014 at 7:26 PM, Ralph Castain  wrote:

> Does "polaris" have the same rpm's as the host where you checked in your
> prior email?
>
> Try adding "--level 9" to your ompi_info command line - the MCA param
> system changed somewhat and the params may just not be getting shown by
> default
>
>
> On Jul 30, 2014, at 2:35 AM, Chaitra Kumar 
> wrote:
>
> The command 'ompi_info --param btl openib' doesn't return any openib
> component.
>
> When I try to use a command like 'mpirun --mca btl self,sm,openib ...'
> it throws an error:
> --
> A requested component was not found, or was unable to be opened.  This
> means that this component is either not installed or is unable to be
> used on your system (e.g., sometimes this means that shared libraries
> that the component requires are unable to be found/loaded).  Note that
> Open MPI stopped checking at the first component that it did not find.
>
> Host:  polaris
> Framework: btl
> Component: openib
> --
>
> Regards,
> Chaitra
>
>
>
>
> On Wed, Jul 30, 2014 at 2:40 PM, Ralph Castain  wrote:
>
>> According to your output, you *do* have the IB components available:
>>
>>  MCA btl: openib (MCA v2.0, API v2.0, Component v1.8.1)
>>
>>
>> What made you think that you don't have them?
>>
>>
>> On Jul 30, 2014, at 12:10 AM, Chaitra Kumar 
>> wrote:
>>
>> Hi Howard,
>>
>> The attached file "config.out" has the output of configure.
>>
>> Output of ompi_info command:
>>  Package: Open MPI padmanac@polaris-4 Distribution
>> Open MPI: 1.8.1
>>   Open MPI repo revision: r31483
>>Open MPI release date: Apr 22, 2014
>> Open RTE: 1.8.1
>>   Open RTE repo revision: r31483
>>Open RTE release date: Apr 22, 2014
>> OPAL: 1.8.1
>>   OPAL repo revision: r31483
>>OPAL release date: Apr 22, 2014
>>  MPI API: 3.0
>> Ident string: 1.8.1
>>   Prefix: /home/padmanac/openmpi181
>>  Configured architecture: x86_64-unknown-linux-gnu
>>   Configure host: polaris-4
>>Configured by: padmanac
>>Configured on: Tue Jul 29 11:41:12 PDT 2014
>>   Configure host: polaris-4
>> Built by: padmanac
>> Built on: Tue Jul 29 11:57:53 PDT 2014
>>   Built host: polaris-4
>>   C bindings: yes
>> C++ bindings: yes
>>  Fort mpif.h: yes (all)
>> Fort use mpi: yes (limited: overloading)
>>Fort use mpi size: deprecated-ompi-info-value
>> Fort use mpi_f08: no
>>  Fort mpi_f08 compliance: The mpi_f08 module was not built
>>   Fort mpi_f08 subarrays: no
>>Java bindings: no
>>   Wrapper compiler rpath: runpath
>>   C compiler: gcc
>>  C compiler absolute: /opt/gcc/bin/gcc
>>   C compiler family name: GNU
>>   C compiler version: 4.8.2
>> C++ compiler: g++
>>C++ compiler absolute: /opt/gcc/bin/g++
>>Fort compiler: gfortran
>>Fort compiler abs: /opt/gcc/bin/gfortran
>>  Fort ignore TKR: no
>>Fort 08 assumed shape: no
>>   Fort optional args: no
>>   Fort BIND(C) (all): no
>>   Fort ISO_C_BINDING: no
>>  Fort SUBROUTINE BIND(C): no
>>Fort TYPE,BIND(C): no
>>  Fort T,BIND(C,name="a"): no
>> Fort PRIVATE: no
>>   Fort PROTECTED: no
>>Fort ABSTRACT: no
>>Fort ASYNCHRONOUS: no
>>   Fort PROCEDURE: no
>>  Fort f08 using wrappers: no
>>  C profiling: yes
>>C++ profiling: yes
>>Fort mpif.h profiling: yes
>>   Fort use mpi profiling: yes
>>Fort use mpi_f08 prof: no
>>   C++ exceptions: no
>>   Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support:
>> yes,
>>   OMPI progress: no, ORTE progress: yes, Event
>> lib:
>>   yes)
>>Sparse Groups: no
>>   Internal debug support: no
>>   MPI interface warnings: yes
>>  MPI parameter check: runtime
>> Memory profiling support: no
>> Memory debugging support: no
>>  libltdl support: yes
>>Heterogeneous support: no
>>  mpirun default --prefix: no
>>  MPI I/O support: yes
>>MPI_WTIME support: gettimeofday
>>  Symbol vis. support: yes
>>Host topology support: yes
>>   MPI extensions:
>>FT Checkpoint support: no (checkpoint thread: no)
>>C/R Enabled Debugging: no
>>  VampirTrace support: yes
>>   MPI_MAX

Re: [OMPI users] Configuring openib on openmpi 1.8.1

2014-07-30 Thread Gus Correa

Also, just in case, make sure the mpirun, ompi_info,
and other Open MPI (OMPI) commands you are using are the ones
that you installed.
I.e., not something from an RPM, or that may have come bundled with a 
compiler or other software.

Those may not have InfiniBand enabled, and in any case should not be mixed.
The OMPI implementations should be the same on all machines as well.
Running "which mpirun" on those machines may help.
These user environment problems often cause confusion.
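
For example, a loop along these lines (the host names are placeholders)
shows whether every node resolves the same install and version:

    for h in node1 node2; do
        ssh $h 'which mpirun; mpirun --version'
    done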

My two cents,
Gus Correa

On 07/30/2014 09:56 AM, Ralph Castain wrote:

Does "polaris" have the same rpm's as the host where you checked in your
prior email?

Try adding "--level 9" to your ompi_info command line - the MCA param
system changed somewhat and the params may just not be getting shown by
default


On Jul 30, 2014, at 2:35 AM, Chaitra Kumar <chaitragku...@gmail.com> wrote:


The command 'ompi_info --param btl openib' doesn't return any openib
component.

When I try to use a command like 'mpirun --mca btl self,sm,openib ...'
it throws an error:
--
A requested component was not found, or was unable to be opened.  This
means that this component is either not installed or is unable to be
used on your system (e.g., sometimes this means that shared libraries
that the component requires are unable to be found/loaded).  Note that
Open MPI stopped checking at the first component that it did not find.

Host:  polaris
Framework: btl
Component: openib
--

Regards,
Chaitra




On Wed, Jul 30, 2014 at 2:40 PM, Ralph Castain <r...@open-mpi.org> wrote:

According to your output, you *do* have the IB components available:


 MCA btl: openib (MCA v2.0, API v2.0, Component v1.8.1)


What made you think that you don't have them?


On Jul 30, 2014, at 12:10 AM, Chaitra Kumar <chaitragku...@gmail.com> wrote:


Hi Howard,

The attached file "config.out" has the output of configure.

Output of ompi_info command:
 Package: Open MPI padmanac@polaris-4 Distribution
Open MPI: 1.8.1
  Open MPI repo revision: r31483
   Open MPI release date: Apr 22, 2014
Open RTE: 1.8.1
  Open RTE repo revision: r31483
   Open RTE release date: Apr 22, 2014
OPAL: 1.8.1
  OPAL repo revision: r31483
   OPAL release date: Apr 22, 2014
 MPI API: 3.0
Ident string: 1.8.1
  Prefix: /home/padmanac/openmpi181
 Configured architecture: x86_64-unknown-linux-gnu
  Configure host: polaris-4
   Configured by: padmanac
   Configured on: Tue Jul 29 11:41:12 PDT 2014
  Configure host: polaris-4
Built by: padmanac
Built on: Tue Jul 29 11:57:53 PDT 2014
  Built host: polaris-4
  C bindings: yes
C++ bindings: yes
 Fort mpif.h: yes (all)
Fort use mpi: yes (limited: overloading)
   Fort use mpi size: deprecated-ompi-info-value
Fort use mpi_f08: no
 Fort mpi_f08 compliance: The mpi_f08 module was not built
  Fort mpi_f08 subarrays: no
   Java bindings: no
  Wrapper compiler rpath: runpath
  C compiler: gcc
 C compiler absolute: /opt/gcc/bin/gcc
  C compiler family name: GNU
  C compiler version: 4.8.2
C++ compiler: g++
   C++ compiler absolute: /opt/gcc/bin/g++
   Fort compiler: gfortran
   Fort compiler abs: /opt/gcc/bin/gfortran
 Fort ignore TKR: no
   Fort 08 assumed shape: no
  Fort optional args: no
  Fort BIND(C) (all): no
  Fort ISO_C_BINDING: no
 Fort SUBROUTINE BIND(C): no
   Fort TYPE,BIND(C): no
 Fort T,BIND(C,name="a"): no
Fort PRIVATE: no
  Fort PROTECTED: no
   Fort ABSTRACT: no
   Fort ASYNCHRONOUS: no
  Fort PROCEDURE: no
 Fort f08 using wrappers: no
 C profiling: yes
   C++ profiling: yes
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: no
  C++ exceptions: no
  Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes,
                  OMPI progress: no, ORTE progress: yes, Event lib: yes)
   Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: yes
 MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
 libltdl support: yes
   Heterogeneous support: no
 mpirun default --prefix: no
 MPI I/O support: yes

Re: [OMPI users] Configuring openib on openmpi 1.8.1

2014-07-30 Thread Ralph Castain
Does "polaris" have the same rpm's as the host where you checked in your prior 
email?

Try adding "--level 9" to your ompi_info command line - the MCA param system 
changed somewhat and the params may just not be getting shown by default
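
In other words, something like:

    ompi_info --param btl openib --level 9

Grepping the full parameter listing is another option if that still shows
nothing:

    ompi_info --all --level 9 | grep btl_openib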


On Jul 30, 2014, at 2:35 AM, Chaitra Kumar  wrote:

> The command 'ompi_info --param btl openib' doesn't return any openib
> component.
> 
> When I try to use a command like 'mpirun --mca btl self,sm,openib ...'
> it throws an error: 
> --
> A requested component was not found, or was unable to be opened.  This
> means that this component is either not installed or is unable to be
> used on your system (e.g., sometimes this means that shared libraries
> that the component requires are unable to be found/loaded).  Note that
> Open MPI stopped checking at the first component that it did not find.
> 
> Host:  polaris
> Framework: btl
> Component: openib
> --
> 
> Regards,
> Chaitra
> 
> 
> 
> 
> On Wed, Jul 30, 2014 at 2:40 PM, Ralph Castain  wrote:
> According to your output, you *do* have the IB components available:
> 
>>  MCA btl: openib (MCA v2.0, API v2.0, Component v1.8.1)
> 
> What made you think that you don't have them?
> 
> 
> On Jul 30, 2014, at 12:10 AM, Chaitra Kumar  wrote:
> 
>> Hi Howard,
>> 
>> The attached file "config.out" has the output of configure.
>> 
>> Output of ompi_info command:
>>  Package: Open MPI padmanac@polaris-4 Distribution
>> Open MPI: 1.8.1
>>   Open MPI repo revision: r31483
>>Open MPI release date: Apr 22, 2014
>> Open RTE: 1.8.1
>>   Open RTE repo revision: r31483
>>Open RTE release date: Apr 22, 2014
>> OPAL: 1.8.1
>>   OPAL repo revision: r31483
>>OPAL release date: Apr 22, 2014
>>  MPI API: 3.0
>> Ident string: 1.8.1
>>   Prefix: /home/padmanac/openmpi181
>>  Configured architecture: x86_64-unknown-linux-gnu
>>   Configure host: polaris-4
>>Configured by: padmanac
>>Configured on: Tue Jul 29 11:41:12 PDT 2014
>>   Configure host: polaris-4
>> Built by: padmanac
>> Built on: Tue Jul 29 11:57:53 PDT 2014
>>   Built host: polaris-4
>>   C bindings: yes
>> C++ bindings: yes
>>  Fort mpif.h: yes (all)
>> Fort use mpi: yes (limited: overloading)
>>Fort use mpi size: deprecated-ompi-info-value
>> Fort use mpi_f08: no
>>  Fort mpi_f08 compliance: The mpi_f08 module was not built
>>   Fort mpi_f08 subarrays: no
>>Java bindings: no
>>   Wrapper compiler rpath: runpath
>>   C compiler: gcc
>>  C compiler absolute: /opt/gcc/bin/gcc
>>   C compiler family name: GNU
>>   C compiler version: 4.8.2
>> C++ compiler: g++
>>C++ compiler absolute: /opt/gcc/bin/g++
>>Fort compiler: gfortran
>>Fort compiler abs: /opt/gcc/bin/gfortran
>>  Fort ignore TKR: no
>>Fort 08 assumed shape: no
>>   Fort optional args: no
>>   Fort BIND(C) (all): no
>>   Fort ISO_C_BINDING: no
>>  Fort SUBROUTINE BIND(C): no
>>Fort TYPE,BIND(C): no
>>  Fort T,BIND(C,name="a"): no
>> Fort PRIVATE: no
>>   Fort PROTECTED: no
>>Fort ABSTRACT: no
>>Fort ASYNCHRONOUS: no
>>   Fort PROCEDURE: no
>>  Fort f08 using wrappers: no
>>  C profiling: yes
>>C++ profiling: yes
>>Fort mpif.h profiling: yes
>>   Fort use mpi profiling: yes
>>Fort use mpi_f08 prof: no
>>   C++ exceptions: no
>>   Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes,
>>   OMPI progress: no, ORTE progress: yes, Event lib:
>>   yes)
>>Sparse Groups: no
>>   Internal debug support: no
>>   MPI interface warnings: yes
>>  MPI parameter check: runtime
>> Memory profiling support: no
>> Memory debugging support: no
>>  libltdl support: yes
>>Heterogeneous support: no
>>  mpirun default --prefix: no
>>  MPI I/O support: yes
>>MPI_WTIME support: gettimeofday
>>  Symbol vis. support: yes
>>Host topology support: yes
>>   MPI extensions:
>>FT Checkpoint support: no (checkpoint thread: no)
>>C/R Enabled Debugging: no
>>  VampirTrace support: yes
>>   MPI_MAX_PROCESSOR_NAME: 256
>> MPI_MAX_ERROR_STRING: 256
>>  MPI_MAX_OBJECT_NAME: 64
>> MPI_MAX_INFO_KEY: 36
>> MPI_MAX_INFO_VAL: 256
>>MPI_MAX_PORT_NAME: 1024
>>   MPI_MAX_DATAREP_STRING: 128
>>MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.8.1)
>> MCA compress: bzip (MCA v2.0, API v2.0, Component v1.8.1)
>> MCA compress: gzip (MCA v2.0, AP

Re: [OMPI users] Configuring openib on openmpi 1.8.1

2014-07-30 Thread Chaitra Kumar
The command 'ompi_info --param btl openib' doesn't return any openib
component.

When I try to use a command like 'mpirun --mca btl self,sm,openib ...'
it throws an error:
--
A requested component was not found, or was unable to be opened.  This
means that this component is either not installed or is unable to be
used on your system (e.g., sometimes this means that shared libraries
that the component requires are unable to be found/loaded).  Note that
Open MPI stopped checking at the first component that it did not find.

Host:  polaris
Framework: btl
Component: openib
--
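
For reference, the full invocation is along these lines (./a.out and the
host list are placeholders); adding the standard btl_base_verbose parameter
should make the BTL selection print what it finds and why it skips things:

    mpirun --mca btl self,sm,openib --mca btl_base_verbose 100 \
           -np 2 --host node1,node2 ./a.out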

Regards,
Chaitra




On Wed, Jul 30, 2014 at 2:40 PM, Ralph Castain  wrote:

> According to your output, you *do* have the IB components available:
>
>  MCA btl: openib (MCA v2.0, API v2.0, Component v1.8.1)
>
>
> What made you think that you don't have them?
>
>
> On Jul 30, 2014, at 12:10 AM, Chaitra Kumar 
> wrote:
>
> Hi Howard,
>
> The attached file "config.out" has the output of configure.
>
> Output of ompi_info command:
>  Package: Open MPI padmanac@polaris-4 Distribution
> Open MPI: 1.8.1
>   Open MPI repo revision: r31483
>Open MPI release date: Apr 22, 2014
> Open RTE: 1.8.1
>   Open RTE repo revision: r31483
>Open RTE release date: Apr 22, 2014
> OPAL: 1.8.1
>   OPAL repo revision: r31483
>OPAL release date: Apr 22, 2014
>  MPI API: 3.0
> Ident string: 1.8.1
>   Prefix: /home/padmanac/openmpi181
>  Configured architecture: x86_64-unknown-linux-gnu
>   Configure host: polaris-4
>Configured by: padmanac
>Configured on: Tue Jul 29 11:41:12 PDT 2014
>   Configure host: polaris-4
> Built by: padmanac
> Built on: Tue Jul 29 11:57:53 PDT 2014
>   Built host: polaris-4
>   C bindings: yes
> C++ bindings: yes
>  Fort mpif.h: yes (all)
> Fort use mpi: yes (limited: overloading)
>Fort use mpi size: deprecated-ompi-info-value
> Fort use mpi_f08: no
>  Fort mpi_f08 compliance: The mpi_f08 module was not built
>   Fort mpi_f08 subarrays: no
>Java bindings: no
>   Wrapper compiler rpath: runpath
>   C compiler: gcc
>  C compiler absolute: /opt/gcc/bin/gcc
>   C compiler family name: GNU
>   C compiler version: 4.8.2
> C++ compiler: g++
>C++ compiler absolute: /opt/gcc/bin/g++
>Fort compiler: gfortran
>Fort compiler abs: /opt/gcc/bin/gfortran
>  Fort ignore TKR: no
>Fort 08 assumed shape: no
>   Fort optional args: no
>   Fort BIND(C) (all): no
>   Fort ISO_C_BINDING: no
>  Fort SUBROUTINE BIND(C): no
>Fort TYPE,BIND(C): no
>  Fort T,BIND(C,name="a"): no
> Fort PRIVATE: no
>   Fort PROTECTED: no
>Fort ABSTRACT: no
>Fort ASYNCHRONOUS: no
>   Fort PROCEDURE: no
>  Fort f08 using wrappers: no
>  C profiling: yes
>C++ profiling: yes
>Fort mpif.h profiling: yes
>   Fort use mpi profiling: yes
>Fort use mpi_f08 prof: no
>   C++ exceptions: no
>   Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support:
> yes,
>   OMPI progress: no, ORTE progress: yes, Event lib:
>   yes)
>Sparse Groups: no
>   Internal debug support: no
>   MPI interface warnings: yes
>  MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>  libltdl support: yes
>Heterogeneous support: no
>  mpirun default --prefix: no
>  MPI I/O support: yes
>MPI_WTIME support: gettimeofday
>  Symbol vis. support: yes
>Host topology support: yes
>   MPI extensions:
>FT Checkpoint support: no (checkpoint thread: no)
>C/R Enabled Debugging: no
>  VampirTrace support: yes
>   MPI_MAX_PROCESSOR_NAME: 256
> MPI_MAX_ERROR_STRING: 256
>  MPI_MAX_OBJECT_NAME: 64
> MPI_MAX_INFO_KEY: 36
> MPI_MAX_INFO_VAL: 256
>MPI_MAX_PORT_NAME: 1024
>   MPI_MAX_DATAREP_STRING: 128
>MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.8.1)
> MCA compress: bzip (MCA v2.0, API v2.0, Component v1.8.1)
> MCA compress: gzip (MCA v2.0, API v2.0, Component v1.8.1)
>  MCA crs: none (MCA v2.0, API v2.0, Component v1.8.1)
>   MCA db: hash (MCA v2.0, API v1.0, Component v1.8.1)
>   MCA db: print (MCA v2.0, API v1.0, Component v1.8.1)
>MCA event: libevent2021 (MCA v2.0, API v2.0, Component
> v1.8.1)
>MCA hwloc: hwloc172 (MCA v2.0, API v2.0, Component v1.8.1)
>   MCA if: posix_i

Re: [OMPI users] Configuring openib on openmpi 1.8.1

2014-07-30 Thread Ralph Castain
According to your output, you *do* have the IB components available:

>  MCA btl: openib (MCA v2.0, API v2.0, Component v1.8.1)

What made you think that you don't have them?
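
A quick way to double-check is to grep the ompi_info listing and to look for
the plugin file itself (the lib/openmpi path below assumes the default
dynamic-component layout under the install prefix):

    ompi_info | grep openib
    ls /home/padmanac/openmpi181/lib/openmpi/mca_btl_openib*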


On Jul 30, 2014, at 12:10 AM, Chaitra Kumar  wrote:

> Hi Howard,
> 
> The attached file "config.out" has the output of configure.
> 
> Output of ompi_info command:
>  Package: Open MPI padmanac@polaris-4 Distribution
> Open MPI: 1.8.1
>   Open MPI repo revision: r31483
>Open MPI release date: Apr 22, 2014
> Open RTE: 1.8.1
>   Open RTE repo revision: r31483
>Open RTE release date: Apr 22, 2014
> OPAL: 1.8.1
>   OPAL repo revision: r31483
>OPAL release date: Apr 22, 2014
>  MPI API: 3.0
> Ident string: 1.8.1
>   Prefix: /home/padmanac/openmpi181
>  Configured architecture: x86_64-unknown-linux-gnu
>   Configure host: polaris-4
>Configured by: padmanac
>Configured on: Tue Jul 29 11:41:12 PDT 2014
>   Configure host: polaris-4
> Built by: padmanac
> Built on: Tue Jul 29 11:57:53 PDT 2014
>   Built host: polaris-4
>   C bindings: yes
> C++ bindings: yes
>  Fort mpif.h: yes (all)
> Fort use mpi: yes (limited: overloading)
>Fort use mpi size: deprecated-ompi-info-value
> Fort use mpi_f08: no
>  Fort mpi_f08 compliance: The mpi_f08 module was not built
>   Fort mpi_f08 subarrays: no
>Java bindings: no
>   Wrapper compiler rpath: runpath
>   C compiler: gcc
>  C compiler absolute: /opt/gcc/bin/gcc
>   C compiler family name: GNU
>   C compiler version: 4.8.2
> C++ compiler: g++
>C++ compiler absolute: /opt/gcc/bin/g++
>Fort compiler: gfortran
>Fort compiler abs: /opt/gcc/bin/gfortran
>  Fort ignore TKR: no
>Fort 08 assumed shape: no
>   Fort optional args: no
>   Fort BIND(C) (all): no
>   Fort ISO_C_BINDING: no
>  Fort SUBROUTINE BIND(C): no
>Fort TYPE,BIND(C): no
>  Fort T,BIND(C,name="a"): no
> Fort PRIVATE: no
>   Fort PROTECTED: no
>Fort ABSTRACT: no
>Fort ASYNCHRONOUS: no
>   Fort PROCEDURE: no
>  Fort f08 using wrappers: no
>  C profiling: yes
>C++ profiling: yes
>Fort mpif.h profiling: yes
>   Fort use mpi profiling: yes
>Fort use mpi_f08 prof: no
>   C++ exceptions: no
>   Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes,
>   OMPI progress: no, ORTE progress: yes, Event lib:
>   yes)
>Sparse Groups: no
>   Internal debug support: no
>   MPI interface warnings: yes
>  MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>  libltdl support: yes
>Heterogeneous support: no
>  mpirun default --prefix: no
>  MPI I/O support: yes
>MPI_WTIME support: gettimeofday
>  Symbol vis. support: yes
>Host topology support: yes
>   MPI extensions:
>FT Checkpoint support: no (checkpoint thread: no)
>C/R Enabled Debugging: no
>  VampirTrace support: yes
>   MPI_MAX_PROCESSOR_NAME: 256
> MPI_MAX_ERROR_STRING: 256
>  MPI_MAX_OBJECT_NAME: 64
> MPI_MAX_INFO_KEY: 36
> MPI_MAX_INFO_VAL: 256
>MPI_MAX_PORT_NAME: 1024
>   MPI_MAX_DATAREP_STRING: 128
>MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.8.1)
> MCA compress: bzip (MCA v2.0, API v2.0, Component v1.8.1)
> MCA compress: gzip (MCA v2.0, API v2.0, Component v1.8.1)
>  MCA crs: none (MCA v2.0, API v2.0, Component v1.8.1)
>   MCA db: hash (MCA v2.0, API v1.0, Component v1.8.1)
>   MCA db: print (MCA v2.0, API v1.0, Component v1.8.1)
>MCA event: libevent2021 (MCA v2.0, API v2.0, Component v1.8.1)
>MCA hwloc: hwloc172 (MCA v2.0, API v2.0, Component v1.8.1)
>   MCA if: posix_ipv4 (MCA v2.0, API v2.0, Component v1.8.1)
>   MCA if: linux_ipv6 (MCA v2.0, API v2.0, Component v1.8.1)
>  MCA installdirs: env (MCA v2.0, API v2.0, Component v1.8.1)
>  MCA installdirs: config (MCA v2.0, API v2.0, Component v1.8.1)
>   MCA memory: linux (MCA v2.0, API v2.0, Component v1.8.1)
>MCA pstat: linux (MCA v2.0, API v2.0, Component v1.8.1)
>  MCA sec: basic (MCA v2.0, API v1.0, Component v1.8.1)
>MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.8.1)
>MCA shmem: posix (MCA v2.0, API v2.0, Component v1.8.1)
>MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.8.1)
>MCA timer: linux (MCA v2.0, API v2.0, Component v1.8.1)
>  MCA dfs: app (MCA v2.0, API v1

Re: [OMPI users] Configuring openib on openmpi 1.8.1

2014-07-30 Thread Chaitra Kumar
Hi Howard,

The attached file "config.out" has the output of configure.

Output of ompi_info command:
 Package: Open MPI padmanac@polaris-4 Distribution
Open MPI: 1.8.1
  Open MPI repo revision: r31483
   Open MPI release date: Apr 22, 2014
Open RTE: 1.8.1
  Open RTE repo revision: r31483
   Open RTE release date: Apr 22, 2014
OPAL: 1.8.1
  OPAL repo revision: r31483
   OPAL release date: Apr 22, 2014
 MPI API: 3.0
Ident string: 1.8.1
  Prefix: /home/padmanac/openmpi181
 Configured architecture: x86_64-unknown-linux-gnu
  Configure host: polaris-4
   Configured by: padmanac
   Configured on: Tue Jul 29 11:41:12 PDT 2014
  Configure host: polaris-4
Built by: padmanac
Built on: Tue Jul 29 11:57:53 PDT 2014
  Built host: polaris-4
  C bindings: yes
C++ bindings: yes
 Fort mpif.h: yes (all)
Fort use mpi: yes (limited: overloading)
   Fort use mpi size: deprecated-ompi-info-value
Fort use mpi_f08: no
 Fort mpi_f08 compliance: The mpi_f08 module was not built
  Fort mpi_f08 subarrays: no
   Java bindings: no
  Wrapper compiler rpath: runpath
  C compiler: gcc
 C compiler absolute: /opt/gcc/bin/gcc
  C compiler family name: GNU
  C compiler version: 4.8.2
C++ compiler: g++
   C++ compiler absolute: /opt/gcc/bin/g++
   Fort compiler: gfortran
   Fort compiler abs: /opt/gcc/bin/gfortran
 Fort ignore TKR: no
   Fort 08 assumed shape: no
  Fort optional args: no
  Fort BIND(C) (all): no
  Fort ISO_C_BINDING: no
 Fort SUBROUTINE BIND(C): no
   Fort TYPE,BIND(C): no
 Fort T,BIND(C,name="a"): no
Fort PRIVATE: no
  Fort PROTECTED: no
   Fort ABSTRACT: no
   Fort ASYNCHRONOUS: no
  Fort PROCEDURE: no
 Fort f08 using wrappers: no
 C profiling: yes
   C++ profiling: yes
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: no
  C++ exceptions: no
  Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes,
                  OMPI progress: no, ORTE progress: yes, Event lib: yes)
   Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: yes
 MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
 libltdl support: yes
   Heterogeneous support: no
 mpirun default --prefix: no
 MPI I/O support: yes
   MPI_WTIME support: gettimeofday
 Symbol vis. support: yes
   Host topology support: yes
  MPI extensions:
   FT Checkpoint support: no (checkpoint thread: no)
   C/R Enabled Debugging: no
 VampirTrace support: yes
  MPI_MAX_PROCESSOR_NAME: 256
MPI_MAX_ERROR_STRING: 256
 MPI_MAX_OBJECT_NAME: 64
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
   MPI_MAX_PORT_NAME: 1024
  MPI_MAX_DATAREP_STRING: 128
   MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.8.1)
MCA compress: bzip (MCA v2.0, API v2.0, Component v1.8.1)
MCA compress: gzip (MCA v2.0, API v2.0, Component v1.8.1)
 MCA crs: none (MCA v2.0, API v2.0, Component v1.8.1)
  MCA db: hash (MCA v2.0, API v1.0, Component v1.8.1)
  MCA db: print (MCA v2.0, API v1.0, Component v1.8.1)
   MCA event: libevent2021 (MCA v2.0, API v2.0, Component v1.8.1)
   MCA hwloc: hwloc172 (MCA v2.0, API v2.0, Component v1.8.1)
  MCA if: posix_ipv4 (MCA v2.0, API v2.0, Component v1.8.1)
  MCA if: linux_ipv6 (MCA v2.0, API v2.0, Component v1.8.1)
 MCA installdirs: env (MCA v2.0, API v2.0, Component v1.8.1)
 MCA installdirs: config (MCA v2.0, API v2.0, Component v1.8.1)
  MCA memory: linux (MCA v2.0, API v2.0, Component v1.8.1)
   MCA pstat: linux (MCA v2.0, API v2.0, Component v1.8.1)
 MCA sec: basic (MCA v2.0, API v1.0, Component v1.8.1)
   MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.8.1)
   MCA shmem: posix (MCA v2.0, API v2.0, Component v1.8.1)
   MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.8.1)
   MCA timer: linux (MCA v2.0, API v2.0, Component v1.8.1)
 MCA dfs: app (MCA v2.0, API v1.0, Component v1.8.1)
 MCA dfs: orted (MCA v2.0, API v1.0, Component v1.8.1)
 MCA dfs: test (MCA v2.0, API v1.0, Component v1.8.1)
  MCA errmgr: default_app (MCA v2.0, API v3.0, Component v1.8.1)
  MCA errmgr: default_hnp (MCA v2.0, API v3.0, Component v1.8.1)
  MCA errmgr: default_orted (MCA v2.0, API v3.0, Component v1.8.1)
  MCA errmgr: default_tool (MCA v2