Re: [OMPI users] Newbie question. Please help.

2007-05-10 Thread Jeff Squyres
Good to know.  This suggests that VASP, when built properly against  
Open MPI, should work; perhaps there's some secret sauce in the  
Makefile somewhere...?  Off list, someone cited the following to me:


-
VASP also has a forum for things like this:
http://cms.mpi.univie.ac.at/vasp-forum/forum.php

From there it looks like people have been having problems with
ifort 9.1.043 and VASP.


and from this post it looks like I'm not the only one to use Open MPI
and VASP:


http://cms.mpi.univie.ac.at/vasp-forum/forum_viewtopic.php?2.550
-

I have not received a reply from the VASP author yet.
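
A quick, generic cross-check that may be worth running in the meantime
(commands are illustrative, not taken from this thread) is to confirm
which ifort the Open MPI wrappers and the VASP build are actually using:

$ ifort -V                                  # exact Intel Fortran build, e.g. 9.1.x
$ ompi_info | grep -i 'fortran.*compiler'   # the ifort Open MPI was configured with
$ mpif90 --showme                           # the full compile line the wrapper invokes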



On May 10, 2007, at 8:52 AM, Terry Frankcombe wrote:



I have previously been running parallel VASP happily with an old,
prerelease version of OpenMPI:


[terry@nocona Vasp.4.6-OpenMPI]$
head /home/terry/Install_trees/OpenMPI-1.0rc6/config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.0rc6, which was
generated by GNU Autoconf 2.59.  Invocation command line was

  $ ./configure --enable-static --disable-shared
--prefix=/home/terry/bin/Local --enable-picky --disable-heterogeneous
--without-libnuma --without-slurm --without-tm F77=ifort



In my VASP makefile:

FC=/home/terry/bin/Local/bin/mpif90

OFLAG= -O3 -xP -tpp7

CPP = $(CPP_) -DMPI  -DHOST=\"LinuxIFC\" -DIFC -Dkind8 -DNGZhalf
-DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DMPI_BLOCK=500 -DRPROMU_DGEMV
-DRACCMU_DGEMV

FFLAGS =  -FR -lowercase -assume byterecl

As far as I can see (it was a long time ago!) I didn't use BLACS or
SCALAPACK libraries.  I used ATLAS.



Maybe this will help.


--
Dr Terry Frankcombe
Physical Chemistry, Department of Chemistry
Göteborgs Universitet
SE-412 96 Göteborg Sweden
Ph: +46 76 224 0887   Skype: terry.frankcombe





--
Jeff Squyres
Cisco Systems




Re: [OMPI users] Newbie question. Please help.

2007-05-10 Thread Terry Frankcombe

I have previously been running parallel VASP happily with an old,
prerelease version of OpenMPI:


[terry@nocona Vasp.4.6-OpenMPI]$
head /home/terry/Install_trees/OpenMPI-1.0rc6/config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.0rc6, which was
generated by GNU Autoconf 2.59.  Invocation command line was

  $ ./configure --enable-static --disable-shared
--prefix=/home/terry/bin/Local --enable-picky --disable-heterogeneous
--without-libnuma --without-slurm --without-tm F77=ifort



In my VASP makefile:

FC=/home/terry/bin/Local/bin/mpif90

OFLAG= -O3 -xP -tpp7

CPP = $(CPP_) -DMPI  -DHOST=\"LinuxIFC\" -DIFC -Dkind8 -DNGZhalf
-DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DMPI_BLOCK=500 -DRPROMU_DGEMV
-DRACCMU_DGEMV

FFLAGS =  -FR -lowercase -assume byterecl

As far as I can see (it was a long time ago!) I didn't use BLACS or
SCALAPACK libraries.  I used ATLAS.



Maybe this will help.
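
A quick way to confirm that the wrapper, runtime, and headers all come
from the same Open MPI tree (paths follow the prefix above but are
otherwise illustrative) is:

$ export PATH=/home/terry/bin/Local/bin:$PATH
$ which mpif90 mpirun    # both should resolve under the Open MPI prefix
$ mpif90 --showme        # the real ifort command line behind the wrapper
$ ompi_info | head -5    # the Open MPI version actually on the PATH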


-- 
Dr Terry Frankcombe
Physical Chemistry, Department of Chemistry
Göteborgs Universitet
SE-412 96 Göteborg Sweden
Ph: +46 76 224 0887   Skype: terry.frankcombe




Re: [OMPI users] Newbie question. Please help.

2007-05-09 Thread Steven Truong

Thank you very much, Jeff, for your efforts and help.

On 5/9/07, Jeff Squyres  wrote:

I have mailed the VASP maintainer asking for a copy of the code.
Let's see what happens.

On May 9, 2007, at 2:44 PM, Steven Truong wrote:

> Hi, Jeff.   Thank you very much for looking into this issue.   I am
> afraid that I cannot give you the application/package because it is
> commercial software.  I believe that a lot of people are using this
> VASP software package: http://cms.mpi.univie.ac.at/vasp/.
>
> My current environment uses MPICH 1.2.7p1; however, a new set of
> dual-core machines has posed a new set of challenges, and I am
> looking into replacing MPICH with Open MPI on these machines.
>
> Could Mr. Radican, who wrote that he was able to run VASP with
> Open MPI, provide more detail on how he configured Open MPI, how he
> compiled and ran VASP jobs, and anything else relating to this issue?
>
> Thank you very much for all your help.
> Steven.
>
> On 5/9/07, Jeff Squyres  wrote:
>> Can you send a simple test that reproduces these errors?
>>
>> I.e., if there's a single, simple package that you can send
>> instructions on how to build, it would be most helpful if we could
>> reproduce the error (and therefore figure out how to fix it).
>>
>> Thanks!
>>
>>
>> On May 9, 2007, at 2:19 PM, Steven Truong wrote:
>>
>>> Oh, no.  I tried with ACML and had the same set of errors.
>>>
>>> Steven.
>>>
>>> On 5/9/07, Steven Truong  wrote:
 Hi, Kevin and all.  I tried with the following:

 ./configure --prefix=/usr/local/openmpi-1.2.1 --disable-ipv6
 --with-tm=/usr/local/pbs  --enable-mpirun-prefix-by-default
 --enable-mpi-f90 --with-threads=posix  --enable-static

 and added the mpi.o in my VASP's makefile, but I still got errors.

 I forgot to mention that our environment has Intel MKL 9.0 or 8.1 and
 my machines are dual-proc, dual-core Xeon 5130s.

 Well, I am going to try ACML too.

 Attached is my makefile for VASP and I am not sure if I missed
 anything again.

 Thank you very much for all your help.

 On 5/9/07, Steven Truong  wrote:
> Thanks, Kevin and Brook, for replying to my question.  I am going to
> try out what Kevin suggested.
>
> Steven.
>
> On 5/9/07, Kevin Radican  wrote:
>> Hi,
>>
>> We use VASP 4.6 in parallel with Open MPI 1.1.2 without any
>> problems on x86_64 with openSUSE, compiled with gcc and Intel
>> Fortran, and use Torque PBS.
>>
>> I used a standard configure to build Open MPI, something like:
>>
>> ./configure --prefix=/usr/local --enable-static --with-threads
>> --with-tm=/usr/local --with-libnuma
>>
>> I used the ACML math/LAPACK libs and built BLACS and ScaLAPACK
>> with them too.
>>
>> I attached my VASP makefile; I might have added
>>
>> mpi.o : mpi.F
>> 	$(CPP)
>> 	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)
>>
>> to the end of the makefile; it doesn't look like it is in the example
>> makefiles they give, but I compiled this a while ago.
>>
>> Hope this helps.
>>
>> Cheers,
>> Kevin
>>
>>
>>
>>
>>
>> On Tue, 2007-05-08 at 19:18 -0700, Steven Truong wrote:
>>> Hi, all.  I am new to Open MPI, and after initial setup I tried
>>> to run my app but got the following errors:
>>>
>>> [node07.my.com:16673] *** An error occurred in MPI_Comm_rank
>>> [node07.my.com:16673] *** on communicator MPI_COMM_WORLD
>>> [node07.my.com:16673] *** MPI_ERR_COMM: invalid communicator
>>> [node07.my.com:16673] *** MPI_ERRORS_ARE_FATAL (goodbye)
>>> [node07.my.com:16674] *** An error occurred in MPI_Comm_rank
>>> [node07.my.com:16674] *** on communicator MPI_COMM_WORLD
>>> [node07.my.com:16674] *** MPI_ERR_COMM: invalid communicator
>>> [node07.my.com:16674] *** MPI_ERRORS_ARE_FATAL (goodbye)
>>> [node07.my.com:16675] *** An error occurred in MPI_Comm_rank
>>> [node07.my.com:16675] *** on communicator MPI_COMM_WORLD
>>> [node07.my.com:16675] *** MPI_ERR_COMM: invalid communicator
>>> [node07.my.com:16675] *** MPI_ERRORS_ARE_FATAL (goodbye)
>>> [node07.my.com:16676] *** An error occurred in MPI_Comm_rank
>>> [node07.my.com:16676] *** on communicator MPI_COMM_WORLD
>>> [node07.my.com:16676] *** MPI_ERR_COMM: invalid communicator
>>> [node07.my.com:16676] *** MPI_ERRORS_ARE_FATAL (goodbye)
>>> mpiexec noticed that job rank 2 with PID 16675 on node node07
>>> exited
>>> on signal 60 (Real-time signal 26).
>>>
>>>  /usr/local/openmpi-1.2.1/bin/ompi_info
>>> Open MPI: 1.2.1
>>>Open MPI SVN revision: r14481
>>> Open RTE: 1.2.1
>>>Open RTE SVN revision: r14481
>>> 

Re: [OMPI users] Newbie question. Please help.

2007-05-09 Thread Jeff Squyres

Can you send a simple test that reproduces these errors?

I.e., if there's a single, simple package that you can send  
instructions on how to build, it would be most helpful if we could  
reproduce the error (and therefore figure out how to fix it).


Thanks!
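
For reference, a throwaway test of the kind being asked for here can
be as small as the following sketch (file name, install prefix and
process count are illustrative, not taken from the thread):

$ cat > rank_test.f90 <<'EOF'
program rank_test
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nprocs
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! the call VASP is dying in
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  print *, 'rank ', rank, ' of ', nprocs
  call MPI_Finalize(ierr)
end program rank_test
EOF
$ /usr/local/openmpi-1.2.1/bin/mpif90 rank_test.f90 -o rank_test
$ /usr/local/openmpi-1.2.1/bin/mpirun -np 4 ./rank_test

If a trivial program like this runs cleanly but VASP still aborts in
MPI_Comm_rank, a common (though here unconfirmed) culprit is the VASP
build picking up another MPI implementation's mpif.h or libraries,
e.g. from an older MPICH install, rather than Open MPI's.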


On May 9, 2007, at 2:19 PM, Steven Truong wrote:


Oh, no.  I tried with ACML and had the same set of errors.

Steven.

On 5/9/07, Steven Truong  wrote:

Hi, Kevin and all.  I tried with the following:

./configure --prefix=/usr/local/openmpi-1.2.1 --disable-ipv6
--with-tm=/usr/local/pbs  --enable-mpirun-prefix-by-default
--enable-mpi-f90 --with-threads=posix  --enable-static

and added the mpi.o in my VASP's makefile, but I still got errors.

I forgot to mention that our environment has Intel MKL 9.0 or 8.1 and
my machines are dual-proc, dual-core Xeon 5130s.

Well, I am going to try ACML too.

Attached is my makefile for VASP and I am not sure if I missed
anything again.

Thank you very much for all your help.

On 5/9/07, Steven Truong  wrote:
Thanks, Kevin and Brook, for replying to my question.  I am going to
try out what Kevin suggested.

Steven.

On 5/9/07, Kevin Radican  wrote:

Hi,

We use VASP 4.6 in parallel with Open MPI 1.1.2 without any
problems on x86_64 with openSUSE, compiled with gcc and Intel
Fortran, and use Torque PBS.

I used a standard configure to build Open MPI, something like:

./configure --prefix=/usr/local --enable-static --with-threads
--with-tm=/usr/local --with-libnuma

I used the ACML math/LAPACK libs and built BLACS and ScaLAPACK
with them too.

I attached my VASP makefile; I might have added

mpi.o : mpi.F
	$(CPP)
	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)

to the end of the makefile; it doesn't look like it is in the example
makefiles they give, but I compiled this a while ago.

Hope this helps.

Cheers,
Kevin





On Tue, 2007-05-08 at 19:18 -0700, Steven Truong wrote:
Hi, all.  I am new to Open MPI, and after initial setup I tried
to run my app but got the following errors:

[node07.my.com:16673] *** An error occurred in MPI_Comm_rank
[node07.my.com:16673] *** on communicator MPI_COMM_WORLD
[node07.my.com:16673] *** MPI_ERR_COMM: invalid communicator
[node07.my.com:16673] *** MPI_ERRORS_ARE_FATAL (goodbye)
[node07.my.com:16674] *** An error occurred in MPI_Comm_rank
[node07.my.com:16674] *** on communicator MPI_COMM_WORLD
[node07.my.com:16674] *** MPI_ERR_COMM: invalid communicator
[node07.my.com:16674] *** MPI_ERRORS_ARE_FATAL (goodbye)
[node07.my.com:16675] *** An error occurred in MPI_Comm_rank
[node07.my.com:16675] *** on communicator MPI_COMM_WORLD
[node07.my.com:16675] *** MPI_ERR_COMM: invalid communicator
[node07.my.com:16675] *** MPI_ERRORS_ARE_FATAL (goodbye)
[node07.my.com:16676] *** An error occurred in MPI_Comm_rank
[node07.my.com:16676] *** on communicator MPI_COMM_WORLD
[node07.my.com:16676] *** MPI_ERR_COMM: invalid communicator
[node07.my.com:16676] *** MPI_ERRORS_ARE_FATAL (goodbye)
mpiexec noticed that job rank 2 with PID 16675 on node node07 exited
on signal 60 (Real-time signal 26).

 /usr/local/openmpi-1.2.1/bin/ompi_info
Open MPI: 1.2.1
   Open MPI SVN revision: r14481
Open RTE: 1.2.1
   Open RTE SVN revision: r14481
OPAL: 1.2.1
   OPAL SVN revision: r14481
  Prefix: /usr/local/openmpi-1.2.1
 Configured architecture: x86_64-unknown-linux-gnu
   Configured by: root
   Configured on: Mon May  7 18:32:56 PDT 2007
  Configure host: neptune.nanostellar.com
Built by: root
Built on: Mon May  7 18:40:28 PDT 2007
  Built host: neptune.my.com
  C bindings: yes
C++ bindings: yes
  Fortran77 bindings: yes (all)
  Fortran90 bindings: yes
 Fortran90 bindings size: small
  C compiler: gcc
 C compiler absolute: /usr/bin/gcc
C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
  Fortran77 compiler: /opt/intel/fce/9.1.043/bin/ifort
  Fortran77 compiler abs: /opt/intel/fce/9.1.043/bin/ifort
  Fortran90 compiler: /opt/intel/fce/9.1.043/bin/ifort
  Fortran90 compiler abs: /opt/intel/fce/9.1.043/bin/ifort
 C profiling: yes
   C++ profiling: yes
 Fortran77 profiling: yes
 Fortran90 profiling: yes
  C++ exceptions: no
  Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
 MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
 libltdl support: yes
   Heterogeneous support: yes
 mpirun default --prefix: yes
   MCA backtrace: execinfo (MCA v1.0, API v1.0,  
Component v1.2.1)
  MCA memory: ptmalloc2 (MCA v1.0, API v1.0,  
Component v1.2.1)
   MCA paffinity: linux (MCA v1.0, API v1.0, Component  
v1.2.1)
   MCA maffinity: first_use (MCA v1.0, API v1.0,  
Component v1.2.1)
   

Re: [OMPI users] Newbie question. Please help.

2007-05-09 Thread Steven Truong

Oh, no.  I tried with ACML and had the same set of errors.

Steven.

On 5/9/07, Steven Truong  wrote:

Hi, Kevin and all.  I tried with the following:

./configure --prefix=/usr/local/openmpi-1.2.1 --disable-ipv6
--with-tm=/usr/local/pbs  --enable-mpirun-prefix-by-default
--enable-mpi-f90 --with-threads=posix  --enable-static

and added the mpi.o in my VASP's makefile, but I still got errors.

I forgot to mention that our environment has Intel MKL 9.0 or 8.1 and
my machines are dual-proc, dual-core Xeon 5130s.

Well, I am going to try ACML too.

Attached is my makefile for VASP and I am not sure if I missed anything again.

Thank you very much for all your help.

On 5/9/07, Steven Truong  wrote:
> Thanks, Kevin and Brook, for replying to my question.  I am going to try
> out what Kevin suggested.
>
> Steven.
>
> On 5/9/07, Kevin Radican  wrote:
> > Hi,
> >
> > We use VASP 4.6 in parallel with Open MPI 1.1.2 without any problems on
> > x86_64 with openSUSE, compiled with gcc and Intel Fortran, and use
> > Torque PBS.
> >
> > I used a standard configure to build Open MPI, something like:
> >
> > ./configure --prefix=/usr/local --enable-static --with-threads
> > --with-tm=/usr/local --with-libnuma
> >
> > I used the ACML math/LAPACK libs and built BLACS and ScaLAPACK with them
> > too.
> >
> > I attached my VASP makefile; I might have added
> >
> > mpi.o : mpi.F
> > 	$(CPP)
> > 	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)
> >
> > to the end of the makefile; it doesn't look like it is in the example
> > makefiles they give, but I compiled this a while ago.
> >
> > Hope this helps.
> >
> > Cheers,
> > Kevin
> >
> >
> >
> >
> >
> > On Tue, 2007-05-08 at 19:18 -0700, Steven Truong wrote:
> > > Hi, all.  I am new to Open MPI, and after initial setup I tried to run
> > > my app but got the following errors:
> > >
> > > [node07.my.com:16673] *** An error occurred in MPI_Comm_rank
> > > [node07.my.com:16673] *** on communicator MPI_COMM_WORLD
> > > [node07.my.com:16673] *** MPI_ERR_COMM: invalid communicator
> > > [node07.my.com:16673] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > > [node07.my.com:16674] *** An error occurred in MPI_Comm_rank
> > > [node07.my.com:16674] *** on communicator MPI_COMM_WORLD
> > > [node07.my.com:16674] *** MPI_ERR_COMM: invalid communicator
> > > [node07.my.com:16674] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > > [node07.my.com:16675] *** An error occurred in MPI_Comm_rank
> > > [node07.my.com:16675] *** on communicator MPI_COMM_WORLD
> > > [node07.my.com:16675] *** MPI_ERR_COMM: invalid communicator
> > > [node07.my.com:16675] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > > [node07.my.com:16676] *** An error occurred in MPI_Comm_rank
> > > [node07.my.com:16676] *** on communicator MPI_COMM_WORLD
> > > [node07.my.com:16676] *** MPI_ERR_COMM: invalid communicator
> > > [node07.my.com:16676] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > > mpiexec noticed that job rank 2 with PID 16675 on node node07 exited
> > > on signal 60 (Real-time signal 26).
> > >
> > >  /usr/local/openmpi-1.2.1/bin/ompi_info
> > > Open MPI: 1.2.1
> > >Open MPI SVN revision: r14481
> > > Open RTE: 1.2.1
> > >Open RTE SVN revision: r14481
> > > OPAL: 1.2.1
> > >OPAL SVN revision: r14481
> > >   Prefix: /usr/local/openmpi-1.2.1
> > >  Configured architecture: x86_64-unknown-linux-gnu
> > >Configured by: root
> > >Configured on: Mon May  7 18:32:56 PDT 2007
> > >   Configure host: neptune.nanostellar.com
> > > Built by: root
> > > Built on: Mon May  7 18:40:28 PDT 2007
> > >   Built host: neptune.my.com
> > >   C bindings: yes
> > > C++ bindings: yes
> > >   Fortran77 bindings: yes (all)
> > >   Fortran90 bindings: yes
> > >  Fortran90 bindings size: small
> > >   C compiler: gcc
> > >  C compiler absolute: /usr/bin/gcc
> > > C++ compiler: g++
> > >C++ compiler absolute: /usr/bin/g++
> > >   Fortran77 compiler: /opt/intel/fce/9.1.043/bin/ifort
> > >   Fortran77 compiler abs: /opt/intel/fce/9.1.043/bin/ifort
> > >   Fortran90 compiler: /opt/intel/fce/9.1.043/bin/ifort
> > >   Fortran90 compiler abs: /opt/intel/fce/9.1.043/bin/ifort
> > >  C profiling: yes
> > >C++ profiling: yes
> > >  Fortran77 profiling: yes
> > >  Fortran90 profiling: yes
> > >   C++ exceptions: no
> > >   Thread support: posix (mpi: no, progress: no)
> > >   Internal debug support: no
> > >  MPI parameter check: runtime
> > > Memory profiling support: no
> > > Memory debugging support: no
> > >  libltdl support: yes
> > >Heterogeneous support: yes
> > >  mpirun default --prefix: yes
> > >MCA backtrace: execinfo (MCA v1.0, API v1.0, Component v1.2.1)
> > >   MCA memory: ptmalloc2 (MCA v1.0, API v1.0, 

Re: [OMPI users] Newbie question. Please help.

2007-05-09 Thread Steven Truong

Hi, Kevin and all.  I tried with the following:

./configure --prefix=/usr/local/openmpi-1.2.1 --disable-ipv6
--with-tm=/usr/local/pbs  --enable-mpirun-prefix-by-default
--enable-mpi-f90 --with-threads=posix  --enable-static

and added the mpi.o in my VASP's makefile, but I still got errors.

I forgot to mention that our environment has Intel MKL 9.0 or 8.1 and
my machines are dual-proc, dual-core Xeon 5130s.

Well, I am going to try ACML too.

Attached is my makefile for VASP and I am not sure if I missed anything again.

Thank you very much for all your help.
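
As a generic cross-check of a configure line like the one above
(commands illustrative, not from this thread), ompi_info can confirm
that the requested options actually made it into the build:

$ /usr/local/openmpi-1.2.1/bin/ompi_info | grep -i thread      # should report posix thread support
$ /usr/local/openmpi-1.2.1/bin/ompi_info | grep -i fortran90   # f90 bindings behind mpif90
$ /usr/local/openmpi-1.2.1/bin/ompi_info | grep ' tm '         # Torque/PBS (tm) components built in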

On 5/9/07, Steven Truong  wrote:

Thanks, Kevin and Brook, for replying to my question.  I am going to try
out what Kevin suggested.

Steven.

On 5/9/07, Kevin Radican  wrote:
> Hi,
>
> We use VASP 4.6 in parallel with Open MPI 1.1.2 without any problems on
> x86_64 with openSUSE, compiled with gcc and Intel Fortran, and use
> Torque PBS.
>
> I used a standard configure to build Open MPI, something like:
>
> ./configure --prefix=/usr/local --enable-static --with-threads
> --with-tm=/usr/local --with-libnuma
>
> I used the ACML math/LAPACK libs and built BLACS and ScaLAPACK with them
> too.
>
> I attached my VASP makefile; I might have added
>
> mpi.o : mpi.F
> 	$(CPP)
> 	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)
>
> to the end of the makefile; it doesn't look like it is in the example
> makefiles they give, but I compiled this a while ago.
>
> Hope this helps.
>
> Cheers,
> Kevin
>
>
>
>
>
> On Tue, 2007-05-08 at 19:18 -0700, Steven Truong wrote:
> > Hi, all.  I am new to Open MPI, and after initial setup I tried to run
> > my app but got the following errors:
> >
> > [node07.my.com:16673] *** An error occurred in MPI_Comm_rank
> > [node07.my.com:16673] *** on communicator MPI_COMM_WORLD
> > [node07.my.com:16673] *** MPI_ERR_COMM: invalid communicator
> > [node07.my.com:16673] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > [node07.my.com:16674] *** An error occurred in MPI_Comm_rank
> > [node07.my.com:16674] *** on communicator MPI_COMM_WORLD
> > [node07.my.com:16674] *** MPI_ERR_COMM: invalid communicator
> > [node07.my.com:16674] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > [node07.my.com:16675] *** An error occurred in MPI_Comm_rank
> > [node07.my.com:16675] *** on communicator MPI_COMM_WORLD
> > [node07.my.com:16675] *** MPI_ERR_COMM: invalid communicator
> > [node07.my.com:16675] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > [node07.my.com:16676] *** An error occurred in MPI_Comm_rank
> > [node07.my.com:16676] *** on communicator MPI_COMM_WORLD
> > [node07.my.com:16676] *** MPI_ERR_COMM: invalid communicator
> > [node07.my.com:16676] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > mpiexec noticed that job rank 2 with PID 16675 on node node07 exited
> > on signal 60 (Real-time signal 26).
> >
> >  /usr/local/openmpi-1.2.1/bin/ompi_info
> > Open MPI: 1.2.1
> >Open MPI SVN revision: r14481
> > Open RTE: 1.2.1
> >Open RTE SVN revision: r14481
> > OPAL: 1.2.1
> >OPAL SVN revision: r14481
> >   Prefix: /usr/local/openmpi-1.2.1
> >  Configured architecture: x86_64-unknown-linux-gnu
> >Configured by: root
> >Configured on: Mon May  7 18:32:56 PDT 2007
> >   Configure host: neptune.nanostellar.com
> > Built by: root
> > Built on: Mon May  7 18:40:28 PDT 2007
> >   Built host: neptune.my.com
> >   C bindings: yes
> > C++ bindings: yes
> >   Fortran77 bindings: yes (all)
> >   Fortran90 bindings: yes
> >  Fortran90 bindings size: small
> >   C compiler: gcc
> >  C compiler absolute: /usr/bin/gcc
> > C++ compiler: g++
> >C++ compiler absolute: /usr/bin/g++
> >   Fortran77 compiler: /opt/intel/fce/9.1.043/bin/ifort
> >   Fortran77 compiler abs: /opt/intel/fce/9.1.043/bin/ifort
> >   Fortran90 compiler: /opt/intel/fce/9.1.043/bin/ifort
> >   Fortran90 compiler abs: /opt/intel/fce/9.1.043/bin/ifort
> >  C profiling: yes
> >C++ profiling: yes
> >  Fortran77 profiling: yes
> >  Fortran90 profiling: yes
> >   C++ exceptions: no
> >   Thread support: posix (mpi: no, progress: no)
> >   Internal debug support: no
> >  MPI parameter check: runtime
> > Memory profiling support: no
> > Memory debugging support: no
> >  libltdl support: yes
> >Heterogeneous support: yes
> >  mpirun default --prefix: yes
> >MCA backtrace: execinfo (MCA v1.0, API v1.0, Component v1.2.1)
> >   MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.2.1)
> >MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.2.1)
> >MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.2.1)
> >MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.2.1)
> >MCA timer: linux (MCA v1.0, API v1.0, Component v1.2.1)
> >