[OMPI users] error in mpi processes: openmpi-3.0

2018-04-03 Thread abhisek Mondal
Hi,

I have recently upgraded my system to openmpi-3.0. But despite a proper
installation and GPU integration, I keep receiving this error, which I was also
receiving with openmpi-1.4:

$ mpirun -np 10 -bynode `which relion_preprocess_mpi` --i input_micrographs.star \
    --coord_dir "." --coord_suffix .coords.star --part_star extra/output_particles.star \
    --part_dir "." --extract --extract_size 140 --bg_radius 52 --invert_contrast --norm

--------------------------------------------------------------------------
The following command line options and corresponding MCA parameter
have been deprecated and replaced as follows:

  Command line options:
    Deprecated:  --bynode, -bynode
    Replacement: --map-by node

  Equivalent MCA parameter:
    Deprecated:  rmaps_base_bynode
    Replacement: rmaps_base_mapping_policy=node

The deprecated forms *will* disappear in a future version of Open MPI.
Please update to the new syntax.
--------------------------------------------------------------------------
[localhost.localdomain:29946] PMIX ERROR: BAD-PARAM in file
../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
[localhost.localdomain:29951] PMIX ERROR: BAD-PARAM in file
../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
[localhost.localdomain:29948] PMIX ERROR: BAD-PARAM in file
../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
[localhost.localdomain:29952] PMIX ERROR: BAD-PARAM in file
../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
[localhost.localdomain:29944] PMIX ERROR: BAD-PARAM in file
../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
[localhost.localdomain:29950] PMIX ERROR: BAD-PARAM in file
../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
[localhost.localdomain:29949] PMIX ERROR: BAD-PARAM in file
../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[localhost.localdomain:29952] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not able to
guarantee that all other processes were killed!
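
(Note that the deprecation text above is only a warning about the option spelling; the same run with the
new syntax would presumably be

    mpirun -np 10 --map-by node `which relion_preprocess_mpi` --i input_micrographs.star ...

with the remaining options unchanged. The PMIX BAD-PARAM errors and the MPI_Init failures in the log are
a separate problem from that deprecation warning.)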

I'm not sure what is causing this crash.

Please help me out.

Thank you

--
Abhisek Mondal
Senior Research Fellow
Structural Biology and Bioinformatics Division
CSIR-Indian Institute of Chemical Biology
Kolkata 700032
INDIA
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] running mpi program between my PC and an ARM-architecture raspberry

2018-04-03 Thread Gilles Gouaillardet

Let me shed a different light on that.


Once in a while, I run Open MPI between x86_64 and sparcv9, and it works 
quite well as far as I am concerned.


Note this is on the master branch; I have never tried older or release branches.


Note you likely need to configure Open MPI with --enable-heterogeneous
on both architectures.
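
A typical build with that option would look something like the following (the installation
prefix here is only an illustration):

    ./configure --prefix=$HOME/openmpi-hetero --enable-heterogeneous
    make -j 4 && make install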


If you are still unlucky, then I suggest you download and build the 
3.1.0rc3 version (with --enable-debug --enable-heterogeneous)


and then

mpirun --mca oob_base_verbose 10 --mca pml_base_verbose 10 --host ... 
hostname


and then

mpirun --mca oob_base_verbose 10 --mca pml_base_verbose 10 --mca 
btl_base_verbose 10 --host ... mpi_helloworld


and then either open a GitHub issue or post the logs to this mailing list.


Cheers,


Gilles




___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] running mpi program between my PC and an ARM-architecture raspberry

2018-04-03 Thread Jeff Squyres (jsquyres)
On Apr 2, 2018, at 1:39 PM, dpchoudh .  wrote:
> 
> Sorry for a pedantic follow up:
> 
> Is this (heterogeneous cluster support) something that is specified by
> the MPI standard (perhaps as an optional component)?

The MPI standard states that if you send a message, you should receive the same 
values at the receiver.  E.g., if you sent int=3, you should receive int=3, 
even if one machine is big endian and the other machine is little endian.

It does not specify what happens when data sizes are different (e.g., if type X 
is 4 bytes on one side and 8 bytes on the other) -- there are no good answers on 
what to do there.
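
As a concrete sketch of that guarantee (a hypothetical two-rank program, assuming the MPI library
on both ends was built with heterogeneous support when the hosts differ in endianness):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 3;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* The standard requires this to print 3, even if the sender is
               little endian and the receiver is big endian. */
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }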

> Do people know if
> MPICH, MVAPICH, Intel MPI, etc. support it? (I do realize this is an
> OpenMPI forum)

I don't know offhand.  I know that this kind of support is very unpopular with 
MPI implementors because:

1. Nearly nobody uses it (we get *at most* one request a year to properly 
support BE<-->LE transformation).
2. It's difficult to implement BE<-->LE transformation properly without causing 
at least some performance loss and/or code complexity in the main datatype 
engine.
3. It is very difficult for MPI implementors to test properly (especially in 
automated environments).

#1 is probably the most important reason.  If lots of people were asking for 
this, MPI implementors would take the time to figure out #2 and #3.  But since 
almost no one asks for it, it gets pushed (waay) down on the priority list 
of things to implement.

Sorry -- just being brutally honest here.  :-\

> The reason I ask is that I have a mini Linux lab of sorts that consists
> of Linux running on many architectures, both 32 and 64 bit and both LE
> and BE. Some have advanced fabrics, but all have garden variety
> Ethernet. I mostly use this for software porting work, but I'd love to
> set it up as a test bench for testing OpenMPI in a heterogeneous
> environment and report issues, if that is something that the
> developers want to achieve.

Effectively, the current set of Open MPI developers have not put up any 
resources to fix, update, and maintain the BE<-->LE transformation in the Open 
MPI datatype engine.  I don't think that there are any sane answers for what to 
do when datatypes are different sizes.

However, that being said, Open MPI is an open source community -- if someone 
wants to contribute pull requests and/or testing to support this feature, that 
would be great!

-- 
Jeff Squyres
jsquy...@cisco.com

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


Re: [OMPI users] Linkage problem

2018-04-03 Thread Jeff Squyres (jsquyres)
Everything that Nathan said, plus it looks like you're running into general 
running-Open-MPI-successfully issues (i.e., your "ompi_info: error while 
loading shared libraries: libmpi.so.40: cannot open shared object file: No such 
file or directory" error -- see 
https://www.open-mpi.org/faq/?category=running#adding-ompi-to-path).
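
In practice that usually means making the Open MPI installation visible in PATH and
LD_LIBRARY_PATH before compiling or running anything; a sketch, assuming an installation
prefix of /opt/openmpi (substitute whatever --prefix was actually used):

    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
    which mpicc ompi_info

The lstopo error quoted below has the same flavor: the hwloc library directory is not in the
runtime linker's search path.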


> On Apr 3, 2018, at 4:25 PM, Nathan Hjelm  wrote:
> 
> I guess I should point out the reason the compiler thought you had linker 
> input was a missing -I on /usr/lib/openmpi/include . Though that include 
> shouldn't be needed as the wrapper will do that for you. You can see what the 
> wrapper passes to gcc by running: mpicxx --showme.
> 
> -Nathan
> 
> On Apr 03, 2018, at 02:16 PM, Quentin Faure  wrote:
> 
>> Hello,
>> 
>>> 
>>> Date: Fri, 30 Mar 2018 14:29:57 +
>>> From: "Jeff Squyres (jsquyres)" 
>>> To: "Open MPI User's List" 
>>> Subject: Re: [OMPI users] Linkage problem
>>> Message-ID: 
>>> Content-Type: text/plain; charset="utf-8"
>>> 
>>> On Mar 29, 2018, at 11:19 AM, Quentin Faure  wrote:
 
 I would like to use openmpi with a software called LAMMPS. I know it is 
 possible when compiling the software to indicate it to use it with 
 openmpi. However when I do that I have a warning message telling me that 
 the linkage could not have been done (I specified the path for openmpi 
 library and name like it is done in LAMMPS manual).
>>> 
>>> What error message are you getting?
>> 
>> The error I get is:
>>  /usr/lib/openmpi/include: linker input file unused because linking not done 
>> mpicxx -g -O3  -DLAMMPS_GZIP -DLMP_USER_INTEL -DLMP_MPIIO  
>> /usr/lib/openmpi/include -pthread -DFFT_FFTW3 -DFFT_SINGLE   
>> -I../../lib/molfile   -c ../create_atoms.cpp
>> 
>> 
>>> 
 I tried to reinstall openmpi in two different ways (following the advice 
 of people that have LAMMPS and openmpi working together) but it still does 
 not work. Also I don't know if this is part of my problem or not, but the 
 option --showme does not work (command: mpicc --showme).
>>> 
>>> What error message are you getting?  It's quite possible that you're using 
>>> a different mpicc (e.g., from a different MPI installation on your same 
>>> machine), and not using the mpicc from the Open MPI that you just installed.
>> 
>> I do not have any errors when I install openmpi, I just have an error when I 
>> try to compile my other software with openmpi. Concerning the mpicc 
>> command, normally there is no other MPI software installed on this computer.
>> 
>>> 
>>> Can you send all the information listed here: 
>>> https://www.open-mpi.org/community/help/
>> 
>> I am enclosing the config.log file. I tried the command ompi_info --all and I 
>> got: ompi_info: error while loading shared libraries: libmpi.so.40: cannot 
>> open shared object file: No such file or directory.
>> 
>> I tried to install the software hwloc and used the command lstopo -v but I 
>> got an error: lstopo: error while loading shared libraries: libhwloc.so.15: 
>> cannot open shared object file: No such file or directory.
>> 
>> One thing that I did not specify about the computer: it was built and all the 
>> software was installed manually; it is not a computer that came pre-configured 
>> when it was bought.
>> 
>> 
>> 
>> 
>> 
>> 
>>> 
>>> -- 
>>> Jeff Squyres
>>> jsquy...@cisco.com
>>> 
>> 
>> Quentin 
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users


-- 
Jeff Squyres
jsquy...@cisco.com

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] can not compile Openmpi-3.0

2018-04-03 Thread abhisek Mondal
Thanks a lot.
It worked very nicely for me.
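
(For the record, the two changes that resolved this thread, per the quoted exchange below, were
installing the C library and kernel header packages, which on a Red Hat style system would
presumably be something along the lines of

    yum install glibc-headers glibc-devel kernel-headers

and then re-running configure with --with-cuda pointing at the CUDA installation.)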

On Tue, Apr 3, 2018 at 12:54 PM, Gilles Gouaillardet wrote:

> In that case, you need to
>
> configure --with-cuda=/usr/local/cuda-8.0
>
>
> Cheers,
>
>
> Gilles
>
>
> On 4/3/2018 4:12 PM, abhisek Mondal wrote:
>
>> I have cuda installed in /usr/local/cuda-8.0.
>>
>> On Tue, Apr 3, 2018 at 12:37 PM, Gilles Gouaillardet > > wrote:
>>
>> Did you install CUDA and where ?
>>
>>
>> On 4/3/2018 3:51 PM, abhisek Mondal wrote:
>>
>> Thanks a lot. Installing the headers worked like a charm !
>>
>> I have a question though. Why does configuration says:
>>
>>  Version: 3.0.1
>> Build MPI C bindings: yes
>> Build MPI C++ bindings (deprecated): no
>> Build MPI Fortran bindings: mpif.h, use mpi
>> MPI Build Java bindings (experimental): no
>> Build Open SHMEM support: yes
>> Debug build: no
>> Platform file: (none)
>>
>> Miscellaneous
>> ---
>> CUDA support: no
>> I have a single GPU on my machine. Do I need to define it
>> manually while configuring openmpi?
>>
>> On Tue, Apr 3, 2018 at 11:21 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
>>
>> It looks like kernel-headers is required too
>>
>>
>> Cheers,
>>
>>
>> Gilles
>>
>>
>> On 4/3/2018 2:28 PM, abhisek Mondal wrote:
>>
>> Hello,
>>
>> Installing glibc moved the installation little bit but
>> again
>> stuck at sanity check.
>> Location of config.log:
>> https://drive.google.com/drive/u/0/folders/0B6O-L5Y7BiGJfmQ4N2FpblBEcFNxaDZnaGpsUFFEUlotVWFjajR0UFFHNk5aYlhoSHVTWkU
>>
>> $ cpp --version
>> cpp (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>> Copyright (C) 2015 Free Software Foundation, Inc.
>> This is free software; see the source for copying
>> conditions.
>> There is NO
>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>> PARTICULAR PURPOSE.
>>
>> I have gcc and gcc-c++ installed.
>>
>> On Tue, Apr 3, 2018 at 10:08 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
>> This is the relevant part related to this error
>>
>> configure:6620: checking whether we are cross
>> compiling
>> configure:6628: gcc -o conftest conftest.c >&5
>> conftest.c:10:19: fatal error: stdio.h: No such
>> file or
>> directory
>>  #include <stdio.h>
>>^
>> compilation terminated.
>> configure:6632: $? = 1
>> configure:6639: ./conftest
>> ../configure: line 6641: ./conftest: No such file
>> or directory
>> configure:6643: $? = 127
>>
>>
>> at the very least, you need to install the
>> glibc-headers
>> package
>>
>>
>> Cheers,
>>
>>
>> Gilles
>>
>>
>> On 4/3/2018 1:22 PM, abhisek Mondal wrote:
>>
>> Hello,
>>
>> I have uploaded the config.log here:
>> https://drive.google.com/drive/u/0/folders/0B6O-L5Y7BiGJfmQ4N2FpblBEcFNxaDZnaGpsUFFEUlotVWFjajR0UFFHNk5aYlhoSHVTWkU

Re: [OMPI users] can not compile Openmpi-3.0

2018-04-03 Thread Gilles Gouaillardet

In that case, you need to

configure --with-cuda=/usr/local/cuda-8.0
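
For context, that option just gets added to the usual configure invocation; a sketch, with a
purely illustrative prefix:

    ./configure --prefix=/opt/openmpi-3.0.1 --with-cuda=/usr/local/cuda-8.0
    make -j 8 && make install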


Cheers,


Gilles


On 4/3/2018 4:12 PM, abhisek Mondal wrote:

I have cuda installed in /usr/local/cuda-8.0.

On Tue, Apr 3, 2018 at 12:37 PM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:


Did you install CUDA and where ?


On 4/3/2018 3:51 PM, abhisek Mondal wrote:

Thanks a lot. Installing the headers worked like a charm !

I have a question though. Why does configuration says:

 Version: 3.0.1
Build MPI C bindings: yes
Build MPI C++ bindings (deprecated): no
Build MPI Fortran bindings: mpif.h, use mpi
MPI Build Java bindings (experimental): no
Build Open SHMEM support: yes
Debug build: no
Platform file: (none)

Miscellaneous
---
CUDA support: no
I have a single GPU on my machine. Do I need to define it
manually while configuring openmpi?

On Tue, Apr 3, 2018 at 11:21 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:

    It looks like kernel-headers is required too


    Cheers,


    Gilles


    On 4/3/2018 2:28 PM, abhisek Mondal wrote:

        Hello,

        Installing glibc moved the installation little bit but
again
        stuck at sanity check.
        Location of config.log:

https://drive.google.com/drive/u/0/folders/0B6O-L5Y7BiGJfmQ4N2FpblBEcFNxaDZnaGpsUFFEUlotVWFjajR0UFFHNk5aYlhoSHVTWkU

        $ cpp --version
        cpp (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
        Copyright (C) 2015 Free Software Foundation, Inc.
        This is free software; see the source for copying
conditions.
        There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A
        PARTICULAR PURPOSE.

        I have gcc and gcc-c++ installed.

        On Tue, Apr 3, 2018 at 10:08 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:

            This is the relevant part related to this error

            configure:6620: checking whether we are cross compiling
            configure:6628: gcc -o conftest    conftest.c >&5
            conftest.c:10:19: fatal error: stdio.h: No such file or directory
             #include <stdio.h>
               ^
            compilation terminated.
            configure:6632: $? = 1
            configure:6639: ./conftest
            ../configure: line 6641: ./conftest: No such file or directory
            configure:6643: $? = 127


            at the very least, you need to install the glibc-headers package


            Cheers,


            Gilles


            On 4/3/2018 1:22 PM, abhisek Mondal wrote:

                Hello,

                I have uploaded the config.log here:

https://drive.google.com/drive/u/0/folders/0B6O-L5Y7BiGJfmQ4N2FpblBEcFNxaDZnaGpsUFFEUlotVWFjajR0UFFHNk5aYlhoSHVTWkU


                On Tue, Apr 3, 2018 at 9:47 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:

Re: [OMPI users] can not compile Openmpi-3.0

2018-04-03 Thread abhisek Mondal
I have cuda installed in /usr/local/cuda-8.0.

On Tue, Apr 3, 2018 at 12:37 PM, Gilles Gouaillardet wrote:

> Did you install CUDA and where ?
>
>
> On 4/3/2018 3:51 PM, abhisek Mondal wrote:
>
>> Thanks a lot. Installing the headers worked like a charm !
>>
>> I have a question though. Why does configuration says:
>>
>>  Version: 3.0.1
>> Build MPI C bindings: yes
>> Build MPI C++ bindings (deprecated): no
>> Build MPI Fortran bindings: mpif.h, use mpi
>> MPI Build Java bindings (experimental): no
>> Build Open SHMEM support: yes
>> Debug build: no
>> Platform file: (none)
>>
>> Miscellaneous
>> ---
>> CUDA support: no
>> I have a single GPU on my machine. Do I need to define it manually while
>> configuring openmpi?
>>
>> On Tue, Apr 3, 2018 at 11:21 AM, Gilles Gouaillardet > > wrote:
>>
>> It looks like kernel-headers is required too
>>
>>
>> Cheers,
>>
>>
>> Gilles
>>
>>
>> On 4/3/2018 2:28 PM, abhisek Mondal wrote:
>>
>> Hello,
>>
>> Installing glibc moved the installation little bit but again
>> stuck at sanity check.
>> Location of config.log:
>> https://drive.google.com/drive/u/0/folders/0B6O-L5Y7BiGJfmQ4N2FpblBEcFNxaDZnaGpsUFFEUlotVWFjajR0UFFHNk5aYlhoSHVTWkU
>>
>> $ cpp --version
>> cpp (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>> Copyright (C) 2015 Free Software Foundation, Inc.
>> This is free software; see the source for copying conditions.
>> There is NO
>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>> PARTICULAR PURPOSE.
>>
>> I have gcc and gcc-c++ installed.
>>
>> On Tue, Apr 3, 2018 at 10:08 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
>>
>> This is the relevant part related to this error
>>
>> configure:6620: checking whether we are cross compiling
>> configure:6628: gcc -o conftest    conftest.c >&5
>> conftest.c:10:19: fatal error: stdio.h: No such file or
>> directory
>>  #include <stdio.h>
>>^
>> compilation terminated.
>> configure:6632: $? = 1
>> configure:6639: ./conftest
>> ../configure: line 6641: ./conftest: No such file or directory
>> configure:6643: $? = 127
>>
>>
>> at the very least, you need to install the glibc-headers
>> package
>>
>>
>> Cheers,
>>
>>
>> Gilles
>>
>>
>> On 4/3/2018 1:22 PM, abhisek Mondal wrote:
>>
>> Hello,
>>
>> I have uploaded the config.log here:
>> https://drive.google.com/drive/u/0/folders/0B6O-L5Y7BiGJfmQ4N2FpblBEcFNxaDZnaGpsUFFEUlotVWFjajR0UFFHNk5aYlhoSHVTWkU
>>
>>
>>
>>
>> On Tue, Apr 3, 2018 at 9:47 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
>> Can you please compress and attach your config.log ?
>>
>>
>> You might also want to double check you can
>> compile *and*
>> run a
>> simple C hello world program
>>
>>
>> Cheers,
>>
>>
>> Gilles
>>
>>
>> On 4/3/2018 1:06 PM, abhisek Mondal wrote:
>>
>> Hi,
>>
>> I need some help regarding compiling
>> Openmpi-3.0. I have
>> perfectly working C compiler, however, during
>> configuration
>> I'm keep getting this error:
>>
>>
>>
>> ===========================================================================
>> == Configuring Open MPI
>> ===========================================================================
>>
>> *** Startup tests
>> checking build system type... x86_64-unknown-linux-gnu
>> checking host system type...

Re: [OMPI users] can not compile Openmpi-3.0

2018-04-03 Thread Gilles Gouaillardet

Did you install CUDA and where ?


On 4/3/2018 3:51 PM, abhisek Mondal wrote:

Thanks a lot. Installing the headers worked like a charm !

I have a question though. Why does configuration says:

 Version: 3.0.1
Build MPI C bindings: yes
Build MPI C++ bindings (deprecated): no
Build MPI Fortran bindings: mpif.h, use mpi
MPI Build Java bindings (experimental): no
Build Open SHMEM support: yes
Debug build: no
Platform file: (none)

Miscellaneous
---
CUDA support: no
I have a single GPU on my machine. Do I need to define it manually 
while configuring openmpi?


On Tue, Apr 3, 2018 at 11:21 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:


It looks like kernel-headers is required too


Cheers,


Gilles


On 4/3/2018 2:28 PM, abhisek Mondal wrote:

Hello,

Installing glibc moved the installation little bit but again
stuck at sanity check.
Location of config.log:

https://drive.google.com/drive/u/0/folders/0B6O-L5Y7BiGJfmQ4N2FpblBEcFNxaDZnaGpsUFFEUlotVWFjajR0UFFHNk5aYlhoSHVTWkU



$ cpp --version
cpp (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. 
There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.

I have gcc and gcc-c++ installed.

            On Tue, Apr 3, 2018 at 10:08 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:

    This is the relevant part related to this error

    configure:6620: checking whether we are cross compiling
    configure:6628: gcc -o conftest    conftest.c >&5
    conftest.c:10:19: fatal error: stdio.h: No such file or
directory
     #include <stdio.h>
       ^
    compilation terminated.
    configure:6632: $? = 1
    configure:6639: ./conftest
    ../configure: line 6641: ./conftest: No such file or directory
    configure:6643: $? = 127


    at the very least, you need to install the glibc-headers package

    Cheers,


    Gilles


    On 4/3/2018 1:22 PM, abhisek Mondal wrote:

        Hello,

        I have uploaded the config.log here:

https://drive.google.com/drive/u/0/folders/0B6O-L5Y7BiGJfmQ4N2FpblBEcFNxaDZnaGpsUFFEUlotVWFjajR0UFFHNk5aYlhoSHVTWkU

        On Tue, Apr 3, 2018 at 9:47 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote: