>> may also use an IP address, or a subnet mask,
>> whichever is simpler for you.
>> It is better explained in this FAQ:
>>
>> https://www.open-mpi.org/faq/?category=all#tcp-selection
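As a concrete illustration of the FAQ's interface-selection advice (the interface name and subnet below are placeholders, not values from this thread), the tcp BTL can be restricted either by name or by CIDR subnet:

```shell
# Restrict the tcp BTL to a named interface (eth0 is a placeholder)
mpirun --mca btl_tcp_if_include eth0 -np 4 ./DoWork

# ...or by subnet, which is handy when interface names differ across nodes
# (192.168.1.0/24 is a placeholder subnet)
mpirun --mca btl_tcp_if_include 192.168.1.0/24 -np 4 ./DoWork
```

The subnet form is usually the safer choice on heterogeneous clusters, since the same MCA setting then works on every node regardless of local interface naming.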
>>
>> BTW, some of your questions (and others that you may hit later)
>> are c
> On 14 July 2017 at 03:58, Gilles Gouaillardet <gil...@rist.or.jp>
> wrote:
> >>>
> >>> Boris,
> >>>
> >>>
> >>> Open MPI should automatically detect the InfiniBand hardware, and use
> >>> openib (and *not* tcp) for inter-node communications.
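If the automatic selection is in doubt, the openib BTL can also be requested explicitly, so that a silent fallback to tcp becomes a hard error instead. A minimal sketch, using the BTL names of the Open MPI 1.x series (hostnames are placeholders):

```shell
# Allow only openib (InfiniBand), self, and shared memory; if openib
# cannot be used, mpirun aborts instead of quietly falling back to tcp
mpirun --mca btl openib,self,sm -np 2 -host node1,node2 ./DoWork
```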
>>
>> (just run ibstat: at least one port should be listed, its state should be
>> Active, and all nodes should report the same SM lid)
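The ibstat check above can be run across the cluster in one pass; a sketch assuming passwordless ssh (node names are placeholders):

```shell
# Print port state and subnet-manager lid on each node; every node should
# show "State: Active" and the same "SM lid" value
for n in node1 node2; do
    echo "== $n =="
    ssh "$n" ibstat | grep -E 'State|SM lid'
done
```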
>>
>>
>> then try to run two tasks on two nodes.
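A two-node smoke test of that kind could look like the following (hostnames and the hostfile name are placeholders):

```shell
# Two ranks, one per node
mpirun -np 2 -host node1,node2 ./DoWork

# equivalently, with a hostfile
printf 'node1 slots=1\nnode2 slots=1\n' > hosts
mpirun -np 2 -hostfile hosts ./DoWork
```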
>>
>>
>> if this does not work, you can
>>
>>
I would like to know how to invoke InfiniBand hardware on a CentOS 6.x cluster
with OpenMPI (static libs.) for running my C++ code. This is how I compile
and run:
/usr/local/open-mpi/1.10.7/bin/mpic++ -L/usr/local/open-mpi/1.10.7/lib
-Bstatic main.cpp -o DoWork
/usr/local/open-mpi/1.10.7/bin/mpiexec
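For context, a minimal main.cpp matching the mpic++ line above might look like this (a sketch only; the actual main.cpp is not shown in the thread):

```cpp
// Minimal MPI program: each rank reports its rank and the world size.
// This is an illustrative stand-in for the main.cpp compiled above.
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    std::cout << "rank " << rank << " of " << size << std::endl;
    MPI_Finalize();
    return 0;
}
```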