Re: [OMPI users] Q: Basic invoking of InfiniBand with OpenMPI

2017-07-13 Thread Gilles Gouaillardet
Boris, Open MPI should automatically detect the InfiniBand hardware and use openib (and *not* tcp) for inter-node communications, plus a shared-memory-optimized BTL (e.g. sm or vader) for intra-node communications. Note that if you pass "-mca btl openib,self", you tell Open MPI to use the openib
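A sketch of the advice above as command lines (the process count, hostfile name, and binary name DoWork are illustrative, taken from the compile command later in this thread):

```shell
# Let Open MPI pick BTLs automatically (usually the right choice):
mpiexec -np 16 --hostfile hosts ./DoWork

# Force openib for inter-node traffic, but keep a shared-memory BTL
# (vader or sm, depending on the Open MPI version) for intra-node traffic:
mpiexec -np 16 --hostfile hosts --mca btl openib,vader,self ./DoWork
```

With only "openib,self", ranks on the same node are forced through the HCA instead of shared memory, which is why adding vader or sm matters.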

Re: [OMPI users] weird issue with output redirection to a file when using different compilers

2017-07-13 Thread fabricio
On 13-07-2017 22:32, Gilles Gouaillardet wrote: Fabricio, the Fortran runtime might (or might not) use buffering for I/O. As a consequence, data might be written immediately to disk, or at a later time (e.g. when the file is closed, the buffer is full, or the buffer is flushed). You might want to

Re: [OMPI users] weird issue with output redirection to a file when using different compilers

2017-07-13 Thread Gilles Gouaillardet
Fabricio, the Fortran runtime might (or might not) use buffering for I/O. As a consequence, data might be written immediately to disk, or at a later time (e.g. when the file is closed, the buffer is full, or the buffer is flushed). You might want to manually flush the file, or there might be an option not to
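A minimal sketch of how buffering can be influenced from the shell for the two runtimes in question, assuming Intel Fortran and GNU Fortran (check your compiler version's documentation, as these environment variables and their accepted values can vary):

```shell
# Intel Fortran runtime: disable buffered writes so output hits the file
# as the program produces it
export FORT_BUFFERED=no

# GNU Fortran runtime: force unbuffered output on all units
export GFORTRAN_UNBUFFERED_ALL=y

# Then launch as usual; output redirection should update continuously
mpiexec -np 16 ./model > output.log
```

The alternative, as suggested above, is an explicit FLUSH statement in the Fortran source after each write.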

[OMPI users] weird issue with output redirection to a file when using different compilers

2017-07-13 Thread fabricio
Hello there. I'm facing a weird issue on a CentOS 7.3 x86 machine when remotely running a program [https://www.myroms.org] from host1 to host2 and host3. When the program was compiled with Intel ifort 17.0, output redirection happens immediately and the file is constantly updated. If the

Re: [OMPI users] Q: Basic invoking of InfiniBand with OpenMPI

2017-07-13 Thread Gus Correa
Have you tried "-mca btl vader,openib,self" or "-mca btl sm,openib,self", by chance? That adds a BTL for intra-node communication (vader or sm). On 07/13/2017 05:43 PM, Boris M. Vulovic wrote: I would like to know how to invoke InfiniBand hardware on a CentOS 6.x cluster with OpenMPI (static

[OMPI users] Q: Basic invoking of InfiniBand with OpenMPI

2017-07-13 Thread Boris M. Vulovic
I would like to know how to invoke InfiniBand hardware on a CentOS 6.x cluster with OpenMPI (static libs.) for running my C++ code. This is how I compile and run: /usr/local/open-mpi/1.10.7/bin/mpic++ -L/usr/local/open-mpi/1.10.7/lib -Bstatic main.cpp -o DoWork /usr/local/open-mpi/1.10.7/bin/mpiexec
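The compile-and-run sequence above, written out as a complete illustrative sketch (the process count and hostfile name are assumptions, not taken from the original message):

```shell
# Compile against the static Open MPI 1.10.7 libraries
/usr/local/open-mpi/1.10.7/bin/mpic++ -L/usr/local/open-mpi/1.10.7/lib \
    -Bstatic main.cpp -o DoWork

# Launch across the cluster; note the leading slash on the mpiexec path,
# which the original command was missing
/usr/local/open-mpi/1.10.7/bin/mpiexec -np 16 --hostfile hosts ./DoWork
```

With no explicit "--mca btl" option, Open MPI selects transports itself, which is the behavior the replies in this thread recommend relying on.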

Re: [OMPI users] Network performance over TCP

2017-07-13 Thread Adam Sylvester
Bummer - thanks for the info Brian. As an FYI, I do have a real world use case for this faster connectivity (i.e. beyond just a benchmark). While my application will happily gobble up and run on however many machines it's given, there's a resource manager that lives on top of everything that