Re: [OMPI devel] Combining Binaries for Launch

2017-05-15 Thread Kumar, Amit
>> So long as both binaries use the same OMPI version, I can’t see why there would be an issue. It sounds like you are thinking of running an MPI process on the GPU itself (instead of using an offload library)? People have done that before - IIRC, the only issue is trying to launch a process…

Re: [OMPI devel] Combining Binaries for Launch

2017-05-15 Thread r...@open-mpi.org
So long as both binaries use the same OMPI version, I can’t see why there would be an issue. It sounds like you are thinking of running an MPI process on the GPU itself (instead of using an offload library)? People have done that before - IIRC, the only issue is trying to launch a process onto the…

[OMPI devel] Combining Binaries for Launch

2017-05-15 Thread Kumar, Amit
Dear Open MPI, I would like to gain a better understanding of running two different binaries on two different types of nodes (GPU nodes and non-GPU nodes) as a single job. I have run two different binaries with the mpirun command, and that works fine for us. But my question is: if I have a binary-1 t…
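
For reference, Open MPI's MPMD syntax covers this case: colon-separated application contexts on a single mpirun command line form one job with a shared MPI_COMM_WORLD. Below is a minimal sketch; the host names (gpu01, cpu01) and executable names (gpu_binary, cpu_binary) are placeholders, and the C program simply stands in for either binary to show that both see the same combined job.

    /* mpmd_hello.c - build twice (e.g. as gpu_binary and cpu_binary) to
     * mimic two different executables; both must use the same Open MPI
     * installation.
     *
     * Hypothetical single-job MPMD launch (host names are placeholders):
     *   mpirun -np 1 --host gpu01 ./gpu_binary : -np 1 --host cpu01 ./cpu_binary
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank within the combined job */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total ranks across both binaries */

        printf("%s: rank %d of %d in one MPI_COMM_WORLD\n", argv[0], rank, size);

        MPI_Finalize();
        return 0;
    }

Because both app contexts belong to one mpirun invocation, ranks in the two binaries can communicate directly; launching them with two separate mpirun commands would instead produce two disjoint jobs.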

Re: [OMPI devel] Socket buffer sizes

2017-05-15 Thread r...@open-mpi.org
Thanks - already done, as you say. > On May 15, 2017, at 7:32 AM, Håkon Bugge wrote: > Dear Open MPIers, Automatic tuning of socket buffers has been in the Linux kernel since 2.4.17/2.6.7. That is some time ago. I remember, at the time, that we removed the default setsockopt() for…

[OMPI devel] Socket buffer sizes

2017-05-15 Thread Håkon Bugge
Dear Open MPIers, Automatic tuning of socket buffers has been in the Linux kernel since 2.4.17/2.6.7. That is some time ago. I remember, at the time, that we removed the default setsockopt() for SO_SNDBUF and SO_RCVBUF in Scali MPI. Today, running Open MPI 1.10.2 using the TCP BTL, on a 10Gbit…
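
The trade-off Håkon describes is visible at the socket API level: Linux autotunes a TCP socket's buffers within the net.ipv4.tcp_rmem/tcp_wmem limits, but an explicit setsockopt() of SO_SNDBUF or SO_RCVBUF pins the size and disables autotuning for that socket. The sketch below illustrates that kernel behavior only - it is not Open MPI's actual TCP BTL code, and the 128 KiB value is an arbitrary example.

    /* Illustrative only: shows why a hard-coded setsockopt() defeats
     * Linux socket-buffer autotuning. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        int sz = 0;
        socklen_t len = sizeof(sz);

        /* Before any setsockopt(): the kernel default, still subject to
         * autotuning as the connection runs. */
        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, &len);
        printf("default SO_SNDBUF: %d bytes (autotuned)\n", sz);

        /* An explicit request pins the buffer and turns autotuning off for
         * this socket; the kernel stores roughly double the requested value
         * to account for bookkeeping overhead. */
        int req = 128 * 1024;
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &req, sizeof(req));

        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, &len);
        printf("after setsockopt(128 KiB): %d bytes (autotuning off)\n", sz);

        close(fd);
        return 0;
    }

If memory serves, the corresponding Open MPI knobs are the btl_tcp_sndbuf and btl_tcp_rcvbuf MCA parameters; the "already done" reply above suggests current releases no longer force a default, leaving the kernel free to autotune.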