Yes, I agree that the CUDA support is more intrusive and ends up in different
areas. The problem is that the changes could not be simply isolated in a BTL.
1. To support the direct movement of GPU buffers, we often stage through host
memory: copying into host memory and then back out of it. These copies hav
As Jeff said in his commit message for r29059:
>> Turns out that AC_CHECK_DECLS is one of the "new style" Autoconf
>> macros that #defines the output to be 0 or 1 (vs. #define'ing or
>> #undef'ing it). So don't check for "#if defined(..."; just check for
>> "#if ...".
On Aug 23, 2013, at 8:10 A
Why is the 1.7 changeset different from the trunk changeset? Specifically,

#if defined(HAVE_IBV_LINK_LAYER_ETHERENET)

is changed to

#if HAVE_DECL_IBV_LINK_LAYER_ETHERNET

instead of

#if defined(HAVE_DECL_IBV_LINK_LAYER_ETHERNET)
Rolf,
On Aug 22, 2013, at 19:24 , Rolf vandeVaart wrote:
> Hi George:
>
> The reason it tainted the PML is because the CUDA IPC support makes use of
> the large message RDMA protocol of the PML layer. The smcuda btl starts up,
> but does not initially support any large message RDMA (RGET,RPU
Some questions are easier than others …
On Aug 23, 2013, at 07:54 , mahesh wrote:
> I know that it's a huge code base, but I am looking for specific answers,
> like: does OpenMPI use sockets?
When Open MPI is using TCP, it does indeed use sockets.
> Can programs written without mpi libraries be
I know that it's a huge code base, but I am looking for specific answers,
like: does OpenMPI use sockets? Can programs written without MPI libraries be
run on clusters using ORTE (with some changes)? It would be really helpful if
at least these and related doubts could be cleared up.
Thanks,
Mahesh