-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Pak Lui
Sent: Monday, February 28, 2011 11:30 AM
To: Open MPI Users
Subject: Re: [OMPI users] anybody tried OMPI with gpudirect?
Hi Brice,
You will need MLNX_OFED with GPUDirect support in order for this to work. I
will check whether there's a release of it that supports SLES and let you know.
You can check that the GPUDirect module parameters are present under sysfs:
/sys/module/ib_core/parameters/gpu_direct_shares
/sys/module/ib_core/parameters/gpu_direct_pages
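(A quick way to verify, assuming the MLNX_OFED ib_core module is loaded, would be:

ls /sys/module/ib_core/parameters/ | grep gpu_direct

If the gpu_direct_* entries are missing, the installed OFED presumably lacks
the GPUDirect support.)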
Regards,
- Pak
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Brice Goglin
Sent: Monday, February 28, 2011 11:14 AM
To: Open MPI Users
Subject: Re: [OMPI users] anybody tried OMPI with gpudirect?
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Brice Goglin
Sent: Monday, February 28, 2011 2:14 PM
To: Open MPI Users
Subject: Re: [OMPI users] anybody tried OMPI with gpudirect?
On 28/02/2011 19:49, Rolf vandeVaart wrote:
> For GPU Direct to work with InfiniBand, you need to get some updated OFED
> bits from your InfiniBand vendor.
>
> In terms of checking the driver updates, you can do a grep for the string
> get_driver_pages in the file /proc/kallsyms. If it is there, you have the
> updated bits.
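(In other words, the check he describes would be something like:

grep get_driver_pages /proc/kallsyms

with any output meaning the patched driver is in place.)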
On 28/02/2011 17:30, Rolf vandeVaart wrote:
> Hi Brice:
> Yes, I have tried OMPI 1.5 with gpudirect and it worked for me. You
> definitely need the patch or you will see the behavior just as you described,
> a hang. One thing you could try is disabling the large message RDMA in OMPI
> by adjusting the openib BTL flags:
>
> --mca btl_openib_flags 304
>
> Rolf
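(For reference, applied to a run this would look something like

mpirun --mca btl_openib_flags 304 -np 2 ./pingpong

where ./pingpong is just a placeholder name for the gpudirect test binary.)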
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Brice Goglin
Sent: Monday, February 28, 2011 11:16 AM
To: us...@open-mpi.org
Subject: [OMPI users] anybody tried OMPI with gpudirect?
Hello,
I am trying to play with NVIDIA's gpudirect. The test program given with
the gpudirect tarball just does a basic MPI ping-pong between two
processes that allocate their buffers with cudaMallocHost instead of
malloc. It seems to work with Intel MPI, but Open MPI 1.5 hangs in the
first send.
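(For anyone reading along without the tarball, here is a minimal sketch of
that kind of test, assuming the CUDA runtime and an MPI implementation are
installed; the buffer size, tag, and file name are made up, not taken from
the gpudirect test itself:

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int N = 1 << 20;  /* 1 MiB message, arbitrary choice */
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Pinned host allocation via the CUDA runtime instead of malloc();
       this is the allocation that triggers the hang being discussed. */
    if (cudaMallocHost((void **)&buf, N) != cudaSuccess) {
        fprintf(stderr, "cudaMallocHost failed\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Basic ping-pong: rank 0 sends, rank 1 echoes back. */
    if (rank == 0) {
        MPI_Send(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("ping-pong completed\n");
    } else {
        MPI_Recv(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    cudaFreeHost(buf);
    MPI_Finalize();
    return 0;
}

Built with something like "mpicc pingpong.c -o pingpong -lcudart". A 1 MiB
message should be well above the eager limit, so it would exercise the
large-message RDMA path that Rolf suggests disabling.)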