Re: [OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

2015-03-30 Thread Rolf vandeVaart
>-----Original Message-----
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Rolf vandeVaart
>Sent: Monday, March 30, 2015 9:37 AM
>To: Open MPI Users
>Subject: Re: [OMPI users] segfault during MPI_Isend when transmitting GPU
>arrays between multiple GPUs
>

Re: [OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

2015-03-30 Thread Rolf vandeVaart
>-----Original Message-----
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon
>Sent: Sunday, March 29, 2015 10:11 PM
>To: Open MPI Users
>Subject: Re: [OMPI users] segfault during MPI_Isend when transmitting GPU
>arrays between multiple GPUs
>
>Recei

Re: [OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

2015-03-29 Thread Lev Givon
Received from Rolf vandeVaart on Fri, Mar 27, 2015 at 04:09:58PM EDT:
> >-----Original Message-----
> >From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon
> >Sent: Friday, March 27, 2015 3:47 PM
> >To: us...@open-mpi.org
> >Subject: [OMPI users]

Re: [OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

2015-03-27 Thread Rolf vandeVaart
March 27, 2015 3:47 PM
>To: us...@open-mpi.org
>Subject: [OMPI users] segfault during MPI_Isend when transmitting GPU
>arrays between multiple GPUs
>
>I'm using PyCUDA 2014.1 and mpi4py (git commit 3746586, uploaded today)
>built against OpenMPI 1.8.4 with CUDA support activated

[OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

2015-03-27 Thread Lev Givon
I'm using PyCUDA 2014.1 and mpi4py (git commit 3746586, uploaded today) built against OpenMPI 1.8.4 with CUDA support activated to asynchronously send GPU arrays between multiple Tesla GPUs (Fermi generation). Each MPI process is associated with a single GPU; the process has a run loop that starts
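The setup described above (one GPU per MPI rank, GPU arrays passed directly to non-blocking MPI calls through a CUDA-aware Open MPI build) can be sketched roughly as follows. This is a minimal illustration of the pattern, not the poster's actual program: the array size, dtype, tags, and the rank-to-device mapping are all assumptions, and it presumes a CUDA-aware Open MPI so that `pycuda.gpuarray.GPUArray` buffers can be handed to `Isend`/`Irecv` without staging through host memory.

```python
# Hypothetical sketch of the reported pattern: each MPI process claims one
# GPU, then asynchronously sends a device-resident array to a peer rank.
# Run under mpirun with at least 2 ranks, e.g.: mpirun -np 2 python demo.py
from mpi4py import MPI
import numpy as np
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

drv.init()
# Associate each MPI process with a single GPU (mapping by rank is an
# assumption for this sketch).
ctx = drv.Device(rank).make_context()
try:
    n = 10
    if rank == 0:
        # Device array to transmit; with a CUDA-aware Open MPI the device
        # pointer is used directly by MPI_Isend.
        x_gpu = gpuarray.to_gpu(np.arange(n, dtype=np.float64))
        req = comm.Isend([x_gpu, MPI.DOUBLE], dest=1, tag=0)
        req.Wait()
    elif rank == 1:
        y_gpu = gpuarray.empty(n, np.float64)
        req = comm.Irecv([y_gpu, MPI.DOUBLE], source=0, tag=0)
        req.Wait()
        print(y_gpu.get())  # received contents, copied back for display
finally:
    ctx.pop()
```

Whether the `GPUArray` buffer is accepted directly depends on both the mpi4py version's buffer handling and Open MPI being built with `--with-cuda`; without CUDA support, the same code needs explicit `get()`/`to_gpu()` staging through host arrays.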