On Mon, 25 Aug 2008, Mi Yan wrote:

Does Open MPI always use the SEND/RECV protocol between heterogeneous
processors with different endianness?

I tried setting btl_openib_flags to 2, 4, and 6 respectively to allow RDMA,
but the bandwidth between the two heterogeneous nodes is slow, the same as
the bandwidth when btl_openib_flags is 1. It seems to me that SEND/RECV is
always used regardless of btl_openib_flags. Can I force Open MPI to use
RDMA between x86 and PPC? I only transfer MPI_BYTE, so we do not need
endianness support.
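
In case it helps, here is a minimal sketch of the kind of raw MPI_BYTE
transfer I am timing (the buffer size, hostnames, and mpirun line below are
only illustrative):

/* Launched across the two nodes, e.g. (illustrative hostnames):
 *   mpirun -np 2 --host x86node,ppcnode --mca btl_openib_flags 6 ./bytes
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    const int n = 4 * 1024 * 1024;   /* 4 MB payload; size is arbitrary */
    char *buf = malloc(n);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Raw bytes only: no datatype conversion should be needed,
         * regardless of the endianness of the receiver. */
        MPI_Send(buf, n, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, n, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}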

Which version of Open MPI are you using? In recent versions (I don't remember exactly when the change occurred, unfortunately), the decision between send/recv and RDMA was moved from being based solely on the architecture of the remote process to being based on both the architecture and the datatype. It's possible this has been broken again, but there definitely was some window (possibly only on the development trunk) when that worked correctly.
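
If you're not sure which release you're running, something like the
following prints the version a program was compiled against (the
OMPI_*_VERSION macros come from Open MPI's mpi.h; running ompi_info on the
command line reports the same information):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* These macros are provided by Open MPI's mpi.h and identify the
     * release this program was compiled against. */
    printf("Compiled against Open MPI %d.%d.%d\n",
           OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
    MPI_Finalize();
    return 0;
}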

Brian
