The collective extensions are designed to support MPI and general multicast operations over IB fabrics that offer offloaded collectives. Where feasible, they follow MPI semantics as closely as possible. Unless otherwise stated, all members participating in a collective data operation must call the associated collective routine for the data transfer to complete, and the root member of a data operation receives its own portion of the collective data. In most cases, the root member can avoid sending or receiving data when such operations would be redundant: when the root's data is already "in place", the root member may set the send and/or receive buffer pointer argument to NULL.
Unlike standard DAPL data movement operations, which require registered memory and LMR objects, collective data movement operations take pointers into user virtual address space that the application does not need to pre-register. From a resource-usage point of view, the API user should be aware that the provider implementation may perform memory registrations and deregistrations on the application's behalf to accomplish a data transfer.

Most collective calls are asynchronous. Upon completion, an event is posted to the EVD specified when the collective was created.

A new collective driver for Mellanox FCA 2.1 on ConnectX-2 adapters is included with this patch set and is supported via all IB providers (cma, ucm, scm). See the following document for API details:

http://www.openfabrics.org/downloads/dapl/documentation/dat_collective_preview.pdf

_______________________________________________
ofw mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ofw
