On Nov 3, 2008, at 10:39 AM, Eugene Loh wrote:
> Main answer: no great docs to look at. I think I've asked some OMPI experts and that was basically the answer they gave me.
This is unfortunately the current state of the art -- no one has had time to write up good docs.
Galen pointed to the new papers -- our main PML these days is "ob1" (teg died a long time ago).
PML = Point-to-point messaging layer; it's basically the layer that sits right behind MPI_SEND and friends.
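To make "right behind MPI_SEND" concrete, here's a trivial (standard, nothing OMPI-specific) MPI program -- the MPI_Send/MPI_Recv calls below drop straight into whichever PML was selected at runtime:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (0 == rank) {
            /* this call lands directly on the selected PML (ob1 or cm) */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (1 == rank) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 got %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }

You can force which PML gets used on the mpirun command line, e.g. "mpirun -np 2 --mca pml ob1 ./a.out" (or "--mca pml cm" on a machine with a matching-capable transport).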
The ob1 PML uses BTL modules underneath. BTL = Byte transfer layer; individual modules that send bytes back and forth over individual transports (e.g., shared memory, TCP, openfabrics, etc.). There's a BTL for each of the major transports that we support. The protocols that ob1 uses are described nicely in the papers that Galen sent, but the specific function interfaces are currently only described in ompi/mca/btl/btl.h itself.
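Just to give a flavor of the abstraction -- this is NOT the real interface, the names and signatures below are made up for illustration; see btl.h for the real thing -- a BTL module is conceptually a small bundle of function pointers for moving raw bytes over one transport:

    /* Made-up sketch, drastically simplified; the real module struct
       and signatures live in ompi/mca/btl/btl.h. */
    #include <stddef.h>

    struct toy_btl_module {
        /* push 'len' raw bytes at a peer over this one transport */
        int (*send_bytes)(void *endpoint, const void *buf, size_t len);
        /* poll the transport and advance any outstanding fragments */
        int (*progress)(void);
    };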
Alternatively, we have a "cm" PML which uses MTL modules underneath. MTL = Matching transport layer; it's basically for transports that expose very MPI-like interfaces (e.g., elan, tports, PSM, portals, MX). This cm component is extremely thin; it basically provides a shim between Open MPI and the underlying transport.
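Again purely as an illustrative sketch with made-up names (the real interface lives under ompi/mca/mtl/): the key difference from a BTL is that the transport itself understands MPI-style matching on (communicator, peer, tag), so cm has almost nothing left to do:

    /* Made-up names, just to show the shape of the abstraction. */
    #include <stddef.h>

    struct toy_mtl_module {
        /* the transport itself does the MPI-style matching on
           (communicator, peer, tag) -- cm just passes these through */
        int (*isend)(void *comm, int dest, int tag,
                     const void *buf, size_t len, void **request);
        int (*irecv)(void *comm, int src, int tag,
                     void *buf, size_t len, void **request);
    };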
The big difference between cm and ob1 is that ob1 is a progress engine that drives multiple transport interfaces (e.g., shared memory, TCP, openfabrics, etc. -- and therefore potentially multiple BTL module instances), while cm is a thin shim that simply translates between OMPI and the back-end interface -- cm will only ever use *ONE* MTL module instance. Specifically, that one MTL module is expected to do all the progression, striping, or whatever else it needs to do to move bytes from A to B by itself, with very little/no help from OMPI's infrastructure.
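In (made-up, not-real-OMPI) code, the structural difference looks roughly like this:

    /* Illustration only: ob1 itself drives progress across all of the
       BTL modules it selected; cm delegates to its single MTL and
       otherwise stays out of the way. */
    #include <stddef.h>

    typedef int (*progress_fn)(void);

    static void ob1_style_progress(progress_fn btl_progress[], size_t nbtls) {
        for (size_t i = 0; i < nbtls; ++i) {
            btl_progress[i]();      /* poll shared memory, TCP, openfabrics, ... */
        }
    }

    static void cm_style_progress(progress_fn mtl_progress) {
        mtl_progress();             /* the one MTL does everything itself */
    }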
Does that help some?

-- 
Jeff Squyres
Cisco Systems