What: As per the discussion at the developer's meeting in June, add an
interface for performance variables to the MCA base. The interface is
implemented at the MCA level so that performance variables can be exposed from
any layer (opal, orte, and ompi). For more information on performance
variables, please see MPI 3.0 section 14.3.7.

When: I want to get this new API right the first time, so I am setting a
relatively long timeout: 2 weeks (July 16, 2013). Please take a look and let me
know what should change before it is committed. Because this is a
well-contained change, I intend to CMR it to 1.7.

The working tree can be found on GitHub: https://github.com/hjelmn/ompi-mca-var
(branch: mpit_pvar).


Some high-level design details:

 1) When registering a performance variable with mca_base_pvar_register(), the
caller provides three functions: read, which reads the current value of the
variable (may be NULL); write, which writes the current value (may be NULL --
not used at this stage); and notify, which delivers notifications when handles
are allocated/freed/started/stopped (may be NULL). A sketch of all three
appears after this list.

 2) If a notification function is provided, it MUST return the number of
values associated with the variable/bound MPI object combination (see the
examples in pml/ob1) when notified that a handle has been bound. See the MPI_T
documentation for more information.

 3) If the number of values associated with a variable is dependent on the
bound object, the caller MUST provide a notify function. The function MUST
return this count when a performance variable handle is bound.

 4) If either the read or write function is NULL, a default function is used.
The default functions read from/write to a context pointer specified at
registration time (see the documentation in mca_base_pvar.h). The type of this
pointer is determined by the variable type supplied at registration
(int/unsigned/double/etc.). A sketch of this is shown below as well.

 5) It should be relatively straightforward to add variables to cover all 
existing peruse functionality. We do not intend to deprecate or remove peruse 
at this time.
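
To make the callback contract in 1) through 3) concrete, here is a rough
sketch of how a component could register a per-communicator performance
variable. It follows the documentation in mca_base_pvar.h, but the helper
name example_unex_msgq_length(), the flag/class constants, and the exact
argument order of mca_base_pvar_register() shown here are illustrative only --
check the header on the branch for the authoritative prototypes.

#include "opal/mca/base/mca_base_pvar.h"
#include "ompi/communicator/communicator.h"

/* Illustrative read callback: copies the variable's current value(s) into the
 * caller-supplied buffer. obj is the MPI object the handle is bound to (a
 * communicator in this sketch). */
static int example_unex_msgq_read (const struct mca_base_pvar_t *pvar,
                                   void *value, void *obj)
{
    ompi_communicator_t *comm = (ompi_communicator_t *) obj;
    unsigned *values = (unsigned *) value;

    for (int i = 0 ; i < ompi_comm_size (comm) ; ++i) {
        /* example_unex_msgq_length() is a hypothetical helper that returns
         * the component's per-peer queue length */
        values[i] = example_unex_msgq_length (comm, i);
    }

    return OMPI_SUCCESS;
}

/* Illustrative notify callback: when a handle is bound, report how many
 * values the variable has for the bound object (one per peer here). */
static int example_unex_msgq_notify (struct mca_base_pvar_t *pvar,
                                     mca_base_pvar_event_t event,
                                     void *obj, int *count)
{
    if (MCA_BASE_PVAR_HANDLE_BIND == event) {
        *count = ompi_comm_size ((ompi_communicator_t *) obj);
    }

    return OMPI_SUCCESS;
}

/* Registration, typically done from the component's register function. */
static int example_component_register_pvars (void)
{
    (void) mca_base_pvar_register ("ompi", "pml", "example",
                                   "unexpected_msgq_length",
                                   "Number of unexpected messages queued per peer",
                                   OPAL_INFO_LVL_4, MPI_T_PVAR_CLASS_LEVEL,
                                   MCA_BASE_VAR_TYPE_UNSIGNED_INT, NULL,
                                   MPI_T_BIND_MPI_COMM, MCA_BASE_PVAR_FLAG_READONLY,
                                   example_unex_msgq_read, NULL /* write */,
                                   example_unex_msgq_notify, NULL /* ctx */);

    return OMPI_SUCCESS;
}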

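For 4), a sketch of leaning on the default read function by passing a context
pointer instead of a read callback. The counter name here is again made up,
and the exact position of the context argument is an assumption.

/* Hypothetical counter owned by the component. Passing its address as the
 * context pointer lets the default read function return its current value;
 * its C type matches the registered variable type (unsigned long). */
static unsigned long example_bytes_received = 0;

static int example_register_simple_pvar (void)
{
    (void) mca_base_pvar_register ("ompi", "pml", "example", "bytes_received",
                                   "Total bytes received by this component",
                                   OPAL_INFO_LVL_4, MPI_T_PVAR_CLASS_COUNTER,
                                   MCA_BASE_VAR_TYPE_UNSIGNED_LONG, NULL,
                                   MPI_T_BIND_NO_OBJECT, MCA_BASE_PVAR_FLAG_READONLY,
                                   NULL /* read: use default */, NULL /* write */,
                                   NULL /* notify */,
                                   &example_bytes_received /* ctx */);

    return OMPI_SUCCESS;
}
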
-Nathan Hjelm
HPC-3, LANL
