Ummm... did you really mean a timeout of Monday? Could you delay that a bit so 
we can actually have time to look at it?


On Apr 5, 2013, at 11:56 AM, Nathan Hjelm <hje...@lanl.gov> wrote:

> Also, please look at the thread-level support. We had some discussion at the 
> forum about what level should be returned depending on which function is 
> called first (MPI_T_init_thread or MPI_Init_thread). I don't think it was 
> clarified what should be done, but since it will be addressed in future 
> errata, what is in the current implementation should be fine for now.
> 
> -Nathan
> 
> On Fri, Apr 05, 2013 at 12:52:12PM -0600, Nathan Hjelm wrote:
>> What: Add initial support for the MPI 3.0 tools interface (MPI_T). Initial 
>> support includes full support for the MPI_T_cvar and MPI_T_category 
>> interfaces. No pvars are available at this time; support for pvars will be 
>> added later.
>> 
>> Why: To be MPI 3.0 compliant, the MPI_T interface must be implemented. This 
>> RFC implements the complete interface.
>> 
>> When: Monday, April 8, 2013. We can make adjustments to the implementation 
>> after it is committed to the trunk. The only changes that really need to be 
>> reviewed are ompi/include/mpi.h.in and the structure of the incoming code.
>> 
>> The changes can be found @ 
>> https://github.com/hjelmn/ompi-mca-var/tree/mpit-commit
>> 
>> Jeff, I added the MPI_T error codes to mpif-values.pl even though they will 
>> never be returned by a Fortran function. I don't know whether that was 
>> necessary or not. Please advise.
>> 
>> Look at:
>> https://github.com/hjelmn/ompi-mca-var/commit/689d7d5d95f982794bcba8afb385cdce80bf75e1
>> https://github.com/hjelmn/ompi-mca-var/commit/80338e741b1749bb235b9cf45e0db579efe4d61d
>> 
>> -Nathan
>> _______________________________________________
>> devel mailing list
>> de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/devel
