On Nov 18, 2011, at 11:50 , Hugo Daniel Meyer wrote:

> 
> 2011/11/18 George Bosilca <bosi...@eecs.utk.edu>
> 
> On Nov 18, 2011, at 11:14 , Hugo Daniel Meyer wrote:
> 
>> 2011/11/18 George Bosilca <bosi...@eecs.utk.edu>
>> 
>> On Nov 18, 2011, at 07:29 , Hugo Daniel Meyer wrote:
>> 
>>> Hello again.
>>> 
>>> I was doing some tracing into the PML_OB1 files. I started to follow an 
>>> MPI_Ssend(), trying to find where a message is stored (on the sender side) 
>>> if it is not sent until the receiver posts the recv, but I didn't find that place. 
>> 
>> Right, you can't find this as the message is not stored on the sender. The 
>> pointer to the send request is sent encapsulated in the matching header, and 
>> the receiver will provide it back once the message has been matched (this 
>> means the data is now ready to flow).
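>> 
>> (A simplified sketch, for reference, of what such a matching header might 
>> carry; the field names below are illustrative only, not the actual 
>> definitions from ompi/mca/pml/ob1/pml_ob1_hdr.h:) 
>> 
>>   #include <stdint.h> 
>> 
>>   /* Rendezvous-style match header: carries an opaque pointer to the 
>>    * sender's request, which the receiver hands back once the message 
>>    * has been matched, signalling that the data may start to flow. */ 
>>   struct example_rndv_hdr { 
>>       uint16_t hdr_ctx;      /* communicator context id */ 
>>       int32_t  hdr_src;      /* source rank */ 
>>       int32_t  hdr_tag;      /* message tag */ 
>>       uint64_t hdr_msg_len;  /* total message length */ 
>>       void    *hdr_src_req;  /* sender request pointer, echoed back 
>>                                 by the receiver on match */ 
>>   };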
>> 
>> So, what you're saying is that the sender only sends the header, and when 
>> the receiver posts the recv it sends the header back so the sender can 
>> start sending the data? Am I getting it right? If so, the data stays on 
>> the sender, but where is it stored? 
> 
> If we consider rendez-vous messages, the data remains in the sender buffer 
> (aka the buffer provided by the upper level to the MPI_Send function). 
> 
> Yes, so I will only need to save the headers of the messages (where the 
> status is incomplete), and then maybe just call the upper-level MPI_Send 
> again. A question here: the headers are not marked as pending (at least I 
> think so), so my only approach might be to create a list of pending headers 
> and store there the pointer to the send, then try to identify its 
> corresponding upper-level MPI_Send and retry it in case of failure. Is this 
> a correct approach? 
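> 
> A minimal sketch of the pending list I have in mind (all names here are 
> hypothetical, not existing Open MPI symbols): 
> 
>   #include <stddef.h> 
> 
>   /* One entry per send whose handshake is still incomplete, keeping 
>    * enough information to re-issue the upper-level send after a 
>    * failure. */ 
>   typedef struct pending_send { 
>       struct pending_send *next;  /* singly linked list of pending sends */ 
>       void       *send_req;       /* pointer to the PML send request */ 
>       const void *user_buf;       /* upper-level buffer, owned by the 
>                                      application until completion */ 
>       size_t      count;          /* element count of the original send */ 
>       int         dst;            /* destination rank */ 
>       int         tag;            /* message tag */ 
>   } pending_send_t; 
> 
>   static pending_send_t *pending_head = NULL;  /* headers awaiting completion */ 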

Look in mca/vprotocol/base to see how we deal with the send requests in our 
message logging protocol. We hijack the send request list and replace the 
requests with our own, allowing us to chain all active requests. This makes 
the tracking of active requests very simple, and minimizes the impact on the 
overall code.
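
A rough sketch of the idea (simplified, with placeholder names rather than 
the actual vprotocol symbols): the requests handed out by the free list are 
made slightly larger, and the extra space links every in-flight send into a 
single chain:

  /* Stand-in for the PML's own send request type (fields elided). */
  typedef struct pml_send_request { int placeholder; } pml_send_request_t;

  /* Enlarged free-list element: the PML request plus a chaining link. */
  typedef struct tracked_send {
      pml_send_request_t   pml_req;     /* kept first, so pointers to the
                                           PML request remain valid */
      struct tracked_send *next_active; /* chain of all active sends */
  } tracked_send_t;

  static tracked_send_t *active_sends = NULL;

  /* When a send request starts, push it on the active chain; walking
   * active_sends then gives every incomplete send in one pass. */
  static void track_send(tracked_send_t *req)
  {
      req->next_active = active_sends;
      active_sends = req;
  }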

  george.
