Benjamin Kirk wrote:

>> Or don't bother writing that case at all?  In the places where nonblocking
>> receives make sense, often any blocking at all would lead to a deadlock -
>> you'd really want to handle vector sizing manually.
> 
> Agreed.  If someone is calling the nonblocking methods they pretty much have
> to have handled the resizing already.  Perhaps it is just sufficient to
> assert the vector is not empty to guard against the truly clueless
> invocation...

I don't want to rule out the case where a user actually tries to send an 
empty vector.  Cases where a processor has data to send only to a few 
neighbors come up all the time with good partitioning.  Most of the time 
(but in theory not all the time!) intelligent user code can detect when 
that happens and avoid even calling Parallel::send/receive for the 
disjoint processor pairs, but at worst I'd like to spit out a warning 
when an empty vector is sent, not stop the code from continuing.
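
To be concrete, all I'm picturing on the send side is something along 
these lines (untested sketch; the signature and the warning mechanism 
are placeholders for whatever we actually settle on):

  #include <mpi.h>
  #include <iostream>
  #include <vector>

  // Sketch: warn about an empty send, but still perform it, so the
  // matching receive on the other processor completes normally.
  // A zero-count MPI_Send is perfectly legal.
  template <typename T>
  void send (const unsigned int dest, const std::vector<T> &buf,
             const MPI_Datatype type, const int tag, const MPI_Comm comm)
  {
    if (buf.empty())
      std::cerr << "Warning: sending an empty vector to processor "
                << dest << std::endl;

    MPI_Send (buf.empty() ? NULL : (void*)(&buf[0]),
              (int)buf.size(), type, (int)dest, tag, comm);
  }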

> BTW, I have decided class methods which return void are wasting an
> opportunity. 

I may disagree, but I'd like to hear your reasoning.

I was actually just thinking the opposite: we've been sloppy about 
checking all our MPI error code return values, and it would be nicer 
to have an interface that returned void and threw an exception if 
anything went wrong.
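
i.e. something like this, just to sketch the idea (hypothetical names, 
and of course it only buys us anything once the communicator's error 
handler is set to MPI_ERRORS_RETURN):

  #include <mpi.h>
  #include <stdexcept>
  #include <string>

  // Sketch: turn a non-success MPI return code into an exception
  // instead of a silently ignored int.
  inline void check_mpi (const int error_code, const std::string &call_name)
  {
    if (error_code != MPI_SUCCESS)
      {
        char msg[MPI_MAX_ERROR_STRING];
        int  len = 0;
        MPI_Error_string (error_code, msg, &len);
        throw std::runtime_error (call_name + " failed: "
                                  + std::string (msg, len));
      }
  }

  // e.g.
  // check_mpi (MPI_Recv (&buf[0], buf.size(), type, src, tag, comm, &status),
  //            "MPI_Recv");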

> this is just too cumbersome:
> 
> Parallel::receive (0, (vec.clear(), vec));

I disagree; people don't use the comma operator enough.  ;-)

> What about instead using MPI_Probe on the receiving side to get the message
> size and resizing the buffer when need be?

That would be very smooth.

> It avoids the unnecessary
> latency hit of the additional MPI_Send/MPI_Recv associated with the other
> option.  I think we could *always* do that for the blocking methods without
> much performance hit at all.

Yeah, that sounds like it would work.  There would be no need for a 
presized_send() function, or for a distinction between presized and 
non-presized blocking receives.  Nonblocking receives would all have to 
be presized (unless we can do some sort of MPI_Iprobe magic?), but 
that's not a huge problem for me.

Unless MPI_Probe involves some performance hit (and I don't see why it 
should), this is the way to go.
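
For the blocking receive I'm imagining roughly the following (untested 
sketch; the Parallel:: wrapping and datatype handling are omitted and 
the names are placeholders):

  #include <mpi.h>
  #include <vector>

  // Sketch: probe for the incoming message, size the buffer to match,
  // then do the actual receive; no separate size message needed.
  template <typename T>
  void receive (const int src, std::vector<T> &buf,
                const MPI_Datatype type, const int tag, const MPI_Comm comm)
  {
    MPI_Status status;

    // Blocks until a matching message is pending, without receiving it.
    MPI_Probe (src, tag, comm, &status);

    // How many entries of 'type' does the pending message contain?
    int count = 0;
    MPI_Get_count (&status, type, &count);

    buf.resize (count);

    MPI_Recv (buf.empty() ? NULL : (void*)(&buf[0]), count, type,
              src, tag, comm, &status);
  }
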
---
Roy
