> We are currently working on the development of a new provider, and we are
> trying to implement resource management RX. We are facing an issue with the
> MULTI_RECV option, and we have several questions about it.

FYI - if you're working on a provider, the ofiwg mailing list (copied) is a 
better fit.

> In our case, FI_MULTI_RECV mode is really difficult to handle with good
> performance when trying to implement buffering (it prevents us from setting
> a clear limit between resources for small messages and large ones).
> 
> In our mind, MULTI_RECV is usually dedicated to small messages.

If your provider cannot reasonably support this capability, my suggestion is 
not to support it.  Other providers, such as tcp and verbs, do not support it 
either.  Higher-level providers support this through software emulation.
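For reference, the software emulation amounts to carving incoming messages out of one posted buffer and releasing the buffer once too little space remains. A minimal self-contained sketch of that bookkeeping (the struct and function names here are hypothetical, not libfabric API; the threshold plays the role of FI_OPT_MIN_MULTI_RECV):

```c
#include <stddef.h>

/* Hypothetical bookkeeping for one posted multi-recv buffer.
 * Messages are placed back to back into the buffer; the buffer is
 * released back to the application (completion with FI_MULTI_RECV
 * set) once the remaining space drops below a minimum threshold. */
struct multi_recv_buf {
	char   *base;      /* start of the posted buffer */
	size_t  len;       /* total size */
	size_t  offset;    /* next free byte */
	size_t  min_free;  /* release threshold, cf. FI_OPT_MIN_MULTI_RECV */
};

/* Carve space for an incoming message of msg_len bytes.
 * Returns the destination pointer, or NULL if it does not fit.
 * *released is set when the buffer should be handed back. */
static char *multi_recv_claim(struct multi_recv_buf *b, size_t msg_len,
			      int *released)
{
	if (msg_len > b->len - b->offset)
		return NULL;	/* message does not fit */

	char *dst = b->base + b->offset;
	b->offset += msg_len;
	*released = (b->len - b->offset) < b->min_free;
	return dst;
}
```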

The original requirement that drove this capability was centered around 
receiving small to medium sized messages.

> *     Is the MULTI_RECV option intended only for small messages, or for all
> messages?

The feature itself is not limited to small messages, from the application's 
point of view. 

> *     Would it be acceptable for a provider to set a limit on buffer size
> with MULTI_RECV mode?

IMO, it would be reasonable; it would just need to be documented.  If it 
becomes necessary, we could look at exposing the limit through the API somehow.

Would you be able to emulate MULTI_RECV for larger transfers, for example, by 
using some sort of rendezvous protocol to reserve a portion of the buffer?
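To make the rendezvous idea concrete: on receiving a rendezvous request (RTS) for a large message, the provider could reserve a region of the multi-recv buffer up front and pull the payload into it later (e.g. via RDMA read) instead of receiving it eagerly. A self-contained sketch of just the reservation step, with hypothetical names (not libfabric API):

```c
#include <stddef.h>

/* Hypothetical multi-recv buffer state, as seen by the
 * rendezvous path of an emulating provider. */
struct mr_buf {
	char  *base;
	size_t len;
	size_t offset;
};

/* Reserve up to `want` bytes for a pending rendezvous transfer,
 * accepting as little as `min_chunk` so a huge message can be
 * pulled in pieces across several reservations.  Returns the
 * reserved region (size via *got), or NULL if even the minimum
 * chunk no longer fits. */
static char *rndv_reserve(struct mr_buf *b, size_t want,
			  size_t min_chunk, size_t *got)
{
	size_t avail = b->len - b->offset;

	if (avail < min_chunk)
		return NULL;	/* buffer effectively exhausted */

	*got = want < avail ? want : avail;
	char *dst = b->base + b->offset;
	b->offset += *got;
	return dst;
}
```

The point of `min_chunk` is that a transfer larger than the remaining space need not fail outright; it can consume the tail of one buffer and continue into the next posted one.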

> *     If so, what would be a reasonable limit?

This really depends on the use case and your provider's capabilities.  As 
just a guess, relating it to some eager-limit message size seems reasonable.  
FWIW, I think the default eager size used in rxm is 16k to 128k, depending on 
how you define the eager size.  I'm aware of some hardware that restricts 
eager messages to a single MTU (which I think was 4k?).

- Sean
_______________________________________________
ofiwg mailing list
[email protected]
https://lists.openfabrics.org/mailman/listinfo/ofiwg