Doug Ledford wrote:
So, one of the options when creating a QP is the max inline data size.
If I understand this correctly, for any send up to that size, the
payload of that send will be transmitted to the receiving side along
with the request to send.

What it really means is that the payload is DMA'd to the HW on the local side in the work request itself, as opposed to being DMA'd down in a second transaction after the WR is DMA'd and processed. It has no end-to-end significance, other than reducing the latency needed to transfer the data.
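
To make that concrete, here is a rough, untested sketch of asking for inline support at QP creation time (the 256-byte size, RC QP type, and queue depths are just placeholder values, not anything from this thread):

#include <infiniband/verbs.h>
#include <string.h>

struct ibv_qp *create_qp_with_inline(struct ibv_pd *pd, struct ibv_cq *cq)
{
	struct ibv_qp_init_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.send_cq = cq;
	attr.recv_cq = cq;
	attr.qp_type = IBV_QPT_RC;
	attr.cap.max_send_wr  = 64;
	attr.cap.max_recv_wr  = 64;
	attr.cap.max_send_sge = 1;
	attr.cap.max_recv_sge = 1;
	/* Request up to 256 bytes of inline data per send WR; on return,
	 * attr.cap.max_inline_data holds what the provider actually granted. */
	attr.cap.max_inline_data = 256;

	return ibv_create_qp(pd, &attr);
}

The cap values written back into attr on return are the ones that matter; the provider may grant more than you asked for.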

This reduces back-and-forth packet counts on
the wire in the case that the receiving side is good to go, because it
basically just responds with "OK, got it" and you're done.

I don't think this is true; definitely not with iWARP. INLINE is just an optimization to push small amounts of data down to the local adapter as part of the work request, thus avoiding two DMAs.

The trade-off,
of course, is that if there is a resource shortage on the receiving
side, it sends an RNR NAK back, and whatever payload data you
sent over the wire with the original request to send was just wasted
bandwidth, since it was thrown away on the receiving side.

So, if my understanding of that is correct, then inline data improves
latency and maximum bandwidth up until the point where the receiving
side starts to have resource problems; after that it wastes bandwidth and
doesn't help latency at all.  So, if a person wanted to write their
program to use inline data up until this point of congestion, then quit
using it until the congestion clears, how would they go about doing
that?

Even though you create the QP with the inline option, only WRs that pass in the IBV_SEND_INLINE flag will do inline processing, so you can control this functionality on a per-WR basis.
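
For example (again just an untested sketch; post_small_send, max_inline, and so on are names I'm making up here), you could flip the flag per WR based on the payload size:

#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_small_send(struct ibv_qp *qp, void *buf, uint32_t len,
		    uint32_t lkey, uint32_t max_inline)
{
	struct ibv_sge sge;
	struct ibv_send_wr wr, *bad_wr;

	sge.addr   = (uintptr_t) buf;
	sge.length = len;
	sge.lkey   = lkey;	/* not needed by the HW when the WR goes inline */

	memset(&wr, 0, sizeof(wr));
	wr.wr_id      = (uintptr_t) buf;
	wr.sg_list    = &sge;
	wr.num_sge    = 1;
	wr.opcode     = IBV_WR_SEND;
	wr.send_flags = IBV_SEND_SIGNALED;

	/* Per-WR decision: only payloads that fit the inline limit go inline. */
	if (len <= max_inline)
		wr.send_flags |= IBV_SEND_INLINE;

	return ibv_post_send(qp, &wr, &bad_wr);
}

Here max_inline would be whatever cap.max_inline_data came back as when you created the QP. One nice side effect of IBV_SEND_INLINE is that the buffer is copied at post time, so it can be reused immediately without waiting for the send completion.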


Would I have to set the RNR retry count to something ridiculously
small and take the RNR error (along with the corresponding queue flush
and the pain that brings in terms of requeuing all the flushed events)
and do an ibv_modify_qp to turn off inline data until some number of
sends have completed without error?  Or is there possibly a counter
somewhere that I can monitor?  Or should I just forget about trying to
optimize this part of my code?

Separate question: when using an SRQ, let's say you have more than one
QP associated with that SRQ. Does a single QP going into the error
state flush the SRQ requests, or only the send requests on the QP that's
in error?  And if you get down to only one QP left attached to the SRQ,
and you then set that QP to the error state, will it flush the SRQ
entries?  Reading everything I can on SRQs, it's not clear to me how you
might flush one, especially since setting the SRQ itself to the error state
specifically does *not* flush the posted and unused recv requests.


