Hi,

The patch below enables setting max_inline directly through ib_qp_create_t. I already sent it a month ago, but an ABI-breaking issue arose; since the ABI will be broken in the next release anyway, I am resending it.

The user may specify in ib_qp_create_t the maximum size of the inline messages he will send, and the HCA will be prepared for that. If the requested max_inline exceeds the HCA limit, ib_create_qp fails. The HCA may set max_inline above the user's request (but never below it), which can be observed using ib_query_qp.

Thanks,
Reuven.
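To illustrate the intended usage, here is a minimal, untested sketch. Only sq_max_inline and the fields visible in the patch are taken from the change itself; the other ib_qp_create_t field names (h_sq_cq, h_rq_cq) and the handles h_pd, h_sq_cq, h_rq_cq are assumed to exist from earlier setup, and error handling is omitted:

/* Untested sketch of the intended usage; h_pd, h_sq_cq and h_rq_cq are
 * assumed to have been created earlier. */
ib_qp_create_t  qp_create;
ib_qp_handle_t  h_qp;
ib_api_status_t status;

cl_memclr( &qp_create, sizeof(qp_create) );
qp_create.qp_type       = IB_QPT_RELIABLE_CONN;
qp_create.sq_depth      = 128;
qp_create.rq_depth      = 128;
qp_create.sq_sge        = 4;
qp_create.rq_sge        = 4;
qp_create.sq_max_inline = 64;   /* new field: largest inline send payload */
qp_create.h_sq_cq       = h_sq_cq;
qp_create.h_rq_cq       = h_rq_cq;
qp_create.sq_signaled   = TRUE;

/* Fails if 64 bytes exceeds what the HCA can provide. */
status = ib_create_qp( h_pd, &qp_create, NULL, NULL, &h_qp );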
Index: hw/mthca/user/mlnx_ual_qp.c
===================================================================
--- hw/mthca/user/mlnx_ual_qp.c (revision 1094)
+++ hw/mthca/user/mlnx_ual_qp.c (working copy)
@@ -120,7 +120,7 @@
attr.cap.max_recv_wr = p_create_attr->rq_depth;
attr.cap.max_send_sge = p_create_attr->sq_sge;
attr.cap.max_recv_sge = p_create_attr->rq_sge;
- attr.cap.max_inline_data = 0; /* absent in IBAL */
+ attr.cap.max_inline_data = p_create_attr->sq_max_inline;
attr.qp_type = p_create_attr->qp_type;
attr.sq_sig_all = p_create_attr->sq_signaled;
Index: inc/iba/ib_types.h
===================================================================
--- inc/iba/ib_types.h (revision 1094)
+++ inc/iba/ib_types.h (working copy)
@@ -9786,6 +9786,7 @@
{
ib_qp_type_t qp_type;
+ uint32_t sq_max_inline;
uint32_t sq_depth;
uint32_t rq_depth;
uint32_t sq_sge;
@@ -9803,6 +9804,10 @@
* type
* Specifies the type of queue pair to create.
*
+* sq_max_inline
+* Maximum payload that can be inlined directly in a WQE, eliminating
+* protection checks and additional DMA operations.
+*
* sq_depth
* Indicates the requested maximum number of work requests that may be
* outstanding on the queue pair's send queue. This value must be less
