> Unfortunately for IB the size of an RMPP response and buffer cannot be
> controlled by the kernel.  So if an application has a large response to
> send, the entire buffer must be copied into the kernel and the kernel
> cannot decide on its own segmentation boundaries.  Hence the ability for
> selected management applications to control and limit the amount of kernel
> memory space is desirable.  These issues become serious at scale when
> larger RMPP responses are needed and more clients may also be issuing
> requests.  The two can combine and result in O(N^2) behavior for kernel
> memory footprint, where N is the cluster node count or potentially the
> cluster CPU core count.

There's no requirement that an entire RMPP response be copied into the kernel 
before being sent.  The current implementation does this, but that behavior can 
be modified to copy the data only when needed.  The number of outstanding sends 
a client may have can also be restricted, to throttle back a client trying to 
send large messages to everyone on the fabric.
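The throttling idea could look something like the following.  This is only a sketch of per-client send accounting; the names and the quota value are hypothetical, not taken from the ib_umad code:

```c
#include <stdbool.h>

/* Hypothetical per-client send quota; illustrative only. */
#define MAX_OUTSTANDING_SENDS 16

struct mad_client {
    int outstanding;  /* sends queued in the kernel, not yet completed */
};

/* Returns false (the kernel might map this to -EAGAIN) once the client
 * hits its quota, so one client cannot pin unbounded kernel memory. */
static bool client_send_begin(struct mad_client *c)
{
    if (c->outstanding >= MAX_OUTSTANDING_SENDS)
        return false;
    c->outstanding++;
    return true;
}

static void client_send_complete(struct mad_client *c)
{
    c->outstanding--;
}
```

With this in place, a client flooding the fabric blocks only itself; other clients' quotas are unaffected.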

> >There are QLogic customers who have requested the ability to perform
> >RMPP transaction handling in user space.  This was an option in our old
> >proprietary stack and there are a few customers still using it which
> >need a way to forward migrate to OFED while containing the scope of
> >their application changes.  While we have developed appropriate "shim"
> >libraries to allow their applications to migrate, we can't simulate/shim
> >rmpp processing without some kernel support.

There's nothing that prevents RMPP from running between an application and a 
library, with the library exchanging reassembled MADs with the kernel.  It may 
not be ideal from your perspective, but I don't see why existing applications 
couldn't be supported that way.
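The library-side reassembly could be sketched as below.  The segment size and function name are hypothetical; the point is only that a shim can present the application with the segment-level interface it expects while handing the kernel one contiguous MAD payload:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative payload size per RMPP segment (not the spec value). */
#define RMPP_DATA_PER_SEG 220

/* Reassemble 'nsegs' fixed-size segments into one contiguous buffer
 * that the library would then pass to the kernel as a single MAD. */
static size_t reassemble(const unsigned char segs[][RMPP_DATA_PER_SEG],
                         size_t nsegs, unsigned char *out)
{
    for (size_t i = 0; i < nsegs; i++)
        memcpy(out + i * RMPP_DATA_PER_SEG, segs[i], RMPP_DATA_PER_SEG);
    return nsegs * RMPP_DATA_PER_SEG;
}
```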


--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
