On Jul 16, 2015, at 4:49 PM, Jason Gunthorpe <[email protected]>
wrote:
> On Thu, Jul 16, 2015 at 04:07:04PM -0400, Chuck Lever wrote:
>
>> The MRs are registered only for remote read. I don’t think
>> catastrophic harm can occur on the client in this case if the
>> invalidation and DMA sync come late. In fact, I’m unsure why
>> a DMA sync is even necessary as the MR is invalidated in this
>> case.
>
> For RDMA, the worst case would be some kind of information leakage or
> machine check halt.
>
> For read side the DMA API should be called before posting the FRWR, no
> completion side issues.
It is: rpcrdma_map_one() is called from ->ro_map in both the RDMA READ
and WRITE cases.
Just to confirm: you’re saying that for MRs that are read-accessed,
no matching ib_dma_unmap_{page,single}() is required?
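For context, rpcrdma_map_one() boils down to something like this
(paraphrased from memory, so the field names are approximate):

/* The segment is DMA-mapped for device access before ->ro_map
 * posts the FRWR, so the HCA never sees an unmapped buffer. For
 * MRs the server will RDMA READ from, the direction passed in
 * is DMA_TO_DEVICE. */
static void rpcrdma_map_one(struct ib_device *device,
			    struct rpcrdma_mr_seg *seg,
			    enum dma_data_direction direction)
{
	seg->mr_dir = direction;
	seg->mr_dmalen = seg->mr_len;
	if (seg->mr_page)
		seg->mr_dma = ib_dma_map_page(device, seg->mr_page,
					offset_in_page(seg->mr_offset),
					seg->mr_dmalen, seg->mr_dir);
	else
		seg->mr_dma = ib_dma_map_single(device, seg->mr_offset,
					seg->mr_dmalen, seg->mr_dir);
}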
>> In the case of incoming data payloads (NFS READ) the DMA sync
>> ordering is probably an important issue. The sync has to happen
>> before the ULP can touch the data, 100% of the time.
>
> Absolutely, the sync is critical.
>
>> That could be addressed by performing a DMA sync on the write
>> list or reply chunk MRs right in the RPC reply handler (before
>> xprt_complete_rqst).
>
> That sounds good to me, much more in line with what I'd expect to
> see. The fmr unmap and invalidate post should also be in the reply
> handler (for flow control reasons, see below)
Sure. It might be possible to move both the DMA unmap and the
invalidate into the reply handler without a lot of surgery.
We’ll see.
There would be some performance cost. That’s unfortunate because
the scenarios we’re guarding against are exceptionally rare.
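The shape of it would be something like this, called before
xprt_complete_rqst() (a hypothetical helper, untested, and the
field names are approximate):

/* Make the RDMA-written payload visible to the CPU before the
 * RPC layer touches it; invalidate/unmap would follow, and only
 * then xprt_complete_rqst(). */
static void rpcrdma_sync_reply_chunks(struct rpcrdma_xprt *r_xprt,
				      struct rpcrdma_req *req)
{
	struct ib_device *device = r_xprt->rx_ia.ri_device;
	unsigned int i;

	for (i = 0; i < req->rl_nchunks; i++) {
		struct rpcrdma_mr_seg *seg = &req->rl_segments[i];

		ib_dma_sync_single_for_cpu(device, seg->mr_dma,
					   seg->mr_dmalen,
					   DMA_FROM_DEVICE);
	}
}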
>>> The only absolutely correct way to run the RDMA stack is to keep track
>>> of SQ/SCQ space directly, and only update that tracking by processing
>>> SCQEs.
>>
>> In other words, the only time it is truly safe to do a post_send is
>> after you’ve received a send completion that indicates you have
>> space on the send queue.
>
> Yes.
>
> Use a scheme where you suppress signaling and use the SQE accounting to
> request a completion entry and signal around every 1/2 length of the
> SQ.
Actually, Sagi and I have found we can’t leave more than about 80
sends unsignalled, no matter how long the pre-allocated SQ is.
xprtrdma caps the maximum number of unsignalled sends at 20,
though, as a margin of error. That gives about 95% send completion
mitigation.
Since most send completions are silenced, xprtrdma relies on seeing
the completion of a _subsequent_ WR.
So, if my reply handler were to issue a LOCAL_INV WR and wait for
its completion, then the completion of send WRs submitted before
that one, even if they are silent, is guaranteed.
In the cases where the reply handler issues a LOCAL_INV, waiting
for its completion before allowing the next RPC to be sent is
enough to guarantee space on the SQ, I would think.
For FMR and smaller RPCs that don’t need RDMA, we’d probably
have to wait on the completion of the RDMA SEND of the RPC call
message.
So, we could get away with signalling only the last send WR issued
for each RPC.
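In code, that might look something like this (a sketch only;
rpcrdma_post_rpc_chain() is a made-up name, not existing code):

/* Silence every send-class WR posted for an RPC (registration,
 * send, invalidate) except the last; the last one's completion
 * retires all of the silent WRs posted before it on this QP. */
static int rpcrdma_post_rpc_chain(struct ib_qp *qp,
				  struct ib_send_wr *chain)
{
	struct ib_send_wr *wr, *bad_wr;

	for (wr = chain; wr; wr = wr->next) {
		wr->send_flags &= ~IB_SEND_SIGNALED;
		if (!wr->next)
			wr->send_flags |= IB_SEND_SIGNALED;
	}

	return ib_post_send(qp, chain, &bad_wr);
}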
> Use the WRID in some way to encode the # SQEs each completion
> represents.
>
> I've used a scheme where the wrid is a wrapping index into
> an array of SQ length long, that holds any meta information..
>
> That makes it trivial to track SQE accounting and avoids memory
> allocations for wrids.
>
> Generically:
>
> posted_sqes -= (wc->wrid - last_wrid);
> for (i = last_wrid; i != wc->wrid; ++i)
>         complete(wr_data[i].ptr);
>
> Many other options, too.
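Fleshing that out the way I read it (the names here are mine,
not from any existing code):

#define SQ_SIZE 128			/* example SQ depth */

struct sq_slot {
	struct completion *done;	/* per-WR context */
};

static struct sq_slot wr_data[SQ_SIZE];	/* one slot per SQE */
static u32 last_wrid;			/* next slot to retire */
static u32 posted_sqes;			/* SQEs outstanding */

/* Each CQE retires its own slot plus every silent send posted
 * before it, so the walk wraps around the ring as needed.
 * Assumes fewer than SQ_SIZE WRs are ever outstanding. */
static void sq_reap(struct ib_wc *wc)
{
	u32 stop = ((u32)wc->wr_id + 1) % SQ_SIZE;

	while (last_wrid != stop) {
		if (wr_data[last_wrid].done)
			complete(wr_data[last_wrid].done);
		last_wrid = (last_wrid + 1) % SQ_SIZE;
		posted_sqes--;
	}
}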
>
> -----
>
> There is a bit more going on, too: *technically* the HCA owns the
> buffer until a SCQE is produced. The recv proves the peer will drop
> any re-transmits of the message, but it doesn't prove that the local
> HCA won't create a re-transmit. Lost acks or other protocol weirdness
> could *potentially* cause buffer re-read in the general RDMA
> framework.
>
> So if you use recv to drive re-use of the SEND buffer memory, it is
> important that the SEND buffer remain full of data to send to that
> peer and not be kfree'd, dma unmapped, or reused for another peer's
> data.
>
> kfree/dma unmap/etc may only be done on a SEND buffer after seeing a
> SCQE proving that buffer is done, or tearing down the QP and halting
> the send side.
The buffers the client uses to send an RPC call are DMA mapped once
when the transport is created, and a local lkey is used in the SEND
WR.
They are re-used for the next RPCs in the pipe, but as far as I can
tell the client’s send buffer contains the RPC call data until the
RPC request slot is retired (xprt_release).
I need to review the mechanism in rpcrdma_buffer_get() to see if
that logic does prevent early re-use.
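For reference, the send path amounts to roughly this (a sketch
only; rl_dma and ri_dma_lkey stand in for the real field names):

/* req->rl_dma was set up once by ib_dma_map_single() at transport
 * create time; it stays mapped for the life of the connection and
 * is reused for each RPC in the pipe. */
static int rpcrdma_post_rpc_send(struct rpcrdma_ia *ia, struct ib_qp *qp,
				 struct rpcrdma_req *req, unsigned int len)
{
	struct ib_send_wr wr, *bad_wr;
	struct ib_sge sge;

	sge.addr   = req->rl_dma;
	sge.length = len;
	sge.lkey   = ia->ri_dma_lkey;

	/* Flush the CPU-written call data to the device first. */
	ib_dma_sync_single_for_device(ia->ri_device, req->rl_dma,
				      len, DMA_TO_DEVICE);

	memset(&wr, 0, sizeof(wr));
	wr.opcode  = IB_WR_SEND;
	wr.sg_list = &sge;
	wr.num_sge = 1;
	return ib_post_send(qp, &wr, &bad_wr);
}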
--
Chuck Lever