Steve Wise wrote:
Craig Prescott wrote:

Hi Steve;

The SDP socket gets an associated MR when sdp_init_qp()
calls ib_get_dma_mr(). It looks to me like this drills down
into the provider layer and ultimately ends up calling
build_phys_page_list() from iwch_register_phys_mem().
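For reference, here is a minimal sketch of how I read that path
(the function name and error handling below are mine, not the actual
SDP sources):

#include <linux/err.h>
#include <rdma/ib_verbs.h>

/*
 * Minimal sketch of the MR setup path as I understand it; function and
 * variable names here are illustrative, not the actual SDP code.
 */
static int sdp_dma_mr_sketch(struct ib_pd *pd, struct ib_mr **mr_out)
{
	struct ib_mr *mr;

	/*
	 * ib_get_dma_mr() asks the provider for an MR covering all of
	 * physical memory; on cxgb3 this is where iwch_register_phys_mem()
	 * and build_phys_page_list() come into play.
	 */
	mr = ib_get_dma_mr(pd, IB_ACCESS_LOCAL_WRITE);
	if (IS_ERR(mr))
		return PTR_ERR(mr);

	*mr_out = mr;
	return 0;
}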

Unfortunately, when I try to read the struct ib_mr_attr back
via ib_query_mr(), the call fails.
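A sketch of the kind of check I mean (names are mine; the interesting
part is just that ib_query_mr() returns an error here):

#include <linux/kernel.h>
#include <rdma/ib_verbs.h>

/* Sketch only -- how I am reading the MR attributes back. */
static void dump_mr_attr_sketch(struct ib_mr *mr)
{
	struct ib_mr_attr attr;
	int ret;

	ret = ib_query_mr(mr, &attr);
	if (ret) {
		printk(KERN_ERR "ib_query_mr failed: %d\n", ret);
		return;
	}

	printk(KERN_INFO "lkey=0x%x rkey=0x%x size=%llu\n",
	       attr.lkey, attr.rkey, (unsigned long long)attr.size);
}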

When sdp_post_recv() calls ib_post_recv(), it looks to me
like a DMA mapping has been set up between the SDP private
receive buffers and the card.  The receive buffers are kmalloc'd
in sdp_init_qp().
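To make sure we are talking about the same thing, here is roughly what
I think happens per buffer (a sketch with my own names, not the real
sdp_post_recv() code):

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <rdma/ib_verbs.h>

/*
 * Sketch: the kmalloc'd buffer gets DMA-mapped and posted with the lkey
 * from the DMA MR.  Field and function names are illustrative only.
 */
static int post_one_recv_sketch(struct ib_qp *qp, struct ib_mr *dma_mr,
				void *buf, u32 len)
{
	struct ib_device *dev = qp->device;
	struct ib_recv_wr wr, *bad_wr;
	struct ib_sge sge;
	u64 dma_addr;

	dma_addr = ib_dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (ib_dma_mapping_error(dev, dma_addr))
		return -ENOMEM;

	sge.addr   = dma_addr;
	sge.length = len;
	sge.lkey   = dma_mr->lkey;	/* lkey from ib_get_dma_mr() */

	memset(&wr, 0, sizeof(wr));
	wr.wr_id   = (u64)(unsigned long)buf;
	wr.sg_list = &sge;
	wr.num_sge = 1;

	return ib_post_recv(qp, &wr, &bad_wr);
}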

I hope I have this right.  But it sounds like it is possible
I am hitting both issues you describe.

I guess one way to check is to drop my test nodes down to 4GB
or less, right?  They currently have 16GB.


Drop them down to 1 or 2GB and try it. With 4GB installed, the iommu still has to remap anything that lands above the 4GB boundary.

Awesome ;-)  rdma_accept() now returns zero, there are no more complaints
about opcodes and such, and the server now gets to RDMA_CM_EVENT_ESTABLISHED. Thanks!

Of course, the client panic'd at this point.  Little by little...

Sorry about that. I forgot about the 4GB limitation and get_dma_mr(). I guess the Chelsio driver should really just fail the get_dma_mr() call, since it doesn't properly support it.
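Something along these lines, just to illustrate what I mean (not actual
cxgb3 code):

#include <linux/err.h>
#include <linux/errno.h>
#include <rdma/ib_verbs.h>

/*
 * Purely illustrative -- not the real iwch_get_dma_mr().  Refusing the
 * call outright would make consumers like SDP fail cleanly at setup time
 * instead of doing bad DMA later on hosts with memory above 4GB.
 */
static struct ib_mr *iwch_get_dma_mr_sketch(struct ib_pd *pd, int acc)
{
	return ERR_PTR(-ENOSYS);
}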

There is one other experiment you could try: use lkey 0 for any SGL used in a send or receive work request. This maps to the zero STag in iWARP lingo. But I haven't tested that yet :)
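Concretely, I mean something like this when building the SGE (untested,
names mine):

#include <rdma/ib_verbs.h>

/*
 * Untested idea: use lkey 0 -- the iWARP "zero STag" -- instead of the
 * lkey returned by ib_get_dma_mr().  addr is still the DMA-mapped bus
 * address; only the key changes.
 */
static void fill_sge_zero_stag(struct ib_sge *sge, u64 dma_addr, u32 len)
{
	sge->addr   = dma_addr;
	sge->length = len;
	sge->lkey   = 0;
}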

I'll try to do this tomorrow.

Thanks again!
Craig

_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general

To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general

Reply via email to