Re: [PATCH v3 12/20] RDMA/rw: use dma_map_sgtable()

2021-10-05 Thread Max Gurtovoy via iommu



On 9/28/2021 10:43 PM, Jason Gunthorpe wrote:

> On Thu, Sep 16, 2021 at 05:40:52PM -0600, Logan Gunthorpe wrote:
>> dma_map_sg() now supports the use of P2PDMA pages so pci_p2pdma_map_sg()
>> is no longer necessary and may be dropped.
>>
>> Switch to the dma_map_sgtable() interface, which will allow for better
>> error reporting if the P2PDMA pages are unsupported.
>>
>> The change to sgtable also appears to fix a couple of subtle error-path
>> bugs:
>>
>>   - In rdma_rw_ctx_init(), dma_unmap would be called with an sg
>>     that could have been incremented from the original call, as
>>     well as an nents that was not the original number of nents
>>     passed when mapping.
>>   - Similarly in rdma_rw_ctx_signature_init(), both sg and prot_sg
>>     were unmapped with the incorrect number of nents.
>
> Those bugs should definitely get fixed. I might extract the sgtable
> conversion into a standalone patch to do it.


Yes, we need these fixes before this series can converge.

Looks good,

Reviewed-by: Max Gurtovoy 



> But as it is, this looks fine
>
> Reviewed-by: Jason Gunthorpe 
>
> Jason



Re: [PATCH v3 12/20] RDMA/rw: use dma_map_sgtable()

2021-09-29 Thread Logan Gunthorpe




On 2021-09-28 1:43 p.m., Jason Gunthorpe wrote:
> On Thu, Sep 16, 2021 at 05:40:52PM -0600, Logan Gunthorpe wrote:
>> dma_map_sg() now supports the use of P2PDMA pages so pci_p2pdma_map_sg()
>> is no longer necessary and may be dropped.
>>
>> Switch to the dma_map_sgtable() interface, which will allow for better
>> error reporting if the P2PDMA pages are unsupported.
>>
>> The change to sgtable also appears to fix a couple of subtle error-path
>> bugs:
>>
>>   - In rdma_rw_ctx_init(), dma_unmap would be called with an sg
>>     that could have been incremented from the original call, as
>>     well as an nents that was not the original number of nents
>>     passed when mapping.
>>   - Similarly in rdma_rw_ctx_signature_init(), both sg and prot_sg
>>     were unmapped with the incorrect number of nents.
> 
> Those bugs should definitely get fixed. I might extract the sgtable
> conversion into a standalone patch to do it.

Yes. I can try to split it off myself and send a patch later this week.

Logan


Re: [PATCH v3 12/20] RDMA/rw: use dma_map_sgtable()

2021-09-28 Thread Jason Gunthorpe
On Thu, Sep 16, 2021 at 05:40:52PM -0600, Logan Gunthorpe wrote:
> dma_map_sg() now supports the use of P2PDMA pages so pci_p2pdma_map_sg()
> is no longer necessary and may be dropped.
> 
> Switch to the dma_map_sgtable() interface, which will allow for better
> error reporting if the P2PDMA pages are unsupported.
> 
> The change to sgtable also appears to fix a couple of subtle error-path
> bugs:
> 
>   - In rdma_rw_ctx_init(), dma_unmap would be called with an sg
>     that could have been incremented from the original call, as
>     well as an nents that was not the original number of nents
>     passed when mapping.
>   - Similarly in rdma_rw_ctx_signature_init(), both sg and prot_sg
>     were unmapped with the incorrect number of nents.

Those bugs should definitely get fixed. I might extract the sgtable
conversion into a standalone patch to do it.

But as it is, this looks fine

Reviewed-by: Jason Gunthorpe 

Jason


[PATCH v3 12/20] RDMA/rw: use dma_map_sgtable()

2021-09-16 Thread Logan Gunthorpe
dma_map_sg() now supports the use of P2PDMA pages so pci_p2pdma_map_sg()
is no longer necessary and may be dropped.

Switch to the dma_map_sgtable() interface, which will allow for better
error reporting if the P2PDMA pages are unsupported.

The change to sgtable also appears to fix a couple of subtle error-path
bugs (sketched after the diffstat below):

  - In rdma_rw_ctx_init(), dma_unmap would be called with an sg
    that could have been incremented from the original call, as
    well as an nents that was not the original number of nents
    passed when mapping.
  - Similarly in rdma_rw_ctx_signature_init(), both sg and prot_sg
    were unmapped with the incorrect number of nents.

Signed-off-by: Logan Gunthorpe 
---
 drivers/infiniband/core/rw.c | 75 +++-
 include/rdma/ib_verbs.h  | 19 +
 2 files changed, 51 insertions(+), 43 deletions(-)

diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
index 5221cce65675..1bdb56380764 100644
--- a/drivers/infiniband/core/rw.c
+++ b/drivers/infiniband/core/rw.c
@@ -273,26 +273,6 @@ static int rdma_rw_init_single_wr(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
return 1;
 }
 
-static void rdma_rw_unmap_sg(struct ib_device *dev, struct scatterlist *sg,
-u32 sg_cnt, enum dma_data_direction dir)
-{
-   if (is_pci_p2pdma_page(sg_page(sg)))
-   pci_p2pdma_unmap_sg(dev->dma_device, sg, sg_cnt, dir);
-   else
-   ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
-}
-
-static int rdma_rw_map_sg(struct ib_device *dev, struct scatterlist *sg,
- u32 sg_cnt, enum dma_data_direction dir)
-{
-   if (is_pci_p2pdma_page(sg_page(sg))) {
-   if (WARN_ON_ONCE(ib_uses_virt_dma(dev)))
-   return 0;
-   return pci_p2pdma_map_sg(dev->dma_device, sg, sg_cnt, dir);
-   }
-   return ib_dma_map_sg(dev, sg, sg_cnt, dir);
-}
-
 /**
  * rdma_rw_ctx_init - initialize a RDMA READ/WRITE context
  * @ctx:   context to initialize
@@ -313,12 +293,16 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u32 port_num,
u64 remote_addr, u32 rkey, enum dma_data_direction dir)
 {
struct ib_device *dev = qp->pd->device;
+   struct sg_table sgt = {
+   .sgl = sg,
+   .orig_nents = sg_cnt,
+   };
int ret;
 
-   ret = rdma_rw_map_sg(dev, sg, sg_cnt, dir);
-   if (!ret)
-   return -ENOMEM;
-   sg_cnt = ret;
+   ret = ib_dma_map_sgtable(dev, &sgt, dir, 0);
+   if (ret)
+   return ret;
+   sg_cnt = sgt.nents;
 
/*
 * Skip to the S/G entry that sg_offset falls into:
@@ -354,7 +338,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u32 port_num,
return ret;
 
 out_unmap_sg:
-   rdma_rw_unmap_sg(dev, sg, sg_cnt, dir);
+   ib_dma_unmap_sgtable(dev, &sgt, dir, 0);
return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_init);
@@ -387,6 +371,14 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
qp->integrity_en);
struct ib_rdma_wr *rdma_wr;
int count = 0, ret;
+   struct sg_table sgt = {
+   .sgl = sg,
+   .orig_nents = sg_cnt,
+   };
+   struct sg_table prot_sgt = {
+   .sgl = prot_sg,
+   .orig_nents = prot_sg_cnt,
+   };
 
if (sg_cnt > pages_per_mr || prot_sg_cnt > pages_per_mr) {
pr_err("SG count too large: sg_cnt=%u, prot_sg_cnt=%u, 
pages_per_mr=%u\n",
@@ -394,18 +386,14 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
return -EINVAL;
}
 
-   ret = rdma_rw_map_sg(dev, sg, sg_cnt, dir);
-   if (!ret)
-   return -ENOMEM;
-   sg_cnt = ret;
+   ret = ib_dma_map_sgtable(dev, &sgt, dir, 0);
+   if (ret)
+   return ret;
 
if (prot_sg_cnt) {
-   ret = rdma_rw_map_sg(dev, prot_sg, prot_sg_cnt, dir);
-   if (!ret) {
-   ret = -ENOMEM;
+   ret = ib_dma_map_sgtable(dev, &prot_sgt, dir, 0);
+   if (ret)
goto out_unmap_sg;
-   }
-   prot_sg_cnt = ret;
}
 
ctx->type = RDMA_RW_SIG_MR;
@@ -426,10 +414,11 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 
memcpy(ctx->reg->mr->sig_attrs, sig_attrs, sizeof(struct ib_sig_attrs));
 
-   ret = ib_map_mr_sg_pi(ctx->reg->mr, sg, sg_cnt, NULL, prot_sg,
- prot_sg_cnt, NULL, SZ_4K);
+   ret = ib_map_mr_sg_pi(ctx->reg->mr, sg, sgt.nents, NULL, prot_sg,
+ prot_sgt.nents, NULL, SZ_4K);
if (unlikely(ret)) {
-   pr_err("failed to map PI sg (%u)\n", sg_cnt + prot_sg_cnt);
+   pr_err("failed to map PI sg (%u)\n",
+          sgt.nents + prot_sgt.nents);
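
The include/rdma/ib_verbs.h hunk is not shown above. Based on the
existing ib_dma_map_sg() and ib_uses_virt_dma() wrappers in that
header, the new helpers called by this patch plausibly take the shape
below; this is a sketch under that assumption, not the patch hunk
itself:

/* Sketch modeled on the existing ib_dma_map_sg() wrapper; the exact
 * hunk is assumed, not quoted from the patch.
 */
static inline int ib_dma_map_sgtable(struct ib_device *dev,
                                     struct sg_table *sgt,
                                     enum dma_data_direction direction,
                                     unsigned long dma_attrs)
{
        int nents;

        if (ib_uses_virt_dma(dev)) {
                /* rxe/siw "map" by recording kernel virtual addresses */
                nents = ib_dma_virt_map_sg(dev, sgt->sgl, sgt->orig_nents);
                if (!nents)
                        return -EIO;
                sgt->nents = nents;
                return 0;
        }
        return dma_map_sgtable(dev->dma_device, sgt, direction, dma_attrs);
}

static inline void ib_dma_unmap_sgtable(struct ib_device *dev,
                                        struct sg_table *sgt,
                                        enum dma_data_direction direction,
                                        unsigned long dma_attrs)
{
        if (!ib_uses_virt_dma(dev))
                dma_unmap_sgtable(dev->dma_device, sgt, direction, dma_attrs);
}

Returning an errno from the map path, rather than 0, is what lets the
rw.c callers propagate the real failure cause instead of guessing
-ENOMEM.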