[PATCH] dax: super.c: fix kernel-doc bad line warning

2023-01-16 Thread Randy Dunlap
Convert an empty line to " *" to avoid a kernel-doc warning:

drivers/dax/super.c:478: warning: bad line: 

Signed-off-by: Randy Dunlap 
Cc: Dan Williams 
Cc: Vishal Verma 
Cc: Dave Jiang 
Cc: nvd...@lists.linux.dev
---
 drivers/dax/super.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -475,7 +475,7 @@ EXPORT_SYMBOL_GPL(put_dax);
 /**
  * dax_holder() - obtain the holder of a dax device
  * @dax_dev: a dax_device instance
-
+ *
  * Return: the holder's data which represents the holder if registered,
  * otherwize NULL.
  */



Re: [PATCH for-next v3 4/7] RDMA/rxe: Add page invalidation support

2023-01-16 Thread Jason Gunthorpe
On Fri, Dec 23, 2022 at 03:51:55PM +0900, Daisuke Matsuda wrote:

> +static bool rxe_ib_invalidate_range(struct mmu_interval_notifier *mni,
> +				    const struct mmu_notifier_range *range,
> +				    unsigned long cur_seq)
> +{
> +	struct ib_umem_odp *umem_odp =
> +		container_of(mni, struct ib_umem_odp, notifier);
> +	unsigned long start;
> +	unsigned long end;
> +
> +	if (!mmu_notifier_range_blockable(range))
> +		return false;
> +
> +	mutex_lock(&umem_odp->umem_mutex);
> +	mmu_interval_set_seq(mni, cur_seq);
> +
> +	start = max_t(u64, ib_umem_start(umem_odp), range->start);
> +	end = min_t(u64, ib_umem_end(umem_odp), range->end);
> +
> +	ib_umem_odp_unmap_dma_pages(umem_odp, start, end);

After Bob's xarray conversion this can be done a lot faster; it's just an
xa_for_each_range and making the xarray items non-present.

non-present is probably just a null struct page in the xarray.
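Roughly something like the sketch below (the xarray, index range, and
function name here are placeholders for illustration, not the real rxe
structures after the conversion):

	#include <linux/xarray.h>

	/*
	 * Rough sketch only: walk just the invalidated index range and
	 * store NULL so those entries become non-present (storing NULL
	 * in an xarray is equivalent to erasing the entry).
	 */
	static void sketch_invalidate_range(struct xarray *page_xa,
					    unsigned long first,
					    unsigned long last)
	{
		unsigned long index;
		void *entry;

		xa_for_each_range(page_xa, index, entry, first, last)
			xa_store(page_xa, index, NULL, GFP_KERNEL);
	}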

Jason



Re: [PATCH for-next v3 3/7] RDMA/rxe: Cleanup code for responder Atomic operations

2023-01-16 Thread Jason Gunthorpe
On Fri, Dec 23, 2022 at 03:51:54PM +0900, Daisuke Matsuda wrote:
> @@ -733,60 +734,83 @@ static enum resp_states process_flush(struct rxe_qp *qp,
>  /* Guarantee atomicity of atomic operations at the machine level. */
>  static DEFINE_SPINLOCK(atomic_ops_lock);
>  
> -static enum resp_states atomic_reply(struct rxe_qp *qp,
> -				     struct rxe_pkt_info *pkt)
> +enum resp_states rxe_process_atomic(struct rxe_qp *qp,
> +				    struct rxe_pkt_info *pkt, u64 *vaddr)
>  {
> -	u64 *vaddr;
>  	enum resp_states ret;
> -	struct rxe_mr *mr = qp->resp.mr;
>  	struct resp_res *res = qp->resp.res;
>  	u64 value;
>  
> -	if (!res) {
> -		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
> -		qp->resp.res = res;
> +	/* check vaddr is 8 bytes aligned. */
> +	if (!vaddr || (uintptr_t)vaddr & 7) {
> +		ret = RESPST_ERR_MISALIGNED_ATOMIC;
> +		goto out;
>  	}
>  
> -	if (!res->replay) {
> -		if (mr->state != RXE_MR_STATE_VALID) {
> -			ret = RESPST_ERR_RKEY_VIOLATION;
> -			goto out;
> -		}
> +	spin_lock(&atomic_ops_lock);
> +	res->atomic.orig_val = value = *vaddr;
>  
> -		vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
> -				      sizeof(u64));

I think you need to properly fix the lifetime problem with the
iova_to_vaddr function, not hack around it like this.

iova_to_vaddr should be able to return a vaddr for ODP just fine - the
reason it can't is the same bug it has with normal MRs: the mapping
can just change under its feet and there is no protective locking.

If you are going to follow the same ODP design as mlx5 then
fundamentally all ODP does to the MR is add a not-present bit and
allow the MR pages to churn rapidly.

Make the MR safe against races when the page references change, and
ODP will work just fine.
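
For reference, the generic mmu_interval_notifier pattern looks roughly
like the sketch below (assuming an ib_umem_odp *umem_odp; this is a
sketch of the generic scheme, not rxe code): sample a sequence, set up
the translation, then check under umem_mutex whether an invalidation
raced with it.

	unsigned long seq;

	while (true) {
		seq = mmu_interval_read_begin(&umem_odp->notifier);

		/* fault in / translate the pages for this range here */

		mutex_lock(&umem_odp->umem_mutex);
		if (!mmu_interval_read_retry(&umem_odp->notifier, seq))
			break;	/* still valid; keep umem_mutex held */
		mutex_unlock(&umem_odp->umem_mutex);
	}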

This will be easier on top of Bob's xarray patch; please check what he
has there and test it.

Thanks,
Jason