On Mon, 19 Apr 2021 17:36:35 -0400
Vivek Goyal <vgo...@redhat.com> wrote:

> As of now put_unlocked_entry() always wakes up next waiter. In next
> patches we want to wake up all waiters at one callsite. Hence, add a
> parameter to the function.
> 
> This patch does not introduce any change of behavior.
> 
> Suggested-by: Dan Williams <dan.j.willi...@intel.com>
> Signed-off-by: Vivek Goyal <vgo...@redhat.com>
> ---
>  fs/dax.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index 00978d0838b1..f19d76a6a493 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -275,11 +275,12 @@ static void wait_entry_unlocked(struct xa_state *xas, void *entry)
>       finish_wait(wq, &ewait.wait);
>  }
>  
> -static void put_unlocked_entry(struct xa_state *xas, void *entry)
> +static void put_unlocked_entry(struct xa_state *xas, void *entry,
> +                            enum dax_entry_wake_mode mode)
>  {
>       /* If we were the only waiter woken, wake the next one */

With this change, the comment is no longer accurate, since the
function can now wake all waiters when passed mode == WAKE_ALL.
It also merely paraphrases code that is simple enough on its
own, so I'd simply drop it.

This is minor though, and it shouldn't prevent this fix from
going forward.

Reviewed-by: Greg Kurz <gr...@kaod.org>

>       if (entry && !dax_is_conflict(entry))
> -             dax_wake_entry(xas, entry, WAKE_NEXT);
> +             dax_wake_entry(xas, entry, mode);
>  }
>  
>  /*
> @@ -633,7 +634,7 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping,
>                       entry = get_unlocked_entry(&xas, 0);
>               if (entry)
>                       page = dax_busy_page(entry);
> -             put_unlocked_entry(&xas, entry);
> +             put_unlocked_entry(&xas, entry, WAKE_NEXT);
>               if (page)
>                       break;
>               if (++scanned % XA_CHECK_SCHED)
> @@ -675,7 +676,7 @@ static int __dax_invalidate_entry(struct address_space *mapping,
>       mapping->nrexceptional--;
>       ret = 1;
>  out:
> -     put_unlocked_entry(&xas, entry);
> +     put_unlocked_entry(&xas, entry, WAKE_NEXT);
>       xas_unlock_irq(&xas);
>       return ret;
>  }
> @@ -954,7 +955,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
>       return ret;
>  
>   put_unlocked:
> -     put_unlocked_entry(xas, entry);
> +     put_unlocked_entry(xas, entry, WAKE_NEXT);
>       return ret;
>  }
>  
> @@ -1695,7 +1696,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
>       /* Did we race with someone splitting entry or so? */
>       if (!entry || dax_is_conflict(entry) ||
>           (order == 0 && !dax_is_pte_entry(entry))) {
> -             put_unlocked_entry(&xas, entry);
> +             put_unlocked_entry(&xas, entry, WAKE_NEXT);
>               xas_unlock_irq(&xas);
>               trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
>                                                     VM_FAULT_NOPAGE);
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
