Hi Mike,

On Sat, Dec 19, 2020 at 11:12 AM Mike Galbraith <efa...@gmx.de> wrote:
>
> (mailer partially munged formatting? resend)
>
> mm/zswap: fix zswap_frontswap_load() vs zsmalloc::map/unmap() might_sleep() splat
>
> zsmalloc map/unmap methods use preemption disabling bit spinlocks.  Take the
> mutex outside of pool map/unmap methods in zswap_frontswap_load() as is done
> in zswap_frontswap_store().

oh wait... So is zsmalloc taking a spinlock in its map callback and
releasing it only in unmap? In that case, I would rather keep zswap as
is, mark zsmalloc as RT-unsafe, and have the zsmalloc maintainer fix it.

Best regards,
   Vitaly

> Signed-off-by: Mike Galbraith <efa...@gmx.de>
> Fixes: 1ec3b5fe6eec ("mm/zswap: move to use crypto_acomp API for hardware acceleration")
> ---
>  mm/zswap.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1258,20 +1258,20 @@ static int zswap_frontswap_load(unsigned
>
>         /* decompress */
>         dlen = PAGE_SIZE;
> +       acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> +       mutex_lock(acomp_ctx->mutex);
>         src = zpool_map_handle(entry->pool->zpool, entry->handle, ZPOOL_MM_RO);
>         if (zpool_evictable(entry->pool->zpool))
>                 src += sizeof(struct zswap_header);
>
> -       acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> -       mutex_lock(acomp_ctx->mutex);
>         sg_init_one(&input, src, entry->length);
>         sg_init_table(&output, 1);
>         sg_set_page(&output, page, PAGE_SIZE, 0);
>         acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, dlen);
>         ret = crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait);
> -       mutex_unlock(acomp_ctx->mutex);
>
>         zpool_unmap_handle(entry->pool->zpool, entry->handle);
> +       mutex_unlock(acomp_ctx->mutex);
>         BUG_ON(ret);
>
>  freeentry:
>
