On Wed, Jan 16, 2019 at 10:50:00AM -0700, Jens Axboe wrote:
> If we have fixed user buffers, we can map them into the kernel when we
> setup the io_context. That avoids the need to do get_user_pages() for
> each and every IO.
.....
> +                     return -ENOMEM;
> +     } while (atomic_long_cmpxchg(&ctx->user->locked_vm, cur_pages,
> +                                     new_pages) != cur_pages);
> +
> +     return 0;
> +}
> +
> +static int io_sqe_buffer_unregister(struct io_ring_ctx *ctx)
> +{
> +     int i, j;
> +
> +     if (!ctx->user_bufs)
> +             return -EINVAL;
> +
> +     for (i = 0; i < ctx->sq_entries; i++) {
> +             struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
> +
> +             for (j = 0; j < imu->nr_bvecs; j++) {
> +                     set_page_dirty_lock(imu->bvec[j].bv_page);
> +                     put_page(imu->bvec[j].bv_page);
> +             }

Hmmm, so we call set_page_dirty_lock() when the gup reference is dropped...

.....

> +static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
> +                               unsigned nr_args)
> +{

.....

> +             down_write(&current->mm->mmap_sem);
> +             pret = get_user_pages_longterm(ubuf, nr_pages, FOLL_WRITE,
> +                                             pages, NULL);
> +             up_write(&current->mm->mmap_sem);

Thought so. This has the same problem as RDMA w.r.t. using
file-backed mappings for the user buffer.  It is not synchronised
against truncate, hole punches, async page writeback cleaning the
page, etc., and so can lead to data corruption and/or kernel panics.

It also can't be used with DAX because the above problems are
actually a use-after-free of storage space, not just a dangling
page reference that can be cleaned up after the gup pin is dropped.

Perhaps, at least until we solve the GUP problems w.r.t. file-backed
pages and/or add and require file layout leases for these references,
we should error out if the user buffer pages are file-backed
mappings?
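
Something along these lines is what I mean - a completely untested
sketch, where the helper name, the call site and the error code are
all just placeholders:

static int io_reject_file_backed_pages(struct page **pages, int nr_pages)
{
	int i;

	/*
	 * PageAnon() is clear for pagecache (file-backed) pages, so
	 * refuse anything that isn't plain anonymous memory.
	 */
	for (i = 0; i < nr_pages; i++) {
		if (!PageAnon(pages[i]))
			return -EOPNOTSUPP;
	}
	return 0;
}

io_sqe_buffer_register() would call this right after
get_user_pages_longterm() succeeds, and drop the pins and fail the
registration if it returns an error. Note this would also refuse
shmem and hugetlbfs file mappings, so it only illustrates the idea,
not the exact policy we'd want.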

Cheers,

Dave.
-- 
Dave Chinner
[email protected]
