On Mon, Jan 12, 2026 at 10:17 AM Alexander Atanasov <[email protected]> wrote:
>
> On 8.01.26 11:19, Caleb Sander Mateos wrote:
> > __ublk_check_and_get_req() checks that the passed in offset is within
> > the data length of the specified ublk request. However, only user copy
> > (ublk_check_and_get_req()) supports accessing ublk request data at a
> > nonzero offset. Zero-copy buffer registration (ublk_register_io_buf())
> > always passes 0 for the offset, so the check is unnecessary. Move the
> > check from __ublk_check_and_get_req() to ublk_check_and_get_req().
> >
> > Signed-off-by: Caleb Sander Mateos <[email protected]>
> > Reviewed-by: Ming Lei <[email protected]>
> > ---
> > drivers/block/ublk_drv.c | 16 +++++++++-------
> > 1 file changed, 9 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index e7697dc4a812..8eefb838b563 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -253,11 +253,11 @@ struct ublk_params_header {
> >
>
> [snip]
>
> > @@ -2603,13 +2603,10 @@ static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
> >  		goto fail_put;
> >
> >  	if (!ublk_rq_has_data(req))
> >  		goto fail_put;
> >
> > -	if (offset > blk_rq_bytes(req))
> > -		goto fail_put;
> > -
> >  	return req;
> >  fail_put:
> >  	ublk_put_req_ref(io, req);
> >  	return NULL;
> >  }
> > @@ -2687,14 +2684,19 @@ ublk_user_copy(struct kiocb *iocb, struct iov_iter *iter, int dir)
> >
> >  	if (tag >= ub->dev_info.queue_depth)
> >  		return -EINVAL;
> >
> >  	io = &ubq->ios[tag];
> > -	req = __ublk_check_and_get_req(ub, q_id, tag, io, buf_off);
> > +	req = __ublk_check_and_get_req(ub, q_id, tag, io);
> >  	if (!req)
> >  		return -EINVAL;
> >
> > +	if (buf_off > blk_rq_bytes(req)) {
> > +		ret = -EINVAL;
> > +		goto out;
> > +	}
> > +
>
> Offset is zero-based and blk_rq_bytes() is a count, so the check should be >= here.
>
> It will work this way, but for buf_off == blk_rq_bytes(req) the user will get
> 0 instead of -EINVAL.
This is the existing behavior in __ublk_check_and_get_req(). I agree that
allowing buf_off == blk_rq_bytes(req) seems odd, but changing it now
could break ublk servers that rely on the current behavior.
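
For illustration, here is a minimal userspace sketch of the behavior a
server could be depending on (not from the patch; encode_user_copy_off()
is a hypothetical helper standing in for the UBLK_F_USER_COPY
char-device offset encoding):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* hypothetical: builds the pread()/pwrite() offset from q_id/tag/buf_off */
extern off_t encode_user_copy_off(int q_id, int tag, unsigned buf_off);

static void probe_past_end(int ublk_char_fd, int q_id, int tag,
			   unsigned request_bytes)
{
	char buf[64];
	/* buf_off == blk_rq_bytes(req): first offset past the request data */
	off_t off = encode_user_copy_off(q_id, tag, request_bytes);
	ssize_t n = pread(ublk_char_fd, buf, sizeof(buf), off);

	/* with the current ">" check: n == 0, an EOF-like short read;
	 * a ">=" check would turn this into n == -1 && errno == EINVAL */
	printf("n=%zd errno=%d\n", n, n < 0 ? (int)errno : 0);
}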
Best,
Caleb
>
> static size_t ublk_copy_user_pages(const struct request *req,
> 		unsigned offset, struct iov_iter *uiter, int dir)
> {
> 	size_t done = 0;
> 	...
> 	rq_for_each_segment(bv, req, iter) {
> 		...
> 		if (offset >= bv.bv_len) {
> 			offset -= bv.bv_len; // the bv_lens sum to blk_rq_bytes(req)
> 			continue; // when offset == blk_rq_bytes(req), every segment is skipped
> 		}
> 		...
> 	}
> 	return done; // so done is never incremented and 0 is returned
> }
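>
> To make that concrete, here is a standalone userspace sketch (a minimal
> trace of the same skip logic, not the kernel code) with two 4096-byte
> segments and offset equal to their sum:
>
> #include <stdio.h>
>
> int main(void)
> {
> 	unsigned seg_len[] = { 4096, 4096 };	/* stands in for bv.bv_len */
> 	unsigned offset = 8192;			/* == blk_rq_bytes(req) */
> 	size_t done = 0;
>
> 	for (int i = 0; i < 2; i++) {
> 		if (offset >= seg_len[i]) {
> 			offset -= seg_len[i];	/* skip the whole segment */
> 			continue;
> 		}
> 		done += seg_len[i] - offset;	/* would copy the remainder */
> 		offset = 0;
> 	}
> 	printf("done=%zu\n", done);		/* prints done=0 */
> 	return 0;
> }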
>
> >  	if (!ublk_check_ubuf_dir(req, dir)) {
> >  		ret = -EACCES;
> >  		goto out;
> >  	}
>
>
> --
> have fun,
> alex
>