On Fri, Jan 21, 2022 at 11:48:45AM -0500, Kevin O'Connor wrote:
> Rename nvme_build_prpl() to nvme_prpl_xfer() and directly invoke
> nvme_io_xfer() or nvme_bounce_xfer() from that function.
> 
> Signed-off-by: Kevin O'Connor <ke...@koconnor.net>
> ---
>  src/hw/nvme-int.h |  1 -
>  src/hw/nvme.c     | 46 ++++++++++++++++++++--------------------------
>  2 files changed, 20 insertions(+), 27 deletions(-)
> 
> diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
> index a4c1555..9564c17 100644
> --- a/src/hw/nvme-int.h
> +++ b/src/hw/nvme-int.h
> @@ -125,7 +125,6 @@ struct nvme_namespace {
>  
>      /* Page List */
>      u32 prpl_len;
> -    void *prp1;
>      u64 prpl[NVME_MAX_PRPL_ENTRIES];
>  };
>  
> diff --git a/src/hw/nvme.c b/src/hw/nvme.c
> index d656e9b..eee7d17 100644
> --- a/src/hw/nvme.c
> +++ b/src/hw/nvme.c
> @@ -498,10 +498,13 @@ static int nvme_add_prpl(struct nvme_namespace *ns, u64 base)
>      return 0;
>  }
>  
> -static int nvme_build_prpl(struct nvme_namespace *ns, void *op_buf, u16 count)
> +// Transfer data using page list (if applicable)
> +static int
> +nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
> +               int write)
>  {
>      int first_page = 1;
> -    u32 base = (long)op_buf;
> +    u32 base = (long)buf;
>      s32 size;
>  
>      if (count > ns->max_req_size)
> @@ -511,31 +514,32 @@ static int nvme_build_prpl(struct nvme_namespace *ns, void *op_buf, u16 count)
>  
>      size = count * ns->block_size;
>      /* Special case for transfers that fit into PRP1, but are unaligned */
> -    if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE)) {
> -        ns->prp1 = op_buf;
> -        return count;
> -    }
> +    if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
> +        return nvme_io_xfer(ns, lba, buf, count, write);

Just as a self-review: I think this code path could result in
nvme_io_xfer() being called with a buffer that is not word aligned,
which would then fail with DISK_RET_EBADTRACK.  It would seem using
the bounce buffer would be the better outcome there.


>      /* Every request has to be page aligned */
>      if (base & ~NVME_PAGE_MASK)
> -        return 0;
> +        goto bounce;
>  
>      /* Make sure a full block fits into the last chunk */
>      if (size & (ns->block_size - 1ULL))
> -        return 0;
> +        goto bounce;

Also, it's not clear to me how this branch could ever be taken, as
size is assigned a few lines up as "size = count * ns->block_size".


However, in this patch series I tried to reproduce the existing
behavior.  Since I don't have a good way to test NVMe, I'm not
planning to make functional changes for the above.

-Kevin
_______________________________________________
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org
