On Mar 25 12:38, Maxim Levitsky wrote:
> On Mon, 2020-03-16 at 07:28 -0700, Klaus Jensen wrote:
> > From: Klaus Jensen <k.jen...@samsung.com>
> > 
> > Pull the controller memory buffer check to its own function. The check
> > will be used on its own in later patches.
> > 
> > Signed-off-by: Klaus Jensen <k.jen...@samsung.com>
> > Acked-by: Keith Busch <kbu...@kernel.org>
> > ---
> >  hw/block/nvme.c | 16 ++++++++++++----
> >  1 file changed, 12 insertions(+), 4 deletions(-)
> > 
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index b38d7e548a60..08a83d449de3 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -52,14 +52,22 @@
> >  
> >  static void nvme_process_sq(void *opaque);
> >  
> > +static inline bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
> > +{
> > +    hwaddr low = n->ctrl_mem.addr;
> > +    hwaddr hi  = n->ctrl_mem.addr + int128_get64(n->ctrl_mem.size);
> > +
> > +    return addr >= low && addr < hi;
> > +}
> > +
> >  static void nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
> >  {
> > -    if (n->cmbsz && addr >= n->ctrl_mem.addr &&
> > -                addr < (n->ctrl_mem.addr + int128_get64(n->ctrl_mem.size))) {
> > +    if (n->cmbsz && nvme_addr_is_cmb(n, addr)) {
> >          memcpy(buf, (void *)&n->cmbuf[addr - n->ctrl_mem.addr], size);
> > -    } else {
> > -        pci_dma_read(&n->parent_obj, addr, buf, size);
> > +        return;
> >      }
> > +
> > +    pci_dma_read(&n->parent_obj, addr, buf, size);
> >  }
> >  
> >  static int nvme_check_sqid(NvmeCtrl *n, uint16_t sqid)
> 
> Note that this patch still contains a bug in that it removes the check
> against the accessed size, which you fix in a later patch.
> I would prefer not to add a bug in the first place.
> However, if you have a reason for this, I won't mind.
> 

So yeah. The reason is that there is actually no bug at this point,
because the controller only supports PRPs. I actually thought there was
a bug as well and reported it to qemu-security some months ago as a
potential out-of-bounds access. I was then schooled by Keith on how PRPs
work ;) Below is a paraphrased version of Keith's analysis.

PRPs do not cross page boundaries:

    trans_len = n->page_size - (prp1 % n->page_size);
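
For example, with a 4 KiB page size and a (hypothetical) prp1 of
0x2000f00:

    trans_len = 0x1000 - (0x2000f00 % 0x1000) = 0x1000 - 0xf00 = 0x100

so the first transfer covers [0x2000f00, 0x2001000) and ends exactly on
the next page boundary.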

PRP list entries are always verified to be page aligned:

    if (unlikely(!prp_ent || prp_ent & (n->page_size - 1))) {

and the transfer length won't exceed the page size. So, since the start
address is within the CMB, and the CMB is MB-aligned and MB-sized, a
PRP-based access can never cross outside of it.
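
To make the argument concrete, here is a small standalone sketch (not
QEMU code; the page size and CMB base/size are made-up example values)
that exercises the invariant:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t page_size = 4096;                /* power of two */
        const uint64_t cmb_low   = 0x10000000;          /* MB-aligned base */
        const uint64_t cmb_hi    = cmb_low + (2 << 20); /* MB-sized length */

        /* a PRP that starts inside the CMB and is not page aligned */
        uint64_t prp1 = cmb_low + 0xf00;

        /* the first transfer runs to the end of the page holding prp1 */
        uint64_t trans_len = page_size - (prp1 % page_size);

        /* the access ends exactly on a page boundary, and a page that
         * starts inside an MB-aligned, MB-sized region also ends inside
         * it, so the access stays within the CMB */
        assert(prp1 + trans_len <= cmb_hi);

        return 0;
    }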

I could add the check at this point (because it *is* needed once SGLs
are introduced), but I think it would just be noise: I would have to
explain why the check is there even though it is not actually needed
yet. Instead, I'm adding a new patch before the SGL patch that explains
this.
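
For reference, a size-aware variant of the check could look something
like this (just a sketch of the idea, not necessarily how the actual
fix ends up looking):

    static inline bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr, int size)
    {
        hwaddr low = n->ctrl_mem.addr;
        hwaddr hi  = n->ctrl_mem.addr + int128_get64(n->ctrl_mem.size);

        /* require every byte of [addr, addr + size) to fall inside the
         * CMB; comparing size against hi - addr avoids overflow in
         * addr + size */
        return addr >= low && addr < hi && (hwaddr)size <= hi - addr;
    }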
