On Tue, Jan 13, 2026 at 02:30:13AM -0500, Michael S. Tsirkin wrote:
> > Signed-off-by: Kommula Shiva Shankar <[email protected]>
> > Acked-by: Jason Wang <[email protected]>
> 
> I also worry a bit about regressing on other hardware.
> Cc nvidia guys.

> > +   vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> >     vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);

This is definitely required and correct if notify.addr comes from a
PCI BAR address.

You need to trace the origin of that memory in all the drivers to
determine if it is OK or not.

For instance mlx5 is:

        kick_addr = mdev->bar_addr + offset;
        res->phys_kick_addr = kick_addr;
[..]
        addr = (phys_addr_t)ndev->mvdev.res.phys_kick_addr;

"bar_addr" is PCI memory so this patch is correct and required for
mlx5.

ifcvf:
                        hw->notify_base_pa = pci_resource_start(pdev, cap.bar) +
                                        le32_to_cpu(cap.offset);
[..]
                hw->vring[i].notify_pa = hw->notify_base_pa +
                        notify_off * hw->notify_off_multiplier;
[..]
        area.addr = vf->vring[idx].notify_pa;

octep:

                        oct_hw->notify_base_pa = pci_resource_start(pdev, cap.bar) +
                                                 le32_to_cpu(cap.offset);
[..]
                oct_hw->vqs[i].notify_pa = oct_hw->notify_base_pa +
                        notify_off * oct_hw->notify_off_multiplier;
[..]
        area.addr = oct_hw->vqs[idx].notify_pa;

pds:
 No idea, it is messed up though:
        area.addr = pdsv->vqs[qid].notify_pa;
    struct pds_vdpa_vq_info {
        dma_addr_t notify_pa;
 Can't cast dma_addr_t to phys_addr_t!

virtio_pci:
 Also messed up:
        notify.addr = vp_vdpa->vring[qid].notify_pa;
    struct vp_vring {
        resource_size_t notify_pa;
 A phys_addr_t is not a resource_size_t

Guessing pds and virtio_pci are both fine too, even though I gave up
trying to figure out where notify_pa ultimately gets set.

So the patch is OK.

Reviewed-by: Jason Gunthorpe <[email protected]>

Jason
