On Wed, 2 Feb 2022 09:30:42 +0000 Peter Maydell <peter.mayd...@linaro.org> wrote:
> On Tue, 1 Feb 2022 at 23:51, Alex Williamson <alex.william...@redhat.com> wrote:
> >
> > On Tue, 1 Feb 2022 21:24:08 +0000
> > Jag Raman <jag.ra...@oracle.com> wrote:
> > > The PCIBus data structure already has address_space_mem and
> > > address_space_io to contain the BAR regions of devices attached
> > > to it. I understand that these two PCIBus members form the
> > > PCI address space.
> >
> > These are the CPU address spaces. When there's no IOMMU, the PCI bus is
> > identity mapped to the CPU address space. When there is an IOMMU, the
> > device address space is determined by the granularity of the IOMMU and
> > may be entirely separate from address_space_mem.
>
> Note that those fields in PCIBus are just whatever MemoryRegions
> the PCI controller model passed in to the call to pci_root_bus_init()
> or equivalent. They may or may not be specifically the CPU's view
> of anything. (For instance, on the versatilepb board, the PCI controller
> is visible to the CPU via several MMIO "windows" at known addresses,
> which let the CPU access into the PCI address space at a programmable
> offset. We model that by creating a couple of container MRs which
> we pass to pci_root_bus_init() to be the PCI memory and I/O spaces,
> and then using alias MRs to provide the view into those at the
> guest-programmed offset. The CPU sees those windows, and doesn't
> have direct access to the whole PCIBus::address_space_mem.)
>
> I guess you could say they're the PCI controller's view of the PCI
> address space?

Sure, that's fair.

> We have a tendency to be a bit sloppy with the use of AddressSpaces
> within QEMU where it happens that the view of the world of a
> DMA-capable device matches that of the CPU, but conceptually
> they can definitely be different, especially in the non-x86 world.
> (Linux also confuses matters here by preferring to program a 1:1
> mapping even if the hardware is more flexible and can do other things.
> The model of the h/w in QEMU should support the other cases too, not
> just 1:1.)

Right, this is why I prefer to look at the device address space as
simply an IOVA space. The IOVA might be a direct physical address, or a
coincidentally identity-mapped physical address via an IOMMU, but none
of that should be the concern of the device.

> > I/O port space is always the identity mapped CPU address space unless
> > sparse translations are used to create multiple I/O port spaces (not
> > implemented). I/O port space is only accessed by the CPU; there are no
> > device-initiated I/O port transactions, so the address space relative
> > to the device is irrelevant.
>
> Does the PCI spec actually forbid any master except the CPU from
> issuing I/O port transactions, or is it just that in practice nobody
> makes a PCI device that does weird stuff like that?

As realized in my reply to MST, it's more the latter: it's not used,
there's no point in enabling it, and depending on the physical IOMMU
implementation there may be no means to enable it.

Thanks,
Alex