On Mon, Sep 18, 2023 at 4:30 PM Ani Sinha <anisi...@redhat.com> wrote:
>
> On Mon, Sep 18, 2023 at 4:28 PM David Hildenbrand <da...@redhat.com> wrote:
> >
> > On 18.09.23 12:54, Ani Sinha wrote:
> > > On Mon, Sep 18, 2023 at 3:49 PM David Hildenbrand <da...@redhat.com>
> > > wrote:
> > >>
> > >> On 18.09.23 12:11, Ani Sinha wrote:
> > >>
> > >>>
> > >>> OK, hopefully my last question. I am still confused about something.
> > >>> Does the above mean that hole64 will actually start from an
> > >>> address that is beyond maxram? Basically, if you added up
> > >>> ram_below_4G, ram_above_4G, hotplug_mem, and pci_hole64, could the
> > >>> total exceed maxram? I think it could. Is this not an issue?
> > >>
> > >> If you'd have a 2 GiB VM, the device memory region and hole64 would
> > >> always be placed at an address >= 4 GiB, yes.
> > >>
> > >> As maxram is just a size, and not a PFN, I don't think there is any
> > >> issue with that.
> > >
> > > So this is all just a scheme to decide what to place where, with maxram
> > > amount of memory available. When the processor needs to access the
> >
> > Yes. ram_size and maxram_size are only used to create the memory layout.
> >
> > > memory-mapped PCI device, it is simply dynamically mapped to the
> > > available physical RAM. Is my understanding correct here?
> >
> > I'm no expert on that, but from my understanding that's what the
> > pci/pci64 hole is for -- mapping PCI BARs into these areas, such that
> > they don't conflict with actual guest RAM. That's why we still account
> > these "holes" as valid GFNs that could be used and accessed by the VM once
> > a PCI BAR gets mapped in there.
>
> Yes, that was my understanding too, but since device drivers need to
> access those BAR addresses, they need to be mapped to the actual
> available physical RAM.
No, sorry, I was confused. They are just register addresses on the device, unrelated to RAM.
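To make the placement scheme discussed above concrete, here is a small illustrative sketch (plain Python, not actual QEMU code; the `below_4g_limit` cutoff and region ordering are simplifying assumptions, as the real values depend on the machine type). It shows why hole64 can start beyond maxram without that being a problem: maxram constrains only sizes, not where regions are placed in the guest-physical address space.

```python
GiB = 1 << 30

def layout(ram_size, maxram_size, below_4g_limit=3 * GiB):
    """Simplified guest-physical layout, roughly following the scheme
    described in the thread. below_4g_limit is an assumed cutoff for
    RAM below 4 GiB (machine-type dependent in real QEMU)."""
    ram_below_4g = min(ram_size, below_4g_limit)
    ram_above_4g = ram_size - ram_below_4g

    # RAM above 4 GiB starts at the 4 GiB boundary, leaving the
    # 32-bit PCI hole between ram_below_4g and 4 GiB.
    above_4g_end = 4 * GiB + ram_above_4g

    # Device (hotplug) memory follows boot RAM; its size is the
    # difference between maxram and the initial RAM size.
    device_mem_start = above_4g_end
    device_mem_end = device_mem_start + (maxram_size - ram_size)

    # hole64 starts after device memory, so its start address can
    # exceed maxram -- fine, since maxram is a size, not a PFN.
    hole64_start = device_mem_end
    return ram_below_4g, device_mem_start, hole64_start

# A 2 GiB VM with 8 GiB maxram: device memory and hole64 both land
# at addresses >= 4 GiB, and hole64 starts at 10 GiB.
print(layout(2 * GiB, 8 * GiB))
```

Run with a 2 GiB/8 GiB configuration, the device-memory region starts at the 4 GiB boundary (no RAM above 4 GiB to push it up) and hole64 starts at 10 GiB, well beyond maxram.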