Let me follow up on this a little. Does anyone (likely Gabe) have pointers to information about how *physical addresses* are mapped in normal x86 systems?
Here's what a couple of real systems look like in /proc/iomem (populated from the E820 entries in the BIOS). I'm omitting all of the IO ranges, but they fill in the gaps.

This is a simple Intel desktop system with one NUMA node and 32 GB DRAM (4 x 8 GB DIMMs, IIRC). /proc/meminfo says there's 31.3 GB memory total:

  00001000-0009c3ff : System RAM     0.000-0.001   (0.001 GB)
  00100000-cb3a8fff : System RAM     0.001-3.175   (3.174 GB)
  cb3ab000-dac45fff : System RAM     3.175-3.418   (0.243 GB)
  dbdff000-dbdfffff : System RAM     3.436-3.436   (0.000 GB)
  100000000-81dffffff : System RAM   4.000-32.469  (28.469 GB)

This is an 8 NUMA node AMD system with 512 GB RAM (16 x 32 GB DIMMs, IIRC). /proc/meminfo says there's 503.79 GB memory total:

  00001000-000997ff : System RAM       0.000-0.001      (0.001 GB)
  00100000-76daffff : System RAM       0.001-1.857      (1.856 GB)
  77000000-c9d69fff : System RAM       1.859-3.154      (1.294 GB)
  c9dda000-c9e90fff : System RAM       3.154-3.155      (0.001 GB)
  cacc9000-cbffffff : System RAM       3.169-3.187      (0.019 GB)
  100000000-102f37ffff : System RAM    4.000-64.738     (60.738 GB)
  1030000000-202ff7ffff : System RAM   64.750-128.750   (64.000 GB)
  2030000000-302ff7ffff : System RAM   128.750-192.750  (64.000 GB)
  3030000000-402ff7ffff : System RAM   192.750-256.750  (64.000 GB)
  4030000000-502ff7ffff : System RAM   256.750-320.750  (64.000 GB)
  5030000000-602ff7ffff : System RAM   320.750-384.750  (64.000 GB)
  6030000000-702ff7ffff : System RAM   384.750-448.750  (64.000 GB)
  7030000000-802ff7ffff : System RAM   448.750-512.750  (64.000 GB)

Our main question comes down to this: how do we get reasonable interleavings across banks/ranks/channels at the memory controllers? We've been assuming that physical memory starts at 0 and runs to the size of memory, but that's clearly not how real (x86) systems are set up. If we were to use the addresses above, I don't believe the default interleavings would work correctly (I could be wrong here...).

Any ideas on what is going on under the covers? Does the bus controller have a level of address "translation" before the memory controllers? Do the gaps just mask parts of DRAM?

Right now in gem5's x86 FS configuration, we have one physical memory range (AddrRange) for each E820 entry. Would it be more correct to set up the E820 regions and physical memory separately, and add a translation layer somewhere in the memory system (e.g., just before each memory controller)? I've appended a rough sketch of that translation idea below the quoted message.

Any pointers to documentation on this (the OSDev wiki was slightly helpful but didn't explain how hardware interleaving works) or other ideas would be greatly appreciated!

Cheers,
Jason

On Thu, Oct 10, 2019 at 11:27 AM Pouya Fotouhi <pfoto...@ucdavis.edu> wrote:

> Hi All,
>
> I am trying to add a GPU as a PCI device in full system. The kernel
> expects the shadowed ROM for VGA devices to be at 0xc0000, and will
> attempt to read the ROM from there. However, we start mapping the memory
> range from address 0x0, so any accesses to the VGA ROM go to memory and
> not the device itself.
>
> As a workaround, we can check for this particular address range before
> dispatching the incoming request and send it to pio instead of memory.
> However, this would be more of a hack, and would not solve the issue for
> the KVM CPU.
>
> I was wondering if anyone has faced similar issues, or has any feedback
> on how to properly handle this. Looking at how this is handled in real
> systems, it seems like DRAM is often not mapped starting from address
> 0x0. Other than simplicity, is there any particular reason for mapping
> DRAM starting from address zero?
>
> Best,
> --
> Pouya Fotouhi
> PhD Candidate
> Department of Electrical and Computer Engineering
> University of California, Davis
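P.S. To make the translation-layer idea concrete, here's a rough Python
sketch (plain Python for illustration, not gem5 code; the channel count and
interleave granularity are made-up parameters). It collapses the E820 holes
into a dense 0-based DRAM offset and only then picks a channel, so the
interleave stays balanced no matter where the holes fall:

    # Rough sketch, not gem5 code: collapse the E820 holes into a dense
    # 0-based DRAM offset, then interleave channels on the dense address.

    # (start, end) System RAM pairs from the Intel desktop dump above.
    E820_RAM = [
        (0x00001000, 0x0009c3ff),
        (0x00100000, 0xcb3a8fff),
        (0xcb3ab000, 0xdac45fff),
        (0xdbdff000, 0xdbdfffff),
        (0x100000000, 0x81dffffff),
    ]

    NUM_CHANNELS = 4    # assumption: 4-channel system
    INTLV_GRAIN = 256   # assumption: 256 B interleave granularity

    def dense_addr(paddr):
        # Translate a system physical address into an offset into the
        # actual DRAM capacity, skipping over the holes.
        offset = 0
        for start, end in E820_RAM:
            if start <= paddr <= end:
                return offset + (paddr - start)
            offset += end - start + 1
        raise ValueError(f"{paddr:#x} is not backed by System RAM")

    def channel(paddr):
        # With the holes collapsed, a simple modulo interleave stays
        # balanced across channels no matter where the holes fall.
        return (dense_addr(paddr) // INTLV_GRAIN) % NUM_CHANNELS

If the real answer is instead that the memory controllers interleave on the
raw physical address and the gaps just mask parts of DRAM, then the
dense_addr() step disappears and the capacity behind the holes is simply
wasted.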
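P.P.S. For reference, Pouya's "check the address before dispatching"
workaround would look roughly like the sketch below (hypothetical names
throughout, not real gem5 interfaces). As he notes, it wouldn't help the
KVM CPU, presumably because KVM touches guest memory directly rather than
going through the simulated interconnect:

    # Hypothetical sketch of the dispatch check Pouya describes; the names
    # (dispatch, send_to_pio, send_to_mem) are made up, not gem5 API.
    VGA_ROM_BASE = 0xc0000
    VGA_ROM_SIZE = 0x8000   # assumption: 32 KB legacy VGA ROM shadow window

    def dispatch(req_addr, send_to_pio, send_to_mem):
        # Legacy VGA ROM shadow: steer it to the device (pio) rather than
        # letting it fall through to DRAM mapped at 0x0.
        if VGA_ROM_BASE <= req_addr < VGA_ROM_BASE + VGA_ROM_SIZE:
            send_to_pio(req_addr)
        else:
            send_to_mem(req_addr)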