Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
Hi,

> > There's a known failure case.
> > If someone puts in multiple PCI cards that use more than 2GB of mmio
> > it'll break again.
>
> And what will happen if you need more than 3GiB MMIO space? more than
> 4GiB? ... you have to set a limit somewhere. And that can be
> configurable, IMO (It doesn't have to be Kconfig. I'd actually like to
> see it in the devicetree).

Map the big BARs above 4G. Those are typically 64bit BARs anyway.

cheers,
  Gerd

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On 07.06.2016 16:40, Patrick Rudolph wrote:
> On 2016-06-06 09:58 PM, ron minnich wrote:
>> On Mon, Jun 6, 2016 at 12:52 PM Patrick Rudolph wrote:
>>
>>> To summarize:
>>> The easy way is to use 2G.
>>> The preferred way would be to mimic mrc behaviour and reboot after
>>> finding the correct size.
>>
>> I'm not sure it's "easy vs. preferred" so much as
>> - simple that has no known failure cases (yet).
>> - best that requires that we have another variable we have to store in
>>   flash
>>
>> At least to me, it's not as cut and dried as easy vs. preferred. The
>> preferred way adds a lot of moving runtime parts, and we'll need to
>> cover for the case that the stored variable gets damaged somehow. The
>> easy way is a runtime constant, which I kind of like.
>>
>> If the "easy" way always works, i.e. we never find a case where it
>> causes a problem, then it seems superior to me.
>>
>> ron
>
> There's a known failure case.
> If someone puts in multiple PCI cards that use more than 2GB of mmio
> it'll break again.

And what will happen if you need more than 3GiB MMIO space? more than
4GiB? ... you have to set a limit somewhere. And that can be
configurable, IMO (It doesn't have to be Kconfig. I'd actually like to
see it in the devicetree).

> Most users won't reach this limit, but I'm wondering if I'm reaching it
> anytime soon.
> I want to use 2 PEG cards and the onboard intel card, which will be a
> good test-system.
>
> I've read this several times and wonder how a stored variable could get
> damaged? There is a checksum to verify the mrc cache. How should
> anything go wrong?

Well, the checksum could be correct even if the data was damaged. Also,
what do you do when the checksum is wrong but you need more MMIO space
than the default? Try again to write the variable? Boot without
sufficient MMIO space? Don't boot at all?

> In case something writes to the flash you might get unexpected behavior
> anyway, won't you?

Yes, but making this more complex makes unexpected behavior more likely.

Nico
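The failure modes Nico raises (a checksum that still matches damaged data, and the policy question of what to do on a mismatch) can be made concrete. Below is a minimal sketch of "validate or fall back to the fixed hole" logic; the record layout (a 4-byte CRC32 followed by a 4-byte hole size in MiB) is entirely hypothetical and is not coreboot's actual mrc cache format:

```python
import zlib

DEFAULT_MMIO_MIB = 2048  # fall back to the fixed 2 GiB hole on any damage

def load_mmio_size(blob: bytes) -> int:
    """Parse a hypothetical stored record: 4-byte CRC32 + 4-byte size (MiB).
    Rather than refusing to boot, any validation failure yields the default."""
    if len(blob) != 8:
        return DEFAULT_MMIO_MIB
    stored_crc = int.from_bytes(blob[:4], "little")
    if zlib.crc32(blob[4:]) != stored_crc:
        return DEFAULT_MMIO_MIB
    return int.from_bytes(blob[4:], "little")

size = 1024
payload = size.to_bytes(4, "little")
good = zlib.crc32(payload).to_bytes(4, "little") + payload
print(load_mmio_size(good), load_mmio_size(b"\x00" * 8))  # 1024 2048
```

Note that a CRC only catches accidental corruption of the payload; it cannot tell whether the value was correct for the currently installed hardware, which is why a sane fallback policy matters more than the checksum itself.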
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On Tue, Jun 7, 2016 at 7:40 AM Patrick Rudolph wrote:
>
> I've read this several times and wonder how a stored variable could get
> damaged? There is a checksum to verify the mrc cache. How should
> anything go wrong?

oh boy. "how should anything go wrong" ... :-)

What I wonder, every time I see one of these systems boot, is, "how did
it work at all" :-)

ron
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On 2016-06-06 09:58 PM, ron minnich wrote:
> On Mon, Jun 6, 2016 at 12:52 PM Patrick Rudolph wrote:
>
>> To summarize:
>> The easy way is to use 2G.
>> The preferred way would be to mimic mrc behaviour and reboot after
>> finding the correct size.
>
> I'm not sure it's "easy vs. preferred" so much as
> - simple that has no known failure cases (yet).
> - best that requires that we have another variable we have to store in
>   flash
>
> At least to me, it's not as cut and dried as easy vs. preferred. The
> preferred way adds a lot of moving runtime parts, and we'll need to
> cover for the case that the stored variable gets damaged somehow. The
> easy way is a runtime constant, which I kind of like.
>
> If the "easy" way always works, i.e. we never find a case where it
> causes a problem, then it seems superior to me.
>
> ron

There's a known failure case.
If someone puts in multiple PCI cards that use more than 2GB of mmio
it'll break again.

Most users won't reach this limit, but I'm wondering if I'm reaching it
anytime soon. I want to use 2 PEG cards and the onboard intel card,
which will be a good test-system.

I've read this several times and wonder how a stored variable could get
damaged? There is a checksum to verify the mrc cache. How should
anything go wrong? In case something writes to the flash you might get
unexpected behavior anyway, won't you?

Regards,
Patrick
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On 06.06.2016 23:40, Kyösti Mälkki wrote:
> On Mon, Jun 6, 2016 at 10:36 PM, ron minnich wrote:
>> I'm getting the sense here that reasonably modern CPUs can easily
>> handle the 2G hole. From what I've seen, it would not cause trouble
>> for older CPUs because they're most likely to be in small systems that
>> are not likely to have more than 2G memory anyway (I'm thinking of the
>> vortex).
>
> Not that I would particularly care, but was i945 able to reclaim
> memory from below 4GiB to above 4GiB? There used to be a fair amount
> of Lenovo T60/X60(s) users.

I don't think we should use the 2GiB limit for all x86 platforms. It's
more a question for those platforms that are actively maintained and at
least support PAE, IMO.

Regarding i945, I had a look at the code and there is a TOM register
that's programmed to the total amount of DRAM. But sadly it's
undocumented (at my level of NDA) and coreboot doesn't seem to advertise
memory above 4GiB (I haven't tried it yet though).

Nico
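On chipsets that can reclaim, the TOM register Nico mentions works together with TOLUD (top of low usable DRAM) and TOUUD (top of upper usable DRAM): the DRAM displaced by the MMIO hole is remapped so that it starts again at 4 GiB. A sketch of the arithmetic with hypothetical register values (not taken from any real board):

```python
GiB = 1 << 30

tom   = 8 * GiB   # total DRAM installed (hypothetical)
tolud = 2 * GiB   # top of low usable DRAM -> a 2 GiB MMIO hole below 4 GiB

# DRAM overlapped by the hole is reclaimed above 4 GiB; TOUUD marks its end.
reclaimed = tom - tolud
touud = 4 * GiB + reclaimed
print(touud // GiB)  # 10
```

With these numbers a 32-bit kernel without PAE sees only the 2 GiB below the hole, while a PAE or 64-bit kernel sees all 8 GiB (2 GiB low plus 6 GiB remapped between 4 GiB and 10 GiB), which is exactly the trade-off being debated in this thread.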
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On Mon, Jun 6, 2016 at 10:36 PM, ron minnich wrote:
> I'm getting the sense here that reasonably modern CPUs can easily
> handle the 2G hole. From what I've seen, it would not cause trouble
> for older CPUs because they're most likely to be in small systems that
> are not likely to have more than 2G memory anyway (I'm thinking of the
> vortex).

Not that I would particularly care, but was i945 able to reclaim
memory from below 4GiB to above 4GiB? There used to be a fair amount
of Lenovo T60/X60(s) users.

Additionally, early Atom (model_106cx) might have a 32-bit physical
address space only, without PAE.

Kyösti

> The 2G hole seems like a reasonable way to go.
>
> ron
>
> On Mon, Jun 6, 2016 at 1:01 AM Gerd Hoffmann wrote:
>> Hi,
>>
>>> I think one can go with 2GB MMIO hole.
>>
>> Agreeing here. We have PAE. Non-ancient 32bit kernels should support
>> and use it, for both security reasons (NX support requires PAE page
>> table format) and accessing physical address space above 4G.
>>
>>> The PCIe > 4GB is a question, I don't
>>> think Windows have good support for this.
>>
>> Depends on the version. Recent windows versions have no problems
>> handling it. WinXP throws a BSOD though in case it finds a 64bit mmio
>> window described in \_SB.PCI0._CRS ...
>>
>> cheers,
>> Gerd
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
Patrick Rudolph wrote:
> The easy way is to use 2G.

Do this now.

> The preferred way would be to mimic mrc behaviour and reboot after
> finding the correct size.

As Ron writes, I don't think it's necessarily preferable to do something
significantly more complex if it isn't actually significantly better.
Keep it simple, so to say.

Thanks!

//Peter
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On Mon, Jun 6, 2016 at 12:52 PM Patrick Rudolph wrote:
> To summarize:
> The easy way is to use 2G.
> The preferred way would be to mimic mrc behaviour and reboot after
> finding the correct size.

I'm not sure it's "easy vs. preferred" so much as
- simple that has no known failure cases (yet).
- best that requires that we have another variable we have to store in
  flash

At least to me, it's not as cut and dried as easy vs. preferred. The
preferred way adds a lot of moving runtime parts, and we'll need to
cover for the case that the stored variable gets damaged somehow. The
easy way is a runtime constant, which I kind of like.

If the "easy" way always works, i.e. we never find a case where it
causes a problem, then it seems superior to me.

ron
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
To summarize:
The easy way is to use 2G.
The preferred way would be to mimic mrc behaviour and reboot after
finding the correct size.

On 2016-06-06 09:36 PM, ron minnich wrote:
> I'm getting the sense here that reasonably modern CPUs can easily
> handle the 2G hole. From what I've seen, it would not cause trouble
> for older CPUs because they're most likely to be in small systems that
> are not likely to have more than 2G memory anyway (I'm thinking of the
> vortex).
>
> The 2G hole seems like a reasonable way to go.
>
> ron
>
> On Mon, Jun 6, 2016 at 1:01 AM Gerd Hoffmann wrote:
>> Hi,
>>
>>> I think one can go with 2GB MMIO hole.
>>
>> Agreeing here. We have PAE. Non-ancient 32bit kernels should support
>> and use it, for both security reasons (NX support requires PAE page
>> table format) and accessing physical address space above 4G.
>>
>>> The PCIe > 4GB is a question, I don't
>>> think Windows have good support for this.
>>
>> Depends on the version. Recent windows versions have no problems
>> handling it. WinXP throws a BSOD though in case it finds a 64bit mmio
>> window described in \_SB.PCI0._CRS ...
>>
>> cheers,
>> Gerd
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
I'm getting the sense here that reasonably modern CPUs can easily handle
the 2G hole. From what I've seen, it would not cause trouble for older
CPUs because they're most likely to be in small systems that are not
likely to have more than 2G memory anyway (I'm thinking of the vortex).

The 2G hole seems like a reasonable way to go.

ron

On Mon, Jun 6, 2016 at 1:01 AM Gerd Hoffmann wrote:
> Hi,
>
>> I think one can go with 2GB MMIO hole.
>
> Agreeing here. We have PAE. Non-ancient 32bit kernels should support
> and use it, for both security reasons (NX support requires PAE page
> table format) and accessing physical address space above 4G.
>
>> The PCIe > 4GB is a question, I don't
>> think Windows have good support for this.
>
> Depends on the version. Recent windows versions have no problems
> handling it. WinXP throws a BSOD though in case it finds a 64bit mmio
> window described in \_SB.PCI0._CRS ...
>
> cheers,
> Gerd
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
Hi,

> I think one can go with 2GB MMIO hole.

Agreeing here. We have PAE. Non-ancient 32bit kernels should support
and use it, for both security reasons (NX support requires PAE page
table format) and accessing physical address space above 4G.

> The PCIe > 4GB is a question, I don't
> think Windows have good support for this.

Depends on the version. Recent windows versions have no problems
handling it. WinXP throws a BSOD though in case it finds a 64bit mmio
window described in \_SB.PCI0._CRS ...

cheers,
  Gerd
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
Hi all,

Most 32-bit kernels (Unix/OS/whatever) usually have PAE support, so in
fact they can cope with 36 bits of physical address. CPU PAE support
started around the Pentium Pro, and Windows XP and later support it.

I think one can go with a 2GB MMIO hole. PCIe above 4GB is a question; I
don't think Windows has good support for this. So, using a 2GB memory
hole and keeping MMIO < 4GB seems like a good idea.

Thanks,
Rudolf
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On Sat, Jun 4, 2016 at 12:00 PM, Zoran Stojsavljevic wrote:
> Hello to all,
>
> If I correctly remember: PCIe configuration space addressing consists
> of 3 parts: bus (8 bits), device (5 bits) and function (3 bits). This
> gives in total 8+5+3 = 16 bits, thus 2^16 (65536) functions. With 256
> bytes of legacy config space per function, that gives a maximum of
> 16MB of configuration address space (just below TOM). With 4KB per
> function (extended config space), it gives 256MB. Here it is:
> https://en.wikipedia.org/wiki/PCI_configuration_space

For extended config space it's 256 MiB total.

> The question about system memory beyond the PCIe bridge is another
> question. Seems to me that 2GB is too much (not compliant with
> IA32/x86, only with the x86_64 architecture). Thus I agree with Aaron.
> Hard to comply with IA32/x86.
>
> But in the past I saw similar behavior with BIOSes (double reboot).
> Not sure if this is the real reason (Aaron writes: Intel UEFI systems
> do a reboot after calculating the MMIO space requirements so that the
> memory init code knows how much to allocate for MMIO resource
> allocation). Hard to imagine for UEFI 32bit compliant BIOSes; all
> 64-bit BIOSes are UEFI compliant.
>
> If Aaron is correct, I've learned something new. ;-)

It's a trade-off of accessible memory (DRAM) vs simplification. If the
kernel you are booting is 64-bit, it matters nothing at all using a 2GiB
I/O hole. If you care about 32-bit kernels, then you care more about the
size of the I/O hole. As I noted previously, I haven't been concerned
with 32-bit kernels for over a decade, but that's just what I'm used to
and am concerned about. Obviously, if 32-bit kernels are a concern, then
you wouldn't use a 2GiB hole when trying to maximize DRAM access.

> Thank you,
> Zoran
>
> On Sat, Jun 4, 2016 at 6:49 PM, ron minnich wrote:
>> Another Kconfig option? How many people will really understand what
>> it means and whether to use it?
>>
>> Has just reserving 2 GiB as a hard and fast rule hurt anyone yet?
>>
>> thanks
>>
>> ron
>>
>> On Fri, Jun 3, 2016 at 11:25 PM Patrick Rudolph wrote:
>>> On 2016-06-03 05:41 PM, Aaron Durbin via coreboot wrote:
>>> > On Fri, Jun 3, 2016 at 7:04 AM, Patrick Rudolph wrote:
>>> >> Hello,
>>> >> I want to start a discussion about PCI MMIO size that hit me a
>>> >> couple of times using coreboot.
>>> >> I'm focused on Intel Sandybridge, but I guess this topic applies
>>> >> to all x86 systems.
>>> >>
>>> >> On most Intel systems the PCI mmio size is hard-coded in coreboot
>>> >> to 1024Mbyte, but the value is hidden deep inside raminit code.
>>> >> The mmio size for dynamic resources is limited on top by PCIEXBAR,
>>> >> IOAPIC, ME stolen, ... that takes 128Mbyte and on the other end
>>> >> it's limited by graphics stolen and TSEG that require at least
>>> >> 40Mbyte.
>>> >> In total there's only space for two 256Mbyte PCI BARs, due to
>>> >> alignment.
>>> >> That's enough for systems that only have an Intel GPU, but it
>>> >> fails as soon as PCI devices use more than a single 256Mbyte BAR.
>>> >> The PCI mmio size is set in romstage, but PCI BARs are configured
>>> >> in ramstage.
>>> >>
>>> >> Following questions came to my mind:
>>> >> * How does the MRC handle this?
>>> >> * Should the user be able to modify PCI mmio size?
>>> >> * How to pass the required PCI mmio size to romstage?
>>> >>   A good place seems to be the mrc cache, but ramstage doesn't
>>> >>   know about its structure.
>>> >> * How is this solved on AMD systems?
>>> >> * Should the romstage scan PCI devices and count required BAR
>>> >>   size?
>>> >
>>> > In the past (not sure if it's still true), Intel UEFI systems do a
>>> > reboot after calculating the MMIO space requirements so that the
>>> > memory init code knows how much to allocate for MMIO resource
>>> > allocation. I always found that distasteful and I have always used
>>> > a 2GiB I/O hole ever since. I never cared about 32-bit kernels so I
>>> > always found that to be a decent tradeoff. It also makes MTRR and
>>> > address space easy to digest when looking at things.
>>>
>>> I like the idea of a reboot. It only has to be done after hardware
>>> changes that affect the PCI mmio size.
>>> With mrc cache in place it shouldn't be noticeable at all.
>>>
>>> On the other hand, hard-coding the limit is much simpler.
>>> What do you think about a Kconfig option
>>> "Optimize PCI mmio size for x86_64 OS"?
>>>
>>> It would increase the size to 2GiB. Of course it would work on i386,
>>> but you might see less usable DRAM than before.
>>>
>>> >> Regards,
>>> >> Patrick
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
Hello to all,

If I correctly remember: PCIe configuration space addressing consists of
3 parts: bus (8 bits), device (5 bits) and function (3 bits). This gives
in total 8+5+3 = 16 bits, thus 2^16 (65536) functions. With 256 bytes of
legacy config space per function, that gives a maximum of 16MB of
configuration address space (just below TOM). With 4KB per function
(extended config space), it gives 256MB. Here it is:
https://en.wikipedia.org/wiki/PCI_configuration_space

The question about system memory beyond the PCIe bridge is another
question. Seems to me that 2GB is too much (not compliant with IA32/x86,
only with the x86_64 architecture). Thus I agree with Aaron. Hard to
comply with IA32/x86.

But in the past I saw similar behavior with BIOSes (double reboot). Not
sure if this is the real reason (Aaron writes: Intel UEFI systems do a
reboot after calculating the MMIO space requirements so that the memory
init code knows how much to allocate for MMIO resource allocation).
Hard to imagine for UEFI 32bit compliant BIOSes; all 64-bit BIOSes are
UEFI compliant.

If Aaron is correct, I've learned something new. ;-)

Thank you,
Zoran

On Sat, Jun 4, 2016 at 6:49 PM, ron minnich wrote:
> Another Kconfig option? How many people will really understand what it
> means and whether to use it?
>
> Has just reserving 2 GiB as a hard and fast rule hurt anyone yet?
>
> thanks
>
> ron
>
> On Fri, Jun 3, 2016 at 11:25 PM Patrick Rudolph wrote:
>> On 2016-06-03 05:41 PM, Aaron Durbin via coreboot wrote:
>> > On Fri, Jun 3, 2016 at 7:04 AM, Patrick Rudolph wrote:
>> >> Hello,
>> >> I want to start a discussion about PCI MMIO size that hit me a
>> >> couple of times using coreboot.
>> >> I'm focused on Intel Sandybridge, but I guess this topic applies
>> >> to all x86 systems.
>> >>
>> >> On most Intel systems the PCI mmio size is hard-coded in coreboot
>> >> to 1024Mbyte, but the value is hidden deep inside raminit code.
>> >> The mmio size for dynamic resources is limited on top by PCIEXBAR,
>> >> IOAPIC, ME stolen, ... that takes 128Mbyte and on the other end
>> >> it's limited by graphics stolen and TSEG that require at least
>> >> 40Mbyte.
>> >> In total there's only space for two 256Mbyte PCI BARs, due to
>> >> alignment.
>> >> That's enough for systems that only have an Intel GPU, but it
>> >> fails as soon as PCI devices use more than a single 256Mbyte BAR.
>> >> The PCI mmio size is set in romstage, but PCI BARs are configured
>> >> in ramstage.
>> >>
>> >> Following questions came to my mind:
>> >> * How does the MRC handle this?
>> >> * Should the user be able to modify PCI mmio size?
>> >> * How to pass the required PCI mmio size to romstage?
>> >>   A good place seems to be the mrc cache, but ramstage doesn't
>> >>   know about its structure.
>> >> * How is this solved on AMD systems?
>> >> * Should the romstage scan PCI devices and count required BAR size?
>> >
>> > In the past (not sure if it's still true), Intel UEFI systems do a
>> > reboot after calculating the MMIO space requirements so that the
>> > memory init code knows how much to allocate for MMIO resource
>> > allocation. I always found that distasteful and I have always used
>> > a 2GiB I/O hole ever since. I never cared about 32-bit kernels so I
>> > always found that to be a decent tradeoff. It also makes MTRR and
>> > address space easy to digest when looking at things.
>>
>> I like the idea of a reboot. It only has to be done after hardware
>> changes that affect the PCI mmio size.
>> With mrc cache in place it shouldn't be noticeable at all.
>>
>> On the other hand, hard-coding the limit is much simpler.
>> What do you think about a Kconfig option
>> "Optimize PCI mmio size for x86_64 OS"?
>>
>> It would increase the size to 2GiB. Of course it would work on i386,
>> but you might see less usable DRAM than before.
>>
>> >> Regards,
>> >> Patrick
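Zoran's numbers can be checked directly: 16 bits of bus/device/function select 65536 functions, each with 256 bytes of legacy config space or 4 KiB of extended (ECAM) config space:

```python
# 8 bus bits + 5 device bits + 3 function bits = 16 bits of BDF
functions = (1 << 8) * (1 << 5) * (1 << 3)   # 65536 addressable functions

legacy_mib = functions * 256 // (1 << 20)    # 256 B of legacy config each
ecam_mib   = functions * 4096 // (1 << 20)   # 4 KiB of extended config each
print(legacy_mib, ecam_mib)  # 16 256
```

So a full ECAM window for 256 buses costs 256 MiB of the MMIO hole, which is exactly why PCIEXBAR shows up as one of the fixed reservations in this discussion (chipsets can also be configured for 64 or 128 buses to shrink it).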
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
Another Kconfig option? How many people will really understand what it
means and whether to use it?

Has just reserving 2 GiB as a hard and fast rule hurt anyone yet?

thanks

ron

On Fri, Jun 3, 2016 at 11:25 PM Patrick Rudolph wrote:
> On 2016-06-03 05:41 PM, Aaron Durbin via coreboot wrote:
> > On Fri, Jun 3, 2016 at 7:04 AM, Patrick Rudolph wrote:
> >> Hello,
> >> I want to start a discussion about PCI MMIO size that hit me a
> >> couple of times using coreboot.
> >> I'm focused on Intel Sandybridge, but I guess this topic applies to
> >> all x86 systems.
> >>
> >> On most Intel systems the PCI mmio size is hard-coded in coreboot
> >> to 1024Mbyte, but the value is hidden deep inside raminit code.
> >> The mmio size for dynamic resources is limited on top by PCIEXBAR,
> >> IOAPIC, ME stolen, ... that takes 128Mbyte and on the other end
> >> it's limited by graphics stolen and TSEG that require at least
> >> 40Mbyte.
> >> In total there's only space for two 256Mbyte PCI BARs, due to
> >> alignment.
> >> That's enough for systems that only have an Intel GPU, but it fails
> >> as soon as PCI devices use more than a single 256Mbyte BAR.
> >> The PCI mmio size is set in romstage, but PCI BARs are configured
> >> in ramstage.
> >>
> >> Following questions came to my mind:
> >> * How does the MRC handle this?
> >> * Should the user be able to modify PCI mmio size?
> >> * How to pass the required PCI mmio size to romstage?
> >>   A good place seems to be the mrc cache, but ramstage doesn't know
> >>   about its structure.
> >> * How is this solved on AMD systems?
> >> * Should the romstage scan PCI devices and count required BAR size?
> >
> > In the past (not sure if it's still true), Intel UEFI systems do a
> > reboot after calculating the MMIO space requirements so that the
> > memory init code knows how much to allocate for MMIO resource
> > allocation. I always found that distasteful and I have always used a
> > 2GiB I/O hole ever since. I never cared about 32-bit kernels so I
> > always found that to be a decent tradeoff. It also makes MTRR and
> > address space easy to digest when looking at things.
>
> I like the idea of a reboot. It only has to be done after hardware
> changes that affect the PCI mmio size.
> With mrc cache in place it shouldn't be noticeable at all.
>
> On the other hand, hard-coding the limit is much simpler.
> What do you think about a Kconfig option
> "Optimize PCI mmio size for x86_64 OS"?
>
> It would increase the size to 2GiB. Of course it would work on i386,
> but you might see less usable DRAM than before.
>
> >> Regards,
> >> Patrick
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On 2016-06-03 05:41 PM, Aaron Durbin via coreboot wrote:
> On Fri, Jun 3, 2016 at 7:04 AM, Patrick Rudolph wrote:
>> Hello,
>> I want to start a discussion about PCI MMIO size that hit me a couple
>> of times using coreboot.
>> I'm focused on Intel Sandybridge, but I guess this topic applies to
>> all x86 systems.
>>
>> On most Intel systems the PCI mmio size is hard-coded in coreboot to
>> 1024Mbyte, but the value is hidden deep inside raminit code.
>> The mmio size for dynamic resources is limited on top by PCIEXBAR,
>> IOAPIC, ME stolen, ... that takes 128Mbyte and on the other end it's
>> limited by graphics stolen and TSEG that require at least 40Mbyte.
>> In total there's only space for two 256Mbyte PCI BARs, due to
>> alignment.
>> That's enough for systems that only have an Intel GPU, but it fails
>> as soon as PCI devices use more than a single 256Mbyte BAR.
>> The PCI mmio size is set in romstage, but PCI BARs are configured in
>> ramstage.
>>
>> Following questions came to my mind:
>> * How does the MRC handle this?
>> * Should the user be able to modify PCI mmio size?
>> * How to pass the required PCI mmio size to romstage?
>>   A good place seems to be the mrc cache, but ramstage doesn't know
>>   about its structure.
>> * How is this solved on AMD systems?
>> * Should the romstage scan PCI devices and count required BAR size?
>
> In the past (not sure if it's still true), Intel UEFI systems do a
> reboot after calculating the MMIO space requirements so that the
> memory init code knows how much to allocate for MMIO resource
> allocation. I always found that distasteful and I have always used a
> 2GiB I/O hole ever since. I never cared about 32-bit kernels so I
> always found that to be a decent tradeoff. It also makes MTRR and
> address space easy to digest when looking at things.

I like the idea of a reboot. It only has to be done after hardware
changes that affect the PCI mmio size.
With mrc cache in place it shouldn't be noticeable at all.

On the other hand, hard-coding the limit is much simpler.
What do you think about a Kconfig option
"Optimize PCI mmio size for x86_64 OS"?

It would increase the size to 2GiB. Of course it would work on i386, but
you might see less usable DRAM than before.

>> Regards,
>> Patrick
Re: [coreboot] Discussion about dynamic PCI MMIO size on x86
On Fri, Jun 3, 2016 at 7:04 AM, Patrick Rudolph wrote:
> Hello,
> I want to start a discussion about PCI MMIO size that hit me a couple
> of times using coreboot.
> I'm focused on Intel Sandybridge, but I guess this topic applies to
> all x86 systems.
>
> On most Intel systems the PCI mmio size is hard-coded in coreboot to
> 1024Mbyte, but the value is hidden deep inside raminit code.
> The mmio size for dynamic resources is limited on top by PCIEXBAR,
> IOAPIC, ME stolen, ... that takes 128Mbyte and on the other end it's
> limited by graphics stolen and TSEG that require at least 40Mbyte.
> In total there's only space for two 256Mbyte PCI BARs, due to
> alignment.
> That's enough for systems that only have an Intel GPU, but it fails
> as soon as PCI devices use more than a single 256Mbyte BAR.
> The PCI mmio size is set in romstage, but PCI BARs are configured in
> ramstage.
>
> Following questions came to my mind:
> * How does the MRC handle this?
> * Should the user be able to modify PCI mmio size?
> * How to pass the required PCI mmio size to romstage?
>   A good place seems to be the mrc cache, but ramstage doesn't know
>   about its structure.
> * How is this solved on AMD systems?
> * Should the romstage scan PCI devices and count required BAR size?

In the past (not sure if it's still true), Intel UEFI systems do a
reboot after calculating the MMIO space requirements so that the memory
init code knows how much to allocate for MMIO resource allocation. I
always found that distasteful and I have always used a 2GiB I/O hole
ever since. I never cared about 32-bit kernels so I always found that to
be a decent tradeoff. It also makes MTRR and address space easy to
digest when looking at things.
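Aaron's point about MTRRs is easy to quantify: variable MTRRs cover naturally aligned power-of-two ranges, so (ignoring the subtractive trick of marking holes UC inside a larger WB range) covering low DRAM with additive write-back entries costs one MTRR per set bit in the size. A rough sketch:

```python
def wb_mtrrs(top_mib: int) -> int:
    """Variable MTRRs needed to cover [0, top) with additive WB entries:
    one naturally aligned power-of-two range per set bit in the size."""
    return bin(top_mib).count("1")

# 2 GiB of low DRAM vs. 3.5 GiB of low DRAM (sizes in MiB)
print(wb_mtrrs(2048), wb_mtrrs(3584))  # 1 3
```

With a fixed 2 GiB hole, the entire low-DRAM region is one power-of-two range, so a single variable MTRR covers it; odd hole sizes fragment the coverage and eat into the small pool of variable MTRRs, which is part of why the 2 GiB choice "makes MTRR and address space easy to digest".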
[coreboot] Discussion about dynamic PCI MMIO size on x86
Hello,
I want to start a discussion about PCI MMIO size that hit me a couple of
times using coreboot.
I'm focused on Intel Sandybridge, but I guess this topic applies to all
x86 systems.

On most Intel systems the PCI mmio size is hard-coded in coreboot to
1024Mbyte, but the value is hidden deep inside raminit code.
The mmio size for dynamic resources is limited on top by PCIEXBAR,
IOAPIC, ME stolen, ... that takes 128Mbyte and on the other end it's
limited by graphics stolen and TSEG that require at least 40Mbyte.
In total there's only space for two 256Mbyte PCI BARs, due to alignment.
That's enough for systems that only have an Intel GPU, but it fails as
soon as PCI devices use more than a single 256Mbyte BAR.
The PCI mmio size is set in romstage, but PCI BARs are configured in
ramstage.

Following questions came to my mind:
* How does the MRC handle this?
* Should the user be able to modify PCI mmio size?
* How to pass the required PCI mmio size to romstage?
  A good place seems to be the mrc cache, but ramstage doesn't know
  about its structure.
* How is this solved on AMD systems?
* Should the romstage scan PCI devices and count required BAR size?

Regards,
Patrick
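The "two 256 MiB BARs" claim above can be reproduced with a little arithmetic. A sketch assuming a 1024 MiB hole ending at 4 GiB, with the reservations Patrick lists; the concrete base addresses are illustrative, not actual Sandybridge register values:

```python
MiB = 1 << 20
TOP = 4096 * MiB                     # end of 32-bit address space (4 GiB)

hole_base = TOP - 1024 * MiB         # 1024 MiB MMIO hole below 4 GiB
dyn_base  = hole_base + 40 * MiB     # graphics stolen + TSEG at the bottom
dyn_limit = TOP - 128 * MiB          # PCIEXBAR, IOAPIC, ME stolen at the top

bar = 256 * MiB
# BARs are naturally aligned, so count the 256 MiB-aligned slots that fit
# entirely inside the dynamic window.
first = (dyn_base + bar - 1) & ~(bar - 1)
slots = [a for a in range(first, dyn_limit, bar) if a + bar <= dyn_limit]
print(len(slots))  # 2
```

The alignment requirement is what hurts: the window is 856 MiB, which sounds like room for three 256 MiB BARs, but the 40 MiB bottom reservation spoils the first aligned slot and the 128 MiB top reservation spoils the last.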