Re: [Qemu-devel] [PATCH v4 5/5] docs: update documentation considering PCIE-PCI bridge
On 08/08/17 21:21, Aleksandr Bezzubikov wrote:
> 2017-08-08 18:11 GMT+03:00 Laszlo Ersek:
>> one comment below
>>
>> On 08/05/17 22:27, Aleksandr Bezzubikov wrote:
>>
>>> +Capability layout (defined in include/hw/pci/pci_bridge.h):
>>> +
>>> +uint8_t id;        Standard PCI capability header field
>>> +uint8_t next;      Standard PCI capability header field
>>> +uint8_t len;       Standard PCI vendor-specific capability header field
>>> +
>>> +uint8_t type;      Red Hat vendor-specific capability type
>>> +List of currently existing types:
>>> +QEMU_RESERVE = 1
>>> +
>>> +uint32_t bus_res;  Minimum number of buses to reserve
>>> +
>>> +uint64_t io;       IO space to reserve
>>> +uint64_t mem;      Non-prefetchable memory to reserve
>>> +uint64_t mem_pref; Prefetchable memory to reserve
>>
>> (I apologize if I missed any concrete points from the past messages
>> regarding this structure.)
>>
>> How is the firmware supposed to know whether the prefetchable MMIO
>> reservation should be made in 32-bit or 64-bit address space? If we
>> reserve prefetchable MMIO outside of the 32-bit address space, then
>> hot-plugging a device without 64-bit MMIO support could fail.
>>
>> My earlier request, to distinguish "prefetchable_32" from
>> "prefetchable_64" (mutually exclusively), was so that firmware would
>> know whether to restrict the MMIO reservation to 32-bit address space.
>
> IIUC, now (in SeaBIOS at least) we just assign these PREF registers
> unconditionally, so the decision about the mode can be made based on a
> non-zero UPPER_PREF_LIMIT register.
> My idea was the same - we can just check whether the value fits into
> 16 bits (the PREF_LIMIT register size, i.e. 32-bit MMIO). Do we really
> need separate fields for that?

The PciBusDxe driver in edk2 tracks 32-bit and 64-bit MMIO resources
separately from each other, and other (independent) logic exists in it
that, on some conditions, allocates 64-bit MMIO BARs from 32-bit address
space. This is just to say that the distinction is intentional in
PciBusDxe.

Furthermore, the Platform Init spec v1.6 says the following (this is
what OVMF will have to comply with, in the "platform hook" called by
PciBusDxe):

> 12.6 PCI Hot Plug PCI Initialization Protocol
> EFI_PCI_HOT_PLUG_INIT_PROTOCOL.GetResourcePadding()
> ...
> Padding   The amount of resource padding that is required by the PCI
>           bus under the control of the specified HPC. Because the
>           caller does not know the size of this buffer, this buffer is
>           allocated by the callee and freed by the caller.
> ...
> The padding is returned in the form of ACPI (2.0 & 3.0) resource
> descriptors. The exact definition of each of the fields is the same as
> in the
> EFI_PCI_HOST_BRIDGE_RESOURCE_ALLOCATION_PROTOCOL.SubmitResources()
> function. See section 10.8 for the definition of this function.

Following that pointer:

> 10.8 PCI HostBridge Code Definitions
> 10.8.2 PCI Host Bridge Resource Allocation Protocol
>
> Table 8. ACPI 2.0 & 3.0 QWORD Address Space Descriptor Usage
>
> Byte    Byte    Data Description
> Offset  Length
> ...
> 0x03    0x01    Resource type:
>                   0: Memory range
>                   1: I/O range
>                   2: Bus number range
> ...
> 0x05    0x01    Type-specific flags. Ignored except as defined in
>                 Table 3-3 and Table 3-4 below.
>
> 0x06    0x08    Address Space Granularity. Used to differentiate
>                 between a 32-bit memory request and a 64-bit memory
>                 request. For a 32-bit memory request, this field
>                 should be set to 32. For a 64-bit memory request,
>                 this field should be set to 64. Ignored for I/O and
>                 bus resource requests. Ignored during
>                 GetProposedResources().

The "Table 3-3" and "Table 3-4" references under "Type-specific flags"
are out of date (spec bug); in reality those are:
- Table 10. I/O Resource Flag (Resource Type = 1) Usage,
- Table 11. Memory Resource Flag (Resource Type = 0) Usage.

The latter is relevant here:

> Table 11. Memory Resource Flag (Resource Type = 0) Usage
>
> Bits      Meaning
> ...
> Bit[2:1]  _MEM. Memory attributes.
>           Value and Meaning:
>             0  The memory is nonprefetchable.
>             1  Invalid.
>             2  Invalid.
>             3  The memory is prefetchable.
>           Note: The interpretation of these bits is somewhat different
>           from the ACPI Specification. According to the ACPI
>           Specification, a value of 0 implies noncacheable memory and
>           the value of 3 indicates prefetchable and cacheable memory.

So whatever OVMF sees in the capability, it must be able to translate to
the above representation.

Thanks
Laszlo
Re: [Qemu-devel] [PATCH v4 5/5] docs: update documentation considering PCIE-PCI bridge
2017-08-08 18:11 GMT+03:00 Laszlo Ersek:
> one comment below
>
> On 08/05/17 22:27, Aleksandr Bezzubikov wrote:
>
>> +Capability layout (defined in include/hw/pci/pci_bridge.h):
>> +
>> +uint8_t id;        Standard PCI capability header field
>> +uint8_t next;      Standard PCI capability header field
>> +uint8_t len;       Standard PCI vendor-specific capability header field
>> +
>> +uint8_t type;      Red Hat vendor-specific capability type
>> +List of currently existing types:
>> +QEMU_RESERVE = 1
>> +
>> +uint32_t bus_res;  Minimum number of buses to reserve
>> +
>> +uint64_t io;       IO space to reserve
>> +uint64_t mem;      Non-prefetchable memory to reserve
>> +uint64_t mem_pref; Prefetchable memory to reserve
>
> (I apologize if I missed any concrete points from the past messages
> regarding this structure.)
>
> How is the firmware supposed to know whether the prefetchable MMIO
> reservation should be made in 32-bit or 64-bit address space? If we
> reserve prefetchable MMIO outside of the 32-bit address space, then
> hot-plugging a device without 64-bit MMIO support could fail.
>
> My earlier request, to distinguish "prefetchable_32" from
> "prefetchable_64" (mutually exclusively), was so that firmware would
> know whether to restrict the MMIO reservation to 32-bit address space.

IIUC, now (in SeaBIOS at least) we just assign these PREF registers
unconditionally, so the decision about the mode can be made based on a
non-zero UPPER_PREF_LIMIT register.
My idea was the same - we can just check whether the value fits into
16 bits (the PREF_LIMIT register size, i.e. 32-bit MMIO). Do we really
need separate fields for that?

> This is based on an earlier email from Alex to me:
>
> On 10/03/16 18:01, Alex Williamson wrote:
>> I don't think there's such a thing as a 64-bit non-prefetchable
>> aperture. In fact, there are not separate 32 and 64 bit prefetchable
>> apertures. The apertures are:
>>
>> I/O base/limit - (default 16bit, may be 32bit)
>> Memory base/limit - (32bit only, non-prefetchable)
>> Prefetchable Memory base/limit - (default 32bit, may be 64bit)
>>
>> This is according to Table 3-2 in the PCI-to-PCI bridge spec rev 1.2.
>
> I don't care much about the 16-bit vs. 32-bit IO difference (that's
> entirely academic, and the Platform Init spec doesn't even provide a
> way for OVMF to express such a difference). However, the optional
> restriction to 32-bit matters for the prefetchable MMIO aperture.
>
> Other than this, the patch looks good to me, and I'm ready to R-b.
>
> Thanks!
> Laszlo

--
Aleksandr Bezzubikov
Re: [Qemu-devel] [PATCH v4 5/5] docs: update documentation considering PCIE-PCI bridge
one comment below

On 08/05/17 22:27, Aleksandr Bezzubikov wrote:

> +Capability layout (defined in include/hw/pci/pci_bridge.h):
> +
> +uint8_t id;        Standard PCI capability header field
> +uint8_t next;      Standard PCI capability header field
> +uint8_t len;       Standard PCI vendor-specific capability header field
> +
> +uint8_t type;      Red Hat vendor-specific capability type
> +List of currently existing types:
> +QEMU_RESERVE = 1
> +
> +uint32_t bus_res;  Minimum number of buses to reserve
> +
> +uint64_t io;       IO space to reserve
> +uint64_t mem;      Non-prefetchable memory to reserve
> +uint64_t mem_pref; Prefetchable memory to reserve

(I apologize if I missed any concrete points from the past messages
regarding this structure.)

How is the firmware supposed to know whether the prefetchable MMIO
reservation should be made in 32-bit or 64-bit address space? If we
reserve prefetchable MMIO outside of the 32-bit address space, then
hot-plugging a device without 64-bit MMIO support could fail.

My earlier request, to distinguish "prefetchable_32" from
"prefetchable_64" (mutually exclusively), was so that firmware would
know whether to restrict the MMIO reservation to 32-bit address space.

This is based on an earlier email from Alex to me:

On 10/03/16 18:01, Alex Williamson wrote:
> I don't think there's such a thing as a 64-bit non-prefetchable
> aperture. In fact, there are not separate 32 and 64 bit prefetchable
> apertures. The apertures are:
>
> I/O base/limit - (default 16bit, may be 32bit)
> Memory base/limit - (32bit only, non-prefetchable)
> Prefetchable Memory base/limit - (default 32bit, may be 64bit)
>
> This is according to Table 3-2 in the PCI-to-PCI bridge spec rev 1.2.

I don't care much about the 16-bit vs. 32-bit IO difference (that's
entirely academic, and the Platform Init spec doesn't even provide a
way for OVMF to express such a difference). However, the optional
restriction to 32-bit matters for the prefetchable MMIO aperture.

Other than this, the patch looks good to me, and I'm ready to R-b.

Thanks!
Laszlo
[Qemu-devel] [PATCH v4 5/5] docs: update documentation considering PCIE-PCI bridge
Signed-off-by: Aleksandr Bezzubikov
---
 docs/pcie.txt            |  49 +++--
 docs/pcie_pci_bridge.txt | 110 +++
 2 files changed, 136 insertions(+), 23 deletions(-)
 create mode 100644 docs/pcie_pci_bridge.txt

diff --git a/docs/pcie.txt b/docs/pcie.txt
index 5bada24..76b85ec 100644
--- a/docs/pcie.txt
+++ b/docs/pcie.txt
@@ -46,7 +46,7 @@ Place only the following kinds of devices directly on the Root Complex:
     (2) PCI Express Root Ports (ioh3420), for starting exclusively PCI Express
         hierarchies.
-    (3) DMI-PCI Bridges (i82801b11-bridge), for starting legacy PCI
+    (3) PCI Express to PCI Bridge (pcie-pci-bridge), for starting legacy PCI
         hierarchies.
     (4) Extra Root Complexes (pxb-pcie), if multiple PCI Express Root Buses
@@ -55,18 +55,18 @@ Place only the following kinds of devices directly on the Root Complex:
            pcie.0 bus
    ----------------------------------------------------------------------
         |                |                   |                  |
-   -----------   ------------------   ------------------   --------------
-   | PCI Dev |   | PCIe Root Port |   | DMI-PCI Bridge |   |  pxb-pcie  |
-   -----------   ------------------   ------------------   --------------
+   -----------   ------------------   -------------------   --------------
+   | PCI Dev |   | PCIe Root Port |   | PCIe-PCI Bridge |   |  pxb-pcie  |
+   -----------   ------------------   -------------------   --------------

 2.1.1 To plug a device into pcie.0 as a Root Complex Integrated Endpoint use:
           -device <dev>[,bus=pcie.0]
 2.1.2 To expose a new PCI Express Root Bus use:
           -device pxb-pcie,id=pcie.1,bus_nr=x[,numa_node=y][,addr=z]
-      Only PCI Express Root Ports and DMI-PCI bridges can be connected
-      to the pcie.1 bus:
+      PCI Express Root Ports and PCI Express to PCI bridges can be
+      connected to the pcie.1 bus:
           -device ioh3420,id=root_port1[,bus=pcie.1][,chassis=x][,slot=y][,addr=z] \
-          -device i82801b11-bridge,id=dmi_pci_bridge1,bus=pcie.1
+          -device pcie-pci-bridge,id=pcie_pci_bridge1,bus=pcie.1

 2.2 PCI Express only hierarchy
@@ -130,24 +130,24 @@ Notes:
 Legacy PCI devices can be plugged into pcie.0 as Integrated Endpoints,
 but, as mentioned in section 5, doing so means the legacy PCI device in
 question will be incapable of hot-unplugging.
-Besides that use DMI-PCI Bridges (i82801b11-bridge) in combination
-with PCI-PCI Bridges (pci-bridge) to start PCI hierarchies.
+Besides that use PCI Express to PCI Bridges (pcie-pci-bridge) in
+combination with PCI-PCI Bridges (pci-bridge) to start PCI hierarchies.

-Prefer flat hierarchies. For most scenarios a single DMI-PCI Bridge
+Prefer flat hierarchies. For most scenarios a single PCI Express to PCI Bridge
 (having 32 slots) and several PCI-PCI Bridges attached to it
 (each supporting also 32 slots) will support hundreds of legacy devices.
-The recommendation is to populate one PCI-PCI Bridge under the DMI-PCI Bridge
-until is full and then plug a new PCI-PCI Bridge...
+The recommendation is to populate one PCI-PCI Bridge under the
+PCI Express to PCI Bridge until is full and then plug a new PCI-PCI Bridge...

            pcie.0 bus
    ----------------------------------------------
         |                            |
-   -----------               ------------------
-   | PCI Dev |               | DMI-PCI BRIDGE |
-   -----------               ------------------
+   -----------               -------------------
+   | PCI Dev |               | PCIe-PCI Bridge |
+   -----------               -------------------
                                |            |
-                  ------------------    ------------------
-                  | PCI-PCI Bridge |    | PCI-PCI Bridge |  ...
+                  ------------------    ------------------
+                  | PCI-PCI Bridge |    | PCI-PCI Bridge |
                   ------------------    ------------------
                     |          |
              -----------  -----------
              | PCI Dev |  | PCI Dev |
              -----------  -----------

@@ -157,11 +157,11 @@
 2.3.1 To plug a PCI device into pcie.0 as an Integrated Endpoint use:
           -device <dev>[,bus=pcie.0]
 2.3.2 Plugging a PCI device into a PCI-PCI Bridge:
-          -device i82801b11-bridge,id=dmi_pci_bridge1[,bus=pcie.0] \
-          -device pci-bridge,id=pci_bridge1,bus=dmi_pci_bridge1[,chassis_nr=x][,addr=y] \
+          -device pcie-pci-bridge,id=pcie_pci_bridge1[,bus=pcie.0] \
+          -device pci-bridge,id=pci_bridge1,bus=pcie_pci_bridge1[,chassis_nr=x][,addr=y] \
           -device <dev>,bus=pci_bridge1[,addr=x]
       Note that 'addr' cannot be 0 unless shpc=off parameter is passed to
-      the PCI Bridge.
+      the PCI Bridge/PCI Express to PCI Bridge.

 3. IO space issues
 ===================
@@ -219,14 +219,16 @@
 do not support hot-plug, so any devices plugged in