On 12/21/2017 5:52 PM, Ard Biesheuvel wrote:
On 21 December 2017 at 09:48, Ni, Ruiyu <[email protected]> wrote:
On 12/21/2017 5:14 PM, Guo Heyi wrote:
On Thu, Dec 21, 2017 at 08:32:37AM +0000, Ard Biesheuvel wrote:
On 21 December 2017 at 08:27, Guo Heyi <[email protected]> wrote:
On Wed, Dec 20, 2017 at 03:26:45PM +0000, Ard Biesheuvel wrote:
On 20 December 2017 at 15:17, gary guo <[email protected]> wrote:
On Wed, Dec 20, 2017 at 09:13:58AM +0000, Ard Biesheuvel wrote:
Hi Heyi,
On 20 December 2017 at 08:21, Heyi Guo <[email protected]> wrote:
PCIe on some ARM platforms requires address translation, not only for
legacy IO access, but also for 32-bit memory BAR access. There will be
an "Address Translation Unit" or something similar in PCI host bridges
to translate CPU addresses to PCI addresses and vice versa. So we think
it may be useful to add address translation support to the generic PCI
host bridge driver.
I agree. While unusual on a PC, it is quite common on other
architectures to have more complex non-1:1 topologies, which currently
require a forked PciHostBridgeDxe driver with local changes applied.
This RFC only contains one minor change to the definition of
PciHostBridgeLib, and there will certainly be a lot of other changes
needed to make it work, including:
1. Use CPU addresses for GCD space add and allocate operations, instead
of PCI addresses; also, IO space will be changed to memory space if a
translation exists (see the sketch below).
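As an illustration of item 1, here is a minimal sketch in EDK2 style;
the structure name, the Translation field, its sign convention
(CPU address = PCI address + Translation), and the GCD call are
assumptions for this example, not part of the RFC:

//
// Sketch only: an aperture description carrying a translation offset.
// Convention assumed here: CpuAddress = PciAddress + Translation.
//
typedef struct {
  UINT64    Base;          // Aperture base, PCI (device) view
  UINT64    Limit;         // Aperture limit, PCI (device) view
  UINT64    Translation;   // Offset from the PCI view to the CPU view
} HYPOTHETICAL_APERTURE;

//
// The host bridge driver would register the CPU view of the aperture
// with GCD, not the PCI view. (gDS comes from DxeServicesTableLib;
// EFI_MEMORY_UC is just an example capability.)
//
STATIC
EFI_STATUS
AddTranslatedMemAperture (
  IN HYPOTHETICAL_APERTURE  *Aperture
  )
{
  UINT64  CpuBase;

  CpuBase = Aperture->Base + Aperture->Translation;

  return gDS->AddMemorySpace (
                EfiGcdMemoryTypeMemoryMappedIo,
                CpuBase,
                Aperture->Limit - Aperture->Base + 1,
                EFI_MEMORY_UC
                );
}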
For I/O space, the translation should simply be applied to the I/O
range. I don't think it makes sense to use memory space here, given
that it is specific to architectures that lack native port I/O.
I made an assumption here that platforms supporting a real port IO
space, such as IA32 and X64, do not need address translation, and that
port IO translation implies the platform does not support a real port
IO space.
This may be a reasonable assumption. But I still think it is better
not to encode any assumptions in the first place.
Indeed the assumption is not so "generic", so I'll agree if you
recommend supporting IO-to-IO translation as well. But I still hope to
have IO-to-memory translation support in the PCI host bridge driver,
rather than in the CPU IO protocol, since the faked IO space might only
be used for the PCI host bridge, and we may have overlapping IO ranges
for each host bridge, which is compatible with the PCIe specification
and PCIe ACPI descriptions.
That is fine. Under UEFI, these will translate to non-overlapping I/O
spaces in the CPU's view. Under the OS, this could be totally
different.
For example,
RC0 IO 0x0000 .. 0xffff -> CPU 0x00000 .. 0x0ffff
RC1 IO 0x0000 .. 0xffff -> CPU 0x10000 .. 0x1ffff
This is very similar to how MMIO translation works, and makes I/O
devices behind the host bridges uniquely addressable for drivers.
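For illustration, with a fixed 64 KB stride this could be as simple as
the following sketch (the helper name and the stride are assumptions,
matching the example numbers above):

//
// Hypothetical helper: map a PCI I/O address behind root bridge N into
// the single CPU I/O space, 64 KB per root bridge as in the example
// above (RC0 -> 0x00000.., RC1 -> 0x10000.., ...).
//
STATIC
UINT64
PciIoToCpuIo (
  IN UINTN   RootBridgeIndex,
  IN UINT64  PciIoAddress      // 0x0000 .. 0xFFFF on every root bridge
  )
{
  return ((UINT64)RootBridgeIndex << 16) + PciIoAddress;
}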
For our understanding, could you share the host bridge configuration
that you are targeting?
IO translation on one of our platforms is as follows:
PCI IO space CPU memory space
0x0000 .. 0xffff -> 0xafff0000 .. 0xafffffff
(The sizes are always 0x10000, so I will omit the limits for the others)
0x0000 .. 0xffff -> 0x8abff0000
0x0000 .. 0xffff -> 0x8b7ff0000
......
The translated addresses may be beyond the 32-bit address range; will
this violate any IO space limitation? From the EDK2 code, I didn't see
such a limitation for IO space.
The MMIO address will not be used for I/O port addressing by the CPU.
The MMIO to IO translation is an implementation detail of your CpuIo2
protocol implementation.
So there will be two stacked translations, one for PCI I/O to CPU I/O,
and one for CPU I/O to CPU MMIO. The latter is transparent to the PCI
host bridge driver.
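Worked through with the numbers from this thread, for an arbitrary
example port 0x3F8 behind the second root bridge:

PCI I/O   0x3F8
  + PCI I/O to CPU I/O translation   (RC1 -> CPU I/O 0x10000 .. 0x1FFFF)
CPU I/O   0x103F8        <- what the PCI host bridge driver sees
  + CPU I/O to CPU MMIO translation  (RC1 I/O window at 0x8ABFF0000)
CPU MMIO  0x8ABFF03F8    <- the address the CPU actually loads/stores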
Yes, this should work.
Hi Star, Eric and Ruiyu,
Any comments on this RFC?
Let me confirm my understanding:
The PciHostBridge core driver/library interface changes only
take care of the MMIO translation.
Heyi, you will implement a special CpuIo driver in your
platform code to take care of the IO-to-MMIO translation.
But let me confirm: will you need to additionally translate
the MMIO (translated from IO) to another MMIO using an offset?
If yes, will you handle that translation in your CpuIo driver?
Hi Ray,
The issue is that several PCIe root complexes have colliding I/O ranges:
Ard,
The IO-to-MMIO mapping needs CPU support. I am not sure whether IA32 or
x64 supports it, but I guess ARM does, right?
Will all of the IO handling be implemented in the ARM CpuIo2 protocol?
PCI IO space CPU memory space
0x0000 .. 0xffff -> 0xafff0000 .. 0xafffffff
(The sizes are always 0x10000, so I will omit the limits for the others)
0x0000 .. 0xffff -> 0x8abff0000
0x0000 .. 0xffff -> 0x8b7ff0000
So the CPU view is different from the PCI view, and to create a single
CPU I/O space where all I/O port ranges behind all host bridges are
accessible, we need I/O translation for the CPU. This will result in
an intermediate representation:
PCI IO space CPU IO space
0x0000 .. 0xffff -> 0x00000 .. 0x0ffff
0x0000 .. 0xffff -> 0x10000 .. 0x1ffff
0x0000 .. 0xffff -> 0x20000 .. 0x2ffff
On top of that, given that ARM has no native port I/O instructions, we
will need to implement MMIO-to-IO translation, but this can be handled
in the CpuIo2 protocol.
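As a minimal sketch of that CpuIo2 approach (the single fixed offset,
similar in spirit to ArmPkg's PcdPciIoTranslation, is a simplifying
assumption; with several root bridge I/O windows a per-range lookup
would be needed instead):

//
// Sketch only: back port I/O with MMIO inside the CpuIo2 protocol.
//
STATIC
EFI_STATUS
EFIAPI
CpuIoServiceRead (
  IN     EFI_CPU_IO2_PROTOCOL       *This,
  IN     EFI_CPU_IO_PROTOCOL_WIDTH  Width,
  IN     UINT64                     Address,   // CPU I/O address
  IN     UINTN                      Count,
  IN OUT VOID                       *Buffer
  )
{
  //
  // Translate the CPU I/O address into the MMIO window that the host
  // bridge hardware turns into PCI I/O cycles, then reuse the normal
  // MMIO access path.
  //
  return This->Mem.Read (
                     This,
                     Width,
                     Address + PcdGet64 (PcdPciIoTranslation),
                     Count,
                     Buffer
                     );
}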
--
Thanks,
Ray