Re: tracking of PCI address space
On Wed, 2009-04-08 at 15:53 -0500, Kumar Gala wrote:
> I was wondering if we have anything that tracks regions associated
> with the inbound side of a pci_bus. What I mean is, on embedded PPC we
> have window/mapping registers for both inbound (accessing memory on
> the SoC) and outbound (accessing PCI device MMIO, IO, etc). The
> combination of the inbound and outbound windows conveys what exists in
> the PCI address space vs the CPU physical address space (and how to
> map from one to the other).
>
> Today in PPC land we only attach outbound windows to the pci_bus. So
> technically the inbound side information (like what subset of physical
> memory is visible on the PCI bus) seems to be lost.

On powerpc, we do keep track of the offset, but that's about it.

Tracking inbound ranges is very platform specific though. You can have
multiple inbound windows with different translations, in some cases some
via iommu and some not, or windows aliasing the same target memory but
with different attributes, etc... I don't think there's that much
interest in trying to create generic code to keep track of them.

Ben.

___
Linuxppc-dev mailing list
Linuxppc-dev@ozlabs.org
https://ozlabs.org/mailman/listinfo/linuxppc-dev
Re: tracking of PCI address space
On Wed, Apr 08, 2009 at 03:53:55PM -0500, Kumar Gala wrote:
> I was wondering if we have anything that tracks regions associated
> with the inbound side of a pci_bus. What I mean is, on embedded PPC we
> have window/mapping registers for both inbound (accessing memory on
> the SoC) and outbound (accessing PCI device MMIO, IO, etc). The
> combination of the inbound and outbound windows conveys what exists in
> the PCI address space vs the CPU physical address space (and how to
> map from one to the other).

Most PCI host bus controllers will negatively decode the outbound ranges
for inbound traffic. PA-RISC and IA64 have extra registers to play some
games with that, but the main intent there was routing between PCI bus
controllers to make them look like a single PCI segment. I've not found
any other uses that subvert that.

> Today in PPC land we only attach outbound windows to the pci_bus. So
> technically the inbound side information (like what subset of physical
> memory is visible on the PCI bus) seems to be lost.

What did you need the inbound routing map for?

thanks,
grant
RE: tracking of PCI address space
I agree. Every processor (SoC) has its own way of setting up inbound
windows. What I've noticed is that inbound regions are created big
enough to map the whole DDR region, and use the physical address of RAM
as the source/destination address. For example, if a PCI-E SATA card
wants to do DMA transfers to the DDR region, the driver will allocate a
dma_alloc_noncoherent() region and use its physical address as the
source/destination address for the data transfers.

From: linuxppc-dev-bounces+tmarri=amcc@ozlabs.org on behalf of Benjamin Herrenschmidt
Sent: Wed 4/8/2009 11:21 PM
To: Kumar Gala
Cc: linux-...@vger.kernel.org; Linux Kernel Mailing List; Jesse Barnes; Linux/PPC Development
Subject: Re: tracking of PCI address space

> On Wed, 2009-04-08 at 15:53 -0500, Kumar Gala wrote:
>> I was wondering if we have anything that tracks regions associated
>> with the inbound side of a pci_bus. What I mean is, on embedded PPC
>> we have window/mapping registers for both inbound (accessing memory
>> on the SoC) and outbound (accessing PCI device MMIO, IO, etc). The
>> combination of the inbound and outbound windows conveys what exists
>> in the PCI address space vs the CPU physical address space (and how
>> to map from one to the other).
>>
>> Today in PPC land we only attach outbound windows to the pci_bus. So
>> technically the inbound side information (like what subset of
>> physical memory is visible on the PCI bus) seems to be lost.
>
> On powerpc, we do keep track of the offset, but that's about it.
>
> Tracking inbound ranges is very platform specific though. You can have
> multiple inbound windows with different translations, in some cases
> some via iommu and some not, or windows aliasing the same target
> memory but with different attributes, etc... I don't think there's
> that much interest in trying to create generic code to keep track of
> them.
>
> Ben.
Re: tracking of PCI address space
On Apr 8, 2009, at 4:49 PM, Ira Snyder wrote:
> On Wed, Apr 08, 2009 at 03:53:55PM -0500, Kumar Gala wrote:
>> I was wondering if we have anything that tracks regions associated
>> with the inbound side of a pci_bus. What I mean is, on embedded PPC
>> we have window/mapping registers for both inbound (accessing memory
>> on the SoC) and outbound (accessing PCI device MMIO, IO, etc). The
>> combination of the inbound and outbound windows conveys what exists
>> in the PCI address space vs the CPU physical address space (and how
>> to map from one to the other).
>>
>> Today in PPC land we only attach outbound windows to the pci_bus. So
>> technically the inbound side information (like what subset of
>> physical memory is visible on the PCI bus) seems to be lost.
>
> To the best of my knowledge there is no API to set inbound windows in
> Linux. I've been implementing a virtio-over-PCI driver which needs the
> inbound windows. I set them up myself during driver probe, using
> get_immrbase() to get the IMMR registers. This board is a PCI
> Slave/Agent; it doesn't even have PCI support compiled into the
> kernel.

I'm not concerned explicitly with setting up inbound windows; it's more
about having a consistent view of the PCI address space, which may be
different from the CPU physical address space.

I'm working on code to actually set up the inbound windows on 85xx/86xx
class devices (based on the dma-ranges property in the device tree). As
I was thinking about this I realized that the sense of
ranges/dma-ranges in the .dts, and what we map to outbound vs inbound,
changes depending on whether we are an agent or a host.

- k
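The ranges/dma-ranges split Kumar describes can be made concrete with a device-tree fragment in the usual PCI binding encoding: `ranges` describes the outbound windows (CPU physical to PCI bus address) and `dma-ranges` describes the inbound window (PCI bus address to system memory). This is an illustrative sketch, not taken from any real board — the node name, unit address, and all window addresses/sizes below are made-up example values.

```dts
pci0: pcie@e0008000 {
	/* compatible, reg, #address-cells = <3>, #size-cells = <2>,
	 * interrupt properties, etc. elided for brevity */

	/* Outbound windows (host view):
	 * - 256 MB of 32-bit PCI memory space at CPU physical 0x80000000,
	 *   appearing at PCI bus address 0x80000000
	 * - 64 KB of PCI I/O space at CPU physical 0xe1000000 */
	ranges = <0x02000000 0x0 0x80000000  0x80000000  0x0 0x10000000
	          0x01000000 0x0 0x00000000  0xe1000000  0x0 0x00010000>;

	/* Inbound window: PCI bus address 0x0 maps to system memory at
	 * physical 0x0, sized to cover 1 GB of DDR. */
	dma-ranges = <0x02000000 0x0 0x00000000  0x00000000  0x0 0x40000000>;
};
```

In agent mode the roles flip from the host's point of view: what the host programs as an outbound window targets the agent's inbound window, which is why the same `.dts` properties end up mapped to opposite window registers depending on host vs agent.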