On 11/6/25 6:16 AM, Leon Romanovsky wrote:
> From: Jason Gunthorpe <[email protected]>
> 
> Reflect latest changes in p2p implementation to support DMABUF lifecycle.
> 
> Signed-off-by: Leon Romanovsky <[email protected]>
> Signed-off-by: Jason Gunthorpe <[email protected]>
> ---
>  Documentation/driver-api/pci/p2pdma.rst | 95 +++++++++++++++++++++++++--------
>  1 file changed, 72 insertions(+), 23 deletions(-)
> 
> diff --git a/Documentation/driver-api/pci/p2pdma.rst b/Documentation/driver-api/pci/p2pdma.rst
> index d0b241628cf1..69adea45f73e 100644
> --- a/Documentation/driver-api/pci/p2pdma.rst
> +++ b/Documentation/driver-api/pci/p2pdma.rst
> @@ -9,22 +9,47 @@ between two devices on the bus. This type of transaction is henceforth
>  called Peer-to-Peer (or P2P). However, there are a number of issues that
>  make P2P transactions tricky to do in a perfectly safe way.
>  
> -One of the biggest issues is that PCI doesn't require forwarding
> -transactions between hierarchy domains, and in PCIe, each Root Port
> -defines a separate hierarchy domain. To make things worse, there is no
> -simple way to determine if a given Root Complex supports this or not.
> -(See PCIe r4.0, sec 1.3.1). Therefore, as of this writing, the kernel
> -only supports doing P2P when the endpoints involved are all behind the
> -same PCI bridge, as such devices are all in the same PCI hierarchy
> -domain, and the spec guarantees that all transactions within the
> -hierarchy will be routable, but it does not require routing
> -between hierarchies.
> -
> -The second issue is that to make use of existing interfaces in Linux,
> -memory that is used for P2P transactions needs to be backed by struct
> -pages. However, PCI BARs are not typically cache coherent so there are
> -a few corner case gotchas with these pages so developers need to
> -be careful about what they do with them.
> +For PCIe the routing of TLPs is well defined up until they reach a host bridge

Define TLP (Transaction Layer Packet) at first use?
                                   well-defined

> +or root port. If the path includes PCIe switches then based on the ACS settings
> +the transaction can route entirely within the PCIe hierarchy and never reach the
> +root port. The kernel will evaluate the PCIe topology and always permit P2P
> +in these well defined cases.

            well-defined

> +
> +However, if the P2P transaction reaches the host bridge then it might have to
> +hairpin back out the same root port, be routed inside the CPU SOC to another
> +PCIe root port, or routed internally to the SOC.
> +
> +As this is not well defined or well supported in real HW the kernel defaults to

                  well-defined or well-supported

> +blocking such routing. There is an allow list to allow detecting known-good HW,
> +in which case P2P between any two PCIe devices will be permitted.
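
For completeness, might be worth cross-referencing the client API that
exposes this check. A minimal sketch using the existing helper (the
function name is mine, the rest is the in-tree API):

    #include <linux/pci-p2pdma.h>

    /*
     * pci_p2pdma_distance_many() returns a non-negative "distance" when
     * the kernel will permit P2P between the provider and all clients,
     * or -1 when routing is blocked (e.g. the host bridge is not on the
     * allow list).
     */
    static bool my_p2p_possible(struct pci_dev *provider,
                                struct device **clients, int nr_clients)
    {
            return pci_p2pdma_distance_many(provider, clients,
                                            nr_clients, true) >= 0;
    }
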
> +
> +Since P2P inherently is doing transactions between two devices it requires two
> +drivers to be co-operating inside the kernel. The providing driver has to convey
> +its MMIO to the consuming driver. To meet the driver model lifecycle rules the
> +MMIO must have all DMA mapping removed, all CPU accesses prevented, all page
> +table mappings undone before the providing driver completes remove().
> +
> +This requires the providing and consuming driver to actively work together to
> +guarantee that the consuming driver has stopped using the MMIO during a removal
> +cycle. This is done by either a synchronous invalidation shutdown or waiting
> +for all usage refcounts to reach zero.
> +
> +At the lowest level the P2P subsystem offers a naked struct p2p_provider that
> +delegates lifecycle management to the providing driver. It is expected that
> +drivers using this option will wrap their MMIO memory in DMABUF and use DMABUF
> +to provide an invalidation shutdown. These MMIO pages have no struct page, and
> +if used with mmap() must create special PTEs. As such there are very few
> +kernel uAPIs that can accept pointers to them, in particular they cannot be used

                                            them;

> +with read()/write(), including O_DIRECT.
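
An exporter-side sketch could help readers here. Everything below is the
stock DMABUF export API; only the comment reflects the lifecycle rule
from this series, and my_export_bar() is an invented name:

    #include <linux/dma-buf.h>
    #include <linux/fcntl.h>

    static struct dma_buf *my_export_bar(struct pci_dev *pdev,
                                         const struct dma_buf_ops *ops,
                                         size_t size)
    {
            DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

            /*
             * The dma_buf_ops must be able to deliver an invalidation
             * shutdown (dma_buf_move_notify()) before the p2p_provider
             * is destroyed on remove().
             */
            exp_info.ops = ops;
            exp_info.size = size;
            exp_info.flags = O_CLOEXEC;
            exp_info.priv = pdev;
            return dma_buf_export(&exp_info);
    }
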
> +
> +Building on this, the subsystem offers a layer to wrap the MMIO in a ZONE_DEVICE
> +pgmap of MEMORY_DEVICE_PCI_P2PDMA to create struct pages. The lifecycle of
> +pgmap ensures that when the pgmap is destroyed all other drivers have stopped
> +using the MMIO. This option works with O_DIRECT flows, in some cases, if the
> +underlying subsystem supports handling MEMORY_DEVICE_PCI_P2PDMA through
> +FOLL_PCI_P2PDMA. The use of FOLL_LONGTERM is prevented. As this relies on pgmap
> +it also relies on architecture support along with alignment and minimum size
> +limitations.
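
A provider-side sketch of this layer might help too (the BAR index is
made up; the calls are the existing pci-p2pdma API):

    #include <linux/pci-p2pdma.h>

    static int my_setup_p2pmem(struct pci_dev *pdev)
    {
            int ret;

            /*
             * Wrap all of BAR 4 in a MEMORY_DEVICE_PCI_P2PDMA pgmap;
             * this is where the arch support and alignment/minimum size
             * limitations apply.
             */
            ret = pci_p2pdma_add_resource(pdev, 4, 0, 0);
            if (ret)
                    return ret;

            /* Let orchestrators find this memory */
            pci_p2pmem_publish(pdev, true);
            return 0;
    }
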
>  
>  
>  Driver Writer's Guide
> @@ -114,14 +139,38 @@ allocating scatter-gather lists with P2P memory.
>  Struct Page Caveats
>  -------------------
>  
> -Driver writers should be very careful about not passing these special
> -struct pages to code that isn't prepared for it. At this time, the kernel
> -interfaces do not have any checks for ensuring this. This obviously
> -precludes passing these pages to userspace.
> +While the MEMORY_DEVICE_PCI_P2PDMA pages can be installed in VMAs,
> +pin_user_pages() and related will not return them unless FOLL_PCI_P2PDMA is set.
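
Maybe show the flag in context; a small sketch (my_pin_user_buf() is
invented, pin_user_pages_fast() is the real gup API):

    #include <linux/mm.h>

    static int my_pin_user_buf(unsigned long uaddr, int nr_pages,
                               struct page **pages)
    {
            /*
             * Without FOLL_PCI_P2PDMA this fails on P2PDMA-backed VMAs;
             * combining it with FOLL_LONGTERM is rejected.
             */
            return pin_user_pages_fast(uaddr, nr_pages,
                                       FOLL_WRITE | FOLL_PCI_P2PDMA,
                                       pages);
    }
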
>  
> -P2P memory is also technically IO memory but should never have any side
> -effects behind it. Thus, the order of loads and stores should not be important
> -and ioreadX(), iowriteX() and friends should not be necessary.
> +The MEMORY_DEVICE_PCI_P2PDMA pages require care to support in the kernel. The
> +KVA is still MMIO and must still be accessed through the normal
> +readX()/writeX()/etc helpers. Direct CPU access (e.g. memcpy) is forbidden, just
> +like any other MMIO mapping. While this will actually work on some
> +architectures, others will experience corruption or just crash in the kernel.
> +Supporting FOLL_PCI_P2PDMA in a subsystem requires scrubbing it to ensure no CPU
> +access happens.
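
Could be worth a short example of what "normal helpers" means for these
pages (a sketch; my_read32() is invented):

    #include <linux/io.h>
    #include <linux/mm.h>

    static u32 my_read32(struct page *page, unsigned int offset)
    {
            void __iomem *kva =
                    (void __iomem *)(page_address(page) + offset);

            /*
             * Use readl()/memcpy_fromio() and friends, never a plain
             * load or memcpy(): some architectures corrupt data or
             * fault on direct CPU access to this KVA.
             */
            return readl(kva);
    }
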
> +
> +
> +Usage With DMABUF
> +=================
> +
> +DMABUF provides an alternative to the above struct page based

                                                      page-based

> +client/provider/orchestrator system. In this mode the exporting driver will wrap
> +some of its MMIO in a DMABUF and give the DMABUF FD to userspace.
> +
> +Userspace can then pass the FD to an importing driver which will ask the
> +exporting driver to map it.
> +
> +In this case the initiator and target pci_devices are known and the P2P subsystem
> +is used to determine the mapping type. The phys_addr_t based DMA API is used to

                                              phys_addr_t-based

> +establish the dma_addr_t.
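
Perhaps a line or two of importer-side context, assuming the phys_addr_t
API from this series (I'm guessing at dma_map_phys() and DMA_ATTR_MMIO
from the cover letter, so treat this as pseudocode):

    dma_addr_t dma;

    dma = dma_map_phys(importer_dev, mmio_phys, size,
                       DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);
    if (dma_mapping_error(importer_dev, dma))
            /* handle error */;
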
> +
> +Lifecycle is controlled by DMABUF move_notify(), when the exporting driver wants

                                     move_notify(). When

> +to remove() it must deliver an invalidation shutdown to all DMABUF importing
> +drivers through move_notify() and synchronously DMA unmap all the MMIO.
> +
> +No importing driver can continue to have a DMA map to the MMIO after the
> +exporting driver has destroyed its p2p_provider.
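
The remove() path might deserve a sketch as well; only the DMABUF calls
below are real API, the surrounding shape is invented:

    #include <linux/dma-buf.h>
    #include <linux/dma-resv.h>

    static void my_revoke_on_remove(struct dma_buf *dmabuf)
    {
            dma_resv_lock(dmabuf->resv, NULL);
            /* Tell every importer to stop using its mapping */
            dma_buf_move_notify(dmabuf);
            /*
             * ... synchronously DMA unmap all the MMIO here; only then
             * may the p2p_provider be destroyed ...
             */
            dma_resv_unlock(dmabuf->resv);
    }
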
>  
>  
>  P2P DMA Support Library
> 

-- 
~Randy
