Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-08 Thread Alex Williamson
On Tue, 8 May 2018 17:25:24 -0400, Don Dutile wrote: > On 05/08/2018 12:57 PM, Alex Williamson wrote: >> On Mon, 7 May 2018 18:23:46 -0500, Bjorn Helgaas wrote: >>> On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote: >>>> Hi

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-08 Thread Don Dutile
On 05/08/2018 12:57 PM, Alex Williamson wrote: On Mon, 7 May 2018 18:23:46 -0500 Bjorn Helgaas wrote: On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote: Hi Everyone, Here's v4 of our series to introduce P2P based copy offload to NVMe fabrics. This version

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-08 Thread Logan Gunthorpe
On 08/05/18 10:57 AM, Alex Williamson wrote: > AIUI from previously questioning this, the change is hidden behind a build-time config option and only custom kernels or distros optimized for this sort of support would enable that build option. I'm more than a little dubious though that

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-08 Thread Alex Williamson
On Mon, 7 May 2018 18:23:46 -0500, Bjorn Helgaas wrote: > On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote: >> Hi Everyone, here's v4 of our series to introduce P2P based copy offload to NVMe fabrics. This version has been rebased onto v4.17-rc2. A

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-07 Thread Logan Gunthorpe
> How do you envision merging this? There's a big chunk in drivers/pci, but really no opportunity for conflicts there, and there's significant stuff in block and nvme that I don't really want to merge. > If Alex is OK with the ACS situation, I can ack the PCI parts and you could merge it

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-07 Thread Bjorn Helgaas
On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote: > Hi Everyone, > Here's v4 of our series to introduce P2P based copy offload to NVMe fabrics. This version has been rebased onto v4.17-rc2. A git repo is here: > https://github.com/sbates130272/linux-p2pmem pci-p2p-v4 > ...

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-04 Thread Logan Gunthorpe
On 04/05/18 08:27 AM, Christian König wrote: > Are you sure that this is more convenient? At least on first glance it feels overly complicated. > I mean, what's the difference between the two approaches? > sum = pci_p2pdma_distance(target, [A, B, C, target]); > and > sum
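For context, the single-call form being debated there might look roughly like the sketch below in kernel C. The signature is illustrative, modeled on the pseudocode quoted above rather than on the patchset's final API:

/*
 * Illustrative only: a single-call distance check modeled on the
 * pseudocode in the discussion above, not the patchset's final API.
 */
#include <linux/pci.h>

/* Hypothetical: total switch-port distance between the provider and
 * every client, or a negative errno if any client cannot reach the
 * provider via P2P.
 */
int pci_p2pdma_distance(struct pci_dev *provider,
                        struct device **clients, int num_clients);

static bool p2p_usable(struct pci_dev *target, struct device *a,
                       struct device *b, struct device *c)
{
        struct device *clients[] = { a, b, c, &target->dev };

        /* One call: the PCI core walks the topology for all clients. */
        return pci_p2pdma_distance(target, clients,
                                   ARRAY_SIZE(clients)) >= 0;
}

The alternative Christian contrasts it with would have the caller accumulate per-pair distances itself; the single-call form keeps the topology walk inside the PCI core.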

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-04 Thread Christian König
On 03.05.2018 at 20:43, Logan Gunthorpe wrote: On 03/05/18 11:29 AM, Christian König wrote: Ok, that is the point where I'm stuck. Why do we need that in one function call in the PCIe subsystem? The problem at least with GPUs is that we seriously don't have that information here, because the

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-03 Thread Logan Gunthorpe
On 03/05/18 11:29 AM, Christian König wrote: > Ok, that is the point where I'm stuck. Why do we need that in one function call in the PCIe subsystem? > The problem at least with GPUs is that we seriously don't have that information here, because the PCI subsystem might not be aware of all

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-03 Thread Christian König
On 03.05.2018 at 17:59, Logan Gunthorpe wrote: On 03/05/18 03:05 AM, Christian König wrote: Second question is how do you want to handle things when devices are not behind the same root port (which is perfectly possible in the cases I deal with)? I think we need to implement a whitelist. If
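A whitelist of this kind would gate P2P between devices that do not share a root port on known-good host bridges. A rough sketch of the shape such a check could take; the table entry and helper name below are hypothetical (mainline later grew a similar pci_p2pdma_whitelist):

#include <linux/pci.h>
#include <linux/pci_ids.h>

/* Hypothetical host-bridge whitelist; the entry below is only an
 * example of the shape such a table would take.
 */
static const struct pci_device_id p2p_host_bridge_whitelist[] = {
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2f00) },
        { }
};

static bool host_bridge_whitelisted(struct pci_dev *dev)
{
        struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
        struct pci_dev *root;

        /* Allow P2P across the host bridge only if a device on the
         * root bus matches the whitelist.
         */
        list_for_each_entry(root, &host->bus->devices, bus_list)
                if (pci_match_id(p2p_host_bridge_whitelist, root))
                        return true;

        return false;
}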

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-03 Thread Logan Gunthorpe
On 03/05/18 03:05 AM, Christian König wrote: > Ok, I'm still missing the big picture here. First question is: what is the P2PDMA provider? Well, there's some pretty good documentation in the patchset for this, but in short, a provider is a device that provides some kind of P2P resource (i.e.
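To make the provider role concrete: in the patchset, a provider driver carves a chunk of one of its BARs into P2P memory that other devices can then DMA to. A minimal probe fragment might look like the following, using the pci_p2pdma_add_resource() and pci_p2pmem_publish() helpers the patchset introduces; the BAR number and size are made up for illustration:

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include <linux/sizes.h>

/* Hypothetical probe fragment for a provider: expose 1 MB of BAR 4
 * as P2P memory and publish it so other drivers can find it.
 */
static int example_provider_probe(struct pci_dev *pdev)
{
        int rc;

        /* Register 1 MB of BAR 4, starting at offset 0, as p2pmem. */
        rc = pci_p2pdma_add_resource(pdev, 4, SZ_1M, 0);
        if (rc)
                return rc;

        /* Make the memory visible to find/alloc users. */
        pci_p2pmem_publish(pdev, true);
        return 0;
}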

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-03 Thread Christian König
On 02.05.2018 at 17:56, Logan Gunthorpe wrote: Hi Christian, On 5/2/2018 5:51 AM, Christian König wrote: it would be rather nice if you could separate out the functions to detect if peer2peer is possible between two devices. This would essentially be pci_p2pdma_distance() in the

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-02 Thread Logan Gunthorpe
Hi Christian, On 5/2/2018 5:51 AM, Christian König wrote: it would be rather nice if you could separate out the functions to detect if peer2peer is possible between two devices. This would essentially be pci_p2pdma_distance() in the existing patchset. It returns the sum of the
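On the consumer side, that detection logic is what lets a driver pick a usable provider before committing to P2P. A hedged sketch of the pattern, with error handling trimmed; the helper names here (pci_p2pmem_find_many(), pci_alloc_p2pmem()) follow the form the API eventually took upstream, and the v4 patchset's exact signatures differ slightly:

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/* Hypothetical consumer: find a published provider reachable from
 * all client devices, then allocate a P2P buffer from it. The find
 * helper applies the same distance logic discussed above.
 */
static void *example_alloc_p2p_buffer(struct device **clients,
                                      int num_clients, size_t len,
                                      struct pci_dev **provider_out)
{
        struct pci_dev *provider;
        void *buf;

        provider = pci_p2pmem_find_many(clients, num_clients);
        if (!provider)
                return NULL;    /* no common P2P-capable provider */

        buf = pci_alloc_p2pmem(provider, len);
        if (!buf) {
                pci_dev_put(provider);  /* find took a reference */
                return NULL;
        }

        *provider_out = provider;
        return buf;
}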

Re: [PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-05-02 Thread Christian König
Hi Logan, it would be rather nice if you could separate out the functions to detect if peer2peer is possible between two devices. That would allow me to reuse the same logic for GPU peer2peer, where I don't really have ZONE_DEVICE. Regards, Christian. On 24.04.2018 at 01:30,

[PATCH v4 00/14] Copy Offload in NVMe Fabrics with P2P PCI Memory

2018-04-23 Thread Logan Gunthorpe
Hi Everyone, Here's v4 of our series to introduce P2P based copy offload to NVMe fabrics. This version has been rebased onto v4.17-rc2. A git repo is here: https://github.com/sbates130272/linux-p2pmem pci-p2p-v4 Thanks, Logan Changes in v4: * Change the original upstream_bridges_match()