On Tue, 8 May 2018 17:25:24 -0400
Don Dutile wrote:
> On 05/08/2018 12:57 PM, Alex Williamson wrote:
> > On Mon, 7 May 2018 18:23:46 -0500
> > Bjorn Helgaas wrote:
> >
> >> On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote:
> >>> Hi Everyone,
> >>>
> >>> Here's v4 of our
On 05/08/2018 12:57 PM, Alex Williamson wrote:
On Mon, 7 May 2018 18:23:46 -0500
Bjorn Helgaas wrote:
On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote:
Hi Everyone,
Here's v4 of our series to introduce P2P based copy offload to NVMe
fabrics. This version has been rebased onto
On 08/05/18 10:57 AM, Alex Williamson wrote:
> AIUI from previously questioning this, the change is hidden behind a
> build-time config option and only custom kernels or distros optimized
> for this sort of support would enable that build option. I'm more than
> a little dubious though that
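The build-time gate referred to here is the series' PCI_P2PDMA Kconfig option. As a purely hypothetical sketch (the helper name is invented, not the patchset's actual code), the usual pattern is to compile the behaviour away entirely when the option is unset, so stock kernels see no ACS change:

#ifdef CONFIG_PCI_P2PDMA
static bool pci_p2pdma_wants_acs_off(struct pci_dev *pdev)
{
	/* Only PCIe switch downstream ports matter for P2P routing. */
	return pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM;
}
#else
static bool pci_p2pdma_wants_acs_off(struct pci_dev *pdev)
{
	return false;
}
#endif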
On Mon, 7 May 2018 18:23:46 -0500
Bjorn Helgaas wrote:
> On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote:
> > Hi Everyone,
> >
> > Here's v4 of our series to introduce P2P based copy offload to NVMe
> > fabrics. This version has been rebased onto v4.17-rc2. A git repo
> > is
> How do you envision merging this? There's a big chunk in drivers/pci, but
> really no opportunity for conflicts there, and there's significant stuff in
> block and nvme that I don't really want to merge.
>
> If Alex is OK with the ACS situation, I can ack the PCI parts and you could
> merge it
On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote:
> Hi Everyone,
>
> Here's v4 of our series to introduce P2P based copy offload to NVMe
> fabrics. This version has been rebased onto v4.17-rc2. A git repo
> is here:
>
> https://github.com/sbates130272/linux-p2pmem pci-p2p-v4
> ...
On 04/05/18 08:27 AM, Christian König wrote:
> Are you sure that this is more convenient? At least at first glance it
> feels overly complicated.
>
> I mean what's the difference between the two approaches?
>
> sum = pci_p2pdma_distance(target, [A, B, C, target]);
>
> and
>
> sum
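For concreteness, the two shapes being compared might look roughly like this in C (illustrative prototypes; the series' exact signatures may differ):

/* One call that walks the whole client list... */
struct device *clients[] = { &A->dev, &B->dev, &C->dev, &target->dev };
sum = pci_p2pdma_distance_many(target, clients, ARRAY_SIZE(clients), true);

/* ...versus summing individual pairwise calls by hand. */
sum = pci_p2pdma_distance(target, &A->dev, true) +
      pci_p2pdma_distance(target, &B->dev, true) +
      pci_p2pdma_distance(target, &C->dev, true);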
On 03.05.2018 at 20:43, Logan Gunthorpe wrote:
On 03/05/18 11:29 AM, Christian König wrote:
Ok, that is the point where I'm stuck. Why do we need that in one
function call in the PCIe subsystem?
The problem at least with GPUs is that we seriously don't have that
information here, because the
On 03/05/18 11:29 AM, Christian König wrote:
> Ok, that is the point where I'm stuck. Why do we need that in one
> function call in the PCIe subsystem?
>
> The problem at least with GPUs is that we seriously don't have that
> information here, because the PCI subsystem might not be aware of all
On 03.05.2018 at 17:59, Logan Gunthorpe wrote:
On 03/05/18 03:05 AM, Christian König wrote:
Second question is how do you want to handle things when devices are not
behind the same root port (which is perfectly possible in the cases I
deal with)?
I think we need to implement a whitelist. If
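A minimal sketch of the whitelist idea (invented names; the example ID is only a placeholder for a root complex known to route P2P TLPs between its root ports):

static const struct pci_device_id p2p_host_bridge_whitelist[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2f00) }, /* placeholder entry */
	{ }
};

static bool host_bridge_allows_p2p(struct pci_dev *host_bridge)
{
	return pci_match_id(p2p_host_bridge_whitelist, host_bridge) != NULL;
}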
On 03/05/18 03:05 AM, Christian König wrote:
> Ok, I'm still missing the big picture here. First question is what is
> the P2PDMA provider?
Well there's some pretty good documentation in the patchset for this,
but in short, a provider is a device that provides some kind of P2P
resource (i.e.
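As a rough provider-side sketch using the functions the series introduces (the BAR number, size, and error handling here are illustrative):

static int setup_p2p_provider(struct pci_dev *pdev)
{
	int rc;

	/* Register BAR 4 as P2P-usable device memory. */
	rc = pci_p2pdma_add_resource(pdev, 4, pci_resource_len(pdev, 4), 0);
	if (rc)
		return rc;

	/* Advertise it so other drivers can find and allocate from it. */
	pci_p2pmem_publish(pdev, true);
	return 0;
}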
On 02.05.2018 at 17:56, Logan Gunthorpe wrote:
Hi Christian,
On 5/2/2018 5:51 AM, Christian König wrote:
it would be rather nice if you could separate out the
functions to detect if peer2peer is possible between two devices.
This would essentially be pci_p2pdma_distance() in the
Hi Christian,
On 5/2/2018 5:51 AM, Christian König wrote:
it would be rather nice if you could separate out the functions
to detect if peer2peer is possible between two devices.
This would essentially be pci_p2pdma_distance() in the existing
patchset. It returns the sum of the
Hi Logan,
it would be rather nice if you could separate out the functions
to detect if peer2peer is possible between two devices.
That would allow me to reuse the same logic for GPU peer2peer where I
don't really have ZONE_DEVICE.
Regards,
Christian.
On 24.04.2018 at 01:30, Logan Gunthorpe wrote:
Hi Everyone,
Here's v4 of our series to introduce P2P based copy offload to NVMe
fabrics. This version has been rebased onto v4.17-rc2. A git repo
is here:
https://github.com/sbates130272/linux-p2pmem pci-p2p-v4
Thanks,
Logan
Changes in v4:
* Change the original upstream_bridges_match()