On 5/11/2018 4:24 PM, Stephen Bates wrote:
All
> Alex (or anyone else) can you point to where IOVA addresses are generated?
A case of RTFM perhaps (though a pointer to the code would still be
appreciated).
https://www.kernel.org/doc/Documentation/Intel-IOMMU.txt
Some exceptions to IOVA
---
Interrupt ranges are not
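The exception mentioned above can be illustrated with a toy model (Python, not kernel code; the real allocator lives in drivers/iommu/): the IOVA allocator hands out DMA addresses from the device's aperture while carving out reserved windows such as the x86 MSI interrupt range at 0xFEE00000.

```python
# Toy model of IOVA allocation with a reserved range carved out.
# Illustrative only -- not the actual Linux IOVA allocator.

MSI_RANGE = (0xFEE00000, 0xFEF00000)  # x86 interrupt range, never handed out

def alloc_iova(used, size, limit=1 << 32, reserved=(MSI_RANGE,)):
    """Return the base of a free, non-reserved [base, base+size) range."""
    regions = sorted(list(used) + [tuple(r) for r in reserved])
    base = 0
    for start, end in regions:
        if base + size <= start:      # fits entirely below this region
            break
        base = max(base, end)         # skip past the occupied/reserved region
    if base + size > limit:
        raise MemoryError("IOVA space exhausted")
    used.append((base, base + size))
    return base
```

An allocation that would land inside the interrupt range gets pushed past it, which is the "exception" behavior described in the documentation.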
>I find this hard to believe. There's always the possibility that some
>part of the system doesn't support ACS so if the PCI bus addresses and
>IOVA overlap there's a good chance that P2P and ATS won't work at all on
>some hardware.
I tend to agree but this comes down to how
On 5/11/2018 2:52 AM, Christian König wrote:
This only works when the IOVA and the PCI bus addresses never overlap.
I'm not sure how the IOVA allocation works but I don't think we
guarantee that on Linux.
I find this hard to believe. There's always the possibility that some
part of the
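The overlap concern can be stated concretely: if an IOVA handed to one device happens to equal a PCI bus address inside another device's BAR window, a switch with ACS disabled will route the TLP by address and it never reaches the IOMMU. A minimal overlap check (illustrative only; the windows are made up):

```python
def overlaps(iova_base, size, bus_windows):
    """True if [iova_base, iova_base+size) intersects any PCI bus-address window."""
    end = iova_base + size
    return any(iova_base < w_end and w_start < end
               for w_start, w_end in bus_windows)

# Hypothetical BAR windows of peer devices behind a switch with ACS disabled:
windows = [(0xE0000000, 0xE0100000), (0xF0000000, 0xF8000000)]
```

An IOVA allocator that wanted to guarantee the property discussed here would have to reserve every such window, which is exactly what is being questioned.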
Am 10.05.2018 um 19:15 schrieb Logan Gunthorpe:
On 10/05/18 11:11 AM, Stephen Bates wrote:
Not to me. In the p2pdma code we specifically program DMA engines with
the PCI bus address.
Ah yes of course. Brain fart on my part. We are not programming the P2PDMA
initiator with an IOVA but with
On Thu, May 10, 2018 at 01:10:15PM -0600, Alex Williamson wrote:
> On Thu, 10 May 2018 18:41:09 +
> "Stephen Bates" wrote:
> > >Reasons is that GPU are giving up on PCIe (see all specialize link like
> > >NVlink that are popping up in GPU space). So for fast GPU inter-connect
On Thu, 10 May 2018 18:41:09 +
"Stephen Bates" wrote:
> >Reasons is that GPU are giving up on PCIe (see all specialize link like
> >NVlink that are popping up in GPU space). So for fast GPU inter-connect
> >we have this new links.
>
> I look forward to Nvidia
On 10/05/18 12:41 PM, Stephen Bates wrote:
> Hi Jerome
>
>>Note on GPU we do would not rely on ATS for peer to peer. Some part
>>of the GPU (DMA engines) do not necessarily support ATS. Yet those
>>are the part likely to be use in peer to peer.
>
> OK this is good to know. I agree
Hi Jerome
>Hopes this helps understanding the big picture. I over simplify thing and
>devils is in the details.
This was a great primer thanks for putting it together. An LWN.net article
perhaps ;-)??
Stephen
Hi Jerome
>Note on GPU we do would not rely on ATS for peer to peer. Some part
>of the GPU (DMA engines) do not necessarily support ATS. Yet those
>are the part likely to be use in peer to peer.
OK this is good to know. I agree the DMA engine is probably one of the GPU
components
On 10/05/18 11:11 AM, Stephen Bates wrote:
>> Not to me. In the p2pdma code we specifically program DMA engines with
>> the PCI bus address.
>
> Ah yes of course. Brain fart on my part. We are not programming the P2PDMA
> initiator with an IOVA but with the PCI bus address...
>
>> So
> Not to me. In the p2pdma code we specifically program DMA engines with
> the PCI bus address.
Ah yes of course. Brain fart on my part. We are not programming the P2PDMA
initiator with an IOVA but with the PCI bus address...
> So regardless of whether we are using the IOMMU or
> not, the
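The distinction being made here can be sketched as follows (illustrative Python, not kernel code; all names are made up): a p2pdma initiator is programmed with the peer's PCI bus address directly, while system-memory DMA goes through the DMA mapping layer and may receive an IOVA when an IOMMU is present.

```python
def dma_address_for(target, iommu_enabled, translate):
    """Pick the address to program into a DMA engine.

    target: ('p2p', bus_addr) for a peer device's BAR, or
            ('sysmem', phys_addr) for ordinary system memory.
    translate: the IOMMU mapping function (phys -> IOVA); consulted only
    for system memory, and only when the IOMMU is in the path.
    """
    kind, addr = target
    if kind == 'p2p':
        return addr                                 # PCI bus address, used as-is
    return translate(addr) if iommu_enabled else addr
```

This is why, as the message says, the p2p case behaves the same regardless of whether the IOMMU is enabled: the bus address bypasses translation entirely.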
On 10/05/18 08:16 AM, Stephen Bates wrote:
> Hi Christian
>
>> Why would a switch not identify that as a peer address? We use the PASID
>>together with ATS to identify the address space which a transaction
>>should use.
>
> I think you are conflating two types of TLPs here. If the
On Thu, May 10, 2018 at 04:29:44PM +0200, Christian König wrote:
> Am 10.05.2018 um 16:20 schrieb Stephen Bates:
> > Hi Jerome
> >
> > > As it is tie to PASID this is done using IOMMU so looks for caller
> > > of amd_iommu_bind_pasid() or intel_svm_bind_mm() in GPU the existing
> > > user is
On Thu, May 10, 2018 at 02:16:25PM +, Stephen Bates wrote:
> Hi Christian
>
> > Why would a switch not identify that as a peer address? We use the PASID
> >together with ATS to identify the address space which a transaction
> >should use.
>
> I think you are conflating two types
Am 10.05.2018 um 16:20 schrieb Stephen Bates:
Hi Jerome
As it is tie to PASID this is done using IOMMU so looks for caller
of amd_iommu_bind_pasid() or intel_svm_bind_mm() in GPU the existing
user is the AMD GPU driver see:
Ah thanks. This cleared things up for me. A quick search shows
Hi Jerome
> As it is tie to PASID this is done using IOMMU so looks for caller
> of amd_iommu_bind_pasid() or intel_svm_bind_mm() in GPU the existing
> user is the AMD GPU driver see:
Ah thanks. This cleared things up for me. A quick search shows there are still
no users of
Hi Christian
> Why would a switch not identify that as a peer address? We use the PASID
>together with ATS to identify the address space which a transaction
>should use.
I think you are conflating two types of TLPs here. If the device supports ATS
then it will issue a TR TLP to obtain
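The two TLP types can be modeled roughly like this (illustrative Python, not kernel or hardware code; all names are made up): the device first issues a Translation Request to the IOMMU's translation agent and caches the result in its ATC, then issues memory requests with the AT field set to "translated", which are routed purely by the already-translated address.

```python
class AtsDevice:
    """Toy ATS flow: translate once through the IOMMU, cache the result in
    the device-side ATC, then issue memory requests marked AT=translated."""

    def __init__(self, iommu_translate):
        self.translate = iommu_translate  # handles the Translation Request TLP
        self.atc = {}                     # Address Translation Cache

    def read(self, virt):
        if virt not in self.atc:          # ATC miss -> send a TR TLP
            self.atc[virt] = self.translate(virt)
        # Memory request TLP with AT=translated: routed by this address alone
        return ("AT=translated", self.atc[virt])
```

The point under discussion falls out of the model: once the translation is cached, subsequent requests never consult the IOMMU, so a switch only ever sees the translated address.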
Am 09.05.2018 um 18:45 schrieb Logan Gunthorpe:
On 09/05/18 07:40 AM, Christian König wrote:
The key takeaway is that when any device has ATS enabled you can't
disable ACS without breaking it (even if you unplug and replug it).
I don't follow how you came to this conclusion...
The ACS bits
On Wed, May 09, 2018 at 04:30:32PM +, Stephen Bates wrote:
> Hi Jerome
>
> > Now inside that page table you can point GPU virtual address
> > to use GPU memory or use system memory. Those system memory entry can
> > also be mark as ATS against a given PASID.
>
> Thanks. This all makes
On 09/05/18 07:40 AM, Christian König wrote:
> The key takeaway is that when any device has ATS enabled you can't
> disable ACS without breaking it (even if you unplug and replug it).
I don't follow how you came to this conclusion...
The ACS bits we'd be turning off are the ones that force
Hi Jerome
> Now inside that page table you can point GPU virtual address
> to use GPU memory or use system memory. Those system memory entry can
> also be mark as ATS against a given PASID.
Thanks. This all makes sense.
But do you have examples of this in a kernel driver (if so can you
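The scheme described above, a GPU page table whose entries point either at local GPU memory or at system memory marked for ATS under a given PASID, can be sketched as a toy model (illustrative Python only; the entry layout and lookup function are invented for illustration, not taken from any driver):

```python
def resolve(gpu_page_table, gpu_va, pasid, iommu_lookup):
    """Toy GPU VA resolution: plain entries resolve to VRAM; ATS-marked
    entries are resolved through the IOMMU for the bound PASID."""
    entry = gpu_page_table[gpu_va]
    if entry["ats"]:
        return iommu_lookup(pasid, entry["addr"])   # system memory via ATS/PASID
    return ("vram", entry["addr"])                  # local GPU memory

# Hypothetical page table mixing both entry kinds:
page_table = {
    0x1000: {"ats": False, "addr": 0x200000},           # VRAM page
    0x2000: {"ats": True,  "addr": 0x7f0000000000},     # process VA, ATS-marked
}
```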
On Wed, May 09, 2018 at 03:41:44PM +, Stephen Bates wrote:
> Christian
>
> >Interesting point, give me a moment to check that. That finally makes
> >all the hardware I have standing around here valuable :)
>
> Yes. At the very least it provides an initial standards based path
>
On 05/09/2018 08:44 AM, Stephen Bates wrote:
Hi Don
RDMA VFs lend themselves to NVMEoF w/device-assignment need a way to
put NVME 'resources' into an assignable/manageable object for
'IOMMU-grouping',
which is really a 'DMA security domain' and less an 'IOMMU grouping
On 05/08/2018 05:27 PM, Stephen Bates wrote:
Hi Don
Well, p2p DMA is a function of a cooperating 'agent' somewhere above the two
devices.
That agent should 'request' to the kernel that ACS be removed/circumvented
(p2p enabled) btwn two endpoints.
I recommend doing so via a sysfs
On 05/09/2018 10:44 AM, Alex Williamson wrote:
On Wed, 9 May 2018 12:35:56 +
"Stephen Bates" wrote:
Hi Alex and Don
Correct, the VM has no concept of the host's IOMMU groups, only
the hypervisor knows about the groups,
But as I understand it these groups are usually passed
On 05/08/2018 08:01 PM, Alex Williamson wrote:
On Tue, 8 May 2018 19:06:17 -0400
Don Dutile wrote:
On 05/08/2018 05:27 PM, Stephen Bates wrote:
As I understand it VMs need to know because VFIO passes IOMMU
grouping up into the VMs. So if a IOMMU grouping changes the VM's
view of its PCIe
Christian
>Interesting point, give me a moment to check that. That finally makes
>all the hardware I have standing around here valuable :)
Yes. At the very least it provides an initial standards based path for P2P DMAs
across RPs which is something we have discussed on this list in
On Wed, 9 May 2018 12:35:56 +
"Stephen Bates" wrote:
> Hi Alex and Don
>
> >Correct, the VM has no concept of the host's IOMMU groups, only
> > the hypervisor knows about the groups,
>
> But as I understand it these groups are usually passed through to VMs
> on a pre-group basis by
Am 09.05.2018 um 15:12 schrieb Stephen Bates:
Jerome and Christian
I think there is confusion here, Alex properly explained the scheme
PCIE-device do a ATS request to the IOMMU which returns a valid
translation for a virtual address. Device can then use that address
directly without going
Jerome and Christian
> I think there is confusion here, Alex properly explained the scheme
> PCIE-device do a ATS request to the IOMMU which returns a valid
> translation for a virtual address. Device can then use that address
> directly without going through IOMMU for translation.
So I went
Hi Don
>RDMA VFs lend themselves to NVMEoF w/device-assignment need a way to
>put NVME 'resources' into an assignable/manageable object for
> 'IOMMU-grouping',
>which is really a 'DMA security domain' and less an 'IOMMU grouping
> domain'.
Ha, I like your term "DMA Security
Hi Logan
>Yeah, I'm having a hard time coming up with an easy enough solution for
>the user. I agree with Dan though, the bus renumbering risk would be
>fairly low in the custom hardware seeing the switches are likely going
>to be directly soldered to the same board with the CPU.
Hi Alex and Don
>Correct, the VM has no concept of the host's IOMMU groups, only the
> hypervisor knows about the groups,
But as I understand it these groups are usually passed through to VMs on a
pre-group basis by the hypervisor? So IOMMU group 1 might be passed to VM A and
IOMMU
On Tue, 8 May 2018 17:31:48 -0600
Logan Gunthorpe wrote:
> On 08/05/18 05:11 PM, Alex Williamson wrote:
> > A runtime, sysfs approach has some benefits here,
> > especially in identifying the device assuming we're ok with leaving
> > the persistence problem to userspace tools. I'm still a little
On Tue, 8 May 2018 19:06:17 -0400
Don Dutile wrote:
> On 05/08/2018 05:27 PM, Stephen Bates wrote:
> > As I understand it VMs need to know because VFIO passes IOMMU
> > grouping up into the VMs. So if a IOMMU grouping changes the VM's
> > view of its PCIe topology changes. I think we even have
On 08/05/18 05:11 PM, Alex Williamson wrote:
> On to the implementation details... I already mentioned the BDF issue
> in my other reply. If we had a way to persistently identify a device,
> would we specify the downstream points at which we want to disable ACS
> or the endpoints that we want
On 08/05/18 05:00 PM, Dan Williams wrote:
>> I'd advise caution with a user supplied BDF approach, we have no
>> guaranteed persistence for a device's PCI address. Adding a device
>> might renumber the buses, replacing a device with one that consumes
>> more/less bus numbers can renumber the
On Tue, 8 May 2018 22:25:06 +
"Stephen Bates" wrote:
> >Yeah, so based on the discussion I'm leaning toward just having a
> >command line option that takes a list of BDFs and disables ACS
> > for them. (Essentially as Dan has suggested.) This avoids the
> > shotgun.
>
> I concur
On Tue, May 8, 2018 at 3:32 PM, Alex Williamson
wrote:
> On Tue, 8 May 2018 16:10:19 -0600
> Logan Gunthorpe wrote:
>
>> On 08/05/18 04:03 PM, Alex Williamson wrote:
>> > If IOMMU grouping implies device assignment (because nobody else uses
>> > it to the same extent as device assignment) then
On Tue, 8 May 2018 16:10:19 -0600
Logan Gunthorpe wrote:
> On 08/05/18 04:03 PM, Alex Williamson wrote:
> > If IOMMU grouping implies device assignment (because nobody else uses
> > it to the same extent as device assignment) then the build-time option
> > falls to pieces, we need a single
>Yeah, so based on the discussion I'm leaning toward just having a
>command line option that takes a list of BDFs and disables ACS for them.
>(Essentially as Dan has suggested.) This avoids the shotgun.
I concur that this seems to be where the conversation is taking us.
@Alex -
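A command-line option like the one proposed has to parse a list of BDFs (optionally with a PCI domain prefix). A userspace sketch of that parsing (illustrative Python; the separator and exact syntax here are assumptions, not the final kernel option's):

```python
import re

# dddd:bb:dd.f with an optional 4-hex-digit domain prefix
BDF_RE = re.compile(r'^(?:([0-9a-fA-F]{1,4}):)?'   # optional PCI domain
                    r'([0-9a-fA-F]{1,2}):'          # bus
                    r'([0-9a-fA-F]{1,2})\.'         # device
                    r'([0-7])$')                    # function

def parse_bdf_list(arg):
    """Parse 'dddd:bb:dd.f;bb:dd.f;...' into (domain, bus, dev, fn) tuples."""
    out = []
    for tok in filter(None, arg.split(';')):
        m = BDF_RE.match(tok)
        if not m:
            raise ValueError(f"bad BDF: {tok!r}")
        dom, bus, dev, fn = m.groups()
        out.append((int(dom or '0', 16), int(bus, 16), int(dev, 16), int(fn)))
    return out
```

As the thread notes, the weak spot is not the parsing but the persistence of the BDFs themselves across bus renumbering.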
On 05/08/2018 06:03 PM, Alex Williamson wrote:
On Tue, 8 May 2018 21:42:27 +
"Stephen Bates" wrote:
Hi Alex
But it would be a much easier proposal to disable ACS when the
IOMMU is not enabled, ACS has no real purpose in that case.
I guess one issue I have with this is that it
On 08/05/18 04:03 PM, Alex Williamson wrote:
> If IOMMU grouping implies device assignment (because nobody else uses
> it to the same extent as device assignment) then the build-time option
> falls to pieces, we need a single kernel that can do both. I think we
> need to get more clever about
On Tue, 8 May 2018 21:42:27 +
"Stephen Bates" wrote:
> Hi Alex
>
> >But it would be a much easier proposal to disable ACS when the
> > IOMMU is not enabled, ACS has no real purpose in that case.
>
> I guess one issue I have with this is that it disables IOMMU groups
> for all Root
Hi Alex
>But it would be a much easier proposal to disable ACS when the IOMMU is
>not enabled, ACS has no real purpose in that case.
I guess one issue I have with this is that it disables IOMMU groups for all
Root Ports and not just the one(s) we wish to do p2pdma on.
>The
Hi Jerome
>I think there is confusion here, Alex properly explained the scheme
> PCIE-device do a ATS request to the IOMMU which returns a valid
>translation for a virtual address. Device can then use that address
>directly without going through IOMMU for translation.
This makes
Hi Don
>Well, p2p DMA is a function of a cooperating 'agent' somewhere above the two
>devices.
>That agent should 'request' to the kernel that ACS be removed/circumvented
> (p2p enabled) btwn two endpoints.
>I recommend doing so via a sysfs method.
Yes we looked at something like this
On Tue, 8 May 2018 14:49:23 -0600
Logan Gunthorpe wrote:
> On 08/05/18 02:43 PM, Alex Williamson wrote:
> > Yes, GPUs seem to be leading the pack in implementing ATS. So now the
> > dumb question, why not simply turn off the IOMMU and thus ACS? The
> > argument of using the IOMMU for security