On 12/8/2021 4:39 PM, Jason Gunthorpe wrote:
On Wed, Dec 08, 2021 at 01:59:45PM -0800, Jacob Pan wrote:
Hi Jason,
On Wed, 8 Dec 2021 16:30:22 -0400, Jason Gunthorpe wrote:
On Wed, Dec 08, 2021 at 11:55:16AM -0800, Jacob Pan wrote:
Hi Jason,
On Wed, 8 Dec 2021 09:13:58 -0400, Jason
On 12/8/2021 6:13 AM, Jason Gunthorpe wrote:
On Tue, Dec 07, 2021 at 05:47:14AM -0800, Jacob Pan wrote:
In-kernel DMA should be managed by the DMA mapping API. The existing kernel
PASID support is based on the SVA machinery in the SVA lib, which is intended
for user-process SVA. The binding between a … of …
using kernel virtual addresses.
Link: https://lore.kernel.org/linux-iommu/20210511194726.gp1002...@nvidia.com/
Signed-off-by: Jacob Pan
Acked-by: Dave Jiang
Also cc Vinod and dmaengine@vger
---
.../admin-guide/kernel-parameters.txt | 6 --
drivers/dma/Kconfig
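The distinction the commit message draws — in-kernel DMA through the DMA mapping API rather than through the user-process SVA bind path — can be sketched roughly as follows. This is a hypothetical driver fragment for illustration only, not code from the patch; the kernel APIs shown (dma_map_single(), iommu_sva_bind_device()) are the generic interfaces of that era:

```c
/* Hypothetical fragment, not from the patch: in-kernel DMA goes
 * through the DMA mapping API, which returns a dma_addr_t the
 * device can use regardless of how the IOMMU is configured. */
dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, handle))
        return -ENOMEM;

/* By contrast, the SVA lib path binds a PASID to a process mm so
 * the device can use that process's virtual addresses directly --
 * machinery intended for user SVA, not for kernel mappings: */
struct iommu_sva *sva = iommu_sva_bind_device(dev, current->mm, NULL);
```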
On 12/1/2021 3:03 PM, Thomas Gleixner wrote:
On Wed, Dec 01 2021 at 14:49, Dave Jiang wrote:
On 12/1/2021 2:44 PM, Thomas Gleixner wrote:
How that is backed on the host does not really matter. You can expose
MSI-X to the guest with an INTx backing as well.
I'm still failing to see
On 12/1/2021 2:44 PM, Thomas Gleixner wrote:
On Wed, Dec 01 2021 at 14:21, Dave Jiang wrote:
On 12/1/2021 1:25 PM, Thomas Gleixner wrote:
The hardware implementation does not have enough MSI-X vectors for
guests. There are only 9 MSI-X vectors in total (8 for queues), but 2048 IMS
vectors. So
On 12/1/2021 1:25 PM, Thomas Gleixner wrote:
On Wed, Dec 01 2021 at 11:47, Dave Jiang wrote:
On 12/1/2021 11:41 AM, Thomas Gleixner wrote:
Hi Thomas. This is actually the IDXD usage for a mediated device passed
to a guest kernel when we plumb the pass through of IMS to the guest
rather than
On 12/1/2021 11:41 AM, Thomas Gleixner wrote:
Dave,
please trim your replies.
On Wed, Dec 01 2021 at 09:28, Dave Jiang wrote:
On 12/1/2021 3:16 AM, Thomas Gleixner wrote:
Jason,
CC+ IOMMU folks
On Tue, Nov 30 2021 at 20:17, Jason Gunthorpe wrote:
On Tue, Nov 30, 2021 at 10:23:16PM
On 12/1/2021 3:16 AM, Thomas Gleixner wrote:
Jason,
CC+ IOMMU folks
On Tue, Nov 30 2021 at 20:17, Jason Gunthorpe wrote:
On Tue, Nov 30, 2021 at 10:23:16PM +0100, Thomas Gleixner wrote:
The real problem is where to store the MSI descriptors because the PCI
device has its own real PCI/MSI-X
On 1/22/2021 4:53 AM, Zhou Wang wrote:
On 2021/1/21 4:47, Dave Jiang wrote:
On 1/8/2021 7:52 AM, Jean-Philippe Brucker wrote:
The IOPF (I/O Page Fault) feature is now enabled independently from the
SVA feature, because some IOPF implementations are device-specific and
do not require IOMMU
On 1/8/2021 7:52 AM, Jean-Philippe Brucker wrote:
The IOPF (I/O Page Fault) feature is now enabled independently from the
SVA feature, because some IOPF implementations are device-specific and
do not require IOMMU support for PCIe PRI or Arm SMMU stall.
Enable IOPF unconditionally when
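Under the split this series describes, a driver enables the two features separately, with IOPF available on its own. A rough sketch of the expected call order — a hypothetical driver fragment using the IOMMU_DEV_FEAT_* interface this series introduces, not code from the patch:

```c
/* Hypothetical driver fragment: IOPF is a feature of its own,
 * enabled before (and independently of) SVA. */
ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
if (ret)
        return ret;

ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
if (ret) {
        iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
        return ret;
}
```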
On 2/25/20 6:11 PM, zhangfei wrote:
On 2020/2/26 12:02 AM, Dave Jiang wrote:
On 2/24/20 11:17 PM, Zhangfei Gao wrote:
Add Zhangfei Gao and Zhou Wang as maintainers for uacce
Signed-off-by: Zhangfei Gao
Signed-off-by: Zhou Wang
---
MAINTAINERS | 10 ++
1 file changed, 10
On 2/24/20 11:17 PM, Zhangfei Gao wrote:
Add Zhangfei Gao and Zhou Wang as maintainers for uacce
Signed-off-by: Zhangfei Gao
Signed-off-by: Zhou Wang
---
MAINTAINERS | 10 ++
1 file changed, 10 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 38fe2f3..22e647f 100644
On 1/15/20 4:18 AM, zhangfei wrote:
Hi, Greg
On 2020/1/14 10:59 PM, Greg Kroah-Hartman wrote:
On Mon, Jan 13, 2020 at 11:34:55AM +0800, zhangfei wrote:
Hi, Greg
Thanks for the review.
On 2020/1/12 3:40 AM, Greg Kroah-Hartman wrote:
On Sat, Jan 11, 2020 at 10:48:37AM +0800, Zhangfei Gao
On 12/15/19 8:08 PM, Zhangfei Gao wrote:
Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
provide Shared Virtual Addressing (SVA) between accelerators and processes,
so that an accelerator can access any data structure of the main CPU.
This differs from the data sharing
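From user space, a uacce-backed accelerator shows up as a character device. A minimal sketch of how a process might grab a queue follows; the device path, call ordering, and mmap offset convention are assumptions for illustration, not taken from the cover letter (only UACCE_CMD_START_Q comes from the uapi header):

```c
/* Hypothetical user-space fragment: open a uacce queue, map the
 * device region, then start the queue. */
int fd = open("/dev/hisi_zip-0", O_RDWR);   /* device name is an example */
if (fd < 0)
        return -1;

/* With SVA, the queue can operate on ordinary process pointers; only
 * the device's own registers need an explicit mmap (offset convention
 * assumed here). */
void *mmio = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);

if (ioctl(fd, UACCE_CMD_START_Q) < 0)       /* uapi/misc/uacce/uacce.h */
        return -1;
```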
On 1/31/2019 4:41 PM, Logan Gunthorpe wrote:
On 2019-01-31 3:46 p.m., Dave Jiang wrote:
I believe irqbalance writes to the file /proc/irq/N/smp_affinity. So
maybe take a look at the code that starts from there and see if it would
have any impact on your stuff.
Ok, well on my system I can
On 1/31/2019 3:39 PM, Logan Gunthorpe wrote:
On 2019-01-31 1:58 p.m., Dave Jiang wrote:
On 1/31/2019 1:48 PM, Logan Gunthorpe wrote:
On 2019-01-31 1:20 p.m., Dave Jiang wrote:
Does this work when the system moves the MSI vector either via software
(irqbalance) or BIOS APIC programming
On 1/31/2019 1:48 PM, Logan Gunthorpe wrote:
On 2019-01-31 1:20 p.m., Dave Jiang wrote:
Does this work when the system moves the MSI vector either via software
(irqbalance) or BIOS APIC programming (some modes cause round robin
behavior)?
I don't know how irqbalance works, and I'm not sure
On 1/31/2019 11:56 AM, Logan Gunthorpe wrote:
Hi,
This patch series adds optional support for using MSI interrupts instead
of NTB doorbells in ntb_transport. This is desirable because doorbells on
current hardware are quite slow, so switching to MSI interrupts
provides a significant
On 08/10/2018 10:15 AM, Logan Gunthorpe wrote:
>
>
> On 10/08/18 11:01 AM, Dave Jiang wrote:
>> Or if the BIOS has provided mapping for the Intel NTB device
>> specifically? Is that a possibility? NTB does go through the IOMMU.
>
> I don't know but if the BIOS
On 08/10/2018 09:33 AM, Logan Gunthorpe wrote:
>
>
> On 10/08/18 10:31 AM, Dave Jiang wrote:
>>
>>
>> On 08/10/2018 09:24 AM, Logan Gunthorpe wrote:
>>>
>>>
>>> On 10/08/18 10:02 AM, Kit Chow wrote:
>>>> Turns out there
On 08/10/2018 09:24 AM, Logan Gunthorpe wrote:
>
>
> On 10/08/18 10:02 AM, Kit Chow wrote:
>> Turns out there is no dma_map_resource routine on x86. get_dma_ops
>> returns intel_dma_ops which has map_resource pointing to NULL.
>
> Oh, yup. I wasn't aware of that. From a cursory view, it
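The failure mode described here — a dma_map_ops with .map_resource left NULL — is why callers of this interface are expected to check the mapping result rather than assume the arch supports it. A hedged sketch of the defensive pattern (hypothetical fragment, not from the thread):

```c
/* Hypothetical fragment: dma_map_resource() can fail when the
 * underlying dma_map_ops (e.g. the Intel IOMMU ops at the time)
 * does not implement .map_resource, so always check the result. */
dma_addr_t dma = dma_map_resource(dev, phys, size, DMA_FROM_DEVICE, 0);
if (dma_mapping_error(dev, dma)) {
        dev_err(dev, "failed to map peer MMIO resource\n");
        return -EIO;
}
```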
On 06/08/2017 06:25 AM, Christoph Hellwig wrote:
> DMA_ERROR_CODE is not a public API and will go away. Instead properly
> unwind based on the loop counter.
>
> Signed-off-by: Christoph Hellwig <h...@lst.de>
Acked-by: Dave Jiang <dave.ji...@intel.com>
> ---
>