Re: [PATCH v4] iommu/of: Fix pci_request_acs() before enumerating PCI devices

2021-09-01 Thread Marek Szyprowski
On 21.05.2021 05:03, Wang Xingang wrote: > From: Xingang Wang > > When booting with devicetree, pci_request_acs() is called after the > enumeration and initialization of PCI devices, so ACS is not > enabled. ACS should be enabled when an IOMMU is detected for the > PCI host bridge, so
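The ordering at issue, roughly (a minimal sketch assuming a generic devicetree host-bridge probe path; not the actual patch):

    /* pci_request_acs() only sets a global flag that enumeration
     * consults per device, so it must run before the bus scan. */
    pci_request_acs();
    bridge = devm_pci_alloc_host_bridge(dev, 0);
    /* ... fill in bridge resources and ops ... */
    ret = pci_host_probe(bridge);    /* devices are enumerated here */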

Re: [PATCH v4] iommu/of: Fix pci_request_acs() before enumerating PCI devices

2021-09-01 Thread Robin Murphy
On 2021-09-01 09:59, Marek Szyprowski wrote: On 21.05.2021 05:03, Wang Xingang wrote: From: Xingang Wang When booting with devicetree, pci_request_acs() is called after the enumeration and initialization of PCI devices, so ACS is not enabled. ACS should be enabled when an IOMMU is

RE: [RFC][PATCH v2 00/13] iommu/arm-smmu-v3: Add NVIDIA implementation

2021-09-01 Thread Tian, Kevin
> From: Alex Williamson > Sent: Wednesday, September 1, 2021 12:16 AM > > On Mon, 30 Aug 2021 19:59:10 -0700 > Nicolin Chen wrote: > > > The SMMUv3 devices implemented in the Grace SoC support NVIDIA's > custom > > CMDQ-Virtualization (CMDQV) hardware. Like the new ECMDQ feature first > >

[PATCH 0/2] iommu/ipmmu-vmsa: Add support for r8a779a0

2021-09-01 Thread Yoshihiro Shimoda
This patch series adds support for r8a779a0 (R-Car V3U). Yoshihiro Shimoda (2): dt-bindings: iommu: renesas,ipmmu-vmsa: add r8a779a0 support iommu/ipmmu-vmsa: Add support for r8a779a0 .../bindings/iommu/renesas,ipmmu-vmsa.yaml | 1 + drivers/iommu/ipmmu-vmsa.c | 19

[PATCH 1/2] dt-bindings: iommu: renesas,ipmmu-vmsa: add r8a779a0 support

2021-09-01 Thread Yoshihiro Shimoda
Add support for r8a779a0 (R-Car V3U). Signed-off-by: Yoshihiro Shimoda --- Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.yaml

[PATCH 2/2] iommu/ipmmu-vmsa: Add support for r8a779a0

2021-09-01 Thread Yoshihiro Shimoda
Add support for r8a779a0 (R-Car V3U). The IPMMU hardware design of this SoC differs from the others. So, add a new ipmmu_features for it. Signed-off-by: Yoshihiro Shimoda --- drivers/iommu/ipmmu-vmsa.c | 19 ++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git

[GIT PULL] dma-mapping updates for Linux 5.15

2021-09-01 Thread Christoph Hellwig
[Note that there is a conflict with changes from the swiotlb tree due to dma_direct_{alloc,free}. The solution is basically to take the changes from both trees and apply them manually.] The following changes since commit 36a21d51725af2ce0700c6ebcb6b9594aac658a6: Linux 5.14-rc5 (2021-08-08

Re: [PATCH v2 3/8] iommu/dma: Disable get_sgtable for granule > PAGE_SIZE

2021-09-01 Thread Sven Peter via iommu
On Tue, Aug 31, 2021, at 23:30, Alyssa Rosenzweig wrote: > I use this function for cross-device sharing on the M1 display driver. > Arguably this is unsafe but it works on 16k kernels and if you want to > test the function on 4k, you know where my code is. > My biggest issue is that I do not

Re: [PATCH v2 6/8] iommu: Move IOMMU pagesize check to attach_device

2021-09-01 Thread Sven Peter via iommu
On Tue, Aug 31, 2021, at 23:39, Alyssa Rosenzweig wrote: > > + if ((1 << __ffs(domain->pgsize_bitmap)) > PAGE_SIZE) { > > Not a fan of this construction. Could you assign `(1 << > __ffs(domain->pgsize_bitmap))` to an appropriately named temporary (e.g. > min_io_pgsize) so it's clearer what's
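The suggested rewrite would look roughly like this (a sketch of the reviewer's proposal; the warning text and error handling are illustrative):

    unsigned long min_io_pgsize = 1UL << __ffs(domain->pgsize_bitmap);

    if (min_io_pgsize > PAGE_SIZE) {
        pr_warn("IOMMU pages are larger than CPU pages\n");
        return -EINVAL;
    }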

Re: [PATCH v2 11/29] iommu/mediatek: Always pm_runtime_get while tlb flush

2021-09-01 Thread Yong Wu
On Tue, 2021-08-24 at 15:10 +0800, Hsin-Yi Wang wrote: > On Fri, Aug 13, 2021 at 2:57 PM Yong Wu wrote: > > > > Prepare for 2 HWs that share a pgtable in different power-domains. > > > > The previous SoCs don't have PM. Only mt8192 has a power-domain, > > and it is the display's power-domain, which
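The shape of the change under discussion (a sketch; dev stands for the IOMMU's struct device, and the exact pm_runtime helper used is an assumption):

    /* Make sure the IOMMU's power domain is up before the flush. */
    if (pm_runtime_resume_and_get(dev) < 0)
        return;
    /* ... write the TLB-flush registers ... */
    pm_runtime_put(dev);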

Re: [PATCH v2 1/5] dt-bindings: reserved-memory: Document memory region specifier

2021-09-01 Thread Thierry Reding
On Fri, Jul 02, 2021 at 05:16:25PM +0300, Dmitry Osipenko wrote: > 01.07.2021 21:14, Thierry Reding wrote: > > On Tue, Jun 08, 2021 at 06:51:40PM +0200, Thierry Reding wrote: > >> On Fri, May 28, 2021 at 06:54:55PM +0200, Thierry Reding wrote: > >>> On Thu, May 20, 2021 at 05:03:06PM -0500, Rob

Re: [PATCH v2 16/29] iommu/mediatek: Adjust device link when it is sub-common

2021-09-01 Thread Yong Wu
On Tue, 2021-08-24 at 15:35 +0800, Hsin-Yi Wang wrote: > On Fri, Aug 13, 2021 at 3:03 PM Yong Wu wrote: > > > > For MM IOMMU, we always add a device link between smi-common and > > the IOMMU HW. > > In mt8195, we add smi-sub-common. Thus, if the node is sub-common, > > we still > > need to search again to
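Conceptually the link is created like this (a sketch; the device variables and flags are illustrative, not the driver's actual code):

    struct device_link *link;

    /* Make the IOMMU a consumer of smi-common so runtime PM follows
     * the bus topology. */
    link = device_link_add(iommu_dev, smi_common_dev,
                           DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
    if (!link)
        dev_err(iommu_dev, "unable to link %s\n", dev_name(smi_common_dev));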

RE: [PATCH V4 03/13] x86/hyperv: Add new hvcall guest address host visibility support

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Add new hvcall guest address host visibility support to mark > memory visible to the host. Call it inside > set_memory_decrypted()/encrypted(). Add a HYPERVISOR feature check in > hv_is_isolation_supported() to optimize in
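The call pattern being described (a sketch; buf and size are placeholder names):

    /* set_memory_decrypted() now also issues the visibility hvcall. */
    ret = set_memory_decrypted((unsigned long)buf, size >> PAGE_SHIFT);
    if (ret)
        goto err_free;
    /* ... share buf with the host ... */
    set_memory_encrypted((unsigned long)buf, size >> PAGE_SHIFT);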

RE: [PATCH V4 02/13] x86/hyperv: Initialize shared memory boundary in the Isolation VM.

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Hyper-V exposes the shared memory boundary via the cpuid leaf > HYPERV_CPUID_ISOLATION_CONFIG and stores it in the > shared_gpa_boundary field of the ms_hyperv struct. This prepares > for sharing memory with the host for SNP guests. > > Signed-off-by: Tianyu Lan >
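Reading the leaf amounts to roughly this (a sketch; the exact ms_hyperv field names are assumptions based on the quoted description):

    ms_hyperv.isolation_config_a = cpuid_eax(HYPERV_CPUID_ISOLATION_CONFIG);
    ms_hyperv.isolation_config_b = cpuid_ebx(HYPERV_CPUID_ISOLATION_CONFIG);
    /* the boundary is reported as a power-of-two bit position */
    ms_hyperv.shared_gpa_boundary = BIT_ULL(ms_hyperv.shared_gpa_boundary_bits);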

Re: [PATCH v2 3/8] iommu/dma: Disable get_sgtable for granule > PAGE_SIZE

2021-09-01 Thread Alyssa Rosenzweig
> My biggest issue is that I do not understand how this function is supposed > to be used correctly. It would work fine as-is if it only ever gets passed buffers > allocated by the coherent API but there's no way to check or guarantee that. > There may also be callers making assumptions that
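For reference, the call pattern in question (a sketch; dev, cpu_addr, handle and size are placeholders for a buffer obtained from dma_alloc_coherent()):

    struct sg_table sgt;
    int ret = dma_get_sgtable(dev, &sgt, cpu_addr, handle, size);

    if (ret)
        return ret;
    /* ... hand the table to the importing device ... */
    sg_free_table(&sgt);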

RE: [PATCH V4 12/13] hv_netvsc: Add Isolation VM support for netvsc driver

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > In an Isolation VM, all memory shared with the host needs to be marked visible > to the host via hvcall. vmbus_establish_gpadl() has already done it for > the netvsc rx/tx ring buffers. The page buffer used by > vmbus_sendpacket_pagebuffer() still needs to

RE: [PATCH V4 13/13] hv_storvsc: Add Isolation VM support for storvsc driver

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > Per previous comment, the Subject line tag should be "scsi: storvsc: " > In an Isolation VM, all memory shared with the host needs to be marked visible > to the host via hvcall. vmbus_establish_gpadl() has already done it for > the storvsc rx/tx ring

Re: [PATCH v2 6/8] iommu: Move IOMMU pagesize check to attach_device

2021-09-01 Thread Robin Murphy
On 2021-09-01 18:14, Sven Peter wrote: On Tue, Aug 31, 2021, at 23:39, Alyssa Rosenzweig wrote: + if ((1 << __ffs(domain->pgsize_bitmap)) > PAGE_SIZE) { Not a fan of this construction. Could you assign `(1 << __ffs(domain->pgsize_bitmap))` to an appropriately named temporary (e.g

RE: [PATCH V4 08/13] hyperv/vmbus: Initialize VMbus ring buffer for Isolation VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > Subject tag should be "Drivers: hv: vmbus: " > The VMbus ring buffer is shared with the host and needs to > be accessed via the extra address space of an Isolation VM with > AMD SNP support. This patch maps the ring buffer > address in extra
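The remapping described would look something like this (a sketch; the helper used and the variable names are assumptions):

    /* Alias the ring buffer above the shared GPA boundary so that
     * accesses go through the unencrypted view of the pages. */
    phys_addr_t pa = virt_to_phys(ring_buf) + ms_hyperv.shared_gpa_boundary;
    void *vaddr = memremap(pa, ring_size, MEMREMAP_WB);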

RE: [PATCH V4 01/13] x86/hyperv: Initialize GHCB page in Isolation VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Hyperv exposes the GHCB page via the SEV-ES GHCB MSR for the SNP guest > to communicate with the hypervisor. Map the GHCB page for all > CPUs to read/write the MSR register and submit hvcall requests > via the GHCB page. > > Signed-off-by: Tianyu Lan > --- >

RE: [PATCH V4 07/13] hyperv/Vmbus: Add SNP support for VMbus channel initiate message

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > Subject line tag should be "Drivers: hv: vmbus:" > The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared > with the host in an Isolation VM, so it's necessary to use a hvcall to make > them visible to the host. In an Isolation VM with

RE: [PATCH V4 06/13] hyperv: Add ghcb hvcall support for SNP VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > Subject line tag should probably be "x86/hyperv:" since the majority of the code added is under arch/x86. > Hyperv provides a ghcb hvcall to handle the VMBus > HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE > messages in an SNP Isolation VM. Add such

RE: [PATCH V4 11/13] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > A hyperv Isolation VM requires bounce buffer support to copy > data from/to encrypted memory, and so enables swiotlb force > mode to use the swiotlb bounce buffer for DMA transactions. > > In an Isolation VM with AMD SEV, the bounce buffer needs
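The enablement itself is small (a sketch; the exact init hook where this runs is an assumption):

    if (hv_is_isolation_supported())
        swiotlb_force = SWIOTLB_FORCE;    /* bounce all DMA */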

RE: [PATCH V4 04/13] hyperv: Mark vmbus ring buffer visible to host in Isolation VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Mark the vmbus ring buffer visible with set_memory_decrypted() when > establishing the gpadl handle. > > Signed-off-by: Tianyu Lan > --- > Change since v3: > * Change vmbus_teardown_gpadl() parameter and put gpadl handle, > buffer

RE: [PATCH V4 05/13] hyperv: Add Write/Read MSR registers via ghcb page

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Hyperv provides the GHCB protocol to write Synthetic Interrupt > Controller MSR registers in an Isolation VM with AMD SEV-SNP, > and these registers are emulated by the hypervisor directly. > Hyperv requires writing SINTx MSR registers twice.

RE: [PATCH V4 12/13] hv_netvsc: Add Isolation VM support for netvsc driver

2021-09-01 Thread Michael Kelley via iommu
From: Michael Kelley Sent: Wednesday, September 1, 2021 7:34 PM [snip] > > +int netvsc_dma_map(struct hv_device *hv_dev, > > + struct hv_netvsc_packet *packet, > > + struct hv_page_buffer *pb) > > +{ > > + u32 page_count = packet->cp_partial ? > > +
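The function body presumably continues with a per-element mapping loop along these lines (a sketch only; the quoted patch is truncated above and the details here are assumptions):

    for (i = 0; i < page_count; i++) {
        void *va = phys_to_virt(pb[i].pfn << PAGE_SHIFT) + pb[i].offset;
        dma_addr_t dma = dma_map_single(&hv_dev->device, va, pb[i].len,
                                        DMA_TO_DEVICE);

        if (dma_mapping_error(&hv_dev->device, dma))
            return -ENOMEM;
        /* rewrite the page buffer to carry the bounce-buffer address */
        pb[i].pfn = dma >> PAGE_SHIFT;
        pb[i].offset = offset_in_page(dma);
    }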