On 21.05.2021 05:03, Wang Xingang wrote:
> From: Xingang Wang
>
> When booting with devicetree, pci_request_acs() is called after the
> enumeration and initialization of PCI devices, thus ACS is not
> enabled. ACS should be enabled when the IOMMU is detected for the
> PCI host bridge, so
On 2021-09-01 09:59, Marek Szyprowski wrote:
On 21.05.2021 05:03, Wang Xingang wrote:
> From: Xingang Wang
>
> When booting with devicetree, pci_request_acs() is called after the
> enumeration and initialization of PCI devices, thus ACS is not
> enabled. ACS should be enabled when the IOMMU is
> From: Alex Williamson
> Sent: Wednesday, September 1, 2021 12:16 AM
>
> On Mon, 30 Aug 2021 19:59:10 -0700
> Nicolin Chen wrote:
>
> > The SMMUv3 devices implemented in the Grace SoC support NVIDIA's custom
> > CMDQ-Virtualization (CMDQV) hardware. Like the new ECMDQ feature first
> >
This patch series adds support for r8a779a0 (R-Car V3U).
Yoshihiro Shimoda (2):
dt-bindings: iommu: renesas,ipmmu-vmsa: add r8a779a0 support
iommu/ipmmu-vmsa: Add support for r8a779a0
 .../bindings/iommu/renesas,ipmmu-vmsa.yaml | 1 +
 drivers/iommu/ipmmu-vmsa.c                 | 19 ++-
Add support for r8a779a0 (R-Car V3U).
Signed-off-by: Yoshihiro Shimoda
---
Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.yaml | 1 +
1 file changed, 1 insertion(+)
diff --git a/Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.yaml
Add support for r8a779a0 (R-Car V3U). The IPMMU hardware design
of this SoC differs from the others, so add a new ipmmu_features for it.
Signed-off-by: Yoshihiro Shimoda
---
drivers/iommu/ipmmu-vmsa.c | 19 ++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git
[Note that there is a conflict with changes from the swiotlb tree due
to dma_direct_{alloc,free}. The solution is basically to take the changes
from both trees and apply them manually.]
The following changes since commit 36a21d51725af2ce0700c6ebcb6b9594aac658a6:
Linux 5.14-rc5 (2021-08-08
On Tue, Aug 31, 2021, at 23:30, Alyssa Rosenzweig wrote:
> I use this function for cross-device sharing on the M1 display driver.
> Arguably this is unsafe but it works on 16k kernels and if you want to
> test the function on 4k, you know where my code is.
>
My biggest issue is that I do not
On Tue, Aug 31, 2021, at 23:39, Alyssa Rosenzweig wrote:
> > + if ((1 << __ffs(domain->pgsize_bitmap)) > PAGE_SIZE) {
>
> Not a fan of this construction. Could you assign `(1 <<
> __ffs(domain->pgsize_bitmap))` to an appropriately named temporary (e.g.
> min_io_pgsize) so it's clearer what's
On Tue, 2021-08-24 at 15:10 +0800, Hsin-Yi Wang wrote:
> On Fri, Aug 13, 2021 at 2:57 PM Yong Wu wrote:
> >
> > Prepare for 2 HWs that share a pgtable in different power-domains.
> >
> > The previous SoCs don't have PM. Only mt8192 has a power-domain,
> > and it is the display's power-domain, which
On Fri, Jul 02, 2021 at 05:16:25PM +0300, Dmitry Osipenko wrote:
> 01.07.2021 21:14, Thierry Reding wrote:
> > On Tue, Jun 08, 2021 at 06:51:40PM +0200, Thierry Reding wrote:
> >> On Fri, May 28, 2021 at 06:54:55PM +0200, Thierry Reding wrote:
> >>> On Thu, May 20, 2021 at 05:03:06PM -0500, Rob
On Tue, 2021-08-24 at 15:35 +0800, Hsin-Yi Wang wrote:
> On Fri, Aug 13, 2021 at 3:03 PM Yong Wu wrote:
> >
> > For MM IOMMU, we always add a device link between smi-common and
> > the IOMMU HW.
> > In mt8195, we add smi-sub-common. Thus, if the node is a sub-common,
> > we still
> > need to find again to
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Add new hvcall guest address host visibility support to mark
> memory visible to the host. Call it inside set_memory_decrypted
> /encrypted(). Add a HYPERVISOR feature check in
> hv_is_isolation_supported() to optimize in
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyper-V exposes the shared memory boundary via cpuid
> HYPERV_CPUID_ISOLATION_CONFIG and stores it in the
> shared_gpa_boundary field of the ms_hyperv struct. This prepares
> to share memory with the host for SNP guests.
>
> Signed-off-by: Tianyu Lan
>
> My biggest issue is that I do not understand how this function is supposed
> to be used correctly. It would work fine as-is if it only ever gets passed
> buffers allocated by the coherent API, but there's no way to check or
> guarantee that. There may also be callers making assumptions that
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> In Isolation VM, all memory shared with the host needs to be marked
> visible to the host via hvcall. vmbus_establish_gpadl() has already done
> it for the netvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_
> pagebuffer() still needs to
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Per previous comment, the Subject line tag should be "scsi: storvsc: "
> In Isolation VM, all memory shared with the host needs to be marked
> visible to the host via hvcall. vmbus_establish_gpadl() has already done
> it for the storvsc rx/tx ring
On 2021-09-01 18:14, Sven Peter wrote:
On Tue, Aug 31, 2021, at 23:39, Alyssa Rosenzweig wrote:
> > + if ((1 << __ffs(domain->pgsize_bitmap)) > PAGE_SIZE) {
>
> Not a fan of this construction. Could you assign `(1 <<
> __ffs(domain->pgsize_bitmap))` to an appropriately named temporary (e.g.
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject tag should be "Drivers: hv: vmbus: "
> VMbus ring buffers are shared with the host, and they need to
> be accessed via the extra address space of an Isolation VM with
> AMD SNP support. This patch is to map the ring buffer
> address in the extra
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyperv exposes the GHCB page via the SEV-ES GHCB MSR for SNP guests
> to communicate with the hypervisor. Map the GHCB page for all
> cpus to read/write MSR registers and submit hvcall requests
> via the ghcb page.
>
> Signed-off-by: Tianyu Lan
> ---
>
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject line tag should be "Drivers: hv: vmbus:"
> The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
> with the host in an Isolation VM, so it's necessary to use hvcall to set
> them visible to the host. In an Isolation VM with
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject line tag should probably be "x86/hyperv:" since the majority
of the code added is under arch/x86.
> Hyperv provides a ghcb hvcall to handle the VMBus
> HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE
> msgs in SNP Isolation VM. Add such
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyperv Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory, so enable swiotlb force
> mode to use the swiotlb bounce buffer for DMA transactions.
>
> In Isolation VM with AMD SEV, the bounce buffer needs
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Mark the vmbus ring buffer visible with set_memory_decrypted() when
> establishing the gpadl handle.
>
> Signed-off-by: Tianyu Lan
> ---
> Change since v3:
> * Change vmbus_teardown_gpadl() parameter and put gpadl handle,
>   buffer
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyperv provides the GHCB protocol to write Synthetic Interrupt
> Controller MSR registers in Isolation VM with AMD SEV-SNP,
> and these registers are emulated by the hypervisor directly.
> Hyperv requires writing the SINTx MSR registers twice.
From: Michael Kelley Sent: Wednesday, September 1, 2021 7:34 PM
[snip]
> > +int netvsc_dma_map(struct hv_device *hv_dev,
> > + struct hv_netvsc_packet *packet,
> > + struct hv_page_buffer *pb)
> > +{
> > + u32 page_count = packet->cp_partial ?
> > +