Jason,
CC+ IOMMU folks
On Tue, Nov 30 2021 at 20:17, Jason Gunthorpe wrote:
> On Tue, Nov 30, 2021 at 10:23:16PM +0100, Thomas Gleixner wrote:
>> The real problem is where to store the MSI descriptors because the PCI
>> device has its own real PCI/MSI-X interrupts which means it still shares
>>
On 12/1/2021 3:16 AM, Thomas Gleixner wrote:
Jason,
CC+ IOMMU folks
On Tue, Nov 30 2021 at 20:17, Jason Gunthorpe wrote:
On Tue, Nov 30, 2021 at 10:23:16PM +0100, Thomas Gleixner wrote:
The real problem is where to store the MSI descriptors because the PCI
device has its own real PCI/MSI-X
On Wed, Dec 01 2021 at 09:00, Jason Gunthorpe wrote:
> On Wed, Dec 01, 2021 at 11:16:47AM +0100, Thomas Gleixner wrote:
>> Looking at the device slices as subdevices with their own struct device
>> makes a lot of sense from the conceptual level.
>
> Except IMS is not just for subdevices, it should
On Wed, Dec 01, 2021 at 07:15:06PM +0100, Vlastimil Babka wrote:
> From: "Matthew Wilcox (Oracle)"
>
> page->freelist is for the use of slab. We already have the ability
> to free a list of pages in the core mm, but it requires the use of a
> list_head and for the pages to be chained together
Add definitions for the VIRTIO_IOMMU_F_BYPASS_CONFIG feature, which supersedes
VIRTIO_IOMMU_F_BYPASS.
Reviewed-by: Kevin Tian
Signed-off-by: Jean-Philippe Brucker
---
include/uapi/linux/virtio_iommu.h | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git
To ease identity mapping support, keep the list of reserved regions
sorted.
Reviewed-by: Eric Auger
Reviewed-by: Kevin Tian
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/virtio-iommu.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git
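A minimal sketch of the idea described above, not the actual patch: the helper name is made up, and it assumes the usual struct iommu_resv_region from <linux/iommu.h>. A new region is inserted before the first existing region with a higher start address, so the per-endpoint list stays ordered.

/* Illustrative only: keep a reserved-region list sorted by start address. */
static void viommu_add_resv_sorted(struct list_head *resv_regions,
				   struct iommu_resv_region *new)
{
	struct iommu_resv_region *iter;

	list_for_each_entry(iter, resv_regions, list) {
		if (new->start < iter->start) {
			/* insert before the first region that starts later */
			list_add_tail(&new->list, &iter->list);
			return;
		}
	}
	list_add_tail(&new->list, resv_regions);	/* largest start: append */
}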
To support identity mappings, the virtio-iommu driver must be able to
represent full 64-bit ranges internally. Pass (start, end) instead of
(start, size) to viommu_add/del_mapping().
Clean up the comments. The one about the returned size was never true: when
sweeping the whole address space, the returned
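To make the motivation concrete, here is a small illustration (not driver code, and the helper name is hypothetical) of why an inclusive end is used: a mapping spanning the whole 64-bit input space has no representable size in a u64, while its end address does fit.

/* Illustration only: the (start, size) form cannot express the full range. */
static bool range_expressible_as_size(u64 start, u64 end)
{
	/*
	 * end - start + 1 wraps to 0 for the full 64-bit range, so a
	 * (start, size) pair cannot distinguish "everything" from "nothing";
	 * the inclusive (start, end) pair can.
	 */
	return end - start + 1 != 0;
}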
Support identity domains for devices that do not offer the
VIRTIO_IOMMU_F_BYPASS_CONFIG feature, by creating 1:1 mappings between
the virtual and physical address space. Identity domains created this
way still perform noticeably better than DMA domains, because they don't
have the overhead of
The VIRTIO_IOMMU_F_BYPASS_CONFIG feature adds a new flag to the ATTACH
request that creates a bypass domain. Use it to enable identity
domains.
When VIRTIO_IOMMU_F_BYPASS_CONFIG is not supported by the device, we
currently fail attaching to an identity domain. Future patches will
instead create
Support identity domains, allowing IOMMU protection to be enabled for only a
subset of endpoints (those assigned to userspace, for example). Users
may enable identity domains at compile time
(CONFIG_IOMMU_DEFAULT_PASSTHROUGH), boot time (iommu.passthrough=1) or
runtime (/sys/kernel/iommu_groups/*/type
On 12/1/2021 11:41 AM, Thomas Gleixner wrote:
Dave,
please trim your replies.
On Wed, Dec 01 2021 at 09:28, Dave Jiang wrote:
On 12/1/2021 3:16 AM, Thomas Gleixner wrote:
Jason,
CC+ IOMMU folks
On Tue, Nov 30 2021 at 20:17, Jason Gunthorpe wrote:
On Tue, Nov 30, 2021 at 10:23:16PM
On Wed, Dec 01, 2021 at 06:35:35PM +0100, Thomas Gleixner wrote:
> On Wed, Dec 01 2021 at 09:00, Jason Gunthorpe wrote:
> > On Wed, Dec 01, 2021 at 11:16:47AM +0100, Thomas Gleixner wrote:
> >> Looking at the device slices as subdevices with their own struct device
> >> makes a lot of sense from
From: "Matthew Wilcox (Oracle)"
page->freelist is for the use of slab. We already have the ability
to free a list of pages in the core mm, but it requires the use of a
list_head and for the pages to be chained together through page->lru.
Switch the iommu code over to using put_pages_list().
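A minimal sketch of the pattern being described, assuming the generic put_pages_list() helper from the core mm; the two function names here are illustrative, not the series' actual code. Freed page-table pages are chained through page->lru on a caller-owned list_head and released in one call once the IOTLB has been flushed.

static void collect_freed_page(struct list_head *freelist, struct page *page)
{
	/* chain via page->lru, not page->freelist */
	list_add_tail(&page->lru, freelist);
}

static void flush_and_release(struct list_head *freelist)
{
	/* ... IOTLB flush happens here, before the pages can be reused ... */
	put_pages_list(freelist);	/* frees every page on the list */
}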
Folks from non-slab subsystems are Cc'd only to patches affecting them, and
this cover letter.
Series also available in git, based on 5.16-rc3:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
The plan: as with my SLUB PREEMPT_RT series in 5.15, I would
Dave,
please trim your replies.
On Wed, Dec 01 2021 at 09:28, Dave Jiang wrote:
> On 12/1/2021 3:16 AM, Thomas Gleixner wrote:
>> Jason,
>>
>> CC+ IOMMU folks
>>
>> On Tue, Nov 30 2021 at 20:17, Jason Gunthorpe wrote:
>>> On Tue, Nov 30, 2021 at 10:23:16PM +0100, Thomas Gleixner wrote:
>>
>>
On 2021-12-01 11:14 a.m., 'Jason Gunthorpe' via linux-ntb wrote:
> On Wed, Dec 01, 2021 at 06:35:35PM +0100, Thomas Gleixner wrote:
>> On Wed, Dec 01 2021 at 09:00, Jason Gunthorpe wrote:
>>> On Wed, Dec 01, 2021 at 11:16:47AM +0100, Thomas Gleixner wrote:
Looking at the device slices as
On 2021-12-01 19:07, Matthew Wilcox wrote:
On Wed, Dec 01, 2021 at 07:15:06PM +0100, Vlastimil Babka wrote:
From: "Matthew Wilcox (Oracle)"
page->freelist is for the use of slab. We already have the ability
to free a list of pages in the core mm, but it requires the use of a
list_head and
On Wed, Dec 01, 2021 at 11:16:47AM +0100, Thomas Gleixner wrote:
> Looking at the device slices as subdevices with their own struct device
> makes a lot of sense from the conceptual level.
Except IMS is not just for subdevices, it should be usable for any
driver in any case as a general
The dt_binding_check currently issues the following warnings for the
Tegra186 and Tegra194 SMMUs ...
arch/arm64/boot/dts/nvidia/tegra186-p2771-.dt.yaml: iommu@1200:
'nvidia,memory-controller' does not match any of the regexes: 'pinctrl-[0-9]+'
From schema:
On 01.12.2021 08:39, Vinod Koul wrote:
> Add the SM8450 qcom iommu implementation to the qcom_smmu_impl_of_match
> table, which brings in iommu support for the SM8450 SoC.
>
> Signed-off-by: Vinod Koul
> Tested-by: Dmitry Baryshkov
> ---
With deep pain, as we've had to deal with this for a
On 01/12/2021 10:39, Vinod Koul wrote:
Add the SM8450 qcom iommu implementation to the qcom_smmu_impl_of_match
table, which brings in iommu support for the SM8450 SoC.
Signed-off-by: Vinod Koul
Tested-by: Dmitry Baryshkov
---
drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 1 +
1 file
From: Tianyu Lan
Hyper-V provides two kinds of Isolation VMs: VBS (Virtualization-Based
Security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
adds support for these Isolation VMs in Linux.
The memory of these VMs is encrypted and the host can't access guest
memory
From: Tianyu Lan
Hyper-V provides Isolation VMs, which have memory encryption support. Add
hyperv_cc_platform_has() and return true for checks of the
GUEST_MEM_ENCRYPT attribute.
Signed-off-by: Tianyu Lan
---
arch/x86/kernel/cc_platform.c | 15 +++
1 file changed, 15 insertions(+)
diff
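A rough sketch of what such a hook might look like, assuming the existing cc_platform_has() infrastructure and its CC_ATTR_GUEST_MEM_ENCRYPT attribute; the exact shape of the patch may differ.

static bool hyperv_cc_platform_has(enum cc_attr attr)
{
	/* Isolation VM guest memory is encrypted */
	return attr == CC_ATTR_GUEST_MEM_ENCRYPT;
}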
From: Tianyu Lan
In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed via
an extra address space above shared_gpa_boundary (e.g. the 39-bit address
line) reported by the Hyper-V CPUID ISOLATION_CONFIG leaf. The access
physical address will be the original physical address + shared_gpa_boundary.
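As a hypothetical illustration of the arithmetic (the helper name is made up): the host-visible alias of a shared page lives above shared_gpa_boundary, so the bounce-buffer physical address is simply offset by that boundary before it is handed to the host.

static inline phys_addr_t hv_shared_pa(phys_addr_t pa, u64 shared_gpa_boundary)
{
	/* e.g. a boundary at bit 39: alias = pa + (1ULL << 39) */
	return pa + shared_gpa_boundary;
}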
From: Tianyu Lan
In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via hvcall. vmbus_establish_gpadl() already does this for
the storvsc rx/tx ring buffers. The page buffer used by
vmbus_sendpacket_mpb_desc() still needs to be handled. Use the DMA API
(scsi_dma_map/unmap) to
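A short sketch of the direction described, under the assumption that the standard scsi_dma_map()/scsi_dma_unmap() helpers are meant (the function name below is illustrative): mapping the scatterlist through the DMA API lets swiotlb bounce the payload into host-visible memory when bounce buffering is forced.

static int storvsc_map_for_host(struct scsi_cmnd *scmnd)
{
	int nents = scsi_dma_map(scmnd);	/* bounces via swiotlb when forced */

	if (nents < 0)
		return -ENOMEM;
	/* ... build the multi-page buffer descriptor from the mapped sglist ... */
	return nents;
}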
From: Tianyu Lan
In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via hvcall. vmbus_establish_gpadl() already does this for
the netvsc rx/tx ring buffers. The page buffer used by
vmbus_sendpacket_pagebuffer() still needs to be handled. Use the DMA API to
map/unmap this memory
From: Tianyu Lan
A Hyper-V Isolation VM requires bounce buffer support to copy
data from/to encrypted memory, so enable swiotlb force
mode to use the swiotlb bounce buffer for DMA transactions.
In an Isolation VM with AMD SEV, the bounce buffer needs to be
accessed via an extra address space which is
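A minimal sketch, assuming the long-standing swiotlb_force switch from <linux/swiotlb.h> (the wrapper name is hypothetical): forcing swiotlb makes every streaming DMA mapping go through the bounce buffer, which the guest keeps in decrypted, host-visible memory.

static void __init hv_force_swiotlb(void)
{
	swiotlb_force = SWIOTLB_FORCE;	/* all DMA goes through the bounce buffer */
}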
The display controllers are attached to a separate ARM SMMU instance
that is dedicated to servicing isochronous memory clients. Add this ISO
instance of the ARM SMMU to device tree.
Please note that the display controllers are not hooked up to this SMMU
yet, because we are still missing a means
On Wed, Dec 01 2021 at 14:21, Dave Jiang wrote:
> On 12/1/2021 1:25 PM, Thomas Gleixner wrote:
>>> The hardware implementation does not have enough MSIX vectors for
>>> guests. There are only 9 MSIX vectors total (8 for queues) and 2048 IMS
>>> vectors. So if we are to do MSI-X for all of them,
On 12/1/2021 3:03 PM, Thomas Gleixner wrote:
On Wed, Dec 01 2021 at 14:49, Dave Jiang wrote:
On 12/1/2021 2:44 PM, Thomas Gleixner wrote:
How that is backed on the host does not really matter. You can expose
MSI-X to the guest with an INTx backing as well.
I'm still failing to see the
On Tue, 2021-11-23 at 18:10 +0200, Maxim Levitsky wrote:
> As I sadly found out, an S3 cycle makes the AMD IOMMU stop sending interrupts
> until the system is rebooted.
>
> I only noticed it now because otherwise the IOMMU works, and these interrupts
> are only used for errors and for GA log
Jason,
On Wed, Dec 01 2021 at 21:21, Thomas Gleixner wrote:
> On Wed, Dec 01 2021 at 14:14, Jason Gunthorpe wrote:
> Which in turn is consistent all over the place and does not require any
> special case for anything. Neither for interrupts nor for anything else.
that said, feel free to tell me
On 12/1/2021 2:44 PM, Thomas Gleixner wrote:
On Wed, Dec 01 2021 at 14:21, Dave Jiang wrote:
On 12/1/2021 1:25 PM, Thomas Gleixner wrote:
The hardware implementation does not have enough MSIX vectors for
guests. There are only 9 MSIX vectors total (8 for queues) and 2048 IMS
vectors. So if
On Wed, Dec 01 2021 at 14:49, Dave Jiang wrote:
> On 12/1/2021 2:44 PM, Thomas Gleixner wrote:
>> How that is backed on the host does not really matter. You can expose
>> MSI-X to the guest with an INTx backing as well.
>>
>> I'm still failing to see the connection between the 9 MSIX vectors and
>>
Dave,
On Wed, Dec 01 2021 at 15:53, Dave Jiang wrote:
> On 12/1/2021 3:03 PM, Thomas Gleixner wrote:
>> This still depends on how this overall discussion about representation
>> of all of this stuff is resolved.
>>
What needs a subdevice to expose?
>> Can you answer that too please?
>
>
Hello,
syzbot found the following issue on:
HEAD commit: c5c17547b778 Merge tag 'net-5.16-rc3' of git://git.kernel...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13a73609b0
kernel config: https://syzkaller.appspot.com/x/.config?x=bf85c53718a1e697
On Wed, Dec 01 2021 at 11:47, Dave Jiang wrote:
> On 12/1/2021 11:41 AM, Thomas Gleixner wrote:
>>> Hi Thomas. This is actually the IDXD usage for a mediated device passed
>>> to a guest kernel when we plumb the pass through of IMS to the guest
>>> rather than doing previous implementation of
Jason,
On Wed, Dec 01 2021 at 14:14, Jason Gunthorpe wrote:
> On Wed, Dec 01, 2021 at 06:35:35PM +0100, Thomas Gleixner wrote:
>> On Wed, Dec 01 2021 at 09:00, Jason Gunthorpe wrote:
>> But NTB is operating through an abstraction layer and is not a direct
>> PCIe device driver.
>
> I'm not sure
On 12/1/2021 1:25 PM, Thomas Gleixner wrote:
On Wed, Dec 01 2021 at 11:47, Dave Jiang wrote:
On 12/1/2021 11:41 AM, Thomas Gleixner wrote:
Hi Thomas. This is actually the IDXD usage for a mediated device passed
to a guest kernel when we plumb the pass through of IMS to the guest
rather than
This means the virtgpu driver uses the DMA mapping helpers but has not set up
a DMA mask (which most likely suggests it is some kind of virtual device).
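For context, a generic illustration of the step being pointed out as missing (the device pointer and mask width are placeholders, not a claim about virtgpu): a driver using the DMA mapping helpers is expected to declare its addressing capability first.

static int declare_dma_capability(struct device *dev)
{
	/* state how many address bits the device can drive before mapping */
	return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
}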
On Wed, Dec 01, 2021 at 10:18:21AM -0800, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: c5c17547b778 Merge