Presently, "/sys/kernel/debug/iommu/intel/dmar_translation_struct" file
dumps DMAR tables in the below format
IOMMU dmar2: Root Table Address:4362cc000
Root Table Entries:
Bus: 0 H: 0 L: 4362f0001
Context Table Entries for Bus: 0
Entry B:D.F HighLow
160 00:14.0 102 4362ef001
A DMAR table walk would typically follow the below process:
1. The bus number is used to index into the root table, which points to a
   context table.
2. The device number and function number are used together to index into the
   context table, which then points to a PASID directory.
3. PASID[19:6] is used to index into the PASID directory, and PASID[5:0] is
   used to index into the PASID table that the directory entry points to.
A scalable mode DMAR table walk also involves looking at bits in each stage
of the walk, such as:
1. Is PASID enabled in the context entry?
2. What is the size of the PASID directory?
3. Is the PASID directory entry present?
4. Is the PASID table entry present?
5. How many PASID table entries are there?
Presently, the "/sys/kernel/debug/iommu/intel/dmar_translation_struct" file
dumps only the legacy DMAR table, which consists of the root table and
context table. A scalable mode DMAR table adds the PASID directory and PASID
table. Hence, add support to dump these tables as well.
Directly extending the present
On Fri, Sep 22, 2017 at 2:58 AM Jean-Philippe Brucker wrote:
>
> On 22/09/17 10:02, Joerg Roedel wrote:
> > On Tue, Sep 19, 2017 at 10:23:43AM -0400, Rob Clark wrote:
> >> I would like to decide in the IRQ whether or not to queue work,
> >> because when we get a gpu fault, we tend to get
The commit cf04eee8bf0e ("iommu/vt-d: Include ACPI devices in iommu=pt")
added for_each_active_iommu() in iommu_prepare_static_identity_mapping()
but never used the loop element, i.e., "drhd->iommu".
drivers/iommu/intel-iommu.c: In function 'iommu_prepare_static_identity_mapping':
On Thu, May 09, 2019 at 11:41:42AM -0700, Sai Praneeth Prakhya wrote:
> From: Sai Praneeth
>
> Presently, "/sys/kernel/debug/iommu/intel/dmar_translation_struct" file dumps
> only legacy DMAR table which consists of root table and context table.
> Scalable
> mode DMAR table adds PASID directory
On 10/05/2019 12:21, Robin Murphy wrote:
On 10/05/2019 09:22, Pierre Morel wrote:
For the generic implementation of VFIO PCI we need to retrieve
the hardware configuration for the PCI functions and the
PCI function groups.
We modify the internal functions using CLP Query PCI function and
CLP Query PCI function group so that they can be called from
outside the S390 architecture.
Hi Robin,
On 5/8/19 4:38 PM, Robin Murphy wrote:
> On 08/04/2019 13:19, Eric Auger wrote:
>> On attach_pasid_table() we program STE S1 related info set
>> by the guest into the actual physical STEs. At minimum
>> we need to program the context descriptor GPA and compute
>> whether the stage1 is
Hi Robin,
On 5/8/19 3:59 PM, Robin Murphy wrote:
> On 08/04/2019 13:18, Eric Auger wrote:
>> On ARM, MSIs are translated by the SMMU. An IOVA is allocated
>> for each MSI doorbell. If both the host and the guest are exposed
>> with SMMUs, we end up with 2 different IOVAs allocated by each.
>> guest
On Tue, Dec 4, 2018 at 2:29 PM Rob Herring wrote:
>
> On Sat, Dec 1, 2018 at 10:54 AM Rob Clark wrote:
> >
> > This solves a problem we see with drm/msm, caused by getting
> > iommu_dma_ops while we attach our own domain and manage it directly at
> > the iommu API level:
Hi Robin,
On 5/8/19 4:24 PM, Robin Murphy wrote:
> On 08/04/2019 13:19, Eric Auger wrote:
>> To allow nested stage support, we need to store both
>> stage 1 and stage 2 configurations (and remove the former
>> union).
>>
>> A nested setup is characterized by both s1_cfg and s2_cfg
>> set.
On 10/05/2019 13:33, Pankaj Bansal wrote:
> Hi Will/Robin/Joerg,
>
> I am s/w engineer from NXP India Pvt. Ltd.
> We are using SMMU-V3 in one of NXP SOC.
> I have a question about the SMMU Stream ID allocation in linux.
>
> Right now the Stream IDs allocated to a device are mapped via device tree
> to the device.
Hi Will/Robin/Joerg,
I am a software engineer from NXP India Pvt. Ltd.
We are using SMMU-V3 in one of our NXP SoCs.
I have a question about the SMMU Stream ID allocation in Linux.
Right now the Stream IDs allocated to a device are mapped via device tree to
the device.
Correction: we use ARM SMMU-500.
Corresponding bindings are :
https://elixir.bootlin.com/linux/latest/source/Documentation/devicetree/bindings/iommu/arm,smmu.txt#L49
The #iommu-cells value is 1 in our SoC:
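The actual snippet is elided above; for illustration only (the stream ID 0x10 and node names here are made up), a binding with #iommu-cells = <1> looks roughly like:

```dts
/* Illustrative only: with #iommu-cells = <1>, the single specifier cell
 * in "iommus" carries the stream ID for the device. */
smmu: iommu@5000000 {
        compatible = "arm,mmu-500";
        /* ... */
        #iommu-cells = <1>;
};

ethernet@1f00000 {
        /* ... */
        iommus = <&smmu 0x10>;
};
```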
Using the PCI VFIO interface allows userland, a.k.a. QEMU, to retrieve
zPCI-specific information without knowing Z-specific identifiers
like the function ID or the function handle of the zPCI function
hidden behind the PCI interface.
By using the VFIO_IOMMU_GET_INFO ioctl we enter the
For the generic implementation of VFIO PCI we need to retrieve
the hardware configuration for the PCI functions and the
PCI function groups.
We modify the internal functions using CLP Query PCI function and
CLP Query PCI function group so that they can be called from
outside the S390 architecture.
We add "get attributes" to the S390 iommu operations to retrieve the S390
specific attributes through the call of zPCI dedicated CLP functions.
Signed-off-by: Pierre Morel
---
 drivers/iommu/s390-iommu.c | 77 ++
 include/linux/iommu.h      |  4 +++
 2 files changed
To use the VFIO_IOMMU_GET_INFO to retrieve IOMMU specific information,
we define a new flag VFIO_IOMMU_INFO_CAPABILITIES in the
vfio_iommu_type1_info structure and the associated capability
information block.
Signed-off-by: Pierre Morel
---
 include/uapi/linux/vfio.h | 10 ++
 1 file changed
> > static void ctx_tbl_walk(struct seq_file *m, struct intel_iommu *iommu, u16 bus)
> > {
> >     struct context_entry *context;
> > -   u16 devfn;
> > +   u16 devfn, pasid_dir_size;
> > +   u64 pasid_dir_ptr;
> >
> >     for (devfn = 0; devfn < 256; devfn++) {
> >             struct
> Hi Sai,
>
> On 5/10/19 2:41 AM, Sai Praneeth Prakhya wrote:
> > From: Sai Praneeth
> >
> > Presently, "/sys/kernel/debug/iommu/intel/dmar_translation_struct"
> > file dumps only legacy DMAR table which consists of root table and
> > context table. Scalable mode DMAR table adds PASID directory