Hi Christoph,
On 2021/11/15 21:27, Christoph Hellwig wrote:
On Mon, Nov 15, 2021 at 10:05:47AM +0800, Lu Baolu wrote:
The vfio needs to set DMA_OWNER_USER for the entire group when attaching
The vfio subsystem? driver?
"vfio subsystem"
it to a vfio container. So expose group variants
On 15.11.21 20:37, Zi Yan wrote:
> From: Zi Yan
>
> Hi David,
Hi,
thanks for looking into this.
>
> You suggested to make alloc_contig_range() deal with pageblock_order instead
> of
> MAX_ORDER - 1 and get rid of MAX_ORDER - 1 dependency in virtio_mem[1]. This
> patchset is my attempt to
Add a maintainer entry for the driver and documentation of the HiSilicon PTT device.
Signed-off-by: Yicong Yang
---
MAINTAINERS | 7 +++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 7a2345ce8521..823d495ca0d5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8601,6 +8601,13 @@
HiSilicon PCIe tune and trace device (PTT) is a PCIe Root Complex
integrated Endpoint (RCiEP) device, providing the capability
to dynamically monitor and tune the PCIe traffic (tune),
and trace the TLP headers (trace).
PTT tune is designed for monitoring and adjusting PCIe link parameters.
We
Export iommu_{get,put}_resv_regions() to modules so that drivers
can retrieve and use the reserved regions of the device.
Signed-off-by: Yicong Yang
---
drivers/iommu/iommu.c | 2 ++
include/linux/iommu.h | 4 ++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git
Add the tune function for the HiSilicon PCIe Tune and Trace device. The tune
interface is exposed through sysfs attributes of the PTT PMU device.
Signed-off-by: Yicong Yang
---
drivers/hwtracing/hisilicon/hisi_ptt.c | 167 +
1 file changed, 167 insertions(+)
diff --git
HiSilicon PCIe tune and trace device (PTT) is a PCIe Root Complex
integrated Endpoint (RCiEP) device, providing the capability
to dynamically monitor and tune the PCIe traffic (tune),
and trace the TLP headers (trace).
Add the driver for the device to enable the trace function. The driver
will create
From: Qi Liu
'perf record' and 'perf report --dump-raw-trace' are supported in this
patch.
Example usage:
Output will contain raw PTT data and its textual representation, such
as:
0 0 0x5810 [0x30]: PERF_RECORD_AUXTRACE size: 0x40 offset: 0
ref: 0xa5d50c725 idx: 0 tid: -1 cpu: 0
.
. ...
Document the introduction and usage of the HiSilicon PTT device driver.
Signed-off-by: Yicong Yang
---
Documentation/trace/hisi-ptt.rst | 305 +++
1 file changed, 305 insertions(+)
create mode 100644 Documentation/trace/hisi-ptt.rst
diff --git
On 2021-11-11 06:50, Christoph Hellwig wrote:
Hi all,
Linus complained about the complex flow in dma_direct_alloc, so this
tries to simplify it a bit, and while I was at it I also made sure that
unencrypted pages never leak back into the page allocator.
Before I forget, I've had a quick skim
On 16/11/2021 11:35, Jean-Philippe Brucker wrote:
Add device-tree support to the SMMUv3 PMCG. One small cosmetic change
while factoring the option mask printout: don't display it when zero, it
only contains one erratum at the moment.
Signed-off-by: Jay Chen
Signed-off-by: Jean-Philippe
On 2021/11/16 18:56, Robin Murphy wrote:
> On 2021-11-16 09:06, Yicong Yang via iommu wrote:
> [...]
>> +/*
>> + * Get RMR address if provided by the firmware.
>> + * Return 0 if the IOMMU isn't present or the policy of the
>> + * IOMMU domain is passthrough or we get a usable RMR region.
>> + *
Add binding for the Arm SMMUv3 PMU. Each node represents a PMCG, and is
placed as a sibling node of the SMMU. Although the PMCGs registers may
be within the SMMU MMIO region, they are separate devices, and there can
be multiple PMCG devices for each SMMU (for example one for the TCU and
one for
Add devicetree binding for the SMMUv3 PMU, called Performance Monitoring
Counter Group (PMCG) in the spec. Each SMMUv3 implementation can have
multiple independent PMCGs, for example one for the Translation Control
Unit (TCU) and one per Translation Buffer Unit (TBU).
I previously sent the
Add device-tree support to the SMMUv3 PMCG. One small cosmetic change
while factoring the option mask printout: don't display it when zero, it
only contains one erratum at the moment.
Signed-off-by: Jay Chen
Signed-off-by: Jean-Philippe Brucker
---
drivers/perf/arm_smmuv3_pmu.c | 25
On Tue, Nov 16, 2021 at 09:57:30AM +0800, Lu Baolu wrote:
> Hi Christoph,
>
> On 11/15/21 9:14 PM, Christoph Hellwig wrote:
> > On Mon, Nov 15, 2021 at 10:05:42AM +0800, Lu Baolu wrote:
> > > +enum iommu_dma_owner {
> > > + DMA_OWNER_NONE,
> > > + DMA_OWNER_KERNEL,
> > > + DMA_OWNER_USER,
> > >
On 2021-11-16 09:06, Yicong Yang via iommu wrote:
[...]
+/*
+ * Get RMR address if provided by the firmware.
+ * Return 0 if the IOMMU isn't present or the policy of the
+ * IOMMU domain is passthrough or we get a usable RMR region.
+ * Otherwise a negative value is returned.
+ */
+static int
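The return contract stated in the quoted comment can be sketched as follows. The function name and the pseudo state flags are hypothetical (the real function name is truncated in the quote); only the 0/negative contract is taken from the comment.

```c
#include <assert.h>

/* Hypothetical IOMMU states for illustration only. */
enum { IOMMU_ABSENT, IOMMU_PASSTHROUGH, IOMMU_TRANSLATED };

/* Sketch of the contract: return 0 if the IOMMU isn't present, the
 * domain is passthrough, or a usable RMR region was found; otherwise
 * return a negative value. */
static int hisi_ptt_get_rmr(int iommu_state, int have_usable_rmr)
{
	if (iommu_state == IOMMU_ABSENT || iommu_state == IOMMU_PASSTHROUGH)
		return 0; /* nothing to translate, no RMR needed */
	return have_usable_rmr ? 0 : -1;
}
```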
On 2021-11-16 11:35, Jean-Philippe Brucker wrote:
Add devicetree binding for the SMMUv3 PMU, called Performance Monitoring
Counter Group (PMCG) in the spec. Each SMMUv3 implementation can have
multiple independent PMCGs, for example one for the Translation Control
Unit (TCU) and one per
On Tue, 16 Nov 2021 11:35:36 +, Jean-Philippe Brucker wrote:
> Add binding for the Arm SMMUv3 PMU. Each node represents a PMCG, and is
> placed as a sibling node of the SMMU. Although the PMCGs registers may
> be within the SMMU MMIO region, they are separate devices, and there can
> be
On 2021-11-16 14:21, John Garry wrote:
On 04/10/2021 12:44, Will Deacon wrote:
On Fri, Sep 24, 2021 at 06:01:52PM +0800, John Garry wrote:
The IOVA domain structure is a bit overloaded, holding:
- IOVA tree management
- FQ control
- IOVA rcache memories
Indeed only a couple of IOVA users use
On 2021-11-16 15:42, Jean-Philippe Brucker wrote:
On Tue, Nov 16, 2021 at 12:02:47PM +, Robin Murphy wrote:
On 2021-11-16 11:35, Jean-Philippe Brucker wrote:
Add devicetree binding for the SMMUv3 PMU, called Performance Monitoring
Counter Group (PMCG) in the spec. Each SMMUv3
On Tue, Nov 16, 2021 at 08:02:53AM -0600, Rob Herring wrote:
> My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
> on your patch (DT_CHECKER_FLAGS is new in v5.13):
>
> yamllint warnings/errors:
> ./Documentation/devicetree/bindings/iommu/arm,smmu-v3-pmcg.yaml:24:9:
>
On Tue, Nov 16, 2021 at 12:06:36PM +, John Garry wrote:
> On 16/11/2021 11:35, Jean-Philippe Brucker wrote:
> > Add device-tree support to the SMMUv3 PMCG. One small cosmetic change
> > while factoring the option mask printout: don't display it when zero, it
> > only contains one erratum at
On Tue, Nov 16, 2021 at 12:02:47PM +, Robin Murphy wrote:
> On 2021-11-16 11:35, Jean-Philippe Brucker wrote:
> > Add devicetree binding for the SMMUv3 PMU, called Performance Monitoring
> > Counter Group (PMCG) in the spec. Each SMMUv3 implementation can have
> > multiple independent PMCGs,
On 04/10/2021 12:44, Will Deacon wrote:
On Fri, Sep 24, 2021 at 06:01:52PM +0800, John Garry wrote:
The IOVA domain structure is a bit overloaded, holding:
- IOVA tree management
- FQ control
- IOVA rcache memories
Indeed only a couple of IOVA users use the rcache, and only dma-iommu.c
uses
From: Tianyu Lan
The Hyper-V netvsc driver needs to allocate noncontiguous DMA memory and
remap it into the unencrypted address space before sharing it with the host. Add
vmap/vunmap_noncontiguous() callbacks and handle the remap in the Hyper-V
dma ops callback.
Signed-off-by: Tianyu Lan
---
From: Tianyu Lan
In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via a hypercall. vmbus_establish_gpadl() has already done this for
the netvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_pagebuffer() still needs to be handled. Use the DMA API to map/unmap
these memory
From: Tianyu Lan
In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via a hypercall. vmbus_establish_gpadl() has already done this for
the storvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_mpb_desc() still needs to be handled. Use the DMA API (scsi_dma_map/unmap)
to
From: Tianyu Lan
Hyper-V provides two kinds of Isolation VMs: VBS (Virtualization-based
Security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
adds support for these Isolation VMs in Linux.
The memory of these VMs is encrypted and the host can't access guest
memory
From: Tianyu Lan
In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed via an
extra address space which is above shared_gpa_boundary (e.g. a 39-bit
address line) reported by the Hyper-V CPUID ISOLATION_CONFIG leaf. The accessed
physical address will be the original physical address plus shared_gpa_boundary.
From: Tianyu Lan
hyperv Isolation VM requires bounce buffer support to copy
data from/to encrypted memory and so enable swiotlb force
mode to use swiotlb bounce buffer for DMA transaction.
In Isolation VM with AMD SEV, the bounce buffer needs to be
accessed via extra address space which is
On Tue, Nov 16, 2021 at 05:00:14PM +, Robin Murphy wrote:
> On 2021-11-16 15:42, Jean-Philippe Brucker wrote:
> > On Tue, Nov 16, 2021 at 12:02:47PM +, Robin Murphy wrote:
> > > On 2021-11-16 11:35, Jean-Philippe Brucker wrote:
> > > > Add devicetree binding for the SMMUv3 PMU, called
On Tue, Nov 16, 2021 at 03:24:29PM +0800, Lu Baolu wrote:
> On 2021/11/16 4:44, Bjorn Helgaas wrote:
> > On Mon, Nov 15, 2021 at 10:05:45AM +0800, Lu Baolu wrote:
> > > IOMMU grouping on PCI necessitates that if we lack isolation on a bridge
> > > then all of the downstream devices will be part of
On Tue, Nov 16, 2021 at 02:22:01PM -0600, Bjorn Helgaas wrote:
> On Tue, Nov 16, 2021 at 03:24:29PM +0800, Lu Baolu wrote:
> > On 2021/11/16 4:44, Bjorn Helgaas wrote:
> > > On Mon, Nov 15, 2021 at 10:05:45AM +0800, Lu Baolu wrote:
> > > > IOMMU grouping on PCI necessitates that if we lack
Instead of writing to WC cmdstream buffers that go all the way to the main
memory, let's use the system cache to improve the performance.
Signed-off-by: Georgi Djakov
---
drivers/gpu/drm/msm/msm_gem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
From: "Isaac J. Manjarres"
Non-coherent devices on systems that support a system or
last level cache may want to request that allocations be
cached in the system cache. For memory that is allocated
by the kernel, and used for DMA with devices, the memory
attributes used for CPU access should
On Wed, Nov 17, 2021 at 12:16 AM Georgi Djakov
wrote:
>
> Instead of writing to WC cmdstream buffers that go all the way to the main
> memory, let's use the system cache to improve the performance.
>
> Signed-off-by: Georgi Djakov
> ---
> drivers/gpu/drm/msm/msm_gem.c | 2 +-
> 1 file changed,
On 16 Nov 2021, at 3:58, David Hildenbrand wrote:
> On 15.11.21 20:37, Zi Yan wrote:
>> From: Zi Yan
>>
>> Hi David,
>
> Hi,
>
> thanks for looking into this.
>
>>
>> You suggested to make alloc_contig_range() deal with pageblock_order instead
>> of
>> MAX_ORDER - 1 and get rid of MAX_ORDER - 1
Hi Jason,
On 11/16/21 9:46 PM, Jason Gunthorpe wrote:
On Tue, Nov 16, 2021 at 09:57:30AM +0800, Lu Baolu wrote:
Hi Christoph,
On 11/15/21 9:14 PM, Christoph Hellwig wrote:
On Mon, Nov 15, 2021 at 10:05:42AM +0800, Lu Baolu wrote:
+enum iommu_dma_owner {
+ DMA_OWNER_NONE,
+
On Tue, Nov 16, 2021 at 11:31:49AM +, Robin Murphy wrote:
> On 2021-11-11 06:50, Christoph Hellwig wrote:
>> Hi all,
>>
>> Linus complained about the complex flow in dma_direct_alloc, so this
>> tries to simplify it a bit, and while I was at it I also made sure that
>> unencrypted pages never