Now that the RDMA core deals with devices that only do DMA mapping in
lower layers properly, there is no user for dma_virt_ops and it can
be removed.
Signed-off-by: Christoph Hellwig
---
include/linux/dma-mapping.h | 2 --
kernel/dma/Kconfig | 5 ---
kernel/dma/Makefile | 1 -
Remove the pointless paddr variable that was only used once.
Signed-off-by: Christoph Hellwig
Reviewed-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
---
drivers/pci/p2pdma.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index b07018af53876c..afd792cc272832 100644
Now that all users of dma_virt_ops are gone we can remove the workaround
for it in the PCI peer to peer code.
Signed-off-by: Christoph Hellwig
Reviewed-by: Logan Gunthorpe
Acked-by: Bjorn Helgaas
---
drivers/pci/p2pdma.c | 20
1 file changed, 20 deletions(-)
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
Use the ib_dma_* helpers to skip the DMA translation instead. This
removes the last user of dma_virt_ops and keeps the weird layering
violation inside the RDMA core instead of burdening the DMA mapping
subsystems with it. This also means the software RDMA drivers now
don't have to mess with DMA
These two functions are entirely unused.
Signed-off-by: Christoph Hellwig
---
include/rdma/ib_verbs.h | 29 -
1 file changed, 29 deletions(-)
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 9bf6c319a670e2..5f8fd7976034e0 100644
--- a/include/rdma/ib_verbs.h
dma_virt_ops requires that all pages have a kernel virtual address.
Introduce an INFINIBAND_VIRT_DMA Kconfig symbol that depends on !HIGHMEM
and a large enough dma_addr_t, and make all three drivers depend on the
new symbol.
Signed-off-by: Christoph Hellwig
---
drivers/infiniband/Kconfig
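A minimal sketch of what such a helper symbol could look like; the exact hunk is not quoted in this thread, so the dependencies and the driver name below are assumptions based on the description above (the three software RDMA drivers would each gain a `depends on` line like the one shown):

```kconfig
# Hypothetical sketch, not the quoted patch: dma_virt_ops-style identity
# mapping only works when every page has a kernel virtual address.
config INFINIBAND_VIRT_DMA
	def_bool !HIGHMEM

# each software RDMA driver then depends on it, e.g.:
config RDMA_SIW
	tristate "Software RDMA over TCP/IP (iWARP) driver"
	depends on INFINIBAND_VIRT_DMA
```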
Hi Jason,
this series switches the RDMA core to opencode the special case of
devices bypassing the DMA mapping in the RDMA ULPs. The virt ops
have caused a bit of trouble, with the P2P code not working with
them because we'd do two dma mapping iterations for a
single I/O, but also
Hi Baolu,
On Thu, Nov 5, 2020 at 9:47 AM Lu Baolu wrote:
>
> Hi Zhenzhong,
>
> On 11/4/20 4:19 PM, Zhenzhong Duan wrote:
> > no_platform_optin is redundant with dmar_disabled and it's only used in
> > platform_optin_force_iommu(), remove it and use dmar_disabled instead.
>
> It's actually not.
>
Hi Zhenzhong,
On 11/4/20 4:19 PM, Zhenzhong Duan wrote:
no_platform_optin is redundant with dmar_disabled and it's only used in
platform_optin_force_iommu(), remove it and use dmar_disabled instead.
It's actually not.
If CONFIG_INTEL_IOMMU_DEFAULT_ON is not set, we will get "dmar_disabled = 1"
Hello Konrad,
On Wed, Nov 04, 2020 at 05:14:52PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Nov 04, 2020 at 10:08:04PM +0000, Ashish Kalra wrote:
> > From: Ashish Kalra
> >
> > For SEV, all DMA to and from guest has to use shared
> > (un-encrypted) pages. SEV uses SWIOTLB to make this
> > happen without requiring changes to device drivers.
On Thursday, November 5, 2020, Ashish Kalra wrote:
> From: Ashish Kalra
>
> For SEV, all DMA to and from guest has to use shared
> (un-encrypted) pages. SEV uses SWIOTLB to make this
> happen without requiring changes to device drivers.
> However, depending on workload being run, the default
> 64MB of SWIOTLB might not be enough and SWIOTLB
On Wed, Nov 04, 2020 at 10:08:04PM +0000, Ashish Kalra wrote:
> From: Ashish Kalra
>
> For SEV, all DMA to and from guest has to use shared
> (un-encrypted) pages. SEV uses SWIOTLB to make this
> happen without requiring changes to device drivers.
> However, depending on workload being run, the default
From: Ashish Kalra
For SEV, all DMA to and from guest has to use shared
(un-encrypted) pages. SEV uses SWIOTLB to make this
happen without requiring changes to device drivers.
However, depending on workload being run, the default
64MB of SWIOTLB might not be enough and SWIOTLB
may run out of buffers
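The SWIOTLB pool size can already be overridden at boot; a hedged example, assuming the usual 2KB slab size (the slab count and the resulting total below are arithmetic from that assumption, not quoted from the patch):

```
# kernel command line: reserve 256MB of SWIOTLB instead of the 64MB default
# (131072 slabs * 2KB per slab = 256MB)
swiotlb=131072
```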
On 04 Nov 2020 10:33, Jean-Philippe Brucker wrote:
> Hi Al,
>
> On Tue, Nov 03, 2020 at 01:09:04PM -0700, Al Stone wrote:
> > So, there are some questions about the VIOT definition and I just
> > don't know enough to be able to answer them. One of the ASWG members
> > is trying to understand the
On Tue, Nov 03, 2020 at 08:14:29PM +0100, j...@8bytes.org wrote:
> On Tue, Nov 03, 2020 at 01:48:51PM -0400, Jason Gunthorpe wrote:
> > I think the same PCI driver with a small flag to support the PF or
> > VF is not the same as two completely different drivers in different
> > subsystems
>
> Ther
On Wed, Nov 04, 2020 at 03:09:04PM +, Bernard Metzler wrote:
> lkey of zero to pass a physical buffer, only allowed for
> kernel applications? Very nice idea I think.
It already exists, it is called the local_dma_lkey, just set
IB_DEVICE_LOCAL_DMA_LKEY and provide the value you want to use
in
On Wed, Nov 04, 2020 at 05:31:35PM +0100, Christoph Hellwig wrote:
> On Wed, Nov 04, 2020 at 11:52:55AM -0400, Jason Gunthorpe wrote:
> > It could work, I think a reasonable ULP API would be to have some
> >
> > rdma_fill_ib_sge_from_sgl()
> > rdma_map_sge_single()
> > etc etc
> >
> > ie instead of wrapping the DMA API as-is we have a new API that
On 2020-11-04 2:50 a.m., Christoph Hellwig wrote:
> Now that all users of dma_virt_ops are gone we can remove the workaround
> for it in the PCIe peer to peer code.
>
> Signed-off-by: Christoph Hellwig
The two P2PDMA patches look fine to me. Nice to get rid of that hack.
Reviewed-by: Logan Gunthorpe
s|PCI/p2p: cleanup up __pci_p2pdma_map_sg|PCI/P2PDMA: Cleanup up
__pci_p2pdma_map_sg|
to match history.
On Wed, Nov 04, 2020 at 10:50:51AM +0100, Christoph Hellwig wrote:
> Remove the pointless paddr variable that was only used once.
>
> Signed-off-by: Christoph Hellwig
Acked-by: Bjorn Helgaas
s|PCI/p2p: remove|PCI/P2PDMA: Remove|
to match history.
On Wed, Nov 04, 2020 at 10:50:50AM +0100, Christoph Hellwig wrote:
> Now that all users of dma_virt_ops are gone we can remove the workaround
> for it in the PCIe peer to peer code.
s/PCIe/PCI/
We went to some trouble to make P2PDMA work on
On Wed, Nov 04, 2020 at 11:52:55AM -0400, Jason Gunthorpe wrote:
> It could work, I think a reasonable ULP API would be to have some
>
> rdma_fill_ib_sge_from_sgl()
> rdma_map_sge_single()
> etc etc
>
> ie instead of wrapping the DMA API as-is we have a new API that
> directly builds the ib_sge
On Wed, Nov 04, 2020 at 03:01:08PM +0100, Christoph Hellwig wrote:
> > Sigh. I think the proper fix is to replace addr/length with a
> > scatterlist pointer in the struct ib_sge, then have SW drivers
> > directly use the page pointer properly.
>
> The proper fix is to move the DMA mapping into the
----- "Christoph Hellwig" wrote: -----
>To: "Jason Gunthorpe"
>From: "Christoph Hellwig"
>Date: 11/04/2020 03:02PM
>Cc: "Christoph Hellwig" , "Bjorn Helgaas"
>, "Logan Gunthorpe" ,
>linux-r...@vger.kernel.org, linux-...@vger.kernel.org,
>iommu@lists.linux-foundation.org
>Subject: [EXTERNAL] Re:
Joerg,
One remark:
> However I found out that with Kernel 5.9.3 the amdgpu kernel module is not
> loaded/installed
That is likely my fault because I was compiling that linux kernel on a faster
machine (V1807B CPU against R1305G CPU (target)). I restarted that compile just
now on the target machine
> Yes, but it could be the same underlying reason.
There is no PCI setup issue that we are aware of.
> For a first try, use 5.9.3. If it reproduces there, please try booting with
> "pci=noats" on the kernel command line.
Did compile the kernel 5.9.3 and started a reboot test to see if it is going
On Tue, Nov 03, 2020 at 10:46:43AM +0100, Christoph Hellwig wrote:
> ping?
Hopefully this goes through. I am in the process of testing it but ran
into testing issues that I believe are unrelated.
>
> On Fri, Oct 23, 2020 at 08:33:09AM +0200, Christoph Hellwig wrote:
> > The tbl_dma_addr argument
On Wed, Nov 04, 2020 at 09:42:41AM -0400, Jason Gunthorpe wrote:
> On Wed, Nov 04, 2020 at 10:50:49AM +0100, Christoph Hellwig wrote:
>
> > +int ib_dma_virt_map_sg(struct ib_device *dev, struct scatterlist *sg, int
> > nents)
> > +{
> > + struct scatterlist *s;
> > + int i;
> > +
> > + for_each_sg(sg, s, nents, i) {
On Wed, Nov 04, 2020 at 10:50:49AM +0100, Christoph Hellwig wrote:
> +int ib_dma_virt_map_sg(struct ib_device *dev, struct scatterlist *sg, int
> nents)
> +{
> + struct scatterlist *s;
> + int i;
> +
> + for_each_sg(sg, s, nents, i) {
> + sg_dma_address(s) = (uintptr_t)sg_virt(s);
On Wed, Nov 04, 2020 at 10:15:49AM +, Robin Murphy wrote:
> On 2020-11-04 08:14, Maxime Ripard wrote:
> > Hi Christoph,
> >
> > On Tue, Nov 03, 2020 at 10:55:38AM +0100, Christoph Hellwig wrote:
> > > Linux 5.10-rc1 switched from having a single dma offset in struct device
> > > to a set of DMA ranges, and introduced a new helper to set them,
On 2020-11-04 07:17, Kunkun Jiang wrote:
Hi Will and Robin,
Sorry for the late reply.
On 2020/11/3 18:21, Robin Murphy wrote:
On 2020-11-03 09:11, Will Deacon wrote:
On Tue, Nov 03, 2020 at 11:00:27AM +0800, Kunkun Jiang wrote:
Recently, I have read and learned the code related to io-pgtable
On Wed, Nov 04, 2020 at 10:15:49AM +, Robin Murphy wrote:
> How about having something in the platform code that keys off the top-level
> SoC compatible and uses a bus notifier to create offsets for the relevant
> devices if an MBUS description is missing? At least that way the workaround
>
On 2020-11-04 08:14, Maxime Ripard wrote:
Hi Christoph,
On Tue, Nov 03, 2020 at 10:55:38AM +0100, Christoph Hellwig wrote:
Linux 5.10-rc1 switched from having a single dma offset in struct device
to a set of DMA ranges, and introduced a new helper to set them,
dma_direct_set_offset.
This in fact surfaced that a bunch of drivers
On Wed, Nov 04, 2020 at 09:21:35AM +0000, Merger, Edgar [AUTOSOL/MAS/AUGS] wrote:
> The "AMD-Vi: Completion-Wait loop timed out" message is at [65499.964105] but
> the amdgpu error is at [52.772273], hence much earlier.
Yes, but it could be the same underlying reason.
> Have not tried to use an upstream kernel yet. Which one would you recommend?
Hi Al,
On Tue, Nov 03, 2020 at 01:09:04PM -0700, Al Stone wrote:
> So, there are some questions about the VIOT definition and I just
> don't know enough to be able to answer them. One of the ASWG members
> is trying to understand the semantics behind the subtables.
Thanks for the update. We drop
Hi Jörg,
The "AMD-Vi: Completion-Wait loop timed out" message is at [65499.964105] but the
amdgpu error is at [52.772273], hence much earlier.
Have not tried to use an upstream kernel yet. Which one would you recommend?
As far as inconsistencies in the PCI-setup is concerned, the only thing that I
know of ri
Hi Edgar,
On Fri, Oct 30, 2020 at 02:26:23PM +, Merger, Edgar [AUTOSOL/MAS/AUGS]
wrote:
> With one board we have a boot problem that is reproducible roughly every 50th boot.
> The system is accessible via ssh and works fine except for the graphics. The
> graphics is off. We don't see a screen. Plea
Hi Robin,
- then cpu3, cpu4, and so on.
- We can do this for all CPUs in the system, so total CPU rcache grows
from zero -> #CPUs * 128 * 2. Yet no DMA mapping leaks.
I get that. That's the initial warm-up phase I alluded to below. In an
even simpler example, allocating on CPU A and freeing
no_platform_optin is redundant with dmar_disabled and it's only used in
platform_optin_force_iommu(), remove it and use dmar_disabled instead.
Meanwhile remove all the dead code in platform_optin_force_iommu().
Signed-off-by: Zhenzhong Duan
---
drivers/iommu/intel/iommu.c | 14 ++
1
Hi Christoph,
On Tue, Nov 03, 2020 at 10:55:38AM +0100, Christoph Hellwig wrote:
> Linux 5.10-rc1 switched from having a single dma offset in struct device
> to a set of DMA ranges, and introduced a new helper to set them,
> dma_direct_set_offset.
>
> This in fact surfaced that a bunch of drivers