Re: [PATCH 16/33] powerpc/powernv: remove dead npu-dma code

2018-10-14 Thread Christoph Hellwig
On Mon, Oct 15, 2018 at 12:34:02PM +1100, Alexey Kardashevskiy wrote:
> 
> On 10/10/2018 00:24, Christoph Hellwig wrote:
> > This code has been unused since it was merged and is in the way of
> > cleaning up the DMA code, thus remove it.
> > 
> > This effectively reverts commit 5d2aa710 ("powerpc/powernv: Add support
> > for Nvlink NPUs").
> 
> 
> This code is heavily used by the NVIDIA GPU driver.

Not by the driver that actually exists in the kernel tree, so it simply
doesn't matter.


Re: [PATCH v3 0/8] vfio/mdev: IOMMU aware mediated device

2018-10-14 Thread Lu Baolu

Hi,

On 10/13/2018 04:25 PM, Xu Zaibo wrote:

Hi,

On 2018/10/12 13:16, Lu Baolu wrote:

Hi,

The Mediated Device is a framework for fine-grained physical device
sharing across isolated domains. Currently the mdev framework
is designed to be independent of the platform IOMMU support. As a
result, DMA isolation relies on the mdev parent device in a
vendor-specific way.

There are several cases where a mediated device could be protected
and isolated by the platform IOMMU. For example, Intel VT-d rev 3.0
[1] introduces a new translation mode called 'scalable mode', which
enables PASID-granular translations. The VT-d scalable mode is the
key ingredient for Scalable I/O Virtualization [2] [3], which allows
sharing a device at the minimal possible granularity (ADI - Assignable
Device Interface).

A mediated device backed by an ADI could be protected and isolated
by the IOMMU since 1) the parent device supports tagging a unique
PASID to all DMA traffic out of the mediated device; and 2) the DMA
translation unit (IOMMU) supports PASID-granular translation.
We can apply IOMMU protection and isolation to this kind of device
just as we do with an assignable PCI device.

In order to distinguish the IOMMU-capable mediated devices from those
which still need to rely on parent devices, this patch set adds two
new members to struct mdev_device.

* iommu_device
   - This, if set, indicates that the mediated device could
     be fully isolated and protected by the IOMMU via attaching
     an iommu domain to this device. If empty, it indicates
     using vendor-defined isolation.

* iommu_domain
   - This is a placeholder for an iommu domain. A domain
     could be stored here for later use once it has been
     attached to the iommu_device of this mdev.

The helpers below are added to set and get the above iommu device
and iommu domain pointers in the mdev core implementation; a rough
illustrative sketch follows the list.

* mdev_set/get_iommu_device(dev, iommu_device)
   - Set or get the iommu device which represents this mdev
     in the IOMMU's device scope. Drivers don't need to set the
     iommu device if they use vendor-defined isolation.

* mdev_set/get_iommu_domain(domain)
   - An iommu domain which has been attached to the iommu
     device in order to protect and isolate the mediated
     device will be kept in the mdev data structure and
     could be retrieved later.
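
For reference, a rough sketch of what the two new members and the helpers
could look like. This is only an illustration based on the description
above -- the field types, their placement in struct mdev_device, and the
exact helper signatures are assumptions, not the actual patches:

  /* Sketch only: existing members abbreviated, new members as described. */
  struct mdev_device {
          struct device           dev;
          struct parent_device    *parent;
          /* ... existing members ... */

          /*
           * Device that represents this mdev in the IOMMU's device scope;
           * NULL means vendor-defined isolation via the parent driver.
           */
          struct device           *iommu_device;
          /* Domain attached to iommu_device on behalf of this mdev. */
          struct iommu_domain     *iommu_domain;
  };

  /* Hypothetical prototypes matching the helpers listed above. */
  int mdev_set_iommu_device(struct device *dev, struct device *iommu_device);
  struct device *mdev_get_iommu_device(struct device *dev);
  int mdev_set_iommu_domain(struct device *dev, struct iommu_domain *domain);
  struct iommu_domain *mdev_get_iommu_domain(struct device *dev);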

The mdev parent device driver can opt in to having the mdev fully
isolated and protected by the IOMMU when the mdev is being created,
by invoking mdev_set_iommu_device() in its @create() callback.

I just cannot understand here: how do I get an iommu_device while I
create a mediated device in my parent device driver?


When you are creating an mdev in your parent driver, you should know
which PCI device this mdev belongs to.
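
For illustration, a minimal sketch of the opt-in from a parent driver's
@create() callback. Only the mdev_set_iommu_device() call comes from this
series; its exact signature, the my_parent_pci_dev() lookup, and the error
handling are assumptions made up for the example:

  static int my_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
  {
          /*
           * The parent driver knows which physical PCI device this mdev is
           * carved out of; my_parent_pci_dev() is a hypothetical lookup.
           */
          struct pci_dev *pdev = my_parent_pci_dev(mdev);
          int ret;

          /* Opt in: this PCI device represents the mdev in IOMMU scope. */
          ret = mdev_set_iommu_device(mdev_dev(mdev), &pdev->dev);
          if (ret)
                  return ret;

          /* ... allocate the ADI / vendor-specific resources ... */
          return 0;
  }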



And why not reuse the device of the mdev instead of adding a new device here?


The iommu_device in the mdev_device structure is the PCI device that
represents this mdev in the IOMMU's device scope. The IOMMU is only
aware of PCI devices; it is not aware of mdev devices.

Best regards,
Lu Baolu



Re: [PATCH 16/33] powerpc/powernv: remove dead npu-dma code

2018-10-14 Thread Benjamin Herrenschmidt
On Mon, 2018-10-15 at 12:34 +1100, Alexey Kardashevskiy wrote:
> On 10/10/2018 00:24, Christoph Hellwig wrote:
> > This code has been unused since it was merged and is in the way of
> > cleaning up the DMA code, thus remove it.
> > 
> > This effectively reverts commit 5d2aa710 ("powerpc/powernv: Add support
> > for Nvlink NPUs").
> 
> 
> This code is heavily used by the NVIDIA GPU driver.

Some of it is, yes. And while I don't want to be involved in the
discussion about that specific can of worms, there is code in this file
related to the custom "always error" DMA ops that I suppose we could
remove, which is what is getting in the way of Christoph's cleanups. It's
just meant as debug code to catch incorrect attempts at doing DMA
mappings on the wrong "side" of the GPU.
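
For context, the "always error" ops are a set of dma_map_ops whose
callbacks never succeed. A minimal sketch of the idea -- not the actual
npu-dma.c code; the names and the subset of callbacks shown are
illustrative only:

  /*
   * Illustrative: fail every DMA operation loudly, so a driver that tries
   * to set up DMA through the NPU device instead of the real GPU PCI
   * device gets an obvious error rather than a silently broken mapping.
   */
  static void *npu_dma_alloc_stub(struct device *dev, size_t size,
                  dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
  {
          WARN_ONCE(1, "DMA operations are not supported on the NPU side\n");
          return NULL;
  }

  /* map_page, map_sg, etc. would follow the same warn-and-fail pattern. */

  static const struct dma_map_ops npu_error_dma_ops = {
          .alloc  = npu_dma_alloc_stub,
          /* remaining callbacks omitted in this sketch */
  };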

Cheers,
Ben.




Re: [PATCH 16/33] powerpc/powernv: remove dead npu-dma code

2018-10-14 Thread Alexey Kardashevskiy


On 10/10/2018 00:24, Christoph Hellwig wrote:
> This code has been unused since it was merged and is in the way of
> cleaning up the DMA code, thus remove it.
> 
> This effectively reverts commit 5d2aa710 ("powerpc/powernv: Add support
> for Nvlink NPUs").


This code is heavily used by the NVIDIA GPU driver.



> 
> Signed-off-by: Christoph Hellwig 
> ---
>  arch/powerpc/include/asm/pci.h|   3 -
>  arch/powerpc/include/asm/powernv.h|  23 -
>  arch/powerpc/platforms/powernv/Makefile   |   2 +-
>  arch/powerpc/platforms/powernv/npu-dma.c  | 999 --
>  arch/powerpc/platforms/powernv/pci-ioda.c | 243 --
>  arch/powerpc/platforms/powernv/pci.h  |  11 -
>  6 files changed, 1 insertion(+), 1280 deletions(-)
>  delete mode 100644 arch/powerpc/platforms/powernv/npu-dma.c
> 
> diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
> index 2af9ded80540..a01d2e3d6ff9 100644
> --- a/arch/powerpc/include/asm/pci.h
> +++ b/arch/powerpc/include/asm/pci.h
> @@ -127,7 +127,4 @@ extern void pcibios_scan_phb(struct pci_controller *hose);
>  
>  #endif   /* __KERNEL__ */
>  
> -extern struct pci_dev *pnv_pci_get_gpu_dev(struct pci_dev *npdev);
> -extern struct pci_dev *pnv_pci_get_npu_dev(struct pci_dev *gpdev, int index);
> -
>  #endif /* __ASM_POWERPC_PCI_H */
> diff --git a/arch/powerpc/include/asm/powernv.h 
> b/arch/powerpc/include/asm/powernv.h
> index 2f3ff7a27881..4848a6b3c6b2 100644
> --- a/arch/powerpc/include/asm/powernv.h
> +++ b/arch/powerpc/include/asm/powernv.h
> @@ -11,33 +11,10 @@
>  #define _ASM_POWERNV_H
>  
>  #ifdef CONFIG_PPC_POWERNV
> -#define NPU2_WRITE 1
>  extern void powernv_set_nmmu_ptcr(unsigned long ptcr);
> -extern struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
> - unsigned long flags,
> - void (*cb)(struct npu_context *, void *),
> - void *priv);
> -extern void pnv_npu2_destroy_context(struct npu_context *context,
> - struct pci_dev *gpdev);
> -extern int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
> - unsigned long *flags, unsigned long *status,
> - int count);
> -
>  void pnv_tm_init(void);
>  #else
>  static inline void powernv_set_nmmu_ptcr(unsigned long ptcr) { }
> -static inline struct npu_context *pnv_npu2_init_context(struct pci_dev 
> *gpdev,
> - unsigned long flags,
> - struct npu_context *(*cb)(struct npu_context *, void *),
> - void *priv) { return ERR_PTR(-ENODEV); }
> -static inline void pnv_npu2_destroy_context(struct npu_context *context,
> - struct pci_dev *gpdev) { }
> -
> -static inline int pnv_npu2_handle_fault(struct npu_context *context,
> - uintptr_t *ea, unsigned long *flags,
> - unsigned long *status, int count) {
> - return -ENODEV;
> -}
>  
>  static inline void pnv_tm_init(void) { }
>  static inline void pnv_power9_force_smt4(void) { }
> diff --git a/arch/powerpc/platforms/powernv/Makefile 
> b/arch/powerpc/platforms/powernv/Makefile
> index b540ce8eec55..2b13e9dd137c 100644
> --- a/arch/powerpc/platforms/powernv/Makefile
> +++ b/arch/powerpc/platforms/powernv/Makefile
> @@ -6,7 +6,7 @@ obj-y += opal-msglog.o opal-hmi.o 
> opal-power.o opal-irqchip.o
>  obj-y+= opal-kmsg.o opal-powercap.o opal-psr.o 
> opal-sensor-groups.o
>  
>  obj-$(CONFIG_SMP)+= smp.o subcore.o subcore-asm.o
> -obj-$(CONFIG_PCI)+= pci.o pci-ioda.o npu-dma.o pci-ioda-tce.o
> +obj-$(CONFIG_PCI)+= pci.o pci-ioda.o pci-ioda-tce.o
>  obj-$(CONFIG_CXL_BASE)   += pci-cxl.o
>  obj-$(CONFIG_EEH)+= eeh-powernv.o
>  obj-$(CONFIG_PPC_SCOM)   += opal-xscom.o
> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c 
> b/arch/powerpc/platforms/powernv/npu-dma.c
> deleted file mode 100644
> index 8006c54a91e3..
> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> +++ /dev/null
> @@ -1,999 +0,0 @@
> -/*
> - * This file implements the DMA operations for NVLink devices. The NPU
> - * devices all point to the same iommu table as the parent PCI device.
> - *
> - * Copyright Alistair Popple, IBM Corporation 2015.
> - *
> - * This program is free software; you can redistribute it and/or
> - * modify it under the terms of version 2 of the GNU General Public
> - * License as published by the Free Software Foundation.
> - */
> -
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -#include 
> -
> -#include "powernv.h"
> -#include "pci.h"
> -
> -#define npu_to_phb(x) container_of(x, struct pnv_phb, npu)
> -
> -/*
> - * spinlock to protect 

Re: [PATCH 01/33] powerpc: use mm zones more sensibly

2018-10-14 Thread Benjamin Herrenschmidt
On Tue, 2018-10-09 at 15:24 +0200, Christoph Hellwig wrote:
>   * Find the least restrictive zone that is entirely below the
> @@ -324,11 +305,14 @@ void __init paging_init(void)
> printk(KERN_DEBUG "Memory hole size: %ldMB\n",
>(long int)((top_of_ram - total_ram) >> 20));
>  
> +#ifdef CONFIG_ZONE_DMA
> +   max_zone_pfns[ZONE_DMA] = min(max_low_pfn, 0x7fffUL >> 
> PAGE_SHIFT);
> +#endif
> +   max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
>  #ifdef CONFIG_HIGHMEM
> -   limit_zone_pfn(ZONE_NORMAL, lowmem_end_addr >> PAGE_SHIFT);
> +   max_zone_pfns[ZONE_HIGHMEM] = max_pfn
   ^
Missing a  ";" here  --|

Sorry ... works with that fix on an old laptop with highmem.

>  #endif
> -   limit_zone_pfn(TOP_ZONE, top_of_ram >> PAGE_SHIFT);
> -   zone_limits_final = true;
> +
> free_area_init_nodes(max_zone_pfns);
>  



[PATCH] dma-mapping: move the remap helpers to a separate file

2018-10-14 Thread Christoph Hellwig
The dma remap code only really makes sense for non-cache-coherent
architectures, and is currently only used by arm, arm64 and xtensa.
Split it out into a separate file with a separate Kconfig symbol.

[Laura: you wrote this code back then, do you have a sensible
 copyright statement to add, given that the mapping.c statement
 obviously does not match your code that was written much later]

Signed-off-by: Christoph Hellwig 
---
 arch/arm/Kconfig |  1 +
 arch/arm64/Kconfig   |  1 +
 arch/xtensa/Kconfig  |  1 +
 kernel/dma/Kconfig   |  4 +++
 kernel/dma/Makefile  |  2 +-
 kernel/dma/mapping.c | 84 ---
 kernel/dma/remap.c   | 85 
 7 files changed, 93 insertions(+), 85 deletions(-)
 create mode 100644 kernel/dma/remap.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e8cd55a5b04c..cf54e572dafd 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -30,6 +30,7 @@ config ARM
select CPU_PM if (SUSPEND || CPU_IDLE)
select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
select DMA_DIRECT_OPS if !MMU
+   select DMA_REMAP if MMU
select EDAC_SUPPORT
select EDAC_ATOMIC_SCRUB
select GENERIC_ALLOCATOR
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1b1a0e95c751..179994b67d11 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -77,6 +77,7 @@ config ARM64
select CPU_PM if (SUSPEND || CPU_IDLE)
select DCACHE_WORD_ACCESS
select DMA_DIRECT_OPS
+   select DMA_REMAP
select EDAC_SUPPORT
select FRAME_POINTER
select GENERIC_ALLOCATOR
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 9a7c654a7654..d57abe4cad0f 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -10,6 +10,7 @@ config XTENSA
select CLONE_BACKWARDS
select COMMON_CLK
select DMA_DIRECT_OPS
+   select DMA_REMAP if MMU
select GENERIC_ATOMIC64
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_SHOW
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 645c7a2ecde8..c92e08173ed8 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -51,3 +51,7 @@ config SWIOTLB
bool
select DMA_DIRECT_OPS
select NEED_DMA_MAP_STATE
+
+config DMA_REMAP
+   depends on MMU
+   bool
diff --git a/kernel/dma/Makefile b/kernel/dma/Makefile
index 7d581e4eea4a..f4feeceb8020 100644
--- a/kernel/dma/Makefile
+++ b/kernel/dma/Makefile
@@ -7,4 +7,4 @@ obj-$(CONFIG_DMA_DIRECT_OPS)+= direct.o
 obj-$(CONFIG_DMA_VIRT_OPS) += virt.o
 obj-$(CONFIG_DMA_API_DEBUG)+= debug.o
 obj-$(CONFIG_SWIOTLB)  += swiotlb.o
-
+obj-$(CONFIG_DMA_REMAP)+= remap.o
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 58dec7a92b7b..dfbc3deb95cd 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -262,87 +262,3 @@ int dma_common_mmap(struct device *dev, struct 
vm_area_struct *vma,
 #endif /* !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */
 }
 EXPORT_SYMBOL(dma_common_mmap);
-
-#ifdef CONFIG_MMU
-static struct vm_struct *__dma_common_pages_remap(struct page **pages,
-   size_t size, unsigned long vm_flags, pgprot_t prot,
-   const void *caller)
-{
-   struct vm_struct *area;
-
-   area = get_vm_area_caller(size, vm_flags, caller);
-   if (!area)
-   return NULL;
-
-   if (map_vm_area(area, prot, pages)) {
-   vunmap(area->addr);
-   return NULL;
-   }
-
-   return area;
-}
-
-/*
- * remaps an array of PAGE_SIZE pages into another vm_area
- * Cannot be used in non-sleeping contexts
- */
-void *dma_common_pages_remap(struct page **pages, size_t size,
-   unsigned long vm_flags, pgprot_t prot,
-   const void *caller)
-{
-   struct vm_struct *area;
-
-   area = __dma_common_pages_remap(pages, size, vm_flags, prot, caller);
-   if (!area)
-   return NULL;
-
-   area->pages = pages;
-
-   return area->addr;
-}
-
-/*
- * remaps an allocated contiguous region into another vm_area.
- * Cannot be used in non-sleeping contexts
- */
-
-void *dma_common_contiguous_remap(struct page *page, size_t size,
-   unsigned long vm_flags,
-   pgprot_t prot, const void *caller)
-{
-   int i;
-   struct page **pages;
-   struct vm_struct *area;
-
-   pages = kmalloc(sizeof(struct page *) << get_order(size), GFP_KERNEL);
-   if (!pages)
-   return NULL;
-
-   for (i = 0; i < (size >> PAGE_SHIFT); i++)
-   pages[i] = nth_page(page, i);
-
-   area = __dma_common_pages_remap(pages, size, vm_flags, prot, caller);
-
-   kfree(pages);
-
-   if (!area)
-   return NULL;
-   return area->addr;
-}
-
-/*
- * unmaps a range previously mapped by dma_common_*_remap
- */

[PATCH] dma-direct: reject highmem pages from dma_alloc_from_contiguous

2018-10-14 Thread Christoph Hellwig
dma_alloc_from_contiguous can return highmem pages depending on the
setup, which a plain non-remapping DMA allocator can't handle.  Detect
this case and try the normal page allocator instead.

Signed-off-by: Christoph Hellwig 
---
 kernel/dma/direct.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 87a6bc2a96c0..46fbaa49125b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -126,6 +126,18 @@ void *dma_direct_alloc_pages(struct device *dev, size_t 
size,
if (gfpflags_allow_blocking(gfp)) {
page = dma_alloc_from_contiguous(dev, count, page_order,
 gfp & __GFP_NOWARN);
+   if (page && PageHighMem(page)) {
+   /*
+* Depending on the cma= arguments and per-arch setup
+* dma_alloc_from_contiguous could return highmem
+* pages.  Without remapping there is no way to return
+* them here, so log an error and fail.
+*/
+   dev_info(dev, "Ignoring highmem page from CMA.\n");
+   dma_release_from_contiguous(dev, page, count);
+   page = NULL;
+   }
+
if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
dma_release_from_contiguous(dev, page, count);
page = NULL;
-- 
2.19.1



Re: [PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-14 Thread tedheadster
Please change:

Reported-by: tedheadster 
Tested-by: tedheadster 

to

Reported-by: Matthew Whitehead 
Tested-by: Matthew Whitehead 

- Matthew


Re: [PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-14 Thread Ingo Molnar


* Thomas Gleixner  wrote:

> On Sun, 14 Oct 2018, Christoph Hellwig wrote:
> 
> > On Sun, Oct 14, 2018 at 10:13:31AM +0200, Thomas Gleixner wrote:
> > > On Sun, 14 Oct 2018, Christoph Hellwig wrote:
> > > 
> > > > We already build the swiotlb code for 32-bit kernels with PAE support,
> > > > but the code to actually use swiotlb has only been enabled for 64-bit
> > > > kernels for an unknown reason.
> > > > 
> > > > Before Linux 4.18 we papered over this fact because the networking code,
> > > > the scsi layer and some random block drivers implemented their own
> > > > bounce buffering scheme.
> > > > 
> > > > Fixes: 21e07dba ("scsi: reduce use of block bounce buffers")
> 
> Please use the first 12 characters of the commit SHA for fixes tags in the
> future, as documented. No need to resend, I fixed it up for you and added a
> Cc: stable as well

For those who have their ~/.gitconfig's from ancient Git history, this can be 
done via:

git config --global core.abbrev 12

Thanks,

Ingo


Re: [PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-14 Thread Thomas Gleixner
On Sun, 14 Oct 2018, Christoph Hellwig wrote:

> On Sun, Oct 14, 2018 at 10:13:31AM +0200, Thomas Gleixner wrote:
> > On Sun, 14 Oct 2018, Christoph Hellwig wrote:
> > 
> > > We already build the swiotlb code for 32-bit kernels with PAE support,
> > > but the code to actually use swiotlb has only been enabled for 64-bit
> > > kernels for an unknown reason.
> > > 
> > > Before Linux 4.18 we papered over this fact because the networking code,
> > > the scsi layer and some random block drivers implemented their own
> > > bounce buffering scheme.
> > > 
> > > Fixes: 21e07dba ("scsi: reduce use of block bounce buffers")

Please use the first 12 characters of the commit SHA for fixes tags in the
future, as documented. No need to resend, I fixed it up for you and added a
Cc: stable as well

Thanks,

tglx


Re: [PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-14 Thread Christoph Hellwig
On Sun, Oct 14, 2018 at 10:13:31AM +0200, Thomas Gleixner wrote:
> On Sun, 14 Oct 2018, Christoph Hellwig wrote:
> 
> > We already build the swiotlb code for 32-bit kernels with PAE support,
> > but the code to actually use swiotlb has only been enabled for 64-bit
> > kernels for an unknown reason.
> > 
> > Before Linux 4.18 we papered over this fact because the networking code,
> > the scsi layer and some random block drivers implemented their own
> > bounce buffering scheme.
> > 
> > Fixes: 21e07dba ("scsi: reduce use of block bounce buffers")
> > Fixes: ab74cfeb ("net: remove the PCI_DMA_BUS_IS_PHYS check in 
> > illegal_highdma")
> > Reported-by: tedheadster 
> > Tested-by: tedheadster 
> 
> I'll add your SOB when picking this up :)

Thanks.  Here it is in writing:

Signed-off-by: Christoph Hellwig 


Re: [PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-14 Thread Thomas Gleixner
On Sun, 14 Oct 2018, Christoph Hellwig wrote:

> We already build the swiotlb code for 32-bit kernels with PAE support,
> but the code to actually use swiotlb has only been enabled for 64-bit
> kernels for an unknown reason.
> 
> Before Linux 4.18 we papered over this fact because the networking code,
> the scsi layer and some random block drivers implemented their own
> bounce buffering scheme.
> 
> Fixes: 21e07dba ("scsi: reduce use of block bounce buffers")
> Fixes: ab74cfeb ("net: remove the PCI_DMA_BUS_IS_PHYS check in 
> illegal_highdma")
> Reported-by: tedheadster 
> Tested-by: tedheadster 

I'll add your SOB when picking this up :)

> ---
>  arch/x86/kernel/pci-swiotlb.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
> index 661583662430..71c0b01d93b1 100644
> --- a/arch/x86/kernel/pci-swiotlb.c
> +++ b/arch/x86/kernel/pci-swiotlb.c
> @@ -42,10 +42,8 @@ IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
>  int __init pci_swiotlb_detect_4gb(void)
>  {
>   /* don't initialize swiotlb if iommu=off (no_iommu=1) */
> -#ifdef CONFIG_X86_64
>   if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
>   swiotlb = 1;
> -#endif
>  
>   /*
>* If SME is active then swiotlb will be set to 1 so that bounce
> -- 
> 2.19.1
> 
> 


[PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-14 Thread Christoph Hellwig
We already build the swiotlb code for 32-bit kernels with PAE support,
but the code to actually use swiotlb has only been enabled for 64-bit
kernels for an unknown reason.

Before Linux 4.18 we papered over this fact because the networking code,
the scsi layer and some random block drivers implemented their own
bounce buffering scheme.

Fixes: 21e07dba ("scsi: reduce use of block bounce buffers")
Fixes: ab74cfeb ("net: remove the PCI_DMA_BUS_IS_PHYS check in illegal_highdma")
Reported-by: tedheadster 
Tested-by: tedheadster 
---
 arch/x86/kernel/pci-swiotlb.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index 661583662430..71c0b01d93b1 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -42,10 +42,8 @@ IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
 int __init pci_swiotlb_detect_4gb(void)
 {
/* don't initialize swiotlb if iommu=off (no_iommu=1) */
-#ifdef CONFIG_X86_64
if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
swiotlb = 1;
-#endif
 
/*
 * If SME is active then swiotlb will be set to 1 so that bounce
-- 
2.19.1
