
Re: [PATCH 27/67] dma-direct: add dma address sanity checks

2017-12-29 Thread Geert Uytterhoeven
Hi Christoph,

On Fri, Dec 29, 2017 at 9:18 AM, Christoph Hellwig  wrote:
> Roughly based on the x86 pci-nommu implementation.
>
> Signed-off-by: Christoph Hellwig 

Thanks for your patch!

> --- a/lib/dma-direct.c
> +++ b/lib/dma-direct.c
> @@ -9,6 +9,24 @@
>  #include 
>  #include 
>
> +#define DIRECT_MAPPING_ERROR   0
> +
> +static bool
> +check_addr(struct device *dev, dma_addr_t dma_addr, size_t size,
> +   const char *caller)
> +{
> +   if (unlikely(dev && !dma_capable(dev, dma_addr, size))) {
> +   if (*dev->dma_mask >= DMA_BIT_MASK(32)) {
> +   dev_err(dev,
> +   "%s: overflow %llx+%zu of device mask %llx\n",

Please use "%pad" to format dma_addr_t ...

> +   caller, (long long)dma_addr, size,

... and use &dma_addr.

> +   (long long)*dev->dma_mask);

This cast is not needed, as u64 is unsigned long long in kernelspace on
all architectures.
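
For reference, a minimal sketch (not part of the posted patch, just putting
both comments above together): "%pad" takes a pointer to the dma_addr_t, and
the u64 mask needs no cast:

   if (*dev->dma_mask >= DMA_BIT_MASK(32)) {
           dev_err(dev,
                   "%s: overflow %pad+%zu of device mask %llx\n",
                   caller, &dma_addr, size, *dev->dma_mask);
   }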

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


Re: consolidate direct dma mapping and swiotlb support

2017-12-29 Thread Vladimir Murzin
On 29/12/17 08:18, Christoph Hellwig wrote:
> Almost every architecture supports a direct dma mapping implementation,
> where no iommu is used and the device dma address is a 1:1 mapping to
> the physical address or has a simple linear offset.  Currently the
> code for this implementation is mostly duplicated across the architectures,
> then duplicated again in the swiotlb code, and duplicated once more
> for special cases like the x86 memory encryption DMA ops.
> 
> This series takes the existing very simple dma-noop dma mapping
> implementation, enhances it with all the x86 features and quirks, and
> creates a common set of architecture hooks for it and the swiotlb code.
> 
> It then switches a large number of architectures to this generic
> direct map implementation and the new generic swiotlb dma_map ops.
> 
> Note that for now this only handles architectures that do cache coherent
> DMA, but a similar consolidation for non-coherent architectures is in the
> works for later merge windows.

Is it available in your dma-mapping.git or somewhere else?

Cheers
Vladimir




Re: [PATCH 17/67] microblaze: rename dma_direct to dma_microblaze

2017-12-29 Thread Julian Calaby
Hi Christoph,

On Fri, Dec 29, 2017 at 7:18 PM, Christoph Hellwig  wrote:
> This frees the dma_direct_* namespace for a generic implementation.

Don't you mean "dma_nommu" not "dma_microblaze" in the subject line?

Thanks,

-- 
Julian Calaby

Email: julian.cal...@gmail.com
Profile: http://www.google.com/profiles/julian.calaby/


[PATCH 01/67] x86: remove X86_PPRO_FENCE

2017-12-29 Thread Christoph Hellwig
There were only a few Pentium Pro multiprocessor systems where this
erratum applied. They are more than 20 years old now, and we've slowly
dropped the places where we put the workarounds in and discouraged anyone
from enabling the workaround.

Get rid of it for good.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/Kconfig.cpu| 13 -
 arch/x86/entry/vdso/vdso32/vclock_gettime.c |  2 --
 arch/x86/include/asm/barrier.h  | 30 -
 arch/x86/include/asm/io.h   | 15 ---
 arch/x86/kernel/pci-nommu.c | 19 --
 arch/x86/um/asm/barrier.h   |  4 
 6 files changed, 83 deletions(-)

diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 65a9a4716e34..f0c5ef578153 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -315,19 +315,6 @@ config X86_L1_CACHE_SHIFT
default "4" if MELAN || M486 || MGEODEGX1
default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || 
MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || 
M586 || MVIAC3_2 || MGEODE_LX
 
-config X86_PPRO_FENCE
-   bool "PentiumPro memory ordering errata workaround"
-   depends on M686 || M586MMX || M586TSC || M586 || M486 || MGEODEGX1
-   ---help---
- Old PentiumPro multiprocessor systems had errata that could cause
- memory operations to violate the x86 ordering standard in rare cases.
- Enabling this option will attempt to work around some (but not all)
- occurrences of this problem, at the cost of much heavier spinlock and
- memory barrier operations.
-
- If unsure, say n here. Even distro kernels should think twice before
- enabling this: there are few systems, and an unlikely bug.
-
 config X86_F00F_BUG
def_bool y
depends on M586MMX || M586TSC || M586 || M486
diff --git a/arch/x86/entry/vdso/vdso32/vclock_gettime.c 
b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
index 7780bbfb06ef..9242b28418d5 100644
--- a/arch/x86/entry/vdso/vdso32/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
@@ -5,8 +5,6 @@
 #undef CONFIG_OPTIMIZE_INLINING
 #endif
 
-#undef CONFIG_X86_PPRO_FENCE
-
 #ifdef CONFIG_X86_64
 
 /*
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 7fb336210e1b..aa0f7449d4a4 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -24,11 +24,7 @@
 #define wmb()  asm volatile("sfence" ::: "memory")
 #endif
 
-#ifdef CONFIG_X86_PPRO_FENCE
-#define dma_rmb()  rmb()
-#else
 #define dma_rmb()  barrier()
-#endif
 #define dma_wmb()  barrier()
 
 #ifdef CONFIG_X86_32
@@ -40,30 +36,6 @@
 #define __smp_wmb()barrier()
 #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
 
-#if defined(CONFIG_X86_PPRO_FENCE)
-
-/*
- * For this option x86 doesn't have a strong TSO memory
- * model and we should fall back to full barriers.
- */
-
-#define __smp_store_release(p, v)  \
-do {   \
-   compiletime_assert_atomic_type(*p); \
-   __smp_mb(); \
-   WRITE_ONCE(*p, v);  \
-} while (0)
-
-#define __smp_load_acquire(p)  \
-({ \
-   typeof(*p) ___p1 = READ_ONCE(*p);   \
-   compiletime_assert_atomic_type(*p); \
-   __smp_mb(); \
-   ___p1;  \
-})
-
-#else /* regular x86 TSO memory ordering */
-
 #define __smp_store_release(p, v)  \
 do {   \
compiletime_assert_atomic_type(*p); \
@@ -79,8 +51,6 @@ do {  
\
___p1;  \
 })
 
-#endif
-
 /* Atomic operations are already serializing on x86 */
 #define __smp_mb__before_atomic()  barrier()
 #define __smp_mb__after_atomic()   barrier()
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 95e948627fd0..f6e5b9375d8c 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -232,21 +232,6 @@ extern void set_iounmap_nonlazy(void);
  */
 #define __ISA_IO_base ((char __iomem *)(PAGE_OFFSET))
 
-/*
- * Cache management
- *
- * This needed for two cases
- * 1. Out of order aware processors
- * 2. Accidentally out of order processors (PPro errata #51)
- */
-
-static inline void flush_write_buffers(void)
-{

[PATCH 05/67] dma-mapping: replace PCI_DMA_BUS_IS_PHYS with a flag in struct dma_map_ops

2017-12-29 Thread Christoph Hellwig
The current PCI_DMA_BUS_IS_PHYS decides whether a dma implementation is bound
by the dma mask in the device because it directly maps to a physical
address range (modulo an offset in the device), or if it is virtualized
by an iommu and can map any address (that includes virtual iommus like
swiotlb).  The problem with this scheme is that it is per-architecture and
not per dma_ops instance, and we are growing more and more setups that
have multiple different dma operations in use on a single system, for
which this scheme can't provide a correct answer.  Depending on the
architecture that means we either get a false positive or false negative
at the moment.

This patch instead extends the is_phys flag in struct dma_map_ops that
is currently only used by a few architectures to be used tree-wide.

Note that this means that we now need a struct device parent in the
Scsi_Host or netdevice.  Every modern driver has these, but there might
still be a few outdated legacy drivers out there, which now won't make
an intelligent decision.
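
As an illustration only (not part of this patch, and the helper name is made
up here), a subsystem could then query the per-device dma_map_ops instead of
the per-architecture PCI_DMA_BUS_IS_PHYS macro:

   /* sketch: ask the dma_map_ops instance whether addressing is
    * limited by the dma mask (i.e. a direct/physical mapping) */
   static inline bool dma_is_phys(struct device *dev)
   {
           const struct dma_map_ops *ops = get_dma_ops(dev);

           return ops && ops->is_phys;
   }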

Signed-off-by: Christoph Hellwig 
---
 arch/alpha/include/asm/pci.h  |  5 -
 arch/alpha/kernel/pci-noop.c  |  1 +
 arch/arc/include/asm/pci.h|  6 --
 arch/arc/mm/dma.c |  1 +
 arch/arm/include/asm/pci.h|  7 ---
 arch/arm/mm/dma-mapping-nommu.c   |  1 +
 arch/arm/mm/dma-mapping.c |  2 ++
 arch/arm64/include/asm/pci.h  |  5 -
 arch/blackfin/kernel/dma-mapping.c|  2 ++
 arch/c6x/kernel/dma.c |  1 +
 arch/cris/arch-v32/drivers/pci/dma.c  |  1 +
 arch/cris/include/asm/pci.h   |  6 --
 arch/frv/mb93090-mb00/pci-dma-nommu.c |  1 +
 arch/frv/mb93090-mb00/pci-dma.c   |  1 +
 arch/h8300/include/asm/pci.h  |  2 --
 arch/h8300/kernel/dma.c   |  1 +
 arch/hexagon/kernel/dma.c |  2 +-
 arch/ia64/hp/common/sba_iommu.c   |  3 ---
 arch/ia64/include/asm/pci.h   | 17 -
 arch/ia64/kernel/setup.c  | 12 
 arch/ia64/sn/kernel/io_common.c   |  5 -
 arch/m68k/include/asm/pci.h   |  6 --
 arch/m68k/kernel/dma.c|  1 +
 arch/metag/kernel/dma.c   |  1 +
 arch/microblaze/include/asm/pci.h |  6 --
 arch/microblaze/kernel/dma.c  |  1 +
 arch/mips/include/asm/pci.h   |  7 ---
 arch/mips/mm/dma-default.c|  1 +
 arch/mn10300/include/asm/pci.h|  6 --
 arch/mn10300/mm/dma-alloc.c   |  1 +
 arch/nios2/mm/dma-mapping.c   |  1 +
 arch/openrisc/kernel/dma.c|  1 +
 arch/parisc/include/asm/pci.h | 23 ---
 arch/parisc/kernel/pci-dma.c  |  2 ++
 arch/parisc/kernel/setup.c|  5 -
 arch/powerpc/include/asm/pci.h| 18 --
 arch/powerpc/kernel/dma.c |  1 +
 arch/riscv/include/asm/pci.h  |  3 ---
 arch/s390/include/asm/pci.h   |  2 --
 arch/s390/pci/pci_dma.c   |  3 ---
 arch/sh/include/asm/pci.h |  6 --
 arch/sh/kernel/dma-nommu.c|  2 +-
 arch/sparc/include/asm/pci_32.h   |  4 
 arch/sparc/include/asm/pci_64.h   |  6 --
 arch/sparc/kernel/ioport.c|  1 +
 arch/tile/include/asm/pci.h   | 14 --
 arch/tile/kernel/pci-dma.c|  2 ++
 arch/x86/include/asm/pci.h|  2 --
 arch/x86/kernel/pci-nommu.c   |  2 +-
 arch/xtensa/include/asm/pci.h |  7 ---
 arch/xtensa/kernel/pci-dma.c  |  1 +
 drivers/ide/ide-lib.c |  5 ++---
 drivers/ide/ide-probe.c   |  2 +-
 drivers/parisc/ccio-dma.c |  2 --
 drivers/parisc/sba_iommu.c|  2 --
 drivers/pci/host/vmd.c|  1 +
 drivers/scsi/scsi_lib.c   | 14 ++
 include/asm-generic/pci.h |  8 
 include/linux/dma-mapping.h   | 23 ++-
 lib/dma-noop.c|  1 +
 net/core/dev.c| 18 --
 tools/virtio/linux/dma-mapping.h  |  2 --
 62 files changed, 70 insertions(+), 226 deletions(-)

diff --git a/arch/alpha/include/asm/pci.h b/arch/alpha/include/asm/pci.h
index b9ec55351924..cf6bc1e64d66 100644
--- a/arch/alpha/include/asm/pci.h
+++ b/arch/alpha/include/asm/pci.h
@@ -56,11 +56,6 @@ struct pci_controller {
 
 /* IOMMU controls.  */
 
-/* The PCI address space does not equal the physical memory address space.
-   The networking and block device layers use this boolean for bounce buffer
-   decisions.  */
-#define PCI_DMA_BUS_IS_PHYS  0
-
 /* TODO: integrate with include/asm-generic/pci.h ? */
 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
 {
diff --git a/arch/alpha/kernel/pci-noop.c b/arch/alpha/kernel/pci-noop.c
index b995987b1557..d3208254b269 100644
--- a/arch/alpha/kernel/pci-noop.c
+++ 

consolidate direct dma mapping and swiotlb support

2017-12-29 Thread Christoph Hellwig
Almost every architecture supports a direct dma mapping implementation,
where no iommu is used and the device dma address is a 1:1 mapping to
the physical address or has a simple linear offset.  Currently the
code for this implementation is mostly duplicated across the architectures,
then duplicated again in the swiotlb code, and duplicated once more
for special cases like the x86 memory encryption DMA ops.

This series takes the existing very simple dma-noop dma mapping
implementation, enhances it with all the x86 features and quirks, and
creates a common set of architecture hooks for it and the swiotlb code.

It then switches a large number of architectures to this generic
direct map implementation and the new generic swiotlb dma_map ops.

Note that for now this only handles architectures that do cache coherent
DMA, but a similar consolidation for non-coherent architectures is in the
works for later merge windows.


[PATCH 02/67] alpha: mark jensen as broken

2017-12-29 Thread Christoph Hellwig
CONFIG_ALPHA_JENSEN has failed to compile since commit aca05038
("alpha/dma: use common noop dma ops"), so mark it as broken.

Signed-off-by: Christoph Hellwig 
---
 arch/alpha/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index b31b974a03cb..e96adcbcab41 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -209,6 +209,7 @@ config ALPHA_EIGER
 
 config ALPHA_JENSEN
bool "Jensen"
+   depends on BROKEN
help
  DEC PC 150 AXP (aka Jensen): This is a very old Digital system - one
  of the first-generation Alpha systems. A number of these systems
-- 
2.14.2



[PATCH 08/67] powerpc: remove unused flush_write_buffers definition

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 arch/powerpc/include/asm/dma-mapping.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/powerpc/include/asm/dma-mapping.h 
b/arch/powerpc/include/asm/dma-mapping.h
index 5a6cbe11db6f..592c7f418aa0 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -107,9 +107,6 @@ static inline void set_dma_offset(struct device *dev, 
dma_addr_t off)
dev->archdata.dma_offset = off;
 }
 
-/* this will be removed soon */
-#define flush_write_buffers()
-
 #define HAVE_ARCH_DMA_SET_MASK 1
 extern int dma_set_mask(struct device *dev, u64 dma_mask);
 
-- 
2.14.2



[PATCH 03/67] dma-mapping: take dma_pfn_offset into account in dma_max_pfn

2017-12-29 Thread Christoph Hellwig
This makes sure the generic version can be used with architectures /
devices that have a DMA offset in the direct mapping.
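
For example (illustrative numbers only): with a 32-bit mask, 4 KiB pages and
dev->dma_pfn_offset = 0x80000 (a 2 GiB offset between bus and CPU addresses),
the highest reachable CPU pfn is (0xffffffff >> 12) + 0x80000; the old formula
under-reported it by exactly the offset.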

Signed-off-by: Christoph Hellwig 
---
 include/linux/dma-mapping.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 81ed9b2d84dc..d84951865be7 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -692,7 +692,7 @@ static inline int dma_set_seg_boundary(struct device *dev, 
unsigned long mask)
 #ifndef dma_max_pfn
 static inline unsigned long dma_max_pfn(struct device *dev)
 {
-   return *dev->dma_mask >> PAGE_SHIFT;
+   return (*dev->dma_mask >> PAGE_SHIFT) + dev->dma_pfn_offset;
 }
 #endif
 
-- 
2.14.2



[PATCH 04/67] arm64: don't override dma_max_pfn

2017-12-29 Thread Christoph Hellwig
The generic version now takes dma_pfn_offset into account, so there is no
more need for an architecture override.

Signed-off-by: Christoph Hellwig 
---
 arch/arm64/include/asm/dma-mapping.h | 9 -
 1 file changed, 9 deletions(-)

diff --git a/arch/arm64/include/asm/dma-mapping.h 
b/arch/arm64/include/asm/dma-mapping.h
index 0df756b24863..eada887a93bf 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -76,14 +76,5 @@ static inline void dma_mark_clean(void *addr, size_t size)
 {
 }
 
-/* Override for dma_max_pfn() */
-static inline unsigned long dma_max_pfn(struct device *dev)
-{
-   dma_addr_t dma_max = (dma_addr_t)*dev->dma_mask;
-
-   return (ulong)dma_to_phys(dev, dma_max) >> PAGE_SHIFT;
-}
-#define dma_max_pfn(dev) dma_max_pfn(dev)
-
 #endif /* __KERNEL__ */
 #endif /* __ASM_DMA_MAPPING_H */
-- 
2.14.2



[PATCH 06/67] hexagon: remove unused flush_write_buffers definition

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 arch/hexagon/include/asm/io.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/hexagon/include/asm/io.h b/arch/hexagon/include/asm/io.h
index 66f5e9a61efc..9e8621d94ee9 100644
--- a/arch/hexagon/include/asm/io.h
+++ b/arch/hexagon/include/asm/io.h
@@ -330,8 +330,6 @@ static inline void outsl(unsigned long port, const void 
*buffer, int count)
}
 }
 
-#define flush_write_buffers() do { } while (0)
-
 #endif /* __KERNEL__ */
 
 #endif
-- 
2.14.2



[PATCH 07/67] m32r: remove unused flush_write_buffers definition

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 arch/m32r/include/asm/io.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/m32r/include/asm/io.h b/arch/m32r/include/asm/io.h
index 1b653bb16f9a..a4272d8f0d9c 100644
--- a/arch/m32r/include/asm/io.h
+++ b/arch/m32r/include/asm/io.h
@@ -191,8 +191,6 @@ static inline void _writel(unsigned long l, unsigned long 
addr)
 
 #define mmiowb()
 
-#define flush_write_buffers() do { } while (0)  /* M32R_FIXME */
-
 static inline void
 memset_io(volatile void __iomem *addr, unsigned char val, int count)
 {
-- 
2.14.2



[PATCH 10/67] m32r: remove the unused dma_capable helper

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 arch/m32r/include/asm/dma-mapping.h | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/arch/m32r/include/asm/dma-mapping.h 
b/arch/m32r/include/asm/dma-mapping.h
index 336ffe60814b..8967fb659691 100644
--- a/arch/m32r/include/asm/dma-mapping.h
+++ b/arch/m32r/include/asm/dma-mapping.h
@@ -14,11 +14,4 @@ static inline const struct dma_map_ops 
*get_arch_dma_ops(struct bus_type *bus)
return &dma_noop_ops;
 }
 
-static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t 
size)
-{
-   if (!dev->dma_mask)
-   return false;
-   return addr + size - 1 <= *dev->dma_mask;
-}
-
 #endif /* _ASM_M32R_DMA_MAPPING_H */
-- 
2.14.2



[PATCH 12/67] s390: remove the unused dma_capable helper

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 arch/s390/include/asm/dma-mapping.h | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/arch/s390/include/asm/dma-mapping.h 
b/arch/s390/include/asm/dma-mapping.h
index eaf490f9c5bc..2ec7240c1ada 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arch/s390/include/asm/dma-mapping.h
@@ -16,11 +16,4 @@ static inline const struct dma_map_ops 
*get_arch_dma_ops(struct bus_type *bus)
return &dma_noop_ops;
 }
 
-static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t 
size)
-{
-   if (!dev->dma_mask)
-   return false;
-   return addr + size - 1 <= *dev->dma_mask;
-}
-
 #endif /* _ASM_S390_DMA_MAPPING_H */
-- 
2.14.2



[PATCH 13/67] dma-mapping: move swiotlb arch helpers to a new header

2017-12-29 Thread Christoph Hellwig
phys_to_dma, dma_to_phys and dma_capable are helpers published by
architecture code for use of swiotlb and xen-swiotlb only.  Drivers are
not supposed to use these directly, but use the DMA API instead.

Move these to a new asm/dma-direct.h helper, included by a
linux/dma-direct.h wrapper that provides the default linear mapping
unless the architecture wants to override it.
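
The new include/linux/dma-direct.h itself is cut off in the quoted diff; a
sketch of the default linear mapping it presumably provides when
ARCH_HAS_PHYS_TO_DMA is not selected (assuming the offset is expressed through
dev->dma_pfn_offset, as elsewhere in this series):

   #ifndef CONFIG_ARCH_HAS_PHYS_TO_DMA
   static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
   {
           /* simple linear offset, no IOMMU involved */
           return (dma_addr_t)paddr - PFN_PHYS(dev->dma_pfn_offset);
   }

   static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dma_addr)
   {
           return (phys_addr_t)dma_addr + PFN_PHYS(dev->dma_pfn_offset);
   }
   #endif /* !CONFIG_ARCH_HAS_PHYS_TO_DMA */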

Signed-off-by: Christoph Hellwig 
---
 MAINTAINERS|  1 +
 arch/Kconfig   |  4 +++
 arch/arm/Kconfig   |  1 +
 arch/arm/include/asm/dma-direct.h  | 36 ++
 arch/arm/include/asm/dma-mapping.h | 31 ---
 arch/arm64/include/asm/dma-mapping.h   | 22 -
 arch/arm64/mm/dma-mapping.c|  2 +-
 arch/ia64/include/asm/dma-mapping.h| 18 ---
 arch/mips/Kconfig  |  2 ++
 arch/mips/include/asm/dma-direct.h |  1 +
 arch/mips/include/asm/dma-mapping.h|  8 -
 .../include/asm/mach-cavium-octeon/dma-coherence.h |  8 +
 arch/mips/include/asm/mach-generic/dma-coherence.h | 12 
 .../include/asm/mach-loongson64/dma-coherence.h|  8 +
 arch/powerpc/Kconfig   |  1 +
 arch/powerpc/include/asm/dma-direct.h  | 29 +
 arch/powerpc/include/asm/dma-mapping.h | 25 ---
 arch/tile/include/asm/dma-mapping.h| 18 ---
 arch/unicore32/include/asm/dma-mapping.h   | 18 ---
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/dma-direct.h  | 30 ++
 arch/x86/include/asm/dma-mapping.h | 26 
 arch/x86/kernel/amd_gart_64.c  |  1 +
 arch/x86/kernel/pci-dma.c  |  2 +-
 arch/x86/kernel/pci-nommu.c|  2 +-
 arch/x86/kernel/pci-swiotlb.c  |  2 +-
 arch/x86/mm/mem_encrypt.c  |  2 +-
 arch/x86/pci/sta2x11-fixup.c   |  1 +
 arch/xtensa/include/asm/dma-mapping.h  | 10 --
 drivers/crypto/marvell/cesa.c  |  1 +
 drivers/mtd/nand/qcom_nandc.c  |  1 +
 drivers/xen/swiotlb-xen.c  |  2 +-
 include/linux/dma-direct.h | 32 +++
 lib/swiotlb.c  |  2 +-
 34 files changed, 165 insertions(+), 195 deletions(-)
 create mode 100644 arch/arm/include/asm/dma-direct.h
 create mode 100644 arch/mips/include/asm/dma-direct.h
 create mode 100644 arch/powerpc/include/asm/dma-direct.h
 create mode 100644 arch/x86/include/asm/dma-direct.h
 create mode 100644 include/linux/dma-direct.h

diff --git a/MAINTAINERS b/MAINTAINERS
index a6e86e20761e..7521b063b499 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4340,6 +4340,7 @@ F:lib/dma-noop.c
 F: lib/dma-virt.c
 F: drivers/base/dma-mapping.c
 F: drivers/base/dma-coherent.c
+F: include/linux/dma-direct.h
 F: include/linux/dma-mapping.h
 
 DME1737 HARDWARE MONITOR DRIVER
diff --git a/arch/Kconfig b/arch/Kconfig
index 400b9e1b2f27..3edf118ad777 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -938,6 +938,10 @@ config STRICT_MODULE_RWX
  and non-text memory will be made non-executable. This provides
  protection against certain security exploits (e.g. writing to text)
 
+# select if the architecture provides an asm/dma-direct.h header
+config ARCH_HAS_PHYS_TO_DMA
+   bool
+
 config ARCH_HAS_REFCOUNT
bool
help
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 51c8df561077..00d889a37965 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -8,6 +8,7 @@ config ARM
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_SET_MEMORY
+   select ARCH_HAS_PHYS_TO_DMA
select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
select ARCH_HAS_STRICT_MODULE_RWX if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
diff --git a/arch/arm/include/asm/dma-direct.h 
b/arch/arm/include/asm/dma-direct.h
new file mode 100644
index ..5b0a8a421894
--- /dev/null
+++ b/arch/arm/include/asm/dma-direct.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef ASM_ARM_DMA_DIRECT_H
+#define ASM_ARM_DMA_DIRECT_H 1
+
+static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
+{
+   unsigned int offset = paddr & ~PAGE_MASK;
+   return pfn_to_dma(dev, __phys_to_pfn(paddr)) + offset;
+}
+
+static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dev_addr)
+{
+   unsigned int offset = dev_addr & ~PAGE_MASK;
+   return __pfn_to_phys(dma_to_pfn(dev, dev_addr)) + 

[PATCH 15/67] hexagon: use the generic dma_capable helper

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 arch/hexagon/include/asm/dma-mapping.h | 7 ---
 arch/hexagon/kernel/dma.c  | 1 +
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/arch/hexagon/include/asm/dma-mapping.h 
b/arch/hexagon/include/asm/dma-mapping.h
index 5208de242e79..263f6acbfb0f 100644
--- a/arch/hexagon/include/asm/dma-mapping.h
+++ b/arch/hexagon/include/asm/dma-mapping.h
@@ -37,11 +37,4 @@ static inline const struct dma_map_ops 
*get_arch_dma_ops(struct bus_type *bus)
return dma_ops;
 }
 
-static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t 
size)
-{
-   if (!dev->dma_mask)
-   return 0;
-   return addr + size - 1 <= *dev->dma_mask;
-}
-
 #endif
diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
index 3683bb9c05a2..c1d24e37807c 100644
--- a/arch/hexagon/kernel/dma.c
+++ b/arch/hexagon/kernel/dma.c
@@ -19,6 +19,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
-- 
2.14.2



[PATCH 20/67] s390: move s390_pci_dma_ops to asm/pci_dma.h

2017-12-29 Thread Christoph Hellwig
This is not needed in drivers, so move it to a private header.

Signed-off-by: Christoph Hellwig 
---
 arch/s390/include/asm/dma-mapping.h | 2 --
 arch/s390/include/asm/pci_dma.h | 3 +++
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/s390/include/asm/dma-mapping.h 
b/arch/s390/include/asm/dma-mapping.h
index 2ec7240c1ada..bdc2455483f6 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arch/s390/include/asm/dma-mapping.h
@@ -9,8 +9,6 @@
 #include 
 #include 
 
-extern const struct dma_map_ops s390_pci_dma_ops;
-
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
return &dma_noop_ops;
diff --git a/arch/s390/include/asm/pci_dma.h b/arch/s390/include/asm/pci_dma.h
index e8d9161fa17a..419fac7a62c0 100644
--- a/arch/s390/include/asm/pci_dma.h
+++ b/arch/s390/include/asm/pci_dma.h
@@ -201,4 +201,7 @@ void dma_cleanup_tables(unsigned long *);
 unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr);
 void dma_update_cpu_trans(unsigned long *entry, void *page_addr, int flags);
 
+extern const struct dma_map_ops s390_pci_dma_ops;
+
+
 #endif
-- 
2.14.2



[PATCH 21/67] dma-mapping: warn when there is no coherent_dma_mask

2017-12-29 Thread Christoph Hellwig
These days all devices should have a DMA coherent mask, and most dma_ops
implementations rely on that fact.  But just to be sure add an assert to
ring the warning bell if that is not the case.

Signed-off-by: Christoph Hellwig 
---
 include/linux/dma-mapping.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index e77e2dec4723..2779d544485c 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -520,6 +520,7 @@ static inline void *dma_alloc_attrs(struct device *dev, 
size_t size,
void *cpu_addr;
 
BUG_ON(!ops);
+   WARN_ON_ONCE(!dev->coherent_dma_mask);
 
if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr))
return cpu_addr;
-- 
2.14.2



[PATCH 23/67] dma-mapping: add an arch_dma_supported hook

2017-12-29 Thread Christoph Hellwig
To implement the x86 forbid_dac and iommu_sac_force we want an arch hook
so that it can apply the global options across all dma_map_ops
implementations.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/include/asm/dma-mapping.h |  3 +++
 arch/x86/kernel/pci-dma.c  | 19 ---
 include/linux/dma-mapping.h| 11 +++
 3 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h 
b/arch/x86/include/asm/dma-mapping.h
index dfdc9357a349..6277c83c0eb1 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -30,6 +30,9 @@ static inline const struct dma_map_ops 
*get_arch_dma_ops(struct bus_type *bus)
return dma_ops;
 }
 
+int arch_dma_supported(struct device *dev, u64 mask);
+#define arch_dma_supported arch_dma_supported
+
 bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
 #define arch_dma_alloc_attrs arch_dma_alloc_attrs
 
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 61a8f1cb3829..df7ab02f959f 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -215,7 +215,7 @@ static __init int iommu_setup(char *p)
 }
 early_param("iommu", iommu_setup);
 
-int x86_dma_supported(struct device *dev, u64 mask)
+int arch_dma_supported(struct device *dev, u64 mask)
 {
 #ifdef CONFIG_PCI
if (mask > 0x && forbid_dac > 0) {
@@ -224,12 +224,6 @@ int x86_dma_supported(struct device *dev, u64 mask)
}
 #endif
 
-   /* Copied from i386. Doesn't make much sense, because it will
-  only work for pci_alloc_coherent.
-  The caller just has to use GFP_DMA in this case. */
-   if (mask < DMA_BIT_MASK(24))
-   return 0;
-
/* Tell the device to use SAC when IOMMU force is on.  This
   allows the driver to use cheaper accesses in some cases.
 
@@ -249,6 +243,17 @@ int x86_dma_supported(struct device *dev, u64 mask)
 
return 1;
 }
+EXPORT_SYMBOL(arch_dma_supported);
+
+int x86_dma_supported(struct device *dev, u64 mask)
+{
+   /* Copied from i386. Doesn't make much sense, because it will
+  only work for pci_alloc_coherent.
+  The caller just has to use GFP_DMA in this case. */
+   if (mask < DMA_BIT_MASK(24))
+   return 0;
+   return 1;
+}
 
 static int __init pci_iommu_init(void)
 {
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index fd5197af882a..72568bf4fc12 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -583,6 +583,14 @@ static inline int dma_mapping_error(struct device *dev, 
dma_addr_t dma_addr)
return 0;
 }
 
+/*
+ * This is a hack for the legacy x86 forbid_dac and iommu_sac_force. Please
+ * don't use this in new code.
+ */
+#ifndef arch_dma_supported
+#define arch_dma_supported(dev, mask)  (1)
+#endif
+
 static inline void dma_check_mask(struct device *dev, u64 mask)
 {
if (sme_active() && (mask < (((u64)sme_get_me_mask() << 1) - 1)))
@@ -595,6 +603,9 @@ static inline int dma_supported(struct device *dev, u64 
mask)
 
if (!ops)
return 0;
+   if (!arch_dma_supported(dev, mask))
+   return 0;
+
if (!ops->dma_supported)
return 1;
return ops->dma_supported(dev, mask);
-- 
2.14.2



[PATCH 22/67] dma-mapping: clear harmful GFP_* flags in common code

2017-12-29 Thread Christoph Hellwig
Lift the code from x86 so that we behave consistently.  In the future we
should probably warn if any of these is set.
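
The include/linux/dma-mapping.h hunk is cut off in this message; roughly, the
common allocation path could strip the zone specifiers once, e.g. (sketch
only, the helper name is made up):

   static inline gfp_t dma_alloc_gfp(gfp_t gfp)
   {
           /* let the dma_map_ops implementation decide on the zone itself */
           return gfp & ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
   }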

Signed-off-by: Christoph Hellwig 
---
 arch/cris/arch-v32/drivers/pci/dma.c  | 3 ---
 arch/h8300/kernel/dma.c   | 3 ---
 arch/m68k/kernel/dma.c| 2 --
 arch/mips/cavium-octeon/dma-octeon.c  | 3 ---
 arch/mips/loongson64/common/dma-swiotlb.c | 3 ---
 arch/mips/mm/dma-default.c| 3 ---
 arch/mips/netlogic/common/nlm-dma.c   | 3 ---
 arch/mn10300/mm/dma-alloc.c   | 3 ---
 arch/nios2/mm/dma-mapping.c   | 3 ---
 arch/powerpc/kernel/dma.c | 3 ---
 arch/x86/kernel/pci-dma.c | 2 --
 include/linux/dma-mapping.h   | 7 +++
 12 files changed, 7 insertions(+), 31 deletions(-)

diff --git a/arch/cris/arch-v32/drivers/pci/dma.c 
b/arch/cris/arch-v32/drivers/pci/dma.c
index aa16ce27e036..c7e3056885d3 100644
--- a/arch/cris/arch-v32/drivers/pci/dma.c
+++ b/arch/cris/arch-v32/drivers/pci/dma.c
@@ -22,9 +22,6 @@ static void *v32_dma_alloc(struct device *dev, size_t size,
 {
void *ret;
 
-   /* ignore region specifiers */
-   gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
-
if (dev == NULL || (dev->coherent_dma_mask < 0x))
gfp |= GFP_DMA;
 
diff --git a/arch/h8300/kernel/dma.c b/arch/h8300/kernel/dma.c
index 0e92214310c4..4e27b74df973 100644
--- a/arch/h8300/kernel/dma.c
+++ b/arch/h8300/kernel/dma.c
@@ -16,9 +16,6 @@ static void *dma_alloc(struct device *dev, size_t size,
 {
void *ret;
 
-   /* ignore region specifiers */
-   gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
-
if (dev == NULL || (*dev->dma_mask < 0x))
gfp |= GFP_DMA;
ret = (void *)__get_free_pages(gfp, get_order(size));
diff --git a/arch/m68k/kernel/dma.c b/arch/m68k/kernel/dma.c
index e0167418072b..2f3492e8295c 100644
--- a/arch/m68k/kernel/dma.c
+++ b/arch/m68k/kernel/dma.c
@@ -76,8 +76,6 @@ static void *m68k_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
void *ret;
-   /* ignore region specifiers */
-   gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
 
if (dev == NULL || (*dev->dma_mask < 0x))
gfp |= GFP_DMA;
diff --git a/arch/mips/cavium-octeon/dma-octeon.c 
b/arch/mips/cavium-octeon/dma-octeon.c
index c64bd87f0b6e..5baf79fce643 100644
--- a/arch/mips/cavium-octeon/dma-octeon.c
+++ b/arch/mips/cavium-octeon/dma-octeon.c
@@ -161,9 +161,6 @@ static void *octeon_dma_alloc_coherent(struct device *dev, 
size_t size,
 {
void *ret;
 
-   /* ignore region specifiers */
-   gfp &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
-
if (IS_ENABLED(CONFIG_ZONE_DMA) && dev == NULL)
gfp |= __GFP_DMA;
else if (IS_ENABLED(CONFIG_ZONE_DMA) &&
diff --git a/arch/mips/loongson64/common/dma-swiotlb.c 
b/arch/mips/loongson64/common/dma-swiotlb.c
index ef07740cee61..15388c24a504 100644
--- a/arch/mips/loongson64/common/dma-swiotlb.c
+++ b/arch/mips/loongson64/common/dma-swiotlb.c
@@ -15,9 +15,6 @@ static void *loongson_dma_alloc_coherent(struct device *dev, 
size_t size,
 {
void *ret;
 
-   /* ignore region specifiers */
-   gfp &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
-
if ((IS_ENABLED(CONFIG_ISA) && dev == NULL) ||
(IS_ENABLED(CONFIG_ZONE_DMA) &&
 dev->coherent_dma_mask < DMA_BIT_MASK(32)))
diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c
index 3cd93e0c7a29..6f6b1399e98e 100644
--- a/arch/mips/mm/dma-default.c
+++ b/arch/mips/mm/dma-default.c
@@ -93,9 +93,6 @@ static gfp_t massage_gfp_flags(const struct device *dev, 
gfp_t gfp)
 {
gfp_t dma_flag;
 
-   /* ignore region specifiers */
-   gfp &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
-
 #ifdef CONFIG_ISA
if (dev == NULL)
dma_flag = __GFP_DMA;
diff --git a/arch/mips/netlogic/common/nlm-dma.c 
b/arch/mips/netlogic/common/nlm-dma.c
index 0ec9d9da6d51..49c975b6aa28 100644
--- a/arch/mips/netlogic/common/nlm-dma.c
+++ b/arch/mips/netlogic/common/nlm-dma.c
@@ -47,9 +47,6 @@ static char *nlm_swiotlb;
 static void *nlm_dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-   /* ignore region specifiers */
-   gfp &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
-
 #ifdef CONFIG_ZONE_DMA32
if (dev->coherent_dma_mask <= DMA_BIT_MASK(32))
gfp |= __GFP_DMA32;
diff --git a/arch/mn10300/mm/dma-alloc.c b/arch/mn10300/mm/dma-alloc.c
index 55876a87c247..2629f1f4b04e 100644
--- a/arch/mn10300/mm/dma-alloc.c
+++ b/arch/mn10300/mm/dma-alloc.c
@@ -37,9 +37,6 @@ static void *mn10300_dma_alloc(struct device *dev, size_t 
size,
goto done;
}
 
-   /* ignore region specifiers */
-   gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
-

[PATCH 24/67] dma-mapping: provide a generic asm/dma-mapping.h

2017-12-29 Thread Christoph Hellwig
For architectures that just use the generic dma_noop_ops we can provide
a generic version of dma-mapping.h.
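
The new include/asm-generic/dma-mapping.h is truncated at the end of this
message; judging by the per-architecture headers it replaces, it is presumably
little more than:

   #ifndef _ASM_GENERIC_DMA_MAPPING_H
   #define _ASM_GENERIC_DMA_MAPPING_H

   static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
   {
           return &dma_noop_ops;
   }

   #endif /* _ASM_GENERIC_DMA_MAPPING_H */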

Signed-off-by: Christoph Hellwig 
---
 MAINTAINERS  |  1 +
 arch/m32r/include/asm/Kbuild |  1 +
 arch/m32r/include/asm/dma-mapping.h  | 17 -
 arch/riscv/include/asm/Kbuild|  1 +
 arch/riscv/include/asm/dma-mapping.h | 30 --
 arch/s390/include/asm/Kbuild |  1 +
 arch/s390/include/asm/dma-mapping.h  | 17 -
 include/asm-generic/dma-mapping.h| 10 ++
 8 files changed, 14 insertions(+), 64 deletions(-)
 delete mode 100644 arch/m32r/include/asm/dma-mapping.h
 delete mode 100644 arch/riscv/include/asm/dma-mapping.h
 delete mode 100644 arch/s390/include/asm/dma-mapping.h
 create mode 100644 include/asm-generic/dma-mapping.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 7521b063b499..a8b35d9f41b2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4340,6 +4340,7 @@ F:lib/dma-noop.c
 F: lib/dma-virt.c
 F: drivers/base/dma-mapping.c
 F: drivers/base/dma-coherent.c
+F: include/asm-generic/dma-mapping.h
 F: include/linux/dma-direct.h
 F: include/linux/dma-mapping.h
 
diff --git a/arch/m32r/include/asm/Kbuild b/arch/m32r/include/asm/Kbuild
index 7e11b125c35e..ca83fda8177b 100644
--- a/arch/m32r/include/asm/Kbuild
+++ b/arch/m32r/include/asm/Kbuild
@@ -1,5 +1,6 @@
 generic-y += clkdev.h
 generic-y += current.h
+generic-y += dma-mapping.h
 generic-y += exec.h
 generic-y += extable.h
 generic-y += irq_work.h
diff --git a/arch/m32r/include/asm/dma-mapping.h 
b/arch/m32r/include/asm/dma-mapping.h
deleted file mode 100644
index 8967fb659691..
--- a/arch/m32r/include/asm/dma-mapping.h
+++ /dev/null
@@ -1,17 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_M32R_DMA_MAPPING_H
-#define _ASM_M32R_DMA_MAPPING_H
-
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-
-static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
-{
-   return &dma_noop_ops;
-}
-
-#endif /* _ASM_M32R_DMA_MAPPING_H */
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 970460a0b492..197460ccbf21 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -7,6 +7,7 @@ generic-y += device.h
 generic-y += div64.h
 generic-y += dma.h
 generic-y += dma-contiguous.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += errno.h
 generic-y += exec.h
diff --git a/arch/riscv/include/asm/dma-mapping.h 
b/arch/riscv/include/asm/dma-mapping.h
deleted file mode 100644
index 73849e2cc761..
--- a/arch/riscv/include/asm/dma-mapping.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/*
- * Copyright (C) 2003-2004 Hewlett-Packard Co
- * David Mosberger-Tang 
- * Copyright (C) 2012 ARM Ltd.
- * Copyright (C) 2016 SiFive, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- */
-#ifndef __ASM_RISCV_DMA_MAPPING_H
-#define __ASM_RISCV_DMA_MAPPING_H
-
-/* Use ops->dma_mapping_error (if it exists) or assume success */
-// #undef DMA_ERROR_CODE
-
-static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
-{
-   return &dma_noop_ops;
-}
-
-#endif /* __ASM_RISCV_DMA_MAPPING_H */
diff --git a/arch/s390/include/asm/Kbuild b/arch/s390/include/asm/Kbuild
index 048450869328..dade72be127b 100644
--- a/arch/s390/include/asm/Kbuild
+++ b/arch/s390/include/asm/Kbuild
@@ -4,6 +4,7 @@ generic-y += cacheflush.h
 generic-y += clkdev.h
 generic-y += device.h
 generic-y += dma-contiguous.h
+generic-y += dma-mapping.h
 generic-y += div64.h
 generic-y += emergency-restart.h
 generic-y += export.h
diff --git a/arch/s390/include/asm/dma-mapping.h 
b/arch/s390/include/asm/dma-mapping.h
deleted file mode 100644
index bdc2455483f6..
--- a/arch/s390/include/asm/dma-mapping.h
+++ /dev/null
@@ -1,17 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_S390_DMA_MAPPING_H
-#define _ASM_S390_DMA_MAPPING_H
-
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-
-static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
-{
-   return &dma_noop_ops;
-}
-
-#endif /* _ASM_S390_DMA_MAPPING_H */
diff --git a/include/asm-generic/dma-mapping.h 
b/include/asm-generic/dma-mapping.h
new file mode 100644
index ..164031531d85
--- /dev/null
+++ b/include/asm-generic/dma-mapping.h
@@ -0,0 

[PATCH 29/67] dma-direct: use node local allocations for coherent memory

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 lib/dma-direct.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index d0266b39788b..ab81de3ac1d3 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -39,7 +39,7 @@ static void *dma_direct_alloc(struct device *dev, size_t size,
if (gfpflags_allow_blocking(gfp))
page = dma_alloc_from_contiguous(dev, count, page_order, gfp);
if (!page)
-   page = alloc_pages(gfp, page_order);
+   page = alloc_pages_node(dev_to_node(dev), gfp, page_order);
if (!page)
return NULL;
 
-- 
2.14.2



[PATCH 26/67] dma-direct: use phys_to_dma

2017-12-29 Thread Christoph Hellwig
This means that whatever linear remapping scheme the architecture provides
is used in the generic dma_direct ops.

Signed-off-by: Christoph Hellwig 
---
 lib/dma-direct.c | 18 +++---
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index 439db40854b7..0e087650e86b 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -1,12 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * lib/dma-noop.c
- *
- * DMA operations that map to physical addresses without flushing memory.
+ * DMA operations that map physical memory directly without using an IOMMU or
+ * flushing caches.
  */
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 
@@ -17,7 +16,7 @@ static void *dma_direct_alloc(struct device *dev, size_t size,
 
ret = (void *)__get_free_pages(gfp, get_order(size));
if (ret)
-   *dma_handle = virt_to_phys(ret) - PFN_PHYS(dev->dma_pfn_offset);
+   *dma_handle = phys_to_dma(dev, virt_to_phys(ret));
 
return ret;
 }
@@ -32,7 +31,7 @@ static dma_addr_t dma_direct_map_page(struct device *dev, 
struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
unsigned long attrs)
 {
-   return page_to_phys(page) + offset - PFN_PHYS(dev->dma_pfn_offset);
+   return phys_to_dma(dev, page_to_phys(page)) + offset;
 }
 
 static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,
@@ -42,12 +41,9 @@ static int dma_direct_map_sg(struct device *dev, struct 
scatterlist *sgl,
struct scatterlist *sg;
 
for_each_sg(sgl, sg, nents, i) {
-   dma_addr_t offset = PFN_PHYS(dev->dma_pfn_offset);
-   void *va;
-
BUG_ON(!sg_page(sg));
-   va = sg_virt(sg);
-   sg_dma_address(sg) = (dma_addr_t)virt_to_phys(va) - offset;
+
+   sg_dma_address(sg) = phys_to_dma(dev, sg_phys(sg));
sg_dma_len(sg) = sg->length;
}
 
-- 
2.14.2



[PATCH 25/67] dma-direct: rename dma_noop to dma_direct

2017-12-29 Thread Christoph Hellwig
The trivial direct mapping implementation already does a virtual to
physical translation which isn't strictly a noop, and will soon learn
to do non-direct but linear physical to dma translations through the
device offset and a few small tricks.  Rename it to a better fitting
name.

Signed-off-by: Christoph Hellwig 
---
 MAINTAINERS|  2 +-
 arch/arm/Kconfig   |  2 +-
 arch/arm/include/asm/dma-mapping.h |  2 +-
 arch/arm/mm/dma-mapping-nommu.c|  8 
 arch/m32r/Kconfig  |  2 +-
 arch/riscv/Kconfig |  2 +-
 arch/s390/Kconfig  |  2 +-
 include/asm-generic/dma-mapping.h  |  2 +-
 include/linux/dma-mapping.h|  2 +-
 lib/Kconfig|  2 +-
 lib/Makefile   |  2 +-
 lib/{dma-noop.c => dma-direct.c}   | 35 +++
 12 files changed, 29 insertions(+), 34 deletions(-)
 rename lib/{dma-noop.c => dma-direct.c} (53%)

diff --git a/MAINTAINERS b/MAINTAINERS
index a8b35d9f41b2..b4005fe06e4c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4336,7 +4336,7 @@ T:git 
git://git.infradead.org/users/hch/dma-mapping.git
 W: http://git.infradead.org/users/hch/dma-mapping.git
 S: Supported
 F: lib/dma-debug.c
-F: lib/dma-noop.c
+F: lib/dma-direct.c
 F: lib/dma-virt.c
 F: drivers/base/dma-mapping.c
 F: drivers/base/dma-coherent.c
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 00d889a37965..430a0aa710d6 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -25,7 +25,7 @@ config ARM
select CLONE_BACKWARDS
select CPU_PM if (SUSPEND || CPU_IDLE)
select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
-   select DMA_NOOP_OPS if !MMU
+   select DMA_DIRECT_OPS if !MMU
select EDAC_SUPPORT
select EDAC_ATOMIC_SCRUB
select GENERIC_ALLOCATOR
diff --git a/arch/arm/include/asm/dma-mapping.h 
b/arch/arm/include/asm/dma-mapping.h
index e5d9020c9ee1..8436f6ade57d 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -18,7 +18,7 @@ extern const struct dma_map_ops arm_coherent_dma_ops;
 
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-   return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : &dma_noop_ops;
+   return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : &dma_direct_ops;
 }
 
 #ifdef __arch_page_to_dma
diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
index 1cced700e45a..49e9831dc0f1 100644
--- a/arch/arm/mm/dma-mapping-nommu.c
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -22,7 +22,7 @@
 #include "dma.h"
 
 /*
- *  dma_noop_ops is used if
+ *  dma_direct_ops is used if
  *   - MMU/MPU is off
  *   - cpu is v7m w/o cache support
  *   - device is coherent
@@ -39,7 +39,7 @@ static void *arm_nommu_dma_alloc(struct device *dev, size_t 
size,
 unsigned long attrs)
 
 {
-   const struct dma_map_ops *ops = &dma_noop_ops;
+   const struct dma_map_ops *ops = &dma_direct_ops;
void *ret;
 
/*
@@ -70,7 +70,7 @@ static void arm_nommu_dma_free(struct device *dev, size_t 
size,
   void *cpu_addr, dma_addr_t dma_addr,
   unsigned long attrs)
 {
-   const struct dma_map_ops *ops = &dma_noop_ops;
+   const struct dma_map_ops *ops = &dma_direct_ops;
 
if (attrs & DMA_ATTR_NON_CONSISTENT) {
ops->free(dev, size, cpu_addr, dma_addr, attrs);
@@ -214,7 +214,7 @@ EXPORT_SYMBOL(arm_nommu_dma_ops);
 
 static const struct dma_map_ops *arm_nommu_get_dma_map_ops(bool coherent)
 {
-   return coherent ? &dma_noop_ops : &arm_nommu_dma_ops;
+   return coherent ? &dma_direct_ops : &arm_nommu_dma_ops;
 }
 
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
diff --git a/arch/m32r/Kconfig b/arch/m32r/Kconfig
index 498398d915c1..dd84ee194579 100644
--- a/arch/m32r/Kconfig
+++ b/arch/m32r/Kconfig
@@ -19,7 +19,7 @@ config M32R
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
select CPU_NO_EFFICIENT_FFS
-   select DMA_NOOP_OPS
+   select DMA_DIRECT_OPS
select ARCH_NO_COHERENT_DMA_MMAP if !MMU
 
 config SBUS
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 2c6adf12713a..865e14f50c14 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -83,7 +83,7 @@ config PGTABLE_LEVELS
 config HAVE_KPROBES
def_bool n
 
-config DMA_NOOP_OPS
+config DMA_DIRECT_OPS
def_bool y
 
 menu "Platform type"
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 829c67986db7..9376637229c9 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -140,7 +140,7 @@ config S390
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS
-   select DMA_NOOP_OPS
+   select DMA_DIRECT_OPS
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select 

[PATCH 27/67] dma-direct: add dma address sanity checks

2017-12-29 Thread Christoph Hellwig
Roughly based on the x86 pci-nommu implementation.

Signed-off-by: Christoph Hellwig 
---
 lib/dma-direct.c | 32 +++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index 0e087650e86b..ddd9dcf4e663 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -9,6 +9,24 @@
 #include 
 #include 
 
+#define DIRECT_MAPPING_ERROR   0
+
+static bool
+check_addr(struct device *dev, dma_addr_t dma_addr, size_t size,
+   const char *caller)
+{
+   if (unlikely(dev && !dma_capable(dev, dma_addr, size))) {
+   if (*dev->dma_mask >= DMA_BIT_MASK(32)) {
+   dev_err(dev,
+   "%s: overflow %llx+%zu of device mask %llx\n",
+   caller, (long long)dma_addr, size,
+   (long long)*dev->dma_mask);
+   }
+   return false;
+   }
+   return true;
+}
+
 static void *dma_direct_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
@@ -31,7 +49,11 @@ static dma_addr_t dma_direct_map_page(struct device *dev, 
struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
unsigned long attrs)
 {
-   return phys_to_dma(dev, page_to_phys(page)) + offset;
+   dma_addr_t dma_addr = phys_to_dma(dev, page_to_phys(page)) + offset;
+
+   if (!check_addr(dev, dma_addr, size, __func__))
+   return DIRECT_MAPPING_ERROR;
+   return dma_addr;
 }
 
 static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,
@@ -44,17 +66,25 @@ static int dma_direct_map_sg(struct device *dev, struct 
scatterlist *sgl,
BUG_ON(!sg_page(sg));
 
sg_dma_address(sg) = phys_to_dma(dev, sg_phys(sg));
+   if (!check_addr(dev, sg_dma_address(sg), sg->length, __func__))
+   return 0;
sg_dma_len(sg) = sg->length;
}
 
return nents;
 }
 
+static int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+   return dma_addr == DIRECT_MAPPING_ERROR;
+}
+
 const struct dma_map_ops dma_direct_ops = {
.alloc  = dma_direct_alloc,
.free   = dma_direct_free,
.map_page   = dma_direct_map_page,
.map_sg = dma_direct_map_sg,
+   .mapping_error  = dma_direct_mapping_error,
.is_phys= true,
 };
 EXPORT_SYMBOL(dma_direct_ops);
-- 
2.14.2



[PATCH 28/67] dma-direct: add support for CMA allocation

2017-12-29 Thread Christoph Hellwig
Try the CMA allocator for coherent allocations if supported.

Roughly modelled after the x86 code.

Signed-off-by: Christoph Hellwig 
---
 lib/dma-direct.c | 24 ++--
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index ddd9dcf4e663..d0266b39788b 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #define DIRECT_MAPPING_ERROR   0
@@ -30,19 +31,30 @@ check_addr(struct device *dev, dma_addr_t dma_addr, size_t 
size,
 static void *dma_direct_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-   void *ret;
+   unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+   int page_order = get_order(size);
+   struct page *page = NULL;
 
-   ret = (void *)__get_free_pages(gfp, get_order(size));
-   if (ret)
-   *dma_handle = phys_to_dma(dev, virt_to_phys(ret));
+   /* CMA can be used only in the context which permits sleeping */
+   if (gfpflags_allow_blocking(gfp))
+   page = dma_alloc_from_contiguous(dev, count, page_order, gfp);
+   if (!page)
+   page = alloc_pages(gfp, page_order);
+   if (!page)
+   return NULL;
 
-   return ret;
+   *dma_handle = phys_to_dma(dev, page_to_phys(page));
+   memset(page_address(page), 0, size);
+   return page_address(page);
 }
 
 static void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t dma_addr, unsigned long attrs)
 {
-   free_pages((unsigned long)cpu_addr, get_order(size));
+   unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+   if (!dma_release_from_contiguous(dev, virt_to_page(cpu_addr), count))
+   free_pages((unsigned long)cpu_addr, get_order(size));
 }
 
 static dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
-- 
2.14.2



[PATCH 32/67] dma-direct: add support for allocation from ZONE_DMA and ZONE_DMA32

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 lib/dma-direct.c | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index 7e913728e099..2e9b9494610c 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -12,6 +12,14 @@
 
 #define DIRECT_MAPPING_ERROR   0
 
+/*
+ * Most architectures use ZONE_DMA for the first 16 Megabytes, but
+ * some use it for entirely different regions:
+ */
+#ifndef ARCH_ZONE_DMA_BITS
+#define ARCH_ZONE_DMA_BITS 24
+#endif
+
 static bool
 check_addr(struct device *dev, dma_addr_t dma_addr, size_t size,
const char *caller)
@@ -40,6 +48,12 @@ void *dma_direct_alloc(struct device *dev, size_t size, 
dma_addr_t *dma_handle,
int page_order = get_order(size);
struct page *page = NULL;
 
+   /* GFP_DMA32 and GFP_DMA are no ops without the corresponding zones: */
+   if (dev->coherent_dma_mask < DMA_BIT_MASK(32))
+   gfp |= GFP_DMA32;
+   else if (dev->coherent_dma_mask < DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
+   gfp |= GFP_DMA;
+
 again:
/* CMA can be used only in the context which permits sleeping */
if (gfpflags_allow_blocking(gfp)) {
-- 
2.14.2



[PATCH 33/67] dma-direct: reject too small dma masks

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 include/linux/dma-direct.h |  1 +
 lib/dma-direct.c   | 19 +++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 4788bf0bf683..bcdb1a3e4b1f 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -42,5 +42,6 @@ void *dma_direct_alloc(struct device *dev, size_t size, 
dma_addr_t *dma_handle,
gfp_t gfp, unsigned long attrs);
 void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t dma_addr, unsigned long attrs);
+int dma_direct_supported(struct device *dev, u64 mask);
 
 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index 2e9b9494610c..5bb289483efc 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -123,6 +123,24 @@ static int dma_direct_map_sg(struct device *dev, struct 
scatterlist *sgl,
return nents;
 }
 
+int dma_direct_supported(struct device *dev, u64 mask)
+{
+#ifdef CONFIG_ZONE_DMA
+   if (mask < DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
+   return 0;
+#else
+   /*
+* Because 32-bit DMA masks are so common we expect every architecture
+* to be able to satisfy them - either by not supporting more physical
+* memory, or by providing a ZONE_DMA32.  If neither is the case, the
+* architecture needs to use an IOMMU instead of the direct mapping.
+*/
+   if (mask < DMA_BIT_MASK(32))
+   return 0;
+#endif
+   return 1;
+}
+
 static int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
return dma_addr == DIRECT_MAPPING_ERROR;
@@ -133,6 +151,7 @@ const struct dma_map_ops dma_direct_ops = {
.free   = dma_direct_free,
.map_page   = dma_direct_map_page,
.map_sg = dma_direct_map_sg,
+   .dma_supported  = dma_direct_supported,
.mapping_error  = dma_direct_mapping_error,
.is_phys= true,
 };
-- 
2.14.2
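
From a driver's point of view the new check simply surfaces through
dma_set_mask() and dma_set_coherent_mask().  A hypothetical probe fragment
(foo_probe and the 24-bit first choice are made up for illustration):

#include <linux/dma-mapping.h>

static int foo_probe(struct device *dev)
{
	/* Without ZONE_DMA, dma_direct_supported() refuses a 24-bit mask ... */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24)))
		/* ... so fall back to a mask the direct mapping can always satisfy. */
		return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
	return 0;
}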



[PATCH 34/67] cris: use dma-direct

2017-12-29 Thread Christoph Hellwig
cris currently has an incomplete direct mapping dma_map_ops implementation
if PCI support is enabled.  Replace it with the fully featured generic
dma-direct implementation.

Signed-off-by: Christoph Hellwig 
---
 arch/cris/Kconfig   |  4 ++
 arch/cris/arch-v32/drivers/pci/Makefile |  2 +-
 arch/cris/arch-v32/drivers/pci/dma.c| 78 -
 arch/cris/include/asm/Kbuild|  1 +
 arch/cris/include/asm/dma-mapping.h | 20 -
 5 files changed, 6 insertions(+), 99 deletions(-)
 delete mode 100644 arch/cris/arch-v32/drivers/pci/dma.c
 delete mode 100644 arch/cris/include/asm/dma-mapping.h

diff --git a/arch/cris/Kconfig b/arch/cris/Kconfig
index 54d3f426763b..cd5a0865c97f 100644
--- a/arch/cris/Kconfig
+++ b/arch/cris/Kconfig
@@ -33,6 +33,9 @@ config GENERIC_CALIBRATE_DELAY
 config NO_IOPORT_MAP
def_bool y if !PCI
 
+config NO_DMA
+   def_bool y if !PCI
+
 config FORCE_MAX_ZONEORDER
int
default 6
@@ -72,6 +75,7 @@ config CRIS
select GENERIC_SCHED_CLOCK if ETRAX_ARCH_V32
select HAVE_DEBUG_BUGVERBOSE if ETRAX_ARCH_V32
select HAVE_NMI
+   select DMA_DIRECT_OPS if PCI
 
 config HZ
int
diff --git a/arch/cris/arch-v32/drivers/pci/Makefile 
b/arch/cris/arch-v32/drivers/pci/Makefile
index bff7482f2444..93c8be6170b1 100644
--- a/arch/cris/arch-v32/drivers/pci/Makefile
+++ b/arch/cris/arch-v32/drivers/pci/Makefile
@@ -2,4 +2,4 @@
 # Makefile for Etrax cardbus driver
 #
 
-obj-$(CONFIG_ETRAX_CARDBUS)+= bios.o dma.o
+obj-$(CONFIG_ETRAX_CARDBUS)+= bios.o
diff --git a/arch/cris/arch-v32/drivers/pci/dma.c 
b/arch/cris/arch-v32/drivers/pci/dma.c
deleted file mode 100644
index c7e3056885d3..
--- a/arch/cris/arch-v32/drivers/pci/dma.c
+++ /dev/null
@@ -1,78 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Dynamic DMA mapping support.
- *
- * On cris there is no hardware dynamic DMA address translation,
- * so consistent alloc/free are merely page allocation/freeing.
- * The rest of the dynamic DMA mapping interface is implemented
- * in asm/pci.h.
- *
- * Borrowed from i386.
- */
-
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-
-static void *v32_dma_alloc(struct device *dev, size_t size,
-   dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
-{
-   void *ret;
-
-   if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff))
-   gfp |= GFP_DMA;
-
-   ret = (void *)__get_free_pages(gfp,  get_order(size));
-
-   if (ret != NULL) {
-   memset(ret, 0, size);
-   *dma_handle = virt_to_phys(ret);
-   }
-   return ret;
-}
-
-static void v32_dma_free(struct device *dev, size_t size, void *vaddr,
-   dma_addr_t dma_handle, unsigned long attrs)
-{
-   free_pages((unsigned long)vaddr, get_order(size));
-}
-
-static inline dma_addr_t v32_dma_map_page(struct device *dev,
-   struct page *page, unsigned long offset, size_t size,
-   enum dma_data_direction direction, unsigned long attrs)
-{
-   return page_to_phys(page) + offset;
-}
-
-static inline int v32_dma_map_sg(struct device *dev, struct scatterlist *sg,
-   int nents, enum dma_data_direction direction,
-   unsigned long attrs)
-{
-   printk("Map sg\n");
-   return nents;
-}
-
-static inline int v32_dma_supported(struct device *dev, u64 mask)
-{
-/*
- * we fall back to GFP_DMA when the mask isn't all 1s,
- * so we can't guarantee allocations that must be
- * within a tighter range than GFP_DMA..
- */
-if (mask < 0x00ffffff)
-return 0;
-   return 1;
-}
-
-const struct dma_map_ops v32_dma_ops = {
-   .alloc  = v32_dma_alloc,
-   .free   = v32_dma_free,
-   .map_page   = v32_dma_map_page,
-   .map_sg = v32_dma_map_sg,
-   .dma_supported  = v32_dma_supported,
-   .is_phys= true,
-};
-EXPORT_SYMBOL(v32_dma_ops);
diff --git a/arch/cris/include/asm/Kbuild b/arch/cris/include/asm/Kbuild
index 460349cb147f..8cf45ac30c1b 100644
--- a/arch/cris/include/asm/Kbuild
+++ b/arch/cris/include/asm/Kbuild
@@ -5,6 +5,7 @@ generic-y += cmpxchg.h
 generic-y += current.h
 generic-y += device.h
 generic-y += div64.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += exec.h
 generic-y += extable.h
diff --git a/arch/cris/include/asm/dma-mapping.h 
b/arch/cris/include/asm/dma-mapping.h
deleted file mode 100644
index 1553bdb30a0c..
--- a/arch/cris/include/asm/dma-mapping.h
+++ /dev/null
@@ -1,20 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_CRIS_DMA_MAPPING_H
-#define _ASM_CRIS_DMA_MAPPING_H
-
-#ifdef CONFIG_PCI
-extern const struct dma_map_ops v32_dma_ops;
-
-static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
-{
-   

[PATCH 31/67] dma-direct: make dma_direct_{alloc,free} available to other implementations

2017-12-29 Thread Christoph Hellwig
So that other dma_map_ops implementations don't need to indirect through
the operation vector.

Signed-off-by: Christoph Hellwig 
---
 arch/arm/mm/dma-mapping-nommu.c | 9 +++--
 include/linux/dma-direct.h  | 5 +
 lib/dma-direct.c| 6 +++---
 3 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
index 49e9831dc0f1..b4cf3e4e9d4a 100644
--- a/arch/arm/mm/dma-mapping-nommu.c
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -11,7 +11,7 @@
 
 #include 
 #include 
-#include 
+#include 
 #include 
 
 #include 
@@ -39,7 +39,6 @@ static void *arm_nommu_dma_alloc(struct device *dev, size_t 
size,
 unsigned long attrs)
 
 {
-   const struct dma_map_ops *ops = &dma_direct_ops;
void *ret;
 
/*
@@ -48,7 +47,7 @@ static void *arm_nommu_dma_alloc(struct device *dev, size_t 
size,
 */
 
if (attrs & DMA_ATTR_NON_CONSISTENT)
-   return ops->alloc(dev, size, dma_handle, gfp, attrs);
+   return dma_direct_alloc(dev, size, dma_handle, gfp, attrs);
 
ret = dma_alloc_from_global_coherent(size, dma_handle);
 
@@ -70,10 +69,8 @@ static void arm_nommu_dma_free(struct device *dev, size_t 
size,
   void *cpu_addr, dma_addr_t dma_addr,
   unsigned long attrs)
 {
-   const struct dma_map_ops *ops = &dma_direct_ops;
-
if (attrs & DMA_ATTR_NON_CONSISTENT) {
-   ops->free(dev, size, cpu_addr, dma_addr, attrs);
+   dma_direct_free(dev, size, cpu_addr, dma_addr, attrs);
} else {
int ret = dma_release_from_global_coherent(get_order(size),
   cpu_addr);
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 10e924b7cba7..4788bf0bf683 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -38,4 +38,9 @@ static inline void dma_mark_clean(void *addr, size_t size)
 }
 #endif /* CONFIG_ARCH_HAS_DMA_MARK_CLEAN */
 
+void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+   gfp_t gfp, unsigned long attrs);
+void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
+   dma_addr_t dma_addr, unsigned long attrs);
+
 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index f8467cb3d89a..7e913728e099 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -33,8 +33,8 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t 
phys, size_t size)
return phys_to_dma(dev, phys) + size <= dev->coherent_dma_mask;
 }
 
-static void *dma_direct_alloc(struct device *dev, size_t size,
-   dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
+void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+   gfp_t gfp, unsigned long attrs)
 {
unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
int page_order = get_order(size);
@@ -71,7 +71,7 @@ static void *dma_direct_alloc(struct device *dev, size_t size,
return page_address(page);
 }
 
-static void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
+void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t dma_addr, unsigned long attrs)
 {
unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-- 
2.14.2
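
The pattern this enables, used by the arm nommu code above and by the iommu
patches later in the series, is for an ops implementation to hand the easy
case straight to the direct helpers.  A sketch with made-up names, where
foo_needs_bounce() and foo_alloc_bounced() stand in for whatever
arch-specific handling the real implementation has:

#include <linux/dma-direct.h>

static bool foo_needs_bounce(struct device *dev);		/* assumption */
static void *foo_alloc_bounced(struct device *dev, size_t size,
		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);

static void *foo_dma_alloc(struct device *dev, size_t size,
		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
{
	/* Simple case: no special handling needed, use the direct mapping code. */
	if (!foo_needs_bounce(dev))
		return dma_direct_alloc(dev, size, dma_handle, gfp, attrs);
	return foo_alloc_bounced(dev, size, dma_handle, gfp, attrs);
}

The matching .free callback mirrors this by calling dma_direct_free() for
the non-bounced case.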



[PATCH 30/67] dma-direct: retry allocations using GFP_DMA for small masks

2017-12-29 Thread Christoph Hellwig
If we got back an allocation that wasn't inside the supported coherent mask,
retry the allocation using GFP_DMA.

Based on the x86 code.

Signed-off-by: Christoph Hellwig 
---
 lib/dma-direct.c | 25 -
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index ab81de3ac1d3..f8467cb3d89a 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -28,6 +28,11 @@ check_addr(struct device *dev, dma_addr_t dma_addr, size_t 
size,
return true;
 }
 
+static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
+{
+   return phys_to_dma(dev, phys) + size <= dev->coherent_dma_mask;
+}
+
 static void *dma_direct_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
@@ -35,11 +40,29 @@ static void *dma_direct_alloc(struct device *dev, size_t 
size,
int page_order = get_order(size);
struct page *page = NULL;
 
+again:
/* CMA can be used only in the context which permits sleeping */
-   if (gfpflags_allow_blocking(gfp))
+   if (gfpflags_allow_blocking(gfp)) {
page = dma_alloc_from_contiguous(dev, count, page_order, gfp);
+   if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+   dma_release_from_contiguous(dev, page, count);
+   page = NULL;
+   }
+   }
if (!page)
page = alloc_pages_node(dev_to_node(dev), gfp, page_order);
+
+   if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+   __free_pages(page, page_order);
+   page = NULL;
+
+   if (dev->coherent_dma_mask < DMA_BIT_MASK(32) &&
+   !(gfp & GFP_DMA)) {
+   gfp = (gfp & ~GFP_DMA32) | GFP_DMA;
+   goto again;
+   }
+   }
+
if (!page)
return NULL;
 
-- 
2.14.2
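
To make the new dma_coherent_ok() test concrete, a worked example under the
assumption that phys_to_dma() is the identity (no bus offset):

/*
 * dev->coherent_dma_mask = DMA_BIT_MASK(30) = 0x3fffffff, size = 64 KiB:
 *
 *   page at phys 0x3fff8000:  0x3fff8000 + 0x10000 = 0x40008000
 *	-> above the mask, so the pages are freed and the allocation is
 *	   retried with GFP_DMA
 *   page at phys 0x20000000:  0x20000000 + 0x10000 = 0x20010000
 *	-> fits under the mask, so the first allocation is returned as-is
 */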



[PATCH 35/67] h8300: use dma-direct

2017-12-29 Thread Christoph Hellwig
Replace the bare-bones h8300 direct dma mapping implementation with
the fully featured generic dma-direct one.

Signed-off-by: Christoph Hellwig 
---
 arch/h8300/Kconfig   |  1 +
 arch/h8300/include/asm/Kbuild|  1 +
 arch/h8300/include/asm/dma-mapping.h | 12 ---
 arch/h8300/kernel/Makefile   |  2 +-
 arch/h8300/kernel/dma.c  | 67 
 5 files changed, 3 insertions(+), 80 deletions(-)
 delete mode 100644 arch/h8300/include/asm/dma-mapping.h
 delete mode 100644 arch/h8300/kernel/dma.c

diff --git a/arch/h8300/Kconfig b/arch/h8300/Kconfig
index f8d3fde08190..091d6d04b5e5 100644
--- a/arch/h8300/Kconfig
+++ b/arch/h8300/Kconfig
@@ -23,6 +23,7 @@ config H8300
select HAVE_ARCH_KGDB
select HAVE_ARCH_HASH
select CPU_NO_EFFICIENT_FFS
+   select DMA_DIRECT_OPS
 
 config CPU_BIG_ENDIAN
def_bool y
diff --git a/arch/h8300/include/asm/Kbuild b/arch/h8300/include/asm/Kbuild
index bc077491d299..642752c94306 100644
--- a/arch/h8300/include/asm/Kbuild
+++ b/arch/h8300/include/asm/Kbuild
@@ -9,6 +9,7 @@ generic-y += delay.h
 generic-y += device.h
 generic-y += div64.h
 generic-y += dma.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += exec.h
 generic-y += extable.h
diff --git a/arch/h8300/include/asm/dma-mapping.h 
b/arch/h8300/include/asm/dma-mapping.h
deleted file mode 100644
index 21bb1fc3a6f1..
--- a/arch/h8300/include/asm/dma-mapping.h
+++ /dev/null
@@ -1,12 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _H8300_DMA_MAPPING_H
-#define _H8300_DMA_MAPPING_H
-
-extern const struct dma_map_ops h8300_dma_map_ops;
-
-static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
-{
-   return _dma_map_ops;
-}
-
-#endif
diff --git a/arch/h8300/kernel/Makefile b/arch/h8300/kernel/Makefile
index b62e830525c6..307aa51576dd 100644
--- a/arch/h8300/kernel/Makefile
+++ b/arch/h8300/kernel/Makefile
@@ -7,7 +7,7 @@ extra-y := vmlinux.lds
 
 obj-y := process.o traps.o ptrace.o \
 signal.o setup.o syscalls.o \
-irq.o entry.o dma.o
+irq.o entry.o
 
 obj-$(CONFIG_ROMKERNEL) += head_rom.o
 obj-$(CONFIG_RAMKERNEL) += head_ram.o
diff --git a/arch/h8300/kernel/dma.c b/arch/h8300/kernel/dma.c
deleted file mode 100644
index 4e27b74df973..
--- a/arch/h8300/kernel/dma.c
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- * This file is subject to the terms and conditions of the GNU General Public
- * License.  See the file COPYING in the main directory of this archive
- * for more details.
- */
-
-#include 
-#include 
-#include 
-#include 
-#include 
-
-static void *dma_alloc(struct device *dev, size_t size,
-  dma_addr_t *dma_handle, gfp_t gfp,
-  unsigned long attrs)
-{
-   void *ret;
-
-   if (dev == NULL || (*dev->dma_mask < 0xffffffff))
-   gfp |= GFP_DMA;
-   ret = (void *)__get_free_pages(gfp, get_order(size));
-
-   if (ret != NULL) {
-   memset(ret, 0, size);
-   *dma_handle = virt_to_phys(ret);
-   }
-   return ret;
-}
-
-static void dma_free(struct device *dev, size_t size,
-void *vaddr, dma_addr_t dma_handle,
-unsigned long attrs)
-
-{
-   free_pages((unsigned long)vaddr, get_order(size));
-}
-
-static dma_addr_t map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size,
- enum dma_data_direction direction,
- unsigned long attrs)
-{
-   return page_to_phys(page) + offset;
-}
-
-static int map_sg(struct device *dev, struct scatterlist *sgl,
- int nents, enum dma_data_direction direction,
- unsigned long attrs)
-{
-   struct scatterlist *sg;
-   int i;
-
-   for_each_sg(sgl, sg, nents, i) {
-   sg->dma_address = sg_phys(sg);
-   }
-
-   return nents;
-}
-
-const struct dma_map_ops h8300_dma_map_ops = {
-   .alloc = dma_alloc,
-   .free = dma_free,
-   .map_page = map_page,
-   .map_sg = map_sg,
-   .is_phys = true,
-};
-EXPORT_SYMBOL(h8300_dma_map_ops);
-- 
2.14.2



[PATCH 37/67] x86: use dma-direct

2017-12-29 Thread Christoph Hellwig
The generic dma-direct implementation is now functionally equivalent to
the x86 nommu dma_map implementation, so switch over to using it.

Note that the various iommu drivers are switched from x86_dma_supported
to dma_direct_supported to provide identical functionality, although the
checks look fairly questionable for at least some of them.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/dma-mapping.h |  8 -
 arch/x86/include/asm/iommu.h   |  3 --
 arch/x86/kernel/Makefile   |  2 +-
 arch/x86/kernel/amd_gart_64.c  |  7 ++--
 arch/x86/kernel/pci-calgary_64.c   |  3 +-
 arch/x86/kernel/pci-dma.c  | 66 +-
 arch/x86/kernel/pci-swiotlb.c  |  5 ++-
 arch/x86/pci/sta2x11-fixup.c   |  2 +-
 drivers/iommu/amd_iommu.c  |  7 ++--
 drivers/iommu/intel-iommu.c|  3 +-
 11 files changed, 17 insertions(+), 90 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6f4328103c0..55ad01515075 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -83,6 +83,7 @@ config X86
select CLOCKSOURCE_VALIDATE_LAST_CYCLE
select CLOCKSOURCE_WATCHDOG
select DCACHE_WORD_ACCESS
+   select DMA_DIRECT_OPS
select EDAC_ATOMIC_SCRUB
select EDAC_SUPPORT
select GENERIC_CLOCKEVENTS
diff --git a/arch/x86/include/asm/dma-mapping.h 
b/arch/x86/include/asm/dma-mapping.h
index 545bf3721bc0..df9816b385eb 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -36,14 +36,6 @@ int arch_dma_supported(struct device *dev, u64 mask);
 bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
 #define arch_dma_alloc_attrs arch_dma_alloc_attrs
 
-extern void *dma_generic_alloc_coherent(struct device *dev, size_t size,
-   dma_addr_t *dma_addr, gfp_t flag,
-   unsigned long attrs);
-
-extern void dma_generic_free_coherent(struct device *dev, size_t size,
- void *vaddr, dma_addr_t dma_addr,
- unsigned long attrs);
-
 static inline gfp_t dma_alloc_coherent_gfp_flags(struct device *dev, gfp_t gfp)
 {
if (dev->coherent_dma_mask <= DMA_BIT_MASK(24))
diff --git a/arch/x86/include/asm/iommu.h b/arch/x86/include/asm/iommu.h
index 1e5d5d92eb40..baedab8ac538 100644
--- a/arch/x86/include/asm/iommu.h
+++ b/arch/x86/include/asm/iommu.h
@@ -2,13 +2,10 @@
 #ifndef _ASM_X86_IOMMU_H
 #define _ASM_X86_IOMMU_H
 
-extern const struct dma_map_ops nommu_dma_ops;
 extern int force_iommu, no_iommu;
 extern int iommu_detected;
 extern int iommu_pass_through;
 
-int x86_dma_supported(struct device *dev, u64 mask);
-
 /* 10 seconds */
 #define DMAR_OPERATION_TIMEOUT ((cycles_t) tsc_khz*10*1000)
 
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 81bb565f4497..beee4332e69b 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -54,7 +54,7 @@ obj-$(CONFIG_X86_ESPFIX64)+= espfix_64.o
 obj-$(CONFIG_SYSFS)+= ksysfs.o
 obj-y  += bootflag.o e820.o
 obj-y  += pci-dma.o quirks.o topology.o kdebugfs.o
-obj-y  += alternative.o i8253.o pci-nommu.o hw_breakpoint.o
+obj-y  += alternative.o i8253.o hw_breakpoint.o
 obj-y  += tsc.o tsc_msr.o io_delay.o rtc.o
 obj-y  += pci-iommu_table.o
 obj-y  += resource.o
diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index ecd486cb06ab..52e3abcf3e70 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -501,8 +501,7 @@ gart_alloc_coherent(struct device *dev, size_t size, 
dma_addr_t *dma_addr,
}
__free_pages(page, get_order(size));
} else
-   return dma_generic_alloc_coherent(dev, size, dma_addr, flag,
- attrs);
+   return dma_direct_alloc(dev, size, dma_addr, flag, attrs);
 
return NULL;
 }
@@ -513,7 +512,7 @@ gart_free_coherent(struct device *dev, size_t size, void 
*vaddr,
   dma_addr_t dma_addr, unsigned long attrs)
 {
gart_unmap_page(dev, dma_addr, size, DMA_BIDIRECTIONAL, 0);
-   dma_generic_free_coherent(dev, size, vaddr, dma_addr, attrs);
+   dma_direct_free(dev, size, vaddr, dma_addr, attrs);
 }
 
 static int gart_mapping_error(struct device *dev, dma_addr_t dma_addr)
@@ -705,7 +704,7 @@ static const struct dma_map_ops gart_dma_ops = {
.alloc  = gart_alloc_coherent,
.free   = gart_free_coherent,
.mapping_error  = gart_mapping_error,
-   .dma_supported  = x86_dma_supported,
+   .dma_supported  = dma_direct_supported,
 };
 
 static void gart_iommu_shutdown(void)
diff 

[PATCH 39/67] iommu/amd_iommu: use dma_direct_* helpers for the direct mapping case

2017-12-29 Thread Christoph Hellwig
This adds support for CMA allocations, but is otherwise identical.

Signed-off-by: Christoph Hellwig 
---
 drivers/iommu/Kconfig |  1 +
 drivers/iommu/amd_iommu.c | 27 +--
 2 files changed, 10 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f3a21343e636..dc7c1914645d 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -107,6 +107,7 @@ config IOMMU_PGTABLES_L2
 # AMD IOMMU support
 config AMD_IOMMU
bool "AMD IOMMU support"
+   select DMA_DIRECT_OPS
select SWIOTLB
select PCI_MSI
select PCI_ATS
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index ea4734de5357..a2ad149ab0bf 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2592,11 +2592,9 @@ static void *alloc_coherent(struct device *dev, size_t 
size,
struct page *page;
 
domain = get_domain(dev);
-   if (PTR_ERR(domain) == -EINVAL) {
-   page = alloc_pages(flag, get_order(size));
-   *dma_addr = page_to_phys(page);
-   return page_address(page);
-   } else if (IS_ERR(domain))
+   if (PTR_ERR(domain) == -EINVAL)
+   return dma_direct_alloc(dev, size, dma_addr, flag, attrs);
+   else if (IS_ERR(domain))
return NULL;
 
dma_dom   = to_dma_ops_domain(domain);
@@ -2642,24 +2640,17 @@ static void free_coherent(struct device *dev, size_t 
size,
  void *virt_addr, dma_addr_t dma_addr,
  unsigned long attrs)
 {
-   struct protection_domain *domain;
-   struct dma_ops_domain *dma_dom;
-   struct page *page;
+   struct protection_domain *domain = get_domain(dev);
 
-   page = virt_to_page(virt_addr);
size = PAGE_ALIGN(size);
 
-   domain = get_domain(dev);
-   if (IS_ERR(domain))
-   goto free_mem;
-
-   dma_dom = to_dma_ops_domain(domain);
+   if (!IS_ERR(domain)) {
+   struct dma_ops_domain *dma_dom = to_dma_ops_domain(domain);
 
-   __unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
+   __unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
+   }
 
-free_mem:
-   if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-   __free_pages(page, get_order(size));
+   dma_direct_free(dev, size, virt_addr, dma_addr, attrs);
 }
 
 /*
-- 
2.14.2



[PATCH 41/67] x86: remove dma_alloc_coherent_gfp_flags

2017-12-29 Thread Christoph Hellwig
All dma_ops implementations used on x86 now take care of setting their own
required GFP_ masks for the allocation.  And given that the common code
now clears harmful flags itself, we can stop clearing them in all
the iommu implementations as well.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/include/asm/dma-mapping.h | 11 ---
 arch/x86/kernel/amd_gart_64.c  |  1 -
 arch/x86/kernel/pci-calgary_64.c   |  2 --
 arch/x86/kernel/pci-dma.c  |  2 --
 arch/x86/mm/mem_encrypt.c  |  7 ---
 drivers/iommu/amd_iommu.c  |  1 -
 drivers/iommu/intel-iommu.c|  1 -
 7 files changed, 25 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h 
b/arch/x86/include/asm/dma-mapping.h
index df9816b385eb..89ce4bfd241f 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -36,15 +36,4 @@ int arch_dma_supported(struct device *dev, u64 mask);
 bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
 #define arch_dma_alloc_attrs arch_dma_alloc_attrs
 
-static inline gfp_t dma_alloc_coherent_gfp_flags(struct device *dev, gfp_t gfp)
-{
-   if (dev->coherent_dma_mask <= DMA_BIT_MASK(24))
-   gfp |= GFP_DMA;
-#ifdef CONFIG_X86_64
-   if (dev->coherent_dma_mask <= DMA_BIT_MASK(32) && !(gfp & GFP_DMA))
-   gfp |= GFP_DMA32;
-#endif
-   return gfp;
-}
-
 #endif
diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 92054815023e..7466dd458e0f 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -487,7 +487,6 @@ gart_alloc_coherent(struct device *dev, size_t size, 
dma_addr_t *dma_addr,
if (!force_iommu || dev->coherent_dma_mask <= DMA_BIT_MASK(24))
return dma_direct_alloc(dev, size, dma_addr, flag, attrs);
 
-   flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
page = alloc_pages(flag | __GFP_ZERO, get_order(size));
if (!page)
return NULL;
diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
index 5647853053bd..bbfc8b1e9104 100644
--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -446,8 +446,6 @@ static void* calgary_alloc_coherent(struct device *dev, 
size_t size,
npages = size >> PAGE_SHIFT;
order = get_order(size);
 
-   flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
-
/* alloc enough pages (and possibly more) */
ret = (void *)__get_free_pages(flag, order);
if (!ret)
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index db0b88ea8d1b..14437116ffea 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -82,8 +82,6 @@ bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp)
if (!*dev)
*dev = &x86_dma_fallback_dev;
 
-   *gfp = dma_alloc_coherent_gfp_flags(*dev, *gfp);
-
if (!is_device_dma_capable(*dev))
return false;
return true;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 479586b8ca9b..1c786e751b49 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -208,13 +208,6 @@ static void *sev_alloc(struct device *dev, size_t size, 
dma_addr_t *dma_handle,
void *vaddr = NULL;
 
order = get_order(size);
-
-   /*
-* Memory will be memset to zero after marking decrypted, so don't
-* bother clearing it before.
-*/
-   gfp &= ~__GFP_ZERO;
-
page = alloc_pages_node(dev_to_node(dev), gfp, order);
if (page) {
dma_addr_t addr;
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index a2ad149ab0bf..51ce6db86fdd 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2600,7 +2600,6 @@ static void *alloc_coherent(struct device *dev, size_t 
size,
dma_dom   = to_dma_ops_domain(domain);
size  = PAGE_ALIGN(size);
dma_mask  = dev->coherent_dma_mask;
-   flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
flag |= __GFP_ZERO;
 
page = alloc_pages(flag | __GFP_NOWARN,  get_order(size));
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 0de8bfe89061..6c9df0773b78 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3718,7 +3718,6 @@ static void *intel_alloc_coherent(struct device *dev, 
size_t size,
 
size = PAGE_ALIGN(size);
order = get_order(size);
-   flags &= ~(GFP_DMA | GFP_DMA32);
 
if (gfpflags_allow_blocking(flags)) {
unsigned int count = size >> PAGE_SHIFT;
-- 
2.14.2



[PATCH 38/67] x86/amd_gart: clean up gart_alloc_coherent

2017-12-29 Thread Christoph Hellwig
Don't rely on the gfp mask from dma_alloc_coherent_gfp_flags to make the
fallback decision, and streamline the code flow a bit.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/kernel/amd_gart_64.c | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 52e3abcf3e70..92054815023e 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -484,26 +484,26 @@ gart_alloc_coherent(struct device *dev, size_t size, 
dma_addr_t *dma_addr,
unsigned long align_mask;
struct page *page;
 
-   if (force_iommu && !(flag & GFP_DMA)) {
-   flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
-   page = alloc_pages(flag | __GFP_ZERO, get_order(size));
-   if (!page)
-   return NULL;
-
-   align_mask = (1UL << get_order(size)) - 1;
-   paddr = dma_map_area(dev, page_to_phys(page), size,
-DMA_BIDIRECTIONAL, align_mask);
-
-   flush_gart();
-   if (paddr != bad_dma_addr) {
-   *dma_addr = paddr;
-   return page_address(page);
-   }
-   __free_pages(page, get_order(size));
-   } else
+   if (!force_iommu || dev->coherent_dma_mask <= DMA_BIT_MASK(24))
return dma_direct_alloc(dev, size, dma_addr, flag, attrs);
 
-   return NULL;
+   flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
+   page = alloc_pages(flag | __GFP_ZERO, get_order(size));
+   if (!page)
+   return NULL;
+
+   align_mask = (1UL << get_order(size)) - 1;
+   paddr = dma_map_area(dev, page_to_phys(page), size, DMA_BIDIRECTIONAL,
+   align_mask);
+
+   flush_gart();
+   if (unlikely(paddr == bad_dma_addr)) {
+   __free_pages(page, get_order(size));
+   return NULL;
+   }
+
+   *dma_addr = paddr;
+   return page_address(page);
 }
 
 /* free a coherent mapping */
-- 
2.14.2



[PATCH 40/67] iommu/intel-iommu: use dma_direct_* helpers for the direct mapping case

2017-12-29 Thread Christoph Hellwig
This simplifies the code a bit, and prepares for future cleanups.

Signed-off-by: Christoph Hellwig 
---
 drivers/iommu/Kconfig   |  1 +
 drivers/iommu/intel-iommu.c | 17 -
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index dc7c1914645d..df171cb85822 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -143,6 +143,7 @@ config DMAR_TABLE
 config INTEL_IOMMU
bool "Support for Intel IOMMU using DMA Remapping Devices"
depends on PCI_MSI && ACPI && (X86 || IA64_GENERIC)
+   select DMA_DIRECT_OPS
select IOMMU_API
select IOMMU_IOVA
select DMAR_TABLE
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 921caf4f0c3e..0de8bfe89061 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -31,6 +31,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -3712,17 +3713,12 @@ static void *intel_alloc_coherent(struct device *dev, 
size_t size,
struct page *page = NULL;
int order;
 
+   if (iommu_no_mapping(dev))
+   return dma_direct_alloc(dev, size, dma_handle, flags, attrs);
+
size = PAGE_ALIGN(size);
order = get_order(size);
-
-   if (!iommu_no_mapping(dev))
-   flags &= ~(GFP_DMA | GFP_DMA32);
-   else if (dev->coherent_dma_mask < dma_get_required_mask(dev)) {
-   if (dev->coherent_dma_mask < DMA_BIT_MASK(32))
-   flags |= GFP_DMA;
-   else
-   flags |= GFP_DMA32;
-   }
+   flags &= ~(GFP_DMA | GFP_DMA32);
 
if (gfpflags_allow_blocking(flags)) {
unsigned int count = size >> PAGE_SHIFT;
@@ -3758,6 +3754,9 @@ static void intel_free_coherent(struct device *dev, 
size_t size, void *vaddr,
int order;
struct page *page = virt_to_page(vaddr);
 
+   if (iommu_no_mapping(dev))
+   return dma_direct_free(dev, size, vaddr, dma_handle, attrs);
+
size = PAGE_ALIGN(size);
order = get_order(size);
 
-- 
2.14.2



[PATCH 36/67] x86: remove dma_alloc_coherent_mask

2017-12-29 Thread Christoph Hellwig
These days all devices (including the ISA fallback device) have a coherent
DMA mask set, so remove the workaround.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/include/asm/dma-mapping.h | 18 ++
 arch/x86/kernel/pci-dma.c  | 10 --
 arch/x86/mm/mem_encrypt.c  |  4 +---
 drivers/xen/swiotlb-xen.c  | 16 +---
 4 files changed, 8 insertions(+), 40 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h 
b/arch/x86/include/asm/dma-mapping.h
index 6277c83c0eb1..545bf3721bc0 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -44,26 +44,12 @@ extern void dma_generic_free_coherent(struct device *dev, 
size_t size,
  void *vaddr, dma_addr_t dma_addr,
  unsigned long attrs);
 
-static inline unsigned long dma_alloc_coherent_mask(struct device *dev,
-   gfp_t gfp)
-{
-   unsigned long dma_mask = 0;
-
-   dma_mask = dev->coherent_dma_mask;
-   if (!dma_mask)
-   dma_mask = (gfp & GFP_DMA) ? DMA_BIT_MASK(24) : 
DMA_BIT_MASK(32);
-
-   return dma_mask;
-}
-
 static inline gfp_t dma_alloc_coherent_gfp_flags(struct device *dev, gfp_t gfp)
 {
-   unsigned long dma_mask = dma_alloc_coherent_mask(dev, gfp);
-
-   if (dma_mask <= DMA_BIT_MASK(24))
+   if (dev->coherent_dma_mask <= DMA_BIT_MASK(24))
gfp |= GFP_DMA;
 #ifdef CONFIG_X86_64
-   if (dma_mask <= DMA_BIT_MASK(32) && !(gfp & GFP_DMA))
+   if (dev->coherent_dma_mask <= DMA_BIT_MASK(32) && !(gfp & GFP_DMA))
gfp |= GFP_DMA32;
 #endif
return gfp;
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index df7ab02f959f..b59820872ec7 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -80,13 +80,10 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t 
size,
 dma_addr_t *dma_addr, gfp_t flag,
 unsigned long attrs)
 {
-   unsigned long dma_mask;
struct page *page;
unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
dma_addr_t addr;
 
-   dma_mask = dma_alloc_coherent_mask(dev, flag);
-
 again:
page = NULL;
/* CMA can be used only in the context which permits sleeping */
@@ -95,7 +92,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t 
size,
 flag);
if (page) {
addr = phys_to_dma(dev, page_to_phys(page));
-   if (addr + size > dma_mask) {
+   if (addr + size > dev->coherent_dma_mask) {
dma_release_from_contiguous(dev, page, count);
page = NULL;
}
@@ -108,10 +105,11 @@ void *dma_generic_alloc_coherent(struct device *dev, 
size_t size,
return NULL;
 
addr = phys_to_dma(dev, page_to_phys(page));
-   if (addr + size > dma_mask) {
+   if (addr + size > dev->coherent_dma_mask) {
__free_pages(page, get_order(size));
 
-   if (dma_mask < DMA_BIT_MASK(32) && !(flag & GFP_DMA)) {
+   if (dev->coherent_dma_mask < DMA_BIT_MASK(32) &&
+   !(flag & GFP_DMA)) {
flag = (flag & ~GFP_DMA32) | GFP_DMA;
goto again;
}
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 764b916ef7da..479586b8ca9b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -203,12 +203,10 @@ void __init sme_early_init(void)
 static void *sev_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
   gfp_t gfp, unsigned long attrs)
 {
-   unsigned long dma_mask;
unsigned int order;
struct page *page;
void *vaddr = NULL;
 
-   dma_mask = dma_alloc_coherent_mask(dev, gfp);
order = get_order(size);
 
/*
@@ -226,7 +224,7 @@ static void *sev_alloc(struct device *dev, size_t size, 
dma_addr_t *dma_handle,
 * mask with it already cleared.
 */
addr = __sme_clr(phys_to_dma(dev, page_to_phys(page)));
-   if ((addr + size) > dma_mask) {
+   if ((addr + size) > dev->coherent_dma_mask) {
__free_pages(page, get_order(size));
} else {
vaddr = page_address(page);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 5bb72d3f8337..e1c60899fdbc 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -53,20 +53,6 @@
  * API.
  */
 
-#ifndef CONFIG_X86
-static unsigned long dma_alloc_coherent_mask(struct device *dev,
-   gfp_t gfp)
-{
-   unsigned long dma_mask = 

[PATCH 44/67] powerpc: rename swiotlb_dma_ops

2017-12-29 Thread Christoph Hellwig
We'll need that name for a generic implementation soon.

Signed-off-by: Christoph Hellwig 
---
 arch/powerpc/include/asm/swiotlb.h | 2 +-
 arch/powerpc/kernel/dma-swiotlb.c  | 4 ++--
 arch/powerpc/kernel/dma.c  | 2 +-
 arch/powerpc/sysdev/fsl_pci.c  | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/swiotlb.h 
b/arch/powerpc/include/asm/swiotlb.h
index 9341ee804d19..f65ecf57b66c 100644
--- a/arch/powerpc/include/asm/swiotlb.h
+++ b/arch/powerpc/include/asm/swiotlb.h
@@ -13,7 +13,7 @@
 
 #include 
 
-extern const struct dma_map_ops swiotlb_dma_ops;
+extern const struct dma_map_ops powerpc_swiotlb_dma_ops;
 
 extern unsigned int ppc_swiotlb_enable;
 int __init swiotlb_setup_bus_notifier(void);
diff --git a/arch/powerpc/kernel/dma-swiotlb.c 
b/arch/powerpc/kernel/dma-swiotlb.c
index f1e99b9cee97..506ac4fafac5 100644
--- a/arch/powerpc/kernel/dma-swiotlb.c
+++ b/arch/powerpc/kernel/dma-swiotlb.c
@@ -46,7 +46,7 @@ static u64 swiotlb_powerpc_get_required(struct device *dev)
  * map_page, and unmap_page on highmem, use normal dma_ops
  * for everything else.
  */
-const struct dma_map_ops swiotlb_dma_ops = {
+const struct dma_map_ops powerpc_swiotlb_dma_ops = {
.alloc = __dma_nommu_alloc_coherent,
.free = __dma_nommu_free_coherent,
.mmap = dma_nommu_mmap_coherent,
@@ -89,7 +89,7 @@ static int ppc_swiotlb_bus_notify(struct notifier_block *nb,
 
/* May need to bounce if the device can't address all of DRAM */
if ((dma_get_mask(dev) + 1) < memblock_end_of_DRAM())
-   set_dma_ops(dev, &swiotlb_dma_ops);
+   set_dma_ops(dev, &powerpc_swiotlb_dma_ops);
 
return NOTIFY_DONE;
 }
diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c
index 1723001d5de1..b787692b91ee 100644
--- a/arch/powerpc/kernel/dma.c
+++ b/arch/powerpc/kernel/dma.c
@@ -33,7 +33,7 @@ static u64 __maybe_unused get_pfn_limit(struct device *dev)
struct dev_archdata __maybe_unused *sd = &dev->archdata;
 
 #ifdef CONFIG_SWIOTLB
-   if (sd->max_direct_dma_addr && dev->dma_ops == &swiotlb_dma_ops)
+   if (sd->max_direct_dma_addr && dev->dma_ops == &powerpc_swiotlb_dma_ops)
pfn = min_t(u64, pfn, sd->max_direct_dma_addr >> PAGE_SHIFT);
 #endif
 
diff --git a/arch/powerpc/sysdev/fsl_pci.c b/arch/powerpc/sysdev/fsl_pci.c
index e4d0133bbeeb..61e07c78d64f 100644
--- a/arch/powerpc/sysdev/fsl_pci.c
+++ b/arch/powerpc/sysdev/fsl_pci.c
@@ -118,7 +118,7 @@ static void setup_swiotlb_ops(struct pci_controller *hose)
 {
if (ppc_swiotlb_enable) {
hose->controller_ops.dma_dev_setup = pci_dma_dev_setup_swiotlb;
-   set_pci_dma_ops(&swiotlb_dma_ops);
+   set_pci_dma_ops(&powerpc_swiotlb_dma_ops);
}
 }
 #else
-- 
2.14.2



[PATCH 42/67] arm64: rename swiotlb_dma_ops

2017-12-29 Thread Christoph Hellwig
We'll need that name for a generic implementation soon.

Signed-off-by: Christoph Hellwig 
---
 arch/arm64/mm/dma-mapping.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index f3a637b98487..6840426bbe77 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -368,7 +368,7 @@ static int __swiotlb_dma_mapping_error(struct device 
*hwdev, dma_addr_t addr)
return 0;
 }
 
-static const struct dma_map_ops swiotlb_dma_ops = {
+static const struct dma_map_ops arm64_swiotlb_dma_ops = {
.alloc = __dma_alloc,
.free = __dma_free,
.mmap = __swiotlb_mmap,
@@ -923,7 +923,7 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, 
u64 size,
const struct iommu_ops *iommu, bool coherent)
 {
if (!dev->dma_ops)
-   dev->dma_ops = &swiotlb_dma_ops;
+   dev->dma_ops = &arm64_swiotlb_dma_ops;
 
dev->archdata.dma_coherent = coherent;
__iommu_setup_dma_ops(dev, dma_base, size, iommu);
-- 
2.14.2



[PATCH 43/67] ia64: rename swiotlb_dma_ops

2017-12-29 Thread Christoph Hellwig
We'll need that name for a generic implementation soon.

Signed-off-by: Christoph Hellwig 
---
 arch/ia64/hp/common/hwsw_iommu.c | 4 ++--
 arch/ia64/hp/common/sba_iommu.c  | 6 +++---
 arch/ia64/kernel/pci-swiotlb.c   | 6 +++---
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/ia64/hp/common/hwsw_iommu.c b/arch/ia64/hp/common/hwsw_iommu.c
index 63d8e1d2477f..41279f0442bd 100644
--- a/arch/ia64/hp/common/hwsw_iommu.c
+++ b/arch/ia64/hp/common/hwsw_iommu.c
@@ -19,7 +19,7 @@
 #include 
 #include 
 
-extern const struct dma_map_ops sba_dma_ops, swiotlb_dma_ops;
+extern const struct dma_map_ops sba_dma_ops, ia64_swiotlb_dma_ops;
 
 /* swiotlb declarations & definitions: */
 extern int swiotlb_late_init_with_default_size (size_t size);
@@ -38,7 +38,7 @@ static inline int use_swiotlb(struct device *dev)
 const struct dma_map_ops *hwsw_dma_get_ops(struct device *dev)
 {
if (use_swiotlb(dev))
-   return &swiotlb_dma_ops;
+   return &ia64_swiotlb_dma_ops;
return &sba_dma_ops;
 }
 EXPORT_SYMBOL(hwsw_dma_get_ops);
diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index 6f05aba9012f..d68849ad2ee1 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -2093,7 +2093,7 @@ static int __init acpi_sba_ioc_init_acpi(void)
 /* This has to run before acpi_scan_init(). */
 arch_initcall(acpi_sba_ioc_init_acpi);
 
-extern const struct dma_map_ops swiotlb_dma_ops;
+extern const struct dma_map_ops ia64_swiotlb_dma_ops;
 
 static int __init
 sba_init(void)
@@ -2108,7 +2108,7 @@ sba_init(void)
 * a successful kdump kernel boot is to use the swiotlb.
 */
if (is_kdump_kernel()) {
-   dma_ops = &swiotlb_dma_ops;
+   dma_ops = &ia64_swiotlb_dma_ops;
if (swiotlb_late_init_with_default_size(64 * (1<<20)) != 0)
panic("Unable to initialize software I/O TLB:"
  " Try machvec=dig boot option");
@@ -2130,7 +2130,7 @@ sba_init(void)
 * If we didn't find something sba_iommu can claim, we
 * need to setup the swiotlb and switch to the dig machvec.
 */
-   dma_ops = &swiotlb_dma_ops;
+   dma_ops = &ia64_swiotlb_dma_ops;
if (swiotlb_late_init_with_default_size(64 * (1<<20)) != 0)
panic("Unable to find SBA IOMMU or initialize "
  "software I/O TLB: Try machvec=dig boot option");
diff --git a/arch/ia64/kernel/pci-swiotlb.c b/arch/ia64/kernel/pci-swiotlb.c
index 5e50939aa03e..f1ae873a8c35 100644
--- a/arch/ia64/kernel/pci-swiotlb.c
+++ b/arch/ia64/kernel/pci-swiotlb.c
@@ -31,7 +31,7 @@ static void ia64_swiotlb_free_coherent(struct device *dev, 
size_t size,
swiotlb_free_coherent(dev, size, vaddr, dma_addr);
 }
 
-const struct dma_map_ops swiotlb_dma_ops = {
+const struct dma_map_ops ia64_swiotlb_dma_ops = {
.alloc = ia64_swiotlb_alloc_coherent,
.free = ia64_swiotlb_free_coherent,
.map_page = swiotlb_map_page,
@@ -48,7 +48,7 @@ const struct dma_map_ops swiotlb_dma_ops = {
 
 void __init swiotlb_dma_init(void)
 {
-   dma_ops = &swiotlb_dma_ops;
+   dma_ops = &ia64_swiotlb_dma_ops;
swiotlb_init(1);
 }
 
@@ -60,7 +60,7 @@ void __init pci_swiotlb_init(void)
printk(KERN_INFO "PCI-DMA: Re-initialize machine vector.\n");
machvec_init("dig");
swiotlb_init(1);
-   dma_ops = &swiotlb_dma_ops;
+   dma_ops = &ia64_swiotlb_dma_ops;
 #else
panic("Unable to find Intel IOMMU");
 #endif
-- 
2.14.2



[PATCH 46/67] swiotlb: lift x86 swiotlb_dma_ops to common code

2017-12-29 Thread Christoph Hellwig
Including the useful helpers for coherent allocations that first try the
full blown direct mapping.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/include/asm/swiotlb.h |  8 
 arch/x86/kernel/pci-swiotlb.c  | 45 --
 arch/x86/pci/sta2x11-fixup.c   |  4 ++--
 include/linux/swiotlb.h|  8 
 lib/swiotlb.c  | 43 
 5 files changed, 53 insertions(+), 55 deletions(-)

diff --git a/arch/x86/include/asm/swiotlb.h b/arch/x86/include/asm/swiotlb.h
index 1c6a6cb230ff..ff6c92eff035 100644
--- a/arch/x86/include/asm/swiotlb.h
+++ b/arch/x86/include/asm/swiotlb.h
@@ -27,12 +27,4 @@ static inline void pci_swiotlb_late_init(void)
 {
 }
 #endif
-
-extern void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-   dma_addr_t *dma_handle, gfp_t flags,
-   unsigned long attrs);
-extern void x86_swiotlb_free_coherent(struct device *dev, size_t size,
-   void *vaddr, dma_addr_t dma_addr,
-   unsigned long attrs);
-
 #endif /* _ASM_X86_SWIOTLB_H */
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index 57dea60c2473..661583662430 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -17,51 +17,6 @@
 
 int swiotlb __read_mostly;
 
-void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-   dma_addr_t *dma_handle, gfp_t flags,
-   unsigned long attrs)
-{
-   void *vaddr;
-
-   /*
-* Don't print a warning when the first allocation attempt fails.
-* swiotlb_alloc_coherent() will print a warning when the DMA
-* memory allocation ultimately failed.
-*/
-   flags |= __GFP_NOWARN;
-
-   vaddr = dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
-   if (vaddr)
-   return vaddr;
-
-   return swiotlb_alloc_coherent(hwdev, size, dma_handle, flags);
-}
-
-void x86_swiotlb_free_coherent(struct device *dev, size_t size,
- void *vaddr, dma_addr_t dma_addr,
- unsigned long attrs)
-{
-   if (is_swiotlb_buffer(dma_to_phys(dev, dma_addr)))
-   swiotlb_free_coherent(dev, size, vaddr, dma_addr);
-   else
-   dma_direct_free(dev, size, vaddr, dma_addr, attrs);
-}
-
-static const struct dma_map_ops swiotlb_dma_ops = {
-   .mapping_error = swiotlb_dma_mapping_error,
-   .alloc = x86_swiotlb_alloc_coherent,
-   .free = x86_swiotlb_free_coherent,
-   .sync_single_for_cpu = swiotlb_sync_single_for_cpu,
-   .sync_single_for_device = swiotlb_sync_single_for_device,
-   .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
-   .sync_sg_for_device = swiotlb_sync_sg_for_device,
-   .map_sg = swiotlb_map_sg_attrs,
-   .unmap_sg = swiotlb_unmap_sg_attrs,
-   .map_page = swiotlb_map_page,
-   .unmap_page = swiotlb_unmap_page,
-   .dma_supported = NULL,
-};
-
 /*
  * pci_swiotlb_detect_override - set swiotlb to 1 if necessary
  *
diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c
index 6c712fe11bdc..4b69b008d5aa 100644
--- a/arch/x86/pci/sta2x11-fixup.c
+++ b/arch/x86/pci/sta2x11-fixup.c
@@ -175,7 +175,7 @@ static void *sta2x11_swiotlb_alloc_coherent(struct device 
*dev,
 {
void *vaddr;
 
-   vaddr = x86_swiotlb_alloc_coherent(dev, size, dma_handle, flags, attrs);
+   vaddr = swiotlb_alloc(dev, size, dma_handle, flags, attrs);
*dma_handle = p2a(*dma_handle, to_pci_dev(dev));
return vaddr;
 }
@@ -183,7 +183,7 @@ static void *sta2x11_swiotlb_alloc_coherent(struct device 
*dev,
 /* We have our own dma_ops: the same as swiotlb but from alloc (above) */
 static const struct dma_map_ops sta2x11_dma_ops = {
.alloc = sta2x11_swiotlb_alloc_coherent,
-   .free = x86_swiotlb_free_coherent,
+   .free = swiotlb_free,
.map_page = swiotlb_map_page,
.unmap_page = swiotlb_unmap_page,
.map_sg = swiotlb_map_sg_attrs,
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 606375e35d87..5b1f2a00491c 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -66,6 +66,12 @@ extern void swiotlb_tbl_sync_single(struct device *hwdev,
enum dma_sync_target target);
 
 /* Accessory functions. */
+
+void *swiotlb_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle,
+   gfp_t flags, unsigned long attrs);
+void swiotlb_free(struct device *dev, size_t size, void *vaddr,
+   dma_addr_t dma_addr, unsigned long attrs);
+
 extern void
 *swiotlb_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t flags);
@@ -126,4 +132,6 @@ extern 
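
The lifted swiotlb_alloc()/swiotlb_free() pair keeps the "direct mapping
first, bounce buffer second" policy of the removed x86 helpers.  A
simplified sketch of the allocation side (warning and error handling
trimmed, so not the exact code added to lib/swiotlb.c):

void *swiotlb_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
		gfp_t gfp, unsigned long attrs)
{
	void *vaddr;

	/* Don't warn here; the bounce buffer path warns if it fails as well. */
	vaddr = dma_direct_alloc(dev, size, dma_handle, gfp | __GFP_NOWARN, attrs);
	if (!vaddr)
		vaddr = swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
	return vaddr;
}

swiotlb_free() is the inverse: it hands swiotlb bounce addresses back to
swiotlb_free_coherent() and everything else to dma_direct_free().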

[PATCH 49/67] swiotlb: refactor coherent buffer freeing

2017-12-29 Thread Christoph Hellwig
Factor out a new swiotlb_free_buffer helper that checks if an address
is allocated from the swiotlb bounce buffer and, if so, frees it.

This allows simplifying the swiotlb_free implementation, which uses
dma_direct_free to free the non-bounce-buffer allocations.

Signed-off-by: Christoph Hellwig 
---
 lib/swiotlb.c | 35 +--
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index a14fff30ee9d..adb4dd0091fa 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -773,22 +773,31 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 }
 EXPORT_SYMBOL(swiotlb_alloc_coherent);
 
+static bool swiotlb_free_buffer(struct device *dev, size_t size,
+   dma_addr_t dma_addr)
+{
+   phys_addr_t phys_addr = dma_to_phys(dev, dma_addr);
+
+   WARN_ON_ONCE(irqs_disabled());
+
+   if (!is_swiotlb_buffer(phys_addr))
+   return false;
+
+   /*
+* DMA_TO_DEVICE to avoid memcpy in swiotlb_tbl_unmap_single.
+* DMA_ATTR_SKIP_CPU_SYNC is optional.
+*/
+   swiotlb_tbl_unmap_single(dev, phys_addr, size, DMA_TO_DEVICE,
+DMA_ATTR_SKIP_CPU_SYNC);
+   return true;
+}
+
 void
 swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
  dma_addr_t dev_addr)
 {
-   phys_addr_t paddr = dma_to_phys(hwdev, dev_addr);
-
-   WARN_ON(irqs_disabled());
-   if (!is_swiotlb_buffer(paddr))
+   if (!swiotlb_free_buffer(hwdev, size, dev_addr))
free_pages((unsigned long)vaddr, get_order(size));
-   else
-   /*
-* DMA_TO_DEVICE to avoid memcpy in swiotlb_tbl_unmap_single.
-* DMA_ATTR_SKIP_CPU_SYNC is optional.
-*/
-   swiotlb_tbl_unmap_single(hwdev, paddr, size, DMA_TO_DEVICE,
-DMA_ATTR_SKIP_CPU_SYNC);
 }
 EXPORT_SYMBOL(swiotlb_free_coherent);
 
@@ -1103,9 +1112,7 @@ void *swiotlb_alloc(struct device *dev, size_t size, 
dma_addr_t *dma_handle,
 void swiotlb_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_addr, unsigned long attrs)
 {
-   if (is_swiotlb_buffer(dma_to_phys(dev, dma_addr)))
-   swiotlb_free_coherent(dev, size, vaddr, dma_addr);
-   else
+   if (!swiotlb_free_buffer(dev, size, dma_addr))
dma_direct_free(dev, size, vaddr, dma_addr, attrs);
 }
 
-- 
2.14.2



[PATCH 54/67] x86: remove sta2x11_dma_ops

2017-12-29 Thread Christoph Hellwig
Both the swiotlb and the dma-direct code already call into phys_to_dma
to translate the DMA address.  So the sta2x11 hooks into phys_to_dma and
dma_to_phys are enough to handle this "special" device, and we can use
the plain old swiotlb ops.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/include/asm/device.h |  3 +++
 arch/x86/pci/sta2x11-fixup.c  | 46 +--
 2 files changed, 8 insertions(+), 41 deletions(-)

diff --git a/arch/x86/include/asm/device.h b/arch/x86/include/asm/device.h
index 5e12c63b47aa..812bd6c5d602 100644
--- a/arch/x86/include/asm/device.h
+++ b/arch/x86/include/asm/device.h
@@ -6,6 +6,9 @@ struct dev_archdata {
 #if defined(CONFIG_INTEL_IOMMU) || defined(CONFIG_AMD_IOMMU)
void *iommu; /* hook for IOMMU specific extension */
 #endif
+#ifdef CONFIG_STA2X11
+   bool is_sta2x11 : 1;
+#endif
 };
 
 #if defined(CONFIG_X86_DEV_DMA_OPS) && defined(CONFIG_PCI_DOMAINS)
diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c
index 15ad3025e439..7a5bafb76d77 100644
--- a/arch/x86/pci/sta2x11-fixup.c
+++ b/arch/x86/pci/sta2x11-fixup.c
@@ -159,43 +159,6 @@ static dma_addr_t a2p(dma_addr_t a, struct pci_dev *pdev)
return p;
 }
 
-/**
- * sta2x11_swiotlb_alloc_coherent - Allocate swiotlb bounce buffers
- * returns virtual address. This is the only "special" function here.
- * @dev: PCI device
- * @size: Size of the buffer
- * @dma_handle: DMA address
- * @flags: memory flags
- */
-static void *sta2x11_swiotlb_alloc_coherent(struct device *dev,
-   size_t size,
-   dma_addr_t *dma_handle,
-   gfp_t flags,
-   unsigned long attrs)
-{
-   void *vaddr;
-
-   vaddr = swiotlb_alloc(dev, size, dma_handle, flags, attrs);
-   *dma_handle = p2a(*dma_handle, to_pci_dev(dev));
-   return vaddr;
-}
-
-/* We have our own dma_ops: the same as swiotlb but from alloc (above) */
-static const struct dma_map_ops sta2x11_dma_ops = {
-   .alloc = sta2x11_swiotlb_alloc_coherent,
-   .free = swiotlb_free,
-   .map_page = swiotlb_map_page,
-   .unmap_page = swiotlb_unmap_page,
-   .map_sg = swiotlb_map_sg_attrs,
-   .unmap_sg = swiotlb_unmap_sg_attrs,
-   .sync_single_for_cpu = swiotlb_sync_single_for_cpu,
-   .sync_single_for_device = swiotlb_sync_single_for_device,
-   .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
-   .sync_sg_for_device = swiotlb_sync_sg_for_device,
-   .mapping_error = swiotlb_dma_mapping_error,
-   .dma_supported = dma_direct_supported,
-};
-
 /* At setup time, we use our own ops if the device is a ConneXt one */
 static void sta2x11_setup_pdev(struct pci_dev *pdev)
 {
@@ -205,7 +168,8 @@ static void sta2x11_setup_pdev(struct pci_dev *pdev)
return;
pci_set_consistent_dma_mask(pdev, STA2X11_AMBA_SIZE - 1);
pci_set_dma_mask(pdev, STA2X11_AMBA_SIZE - 1);
-   pdev->dev.dma_ops = &sta2x11_dma_ops;
+   pdev->dev.dma_ops = &swiotlb_dma_ops;
+   pdev->dev.archdata.is_sta2x11 = true;
 
/* We must enable all devices as master, for audio DMA to work */
pci_set_master(pdev);
@@ -225,7 +189,7 @@ bool dma_capable(struct device *dev, dma_addr_t addr, 
size_t size)
 {
struct sta2x11_mapping *map;
 
-   if (dev->dma_ops != &sta2x11_dma_ops) {
+   if (!dev->archdata.is_sta2x11) {
if (!dev->dma_mask)
return false;
return addr + size - 1 <= *dev->dma_mask;
@@ -249,7 +213,7 @@ bool dma_capable(struct device *dev, dma_addr_t addr, 
size_t size)
  */
 dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
-   if (dev->dma_ops != &sta2x11_dma_ops)
+   if (!dev->archdata.is_sta2x11)
return paddr;
return p2a(paddr, to_pci_dev(dev));
 }
@@ -261,7 +225,7 @@ dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t 
paddr)
  */
 phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t daddr)
 {
-   if (dev->dma_ops != &sta2x11_dma_ops)
+   if (!dev->archdata.is_sta2x11)
return daddr;
return a2p(daddr, to_pci_dev(dev));
 }
-- 
2.14.2



[PATCH 53/67] swiotlb: remove swiotlb_set_mem_attributes

2017-12-29 Thread Christoph Hellwig
Now that set_memory_decrypted is always available we can just call
it directly.

Signed-off-by: Christoph Hellwig 
---
 arch/x86/include/asm/mem_encrypt.h |  2 --
 arch/x86/mm/mem_encrypt.c  |  9 -
 lib/swiotlb.c  | 12 ++--
 3 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index c9459a4c3c68..549894d496da 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -48,8 +48,6 @@ int __init early_set_memory_encrypted(unsigned long vaddr, 
unsigned long size);
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void);
 
-void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
-
 bool sme_active(void);
 bool sev_active(void);
 
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 93de36cc3dd9..b279e90c85cd 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -379,15 +379,6 @@ void __init mem_encrypt_init(void)
 : "Secure Memory Encryption (SME)");
 }
 
-void swiotlb_set_mem_attributes(void *vaddr, unsigned long size)
-{
-   WARN(PAGE_ALIGN(size) != size,
-"size is not page-aligned (%#lx)\n", size);
-
-   /* Make the SWIOTLB buffer area decrypted */
-   set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);
-}
-
 static void __init sme_clear_pgd(pgd_t *pgd_base, unsigned long start,
 unsigned long end)
 {
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 85b2ad9299e3..4ea0b5710618 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -31,6 +31,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -156,8 +157,6 @@ unsigned long swiotlb_size_or_default(void)
return size ? size : (IO_TLB_DEFAULT_SIZE);
 }
 
-void __weak swiotlb_set_mem_attributes(void *vaddr, unsigned long size) { }
-
 /* Note that this doesn't work with highmem page */
 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
  volatile void *address)
@@ -202,12 +201,12 @@ void __init swiotlb_update_mem_attributes(void)
 
vaddr = phys_to_virt(io_tlb_start);
bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT);
-   swiotlb_set_mem_attributes(vaddr, bytes);
+   set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
memset(vaddr, 0, bytes);
 
vaddr = phys_to_virt(io_tlb_overflow_buffer);
bytes = PAGE_ALIGN(io_tlb_overflow);
-   swiotlb_set_mem_attributes(vaddr, bytes);
+   set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
memset(vaddr, 0, bytes);
 }
 
@@ -348,7 +347,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
io_tlb_start = virt_to_phys(tlb);
io_tlb_end = io_tlb_start + bytes;
 
-   swiotlb_set_mem_attributes(tlb, bytes);
+   set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
memset(tlb, 0, bytes);
 
/*
@@ -359,7 +358,8 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
if (!v_overflow_buffer)
goto cleanup2;
 
-   swiotlb_set_mem_attributes(v_overflow_buffer, io_tlb_overflow);
+   set_memory_decrypted((unsigned long)v_overflow_buffer,
+   io_tlb_overflow >> PAGE_SHIFT);
memset(v_overflow_buffer, 0, io_tlb_overflow);
io_tlb_overflow_buffer = virt_to_phys(v_overflow_buffer);
 
-- 
2.14.2
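
For reference, the call now made directly is the generic
set_memory_decrypted()/set_memory_encrypted() API from linux/set_memory.h.
A minimal, hypothetical use outside of swiotlb (foo_share_buffer, buf and
size are placeholders; the buffer is assumed to be page-aligned with a size
that is a multiple of PAGE_SIZE):

#include <linux/set_memory.h>

static void foo_share_buffer(void *buf, size_t size)
{
	/* Clear the encryption bit so the device sees the same bytes we do. */
	set_memory_decrypted((unsigned long)buf, size >> PAGE_SHIFT);
	/* Zero only after the attribute change, as the swiotlb code does. */
	memset(buf, 0, size);
}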



[PATCH 50/67] swiotlb: refactor coherent buffer allocation

2017-12-29 Thread Christoph Hellwig
Factor out a new swiotlb_alloc_buffer helper that allocates DMA coherent
memory from the swiotlb bounce buffer.

This allows simplifying the swiotlb_alloc implementation, which uses
dma_direct_alloc to try to allocate a reachable buffer first.

Signed-off-by: Christoph Hellwig 
---
 lib/swiotlb.c | 100 ++
 1 file changed, 51 insertions(+), 49 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index adb4dd0091fa..905eea6353a3 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -709,67 +709,69 @@ void swiotlb_tbl_sync_single(struct device *hwdev, 
phys_addr_t tlb_addr,
 }
 EXPORT_SYMBOL_GPL(swiotlb_tbl_sync_single);
 
+static void *
+swiotlb_alloc_buffer(struct device *dev, size_t size, dma_addr_t *dma_handle)
+{
+   phys_addr_t phys_addr;
+
+   if (swiotlb_force == SWIOTLB_NO_FORCE)
+   goto out_warn;
+
+   phys_addr = swiotlb_tbl_map_single(dev,
+   swiotlb_phys_to_dma(dev, io_tlb_start),
+   0, size, DMA_FROM_DEVICE, 0);
+   if (phys_addr == SWIOTLB_MAP_ERROR)
+   goto out_warn;
+
+   *dma_handle = swiotlb_phys_to_dma(dev, phys_addr);
+
+   /* Confirm address can be DMA'd by device */
+   if (*dma_handle + size - 1 > dev->coherent_dma_mask)
+   goto out_unmap;
+
+   memset(phys_to_virt(phys_addr), 0, size);
+   return phys_to_virt(phys_addr);
+
+out_unmap:
+   dev_warn(dev, "hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016Lx\n",
+   (unsigned long long)dev->coherent_dma_mask,
+   (unsigned long long)*dma_handle);
+
+   /*
+* DMA_TO_DEVICE to avoid memcpy in unmap_single.
+* DMA_ATTR_SKIP_CPU_SYNC is optional.
+*/
+   swiotlb_tbl_unmap_single(dev, phys_addr, size, DMA_TO_DEVICE,
+   DMA_ATTR_SKIP_CPU_SYNC);
+out_warn:
+   dev_warn(dev,
+   "swiotlb: coherent allocation failed, size=%zu\n", size);
+   dump_stack();
+   return NULL;
+}
+
 void *
 swiotlb_alloc_coherent(struct device *hwdev, size_t size,
   dma_addr_t *dma_handle, gfp_t flags)
 {
-   dma_addr_t dev_addr;
-   void *ret;
int order = get_order(size);
+   void *ret;
 
ret = (void *)__get_free_pages(flags, order);
if (ret) {
-   dev_addr = swiotlb_virt_to_bus(hwdev, ret);
-   if (dev_addr + size - 1 > hwdev->coherent_dma_mask) {
-   /*
-* The allocated memory isn't reachable by the device.
-*/
-   free_pages((unsigned long) ret, order);
-   ret = NULL;
+   *dma_handle = swiotlb_virt_to_bus(hwdev, ret);
+   if (*dma_handle  + size - 1 <= hwdev->coherent_dma_mask) {
+   memset(ret, 0, size);
+   return ret;
}
-   }
-   if (!ret) {
+
/*
-* We are either out of memory or the device can't DMA to
-* GFP_DMA memory; fall back on map_single(), which
-* will grab memory from the lowest available address range.
+* The allocated memory isn't reachable by the device.
 */
-   phys_addr_t paddr = map_single(hwdev, 0, size,
-  DMA_FROM_DEVICE, 0);
-   if (paddr == SWIOTLB_MAP_ERROR)
-   goto err_warn;
-
-   ret = phys_to_virt(paddr);
-   dev_addr = swiotlb_phys_to_dma(hwdev, paddr);
-
-   /* Confirm address can be DMA'd by device */
-   if (dev_addr + size - 1 > hwdev->coherent_dma_mask) {
-   printk("hwdev DMA mask = 0x%016Lx, dev_addr = 
0x%016Lx\n",
-  (unsigned long long)hwdev->coherent_dma_mask,
-  (unsigned long long)dev_addr);
-
-   /*
-* DMA_TO_DEVICE to avoid memcpy in unmap_single.
-* The DMA_ATTR_SKIP_CPU_SYNC is optional.
-*/
-   swiotlb_tbl_unmap_single(hwdev, paddr,
-size, DMA_TO_DEVICE,
-DMA_ATTR_SKIP_CPU_SYNC);
-   goto err_warn;
-   }
+   free_pages((unsigned long) ret, order);
}
 
-   *dma_handle = dev_addr;
-   memset(ret, 0, size);
-
-   return ret;
-
-err_warn:
-   pr_warn("swiotlb: coherent allocation failed for device %s size=%zu\n",
-   dev_name(hwdev), size);
-   dump_stack();
-
-   return NULL;
+   return swiotlb_alloc_buffer(hwdev, size, dma_handle);
 }
 EXPORT_SYMBOL(swiotlb_alloc_coherent);
 
@@ -1105,7 +1107,7 @@ void *swiotlb_alloc(struct device *dev, size_t size, 
dma_addr_t 

[PATCH 51/67] set_memory.h: provide set_memory_{en,de}crypted stubs

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 include/linux/set_memory.h | 12 
 1 file changed, 12 insertions(+)

diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index e5140648f638..da5178216da5 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -17,4 +17,16 @@ static inline int set_memory_x(unsigned long addr,  int 
numpages) { return 0; }
 static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
+#ifndef CONFIG_ARCH_HAS_MEM_ENCRYPT
+static inline int set_memory_encrypted(unsigned long addr, int numpages)
+{
+   return 0;
+}
+
+static inline int set_memory_decrypted(unsigned long addr, int numpages)
+{
+   return 0;
+}
+#endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
+
 #endif /* _LINUX_SET_MEMORY_H_ */
-- 
2.14.2



[PATCH 55/67] ia64: replace ZONE_DMA with ZONE_DMA32

2017-12-29 Thread Christoph Hellwig
ia64 uses ZONE_DMA for allocations below 32 bits.  These days we
name that zone ZONE_DMA32, which will allow us to use the dma-direct
and generic swiotlb code as-is, so rename it.

Signed-off-by: Christoph Hellwig 
---
 arch/ia64/Kconfig  | 2 +-
 arch/ia64/kernel/pci-swiotlb.c | 2 +-
 arch/ia64/mm/contig.c  | 4 ++--
 arch/ia64/mm/discontig.c   | 8 
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 4d18fca885ee..888acdb163cb 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -66,7 +66,7 @@ config 64BIT
select ATA_NONSTANDARD if ATA
default y
 
-config ZONE_DMA
+config ZONE_DMA32
def_bool y
depends on !IA64_SGI_SN2
 
diff --git a/arch/ia64/kernel/pci-swiotlb.c b/arch/ia64/kernel/pci-swiotlb.c
index f1ae873a8c35..4a9a6e58ad6a 100644
--- a/arch/ia64/kernel/pci-swiotlb.c
+++ b/arch/ia64/kernel/pci-swiotlb.c
@@ -20,7 +20,7 @@ static void *ia64_swiotlb_alloc_coherent(struct device *dev, 
size_t size,
 unsigned long attrs)
 {
if (dev->coherent_dma_mask != DMA_BIT_MASK(64))
-   gfp |= GFP_DMA;
+   gfp |= GFP_DMA32;
return swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
 }
 
diff --git a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c
index 52715a71aede..7d64b30913d1 100644
--- a/arch/ia64/mm/contig.c
+++ b/arch/ia64/mm/contig.c
@@ -237,9 +237,9 @@ paging_init (void)
unsigned long max_zone_pfns[MAX_NR_ZONES];
 
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
-#ifdef CONFIG_ZONE_DMA
+#ifdef CONFIG_ZONE_DMA32
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
-   max_zone_pfns[ZONE_DMA] = max_dma;
+   max_zone_pfns[ZONE_DMA32] = max_dma;
 #endif
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 
diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
index 9b2d994cddf6..ac46f0d60b66 100644
--- a/arch/ia64/mm/discontig.c
+++ b/arch/ia64/mm/discontig.c
@@ -38,7 +38,7 @@ struct early_node_data {
struct ia64_node_data *node_data;
unsigned long pernode_addr;
unsigned long pernode_size;
-#ifdef CONFIG_ZONE_DMA
+#ifdef CONFIG_ZONE_DMA32
unsigned long num_dma_physpages;
 #endif
unsigned long min_pfn;
@@ -669,7 +669,7 @@ static __init int count_node_pages(unsigned long start, 
unsigned long len, int n
 {
unsigned long end = start + len;
 
-#ifdef CONFIG_ZONE_DMA
+#ifdef CONFIG_ZONE_DMA32
if (start <= __pa(MAX_DMA_ADDRESS))
mem_data[node].num_dma_physpages +=
(min(end, __pa(MAX_DMA_ADDRESS)) - start) >>PAGE_SHIFT;
@@ -724,8 +724,8 @@ void __init paging_init(void)
}
 
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
-#ifdef CONFIG_ZONE_DMA
-   max_zone_pfns[ZONE_DMA] = max_dma;
+#ifdef CONFIG_ZONE_DMA32
+   max_zone_pfns[ZONE_DMA32] = max_dma;
 #endif
max_zone_pfns[ZONE_NORMAL] = max_pfn;
free_area_init_nodes(max_zone_pfns);
-- 
2.14.2



[PATCH 56/67] ia64: use generic swiotlb_ops

2017-12-29 Thread Christoph Hellwig
These are identical to the ia64 ops, and would also support CMA
if enabled on ia64.

Signed-off-by: Christoph Hellwig 
---
 arch/ia64/Kconfig|  5 +
 arch/ia64/hp/common/hwsw_iommu.c |  4 ++--
 arch/ia64/hp/common/sba_iommu.c  |  6 +++---
 arch/ia64/kernel/pci-swiotlb.c   | 38 +++---
 4 files changed, 13 insertions(+), 40 deletions(-)
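
For reference, the generic ops table that replaces ia64_swiotlb_dma_ops is the
swiotlb_dma_ops added to lib/swiotlb.c earlier in the series; it looks roughly
like this (a sketch for orientation, see the earlier swiotlb patch for the
authoritative version):

const struct dma_map_ops swiotlb_dma_ops = {
	.mapping_error		= swiotlb_dma_mapping_error,
	.alloc			= swiotlb_alloc,
	.free			= swiotlb_free,
	.sync_single_for_cpu	= swiotlb_sync_single_for_cpu,
	.sync_single_for_device	= swiotlb_sync_single_for_device,
	.sync_sg_for_cpu	= swiotlb_sync_sg_for_cpu,
	.sync_sg_for_device	= swiotlb_sync_sg_for_device,
	.map_sg			= swiotlb_map_sg_attrs,
	.unmap_sg		= swiotlb_unmap_sg_attrs,
	.map_page		= swiotlb_map_page,
	.unmap_page		= swiotlb_unmap_page,
	.dma_supported		= swiotlb_dma_supported,
};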

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 888acdb163cb..29148fe4bf5a 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -146,6 +146,7 @@ config IA64_GENERIC
bool "generic"
select NUMA
select ACPI_NUMA
+   select DMA_DIRECT_OPS
select SWIOTLB
select PCI_MSI
help
@@ -166,6 +167,7 @@ config IA64_GENERIC
 
 config IA64_DIG
bool "DIG-compliant"
+   select DMA_DIRECT_OPS
select SWIOTLB
 
 config IA64_DIG_VTD
@@ -181,6 +183,7 @@ config IA64_HP_ZX1
 
 config IA64_HP_ZX1_SWIOTLB
bool "HP-zx1/sx1000 with software I/O TLB"
+   select DMA_DIRECT_OPS
select SWIOTLB
help
  Build a kernel that runs on HP zx1 and sx1000 systems even when they
@@ -204,6 +207,7 @@ config IA64_SGI_UV
bool "SGI-UV"
select NUMA
select ACPI_NUMA
+   select DMA_DIRECT_OPS
select SWIOTLB
help
  Selecting this option will optimize the kernel for use on UV based
@@ -214,6 +218,7 @@ config IA64_SGI_UV
 
 config IA64_HP_SIM
bool "Ski-simulator"
+   select DMA_DIRECT_OPS
select SWIOTLB
depends on !PM
 
diff --git a/arch/ia64/hp/common/hwsw_iommu.c b/arch/ia64/hp/common/hwsw_iommu.c
index 41279f0442bd..58969039bed2 100644
--- a/arch/ia64/hp/common/hwsw_iommu.c
+++ b/arch/ia64/hp/common/hwsw_iommu.c
@@ -19,7 +19,7 @@
 #include 
 #include 
 
-extern const struct dma_map_ops sba_dma_ops, ia64_swiotlb_dma_ops;
+extern const struct dma_map_ops sba_dma_ops;
 
 /* swiotlb declarations & definitions: */
 extern int swiotlb_late_init_with_default_size (size_t size);
@@ -38,7 +38,7 @@ static inline int use_swiotlb(struct device *dev)
 const struct dma_map_ops *hwsw_dma_get_ops(struct device *dev)
 {
if (use_swiotlb(dev))
-   return &ia64_swiotlb_dma_ops;
+   return &swiotlb_dma_ops;
 	return &sba_dma_ops;
 }
 EXPORT_SYMBOL(hwsw_dma_get_ops);
diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index d68849ad2ee1..6f05aba9012f 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -2093,7 +2093,7 @@ static int __init acpi_sba_ioc_init_acpi(void)
 /* This has to run before acpi_scan_init(). */
 arch_initcall(acpi_sba_ioc_init_acpi);
 
-extern const struct dma_map_ops ia64_swiotlb_dma_ops;
+extern const struct dma_map_ops swiotlb_dma_ops;
 
 static int __init
 sba_init(void)
@@ -2108,7 +2108,7 @@ sba_init(void)
 * a successful kdump kernel boot is to use the swiotlb.
 */
if (is_kdump_kernel()) {
-   dma_ops = &ia64_swiotlb_dma_ops;
+   dma_ops = &swiotlb_dma_ops;
if (swiotlb_late_init_with_default_size(64 * (1<<20)) != 0)
panic("Unable to initialize software I/O TLB:"
  " Try machvec=dig boot option");
@@ -2130,7 +2130,7 @@ sba_init(void)
 * If we didn't find something sba_iommu can claim, we
 * need to setup the swiotlb and switch to the dig machvec.
 */
-   dma_ops = &ia64_swiotlb_dma_ops;
+   dma_ops = &swiotlb_dma_ops;
if (swiotlb_late_init_with_default_size(64 * (1<<20)) != 0)
panic("Unable to find SBA IOMMU or initialize "
  "software I/O TLB: Try machvec=dig boot option");
diff --git a/arch/ia64/kernel/pci-swiotlb.c b/arch/ia64/kernel/pci-swiotlb.c
index 4a9a6e58ad6a..0f8d5fbd86bd 100644
--- a/arch/ia64/kernel/pci-swiotlb.c
+++ b/arch/ia64/kernel/pci-swiotlb.c
@@ -6,8 +6,7 @@
 #include 
 #include 
 #include 
-
-#include 
+#include 
 #include 
 #include 
 #include 
@@ -15,40 +14,9 @@
 int swiotlb __read_mostly;
 EXPORT_SYMBOL(swiotlb);
 
-static void *ia64_swiotlb_alloc_coherent(struct device *dev, size_t size,
-dma_addr_t *dma_handle, gfp_t gfp,
-unsigned long attrs)
-{
-   if (dev->coherent_dma_mask != DMA_BIT_MASK(64))
-   gfp |= GFP_DMA32;
-   return swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
-}
-
-static void ia64_swiotlb_free_coherent(struct device *dev, size_t size,
-  void *vaddr, dma_addr_t dma_addr,
-  unsigned long attrs)
-{
-   swiotlb_free_coherent(dev, size, vaddr, dma_addr);
-}
-
-const struct dma_map_ops ia64_swiotlb_dma_ops = {
-   .alloc = ia64_swiotlb_alloc_coherent,
-   .free = ia64_swiotlb_free_coherent,
-   .map_page = swiotlb_map_page,

[PATCH 61/67] tile: use generic swiotlb_ops

2017-12-29 Thread Christoph Hellwig
These are identical to the tile ops, and would also support CMA
if enabled on tile.

Signed-off-by: Christoph Hellwig 
---
 arch/tile/Kconfig  |  1 +
 arch/tile/kernel/pci-dma.c | 36 +++-
 2 files changed, 4 insertions(+), 33 deletions(-)

diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 30c586686f29..ef9d403cbbe4 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -261,6 +261,7 @@ config NEED_SG_DMA_LENGTH
 config SWIOTLB
bool
default TILEGX
+   select DMA_DIRECT_OPS
select IOMMU_HELPER
select NEED_SG_DMA_LENGTH
select ARCH_HAS_DMA_SET_COHERENT_MASK
diff --git a/arch/tile/kernel/pci-dma.c b/arch/tile/kernel/pci-dma.c
index a9b48520eeb9..6e9365234b6a 100644
--- a/arch/tile/kernel/pci-dma.c
+++ b/arch/tile/kernel/pci-dma.c
@@ -511,39 +511,9 @@ EXPORT_SYMBOL(gx_pci_dma_map_ops);
 /* PCI DMA mapping functions for legacy PCI devices */
 
 #ifdef CONFIG_SWIOTLB
-static void *tile_swiotlb_alloc_coherent(struct device *dev, size_t size,
-dma_addr_t *dma_handle, gfp_t gfp,
-unsigned long attrs)
-{
-   gfp |= GFP_DMA32;
-   return swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
-}
-
-static void tile_swiotlb_free_coherent(struct device *dev, size_t size,
-  void *vaddr, dma_addr_t dma_addr,
-  unsigned long attrs)
-{
-   swiotlb_free_coherent(dev, size, vaddr, dma_addr);
-}
-
-static const struct dma_map_ops pci_swiotlb_dma_ops = {
-   .alloc = tile_swiotlb_alloc_coherent,
-   .free = tile_swiotlb_free_coherent,
-   .map_page = swiotlb_map_page,
-   .unmap_page = swiotlb_unmap_page,
-   .map_sg = swiotlb_map_sg_attrs,
-   .unmap_sg = swiotlb_unmap_sg_attrs,
-   .sync_single_for_cpu = swiotlb_sync_single_for_cpu,
-   .sync_single_for_device = swiotlb_sync_single_for_device,
-   .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
-   .sync_sg_for_device = swiotlb_sync_sg_for_device,
-   .dma_supported = swiotlb_dma_supported,
-   .mapping_error = swiotlb_dma_mapping_error,
-};
-
 static const struct dma_map_ops pci_hybrid_dma_ops = {
-   .alloc = tile_swiotlb_alloc_coherent,
-   .free = tile_swiotlb_free_coherent,
+   .alloc = swiotlb_alloc,
+   .free = swiotlb_free,
.map_page = tile_pci_dma_map_page,
.unmap_page = tile_pci_dma_unmap_page,
.map_sg = tile_pci_dma_map_sg,
@@ -554,7 +524,7 @@ static const struct dma_map_ops pci_hybrid_dma_ops = {
.sync_sg_for_device = tile_pci_dma_sync_sg_for_device,
 };
 
-const struct dma_map_ops *gx_legacy_pci_dma_map_ops = &pci_swiotlb_dma_ops;
+const struct dma_map_ops *gx_legacy_pci_dma_map_ops = &swiotlb_dma_ops;
 const struct dma_map_ops *gx_hybrid_pci_dma_map_ops = &pci_hybrid_dma_ops;
 #else
 const struct dma_map_ops *gx_legacy_pci_dma_map_ops;
-- 
2.14.2



[PATCH 59/67] unicore32: use generic swiotlb_ops

2017-12-29 Thread Christoph Hellwig
These are identical to the unicore32 ops, and would also support CMA
if enabled on unicore32.

Signed-off-by: Christoph Hellwig 
---
 arch/unicore32/include/asm/dma-mapping.h |  9 +-
 arch/unicore32/mm/Kconfig|  1 +
 arch/unicore32/mm/Makefile   |  2 --
 arch/unicore32/mm/dma-swiotlb.c  | 48 
 4 files changed, 2 insertions(+), 58 deletions(-)
 delete mode 100644 arch/unicore32/mm/dma-swiotlb.c

diff --git a/arch/unicore32/include/asm/dma-mapping.h 
b/arch/unicore32/include/asm/dma-mapping.h
index f2bfec273aa7..790bc2ef4af2 100644
--- a/arch/unicore32/include/asm/dma-mapping.h
+++ b/arch/unicore32/include/asm/dma-mapping.h
@@ -12,18 +12,11 @@
 #ifndef __UNICORE_DMA_MAPPING_H__
 #define __UNICORE_DMA_MAPPING_H__
 
-#ifdef __KERNEL__
-
-#include 
-#include 
 #include 
 
-extern const struct dma_map_ops swiotlb_dma_map_ops;
-
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-   return &swiotlb_dma_map_ops;
+   return &swiotlb_dma_ops;
 }
 
-#endif /* __KERNEL__ */
 #endif
diff --git a/arch/unicore32/mm/Kconfig b/arch/unicore32/mm/Kconfig
index c256460cd363..e9154a59d561 100644
--- a/arch/unicore32/mm/Kconfig
+++ b/arch/unicore32/mm/Kconfig
@@ -42,6 +42,7 @@ config CPU_TLB_SINGLE_ENTRY_DISABLE
 
 config SWIOTLB
def_bool y
+   select DMA_DIRECT_OPS
 
 config IOMMU_HELPER
def_bool SWIOTLB
diff --git a/arch/unicore32/mm/Makefile b/arch/unicore32/mm/Makefile
index 681c0ef5ec9e..8106260583ab 100644
--- a/arch/unicore32/mm/Makefile
+++ b/arch/unicore32/mm/Makefile
@@ -6,8 +6,6 @@
 obj-y  := extable.o fault.o init.o pgd.o mmu.o
 obj-y  += flush.o ioremap.o
 
-obj-$(CONFIG_SWIOTLB)  += dma-swiotlb.o
-
 obj-$(CONFIG_MODULES)  += proc-syms.o
 
 obj-$(CONFIG_ALIGNMENT_TRAP)   += alignment.o
diff --git a/arch/unicore32/mm/dma-swiotlb.c b/arch/unicore32/mm/dma-swiotlb.c
deleted file mode 100644
index 525413d6690e..000000000000
--- a/arch/unicore32/mm/dma-swiotlb.c
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Contains routines needed to support swiotlb for UniCore32.
- *
- * Copyright (C) 2010 Guan Xuetao
- *
- * This program is free software; you can redistribute  it and/or modify it
- * under  the terms of  the GNU General  Public License as published by the
- * Free Software Foundation;  either version 2 of the  License, or (at your
- * option) any later version.
- */
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-
-#include 
-
-static void *unicore_swiotlb_alloc_coherent(struct device *dev, size_t size,
-   dma_addr_t *dma_handle, gfp_t flags,
-   unsigned long attrs)
-{
-   return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
-}
-
-static void unicore_swiotlb_free_coherent(struct device *dev, size_t size,
- void *vaddr, dma_addr_t dma_addr,
- unsigned long attrs)
-{
-   swiotlb_free_coherent(dev, size, vaddr, dma_addr);
-}
-
-const struct dma_map_ops swiotlb_dma_map_ops = {
-   .alloc = unicore_swiotlb_alloc_coherent,
-   .free = unicore_swiotlb_free_coherent,
-   .map_sg = swiotlb_map_sg_attrs,
-   .unmap_sg = swiotlb_unmap_sg_attrs,
-   .dma_supported = swiotlb_dma_supported,
-   .map_page = swiotlb_map_page,
-   .unmap_page = swiotlb_unmap_page,
-   .sync_single_for_cpu = swiotlb_sync_single_for_cpu,
-   .sync_single_for_device = swiotlb_sync_single_for_device,
-   .sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
-   .sync_sg_for_device = swiotlb_sync_sg_for_device,
-   .mapping_error = swiotlb_dma_mapping_error,
-};
-EXPORT_SYMBOL(swiotlb_dma_map_ops);
-- 
2.14.2



[PATCH 60/67] tile: replace ZONE_DMA with ZONE_DMA32

2017-12-29 Thread Christoph Hellwig
tile uses ZONE_DMA for allocations below 32 bits.  These days we
name that zone ZONE_DMA32, which will allow us to use the dma-direct
and generic swiotlb code as-is, so rename it.

Signed-off-by: Christoph Hellwig 
---
 arch/tile/Kconfig  | 2 +-
 arch/tile/kernel/pci-dma.c | 4 ++--
 arch/tile/kernel/setup.c   | 8 
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 02f269cfa538..30c586686f29 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -249,7 +249,7 @@ config HIGHMEM
 
  If unsure, say "true".
 
-config ZONE_DMA
+config ZONE_DMA32
def_bool y
 
 config IOMMU_HELPER
diff --git a/arch/tile/kernel/pci-dma.c b/arch/tile/kernel/pci-dma.c
index 9072e2c25e59..a9b48520eeb9 100644
--- a/arch/tile/kernel/pci-dma.c
+++ b/arch/tile/kernel/pci-dma.c
@@ -54,7 +54,7 @@ static void *tile_dma_alloc_coherent(struct device *dev, 
size_t size,
 * which case we will return NULL.  But such devices are uncommon.
 */
if (dma_mask <= DMA_BIT_MASK(32)) {
-   gfp |= GFP_DMA;
+   gfp |= GFP_DMA32;
node = 0;
}
 
@@ -515,7 +515,7 @@ static void *tile_swiotlb_alloc_coherent(struct device 
*dev, size_t size,
 dma_addr_t *dma_handle, gfp_t gfp,
 unsigned long attrs)
 {
-   gfp |= GFP_DMA;
+   gfp |= GFP_DMA32;
return swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
 }
 
diff --git a/arch/tile/kernel/setup.c b/arch/tile/kernel/setup.c
index ad83c1e66dbd..eb4e198f6f93 100644
--- a/arch/tile/kernel/setup.c
+++ b/arch/tile/kernel/setup.c
@@ -814,11 +814,11 @@ static void __init zone_sizes_init(void)
 #endif
 
if (start < dma_end) {
-   zones_size[ZONE_DMA] = min(zones_size[ZONE_NORMAL],
+   zones_size[ZONE_DMA32] = min(zones_size[ZONE_NORMAL],
   dma_end - start);
-   zones_size[ZONE_NORMAL] -= zones_size[ZONE_DMA];
+   zones_size[ZONE_NORMAL] -= zones_size[ZONE_DMA32];
} else {
-   zones_size[ZONE_DMA] = 0;
+   zones_size[ZONE_DMA32] = 0;
}
 
/* Take zone metadata from controller 0 if we're isolnode. */
@@ -830,7 +830,7 @@ static void __init zone_sizes_init(void)
   PFN_UP(node_percpu[i]));
 
/* Track the type of memory on each node */
-   if (zones_size[ZONE_NORMAL] || zones_size[ZONE_DMA])
+   if (zones_size[ZONE_NORMAL] || zones_size[ZONE_DMA32])
node_set_state(i, N_NORMAL_MEMORY);
 #ifdef CONFIG_HIGHMEM
if (end != start)
-- 
2.14.2



[PATCH 62/67] mips/netlogic: remove swiotlb support

2017-12-29 Thread Christoph Hellwig
nlm_swiotlb_dma_ops is unused code, so the whole swiotlb support is dead.
If it gets resurrected at some point it should use the generic
swiotlb_dma_ops instead.

Signed-off-by: Christoph Hellwig 
---
 arch/mips/include/asm/netlogic/common.h |  3 --
 arch/mips/netlogic/Kconfig  |  5 --
 arch/mips/netlogic/common/Makefile  |  1 -
 arch/mips/netlogic/common/nlm-dma.c | 94 -
 4 files changed, 103 deletions(-)
 delete mode 100644 arch/mips/netlogic/common/nlm-dma.c

diff --git a/arch/mips/include/asm/netlogic/common.h 
b/arch/mips/include/asm/netlogic/common.h
index a6e6cbebe046..57616649b4f3 100644
--- a/arch/mips/include/asm/netlogic/common.h
+++ b/arch/mips/include/asm/netlogic/common.h
@@ -87,9 +87,6 @@ unsigned int nlm_get_cpu_frequency(void);
 extern const struct plat_smp_ops nlm_smp_ops;
 extern char nlm_reset_entry[], nlm_reset_entry_end[];
 
-/* SWIOTLB */
-extern const struct dma_map_ops nlm_swiotlb_dma_ops;
-
 extern unsigned int nlm_threads_per_core;
 extern cpumask_t nlm_cpumask;
 
diff --git a/arch/mips/netlogic/Kconfig b/arch/mips/netlogic/Kconfig
index 8296b13affd2..7fcfc7fe9f14 100644
--- a/arch/mips/netlogic/Kconfig
+++ b/arch/mips/netlogic/Kconfig
@@ -89,9 +89,4 @@ config IOMMU_HELPER
 config NEED_SG_DMA_LENGTH
bool
 
-config SWIOTLB
-   def_bool y
-   select NEED_SG_DMA_LENGTH
-   select IOMMU_HELPER
-
 endif
diff --git a/arch/mips/netlogic/common/Makefile 
b/arch/mips/netlogic/common/Makefile
index 60d00b5d748e..89f6e3f39fed 100644
--- a/arch/mips/netlogic/common/Makefile
+++ b/arch/mips/netlogic/common/Makefile
@@ -1,6 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y  += irq.o time.o
-obj-y  += nlm-dma.o
 obj-y  += reset.o
 obj-$(CONFIG_SMP)  += smp.o smpboot.o
 obj-$(CONFIG_EARLY_PRINTK) += earlycons.o
diff --git a/arch/mips/netlogic/common/nlm-dma.c 
b/arch/mips/netlogic/common/nlm-dma.c
deleted file mode 100644
index 49c975b6aa28..000000000000
--- a/arch/mips/netlogic/common/nlm-dma.c
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
-*  Copyright (C) 2003-2013 Broadcom Corporation
-*  All Rights Reserved
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the Broadcom
- * license below:
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * 1. Redistributions of source code must retain the above copyright
- *notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *notice, this list of conditions and the following disclaimer in
- *the documentation and/or other materials provided with the
- *distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY BROADCOM ``AS IS'' AND ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL BROADCOM OR CONTRIBUTORS BE LIABLE
- * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
- * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
- * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
- * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
- * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-
-#include 
-
-static char *nlm_swiotlb;
-
-static void *nlm_dma_alloc_coherent(struct device *dev, size_t size,
-   dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
-{
-#ifdef CONFIG_ZONE_DMA32
-   if (dev->coherent_dma_mask <= DMA_BIT_MASK(32))
-   gfp |= __GFP_DMA32;
-#endif
-
-   /* Don't invoke OOM killer */
-   gfp |= __GFP_NORETRY;
-
-   return swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
-}
-
-static void nlm_dma_free_coherent(struct device *dev, size_t size,
-   void *vaddr, dma_addr_t dma_handle, unsigned long attrs)
-{
-   swiotlb_free_coherent(dev, size, vaddr, dma_handle);
-}
-
-const struct dma_map_ops nlm_swiotlb_dma_ops = {
-   .alloc = nlm_dma_alloc_coherent,
-   .free = nlm_dma_free_coherent,
-   .map_page = swiotlb_map_page,
-   .unmap_page = swiotlb_unmap_page,
-   .map_sg = swiotlb_map_sg_attrs,
-   .unmap_sg = swiotlb_unmap_sg_attrs,
-   .sync_single_for_cpu = swiotlb_sync_single_for_cpu,
-   .sync_single_for_device = swiotlb_sync_single_for_device,
-   

[PATCH 63/67] mips: use swiotlb_{alloc,free}

2017-12-29 Thread Christoph Hellwig
These already handle the GFP_DMA/GFP_DMA32 selection, and will use CMA
memory if it is enabled, thus avoiding the __GFP_NORETRY hack.

Signed-off-by: Christoph Hellwig 
---
 arch/mips/cavium-octeon/Kconfig   |  1 +
 arch/mips/cavium-octeon/dma-octeon.c  | 26 +++---
 arch/mips/loongson64/Kconfig  |  1 +
 arch/mips/loongson64/common/dma-swiotlb.c | 21 ++---
 4 files changed, 7 insertions(+), 42 deletions(-)
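
Conceptually, the zone selection that octeon and loongson64 used to open-code
now happens once in the generic direct allocator that swiotlb_alloc calls
first; a simplified sketch (the helper name below is illustrative only, not
taken from the patch):

/* Simplified sketch of the generic GFP zone selection. */
static gfp_t dma_direct_gfp_flags(struct device *dev, gfp_t gfp)
{
	u64 mask = dev->coherent_dma_mask;

	if (IS_ENABLED(CONFIG_ZONE_DMA) && mask <= DMA_BIT_MASK(24))
		gfp |= GFP_DMA;		/* legacy 24-bit zone */
	else if (IS_ENABLED(CONFIG_ZONE_DMA32) && mask <= DMA_BIT_MASK(32))
		gfp |= GFP_DMA32;	/* devices limited to 32-bit DMA */
	return gfp;
}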

diff --git a/arch/mips/cavium-octeon/Kconfig b/arch/mips/cavium-octeon/Kconfig
index 204a1670fd9b..b5eee1a57d6c 100644
--- a/arch/mips/cavium-octeon/Kconfig
+++ b/arch/mips/cavium-octeon/Kconfig
@@ -75,6 +75,7 @@ config NEED_SG_DMA_LENGTH
 
 config SWIOTLB
def_bool y
+   select DMA_DIRECT_OPS
select IOMMU_HELPER
select NEED_SG_DMA_LENGTH
 
diff --git a/arch/mips/cavium-octeon/dma-octeon.c 
b/arch/mips/cavium-octeon/dma-octeon.c
index 6440ad3f9e3b..7b335ab21697 100644
--- a/arch/mips/cavium-octeon/dma-octeon.c
+++ b/arch/mips/cavium-octeon/dma-octeon.c
@@ -159,33 +159,13 @@ static void octeon_dma_sync_sg_for_device(struct device 
*dev,
 static void *octeon_dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-   void *ret;
-
-   if (IS_ENABLED(CONFIG_ZONE_DMA) && dev == NULL)
-   gfp |= __GFP_DMA;
-   else if (IS_ENABLED(CONFIG_ZONE_DMA) &&
-dev->coherent_dma_mask <= DMA_BIT_MASK(24))
-   gfp |= __GFP_DMA;
-   else if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
-dev->coherent_dma_mask <= DMA_BIT_MASK(32))
-   gfp |= __GFP_DMA32;
-
-   /* Don't invoke OOM killer */
-   gfp |= __GFP_NORETRY;
-
-   ret = swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
+   void *ret = swiotlb_alloc(dev, size, dma_handle, gfp, attrs);
 
mb();
 
return ret;
 }
 
-static void octeon_dma_free_coherent(struct device *dev, size_t size,
-   void *vaddr, dma_addr_t dma_handle, unsigned long attrs)
-{
-   swiotlb_free_coherent(dev, size, vaddr, dma_handle);
-}
-
 static dma_addr_t octeon_unity_phys_to_dma(struct device *dev, phys_addr_t 
paddr)
 {
return paddr;
@@ -225,7 +205,7 @@ EXPORT_SYMBOL(__dma_to_phys);
 static struct octeon_dma_map_ops octeon_linear_dma_map_ops = {
.dma_map_ops = {
.alloc = octeon_dma_alloc_coherent,
-   .free = octeon_dma_free_coherent,
+   .free = swiotlb_free,
.map_page = octeon_dma_map_page,
.unmap_page = swiotlb_unmap_page,
.map_sg = octeon_dma_map_sg,
@@ -311,7 +291,7 @@ void __init plat_swiotlb_setup(void)
 static struct octeon_dma_map_ops _octeon_pci_dma_map_ops = {
.dma_map_ops = {
.alloc = octeon_dma_alloc_coherent,
-   .free = octeon_dma_free_coherent,
+   .free = swiotlb_free,
.map_page = octeon_dma_map_page,
.unmap_page = swiotlb_unmap_page,
.map_sg = octeon_dma_map_sg,
diff --git a/arch/mips/loongson64/Kconfig b/arch/mips/loongson64/Kconfig
index 0d249fc3cfe9..6f109bb54cdb 100644
--- a/arch/mips/loongson64/Kconfig
+++ b/arch/mips/loongson64/Kconfig
@@ -136,6 +136,7 @@ config SWIOTLB
bool "Soft IOMMU Support for All-Memory DMA"
default y
depends on CPU_LOONGSON3
+   select DMA_DIRECT_OPS
select IOMMU_HELPER
select NEED_SG_DMA_LENGTH
select NEED_DMA_MAP_STATE
diff --git a/arch/mips/loongson64/common/dma-swiotlb.c 
b/arch/mips/loongson64/common/dma-swiotlb.c
index 0a02ea70e39f..6a739f8ae110 100644
--- a/arch/mips/loongson64/common/dma-swiotlb.c
+++ b/arch/mips/loongson64/common/dma-swiotlb.c
@@ -13,29 +13,12 @@
 static void *loongson_dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-   void *ret;
+   void *ret = swiotlb_alloc(dev, size, dma_handle, gfp, attrs);
 
-   if ((IS_ENABLED(CONFIG_ISA) && dev == NULL) ||
-   (IS_ENABLED(CONFIG_ZONE_DMA) &&
-dev->coherent_dma_mask < DMA_BIT_MASK(32)))
-   gfp |= __GFP_DMA;
-   else if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
-dev->coherent_dma_mask < DMA_BIT_MASK(40))
-   gfp |= __GFP_DMA32;
-
-   gfp |= __GFP_NORETRY;
-
-   ret = swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
mb();
return ret;
 }
 
-static void loongson_dma_free_coherent(struct device *dev, size_t size,
-   void *vaddr, dma_addr_t dma_handle, unsigned long attrs)
-{
-   swiotlb_free_coherent(dev, size, vaddr, dma_handle);
-}
-
 static dma_addr_t loongson_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir,
@@ -106,7 +89,7 @@ phys_addr_t __dma_to_phys(struct 

[PATCH 65/67] arm64: use swiotlb_alloc and swiotlb_free

2017-12-29 Thread Christoph Hellwig
The generic swiotlb_alloc and swiotlb_free routines already take care
of CMA allocations and of adding GFP_DMA32 where needed, so use them
instead of the arm64-specific helpers.

Signed-off-by: Christoph Hellwig 
---
 arch/arm64/Kconfig  |  1 +
 arch/arm64/mm/dma-mapping.c | 46 +++--
 2 files changed, 4 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6b6985f15d02..53205c02b18a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -59,6 +59,7 @@ config ARM64
select COMMON_CLK
select CPU_PM if (SUSPEND || CPU_IDLE)
select DCACHE_WORD_ACCESS
+   select DMA_DIRECT_OPS
select EDAC_SUPPORT
select FRAME_POINTER
select GENERIC_ALLOCATOR
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 0d641875b20e..a96ec0181818 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -91,46 +91,6 @@ static int __free_from_pool(void *start, size_t size)
return 1;
 }
 
-static void *__dma_alloc_coherent(struct device *dev, size_t size,
- dma_addr_t *dma_handle, gfp_t flags,
- unsigned long attrs)
-{
-   if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
-   dev->coherent_dma_mask <= DMA_BIT_MASK(32))
-   flags |= GFP_DMA32;
-   if (dev_get_cma_area(dev) && gfpflags_allow_blocking(flags)) {
-   struct page *page;
-   void *addr;
-
-   page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
-get_order(size), flags);
-   if (!page)
-   return NULL;
-
-   *dma_handle = phys_to_dma(dev, page_to_phys(page));
-   addr = page_address(page);
-   memset(addr, 0, size);
-   return addr;
-   } else {
-   return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
-   }
-}
-
-static void __dma_free_coherent(struct device *dev, size_t size,
-   void *vaddr, dma_addr_t dma_handle,
-   unsigned long attrs)
-{
-   bool freed;
-   phys_addr_t paddr = dma_to_phys(dev, dma_handle);
-
-
-   freed = dma_release_from_contiguous(dev,
-   phys_to_page(paddr),
-   size >> PAGE_SHIFT);
-   if (!freed)
-   swiotlb_free_coherent(dev, size, vaddr, dma_handle);
-}
-
 static void *__dma_alloc(struct device *dev, size_t size,
 dma_addr_t *dma_handle, gfp_t flags,
 unsigned long attrs)
@@ -152,7 +112,7 @@ static void *__dma_alloc(struct device *dev, size_t size,
return addr;
}
 
-   ptr = __dma_alloc_coherent(dev, size, dma_handle, flags, attrs);
+   ptr = swiotlb_alloc(dev, size, dma_handle, flags, attrs);
if (!ptr)
goto no_mem;
 
@@ -173,7 +133,7 @@ static void *__dma_alloc(struct device *dev, size_t size,
return coherent_ptr;
 
 no_map:
-   __dma_free_coherent(dev, size, ptr, *dma_handle, attrs);
+   swiotlb_free(dev, size, ptr, *dma_handle, attrs);
 no_mem:
return NULL;
 }
@@ -191,7 +151,7 @@ static void __dma_free(struct device *dev, size_t size,
return;
vunmap(vaddr);
}
-   __dma_free_coherent(dev, size, swiotlb_addr, dma_handle, attrs);
+   swiotlb_free(dev, size, swiotlb_addr, dma_handle, attrs);
 }
 
 static dma_addr_t __swiotlb_map_page(struct device *dev, struct page *page,
-- 
2.14.2



[PATCH 64/67] arm64: replace ZONE_DMA with ZONE_DMA32

2017-12-29 Thread Christoph Hellwig
arm64 uses ZONE_DMA for allocations below 32 bits.  These days we
name that zone ZONE_DMA32, which will allow us to use the dma-direct
and generic swiotlb code as-is, so rename it.

Signed-off-by: Christoph Hellwig 
---
 arch/arm64/Kconfig  |  2 +-
 arch/arm64/mm/dma-mapping.c |  6 +++---
 arch/arm64/mm/init.c| 16 
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c9a7e9e1414f..6b6985f15d02 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -227,7 +227,7 @@ config GENERIC_CSUM
 config GENERIC_CALIBRATE_DELAY
def_bool y
 
-config ZONE_DMA
+config ZONE_DMA32
def_bool y
 
 config HAVE_GENERIC_GUP
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 6840426bbe77..0d641875b20e 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -95,9 +95,9 @@ static void *__dma_alloc_coherent(struct device *dev, size_t 
size,
  dma_addr_t *dma_handle, gfp_t flags,
  unsigned long attrs)
 {
-   if (IS_ENABLED(CONFIG_ZONE_DMA) &&
+   if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
dev->coherent_dma_mask <= DMA_BIT_MASK(32))
-   flags |= GFP_DMA;
+   flags |= GFP_DMA32;
if (dev_get_cma_area(dev) && gfpflags_allow_blocking(flags)) {
struct page *page;
void *addr;
@@ -397,7 +397,7 @@ static int __init atomic_pool_init(void)
page = dma_alloc_from_contiguous(NULL, nr_pages,
 pool_size_order, GFP_KERNEL);
else
-   page = alloc_pages(GFP_DMA, pool_size_order);
+   page = alloc_pages(GFP_DMA32, pool_size_order);
 
if (page) {
int ret;
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 00e7b900ca41..8f03276443c9 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -217,7 +217,7 @@ static void __init reserve_elfcorehdr(void)
 }
 #endif /* CONFIG_CRASH_DUMP */
 /*
- * Return the maximum physical address for ZONE_DMA (DMA_BIT_MASK(32)). It
+ * Return the maximum physical address for ZONE_DMA32 (DMA_BIT_MASK(32)). It
  * currently assumes that for memory starting above 4G, 32-bit devices will
  * use a DMA offset.
  */
@@ -233,8 +233,8 @@ static void __init zone_sizes_init(unsigned long min, 
unsigned long max)
 {
unsigned long max_zone_pfns[MAX_NR_ZONES]  = {0};
 
-   if (IS_ENABLED(CONFIG_ZONE_DMA))
-   max_zone_pfns[ZONE_DMA] = PFN_DOWN(max_zone_dma_phys());
+   if (IS_ENABLED(CONFIG_ZONE_DMA32))
+   max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
max_zone_pfns[ZONE_NORMAL] = max;
 
free_area_init_nodes(max_zone_pfns);
@@ -251,9 +251,9 @@ static void __init zone_sizes_init(unsigned long min, 
unsigned long max)
memset(zone_size, 0, sizeof(zone_size));
 
/* 4GB maximum for 32-bit only capable devices */
-#ifdef CONFIG_ZONE_DMA
+#ifdef CONFIG_ZONE_DMA32
max_dma = PFN_DOWN(arm64_dma_phys_limit);
-   zone_size[ZONE_DMA] = max_dma - min;
+   zone_size[ZONE_DMA32] = max_dma - min;
 #endif
zone_size[ZONE_NORMAL] = max - max_dma;
 
@@ -266,10 +266,10 @@ static void __init zone_sizes_init(unsigned long min, 
unsigned long max)
if (start >= max)
continue;
 
-#ifdef CONFIG_ZONE_DMA
+#ifdef CONFIG_ZONE_DMA32
if (start < max_dma) {
unsigned long dma_end = min(end, max_dma);
-   zhole_size[ZONE_DMA] -= dma_end - start;
+   zhole_size[ZONE_DMA32] -= dma_end - start;
}
 #endif
if (end > max_dma) {
@@ -467,7 +467,7 @@ void __init arm64_memblock_init(void)
early_init_fdt_scan_reserved_mem();
 
/* 4GB maximum for 32-bit only capable devices */
-   if (IS_ENABLED(CONFIG_ZONE_DMA))
+   if (IS_ENABLED(CONFIG_ZONE_DMA32))
arm64_dma_phys_limit = max_zone_dma_phys();
else
arm64_dma_phys_limit = PHYS_MASK + 1;
-- 
2.14.2



[PATCH 58/67] ia64: remove an ifdef around the content of pci-dma.c

2017-12-29 Thread Christoph Hellwig
The file is only compiled if CONFIG_INTEL_IOMMU is set to start with.

Signed-off-by: Christoph Hellwig 
---
 arch/ia64/kernel/pci-dma.c | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/arch/ia64/kernel/pci-dma.c b/arch/ia64/kernel/pci-dma.c
index 35e0cad33b7d..b5df084c0af4 100644
--- a/arch/ia64/kernel/pci-dma.c
+++ b/arch/ia64/kernel/pci-dma.c
@@ -12,12 +12,7 @@
 #include 
 #include 
 #include 
-
-
-#ifdef CONFIG_INTEL_IOMMU
-
 #include 
-
 #include 
 
 dma_addr_t bad_dma_address __read_mostly;
@@ -115,5 +110,3 @@ void __init pci_iommu_alloc(void)
}
 #endif /* CONFIG_SWIOTLB */
 }
-
-#endif
-- 
2.14.2



[PATCH 19/67] microblaze: remove the dead !NOT_COHERENT_CACHE dma code

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 arch/microblaze/kernel/dma.c | 28 
 1 file changed, 28 deletions(-)

diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/dma.c
index 49b09648679b..031d889670f5 100644
--- a/arch/microblaze/kernel/dma.c
+++ b/arch/microblaze/kernel/dma.c
@@ -15,42 +15,18 @@
 #include 
 #include 
 
-#define NOT_COHERENT_CACHE
-
 static void *dma_nommu_alloc_coherent(struct device *dev, size_t size,
   dma_addr_t *dma_handle, gfp_t flag,
   unsigned long attrs)
 {
-#ifdef NOT_COHERENT_CACHE
return consistent_alloc(flag, size, dma_handle);
-#else
-   void *ret;
-   struct page *page;
-   int node = dev_to_node(dev);
-
-   /* ignore region specifiers */
-   flag  &= ~(__GFP_HIGHMEM);
-
-   page = alloc_pages_node(node, flag, get_order(size));
-   if (page == NULL)
-   return NULL;
-   ret = page_address(page);
-   memset(ret, 0, size);
-   *dma_handle = virt_to_phys(ret);
-
-   return ret;
-#endif
 }
 
 static void dma_nommu_free_coherent(struct device *dev, size_t size,
 void *vaddr, dma_addr_t dma_handle,
 unsigned long attrs)
 {
-#ifdef NOT_COHERENT_CACHE
consistent_free(size, vaddr);
-#else
-   free_pages((unsigned long)vaddr, get_order(size));
-#endif
 }
 
 static inline void __dma_sync(unsigned long paddr,
@@ -186,12 +162,8 @@ int dma_nommu_mmap_coherent(struct device *dev, struct 
vm_area_struct *vma,
if (off >= count || user_count > (count - off))
return -ENXIO;
 
-#ifdef NOT_COHERENT_CACHE
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
pfn = consistent_virt_to_pfn(cpu_addr);
-#else
-   pfn = virt_to_pfn(cpu_addr);
-#endif
return remap_pfn_range(vma, vma->vm_start, pfn + off,
   vma->vm_end - vma->vm_start, vma->vm_page_prot);
 #else
-- 
2.14.2



[PATCH 14/67] dma-mapping: move dma_mark_clean to dma-direct.h

2017-12-29 Thread Christoph Hellwig
And unlike the other helpers we don't require a <asm/dma-direct.h> as
this helper is a special case for ia64 only, and this keeps it as
simple as possible.

Signed-off-by: Christoph Hellwig 
---
 arch/arm/include/asm/dma-mapping.h   | 2 --
 arch/arm64/include/asm/dma-mapping.h | 4 
 arch/ia64/Kconfig| 1 +
 arch/ia64/include/asm/dma.h  | 2 --
 arch/mips/include/asm/dma-mapping.h  | 2 --
 arch/powerpc/include/asm/swiotlb.h   | 2 --
 arch/tile/include/asm/dma-mapping.h  | 2 --
 arch/unicore32/include/asm/dma-mapping.h | 2 --
 arch/x86/include/asm/swiotlb.h   | 2 --
 include/linux/dma-direct.h   | 9 +
 10 files changed, 10 insertions(+), 18 deletions(-)
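
The corresponding include/linux/dma-direct.h addition presumably boils down
to a declaration-or-stub pair keyed off the new ARCH_HAS_DMA_MARK_CLEAN
symbol that ia64 selects below, roughly:

/* sketch of the new block in include/linux/dma-direct.h */
#ifdef CONFIG_ARCH_HAS_DMA_MARK_CLEAN
void dma_mark_clean(void *addr, size_t size);
#else
static inline void dma_mark_clean(void *addr, size_t size)
{
}
#endif /* CONFIG_ARCH_HAS_DMA_MARK_CLEAN */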

diff --git a/arch/arm/include/asm/dma-mapping.h 
b/arch/arm/include/asm/dma-mapping.h
index 5fb1b7fbdfbe..e5d9020c9ee1 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -109,8 +109,6 @@ static inline bool is_device_dma_coherent(struct device 
*dev)
return dev->archdata.dma_coherent;
 }
 
-static inline void dma_mark_clean(void *addr, size_t size) { }
-
 /**
  * arm_dma_alloc - allocate consistent memory for DMA
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
diff --git a/arch/arm64/include/asm/dma-mapping.h 
b/arch/arm64/include/asm/dma-mapping.h
index 400fa67d3b5a..b7847eb8a7bb 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -50,9 +50,5 @@ static inline bool is_device_dma_coherent(struct device *dev)
return dev->archdata.dma_coherent;
 }
 
-static inline void dma_mark_clean(void *addr, size_t size)
-{
-}
-
 #endif /* __KERNEL__ */
 #endif /* __ASM_DMA_MAPPING_H */
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 49583c5a5d44..4d18fca885ee 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -33,6 +33,7 @@ config IA64
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_VIRT_CPU_ACCOUNTING
+   select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_HAS_SG_CHAIN
select VIRT_TO_BUS
select ARCH_DISCARD_MEMBLOCK
diff --git a/arch/ia64/include/asm/dma.h b/arch/ia64/include/asm/dma.h
index 186850eec934..23604d6a2cb2 100644
--- a/arch/ia64/include/asm/dma.h
+++ b/arch/ia64/include/asm/dma.h
@@ -20,6 +20,4 @@ extern unsigned long MAX_DMA_ADDRESS;
 
 #define free_dma(x)
 
-void dma_mark_clean(void *addr, size_t size);
-
 #endif /* _ASM_IA64_DMA_H */
diff --git a/arch/mips/include/asm/dma-mapping.h 
b/arch/mips/include/asm/dma-mapping.h
index 676c14cfc580..886e75a383f2 100644
--- a/arch/mips/include/asm/dma-mapping.h
+++ b/arch/mips/include/asm/dma-mapping.h
@@ -17,8 +17,6 @@ static inline const struct dma_map_ops 
*get_arch_dma_ops(struct bus_type *bus)
return mips_dma_map_ops;
 }
 
-static inline void dma_mark_clean(void *addr, size_t size) {}
-
 #define arch_setup_dma_ops arch_setup_dma_ops
 static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
  u64 size, const struct iommu_ops *iommu,
diff --git a/arch/powerpc/include/asm/swiotlb.h 
b/arch/powerpc/include/asm/swiotlb.h
index 01d45a5fd00b..9341ee804d19 100644
--- a/arch/powerpc/include/asm/swiotlb.h
+++ b/arch/powerpc/include/asm/swiotlb.h
@@ -15,8 +15,6 @@
 
 extern const struct dma_map_ops swiotlb_dma_ops;
 
-static inline void dma_mark_clean(void *addr, size_t size) {}
-
 extern unsigned int ppc_swiotlb_enable;
 int __init swiotlb_setup_bus_notifier(void);
 
diff --git a/arch/tile/include/asm/dma-mapping.h 
b/arch/tile/include/asm/dma-mapping.h
index 75b8aaa4e70b..d25fce101fc0 100644
--- a/arch/tile/include/asm/dma-mapping.h
+++ b/arch/tile/include/asm/dma-mapping.h
@@ -44,8 +44,6 @@ static inline void set_dma_offset(struct device *dev, 
dma_addr_t off)
dev->archdata.dma_offset = off;
 }
 
-static inline void dma_mark_clean(void *addr, size_t size) {}
-
 #define HAVE_ARCH_DMA_SET_MASK 1
 int dma_set_mask(struct device *dev, u64 mask);
 
diff --git a/arch/unicore32/include/asm/dma-mapping.h 
b/arch/unicore32/include/asm/dma-mapping.h
index 5cb250bf2d8c..f2bfec273aa7 100644
--- a/arch/unicore32/include/asm/dma-mapping.h
+++ b/arch/unicore32/include/asm/dma-mapping.h
@@ -25,7 +25,5 @@ static inline const struct dma_map_ops 
*get_arch_dma_ops(struct bus_type *bus)
 	return &swiotlb_dma_map_ops;
 }
 
-static inline void dma_mark_clean(void *addr, size_t size) {}
-
 #endif /* __KERNEL__ */
 #endif
diff --git a/arch/x86/include/asm/swiotlb.h b/arch/x86/include/asm/swiotlb.h
index bdf9aed40403..1c6a6cb230ff 100644
--- a/arch/x86/include/asm/swiotlb.h
+++ b/arch/x86/include/asm/swiotlb.h
@@ -28,8 +28,6 @@ static inline void pci_swiotlb_late_init(void)
 }
 #endif
 
-static inline void dma_mark_clean(void *addr, size_t size) {}
-
 extern void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t flags,
  

[PATCH 09/67] arc: remove CONFIG_ARC_PLAT_NEEDS_PHYS_TO_DMA

2017-12-29 Thread Christoph Hellwig
We always use the stub definitions, so remove the other, now-unused code.

Signed-off-by: Christoph Hellwig 
---
 arch/arc/Kconfig   |  3 ---
 arch/arc/include/asm/dma-mapping.h |  7 ---
 arch/arc/mm/dma.c  | 14 +++---
 3 files changed, 7 insertions(+), 17 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 9d5fd00d9e91..f3a80cf164cc 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -463,9 +463,6 @@ config ARCH_PHYS_ADDR_T_64BIT
 config ARCH_DMA_ADDR_T_64BIT
bool
 
-config ARC_PLAT_NEEDS_PHYS_TO_DMA
-   bool
-
 config ARC_KVADDR_SIZE
int "Kernel Virtual Address Space size (MB)"
range 0 512
diff --git a/arch/arc/include/asm/dma-mapping.h 
b/arch/arc/include/asm/dma-mapping.h
index 94285031c4fb..7a16824bfe98 100644
--- a/arch/arc/include/asm/dma-mapping.h
+++ b/arch/arc/include/asm/dma-mapping.h
@@ -11,13 +11,6 @@
 #ifndef ASM_ARC_DMA_MAPPING_H
 #define ASM_ARC_DMA_MAPPING_H
 
-#ifndef CONFIG_ARC_PLAT_NEEDS_PHYS_TO_DMA
-#define plat_dma_to_phys(dev, dma_handle) ((phys_addr_t)(dma_handle))
-#define plat_phys_to_dma(dev, paddr) ((dma_addr_t)(paddr))
-#else
-#include 
-#endif
-
 extern const struct dma_map_ops arc_dma_ops;
 
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index fad18261ef6a..1d405b86250c 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -60,7 +60,7 @@ static void *arc_dma_alloc(struct device *dev, size_t size,
 	/* This is linear addr (0x8000_0000 based) */
paddr = page_to_phys(page);
 
-   *dma_handle = plat_phys_to_dma(dev, paddr);
+   *dma_handle = paddr;
 
 	/* This is kernel Virtual address (0x7000_0000 based) */
if (need_kvaddr) {
@@ -92,7 +92,7 @@ static void *arc_dma_alloc(struct device *dev, size_t size,
 static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, unsigned long attrs)
 {
-   phys_addr_t paddr = plat_dma_to_phys(dev, dma_handle);
+   phys_addr_t paddr = dma_handle;
struct page *page = virt_to_page(paddr);
int is_non_coh = 1;
 
@@ -111,7 +111,7 @@ static int arc_dma_mmap(struct device *dev, struct 
vm_area_struct *vma,
 {
unsigned long user_count = vma_pages(vma);
unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-   unsigned long pfn = __phys_to_pfn(plat_dma_to_phys(dev, dma_addr));
+   unsigned long pfn = __phys_to_pfn(dma_addr);
unsigned long off = vma->vm_pgoff;
int ret = -ENXIO;
 
@@ -175,7 +175,7 @@ static dma_addr_t arc_dma_map_page(struct device *dev, 
struct page *page,
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
_dma_cache_sync(paddr, size, dir);
 
-   return plat_phys_to_dma(dev, paddr);
+   return paddr;
 }
 
 /*
@@ -190,7 +190,7 @@ static void arc_dma_unmap_page(struct device *dev, 
dma_addr_t handle,
   size_t size, enum dma_data_direction dir,
   unsigned long attrs)
 {
-   phys_addr_t paddr = plat_dma_to_phys(dev, handle);
+   phys_addr_t paddr = handle;
 
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
_dma_cache_sync(paddr, size, dir);
@@ -224,13 +224,13 @@ static void arc_dma_unmap_sg(struct device *dev, struct 
scatterlist *sg,
 static void arc_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
 {
-   _dma_cache_sync(plat_dma_to_phys(dev, dma_handle), size, 
DMA_FROM_DEVICE);
+   _dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
 }
 
 static void arc_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
 {
-   _dma_cache_sync(plat_dma_to_phys(dev, dma_handle), size, DMA_TO_DEVICE);
+   _dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
 }
 
 static void arc_dma_sync_sg_for_cpu(struct device *dev,
-- 
2.14.2



[PATCH 17/67] microblaze: rename dma_direct to dma_microblaze

2017-12-29 Thread Christoph Hellwig
This frees the dma_direct_* namespace for a generic implementation.

Signed-off-by: Christoph Hellwig 
---
 arch/microblaze/include/asm/dma-mapping.h |  4 +--
 arch/microblaze/kernel/dma.c  | 50 +++
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/microblaze/include/asm/dma-mapping.h 
b/arch/microblaze/include/asm/dma-mapping.h
index 6b9ea39405b8..add50c1373bf 100644
--- a/arch/microblaze/include/asm/dma-mapping.h
+++ b/arch/microblaze/include/asm/dma-mapping.h
@@ -18,11 +18,11 @@
 /*
  * Available generic sets of operations
  */
-extern const struct dma_map_ops dma_direct_ops;
+extern const struct dma_map_ops dma_nommu_ops;
 
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-   return &dma_direct_ops;
+   return &dma_nommu_ops;
 }
 
 #endif /* _ASM_MICROBLAZE_DMA_MAPPING_H */
diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/dma.c
index 2a9a0ec14c46..364b0ac41452 100644
--- a/arch/microblaze/kernel/dma.c
+++ b/arch/microblaze/kernel/dma.c
@@ -17,7 +17,7 @@
 
 #define NOT_COHERENT_CACHE
 
-static void *dma_direct_alloc_coherent(struct device *dev, size_t size,
+static void *dma_nommu_alloc_coherent(struct device *dev, size_t size,
   dma_addr_t *dma_handle, gfp_t flag,
   unsigned long attrs)
 {
@@ -42,7 +42,7 @@ static void *dma_direct_alloc_coherent(struct device *dev, 
size_t size,
 #endif
 }
 
-static void dma_direct_free_coherent(struct device *dev, size_t size,
+static void dma_nommu_free_coherent(struct device *dev, size_t size,
 void *vaddr, dma_addr_t dma_handle,
 unsigned long attrs)
 {
@@ -69,7 +69,7 @@ static inline void __dma_sync(unsigned long paddr,
}
 }
 
-static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,
+static int dma_nommu_map_sg(struct device *dev, struct scatterlist *sgl,
 int nents, enum dma_data_direction direction,
 unsigned long attrs)
 {
@@ -89,12 +89,12 @@ static int dma_direct_map_sg(struct device *dev, struct 
scatterlist *sgl,
return nents;
 }
 
-static int dma_direct_dma_supported(struct device *dev, u64 mask)
+static int dma_nommu_dma_supported(struct device *dev, u64 mask)
 {
return 1;
 }
 
-static inline dma_addr_t dma_direct_map_page(struct device *dev,
+static inline dma_addr_t dma_nommu_map_page(struct device *dev,
 struct page *page,
 unsigned long offset,
 size_t size,
@@ -106,7 +106,7 @@ static inline dma_addr_t dma_direct_map_page(struct device 
*dev,
return page_to_phys(page) + offset;
 }
 
-static inline void dma_direct_unmap_page(struct device *dev,
+static inline void dma_nommu_unmap_page(struct device *dev,
 dma_addr_t dma_address,
 size_t size,
 enum dma_data_direction direction,
@@ -122,7 +122,7 @@ static inline void dma_direct_unmap_page(struct device *dev,
 }
 
 static inline void
-dma_direct_sync_single_for_cpu(struct device *dev,
+dma_nommu_sync_single_for_cpu(struct device *dev,
   dma_addr_t dma_handle, size_t size,
   enum dma_data_direction direction)
 {
@@ -136,7 +136,7 @@ dma_direct_sync_single_for_cpu(struct device *dev,
 }
 
 static inline void
-dma_direct_sync_single_for_device(struct device *dev,
+dma_nommu_sync_single_for_device(struct device *dev,
  dma_addr_t dma_handle, size_t size,
  enum dma_data_direction direction)
 {
@@ -150,7 +150,7 @@ dma_direct_sync_single_for_device(struct device *dev,
 }
 
 static inline void
-dma_direct_sync_sg_for_cpu(struct device *dev,
+dma_nommu_sync_sg_for_cpu(struct device *dev,
   struct scatterlist *sgl, int nents,
   enum dma_data_direction direction)
 {
@@ -164,7 +164,7 @@ dma_direct_sync_sg_for_cpu(struct device *dev,
 }
 
 static inline void
-dma_direct_sync_sg_for_device(struct device *dev,
+dma_nommu_sync_sg_for_device(struct device *dev,
  struct scatterlist *sgl, int nents,
  enum dma_data_direction direction)
 {
@@ -178,7 +178,7 @@ dma_direct_sync_sg_for_device(struct device *dev,
 }
 
 static
-int dma_direct_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
+int dma_nommu_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
 void *cpu_addr, dma_addr_t handle, size_t size,
 unsigned long attrs)
 {
@@ -204,21 +204,21 @@ int dma_direct_mmap_coherent(struct device *dev, struct 

[PATCH 11/67] riscv: remove the unused dma_capable helper

2017-12-29 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
---
 arch/riscv/include/asm/dma-mapping.h | 8 
 1 file changed, 8 deletions(-)

diff --git a/arch/riscv/include/asm/dma-mapping.h 
b/arch/riscv/include/asm/dma-mapping.h
index 3eec1000196d..73849e2cc761 100644
--- a/arch/riscv/include/asm/dma-mapping.h
+++ b/arch/riscv/include/asm/dma-mapping.h
@@ -27,12 +27,4 @@ static inline const struct dma_map_ops 
*get_arch_dma_ops(struct bus_type *bus)
 	return &dma_noop_ops;
 }
 
-static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t 
size)
-{
-   if (!dev->dma_mask)
-   return false;
-
-   return addr + size - 1 <= *dev->dma_mask;
-}
-
 #endif /* __ASM_RISCV_DMA_MAPPING_H */
-- 
2.14.2
