Re: use generic DMA mapping code in powerpc V4

2019-01-02 Thread Christoph Hellwig
Hi Christian,

happy new year, and I hope you had a few restful days off.

I've pushed a new tree to:

   git://git.infradead.org/users/hch/misc.git powerpc-dma.6

Gitweb:

   http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/powerpc-dma.6

This has been rebased onto the latest Linus tree, which has a lot of
changes, and it also changes the patch split a bit to aid bisection.

I think

   http://git.infradead.org/users/hch/misc.git/commitdiff/c446404b041130fbd9d1772d184f24715cf2362f

might be a good commit from which to restart testing, then bisect up to
the last commit using git bisect.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH 02/15] swiotlb: remove dma_mark_clean

2019-01-02 Thread Christoph Hellwig
On Wed, Jan 02, 2019 at 01:53:33PM -0800, Tony Luck wrote:
> On Fri, Dec 7, 2018 at 11:08 AM Christoph Hellwig  wrote:
> >
> > Instead of providing a special dma_mark_clean hook just for ia64, switch
> > ia64 to use the normal arch_sync_dma_for_cpu hooks instead.
> >
> > This means that we now also set the PG_arch_1 bit for pages in the
> > swiotlb buffer, which isn't strictly needed as we will never execute code
> > out of the swiotlb buffer, but otherwise harmless.
> 
> ia64 build based on arch/ia64/configs/zx1_defconfig now fails with undefined
> symbols arch_dma_alloc and arch_dma_free (used by kernel/dma/direct.c).
> 
> This config doesn't define CONFIG_SWIOTLB, so we don't get the
> benefit of the routines in arch/ia64/kernel/dma-mapping.c

I think something like the patch below should fix it:

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index ccd56f5df8cd..8d7396bd1790 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -31,7 +31,7 @@ config IA64
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_VIRT_CPU_ACCOUNTING
select ARCH_HAS_DMA_COHERENT_TO_PFN if SWIOTLB
-   select ARCH_HAS_SYNC_DMA_FOR_CPU
+   select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB
select VIRT_TO_BUS
select ARCH_DISCARD_MEMBLOCK
select GENERIC_IRQ_PROBE


Re: [PATCH 02/15] swiotlb: remove dma_mark_clean

2019-01-02 Thread Tony Luck
On Fri, Dec 7, 2018 at 11:08 AM Christoph Hellwig  wrote:
>
> Instead of providing a special dma_mark_clean hook just for ia64, switch
> ia64 to use the normal arch_sync_dma_for_cpu hooks instead.
>
> This means that we now also set the PG_arch_1 bit for pages in the
> swiotlb buffer, which isn't strictly needed as we will never execute code
> out of the swiotlb buffer, but otherwise harmless.

ia64 build based on arch/ia64/configs/zx1_defconfig now fails with undefined
symbols arch_dma_alloc and arch_dma_free (used by kernel/dma/direct.c).

This config doesn't define CONFIG_SWIOTLB, so we don't get the
benefit of the routines in arch/ia64/kernel/dma-mapping.c

-Tony


Re: [PATCH 1/1] iova: Allow compiling the library without IOMMU support

2019-01-02 Thread Laurent Pinchart
Hi Sakari,

Thank you for the patch.

On Wednesday, 2 January 2019 23:16:57 EET Sakari Ailus wrote:
> Drivers such as the Intel IPU3 ImgU driver use the IOVA library to manage
> the device's own virtual address space while not implementing the IOMMU
> API.

Why is that? Could the IPU3 IOMMU be implemented as an IOMMU driver?

> Currently the IOVA library is only compiled if IOMMU support is
> enabled, resulting in a failure during linking due to missing symbols.
> 
> Fix this by defining IOVA library Kconfig bits independently of IOMMU
> support configuration, and descending to the iommu directory
> unconditionally during the build.
> 
> Signed-off-by: Sakari Ailus 
> ---
>  drivers/Makefile  | 2 +-
>  drivers/iommu/Kconfig | 7 ---
>  2 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/Makefile b/drivers/Makefile
> index 578f469f72fb..d9c469983592 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -56,7 +56,7 @@ obj-y   += tty/
>  obj-y+= char/
> 
>  # iommu/ comes before gpu as gpu are using iommu controllers
> -obj-$(CONFIG_IOMMU_SUPPORT)  += iommu/
> +obj-y+= iommu/
> 
>  # gpu/ comes after char for AGP vs DRM startup and after iommu
>  obj-y+= gpu/
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index d9a25715650e..d2c83e62873d 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -1,3 +1,7 @@
> +# The IOVA library may also be used by non-IOMMU_API users
> +config IOMMU_IOVA
> + tristate
> +
>  # IOMMU_API always gets selected by whoever wants it.
>  config IOMMU_API
>   bool
> @@ -81,9 +85,6 @@ config IOMMU_DEFAULT_PASSTHROUGH
> 
> If unsure, say N here.
> 
> -config IOMMU_IOVA
> - tristate
> -
>  config OF_IOMMU
> def_bool y
> depends on OF && IOMMU_API


-- 
Regards,

Laurent Pinchart





[PATCH 1/1] iova: Allow compiling the library without IOMMU support

2019-01-02 Thread Sakari Ailus
Drivers such as the Intel IPU3 ImgU driver use the IOVA library to manage
the device's own virtual address space while not implementing the IOMMU
API. Currently the IOVA library is only compiled if IOMMU support is
enabled, resulting in a failure during linking due to missing symbols.

Fix this by defining IOVA library Kconfig bits independently of IOMMU
support configuration, and descending to the iommu directory
unconditionally during the build.

Signed-off-by: Sakari Ailus 
---
 drivers/Makefile  | 2 +-
 drivers/iommu/Kconfig | 7 ---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/Makefile b/drivers/Makefile
index 578f469f72fb..d9c469983592 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -56,7 +56,7 @@ obj-y += tty/
 obj-y  += char/
 
 # iommu/ comes before gpu as gpu are using iommu controllers
-obj-$(CONFIG_IOMMU_SUPPORT)+= iommu/
+obj-y  += iommu/
 
 # gpu/ comes after char for AGP vs DRM startup and after iommu
 obj-y  += gpu/
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index d9a25715650e..d2c83e62873d 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -1,3 +1,7 @@
+# The IOVA library may also be used by non-IOMMU_API users
+config IOMMU_IOVA
+   tristate
+
 # IOMMU_API always gets selected by whoever wants it.
 config IOMMU_API
bool
@@ -81,9 +85,6 @@ config IOMMU_DEFAULT_PASSTHROUGH
 
  If unsure, say N here.
 
-config IOMMU_IOVA
-   tristate
-
 config OF_IOMMU
def_bool y
depends on OF && IOMMU_API
-- 
2.11.0



Re: [RFC v2] iommu/vt-d: Allow iommu_domain_alloc to allocate IOMMU_DOMAIN_DMA

2019-01-02 Thread James Sewart via iommu
Bump

> On 5 Dec 2018, at 17:19, James Sewart  wrote:
> 
> Hey,
> 
> There exists an issue in the logic used to determine domain association 
> with devices. Currently the driver uses find_or_alloc_domain to either 
> reuse an existing domain or allocate a new one if one isn’t found. Domains 
> should be shared between all members of an IOMMU group as this is the 
> minimum granularity that we can guarantee address space isolation.
> 
> The Intel IOMMU driver exposes pci_device_group in intel_iommu_ops as the 
> function to call to determine the group of a device; this is implemented 
> in the generic IOMMU code and checks: DMA aliases, upstream PCIe switch 
> ACS, PCI aliases, and PCI function aliases. The find_or_alloc_domain code 
> currently only uses DMA aliases to determine if a domain is shared. This 
> causes a disconnect between IOMMU groups and domains. We have observed 
> devices under a PCIe switch each having their own domain but assigned the 
> same group.
> 
> One solution would be to fix the logic in find_or_alloc_domain to add 
> checks for the other conditions that a device may share a domain. However, 
> this duplicates code which the generic IOMMU code implements. Instead this 
> issue can be fixed by allowing the allocation of default_domain on the 
> IOMMU group. This is not currently supported as the intel driver does not 
> allow allocation of domain type IOMMU_DOMAIN_DMA.
> 
> Allowing allocation of DMA domains has the effect that the default_domain 
> is non NULL and is attached to a device when initialising. This delegates 
> the handling of domains to the generic IOMMU code. Once this is 
> implemented it is possible to remove the lazy allocation of domains 
> entirely.
> 
> This patch implements DMA and identity domains to be allocated for 
> external management. As it isn’t known which device will be attached to a 
> domain, the dma domain is not initialised at alloc time. Instead it is 
> allocated when attached. As we may lose RMRR mappings when attaching a 
> device to a new domain, we also ensure these are mapped at attach time.
> 
> This will likely conflict with the work done for auxiliary domains by 
> Baolu but the code to accommodate won’t change much.
> 
> I had also started on a patch to remove find_or_alloc_domain and various 
> functions that call it but had issues with edge cases such as 
> iommu_prepare_isa that is doing domain operations at IOMMU init time.
> 
> Cheers,
> James.
> 
> 
> ---
> drivers/iommu/intel-iommu.c | 159 +---
> 1 file changed, 110 insertions(+), 49 deletions(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 41a4b8808802..6437cb2e9b22 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -351,6 +351,14 @@ static int hw_pass_through = 1;
> /* si_domain contains mulitple devices */
> #define DOMAIN_FLAG_STATIC_IDENTITY   (1 << 1)
> 
> +/* Domain managed externally, don't cleanup if it isn't attached
> + * to any devices. */
> +#define DOMAIN_FLAG_NO_CLEANUP   (1 << 2)
> +
> +/* Set after domain initialisation. Used when allocating dma domains to
> + * defer domain initialisation until it is attached to a device */
> +#define DOMAIN_FLAG_INITIALISED  (1 << 4)
> +
> #define for_each_domain_iommu(idx, domain)\
>   for (idx = 0; idx < g_num_of_iommus; idx++) \
>   if (domain->iommu_refcnt[idx])
> @@ -624,6 +632,16 @@ static inline int domain_type_is_vm_or_si(struct 
> dmar_domain *domain)
>   DOMAIN_FLAG_STATIC_IDENTITY);
> }
> 
> +static inline int domain_managed_externally(struct dmar_domain *domain)
> +{
> + return domain->flags & DOMAIN_FLAG_NO_CLEANUP;
> +}
> +
> +static inline int domain_is_initialised(struct dmar_domain *domain)
> +{
> + return domain->flags & DOMAIN_FLAG_INITIALISED;
> +}
> +
> static inline int domain_pfn_supported(struct dmar_domain *domain,
>  unsigned long pfn)
> {
> @@ -1717,7 +1735,7 @@ static void disable_dmar_iommu(struct intel_iommu 
> *iommu)
> 
>   __dmar_remove_one_dev_info(info);
> 
> - if (!domain_type_is_vm_or_si(domain)) {
> + if (!domain_managed_externally(domain)) {
>   /*
>* The domain_exit() function  can't be called under
>* device_domain_lock, as it takes this lock itself.
> @@ -1951,6 +1969,7 @@ static int domain_init(struct dmar_domain *domain, 
> struct intel_iommu *iommu,
>   domain->pgd = (struct dma_pte *)alloc_pgtable_page(domain->nid);
>   if (!domain->pgd)
>   return -ENOMEM;
> + domain->flags |= DOMAIN_FLAG_INITIALISED;
>   __iommu_flush_cache(iommu, domain->pgd, PAGE_SIZE);
>   return 0;
> }
> @@ -3234,11 +3253,9 @@ static int copy_translation_tables(struct intel_iommu 
> *iommu)
> static int __init init_dmars(void)
> {
>   struc

Re: intel_iommu or i915 regression in 4.18, 4.19.12 and drm-tip

2019-01-02 Thread Joonas Lahtinen
Quoting Eric Wong (2018-12-27 13:49:48)
> I just got a used Thinkpad X201 (Core i5 M 520, Intel QM57
> chipset) and hit some kernel panics while trying to view
> image/animation-intensive stuff in Firefox (X11) unless I use
> "iommu_intel=igfx_off".
> 
> With Debian stable backport kernels, "linux-image-4.17.0-0.bpo.3-amd64"
> (4.17.17-1~bpo9+1) has no problems.  But "linux-image-4.18.0-0.bpo.3-amd64"
> (4.18.20-2~bpo9+1) gives a blank screen before I can login via agetty
> and run startx.

Could you open a new bug at (and attach relevant information there):

https://01.org/linuxgraphics/documentation/how-report-bugs

The most confusing part is that 4.17 worked to begin with without
intel_iommu=igfx_off (unless that was the default on the older
kernel?)

Did you maybe update other parts of the system while updating the
kernel?

If you could attach the full boot dmesg from both the working and non-working
kernels, plus the config files of both kernels, in Bugzilla, that'd be a good
start!

Regards, Joonas

> Building 4.19.12 myself got me into X11 and able to start
> Firefox to panic the kernel.  I also updated to the latest BIOS
> (1.40), but it's an EOL laptop (but it's still the most powerful
> laptop I use).  I intend to replace the BIOS with Coreboot soon...
> 
> Initially, I thought I was hitting another GPU hang from 4.18+:
> 
> https://bugs.freedesktop.org/show_bug.cgi?id=107945
> 
> But building drm-tip @ commit 28bb1fc015cedadf3b099b8bd0bb27609849f362
> ("drm-tip: 2018y-12m-25d-08h-12m-37s UTC integration manifest")
> I was still able to reproduce the panic unless I use intel_iommu=igfx_off
> "i915.reset=1" did not help matters, either.
> 
> Below is what I got from netconsole while on drm-tip:
> 
> Kernel panic - not syncing: DMAR hardware is malfunctioning
> Shutting down cpus with NMI
> Kernel Offset: disabled
> ---[ end Kernel panic - not syncing: DMAR hardware is malfunctioning ]---
> [ cut here ]
> sched: Unexpected reschedule of offline CPU#3!
> WARNING: CPU: 0 PID: 105 at native_smp_send_reschedule+0x34/0x40
> Modules linked in: netconsole ccm snd_hda_codec_hdmi snd_hda_codec_conexant 
> snd_hda_codec_generic intel_powerclamp coretemp kvm_intel kvm irqbypass 
> crc32_pclmul crc32c_intel ghash_clmulni_intel arc4 iwldvm aesni_intel 
> aes_x86_64 crypto_simd cryptd mac80211 glue_helper intel_cstate iwlwifi 
> intel_uncore i915 intel_gtt i2c_algo_bit iosf_mbi drm_kms_helper cfbfillrect 
> syscopyarea intel_ips cfbimgblt sysfillrect sysimgblt fb_sys_fops cfbcopyarea 
> thinkpad_acpi prime_numbers cfg80211 ledtrig_audio i2c_i801 sg snd_hda_intel 
> led_class snd_hda_codec drm ac drm_panel_orientation_quirks snd_hwdep battery 
> e1000e agpgart snd_hda_core snd_pcm snd_timer ptp snd soundcore pps_core 
> ehci_pci ehci_hcd lpc_ich video mfd_core button acpi_cpufreq ecryptfs 
> ip_tables x_tables ipv6 evdev thermal [last unloaded: netconsole]
> CPU: 0 PID: 105 Comm: kworker/u8:3 Not tainted 4.20.0-rc7b1+ #1
> Hardware name: LENOVO 3680FBU/3680FBU, BIOS 6QET70WW (1.40 ) 10/11/2012
> Workqueue: i915 __i915_gem_free_work [i915]
> RIP: 0010:native_smp_send_reschedule+0x34/0x40
> Code: 05 69 c6 c9 00 73 15 48 8b 05 18 2d b3 00 be fd 00 00 00 48 8b 40 30 e9 
> 9a 58 7d 00 89 fe 48 c7 c7 78 73 af 81 e8 dc c2 01 00 <0f> 0b c3 66 0f 1f 84 
> 00 00 00 00 00 66 66 66 66 90 8b 05 0d 7d df
> RSP: 0018:888075003d98 EFLAGS: 00010092
> RAX: 002e RBX: 8880751a0740 RCX: 0006
> RDX: 0007 RSI: 0082 RDI: 888075015440
> RBP: 88806e823700 R08:  R09: 888072fc07c0
> R10: 888075003d60 R11: fff5c002 R12: 8880751a0740
> R13: 8880751a0740 R14:  R15: 0003
> FS:  () GS:88807500() knlGS:
> CS:  0010 DS:  ES:  CR0: 80050033
> CR2: 7fdb1f53f000 CR3: 01c0a004 CR4: 000206f0
> Call Trace:
>  
>  ? check_preempt_curr+0x4e/0x90
>  ? ttwu_do_wakeup.isra.19+0x14/0xf0
>  ? try_to_wake_up+0x323/0x410
>  ? autoremove_wake_function+0xe/0x30
>  ? __wake_up_common+0x8d/0x140
>  ? __wake_up_common_lock+0x6c/0x90
>  ? irq_work_run_list+0x49/0x80
>  ? tick_sched_handle.isra.6+0x50/0x50
>  ? update_process_times+0x3b/0x50
>  ? tick_sched_handle.isra.6+0x30/0x50
>  ? tick_sched_timer+0x3b/0x80
>  ? __hrtimer_run_queues+0xea/0x270
>  ? hrtimer_interrupt+0x101/0x240
>  ? smp_apic_timer_interrupt+0x6a/0x150
>  ? apic_timer_interrupt+0xf/0x20
>  
>  ? panic+0x1ca/0x212
>  ? panic+0x1c7/0x212
>  ? __iommu_flush_iotlb+0x19e/0x1c0
>  ? iommu_flush_iotlb_psi+0x96/0xf0
>  ? intel_unmap+0xbf/0xf0
>  ? i915_gem_object_put_pages_gtt+0x36/0x220 [i915]
>  ? drm_ht_remove+0x20/0x20 [drm]
>  ? drm_mm_remove_node+0x1ad/0x310 [drm]
>  ? __pm_runtime_resume+0x54/0x70
>  ? __i915_gem_object_unset_pages+0x129/0x170 [i915]
>  ? __i915_gem_object_put_pages+0x70/0xa0 [i915]
>  ? __i915_gem_free_objects+0x245/0x4e0 [i915]
>  ? __switch_to_a

Re: [PATCH v5 09/20] iommu/mediatek: Refine protect memory definition

2019-01-02 Thread Yong Wu
On Wed, 2019-01-02 at 14:23 +0800, Nicolas Boichat wrote:
> On Tue, Jan 1, 2019 at 11:58 AM Yong Wu  wrote:
> >
> > The protect memory setting is a little different in the different SoCs.
> > In the register REG_MMU_CTRL_REG (0x110), the TF_PROT (translation fault
> > protect) field is normally shifted by 4 bits, while only mt8173 shifts it
> > by 5 bits. This patch deletes the complex macro and uses a common if-else
> > instead.
> >
> > Also, use "F_MMU_TF_PROT_TO_PROGRAM_ADDR" instead of the hard-coded
> > value (2), which means the M4U will output the dirty data to the
> > programmed address that we allocated dynamically when a translation
> > fault occurs.
> >
> > Signed-off-by: Yong Wu 
> > ---
> > @Nicolas, I don't put this in the plat_data since only the older mt8173
> > shifts by 5. As far as I know, the latest SoCs always use the new setting,
> > like mt2712 and mt8183. Thus, I think it is unnecessary to put it in
> > plat_data and have all the latest SoCs set it. Hence, I still keep
> > "== mt8173" for this, like the reg REG_MMU_CTRL_REG.
> 
> Should be ok this way. But maybe one way to avoid hard-coding 4/5
> below is to have 2 macros:
> 
> #define F_MMU_TF_PROT_TO_PROGRAM_ADDR (2 << 4)
> #define F_MMU_TF_PROT_TO_PROGRAM_ADDR_MT8173 (2 << 5)
> 
> And still use the if below?

Thanks for your quick review.

OK for me.

I will wait for Matthias's review of the memory/ part, then send the next
version.

> 
> > ---
> >  drivers/iommu/mtk_iommu.c | 12 +---
> >  1 file changed, 5 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
> > index eca1536..35a1263 100644
> > --- a/drivers/iommu/mtk_iommu.c
> > +++ b/drivers/iommu/mtk_iommu.c
> > @@ -53,11 +53,7 @@
> >
> >  #define REG_MMU_CTRL_REG   0x110
> >  #define F_MMU_PREFETCH_RT_REPLACE_MOD  BIT(4)
> > -#define F_MMU_TF_PROTECT_SEL_SHIFT(data) \
> > -   ((data)->plat_data->m4u_plat == M4U_MT2712 ? 4 : 5)
> > -/* It's named by F_MMU_TF_PROT_SEL in mt2712. */
> > -#define F_MMU_TF_PROTECT_SEL(prot, data) \
> > -   (((prot) & 0x3) << F_MMU_TF_PROTECT_SEL_SHIFT(data))
> > +#define F_MMU_TF_PROT_TO_PROGRAM_ADDR  2
> >
> >  #define REG_MMU_IVRP_PADDR 0x114
> >
> > @@ -521,9 +517,11 @@ static int mtk_iommu_hw_init(const struct 
> > mtk_iommu_data *data)
> > return ret;
> > }
> >
> > -   regval = F_MMU_TF_PROTECT_SEL(2, data);
> > if (data->plat_data->m4u_plat == M4U_MT8173)
> > -   regval |= F_MMU_PREFETCH_RT_REPLACE_MOD;
> > +   regval = F_MMU_PREFETCH_RT_REPLACE_MOD |
> > + (F_MMU_TF_PROT_TO_PROGRAM_ADDR << 5);
> > +   else
> > +   regval = F_MMU_TF_PROT_TO_PROGRAM_ADDR << 4;
> > writel_relaxed(regval, data->base + REG_MMU_CTRL_REG);
> >
> > regval = F_L2_MULIT_HIT_EN |
> > --
> > 1.9.1
> >




Re: [PATCH v5 18/20] iommu/mediatek: Fix VLD_PA_RANGE register backup when suspend

2019-01-02 Thread Yong Wu
On Wed, 2019-01-02 at 14:54 +0800, Nicolas Boichat wrote:
> On Tue, Jan 1, 2019 at 11:59 AM Yong Wu  wrote:
> >
> > The register VLD_PA_RNG (0x118) was not backed up when adding 4GB
> > mode support for mt2712. This patch adds it.
> >
> > Fixes: 30e2fccf9512 ("iommu/mediatek: Enlarge the validate PA range
> > for 4GB mode")
> > Signed-off-by: Yong Wu 
> > ---
> >  drivers/iommu/mtk_iommu.c | 2 ++
> >  drivers/iommu/mtk_iommu.h | 1 +
> >  2 files changed, 3 insertions(+)
> >
> > diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
> > index 7fcef19..ddf1969 100644
> > --- a/drivers/iommu/mtk_iommu.c
> > +++ b/drivers/iommu/mtk_iommu.c
> > @@ -716,6 +716,7 @@ static int __maybe_unused mtk_iommu_suspend(struct 
> > device *dev)
> > reg->int_control0 = readl_relaxed(base + REG_MMU_INT_CONTROL0);
> > reg->int_main_control = readl_relaxed(base + 
> > REG_MMU_INT_MAIN_CONTROL);
> > reg->ivrp_paddr = readl_relaxed(base + REG_MMU_IVRP_PADDR);
> > +   reg->vld_pa_range = readl_relaxed(base + REG_MMU_VLD_PA_RNG);
> 
> Don't we want to add:
> if (data->plat_data->vld_pa_rng)
> before this save/restore operation? Or it doesn't matter?

It doesn't matter. If some SoCs don't have it, the register doesn't
conflict with the others. Reading it will return 0, and writing 0 will
have no effect.

> 
> > clk_disable_unprepare(data->bclk);
> > return 0;
> >  }
> > @@ -740,6 +741,7 @@ static int __maybe_unused mtk_iommu_resume(struct 
> > device *dev)
> > writel_relaxed(reg->int_control0, base + REG_MMU_INT_CONTROL0);
> > writel_relaxed(reg->int_main_control, base + 
> > REG_MMU_INT_MAIN_CONTROL);
> > writel_relaxed(reg->ivrp_paddr, base + REG_MMU_IVRP_PADDR);
> > +   writel_relaxed(reg->vld_pa_range, base + REG_MMU_VLD_PA_RNG);
> > if (m4u_dom)
> > writel(m4u_dom->cfg.arm_v7s_cfg.ttbr[0] & MMU_PT_ADDR_MASK,
> >base + REG_MMU_PT_BASE_ADDR);
> > diff --git a/drivers/iommu/mtk_iommu.h b/drivers/iommu/mtk_iommu.h
> > index 0a7c463..c500bfd 100644
> > --- a/drivers/iommu/mtk_iommu.h
> > +++ b/drivers/iommu/mtk_iommu.h
> > @@ -33,6 +33,7 @@ struct mtk_iommu_suspend_reg {
> > u32 int_control0;
> > u32 int_main_control;
> > u32 ivrp_paddr;
> > +   u32 vld_pa_range;
> 
> Well, please be consistent ,-) Either vld_pa_rng, or valid_pa_range ,-)

Thanks. I will use "vld_pa_rng", keeping it the same as the register name
from CODA.

> 
> >  };
> >
> >  enum mtk_iommu_plat {
> > --
> > 1.9.1
> >




Re: [PATCH v5 11/20] iommu/mediatek: Move vld_pa_rng into plat_data

2019-01-02 Thread Yong Wu
On Wed, 2019-01-02 at 14:45 +0800, Nicolas Boichat wrote:
> On Tue, Jan 1, 2019 at 11:58 AM Yong Wu  wrote:
> >
> > Neither mt8173 nor mt8183 has this vld_pa_rng (valid physical address
> > range) register, while mt2712 does. Move it into the plat_data.
> >
> > Signed-off-by: Yong Wu 
> > ---
> >  drivers/iommu/mtk_iommu.c | 3 ++-
> >  drivers/iommu/mtk_iommu.h | 1 +
> >  2 files changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
> > index 8d8ab21..2913ddb 100644
> > --- a/drivers/iommu/mtk_iommu.c
> > +++ b/drivers/iommu/mtk_iommu.c
> > @@ -548,7 +548,7 @@ static int mtk_iommu_hw_init(const struct 
> > mtk_iommu_data *data)
> >  upper_32_bits(data->protect_base);
> > writel_relaxed(regval, data->base + REG_MMU_IVRP_PADDR);
> >
> > -   if (data->enable_4GB && data->plat_data->m4u_plat != M4U_MT8173) {
> > +   if (data->enable_4GB && data->plat_data->vld_pa_rng) {
> > /*
> >  * If 4GB mode is enabled, the validate PA range is from
> >  * 0x1__ to 0x1__. here record bit[32:30].
> > @@ -741,6 +741,7 @@ static int __maybe_unused mtk_iommu_resume(struct 
> > device *dev)
> > .m4u_plat = M4U_MT2712,
> > .has_4gb_mode = true,
> > .has_bclk = true,
> > +   .vld_pa_rng   = true,
> > .larbid_remap = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
> >  };
> >
> > diff --git a/drivers/iommu/mtk_iommu.h b/drivers/iommu/mtk_iommu.h
> > index b46aeaa..a8c5d1e 100644
> > --- a/drivers/iommu/mtk_iommu.h
> > +++ b/drivers/iommu/mtk_iommu.h
> > @@ -48,6 +48,7 @@ struct mtk_iommu_plat_data {
> > /* HW will use the EMI clock if there isn't the "bclk". */
> > boolhas_bclk;
> > boolreset_axi;
> > +   boolvld_pa_rng;
> 
> Since this is not a register name, maybe we can use something more
> readable, like valid_pa_range?
> 
> (or at the very least describe it in a comment in the struct?)

I will add a comment about it, like:

   bool vld_pa_rng; /* valid pa range */


> 
> > unsigned char   larbid_remap[MTK_LARB_NR_MAX];
> >  };
> >
> > --
> > 1.9.1
> >




[PATCH 5/5] dma-mapping: remove a few unused exports

2019-01-02 Thread Christoph Hellwig
Now that the slow-path DMA API calls are implemented out of line, a few
helpers used only by them no longer need to be exported.

Signed-off-by: Christoph Hellwig 
---
 kernel/dma/coherent.c | 2 --
 kernel/dma/debug.c| 2 --
 2 files changed, 4 deletions(-)

diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 597d40893862..66f0fb7e9a3a 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -223,7 +223,6 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t 
size,
 */
return mem->flags & DMA_MEMORY_EXCLUSIVE;
 }
-EXPORT_SYMBOL(dma_alloc_from_dev_coherent);
 
 void *dma_alloc_from_global_coherent(ssize_t size, dma_addr_t *dma_handle)
 {
@@ -268,7 +267,6 @@ int dma_release_from_dev_coherent(struct device *dev, int 
order, void *vaddr)
 
return __dma_release_from_coherent(mem, order, vaddr);
 }
-EXPORT_SYMBOL(dma_release_from_dev_coherent);
 
 int dma_release_from_global_coherent(int order, void *vaddr)
 {
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 1e0157113d15..23cf5361bcf1 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -1512,7 +1512,6 @@ void debug_dma_alloc_coherent(struct device *dev, size_t 
size,
 
add_dma_entry(entry);
 }
-EXPORT_SYMBOL(debug_dma_alloc_coherent);
 
 void debug_dma_free_coherent(struct device *dev, size_t size,
 void *virt, dma_addr_t addr)
@@ -1540,7 +1539,6 @@ void debug_dma_free_coherent(struct device *dev, size_t 
size,
 
check_unmap(&ref);
 }
-EXPORT_SYMBOL(debug_dma_free_coherent);
 
 void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t size,
int direction, dma_addr_t dma_addr)
-- 
2.19.2



various DMA mapping fixes for 4.21-rc1

2019-01-02 Thread Christoph Hellwig
Hi all,

this series fixes up some fallout from the changes in this merge window.
Patch 1 ensures dma-debug works correctly with the merged map_page /
map_single implementation, and patch 4 makes sure the DMA API calls
are properly stubbed out for COMPILE_TEST builds on UML.  Patches 2 and
3 make patch 4 a little simpler, and patch 5 removes various exports
not needed now that we have moved a lot of functionality out of line.


[PATCH 3/5] dma-mapping: remove dmam_{declare, release}_coherent_memory

2019-01-02 Thread Christoph Hellwig
These functions have never been used.

Signed-off-by: Christoph Hellwig 
---
 Documentation/driver-model/devres.txt |  1 -
 include/linux/dma-mapping.h   | 19 -
 kernel/dma/mapping.c  | 55 ---
 3 files changed, 75 deletions(-)

diff --git a/Documentation/driver-model/devres.txt 
b/Documentation/driver-model/devres.txt
index 841c99529d27..b277cafce71e 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -250,7 +250,6 @@ DMA
   dmaenginem_async_device_register()
   dmam_alloc_coherent()
   dmam_alloc_attrs()
-  dmam_declare_coherent_memory()
   dmam_free_coherent()
   dmam_pool_create()
   dmam_pool_destroy()
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index fa2ebe8ad4d0..937c2a949fca 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -691,25 +691,6 @@ static inline void dmam_free_coherent(struct device *dev, 
size_t size,
  void *vaddr, dma_addr_t dma_handle) { }
 #endif /* !CONFIG_HAS_DMA */
 
-#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
-extern int dmam_declare_coherent_memory(struct device *dev,
-   phys_addr_t phys_addr,
-   dma_addr_t device_addr, size_t size,
-   int flags);
-extern void dmam_release_declared_memory(struct device *dev);
-#else /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
-static inline int dmam_declare_coherent_memory(struct device *dev,
-   phys_addr_t phys_addr, dma_addr_t device_addr,
-   size_t size, gfp_t gfp)
-{
-   return 0;
-}
-
-static inline void dmam_release_declared_memory(struct device *dev)
-{
-}
-#endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
-
 static inline void *dmam_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
 {
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index f00544cda4e9..a11006b6d8e8 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -105,61 +105,6 @@ void *dmam_alloc_attrs(struct device *dev, size_t size, 
dma_addr_t *dma_handle,
 }
 EXPORT_SYMBOL(dmam_alloc_attrs);
 
-#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
-
-static void dmam_coherent_decl_release(struct device *dev, void *res)
-{
-   dma_release_declared_memory(dev);
-}
-
-/**
- * dmam_declare_coherent_memory - Managed dma_declare_coherent_memory()
- * @dev: Device to declare coherent memory for
- * @phys_addr: Physical address of coherent memory to be declared
- * @device_addr: Device address of coherent memory to be declared
- * @size: Size of coherent memory to be declared
- * @flags: Flags
- *
- * Managed dma_declare_coherent_memory().
- *
- * RETURNS:
- * 0 on success, -errno on failure.
- */
-int dmam_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-dma_addr_t device_addr, size_t size, int flags)
-{
-   void *res;
-   int rc;
-
-   res = devres_alloc(dmam_coherent_decl_release, 0, GFP_KERNEL);
-   if (!res)
-   return -ENOMEM;
-
-   rc = dma_declare_coherent_memory(dev, phys_addr, device_addr, size,
-flags);
-   if (!rc)
-   devres_add(dev, res);
-   else
-   devres_free(res);
-
-   return rc;
-}
-EXPORT_SYMBOL(dmam_declare_coherent_memory);
-
-/**
- * dmam_release_declared_memory - Managed dma_release_declared_memory().
- * @dev: Device to release declared coherent memory for
- *
- * Managed dmam_release_declared_memory().
- */
-void dmam_release_declared_memory(struct device *dev)
-{
-   WARN_ON(devres_destroy(dev, dmam_coherent_decl_release, NULL, NULL));
-}
-EXPORT_SYMBOL(dmam_release_declared_memory);
-
-#endif
-
 /*
  * Create scatter-list for the already allocated DMA buffer.
  */
-- 
2.19.2



[PATCH 1/5] dma-mapping: implement dma_map_single_attrs using dma_map_page_attrs

2019-01-02 Thread Christoph Hellwig
And also switch the way we implement the unmap side around to stay
consistent.  This ensures dma-debug works again because it records which
function we used for mapping to ensure it is also used for unmapping,
and also reduces further code duplication.  Last but not least this
also officially allows calling dma_sync_single_* for mappings created
using dma_map_page, which is perfectly fine given that the sync calls
only take a dma_addr_t, but not a virtual address or struct page.

Fixes: 7f0fee242e ("dma-mapping: merge dma_unmap_page_attrs and dma_unmap_single_attrs")
Signed-off-by: Christoph Hellwig 
---
 include/linux/dma-debug.h   | 11 +++
 include/linux/dma-mapping.h | 66 ++---
 kernel/dma/debug.c  | 17 +++---
 3 files changed, 32 insertions(+), 62 deletions(-)

diff --git a/include/linux/dma-debug.h b/include/linux/dma-debug.h
index 2ad5c363d7d5..cb422cbe587d 100644
--- a/include/linux/dma-debug.h
+++ b/include/linux/dma-debug.h
@@ -35,13 +35,12 @@ extern void debug_dma_map_single(struct device *dev, const 
void *addr,
 
 extern void debug_dma_map_page(struct device *dev, struct page *page,
   size_t offset, size_t size,
-  int direction, dma_addr_t dma_addr,
-  bool map_single);
+  int direction, dma_addr_t dma_addr);
 
 extern void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
 
 extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
-size_t size, int direction, bool map_single);
+size_t size, int direction);
 
 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
 int nents, int mapped_ents, int direction);
@@ -95,8 +94,7 @@ static inline void debug_dma_map_single(struct device *dev, const void *addr,
 
 static inline void debug_dma_map_page(struct device *dev, struct page *page,
  size_t offset, size_t size,
- int direction, dma_addr_t dma_addr,
- bool map_single)
+ int direction, dma_addr_t dma_addr)
 {
 }
 
@@ -106,8 +104,7 @@ static inline void debug_dma_mapping_error(struct device *dev,
 }
 
 static inline void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
-   size_t size, int direction,
-   bool map_single)
+   size_t size, int direction)
 {
 }
 
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index ba521d5506c9..0452a8be2789 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -284,32 +284,25 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
 }
 #endif
 
-static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
- size_t size,
- enum dma_data_direction dir,
- unsigned long attrs)
+static inline dma_addr_t dma_map_page_attrs(struct device *dev,
+   struct page *page, size_t offset, size_t size,
+   enum dma_data_direction dir, unsigned long attrs)
 {
const struct dma_map_ops *ops = get_dma_ops(dev);
dma_addr_t addr;
 
BUG_ON(!valid_dma_direction(dir));
-   debug_dma_map_single(dev, ptr, size);
if (dma_is_direct(ops))
-   addr = dma_direct_map_page(dev, virt_to_page(ptr),
-   offset_in_page(ptr), size, dir, attrs);
+   addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
else
-   addr = ops->map_page(dev, virt_to_page(ptr),
-   offset_in_page(ptr), size, dir, attrs);
-   debug_dma_map_page(dev, virt_to_page(ptr),
-  offset_in_page(ptr), size,
-  dir, addr, true);
+   addr = ops->map_page(dev, page, offset, size, dir, attrs);
+   debug_dma_map_page(dev, page, offset, size, dir, addr);
+
return addr;
 }
 
-static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
- size_t size,
- enum dma_data_direction dir,
- unsigned long attrs)
+static inline void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr,
+   size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
const struct dma_map_ops *ops = get_dma_ops(dev);
 
@@ -318,13 +311,7 @@ static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
dma_direct_unmap_page(dev, addr, size, dir, attrs);
else if (ops->unm

[PATCH 2/5] dma-mapping: implement dmam_alloc_coherent using dmam_alloc_attrs

2019-01-02 Thread Christoph Hellwig
dmam_alloc_coherent is just the default no-flags case of
dmam_alloc_attrs, so take advantage of this, just like the non-managed
version already does.

Signed-off-by: Christoph Hellwig 
---
 include/linux/dma-mapping.h | 20 ---
 kernel/dma/mapping.c| 39 -
 2 files changed, 13 insertions(+), 46 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 0452a8be2789..fa2ebe8ad4d0 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -677,21 +677,20 @@ dma_mark_declared_memory_occupied(struct device *dev,
  * Managed DMA API
  */
 #ifdef CONFIG_HAS_DMA
-extern void *dmam_alloc_coherent(struct device *dev, size_t size,
-dma_addr_t *dma_handle, gfp_t gfp);
+extern void *dmam_alloc_attrs(struct device *dev, size_t size,
+dma_addr_t *dma_handle, gfp_t gfp,
+unsigned long attrs);
 extern void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
   dma_addr_t dma_handle);
 #else /* !CONFIG_HAS_DMA */
-static inline void *dmam_alloc_coherent(struct device *dev, size_t size,
-   dma_addr_t *dma_handle, gfp_t gfp)
+static inline void *dmam_alloc_attrs(struct device *dev, size_t size,
+   dma_addr_t *dma_handle, gfp_t gfp,
+   unsigned long attrs)
 { return NULL; }
 static inline void dmam_free_coherent(struct device *dev, size_t size,
  void *vaddr, dma_addr_t dma_handle) { }
 #endif /* !CONFIG_HAS_DMA */
 
-extern void *dmam_alloc_attrs(struct device *dev, size_t size,
- dma_addr_t *dma_handle, gfp_t gfp,
- unsigned long attrs);
 #ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
 extern int dmam_declare_coherent_memory(struct device *dev,
phys_addr_t phys_addr,
@@ -711,6 +710,13 @@ static inline void dmam_release_declared_memory(struct device *dev)
 }
 #endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
 
+static inline void *dmam_alloc_coherent(struct device *dev, size_t size,
+   dma_addr_t *dma_handle, gfp_t gfp)
+{
+   return dmam_alloc_attrs(dev, size, dma_handle, gfp,
+   (gfp & __GFP_NOWARN) ? DMA_ATTR_NO_WARN : 0);
+}
+
 static inline void *dma_alloc_wc(struct device *dev, size_t size,
 dma_addr_t *dma_addr, gfp_t gfp)
 {
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index d7c34d2d1ba5..f00544cda4e9 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -45,45 +45,6 @@ static int dmam_match(struct device *dev, void *res, void *match_data)
return 0;
 }
 
-/**
- * dmam_alloc_coherent - Managed dma_alloc_coherent()
- * @dev: Device to allocate coherent memory for
- * @size: Size of allocation
- * @dma_handle: Out argument for allocated DMA handle
- * @gfp: Allocation flags
- *
- * Managed dma_alloc_coherent().  Memory allocated using this function
- * will be automatically released on driver detach.
- *
- * RETURNS:
- * Pointer to allocated memory on success, NULL on failure.
- */
-void *dmam_alloc_coherent(struct device *dev, size_t size,
-  dma_addr_t *dma_handle, gfp_t gfp)
-{
-   struct dma_devres *dr;
-   void *vaddr;
-
-   dr = devres_alloc(dmam_release, sizeof(*dr), gfp);
-   if (!dr)
-   return NULL;
-
-   vaddr = dma_alloc_coherent(dev, size, dma_handle, gfp);
-   if (!vaddr) {
-   devres_free(dr);
-   return NULL;
-   }
-
-   dr->vaddr = vaddr;
-   dr->dma_handle = *dma_handle;
-   dr->size = size;
-
-   devres_add(dev, dr);
-
-   return vaddr;
-}
-EXPORT_SYMBOL(dmam_alloc_coherent);
-
 /**
  * dmam_free_coherent - Managed dma_free_coherent()
  * @dev: Device to free coherent memory for
-- 
2.19.2



[PATCH 4/5] dma-mapping: properly stub out the DMA API for !CONFIG_HAS_DMA

2019-01-02 Thread Christoph Hellwig
This avoids link failures in drivers using the DMA API when they
are compiled for user mode Linux with CONFIG_COMPILE_TEST=y.

Fixes: 356da6d0cd ("dma-mapping: bypass indirect calls for dma-direct")
Signed-off-by: Christoph Hellwig 
---
 include/linux/dma-mapping.h | 255 +++-
 1 file changed, 164 insertions(+), 91 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 937c2a949fca..cef2127e1d70 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -194,33 +194,6 @@ static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
 }
 #endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
 
-#ifdef CONFIG_HAS_DMA
-#include 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
-{
-   if (dev && dev->dma_ops)
-   return dev->dma_ops;
-   return get_arch_dma_ops(dev ? dev->bus : NULL);
-}
-
-static inline void set_dma_ops(struct device *dev,
-  const struct dma_map_ops *dma_ops)
-{
-   dev->dma_ops = dma_ops;
-}
-#else
-/*
- * Define the dma api to allow compilation of dma dependent code.
- * Code that depends on the dma-mapping API needs to set 'depends on HAS_DMA'
- * in its Kconfig, unless it already depends on  || COMPILE_TEST,
- * where  guarantuees the availability of the dma-mapping API.
- */
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
-{
-   return NULL;
-}
-#endif
-
 static inline bool dma_is_direct(const struct dma_map_ops *ops)
 {
return likely(!ops);
@@ -284,6 +257,22 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
 }
 #endif
 
+#ifdef CONFIG_HAS_DMA
+#include 
+
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+{
+   if (dev && dev->dma_ops)
+   return dev->dma_ops;
+   return get_arch_dma_ops(dev ? dev->bus : NULL);
+}
+
+static inline void set_dma_ops(struct device *dev,
+  const struct dma_map_ops *dma_ops)
+{
+   dev->dma_ops = dma_ops;
+}
+
 static inline dma_addr_t dma_map_page_attrs(struct device *dev,
struct page *page, size_t offset, size_t size,
enum dma_data_direction dir, unsigned long attrs)
@@ -399,13 +388,6 @@ static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
debug_dma_sync_single_for_cpu(dev, addr, size, dir);
 }
 
-static inline void dma_sync_single_range_for_cpu(struct device *dev,
-   dma_addr_t addr, unsigned long offset, size_t size,
-   enum dma_data_direction dir)
-{
-   return dma_sync_single_for_cpu(dev, addr + offset, size, dir);
-}
-
 static inline void dma_sync_single_for_device(struct device *dev,
  dma_addr_t addr, size_t size,
  enum dma_data_direction dir)
@@ -420,13 +402,6 @@ static inline void dma_sync_single_for_device(struct device *dev,
debug_dma_sync_single_for_device(dev, addr, size, dir);
 }
 
-static inline void dma_sync_single_range_for_device(struct device *dev,
-   dma_addr_t addr, unsigned long offset, size_t size,
-   enum dma_data_direction dir)
-{
-   return dma_sync_single_for_device(dev, addr + offset, size, dir);
-}
-
 static inline void
 dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction dir)
@@ -456,6 +431,138 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 
 }
 
+static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+   debug_dma_mapping_error(dev, dma_addr);
+
+   if (dma_addr == DMA_MAPPING_ERROR)
+   return -ENOMEM;
+   return 0;
+}
+
+void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
+   gfp_t flag, unsigned long attrs);
+void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
+   dma_addr_t dma_handle, unsigned long attrs);
+void *dmam_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
+   gfp_t gfp, unsigned long attrs);
+void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
+   dma_addr_t dma_handle);
+void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
+   enum dma_data_direction dir);
+int dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt,
+   void *cpu_addr, dma_addr_t dma_addr, size_t size,
+   unsigned long attrs);
+int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+   void *cpu_addr, dma_addr_t dma_addr, size_t size,
+   unsigned long attrs);
+int dma_supported(struct device *dev, u64 mask);
+int dma_set_mask(struct device *dev, u64 mask);
+int dma_set_coherent_mask(struct device *dev, u64 mask);
+u64 dma_get_required_mask(struct device *dev);
+#el