Re: [PATCH v8 2/2] iommu/amd: Add basic debugfs infrastructure for AMD IOMMU

2018-06-04 Thread Randy Dunlap
On 05/29/2018 11:39 AM, Greg KH wrote:
> On Tue, May 29, 2018 at 01:23:23PM -0500, Gary R Hook wrote:
>> Implement a skeleton framework for debugfs support in the
>> AMD IOMMU. Add a hidden boolean to Kconfig that is defined
>> for the AMD IOMMU when general IOMMU DebugFS support is
>> enabled.
>>
>> Signed-off-by: Gary R Hook 
>> ---
>>  drivers/iommu/Kconfig             |    4
>>  drivers/iommu/Makefile            |    1 +
>>  drivers/iommu/amd_iommu_debugfs.c |   39 +
>>  drivers/iommu/amd_iommu_init.c    |    6 --
>>  drivers/iommu/amd_iommu_proto.h   |    6 ++
>>  drivers/iommu/amd_iommu_types.h   |    5 +
>>  6 files changed, 59 insertions(+), 2 deletions(-)
>>  create mode 100644 drivers/iommu/amd_iommu_debugfs.c
>>
>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>> index f9af25ac409f..ec223f6f4ad4 100644
>> --- a/drivers/iommu/Kconfig
>> +++ b/drivers/iommu/Kconfig
>> @@ -137,6 +137,10 @@ config AMD_IOMMU
>>your BIOS for an option to enable it or if you have an IVRS ACPI
>>table.
>>  
>> +config AMD_IOMMU_DEBUGFS
>> +def_bool y
> 
> Why default y?  Can you not boot a box without this?  If not, it should
> not be Y.
> 
>> +depends on AMD_IOMMU && IOMMU_DEBUGFS
>> +
>>  config AMD_IOMMU_V2
>>  tristate "AMD IOMMU Version 2 driver"
>>  depends on AMD_IOMMU

Gary,

By far, most driver-debugfs additions are optional and include a user-visible
Kconfig prompt so that users can choose whether to enable them or not.

I suggest that the way forward is to address Greg's debugfs_() API comments
and to add a prompt string to AMD_IOMMU_DEBUGFS.
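
For illustration, a user-visible option might look something like the sketch
below (the prompt and help text are only my wording, not the exact entry being
proposed):

config AMD_IOMMU_DEBUGFS
	bool "Enable AMD IOMMU internals in debugfs"
	depends on AMD_IOMMU && IOMMU_DEBUGFS
	help
	  Expose internal AMD IOMMU state through debugfs. This is only
	  useful for debugging the driver; if unsure, say N.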


-- 
~Randy


Re: [PATCH 5/7] iommu/dma: add support for non-strict mode

2018-06-04 Thread Leizhen (ThunderTown)



On 2018/6/2 1:51, kbuild test robot wrote:
> Hi Zhen,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on linus/master]
> [also build test WARNING on v4.17-rc7 next-20180601]
> [cannot apply to iommu/next]
> [if your patch is applied to the wrong git tree, please drop us a note to 
> help improve the system]
> 
> url:
> https://github.com/0day-ci/linux/commits/Zhen-Lei/add-non-strict-mode-support-for-arm-smmu-v3/20180602-000418
> config: x86_64-randconfig-x008-201821 (attached as .config)
> compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
> reproduce:
> # save the attached .config to linux build tree
> make ARCH=x86_64 
> 
> All warnings (new ones prefixed by >>):
> 
>    drivers//iommu/amd_iommu.c: In function 'amd_iommu_capable':
> >> drivers//iommu/amd_iommu.c:3053:2: warning: enumeration value 'IOMMU_CAP_NON_STRICT' not handled in switch [-Wswitch]
>      switch (cap) {
>      ^~
> 
> vim +/IOMMU_CAP_NON_STRICT +3053 drivers//iommu/amd_iommu.c
> 
> 645c4c8d arch/x86/kernel/amd_iommu.c Joerg Roedel 2008-12-02  3050
> ab636481 drivers/iommu/amd_iommu.c   Joerg Roedel 2014-09-05  3051  static bool amd_iommu_capable(enum iommu_cap cap)
> dbb9fd86 arch/x86/kernel/amd_iommu.c Sheng Yang   2009-03-18  3052  {
> 80a506b8 arch/x86/kernel/amd_iommu.c Joerg Roedel 2010-07-27 @3053      switch (cap) {
> 80a506b8 arch/x86/kernel/amd_iommu.c Joerg Roedel 2010-07-27  3054      case IOMMU_CAP_CACHE_COHERENCY:
> ab636481 drivers/iommu/amd_iommu.c   Joerg Roedel 2014-09-05  3055              return true;
> bdddadcb drivers/iommu/amd_iommu.c   Joerg Roedel 2012-07-02  3056      case IOMMU_CAP_INTR_REMAP:
> ab636481 drivers/iommu/amd_iommu.c   Joerg Roedel 2014-09-05  3057              return (irq_remapping_enabled == 1);
> cfdeec22 drivers/iommu/amd_iommu.c   Will Deacon  2014-10-27  3058      case IOMMU_CAP_NOEXEC:
It seems that it's better to change this to 'default' (see the sketch after the quoted report).

> cfdeec22 drivers/iommu/amd_iommu.c   Will Deacon  2014-10-27  3059              return false;
> 80a506b8 arch/x86/kernel/amd_iommu.c Joerg Roedel 2010-07-27  3060      }
> 80a506b8 arch/x86/kernel/amd_iommu.c Joerg Roedel 2010-07-27  3061
> ab636481 drivers/iommu/amd_iommu.c   Joerg Roedel 2014-09-05  3062      return false;
> dbb9fd86 arch/x86/kernel/amd_iommu.c Sheng Yang   2009-03-18  3063  }
> dbb9fd86 arch/x86/kernel/amd_iommu.c Sheng Yang   2009-03-18  3064
> 
> :: The code at line 3053 was first introduced by commit
> :: 80a506b8fdcfa868bb53eb740f928217d0966fc1 x86/amd-iommu: Export cache-coherency capability
> 
> :: TO: Joerg Roedel 
> :: CC: Joerg Roedel 
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
> 
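
For what it's worth, a minimal sketch of the 'default' approach (illustrative
only, not a tested patch against amd_iommu.c):

static bool amd_iommu_capable(enum iommu_cap cap)
{
	switch (cap) {
	case IOMMU_CAP_CACHE_COHERENCY:
		return true;
	case IOMMU_CAP_INTR_REMAP:
		return (irq_remapping_enabled == 1);
	default:
		/* IOMMU_CAP_NOEXEC, the proposed IOMMU_CAP_NON_STRICT, etc. */
		return false;
	}
}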

-- 
Thanks!
Best Regards



Re: [PATCH 5/7] iommu/dma: add support for non-strict mode

2018-06-04 Thread Leizhen (ThunderTown)



On 2018/5/31 21:04, Robin Murphy wrote:
> On 31/05/18 08:42, Zhen Lei wrote:
>> 1. Save the related domain pointer in struct iommu_dma_cookie, so that the
>> iovad can call domain->ops->flush_iotlb_all to flush the TLB.
>> 2. Define a new iommu capability, IOMMU_CAP_NON_STRICT, which is used to
>> indicate that the iommu domain supports non-strict mode.
>> 3. During the iommu domain initialization phase, call capable() to check
>> whether non-strict mode is supported. If so, call init_iova_flush_queue
>> to register the iovad->flush_cb callback.
>> 4. All unmap (including iova-free) APIs finally invoke __iommu_dma_unmap
>> -->iommu_dma_free_iova. Use iovad->flush_cb to check whether the related
>> iommu supports non-strict mode, and use IOMMU_DOMAIN_IS_STRICT to make
>> sure an IOMMU_DOMAIN_UNMANAGED domain always follows strict mode.
> 
> Once again, this is a whole load of complexity for a property which could 
> just be statically encoded at allocation, e.g. in the cookie type.
That's right. Passing the domain to the static function iommu_dma_free_iova
would be better.
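
A rough sketch of that direction (the reshuffled parameters below are only
illustrative; IOMMU_DOMAIN_IS_STRICT and iovad->flush_cb are the ones
introduced by this series):

static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
				struct iommu_domain *domain,
				dma_addr_t iova, size_t size)
{
	struct iova_domain *iovad = &cookie->iovad;

	/* The MSI case is only ever cleaning up its most recent allocation */
	if (cookie->type == IOMMU_DMA_MSI_COOKIE)
		cookie->msi_iova -= size;
	else if (!IOMMU_DOMAIN_IS_STRICT(domain) && iovad->flush_cb)
		/* non-strict: defer the free until the flush queue has run */
		queue_iova(iovad, iova_pfn(iovad, iova),
			   size >> iova_shift(iovad), 0);
	else
		free_iova_fast(iovad, iova_pfn(iovad, iova),
			       size >> iova_shift(iovad));
}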

> 
>> Signed-off-by: Zhen Lei 
>> ---
>>   drivers/iommu/dma-iommu.c | 29 ++---
>>   include/linux/iommu.h |  3 +++
>>   2 files changed, 29 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 4e885f7..2e116d9 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -55,6 +55,7 @@ struct iommu_dma_cookie {
>>   };
>>   struct list_headmsi_page_list;
>>   spinlock_tmsi_lock;
>> +struct iommu_domain*domain;
>>   };
>> static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
>> @@ -64,7 +65,8 @@ static inline size_t cookie_msi_granule(struct 
>> iommu_dma_cookie *cookie)
>>   return PAGE_SIZE;
>>   }
>>   -static struct iommu_dma_cookie *cookie_alloc(enum iommu_dma_cookie_type 
>> type)
>> +static struct iommu_dma_cookie *cookie_alloc(struct iommu_domain *domain,
>> + enum iommu_dma_cookie_type type)
>>   {
>>   struct iommu_dma_cookie *cookie;
>>   @@ -73,6 +75,7 @@ static struct iommu_dma_cookie *cookie_alloc(enum 
>> iommu_dma_cookie_type type)
>>   spin_lock_init(&cookie->msi_lock);
>>   INIT_LIST_HEAD(&cookie->msi_page_list);
>>   cookie->type = type;
>> +cookie->domain = domain;
>>   }
>>   return cookie;
>>   }
>> @@ -94,7 +97,7 @@ int iommu_get_dma_cookie(struct iommu_domain *domain)
>>   if (domain->iova_cookie)
>>   return -EEXIST;
>>   -domain->iova_cookie = cookie_alloc(IOMMU_DMA_IOVA_COOKIE);
>> +domain->iova_cookie = cookie_alloc(domain, IOMMU_DMA_IOVA_COOKIE);
>>   if (!domain->iova_cookie)
>>   return -ENOMEM;
>>   @@ -124,7 +127,7 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, 
>> dma_addr_t base)
>>   if (domain->iova_cookie)
>>   return -EEXIST;
>>   -cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE);
>> +cookie = cookie_alloc(domain, IOMMU_DMA_MSI_COOKIE);
>>   if (!cookie)
>>   return -ENOMEM;
>>   @@ -261,6 +264,17 @@ static int iova_reserve_iommu_regions(struct device 
>> *dev,
>>   return ret;
>>   }
>>   +static void iova_flush_iotlb_all(struct iova_domain *iovad)
> 
> iommu_dma_flush...
OK

> 
>> +{
>> +struct iommu_dma_cookie *cookie;
>> +struct iommu_domain *domain;
>> +
>> +cookie = container_of(iovad, struct iommu_dma_cookie, iovad);
>> +domain = cookie->domain;
>> +
>> +domain->ops->flush_iotlb_all(domain);
>> +}
>> +
>>   /**
>>* iommu_dma_init_domain - Initialise a DMA mapping domain
>>* @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
>> @@ -276,6 +290,7 @@ static int iova_reserve_iommu_regions(struct device *dev,
>>   int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>>   u64 size, struct device *dev)
>>   {
>> +const struct iommu_ops *ops = domain->ops;
>>   struct iommu_dma_cookie *cookie = domain->iova_cookie;
>>   struct iova_domain *iovad = &cookie->iovad;
>>   unsigned long order, base_pfn, end_pfn;
>> @@ -313,6 +328,11 @@ int iommu_dma_init_domain(struct iommu_domain *domain, 
>> dma_addr_t base,
>> init_iova_domain(iovad, 1UL << order, base_pfn);
>>   +if (ops->capable && ops->capable(IOMMU_CAP_NON_STRICT)) {
>> +BUG_ON(!ops->flush_iotlb_all);
>> +init_iova_flush_queue(iovad, iova_flush_iotlb_all, NULL);
>> +}
>> +
>>   return iova_reserve_iommu_regions(dev, domain);
>>   }
>>   EXPORT_SYMBOL(iommu_dma_init_domain);
>> @@ -392,6 +412,9 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie 
>> *cookie,
>>   /* The MSI case is only ever cleaning up its most recent allocation */
>>   if (cookie->type == IOMMU_DMA_MSI_COOKIE)
>>   cookie->msi_iova -= size;
>> +else if (!IOMMU_DOMAIN_IS_STRICT(cookie->domain) && iovad->flush_cb)
>> +

Re: [PATCH 4/7] iommu/amd: make sure TLB to be flushed before IOVA freed

2018-06-04 Thread Leizhen (ThunderTown)



On 2018/5/31 21:04, Robin Murphy wrote:
> On 31/05/18 08:42, Zhen Lei wrote:
>> Although the mapping has already been removed from the page table, it may
>> still exist in the TLB. If the freed IOVA is reused by someone else before
>> the flush operation has completed, the new user cannot correctly access its
>> memory.
> 
> This change seems reasonable in isolation, but why is it right in the middle 
> of a series which has nothing to do with x86?
Because the previous patch describes the background in more detail, which may
help this patch to be understood.

You're right, I will repost this patch separately.
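
To spell out the ordering the patch enforces (comments added here for
illustration only; the functional change is the one-line move visible in the
diff below):

	if (amd_iommu_unmap_flush) {
		/* 1. Invalidate the now-stale IOTLB entries ... */
		domain_flush_tlb(&dma_dom->domain);
		/* 2. ... and wait until the invalidation has completed. */
		domain_flush_complete(&dma_dom->domain);
		/*
		 * 3. Only then return the IOVA range to the allocator, so a
		 *    new user of the same IOVA can never race with stale TLB
		 *    entries.
		 */
		dma_ops_free_iova(dma_dom, dma_addr, pages);
	}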

> 
> Robin.
> 
>> Signed-off-by: Zhen Lei 
>> ---
>>   drivers/iommu/amd_iommu.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
>> index 8fb8c73..93aa389 100644
>> --- a/drivers/iommu/amd_iommu.c
>> +++ b/drivers/iommu/amd_iommu.c
>> @@ -2402,9 +2402,9 @@ static void __unmap_single(struct dma_ops_domain 
>> *dma_dom,
>>   }
>> if (amd_iommu_unmap_flush) {
>> -dma_ops_free_iova(dma_dom, dma_addr, pages);
>>   domain_flush_tlb(&dma_dom->domain);
>>   domain_flush_complete(&dma_dom->domain);
>> +dma_ops_free_iova(dma_dom, dma_addr, pages);
>>   } else {
>>   pages = __roundup_pow_of_two(pages);
>>   queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);
>>
> 
> .
> 

-- 
Thanks!
Best Regards



Re: [PATCH 3/7] iommu: prepare for the non-strict mode support

2018-06-04 Thread Leizhen (ThunderTown)



On 2018/5/31 21:04, Robin Murphy wrote:
> On 31/05/18 08:42, Zhen Lei wrote:
>> In general, an IOMMU unmap operation follows the steps below:
>> 1. remove the mapping in the page table for the specified iova range
>> 2. execute a tlbi command to invalidate the mapping cached in the TLB
>> 3. wait for the above tlbi operation to finish
>> 4. free the IOVA resource
>> 5. free the physical memory resource
>>
>> This may be a problem when unmaps are very frequent, because the combination
>> of tlbi and wait operations consumes a lot of time. A feasible method is to
>> defer the tlbi and iova-free operations: after accumulating a certain number
>> of entries or after a specified time, execute a single tlbi_all command to
>> clean up the TLB, then free the backed-up IOVAs. Mark this as non-strict mode.
>>
>> But it must be noted that, although the mapping has already been removed from
>> the page table, it may still exist in the TLB, and the freed physical memory
>> may also be reused by others. So an attacker can keep accessing memory through
>> the just-freed IOVA to obtain sensitive data or corrupt memory. Therefore VFIO
>> should always choose strict mode.
>>
>> This patch just adds a new parameter to the unmap operation, so that the
>> upper-level functions can choose which mode to apply.
> 
> This seems like it might be better handled by a flag in 
> io_pgtable_cfg->quirks. This interface change on its own looks rather 
> invasive, and the fact that it ends up only being used to pass through a 
> constant property of the domain (which is already known by the point 
> io_pgtable_alloc() is called) implies that it is indeed the wrong level of 
> abstraction.
> 
Sounds good. Thanks for your suggestion, I will try it in v2.
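
As a first rough sketch of the quirk-based approach (the flag name and bit
value are placeholders, not a settled interface):

/* io-pgtable.h: a per-domain property, fixed when the page table is allocated */
#define IO_PGTABLE_QUIRK_NON_STRICT	BIT(6)	/* placeholder: any unused bit */

/* set by the caller for non-strict DMA domains, before io_pgtable_alloc() */
pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;

/* the unmap path can then test the quirk instead of taking a 'strict' argument */
if (!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT))
	io_pgtable_tlb_sync(&data->iop);	/* strict: sync the invalidation now */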

>> No functional changes.
>>
>> Signed-off-by: Zhen Lei 
>> ---
>>   drivers/iommu/arm-smmu-v3.c| 2 +-
>>   drivers/iommu/arm-smmu.c   | 2 +-
>>   drivers/iommu/io-pgtable-arm-v7s.c | 6 +++---
>>   drivers/iommu/io-pgtable-arm.c | 6 +++---
>>   drivers/iommu/io-pgtable.h | 2 +-
>>   drivers/iommu/ipmmu-vmsa.c | 2 +-
>>   drivers/iommu/msm_iommu.c  | 2 +-
>>   drivers/iommu/mtk_iommu.c  | 2 +-
>>   drivers/iommu/qcom_iommu.c | 2 +-
>>   include/linux/iommu.h  | 2 ++
> 
> Plus things specific to io-pgtable shouldn't really be spilling into the core 
> API header either.
> 
> Robin.
> 
>>   10 files changed, 15 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
>> index 4402187..59b3387 100644
>> --- a/drivers/iommu/arm-smmu-v3.c
>> +++ b/drivers/iommu/arm-smmu-v3.c
>> @@ -1767,7 +1767,7 @@ static int arm_smmu_map(struct iommu_domain *domain, 
>> unsigned long iova,
>>   if (!ops)
>>   return 0;
>>   -return ops->unmap(ops, iova, size);
>> +return ops->unmap(ops, iova, size, IOMMU_STRICT);
>>   }
>> static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
>> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
>> index 69e7c60..253e807 100644
>> --- a/drivers/iommu/arm-smmu.c
>> +++ b/drivers/iommu/arm-smmu.c
>> @@ -1249,7 +1249,7 @@ static size_t arm_smmu_unmap(struct iommu_domain 
>> *domain, unsigned long iova,
>>   if (!ops)
>>   return 0;
>>   -return ops->unmap(ops, iova, size);
>> +return ops->unmap(ops, iova, size, IOMMU_STRICT);
>>   }
>> static void arm_smmu_iotlb_sync(struct iommu_domain *domain)
>> diff --git a/drivers/iommu/io-pgtable-arm-v7s.c 
>> b/drivers/iommu/io-pgtable-arm-v7s.c
>> index 10e4a3d..799eced 100644
>> --- a/drivers/iommu/io-pgtable-arm-v7s.c
>> +++ b/drivers/iommu/io-pgtable-arm-v7s.c
>> @@ -658,7 +658,7 @@ static size_t __arm_v7s_unmap(struct arm_v7s_io_pgtable 
>> *data,
>>   }
>> static size_t arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long 
>> iova,
>> -size_t size)
>> +size_t size, int strict)
>>   {
>>   struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
>>   @@ -883,7 +883,7 @@ static int __init arm_v7s_do_selftests(void)
>>   size = 1UL << __ffs(cfg.pgsize_bitmap);
>>   while (i < loopnr) {
>>   iova_start = i * SZ_16M;
>> -if (ops->unmap(ops, iova_start + size, size) != size)
>> +if (ops->unmap(ops, iova_start + size, size, IOMMU_STRICT) != size)
>>   return __FAIL(ops);
>> /* Remap of partial unmap */
>> @@ -902,7 +902,7 @@ static int __init arm_v7s_do_selftests(void)
>>   while (i != BITS_PER_LONG) {
>>   size = 1UL << i;
>>   -if (ops->unmap(ops, iova, size) != size)
>> +if (ops->unmap(ops, iova, size, IOMMU_STRICT) != size)
>>   return __FAIL(ops);
>> if (ops->iova_to_phys(ops, iova + 42))
>> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
>> index 39c2a05..e0f52db 100644
>> --- a/drivers/iommu/io-pgtable-arm.c
>> +++ b/drivers/iommu/io-pgtable-arm.c
>> @@ -624,7 +624,7 @@ static 

Re: [PATCH 1/7] iommu/dma: fix trival coding style mistake

2018-06-04 Thread Leizhen (ThunderTown)



On 2018/5/31 21:03, Robin Murphy wrote:
> On 31/05/18 08:42, Zhen Lei wrote:
>> The static function iova_reserve_iommu_regions is only called by function
>> iommu_dma_init_domain, and the 'if (!dev)' check in iommu_dma_init_domain
>> affects it only, so we can safely move the check into it. I think it looks
>> more natural.
> 
> As before, I disagree - the logic of iommu_dma_init_domain() is "we expect to 
> have a valid device, but stop here if we don't", and moving the check just 
> needlessly obfuscates that. It is not a coincidence that the arguments of 
> both functions are in effective order of importance.
OK

> 
>> In addition, the local variable 'ret' is only assigned in the branch of
>> 'if (region->type == IOMMU_RESV_MSI)', so the 'if (ret)' should also only
>> take effect in that branch; add braces to enclose it.
> 
> 'ret' is clearly also assigned at its declaration, to cover the (very likely) 
> case where we don't enter the loop at all. Thus testing it in the loop is 
> harmless, and cluttering that up with extra tabs and braces is just noise.
OK, I will drop this patch in v2

> 
> Robin.
> 
>> No functional changes.
>>
>> Signed-off-by: Zhen Lei 
>> ---
>>   drivers/iommu/dma-iommu.c | 12 +++-
>>   1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index ddcbbdb..4e885f7 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -231,6 +231,9 @@ static int iova_reserve_iommu_regions(struct device *dev,
>>   LIST_HEAD(resv_regions);
>>   int ret = 0;
>>   +if (!dev)
>> +return 0;
>> +
>>   if (dev_is_pci(dev))
>>   iova_reserve_pci_windows(to_pci_dev(dev), iovad);
>>   @@ -246,11 +249,12 @@ static int iova_reserve_iommu_regions(struct device 
>> *dev,
>>   hi = iova_pfn(iovad, region->start + region->length - 1);
>>   reserve_iova(iovad, lo, hi);
>>   -if (region->type == IOMMU_RESV_MSI)
>> +if (region->type == IOMMU_RESV_MSI) {
>>   ret = cookie_init_hw_msi_region(cookie, region->start,
>>   region->start + region->length);
>> -if (ret)
>> -break;
>> +if (ret)
>> +break;
>> +}
>>   }
>>   iommu_put_resv_regions(dev, &resv_regions);
>>   @@ -308,8 +312,6 @@ int iommu_dma_init_domain(struct iommu_domain *domain, 
>> dma_addr_t base,
>>   }
>> init_iova_domain(iovad, 1UL << order, base_pfn);
>> -if (!dev)
>> -return 0;
>> return iova_reserve_iommu_regions(dev, domain);
>>   }
>>
> 
> .
> 

-- 
Thanks!
Best Regards



[GIT PULL] dma-mapping updates for Linux 4.18

2018-06-04 Thread Christoph Hellwig
Hi Linus,

please pull the dma-mapping updates below.  Note that this includes a lot
of changes to the architecture Kconfig files, which have created quite a
few trivial conflicts in linux-next.  In all the cases there is no actual
interaction, just separate additions/removals that are right next to each
other.  For nds32 one of these conflicts already exists for 4.17, so
you'll see it straight away.

The following changes since commit 892a0be43edd63e1cd228af3453a064e9e94f08e:

  swiotlb: fix inversed DMA_ATTR_NO_WARN test (2018-05-02 14:48:55 +0200)

are available in the git repository at:

  git://git.infradead.org/users/hch/dma-mapping.git tags/dma-mapping-4.18

for you to fetch changes up to 2550bbfd495227945e17ed1fa1c05bce4753b86b:

  dma-direct: don't crash on device without dma_mask (2018-05-31 18:35:36 +0200)


dma-mapping updates for 4.18:

 - replace the force_dma flag with a dma_configure bus method.
   (Nipun Gupta, although one patch is incorrectly attributed to me
    due to a git rebase bug)
 - use GFP_DMA32 more aggressively in dma-direct. (Takashi Iwai)
 - remove PCI_DMA_BUS_IS_PHYS and rely on the dma-mapping API to do the
   right thing for bounce buffering.
 - move dma-debug initialization to common code, and apply a few cleanups
   to the dma-debug code.
 - cleanup the Kconfig mess around swiotlb selection
 - swiotlb comment fixup (Yisheng Xie)
 - a trivial swiotlb fix. (Dan Carpenter)
 - support swiotlb on RISC-V. (based on a patch from Palmer Dabbelt)
 - add a new generic dma-noncoherent dma_map_ops implementation and use
   it for arc, c6x and nds32.
 - improve scatterlist validity checking in dma-debug. (Robin Murphy)
 - add a struct device quirk to limit the dma-mask to 32-bit due to
   bridge/system issues, and switch x86 to use it instead of a local
   hack for VIA bridges.
 - handle devices without a dma_mask more gracefully in the dma-direct
   code.


Christoph Hellwig (42):
  drivers: remove force dma flag from buses
  scsi: reduce use of block bounce buffers
  ide: kill ide_toggle_bounce
  ide: remove the PCI_DMA_BUS_IS_PHYS check
  net: remove the PCI_DMA_BUS_IS_PHYS check in illegal_highdma
  PCI: remove PCI_DMA_BUS_IS_PHYS
  dma-debug: move initialization to common code
  dma-debug: simplify counting of preallocated requests
  dma-debug: unexport dma_debug_resize_entries and debug_dma_dump_mappings
  dma-debug: remove CONFIG_HAVE_DMA_API_DEBUG
  iommu-common: move to arch/sparc
  iommu-helper: unexport iommu_area_alloc
  iommu-helper: mark iommu_is_span_boundary as inline
  iommu-helper: move the IOMMU_HELPER config symbol to lib/
  scatterlist: move the NEED_SG_DMA_LENGTH config symbol to lib/Kconfig
  dma-mapping: move the NEED_DMA_MAP_STATE config symbol to lib/Kconfig
  arch: remove the ARCH_PHYS_ADDR_T_64BIT config symbol
  arch: define the ARCH_DMA_ADDR_T_64BIT config symbol in lib/Kconfig
  PCI: remove CONFIG_PCI_BUS_ADDR_T_64BIT
  arm: don't build swiotlb by default
  mips,unicore32: swiotlb doesn't need sg->dma_length
  swiotlb: move the SWIOTLB config symbol to lib/Kconfig
  swiotlb: remove the CONFIG_DMA_DIRECT_OPS ifdefs
  riscv: simplify Kconfig magic for 32-bit vs 64-bit kernels
  riscv: only enable ZONE_DMA32 for 64-bit
  riscv: add swiotlb support
  dma-mapping: simplify Kconfig dependencies
  dma-mapping: provide a generic dma-noncoherent implementation
  arc: simplify arc_dma_sync_single_for_{cpu,device}
  arc: fix arc_dma_sync_sg_for_{cpu,device}
  arc: fix arc_dma_{map,unmap}_page
  arc: use generic dma_noncoherent_ops
  c6x: use generic dma_noncoherent_ops
  core, dma-direct: add a flag 32-bit dma limits
  Documentation/x86: remove a stray reference to pci-nommu.c
  x86/pci-dma: remove the experimental forcesac boot option
  x86/pci-dma: remove the explicit nodac and allowdac option
  x86/pci-dma: switch the VIA 32-bit DMA quirk to use the struct device flag
  nds32: consolidate DMA cache maintainance routines
  nds32: implement the unmap_sg DMA operation
  nds32: use generic dma_noncoherent_ops
  dma-direct: don't crash on device without dma_mask

Dan Carpenter (1):
  swiotlb: remove an unecessary NULL check

Huaisheng Ye (1):
  dma-mapping: remove unused gfp_t parameter to arch_dma_alloc_attrs

Nipun Gupta (1):
  dma-mapping: move dma configuration to bus infrastructure

Robin Murphy (1):
  dma-debug: check scatterlist segments

Takashi Iwai (1):
  dma-direct: try reallocation with GFP_DMA32 if possible

Yisheng Xie (1):
  swiotlb: update comments to refer to physical instead of virtual addresses

 Documentation/admin-guide/kernel-parameters.txt|   1 -
 .../features/io/dma-api-debug/arch-support.txt |  31