MICRADS 2020, Quito, Ecuador | Deadline: December 31

2019-12-21 Thread Maria Lemos
* Proceedings by Springer, indexed by Scopus, ISI, EI-Compendex, Google Scholar, etc.

 

--

MICRADS´20 - The 2020 Multidisciplinary International Conference of Research 
Applied to Defense and Security

Quito, Ecuador, 13 - 15 May 2020

http://www.micrads.org/ 

--

 

Scope

MICRADS´20 - The 2020 Multidisciplinary International Conference of Research
Applied to Defense and Security, to be held in Quito, Ecuador, on 13-15 May
2020, is an international forum for researchers and practitioners to present
and discuss the most recent innovations, trends, results, experiences and
concerns across the several perspectives of Defense and Security.

We are pleased to invite you to submit your papers to MICRADS´20. They can be 
written in English, Spanish or Portuguese. All submissions will be reviewed on 
the basis of relevance, originality, importance and clarity.

 

Topics

Submitted papers should be related to one or more of the main themes proposed
for the Conference:

 

Area A: Systems, Communication and Defense

A1) Information and Communication Technology in Education
A2) Simulation and computer vision in military applications
A3) Analysis and Signal Processing
A4) Cybersecurity and Cyberdefense
A5) Computer Networks, Mobility and Pervasive Systems

  

Area B: Strategy and political-administrative vision in Defense

B1) Safety and Maritime Protection
B2) Strategy, Geopolitics and Oceanopolitics
B3) Planning, economy and logistics applied to Defense
B4) Leadership and e-leadership
B5) Military Marketing
B6) Health informatics in military applications

 

Area C: Engineering and technologies applied to Defense

C1) Wearable Technology and Assistance Devices
C2) Military Naval Engineering
C3) Weapons and Combat Systems
C4) Chemical, Biological and Nuclear Defense
C5) Defense Engineering (General)

  

Submission and Decision

Submitted papers written in English (up to 10 pages) must comply with the
format of the Smart Innovation, Systems and Technologies series (see the
Instructions for Authors on the Springer website), must not have been
published before, must not be under review for any other conference or
publication, and must not include any information leading to the authors’
identification. Therefore, the authors’ names, affiliations and e-mails
should not be included in the version for evaluation by the Scientific
Committee. This information should only be included in the camera-ready
version, saved in Word or LaTeX format and also in PDF format. These files,
together with the filled-out Consent to Publish form, must be uploaded to
the conference management system in a ZIP file.

Submitted papers written in Spanish or Portuguese (up to 15 pages) must
comply with the format of RISTI - Revista Ibérica de Sistemas e Tecnologias
de Informação (instructions/template for authors available in Spanish or
Portuguese), must not have been published before, must not be under review
for any other conference or publication, and must not include any
information leading to the authors’ identification. Therefore, the authors’
names, affiliations and e-mails should not be included in the version for
evaluation by the Scientific Committee. This information should only be
included in the camera-ready version, saved in Word format. These files
must be uploaded to the conference management system in a ZIP file.

All papers will be subjected to a “blind review” by at least two members of the 
Scientific Committee.

Based on the Scientific Committee’s evaluation, a paper can be rejected or
accepted by the Conference Chairs. In the latter case, it can be accepted as
a paper or as a poster.

The authors of papers accepted as posters must produce and print a poster to
be exhibited during the Conference. This poster must follow an A1 or A2
vertical format. The Conference includes Work Sessions where these posters
are presented and orally discussed, with a 7-minute limit per poster.

The authors of accepted papers will have 15 minutes to present their work in a 
Conference Work Session; approximately 5 minutes of discussion will follow each 
presentation.

 

Publication and Indexing

To ensure that an accepted paper is published, at least one of the authors
must be fully registered by 18 February 2020, and the paper must comply with
the suggested layout and page limit (up to 10 pages). Additionally, all
recommended changes must be addressed by the authors before they submit the
camera-ready version.

No more than one paper per registration will be published. An extra fee must be 
paid for publication of 

[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers

2019-12-21 Thread Tom Murphy
Allow the dma-iommu api to use bounce buffers for untrusted devices.
This is a copy of the intel bounce buffer code.
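
For reference, the test for whether a device needs bouncing mirrors the
device_needs_bounce() macro this series removes from intel-iommu.c; a
minimal stand-alone sketch, illustrative only (the helper name is an
assumption, the untrusted flag is set by the PCI core for external-facing
devices):

	#include <linux/pci.h>

	static bool example_needs_bounce_buffer(struct device *dev)
	{
		/* Only untrusted (e.g. externally exposed) PCI devices
		 * are forced through swiotlb bounce pages.
		 */
		return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
	}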

Signed-off-by: Tom Murphy 
---
 drivers/iommu/dma-iommu.c | 93 ---
 drivers/iommu/iommu.c | 10 +
 include/linux/iommu.h |  9 +++-
 3 files changed, 95 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4eac3cd35443..cf778db7d84d 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,9 +20,11 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
+#include 
 
 struct iommu_dma_msi_page {
struct list_head list;
@@ -505,29 +507,89 @@ static void __iommu_dma_unmap(struct device *dev, 
dma_addr_t dma_addr,
iommu_tlb_sync(domain, &iotlb_gather);
}
 
+
iommu_dma_free_iova(cookie, dma_addr, size, freelist);
 }
 
+static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
+   size_t size, enum dma_data_direction dir,
+   unsigned long attrs)
+{
+   struct iommu_domain *domain = iommu_get_dma_domain(dev);
+   struct iommu_dma_cookie *cookie = domain->iova_cookie;
+   struct iova_domain *iovad = &cookie->iovad;
+   size_t iova_off = iova_offset(iovad, dma_addr);
+   size_t aligned_size = iova_align(iovad, size + iova_off);
+   phys_addr_t phys;
+
+   phys = iommu_iova_to_phys(domain, dma_addr);
+   if (WARN_ON(!phys))
+   return;
+
+   __iommu_dma_unmap(dev, dma_addr, size);
+
+#ifdef CONFIG_SWIOTLB
+   if (unlikely(is_swiotlb_buffer(phys)))
+   swiotlb_tbl_unmap_single(dev, phys, size,
+   aligned_size, dir, attrs);
+#endif
+}
+
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-   size_t size, int prot, dma_addr_t dma_mask)
+   size_t org_size, dma_addr_t dma_mask, bool coherent,
+   enum dma_data_direction dir, unsigned long attrs)
 {
+   int prot = dma_info_to_prot(dir, coherent, attrs);
struct iommu_domain *domain = iommu_get_dma_domain(dev);
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iova_domain *iovad = &cookie->iovad;
size_t iova_off = iova_offset(iovad, phys);
+   size_t aligned_size = iova_align(iovad, org_size + iova_off);
dma_addr_t iova;
 
if (unlikely(iommu_dma_deferred_attach(dev, domain)))
return DMA_MAPPING_ERROR;
 
-   size = iova_align(iovad, size + iova_off);
+#ifdef CONFIG_SWIOTLB
+   /*
+* If both the physical buffer start address and size are
+* page aligned, we don't need to use a bounce page.
+*/
+   if (iommu_needs_bounce_buffer(dev)
+   && !iova_offset(iovad, phys | org_size)) {
+   phys = swiotlb_tbl_map_single(dev,
+   __phys_to_dma(dev, io_tlb_start),
+   phys, org_size, aligned_size, dir, attrs);
+
+   if (phys == DMA_MAPPING_ERROR)
+   return DMA_MAPPING_ERROR;
+
+   /* Cleanup the padding area. */
+   void *padding_start = phys_to_virt(phys);
+   size_t padding_size = aligned_size;
+
+   if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+   (dir == DMA_TO_DEVICE ||
+dir == DMA_BIDIRECTIONAL)) {
+   padding_start += org_size;
+   padding_size -= org_size;
+   }
 
-   iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+   memset(padding_start, 0, padding_size);
+   }
+#endif
+
+   iova = iommu_dma_alloc_iova(domain, aligned_size, dma_mask, dev);
if (!iova)
return DMA_MAPPING_ERROR;
 
-   if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
-   iommu_dma_free_iova(cookie, iova, size, NULL);
+   if (iommu_map_atomic(domain, iova, phys - iova_off, aligned_size,
+   prot)) {
+
+   if (unlikely(is_swiotlb_buffer(phys)))
+   swiotlb_tbl_unmap_single(dev, phys, aligned_size,
+   aligned_size, dir, attrs);
+   iommu_dma_free_iova(cookie, iova, aligned_size, NULL);
return DMA_MAPPING_ERROR;
}
return iova + iova_off;
@@ -761,10 +823,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, 
struct page *page,
 {
phys_addr_t phys = page_to_phys(page) + offset;
bool coherent = dev_is_dma_coherent(dev);
-   int prot = dma_info_to_prot(dir, coherent, attrs);
dma_addr_t dma_handle;
 
-   dma_handle = __iommu_dma_map(dev, phys, size, prot, dma_get_mask(dev));
+   dma_handle = __iommu_dma_map(dev, phys, size, dma_get_mask(dev),
+   coherent, dir, attrs);
if 

[PATCH 7/8] iommu/vt-d: Convert intel iommu driver to the iommu ops

2019-12-21 Thread Tom Murphy
Convert the intel iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the intel iommu driver.
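
With the driver-private reserved-iova rbtree gone, ranges that must never
be used for DMA flow through the iommu core's reserved-region mechanism
instead. A minimal sketch of that pattern, illustrative only and not code
from this patch (the IOAPIC constants mirror the ones in intel-iommu.c):

	#include <linux/iommu.h>
	#include <linux/list.h>

	#define EX_IOAPIC_RANGE_START	0xfee00000ULL
	#define EX_IOAPIC_RANGE_END	0xfeefffffULL

	static void example_get_resv_regions(struct device *dev,
					     struct list_head *head)
	{
		struct iommu_resv_region *reg;

		/* Publish the IOAPIC window so the iommu core keeps it
		 * out of the iova allocator, rather than reserving it in
		 * a driver-private rbtree.
		 */
		reg = iommu_alloc_resv_region(EX_IOAPIC_RANGE_START,
				EX_IOAPIC_RANGE_END - EX_IOAPIC_RANGE_START + 1,
				0, IOMMU_RESV_MSI);
		if (reg)
			list_add_tail(&reg->list, head);
	}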

Signed-off-by: Tom Murphy 
---
 drivers/iommu/Kconfig   |   1 +
 drivers/iommu/intel-iommu.c | 742 +++-
 include/linux/intel-iommu.h |   1 -
 3 files changed, 55 insertions(+), 689 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 0b9d78a0f3ac..4126bb2794c7 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -188,6 +188,7 @@ config INTEL_IOMMU
select NEED_DMA_MAP_STATE
select DMAR_TABLE
select SWIOTLB
+   select IOMMU_DMA
help
  DMA remapping (DMAR) devices support enables independent address
  translations for Direct Memory Access (DMA) from devices.
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 675ca2aa6e20..e673e1ee9288 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -31,6 +31,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -41,7 +42,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -380,9 +380,6 @@ EXPORT_SYMBOL_GPL(intel_iommu_gfx_mapped);
 static DEFINE_SPINLOCK(device_domain_lock);
 static LIST_HEAD(device_domain_list);
 
-#define device_needs_bounce(d) (!intel_no_bounce && dev_is_pci(d) &&   \
-   to_pci_dev(d)->untrusted)
-
 /*
  * Iterate over elements in device_domain_list and call the specified
  * callback @fn against each element.
@@ -1180,13 +1177,6 @@ static void dma_free_pagelist(struct page *freelist)
}
 }
 
-static void iova_entry_free(unsigned long data)
-{
-   struct page *freelist = (struct page *)data;
-
-   dma_free_pagelist(freelist);
-}
-
 /* iommu handling */
 static int iommu_alloc_root_entry(struct intel_iommu *iommu)
 {
@@ -1530,16 +1520,14 @@ static inline void __mapping_notify_one(struct 
intel_iommu *iommu,
iommu_flush_write_buffer(iommu);
 }
 
-static void iommu_flush_iova(struct iova_domain *iovad)
+static void intel_flush_iotlb_all(struct iommu_domain *domain)
 {
-   struct dmar_domain *domain;
+   struct dmar_domain *dmar_domain = to_dmar_domain(domain);
int idx;
 
-   domain = container_of(iovad, struct dmar_domain, iovad);
-
-   for_each_domain_iommu(idx, domain) {
+   for_each_domain_iommu(idx, dmar_domain) {
struct intel_iommu *iommu = g_iommus[idx];
-   u16 did = domain->iommu_did[iommu->seq_id];
+   u16 did = dmar_domain->iommu_did[iommu->seq_id];
 
iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
 
@@ -1784,53 +1772,6 @@ static int domain_detach_iommu(struct dmar_domain 
*domain,
return count;
 }
 
-static struct iova_domain reserved_iova_list;
-static struct lock_class_key reserved_rbtree_key;
-
-static int dmar_init_reserved_ranges(void)
-{
-   struct pci_dev *pdev = NULL;
-   struct iova *iova;
-   int i;
-
-   init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN);
-
-   lockdep_set_class(&reserved_iova_list.iova_rbtree_lock,
-   &reserved_rbtree_key);
-
-   /* IOAPIC ranges shouldn't be accessed by DMA */
-   iova = reserve_iova(&reserved_iova_list, IOVA_PFN(IOAPIC_RANGE_START),
-   IOVA_PFN(IOAPIC_RANGE_END));
-   if (!iova) {
-   pr_err("Reserve IOAPIC range failed\n");
-   return -ENODEV;
-   }
-
-   /* Reserve all PCI MMIO to avoid peer-to-peer access */
-   for_each_pci_dev(pdev) {
-   struct resource *r;
-
-   for (i = 0; i < PCI_NUM_RESOURCES; i++) {
-   r = &pdev->resource[i];
-   if (!r->flags || !(r->flags & IORESOURCE_MEM))
-   continue;
-   iova = reserve_iova(&reserved_iova_list,
-   IOVA_PFN(r->start),
-   IOVA_PFN(r->end));
-   if (!iova) {
-   pci_err(pdev, "Reserve iova for %pR failed\n", 
r);
-   return -ENODEV;
-   }
-   }
-   }
-   return 0;
-}
-
-static void domain_reserve_special_ranges(struct dmar_domain *domain)
-{
-   copy_reserved_iova(&reserved_iova_list, &domain->iovad);
-}
-
 static inline int guestwidth_to_adjustwidth(int gaw)
 {
int agaw;
@@ -1850,16 +1791,11 @@ static int domain_init(struct dmar_domain *domain, 
struct intel_iommu *iommu,
 {
int adjust_width, agaw;
unsigned long sagaw;
-   int err;
-
-   init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
-
-   err = init_iova_flush_queue(&domain->iovad,
-   iommu_flush_iova, iova_entry_free);
-   if (err)
-   return err;
+   int ret;
 
-   domain_reserve_special_ranges(domain);
+   ret = 

[PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment

2019-12-21 Thread Tom Murphy
In the intel iommu driver, devices which only support 32-bit DMA can't be
direct mapped. The implementation of this is weird: currently we assign such
a device a direct-mapped domain and then remove that domain later, replacing
it with a domain of type IOMMU_DOMAIN_IDENTITY. We should just assign it a
domain of type IOMMU_DOMAIN_IDENTITY from the beginning rather than
needlessly swapping domains.

Signed-off-by: Tom Murphy 
---
 drivers/iommu/intel-iommu.c | 88 +
 1 file changed, 31 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 0c8d81f56a30..c1ea66467918 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3462,46 +3462,9 @@ static struct dmar_domain 
*get_private_domain_for_dev(struct device *dev)
 }
 
 /* Check if the dev needs to go through non-identity map and unmap process.*/
-static bool iommu_need_mapping(struct device *dev)
+static bool iommu_no_mapping(struct device *dev)
 {
-   int ret;
-
-   if (iommu_dummy(dev))
-   return false;
-
-   ret = identity_mapping(dev);
-   if (ret) {
-   u64 dma_mask = *dev->dma_mask;
-
-   if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask)
-   dma_mask = dev->coherent_dma_mask;
-
-   if (dma_mask >= dma_direct_get_required_mask(dev))
-   return false;
-
-   /*
-* 32 bit DMA is removed from si_domain and fall back to
-* non-identity mapping.
-*/
-   dmar_remove_one_dev_info(dev);
-   ret = iommu_request_dma_domain_for_dev(dev);
-   if (ret) {
-   struct iommu_domain *domain;
-   struct dmar_domain *dmar_domain;
-
-   domain = iommu_get_domain_for_dev(dev);
-   if (domain) {
-   dmar_domain = to_dmar_domain(domain);
-   dmar_domain->flags |= DOMAIN_FLAG_LOSE_CHILDREN;
-   }
-   dmar_remove_one_dev_info(dev);
-   get_private_domain_for_dev(dev);
-   }
-
-   dev_info(dev, "32bit DMA uses non-identity mapping\n");
-   }
-
-   return true;
+   return iommu_dummy(dev) || identity_mapping(dev);
 }
 
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
@@ -3568,20 +3531,22 @@ static dma_addr_t intel_map_page(struct device *dev, 
struct page *page,
 enum dma_data_direction dir,
 unsigned long attrs)
 {
-   if (iommu_need_mapping(dev))
-   return __intel_map_single(dev, page_to_phys(page) + offset,
-   size, dir, *dev->dma_mask);
-   return dma_direct_map_page(dev, page, offset, size, dir, attrs);
+   if (iommu_no_mapping(dev))
+   return dma_direct_map_page(dev, page, offset, size, dir, attrs);
+
+   return __intel_map_single(dev, page_to_phys(page) + offset, size, dir,
+   *dev->dma_mask);
 }
 
 static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
 size_t size, enum dma_data_direction dir,
 unsigned long attrs)
 {
-   if (iommu_need_mapping(dev))
-   return __intel_map_single(dev, phys_addr, size, dir,
-   *dev->dma_mask);
-   return dma_direct_map_resource(dev, phys_addr, size, dir, attrs);
+   if (iommu_no_mapping(dev))
+   return dma_direct_map_resource(dev, phys_addr, size, dir,
+   attrs);
+
+   return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
 }
 
 static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
@@ -3632,16 +3597,16 @@ static void intel_unmap_page(struct device *dev, 
dma_addr_t dev_addr,
 size_t size, enum dma_data_direction dir,
 unsigned long attrs)
 {
-   if (iommu_need_mapping(dev))
-   intel_unmap(dev, dev_addr, size);
-   else
+   if (iommu_no_mapping(dev))
dma_direct_unmap_page(dev, dev_addr, size, dir, attrs);
+   else
+   intel_unmap(dev, dev_addr, size);
 }
 
 static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-   if (iommu_need_mapping(dev))
+   if (!iommu_no_mapping(dev))
intel_unmap(dev, dev_addr, size);
 }
 
@@ -3652,7 +3617,7 @@ static void *intel_alloc_coherent(struct device *dev, 
size_t size,
struct page *page = NULL;
int order;
 
-   if (!iommu_need_mapping(dev))
+   if (iommu_no_mapping(dev))
return 

[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

2019-12-21 Thread Tom Murphy
This patchset converts the intel iommu driver to the dma-iommu api.

While converting the driver I exposed a bug in the intel i915 driver which
causes a huge number of visual artifacts on the screen of my laptop. You can
see a picture of it here:
https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg

This issue most likely lies in the i915 driver and is probably caused by the
driver not respecting the return value of the dma_map_ops::map_sg function.
You can see the driver ignoring the return value here:
https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51

Previously this didn’t cause issues because the intel map_sg implementation
always returned the same number of elements as the input scatter-gather
list, but with the change to the dma-iommu api this is no longer the case.
Unfortunately, I wasn’t able to track the bug down to a specific line of
code.
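
For anyone reproducing this, the contract in question is that dma_map_sg()
may return fewer segments than it was given once the iommu path merges
entries, and only that many entries of the list are valid afterwards. A
sketch of a correct consumer, illustrative only and not i915 code:

	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	static int example_map(struct device *dev, struct scatterlist *sgl,
			       int nents)
	{
		struct scatterlist *sg;
		int i, count;

		count = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
		if (count == 0)
			return -ENOMEM;

		/* Walk the 'count' merged segments, not the original nents. */
		for_each_sg(sgl, sg, count, i)
			pr_debug("seg %d: %pad + %u\n", i,
				 &sg_dma_address(sg), sg_dma_len(sg));

		/* Unmapping still takes the original nents. */
		dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
		return 0;
	}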

Could someone from the intel team look at this?


I have been testing on a lenovo x1 carbon 5th generation. Let me know if 
there’s any more information you need.

To allow my patch set to be tested, I have added a patch (patch 8/8) to this
series that disables combining sg segments in the dma-iommu api. This works
around the bug but doesn't fix the actual problem.

As part of this patch series I copied the intel bounce buffer code to the
dma-iommu path. The addition of the bounce buffer code took me by surprise:
I did most of my development on this patch series before the bounce buffer
code was added, so my reimplementation in the dma-iommu path is rushed and
not properly tested, but I’m running out of time to work on this patch set.

On top of that, I also didn’t port over the intel tracing code from this commit:
https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
So all the work in that commit is now wasted; the code will need to be
removed and reimplemented in the dma-iommu path. I would like to take the
time to do this, but I really don’t have the time at the moment, and I want
to get these changes out before the iommu code changes any more.


Tom Murphy (8):
  iommu/vt-d: clean up 32bit si_domain assignment
  iommu/vt-d: Use default dma_direct_* mapping functions for direct
mapped devices
  iommu/vt-d: Remove IOVA handling code from non-dma_ops path
  iommu: Handle freelists when using deferred flushing in iommu drivers
  iommu: Add iommu_dma_free_cpu_cached_iovas function
  iommu: allow the dma-iommu api to use bounce buffers
  iommu/vt-d: Convert intel iommu driver to the iommu ops
  DO NOT MERGE: iommu: disable list appending in dma-iommu

 drivers/iommu/Kconfig   |   1 +
 drivers/iommu/amd_iommu.c   |  14 +-
 drivers/iommu/arm-smmu-v3.c |   3 +-
 drivers/iommu/arm-smmu.c|   3 +-
 drivers/iommu/dma-iommu.c   | 183 +--
 drivers/iommu/exynos-iommu.c|   3 +-
 drivers/iommu/intel-iommu.c | 936 
 drivers/iommu/iommu.c   |  39 +-
 drivers/iommu/ipmmu-vmsa.c  |   3 +-
 drivers/iommu/msm_iommu.c   |   3 +-
 drivers/iommu/mtk_iommu.c   |   3 +-
 drivers/iommu/mtk_iommu_v1.c|   3 +-
 drivers/iommu/omap-iommu.c  |   3 +-
 drivers/iommu/qcom_iommu.c  |   3 +-
 drivers/iommu/rockchip-iommu.c  |   3 +-
 drivers/iommu/s390-iommu.c  |   3 +-
 drivers/iommu/tegra-gart.c  |   3 +-
 drivers/iommu/tegra-smmu.c  |   3 +-
 drivers/iommu/virtio-iommu.c|   3 +-
 drivers/vfio/vfio_iommu_type1.c |   2 +-
 include/linux/dma-iommu.h   |   3 +
 include/linux/intel-iommu.h |   1 -
 include/linux/iommu.h   |  32 +-
 23 files changed, 345 insertions(+), 908 deletions(-)

-- 
2.20.1


[PATCH 8/8] DO NOT MERGE: iommu: disable list appending in dma-iommu

2019-12-21 Thread Tom Murphy

Disable combining sg segments in the dma-iommu api.
Combining the sg segments exposes a bug in the intel i915 driver which
causes visual artifacts and the screen to freeze. This is most likely
because of how the i915 handles the returned list. It probably doesn't
respect the returned value specifying the number of elements in the list
and instead depends on the previous behaviour of the intel iommu driver
which would return the same number of elements in the output list as in
the input list.

Signed-off-by: Tom Murphy 
---
 drivers/iommu/dma-iommu.c | 38 +++---
 1 file changed, 7 insertions(+), 31 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index cf778db7d84d..d7547b912c87 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -853,8 +853,7 @@ static int __finalise_sg(struct device *dev, struct 
scatterlist *sg, int nents,
 {
struct scatterlist *s, *cur = sg;
unsigned long seg_mask = dma_get_seg_boundary(dev);
-   unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev);
-   int i, count = 0;
+   int i;
 
for_each_sg(sg, s, nents, i) {
/* Restore this segment's original unaligned fields first */
@@ -862,39 +861,16 @@ static int __finalise_sg(struct device *dev, struct 
scatterlist *sg, int nents,
unsigned int s_length = sg_dma_len(s);
unsigned int s_iova_len = s->length;
 
+   if (i > 0)
+   cur = sg_next(cur);
+
s->offset += s_iova_off;
s->length = s_length;
-   sg_dma_address(s) = DMA_MAPPING_ERROR;
-   sg_dma_len(s) = 0;
-
-   /*
-* Now fill in the real DMA data. If...
-* - there is a valid output segment to append to
-* - and this segment starts on an IOVA page boundary
-* - but doesn't fall at a segment boundary
-* - and wouldn't make the resulting output segment too long
-*/
-   if (cur_len && !s_iova_off && (dma_addr & seg_mask) &&
-   (max_len - cur_len >= s_length)) {
-   /* ...then concatenate it with the previous one */
-   cur_len += s_length;
-   } else {
-   /* Otherwise start the next output segment */
-   if (i > 0)
-   cur = sg_next(cur);
-   cur_len = s_length;
-   count++;
-
-   sg_dma_address(cur) = dma_addr + s_iova_off;
-   }
-
-   sg_dma_len(cur) = cur_len;
+   sg_dma_address(cur) = dma_addr + s_iova_off;
+   sg_dma_len(cur) = s_length;
dma_addr += s_iova_len;
-
-   if (s_length + s_iova_off < s_iova_len)
-   cur_len = 0;
}
-   return count;
+   return nents;
 }
 
 /*
-- 
2.20.1



[PATCH 2/8] iommu/vt-d: Use default dma_direct_* mapping functions for direct mapped devices

2019-12-21 Thread Tom Murphy
We should only assign intel_dma_ops to devices which will actually use the
iommu, and let the default dma_direct_* fallback functions handle all other
devices. This doesn't change any behaviour; it just uses the generic
implementations for direct-mapped devices rather than the intel-specific
ones.
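
A sketch of the resulting dispatch, illustrative only (the decision point
and helper name are assumptions; set_dma_ops() with NULL is how the core
falls back to the dma_direct_* path):

	#include <linux/dma-mapping.h>

	extern const struct dma_map_ops intel_dma_ops;

	static void example_setup_dma_ops(struct device *dev, bool translated)
	{
		if (translated)
			set_dma_ops(dev, &intel_dma_ops); /* iommu-backed ops */
		else
			set_dma_ops(dev, NULL);           /* generic dma_direct_* */
	}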

Signed-off-by: Tom Murphy 
---
 drivers/iommu/intel-iommu.c | 52 +
 1 file changed, 6 insertions(+), 46 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index c1ea66467918..64b1a9793daa 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2794,17 +2794,6 @@ static int __init si_domain_init(int hw)
return 0;
 }
 
-static int identity_mapping(struct device *dev)
-{
-   struct device_domain_info *info;
-
-   info = dev->archdata.iommu;
-   if (info && info != DUMMY_DEVICE_DOMAIN_INFO && info != 
DEFER_DEVICE_DOMAIN_INFO)
-   return (info->domain == si_domain);
-
-   return 0;
-}
-
 static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
 {
struct dmar_domain *ndomain;
@@ -3461,12 +3450,6 @@ static struct dmar_domain 
*get_private_domain_for_dev(struct device *dev)
return domain;
 }
 
-/* Check if the dev needs to go through non-identity map and unmap process.*/
-static bool iommu_no_mapping(struct device *dev)
-{
-   return iommu_dummy(dev) || identity_mapping(dev);
-}
-
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
 size_t size, int dir, u64 dma_mask)
 {
@@ -3531,9 +3514,6 @@ static dma_addr_t intel_map_page(struct device *dev, 
struct page *page,
 enum dma_data_direction dir,
 unsigned long attrs)
 {
-   if (iommu_no_mapping(dev))
-   return dma_direct_map_page(dev, page, offset, size, dir, attrs);
-
return __intel_map_single(dev, page_to_phys(page) + offset, size, dir,
*dev->dma_mask);
 }
@@ -3542,10 +3522,6 @@ static dma_addr_t intel_map_resource(struct device *dev, 
phys_addr_t phys_addr,
 size_t size, enum dma_data_direction dir,
 unsigned long attrs)
 {
-   if (iommu_no_mapping(dev))
-   return dma_direct_map_resource(dev, phys_addr, size, dir,
-   attrs);
-
return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
 }
 
@@ -3597,17 +3573,13 @@ static void intel_unmap_page(struct device *dev, 
dma_addr_t dev_addr,
 size_t size, enum dma_data_direction dir,
 unsigned long attrs)
 {
-   if (iommu_no_mapping(dev))
-   dma_direct_unmap_page(dev, dev_addr, size, dir, attrs);
-   else
-   intel_unmap(dev, dev_addr, size);
+   intel_unmap(dev, dev_addr, size);
 }
 
 static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-   if (!iommu_no_mapping(dev))
-   intel_unmap(dev, dev_addr, size);
+   intel_unmap(dev, dev_addr, size);
 }
 
 static void *intel_alloc_coherent(struct device *dev, size_t size,
@@ -3617,9 +3589,6 @@ static void *intel_alloc_coherent(struct device *dev, 
size_t size,
struct page *page = NULL;
int order;
 
-   if (iommu_no_mapping(dev))
-   return dma_direct_alloc(dev, size, dma_handle, flags, attrs);
-
size = PAGE_ALIGN(size);
order = get_order(size);
 
@@ -3653,9 +3622,6 @@ static void intel_free_coherent(struct device *dev, 
size_t size, void *vaddr,
int order;
struct page *page = virt_to_page(vaddr);
 
-   if (iommu_no_mapping(dev))
-   return dma_direct_free(dev, size, vaddr, dma_handle, attrs);
-
size = PAGE_ALIGN(size);
order = get_order(size);
 
@@ -3673,9 +3639,6 @@ static void intel_unmap_sg(struct device *dev, struct 
scatterlist *sglist,
struct scatterlist *sg;
int i;
 
-   if (iommu_no_mapping(dev))
-   return dma_direct_unmap_sg(dev, sglist, nelems, dir, attrs);
-
for_each_sg(sglist, sg, nelems, i) {
nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg));
}
@@ -3699,8 +3662,6 @@ static int intel_map_sg(struct device *dev, struct 
scatterlist *sglist, int nele
struct intel_iommu *iommu;
 
BUG_ON(dir == DMA_NONE);
-   if (iommu_no_mapping(dev))
-   return dma_direct_map_sg(dev, sglist, nelems, dir, attrs);
 
domain = deferred_attach_domain(dev);
if (!domain)
@@ -3747,8 +3708,6 @@ static int intel_map_sg(struct device *dev, struct 
scatterlist *sglist, int nele
 
 static u64 intel_get_required_mask(struct device *dev)
 {
-   if (iommu_no_mapping(dev))
-   return 

[PATCH 5/8] iommu: Add iommu_dma_free_cpu_cached_iovas function to dma-iommu ops

2019-12-21 Thread Tom Murphy

Add an iommu_dma_free_cpu_cached_iovas function to allow drivers which
use the dma-iommu ops to free cached per-cpu iovas.
Signed-off-by: Tom Murphy 
---
 drivers/iommu/dma-iommu.c | 9 +
 include/linux/dma-iommu.h | 3 +++
 2 files changed, 12 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index df28facdfb8b..4eac3cd35443 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,15 @@ struct iommu_dma_cookie {
struct iommu_domain *fq_domain;
 };
 
+void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
+   struct iommu_domain *domain)
+{
+   struct iommu_dma_cookie *cookie = domain->iova_cookie;
+   struct iova_domain *iovad = &cookie->iovad;
+
+   free_cpu_cached_iovas(cpu, iovad);
+}
+
 static void iommu_dma_entry_dtor(unsigned long data)
 {
struct page *freelist = (struct page *)data;
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index 2112f21f73d8..316d22a4a860 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -37,6 +37,9 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc,
 
 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
 
+void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
+   struct iommu_domain *domain);
+
 #else /* CONFIG_IOMMU_DMA */
 
 struct iommu_domain;
-- 
2.20.1



[PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers

2019-12-21 Thread Tom Murphy
Allow iommu_unmap_fast to return newly freed page table pages and
pass the freelist to queue_iova in the dma-iommu ops path.

This is useful for iommu drivers (in this case the intel iommu driver)
which need to wait for the IOTLB to be flushed before newly
unmapped page table pages can actually be freed. This way we can still
batch IOTLB flush operations and handle the freelists.
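
The freelist convention used here (matching the iommu_dma_entry_dtor added
below in this patch) chains freed page table pages through page->freelist;
a minimal sketch of both halves, illustrative only:

	#include <linux/mm.h>

	/* Defer a page: push it onto the freelist instead of freeing it. */
	static struct page *example_defer_free(struct page *pg,
					       struct page *head)
	{
		pg->freelist = head;
		return pg;
	}

	/* After the IOTLB flush completes, release the whole chain. */
	static void example_release(struct page *freelist)
	{
		while (freelist) {
			struct page *next = freelist->freelist;

			free_page((unsigned long)page_address(freelist));
			freelist = next;
		}
	}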

Signed-off-by: Tom Murphy 
---
 drivers/iommu/amd_iommu.c   | 14 -
 drivers/iommu/arm-smmu-v3.c |  3 +-
 drivers/iommu/arm-smmu.c|  3 +-
 drivers/iommu/dma-iommu.c   | 45 ++---
 drivers/iommu/exynos-iommu.c|  3 +-
 drivers/iommu/intel-iommu.c | 51 +
 drivers/iommu/iommu.c   | 29 ++-
 drivers/iommu/ipmmu-vmsa.c  |  3 +-
 drivers/iommu/msm_iommu.c   |  3 +-
 drivers/iommu/mtk_iommu.c   |  3 +-
 drivers/iommu/mtk_iommu_v1.c|  3 +-
 drivers/iommu/omap-iommu.c  |  3 +-
 drivers/iommu/qcom_iommu.c  |  3 +-
 drivers/iommu/rockchip-iommu.c  |  3 +-
 drivers/iommu/s390-iommu.c  |  3 +-
 drivers/iommu/tegra-gart.c  |  3 +-
 drivers/iommu/tegra-smmu.c  |  3 +-
 drivers/iommu/virtio-iommu.c|  3 +-
 drivers/vfio/vfio_iommu_type1.c |  2 +-
 include/linux/iommu.h   | 25 
 20 files changed, 151 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index bd25674ee4db..e8a4c0842624 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2542,7 +2542,8 @@ static int amd_iommu_map(struct iommu_domain *dom, 
unsigned long iova,
 
 static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
  size_t page_size,
- struct iommu_iotlb_gather *gather)
+ struct iommu_iotlb_gather *gather,
+ struct page **freelist)
 {
struct protection_domain *domain = to_pdomain(dom);
 
@@ -2668,6 +2669,16 @@ static void amd_iommu_flush_iotlb_all(struct 
iommu_domain *domain)
spin_unlock_irqrestore(&dom->lock, flags);
 }
 
+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+   unsigned long iova, size_t size,
+   struct page *freelist)
+{
+   struct protection_domain *dom = to_pdomain(domain);
+
+   domain_flush_pages(dom, iova, size);
+   domain_flush_complete(dom);
+}
+
 static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
 struct iommu_iotlb_gather *gather)
 {
@@ -2692,6 +2703,7 @@ const struct iommu_ops amd_iommu_ops = {
.is_attach_deferred = amd_iommu_is_attach_deferred,
.pgsize_bitmap  = AMD_IOMMU_PGSIZES,
.flush_iotlb_all = amd_iommu_flush_iotlb_all,
+   .flush_iotlb_range = amd_iommu_flush_iotlb_range,
.iotlb_sync = amd_iommu_iotlb_sync,
 };
 
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index effe72eb89e7..a27d4bf4492c 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -2459,7 +2459,8 @@ static int arm_smmu_map(struct iommu_domain *domain, 
unsigned long iova,
 }
 
 static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-size_t size, struct iommu_iotlb_gather *gather)
+size_t size, struct iommu_iotlb_gather *gather,
+struct page **freelist)
 {
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 4f1a350d9529..ea1ab3387a07 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1205,7 +1205,8 @@ static int arm_smmu_map(struct iommu_domain *domain, 
unsigned long iova,
 }
 
 static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-size_t size, struct iommu_iotlb_gather *gather)
+size_t size, struct iommu_iotlb_gather *gather,
+struct page **freelist)
 {
struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 0cc702a70a96..df28facdfb8b 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,19 @@ struct iommu_dma_cookie {
struct iommu_domain *fq_domain;
 };
 
+static void iommu_dma_entry_dtor(unsigned long data)
+{
+   struct page *freelist = (struct page *)data;
+
+   while (freelist != NULL) {
+   unsigned long p = (unsigned long)page_address(freelist);
+
+   freelist = freelist->freelist;
+   free_page(p);
+   }
+}
+
+
 static inline size_t