For VFIO type1 IOMMU mappings, the Linux kernel does not allow unmapping, at cleanup time, individual segments that were coalesced together when the DMA map was created. Therefore, this change maps the memory segments (hugepages) on a per-page basis.
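For illustration only (not part of this patch), the sketch below shows the behaviour with the standard VFIO type1 ioctls; the helper name, container fd, virtual address, IOVA and the 2 MB/4 MB sizes are assumed values for the example:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /* Sketch: map two coalesced 2 MB hugepages as one region, then try to
   * unmap only the first page. Type1 refuses to split an existing
   * mapping, so the second ioctl fails; per-page mappings avoid this
   * at cleanup time.
   */
  static int
  coalesced_unmap_sketch(int container_fd, void *vaddr, uint64_t iova)
  {
  	struct vfio_iommu_type1_dma_map dma_map = {
  		.argsz = sizeof(dma_map),
  		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
  		.vaddr = (uintptr_t)vaddr,
  		.iova = iova,
  		.size = 4 << 20,	/* two 2 MB hugepages coalesced */
  	};
  	struct vfio_iommu_type1_dma_unmap dma_unmap = {
  		.argsz = sizeof(dma_unmap),
  		.iova = iova,
  		.size = 2 << 20,	/* only the first hugepage */
  	};

  	if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map) != 0)
  		return -1;

  	/* Fails (typically -EINVAL): it would bisect the 4 MB mapping. */
  	return ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
  }

With per-page mappings, the same unmap request covers an entire mapping and therefore succeeds.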
Signed-off-by: Nipun Gupta <nipun.gu...@amd.com>
Signed-off-by: Nikhil Agarwal <nikhil.agar...@amd.com>
---
Changes in v2:
- Fix checkpatch errors by updating mailmap

 .mailmap                 |  4 ++--
 lib/eal/linux/eal_vfio.c | 29 -----------------------------
 2 files changed, 2 insertions(+), 31 deletions(-)

diff --git a/.mailmap b/.mailmap
index 75884b6fe2..a234c9b3de 100644
--- a/.mailmap
+++ b/.mailmap
@@ -954,7 +954,7 @@ Nicolas Chautru <nicolas.chau...@intel.com>
 Nicolas Dichtel <nicolas.dich...@6wind.com>
 Nicolas Harnois <nicolas.harn...@6wind.com>
 Nicolás Pernas Maradei <n...@emutex.com> <nicolas.pernas.mara...@emutex.com>
-Nikhil Agarwal <nikhil.agar...@linaro.org>
+Nikhil Agarwal <nikhil.agar...@amd.com> <nikhil.agar...@xilinx.com> <nikhil.agar...@linaro.org>
 Nikhil Jagtap <nikhil.jag...@gmail.com>
 Nikhil Rao <nikhil....@intel.com>
 Nikhil Vasoya <nikhil.vas...@chelsio.com>
@@ -962,7 +962,7 @@ Nikita Kozlov <nik...@elyzion.net>
 Niklas Söderlund <niklas.soderl...@corigine.com>
 Nikolay Nikolaev <nicknickol...@gmail.com>
 Ning Li <muziding...@163.com> <linin...@jd.com>
-Nipun Gupta <nipun.gu...@nxp.com>
+Nipun Gupta <nipun.gu...@amd.com> <nipun.gu...@xilinx.com> <nipun.gu...@nxp.com>
 Nir Efrati <nir.efr...@intel.com>
 Nirmoy Das <n...@suse.de>
 Nithin Dabilpuram <ndabilpu...@marvell.com> <nithin.dabilpu...@caviumnetworks.com>
diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index 549b86ae1d..56edccb0db 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -1369,19 +1369,6 @@ rte_vfio_get_group_num(const char *sysfs_base,
 	return 1;
 }
 
-static int
-type1_map_contig(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
-		size_t len, void *arg)
-{
-	int *vfio_container_fd = arg;
-
-	if (msl->external)
-		return 0;
-
-	return vfio_type1_dma_mem_map(*vfio_container_fd, ms->addr_64, ms->iova,
-			len, 1);
-}
-
 static int
 type1_map(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
 		void *arg)
@@ -1396,10 +1383,6 @@ type1_map(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
 	if (ms->iova == RTE_BAD_IOVA)
 		return 0;
 
-	/* if IOVA mode is VA, we've already mapped the internal segments */
-	if (!msl->external && rte_eal_iova_mode() == RTE_IOVA_VA)
-		return 0;
-
 	return vfio_type1_dma_mem_map(*vfio_container_fd, ms->addr_64, ms->iova,
 			ms->len, 1);
 }
@@ -1464,18 +1447,6 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 static int
 vfio_type1_dma_map(int vfio_container_fd)
 {
-	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
-		/* with IOVA as VA mode, we can get away with mapping contiguous
-		 * chunks rather than going page-by-page.
-		 */
-		int ret = rte_memseg_contig_walk(type1_map_contig,
-				&vfio_container_fd);
-		if (ret)
-			return ret;
-		/* we have to continue the walk because we've skipped the
-		 * external segments during the config walk.
-		 */
-	}
 	return rte_memseg_walk(type1_map, &vfio_container_fd);
 }
 
-- 
2.25.1