Re: [PATCH 1/5] iommu/tegra-smmu: Fix domain_alloc
On Wed, Jan 16, 2019 at 12:50:10PM -0800, Navneet Kumar wrote:
> * Allocate dma iova cookie for a domain while adding dma iommu
>   devices.
> * Perform a stricter check for domain type parameter.
>
> Signed-off-by: Navneet Kumar
> ---
>  drivers/iommu/tegra-smmu.c | 43 +++
>  1 file changed, 27 insertions(+), 16 deletions(-)

I just gave this a quick spin because I was investigating how we could
potentially make use of the DMA API instead of the IOMMU API directly in
Tegra DRM. We currently rely on the fact that the Tegra SMMU driver only
supports unmanaged domains. Once we start supporting DMA domains, all the
automatic machinery kicks in and there are lots of SMMU faults.

I think at least some of those faults point out bugs we currently have in
the code. From the looks of it, the display controller is running during
boot and happily fetching from whatever address it was configured with in
the bootloader, and when we enable the ASID for the display controller as
part of the DMA/IOMMU setup, the fetches from the display controller will
be accessing IOV addresses that don't have a mapping. On the one hand
that's a good thing because it points out existing weaknesses, but it
also means that we can't merge this series because it causes bad
regressions.

I also see failures from the GPU with this applied, which I think stem
from the fact that we're now transparently mapping allocations through
the SMMU without the Nouveau driver knowing that and setting the
appropriate bit when addressing memory. Or it could come from the SMMU
code in Nouveau trying to map an already mapped buffer, effectively
creating an IOVA mapping to an address that is already an IOV address
rather than a physical address.

So I think before we can go ahead with this series we have a lot of
janitorial work to do first so that it won't cause any regressions when
applied.
Thierry
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
Re: [PATCH -next] swiotlb: drop pointless static qualifier in swiotlb_dma_supported()
On 2019/2/14 15:26, Christoph Hellwig wrote:
> On Thu, Feb 14, 2019 at 01:41:47AM +, YueHaibing wrote:
>> There is no need to have the 'struct dentry *d_swiotlb_usage' variable
>> static since a new value is always assigned before it is used.
>
> FYI, this is in swiotlb_create_debugfs, not swiotlb_dma_supported.

Thank you, I will fix it.
[PATCH v2 -next] swiotlb: drop pointless static qualifier in swiotlb_create_debugfs()
There is no need to have the 'struct dentry *d_swiotlb_usage' variable
static since a new value is always assigned before it is used.

Signed-off-by: YueHaibing
---
v2: fix patch title
---
 kernel/dma/swiotlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a7b53786db9f..02fa517c47d9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -689,7 +689,7 @@ swiotlb_dma_supported(struct device *hwdev, u64 mask)
 
 static int __init swiotlb_create_debugfs(void)
 {
-	static struct dentry *d_swiotlb_usage;
+	struct dentry *d_swiotlb_usage;
 	struct dentry *ent;
 
 	d_swiotlb_usage = debugfs_create_dir("swiotlb", NULL);
Re: [PATCH v6 0/9] vfio/mdev: IOMMU aware mediated device
On Wed, 13 Feb 2019 12:02:52 +0800, Lu Baolu wrote:
> Hi,
>
> The Mediated Device is a framework for fine-grained physical device
> sharing across isolated domains. Currently the mdev framework is
> designed to be independent of the platform IOMMU support. As a result,
> the DMA isolation relies on the mdev parent device in a vendor-specific
> way.
>
> There are several cases where a mediated device could be protected and
> isolated by the platform IOMMU. For example, Intel VT-d rev3.0 [1]
> introduces a new translation mode called 'scalable mode', which enables
> PASID-granular translations. The VT-d scalable mode is the key
> ingredient for Scalable I/O Virtualization [2] [3], which allows
> sharing a device at the smallest possible granularity (ADI - Assignable
> Device Interface).
>
> A mediated device backed by an ADI could be protected and isolated by
> the IOMMU since 1) the parent device supports tagging a unique PASID to
> all DMA traffic out of the mediated device; and 2) the DMA translation
> unit (IOMMU) supports PASID-granular translation. We can apply IOMMU
> protection and isolation to this kind of device just as we do for an
> assignable PCI device.
>
> In order to distinguish IOMMU-capable mediated devices from those which
> still need to rely on parent devices, this patch set adds one new
> member in struct mdev_device:
>
> * iommu_device
>   - This, if set, indicates that the mediated device could be fully
>     isolated and protected by the IOMMU via attaching an iommu domain
>     to this device. If empty, it indicates using vendor-defined
>     isolation.
>
> Below helpers are added to set and get the above iommu device in the
> mdev core implementation:
>
> * mdev_set/get_iommu_device(dev, iommu_device)
>   - Set or get the iommu device which represents this mdev in the
>     IOMMU's device scope. Drivers don't need to set the iommu device
>     if they use vendor-defined isolation.
>
> The mdev parent device driver could opt in that the mdev could be fully
> isolated and protected by the IOMMU when the mdev is being created, by
> invoking mdev_set_iommu_device() in its @create().
>
> In vfio_iommu_type1_attach_group(), a domain allocated through
> iommu_domain_alloc() will be attached to the mdev iommu device if an
> iommu device has been set. Otherwise, the dummy external domain will be
> used and all the DMA isolation and protection are routed to the parent
> driver as a result.
>
> On the IOMMU side, a basic requirement is allowing multiple domains to
> be attached to a PCI device if the device advertises the capability and
> the IOMMU hardware supports finer-granularity translations than the
> normal PCI Source ID based translation.
>
> As a result, a PCI device could work in two modes: normal mode and
> auxiliary mode. In the normal mode, a PCI device is isolated at Source
> ID granularity; the PCI device itself could be assigned to a user
> application by attaching a single domain to it. In the auxiliary mode,
> a PCI device could be isolated at a finer granularity, hence subsets of
> the device could be assigned to different user-level applications by
> attaching a different domain to each subset.
>
> Below APIs are introduced in the generic IOMMU layer for aux-domain
> purposes:
>
> * iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX)
>   - Check whether both the IOMMU and the device support the IOMMU
>     aux-domain feature. The aux-domain specific interfaces below are
>     available only after this returns true.
>
> * iommu_dev_enable/disable_feature(dev, IOMMU_DEV_FEAT_AUX)
>   - Enable/disable the device-specific aux-domain feature.
>
> * iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX)
>   - Check whether the aux-domain specific feature is enabled or not.
>
> * iommu_aux_attach_device(domain, dev)
>   - Attach @domain to @dev in the auxiliary mode. Multiple domains
>     could be attached to a single device in the auxiliary mode, with
>     each domain representing an isolated address space for an
>     assignable subset of the device.
>
> * iommu_aux_detach_device(domain, dev)
>   - Detach @domain which has been attached to @dev in the auxiliary
>     mode.
>
> * iommu_aux_get_pasid(domain, dev)
>   - Return the ID used for finer-granularity DMA translation. For the
>     Intel Scalable IOV usage model, this will be a PASID. A device
>     which supports Scalable IOV needs to write this ID to the device
>     register so that DMA requests can be tagged with the right PASID
>     prefix.
>
> For ease of discussion, we sometimes say 'a domain in auxiliary mode'
> or simply 'an auxiliary domain' when a domain is attached to a device
> for finer-granularity translations. But keep in mind that this doesn't
> mean there is a different domain type. The same domain could be bound
> to a device for Source ID based translation, and bound to another
> device for finer-granularity translation at the same time.
>
> This patch series extends both IOMMU
Re: ARM64 boot failure on espressobin with 5.0.0-rc6 (1f947a7a011fcceb14cb912f5481a53b18f1879a)
On 14/02/2019 17:36, Christoph Hellwig wrote:
> On Thu, Feb 14, 2019 at 05:27:41PM +, Robin Murphy wrote:
>> Oh wow, that driver has possibly the most inventive way of passing a
>> NULL device to the DMA API that I've ever seen, and on arm64 it will
>> certainly have been failing since 4.2, but of course there's also no
>> error checking for anyone to notice...
>
> I did take a brief look and didn't see how we got the NULL device
> pointer, so it is well hidden for sure.
>
>> This crash will be a fallout from 356da6d0cd (plus the subsequent fix
>> in 9ab91e7c5c51) that's otherwise missed Christoph's big cleanup.
>> Obviously the right thing to do is for someone to try to figure out
>> the steaming pile of mess in that driver, but if necessary I think the
>> quick fix below should probably suffice to mitigate the change in the
>> short term.
>
> The fix looks ok. And for 5.2 I plan to explicitly reject all uses of
> NULL device arguments in the DMA API. I've sent patches out for all the
> obviously problematic drivers, and most of them got accepted by the
> maintainers for the 5.1 merge window. It seems like the mv_xor code is
> mostly unmaintained as far as I can tell, unfortunately.

Hmm, having felt brave enough to take a closer look, it might actually
be as simple as this - Dave, are you able to give the diff below a spin?

Robin.
----->8-----
diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 7f595355fb79..fe4a7c71fede 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -1059,6 +1059,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 	mv_chan->op_in_desc = XOR_MODE_IN_DESC;
 
 	dma_dev = &mv_chan->dmadev;
+	dma_dev->dev = &pdev->dev;
 	mv_chan->xordev = xordev;
 
 	/*
@@ -1091,7 +1092,6 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 	dma_dev->device_free_chan_resources = mv_xor_free_chan_resources;
 	dma_dev->device_tx_status = mv_xor_status;
 	dma_dev->device_issue_pending = mv_xor_issue_pending;
-	dma_dev->dev = &pdev->dev;
 
 	/* set prep routines based on capability */
 	if (dma_has_cap(DMA_INTERRUPT, dma_dev->cap_mask))
Re: ARM64 boot failure on espressobin with 5.0.0-rc6 (1f947a7a011fcceb14cb912f5481a53b18f1879a)
On Thu, Feb 14, 2019 at 05:27:41PM +, Robin Murphy wrote:
> Oh wow, that driver has possibly the most inventive way of passing a
> NULL device to the DMA API that I've ever seen, and on arm64 it will
> certainly have been failing since 4.2, but of course there's also no
> error checking for anyone to notice...

I did take a brief look and didn't see how we got the NULL device
pointer, so it is well hidden for sure.

> This crash will be a fallout from 356da6d0cd (plus the subsequent fix
> in 9ab91e7c5c51) that's otherwise missed Christoph's big cleanup.
> Obviously the right thing to do is for someone to try to figure out the
> steaming pile of mess in that driver, but if necessary I think the
> quick fix below should probably suffice to mitigate the change in the
> short term.

The fix looks ok. And for 5.2 I plan to explicitly reject all uses of
NULL device arguments in the DMA API. I've sent patches out for all the
obviously problematic drivers, and most of them got accepted by the
maintainers for the 5.1 merge window. It seems like the mv_xor code is
mostly unmaintained as far as I can tell, unfortunately.
Re: ARM64 boot failure on espressobin with 5.0.0-rc6 (1f947a7a011fcceb14cb912f5481a53b18f1879a)
On 2019-02-14 12:58 p.m., Robin Murphy wrote:
> Hmm, having felt brave enough to take a closer look, it might actually
> be as simple as this - Dave, are you able to give the diff below a
> spin?

Yes.

--
John David Anglin
dave.ang...@bell.net
Re: [PATCH] iommu/arm-smmu: Allow disabling bypass via kernel config
Hi Doug,

On 2019-02-14 8:44 pm, Douglas Anderson wrote:
> Right now the only way to disable the iommu bypass for the ARM SMMU is
> with the kernel command line parameter 'arm-smmu.disable_bypass'.
>
> In general kernel command line parameters make sense for things that
> someone would like to tweak without rebuilding the kernel or for very
> basic communication between the bootloader and the kernel, but are
> awkward for other things. Specifically:
> * Human parsing of the kernel command line can be difficult since it's
>   just a big run-on space-separated line of text.
> * If every bit of the system were configured via the kernel command
>   line, the kernel command line would get very large and even more
>   unwieldy.
> * Typically there are no easy ways in build systems to adjust the
>   kernel command line for config-like options.
>
> Let's introduce a new config option that allows us to disable the iommu
> bypass without affecting the existing default or the existing ability
> to adjust the configuration via the kernel command line.

I say let's just flip the default - for a while now it's been one of
those "oh yeah, we should probably do that" things that gets instantly
forgotten again, so some 3rd-party demand is plenty to convince me :)

There are few reasons to allow unmatched stream bypass, and even fewer
good ones, so I'd be happy to shift the command-line burden over to the
esoteric cases at this point, and consider the config option in future
if anyone from that camp pops up and screams hard enough.

Cheers,
Robin.
[PATCH] iommu/arm-smmu: Allow disabling bypass via kernel config
Right now the only way to disable the iommu bypass for the ARM SMMU is
with the kernel command line parameter 'arm-smmu.disable_bypass'.

In general kernel command line parameters make sense for things that
someone would like to tweak without rebuilding the kernel or for very
basic communication between the bootloader and the kernel, but are
awkward for other things. Specifically:
* Human parsing of the kernel command line can be difficult since it's
  just a big run-on space-separated line of text.
* If every bit of the system were configured via the kernel command
  line, the kernel command line would get very large and even more
  unwieldy.
* Typically there are no easy ways in build systems to adjust the
  kernel command line for config-like options.

Let's introduce a new config option that allows us to disable the iommu
bypass without affecting the existing default or the existing ability to
adjust the configuration via the kernel command line.

Signed-off-by: Douglas Anderson
---
 drivers/iommu/Kconfig    | 22 ++++++++++++++++++++++
 drivers/iommu/arm-smmu.c |  3 ++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 46fcd75d4364..c614beab08f8 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -359,6 +359,28 @@ config ARM_SMMU
 	  Say Y here if your SoC includes an IOMMU device implementing
 	  the ARM SMMU architecture.
 
+config ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT
+	bool "Default to disabling bypass on ARM SMMU v1 and v2"
+	depends on ARM_SMMU
+	default n
+	help
+	  Say Y here to (by default) disable bypass streams such that
+	  incoming transactions from devices that are not attached to
+	  an iommu domain will report an abort back to the device and
+	  will not be allowed to pass through the SMMU.
+
+	  Historically the ARM SMMU v1 and v2 driver has defaulted to
+	  allow bypass by default but it could be disabled with the
+	  parameter 'arm-smmu.disable_bypass'. The parameter is still
+	  present and can be used to override this config option, but
+	  this config option allows you to disable bypass without
+	  bloating the kernel command line.
+
+	  Disabling bypass is more secure but presumably will break
+	  old systems.
+
+	  Say N if unsure.
+
 config ARM_SMMU_V3
 	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
 	depends on ARM64
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 045d93884164..930c07635956 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -110,7 +110,8 @@ static int force_stage;
 module_param(force_stage, int, S_IRUGO);
 MODULE_PARM_DESC(force_stage,
 	"Force SMMU mappings to be installed at a particular stage of translation. A value of '1' or '2' forces the corresponding stage. All other values are ignored (i.e. no stage is forced). Note that selecting a specific stage will disable support for nested translation.");
-static bool disable_bypass;
+static bool disable_bypass =
+	IS_ENABLED(CONFIG_ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT);
 module_param(disable_bypass, bool, S_IRUGO);
 MODULE_PARM_DESC(disable_bypass,
 	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
--
2.21.0.rc0.258.g878e2cd30e-goog
Re: [PATCH] iommu/arm-smmu: Allow disabling bypass via kernel config
Hi,

On Thu, Feb 14, 2019 at 1:32 PM Robin Murphy wrote:
> I say let's just flip the default - for a while now it's been one of
> those "oh yeah, we should probably do that" things that gets instantly
> forgotten again, so some 3rd-party demand is plenty to convince me :)
>
> There are few reasons to allow unmatched stream bypass, and even fewer
> good ones, so I'd be happy to shift the command-line burden over to the
> esoteric cases at this point, and consider the config option in future
> if anyone from that camp pops up and screams hard enough.

Sure, I can submit that patch if we want. I presume I'll get lots of
screaming but I'm used to that. ;-)

...specifically I found that when I turned on "disable bypass" on my
board (sdm845-cheza, which is not yet upstream) a bunch of things that
used to work broke. That's a good thing because all the things that
broke need to be fixed properly (by adding the IOMMUs), but presumably
my board is not special in relying on the old insecure behavior.

I'm about to head on vacation for a week so I'm not sure I'll get to
re-post before then. If not, I'll post this sometime after I get back
unless someone beats me to it.

-Doug
[PATCH v4 0/9] mm: Use vm_map_pages() and vm_map_pages_zero() API
Previously drivers had their own way of mapping a range of kernel
pages/memory into a user vma, and this was done by invoking
vm_insert_page() within a loop.

As this pattern is common across different drivers, it can be
generalized by creating new functions and using them across the
drivers.

vm_map_pages() is the API which could be used to map kernel
memory/pages in drivers which have considered vm_pgoff.

vm_map_pages_zero() is the API which could be used to map a range of
kernel memory/pages in drivers which have not considered vm_pgoff.
vm_pgoff is passed as 0 by default for those drivers.

We _could_ then at a later point "fix" these drivers which are using
vm_map_pages_zero() to behave according to the normal vm_pgoff
offsetting simply by removing the _zero suffix on the function name,
and if that causes regressions, it gives us an easy way to revert.

Tested on Rockchip hardware and display is working fine, including
talking to Lima via prime.

v1 -> v2:
Few Reviewed-by. Updated the change log in [8/9]. In [7/9], vm_pgoff
is treated in the V4L2 API as a 'cookie' to select a buffer, not as an
in-buffer offset by design, and it always wants to mmap a whole buffer
from its beginning. Added additional changes after discussing with
Marek; vm_map_pages() could be used instead of vm_map_pages_zero().

v2 -> v3:
Corrected the documentation as per review comments. As suggested in v2,
renamed the interfaces: *vm_insert_range() -> vm_map_pages()* and
*vm_insert_range_buggy() -> vm_map_pages_zero()*. As the interfaces are
renamed, modified the code accordingly, updated the change logs and
modified the subject lines to use the new interfaces. There is no other
change apart from renaming and using the new interface.
Patch [1/9] & [4/9] tested on Rockchip hardware.

v3 -> v4:
Fixed build warnings on patch [8/9] reported by the kbuild test robot.
Souptick Joarder (9):
  mm: Introduce new vm_map_pages() and vm_map_pages_zero() API
  arm: mm: dma-mapping: Convert to use vm_map_pages()
  drivers/firewire/core-iso.c: Convert to use vm_map_pages_zero()
  drm/rockchip/rockchip_drm_gem.c: Convert to use vm_map_pages()
  drm/xen/xen_drm_front_gem.c: Convert to use vm_map_pages()
  iommu/dma-iommu.c: Convert to use vm_map_pages()
  videobuf2/videobuf2-dma-sg.c: Convert to use vm_map_pages()
  xen/gntdev.c: Convert to use vm_map_pages()
  xen/privcmd-buf.c: Convert to use vm_map_pages_zero()

 arch/arm/mm/dma-mapping.c                          | 22 ++----
 drivers/firewire/core-iso.c                        | 15 +---
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c        | 17 +
 drivers/gpu/drm/xen/xen_drm_front_gem.c            | 18 ++---
 drivers/iommu/dma-iommu.c                          | 12 +---
 drivers/media/common/videobuf2/videobuf2-core.c    |  7 ++
 .../media/common/videobuf2/videobuf2-dma-contig.c  |  6 --
 drivers/media/common/videobuf2/videobuf2-dma-sg.c  | 22 ++----
 drivers/xen/gntdev.c                               | 11 ++-
 drivers/xen/privcmd-buf.c                          |  8 +--
 include/linux/mm.h                                 |  4 ++
 mm/memory.c                                        | 81 ++++++++++++
 mm/nommu.c                                         | 14 ++++
 13 files changed, 134 insertions(+), 103 deletions(-)

--
1.9.1
[PATCH v4 1/9] mm: Introduce new vm_map_pages() and vm_map_pages_zero() API
Previously drivers had their own way of mapping a range of kernel
pages/memory into a user vma, and this was done by invoking
vm_insert_page() within a loop.

As this pattern is common across different drivers, it can be
generalized by creating new functions and using them across the
drivers.

vm_map_pages() is the API which could be used to map kernel
memory/pages in drivers which have considered vm_pgoff.

vm_map_pages_zero() is the API which could be used to map a range of
kernel memory/pages in drivers which have not considered vm_pgoff.
vm_pgoff is passed as 0 by default for those drivers.

We _could_ then at a later point "fix" these drivers which are using
vm_map_pages_zero() to behave according to the normal vm_pgoff
offsetting simply by removing the _zero suffix on the function name,
and if that causes regressions, it gives us an easy way to revert.

Tested on Rockchip hardware and display is working, including talking
to Lima via prime.

Signed-off-by: Souptick Joarder
Suggested-by: Russell King
Suggested-by: Matthew Wilcox
Reviewed-by: Mike Rapoport
Tested-by: Heiko Stuebner
---
 include/linux/mm.h |  4 +++
 mm/memory.c        | 81 ++++++++++++++++++++++++++++++++++++++++++++++
 mm/nommu.c         | 14 ++++++++
 3 files changed, 99 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb640..e0aaa73 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2565,6 +2565,10 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
 			unsigned long pfn, unsigned long size, pgprot_t);
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
+int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
+				unsigned long num);
+int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
+				unsigned long num);
 vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn);
 vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/memory.c b/mm/memory.c
index e11ca9d..cad3e27 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1520,6 +1520,87 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
 }
 EXPORT_SYMBOL(vm_insert_page);
 
+/*
+ * __vm_map_pages - maps range of kernel pages into user vma
+ * @vma: user vma to map to
+ * @pages: pointer to array of source kernel pages
+ * @num: number of pages in page array
+ * @offset: user's requested vm_pgoff
+ *
+ * This allows drivers to map range of kernel pages into a user vma.
+ *
+ * Return: 0 on success and error code otherwise.
+ */
+static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
+				unsigned long num, unsigned long offset)
+{
+	unsigned long count = vma_pages(vma);
+	unsigned long uaddr = vma->vm_start;
+	int ret, i;
+
+	/* Fail if the user requested offset is beyond the end of the object */
+	if (offset > num)
+		return -ENXIO;
+
+	/* Fail if the user requested size exceeds available object size */
+	if (count > num - offset)
+		return -ENXIO;
+
+	for (i = 0; i < count; i++) {
+		ret = vm_insert_page(vma, uaddr, pages[offset + i]);
+		if (ret < 0)
+			return ret;
+		uaddr += PAGE_SIZE;
+	}
+
+	return 0;
+}
+
+/**
+ * vm_map_pages - maps range of kernel pages starts with non zero offset
+ * @vma: user vma to map to
+ * @pages: pointer to array of source kernel pages
+ * @num: number of pages in page array
+ *
+ * Maps an object consisting of @num pages, catering for the user's
+ * requested vm_pgoff
+ *
+ * If we fail to insert any page into the vma, the function will return
+ * immediately leaving any previously inserted pages present. Callers
+ * from the mmap handler may immediately return the error as their caller
+ * will destroy the vma, removing any successfully inserted pages. Other
+ * callers should make their own arrangements for calling unmap_region().
+ *
+ * Context: Process context. Called by mmap handlers.
+ * Return: 0 on success and error code otherwise.
+ */
+int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
+				unsigned long num)
+{
+	return __vm_map_pages(vma, pages, num, vma->vm_pgoff);
+}
+EXPORT_SYMBOL(vm_map_pages);
+
+/**
+ * vm_map_pages_zero - map range of kernel pages starts with zero offset
+ * @vma: user vma to map to
+ * @pages: pointer to array of source kernel pages
+ * @num: number of pages in page array
+ *
+ * Similar to vm_map_pages(), except that it explicitly sets the offset
+ * to 0. This function is intended for the drivers that did not consider
+ * vm_pgoff.
+ *
+ * Context: Process context. Called by mmap handlers.
+ * Return: 0 on success and error
[PATCH v4 6/9] iommu/dma-iommu.c: Convert to use vm_map_pages()
Convert to use vm_map_pages() to map a range of kernel memory to user
vma.

Signed-off-by: Souptick Joarder
---
 drivers/iommu/dma-iommu.c | 12 +---
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d19f3d6..bacebff 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -620,17 +620,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 
 int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
 {
-	unsigned long uaddr = vma->vm_start;
-	unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	int ret = -ENXIO;
-
-	for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
-		ret = vm_insert_page(vma, uaddr, pages[i]);
-		if (ret)
-			break;
-		uaddr += PAGE_SIZE;
-	}
-	return ret;
+	return vm_map_pages(vma, pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
--
1.9.1