On 7/26/2018 9:00 PM, Robin Murphy wrote:
On 26/07/18 08:12, Vivek Gautam wrote:
On Wed, Jul 25, 2018 at 11:46 PM, Vivek Gautam
wrote:
On Tue, Jul 24, 2018 at 8:51 PM, Robin Murphy
wrote:
On 19/07/18 11:15, Vivek Gautam wrote:
From: Sricharan R
The smmu needs to be functional only
On 2018/7/26 22:16, Robin Murphy wrote:
> On 2018-07-26 4:44 AM, Leizhen (ThunderTown) wrote:
>>
>>
>> On 2018/7/25 5:51, Robin Murphy wrote:
>>> On 2018-07-12 7:18 AM, Zhen Lei wrote:
v2 -> v3: Add a bootup option "iommu_strict_mode" so that the
manager can choose which mode to be
On 07/23/2018 05:16 PM, Robin Murphy wrote:
> Whilst the common firmware code invoked by dma_configure() initialises
> devices' DMA masks according to limitations described by the respective
> properties ("dma-ranges" for OF and _DMA/IORT for ACPI), the nature of
> the dma_set_mask() API leads
On 07/26/2018 06:52 PM, Grygorii Strashko wrote:
>
>
> On 07/23/2018 05:16 PM, Robin Murphy wrote:
>> Now that we can track upstream DMA constraints properly with
>> bus_dma_mask instead of trying (and failing) to maintain it in
>> coherent_dma_mask, it doesn't make much sense for the
On Thu, Jul 26, 2018 at 04:06:05PM -0400, Tony Battersby wrote:
> On 07/26/2018 03:42 PM, Matthew Wilcox wrote:
> > On Thu, Jul 26, 2018 at 02:54:56PM -0400, Tony Battersby wrote:
> >> dma_pool_free() scales poorly when the pool contains many pages because
> >> pool_find_page() does a linear scan
On 07/23/2018 05:16 PM, Robin Murphy wrote:
> Now that we can track upstream DMA constraints properly with
> bus_dma_mask instead of trying (and failing) to maintain it in
> coherent_dma_mask, it doesn't make much sense for the firmware code to
> be touching the latter at all. It's merely
Improper DMA backing with IOMMU has now been resolved using the new
driver core option that allows avoiding the implicit backing, hence
detaching is no longer necessary.
This reverts commit b59fb482b52269977ee5de205308e5b236a03917.
Signed-off-by: Dmitry Osipenko
---
Implicit DMA backing with IOMMU breaks Nouveau on Tegra. The current
approach of detaching the device from the IOMMU, added in commit
b59fb482b522 ("drm/nouveau: tegra: Detach from ARM DMA/IOMMU mapping"),
works only for arm32, which has CONFIG_ARM_DMA_USE_IOMMU, but not for
arm64, which
The Host1x driver manages the IOMMU by itself; backing DMA with IOMMU
in the driver core breaks the Host1x driver.
Signed-off-by: Dmitry Osipenko
---
drivers/gpu/host1x/dev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
index
Tegra DRM manages the IOMMU by itself; backing DMA with IOMMU in the
driver core breaks the Tegra driver.
Signed-off-by: Dmitry Osipenko
---
drivers/gpu/drm/tegra/dc.c | 1 +
drivers/gpu/drm/tegra/gr2d.c | 1 +
drivers/gpu/drm/tegra/gr3d.c | 1 +
drivers/gpu/drm/tegra/vic.c | 1 +
4 files
Respect a device driver's requirement that its DMA not be implicitly
backed with IOMMU by skipping the backing setup for drivers that do not
want it.
Signed-off-by: Dmitry Osipenko
---
drivers/of/device.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/drivers/of/device.c
This allows device drivers to convey to the driver core that implicit
IOMMU backing for a device's DMA shouldn't happen. It is needed by
drivers that manage the IOMMU themselves, such as the NVIDIA Tegra
GPU driver.
Signed-off-by: Dmitry Osipenko
---
include/linux/device.h |
Hello,
There is a problem on ARM with DMA allocations made by device drivers:
the allocations get implicitly backed with an IOMMU mapping by the
driver core if an IOMMU is present in the system and could handle the
device. This is undesired behaviour for drivers that manage
This fixes a kernel crash on NVIDIA Tegra when the kernel is compiled
in a multiplatform configuration and the IPMMU-VMSA driver is enabled.
Cc: # v3.20+
Signed-off-by: Dmitry Osipenko
---
drivers/iommu/ipmmu-vmsa.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/drivers/iommu/ipmmu-vmsa.c
On 07/26/2018 03:42 PM, Matthew Wilcox wrote:
> On Thu, Jul 26, 2018 at 02:54:56PM -0400, Tony Battersby wrote:
>> dma_pool_free() scales poorly when the pool contains many pages because
>> pool_find_page() does a linear scan of all allocated pages. Improve its
>> scalability by replacing the
On 07/26/2018 03:37 PM, Andy Shevchenko wrote:
> On Thu, Jul 26, 2018 at 9:54 PM, Tony Battersby wrote:
>> dma_pool_alloc() scales poorly when allocating a large number of pages
>> because it does a linear scan of all previously-allocated pages before
>> allocating a new one. Improve its
On Thu, Jul 26, 2018 at 9:54 PM, Tony Battersby wrote:
> dma_pool_free() scales poorly when the pool contains many pages because
> pool_find_page() does a linear scan of all allocated pages. Improve its
> scalability by replacing the linear scan with a red-black tree lookup.
> In big O notation,
On Thu, Jul 26, 2018 at 02:54:56PM -0400, Tony Battersby wrote:
> dma_pool_free() scales poorly when the pool contains many pages because
> pool_find_page() does a linear scan of all allocated pages. Improve its
> scalability by replacing the linear scan with a red-black tree lookup.
> In big O
On Thu, Jul 26, 2018 at 9:54 PM, Tony Battersby wrote:
> dma_pool_alloc() scales poorly when allocating a large number of pages
> because it does a linear scan of all previously-allocated pages before
> allocating a new one. Improve its scalability by maintaining a separate
> list of pages that
dma_pool_free() scales poorly when the pool contains many pages because
pool_find_page() does a linear scan of all allocated pages. Improve its
scalability by replacing the linear scan with a red-black tree lookup.
In big O notation, this improves the algorithm from O(n^2) to O(n * log n).
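The replacement described above can be sketched in userspace C. Here glibc's tsearch()/tfind() stand in for the kernel's lib/rbtree.c, since glibc happens to implement them as red-black trees; the struct and function names are simplified stand-ins for the real patch, which also has to match an address anywhere inside a page rather than only the page's start address:

```c
/* Userspace sketch only: hypothetical simplification of the kernel's
 * struct dma_page and pool_find_page(), keyed on the page's start address. */
#include <search.h>
#include <stddef.h>

struct dma_page {
	void *vaddr;		/* start of the page's buffer */
	size_t size;		/* size of the buffer */
};

static void *page_tree;		/* root of the red-black tree */

/* Order pages by buffer start address. */
static int page_cmp(const void *a, const void *b)
{
	const struct dma_page *pa = a;
	const struct dma_page *pb = b;

	if (pa->vaddr < pb->vaddr)
		return -1;
	if (pa->vaddr > pb->vaddr)
		return 1;
	return 0;
}

/* O(log n) insert, replacing list insertion on a linear list. */
void pool_insert_page(struct dma_page *page)
{
	tsearch(page, &page_tree, page_cmp);
}

/* O(log n) lookup, replacing the linear scan in pool_find_page(). */
struct dma_page *pool_find_page(void *vaddr)
{
	struct dma_page key = { .vaddr = vaddr, .size = 0 };
	struct dma_page **found = tfind(&key, &page_tree, page_cmp);

	return found ? *found : NULL;
}
```

With n pages, n frees cost O(n log n) total instead of O(n^2), matching the claim above.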
drivers/scsi/mpt3sas is running into a scalability problem with the
kernel's DMA pool implementation. With an LSI/Broadcom SAS 9300-8i
12Gb/s HBA and max_sgl_entries=256, during modprobe, mpt3sas does the
equivalent of:
chain_dma_pool = dma_pool_create(size = 128);
for (i = 0; i < 373959; i++)
Replace chain_dma_pool with direct calls to dma_alloc_coherent() and
dma_free_coherent(). Since the chain lookup can involve hundreds of
thousands of allocations, it is worthwhile to avoid the overhead of the
dma_pool API.
Signed-off-by: Tony Battersby
---
The original code called
dma_pool_alloc() scales poorly when allocating a large number of pages
because it does a linear scan of all previously-allocated pages before
allocating a new one. Improve its scalability by maintaining a separate
list of pages that have free blocks ready to (re)allocate. In big O
notation, this
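The separate-list idea above can be sketched as follows; this is a hypothetical userspace simplification (the kernel patch keeps such a list in struct dma_pool, and the names here are made up for illustration):

```c
/* Userspace sketch only: a pool that tracks, separately, which pages
 * still have free blocks, so allocation never scans full pages. */
#include <stddef.h>

struct page_s {
	struct page_s *next_avail;	/* links pages with free blocks */
	int free_blocks;		/* blocks still available in this page */
};

struct pool_s {
	struct page_s *avail;		/* head: only pages with free_blocks > 0 */
};

/* O(1): take a block from the first page known to have one, instead of
 * linearly scanning every previously-allocated page. */
struct page_s *pool_take_page(struct pool_s *pool)
{
	struct page_s *page = pool->avail;

	if (!page)
		return NULL;		/* caller would allocate a fresh page */
	if (--page->free_blocks == 0)
		pool->avail = page->next_avail;	/* page now full: unlink it */
	return page;
}

/* On free, a page that was full regains a block and rejoins the list. */
void pool_return_block(struct pool_s *pool, struct page_s *page)
{
	if (page->free_blocks++ == 0) {
		page->next_avail = pool->avail;
		pool->avail = page;
	}
}
```

The design choice is that full pages simply drop off the available list, so the cost of an allocation no longer grows with the total number of pages in the pool.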
Even without the MSI trick, we can still do a lot better than hogging
the entire queue while it drains. All we actually need to do for the
necessary guarantee of completion is wait for our particular command to
have been consumed - as long as we keep track of where we inserted it,
there is no need
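A minimal sketch of that completion rule, assuming simple monotonically increasing indices rather than the real SMMUv3 wrap-bit encoding and MMIO registers (all names here are hypothetical):

```c
/* Userspace sketch only: wait for *our* command to be consumed instead
 * of waiting for the whole queue to drain. */
#include <stdbool.h>
#include <stdint.h>

struct queue_s {
	uint32_t prod;	/* next slot the producer will fill */
	uint32_t cons;	/* next slot the consumer will process */
};

/* Insert a command and remember which slot it occupies. */
uint32_t queue_insert(struct queue_s *q)
{
	return q->prod++;
}

/* The completion guarantee only needs the consumer index to have moved
 * past our slot; commands queued after ours are irrelevant to us. */
bool queue_consumed(const struct queue_s *q, uint32_t slot)
{
	return (int32_t)(q->cons - slot) > 0;	/* wrap-safe comparison */
}
```

A waiter would spin (or sleep) on queue_consumed() for its own recorded slot, so later producers hogging the queue no longer delay it.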
On 26/07/18 08:12, Vivek Gautam wrote:
On Wed, Jul 25, 2018 at 11:46 PM, Vivek Gautam
wrote:
On Tue, Jul 24, 2018 at 8:51 PM, Robin Murphy wrote:
On 19/07/18 11:15, Vivek Gautam wrote:
From: Sricharan R
The smmu needs to be functional only when the respective
masters using it are
On 26/07/18 04:28, Tian, Kevin wrote:
>> hierarchical domain might be the right way to go, but let's do more
>> thinking on any corner cases.
>>
>
> btw maybe we don't need make it 'hierarchical', as maintaining
> hierarchy usually brings more work. What we require is possibly
> just the
On 26/07/18 04:03, Tian, Kevin wrote:
>> Whenever I come back to hierarchical IOMMU domains I reject it as too
>> complicated, but maybe that is what we need. I find it difficult to
>> reason about because domains currently represent both a collection of
> devices and one or more address
On 2018-07-26 8:20 AM, Leizhen (ThunderTown) wrote:
On 2018/7/25 6:25, Robin Murphy wrote:
On 2018-07-12 7:18 AM, Zhen Lei wrote:
To support the non-strict mode, TLBI and sync are now issued only in
the strict mode. But for the non-leaf case, always follow strict mode.
Use the lowest bit of the iova
On 2018-07-26 4:44 AM, Leizhen (ThunderTown) wrote:
On 2018/7/25 5:51, Robin Murphy wrote:
On 2018-07-12 7:18 AM, Zhen Lei wrote:
v2 -> v3: Add a bootup option "iommu_strict_mode" so that the
manager can choose which mode to be used. The first 5 patches
have not changed. +
On Thu, Jul 26, 2018 at 12:08:33AM +0300, Laurent Pinchart wrote:
> Hi Geert,
>
> Thank you for the patch.
>
> On Wednesday, 25 July 2018 16:10:29 EEST Geert Uytterhoeven wrote:
> > The Renesas IPMMU-VMSA driver supports not just R-Car H2 and M2 SoCs,
> > but also other R-Car Gen2 and R-Car Gen3
Thanks,
applied to the dma-mapping tree.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Mon, Jul 23, 2018 at 11:16:10PM +0100, Robin Murphy wrote:
> Bonus question: Now that we're collecting DMA API code in kernel/dma/
> do we want to reevaluate dma-iommu? On the one hand it's the bulk of a
> dma_ops implementation so should perhaps move, but on the other it's
> entirely
On 2018/7/25 6:46, Robin Murphy wrote:
> On 2018-07-12 7:18 AM, Zhen Lei wrote:
>> Because the non-strict mode introduces a vulnerability window, add a
>> bootup option so that the manager can choose which mode to be used. The
>> default mode is IOMMU_STRICT.
>>
>> Signed-off-by: Zhen Lei
On 2018/7/25 6:25, Robin Murphy wrote:
> On 2018-07-12 7:18 AM, Zhen Lei wrote:
>> To support the non-strict mode, TLBI and sync are now issued only in
>> the strict mode. But for the non-leaf case, always follow strict mode.
>>
>> Use the lowest bit of the iova parameter to pass the strict mode:
>>
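The bit-stuffing trick quoted above can be sketched like this; the helper names are hypothetical, and the point is only that page-aligned IOVAs always have bit 0 clear, so it is free to carry a flag through an existing callback signature:

```c
/* Sketch only: smuggle a strict/non-strict flag in bit 0 of a
 * page-aligned IOVA (hypothetical helper names, not the SMMU code). */
#include <stdbool.h>
#include <stdint.h>

#define IOVA_STRICT_BIT 0x1ULL

/* Set bit 0 when the caller wants strict (synchronous) invalidation. */
static uint64_t iova_pack_strict(uint64_t iova, bool strict)
{
	return strict ? (iova | IOVA_STRICT_BIT) : iova;
}

/* Callee side: recover the flag... */
static bool iova_is_strict(uint64_t iova)
{
	return iova & IOVA_STRICT_BIT;
}

/* ...and the original page-aligned address. */
static uint64_t iova_unpack(uint64_t iova)
{
	return iova & ~IOVA_STRICT_BIT;
}
```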
On Wed, Jul 25, 2018 at 11:46 PM, Vivek Gautam
wrote:
> On Tue, Jul 24, 2018 at 8:51 PM, Robin Murphy wrote:
>> On 19/07/18 11:15, Vivek Gautam wrote:
>>>
>>> From: Sricharan R
>>>
>>> The smmu needs to be functional only when the respective
>>> masters using it are active. The device_link