On 3/26/2019 2:39 AM, Bjorn Andersson wrote:
On Sun 09 Sep 23:25 PDT 2018, Vivek Gautam wrote:
There are scenarios where drivers are required to make a
scm call in atomic context, such as in one of the qcom's
arm-smmu-500 errata [1].
[1]
On 3/25/2019 7:00 AM, Lu Baolu wrote:
> A parent device might create different types of mediated
> devices. For example, a mediated device could be created
> by the parent device with full isolation and protection
> provided by the IOMMU. One usage case could be found on
> Intel platforms where
On 2/22/2019 7:49 AM, Lu Baolu wrote:
> This adds helpers to attach or detach a domain to a
> group. This will replace iommu_attach_group() which
> only works for non-mdev devices.
>
> If a domain is attaching to a group which includes the
> mediated devices, it should attach to the iommu
Memory is incorrectly freed using the direct ops, as dma_map_ops = NULL.
Oops...
After reversing the order of the calls to arch_teardown_dma_ops() and
devres_release_all(), dma_map_ops is still valid, and the DMA memory is
now released using __iommu_free_attrs():
+sata_rcar ee30.sata:
Hi John,
CC robh
On Tue, Mar 26, 2019 at 12:42 PM John Garry wrote:
> > Memory is incorrectly freed using the direct ops, as dma_map_ops = NULL.
> > Oops...
> >
> > After reversing the order of the calls to arch_teardown_dma_ops() and
> > devres_release_all(), dma_map_ops is still valid, and
On Mon, 25 Mar 2019 09:30:34 +0800
Lu Baolu wrote:
> A parent device might create different types of mediated
> devices. For example, a mediated device could be created
> by the parent device with full isolation and protection
> provided by the IOMMU. One usage case could be found on
> Intel
On Mon, 25 Mar 2019 09:30:35 +0800
Lu Baolu wrote:
> This adds helpers to attach or detach a domain to a
> group. This will replace iommu_attach_group() which
> only works for non-mdev devices.
>
> If a domain is attaching to a group which includes the
> mediated devices, it should attach to
On Mon, 25 Mar 2019 09:30:36 +0800
Lu Baolu wrote:
> This adds the support to determine the isolation type
> of a mediated device group by checking whether it has
> an iommu device. If an iommu device exists, an iommu
> domain will be allocated and then attached to the iommu
> device. Otherwise,
On Wed, Jan 30, 2019 at 08:44:27AM +0100, Christoph Hellwig wrote:
> On Tue, Jan 29, 2019 at 09:36:08PM -0500, Michael S. Tsirkin wrote:
> > This has been discussed ad nauseum. virtio is all about compatibility.
> > Losing a couple of lines of code isn't worth breaking working setups.
> > People
On 26/03/2019 12:31, Geert Uytterhoeven wrote:
> Hi John,
> CC robh
> On Tue, Mar 26, 2019 at 12:42 PM John Garry wrote:
> > Memory is incorrectly freed using the direct ops, as dma_map_ops = NULL.
> > Oops...
> > After reversing the order of the calls to arch_teardown_dma_ops() and
> > devres_release_all(),
The CMA allocation will skip allocations of single pages to save CMA
resource. This requires its callers to fall back to page allocations
from the normal area.
So this patch moves the alloc_pages() call to the fallback routines.
Signed-off-by: Nicolin Chen
---
Changelog
v1->v2:
* PATCH-2:
The addresses within a single page are always contiguous, so it is
not necessary to always allocate one single page from the CMA area.
Since the CMA area has a limited, predefined size, it may
run out of space in heavy use cases, where there might be quite a
lot of CMA pages being allocated for
This series of patches tries to save single pages in the CMA area by
bypassing all CMA single-page allocations and allocating normal pages
instead, as all addresses within one single page are contiguous.
We had once applied PATCH-5 but reverted it, as not all of the
callers handled the fallback
The CMA allocation will skip allocations of single pages to save CMA
resource. This requires its callers to fall back to page allocations
from the normal area. So this patch adds fallback routines.
Signed-off-by: Nicolin Chen
---
arch/arm64/mm/dma-mapping.c | 19 ---
1 file
The CMA allocation will skip allocations of single pages to save CMA
resource. This requires its callers to fall back to page allocations
from the normal area. So this patch adds fallback routines.
Note: amd_iommu driver uses dma_alloc_from_contiguous() as a fallback
allocation and uses
The CMA allocation will skip allocations of single pages to save CMA
resource. This requires its callers to fall back to page allocations
from the normal area. So this patch adds fallback routines.
Signed-off-by: Nicolin Chen
---
arch/arm/mm/dma-mapping.c | 13 ++---
1 file changed, 10
The addresses within a single page are always contiguous, so it is
not necessary to always allocate one single page from the CMA area.
Since the CMA area has a limited, predefined size, it may
run out of space in heavy use cases, where there might be quite a
lot of CMA pages being allocated for
The CMA allocation will skip allocations of single pages to save CMA
resource. This requires its callers to fall back to page allocations
from the normal area.
So this patch moves the alloc_pages() call to the fallback routines.
Signed-off-by: Nicolin Chen
---
kernel/dma/remap.c | 2 +-
1 file
On Tue, Mar 26, 2019 at 03:49:56PM -0700, Nicolin Chen wrote:
> @@ -116,7 +116,7 @@ int __init dma_atomic_pool_init(gfp_t gfp, pgprot_t prot)
> if (dev_get_cma_area(NULL))
> page = dma_alloc_from_contiguous(NULL, nr_pages,
>