> From: Jason Gunthorpe
> Sent: Friday, October 22, 2021 7:31 AM
>
> On Thu, Oct 21, 2021 at 02:26:00AM +, Tian, Kevin wrote:
>
> > But in reality only Intel integrated GPUs have this special no-snoop
> > trick (fixed knowledge), with a dedicated IOMMU which doesn't
> > support enforce-snoop
On 10/21/21 4:10 PM, Marc Zyngier wrote:
> On Thu, 21 Oct 2021 03:22:30 +0100,
> Lu Baolu wrote:
> > On 10/20/21 10:22 PM, Marc Zyngier wrote:
> > > On Wed, 20 Oct 2021 06:21:44 +0100,
> > > Lu Baolu wrote:
> > > > On 2021/10/20 0:37, Sven Peter via iommu wrote:
> > > > > + /*
> > > > > + * Check that CPU pages can be repr
On Thu, Oct 21, 2021 at 02:26:00AM +, Tian, Kevin wrote:
> But in reality only Intel integrated GPUs have this special no-snoop
> trick (fixed knowledge), with a dedicated IOMMU which doesn't
> support enforce-snoop format at all. In this case there is no choice
> that the user can further ma
On Thu, Oct 21, 2021 at 03:58:02PM +0100, Jean-Philippe Brucker wrote:
> On Thu, Oct 21, 2021 at 02:26:00AM +, Tian, Kevin wrote:
> > > I'll leave it to Jean to confirm. If only coherent DMA can be used in
> > > the guest on other platforms, suppose VFIO should not blindly set
> > > IOMMU_CACHE
On 2021-09-28 23:22, Gustavo A. R. Silva wrote:
Use 2-factor argument form kvcalloc() instead of kvzalloc().
If we have a thing for that now, then sure, why not. FWIW this can't
ever overflow due to where "count" comes from, but it has no reason to
be special.
Acked-by: Robin Murphy
Link
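Robin's point about overflow is the general motivation for the 2-factor form: the allocator checks the `count * size` multiplication itself instead of trusting every caller to open-code it safely. A minimal userspace sketch of that check (`checked_calloc` is a hypothetical stand-in for kvcalloc(), not kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for a 2-factor allocator such as kvcalloc():
 * the multiplication is overflow-checked inside the allocator, so no
 * caller has to write "n * size" by hand. */
static void *checked_calloc(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;	/* n * size would overflow */
	return calloc(n, size);
}
```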
On Tue, Sep 28, 2021 at 05:22:29PM -0500, Gustavo A. R. Silva wrote:
> Use 2-factor argument form kvcalloc() instead of kvzalloc().
>
> Link: https://github.com/KSPP/linux/issues/162
> Signed-off-by: Gustavo A. R. Silva
Looks right.
Reviewed-by: Kees Cook
--
Kees Cook
On Thu, Oct 21, 2021 at 02:26:00AM +, Tian, Kevin wrote:
> > I'll leave it to Jean to confirm. If only coherent DMA can be used in
> > the guest on other platforms, suppose VFIO should not blindly set
> > IOMMU_CACHE and in concept it should deny assigning a non-coherent
> > device since no co-
Add a helper to check if a potentially blocking operation should
dip into the atomic pools.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f2ec40f5733
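The helper's job can be sketched in isolation: an allocation has to come from the pre-populated atomic pools when the path taken would block but the caller's gfp mask forbids blocking. A userspace sketch of that decision, with `ALLOW_BLOCKING` as an invented stand-in for `__GFP_DIRECT_RECLAIM` (an illustration, not the kernel helper itself):

```c
#include <assert.h>
#include <stdbool.h>

#define ALLOW_BLOCKING 0x1u	/* stand-in for __GFP_DIRECT_RECLAIM */

/* An allocation must dip into the atomic pools when the operation
 * might block (e.g. to remap memory) but the gfp mask says the
 * caller cannot sleep. */
static bool must_use_atomic_pool(unsigned int gfp, bool op_might_block)
{
	return op_might_block && !(gfp & ALLOW_BLOCKING);
}
```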
Add a new helper to deal with the swiotlb case. This keeps the code
nicely bundled and removes the unnecessary call to
dma_direct_optimal_gfp_mask for the swiotlb case.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 24 +++-
1 file changed, 15 insertions(+), 9
swiotlb_alloc and swiotlb_free are properly stubbed out if
CONFIG_DMA_RESTRICTED_POOL is not set, so skip the extra checks.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
Instead of blindly running into a blocking operation for a non-blocking gfp,
return NULL and spew an error. Note that Kconfig prevents this for all
currently relevant platforms, and this is just a debug check.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 9 +
1 file change
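The check described above can be illustrated with a small userspace sketch: rather than proceeding into a blocking path with a non-blocking gfp, fail the allocation loudly (all names here are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define ALLOW_BLOCKING 0x1u	/* stand-in for __GFP_DIRECT_RECLAIM */

/* Debug-check sketch: refuse to enter a blocking path when the caller
 * asked for a non-blocking allocation, instead of silently sleeping. */
static void *alloc_checked(size_t size, unsigned int gfp, bool path_blocks)
{
	if (path_blocks && !(gfp & ALLOW_BLOCKING)) {
		fprintf(stderr, "refusing blocking alloc with non-blocking gfp\n");
		return NULL;
	}
	return malloc(size);
}
```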
Add a big central !dev_is_dma_coherent(dev) block to deal with as many
of the uncached allocation schemes as possible, and document the schemes
a bit better.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 58 -
1 file changed, 36 insertions(+), 22 de
Split the code for DMA_ATTR_NO_KERNEL_MAPPING allocations into a separate
helper to make dma_direct_alloc a little more readable.
Signed-off-by: Christoph Hellwig
Acked-by: David Rientjes
---
kernel/dma/direct.c | 31 ---
1 file changed, 20 insertions(+), 11 deletion
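The shape of that split can be sketched: the attribute-specific path becomes its own small function and the main allocator stays a readable dispatcher. Names and the flag value are invented for illustration; this is the structure, not the kernel code:

```c
#include <assert.h>
#include <stdlib.h>

#define ATTR_NO_KERNEL_MAPPING 0x1u	/* illustrative flag */

/* The special case lives in its own helper: the caller only needs
 * backing pages, so no kernel virtual address would be set up here. */
static void *alloc_no_mapping(size_t size)
{
	return malloc(size);
}

/* ...which keeps the main entry point a short dispatcher. */
static void *dma_alloc_sketch(size_t size, unsigned int attrs)
{
	if (attrs & ATTR_NO_KERNEL_MAPPING)
		return alloc_no_mapping(size);
	return malloc(size);	/* common path */
}
```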
Add a local variable to track if we want to remap the returned address
using vmap and use that to simplify the code flow.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 47 +++--
1 file changed, 24 insertions(+), 23 deletions(-)
diff --git a/k
We must never let unencrypted memory go back into the general page pool.
So if we fail to set it back to encrypted when freeing DMA memory, leak
the memory instead and warn the user.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 18 ++
1 file changed, 14 insertions(+), 4 d
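The rationale in a nutshell: pages that could not be flipped back to encrypted must never reach the general allocator, so the error path leaks them on purpose. A userspace sketch of that policy (the `bool` return exists only so the behaviour can be observed; the names are invented):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns true if the memory was actually freed.  If re-encryption
 * failed, the memory is deliberately leaked: handing still-unencrypted
 * pages back to the general pool would expose them to unrelated users. */
static bool free_dma_sketch(void *addr, bool reencrypted_ok)
{
	if (!reencrypted_ok) {
		fprintf(stderr, "leaking DMA memory that can't be re-encrypted\n");
		return false;	/* deliberate leak */
	}
	free(addr);
	return true;
}
```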
When dma_set_decrypted fails for the remapping case in dma_direct_alloc
we also need to unmap the pages before freeing them.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
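The bug class here is classic error-unwind ordering: teardown must mirror setup in reverse, so a failure after the mapping step has to unmap before freeing. A minimal sketch with invented stand-ins for the map/unmap and decrypt steps:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

static int unmaps;	/* counts teardowns so ordering can be checked */

static void *map_sketch(void *pages)  { return pages; }	/* stand-in for vmap() */
static void unmap_sketch(void *vaddr) { (void)vaddr; unmaps++; }

/* If the step after mapping fails, undo the mapping first, then free
 * the pages: reverse order of setup. */
static void *alloc_remap_sketch(bool decrypt_fails)
{
	void *pages = malloc(64);
	if (!pages)
		return NULL;
	void *vaddr = map_sketch(pages);
	if (decrypt_fails) {		/* stand-in for dma_set_decrypted() failing */
		unmap_sketch(vaddr);	/* undo step 2 */
		free(pages);		/* then undo step 1 */
		return NULL;
	}
	return vaddr;
}
```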
Factor out helpers that make dealing with memory encryption a little less
cumbersome.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 56 -
1 file changed, 25 insertions(+), 31 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direc
Hi all,
Linus complained about the complex flow in dma_direct_alloc, so this
tries to simplify it a bit, and while I was at it I also made sure that
unencrypted pages never leak back into the page allocator.
Changes since v1:
- fix a missing return
- add a new patch to fix a pre-existing missin
On Wed, Oct 20, 2021, at 07:21, Lu Baolu wrote:
> On 2021/10/20 0:37, Sven Peter via iommu wrote:
>> The iova allocator is capable of handling any granularity which is a power
>> of two. Remove the much stronger condition that the granularity must be
>> smaller or equal to the CPU page size from
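The relaxed condition is easy to state: the allocator only needs the granularity to be a power of two, not to be bounded by the CPU page size. The standard bit trick for that check (a generic sketch, not the iova code):

```c
#include <assert.h>
#include <stdbool.h>

/* A value is a power of two iff it is non-zero and has exactly one bit
 * set; g & (g - 1) clears the lowest set bit, so the result is zero
 * only when that was the sole bit. */
static bool is_power_of_two(unsigned long g)
{
	return g != 0 && (g & (g - 1)) == 0;
}
```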
On Thu, 21 Oct 2021 03:22:30 +0100,
Lu Baolu wrote:
>
> On 10/20/21 10:22 PM, Marc Zyngier wrote:
> > On Wed, 20 Oct 2021 06:21:44 +0100,
> > Lu Baolu wrote:
> >>
> >> On 2021/10/20 0:37, Sven Peter via iommu wrote:
> >>> + /*
> >>> + * Check that CPU pages can be represented by the IOVA granu
On Tue, Oct 19, 2021 at 12:56:36PM -0700, David Rientjes wrote:
> > - dma_set_encrypted(dev, vaddr, 1 << page_order);
> > + if (dma_set_encrypted(dev, vaddr, 1 << page_order)) {
> > + pr_warn_ratelimited(
> > + "leaking DMA memory that can't be re-encrypted\n");
> >
On Tue, Oct 19, 2021 at 12:54:54PM -0700, David Rientjes wrote:
> > - 1 << get_order(size));
> > - if (err)
> > - goto out_free_pages;
> > - }
> > + err = dma_set_decrypted(dev, ret, size);