Re: [PATCH 1/2] swiotlb: Remove alloc_size argument to swiotlb_tbl_map_single()

2024-05-06 Thread Petr Tesařík
On Mon, 6 May 2024 15:14:05 +
Michael Kelley  wrote:

> From: mhkelle...@gmail.com 
> >   
> 
> Gentle ping ...
> 
> Anyone interested in reviewing this series of two patches?  It fixes
> an edge case bug in the size of the swiotlb request coming from
> dma-iommu, and plugs a hole that allows untrusted devices to see
> kernel data unrelated to the intended DMA transfer.  I think these are
> the last "known bugs" that came out of the extensive swiotlb discussion
> and patches for 6.9.
> 
> Michael
> 
> > Currently swiotlb_tbl_map_single() takes alloc_align_mask and
> > alloc_size arguments to specify a swiotlb allocation that is
> > larger than mapping_size. This larger allocation is used solely
> > by iommu_dma_map_page() to handle untrusted devices that should
> > not have DMA visibility to memory pages that are partially used
> > for unrelated kernel data.
> > 
> > Having two arguments to specify the allocation is redundant. While
> > alloc_align_mask naturally specifies the alignment of the starting
> > address of the allocation, it can also implicitly specify the size
> > by rounding up the mapping_size to that alignment.
> > 
> > Additionally, the current approach has an edge case bug.
> > iommu_dma_map_page() already does the rounding up to compute the
> > alloc_size argument. But swiotlb_tbl_map_single() then calculates
> > the alignment offset based on the DMA min_align_mask, and adds
> > that offset to alloc_size. If the offset is non-zero, the addition
> > may result in a value that is larger than the max the swiotlb can
> > allocate. If the rounding up is done _after_ the alignment offset is
> > added to the mapping_size (and the original mapping_size conforms to
> > the value returned by swiotlb_max_mapping_size), then the max that the
> > swiotlb can allocate will not be exceeded.
> > 
> > In view of these issues, simplify the swiotlb_tbl_map_single() interface
> > by removing the alloc_size argument. Most call sites pass the same
> > value for mapping_size and alloc_size, and they pass alloc_align_mask
> > as zero. Just remove the redundant argument from these callers, as they
> > will see no functional change. For iommu_dma_map_page() also remove
> > the alloc_size argument, and have swiotlb_tbl_map_single() compute
> > the alloc_size by rounding up mapping_size after adding the offset
> > based on min_align_mask. This has the side effect of fixing the
> > edge case bug but with no other functional change.
> > 
> > Also add a sanity test on the alloc_align_mask. While IOMMU code
> > currently ensures the granule is not larger than PAGE_SIZE, if
> > that guarantee were to be removed in the future, the downstream
> > effect on the swiotlb might go unnoticed until strange allocation
> > failures occurred.
> > 
> > Tested on an ARM64 system with 16K page size and some kernel
> > test-only hackery to allow modifying the DMA min_align_mask and
> > the granule size that becomes the alloc_align_mask. Tested these
> > combinations with a variety of original memory addresses and
> > sizes, including those that reproduce the edge case bug:
> > 
> > * 4K granule and 0 min_align_mask
> > * 4K granule and 0xFFF min_align_mask (4K - 1)
> > * 16K granule and 0xFFF min_align_mask
> > * 64K granule and 0xFFF min_align_mask
> > * 64K granule and 0x3FFF min_align_mask (16K - 1)
> > 
> > With the changes, all combinations pass.
> > 
> > Signed-off-by: Michael Kelley 

Looks good to me. My previous discussion was not related to this
change; I was merely trying to find an answer to your question whether
anything else should be changed, and IIUC the conclusion was that it
should not.

Reviewed-by: Petr Tesarik 

Petr T

> > ---
> > I haven't used any "Fixes:" tags. This patch really should be
> > backported only if all the other recent swiotlb fixes get backported,
> > and I'm unclear on whether that will happen.
> > 
> > I saw the brief discussion about removing the "dir" parameter from
> > swiotlb_tbl_map_single(). That removal could easily be done as part
> > of this patch, since it's already changing the swiotlb_tbl_map_single()
> > parameters. But I think the conclusion of the discussion was to leave
> > the "dir" parameter for symmetry with the swiotlb_sync_*() functions.
> > Please correct me if that's wrong, and I'll respin this patch to do
> > the removal.
> > 
> >  drivers/iommu/dma-iommu.c |  2 +-
> >  drivers/xen/swiotlb-xen.c |  2 +-
> >  include/linux/swiotlb.h   |  2 +-
> >  kernel/dma/swiotlb.c  | 56 +--
> >  4 files changed, 45 insertions(+), 17 deletions(-)
> > 
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index 07d087eecc17..c21ef1388499 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -1165,7 +1165,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
> > trace_swiotlb_bounced(dev, phys, size);
> > 
> > aligned_size = iova_align(iovad, 

RE: [PATCH 1/2] swiotlb: Remove alloc_size argument to swiotlb_tbl_map_single()

2024-05-06 Thread Michael Kelley
From: mhkelle...@gmail.com 
> 

Gentle ping ...

Anyone interested in reviewing this series of two patches?  It fixes
an edge case bug in the size of the swiotlb request coming from
dma-iommu, and plugs a hole that allows untrusted devices to see
kernel data unrelated to the intended DMA transfer.  I think these are
the last "known bugs" that came out of the extensive swiotlb discussion
and patches for 6.9.

Michael

> Currently swiotlb_tbl_map_single() takes alloc_align_mask and
> alloc_size arguments to specify a swiotlb allocation that is
> larger than mapping_size. This larger allocation is used solely
> by iommu_dma_map_page() to handle untrusted devices that should
> not have DMA visibility to memory pages that are partially used
> for unrelated kernel data.
> 
> Having two arguments to specify the allocation is redundant. While
> alloc_align_mask naturally specifies the alignment of the starting
> address of the allocation, it can also implicitly specify the size
> by rounding up the mapping_size to that alignment.
> 
> Additionally, the current approach has an edge case bug.
> iommu_dma_map_page() already does the rounding up to compute the
> alloc_size argument. But swiotlb_tbl_map_single() then calculates
> the alignment offset based on the DMA min_align_mask, and adds
> that offset to alloc_size. If the offset is non-zero, the addition
> may result in a value that is larger than the max the swiotlb can
> allocate. If the rounding up is done _after_ the alignment offset is
> added to the mapping_size (and the original mapping_size conforms to
> the value returned by swiotlb_max_mapping_size), then the max that the
> swiotlb can allocate will not be exceeded.
> 
> In view of these issues, simplify the swiotlb_tbl_map_single() interface
> by removing the alloc_size argument. Most call sites pass the same
> value for mapping_size and alloc_size, and they pass alloc_align_mask
> as zero. Just remove the redundant argument from these callers, as they
> will see no functional change. For iommu_dma_map_page() also remove
> the alloc_size argument, and have swiotlb_tbl_map_single() compute
> the alloc_size by rounding up mapping_size after adding the offset
> based on min_align_mask. This has the side effect of fixing the
> edge case bug but with no other functional change.
> 
> Also add a sanity test on the alloc_align_mask. While IOMMU code
> currently ensures the granule is not larger than PAGE_SIZE, if
> that guarantee were to be removed in the future, the downstream
> effect on the swiotlb might go unnoticed until strange allocation
> failures occurred.
> 
> Tested on an ARM64 system with 16K page size and some kernel
> test-only hackery to allow modifying the DMA min_align_mask and
> the granule size that becomes the alloc_align_mask. Tested these
> combinations with a variety of original memory addresses and
> sizes, including those that reproduce the edge case bug:
> 
> * 4K granule and 0 min_align_mask
> * 4K granule and 0xFFF min_align_mask (4K - 1)
> * 16K granule and 0xFFF min_align_mask
> * 64K granule and 0xFFF min_align_mask
> * 64K granule and 0x3FFF min_align_mask (16K - 1)
> 
> With the changes, all combinations pass.
> 
> Signed-off-by: Michael Kelley 
> ---
> I haven't used any "Fixes:" tags. This patch really should be
> backported only if all the other recent swiotlb fixes get backported,
> and I'm unclear on whether that will happen.
> 
> I saw the brief discussion about removing the "dir" parameter from
> swiotlb_tbl_map_single(). That removal could easily be done as part
> of this patch, since it's already changing the swiotlb_tbl_map_single()
> parameters. But I think the conclusion of the discussion was to leave
> the "dir" parameter for symmetry with the swiotlb_sync_*() functions.
> Please correct me if that's wrong, and I'll respin this patch to do
> the removal.
> 
>  drivers/iommu/dma-iommu.c |  2 +-
>  drivers/xen/swiotlb-xen.c |  2 +-
>  include/linux/swiotlb.h   |  2 +-
>  kernel/dma/swiotlb.c  | 56 +--
>  4 files changed, 45 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 07d087eecc17..c21ef1388499 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1165,7 +1165,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>   trace_swiotlb_bounced(dev, phys, size);
> 
>   aligned_size = iova_align(iovad, size);
> - phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
> + phys = swiotlb_tbl_map_single(dev, phys, size,
> iova_mask(iovad), dir, attrs);
> 
>   if (phys == DMA_MAPPING_ERROR)
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1c4ef5111651..6579ae3f6dac 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -216,7 +216,7 @@ static 

Re: [PATCH 1/2] swiotlb: Remove alloc_size argument to swiotlb_tbl_map_single()

2024-04-15 Thread Petr Tesařík
On Mon, 15 Apr 2024 13:03:30 +
Michael Kelley  wrote:

> From: Petr Tesařík  Sent: Monday, April 15, 2024 5:50 AM
> > 
> > On Mon, 15 Apr 2024 12:23:22 +
> > Michael Kelley  wrote:
> >   
> > > From: Petr Tesařík  Sent: Monday, April 15, 2024 4:46 AM
> > > >
> > > > Hi Michael,
> > > >
> > > > sorry for taking so long to answer. Yes, there was no agreement on the
> > > > removal of the "dir" parameter, but I'm not sure it's because of
> > > > symmetry with swiotlb_sync_*(), because the topic was not really
> > > > discussed.
> > > >
> > > > The discussion was about the KUnit test suite and whether direction is
> > > > a property of the bounce buffer or of each sync operation. Since the DMA
> > > > API associates each DMA buffer with a direction, the direction
> > > > parameter passed to swiotlb_sync_*() should match what was passed to
> > > > swiotlb_tbl_map_single(), because that's how it is used by the generic
> > > > DMA code. In other words, if the parameter is kept, it should be kept
> > > > to match dma_map_*().
> > > >
> > > > However, there is also symmetry with swiotlb_tbl_unmap_single(). This
> > > > function does use the parameter for the final sync. I believe there
> > > > should be a matching initial sync in swiotlb_tbl_map_single(). In
> > > > short, the buffer sync for DMA non-coherent devices should be moved from
> > > > swiotlb_map() to swiotlb_tbl_map_single(). If this sync is not needed,
> > > > then the caller can (and should) include DMA_ATTR_SKIP_CPU_SYNC in
> > > > the flags parameter.
> > > >
> > > > To sum it up:
> > > >
> > > > * Do *NOT* remove the "dir" parameter.
> > > > * Let me send a patch which moves the initial buffer sync.
> > > >  
> > >
> > > I'm not seeing the need to move the initial buffer sync.  All
> > > callers of swiotlb_tbl_map_single() already have a subsequent
> > > check for a non-coherent device, and a call to
> > > arch_sync_dma_for_device().  And the Xen code has some
> > > special handling that probably shouldn't go in
> > > swiotlb_tbl_map_single().  Or am I missing something?  
> > 
> > Oh, sure, there's nothing broken ATM. It's merely a cleanup. The API is
> > asymmetric and thus confusing. You get a final sync by default if you
> > call swiotlb_tbl_unmap_single(),   
> 
> I don't see that final sync in swiotlb_tbl_unmap_single().  It calls
> swiotlb_bounce() to copy the data, but it doesn't deal with
> non-coherent devices or call arch_sync_dma_for_cpu().

Ouch. You're right! The buffer only gets bounced; it is not synced if
the device's DMA is non-coherent. So, how is this supposed to work?

Now I'm looking at the code in dma_direct_map_page(), and it calls
arch_sync_dma_for_device() explicitly, _except_ when using SWIOTLB. So,
maybe I should instead review all callers of swiotlb_map(), make sure
that they handle non-coherent devices, and then remove the sync from
swiotlb_map()?
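
Roughly, the flow I mean (paraphrased from dma_direct_map_page() and
simplified, not quoted verbatim) is:

	if (is_swiotlb_force_bounce(dev))
		return swiotlb_map(dev, phys, size, dir, attrs);

	/* ... dma_capable() handling elided ... */

	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(phys, size, dir);
	return dma_addr;

The swiotlb branch returns early, so the explicit sync is only reached
in the non-bounced case.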

I mean, the current situation seems somewhat disorganized to me.

Petr T



RE: [PATCH 1/2] swiotlb: Remove alloc_size argument to swiotlb_tbl_map_single()

2024-04-15 Thread Michael Kelley
From: Petr Tesařík  Sent: Monday, April 15, 2024 5:50 AM
> 
> On Mon, 15 Apr 2024 12:23:22 +
> Michael Kelley  wrote:
> 
> > From: Petr Tesařík  Sent: Monday, April 15, 2024 4:46 AM
> > >
> > > Hi Michael,
> > >
> > > sorry for taking so long to answer. Yes, there was no agreement on the
> > > removal of the "dir" parameter, but I'm not sure it's because of
> > > symmetry with swiotlb_sync_*(), because the topic was not really
> > > discussed.
> > >
> > > The discussion was about the KUnit test suite and whether direction is
> > > a property of the bounce buffer or of each sync operation. Since the DMA
> > > API associates each DMA buffer with a direction, the direction
> > > parameter passed to swiotlb_sync_*() should match what was passed to
> > > swiotlb_tbl_map_single(), because that's how it is used by the generic
> > > DMA code. In other words, if the parameter is kept, it should be kept
> > > to match dma_map_*().
> > >
> > > However, there is also symmetry with swiotlb_tbl_unmap_single(). This
> > > function does use the parameter for the final sync. I believe there
> > > should be a matching initial sync in swiotlb_tbl_map_single(). In
> > > short, the buffer sync for DMA non-coherent devices should be moved from
> > > swiotlb_map() to swiotlb_tbl_map_single(). If this sync is not needed,
> > > then the caller can (and should) include DMA_ATTR_SKIP_CPU_SYNC in
> > > the flags parameter.
> > >
> > > To sum it up:
> > >
> > > * Do *NOT* remove the "dir" parameter.
> > > * Let me send a patch which moves the initial buffer sync.
> > >
> >
> > I'm not seeing the need to move the initial buffer sync.  All
> > callers of swiotlb_tbl_map_single() already have a subsequent
> > check for a non-coherent device, and a call to
> > arch_sync_dma_for_device().  And the Xen code has some
> > special handling that probably shouldn't go in
> > swiotlb_tbl_map_single().  Or am I missing something?
> 
> Oh, sure, there's nothing broken ATM. It's merely a cleanup. The API is
> asymmetric and thus confusing. You get a final sync by default if you
> call swiotlb_tbl_unmap_single(), 

I don't see that final sync in swiotlb_tbl_unmap_single().  It calls
swiotlb_bounce() to copy the data, but it doesn't deal with
non-coherent devices or call arch_sync_dma_for_cpu().

> but you don't get an initial sync by
> default if you call swiotlb_tbl_map_single(). This is difficult to
> remember, so potential new users of the API may incorrectly assume that
> an initial sync is done, or that a final sync is not done.
> 
> And yes, when moving the code, all current users of
> swiotlb_tbl_map_single() should specify DMA_ATTR_SKIP_CPU_SYNC.
> 
> Petr T


Re: [PATCH 1/2] swiotlb: Remove alloc_size argument to swiotlb_tbl_map_single()

2024-04-15 Thread Petr Tesařík
On Mon, 15 Apr 2024 12:23:22 +
Michael Kelley  wrote:

> From: Petr Tesařík  Sent: Monday, April 15, 2024 4:46 AM
> > 
> > Hi Michael,
> > 
> > sorry for taking so long to answer. Yes, there was no agreement on the
> > removal of the "dir" parameter, but I'm not sure it's because of
> > symmetry with swiotlb_sync_*(), because the topic was not really
> > discussed.
> > 
> > The discussion was about the KUnit test suite and whether direction is
> > a property of the bounce buffer or of each sync operation. Since the DMA
> > API associates each DMA buffer with a direction, the direction
> > parameter passed to swiotlb_sync_*() should match what was passed to
> > swiotlb_tbl_map_single(), because that's how it is used by the generic
> > DMA code. In other words, if the parameter is kept, it should be kept
> > to match dma_map_*().
> > 
> > However, there is also symmetry with swiotlb_tbl_unmap_single(). This
> > function does use the parameter for the final sync. I believe there
> > should be a matching initial sync in swiotlb_tbl_map_single(). In
> > short, the buffer sync for DMA non-coherent devices should be moved from
> > swiotlb_map() to swiotlb_tbl_map_single(). If this sync is not needed,
> > then the caller can (and should) include DMA_ATTR_SKIP_CPU_SYNC in
> > the flags parameter.
> > 
> > To sum it up:
> > 
> > * Do *NOT* remove the "dir" parameter.
> > * Let me send a patch which moves the initial buffer sync.
> >   
> 
> I'm not seeing the need to move the initial buffer sync.  All
> callers of swiotlb_tbl_map_single() already have a subsequent
> check for a non-coherent device, and a call to 
> arch_sync_dma_for_device().  And the Xen code has some 
> special handling that probably shouldn't go in
> swiotlb_tbl_map_single().  Or am I missing something?

Oh, sure, there's nothing broken ATM. It's merely a cleanup. The API is
asymmetric and thus confusing. You get a final sync by default if you
call swiotlb_tbl_unmap_single(), but you don't get an initial sync by
default if you call swiotlb_tbl_map_single(). This is difficult to
remember, so potential new users of the API may incorrectly assume that
an initial sync is done, or that a final sync is not done.

And yes, when moving the code, all current users of
swiotlb_tbl_map_single() should specify DMA_ATTR_SKIP_CPU_SYNC.

Petr T



RE: [PATCH 1/2] swiotlb: Remove alloc_size argument to swiotlb_tbl_map_single()

2024-04-15 Thread Michael Kelley
From: Petr Tesařík  Sent: Monday, April 15, 2024 4:46 AM
> 
> Hi Michael,
> 
> sorry for taking so long to answer. Yes, there was no agreement on the
> removal of the "dir" parameter, but I'm not sure it's because of
> symmetry with swiotlb_sync_*(), because the topic was not really
> discussed.
> 
> The discussion was about the KUnit test suite and whether direction is
> a property of the bounce buffer or of each sync operation. Since the DMA
> API associates each DMA buffer with a direction, the direction
> parameter passed to swiotlb_sync_*() should match what was passed to
> swiotlb_tbl_map_single(), because that's how it is used by the generic
> DMA code. In other words, if the parameter is kept, it should be kept
> to match dma_map_*().
> 
> However, there is also symmetry with swiotlb_tbl_unmap_single(). This
> function does use the parameter for the final sync. I believe there
> should be a matching initial sync in swiotlb_tbl_map_single(). In
> short, the buffer sync for DMA non-coherent devices should be moved from
> swiotlb_map() to swiotlb_tbl_map_single(). If this sync is not needed,
> then the caller can (and should) include DMA_ATTR_SKIP_CPU_SYNC in
> the flags parameter.
> 
> To sum it up:
> 
> * Do *NOT* remove the "dir" parameter.
> * Let me send a patch which moves the initial buffer sync.
> 

I'm not seeing the need to move the initial buffer sync.  All
callers of swiotlb_tbl_map_single() already have a subsequent
check for a non-coherent device, and a call to 
arch_sync_dma_for_device().  And the Xen code has some 
special handling that probably shouldn't go in
swiotlb_tbl_map_single().  Or am I missing something?
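
To illustrate, a typical caller pattern (a sketch using the post-patch
signature, not quoted verbatim from any one call site) is:

	tlb_addr = swiotlb_tbl_map_single(dev, paddr, size, 0, dir, attrs);
	if (tlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
		return DMA_MAPPING_ERROR;

	/* The caller, not swiotlb_tbl_map_single(), does the initial sync: */
	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(tlb_addr, size, dir);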

Michael



Re: [PATCH 1/2] swiotlb: Remove alloc_size argument to swiotlb_tbl_map_single()

2024-04-15 Thread Petr Tesařík
On Sun,  7 Apr 2024 21:11:41 -0700
mhkelle...@gmail.com wrote:

> From: Michael Kelley 
> 
> Currently swiotlb_tbl_map_single() takes alloc_align_mask and
> alloc_size arguments to specify a swiotlb allocation that is
> larger than mapping_size. This larger allocation is used solely
> by iommu_dma_map_page() to handle untrusted devices that should
> not have DMA visibility to memory pages that are partially used
> for unrelated kernel data.
> 
> Having two arguments to specify the allocation is redundant. While
> alloc_align_mask naturally specifies the alignment of the starting
> address of the allocation, it can also implicitly specify the size
> by rounding up the mapping_size to that alignment.
> 
> Additionally, the current approach has an edge case bug.
> iommu_dma_map_page() already does the rounding up to compute the
> alloc_size argument. But swiotlb_tbl_map_single() then calculates
> the alignment offset based on the DMA min_align_mask, and adds
> that offset to alloc_size. If the offset is non-zero, the addition
> may result in a value that is larger than the max the swiotlb can
> allocate. If the rounding up is done _after_ the alignment offset is
> added to the mapping_size (and the original mapping_size conforms to
> the value returned by swiotlb_max_mapping_size), then the max that the
> swiotlb can allocate will not be exceeded.
> 
> In view of these issues, simplify the swiotlb_tbl_map_single() interface
> by removing the alloc_size argument. Most call sites pass the same
> value for mapping_size and alloc_size, and they pass alloc_align_mask
> as zero. Just remove the redundant argument from these callers, as they
> will see no functional change. For iommu_dma_map_page() also remove
> the alloc_size argument, and have swiotlb_tbl_map_single() compute
> the alloc_size by rounding up mapping_size after adding the offset
> based on min_align_mask. This has the side effect of fixing the
> edge case bug but with no other functional change.
> 
> Also add a sanity test on the alloc_align_mask. While IOMMU code
> currently ensures the granule is not larger than PAGE_SIZE, if
> that guarantee were to be removed in the future, the downstream
> effect on the swiotlb might go unnoticed until strange allocation
> failures occurred.
> 
> Tested on an ARM64 system with 16K page size and some kernel
> test-only hackery to allow modifying the DMA min_align_mask and
> the granule size that becomes the alloc_align_mask. Tested these
> combinations with a variety of original memory addresses and
> sizes, including those that reproduce the edge case bug:
> 
> * 4K granule and 0 min_align_mask
> * 4K granule and 0xFFF min_align_mask (4K - 1)
> * 16K granule and 0xFFF min_align_mask
> * 64K granule and 0xFFF min_align_mask
> * 64K granule and 0x3FFF min_align_mask (16K - 1)
> 
> With the changes, all combinations pass.
> 
> Signed-off-by: Michael Kelley 
> ---
> I haven't used any "Fixes:" tags. This patch really should be
> backported only if all the other recent swiotlb fixes get backported,
> and I'm unclear on whether that will happen.
> 
> I saw the brief discussion about removing the "dir" parameter from
> swiotlb_tbl_map_single(). That removal could easily be done as part
> of this patch, since it's already changing the swiotlb_tbl_map_single()
> parameters. But I think the conclusion of the discussion was to leave
> the "dir" parameter for symmetry with the swiotlb_sync_*() functions.
> Please correct me if that's wrong, and I'll respin this patch to do
> the removal.

Hi Michael,

sorry for taking so long to answer. Yes, there was no agreement on the
removal of the "dir" parameter, but I'm not sure it's because of
symmetry with swiotlb_sync_*(), because the topic was not really
discussed.

The discussion was about the KUnit test suite and whether direction is
a property of the bounce buffer or of each sync operation. Since the DMA
API associates each DMA buffer with a direction, the direction
parameter passed to swiotlb_sync_*() should match what was passed to
swiotlb_tbl_map_single(), because that's how it is used by the generic
DMA code. In other words, if the parameter is kept, it should be kept
to match dma_map_*().

However, there is also symmetry with swiotlb_tbl_unmap_single(). This
function does use the parameter for the final sync. I believe there
should be a matching initial sync in swiotlb_tbl_map_single(). In
short, the buffer sync for DMA non-coherent devices should be moved from
swiotlb_map() to swiotlb_tbl_map_single(). If this sync is not needed,
then the caller can (and should) include DMA_ATTR_SKIP_CPU_SYNC in
the flags parameter.

To sum it up:

* Do *NOT* remove the "dir" parameter.
* Let me send a patch which moves the initial buffer sync.
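
Concretely, the move could look something like this (a sketch of the
idea, not an actual patch):

	/* At the end of swiotlb_tbl_map_single(), after bouncing the
	 * original buffer into the swiotlb slot: */
	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(tlb_addr, mapping_size, dir);

with the matching sync removed from swiotlb_map(), and all current
callers of swiotlb_tbl_map_single() passing DMA_ATTR_SKIP_CPU_SYNC so
their behavior stays unchanged.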

Petr T

>  drivers/iommu/dma-iommu.c |  2 +-
>  drivers/xen/swiotlb-xen.c |  2 +-
>  include/linux/swiotlb.h   |  2 +-
>  kernel/dma/swiotlb.c  | 56 +--
>  4 files changed, 45 insertions(+), 17 deletions(-)

[PATCH 1/2] swiotlb: Remove alloc_size argument to swiotlb_tbl_map_single()

2024-04-08 Thread mhkelley58
From: Michael Kelley 

Currently swiotlb_tbl_map_single() takes alloc_align_mask and
alloc_size arguments to specify a swiotlb allocation that is
larger than mapping_size. This larger allocation is used solely
by iommu_dma_map_page() to handle untrusted devices that should
not have DMA visibility to memory pages that are partially used
for unrelated kernel data.

Having two arguments to specify the allocation is redundant. While
alloc_align_mask naturally specifies the alignment of the starting
address of the allocation, it can also implicitly specify the size
by rounding up the mapping_size to that alignment.

Additionally, the current approach has an edge case bug.
iommu_dma_map_page() already does the rounding up to compute the
alloc_size argument. But swiotlb_tbl_map_single() then calculates
the alignment offset based on the DMA min_align_mask, and adds
that offset to alloc_size. If the offset is non-zero, the addition
may result in a value that is larger than the max the swiotlb can
allocate. If the rounding up is done _after_ the alignment offset is
added to the mapping_size (and the original mapping_size conforms to
the value returned by swiotlb_max_mapping_size), then the max that the
swiotlb can allocate will not be exceeded.
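
To make the edge case concrete, here is a self-contained sketch of the
arithmetic with illustrative numbers (the 256K maximum is hypothetical;
the real limit comes from swiotlb_max_mapping_size()):

#include <stddef.h>

/* Power-of-2 round-up, like the kernel's ALIGN() macro. */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

int main(void)
{
	size_t max = 256 * 1024;		 /* hypothetical swiotlb max */
	size_t granule = 64 * 1024;		 /* alloc_align_mask = 0xFFFF */
	size_t min_align_mask = 0x3FFF;		 /* 16K - 1 */
	size_t mapping_size = 240 * 1024;	 /* conforms to the max */
	size_t offset = 0x2000 & min_align_mask; /* low bits of orig_addr: 8K */

	/* Old order: round up first (240K -> 256K), then add the offset:
	 * 256K + 8K = 264K, which exceeds the max. */
	size_t old_total = ALIGN(mapping_size, granule) + offset;

	/* New order: add the offset first, then round up:
	 * 240K + 8K = 248K -> 256K, which fits. */
	size_t new_total = ALIGN(mapping_size + offset, granule);

	return (old_total > max && new_total <= max) ? 0 : 1; /* exits 0 */
}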

In view of these issues, simplify the swiotlb_tbl_map_single() interface
by removing the alloc_size argument. Most call sites pass the same
value for mapping_size and alloc_size, and they pass alloc_align_mask
as zero. Just remove the redundant argument from these callers, as they
will see no functional change. For iommu_dma_map_page() also remove
the alloc_size argument, and have swiotlb_tbl_map_single() compute
the alloc_size by rounding up mapping_size after adding the offset
based on min_align_mask. This has the side effect of fixing the
edge case bug but with no other functional change.

Also add a sanity test on the alloc_align_mask. While IOMMU code
currently ensures the granule is not larger than PAGE_SIZE, if
that guarantee were to be removed in the future, the downstream
effect on the swiotlb might go unnoticed until strange allocation
failures occurred.
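
For reference, such a sanity test can be as simple as a one-time
warning (a sketch of the idea, not necessarily the exact hunk in the
patch):

	/* The default swiotlb pool is only PAGE_SIZE aligned; warn if a
	 * caller asks for more alignment than that, since large mapping
	 * requests could then fail in hard-to-diagnose ways. */
	dev_WARN_ONCE(dev, alloc_align_mask > ~PAGE_MASK,
		      "Alloc alignment may prevent fulfilling requests with max mapping_size\n");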

Tested on an ARM64 system with 16K page size and some kernel
test-only hackery to allow modifying the DMA min_align_mask and
the granule size that becomes the alloc_align_mask. Tested these
combinations with a variety of original memory addresses and
sizes, including those that reproduce the edge case bug:

* 4K granule and 0 min_align_mask
* 4K granule and 0xFFF min_align_mask (4K - 1)
* 16K granule and 0xFFF min_align_mask
* 64K granule and 0xFFF min_align_mask
* 64K granule and 0x3FFF min_align_mask (16K - 1)

With the changes, all combinations pass.

Signed-off-by: Michael Kelley 
---
I haven't used any "Fixes:" tags. This patch really should be
backported only if all the other recent swiotlb fixes get backported,
and I'm unclear on whether that will happen.

I saw the brief discussion about removing the "dir" parameter from
swiotlb_tbl_map_single(). That removal could easily be done as part
of this patch, since it's already changing the swiotlb_tbl_map_single()
parameters. But I think the conclusion of the discussion was to leave
the "dir" parameter for symmetry with the swiotlb_sync_*() functions.
Please correct me if that's wrong, and I'll respin this patch to do
the removal.

 drivers/iommu/dma-iommu.c |  2 +-
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  2 +-
 kernel/dma/swiotlb.c  | 56 +--
 4 files changed, 45 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 07d087eecc17..c21ef1388499 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1165,7 +1165,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
trace_swiotlb_bounced(dev, phys, size);
 
aligned_size = iova_align(iovad, size);
-   phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
+   phys = swiotlb_tbl_map_single(dev, phys, size,
  iova_mask(iovad), dir, attrs);
 
if (phys == DMA_MAPPING_ERROR)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1c4ef5111651..6579ae3f6dac 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -216,7 +216,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 */
trace_swiotlb_bounced(dev, dev_addr, size);
 
-   map = swiotlb_tbl_map_single(dev, phys, size, size, 0, dir, attrs);
+   map = swiotlb_tbl_map_single(dev, phys, size, 0, dir, attrs);
if (map == (phys_addr_t)DMA_MAPPING_ERROR)
return DMA_MAPPING_ERROR;
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index ea23097e351f..14bc10c1bb23 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -43,7 +43,7 @@ int