As the recent swiotlb bug revealed, we seem to have given up on direct
DMA allocation too early and fallen back to swiotlb allocation.  The
reason is that the swiotlb allocator expected dma_direct_alloc() to
try harder to get pages below the 64-bit DMA mask with GFP_DMA32, but
the function doesn't do that; it only handles the GFP_DMA case.

This patch adds a fallback reallocation with GFP_DMA32, similar to
what is already done for GFP_DMA.  The condition is that the coherent
DMA mask is smaller than 64 bits (i.e. there is some address
limitation) and neither GFP_DMA nor GFP_DMA32 is set beforehand.

Signed-off-by: Takashi Iwai <>


This is a resend of a test patch included in the previous thread
("swiotlb: Fix unexpected swiotlb_alloc_coherent() failures").

 lib/dma-direct.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index bbfb229aa067..970d39155618 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -84,6 +84,13 @@ void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
                __free_pages(page, page_order);
                page = NULL;
+               if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
+                   dev->coherent_dma_mask < DMA_BIT_MASK(64) &&
+                   !(gfp & (GFP_DMA32 | GFP_DMA))) {
+                       gfp |= GFP_DMA32;
+                       goto again;
+               }
                if (IS_ENABLED(CONFIG_ZONE_DMA) &&
                    dev->coherent_dma_mask < DMA_BIT_MASK(32) &&
                    !(gfp & GFP_DMA)) {
