Re: [PATCH] swiotlb: Allow allocating buffer anywhere in memory
Hello Christoph,

Christoph Hellwig writes:

> On Sat, Aug 15, 2020 at 05:45:36PM -0300, Thiago Jung Bauermann wrote:
>> POWER secure guests (i.e., guests which use the Protected Execution
>> Facility) need to use SWIOTLB to be able to do I/O with the hypervisor,
>> but they don't need the SWIOTLB memory to be in low addresses since the
>> hypervisor doesn't have any addressing limitation.
>>
>> This solves a SWIOTLB initialization problem we are seeing in secure
>> guests with 128 GB of RAM: they are configured with 4 GB of crashkernel
>> reserved memory, which leaves no space for SWIOTLB in low addresses.
>
> What about just open coding the allocation and using
> swiotlb_init_with_tbl?

Yes, that works too. I just sent a v2 implementing that change. I just had
to add a small accessor function so that I could set no_iotlb_memory from
outside swiotlb.c.

Thank you for the quick review.

-- 
Thiago Jung Bauermann
IBM Linux Technology Center
Re: [PATCH] swiotlb: Allow allocating buffer anywhere in memory
On Sat, Aug 15, 2020 at 05:45:36PM -0300, Thiago Jung Bauermann wrote:
> POWER secure guests (i.e., guests which use the Protected Execution
> Facility) need to use SWIOTLB to be able to do I/O with the hypervisor,
> but they don't need the SWIOTLB memory to be in low addresses since the
> hypervisor doesn't have any addressing limitation.
>
> This solves a SWIOTLB initialization problem we are seeing in secure
> guests with 128 GB of RAM: they are configured with 4 GB of crashkernel
> reserved memory, which leaves no space for SWIOTLB in low addresses.

What about just open coding the allocation and using
swiotlb_init_with_tbl?

>
> Signed-off-by: Thiago Jung Bauermann
> ---
>  arch/powerpc/mm/mem.c   |  7 ++++++-
>  include/linux/swiotlb.h |  8 +++++++-
>  kernel/dma/swiotlb.c    | 10 +++++++---
>  3 files changed, 20 insertions(+), 5 deletions(-)
>
> Normally I would split changes like this into one patch touching generic
> code and another for the arch-specific part, but in this case I thought
> it would be unneeded complexity. I can split though if people prefer it
> that way.
>
> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> index c2c11eb8dcfc..13f2e3aff8b5 100644
> --- a/arch/powerpc/mm/mem.c
> +++ b/arch/powerpc/mm/mem.c
> @@ -50,6 +50,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>
> @@ -290,7 +291,11 @@ void __init mem_init(void)
> 	 * back to top-down.
> 	 */
> 	memblock_set_bottom_up(true);
> -	swiotlb_init(0);
> +	/*
> +	 * SVM guests can use the SWIOTLB wherever it is in memory,
> +	 * even if not DMA-able.
> +	 */
> +	swiotlb_init_anywhere(0, is_secure_guest());
>  #endif
>
>  	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 046bb94bd4d6..433f3dbb35b5 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -29,7 +29,13 @@ enum swiotlb_force {
>   */
>  #define IO_TLB_SHIFT 11
>
> -extern void swiotlb_init(int verbose);
> +void __init swiotlb_init_anywhere(int verbose, bool allocate_anywhere);
> +
> +static inline void swiotlb_init(int verbose)
> +{
> +	swiotlb_init_anywhere(verbose, false);
> +}
> +
>  int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
>  extern unsigned long swiotlb_nr_tbl(void);
>  unsigned long swiotlb_size_or_default(void);
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index c19379fabd20..27070aa59e34 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -244,7 +244,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>   * structures for the software IO TLB used to implement the DMA API.
>   */
>  void __init
> -swiotlb_init(int verbose)
> +swiotlb_init_anywhere(int verbose, bool allocate_anywhere)
>  {
>  	size_t default_size = IO_TLB_DEFAULT_SIZE;
>  	unsigned char *vstart;
> @@ -257,8 +257,12 @@ swiotlb_init(int verbose)
>
>  	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
>
> -	/* Get IO TLB memory from the low pages */
> -	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
> +	if (allocate_anywhere)
> +		vstart = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
> +	else
> +		/* Get IO TLB memory from the low pages */
> +		vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
> +
>  	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
>  		return;
---end quoted text---
[PATCH] swiotlb: Allow allocating buffer anywhere in memory
POWER secure guests (i.e., guests which use the Protected Execution
Facility) need to use SWIOTLB to be able to do I/O with the hypervisor,
but they don't need the SWIOTLB memory to be in low addresses since the
hypervisor doesn't have any addressing limitation.

This solves a SWIOTLB initialization problem we are seeing in secure
guests with 128 GB of RAM: they are configured with 4 GB of crashkernel
reserved memory, which leaves no space for SWIOTLB in low addresses.

Signed-off-by: Thiago Jung Bauermann
---
 arch/powerpc/mm/mem.c   |  7 ++++++-
 include/linux/swiotlb.h |  8 +++++++-
 kernel/dma/swiotlb.c    | 10 +++++++---
 3 files changed, 20 insertions(+), 5 deletions(-)

Normally I would split changes like this into one patch touching generic
code and another for the arch-specific part, but in this case I thought
it would be unneeded complexity. I can split though if people prefer it
that way.

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index c2c11eb8dcfc..13f2e3aff8b5 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -50,6 +50,7 @@
 #include
 #include
 #include
+#include

 #include

@@ -290,7 +291,11 @@ void __init mem_init(void)
	 * back to top-down.
	 */
	memblock_set_bottom_up(true);
-	swiotlb_init(0);
+	/*
+	 * SVM guests can use the SWIOTLB wherever it is in memory,
+	 * even if not DMA-able.
+	 */
+	swiotlb_init_anywhere(0, is_secure_guest());
 #endif

 	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 046bb94bd4d6..433f3dbb35b5 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -29,7 +29,13 @@ enum swiotlb_force {
  */
 #define IO_TLB_SHIFT 11

-extern void swiotlb_init(int verbose);
+void __init swiotlb_init_anywhere(int verbose, bool allocate_anywhere);
+
+static inline void swiotlb_init(int verbose)
+{
+	swiotlb_init_anywhere(verbose, false);
+}
+
 int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
 extern unsigned long swiotlb_nr_tbl(void);
 unsigned long swiotlb_size_or_default(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c19379fabd20..27070aa59e34 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -244,7 +244,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
  * structures for the software IO TLB used to implement the DMA API.
  */
 void __init
-swiotlb_init(int verbose)
+swiotlb_init_anywhere(int verbose, bool allocate_anywhere)
 {
 	size_t default_size = IO_TLB_DEFAULT_SIZE;
 	unsigned char *vstart;
@@ -257,8 +257,12 @@ swiotlb_init(int verbose)

 	bytes = io_tlb_nslabs << IO_TLB_SHIFT;

-	/* Get IO TLB memory from the low pages */
-	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
+	if (allocate_anywhere)
+		vstart = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
+	else
+		/* Get IO TLB memory from the low pages */
+		vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
+
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
 		return;