Re: [PATCH v2 3/3] arch/*/: remove CONFIG_VIRT_TO_BUS

2022-06-30 Thread Christophe Leroy


Le 30/06/2022 à 10:04, David Laight a écrit :
> From: Michael Schmitz
>> Sent: 29 June 2022 00:09
>>
>> Hi Arnd,
>>
>> On 29/06/22 09:50, Arnd Bergmann wrote:
>>> On Tue, Jun 28, 2022 at 11:03 PM Michael Schmitz  
>>> wrote:
 On 28/06/22 19:03, Geert Uytterhoeven wrote:
>> The driver allocates bounce buffers using kmalloc if it hits an
>> unaligned data buffer - can such buffers still even happen these days?
> No idea.
 Hmmm - I think I'll stick a WARN_ONCE() in there so we know whether this
 code path is still being used.
>>> kmalloc() guarantees alignment to the next power-of-two size or
>>> KMALLOC_MIN_ALIGN, whichever is bigger. On m68k this means it
>>> is cacheline aligned.
>>
>> And all SCSI buffers are allocated using kmalloc? No way at all for user
>> space to pass unaligned data?
> 
> I didn't think kmalloc() gave any such guarantee about alignment.

It does, since commit 59bb47985c1d ("mm, sl[aou]b: guarantee natural 
alignment for kmalloc(power-of-two)").
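
For illustration, the guarantee that commit introduced can be modelled as a small helper. This is only a sketch of the rule being discussed: KMALLOC_MIN_ALIGN is taken as 16 here purely for illustration; the real minimum is per-architecture (cacheline-sized on m68k, as Arnd notes).

```c
#include <stddef.h>

/* assumed minimum alignment, for illustration only; arch-specific in reality */
#define KMALLOC_MIN_ALIGN 16

/* Alignment kmalloc() is guaranteed to honour since 59bb47985c1d:
 * power-of-two sizes are naturally aligned; everything else only
 * gets the architecture minimum. */
static size_t kmalloc_guaranteed_align(size_t size)
{
	int is_pow2 = size && (size & (size - 1)) == 0;

	if (is_pow2 && size > KMALLOC_MIN_ALIGN)
		return size;
	return KMALLOC_MIN_ALIGN;
}
```

So a 4096-byte allocation is 4096-byte aligned, but a 96-byte one is only guaranteed the minimum, which is why non-power-of-two buffers can still end up "unexpectedly" aligned.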

Christophe

> There are cache-line alignment requirements on systems with non-coherent
> dma, but otherwise the alignment can be much smaller.
> 
> One of the allocators adds a header to each item, IIRC that can
> lead to 'unexpected' alignments - especially on m68k.
> 
> dma_alloc_coherent() does align to next 'power of 2'.
> And sometimes you need (eg) 16k allocations that are 16k aligned.
> 
>   David
> 
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

[PATCH] iommu/fsl_pamu: Prepare cleanup of powerpc's asm/prom.h

2022-04-02 Thread Christophe Leroy
powerpc's asm/prom.h brings in some headers that it doesn't
need itself.

In order to clean it up, first add the missing headers to the
users of asm/prom.h.

Signed-off-by: Christophe Leroy 
---
 drivers/iommu/fsl_pamu.c| 3 +++
 drivers/iommu/fsl_pamu_domain.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c
index fc38b1fba7cf..0d03f837a5d4 100644
--- a/drivers/iommu/fsl_pamu.c
+++ b/drivers/iommu/fsl_pamu.c
@@ -11,6 +11,9 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
 
 #include 
 
diff --git a/drivers/iommu/fsl_pamu_domain.c b/drivers/iommu/fsl_pamu_domain.c
index 69a4a62dc3b9..94b4589dc67c 100644
--- a/drivers/iommu/fsl_pamu_domain.c
+++ b/drivers/iommu/fsl_pamu_domain.c
@@ -9,6 +9,7 @@
 
 #include "fsl_pamu_domain.h"
 
+#include 
 #include 
 
 /*
-- 
2.35.1
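Finding which users still rely on asm/prom.h before slimming it down can be sketched with a plain grep. This is an illustrative toy tree with hypothetical paths, not the actual workflow or file contents used for the patch:

```shell
# sketch: enumerate files still pulling in asm/prom.h so each can be
# given the headers it really needs (toy tree, illustrative paths)
mkdir -p /tmp/promscan/drivers/iommu
printf '#include <asm/prom.h>\n' > /tmp/promscan/drivers/iommu/fsl_pamu.c
printf '#include <linux/iommu.h>\n' > /tmp/promscan/drivers/iommu/other.c
grep -rl 'asm/prom.h' /tmp/promscan/drivers/iommu
```

Only the file that includes asm/prom.h is reported; each hit would then be inspected for the headers it actually uses.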



Re: [PATCH 08/11] swiotlb: make the swiotlb_init interface more useful

2022-02-27 Thread Christophe Leroy


Le 27/02/2022 à 15:30, Christoph Hellwig a écrit :
> Pass a bool to indicate whether swiotlb needs to be enabled based on the
> addressing needs, and replace the verbose argument with a set of
> flags, including one to force enable bounce buffering.
> 
> Note that this patch removes the ability to force xen-swiotlb use
> via swiotlb=force on the command line on x86 (arm and arm64
> never supported that), but this interface will be restored shortly.
> 
> Signed-off-by: Christoph Hellwig 
> ---
>   arch/arm/mm/init.c |  6 +
>   arch/arm64/mm/init.c   |  6 +
>   arch/ia64/mm/init.c|  4 +--
>   arch/mips/cavium-octeon/dma-octeon.c   |  2 +-
>   arch/mips/loongson64/dma.c |  2 +-
>   arch/mips/sibyte/common/dma.c  |  2 +-
>   arch/powerpc/include/asm/swiotlb.h |  1 +
>   arch/powerpc/mm/mem.c  |  3 ++-

arch/powerpc/mm/mem.o:(.toc+0x0): undefined reference to `ppc_swiotlb_flags'
make[1]: *** [vmlinux] Error 1
/linux/Makefile:1155: recipe for target 'vmlinux' failed


>   arch/powerpc/platforms/pseries/setup.c |  3 ---
>   arch/riscv/mm/init.c   |  8 +-
>   arch/s390/mm/init.c|  3 +--
>   arch/x86/kernel/cpu/mshyperv.c |  8 --
>   arch/x86/kernel/pci-dma.c  | 15 ++-
>   arch/x86/mm/mem_encrypt_amd.c  |  3 ---
>   drivers/xen/swiotlb-xen.c  |  4 +--
>   include/linux/swiotlb.h| 15 ++-
>   include/trace/events/swiotlb.h | 29 -
>   kernel/dma/swiotlb.c   | 35 ++
>   18 files changed, 56 insertions(+), 93 deletions(-)
> 
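The flag-based interface being proposed can be sketched like this. The flag names follow the patch's direction, but the body below is a simplified illustration, not the kernel implementation:

```c
#include <stdbool.h>

/* sketch of the reworked interface; semantics simplified */
#define SWIOTLB_VERBOSE (1u << 0)	/* print a banner at init */
#define SWIOTLB_FORCE   (1u << 1)	/* force bounce buffering */

static bool swiotlb_enabled;
static bool swiotlb_force_bounce;

static void swiotlb_init(bool addressing_limit, unsigned int flags)
{
	/* enable when the platform has DMA addressing limits,
	 * or when bounce buffering is forced */
	swiotlb_enabled = addressing_limit || (flags & SWIOTLB_FORCE);
	swiotlb_force_bounce = (flags & SWIOTLB_FORCE) != 0;
}
```

A single bool plus a flags word replaces the old verbose argument, which is why callers across every architecture in the diffstat have to change.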

Re: [PATCH v5 3/6] mm: make alloc_contig_range work at pageblock granularity

2022-02-13 Thread Christophe Leroy


Le 11/02/2022 à 17:41, Zi Yan a écrit :
> From: Zi Yan 
> 
> alloc_contig_range() worked at MAX_ORDER-1 granularity to avoid merging
> pageblocks with different migratetypes. It might unnecessarily convert
> extra pageblocks at the beginning and at the end of the range. Change
> alloc_contig_range() to work at pageblock granularity.
> 
> Special handling is needed for free pages and in-use pages across the
> boundaries of the range specified by alloc_contig_range(), because these
> partially isolated pages cause free page accounting issues. The free
> pages will be split and freed into separate migratetype lists; the
> in-use pages will be migrated, then the freed pages will be handled.
> 
> Signed-off-by: Zi Yan 
> ---
>   include/linux/page-isolation.h |   2 +-
>   mm/internal.h  |   3 +
>   mm/memory_hotplug.c|   3 +-
>   mm/page_alloc.c| 235 +
>   mm/page_isolation.c|  33 -
>   5 files changed, 211 insertions(+), 65 deletions(-)
> 
> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
> index 4ef7be6def83..78ff940cc169 100644
> --- a/include/linux/page-isolation.h
> +++ b/include/linux/page-isolation.h
> @@ -54,7 +54,7 @@ int move_freepages_block(struct zone *zone, struct page 
> *page,
>*/
>   int
>   start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
> -  unsigned migratetype, int flags);
> +  unsigned migratetype, int flags, gfp_t gfp_flags);
>   
>   /*
>* Changes MIGRATE_ISOLATE to MIGRATE_MOVABLE.
> diff --git a/mm/internal.h b/mm/internal.h
> index 0d240e876831..509cbdc25992 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -319,6 +319,9 @@ isolate_freepages_range(struct compact_control *cc,
>   int
>   isolate_migratepages_range(struct compact_control *cc,
>  unsigned long low_pfn, unsigned long end_pfn);
> +
> +int
> +isolate_single_pageblock(unsigned long boundary_pfn, gfp_t gfp_flags, int 
> isolate_before_boundary);
>   #endif
>   int find_suitable_fallback(struct free_area *area, unsigned int order,
>   int migratetype, bool only_stealable, bool *can_steal);
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index ce68098832aa..82406d2f3e46 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1863,7 +1863,8 @@ int __ref offline_pages(unsigned long start_pfn, 
> unsigned long nr_pages,
>   /* set above range as isolated */
>   ret = start_isolate_page_range(start_pfn, end_pfn,
>  MIGRATE_MOVABLE,
> -MEMORY_OFFLINE | REPORT_FAILURE);
> +MEMORY_OFFLINE | REPORT_FAILURE,
> +GFP_USER | __GFP_MOVABLE | 
> __GFP_RETRY_MAYFAIL);
>   if (ret) {
>   reason = "failure to isolate range";
>   goto failed_removal_pcplists_disabled;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 62ef78f3d771..7a4fa21aea5c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8985,7 +8985,7 @@ static inline void alloc_contig_dump_pages(struct 
> list_head *page_list)
>   #endif
>   
>   /* [start, end) must belong to a single zone. */
> -static int __alloc_contig_migrate_range(struct compact_control *cc,
> +int __alloc_contig_migrate_range(struct compact_control *cc,
>   unsigned long start, unsigned long end)
>   {
>   /* This function is based on compact_zone() from compaction.c. */
> @@ -9043,6 +9043,167 @@ static int __alloc_contig_migrate_range(struct 
> compact_control *cc,
>   return 0;
>   }
>   
> +/**
> + * split_free_page() -- split a free page at split_pfn_offset
> + * @free_page:   the original free page
> + * @order:   the order of the page
> + * @split_pfn_offset:split offset within the page
> + *
> + * It is used when the free page crosses two pageblocks with different 
> migratetypes
> + * at split_pfn_offset within the page. The split free page will be put into
> + * separate migratetype lists afterwards. Otherwise, the function achieves
> + * nothing.
> + */
> +static inline void split_free_page(struct page *free_page,
> + int order, unsigned long split_pfn_offset)
> +{
> + struct zone *zone = page_zone(free_page);
> + unsigned long free_page_pfn = page_to_pfn(free_page);
> + unsigned long pfn;
> + unsigned long flags;
> + int free_page_order;
> +
> +	spin_lock_irqsave(&zone->lock, flags);
> + del_page_from_free_list(free_page, zone, order);
> + for (pfn = free_page_pfn;
> +  pfn < free_page_pfn + (1UL << order);) {
> + int mt = get_pfnblock_migratetype(pfn_to_page(pfn), pfn);
> +
> + free_page_order = order_base_2(split_pfn_offset);
> + __free_one_page(pfn_to_page(pfn), pfn, zone, 
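The idea behind that loop, carving a free page into buddy-aligned pieces at an arbitrary pfn offset so the two sides can go to their own migratetype lists, can be illustrated with a generic split. This is a sketch of the concept only, not the patch's exact code (which drives the loop with order_base_2(split_pfn_offset)):

```c
#include <stddef.h>

/* Split the range [0, 1 << order) at 'offset' into buddy-aligned
 * power-of-two chunks. Returns the number of chunks and stores each
 * chunk's order in out[]. Generic sketch, not the patch's loop. */
static int split_orders(unsigned long offset, int order, int *out)
{
	unsigned long pfn = 0, end = 1UL << order;
	int n = 0;

	while (pfn < end) {
		/* stop at the split point first, then at the end */
		unsigned long limit = pfn < offset ? offset : end;
		unsigned long remain = limit - pfn;
		int o = 0;

		/* largest power of two that fits and keeps pfn buddy-aligned */
		while ((1UL << (o + 1)) <= remain &&
		       !(pfn & ((1UL << (o + 1)) - 1)))
			o++;
		out[n++] = o;
		pfn += 1UL << o;
	}
	return n;
}
```

For an order-3 page (8 pfns) split at offset 3, this yields chunks of order 1, 0, 0, 2: the pages below the split and above it never share a buddy, so each side can be freed into its own list.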

Re: [PATCH 3/3] memblock: cleanup memblock_free interface

2021-09-23 Thread Christophe Leroy



Le 23/09/2021 à 14:01, Mike Rapoport a écrit :

On Thu, Sep 23, 2021 at 11:47:48AM +0200, Christophe Leroy wrote:



Le 23/09/2021 à 09:43, Mike Rapoport a écrit :

From: Mike Rapoport 

For ages memblock_free() interface dealt with physical addresses even
despite the existence of memblock_alloc_xx() functions that return a
virtual pointer.

Introduce memblock_phys_free() for freeing physical ranges and repurpose
memblock_free() to free virtual pointers to make the following pairing
abundantly clear:

int memblock_phys_free(phys_addr_t base, phys_addr_t size);
phys_addr_t memblock_phys_alloc(phys_addr_t base, phys_addr_t size);

void *memblock_alloc(phys_addr_t size, phys_addr_t align);
void memblock_free(void *ptr, size_t size);

Replace intermediate memblock_free_ptr() with memblock_free() and drop
unnecessary aliases memblock_free_early() and memblock_free_early_nid().

Suggested-by: Linus Torvalds 
Signed-off-by: Mike Rapoport 
---



diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 1a04e5bdf655..37826d8c4f74 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -723,7 +723,7 @@ void __init smp_save_dump_cpus(void)
/* Get the CPU registers */
smp_save_cpu_regs(sa, addr, is_boot_cpu, page);
}
-   memblock_free(page, PAGE_SIZE);
+   memblock_phys_free(page, PAGE_SIZE);
diag_amode31_ops.diag308_reset();
pcpu_set_smt(0);
   }
@@ -880,7 +880,7 @@ void __init smp_detect_cpus(void)
/* Add CPUs present at boot */
__smp_rescan_cpus(info, true);
-   memblock_free_early((unsigned long)info, sizeof(*info));
+   memblock_free(info, sizeof(*info));
   }
   /*


I'm a bit lost. IIUC memblock_free_early() and memblock_free() were 
identical.


Yes, they were, but all calls to memblock_free_early() were using
__pa(vaddr) because they had a virtual address at hand.


I'm still not following. In the above, memblock_free_early() was taking 
(unsigned long)info. Was it a bug? It looks odd to hide bug fixes in 
such a big patch; should that bug fix go in patch 2?





In the first hunk memblock_free() gets replaced by memblock_phys_free()
In the second hunk memblock_free_early() gets replaced by memblock_free()


In the first hunk the memory is allocated with memblock_phys_alloc() and we
have a physical range to free. In the second hunk the memory is allocated
with memblock_alloc() and we are freeing a virtual pointer.
  

I think it would be easier to follow if you could split it in several
patches:


It was an explicit request from Linus to make it a single commit:

   but the actual commit can and should be just a single commit that just
   fixes 'memblock_free()' to have sane interfaces.

I don't feel strongly about splitting it (except my laziness really
objects), but I don't think doing the conversion in several steps is
worth the churn.


The commit is quite big (55 files changed, approx 100 lines modified).

If done in the right order the change should be minimal.

It is rather hard to follow and review when an existing function 
(namely memblock_free()) disappears and re-appears in the same 
commit to do something different.


You do:
- memblock_free() ==> memblock_phys_free()
- memblock_free_ptr() ==> memblock_free()

At least you could split it in two patches; the advantage would be that 
between the first and second patch memblock_free() doesn't exist anymore, 
so you can check that it really has no remaining users.





- First patch: Create memblock_phys_free() and change all relevant
memblock_free() to memblock_phys_free() - Or change memblock_free() to
memblock_phys_free() and make memblock_free() an alias of it.
- Second patch: Make memblock_free_ptr() become memblock_free() and change
all remaining callers to the new semantics (IIUC memblock_free(__pa(ptr))
becomes memblock_free(ptr)) and make memblock_free_ptr() an alias of
memblock_free()
- Third patch: Replace and drop memblock_free_ptr()
- Fourth patch: Drop memblock_free_early() and memblock_free_early_nid() (all
users should have been upgraded to memblock_phys_free() in patch 1 or
memblock_free() in patch 2)

Christophe




Re: [PATCH 3/3] memblock: cleanup memblock_free interface

2021-09-23 Thread Christophe Leroy



Le 23/09/2021 à 09:43, Mike Rapoport a écrit :

From: Mike Rapoport 

For ages memblock_free() interface dealt with physical addresses even
despite the existence of memblock_alloc_xx() functions that return a
virtual pointer.

Introduce memblock_phys_free() for freeing physical ranges and repurpose
memblock_free() to free virtual pointers to make the following pairing
abundantly clear:

int memblock_phys_free(phys_addr_t base, phys_addr_t size);
phys_addr_t memblock_phys_alloc(phys_addr_t base, phys_addr_t size);

void *memblock_alloc(phys_addr_t size, phys_addr_t align);
void memblock_free(void *ptr, size_t size);

Replace intermediate memblock_free_ptr() with memblock_free() and drop
unnecessary aliases memblock_free_early() and memblock_free_early_nid().

Suggested-by: Linus Torvalds 
Signed-off-by: Mike Rapoport 
---



diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 1a04e5bdf655..37826d8c4f74 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -723,7 +723,7 @@ void __init smp_save_dump_cpus(void)
/* Get the CPU registers */
smp_save_cpu_regs(sa, addr, is_boot_cpu, page);
}
-   memblock_free(page, PAGE_SIZE);
+   memblock_phys_free(page, PAGE_SIZE);
diag_amode31_ops.diag308_reset();
pcpu_set_smt(0);
  }
@@ -880,7 +880,7 @@ void __init smp_detect_cpus(void)
  
  	/* Add CPUs present at boot */

__smp_rescan_cpus(info, true);
-   memblock_free_early((unsigned long)info, sizeof(*info));
+   memblock_free(info, sizeof(*info));
  }
  
  /*


I'm a bit lost. IIUC memblock_free_early() and memblock_free() were 
identical.


In the first hunk memblock_free() gets replaced by memblock_phys_free()
In the second hunk memblock_free_early() gets replaced by memblock_free()
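
The resulting alloc/free pairing can be sketched with heap-backed toy stand-ins. This is purely illustrative: phys_addr_t is faked as an integer address identity-mapped onto the heap, which is not how memblock works.

```c
#include <stdint.h>
#include <stdlib.h>

typedef uintptr_t phys_addr_t;

/* toy stand-ins, just enough to show the intended pairing */
static void *memblock_alloc(size_t size, size_t align)
{
	void *p = NULL;

	if (posix_memalign(&p, align, size))
		return NULL;
	return p;
}

static phys_addr_t memblock_phys_alloc(size_t size, size_t align)
{
	/* identity-map "physical" addresses onto the heap for the demo */
	return (phys_addr_t)memblock_alloc(size, align);
}

/* a virtual pointer pairs with memblock_free()... */
static void memblock_free(void *ptr, size_t size)
{
	(void)size;
	free(ptr);
}

/* ...and a physical range pairs with memblock_phys_free(), so call
 * sites no longer need memblock_free(__pa(ptr)) conversions */
static void memblock_phys_free(phys_addr_t base, size_t size)
{
	(void)size;
	free((void *)base);
}
```

Each allocator now has a free function of matching type, which is the whole point of the rename.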

I think it would be easier to follow if you could split it in several 
patches:
- First patch: Create memblock_phys_free() and change all relevant 
memblock_free() to memblock_phys_free() - Or change memblock_free() to 
memblock_phys_free() and make memblock_free() an alias of it.
- Second patch: Make memblock_free_ptr() become memblock_free() and 
change all remaining callers to the new semantics (IIUC 
memblock_free(__pa(ptr)) becomes memblock_free(ptr)) and make 
memblock_free_ptr() an alias of memblock_free()

- Third patch: Replace and drop memblock_free_ptr()
- Fourth patch: Drop memblock_free_early() and memblock_free_early_nid() 
(all users should have been upgraded to memblock_phys_free() in patch 1 
or memblock_free() in patch 2)
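
The mechanical part of such a split can be sketched with sed. This is illustrative only; a real treewide rename would more likely use Coccinelle plus manual review of each call site:

```shell
# toy file with one caller, purely for illustration
f=/tmp/memblock_demo.c
cat > "$f" <<'EOF'
	memblock_free(base, size);
EOF

# step 1 of the proposed split: rename physical-range frees
sed -i 's/\bmemblock_free(/memblock_phys_free(/g' "$f"
cat "$f"
```

After this pass no `memblock_free()` callers remain, so the second step can safely repurpose the name for virtual pointers.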


Christophe

Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

2021-09-15 Thread Christophe Leroy



Le 15/09/2021 à 12:08, Borislav Petkov a écrit :

On Wed, Sep 15, 2021 at 10:28:59AM +1000, Michael Ellerman wrote:

I don't love it, a new C file and an out-of-line call to then call back
to a static inline that for most configurations will return false ... but
whatever :)


Yeah, hch thinks it'll cause a big mess otherwise:

https://lore.kernel.org/lkml/ysscwvpxevxw%2f...@infradead.org/


Could you please provide a more explicit explanation of why inlining such 
a helper is considered bad practice and messy?


Because, as demonstrated in my previous response some days ago, taking 
that call out of line ends up with unnecessarily ugly generated code, and 
we don't benefit from GCC's capability to fold in and optimize out 
unreachable code.


As pointed out by Michael, in most cases the function will just return 
false, so beyond the performance concern there are also the code size and 
code coverage topics to take into account. And even when the function 
doesn't return false, the only thing it does folds into a single powerpc 
instruction, so there is really no point in making a dedicated 
out-of-line function for that, suffering the cost and size of a function 
call, and justifying the addition of a dedicated C file.





I guess less ifdeffery is nice too.


I can't see your point here. Inlining the function wouldn't add any 
ifdeffery as far as I can see.


So, would you mind reconsidering your approach and allowing architectures 
to provide an inline implementation by just not enforcing a generic 
prototype? Or otherwise provide more details and examples of why the 
cons outweigh the pros?


Thanks
Christophe

Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

2021-09-14 Thread Christophe Leroy



Le 14/09/2021 à 13:58, Borislav Petkov a écrit :

On Wed, Sep 08, 2021 at 05:58:35PM -0500, Tom Lendacky wrote:

Introduce a powerpc version of the cc_platform_has() function. This will
be used to replace the powerpc mem_encrypt_active() implementation, so
the implementation will initially only support the CC_ATTR_MEM_ENCRYPT
attribute.

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
  arch/powerpc/platforms/pseries/Kconfig   |  1 +
  arch/powerpc/platforms/pseries/Makefile  |  2 ++
  arch/powerpc/platforms/pseries/cc_platform.c | 26 
  3 files changed, 29 insertions(+)
  create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c


Michael,

can I get an ACK for the ppc bits to carry them through the tip tree
pls?

Btw, on a related note, cross-compiling this throws the following error here:

$ make 
CROSS_COMPILE=/home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/powerpc64-linux-
 V=1 ARCH=powerpc

...

/home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/powerpc64-linux-gcc
 -Wp,-MD,arch/powerpc/boot/.crt0.o.d -D__ASSEMBLY__ -Wall -Wundef 
-Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -O2 -msoft-float 
-mno-altivec -mno-vsx -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc 
-include ./include/linux/compiler_attributes.h -I./arch/powerpc/include 
-I./arch/powerpc/include/generated  -I./include -I./arch/powerpc/include/uapi 
-I./arch/powerpc/include/generated/uapi -I./include/uapi 
-I./include/generated/uapi -include ./include/linux/compiler-version.h -include 
./include/linux/kconfig.h -m32 -isystem 
/home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/../lib/gcc/powerpc64-linux/9.4.0/include
 -mbig-endian -nostdinc -c -o arch/powerpc/boot/crt0.o arch/powerpc/boot/crt0.S
In file included from :
././include/linux/compiler_attributes.h:62:5: warning: "__has_attribute" is not 
defined, evaluates to 0 [-Wundef]
62 | #if __has_attribute(__assume_aligned__)
   | ^~~
././include/linux/compiler_attributes.h:62:20: error: missing binary operator before 
token "("
62 | #if __has_attribute(__assume_aligned__)
   |^
././include/linux/compiler_attributes.h:88:5: warning: "__has_attribute" is not 
defined, evaluates to 0 [-Wundef]
88 | #if __has_attribute(__copy__)
   | ^~~
...

Known issue?

This __has_attribute() thing is supposed to be supported
in gcc since 5.1 and I'm using the crosstool stuff from
https://www.kernel.org/pub/tools/crosstool/ and gcc-9.4 above is pretty
new so that should not happen actually.

But it does...

Hmmm.




Yes, see 
https://lore.kernel.org/linuxppc-dev/20210914123919.58203...@canb.auug.org.au/T/#t



Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

2021-09-09 Thread Christophe Leroy




On 9/8/21 10:58 PM, Tom Lendacky wrote:

Introduce a powerpc version of the cc_platform_has() function. This will
be used to replace the powerpc mem_encrypt_active() implementation, so
the implementation will initially only support the CC_ATTR_MEM_ENCRYPT
attribute.

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
  arch/powerpc/platforms/pseries/Kconfig   |  1 +
  arch/powerpc/platforms/pseries/Makefile  |  2 ++
  arch/powerpc/platforms/pseries/cc_platform.c | 26 
  3 files changed, 29 insertions(+)
  create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c

diff --git a/arch/powerpc/platforms/pseries/Kconfig 
b/arch/powerpc/platforms/pseries/Kconfig
index 5e037df2a3a1..2e57391e0778 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -159,6 +159,7 @@ config PPC_SVM
select SWIOTLB
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+   select ARCH_HAS_CC_PLATFORM
help
 There are certain POWER platforms which support secure guests using
 the Protected Execution Facility, with the help of an Ultravisor
diff --git a/arch/powerpc/platforms/pseries/Makefile 
b/arch/powerpc/platforms/pseries/Makefile
index 4cda0ef87be0..41d8aee98da4 100644
--- a/arch/powerpc/platforms/pseries/Makefile
+++ b/arch/powerpc/platforms/pseries/Makefile
@@ -31,3 +31,5 @@ obj-$(CONFIG_FA_DUMP) += rtas-fadump.o
  
  obj-$(CONFIG_SUSPEND)		+= suspend.o

  obj-$(CONFIG_PPC_VAS) += vas.o
+
+obj-$(CONFIG_ARCH_HAS_CC_PLATFORM) += cc_platform.o
diff --git a/arch/powerpc/platforms/pseries/cc_platform.c 
b/arch/powerpc/platforms/pseries/cc_platform.c
new file mode 100644
index ..e8021af83a19
--- /dev/null
+++ b/arch/powerpc/platforms/pseries/cc_platform.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Confidential Computing Platform Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#include 
+#include 
+
+#include 
+#include 
+
+bool cc_platform_has(enum cc_attr attr)
+{


Please keep this function inline, as mem_encrypt_active() is.



+   switch (attr) {
+   case CC_ATTR_MEM_ENCRYPT:
+   return is_secure_guest();
+
+   default:
+   return false;
+   }
+}
+EXPORT_SYMBOL_GPL(cc_platform_has);




Re: [PATCH v3 2/8] mm: Introduce a function to check for confidential computing features

2021-09-09 Thread Christophe Leroy




On 9/8/21 10:58 PM, Tom Lendacky wrote:

In prep for other confidential computing technologies, introduce a generic
helper function, cc_platform_has(), that can be used to check for specific


I have a little problem with that naming.

For me CC has always meant Compiler Collection.


active confidential computing attributes, like memory encryption. This is
intended to eliminate having to add multiple technology-specific checks to
the code (e.g. if (sev_active() || tdx_active())).

Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Tom Lendacky 
---
  arch/Kconfig|  3 ++
  include/linux/cc_platform.h | 88 +
  2 files changed, 91 insertions(+)
  create mode 100644 include/linux/cc_platform.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 3743174da870..ca7c359e5da8 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1234,6 +1234,9 @@ config RELR
  config ARCH_HAS_MEM_ENCRYPT
bool
  
+config ARCH_HAS_CC_PLATFORM

+   bool
+
  config HAVE_SPARSE_SYSCALL_NR
 bool
 help
diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
new file mode 100644
index ..253f3ea66cd8
--- /dev/null
+++ b/include/linux/cc_platform.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Confidential Computing Platform Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _CC_PLATFORM_H
+#define _CC_PLATFORM_H
+
+#include 
+#include 
+
+/**
+ * enum cc_attr - Confidential computing attributes
+ *
+ * These attributes represent confidential computing features that are
+ * currently active.
+ */
+enum cc_attr {
+   /**
+* @CC_ATTR_MEM_ENCRYPT: Memory encryption is active
+*
+* The platform/OS is running with active memory encryption. This
+* includes running either as a bare-metal system or a hypervisor
+* and actively using memory encryption or as a guest/virtual machine
+* and actively using memory encryption.
+*
+* Examples include SME, SEV and SEV-ES.
+*/
+   CC_ATTR_MEM_ENCRYPT,
+
+   /**
+* @CC_ATTR_HOST_MEM_ENCRYPT: Host memory encryption is active
+*
+* The platform/OS is running as a bare-metal system or a hypervisor
+* and actively using memory encryption.
+*
+* Examples include SME.
+*/
+   CC_ATTR_HOST_MEM_ENCRYPT,
+
+   /**
+* @CC_ATTR_GUEST_MEM_ENCRYPT: Guest memory encryption is active
+*
+* The platform/OS is running as a guest/virtual machine and actively
+* using memory encryption.
+*
+* Examples include SEV and SEV-ES.
+*/
+   CC_ATTR_GUEST_MEM_ENCRYPT,
+
+   /**
+* @CC_ATTR_GUEST_STATE_ENCRYPT: Guest state encryption is active
+*
+* The platform/OS is running as a guest/virtual machine and actively
+* using memory encryption and register state encryption.
+*
+* Examples include SEV-ES.
+*/
+   CC_ATTR_GUEST_STATE_ENCRYPT,
+};
+
+#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
+
+/**
+ * cc_platform_has() - Checks if the specified cc_attr attribute is active
+ * @attr: Confidential computing attribute to check
+ *
+ * The cc_platform_has() function will return an indicator as to whether the
+ * specified Confidential Computing attribute is currently active.
+ *
+ * Context: Any context
+ * Return:
+ * * TRUE  - Specified Confidential Computing attribute is active
+ * * FALSE - Specified Confidential Computing attribute is not active
+ */
+bool cc_platform_has(enum cc_attr attr);


This declaration makes it impossible for architectures to define this 
function inline.


For such a function, having it inline would make more sense, as it would 
allow GCC to perform constant folding and avoid the overhead of calling 
a sub-function.
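
A minimal sketch of the inline alternative being argued for, with is_secure_guest() stubbed out for illustration (the real powerpc helper lives elsewhere and queries the platform; only the shape of the inline variant matters here):

```c
#include <stdbool.h>

enum cc_attr {
	CC_ATTR_MEM_ENCRYPT,
	CC_ATTR_HOST_MEM_ENCRYPT,
};

/* stub for illustration; the real helper queries the platform */
static inline bool is_secure_guest(void)
{
	return true;
}

/* Inline variant: when 'attr' is a compile-time constant, the compiler
 * can fold the switch and drop the unreachable branches entirely,
 * instead of emitting an out-of-line function call. */
static inline bool cc_platform_has(enum cc_attr attr)
{
	switch (attr) {
	case CC_ATTR_MEM_ENCRYPT:
		return is_secure_guest();
	default:
		return false;
	}
}
```

With the constant attribute visible at every call site, `cc_platform_has(CC_ATTR_MEM_ENCRYPT)` reduces to the `is_secure_guest()` check itself, which is the constant-folding benefit described above.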



+
+#else  /* !CONFIG_ARCH_HAS_CC_PLATFORM */
+
+static inline bool cc_platform_has(enum cc_attr attr) { return false; }
+
+#endif /* CONFIG_ARCH_HAS_CC_PLATFORM */
+
+#endif /* _CC_PLATFORM_H */




Re: [PATCH v3 8/8] treewide: Replace the use of mem_encrypt_active() with cc_platform_has()

2021-09-09 Thread Christophe Leroy




On 9/8/21 10:58 PM, Tom Lendacky wrote:


diff --git a/arch/powerpc/include/asm/mem_encrypt.h 
b/arch/powerpc/include/asm/mem_encrypt.h
index ba9dab07c1be..2f26b8fc8d29 100644
--- a/arch/powerpc/include/asm/mem_encrypt.h
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -10,11 +10,6 @@
  
  #include 
  
-static inline bool mem_encrypt_active(void)

-{
-   return is_secure_guest();
-}
-
  static inline bool force_dma_unencrypted(struct device *dev)
  {
return is_secure_guest();
diff --git a/arch/powerpc/platforms/pseries/svm.c 
b/arch/powerpc/platforms/pseries/svm.c
index 87f001b4c4e4..c083ecbbae4d 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -8,6 +8,7 @@
  
  #include 

  #include 
+#include 
  #include 
  #include 
  #include 
@@ -63,7 +64,7 @@ void __init svm_swiotlb_init(void)
  
  int set_memory_encrypted(unsigned long addr, int numpages)

  {
-   if (!mem_encrypt_active())
+   if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return 0;
  
  	if (!PAGE_ALIGNED(addr))

@@ -76,7 +77,7 @@ int set_memory_encrypted(unsigned long addr, int numpages)
  
  int set_memory_decrypted(unsigned long addr, int numpages)

  {
-   if (!mem_encrypt_active())
+   if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return 0;
  
  	if (!PAGE_ALIGNED(addr))


This change unnecessarily complicates the two functions. This is due to 
cc_platform_has() being out of line; it should really remain inline.

Before the change we got:

 <.set_memory_encrypted>:
   0:   7d 20 00 a6 mfmsr   r9
   4:   75 29 00 40 andis.  r9,r9,64
   8:   41 82 00 48 beq 50 <.set_memory_encrypted+0x50>
   c:   78 69 04 20 clrldi  r9,r3,48
  10:   2c 29 00 00 cmpdi   r9,0
  14:   40 82 00 4c bne 60 <.set_memory_encrypted+0x60>
  18:   7c 08 02 a6 mflr    r0
  1c:   7c 85 23 78 mr  r5,r4
  20:   78 64 85 02 rldicl  r4,r3,48,20
  24:   61 23 f1 34 ori r3,r9,61748
  28:   f8 01 00 10 std r0,16(r1)
  2c:   f8 21 ff 91 stdu    r1,-112(r1)
  30:   48 00 00 01 bl  30 <.set_memory_encrypted+0x30>
30: R_PPC64_REL24   .ucall_norets
  34:   60 00 00 00 nop
  38:   38 60 00 00 li  r3,0
  3c:   38 21 00 70 addi    r1,r1,112
  40:   e8 01 00 10 ld  r0,16(r1)
  44:   7c 08 03 a6 mtlr    r0
  48:   4e 80 00 20 blr
  50:   38 60 00 00 li  r3,0
  54:   4e 80 00 20 blr
  60:   38 60 ff ea li  r3,-22
  64:   4e 80 00 20 blr

After the change we get:

 <.set_memory_encrypted>:
    0:   7c 08 02 a6 mflr    r0
   4:   fb c1 ff f0 std r30,-16(r1)
   8:   fb e1 ff f8 std r31,-8(r1)
   c:   7c 7f 1b 78 mr  r31,r3
  10:   38 60 00 00 li  r3,0
  14:   7c 9e 23 78 mr  r30,r4
  18:   f8 01 00 10 std r0,16(r1)
  1c:   f8 21 ff 81 stdu    r1,-128(r1)
  20:   48 00 00 01 bl  20 <.set_memory_encrypted+0x20>
20: R_PPC64_REL24   .cc_platform_has
  24:   60 00 00 00 nop
  28:   2c 23 00 00 cmpdi   r3,0
  2c:   41 82 00 44 beq 70 <.set_memory_encrypted+0x70>
  30:   7b e9 04 20 clrldi  r9,r31,48
  34:   2c 29 00 00 cmpdi   r9,0
  38:   40 82 00 58 bne 90 <.set_memory_encrypted+0x90>
  3c:   38 60 00 00 li  r3,0
  40:   7f c5 f3 78 mr  r5,r30
  44:   7b e4 85 02 rldicl  r4,r31,48,20
  48:   60 63 f1 34 ori r3,r3,61748
  4c:   48 00 00 01 bl  4c <.set_memory_encrypted+0x4c>
4c: R_PPC64_REL24   .ucall_norets
  50:   60 00 00 00 nop
  54:   38 60 00 00 li  r3,0
  58:   38 21 00 80 addi    r1,r1,128
  5c:   e8 01 00 10 ld  r0,16(r1)
  60:   eb c1 ff f0 ld  r30,-16(r1)
  64:   eb e1 ff f8 ld  r31,-8(r1)
  68:   7c 08 03 a6 mtlr    r0
  6c:   4e 80 00 20 blr
  70:   38 21 00 80 addi    r1,r1,128
  74:   38 60 00 00 li  r3,0
  78:   e8 01 00 10 ld  r0,16(r1)
  7c:   eb c1 ff f0 ld  r30,-16(r1)
  80:   eb e1 ff f8 ld  r31,-8(r1)
  84:   7c 08 03 a6 mtlr    r0
  88:   4e 80 00 20 blr
  90:   38 60 ff ea li  r3,-22
  94:   4b ff ff c4 b   58 <.set_memory_encrypted+0x58>



Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-02 Thread Christophe Leroy



Le 28/07/2021 à 00:26, Tom Lendacky a écrit :

Replace occurrences of mem_encrypt_active() with calls to prot_guest_has()
with the PATTR_MEM_ENCRYPT attribute.



What about 
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210730114231.23445-1-w...@kernel.org/ ?


Christophe




Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Thomas Zimmermann 
Cc: VMware Graphics 
Cc: Joerg Roedel 
Cc: Will Deacon 
Cc: Dave Young 
Cc: Baoquan He 
Signed-off-by: Tom Lendacky 
---
  arch/x86/kernel/head64.c| 4 ++--
  arch/x86/mm/ioremap.c   | 4 ++--
  arch/x86/mm/mem_encrypt.c   | 5 ++---
  arch/x86/mm/pat/set_memory.c| 3 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
  drivers/gpu/drm/drm_cache.c | 4 ++--
  drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++--
  drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +++---
  drivers/iommu/amd/iommu.c   | 3 ++-
  drivers/iommu/amd/iommu_v2.c| 3 ++-
  drivers/iommu/iommu.c   | 3 ++-
  fs/proc/vmcore.c| 6 +++---
  kernel/dma/swiotlb.c| 4 ++--
  13 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..cafed6456d45 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -19,7 +19,7 @@
  #include 
  #include 
  #include 
-#include 
+#include 
  #include 
  
  #include 

@@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 * there is no need to zero it after changing the memory encryption
 * attribute.
 */
-   if (mem_encrypt_active()) {
+   if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
vaddr = (unsigned long)__start_bss_decrypted;
vaddr_end = (unsigned long)__end_bss_decrypted;
for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 0f2d5ace5986..5e1c1f5cbbe8 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -693,7 +693,7 @@ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
  bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
 unsigned long flags)
  {
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return true;
  
  	if (flags & MEMREMAP_ENC)

@@ -723,7 +723,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
  {
bool encrypted_prot;
  
-	if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return prot;
  
  	encrypted_prot = true;

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 451de8e84fce..0f1533dbe81c 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -364,8 +364,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
  /*
   * SME and SEV are very similar but they are not the same, so there are
   * times that the kernel will need to distinguish between SME and SEV. The
- * sme_active() and sev_active() functions are used for this.  When a
- * distinction isn't needed, the mem_encrypt_active() function can be used.
+ * sme_active() and sev_active() functions are used for this.
   *
   * The trampoline code is a good example for this requirement.  Before
   * paging is activated, SME will access all memory as decrypted, but SEV
@@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
 * The unused memory range was mapped decrypted, change the encryption
 * attribute from decrypted to encrypted before freeing it.
 */
-   if (mem_encrypt_active()) {
+   if (sme_me_mask) {
r = set_memory_encrypted(vaddr, npages);
if (r) {
pr_warn("failed to free unused decrypted pages\n");
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ad8a5c586a35..6925f2bb4be1 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -18,6 +18,7 @@
  #include 
  #include 
  #include 
+#include 
  
  #include 

  #include 
@@ -1986,7 +1987,7 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
int ret;
  
	/* Nothing to do if memory encryption is not active */
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return 0;
  
  	/* Should not be working on unaligned addresses */

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index abb928894eac..8407224717df 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -38,6 +38,7 @@
  

Re: [PATCH] iommu: spapr_tce: Disable compile testing to fix build on book3s_32 config

2020-04-18 Thread Christophe Leroy




On 04/14/2020 02:26 PM, Krzysztof Kozlowski wrote:

Although SPAPR_TCE_IOMMU itself can be compile tested on certain PowerPC
configurations, its presence makes arch/powerpc/kvm/Makefile to select
modules which do not build in such configuration.

The arch/powerpc/kvm/ modules use kvm_arch.spapr_tce_tables which exists
only with CONFIG_PPC_BOOK3S_64.  However these modules are selected when
COMPILE_TEST and SPAPR_TCE_IOMMU are chosen leading to build failures:

 In file included from arch/powerpc/include/asm/book3s/64/mmu-hash.h:20:0,
  from arch/powerpc/kvm/book3s_64_vio_hv.c:22:
 arch/powerpc/include/asm/book3s/64/pgtable.h:17:0: error: "_PAGE_EXEC" redefined [-Werror]
  #define _PAGE_EXEC  0x1 /* execute permission */

 In file included from arch/powerpc/include/asm/book3s/32/pgtable.h:8:0,
  from arch/powerpc/include/asm/book3s/pgtable.h:8,
  from arch/powerpc/include/asm/pgtable.h:18,
  from include/linux/mm.h:95,
  from arch/powerpc/include/asm/io.h:29,
  from include/linux/io.h:13,
  from include/linux/irq.h:20,
  from arch/powerpc/include/asm/hardirq.h:6,
  from include/linux/hardirq.h:9,
  from include/linux/kvm_host.h:7,
  from arch/powerpc/kvm/book3s_64_vio_hv.c:12:
 arch/powerpc/include/asm/book3s/32/hash.h:29:0: note: this is the location of the previous definition
  #define _PAGE_EXEC 0x200 /* software: exec allowed */

Reported-by: Geert Uytterhoeven 
Fixes: e93a1695d7fb ("iommu: Enable compile testing for some of drivers")
Signed-off-by: Krzysztof Kozlowski 
---
  drivers/iommu/Kconfig | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 58b4a4dbfc78..3532b1ead19d 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -362,7 +362,7 @@ config IPMMU_VMSA
  
  config SPAPR_TCE_IOMMU

bool "sPAPR TCE IOMMU Support"
-   depends on PPC_POWERNV || PPC_PSERIES || (PPC && COMPILE_TEST)
+   depends on PPC_POWERNV || PPC_PSERIES
select IOMMU_API
help
  Enables bits of IOMMU API required by VFIO. The iommu_ops



Should it be fixed the other way round, something like:

diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 2bfeaa13befb..906707d15810 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -135,4 +135,4 @@ obj-$(CONFIG_KVM_BOOK3S_32) += kvm.o
 obj-$(CONFIG_KVM_BOOK3S_64_PR) += kvm-pr.o
 obj-$(CONFIG_KVM_BOOK3S_64_HV) += kvm-hv.o

-obj-y += $(kvm-book3s_64-builtin-objs-y)
+obj-$(CONFIG_KVM_BOOK3S_64) += $(kvm-book3s_64-builtin-objs-y)


Christophe
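
Why the suggested Makefile change works: kbuild expands obj-$(CONFIG_KVM_BOOK3S_64) to obj-y, obj-m or obj-n depending on the config symbol, and only obj-y/obj-m objects are built. A standalone GNU make sketch (file and object names are illustrative) shows the expansion:

```shell
cat > /tmp/kbuild-demo.mk <<'EOF'
CONFIG_KVM_BOOK3S_64 := n
kvm-book3s_64-builtin-objs-y := book3s_64_vio_hv.o
obj-y :=
obj-n :=
obj-$(CONFIG_KVM_BOOK3S_64) += $(kvm-book3s_64-builtin-objs-y)
$(info obj-y = $(obj-y))
$(info obj-n = $(obj-n))
all: ;
EOF
make -s -f /tmp/kbuild-demo.mk
```

With CONFIG_KVM_BOOK3S_64 set to n, the object ends up on obj-n and is never built; set it to y and the same line feeds obj-y. That is why `obj-y += $(kvm-book3s_64-builtin-objs-y)` unconditionally pulls the objects in, while the conditional form respects the config.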


Re: [PATCH v3] crypto: talitos - fix ablkcipher for CONFIG_VMAP_STACK

2019-01-07 Thread Christophe Leroy



On 04/01/2019 at 16:24, Horia Geanta wrote:

On 1/4/2019 5:17 PM, Horia Geanta wrote:

On 12/21/2018 10:07 AM, Christophe Leroy wrote:
[snip]

IV cannot be on stack when CONFIG_VMAP_STACK is selected because the stack
cannot be DMA mapped anymore.
This looks better, thanks.



This patch copies the IV into the extended descriptor when iv is not
a valid linear address.


Though I am not sure the checks in place are enough.


Fixes: 4de9d0b547b9 ("crypto: talitos - Add ablkcipher algorithms")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Leroy 
---
  v3: Using struct edesc buffer.

  v2: Using per-request context.

[snip]

+   if (ivsize && !virt_addr_valid(iv))
+   alloc_len += ivsize;

[snip]
  
+	if (ivsize && !virt_addr_valid(iv))

A more precise condition would be (!is_vmalloc_addr || is_vmalloc_addr(iv))


Sorry for the typo, I meant:
(!virt_addr_valid(iv) || is_vmalloc_addr(iv))


As far as I know, virt_addr_valid() means the address is in the linear 
memory space. So it cannot be a vmalloc address if it is a linear-space 
address, can it?


At least, it is that way on powerpc which is the arch embedding the 
talitos crypto engine. virt_addr_valid() means we are under max_pfn, 
while VMALLOC_START is above max_pfn.


Christophe




It matches the checks in debug_dma_map_single() helper, though I am not sure
they are enough to rule out all exceptions of DMA API.


Re: [PATCH] fix double ;;s in code

2018-02-18 Thread Christophe LEROY



On 17/02/2018 at 22:19, Pavel Machek wrote:


Fix double ;;'s in code.

Signed-off-by: Pavel Machek 


A summary of the files modified on top of the patch would help 
understand the impact.


And maybe there should be one patch per area, e.g. one for each arch-specific 
change, one for drivers/ and one for block/?


Christophe
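
One way such cleanups are found — a hedged sketch, since the patch doesn't say what command Pavel used — is a tree-wide grep for a statement-terminating double semicolon, which would also make it easy to split the results per area as suggested:

```shell
# Create a small C file containing the defect, then look for ';;' at the
# end of a line. Restrict to C sources: in shell scripts ';;' is a
# legitimate 'case' terminator and must not be flagged.
cat > /tmp/double_semi.c <<'EOF'
int f(void)
{
    int x = 0;;
    return x;
}
EOF
grep -nE ';;[[:space:]]*$' /tmp/double_semi.c
```

grep -n prints each offending line with its line number, so the hits can be bucketed by path (arch/, drivers/, block/) before writing the per-area patches.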



diff --git a/arch/arc/kernel/setup.c b/arch/arc/kernel/setup.c
index 9d27331..ec12fe1 100644
--- a/arch/arc/kernel/setup.c
+++ b/arch/arc/kernel/setup.c
@@ -373,7 +373,7 @@ static void arc_chk_core_config(void)
  {
	struct cpuinfo_arc *cpu = &cpuinfo_arc700[smp_processor_id()];
int saved = 0, present = 0;
-   char *opt_nm = NULL;;
+   char *opt_nm = NULL;
  
  	if (!cpu->extn.timer0)

panic("Timer0 is not present!\n");
diff --git a/arch/arc/kernel/unwind.c b/arch/arc/kernel/unwind.c
index 333daab..183391d 100644
--- a/arch/arc/kernel/unwind.c
+++ b/arch/arc/kernel/unwind.c
@@ -366,7 +366,7 @@ static void init_unwind_hdr(struct unwind_table *table,
return;
  
  ret_err:

-   panic("Attention !!! Dwarf FDE parsing errors\n");;
+   panic("Attention !!! Dwarf FDE parsing errors\n");
  }
  
  #ifdef CONFIG_MODULES

diff --git a/arch/arm/kernel/time.c b/arch/arm/kernel/time.c
index 629f8e9..cf2701c 100644
--- a/arch/arm/kernel/time.c
+++ b/arch/arm/kernel/time.c
@@ -83,7 +83,7 @@ static void dummy_clock_access(struct timespec64 *ts)
  }
  
  static clock_access_fn __read_persistent_clock = dummy_clock_access;

-static clock_access_fn __read_boot_clock = dummy_clock_access;;
+static clock_access_fn __read_boot_clock = dummy_clock_access;
  
  void read_persistent_clock64(struct timespec64 *ts)

  {
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 6618036..9ae31f7 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1419,7 +1419,7 @@ static int compat_ptrace_hbp_get(unsigned int note_type,
u64 addr = 0;
u32 ctrl = 0;
  
-	int err, idx = compat_ptrace_hbp_num_to_idx(num);;
+   int err, idx = compat_ptrace_hbp_num_to_idx(num);
  
  	if (num & 1) {

		err = ptrace_hbp_get_addr(note_type, tsk, idx, &addr);
diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
index f0f5cd4..f9818d7 100644
--- a/arch/powerpc/kvm/book3s_xive.c
+++ b/arch/powerpc/kvm/book3s_xive.c
@@ -188,7 +188,7 @@ static int xive_provision_queue(struct kvm_vcpu *vcpu, u8 prio)
if (!qpage) {
pr_err("Failed to allocate queue %d for VCPU %d\n",
   prio, xc->server_num);
-   return -ENOMEM;;
+   return -ENOMEM;
}
memset(qpage, 0, 1 << xive->q_order);
  
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 496e476..a6c92c7 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1854,7 +1854,7 @@ static int pnv_pci_ioda_dma_set_mask(struct pci_dev *pdev, u64 dma_mask)
s64 rc;
  
	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
-   return -ENODEV;;
+   return -ENODEV;
  
	pe = &phb->ioda.pe_array[pdn->pe_number];

if (pe->tce_bypass_enabled) {
diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 353e20c..886a911 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -439,7 +439,7 @@ setup_uga32(void **uga_handle, unsigned long size, u32 *width, u32 *height)
struct efi_uga_draw_protocol *uga = NULL, *first_uga;
efi_guid_t uga_proto = EFI_UGA_PROTOCOL_GUID;
unsigned long nr_ugas;
-   u32 *handles = (u32 *)uga_handle;;
+   u32 *handles = (u32 *)uga_handle;
efi_status_t status = EFI_INVALID_PARAMETER;
int i;
  
@@ -484,7 +484,7 @@ setup_uga64(void **uga_handle, unsigned long size, u32 *width, u32 *height)

struct efi_uga_draw_protocol *uga = NULL, *first_uga;
efi_guid_t uga_proto = EFI_UGA_PROTOCOL_GUID;
unsigned long nr_ugas;
-   u64 *handles = (u64 *)uga_handle;;
+   u64 *handles = (u64 *)uga_handle;
efi_status_t status = EFI_INVALID_PARAMETER;
int i;
  
diff --git a/block/sed-opal.c b/block/sed-opal.c

index 9ed51d0c..e4929ee 100644
--- a/block/sed-opal.c
+++ b/block/sed-opal.c
@@ -490,7 +490,7 @@ static int opal_discovery0_end(struct opal_dev *dev)
  
  	if (!found_com_id) {

		pr_debug("Could not find OPAL comid for device. Returning early\n");
-   return -EOPNOTSUPP;;
+   return -EOPNOTSUPP;
}
  
  	dev->comid = comid;

diff --git a/drivers/clocksource/mips-gic-timer.c b/drivers/clocksource/mips-gic-timer.c
index a04808a..65e18c8 100644
--- a/drivers/clocksource/mips-gic-timer.c
+++ b/drivers/clocksource/mips-gic-timer.c
@@ -205,12 +205,12 @@ static int __init gic_clocksource_of_init(struct device_node *node)
} else 

Re: [PATCH 07/11] powerpc: make dma_cache_sync a no-op

2017-10-04 Thread Christophe LEROY



On 03/10/2017 at 13:43, Christoph Hellwig wrote:

On Tue, Oct 03, 2017 at 01:24:57PM +0200, Christophe LEROY wrote:

powerpc does not implement DMA_ATTR_NON_CONSISTENT allocations, so it
doesn't make any sense to do any work in dma_cache_sync given that it
must be a no-op when dma_alloc_attrs returns coherent memory.

What about arch/powerpc/mm/dma-noncoherent.c ?

Powerpc 8xx doesn't have coherent memory.


It doesn't implement the DMA_ATTR_NON_CONSISTENT interface either,
so if it really doesn't have a way to provide dma coherent allocation
(although the code in __dma_alloc_coherent suggests it does provide
dma coherent allocations) I have no idea how it could ever have
worked.



Yes, indeed it provides coherent memory by allocating non-cached memory.

And drivers aiming at using non coherent memory do it by using kmalloc() 
with GFP_DMA then dma_map_single().
Then they use dma_sync_single_for_xxx(), which calls __dma_sync() on the 
8xx and is a nop on other powerpcs.


Christophe

Re: [PATCH 07/11] powerpc: make dma_cache_sync a no-op

2017-10-03 Thread Christophe LEROY



On 03/10/2017 at 12:43, Christoph Hellwig wrote:

powerpc does not implement DMA_ATTR_NON_CONSISTENT allocations, so it
doesn't make any sense to do any work in dma_cache_sync given that it
must be a no-op when dma_alloc_attrs returns coherent memory.

What about arch/powerpc/mm/dma-noncoherent.c ?

Powerpc 8xx doesn't have coherent memory.

Christophe



Signed-off-by: Christoph Hellwig 
---
  arch/powerpc/include/asm/dma-mapping.h | 2 --
  1 file changed, 2 deletions(-)

diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index eaece3d3e225..320846442bfb 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -144,8 +144,6 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
  static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction)
  {
-   BUG_ON(direction == DMA_NONE);
-   __dma_sync(vaddr, size, (int)direction);
  }
  
  #endif /* __KERNEL__ */


