Re: [PATCH] tpm: of: avoid __va() translation for event log address

2020-09-27 Thread Christophe Leroy




Le 28/09/2020 à 01:44, Jarkko Sakkinen a écrit :

On Fri, Sep 25, 2020 at 09:00:18AM -0300, Jason Gunthorpe wrote:

On Fri, Sep 25, 2020 at 01:29:20PM +0300, Jarkko Sakkinen wrote:

On Fri, Sep 25, 2020 at 09:00:56AM +0200, Ard Biesheuvel wrote:

On Fri, 25 Sep 2020 at 07:56, Jarkko Sakkinen
 wrote:


On Tue, Sep 22, 2020 at 11:41:28AM +0200, Ard Biesheuvel wrote:

The TPM event log is provided to the OS by the firmware, by loading
it into an area in memory and passing the physical address via a node
in the device tree.

Currently, we use __va() to access the memory via the kernel's linear
map: however, it is not guaranteed that the linear map covers this
particular address, as we may be running under HIGHMEM on a 32-bit
architecture, or running firmware that uses a memory type for the
event log that is omitted from the linear map (such as EfiReserved).


Makes perfect sense to the level that I wonder if this should have a
fixes tag and/or needs to be backported to the stable kernels?



AIUI, the code was written specifically for ppc64, which is a
non-highmem, non-EFI architecture. However, when we start reusing this
driver for ARM, this issue could pop up.

The code itself has been refactored a couple of times, so I think it
will require different versions of the patch for different generations
of stable kernels.

So perhaps just add Cc: , and wait and see how
far back it applies cleanly?


Yeah, I think I'll cc it with some note before the diffstat.

I'm thinking to cap it to only 5.x kernels (at least first) unless it is
dead easy to backport below that.


I have this vague recollection of pointing at this before and being
told that it had to be __va for some PPC reason?

Do check with the PPC people first, I see none on the CC list.

Jason


Thanks, added arch/powerpc maintainers.



As far as I can see, memremap() won't work on PPC32 at least:

IIUC, memremap() calls arch_memremap_wb(), which calls ioremap_cache().
If that fails, ioremap_wt() and ioremap_wc() are tried in turn.

All ioremap calls end up in __ioremap_caller(), which will return NULL if you
try to ioremap RAM.

So the statement "So instead, use memremap(), which will reuse the linear
mapping if it is valid, or create another mapping otherwise." seems to be
wrong, at least for PPC32.

Even for PPC64, which doesn't seem to have the RAM check, I can't see that it
will "reuse the linear mapping".


Christophe


Re: [PATCH -next] ocxl: simplify the return expression of free_function_dev()

2020-09-27 Thread Andrew Donnellan

On 21/9/20 11:10 pm, Qinglang Miao wrote:

Simplify the return expression.

Signed-off-by: Qinglang Miao 


Looks good

Acked-by: Andrew Donnellan 

--
Andrew Donnellan  OzLabs, ADL Canberra
a...@linux.ibm.com IBM Australia Limited


Re: [PATCH] tpm: of: avoid __va() translation for event log address

2020-09-27 Thread Jarkko Sakkinen
On Fri, Sep 25, 2020 at 09:00:18AM -0300, Jason Gunthorpe wrote:
> On Fri, Sep 25, 2020 at 01:29:20PM +0300, Jarkko Sakkinen wrote:
> > On Fri, Sep 25, 2020 at 09:00:56AM +0200, Ard Biesheuvel wrote:
> > > On Fri, 25 Sep 2020 at 07:56, Jarkko Sakkinen
> > >  wrote:
> > > >
> > > > On Tue, Sep 22, 2020 at 11:41:28AM +0200, Ard Biesheuvel wrote:
> > > > > The TPM event log is provided to the OS by the firmware, by loading
> > > > > it into an area in memory and passing the physical address via a node
> > > > > in the device tree.
> > > > >
> > > > > Currently, we use __va() to access the memory via the kernel's linear
> > > > > map: however, it is not guaranteed that the linear map covers this
> > > > > particular address, as we may be running under HIGHMEM on a 32-bit
> > > > > architecture, or running firmware that uses a memory type for the
> > > > > event log that is omitted from the linear map (such as EfiReserved).
> > > >
> > > > Makes perfect sense to the level that I wonder if this should have a
> > > > fixes tag and/or needs to be backported to the stable kernels?
> > > >
> > > 
> > > AIUI, the code was written specifically for ppc64, which is a
> > > non-highmem, non-EFI architecture. However, when we start reusing this
> > > driver for ARM, this issue could pop up.
> > > 
> > > The code itself has been refactored a couple of times, so I think it
> > > will require different versions of the patch for different generations
> > > of stable kernels.
> > > 
> > > So perhaps just add Cc: , and wait and see how
> > > far back it applies cleanly?
> > 
> > Yeah, I think I'll cc it with some note before the diffstat.
> > 
> > I'm thinking to cap it to only 5.x kernels (at least first) unless it is
> > dead easy to backport below that.
> 
> I have this vague recollection of pointing at this before and being
> told that it had to be __va for some PPC reason?
> 
> Do check with the PPC people first, I see none on the CC list.
> 
> Jason

Thanks, added arch/powerpc maintainers.

/Jarkko


Re: [PATCH] rpadlpar_io:Add MODULE_DESCRIPTION entries to kernel modules

2020-09-27 Thread Oliver O'Halloran
On Sat, Sep 26, 2020 at 5:43 AM Bjorn Helgaas  wrote:
>
> On Thu, Sep 24, 2020 at 04:41:39PM +1000, Oliver O'Halloran wrote:
> > On Thu, Sep 24, 2020 at 3:15 PM Mamatha Inamdar
> >  wrote:
> > >
> > > This patch adds a brief MODULE_DESCRIPTION to rpadlpar_io kernel modules
> > > (descriptions taken from Kconfig file)
> > >
> > > Signed-off-by: Mamatha Inamdar 
> > > ---
> > >  drivers/pci/hotplug/rpadlpar_core.c |1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > diff --git a/drivers/pci/hotplug/rpadlpar_core.c 
> > > b/drivers/pci/hotplug/rpadlpar_core.c
> > > index f979b70..bac65ed 100644
> > > --- a/drivers/pci/hotplug/rpadlpar_core.c
> > > +++ b/drivers/pci/hotplug/rpadlpar_core.c
> > > @@ -478,3 +478,4 @@ static void __exit rpadlpar_io_exit(void)
> > >  module_init(rpadlpar_io_init);
> > >  module_exit(rpadlpar_io_exit);
> > >  MODULE_LICENSE("GPL");
> > > +MODULE_DESCRIPTION("RPA Dynamic Logical Partitioning driver for I/O slots");
> >
> > RPA as a spec was superseded by PAPR in the early 2000s. Can we rename
> > this already?
> >
> > The only potential problem I can see is scripts doing: modprobe
> > rpadlpar_io or similar
> >
> > However, we should be able to fix that with a module alias.
>
> Is MODULE_DESCRIPTION() connected with how modprobe works?

I don't think so. The description is just there as an FYI.

> If this patch just improves documentation, without breaking users of
> modprobe, I'm fine with it, even if it would be nice to rename to PAPR
> or something in the future.

Right, the change in this patch is just a documentation fix and
shouldn't cause any problems.

I was suggesting renaming the module itself since the term "RPA" is
only used in this hotplug driver and some of the corresponding PHB add
/ remove handling in arch/powerpc/platforms/pseries/. We can make that
change in a follow up though.

> But, please use "git log --oneline drivers/pci/hotplug/rpadlpar*" and
> match the style, and also look through the rest of drivers/pci/ to see
> if we should do the same thing to any other modules.
>
> Bjorn
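The rename-with-alias approach Oliver mentions could look like this minimal sketch. The new module name here is hypothetical (nothing has been merged); MODULE_ALIAS() is the real mechanism that lets `modprobe rpadlpar_io` keep resolving after a rename, via the generated modules.alias file.

```c
/* Sketch: in the renamed module's source. The "papr" name is a
 * hypothetical example, not an actual proposed rename. */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("RPA Dynamic Logical Partitioning driver for I/O slots");
MODULE_ALIAS("rpadlpar_io"); /* keep `modprobe rpadlpar_io` working */
```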


Re: [PATCH v3 2/5] powerpc: apm82181: create shared dtsi for APM bluestone

2020-09-27 Thread Christian Lamparter
On Tue, Sep 22, 2020 at 9:14 PM Rob Herring  wrote:
>
> On Sat, Sep 19, 2020 at 2:23 PM Christian Lamparter  
> wrote:
> >
> > On 2020-09-15 03:05, Rob Herring wrote:
> > > On Sun, Sep 06, 2020 at 12:06:12AM +0200, Christian Lamparter wrote:
> > >> This patch adds a DTSI file that can be used by various device-tree
> > >> files for APM82181-based devices.
> > >>
> > >> Some of the nodes (like UART, PCIE, SATA) are used by the uboot and
> > >> need to stick with the naming-conventions of the old times'.
> > >> I've added comments whenever this was the case.
> > >>
> > >> Signed-off-by: Chris Blake 
> > >> Signed-off-by: Christian Lamparter 
> > >> ---
> > >> rfc v1 -> v2:
> > >>  - removed PKA (this CryptoPU will need driver)
> > >>  - stick with compatibles, nodes, ... from either
> > >>Bluestone (APM82181) or Canyonlands (PPC460EX).
> > >>  - add labels for NAND and NOR to help with access.
> > >> v2 -> v3:
> > >>  - nodename of pciex@d was changed to pcie@d..
> > >>due to upstream patch.
> > >>  - use simple-bus on the ebc, opb and plb nodes
> > >> ---
> > >>   arch/powerpc/boot/dts/apm82181.dtsi | 466 
> > >>   1 file changed, 466 insertions(+)
> > >>   create mode 100644 arch/powerpc/boot/dts/apm82181.dtsi
> > >>
> > >> diff --git a/arch/powerpc/boot/dts/apm82181.dtsi 
> > >> b/arch/powerpc/boot/dts/apm82181.dtsi
> > >> new file mode 100644
> > >> index ..60283430978d
> > >> --- /dev/null
> > >> +++ b/arch/powerpc/boot/dts/apm82181.dtsi
> > >> @@ -0,0 +1,466 @@
> > >> +// SPDX-License-Identifier: GPL-2.0-or-later
> > >> +/*
> > >> + * Device Tree template include for various APM82181 boards.
> > >> + *
> > >> + * The SoC is an evolution of the PPC460EX predecessor.
> > >> + * This is why dt-nodes from the canyonlands EBC, OPB, USB,
> > >> + * DMA, SATA, EMAC, ... ended up in here.
> > >> + *
> > >> + * Copyright (c) 2010, Applied Micro Circuits Corporation
> > >> + * Author: Tirumala R Marri ,
> > >> + * Christian Lamparter ,
> > >> + * Chris Blake 
> > >> + */
> > >> +
> > >> +#include 
> > >> +#include 
> > >> +#include 
> > >> +#include 
> > >> +
> > >> +/ {
> > >> +#address-cells = <2>;
> > >> +#size-cells = <1>;
> > >> +dcr-parent = <&{/cpus/cpu@0}>;
> > >> +
> > >> +aliases {
> > >> +ethernet0 =  /* needed for BSP u-boot */
> > >> +};
> > >> +
> > >> +cpus {
> > >> +#address-cells = <1>;
> > >> +#size-cells = <0>;
> > >> +
> > >> +CPU0: cpu@0 {
> > >> +device_type = "cpu";
> > >> +model = "PowerPC,apm82181";
> > >
> > > This doesn't match the existing bluestone dts file.
> > >
> > > Please separate any restructuring from changes.
> >
> >
> > "I see" (I'm including your comment of the dt-binding as well).
> >
> > I'm getting the vibe that I'd better not touch that bluestone.dts.
>
> I don't know about that.

k, understood.

>
> > And honestly, looking at the series and patches that the APM-engineers
> > posted back in the day, I can see why this well is so poisoned... and
> > stuff like SATA/AHBDMA/USB/GPIO/CPM/... was missing.
> >
> > As for the devices. In the spirit of Arnd Bergmann's post of
> > 
> >
> > |It would be nice to move over the bluestone .dts to the apm82181.dtsi 
> > file
> > |when that gets added, if only to ensure they use the same description for 
> > each
> > |node, but that shouldn't stop the addition of the new file if that is 
> > needed for
> > |distros to make use of a popular device.
> > |I see a couple of additional files in openwrt.
> >
> > I mean I don't have the bluestone dev board, just the consumer devices.
>
> This stuff is old enough, I'd guess no one cares about a dev board.
> But we should figure that out and document that with any changes.
>
> > Would it be possible to support those? I can start from a "skeleton" 
> > apm82181.dtsi
> > This would just include CPU, Memory (SD-RAM+L2C+OCM), UIC 
> > (Interrupt-Controller),
> > the PLB+OBP+EBC Busses and UART. Just enough to make a board "boot from 
> > ram".
>
> This skeleton would be chunks moved over or duplicated? I don't think
> we want 2 of the same thing.
My idea was to copy the working apm82181.dtsi we have in OpenWrt
and strip the nodes we added for SATA, USB, GPIO and the like,
so the remaining nodes would be very close to what bluestone.dts had.
The main differences would be:
- It's a bit smaller since I made a separate patch for the NOR/NAND on the EBC.
Reason being that the SoC uses glue-logic for mapping NOR/NAND (and other
external peripherals like the GPIOs on the WD) into the memory and I thought
this needed some explanation as to why this weird thing works.

- it would already use the dt-bindings/interrupt-controller/irq.h macros
for LEVEL/EDGE cell values

- it contains valuable comments about the uboot. Because ethernet0 alias
  and the 

Re: [PATCH v2] i2c: cpm: Fix i2c_ram structure

2020-09-27 Thread Wolfram Sang
On Wed, Sep 23, 2020 at 04:08:40PM +0200, nico.vi...@gmail.com wrote:
> From: Nicolas VINCENT 
> 
> The i2c_ram structure is missing the sdmatmp field mentioned in
> datasheet for MPC8272 at paragraph 36.5. With this field missing, the
> hardware would write past the allocated memory done through
> cpm_muram_alloc for the i2c_ram structure and land in memory allocated
> for the buffers descriptors corrupting the cbd_bufaddr field. Since this
> field is only set during setup(), the first i2c transaction would work
> and the following would send data read from an arbitrary memory
> location.
> 
> Signed-off-by: Nicolas VINCENT 

Fixes tag added and applied to for-current, thanks everyone!





[PATCH v1 30/30] powerpc/vdso: Cleanup vdso.h

2020-09-27 Thread Christophe Leroy
Rename the guard define to _ASM_POWERPC_VDSO_H

And remove useless #ifdef __KERNEL__

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/vdso.h | 10 +++---
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h
index 2448419cb3e5..8542e9bbeead 100644
--- a/arch/powerpc/include/asm/vdso.h
+++ b/arch/powerpc/include/asm/vdso.h
@@ -1,8 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __PPC64_VDSO_H__
-#define __PPC64_VDSO_H__
-
-#ifdef __KERNEL__
+#ifndef _ASM_POWERPC_VDSO_H
+#define _ASM_POWERPC_VDSO_H
 
 /* Default map addresses for 32bit vDSO */
 #define VDSO32_MBASE   0x10
@@ -54,6 +52,4 @@ int vdso_getcpu_init(void);
 
 #endif /* __ASSEMBLY__ */
 
-#endif /* __KERNEL__ */
-
-#endif /* __PPC64_VDSO_H__ */
+#endif /* _ASM_POWERPC_VDSO_H */
-- 
2.25.0



[PATCH v1 29/30] powerpc/vdso: Remove VDSO32_LBASE and VDSO64_LBASE

2020-09-27 Thread Christophe Leroy
VDSO32_LBASE and VDSO64_LBASE are 0. Remove them to simplify code.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/vdso.h | 4 
 arch/powerpc/kernel/vdso32/vdso32.lds.S | 2 +-
 arch/powerpc/kernel/vdso64/vdso64.lds.S | 2 +-
 3 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h
index a97384909fe5..2448419cb3e5 100644
--- a/arch/powerpc/include/asm/vdso.h
+++ b/arch/powerpc/include/asm/vdso.h
@@ -4,10 +4,6 @@
 
 #ifdef __KERNEL__
 
-/* Default link addresses for the vDSOs */
-#define VDSO32_LBASE   0x0
-#define VDSO64_LBASE   0x0
-
 /* Default map addresses for 32bit vDSO */
 #define VDSO32_MBASE   0x10
 
diff --git a/arch/powerpc/kernel/vdso32/vdso32.lds.S 
b/arch/powerpc/kernel/vdso32/vdso32.lds.S
index 7b476a6f2dba..2636b359c9ce 100644
--- a/arch/powerpc/kernel/vdso32/vdso32.lds.S
+++ b/arch/powerpc/kernel/vdso32/vdso32.lds.S
@@ -17,7 +17,7 @@ ENTRY(_start)
 SECTIONS
 {
PROVIDE(_vdso_datapage = . - PAGE_SIZE);
-   . = VDSO32_LBASE + SIZEOF_HEADERS;
+   . = SIZEOF_HEADERS;
 
.hash   : { *(.hash) }  :text
.gnu.hash   : { *(.gnu.hash) }
diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S 
b/arch/powerpc/kernel/vdso64/vdso64.lds.S
index a543826cd857..f256525e633f 100644
--- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
+++ b/arch/powerpc/kernel/vdso64/vdso64.lds.S
@@ -17,7 +17,7 @@ ENTRY(_start)
 SECTIONS
 {
PROVIDE(_vdso_datapage = . - PAGE_SIZE);
-   . = VDSO64_LBASE + SIZEOF_HEADERS;
+   . = SIZEOF_HEADERS;
 
.hash   : { *(.hash) }  :text
.gnu.hash   : { *(.gnu.hash) }
-- 
2.25.0



[PATCH v1 28/30] powerpc/vdso: Remove DBG()

2020-09-27 Thread Christophe Leroy
DBG() is not used anymore. Remove it.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 8 
 1 file changed, 8 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index e5a9b60274ba..4e3858bb2b24 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -33,14 +33,6 @@
 #include 
 #include 
 
-#undef DEBUG
-
-#ifdef DEBUG
-#define DBG(fmt...) printk(fmt)
-#else
-#define DBG(fmt...)
-#endif
-
 /* The alignment of the vDSO */
 #define VDSO_ALIGNMENT (1 << 16)
 
-- 
2.25.0



[PATCH v1 27/30] powerpc/vdso: Remove vdso_ready

2020-09-27 Thread Christophe Leroy
There is no way to get out of vdso_init() prematurely anymore.

Remove vdso_ready as it will always be 1.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 6 --
 1 file changed, 6 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 14fbcc76a629..e5a9b60274ba 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -47,8 +47,6 @@
 extern char vdso32_start, vdso32_end;
 extern char vdso64_start, vdso64_end;
 
-static int vdso_ready;
-
 /*
  * The vdso data page (aka. systemcfg for old ppc64 fans) is here.
  * Once the early boot kernel code no longer needs to muck around
@@ -171,9 +169,6 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
 
mm->context.vdso = NULL;
 
-   if (!vdso_ready)
-   return 0;
-
if (mmap_write_lock_killable(mm))
return -EINTR;
 
@@ -312,7 +307,6 @@ static int __init vdso_init(void)
	vdso64_spec.pages = vdso_setup_pages(&vdso64_start, &vdso64_end);
 
smp_wmb();
-   vdso_ready = 1;
 
return 0;
 }
-- 
2.25.0



[PATCH v1 26/30] powerpc/vdso: Remove vdso_setup()

2020-09-27 Thread Christophe Leroy
vdso_fixup_features() cannot fail anymore and that's
the only function called by vdso_setup().

vdso_setup() has become trivial and can be removed.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 21 ++---
 1 file changed, 2 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 0cb320b72923..14fbcc76a629 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -192,7 +192,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
do_##type##_fixups((value), __start, __end);
\
 } while (0)
 
-static int __init vdso_fixup_features(void)
+static void __init vdso_fixup_features(void)
 {
 #ifdef CONFIG_PPC64
VDSO_DO_FIXUPS(feature, cur_cpu_spec->cpu_features, 64, ftr_fixup);
@@ -209,16 +209,6 @@ static int __init vdso_fixup_features(void)
 #endif /* CONFIG_PPC64 */
VDSO_DO_FIXUPS(lwsync, cur_cpu_spec->cpu_features, 32, lwsync_fixup);
 #endif
-
-   return 0;
-}
-
-static __init int vdso_setup(void)
-{
-   if (vdso_fixup_features())
-   return -1;
-
-   return 0;
 }
 
 /*
@@ -313,14 +303,7 @@ static int __init vdso_init(void)
 
vdso_setup_syscall_map();
 
-   /*
-* Initialize the vDSO images in memory, that is do necessary
-* fixups of vDSO symbols, locate trampolines, etc...
-*/
-   if (vdso_setup()) {
-   printk(KERN_ERR "vDSO setup failure, not enabled !\n");
-   return 0;
-   }
+   vdso_fixup_features();
 
if (IS_ENABLED(CONFIG_VDSO32))
	vdso32_spec.pages = vdso_setup_pages(&vdso32_start, &vdso32_end);
-- 
2.25.0



[PATCH v1 24/30] powerpc/vdso: Remove symbol section information in struct lib32/64_elfinfo

2020-09-27 Thread Christophe Leroy
The members related to the symbol section in struct lib32_elfinfo and
struct lib64_elfinfo are not used anymore; remove them.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 90 --
 1 file changed, 90 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index fa1cbddfb978..f7b477da0b8a 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
struct vdso_arch_data *vdso_data = &vdso_data_store.data;
 struct lib32_elfinfo
 {
Elf32_Ehdr  *hdr;   /* ptr to ELF */
-   Elf32_Sym   *dynsym;/* ptr to .dynsym section */
-   unsigned long   dynsymsize; /* size of .dynsym section */
-   char*dynstr;/* ptr to .dynstr section */
 };
 
 struct lib64_elfinfo
 {
Elf64_Ehdr  *hdr;
-   Elf64_Sym   *dynsym;
-   unsigned long   dynsymsize;
-   char*dynstr;
 };
 
 static int vdso_mremap(const struct vm_special_mapping *sm, struct 
vm_area_struct *new_vma,
@@ -208,59 +202,6 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
return rc;
 }
 
-#ifdef CONFIG_VDSO32
-static void * __init find_section32(Elf32_Ehdr *ehdr, const char *secname,
- unsigned long *size)
-{
-   Elf32_Shdr *sechdrs;
-   unsigned int i;
-   char *secnames;
-
-   /* Grab section headers and strings so we can tell who is who */
-   sechdrs = (void *)ehdr + ehdr->e_shoff;
-   secnames = (void *)ehdr + sechdrs[ehdr->e_shstrndx].sh_offset;
-
-   /* Find the section they want */
-   for (i = 1; i < ehdr->e_shnum; i++) {
-   if (strcmp(secnames+sechdrs[i].sh_name, secname) == 0) {
-   if (size)
-   *size = sechdrs[i].sh_size;
-   return (void *)ehdr + sechdrs[i].sh_offset;
-   }
-   }
-   *size = 0;
-   return NULL;
-}
-#endif /* CONFIG_VDSO32 */
-
-
-#ifdef CONFIG_PPC64
-
-static void * __init find_section64(Elf64_Ehdr *ehdr, const char *secname,
- unsigned long *size)
-{
-   Elf64_Shdr *sechdrs;
-   unsigned int i;
-   char *secnames;
-
-   /* Grab section headers and strings so we can tell who is who */
-   sechdrs = (void *)ehdr + ehdr->e_shoff;
-   secnames = (void *)ehdr + sechdrs[ehdr->e_shstrndx].sh_offset;
-
-   /* Find the section they want */
-   for (i = 1; i < ehdr->e_shnum; i++) {
-   if (strcmp(secnames+sechdrs[i].sh_name, secname) == 0) {
-   if (size)
-   *size = sechdrs[i].sh_size;
-   return (void *)ehdr + sechdrs[i].sh_offset;
-   }
-   }
-   if (size)
-   *size = 0;
-   return NULL;
-}
-#endif /* CONFIG_PPC64 */
-
 #define VDSO_DO_FIXUPS(type, value, bits, sec) do {
\
	void *__start = (void *)VDSO##bits##_SYMBOL(&vdso##bits##_start, sec##_start);	\
	void *__end = (void *)VDSO##bits##_SYMBOL(&vdso##bits##_start, sec##_end);	\
@@ -268,34 +209,6 @@ static void * __init find_section64(Elf64_Ehdr *ehdr, 
const char *secname,
do_##type##_fixups((value), __start, __end);
\
 } while (0)
 
-static __init int vdso_do_find_sections(struct lib32_elfinfo *v32,
-   struct lib64_elfinfo *v64)
-{
-   /*
-* Locate symbol tables & text section
-*/
-
-#ifdef CONFIG_VDSO32
-   v32->dynsym = find_section32(v32->hdr, ".dynsym", &v32->dynsymsize);
-   v32->dynstr = find_section32(v32->hdr, ".dynstr", NULL);
-   if (v32->dynsym == NULL || v32->dynstr == NULL) {
-   printk(KERN_ERR "vDSO32: required symbol section not found\n");
-   return -1;
-   }
-#endif
-
-#ifdef CONFIG_PPC64
-   v64->dynsym = find_section64(v64->hdr, ".dynsym", &v64->dynsymsize);
-   v64->dynstr = find_section64(v64->hdr, ".dynstr", NULL);
-   if (v64->dynsym == NULL || v64->dynstr == NULL) {
-   printk(KERN_ERR "vDSO64: required symbol section not found\n");
-   return -1;
-   }
-#endif /* CONFIG_PPC64 */
-
-   return 0;
-}
-
 static __init int vdso_fixup_features(struct lib32_elfinfo *v32,
  struct lib64_elfinfo *v64)
 {
@@ -325,9 +238,6 @@ static __init int vdso_setup(void)
 
v32.hdr = vdso32_kbase;
v64.hdr = vdso64_kbase;
-   if (vdso_do_find_sections(&v32, &v64))
-   return -1;
-
	if (vdso_fixup_features(&v32, &v64))
return -1;
 
-- 
2.25.0



[PATCH v1 25/30] powerpc/vdso: Remove lib32_elfinfo and lib64_elfinfo

2020-09-27 Thread Christophe Leroy
lib32_elfinfo and lib64_elfinfo are not used anymore, remove them.

Also remove vdso32_kbase and vdso64_kbase while removing the
last use.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 29 ++---
 1 file changed, 2 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index f7b477da0b8a..0cb320b72923 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -44,11 +44,8 @@
 /* The alignment of the vDSO */
 #define VDSO_ALIGNMENT (1 << 16)
 
-static void *vdso32_kbase;
-
 extern char vdso32_start, vdso32_end;
 extern char vdso64_start, vdso64_end;
-static void *vdso64_kbase = &vdso64_start;
 
 static int vdso_ready;
 
@@ -63,20 +60,6 @@ static union {
 } vdso_data_store __page_aligned_data;
struct vdso_arch_data *vdso_data = &vdso_data_store.data;
 
-/*
- * Some infos carried around for each of them during parsing at
- * boot time.
- */
-struct lib32_elfinfo
-{
-   Elf32_Ehdr  *hdr;   /* ptr to ELF */
-};
-
-struct lib64_elfinfo
-{
-   Elf64_Ehdr  *hdr;
-};
-
 static int vdso_mremap(const struct vm_special_mapping *sm, struct 
vm_area_struct *new_vma,
   unsigned long text_size)
 {
@@ -209,8 +192,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
do_##type##_fixups((value), __start, __end);
\
 } while (0)
 
-static __init int vdso_fixup_features(struct lib32_elfinfo *v32,
- struct lib64_elfinfo *v64)
+static int __init vdso_fixup_features(void)
 {
 #ifdef CONFIG_PPC64
VDSO_DO_FIXUPS(feature, cur_cpu_spec->cpu_features, 64, ftr_fixup);
@@ -233,12 +215,7 @@ static __init int vdso_fixup_features(struct lib32_elfinfo 
*v32,
 
 static __init int vdso_setup(void)
 {
-   struct lib32_elfinfov32;
-   struct lib64_elfinfov64;
-
-   v32.hdr = vdso32_kbase;
-   v64.hdr = vdso64_kbase;
-   if (vdso_fixup_features(&v32, &v64))
+   if (vdso_fixup_features())
return -1;
 
return 0;
@@ -334,8 +311,6 @@ static int __init vdso_init(void)
vdso_data->icache_log_block_size = ppc64_caches.l1i.log_block_size;
 #endif /* CONFIG_PPC64 */
 
-   vdso32_kbase = &vdso32_start;
-
vdso_setup_syscall_map();
 
/*
-- 
2.25.0



[PATCH v1 23/30] powerpc/vdso: Remove unused text member in struct lib32/64_elfinfo

2020-09-27 Thread Christophe Leroy
The text member in struct lib32_elfinfo and struct lib64_elfinfo
is not used, remove it.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 16 
 1 file changed, 16 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 5e4e3546f034..fa1cbddfb978 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -73,7 +73,6 @@ struct lib32_elfinfo
Elf32_Sym   *dynsym;/* ptr to .dynsym section */
unsigned long   dynsymsize; /* size of .dynsym section */
char*dynstr;/* ptr to .dynstr section */
-   unsigned long   text;   /* offset of .text section in .so */
 };
 
 struct lib64_elfinfo
@@ -82,7 +81,6 @@ struct lib64_elfinfo
Elf64_Sym   *dynsym;
unsigned long   dynsymsize;
char*dynstr;
-   unsigned long   text;
 };
 
 static int vdso_mremap(const struct vm_special_mapping *sm, struct 
vm_area_struct *new_vma,
@@ -273,8 +271,6 @@ static void * __init find_section64(Elf64_Ehdr *ehdr, const 
char *secname,
 static __init int vdso_do_find_sections(struct lib32_elfinfo *v32,
struct lib64_elfinfo *v64)
 {
-   void *sect;
-
/*
 * Locate symbol tables & text section
 */
@@ -286,12 +282,6 @@ static __init int vdso_do_find_sections(struct 
lib32_elfinfo *v32,
printk(KERN_ERR "vDSO32: required symbol section not found\n");
return -1;
}
-   sect = find_section32(v32->hdr, ".text", NULL);
-   if (sect == NULL) {
-   printk(KERN_ERR "vDSO32: the .text section was not found\n");
-   return -1;
-   }
-   v32->text = sect - vdso32_kbase;
 #endif
 
 #ifdef CONFIG_PPC64
@@ -301,12 +291,6 @@ static __init int vdso_do_find_sections(struct 
lib32_elfinfo *v32,
printk(KERN_ERR "vDSO64: required symbol section not found\n");
return -1;
}
-   sect = find_section64(v64->hdr, ".text", NULL);
-   if (sect == NULL) {
-   printk(KERN_ERR "vDSO64: the .text section was not found\n");
-   return -1;
-   }
-   v64->text = sect - vdso64_kbase;
 #endif /* CONFIG_PPC64 */
 
return 0;
-- 
2.25.0



[PATCH v1 20/30] powerpc/vdso: Remove __kernel_datapage_offset

2020-09-27 Thread Christophe Leroy
__kernel_datapage_offset is not used anymore, remove it.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c  | 39 -
 arch/powerpc/kernel/vdso32/datapage.S   |  3 --
 arch/powerpc/kernel/vdso32/vdso32.lds.S |  5 
 arch/powerpc/kernel/vdso64/datapage.S   |  3 --
 arch/powerpc/kernel/vdso64/vdso64.lds.S |  5 
 5 files changed, 55 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index e732776bac0a..611977010e2d 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -494,42 +494,6 @@ static __init void vdso_setup_trampolines(struct 
lib32_elfinfo *v32,
vdso32_rt_sigtramp = find_function32(v32, "__kernel_sigtramp_rt32");
 }
 
-static __init int vdso_fixup_datapage(struct lib32_elfinfo *v32,
-  struct lib64_elfinfo *v64)
-{
-#ifdef CONFIG_VDSO32
-   Elf32_Sym *sym32;
-#endif
-#ifdef CONFIG_PPC64
-   Elf64_Sym *sym64;
-
-   sym64 = find_symbol64(v64, "__kernel_datapage_offset");
-   if (sym64 == NULL) {
-   printk(KERN_ERR "vDSO64: Can't find symbol "
-  "__kernel_datapage_offset !\n");
-   return -1;
-   }
-   *((int *)(vdso64_kbase + sym64->st_value - VDSO64_LBASE)) =
-   -PAGE_SIZE -
-   (sym64->st_value - VDSO64_LBASE);
-#endif /* CONFIG_PPC64 */
-
-#ifdef CONFIG_VDSO32
-   sym32 = find_symbol32(v32, "__kernel_datapage_offset");
-   if (sym32 == NULL) {
-   printk(KERN_ERR "vDSO32: Can't find symbol "
-  "__kernel_datapage_offset !\n");
-   return -1;
-   }
-   *((int *)(vdso32_kbase + (sym32->st_value - VDSO32_LBASE))) =
-   -PAGE_SIZE -
-   (sym32->st_value - VDSO32_LBASE);
-#endif
-
-   return 0;
-}
-
-
 static __init int vdso_fixup_features(struct lib32_elfinfo *v32,
  struct lib64_elfinfo *v64)
 {
@@ -595,9 +559,6 @@ static __init int vdso_setup(void)
	if (vdso_do_find_sections(&v32, &v64))
return -1;
 
-   if (vdso_fixup_datapage(, ))
-   return -1;
-
	if (vdso_fixup_features(&v32, &v64))
return -1;
 
diff --git a/arch/powerpc/kernel/vdso32/datapage.S 
b/arch/powerpc/kernel/vdso32/datapage.S
index 91a153b34714..0513a2eabec8 100644
--- a/arch/powerpc/kernel/vdso32/datapage.S
+++ b/arch/powerpc/kernel/vdso32/datapage.S
@@ -13,9 +13,6 @@
 #include 
 
.text
-   .global __kernel_datapage_offset;
-__kernel_datapage_offset:
-   .long   0
 
 /*
  * void *__kernel_get_syscall_map(unsigned int *syscall_count) ;
diff --git a/arch/powerpc/kernel/vdso32/vdso32.lds.S 
b/arch/powerpc/kernel/vdso32/vdso32.lds.S
index c70f5dac8c98..7b476a6f2dba 100644
--- a/arch/powerpc/kernel/vdso32/vdso32.lds.S
+++ b/arch/powerpc/kernel/vdso32/vdso32.lds.S
@@ -148,11 +148,6 @@ VERSION
 {
VDSO_VERSION_STRING {
global:
-   /*
-* Has to be there for the kernel to find
-*/
-   __kernel_datapage_offset;
-
__kernel_get_syscall_map;
 #ifndef CONFIG_PPC_BOOK3S_601
__kernel_gettimeofday;
diff --git a/arch/powerpc/kernel/vdso64/datapage.S 
b/arch/powerpc/kernel/vdso64/datapage.S
index 941b735df069..00760dc69d68 100644
--- a/arch/powerpc/kernel/vdso64/datapage.S
+++ b/arch/powerpc/kernel/vdso64/datapage.S
@@ -13,9 +13,6 @@
 #include 
 
.text
-	.global	__kernel_datapage_offset;
-__kernel_datapage_offset:
-   .long   0
 
 /*
  * void *__kernel_get_syscall_map(unsigned int *syscall_count) ;
diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S 
b/arch/powerpc/kernel/vdso64/vdso64.lds.S
index a049000eacfe..a543826cd857 100644
--- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
+++ b/arch/powerpc/kernel/vdso64/vdso64.lds.S
@@ -148,11 +148,6 @@ VERSION
 {
VDSO_VERSION_STRING {
global:
-   /*
-* Has to be there for the kernel to find
-*/
-   __kernel_datapage_offset;
-
__kernel_get_syscall_map;
__kernel_gettimeofday;
__kernel_clock_gettime;
-- 
2.25.0



[PATCH v1 22/30] powerpc/vdso: Remove vdso_patches[] and associated functions

2020-09-27 Thread Christophe Leroy
vdso_patches[] is now empty; remove it and all the functions
that depend on it.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 161 -
 1 file changed, 161 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index ec0f1aae0cad..5e4e3546f034 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -41,9 +41,6 @@
 #define DBG(fmt...)
 #endif
 
-/* Max supported size for symbol names */
-#define MAX_SYMNAME	64
-
 /* The alignment of the vDSO */
 #define VDSO_ALIGNMENT (1 << 16)
 
@@ -66,22 +63,6 @@ static union {
 } vdso_data_store __page_aligned_data;
struct vdso_arch_data *vdso_data = &vdso_data_store.data;
 
-/* Format of the patch table */
-struct vdso_patch_def
-{
-   unsigned long   ftr_mask, ftr_value;
-   const char  *gen_name;
-   const char  *fix_name;
-};
-
-/* Table of functions to patch based on the CPU type/revision
- *
- * Currently, we only change sync_dicache to do nothing on processors
- * with a coherent icache
- */
-static struct vdso_patch_def vdso_patches[] = {
-};
-
 /*
  * Some infos carried around for each of them during parsing at
  * boot time.
@@ -252,62 +233,6 @@ static void * __init find_section32(Elf32_Ehdr *ehdr, 
const char *secname,
*size = 0;
return NULL;
 }
-
-static Elf32_Sym * __init find_symbol32(struct lib32_elfinfo *lib,
-   const char *symname)
-{
-   unsigned int i;
-   char name[MAX_SYMNAME], *c;
-
-   for (i = 0; i < (lib->dynsymsize / sizeof(Elf32_Sym)); i++) {
-   if (lib->dynsym[i].st_name == 0)
-   continue;
-   strlcpy(name, lib->dynstr + lib->dynsym[i].st_name,
-   MAX_SYMNAME);
-   c = strchr(name, '@');
-   if (c)
-   *c = 0;
-   if (strcmp(symname, name) == 0)
-   return &lib->dynsym[i];
-   }
-   return NULL;
-}
-
-static int __init vdso_do_func_patch32(struct lib32_elfinfo *v32,
-  struct lib64_elfinfo *v64,
-  const char *orig, const char *fix)
-{
-   Elf32_Sym *sym32_gen, *sym32_fix;
-
-   sym32_gen = find_symbol32(v32, orig);
-   if (sym32_gen == NULL) {
-   printk(KERN_ERR "vDSO32: Can't find symbol %s !\n", orig);
-   return -1;
-   }
-   if (fix == NULL) {
-   sym32_gen->st_name = 0;
-   return 0;
-   }
-   sym32_fix = find_symbol32(v32, fix);
-   if (sym32_fix == NULL) {
-   printk(KERN_ERR "vDSO32: Can't find symbol %s !\n", fix);
-   return -1;
-   }
-   sym32_gen->st_value = sym32_fix->st_value;
-   sym32_gen->st_size = sym32_fix->st_size;
-   sym32_gen->st_info = sym32_fix->st_info;
-   sym32_gen->st_other = sym32_fix->st_other;
-   sym32_gen->st_shndx = sym32_fix->st_shndx;
-
-   return 0;
-}
-#else /* !CONFIG_VDSO32 */
-static int __init vdso_do_func_patch32(struct lib32_elfinfo *v32,
-  struct lib64_elfinfo *v64,
-  const char *orig, const char *fix)
-{
-   return 0;
-}
 #endif /* CONFIG_VDSO32 */
 
 
@@ -336,56 +261,6 @@ static void * __init find_section64(Elf64_Ehdr *ehdr, 
const char *secname,
*size = 0;
return NULL;
 }
-
-static Elf64_Sym * __init find_symbol64(struct lib64_elfinfo *lib,
-   const char *symname)
-{
-   unsigned int i;
-   char name[MAX_SYMNAME], *c;
-
-   for (i = 0; i < (lib->dynsymsize / sizeof(Elf64_Sym)); i++) {
-   if (lib->dynsym[i].st_name == 0)
-   continue;
-   strlcpy(name, lib->dynstr + lib->dynsym[i].st_name,
-   MAX_SYMNAME);
-   c = strchr(name, '@');
-   if (c)
-   *c = 0;
-   if (strcmp(symname, name) == 0)
-   return &lib->dynsym[i];
-   }
-   return NULL;
-}
-
-static int __init vdso_do_func_patch64(struct lib32_elfinfo *v32,
-  struct lib64_elfinfo *v64,
-  const char *orig, const char *fix)
-{
-   Elf64_Sym *sym64_gen, *sym64_fix;
-
-   sym64_gen = find_symbol64(v64, orig);
-   if (sym64_gen == NULL) {
-   printk(KERN_ERR "vDSO64: Can't find symbol %s !\n", orig);
-   return -1;
-   }
-   if (fix == NULL) {
-   sym64_gen->st_name = 0;
-   return 0;
-   }
-   sym64_fix = find_symbol64(v64, fix);
-   if (sym64_fix == NULL) {
-   printk(KERN_ERR "vDSO64: Can't find symbol %s !\n", fix);
-   return -1;
-   }
-   sym64_gen->st_value = sym64_fix->st_value;
-   sym64_gen->st_size = sym64_fix->st_size;
-  

[PATCH v1 21/30] powerpc/vdso: Remove runtime generated sigtramp offsets

2020-09-27 Thread Christophe Leroy
Signal trampoline offsets are now generated at build time.

Runtime-generated offsets are no longer used; remove them.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/vdso.h |  5 ---
 arch/powerpc/kernel/vdso.c  | 59 -
 2 files changed, 64 deletions(-)

diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h
index f5257b7f17d0..a97384909fe5 100644
--- a/arch/powerpc/include/asm/vdso.h
+++ b/arch/powerpc/include/asm/vdso.h
@@ -27,11 +27,6 @@
 
 #define VDSO32_SYMBOL(base, name) ((unsigned long)(base) + (vdso32_offset_##name))
 
-/* Offsets relative to thread->vdso_base */
-extern unsigned long vdso64_rt_sigtramp;
-extern unsigned long vdso32_sigtramp;
-extern unsigned long vdso32_rt_sigtramp;
-
 int vdso_getcpu_init(void);
 
 #else /* __ASSEMBLY__ */
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 611977010e2d..ec0f1aae0cad 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -48,15 +48,10 @@
 #define VDSO_ALIGNMENT (1 << 16)
 
 static void *vdso32_kbase;
-unsigned long vdso32_sigtramp;
-unsigned long vdso32_rt_sigtramp;
 
 extern char vdso32_start, vdso32_end;
 extern char vdso64_start, vdso64_end;
static void *vdso64_kbase = &vdso64_start;
-#ifdef CONFIG_PPC64
-unsigned long vdso64_rt_sigtramp;
-#endif /* CONFIG_PPC64 */
 
 static int vdso_ready;
 
@@ -278,22 +273,6 @@ static Elf32_Sym * __init find_symbol32(struct 
lib32_elfinfo *lib,
return NULL;
 }
 
-/* Note that we assume the section is .text and the symbol is relative to
- * the library base
- */
-static unsigned long __init find_function32(struct lib32_elfinfo *lib,
-   const char *symname)
-{
-   Elf32_Sym *sym = find_symbol32(lib, symname);
-
-   if (sym == NULL) {
-   printk(KERN_WARNING "vDSO32: function %s not found !\n",
-  symname);
-   return 0;
-   }
-   return sym->st_value - VDSO32_LBASE;
-}
-
 static int __init vdso_do_func_patch32(struct lib32_elfinfo *v32,
   struct lib64_elfinfo *v64,
   const char *orig, const char *fix)
@@ -323,12 +302,6 @@ static int __init vdso_do_func_patch32(struct 
lib32_elfinfo *v32,
return 0;
 }
 #else /* !CONFIG_VDSO32 */
-static unsigned long __init find_function32(struct lib32_elfinfo *lib,
-   const char *symname)
-{
-   return 0;
-}
-
 static int __init vdso_do_func_patch32(struct lib32_elfinfo *v32,
   struct lib64_elfinfo *v64,
   const char *orig, const char *fix)
@@ -384,22 +357,6 @@ static Elf64_Sym * __init find_symbol64(struct 
lib64_elfinfo *lib,
return NULL;
 }
 
-/* Note that we assume the section is .text and the symbol is relative to
- * the library base
- */
-static unsigned long __init find_function64(struct lib64_elfinfo *lib,
-   const char *symname)
-{
-   Elf64_Sym *sym = find_symbol64(lib, symname);
-
-   if (sym == NULL) {
-   printk(KERN_WARNING "vDSO64: function %s not found !\n",
-  symname);
-   return 0;
-   }
-   return sym->st_value - VDSO64_LBASE;
-}
-
 static int __init vdso_do_func_patch64(struct lib32_elfinfo *v32,
   struct lib64_elfinfo *v64,
   const char *orig, const char *fix)
@@ -480,20 +437,6 @@ static __init int vdso_do_find_sections(struct 
lib32_elfinfo *v32,
return 0;
 }
 
-static __init void vdso_setup_trampolines(struct lib32_elfinfo *v32,
- struct lib64_elfinfo *v64)
-{
-   /*
-* Find signal trampolines
-*/
-
-#ifdef CONFIG_PPC64
-   vdso64_rt_sigtramp = find_function64(v64, "__kernel_sigtramp_rt64");
-#endif
-   vdso32_sigtramp = find_function32(v32, "__kernel_sigtramp32");
-   vdso32_rt_sigtramp = find_function32(v32, "__kernel_sigtramp_rt32");
-}
-
 static __init int vdso_fixup_features(struct lib32_elfinfo *v32,
  struct lib64_elfinfo *v64)
 {
@@ -565,8 +508,6 @@ static __init int vdso_setup(void)
	if (vdso_fixup_alt_funcs(&v32, &v64))
return -1;
 
-   vdso_setup_trampolines(&v32, &v64);
-
return 0;
 }
 
-- 
2.25.0



[PATCH v1 18/30] powerpc/vdso: Merge __kernel_sync_dicache_p5() into __kernel_sync_dicache()

2020-09-27 Thread Christophe Leroy
__kernel_sync_dicache_p5() is an alternative to
__kernel_sync_dicache() when the CPU has CPU_FTR_COHERENT_ICACHE.

Remove this alternative function and merge
__kernel_sync_dicache_p5() into __kernel_sync_dicache() using
a standard CPU feature fixup.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c  |  4 
 arch/powerpc/kernel/vdso32/cacheflush.S | 17 ++---
 arch/powerpc/kernel/vdso32/vdso32.lds.S |  1 -
 arch/powerpc/kernel/vdso64/cacheflush.S | 16 ++--
 arch/powerpc/kernel/vdso64/vdso64.lds.S |  1 -
 5 files changed, 12 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index ba2b935a67f6..3a4fbcc0d1be 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -87,10 +87,6 @@ struct vdso_patch_def
  * with a coherent icache
  */
 static struct vdso_patch_def vdso_patches[] = {
-   {
-   CPU_FTR_COHERENT_ICACHE, CPU_FTR_COHERENT_ICACHE,
-   "__kernel_sync_dicache", "__kernel_sync_dicache_p5"
-   },
 };
 
 /*
diff --git a/arch/powerpc/kernel/vdso32/cacheflush.S 
b/arch/powerpc/kernel/vdso32/cacheflush.S
index 017843bf5382..f340e82d1981 100644
--- a/arch/powerpc/kernel/vdso32/cacheflush.S
+++ b/arch/powerpc/kernel/vdso32/cacheflush.S
@@ -24,11 +24,15 @@
  */
 V_FUNCTION_BEGIN(__kernel_sync_dicache)
   .cfi_startproc
+BEGIN_FTR_SECTION
+   b   3f
+END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
 #ifdef CONFIG_PPC64
	mflr	r12
  .cfi_register lr,r12
	get_datapage	r10
	mtlr	r12
+  .cfi_restore lr
 #endif
 
 #ifdef CONFIG_PPC64
@@ -84,20 +88,11 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
isync
li  r3,0
blr
-  .cfi_endproc
-V_FUNCTION_END(__kernel_sync_dicache)
-
-
-/*
- * POWER5 version of __kernel_sync_dicache
- */
-V_FUNCTION_BEGIN(__kernel_sync_dicache_p5)
-  .cfi_startproc
+3:
crclr   cr0*4+so
sync
isync
li  r3,0
blr
   .cfi_endproc
-V_FUNCTION_END(__kernel_sync_dicache_p5)
-
+V_FUNCTION_END(__kernel_sync_dicache)
diff --git a/arch/powerpc/kernel/vdso32/vdso32.lds.S 
b/arch/powerpc/kernel/vdso32/vdso32.lds.S
index dd9f262e07c6..c70f5dac8c98 100644
--- a/arch/powerpc/kernel/vdso32/vdso32.lds.S
+++ b/arch/powerpc/kernel/vdso32/vdso32.lds.S
@@ -163,7 +163,6 @@ VERSION
__kernel_get_tbfreq;
 #endif
__kernel_sync_dicache;
-   __kernel_sync_dicache_p5;
__kernel_sigtramp32;
__kernel_sigtramp_rt32;
 #if defined(CONFIG_PPC64) || !defined(CONFIG_SMP)
diff --git a/arch/powerpc/kernel/vdso64/cacheflush.S 
b/arch/powerpc/kernel/vdso64/cacheflush.S
index 61985de5758f..76c3c8cf8ece 100644
--- a/arch/powerpc/kernel/vdso64/cacheflush.S
+++ b/arch/powerpc/kernel/vdso64/cacheflush.S
@@ -23,10 +23,14 @@
  */
 V_FUNCTION_BEGIN(__kernel_sync_dicache)
   .cfi_startproc
+BEGIN_FTR_SECTION
+   b   3f
+END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
	mflr	r12
  .cfi_register lr,r12
	get_datapage	r10
	mtlr	r12
+  .cfi_restore lr
 
lwz r7,CFG_DCACHE_BLOCKSZ(r10)
	addi	r5,r7,-1
@@ -61,19 +65,11 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
isync
li  r3,0
blr
-  .cfi_endproc
-V_FUNCTION_END(__kernel_sync_dicache)
-
-
-/*
- * POWER5 version of __kernel_sync_dicache
- */
-V_FUNCTION_BEGIN(__kernel_sync_dicache_p5)
-  .cfi_startproc
+3:
crclr   cr0*4+so
sync
isync
li  r3,0
blr
   .cfi_endproc
-V_FUNCTION_END(__kernel_sync_dicache_p5)
+V_FUNCTION_END(__kernel_sync_dicache)
diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S 
b/arch/powerpc/kernel/vdso64/vdso64.lds.S
index e950bf68783a..a049000eacfe 100644
--- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
+++ b/arch/powerpc/kernel/vdso64/vdso64.lds.S
@@ -159,7 +159,6 @@ VERSION
__kernel_clock_getres;
__kernel_get_tbfreq;
__kernel_sync_dicache;
-   __kernel_sync_dicache_p5;
__kernel_sigtramp_rt64;
__kernel_getcpu;
__kernel_time;
-- 
2.25.0



[PATCH v1 17/30] powerpc/vdso: Use builtin symbols to locate fixup section

2020-09-27 Thread Christophe Leroy
Add builtin symbols to locate the fixup sections and use them
instead of locating the sections through ELF headers at runtime.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c  | 55 +++--
 arch/powerpc/kernel/vdso32/vdso32.lds.S |  8 
 arch/powerpc/kernel/vdso64/vdso64.lds.S |  8 
 3 files changed, 30 insertions(+), 41 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 7042e9edfb96..ba2b935a67f6 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -437,6 +437,12 @@ static int __init vdso_do_func_patch64(struct 
lib32_elfinfo *v32,
 
 #endif /* CONFIG_PPC64 */
 
+#define VDSO_DO_FIXUPS(type, value, bits, sec) do {					\
+	void *__start = (void *)VDSO##bits##_SYMBOL(&vdso##bits##_start, sec##_start);	\
+	void *__end = (void *)VDSO##bits##_SYMBOL(&vdso##bits##_start, sec##_end);	\
+											\
+	do_##type##_fixups((value), __start, __end);					\
+} while (0)
 
 static __init int vdso_do_find_sections(struct lib32_elfinfo *v32,
struct lib64_elfinfo *v64)
@@ -533,53 +539,20 @@ static __init int vdso_fixup_datapage(struct 
lib32_elfinfo *v32,
 static __init int vdso_fixup_features(struct lib32_elfinfo *v32,
  struct lib64_elfinfo *v64)
 {
-   unsigned long size;
-   void *start;
-
 #ifdef CONFIG_PPC64
-   start = find_section64(v64->hdr, "__ftr_fixup", &size);
-   if (start)
-   do_feature_fixups(cur_cpu_spec->cpu_features,
- start, start + size);
-
-   start = find_section64(v64->hdr, "__mmu_ftr_fixup", &size);
-   if (start)
-   do_feature_fixups(cur_cpu_spec->mmu_features,
- start, start + size);
-
-   start = find_section64(v64->hdr, "__fw_ftr_fixup", &size);
-   if (start)
-   do_feature_fixups(powerpc_firmware_features,
- start, start + size);
-
-   start = find_section64(v64->hdr, "__lwsync_fixup", &size);
-   if (start)
-   do_lwsync_fixups(cur_cpu_spec->cpu_features,
-start, start + size);
+   VDSO_DO_FIXUPS(feature, cur_cpu_spec->cpu_features, 64, ftr_fixup);
+   VDSO_DO_FIXUPS(feature, cur_cpu_spec->mmu_features, 64, mmu_ftr_fixup);
+   VDSO_DO_FIXUPS(feature, powerpc_firmware_features, 64, fw_ftr_fixup);
+   VDSO_DO_FIXUPS(lwsync, cur_cpu_spec->cpu_features, 64, lwsync_fixup);
 #endif /* CONFIG_PPC64 */
 
 #ifdef CONFIG_VDSO32
-   start = find_section32(v32->hdr, "__ftr_fixup", &size);
-   if (start)
-   do_feature_fixups(cur_cpu_spec->cpu_features,
- start, start + size);
-
-   start = find_section32(v32->hdr, "__mmu_ftr_fixup", &size);
-   if (start)
-   do_feature_fixups(cur_cpu_spec->mmu_features,
- start, start + size);
-
+   VDSO_DO_FIXUPS(feature, cur_cpu_spec->cpu_features, 32, ftr_fixup);
+   VDSO_DO_FIXUPS(feature, cur_cpu_spec->mmu_features, 32, mmu_ftr_fixup);
 #ifdef CONFIG_PPC64
-   start = find_section32(v32->hdr, "__fw_ftr_fixup", &size);
-   if (start)
-   do_feature_fixups(powerpc_firmware_features,
- start, start + size);
+   VDSO_DO_FIXUPS(feature, powerpc_firmware_features, 32, fw_ftr_fixup);
 #endif /* CONFIG_PPC64 */
-
-   start = find_section32(v32->hdr, "__lwsync_fixup", &size);
-   if (start)
-   do_lwsync_fixups(cur_cpu_spec->cpu_features,
-start, start + size);
+   VDSO_DO_FIXUPS(lwsync, cur_cpu_spec->cpu_features, 32, lwsync_fixup);
 #endif
 
return 0;
diff --git a/arch/powerpc/kernel/vdso32/vdso32.lds.S 
b/arch/powerpc/kernel/vdso32/vdso32.lds.S
index a4494a998f58..dd9f262e07c6 100644
--- a/arch/powerpc/kernel/vdso32/vdso32.lds.S
+++ b/arch/powerpc/kernel/vdso32/vdso32.lds.S
@@ -38,17 +38,25 @@ SECTIONS
PROVIDE(etext = .);
 
. = ALIGN(8);
+   VDSO_ftr_fixup_start = .;
__ftr_fixup : { *(__ftr_fixup) }
+   VDSO_ftr_fixup_end = .;
 
. = ALIGN(8);
+   VDSO_mmu_ftr_fixup_start = .;
__mmu_ftr_fixup : { *(__mmu_ftr_fixup) }
+   VDSO_mmu_ftr_fixup_end = .;
 
. = ALIGN(8);
+   VDSO_lwsync_fixup_start = .;
__lwsync_fixup  : { *(__lwsync_fixup) }
+   VDSO_lwsync_fixup_end = .;
 
 #ifdef CONFIG_PPC64
. = ALIGN(8);
+   VDSO_fw_ftr_fixup_start = .;
__fw_ftr_fixup  : { *(__fw_ftr_fixup) }
+   VDSO_fw_ftr_fixup_end = .;
 #endif
 
/*
diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S 
b/arch/powerpc/kernel/vdso64/vdso64.lds.S
index 2113bf79ccda..e950bf68783a 100644
--- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
+++ 

[PATCH v1 19/30] powerpc/vdso: Remove vdso32_pages and vdso64_pages

2020-09-27 Thread Christophe Leroy
vdso32_pages and vdso64_pages are not used anymore.

Remove them.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 16 
 1 file changed, 16 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 3a4fbcc0d1be..e732776bac0a 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -47,7 +47,6 @@
 /* The alignment of the vDSO */
 #define VDSO_ALIGNMENT (1 << 16)
 
-static unsigned int vdso32_pages;
 static void *vdso32_kbase;
 unsigned long vdso32_sigtramp;
 unsigned long vdso32_rt_sigtramp;
@@ -55,7 +54,6 @@ unsigned long vdso32_rt_sigtramp;
 extern char vdso32_start, vdso32_end;
 extern char vdso64_start, vdso64_end;
static void *vdso64_kbase = &vdso64_start;
-static unsigned int vdso64_pages;
 #ifdef CONFIG_PPC64
 unsigned long vdso64_rt_sigtramp;
 #endif /* CONFIG_PPC64 */
@@ -701,20 +699,8 @@ static int __init vdso_init(void)
vdso_data->icache_log_block_size = ppc64_caches.l1i.log_block_size;
 #endif /* CONFIG_PPC64 */
 
-   /*
-* Calculate the size of the 64 bits vDSO
-*/
-   vdso64_pages = (&vdso64_end - &vdso64_start) >> PAGE_SHIFT;
-   DBG("vdso64_kbase: %p, 0x%x pages\n", vdso64_kbase, vdso64_pages);
-
	vdso32_kbase = &vdso32_start;
 
-   /*
-* Calculate the size of the 32 bits vDSO
-*/
-   vdso32_pages = (&vdso32_end - &vdso32_start) >> PAGE_SHIFT;
-   DBG("vdso32_kbase: %p, 0x%x pages\n", vdso32_kbase, vdso32_pages);
-
vdso_setup_syscall_map();
 
/*
@@ -723,8 +709,6 @@ static int __init vdso_init(void)
 */
if (vdso_setup()) {
printk(KERN_ERR "vDSO setup failure, not enabled !\n");
-   vdso32_pages = 0;
-   vdso64_pages = 0;
return 0;
}
 
-- 
2.25.0



[PATCH v1 16/30] powerpc/vdso: Retrieve sigtramp offsets at buildtime

2020-09-27 Thread Christophe Leroy
This is copied from arm64.

Instead of using runtime-generated signal trampoline offsets,
get the offsets at build time.

If the trampoline doesn't exist, the build will fail, so there is no
need to check in the VDSO whether the trampoline exists or not.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/Makefile  | 15 +++
 arch/powerpc/include/asm/vdso.h| 12 
 arch/powerpc/kernel/signal_32.c|  8 
 arch/powerpc/kernel/signal_64.c|  4 ++--
 arch/powerpc/kernel/vdso32/Makefile|  8 
 arch/powerpc/kernel/vdso32/gen_vdso_offsets.sh | 16 
 arch/powerpc/kernel/vdso32/vdso32.lds.S|  6 ++
 arch/powerpc/kernel/vdso64/Makefile|  8 
 arch/powerpc/kernel/vdso64/gen_vdso_offsets.sh | 16 
 arch/powerpc/kernel/vdso64/vdso64.lds.S|  5 +
 arch/powerpc/perf/callchain_32.c   |  8 
 arch/powerpc/perf/callchain_64.c   |  4 ++--
 12 files changed, 98 insertions(+), 12 deletions(-)
 create mode 100755 arch/powerpc/kernel/vdso32/gen_vdso_offsets.sh
 create mode 100755 arch/powerpc/kernel/vdso64/gen_vdso_offsets.sh

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 4f932044939e..2b432a62d6a2 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -410,6 +410,21 @@ install:
 archclean:
$(Q)$(MAKE) $(clean)=$(boot)
 
+ifeq ($(KBUILD_EXTMOD),)
+# We need to generate vdso-offsets.h before compiling certain files in kernel/.
+# In order to do that, we should use the archprepare target, but we can't since
+# asm-offsets.h is included in some files used to generate vdso-offsets.h, and
+# asm-offsets.h is built in prepare0, for which archprepare is a dependency.
+# Therefore we need to generate the header after prepare0 has been made, hence
+# this hack.
+prepare: vdso_prepare
+vdso_prepare: prepare0
+	$(if $(CONFIG_VDSO32),$(Q)$(MAKE) \
+		$(build)=arch/powerpc/kernel/vdso32 include/generated/vdso32-offsets.h)
+	$(if $(CONFIG_PPC64),$(Q)$(MAKE) \
+		$(build)=arch/powerpc/kernel/vdso64 include/generated/vdso64-offsets.h)
+endif
+
 archprepare: checkbin
 
 archheaders:
diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h
index 2ff884853f97..f5257b7f17d0 100644
--- a/arch/powerpc/include/asm/vdso.h
+++ b/arch/powerpc/include/asm/vdso.h
@@ -15,6 +15,18 @@
 
 #ifndef __ASSEMBLY__
 
+#ifdef CONFIG_PPC64
+#include <generated/vdso64-offsets.h>
+#endif
+
+#ifdef CONFIG_VDSO32
+#include <generated/vdso32-offsets.h>
+#endif
+
+#define VDSO64_SYMBOL(base, name) ((unsigned long)(base) + (vdso64_offset_##name))
+
+#define VDSO32_SYMBOL(base, name) ((unsigned long)(base) + (vdso32_offset_##name))
+
 /* Offsets relative to thread->vdso_base */
 extern unsigned long vdso64_rt_sigtramp;
 extern unsigned long vdso32_sigtramp;
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 4dcc5e2659ce..e6f8afe1d12c 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -785,9 +785,9 @@ int handle_rt_signal32(struct ksignal *ksig, sigset_t 
*oldset,
/* Save user registers on the stack */
	frame = &rt_sf->uc.uc_mcontext;
addr = frame;
-   if (vdso32_rt_sigtramp && tsk->mm->context.vdso) {
+   if (tsk->mm->context.vdso) {
sigret = 0;
-	tramp = (unsigned long)tsk->mm->context.vdso + vdso32_rt_sigtramp;
+   tramp = VDSO32_SYMBOL(tsk->mm->context.vdso, sigtramp_rt32);
} else {
sigret = __NR_rt_sigreturn;
tramp = (unsigned long) frame->tramp;
@@ -1247,9 +1247,9 @@ int handle_signal32(struct ksignal *ksig, sigset_t 
*oldset,
	    || __put_user(ksig->sig, &sc->signal))
goto badframe;
 
-   if (vdso32_sigtramp && tsk->mm->context.vdso) {
+   if (tsk->mm->context.vdso) {
sigret = 0;
-   tramp = (unsigned long)tsk->mm->context.vdso + vdso32_sigtramp;
+   tramp = VDSO32_SYMBOL(tsk->mm->context.vdso, sigtramp32);
} else {
sigret = __NR_sigreturn;
tramp = (unsigned long) frame->mctx.tramp;
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index 80ad09c8bc14..d29f529a4658 100644
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
@@ -864,8 +864,8 @@ int handle_rt_signal64(struct ksignal *ksig, sigset_t *set,
tsk->thread.fp_state.fpscr = 0;
 
/* Set up to return from userspace. */
-   if (vdso64_rt_sigtramp && tsk->mm->context.vdso) {
-		regs->nip = (unsigned long)tsk->mm->context.vdso + vdso64_rt_sigtramp;
+   if (tsk->mm->context.vdso) {
+   regs->nip = VDSO64_SYMBOL(tsk->mm->context.vdso, sigtramp_rt64);
} else {
err |= setup_trampoline(__NR_rt_sigreturn, >tramp[0]);
if (err)
diff --git 

[PATCH v1 15/30] powerpc/vdso: Remove unused \tmp param in __get_datapage()

2020-09-27 Thread Christophe Leroy
The \tmp parameter is not used anymore; remove it.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/vdso/gettimeofday.h | 4 ++--
 arch/powerpc/include/asm/vdso_datapage.h | 2 +-
 arch/powerpc/kernel/vdso32/cacheflush.S  | 2 +-
 arch/powerpc/kernel/vdso32/datapage.S| 4 ++--
 arch/powerpc/kernel/vdso64/cacheflush.S  | 2 +-
 arch/powerpc/kernel/vdso64/datapage.S| 4 ++--
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/vdso/gettimeofday.h 
b/arch/powerpc/include/asm/vdso/gettimeofday.h
index 8da84722729b..037fa214da7c 100644
--- a/arch/powerpc/include/asm/vdso/gettimeofday.h
+++ b/arch/powerpc/include/asm/vdso/gettimeofday.h
@@ -22,7 +22,7 @@
 #ifdef CONFIG_PPC64
PPC_STL r2, STACK_FRAME_OVERHEAD + STK_GOT(r1)
 #endif
-	get_datapage	r5, r0
+	get_datapage	r5
 	addi	r5, r5, VDSO_DATA_OFFSET
bl  \funct
PPC_LL  r0, STACK_FRAME_OVERHEAD + PPC_LR_STKOFF(r1)
@@ -51,7 +51,7 @@
 #ifdef CONFIG_PPC64
PPC_STL r2, STACK_FRAME_OVERHEAD + STK_GOT(r1)
 #endif
-	get_datapage	r4, r0
+	get_datapage	r4
 	addi	r4, r4, VDSO_DATA_OFFSET
bl  \funct
PPC_LL  r0, STACK_FRAME_OVERHEAD + PPC_LR_STKOFF(r1)
diff --git a/arch/powerpc/include/asm/vdso_datapage.h 
b/arch/powerpc/include/asm/vdso_datapage.h
index 535ba737397d..3f958ecf2beb 100644
--- a/arch/powerpc/include/asm/vdso_datapage.h
+++ b/arch/powerpc/include/asm/vdso_datapage.h
@@ -103,7 +103,7 @@ extern struct vdso_arch_data *vdso_data;
 
 #else /* __ASSEMBLY__ */
 
-.macro get_datapage ptr, tmp
+.macro get_datapage ptr
bcl 20, 31, .+4
 999:
mflr\ptr
diff --git a/arch/powerpc/kernel/vdso32/cacheflush.S 
b/arch/powerpc/kernel/vdso32/cacheflush.S
index 3440ddf21c8b..017843bf5382 100644
--- a/arch/powerpc/kernel/vdso32/cacheflush.S
+++ b/arch/powerpc/kernel/vdso32/cacheflush.S
@@ -27,7 +27,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
 #ifdef CONFIG_PPC64
	mflr	r12
  .cfi_register lr,r12
-	get_datapage	r10, r0
+	get_datapage	r10
	mtlr	r12
 #endif
 
diff --git a/arch/powerpc/kernel/vdso32/datapage.S 
b/arch/powerpc/kernel/vdso32/datapage.S
index 217bb630f8f9..91a153b34714 100644
--- a/arch/powerpc/kernel/vdso32/datapage.S
+++ b/arch/powerpc/kernel/vdso32/datapage.S
@@ -31,7 +31,7 @@ V_FUNCTION_BEGIN(__kernel_get_syscall_map)
	mflr	r12
  .cfi_register lr,r12
	mr.	r4,r3
-	get_datapage	r3, r0
+	get_datapage	r3
	mtlr	r12
	addi	r3,r3,CFG_SYSCALL_MAP32
beqlr
@@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_get_tbfreq)
   .cfi_startproc
	mflr	r12
  .cfi_register lr,r12
-	get_datapage	r3, r0
+	get_datapage	r3
	lwz	r4,(CFG_TB_TICKS_PER_SEC + 4)(r3)
	lwz	r3,CFG_TB_TICKS_PER_SEC(r3)
	mtlr	r12
diff --git a/arch/powerpc/kernel/vdso64/cacheflush.S 
b/arch/powerpc/kernel/vdso64/cacheflush.S
index cab14324242b..61985de5758f 100644
--- a/arch/powerpc/kernel/vdso64/cacheflush.S
+++ b/arch/powerpc/kernel/vdso64/cacheflush.S
@@ -25,7 +25,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
   .cfi_startproc
	mflr	r12
  .cfi_register lr,r12
-	get_datapage	r10, r0
+	get_datapage	r10
	mtlr	r12
 
lwz r7,CFG_DCACHE_BLOCKSZ(r10)
diff --git a/arch/powerpc/kernel/vdso64/datapage.S 
b/arch/powerpc/kernel/vdso64/datapage.S
index 067247d3efb9..941b735df069 100644
--- a/arch/powerpc/kernel/vdso64/datapage.S
+++ b/arch/powerpc/kernel/vdso64/datapage.S
@@ -31,7 +31,7 @@ V_FUNCTION_BEGIN(__kernel_get_syscall_map)
	mflr	r12
  .cfi_register lr,r12
	mr	r4,r3
-	get_datapage	r3, r0
+	get_datapage	r3
	mtlr	r12
	addi	r3,r3,CFG_SYSCALL_MAP64
cmpldi  cr0,r4,0
@@ -53,7 +53,7 @@ V_FUNCTION_BEGIN(__kernel_get_tbfreq)
   .cfi_startproc
	mflr	r12
  .cfi_register lr,r12
-	get_datapage	r3, r0
+	get_datapage	r3
	ld	r3,CFG_TB_TICKS_PER_SEC(r3)
	mtlr	r12
crclr   cr0*4+so
-- 
2.25.0



[PATCH v1 14/30] powerpc/vdso: Simplify __get_datapage()

2020-09-27 Thread Christophe Leroy
The VDSO datapage and the text pages are always located immediately
next to each other, so the offset can be hardcoded without an
indirection through __kernel_datapage_offset.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/vdso_datapage.h | 8 +---
 arch/powerpc/kernel/vdso32/vdso32.lds.S  | 2 ++
 arch/powerpc/kernel/vdso64/vdso64.lds.S  | 2 ++
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/vdso_datapage.h 
b/arch/powerpc/include/asm/vdso_datapage.h
index 3d996db05acd..535ba737397d 100644
--- a/arch/powerpc/include/asm/vdso_datapage.h
+++ b/arch/powerpc/include/asm/vdso_datapage.h
@@ -105,10 +105,12 @@ extern struct vdso_arch_data *vdso_data;
 
 .macro get_datapage ptr, tmp
bcl 20, 31, .+4
+999:
	mflr	\ptr
-	addi	\ptr, \ptr, (__kernel_datapage_offset - (.-4))@l
-	lwz	\tmp, 0(\ptr)
-	add	\ptr, \tmp, \ptr
+#if CONFIG_PPC_PAGE_SHIFT > 14
+	addis	\ptr, \ptr, (_vdso_datapage - 999b)@ha
+#endif
+	addi	\ptr, \ptr, (_vdso_datapage - 999b)@l
 .endm
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/kernel/vdso32/vdso32.lds.S 
b/arch/powerpc/kernel/vdso32/vdso32.lds.S
index af5812ca5dce..c96b5141738e 100644
--- a/arch/powerpc/kernel/vdso32/vdso32.lds.S
+++ b/arch/powerpc/kernel/vdso32/vdso32.lds.S
@@ -4,6 +4,7 @@
  * library
  */
 #include 
+#include <asm/page.h>
 
 #ifdef __LITTLE_ENDIAN__
 OUTPUT_FORMAT("elf32-powerpcle", "elf32-powerpcle", "elf32-powerpcle")
@@ -15,6 +16,7 @@ ENTRY(_start)
 
 SECTIONS
 {
+   PROVIDE(_vdso_datapage = . - PAGE_SIZE);
. = VDSO32_LBASE + SIZEOF_HEADERS;
 
.hash   : { *(.hash) }  :text
diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S 
b/arch/powerpc/kernel/vdso64/vdso64.lds.S
index 256fb9720298..aa5b924683c5 100644
--- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
+++ b/arch/powerpc/kernel/vdso64/vdso64.lds.S
@@ -4,6 +4,7 @@
  * library
  */
 #include 
+#include <asm/page.h>
 
 #ifdef __LITTLE_ENDIAN__
 OUTPUT_FORMAT("elf64-powerpcle", "elf64-powerpcle", "elf64-powerpcle")
@@ -15,6 +16,7 @@ ENTRY(_start)
 
 SECTIONS
 {
+   PROVIDE(_vdso_datapage = . - PAGE_SIZE);
. = VDSO64_LBASE + SIZEOF_HEADERS;
 
.hash   : { *(.hash) }  :text
-- 
2.25.0



[PATCH v1 13/30] powerpc/vdso: Move vdso datapage up front

2020-09-27 Thread Christophe Leroy
Move the vdso datapage in front of the VDSO area,
before the VDSO text.

This will allow removing the __kernel_datapage_offset symbol
and simplifying __get_datapage() in following patches.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/mmu_context.h |  2 +-
 arch/powerpc/kernel/vdso.c | 14 +++---
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h 
b/arch/powerpc/include/asm/mmu_context.h
index d54358cb5be1..e5a5e3cb7724 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -262,7 +262,7 @@ extern void arch_exit_mmap(struct mm_struct *mm);
 static inline void arch_unmap(struct mm_struct *mm,
  unsigned long start, unsigned long end)
 {
-   unsigned long vdso_base = (unsigned long)mm->context.vdso;
+   unsigned long vdso_base = (unsigned long)mm->context.vdso - PAGE_SIZE;
 
if (start <= vdso_base && vdso_base < end)
mm->context.vdso = NULL;
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 87b77b793029..7042e9edfb96 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -123,7 +123,7 @@ static int vdso_mremap(const struct vm_special_mapping *sm, 
struct vm_area_struc
if (new_size != text_size + PAGE_SIZE)
return -EINVAL;
 
-   current->mm->context.vdso = (void __user *)new_vma->vm_start;
+	current->mm->context.vdso = (void __user *)new_vma->vm_start + PAGE_SIZE;
 
return 0;
 }
@@ -198,7 +198,7 @@ static int __arch_setup_additional_pages(struct 
linux_binprm *bprm, int uses_int
 * install_special_mapping or the perf counter mmap tracking code
 * will fail to recognise it as a vDSO.
 */
-   mm->context.vdso = (void __user *)vdso_base;
+   mm->context.vdso = (void __user *)vdso_base + PAGE_SIZE;
 
/*
 * our vma flags don't have VM_WRITE so by default, the process isn't
@@ -510,7 +510,7 @@ static __init int vdso_fixup_datapage(struct lib32_elfinfo 
*v32,
return -1;
}
*((int *)(vdso64_kbase + sym64->st_value - VDSO64_LBASE)) =
-   (vdso64_pages << PAGE_SHIFT) -
+   -PAGE_SIZE -
(sym64->st_value - VDSO64_LBASE);
 #endif /* CONFIG_PPC64 */
 
@@ -522,7 +522,7 @@ static __init int vdso_fixup_datapage(struct lib32_elfinfo 
*v32,
return -1;
}
*((int *)(vdso32_kbase + (sym32->st_value - VDSO32_LBASE))) =
-   (vdso32_pages << PAGE_SHIFT) -
+   -PAGE_SIZE -
(sym32->st_value - VDSO32_LBASE);
 #endif
 
@@ -696,10 +696,10 @@ static struct page ** __init vdso_setup_pages(void 
*start, void *end)
if (!pagelist)
panic("%s: Cannot allocate page list for VDSO", __func__);
 
-   for (i = 0; i < pages; i++)
-   pagelist[i] = virt_to_page(start + i * PAGE_SIZE);
+   pagelist[0] = virt_to_page(vdso_data);
 
-   pagelist[i] = virt_to_page(vdso_data);
+   for (i = 0; i < pages; i++)
+   pagelist[i + 1] = virt_to_page(start + i * PAGE_SIZE);
 
return pagelist;
 }
-- 
2.25.0



[PATCH v1 12/30] powerpc/vdso: Replace vdso_base by vdso

2020-09-27 Thread Christophe Leroy
All architectures except s390 use a void pointer named 'vdso'
to reference the VDSO mapping.

In a following patch, the VDSO data page will be put in front of the
text; vdso_base will then no longer point to the VDSO text.

To avoid confusion between vdso_base and the VDSO text, rename
vdso_base to vdso and make it a void __user *.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/book3s/32/mmu-hash.h | 2 +-
 arch/powerpc/include/asm/book3s/64/mmu.h  | 2 +-
 arch/powerpc/include/asm/elf.h| 2 +-
 arch/powerpc/include/asm/mmu_context.h| 6 --
 arch/powerpc/include/asm/nohash/32/mmu-40x.h  | 2 +-
 arch/powerpc/include/asm/nohash/32/mmu-44x.h  | 2 +-
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h  | 2 +-
 arch/powerpc/include/asm/nohash/mmu-book3e.h  | 2 +-
 arch/powerpc/kernel/signal_32.c   | 8 
 arch/powerpc/kernel/signal_64.c   | 4 ++--
 arch/powerpc/kernel/vdso.c| 8 
 arch/powerpc/perf/callchain_32.c  | 8 
 arch/powerpc/perf/callchain_64.c  | 4 ++--
 13 files changed, 27 insertions(+), 25 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/mmu-hash.h 
b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
index 2e277ca0170f..331187661236 100644
--- a/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
@@ -90,7 +90,7 @@ struct hash_pte {
 
 typedef struct {
unsigned long id;
-   unsigned long vdso_base;
+   void __user *vdso;
 } mm_context_t;
 
 void update_bats(void);
diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h 
b/arch/powerpc/include/asm/book3s/64/mmu.h
index ddc414ab3c4d..fc6cb6a712c7 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -111,7 +111,7 @@ typedef struct {
 
struct hash_mm_context *hash_context;
 
-   unsigned long vdso_base;
+   void __user *vdso;
/*
 * pagetable fragment support
 */
diff --git a/arch/powerpc/include/asm/elf.h b/arch/powerpc/include/asm/elf.h
index 53ed2ca40151..4ecc372c408e 100644
--- a/arch/powerpc/include/asm/elf.h
+++ b/arch/powerpc/include/asm/elf.h
@@ -169,7 +169,7 @@ do {
\
NEW_AUX_ENT(AT_DCACHEBSIZE, dcache_bsize);  \
NEW_AUX_ENT(AT_ICACHEBSIZE, icache_bsize);  \
NEW_AUX_ENT(AT_UCACHEBSIZE, ucache_bsize);  \
-   VDSO_AUX_ENT(AT_SYSINFO_EHDR, current->mm->context.vdso_base);  \
+   VDSO_AUX_ENT(AT_SYSINFO_EHDR, (unsigned long)current->mm->context.vdso);\
ARCH_DLINFO_CACHE_GEOMETRY; \
 } while (0)
 
diff --git a/arch/powerpc/include/asm/mmu_context.h 
b/arch/powerpc/include/asm/mmu_context.h
index e02aa793420b..d54358cb5be1 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -262,8 +262,10 @@ extern void arch_exit_mmap(struct mm_struct *mm);
 static inline void arch_unmap(struct mm_struct *mm,
  unsigned long start, unsigned long end)
 {
-   if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
-   mm->context.vdso_base = 0;
+   unsigned long vdso_base = (unsigned long)mm->context.vdso;
+
+   if (start <= vdso_base && vdso_base < end)
+   mm->context.vdso = NULL;
 }
 
 #ifdef CONFIG_PPC_MEM_KEYS
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-40x.h 
b/arch/powerpc/include/asm/nohash/32/mmu-40x.h
index 74f4edb5916e..8a8f13a22cf4 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-40x.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-40x.h
@@ -57,7 +57,7 @@
 typedef struct {
unsigned intid;
unsigned intactive;
-   unsigned long   vdso_base;
+   void __user *vdso;
 } mm_context_t;
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-44x.h 
b/arch/powerpc/include/asm/nohash/32/mmu-44x.h
index 28aa3b339c5e..2d92a39d8f2e 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-44x.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-44x.h
@@ -108,7 +108,7 @@ extern unsigned int tlb_44x_index;
 typedef struct {
unsigned intid;
unsigned intactive;
-   unsigned long   vdso_base;
+   void __user *vdso;
 } mm_context_t;
 
 /* patch sites */
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h 
b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index 1d9ac0f9c794..f0bd7f20c1e3 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -198,7 +198,7 @@ void mmu_pin_tlb(unsigned long top, bool readonly);
 typedef struct {
unsigned int id;
unsigned int active;
-   unsigned long vdso_base;
+   void __user *vdso;
void *pte_frag;
 } mm_context_t;
 
diff --git 

[PATCH v1 11/30] powerpc/vdso: Provide vdso_remap()

2020-09-27 Thread Christophe Leroy
Provide vdso_remap() through _install_special_mapping() and
drop arch_remap().

This adds a test of the size and returns -EINVAL if the size
is not correct.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/mm-arch-hooks.h | 25 
 arch/powerpc/kernel/vdso.c   | 24 +++
 2 files changed, 24 insertions(+), 25 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/mm-arch-hooks.h

diff --git a/arch/powerpc/include/asm/mm-arch-hooks.h 
b/arch/powerpc/include/asm/mm-arch-hooks.h
deleted file mode 100644
index dce274be824a..
--- a/arch/powerpc/include/asm/mm-arch-hooks.h
+++ /dev/null
@@ -1,25 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour 
- */
-
-#ifndef _ASM_POWERPC_MM_ARCH_HOOKS_H
-#define _ASM_POWERPC_MM_ARCH_HOOKS_H
-
-static inline void arch_remap(struct mm_struct *mm,
- unsigned long old_start, unsigned long old_end,
- unsigned long new_start, unsigned long new_end)
-{
-   /*
-* mremap() doesn't allow moving multiple vmas so we can limit the
-* check to old_start == vdso_base.
-*/
-   if (old_start == mm->context.vdso_base)
-   mm->context.vdso_base = new_start;
-}
-#define arch_remap arch_remap
-
-#endif /* _ASM_POWERPC_MM_ARCH_HOOKS_H */
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 9b2c91a963a6..971764d5b85b 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -115,13 +115,37 @@ struct lib64_elfinfo
unsigned long   text;
 };
 
+static int vdso_mremap(const struct vm_special_mapping *sm, struct 
vm_area_struct *new_vma,
+  unsigned long text_size)
+{
+   unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
+
+   if (new_size != text_size + PAGE_SIZE)
+   return -EINVAL;
+
+   current->mm->context.vdso_base = new_vma->vm_start;
+
+   return 0;
+}
+
+static int vdso32_mremap(const struct vm_special_mapping *sm, struct 
vm_area_struct *new_vma)
+{
+   return vdso_mremap(sm, new_vma, &vdso32_end - &vdso32_start);
+}
+
+static int vdso64_mremap(const struct vm_special_mapping *sm, struct 
vm_area_struct *new_vma)
+{
+   return vdso_mremap(sm, new_vma, &vdso64_end - &vdso64_start);
+}
 
 static struct vm_special_mapping vdso32_spec __ro_after_init = {
.name = "[vdso]",
+   .mremap = vdso32_mremap,
 };
 
 static struct vm_special_mapping vdso64_spec __ro_after_init = {
.name = "[vdso]",
+   .mremap = vdso64_mremap,
 };
 
 /*
-- 
2.25.0



[PATCH v1 10/30] powerpc/vdso: Move to _install_special_mapping() and remove arch_vma_name()

2020-09-27 Thread Christophe Leroy
Copied from commit 2fea7f6c98f5 ("arm64: vdso: move to
_install_special_mapping and remove arch_vma_name").

Use the new _install_special_mapping() API added by
commit a62c34bd2a8a ("x86, mm: Improve _install_special_mapping
and fix x86 vdso naming"), which obsoletes install_special_mapping().

And remove arch_vma_name() as the name is handled by the new API.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 45 +++---
 1 file changed, 22 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index a976c5e4a7ac..9b2c91a963a6 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -49,7 +49,6 @@
 
 static unsigned int vdso32_pages;
 static void *vdso32_kbase;
-static struct page **vdso32_pagelist;
 unsigned long vdso32_sigtramp;
 unsigned long vdso32_rt_sigtramp;
 
@@ -57,7 +56,6 @@ extern char vdso32_start, vdso32_end;
 extern char vdso64_start, vdso64_end;
 static void *vdso64_kbase = &vdso64_start;
 static unsigned int vdso64_pages;
-static struct page **vdso64_pagelist;
 #ifdef CONFIG_PPC64
 unsigned long vdso64_rt_sigtramp;
 #endif /* CONFIG_PPC64 */
@@ -118,6 +116,14 @@ struct lib64_elfinfo
 };
 
 
+static struct vm_special_mapping vdso32_spec __ro_after_init = {
+   .name = "[vdso]",
+};
+
+static struct vm_special_mapping vdso64_spec __ro_after_init = {
+   .name = "[vdso]",
+};
+
 /*
  * This is called from binfmt_elf, we create the special vma for the
  * vDSO and insert it into the mm struct tree
@@ -125,17 +131,17 @@ struct lib64_elfinfo
 static int __arch_setup_additional_pages(struct linux_binprm *bprm, int 
uses_interp)
 {
struct mm_struct *mm = current->mm;
-   struct page **vdso_pagelist;
+   struct vm_special_mapping *vdso_spec;
+   struct vm_area_struct *vma;
unsigned long vdso_size;
unsigned long vdso_base;
-   int rc;
 
if (is_32bit_task()) {
-   vdso_pagelist = vdso32_pagelist;
+   vdso_spec = &vdso32_spec;
	vdso_size = &vdso32_end - &vdso32_start;
vdso_base = VDSO32_MBASE;
} else {
-   vdso_pagelist = vdso64_pagelist;
+   vdso_spec = &vdso64_spec;
	vdso_size = &vdso64_end - &vdso64_start;
/*
 * On 64bit we don't have a preferred map address. This
@@ -166,7 +172,7 @@ static int __arch_setup_additional_pages(struct 
linux_binprm *bprm, int uses_int
/*
 * Put vDSO base into mm struct. We need to do this before calling
 * install_special_mapping or the perf counter mmap tracking code
-* will fail to recognise it as a vDSO (since arch_vma_name fails).
+* will fail to recognise it as a vDSO.
 */
current->mm->context.vdso_base = vdso_base;
 
@@ -180,11 +186,13 @@ static int __arch_setup_additional_pages(struct 
linux_binprm *bprm, int uses_int
 * It's fine to use that for setting breakpoints in the vDSO code
 * pages though.
 */
-   rc = install_special_mapping(mm, vdso_base, vdso_size,
-VM_READ|VM_EXEC|
-VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
-vdso_pagelist);
-   return rc;
+   vma = _install_special_mapping(mm, vdso_base, vdso_size,
+  VM_READ | VM_EXEC | VM_MAYREAD |
+  VM_MAYWRITE | VM_MAYEXEC, vdso_spec);
+   if (IS_ERR(vma))
+   return PTR_ERR(vma);
+
+   return 0;
 }
 
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
@@ -208,15 +216,6 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
return rc;
 }
 
-const char *arch_vma_name(struct vm_area_struct *vma)
-{
-   if (vma->vm_mm && vma->vm_start == vma->vm_mm->context.vdso_base)
-   return "[vdso]";
-   return NULL;
-}
-
-
-
 #ifdef CONFIG_VDSO32
 static void * __init find_section32(Elf32_Ehdr *ehdr, const char *secname,
  unsigned long *size)
@@ -737,10 +736,10 @@ static int __init vdso_init(void)
}
 
if (IS_ENABLED(CONFIG_VDSO32))
-   vdso32_pagelist = vdso_setup_pages(&vdso32_start, &vdso32_end);
+   vdso32_spec.pages = vdso_setup_pages(&vdso32_start, &vdso32_end);
 
if (IS_ENABLED(CONFIG_PPC64))
-   vdso64_pagelist = vdso_setup_pages(&vdso64_start, &vdso64_end);
+   vdso64_spec.pages = vdso_setup_pages(&vdso64_start, &vdso64_end);
 
smp_wmb();
vdso_ready = 1;
-- 
2.25.0



[PATCH v1 08/30] powerpc/vdso: Use VDSO size in arch_setup_additional_pages()

2020-09-27 Thread Christophe Leroy
In arch_setup_additional_pages(), instead of using the number of VDSO
pages and recalculating the VDSO size, use the VDSO size directly.

As vdso_ready is set, vdso_pages can't be 0, so just remove the test.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 18 ++
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index a24f6a583fac..448ecaa27ac5 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -126,7 +126,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
 {
struct mm_struct *mm = current->mm;
struct page **vdso_pagelist;
-   unsigned long vdso_pages;
+   unsigned long vdso_size;
unsigned long vdso_base;
int rc;
 
@@ -135,11 +135,11 @@ int arch_setup_additional_pages(struct linux_binprm 
*bprm, int uses_interp)
 
if (is_32bit_task()) {
vdso_pagelist = vdso32_pagelist;
-   vdso_pages = vdso32_pages;
+   vdso_size = &vdso32_end - &vdso32_start;
vdso_base = VDSO32_MBASE;
} else {
vdso_pagelist = vdso64_pagelist;
-   vdso_pages = vdso64_pages;
+   vdso_size = &vdso64_end - &vdso64_start;
/*
 * On 64bit we don't have a preferred map address. This
 * allows get_unmapped_area to find an area near other mmaps
@@ -150,13 +150,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
 
current->mm->context.vdso_base = 0;
 
-   /* vDSO has a problem and was disabled, just don't "enable" it for the
-* process
-*/
-   if (vdso_pages == 0)
-   return 0;
/* Add a page to the vdso size for the data page */
-   vdso_pages ++;
+   vdso_size += PAGE_SIZE;
 
/*
 * pick a base address for the vDSO in process space. We try to put it
@@ -167,8 +162,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
if (mmap_write_lock_killable(mm))
return -EINTR;
vdso_base = get_unmapped_area(NULL, vdso_base,
- (vdso_pages << PAGE_SHIFT) +
- ((VDSO_ALIGNMENT - 1) & PAGE_MASK),
+ vdso_size + ((VDSO_ALIGNMENT - 1) & PAGE_MASK),
  0, 0);
if (IS_ERR_VALUE(vdso_base)) {
rc = vdso_base;
@@ -195,7 +189,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
 * It's fine to use that for setting breakpoints in the vDSO code
 * pages though.
 */
-   rc = install_special_mapping(mm, vdso_base, vdso_pages << PAGE_SHIFT,
+   rc = install_special_mapping(mm, vdso_base, vdso_size,
 VM_READ|VM_EXEC|
 VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
 vdso_pagelist);
-- 
2.25.0



[PATCH v1 07/30] powerpc/vdso: Remove unnecessary ifdefs in vdso_pagelist initialization

2020-09-27 Thread Christophe Leroy
No need for all those #ifdefs around the pagelist initialisation;
use IS_ENABLED() instead, and GCC will discard the unused static variables.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 31 ++-
 1 file changed, 6 insertions(+), 25 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index d129d7ee006d..a24f6a583fac 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -53,15 +53,12 @@ static struct page **vdso32_pagelist;
 unsigned long vdso32_sigtramp;
 unsigned long vdso32_rt_sigtramp;
 
-#ifdef CONFIG_VDSO32
 extern char vdso32_start, vdso32_end;
-#endif
-
-#ifdef CONFIG_PPC64
 extern char vdso64_start, vdso64_end;
 static void *vdso64_kbase = &vdso64_start;
 static unsigned int vdso64_pages;
 static struct page **vdso64_pagelist;
+#ifdef CONFIG_PPC64
 unsigned long vdso64_rt_sigtramp;
 #endif /* CONFIG_PPC64 */
 
@@ -136,7 +133,6 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
if (!vdso_ready)
return 0;
 
-#ifdef CONFIG_PPC64
if (is_32bit_task()) {
vdso_pagelist = vdso32_pagelist;
vdso_pages = vdso32_pages;
@@ -151,11 +147,6 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
 */
vdso_base = 0;
}
-#else
-   vdso_pagelist = vdso32_pagelist;
-   vdso_pages = vdso32_pages;
-   vdso_base = VDSO32_MBASE;
-#endif
 
current->mm->context.vdso_base = 0;
 
@@ -614,9 +605,7 @@ static __init int vdso_setup(void)
	struct lib64_elfinfo	v64;
 
v32.hdr = vdso32_kbase;
-#ifdef CONFIG_PPC64
v64.hdr = vdso64_kbase;
-#endif
	if (vdso_do_find_sections(&v32, &v64))
return -1;
 
@@ -722,16 +711,14 @@ static int __init vdso_init(void)
vdso_data->icache_block_size = ppc64_caches.l1i.block_size;
vdso_data->dcache_log_block_size = ppc64_caches.l1d.log_block_size;
vdso_data->icache_log_block_size = ppc64_caches.l1i.log_block_size;
+#endif /* CONFIG_PPC64 */
 
/*
 * Calculate the size of the 64 bits vDSO
 */
vdso64_pages = (_end - _start) >> PAGE_SHIFT;
DBG("vdso64_kbase: %p, 0x%x pages\n", vdso64_kbase, vdso64_pages);
-#endif /* CONFIG_PPC64 */
-
 
-#ifdef CONFIG_VDSO32
	vdso32_kbase = &vdso32_start;
 
/*
@@ -739,8 +726,6 @@ static int __init vdso_init(void)
 */
vdso32_pages = (_end - _start) >> PAGE_SHIFT;
DBG("vdso32_kbase: %p, 0x%x pages\n", vdso32_kbase, vdso32_pages);
-#endif
-
 
vdso_setup_syscall_map();
 
@@ -751,19 +736,15 @@ static int __init vdso_init(void)
if (vdso_setup()) {
printk(KERN_ERR "vDSO setup failure, not enabled !\n");
vdso32_pages = 0;
-#ifdef CONFIG_PPC64
vdso64_pages = 0;
-#endif
return 0;
}
 
-#ifdef CONFIG_VDSO32
-   vdso32_pagelist = vdso_setup_pages(&vdso32_start, &vdso32_end);
-#endif
+   if (IS_ENABLED(CONFIG_VDSO32))
+   vdso32_pagelist = vdso_setup_pages(&vdso32_start, &vdso32_end);
 
-#ifdef CONFIG_PPC64
-   vdso64_pagelist = vdso_setup_pages(&vdso64_start, &vdso64_end);
-#endif /* CONFIG_PPC64 */
+   if (IS_ENABLED(CONFIG_PPC64))
+   vdso64_pagelist = vdso_setup_pages(&vdso64_start, &vdso64_end);
 
smp_wmb();
vdso_ready = 1;
-- 
2.25.0



[PATCH v1 06/30] powerpc/vdso: Refactor 32 bits and 64 bits pages setup

2020-09-27 Thread Christophe Leroy
The setup of the VDSO pages is identical for the 32-bit VDSO and
the 64-bit VDSO.

Refactor that setup.

And use &vdsoXX_start, which is a synonym of vdsoXX_kbase.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 39 +++---
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index d2c08f5de587..d129d7ee006d 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -678,10 +678,26 @@ int vdso_getcpu_init(void)
 early_initcall(vdso_getcpu_init);
 #endif
 
-static int __init vdso_init(void)
+static struct page ** __init vdso_setup_pages(void *start, void *end)
 {
int i;
+   struct page **pagelist;
+   int pages = (end - start) >> PAGE_SHIFT;
+
+   pagelist = kcalloc(pages + 1, sizeof(struct page *), GFP_KERNEL);
+   if (!pagelist)
+   panic("%s: Cannot allocate page list for VDSO", __func__);
+
+   for (i = 0; i < pages; i++)
+   pagelist[i] = virt_to_page(start + i * PAGE_SIZE);
+
+   pagelist[i] = virt_to_page(vdso_data);
+
+   return pagelist;
+}
 
+static int __init vdso_init(void)
+{
 #ifdef CONFIG_PPC64
/*
 * Fill up the "systemcfg" stuff for backward compatibility
@@ -742,28 +758,11 @@ static int __init vdso_init(void)
}
 
 #ifdef CONFIG_VDSO32
-   /* Make sure pages are in the correct state */
-   vdso32_pagelist = kcalloc(vdso32_pages + 1, sizeof(struct page *),
- GFP_KERNEL);
-   BUG_ON(vdso32_pagelist == NULL);
-   for (i = 0; i < vdso32_pages; i++) {
-   struct page *pg = virt_to_page(vdso32_kbase + i*PAGE_SIZE);
-
-   vdso32_pagelist[i] = pg;
-   }
-   vdso32_pagelist[i++] = virt_to_page(vdso_data);
+   vdso32_pagelist = vdso_setup_pages(&vdso32_start, &vdso32_end);
 #endif
 
 #ifdef CONFIG_PPC64
-   vdso64_pagelist = kcalloc(vdso64_pages + 1, sizeof(struct page *),
- GFP_KERNEL);
-   BUG_ON(vdso64_pagelist == NULL);
-   for (i = 0; i < vdso64_pages; i++) {
-   struct page *pg = virt_to_page(vdso64_kbase + i*PAGE_SIZE);
-
-   vdso64_pagelist[i] = pg;
-   }
-   vdso64_pagelist[i++] = virt_to_page(vdso_data);
+   vdso64_pagelist = vdso_setup_pages(&vdso64_start, &vdso64_end);
 #endif /* CONFIG_PPC64 */
 
smp_wmb();
-- 
2.25.0



[PATCH v1 09/30] powerpc/vdso: Simplify arch_setup_additional_pages() exit

2020-09-27 Thread Christophe Leroy
To simplify the exit path of arch_setup_additional_pages(), rename
it to __arch_setup_additional_pages() and create a caller
arch_setup_additional_pages() which does the locking.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 40 --
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 448ecaa27ac5..a976c5e4a7ac 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -122,7 +122,7 @@ struct lib64_elfinfo
  * This is called from binfmt_elf, we create the special vma for the
  * vDSO and insert it into the mm struct tree
  */
-int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+static int __arch_setup_additional_pages(struct linux_binprm *bprm, int 
uses_interp)
 {
struct mm_struct *mm = current->mm;
struct page **vdso_pagelist;
@@ -130,9 +130,6 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
unsigned long vdso_base;
int rc;
 
-   if (!vdso_ready)
-   return 0;
-
if (is_32bit_task()) {
vdso_pagelist = vdso32_pagelist;
	vdso_size = &vdso32_end - &vdso32_start;
@@ -148,8 +145,6 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, 
int uses_interp)
vdso_base = 0;
}
 
-   current->mm->context.vdso_base = 0;
-
/* Add a page to the vdso size for the data page */
vdso_size += PAGE_SIZE;
 
@@ -159,15 +154,11 @@ int arch_setup_additional_pages(struct linux_binprm 
*bprm, int uses_interp)
 * and end up putting it elsewhere.
 * Add enough to the size so that the result can be aligned.
 */
-   if (mmap_write_lock_killable(mm))
-   return -EINTR;
vdso_base = get_unmapped_area(NULL, vdso_base,
  vdso_size + ((VDSO_ALIGNMENT - 1) & PAGE_MASK),
  0, 0);
-   if (IS_ERR_VALUE(vdso_base)) {
-   rc = vdso_base;
-   goto fail_mmapsem;
-   }
+   if (IS_ERR_VALUE(vdso_base))
+   return vdso_base;
 
/* Add required alignment. */
vdso_base = ALIGN(vdso_base, VDSO_ALIGNMENT);
@@ -193,15 +184,26 @@ int arch_setup_additional_pages(struct linux_binprm 
*bprm, int uses_interp)
 VM_READ|VM_EXEC|
 VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
 vdso_pagelist);
-   if (rc) {
-   current->mm->context.vdso_base = 0;
-   goto fail_mmapsem;
-   }
+   return rc;
+}
 
-   mmap_write_unlock(mm);
-   return 0;
+int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+{
+   struct mm_struct *mm = current->mm;
+   int rc;
+
+   mm->context.vdso_base = 0;
+
+   if (!vdso_ready)
+   return 0;
+
+   if (mmap_write_lock_killable(mm))
+   return -EINTR;
+
+   rc = __arch_setup_additional_pages(bprm, uses_interp);
+   if (rc)
+   mm->context.vdso_base = 0;
 
- fail_mmapsem:
mmap_write_unlock(mm);
return rc;
 }
-- 
2.25.0



[PATCH v1 05/30] powerpc/vdso: Remove NULL termination element in vdso_pagelist

2020-09-27 Thread Christophe Leroy
No need for a NULL last element in the pagelists;
install_special_mapping() knows how long the list is.

Remove that element.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index dfaa4be258d2..d2c08f5de587 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -743,7 +743,7 @@ static int __init vdso_init(void)
 
 #ifdef CONFIG_VDSO32
/* Make sure pages are in the correct state */
-   vdso32_pagelist = kcalloc(vdso32_pages + 2, sizeof(struct page *),
+   vdso32_pagelist = kcalloc(vdso32_pages + 1, sizeof(struct page *),
  GFP_KERNEL);
BUG_ON(vdso32_pagelist == NULL);
for (i = 0; i < vdso32_pages; i++) {
@@ -752,11 +752,10 @@ static int __init vdso_init(void)
vdso32_pagelist[i] = pg;
}
vdso32_pagelist[i++] = virt_to_page(vdso_data);
-   vdso32_pagelist[i] = NULL;
 #endif
 
 #ifdef CONFIG_PPC64
-   vdso64_pagelist = kcalloc(vdso64_pages + 2, sizeof(struct page *),
+   vdso64_pagelist = kcalloc(vdso64_pages + 1, sizeof(struct page *),
  GFP_KERNEL);
BUG_ON(vdso64_pagelist == NULL);
for (i = 0; i < vdso64_pages; i++) {
@@ -765,7 +764,6 @@ static int __init vdso_init(void)
vdso64_pagelist[i] = pg;
}
vdso64_pagelist[i++] = virt_to_page(vdso_data);
-   vdso64_pagelist[i] = NULL;
 #endif /* CONFIG_PPC64 */
 
smp_wmb();
-- 
2.25.0



[PATCH v1 01/30] powerpc/vdso: Stripped VDSO is not needed, don't build it

2020-09-27 Thread Christophe Leroy
Since commit 24b659a13866 ("powerpc: Use unstripped VDSO image for
more accurate profiling data"), only the unstripped VDSO image
has been used.

Partially revert commit 8150caad0226 ("[POWERPC] powerpc vDSO: install
unstripped copies on disk") to avoid building the stripped version.

The unstripped version in $(MODLIB)/vdso/ is no longer required
either, as it is the one embedded in the kernel image.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/Makefile   |  9 -
 arch/powerpc/kernel/vdso32/Makefile | 19 ++-
 arch/powerpc/kernel/vdso64/Makefile | 19 ++-
 3 files changed, 4 insertions(+), 43 deletions(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 3e8da9cf2eb9..4f932044939e 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -407,15 +407,6 @@ PHONY += install
 install:
$(Q)$(MAKE) $(build)=$(boot) install
 
-PHONY += vdso_install
-vdso_install:
-ifdef CONFIG_PPC64
-   $(Q)$(MAKE) $(build)=arch/$(ARCH)/kernel/vdso64 $@
-endif
-ifdef CONFIG_VDSO32
-   $(Q)$(MAKE) $(build)=arch/$(ARCH)/kernel/vdso32 $@
-endif
-
 archclean:
$(Q)$(MAKE) $(clean)=$(boot)
 
diff --git a/arch/powerpc/kernel/vdso32/Makefile 
b/arch/powerpc/kernel/vdso32/Makefile
index b46c21ed9316..0923e5f10257 100644
--- a/arch/powerpc/kernel/vdso32/Makefile
+++ b/arch/powerpc/kernel/vdso32/Makefile
@@ -34,7 +34,7 @@ CC32FLAGS += -m32
 KBUILD_CFLAGS := $(filter-out -mcmodel=medium,$(KBUILD_CFLAGS))
 endif
 
-targets := $(obj-vdso32) vdso32.so vdso32.so.dbg
+targets := $(obj-vdso32) vdso32.so.dbg
 obj-vdso32 := $(addprefix $(obj)/, $(obj-vdso32))
 
 GCOV_PROFILE := n
@@ -51,17 +51,12 @@ extra-y += vdso32.lds
 CPPFLAGS_vdso32.lds += -P -C -Upowerpc
 
 # Force dependency (incbin is bad)
-$(obj)/vdso32_wrapper.o : $(obj)/vdso32.so
+$(obj)/vdso32_wrapper.o : $(obj)/vdso32.so.dbg
 
 # link rule for the .so file, .lds has to be first
 $(obj)/vdso32.so.dbg: $(src)/vdso32.lds $(obj-vdso32) $(obj)/vgettimeofday.o 
FORCE
$(call if_changed,vdso32ld_and_check)
 
-# strip rule for the .so file
-$(obj)/%.so: OBJCOPYFLAGS := -S
-$(obj)/%.so: $(obj)/%.so.dbg FORCE
-   $(call if_changed,objcopy)
-
 # assembly rules for the .S files
 $(obj-vdso32): %.o: %.S FORCE
$(call if_changed_dep,vdso32as)
@@ -75,13 +70,3 @@ quiet_cmd_vdso32as = VDSO32A $@
   cmd_vdso32as = $(VDSOCC) $(a_flags) $(CC32FLAGS) -c -o $@ $<
 quiet_cmd_vdso32cc = VDSO32C $@
   cmd_vdso32cc = $(VDSOCC) $(c_flags) $(CC32FLAGS) -c -o $@ $<
-
-# install commands for the unstripped file
-quiet_cmd_vdso_install = INSTALL $@
-  cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
-
-vdso32.so: $(obj)/vdso32.so.dbg
-   @mkdir -p $(MODLIB)/vdso
-   $(call cmd,vdso_install)
-
-vdso_install: vdso32.so
diff --git a/arch/powerpc/kernel/vdso64/Makefile 
b/arch/powerpc/kernel/vdso64/Makefile
index b8eeebea12c3..99752f27df3f 100644
--- a/arch/powerpc/kernel/vdso64/Makefile
+++ b/arch/powerpc/kernel/vdso64/Makefile
@@ -17,7 +17,7 @@ endif
 
 # Build rules
 
-targets := $(obj-vdso64) vdso64.so vdso64.so.dbg
+targets := $(obj-vdso64) vdso64.so.dbg
 obj-vdso64 := $(addprefix $(obj)/, $(obj-vdso64))
 
 GCOV_PROFILE := n
@@ -36,27 +36,12 @@ CPPFLAGS_vdso64.lds += -P -C -U$(ARCH)
 $(obj)/vgettimeofday.o: %.o: %.c FORCE
 
 # Force dependency (incbin is bad)
-$(obj)/vdso64_wrapper.o : $(obj)/vdso64.so
+$(obj)/vdso64_wrapper.o : $(obj)/vdso64.so.dbg
 
 # link rule for the .so file, .lds has to be first
 $(obj)/vdso64.so.dbg: $(src)/vdso64.lds $(obj-vdso64) $(obj)/vgettimeofday.o 
FORCE
$(call if_changed,vdso64ld_and_check)
 
-# strip rule for the .so file
-$(obj)/%.so: OBJCOPYFLAGS := -S
-$(obj)/%.so: $(obj)/%.so.dbg FORCE
-   $(call if_changed,objcopy)
-
 # actual build commands
 quiet_cmd_vdso64ld_and_check = VDSO64L $@
   cmd_vdso64ld_and_check = $(CC) $(c_flags) -o $@ -Wl,-T$(filter %.lds,$^) 
$(filter %.o,$^); $(cmd_vdso_check)
-
-# install commands for the unstripped file
-quiet_cmd_vdso_install = INSTALL $@
-  cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
-
-vdso64.so: $(obj)/vdso64.so.dbg
-   @mkdir -p $(MODLIB)/vdso
-   $(call cmd,vdso_install)
-
-vdso_install: vdso64.so
-- 
2.25.0



[PATCH v1 03/30] powerpc/vdso: Rename syscall_map_32/64 to simplify vdso_setup_syscall_map()

2020-09-27 Thread Christophe Leroy
Today vdso_data structure has:
- syscall_map_32[] and syscall_map_64[] on PPC64
- syscall_map_32[] on PPC32

On PPC32, syscall_map_32[] is populated using sys_call_table[].

On PPC64, syscall_map_64[] is populated using sys_call_table[]
and syscall_map_32[] is populated using compat_sys_call_table[].

To simplify vdso_setup_syscall_map(),
- On PPC32 rename syscall_map_32[] into syscall_map[],
- On PPC64 rename syscall_map_64[] into syscall_map[],
- On PPC64 rename syscall_map_32[] into compat_syscall_map[].

That way, syscall_map[] gets populated using sys_call_table[] and
compat_syscall_map[] gets populated using compat_sys_call_table[].

Also define an empty compat_syscall_map[] on PPC32 to avoid ifdefs.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/vdso_datapage.h |  7 ---
 arch/powerpc/kernel/asm-offsets.c|  6 --
 arch/powerpc/kernel/vdso.c   | 12 ++--
 3 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/include/asm/vdso_datapage.h 
b/arch/powerpc/include/asm/vdso_datapage.h
index c4d320504d26..3d996db05acd 100644
--- a/arch/powerpc/include/asm/vdso_datapage.h
+++ b/arch/powerpc/include/asm/vdso_datapage.h
@@ -79,8 +79,8 @@ struct vdso_arch_data {
__u32 icache_block_size;/* L1 i-cache block size */
__u32 dcache_log_block_size;/* L1 d-cache log block size */
__u32 icache_log_block_size;/* L1 i-cache log block size */
-   __u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls  */
-   __u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
+   __u32 syscall_map[SYSCALL_MAP_SIZE];/* Map of syscalls  */
+   __u32 compat_syscall_map[SYSCALL_MAP_SIZE]; /* Map of compat syscalls */
 
struct vdso_data data[CS_BASES];
 };
@@ -92,7 +92,8 @@ struct vdso_arch_data {
  */
 struct vdso_arch_data {
__u64 tb_ticks_per_sec; /* Timebase tics / sec  0x38 */
-   __u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
+   __u32 syscall_map[SYSCALL_MAP_SIZE]; /* Map of syscalls */
+   __u32 compat_syscall_map[0];/* No compat syscalls on PPC32 */
struct vdso_data data[CS_BASES];
 };
 
diff --git a/arch/powerpc/kernel/asm-offsets.c 
b/arch/powerpc/kernel/asm-offsets.c
index 684260186dbf..e48043087208 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -399,13 +399,15 @@ int main(void)
/* datapage offsets for use by vdso */
OFFSET(VDSO_DATA_OFFSET, vdso_arch_data, data);
OFFSET(CFG_TB_TICKS_PER_SEC, vdso_arch_data, tb_ticks_per_sec);
-   OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, syscall_map_32);
 #ifdef CONFIG_PPC64
OFFSET(CFG_ICACHE_BLOCKSZ, vdso_arch_data, icache_block_size);
OFFSET(CFG_DCACHE_BLOCKSZ, vdso_arch_data, dcache_block_size);
OFFSET(CFG_ICACHE_LOGBLOCKSZ, vdso_arch_data, icache_log_block_size);
OFFSET(CFG_DCACHE_LOGBLOCKSZ, vdso_arch_data, dcache_log_block_size);
-   OFFSET(CFG_SYSCALL_MAP64, vdso_arch_data, syscall_map_64);
+   OFFSET(CFG_SYSCALL_MAP64, vdso_arch_data, syscall_map);
+   OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, compat_syscall_map);
+#else
+   OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, syscall_map);
 #endif
 
 #ifdef CONFIG_BUG
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index b0332c609104..6d106fcafb9e 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -643,19 +643,11 @@ static void __init vdso_setup_syscall_map(void)
unsigned int i;
 
for (i = 0; i < NR_syscalls; i++) {
-#ifdef CONFIG_PPC64
		if (sys_call_table[i] != (unsigned long)&sys_ni_syscall)
-			vdso_data->syscall_map_64[i >> 5] |=
-				0x80000000UL >> (i & 0x1f);
+			vdso_data->syscall_map[i >> 5] |= 0x80000000UL >> (i & 0x1f);
		if (IS_ENABLED(CONFIG_COMPAT) &&
		    compat_sys_call_table[i] != (unsigned long)&sys_ni_syscall)
-			vdso_data->syscall_map_32[i >> 5] |=
-				0x80000000UL >> (i & 0x1f);
-#else /* CONFIG_PPC64 */
-		if (sys_call_table[i] != (unsigned long)&sys_ni_syscall)
-			vdso_data->syscall_map_32[i >> 5] |=
-				0x80000000UL >> (i & 0x1f);
-#endif /* CONFIG_PPC64 */
+			vdso_data->compat_syscall_map[i >> 5] |= 0x80000000UL >> (i & 0x1f);
}
 }
 
-- 
2.25.0



[PATCH v1 00/30] Modernise VDSO setup

2020-09-27 Thread Christophe Leroy
This series modernises the setup of VDSO:
- Switch to using _install_special_mapping() which has replaced 
install_special_mapping()
- Move datapage in front of text like most other architectures to simplify its 
localisation
- Perform link time symbol resolution instead of runtime

This leads to a huge size reduction of vdso.c

Replaces the two following series:
 [v1,1/9] powerpc/vdso: Remove BUG_ON() in vdso_init()
 [v2,1/5] powerpc/vdso: Remove DBG()

This series is based on top of the series converting to the C generic VDSO.
It is functionally independent, but some trivial merge conflicts
occur in some files. I may rebase it on top of mainline if the
C generic VDSO series cannot be merged soon.

Christophe Leroy (30):
  powerpc/vdso: Stripped VDSO is not needed, don't build it
  powerpc/vdso: Add missing includes and clean vdso_setup_syscall_map()
  powerpc/vdso: Rename syscall_map_32/64 to simplify
vdso_setup_syscall_map()
  powerpc/vdso: Remove get_page() in vdso_pagelist initialization
  powerpc/vdso: Remove NULL termination element in vdso_pagelist
  powerpc/vdso: Refactor 32 bits and 64 bits pages setup
  powerpc/vdso: Remove unnecessary ifdefs in vdso_pagelist
initialization
  powerpc/vdso: Use VDSO size in arch_setup_additional_pages()
  powerpc/vdso: Simplify arch_setup_additional_pages() exit
  powerpc/vdso: Move to _install_special_mapping() and remove
arch_vma_name()
  powerpc/vdso: Provide vdso_remap()
  powerpc/vdso: Replace vdso_base by vdso
  powerpc/vdso: Move vdso datapage up front
  powerpc/vdso: Simplify __get_datapage()
  powerpc/vdso: Remove unused \tmp param in __get_datapage()
  powerpc/vdso: Retrieve sigtramp offsets at buildtime
  powerpc/vdso: Use builtin symbols to locate fixup section
  powerpc/vdso: Merge __kernel_sync_dicache_p5() into
__kernel_sync_dicache()
  powerpc/vdso: Remove vdso32_pages and vdso64_pages
  powerpc/vdso: Remove __kernel_datapage_offset
  powerpc/vdso: Remove runtime generated sigtramp offsets
  powerpc/vdso: Remove vdso_patches[] and associated functions
  powerpc/vdso: Remove unused text member in struct lib32/64_elfinfo
  powerpc/vdso: Remove symbol section information in struct
lib32/64_elfinfo
  powerpc/vdso: Remove lib32_elfinfo and lib64_elfinfo
  powerpc/vdso: Remove vdso_setup()
  powerpc/vdso: Remove vdso_ready
  powerpc/vdso: Remove DBG()
  powerpc/vdso: Remove VDSO32_LBASE and VDSO64_LBASE
  powerpc/vdso: Cleanup vdso.h

 arch/powerpc/Makefile |  24 +-
 arch/powerpc/include/asm/book3s/32/mmu-hash.h |   2 +-
 arch/powerpc/include/asm/book3s/64/mmu.h  |   2 +-
 arch/powerpc/include/asm/elf.h|   2 +-
 arch/powerpc/include/asm/mm-arch-hooks.h  |  25 -
 arch/powerpc/include/asm/mmu_context.h|   6 +-
 arch/powerpc/include/asm/nohash/32/mmu-40x.h  |   2 +-
 arch/powerpc/include/asm/nohash/32/mmu-44x.h  |   2 +-
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h  |   2 +-
 arch/powerpc/include/asm/nohash/mmu-book3e.h  |   2 +-
 arch/powerpc/include/asm/vdso.h   |  29 +-
 arch/powerpc/include/asm/vdso/gettimeofday.h  |   4 +-
 arch/powerpc/include/asm/vdso_datapage.h  |  17 +-
 arch/powerpc/kernel/asm-offsets.c |   6 +-
 arch/powerpc/kernel/signal_32.c   |   8 +-
 arch/powerpc/kernel/signal_64.c   |   4 +-
 arch/powerpc/kernel/vdso.c| 682 +++---
 arch/powerpc/kernel/vdso32/Makefile   |  27 +-
 arch/powerpc/kernel/vdso32/cacheflush.S   |  19 +-
 arch/powerpc/kernel/vdso32/datapage.S |   7 +-
 .../powerpc/kernel/vdso32/gen_vdso_offsets.sh |  16 +
 arch/powerpc/kernel/vdso32/vdso32.lds.S   |  24 +-
 arch/powerpc/kernel/vdso64/Makefile   |  25 +-
 arch/powerpc/kernel/vdso64/cacheflush.S   |  18 +-
 arch/powerpc/kernel/vdso64/datapage.S |   7 +-
 .../powerpc/kernel/vdso64/gen_vdso_offsets.sh |  16 +
 arch/powerpc/kernel/vdso64/vdso64.lds.S   |  23 +-
 arch/powerpc/perf/callchain_32.c  |   8 +-
 arch/powerpc/perf/callchain_64.c  |   4 +-
 29 files changed, 267 insertions(+), 746 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/mm-arch-hooks.h
 create mode 100755 arch/powerpc/kernel/vdso32/gen_vdso_offsets.sh
 create mode 100755 arch/powerpc/kernel/vdso64/gen_vdso_offsets.sh

-- 
2.25.0



[PATCH v1 02/30] powerpc/vdso: Add missing includes and clean vdso_setup_syscall_map()

2020-09-27 Thread Christophe Leroy
Instead of including extern references locally in
vdso_setup_syscall_map(), add the missing headers.

sys_ni_syscall() being a function, cast its address to
an unsigned long instead of declaring it as a fake
unsigned long object.

At the same time, remove a comment which paraphrases the
function name.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 17 +
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 23208a051af5..b0332c609104 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -17,8 +17,10 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
+#include 
 #include 
 #include 
 #include 
@@ -639,24 +641,18 @@ static __init int vdso_setup(void)
 static void __init vdso_setup_syscall_map(void)
 {
unsigned int i;
-   extern unsigned long *sys_call_table;
-#ifdef CONFIG_PPC64
-   extern unsigned long *compat_sys_call_table;
-#endif
-   extern unsigned long sys_ni_syscall;
-
 
for (i = 0; i < NR_syscalls; i++) {
 #ifdef CONFIG_PPC64
-   if (sys_call_table[i] != sys_ni_syscall)
+   if (sys_call_table[i] != (unsigned long)&sys_ni_syscall)
vdso_data->syscall_map_64[i >> 5] |=
0x80000000UL >> (i & 0x1f);
if (IS_ENABLED(CONFIG_COMPAT) &&
-   compat_sys_call_table[i] != sys_ni_syscall)
+   compat_sys_call_table[i] != (unsigned long)&sys_ni_syscall)
vdso_data->syscall_map_32[i >> 5] |=
0x80000000UL >> (i & 0x1f);
 #else /* CONFIG_PPC64 */
-   if (sys_call_table[i] != sys_ni_syscall)
+   if (sys_call_table[i] != (unsigned long)&sys_ni_syscall)
vdso_data->syscall_map_32[i >> 5] |=
0x80000000UL >> (i & 0x1f);
 #endif /* CONFIG_PPC64 */
@@ -738,9 +734,6 @@ static int __init vdso_init(void)
 #endif
 
 
-   /*
-* Setup the syscall map in the vDOS
-*/
vdso_setup_syscall_map();
 
/*
-- 
2.25.0



[PATCH v1 04/30] powerpc/vdso: Remove get_page() in vdso_pagelist initialization

2020-09-27 Thread Christophe Leroy
Partly copied from commit 16fb1a9bec61 ("arm64: vdso: clean up
vdso_pagelist initialization").

No need to get_page() the vdso text/data - these are part of the
kernel image.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/vdso.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 6d106fcafb9e..dfaa4be258d2 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -748,7 +748,7 @@ static int __init vdso_init(void)
BUG_ON(vdso32_pagelist == NULL);
for (i = 0; i < vdso32_pages; i++) {
struct page *pg = virt_to_page(vdso32_kbase + i*PAGE_SIZE);
-   get_page(pg);
+
vdso32_pagelist[i] = pg;
}
vdso32_pagelist[i++] = virt_to_page(vdso_data);
@@ -761,15 +761,13 @@ static int __init vdso_init(void)
BUG_ON(vdso64_pagelist == NULL);
for (i = 0; i < vdso64_pages; i++) {
struct page *pg = virt_to_page(vdso64_kbase + i*PAGE_SIZE);
-   get_page(pg);
+
vdso64_pagelist[i] = pg;
}
vdso64_pagelist[i++] = virt_to_page(vdso_data);
vdso64_pagelist[i] = NULL;
 #endif /* CONFIG_PPC64 */
 
-   get_page(virt_to_page(vdso_data));
-
smp_wmb();
vdso_ready = 1;
 
-- 
2.25.0



Re: [PATCH v8 2/8] powerpc/vdso: Remove __kernel_datapage_offset and simplify __get_datapage()

2020-09-27 Thread Christophe Leroy




On 21/09/2020 at 13:26, Will Deacon wrote:

On Fri, Aug 28, 2020 at 12:14:28PM +1000, Michael Ellerman wrote:

Dmitry Safonov <0x7f454...@gmail.com> writes:

On Wed, 26 Aug 2020 at 15:39, Michael Ellerman  wrote:

Christophe Leroy  writes:
We added a test for vdso unmap recently because it happened to trigger a
KUAP failure, and someone actually hit it & reported it.


You're right, CRIU cares much more about moving the vDSO.
It's done for each restoree, and as on most setups the vDSO is premapped and
used by the application - it's actively tested.
Speaking about vDSO unmap - that's a concern only for heterogeneous C/R,
i.e. when an application is migrated from a system that uses the vDSO to one
which doesn't - a much rarer scenario.
(for arm it's !CONFIG_VDSO, for x86 it's `vdso=0` boot parameter)


Ah OK that explains it.

The case we hit of VDSO unmapping was some strange "library OS" thing
which had explicitly unmapped the VDSO, so also very rare.


Looking at the code, it seems quite easy to provide/maintain .close() for
vm_special_mapping. A bit harder to add a test from CRIU side
(as glibc won't know on restore that it can't use vdso anymore),
but totally not impossible.


Running that test on arm64 segfaults:

   # ./sigreturn_vdso
   VDSO is at 0x8191f000-0x8191 (4096 bytes)
   Signal delivered OK with VDSO mapped
   VDSO moved to 0x8191a000-0x8191afff (4096 bytes)
   Signal delivered OK with VDSO moved
   Unmapped VDSO
   Remapped the stack executable
   [   48.556191] potentially unexpected fatal signal 11.
   [   48.556752] CPU: 0 PID: 140 Comm: sigreturn_vdso Not tainted 
5.9.0-rc2-00057-g2ac69819ba9e #190
   [   48.556990] Hardware name: linux,dummy-virt (DT)
   [   48.557336] pstate: 60001000 (nZCv daif -PAN -UAO BTYPE=--)
   [   48.557475] pc : 8191a7bc
   [   48.557603] lr : 8191a7bc
   [   48.557697] sp : c13c9e90
   [   48.557873] x29: c13cb0e0 x28: 
   [   48.558201] x27:  x26: 
   [   48.558337] x25:  x24: 
   [   48.558754] x23:  x22: 
   [   48.558893] x21: 004009b0 x20: 
   [   48.559046] x19: 00400ff0 x18: 
   [   48.559180] x17: 817da300 x16: 00412010
   [   48.559312] x15:  x14: 001c
   [   48.559443] x13: 656c626174756365 x12: 7865206b63617473
   [   48.559625] x11: 0003 x10: 0101010101010101
   [   48.559828] x9 : 818afda8 x8 : 0081
   [   48.559973] x7 : 6174732065687420 x6 : 64657070616d6552
   [   48.560115] x5 : 0e0388bd x4 : 0040135d
   [   48.560270] x3 :  x2 : 0001
   [   48.560412] x1 : 0003 x0 : 004120b8
   Segmentation fault
   #

So I think we need to keep the unmap hook. Maybe it should be handled by
the special_mapping stuff generically.


I'll cook a patch for vm_special_mapping if you don't mind :-)


That would be great, thanks!


I lost track of this one. Is there a patch kicking around to resolve this,
or is the segfault expected behaviour?



IIUC Dmitry said he would cook a patch. I have not seen any patch yet.

AFAICS, among the architectures having VDSO sigreturn trampolines, only SH, X86 and POWERPC provide
an alternative trampoline on the stack when the VDSO is not there.


All other architectures that have a VDSO simply don't expect the VDSO not to be mapped.

As nowadays stacks are mapped non-executable, getting a segfault is expected behaviour.
However, I think we should really make it cleaner. Today it segfaults because it is still pointing
to the VDSO trampoline that has been unmapped. But should the user map some other code at the same
address, we'll run into the weeds on signal return instead of segfaulting.


So VDSO unmapping should really be properly managed, the reference should be properly cleared in 
order to segfault in a controllable manner.


Only powerpc has a hook to properly clear the VDSO pointer when VDSO is 
unmapped.

Christophe
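A generic hook along the lines discussed above could look roughly like this (a hypothetical kernel sketch, not a tested or compiled patch; the hook name and the powerpc context field are assumptions based on the v5.9-era structures):

```c
/* Sketch only: extend the generic special-mapping descriptor with an
 * optional close() hook, invoked when the special VMA is unmapped. */
struct vm_special_mapping {
	const char *name;
	struct page **pages;
	vm_fault_t (*fault)(const struct vm_special_mapping *sm,
			    struct vm_area_struct *vma, struct vm_fault *vmf);
	int (*mremap)(const struct vm_special_mapping *sm,
		      struct vm_area_struct *new_vma);
	void (*close)(const struct vm_special_mapping *sm,
		      struct vm_area_struct *vma);	/* new hook */
};

/* powerpc (and other architectures) could then clear the cached VDSO
 * base, so a later signal delivery fails in a controlled way instead
 * of jumping into whatever got mapped at the old address. */
static void vdso_close(const struct vm_special_mapping *sm,
		       struct vm_area_struct *vma)
{
	struct mm_struct *mm = vma->vm_mm;

	mm->context.vdso_base = 0;
}
```

This would move the unmap handling out of the powerpc-specific arch_unmap() path and into the generic special-mapping code, as suggested.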


Re: [RFC PATCH 18/18] powerpc/powermac: Move PHB discovery

2020-09-27 Thread Christophe Leroy




On 24/09/2020 at 08:38, Oliver O'Halloran wrote:

Signed-off-by: Oliver O'Halloran 


Tested-by: Christophe Leroy 

This series is a really good step forward towards eliminating the need for
early ioremap() support, thanks.

Tested with pmac32_defconfig on QEMU MAC99.

Before the series we have 9000 kbytes mapped as early ioremap

ioremap() called early from pmac_feature_init+0xc8/0xac8. Use early_ioremap() 
instead
ioremap() called early from probe_one_macio+0x170/0x2a8. Use early_ioremap() 
instead
ioremap() called early from udbg_scc_init+0x1d8/0x494. Use early_ioremap() 
instead
ioremap() called early from find_via_cuda+0xa8/0x3f8. Use early_ioremap() 
instead
ioremap() called early from pmac_pci_init+0x214/0x778. Use early_ioremap() 
instead
ioremap() called early from pmac_pci_init+0x228/0x778. Use early_ioremap() 
instead
ioremap() called early from pci_process_bridge_OF_ranges+0x158/0x2d0. Use 
early_ioremap() instead
ioremap() called early from pmac_setup_arch+0x110/0x298. Use early_ioremap() 
instead
ioremap() called early from pmac_nvram_init+0x144/0x534. Use early_ioremap() 
instead
  * 0xfeb36000..0xff400000  : early ioremap
  * 0xf1000000..0xfeb36000  : vmalloc & ioremap

After the series we have 800 kbytes mapped as early ioremap

ioremap() called early from pmac_feature_init+0xc8/0xac8. Use early_ioremap() 
instead
ioremap() called early from probe_one_macio+0x170/0x2a8. Use early_ioremap() 
instead
ioremap() called early from udbg_scc_init+0x1d8/0x494. Use early_ioremap() 
instead
ioremap() called early from find_via_cuda+0xa8/0x3f8. Use early_ioremap() 
instead
ioremap() called early from pmac_setup_arch+0x10c/0x294. Use early_ioremap() 
instead
ioremap() called early from pmac_nvram_init+0x144/0x534. Use early_ioremap() 
instead
  * 0xff338000..0xff400000  : early ioremap
  * 0xf1000000..0xff338000  : vmalloc & ioremap

Christophe



---
compile tested with pmac32_defconfig and g5_defconfig
---
  arch/powerpc/platforms/powermac/setup.c | 4 +---
  1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/powermac/setup.c 
b/arch/powerpc/platforms/powermac/setup.c
index f002b0fa69b8..0f8669139a21 100644
--- a/arch/powerpc/platforms/powermac/setup.c
+++ b/arch/powerpc/platforms/powermac/setup.c
@@ -298,9 +298,6 @@ static void __init pmac_setup_arch(void)
of_node_put(ic);
}
  
-	/* Lookup PCI hosts */

-   pmac_pci_init();
-
  #ifdef CONFIG_PPC32
ohare_init();
l2cr_init();
@@ -600,6 +597,7 @@ define_machine(powermac) {
.name   = "PowerMac",
.probe  = pmac_probe,
.setup_arch = pmac_setup_arch,
+   .discover_phbs  = pmac_pci_init,
.show_cpuinfo   = pmac_show_cpuinfo,
.init_IRQ   = pmac_pic_init,
.get_irq= NULL, /* changed later */



RE: [PATCH 1/5] Documentation: dt: binding: fsl: Add 'fsl,ippdexpcr1-alt-addr' property

2020-09-27 Thread Ran Wang
Hi Rob

Not sure whether you have missed this mail with my query.

Regards,
Ran

On Wednesday, September 23, 2020 2:44 PM Ran Wang wrote:
> 
> Hi Rob,
> 
> On Wednesday, September 23, 2020 10:33 AM, Rob Herring wrote:
> >
> > On Wed, Sep 16, 2020 at 04:18:27PM +0800, Ran Wang wrote:
> > > From: Biwen Li 
> > >
> > > The 'fsl,ippdexpcr1-alt-addr' property is used to handle an errata
> > > A-008646 on LS1021A
> > >
> > > Signed-off-by: Biwen Li 
> > > Signed-off-by: Ran Wang 
> > > ---
> > >  Documentation/devicetree/bindings/soc/fsl/rcpm.txt | 19
> > > +++
> > >  1 file changed, 19 insertions(+)
> > >
> > > diff --git a/Documentation/devicetree/bindings/soc/fsl/rcpm.txt
> > > b/Documentation/devicetree/bindings/soc/fsl/rcpm.txt
> > > index 5a33619..1be58a3 100644
> > > --- a/Documentation/devicetree/bindings/soc/fsl/rcpm.txt
> > > +++ b/Documentation/devicetree/bindings/soc/fsl/rcpm.txt
> > > @@ -34,6 +34,11 @@ Chassis Version Example Chips
> > >  Optional properties:
> > >   - little-endian : RCPM register block is Little Endian. Without it RCPM
> > > will be Big Endian (default case).
> > > + - fsl,ippdexpcr1-alt-addr : The property is related to a hardware issue
> > > +   on SoC LS1021A and only needed on SoC LS1021A.
> > > +   Must include 2 entries:
> > > +   The first entry must be a link to the SCFG device node.
> > > +   The 2nd entry must be offset of register IPPDEXPCR1 in SCFG.
> >
> > You don't need a DT change for this. You can find SCFG node by its
> > compatible string and then the offset should be known given this issue is
> only on 1 SoC.
> 
> Did you mean that the RCPM driver should just access the IPPDEXPCR1 shadowed
> register in SCFG directly, without fetching its offset info from DT?
> 
> Regards,
> Ran
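For reference, Rob's suggestion would look something like this in the RCPM driver (a hypothetical, uncompiled sketch; "fsl,ls1021a-scfg" is the SCFG compatible string, but the register offset used below is only an illustrative assumption, not the real IPPDEXPCR1 shadow offset):

```c
#include <linux/of.h>
#include <linux/of_address.h>

/* Sketch: locate SCFG by its compatible string instead of adding a new
 * 'fsl,ippdexpcr1-alt-addr' property to the device tree. */
static void __iomem *map_ippdexpcr1_alt(void)
{
	struct device_node *np;
	void __iomem *scfg;

	np = of_find_compatible_node(NULL, NULL, "fsl,ls1021a-scfg");
	if (!np)
		return NULL;

	scfg = of_iomap(np, 0);
	of_node_put(np);
	if (!scfg)
		return NULL;

	/* Offset of the IPPDEXPCR1 shadow within SCFG - an assumed value
	 * here; it can be hardcoded since the A-008646 erratum affects
	 * only the LS1021A. */
	return scfg + 0x51c;
}
```

Since the erratum is specific to one SoC, nothing in this lookup actually depends on new information from the device tree, which is the point Rob was making.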