Re: [PATCH 7/8] powerpc: Fix typos

2024-01-03 Thread Randy Dunlap



On 1/3/24 15:16, Bjorn Helgaas wrote:
> From: Bjorn Helgaas 
> 
> Fix typos, most reported by "codespell arch/powerpc".  Only touches
> comments, no code changes.
> 
> Signed-off-by: Bjorn Helgaas 
> Cc: Nicholas Piggin 
> Cc: Christophe Leroy 
> Cc: linuxppc-dev@lists.ozlabs.org
> ---
>  arch/powerpc/boot/Makefile   |  4 ++--
>  arch/powerpc/boot/dts/acadia.dts |  2 +-
>  arch/powerpc/boot/main.c |  2 +-
>  arch/powerpc/boot/ps3.c  |  2 +-
>  arch/powerpc/include/asm/io.h|  2 +-
>  arch/powerpc/include/asm/opal-api.h  |  4 ++--
>  arch/powerpc/include/asm/pmac_feature.h  |  2 +-
>  arch/powerpc/include/asm/uninorth.h  |  2 +-
>  arch/powerpc/include/uapi/asm/bootx.h|  2 +-
>  arch/powerpc/kernel/eeh_pe.c |  2 +-
>  arch/powerpc/kernel/fadump.c |  2 +-
>  arch/powerpc/kernel/misc_64.S|  4 ++--
>  arch/powerpc/kernel/process.c| 12 ++--
>  arch/powerpc/kernel/ptrace/ptrace-tm.c   |  2 +-
>  arch/powerpc/kernel/smp.c|  2 +-
>  arch/powerpc/kernel/sysfs.c  |  4 ++--
>  arch/powerpc/kvm/book3s_xive.c   |  2 +-
>  arch/powerpc/mm/cacheflush.c |  2 +-
>  arch/powerpc/mm/nohash/kaslr_booke.c |  2 +-
>  arch/powerpc/platforms/512x/mpc512x_shared.c |  2 +-
>  arch/powerpc/platforms/cell/spufs/sched.c|  2 +-
>  arch/powerpc/platforms/maple/pci.c   |  2 +-
>  arch/powerpc/platforms/powermac/pic.c|  2 +-
>  arch/powerpc/platforms/powermac/sleep.S  |  2 +-
>  arch/powerpc/platforms/powernv/pci-sriov.c   |  4 ++--
>  arch/powerpc/platforms/powernv/vas-window.c  |  2 +-
>  arch/powerpc/platforms/pseries/vas.c |  2 +-
>  arch/powerpc/sysdev/xive/common.c|  4 ++--
>  arch/powerpc/sysdev/xive/native.c|  2 +-
>  29 files changed, 40 insertions(+), 40 deletions(-)

Reviewed-by: Randy Dunlap 

Thanks.

-- 
#Randy


[PATCH 7/8] powerpc: Fix typos

2024-01-03 Thread Bjorn Helgaas
From: Bjorn Helgaas 

Fix typos, most reported by "codespell arch/powerpc".  Only touches
comments, no code changes.

Signed-off-by: Bjorn Helgaas 
Cc: Nicholas Piggin 
Cc: Christophe Leroy 
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/boot/Makefile   |  4 ++--
 arch/powerpc/boot/dts/acadia.dts |  2 +-
 arch/powerpc/boot/main.c |  2 +-
 arch/powerpc/boot/ps3.c  |  2 +-
 arch/powerpc/include/asm/io.h|  2 +-
 arch/powerpc/include/asm/opal-api.h  |  4 ++--
 arch/powerpc/include/asm/pmac_feature.h  |  2 +-
 arch/powerpc/include/asm/uninorth.h  |  2 +-
 arch/powerpc/include/uapi/asm/bootx.h|  2 +-
 arch/powerpc/kernel/eeh_pe.c |  2 +-
 arch/powerpc/kernel/fadump.c |  2 +-
 arch/powerpc/kernel/misc_64.S|  4 ++--
 arch/powerpc/kernel/process.c| 12 ++--
 arch/powerpc/kernel/ptrace/ptrace-tm.c   |  2 +-
 arch/powerpc/kernel/smp.c|  2 +-
 arch/powerpc/kernel/sysfs.c  |  4 ++--
 arch/powerpc/kvm/book3s_xive.c   |  2 +-
 arch/powerpc/mm/cacheflush.c |  2 +-
 arch/powerpc/mm/nohash/kaslr_booke.c |  2 +-
 arch/powerpc/platforms/512x/mpc512x_shared.c |  2 +-
 arch/powerpc/platforms/cell/spufs/sched.c|  2 +-
 arch/powerpc/platforms/maple/pci.c   |  2 +-
 arch/powerpc/platforms/powermac/pic.c|  2 +-
 arch/powerpc/platforms/powermac/sleep.S  |  2 +-
 arch/powerpc/platforms/powernv/pci-sriov.c   |  4 ++--
 arch/powerpc/platforms/powernv/vas-window.c  |  2 +-
 arch/powerpc/platforms/pseries/vas.c |  2 +-
 arch/powerpc/sysdev/xive/common.c|  4 ++--
 arch/powerpc/sysdev/xive/native.c|  2 +-
 29 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index 968aee2025b8..9c2b6e527ed1 100644
--- a/arch/powerpc/boot/Makefile
+++ b/arch/powerpc/boot/Makefile
@@ -108,8 +108,8 @@ DTC_FLAGS   ?= -p 1024
 # these files into the build dir, fix up any includes and ensure that dependent
 # files are copied in the right order.
 
-# these need to be seperate variables because they are copied out of different
-# directories in the kernel tree. Sure you COULd merge them, but it's a
+# these need to be separate variables because they are copied out of different
+# directories in the kernel tree. Sure you COULD merge them, but it's a
 # cure-is-worse-than-disease situation.
 zlib-decomp-$(CONFIG_KERNEL_GZIP) := decompress_inflate.c
 zlib-$(CONFIG_KERNEL_GZIP) := inffast.c inflate.c inftrees.c
diff --git a/arch/powerpc/boot/dts/acadia.dts b/arch/powerpc/boot/dts/acadia.dts
index deb52e41ab84..5fedda811378 100644
--- a/arch/powerpc/boot/dts/acadia.dts
+++ b/arch/powerpc/boot/dts/acadia.dts
@@ -172,7 +172,7 @@ ieee1588@ef602800 {
reg = <0xef602800 0x60>;
interrupt-parent = <>;
interrupts = <0x4 0x4>;
-   /* This thing is a bit weird.  It has it's own UIC
+   /* This thing is a bit weird.  It has its own UIC
 * that it uses to generate snapshot triggers.  We
 * don't really support this device yet, and it needs
 * work to figure this out.
diff --git a/arch/powerpc/boot/main.c b/arch/powerpc/boot/main.c
index cae31a6e8f02..2c0e2a1cab01 100644
--- a/arch/powerpc/boot/main.c
+++ b/arch/powerpc/boot/main.c
@@ -188,7 +188,7 @@ static inline void prep_esm_blob(struct addr_range vmlinux, void *chosen) { }
 
 /* A buffer that may be edited by tools operating on a zImage binary so as to
  * edit the command line passed to vmlinux (by setting /chosen/bootargs).
- * The buffer is put in it's own section so that tools may locate it easier.
+ * The buffer is put in its own section so that tools may locate it easier.
  */
 static char cmdline[BOOT_COMMAND_LINE_SIZE]
__attribute__((__section__("__builtin_cmdline")));
diff --git a/arch/powerpc/boot/ps3.c b/arch/powerpc/boot/ps3.c
index f157717ae814..89ff46b8b225 100644
--- a/arch/powerpc/boot/ps3.c
+++ b/arch/powerpc/boot/ps3.c
@@ -25,7 +25,7 @@ BSS_STACK(4096);
 
 /* A buffer that may be edited by tools operating on a zImage binary so as to
  * edit the command line passed to vmlinux (by setting /chosen/bootargs).
- * The buffer is put in it's own section so that tools may locate it easier.
+ * The buffer is put in its own section so that tools may locate it easier.
  */
 
 static char cmdline[BOOT_COMMAND_LINE_SIZE]
diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
index 5220274a6277..7fb001ab3109 100644
--- a/arch/powerpc/include/asm/io.h
+++ b/arch/powerpc/include/asm/io.h
@@ -989,7 +989,7 @@ static inline phys_addr_t page_to_phys(struct page *page)
 

Re: WARNING: No atomic I2C transfer handler for 'i2c-4' at drivers/i2c/i2c-core.h:40 i2c_smbus_xfer+0x178/0x190 (kernel 6.6.X, 6.7-rcX, PowerMac G5 11,2)

2024-01-03 Thread Tor Vic




On 12/22/23 23:01, Erhard Furtner wrote:

Greetings!

I am getting this on my PowerMac G5 11,2 at reboot on kernels 6.6.X and 6.7-rcX:



Hi,

This seems to be the same issue as [1], also referenced in [2].

For now, I have reverted the patch [3] as the huge splats on reboot are 
really annoying.


[1] 
https://lore.kernel.org/linux-i2c/13271b9b-4132-46ef-abf8-2c311967b...@mailbox.org/


[2] 
https://lore.kernel.org/linux-i2c/20230327-tegra-pmic-reboot-v7-2-18699d5dc...@skidata.com/T/#m22d00b913f150b4d80623162c5b0c79b338774f0


[3] (3473cf43b) i2c: core: Run atomic i2c xfer when !preemptible
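For anyone wondering where the splat comes from: the WARN fires from the
atomic-mode check in drivers/i2c/i2c-core.h.  A rough sketch of the logic
involved, paraphrased from the 6.6-era code (not a verbatim quote):

static inline bool i2c_in_atomic_xfer_mode(void)
{
	/* Commit 3473cf43b widened this from irqs_disabled() to
	 * !preemptible(), so late-reboot transfers now take the atomic
	 * path even with interrupts still enabled. */
	return system_state > SYSTEM_RUNNING &&
	       (IS_ENABLED(CONFIG_PREEMPT_COUNT) ? !preemptible() : irqs_disabled());
}

static inline int __i2c_lock_bus_helper(struct i2c_adapter *adap)
{
	if (i2c_in_atomic_xfer_mode())
		/* Fires on adapters with no atomic callbacks, like the
		 * bus used by windfarm here: */
		WARN(!adap->algo->master_xfer_atomic &&
		     !adap->algo->smbus_xfer_atomic,
		     "No atomic I2C transfer handler for '%s'\n",
		     dev_name(&adap->dev));
	/* locking elided */
	return 0;
}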

Cheers,
Tor Vic


[...]
reboot: Restarting system
[ cut here ]
No atomic I2C transfer handler for 'i2c-4'
WARNING: CPU: 1 PID: 362 at drivers/i2c/i2c-core.h:40 i2c_smbus_xfer+0x178/0x190
Modules linked in: windfarm_cpufreq_clamp windfarm_smu_sensors 
windfarm_smu_controls windfarm_pm112 snd_aoa_codec_onyx windfarm_pid 
snd_aoa_fabric_layout snd_aoa nouveau windfarm_smu_sat snd_aoa_i2sbus 
windfarm_lm75_sensor snd_aoa_soundbus windfarm_max6690_sensor firewire_ohci 
snd_pcm windfarm_core drm_exec snd_timer firewire_core crc_itu_t gpu_sched snd 
i2c_algo_bit backlight drm_ttm_helper ttm soundcore ohci_pci rack_meter tg3 
drm_display_helper hwmon cfg80211 rfkill zram zsmalloc loop dm_mod configfs
CPU: 1 PID: 362 Comm: kwindfarm Not tainted 6.6.7-gentoo-PMacG5 #1
Hardware name: PowerMac11,2 PPC970MP 0x440101 PowerMac
NIP:  c0b03f68 LR: c0b03f64 CTR: 
REGS: c0001fddf930 TRAP: 0700   Not tainted  (6.6.7-gentoo-PMacG5)
MSR:  90029032   CR: 24002842  XER: 
IRQMASK: 0
GPR00:  c0001fddfbd0 c10dd900 
GPR04:    
GPR08:    
GPR12:  c700 c0101558 c0001be361c0
GPR16:    
GPR20:    c0003d348258
GPR24: 51eb851f 004c  0001
GPR28: 0001 0002 c0001fddfc96 c40c8828
NIP [c0b03f68] i2c_smbus_xfer+0x178/0x190
LR [c0b03f64] i2c_smbus_xfer+0x174/0x190
Call Trace:
[c0001fddfbd0] [c0b03f64] i2c_smbus_xfer+0x174/0x190 (unreliable)
[c0001fddfc70] [c0b040d4] i2c_smbus_read_byte_data+0x64/0xd0
[c0001fddfcd0] [c0003d3290c8] wf_max6690_get+0x30/0x90 
[windfarm_max6690_sensor]
[c0001fddfd00] [c0003d06878c] pm112_wf_notify+0x564/0x118c 
[windfarm_pm112]
[c0001fddfe00] [c0103364] notifier_call_chain+0xa4/0x190
[c0001fddfea0] [c010387c] blocking_notifier_call_chain+0x5c/0xb0
[c0001fddfee0] [c0003d34ebe0] wf_thread_func+0xe8/0x190 [windfarm_core]
[c0001fddff90] [c0101680] kthread+0x130/0x140
[c0001fddffe0] [c000bfb0] start_kernel_thread+0x14/0x18
Code: 3980 4e800020 e9290018 2c29 4082ff1c e88300e0 2c24 4182001c 
3c62fff4 3863f2b0 4b5bf379 6000 <0fe0> 4bfffef8 e8830090 4be4
---[ end trace  ]---
[ cut here ]
No atomic I2C transfer handler for 'i2c-4'
WARNING: CPU: 1 PID: 362 at drivers/i2c/i2c-core.h:40 i2c_smbus_xfer+0x178/0x190
Modules linked in: windfarm_cpufreq_clamp windfarm_smu_sensors 
windfarm_smu_controls windfarm_pm112 snd_aoa_codec_onyx windfarm_pid 
snd_aoa_fabric_layout snd_aoa nouveau windfarm_smu_sat snd_aoa_i2sbus 
windfarm_lm75_sensor snd_aoa_soundbus windfarm_max6690_sensor firewire_ohci 
snd_pcm windfarm_core drm_exec snd_timer firewire_core crc_itu_t gpu_sched snd 
i2c_algo_bit backlight drm_ttm_helper ttm soundcore ohci_pci rack_meter tg3 
drm_display_helper hwmon cfg80211 rfkill zram zsmalloc loop dm_mod configfs
CPU: 1 PID: 362 Comm: kwindfarm Tainted: GW  
6.6.7-gentoo-PMacG5 #1
Hardware name: PowerMac11,2 PPC970MP 0x440101 PowerMac
NIP:  c0b03f68 LR: c0b03f64 CTR: 
REGS: c0001fddf930 TRAP: 0700   Tainted: GW   
(6.6.7-gentoo-PMacG5)
MSR:  90029032   CR: 24002842  XER: 
IRQMASK: 0
GPR00:  c0001fddfbd0 c10dd900 
GPR04:    
GPR08:    
GPR12:  c700 c0101558 c0001be361c0
GPR16:    
GPR20:    c0003d348258
GPR24: 51eb851f 004a  0001
GPR28:  0003 c0001fddfc96 c40c8828
NIP [c0b03f68] i2c_smbus_xfer+0x178/0x190
LR [c0b03f64] i2c_smbus_xfer+0x174/0x190
Call Trace:

Re: [PATCH v2 00/14] Unified cross-architecture kernel-mode FPU API

2024-01-03 Thread Alex Deucher
On Thu, Dec 28, 2023 at 5:11 AM Samuel Holland
 wrote:
>
> This series unifies the kernel-mode FPU API across several architectures
> by wrapping the existing functions (where needed) in consistently-named
> functions placed in a consistent header location, with mostly the same
> semantics: they can be called from preemptible or non-preemptible task
> context, and are not assumed to be reentrant. Architectures are also
> expected to provide CFLAGS adjustments for compiling FPU-dependent code.
> For the moment, SIMD/vector units are out of scope for this common API.
>
> This allows us to remove the ifdeffery and duplicated Makefile logic at
> each FPU user. It then implements the common API on RISC-V, and converts
> a couple of users to the new API: the AMDGPU DRM driver, and the FPU
> self test.
>
> The underlying goal of this series is to allow using newer AMD GPUs
> (e.g. Navi) on RISC-V boards such as SiFive's HiFive Unmatched. Those
> GPUs need CONFIG_DRM_AMD_DC_FP to initialize, which requires kernel-mode
> FPU support.
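For readers new to the series, the consumer-side pattern it converges on
looks roughly like the sketch below.  The kernel_fpu_begin()/kernel_fpu_end()
names come from the series' linux/fpu.h; kernel_fpu_available() is assumed
here as the runtime check, and the FP code itself must live in a translation
unit built with CC_FLAGS_FPU.  Treat this as illustrative, not a verbatim
consumer:

#include <linux/fpu.h>

static void scale_samples(float *buf, int n, float k)
{
	int i;

	if (!kernel_fpu_available())
		return;

	kernel_fpu_begin();	/* callable from preemptible task context */
	for (i = 0; i < n; i++)
		buf[i] *= k;
	kernel_fpu_end();	/* sections are not reentrant: no nesting */
}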

Series is:
Acked-by: Alex Deucher 

>
> Previous versions:
> v1: 
> https://lore.kernel.org/linux-kernel/20231208055501.2916202-1-samuel.holl...@sifive.com/
> v0: 
> https://lore.kernel.org/linux-kernel/20231122030621.3759313-1-samuel.holl...@sifive.com/
>
> Changes in v2:
>  - Add documentation explaining the built-time and runtime APIs
>  - Add a linux/fpu.h header for generic isolation enforcement
>  - Remove file name from header comment
>  - Clean up arch/arm64/lib/Makefile, like for arch/arm
>  - Remove RISC-V architecture-specific preprocessor check
>  - Split altivec removal to a separate patch
>  - Use linux/fpu.h instead of asm/fpu.h in consumers
>  - Declare test_fpu() in a header
>
> Michael Ellerman (1):
>   drm/amd/display: Only use hard-float, not altivec on powerpc
>
> Samuel Holland (13):
>   arch: Add ARCH_HAS_KERNEL_FPU_SUPPORT
>   ARM: Implement ARCH_HAS_KERNEL_FPU_SUPPORT
>   ARM: crypto: Use CC_FLAGS_FPU for NEON CFLAGS
>   arm64: Implement ARCH_HAS_KERNEL_FPU_SUPPORT
>   arm64: crypto: Use CC_FLAGS_FPU for NEON CFLAGS
>   lib/raid6: Use CC_FLAGS_FPU for NEON CFLAGS
>   LoongArch: Implement ARCH_HAS_KERNEL_FPU_SUPPORT
>   powerpc: Implement ARCH_HAS_KERNEL_FPU_SUPPORT
>   x86: Implement ARCH_HAS_KERNEL_FPU_SUPPORT
>   riscv: Add support for kernel-mode FPU
>   drm/amd/display: Use ARCH_HAS_KERNEL_FPU_SUPPORT
>   selftests/fpu: Move FP code to a separate translation unit
>   selftests/fpu: Allow building on other architectures
>
>  Documentation/core-api/floating-point.rst | 78 +++
>  Documentation/core-api/index.rst  |  1 +
>  Makefile  |  5 ++
>  arch/Kconfig  |  6 ++
>  arch/arm/Kconfig  |  1 +
>  arch/arm/Makefile |  7 ++
>  arch/arm/include/asm/fpu.h| 15 
>  arch/arm/lib/Makefile |  3 +-
>  arch/arm64/Kconfig|  1 +
>  arch/arm64/Makefile   |  9 ++-
>  arch/arm64/include/asm/fpu.h  | 15 
>  arch/arm64/lib/Makefile   |  6 +-
>  arch/loongarch/Kconfig|  1 +
>  arch/loongarch/Makefile   |  5 +-
>  arch/loongarch/include/asm/fpu.h  |  1 +
>  arch/powerpc/Kconfig  |  1 +
>  arch/powerpc/Makefile |  5 +-
>  arch/powerpc/include/asm/fpu.h| 28 +++
>  arch/riscv/Kconfig|  1 +
>  arch/riscv/Makefile   |  3 +
>  arch/riscv/include/asm/fpu.h  | 16 
>  arch/riscv/kernel/Makefile|  1 +
>  arch/riscv/kernel/kernel_mode_fpu.c   | 28 +++
>  arch/x86/Kconfig  |  1 +
>  arch/x86/Makefile | 20 +
>  arch/x86/include/asm/fpu.h| 13 
>  drivers/gpu/drm/amd/display/Kconfig   |  2 +-
>  .../gpu/drm/amd/display/amdgpu_dm/dc_fpu.c| 35 +
>  drivers/gpu/drm/amd/display/dc/dml/Makefile   | 36 +
>  drivers/gpu/drm/amd/display/dc/dml2/Makefile  | 36 +
>  include/linux/fpu.h   | 12 +++
>  lib/Kconfig.debug |  2 +-
>  lib/Makefile  | 26 +--
>  lib/raid6/Makefile| 31 ++--
>  lib/test_fpu.h|  8 ++
>  lib/{test_fpu.c => test_fpu_glue.c}   | 37 ++---
>  lib/test_fpu_impl.c   | 37 +
>  37 files changed, 343 insertions(+), 190 deletions(-)
>  create mode 100644 Documentation/core-api/floating-point.rst
>  create mode 100644 arch/arm/include/asm/fpu.h
>  create mode 100644 arch/arm64/include/asm/fpu.h
>  create mode 100644 arch/powerpc/include/asm/fpu.h
>  create mode 100644 

Re: [PATCH v1 1/1] powerpc/powernv: fix up kernel compile issues

2024-01-03 Thread Christophe Leroy
Hi,

On 02/01/2024 at 03:48, Luming Yu wrote:
> 
> up kernel is quite useful to silicon validation, despite
> it is rare to be found in server productions. the fixes are
> obvious. Not like IBM pSeries, it may be not necessary
> to have powernv SMP forced. It is difficult to compile a
> up kernel for pSerises as I've tried.

Your title and message are confusing. "fix up" has a standard meaning in the
English language, see
https://www.collinsdictionary.com/dictionary/english/fix-up
"up" also has a meaning, see
https://www.collinsdictionary.com/dictionary/english/up

Use "non-SMP" instead of "UP".

For instance, see commit 5657c1167835 ("sched/core: Fix NULL pointer 
access fault in sched_setaffinity() with non-SMP configs")

Christophe


> 
> Signed-off-by: Luming Yu 
> ---
> v0->v1: solve powernv vas up kernel compile issues found by lkp bot.
> ---
>   arch/powerpc/platforms/powernv/Kconfig| 1 -
>   arch/powerpc/platforms/powernv/opal-imc.c | 1 +
>   arch/powerpc/platforms/powernv/vas.c  | 1 +
>   arch/powerpc/platforms/powernv/vas.h  | 1 +
>   arch/powerpc/sysdev/xive/common.c | 2 ++
>   arch/powerpc/sysdev/xive/spapr.c  | 5 -
>   6 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/Kconfig b/arch/powerpc/platforms/powernv/Kconfig
> index 70a46acc70d6..40b1a49379de 100644
> --- a/arch/powerpc/platforms/powernv/Kconfig
> +++ b/arch/powerpc/platforms/powernv/Kconfig
> @@ -15,7 +15,6 @@ config PPC_POWERNV
>  select CPU_FREQ
>  select PPC_DOORBELL
>  select MMU_NOTIFIER
> -   select FORCE_SMP
>  select ARCH_SUPPORTS_PER_VMA_LOCK
>  default y
> 
> diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
> index 828fc4d88471..6e9e2b0a5bdc 100644
> --- a/arch/powerpc/platforms/powernv/opal-imc.c
> +++ b/arch/powerpc/platforms/powernv/opal-imc.c
> @@ -13,6 +13,7 @@
>   #include 
>   #include 
>   #include 
> +#include 
>   #include 
>   #include 
>   #include 
> diff --git a/arch/powerpc/platforms/powernv/vas.c b/arch/powerpc/platforms/powernv/vas.c
> index b65256a63e87..c1759135aca5 100644
> --- a/arch/powerpc/platforms/powernv/vas.c
> +++ b/arch/powerpc/platforms/powernv/vas.c
> @@ -18,6 +18,7 @@
>   #include 
>   #include 
>   #include 
> +#include 
> 
>   #include "vas.h"
> 
> diff --git a/arch/powerpc/platforms/powernv/vas.h b/arch/powerpc/platforms/powernv/vas.h
> index 08d9d3d5a22b..313a8f2c8c7d 100644
> --- a/arch/powerpc/platforms/powernv/vas.h
> +++ b/arch/powerpc/platforms/powernv/vas.h
> @@ -12,6 +12,7 @@
>   #include 
>   #include 
>   #include 
> +#include 
> 
>   /*
>* Overview of Virtual Accelerator Switchboard (VAS).
> diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
> index a289cb97c1d7..d49b12809c10 100644
> --- a/arch/powerpc/sysdev/xive/common.c
> +++ b/arch/powerpc/sysdev/xive/common.c
> @@ -1497,7 +1497,9 @@ static int xive_prepare_cpu(unsigned int cpu)
>GFP_KERNEL, cpu_to_node(cpu));
>  if (!xc)
>  return -ENOMEM;
> +#ifdef CONFIG_SMP
>  xc->hw_ipi = XIVE_BAD_IRQ;
> +#endif
>  xc->chip_id = XIVE_INVALID_CHIP_ID;
>  if (xive_ops->prepare_cpu)
>  xive_ops->prepare_cpu(cpu, xc);
> diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
> index e45419264391..7298f57f8416 100644
> --- a/arch/powerpc/sysdev/xive/spapr.c
> +++ b/arch/powerpc/sysdev/xive/spapr.c
> @@ -81,6 +81,7 @@ static void xive_irq_bitmap_remove_all(void)
>  }
>   }
> 
> +#ifdef CONFIG_SMP
>   static int __xive_irq_bitmap_alloc(struct xive_irq_bitmap *xibm)
>   {
>  int irq;
> @@ -126,7 +127,7 @@ static void xive_irq_bitmap_free(int irq)
>  }
>  }
>   }
> -
> +#endif
> 
>   /* Based on the similar routines in RTAS */
>   static unsigned int plpar_busy_delay_time(long rc)
> @@ -663,6 +664,7 @@ static void xive_spapr_sync_source(u32 hw_irq)
>  plpar_int_sync(0, hw_irq);
>   }
> 
> +#ifdef CONFIG_SMP
>   static int xive_spapr_debug_show(struct seq_file *m, void *private)
>   {
>  struct xive_irq_bitmap *xibm;
> @@ -680,6 +682,7 @@ static int xive_spapr_debug_show(struct seq_file *m, void *private)
> 
>  return 0;
>   }
> +#endif
> 
>   static const struct xive_ops xive_spapr_ops = {
>  .populate_irq_data  = xive_spapr_populate_irq_data,
> --
> 2.42.0.windows.2
> 


Re: [PATCH v1 1/1] powerpc/powernv: fix up kernel compile issues

2024-01-03 Thread kernel test robot
Hi Luming,

kernel test robot noticed the following build errors:

[auto build test ERROR on powerpc/next]
[also build test ERROR on powerpc/fixes linus/master v6.7-rc8 next-20240103]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:
https://github.com/intel-lab-lkp/linux/commits/Luming-Yu/powerpc-powernv-fix-up-kernel-compile-issues/20240102-105402
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
patch link:
https://lore.kernel.org/r/9D8FEE1731685D9B%2B20240102024834.1276-2-luming.yu%40shingroup.cn
patch subject: [PATCH v1 1/1] powerpc/powernv: fix up kernel compile issues
config: powerpc-powernv_defconfig 
(https://download.01.org/0day-ci/archive/20240103/202401032003.71dm6nhr-...@intel.com/config)
compiler: clang version 18.0.0git (https://github.com/llvm/llvm-project 
baf8a39aaf8b61a38b5b2b5591deb765e42eb00b)
reproduce (this is a W=1 build): 
(https://download.01.org/0day-ci/archive/20240103/202401032003.71dm6nhr-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot 
| Closes: 
https://lore.kernel.org/oe-kbuild-all/202401032003.71dm6nhr-...@intel.com/

All errors (new ones prefixed by >>):

>> drivers/crypto/nx/nx-common-powernv.c:718:13: error: call to undeclared 
>> function 'cpu_to_chip_id'; ISO C99 and later do not support implicit 
>> function declarations [-Wimplicit-function-declaration]
 718 | chip_id = cpu_to_chip_id(i);
 |   ^
   1 error generated.
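For context: the error appears because a non-SMP powernv build no longer
sees a declaration of cpu_to_chip_id().  One hypothetical way to keep such
callers compiling (illustrative only, not necessarily the right fix for
powerpc, which the patch author would need to judge) is a UP stub next to
the declaration:

#ifdef CONFIG_SMP
int cpu_to_chip_id(int cpu);
#else
static inline int cpu_to_chip_id(int cpu)
{
	return -1;	/* single chip: no topology information on UP */
}
#endif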


vim +/cpu_to_chip_id +718 drivers/crypto/nx/nx-common-powernv.c

b0d6c9bab5e41d drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-08-31  703  
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  704  /*
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  705   * Identify chip ID for each CPU, open send wndow for the corresponding NX
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  706   * engine and save txwin in percpu cpu_txwin.
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  707   * cpu_txwin is used in copy/paste operation for each compression /
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  708   * decompression request.
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  709   */
4aebf3ce26ca21 drivers/crypto/nx/nx-common-powernv.c Haren Myneni 2020-04-17  710  static int nx_open_percpu_txwins(void)
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  711  {
4aebf3ce26ca21 drivers/crypto/nx/nx-common-powernv.c Haren Myneni 2020-04-17  712  	struct nx_coproc *coproc, *n;
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  713  	unsigned int i, chip_id;
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  714  
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  715  	for_each_possible_cpu(i) {
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  716  		struct vas_window *txwin = NULL;
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  717  
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24 @718  		chip_id = cpu_to_chip_id(i);
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  719  
4aebf3ce26ca21 drivers/crypto/nx/nx-common-powernv.c Haren Myneni 2020-04-17  720  		list_for_each_entry_safe(coproc, n, &nx_coprocs, list) {
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  721  			/*
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  722  			 * Kernel requests use only high priority FIFOs. So
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  723  			 * open send windows for these FIFOs.
4aebf3ce26ca21 drivers/crypto/nx/nx-common-powernv.c Haren Myneni 2020-04-17  724  			 * GZIP is not supported in kernel right now.
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  725  			 */
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  726  
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  727  			if (coproc->ct != VAS_COP_TYPE_842_HIPRI)
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.c    Haren Myneni 2017-09-24  728  				continue;
976dd6490b1b45 drivers/crypto/nx/nx-842-powernv.

Re: [PATCH v2 00/13] mm/gup: Unify hugetlb, part 2

2024-01-03 Thread Christophe Leroy


On 03/01/2024 at 10:14, pet...@redhat.com wrote:
> From: Peter Xu 
> 
> 
> Test Done
> =
> 
> This v1 went through the normal GUP smoke tests over different memory
> types on archs (using VM instances): x86_64, aarch64, ppc64le.  For
> aarch64, tested over 64KB cont_pte huge pages.  For ppc64le, tested over
> 16MB hugepd entries (Power8 hash MMU on 4K base page size).
> 

Can you tell how you test?

I'm willing to test this series on powerpc 8xx (PPC32).

Christophe


[PATCH v2 13/13] mm/gup: Handle hugetlb in the generic follow_page_mask code

2024-01-03 Thread peterx
From: Peter Xu 

Now follow_page() is ready to handle hugetlb pages in whatever form, and
over all architectures.  Switch to the generic code path.

Time to retire hugetlb_follow_page_mask(), following the previous
retirement of follow_hugetlb_page() in 4849807114b8.

There may be a slight difference in how the loops run when processing slow
GUP over a large hugetlb range on cont_pte/cont_pmd supported archs: each
loop of __get_user_pages() will resolve one pgtable entry with the patch
applied, rather than relying on the size of the hugetlb hstate, which may
cover multiple entries in one loop.

A quick performance test on an aarch64 VM on an M1 chip shows a 15%
degradation over a tight loop of slow gup after the path switch.  That
shouldn't be a problem, because slow gup should not be a hot path for GUP
in general: when the page is commonly present, fast gup will already
succeed; when the page is indeed missing and requires a follow-up page
fault, the slow-gup degradation will probably be buried in the fault paths
anyway.  It also explains why slow gup for THP used to be very slow before
57edfcfd3419 ("mm/gup: accelerate thp gup even for "pages != NULL"")
landed; the latter was not part of a performance analysis but a side
benefit.  If performance becomes a concern, we can consider handling
CONT_PTE in follow_page().

Before that is justified to be necessary, keep everything clean and simple.
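For reference, the reason one pgtable entry per iteration still steps over
the whole huge page is the page_mask consumption in __get_user_pages(),
existing code in mm/gup.c, quoted here from memory as a sketch:

	/* follow_page_mask() reported a huge mapping via ctx.page_mask;
	 * advance over the rest of that mapping in one step. */
	page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
	if (page_increm > nr_pages)
		page_increm = nr_pages;
	i += page_increm;
	start += page_increm * PAGE_SIZE;
	nr_pages -= page_increm;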

Signed-off-by: Peter Xu 
---
 include/linux/hugetlb.h |  7 
 mm/gup.c| 15 +++--
 mm/hugetlb.c| 71 -
 3 files changed, 5 insertions(+), 88 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index e8eddd51fc17..cdbb53407722 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -332,13 +332,6 @@ static inline void hugetlb_zap_end(
 {
 }
 
-static inline struct page *hugetlb_follow_page_mask(
-struct vm_area_struct *vma, unsigned long address, unsigned int flags,
-unsigned int *page_mask)
-{
-   BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
-}
-
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
  struct mm_struct *src,
  struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index 245214b64108..4f8a3dc287c9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -997,18 +997,11 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 {
pgd_t *pgd, pgdval;
struct mm_struct *mm = vma->vm_mm;
+   struct page *page;
 
-   ctx->page_mask = 0;
-
-   /*
-* Call hugetlb_follow_page_mask for hugetlb vmas as it will use
-* special hugetlb page table walking code.  This eliminates the
-* need to check for hugetlb entries in the general walking code.
-*/
-   if (is_vm_hugetlb_page(vma))
-   return hugetlb_follow_page_mask(vma, address, flags,
-   &ctx->page_mask);
+   vma_pgtable_walk_begin(vma);
 
+   ctx->page_mask = 0;
pgd = pgd_offset(mm, address);
pgdval = *pgd;
 
@@ -1020,6 +1013,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
else
page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
+   vma_pgtable_walk_end(vma);
+
return page;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bfb52bb8b943..e13b4e038c2c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6782,77 +6782,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
- unsigned long address, unsigned int flags,
- unsigned int *page_mask)
-{
-   struct hstate *h = hstate_vma(vma);
-   struct mm_struct *mm = vma->vm_mm;
-   unsigned long haddr = address & huge_page_mask(h);
-   struct page *page = NULL;
-   spinlock_t *ptl;
-   pte_t *pte, entry;
-   int ret;
-
-   hugetlb_vma_lock_read(vma);
-   pte = hugetlb_walk(vma, haddr, huge_page_size(h));
-   if (!pte)
-   goto out_unlock;
-
-   ptl = huge_pte_lock(h, mm, pte);
-   entry = huge_ptep_get(pte);
-   if (pte_present(entry)) {
-   page = pte_page(entry);
-
-   if (!huge_pte_write(entry)) {
-   if (flags & FOLL_WRITE) {
-   page = NULL;
-   goto out;
-   }
-
-   if (gup_must_unshare(vma, flags, page)) {
-   /* Tell the caller to do unsharing */
-   page = ERR_PTR(-EMLINK);
-   goto out;
-   }
-   }
-
-   page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
-
-   

[PATCH v2 12/13] mm/gup: Handle hugepd for follow_page()

2024-01-03 Thread peterx
From: Peter Xu 

Hugepd is so far only used on PowerPC, on 4K page size kernels where the
hash MMU is used.  follow_page_mask() used to leverage hugetlb APIs to
access hugepd entries.  Teach follow_page_mask() to handle hugepd entries
itself.

With the previous refactors of the fast-gup helper gup_huge_pd(), most of
the code can be easily reused.  Some of it is not needed for follow_page();
for example, gup_hugepte() tries to detect pgtable entry changes, which can
never happen with slow gup (which holds the pgtable lock), but it does no
harm to check.

Since follow_page() always fetches only one page, setting the end to
"address + PAGE_SIZE" should suffice.  We will still do the pgtable walk
once per hugetlb page by setting ctx->page_mask properly.

One thing worth mentioning is that some levels' pgtable _bad() helpers will
report is_hugepd() entries as TRUE on Power8 hash MMUs.  I think it at
least applies to PUD on Power8 with 4K page size, meaning that feeding a
hugepd entry to pud_bad() will report a false positive.  Let's leave that
alone for now, because it can be arch-specific and I am a bit reluctant to
touch it.  In this patch it's not a problem, as long as hugepd is detected
before any bad pgtable entries.
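(Editor's note: the follow_hugepd() body itself is cut off at the end of
this mail.  In rough shape it wraps the refactored fast-gup helper,
something like the sketch below; treat it as an outline, not the exact
hunk.)

static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
				  unsigned long addr, unsigned int pdshift,
				  unsigned int flags,
				  struct follow_page_context *ctx)
{
	struct hstate *h = hstate_vma(vma);
	struct page *page = NULL;
	spinlock_t *ptl;
	pte_t *ptep;
	int nr = 0;

	/* Slow gup holds the pgtable lock, so the entry-change detection
	 * inside gup_hugepte() can never trigger, but it is harmless. */
	ptep = hugepte_offset(hugepd, addr, pdshift);
	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
	/* end = addr + PAGE_SIZE: follow_page() fetches one page... */
	if (gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
			flags, &page, &nr) != 1)
		page = NULL;
	else
		/* ...but page_mask lets the caller skip the whole page. */
		ctx->page_mask = (1U << huge_page_order(h)) - 1;
	spin_unlock(ptl);
	return page;
}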

Signed-off-by: Peter Xu 
---
 mm/gup.c | 78 +---
 1 file changed, 69 insertions(+), 9 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index d96429b6fc55..245214b64108 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,6 +30,11 @@ struct follow_page_context {
unsigned int page_mask;
 };
 
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+ unsigned long addr, unsigned int pdshift,
+ unsigned int flags,
+ struct follow_page_context *ctx);
+
 static inline void sanity_check_pinned_pages(struct page **pages,
 unsigned long npages)
 {
@@ -871,6 +876,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
return no_page_table(vma, flags, address);
if (!pmd_present(pmdval))
return no_page_table(vma, flags, address);
+   if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
+   return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
+address, PMD_SHIFT, flags, ctx);
if (pmd_devmap(pmdval)) {
ptl = pmd_lock(mm, pmd);
page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -921,6 +929,9 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
pud = READ_ONCE(*pudp);
if (pud_none(pud) || !pud_present(pud))
return no_page_table(vma, flags, address);
+   if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
+   return follow_hugepd(vma, __hugepd(pud_val(pud)),
+address, PUD_SHIFT, flags, ctx);
if (pud_huge(pud)) {
ptl = pud_lock(mm, pudp);
page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -940,13 +951,17 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
unsigned int flags,
struct follow_page_context *ctx)
 {
-   p4d_t *p4d;
+   p4d_t *p4d, p4dval;
 
p4d = p4d_offset(pgdp, address);
-   if (p4d_none(*p4d))
-   return no_page_table(vma, flags, address);
-   BUILD_BUG_ON(p4d_huge(*p4d));
-   if (unlikely(p4d_bad(*p4d)))
+   p4dval = *p4d;
+   BUILD_BUG_ON(p4d_huge(p4dval));
+
+   if (unlikely(is_hugepd(__hugepd(p4d_val(p4dval)))))
+   return follow_hugepd(vma, __hugepd(p4d_val(p4dval)),
+address, P4D_SHIFT, flags, ctx);
+
+   if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
return no_page_table(vma, flags, address);
 
return follow_pud_mask(vma, address, p4d, flags, ctx);
@@ -980,7 +995,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
  unsigned long address, unsigned int flags,
  struct follow_page_context *ctx)
 {
-   pgd_t *pgd;
+   pgd_t *pgd, pgdval;
struct mm_struct *mm = vma->vm_mm;
 
ctx->page_mask = 0;
@@ -995,11 +1010,17 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
&ctx->page_mask);
 
pgd = pgd_offset(mm, address);
+   pgdval = *pgd;
 
-   if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-   return no_page_table(vma, flags, address);
+   if (unlikely(is_hugepd(__hugepd(pgd_val(pgdval)))))
+   page = follow_hugepd(vma, __hugepd(pgd_val(pgdval)),
+address, PGDIR_SHIFT, flags, ctx);
+   else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+   page = 

[PATCH v2 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()

2024-01-03 Thread peterx
From: Peter Xu 

Replace pmd_trans_huge() with pmd_thp_or_huge() to also cover pmd_huge()
wherever it is enabled.

FOLL_TOUCH and FOLL_SPLIT_PMD only apply to THP, not to hugetlb.

Since follow_trans_huge_pmd() can now process hugetlb pages, rename it to
follow_huge_pmd() to match what it does.  Move it into gup.c so it does not
depend on CONFIG_THP.

While at it, move the ctx->page_mask setup into follow_huge_pmd() and only
set it when the page is valid.  It was not a bug to set it before even if
GUP failed (page==NULL), because follow_page_mask() callers always ignore
page_mask in that case.  But doing so makes the code cleaner.

Signed-off-by: Peter Xu 
---
 mm/gup.c | 107 ---
 mm/huge_memory.c |  86 +
 mm/internal.h|   5 +--
 3 files changed, 105 insertions(+), 93 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 760406180222..d96429b6fc55 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -580,6 +580,93 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 
return page;
 }
+
+/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
+static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
+   struct vm_area_struct *vma,
+   unsigned int flags)
+{
+   /* If the pmd is writable, we can write to the page. */
+   if (pmd_write(pmd))
+   return true;
+
+   /* Maybe FOLL_FORCE is set to override it? */
+   if (!(flags & FOLL_FORCE))
+   return false;
+
+   /* But FOLL_FORCE has no effect on shared mappings */
+   if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+   return false;
+
+   /* ... or read-only private ones */
+   if (!(vma->vm_flags & VM_MAYWRITE))
+   return false;
+
+   /* ... or already writable ones that just need to take a write fault */
+   if (vma->vm_flags & VM_WRITE)
+   return false;
+
+   /*
+* See can_change_pte_writable(): we broke COW and could map the page
+* writable if we have an exclusive anonymous page ...
+*/
+   if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+   return false;
+
+   /* ... and a write-fault isn't required for other reasons. */
+   if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+   return false;
+   return !userfaultfd_huge_pmd_wp(vma, pmd);
+}
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+   unsigned long addr, pmd_t *pmd,
+   unsigned int flags,
+   struct follow_page_context *ctx)
+{
+   struct mm_struct *mm = vma->vm_mm;
+   pmd_t pmdval = *pmd;
+   struct page *page;
+   int ret;
+
+   assert_spin_locked(pmd_lockptr(mm, pmd));
+
+   page = pmd_page(pmdval);
+   VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
+
+   if ((flags & FOLL_WRITE) &&
+   !can_follow_write_pmd(pmdval, page, vma, flags))
+   return NULL;
+
+   /* Avoid dumping huge zero page */
+   if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval))
+   return ERR_PTR(-EFAULT);
+
+   if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
+   return NULL;
+
+   if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
+   return ERR_PTR(-EMLINK);
+
+   VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
+   !PageAnonExclusive(page), page);
+
+   ret = try_grab_page(page, flags);
+   if (ret)
+   return ERR_PTR(ret);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+   if (pmd_trans_huge(pmdval) && (flags & FOLL_TOUCH))
+   touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+   page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+   ctx->page_mask = HPAGE_PMD_NR - 1;
+   VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
+
+   return page;
+}
+
 #else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 static struct page *follow_huge_pud(struct vm_area_struct *vma,
unsigned long addr, pud_t *pudp,
@@ -587,6 +674,14 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 {
return NULL;
 }
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+   unsigned long addr, pmd_t *pmd,
+   unsigned int flags,
+   struct follow_page_context *ctx)
+{
+   return NULL;
+}
 #endif /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
@@ -784,31 +879,31 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
return page;
   

[PATCH v2 10/13] mm/gup: Handle huge pud for follow_pud_mask()

2024-01-03 Thread peterx
From: Peter Xu 

Teach follow_pud_mask() to be able to handle normal PUD pages like hugetlb.

Rename follow_devmap_pud() to follow_huge_pud() so that it can process
either huge devmap or hugetlb.  Move it out of TRANSPARENT_HUGEPAGE_PUD and
huge_memory.c (which relies on CONFIG_THP).

In the new follow_huge_pud(), take care of possible CoR for hugetlb if
necessary.  touch_pud() needs to be moved out of huge_memory.c to be
accessible from gup.c even if !THP.

While at it, optimize the non-present check by adding a pud_present() early
check before taking the pgtable lock, failing the follow_page() early if
the PUD is not present: that is required by both devmap and hugetlb.  Use
pud_huge() to also cover the pud_devmap() case.

One more trivial thing to mention: introduce "pud_t pud" in the code paths
along the way, so the code doesn't dereference *pudp multiple times.  Not
only because that looks less straightforward, but also because if the
dereference really happened, it's not clear whether there can be a race
seeing different *pudp values when it's being modified at the same time.

Set ctx->page_mask properly for a PUD entry.  As a side effect, this patch
should also be able to optimize devmap GUP on PUD to jump over the whole
PUD range, though that is not yet verified.  Hugetlb could already do so
prior to this patch.

Signed-off-by: Peter Xu 
---
 include/linux/huge_mm.h |  8 -
 mm/gup.c| 70 +++--
 mm/huge_memory.c| 47 ++-
 mm/internal.h   |  2 ++
 4 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 96bd4b5d027e..3b73d20d537e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -345,8 +345,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-   pud_t *pud, int flags, struct dev_pagemap **pgmap);
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
@@ -502,12 +500,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
return NULL;
 }
 
-static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
-   unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-   return NULL;
-}
-
 static inline bool thp_migration_supported(void)
 {
return false;
diff --git a/mm/gup.c b/mm/gup.c
index 63845b3ec44f..760406180222 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
return NULL;
 }
 
+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+   unsigned long addr, pud_t *pudp,
+   int flags, struct follow_page_context *ctx)
+{
+   struct mm_struct *mm = vma->vm_mm;
+   struct page *page;
+   pud_t pud = *pudp;
+   unsigned long pfn = pud_pfn(pud);
+   int ret;
+
+   assert_spin_locked(pud_lockptr(mm, pudp));
+
+   if ((flags & FOLL_WRITE) && !pud_write(pud))
+   return NULL;
+
+   if (!pud_present(pud))
+   return NULL;
+
+   pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+   if (pud_devmap(pud)) {
+   /*
+* device mapped pages can only be returned if the caller
+* will manage the page reference count.
+*
+* At least one of FOLL_GET | FOLL_PIN must be set, so
+* assert that here:
+*/
+   if (!(flags & (FOLL_GET | FOLL_PIN)))
+   return ERR_PTR(-EEXIST);
+
+   if (flags & FOLL_TOUCH)
+   touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
+
+   ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
+   if (!ctx->pgmap)
+   return ERR_PTR(-EFAULT);
+   }
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+   page = pfn_to_page(pfn);
+
+   if (!pud_devmap(pud) && !pud_write(pud) &&
+   gup_must_unshare(vma, flags, page))
+   return ERR_PTR(-EMLINK);
+
+   ret = try_grab_page(page, flags);
+   if (ret)
+   page = ERR_PTR(ret);
+   else
+   ctx->page_mask = HPAGE_PUD_NR - 1;
+
+   return page;
+}
+#else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+   unsigned long addr, pud_t *pudp,
+   int flags, struct follow_page_context *ctx)
+{
+   return NULL;
+}
+#endif /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+
 static 

[PATCH v2 09/13] mm/gup: Cache *pudp in follow_pud_mask()

2024-01-03 Thread peterx
From: Peter Xu 

Introduce "pud_t pud" in the function, so the code won't dereference *pudp
multiple times.  Not only because that looks less straightforward, but also
because if the dereference really happened, it's not clear whether there
can be a race seeing different *pudp values if it's being modified at the
same time.

Acked-by: James Houghton 
Signed-off-by: Peter Xu 
---
 mm/gup.c | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index b8a80e2bfe08..63845b3ec44f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
unsigned int flags,
struct follow_page_context *ctx)
 {
-   pud_t *pud;
+   pud_t *pudp, pud;
spinlock_t *ptl;
struct page *page;
struct mm_struct *mm = vma->vm_mm;
 
-   pud = pud_offset(p4dp, address);
-   if (pud_none(*pud))
+   pudp = pud_offset(p4dp, address);
+   pud = READ_ONCE(*pudp);
+   if (pud_none(pud))
return no_page_table(vma, flags, address);
-   if (pud_devmap(*pud)) {
-   ptl = pud_lock(mm, pud);
-   page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
+   if (pud_devmap(pud)) {
+   ptl = pud_lock(mm, pudp);
+   page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
return no_page_table(vma, flags, address);
}
-   if (unlikely(pud_bad(*pud)))
+   if (unlikely(pud_bad(pud)))
return no_page_table(vma, flags, address);
 
-   return follow_pmd_mask(vma, address, pud, flags, ctx);
+   return follow_pmd_mask(vma, address, pudp, flags, ctx);
 }
 
 static struct page *follow_p4d_mask(struct vm_area_struct *vma,
-- 
2.41.0



[PATCH v2 08/13] mm/gup: Handle hugetlb for no_page_table()

2024-01-03 Thread peterx
From: Peter Xu 

no_page_table() is not yet used by hugetlb code paths.  Prepare it for that.

The major difference here is hugetlb will return -EFAULT as long as page
cache does not exist, even if VM_SHARED.  See hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Peter Xu 
---
 mm/gup.c | 44 ++--
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3813aad79c4a..b8a80e2bfe08 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-   unsigned int flags)
+ unsigned int flags, unsigned long address)
 {
+   if (!(flags & FOLL_DUMP))
+   return NULL;
+
/*
-* When core dumping an enormous anonymous area that nobody
-* has touched so far, we don't want to allocate unnecessary pages or
+* When core dumping, we don't want to allocate unnecessary pages or
 * page tables.  Return error instead of NULL to skip handle_mm_fault,
 * then get_dump_page() will return NULL to leave a hole in the dump.
 * But we can only make this optimization where a hole would surely
 * be zero-filled if handle_mm_fault() actually did handle it.
 */
-   if ((flags & FOLL_DUMP) &&
-   (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+   if (is_vm_hugetlb_page(vma)) {
+   struct hstate *h = hstate_vma(vma);
+
+   if (!hugetlbfs_pagecache_present(h, vma, address))
+   return ERR_PTR(-EFAULT);
+   } else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
return ERR_PTR(-EFAULT);
+   }
+
return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
if (!ptep)
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
pte = ptep_get(ptep);
if (!pte_present(pte))
goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
pte_unmap_unlock(ptep, ptl);
if (!pte_none(pte))
return NULL;
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
pmd = pmd_offset(pudp, address);
pmdval = pmdp_get_lockless(pmd);
if (pmd_none(pmdval))
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
if (!pmd_present(pmdval))
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
if (pmd_devmap(pmdval)) {
ptl = pmd_lock(mm, pmd);
page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
}
if (likely(!pmd_trans_huge(pmdval)))
return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
 
ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_present(*pmd))) {
spin_unlock(ptl);
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
}
if (unlikely(!pmd_trans_huge(*pmd))) {
spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
pud = pud_offset(p4dp, address);
if (pud_none(*pud))
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
if (pud_devmap(*pud)) {
ptl = pud_lock(mm, pud);
page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
spin_unlock(ptl);
if (page)
return page;
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
}
if (unlikely(pud_bad(*pud)))
-   return no_page_table(vma, flags);
+   return no_page_table(vma, flags, address);
 
return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -776,10 +784,10 @@ static struct page 

[PATCH v2 07/13] mm/gup: Refactor record_subpages() to find 1st small page

2024-01-03 Thread peterx
From: Peter Xu 

All the fast-gup functions take a tail page to operate on, and always need
to do page-mask calculations before feeding it into record_subpages().

Merge that logic into record_subpages(), so that it does the nth_page()
calculation itself.
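A quick worked example of the merged calculation (values chosen purely for
illustration):

/*
 * PMD leaf with sz = PMD_SIZE (2MB with 4K base pages); GUP covers two
 * pages starting five pages into the huge mapping, i.e.
 * addr = base + 0x5000, end = base + 0x7000, with base PMD-aligned.
 *
 *   start_page = nth_page(pmd_page(orig), (0x5000 & (PMD_SIZE - 1)) >> PAGE_SHIFT)
 *              = nth_page(head, 5);
 *
 * The loop then records head + 5 and head + 6 and returns nr = 2,
 * exactly what callers used to compute by hand before this patch.
 */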

Signed-off-by: Peter Xu 
---
 mm/gup.c | 25 ++---
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index fa93e14b7fca..3813aad79c4a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2767,13 +2767,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long addr,
-  unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+  unsigned long addr, unsigned long end,
+  struct page **pages)
 {
+   struct page *start_page;
int nr;
 
+   start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-   pages[nr] = nth_page(page, nr);
+   pages[nr] = nth_page(start_page, nr);
 
return nr;
 }
@@ -2808,8 +2811,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
/* hugepages are never "special" */
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-   page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
-   refs = record_subpages(page, addr, end, pages + *nr);
+   page = pte_page(pte);
+   refs = record_subpages(page, sz, addr, end, pages + *nr);
 
folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2882,8 +2885,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 pages, nr);
}
 
-   page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
-   refs = record_subpages(page, addr, end, pages + *nr);
+   page = pmd_page(orig);
+   refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
 
folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2926,8 +2929,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 pages, nr);
}
 
-   page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
-   refs = record_subpages(page, addr, end, pages + *nr);
+   page = pud_page(orig);
+   refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
 
folio = try_grab_folio(page, refs, flags);
if (!folio)
@@ -2966,8 +2969,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
BUILD_BUG_ON(pgd_devmap(orig));
 
-   page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-   refs = record_subpages(page, addr, end, pages + *nr);
+   page = pgd_page(orig);
+   refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
 
folio = try_grab_folio(page, refs, flags);
if (!folio)
-- 
2.41.0



[PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing

2024-01-03 Thread peterx
From: Peter Xu 

The hugepd format for GUP is only used on PowerPC with hugetlbfs.  There is
some kernel usage of hugepd (see hugepd_populate_kernel() for PPC_8XX),
however those pages are not candidates for GUP.

Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
file-backed mappings") added a check to fail gup-fast if there's potential
risk of violating GUP over writeback file systems.  That should never apply
to hugepd.  Considering that hugepd is an old format (and even
software-only), there's no plan to extend hugepd to other file-typed
memories that are prone to the same issue.

Drop that check, not only because it'll never be true for hugepd per any
known plan, but also because it paves the way for reusing the function
outside fast-gup.

To make sure we still remember this issue in case hugepd is ever extended
to support non-hugetlbfs memories, add a rich comment above gup_huge_pd()
explaining the issue, with proper references.

Cc: Christoph Hellwig 
Cc: Lorenzo Stoakes 
Cc: Michael Ellerman 
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu 
---
 mm/gup.c | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index eebae70d2465..fa93e14b7fca 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2820,11 +2820,6 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
return 0;
}
 
-   if (!folio_fast_pin_allowed(folio, flags)) {
-   gup_put_folio(folio, refs, flags);
-   return 0;
-   }
-
if (!pte_write(pte) && gup_must_unshare(NULL, flags, >page)) {
gup_put_folio(folio, refs, flags);
return 0;
@@ -2835,6 +2830,14 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
return 1;
 }
 
+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios. See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
 static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
unsigned int pdshift, unsigned long end, unsigned int flags,
struct page **pages, int *nr)
-- 
2.41.0



[PATCH v2 05/13] mm: Introduce vma_pgtable_walk_{begin|end}()

2024-01-03 Thread peterx
From: Peter Xu 

Introduce per-vma begin()/end() helpers for pgtable walks.  This is
preparatory work for merging the hugetlb pgtable walkers with generic mm.

The helpers need to be called before and after a pgtable walk, and will
start to be needed once the pgtable walker code supports hugetlb pages.
They are a hook point for any type of VMA, but for now only hugetlb uses
them, to stabilize the pgtable pages and keep them from going away (due to
possible pmd unsharing).
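Usage sketch (the last patch in this series brackets follow_page_mask()
exactly this way; walk_page_tables() below is a hypothetical walker, named
only for illustration):

	vma_pgtable_walk_begin(vma);	/* hugetlb: takes the VMA read lock */
	page = walk_page_tables(vma, address);
	vma_pgtable_walk_end(vma);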

Reviewed-by: Christoph Hellwig 
Reviewed-by: Muchun Song 
Signed-off-by: Peter Xu 
---
 include/linux/mm.h |  3 +++
 mm/memory.c| 12 
 2 files changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 896c0079f64f..6836da00671a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4181,4 +4181,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 7e1f4849463a..89f3caac2ec8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6279,3 +6279,15 @@ void ptlock_free(struct ptdesc *ptdesc)
kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+   if (is_vm_hugetlb_page(vma))
+   hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+   if (is_vm_hugetlb_page(vma))
+   hugetlb_vma_unlock_read(vma);
+}
-- 
2.41.0



[PATCH v2 04/13] mm: Make HPAGE_PXD_* macros even if !THP

2024-01-03 Thread peterx
From: Peter Xu 

These macros can be helpful when we plan to merge hugetlb code into generic
code.  Move them out and define them even if !THP.

We actually already defined HPAGE_PMD_NR for other reasons even if !THP.
Reorganize these macros.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Peter Xu 
---
 include/linux/huge_mm.h | 17 ++---
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..96bd4b5d027e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -64,9 +64,6 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj,
  enum transparent_hugepage_flag flag);
 extern struct kobj_attribute shmem_enabled_attr;
 
-#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
-#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
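(Editor's note: the rest of this hunk is truncated in the archive.  For
orientation only, the macro block being reorganized looks like this
reconstruction, which may differ in detail from the actual patch:)

#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
#define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
#define HPAGE_PUD_SIZE	((1UL) << HPAGE_PUD_SHIFT)
#define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))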

[PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()

2024-01-03 Thread peterx
From: Peter Xu 

ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
can be a helpful helper if we want to merge more THP and hugetlb code
paths.  Make it a generic default implementation, existing only with
CONFIG_MMU.  An arch can override it by defining its own version.

For example, ARM's pgtable-2level.h defines it to always return false.

Keep the macro declared for all configs; it should be optimized away to
false anyway if !THP && !HUGETLB.
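An arch override is just a macro definition seen before the generic
fallback.  Sketch of the trivial case the commit message attributes to
ARM's pgtable-2level.h (illustrative, not the exact ARM code):

#define pmd_thp_or_huge(pmd)	(0)	/* no huge/THP PMDs at this level */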

Signed-off-by: Peter Xu 
---
 include/linux/pgtable.h | 4 
 mm/gup.c| 3 +--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 466cf477551a..2b42e95a4e3a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
 #endif /* pmd_write */
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+#ifndef pmd_thp_or_huge
+#define pmd_thp_or_huge(pmd)   (pmd_huge(pmd) || pmd_trans_huge(pmd))
+#endif
+
 #ifndef pud_write
 static inline int pud_write(pud_t pud)
 {
diff --git a/mm/gup.c b/mm/gup.c
index df83182ec72d..eebae70d2465 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
if (!pmd_present(pmd))
return 0;
 
-   if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
-pmd_devmap(pmd))) {
+   if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
/* See gup_pte_range() */
if (pmd_protnone(pmd))
return 0;
-- 
2.41.0



[PATCH v2 02/13] mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static

2024-01-03 Thread peterx
From: Peter Xu 

It will be used outside hugetlb.c soon.

Signed-off-by: Peter Xu 
---
 include/linux/hugetlb.h | 9 +
 mm/hugetlb.c| 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c1ee640d87b1..e8eddd51fc17 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -174,6 +174,9 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, 
pgoff_t idx);
 
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
  unsigned long addr, pud_t *pud);
+bool hugetlbfs_pagecache_present(struct hstate *h,
+struct vm_area_struct *vma,
+unsigned long address);
 
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
@@ -1221,6 +1224,12 @@ static inline void hugetlb_register_node(struct node *node)
 static inline void hugetlb_unregister_node(struct node *node)
 {
 }
+
+static inline bool hugetlbfs_pagecache_present(
+struct hstate *h, struct vm_area_struct *vma, unsigned long address)
+{
+   return false;
+}
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0d262784ce60..bfb52bb8b943 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6017,8 +6017,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 /*
  * Return whether there is a pagecache page to back given address within VMA.
  */
-static bool hugetlbfs_pagecache_present(struct hstate *h,
-   struct vm_area_struct *vma, unsigned long address)
+bool hugetlbfs_pagecache_present(struct hstate *h,
+struct vm_area_struct *vma, unsigned long address)
 {
struct address_space *mapping = vma->vm_file->f_mapping;
pgoff_t idx = linear_page_index(vma, address);
-- 
2.41.0



[PATCH v2 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES

2024-01-03 Thread peterx
From: Peter Xu 

Introduce a config option that will be selected whenever huge leaves are
involved in pgtables (THP or hugetlbfs).  It is useful to mark with this
new config any code that can process either hugetlb or THP pages at any
level higher than the pte level.
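The guard enabled by the new symbol is the usual one; patch 10 in this
series uses exactly this pattern in mm/gup.c:

#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
/* real implementations that handle huge leaves (pmd/pud level) */
#else
/* stubs returning NULL */
#endif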

Signed-off-by: Peter Xu 
---
 mm/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index cb9d470f0bf7..9350ba180d52 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -904,6 +904,9 @@ config READ_ONLY_THP_FOR_FS
 
 endif # TRANSPARENT_HUGEPAGE
 
+config PGTABLE_HAS_HUGE_LEAVES
+   def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
+
 #
 # UP and nommu archs use km based percpu allocator
 #
-- 
2.41.0



[PATCH v2 00/13] mm/gup: Unify hugetlb, part 2

2024-01-03 Thread peterx
From: Peter Xu 

v2:
- Collect acks
- Patch 9:
  - Use READ_ONCE() to fetch pud entry [James]

rfc: https://lore.kernel.org/r/20231116012908.392077-1-pet...@redhat.com
v1:  https://lore.kernel.org/r/20231219075538.414708-1-pet...@redhat.com

This is v2 of the series, based on the latest mm-unstable (856325d361df).

The series removes the hugetlb slow gup path after a previous refactor work
[1], so that slow gup now uses the exact same path to process all kinds of
memory including hugetlb.

For the long term, we may want to remove most, if not all, call sites of
huge_pte_offset().  It'll be ideal if that API can be completely dropped
from arch hugetlb API.  This series is one small step towards merging
hugetlb specific codes into generic mm paths.  From that POV, this series
removes one reference to huge_pte_offset() out of many others.

One goal of such a route is that we can reconsider merging hugetlb features
like High Granularity Mapping (HGM).  It was not accepted in the past
because it may add lots of hugetlb specific codes and make the mm code even
harder to maintain.  With a merged codeset, features like HGM can hopefully
share some code with THP, legacy (PMD+) or modern (continuous PTEs).

To make it work, the generic slow gup code will need to at least understand
hugepd, which fast-gup already does.  Fortunately, it seems that's the only
major thing I need to teach slow GUP for it to share the common path for
now, besides normal huge PxD entries.  Non-gup can be more challenging, but
that's a question for later.

There's one major difference for slow gup in cont_pte / cont_pmd handling,
currently supported on three architectures (aarch64, riscv, ppc).  Before
the series, slow gup was able to recognize e.g. cont_pte entries with the
help of huge_pte_offset() when the hstate was around.  Now that's gone, but
it still works by looking up pgtable entries one by one.

It's not ideal, but hopefully this change should not yet affect major
workloads.  There's some more information in the commit message of the last
patch.  If this becomes a concern, we can consider teaching slow gup to
recognize cont pte/pmd entries, and that should recover the lost
performance.  But I doubt its necessity for now, so I kept it as simple as
it can be.

Test Done
=

This v1 went through the normal GUP smoke tests over different memory
types on archs (using VM instances): x86_64, aarch64, ppc64le.  For
aarch64, tested over 64KB cont_pte huge pages.  For ppc64le, tested over
16MB hugepd entries (Power8 hash MMU on 4K base page size).

Patch layout
=

Patch 1-7:Preparation works, or cleanups in relevant code paths
Patch 8-12:   Teach slow gup with all kinds of huge entries (pXd, hugepd)
Patch 13: Drop hugetlb_follow_page_mask()

More information can be found in the commit messages of each patch.  Any
comment will be welcomed.  Thanks.

[1] https://lore.kernel.org/all/20230628215310.73782-1-pet...@redhat.com

Peter Xu (13):
  mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
  mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
  mm: Provide generic pmd_thp_or_huge()
  mm: Make HPAGE_PXD_* macros even if !THP
  mm: Introduce vma_pgtable_walk_{begin|end}()
  mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  mm/gup: Refactor record_subpages() to find 1st small page
  mm/gup: Handle hugetlb for no_page_table()
  mm/gup: Cache *pudp in follow_pud_mask()
  mm/gup: Handle huge pud for follow_pud_mask()
  mm/gup: Handle huge pmd for follow_pmd_mask()
  mm/gup: Handle hugepd for follow_page()
  mm/gup: Handle hugetlb in the generic follow_page_mask code

 include/linux/huge_mm.h |  25 +--
 include/linux/hugetlb.h |  16 +-
 include/linux/mm.h  |   3 +
 include/linux/pgtable.h |   4 +
 mm/Kconfig  |   3 +
 mm/gup.c| 362 
 mm/huge_memory.c| 133 +--
 mm/hugetlb.c|  75 +
 mm/internal.h   |   7 +-
 mm/memory.c |  12 ++
 10 files changed, 342 insertions(+), 298 deletions(-)

-- 
2.41.0