Re: [PATCH] powerpc: don't select KFENCE on platform PPC_FSL_BOOK3E
On 2021/9/24 14:41, Christophe Leroy wrote:
> Le 24/09/2021 à 08:39, Liu Shixin a écrit :
>> On platform PPC_FSL_BOOK3E, all lowmem is managed by tlbcam. That means
>> we didn't really map the kfence pool with page granularity. Therefore,
>> if KFENCE is enabled, the system will hit the following panic:
>
> Could you please explain a bit more what the problem is?
>
> KFENCE has been implemented with the same logic as DEBUG_PAGEALLOC.
>
> DEBUG_PAGEALLOC is enabled on FSL_BOOK3E.
>
> In MMU_setup(), __map_without_ltlbs is set to 1 when KFENCE is enabled.
>
> __map_without_ltlbs should disable the use of tlbcam.
>
> So what's wrong really?
>
> Does DEBUG_PAGEALLOC work on FSL_BOOK3E?
>
> Thanks
> Christophe

Hi Christophe,

The phenomenon is that the kernel panics in kfence_protect_page() because
__kfence_pool is not mapped with page granularity. The problem is that in
mapin_ram(), the return value (i.e. base) of mmu_mapin_ram() is equal to
top. As a result, no level-2 page table is created for [base, top]. It
seems that __map_without_ltlbs did not disable the use of tlbcam.

I have tried forcing page tables for all lowmem; the panic then goes away,
but the kfence_test fails, which can be explained by the fact that tlbcam
is still used. By the way, DEBUG_PAGEALLOC works well on FSL_BOOK3E
without level-2 page tables.
Thanks,

>> BUG: Kernel NULL pointer dereference on read at 0x
>> Faulting instruction address: 0xc01de598
>> Oops: Kernel access of bad area, sig: 11 [#1]
>> BE PAGE_SIZE=4K SMP NR_CPUS=4 MPC8544 DS
>> Dumping ftrace buffer:
>> (ftrace buffer empty)
>> Modules linked in:
>> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3+ #298
>> NIP: c01de598 LR: c08ae9c4 CTR:
>> REGS: c0b4bea0 TRAP: 0300 Not tainted (5.12.0-rc3+)
>> MSR: 00021000 CR: 24000228 XER: 2000
>> DEAR: ESR:
>> GPR00: c08ae9c4 c0b4bf60 c0ad64e0 ef72 00021000 0200
>> GPR08: c0ad5000 0004 008fbb30
>> GPR16: c000
>> GPR24: c08ca004 c08ca004 c0b6a0e0 c0b6 c0b58f00 c085 c08ca000 ef72
>> NIP [c01de598] kfence_protect+0x44/0x6c
>> LR [c08ae9c4] kfence_init+0xfc/0x2a4
>> Call Trace:
>> [c0b4bf60] [efffe160] 0xefffe160 (unreliable)
>> [c0b4bf70] [c08ae9c4] kfence_init+0xfc/0x2a4
>> [c0b4bfb0] [c0894d3c] start_kernel+0x3bc/0x574
>> [c0b4bff0] [c470] set_ivor+0x14c/0x188
>> Instruction dump:
>> 7c0802a6 8109d594 546a653a 90010014 54630026 3920 7d48502e 2c0a
>> 41820010 554a0026 5469b53a 7d295214 <8149> 38831000 554a003c 9149
>> random: get_random_bytes called from print_oops_end_marker+0x40/0x78 with crng_init=0
>> ---[ end trace ]---
>>
>> Signed-off-by: Liu Shixin
>> ---
>>  arch/powerpc/Kconfig | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index d46db0bfb998..cffd57bcb5e4 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -185,7 +185,7 @@ config PPC
>>  	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
>>  	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
>>  	select HAVE_ARCH_KGDB
>> -	select HAVE_ARCH_KFENCE			if PPC32
>> +	select HAVE_ARCH_KFENCE			if PPC32 && !PPC_FSL_BOOK3E
>>  	select HAVE_ARCH_MMAP_RND_BITS
>>  	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
>>  	select HAVE_ARCH_NVRAM_OPS
>>
> .
Re: [PATCH] powerpc/paravirt: correct preempt debug splat in vcpu_is_preempted()
Michael Ellerman writes:
> Srikar Dronamraju writes:
>> * Michael Ellerman [2021-09-23 17:29:32]:
>>
>>> Nathan Lynch writes:
>>> > Srikar Dronamraju writes:
>>> >
>>> >> * Nathan Lynch [2021-09-22 11:01:12]:
>>> >>
>>> >>> Srikar Dronamraju writes:
> ...
>>> >> Or can I understand how debug_smp_processor_id() is useful if
>>> >> __smp_processor_id() is defined as raw_smp_processor_id()?
>>>
>>> debug_smp_processor_id() is useful on powerpc, as well as other arches,
>>> because it checks that we're in a context where the processor id won't
>>> change out from under us.
>>>
>>> eg. something like this is unsafe:
>>>
>>> 	int counts[NR_CPUS];
>>> 	int tmp, cpu;
>>>
>>> 	cpu = smp_processor_id();
>>> 	tmp = counts[cpu];
>>> 	<- preempted here and migrated to another CPU
>>> 	counts[cpu] = tmp + 1;
>>>
>>
>> If lets say the above call was replaced by raw_smp_processor_id(), how
>> would it avoid the preemption / migration to another CPU?
>>
>> Replacing it with raw_smp_processor_id() may avoid the debug splat, but
>> the underlying issue would still remain as is. No?
>
> Correct.
>
> Using raw_smp_processor_id() is generally the wrong answer. For this
> example the correct fix is to disable preemption around the code, eg:
>
> 	int counts[NR_CPUS];
> 	int tmp, cpu;
>
> 	preempt_disable();
>
> 	cpu = smp_processor_id();
> 	tmp = counts[cpu];
> 	counts[cpu] = tmp + 1;
>
> 	preempt_enable();
>
> For the original issue I think it is OK to use raw_smp_processor_id(),
> because we're already doing a racy check of whether another CPU has been
> preempted by the hypervisor.
> 	if (!is_kvm_guest()) {
> 		int first_cpu = cpu_first_thread_sibling(smp_processor_id());
>
> 		if (cpu_first_thread_sibling(cpu) == first_cpu)
> 			return false;
> 	}
>
> We could disable preemption around that, eg:
>
> 	if (!is_kvm_guest()) {
> 		int first_cpu;
> 		bool is_sibling;
>
> 		preempt_disable();
> 		first_cpu = cpu_first_thread_sibling(smp_processor_id());
> 		is_sibling = (cpu_first_thread_sibling(cpu) == first_cpu);
> 		preempt_enable();
>
> 		// Can be preempted here
>
> 		if (is_sibling)
> 			return false;
> 	}
>
> But before we return we could be preempted, and then is_sibling is no
> longer necessarily correct. So that doesn't really gain us anything.
>
> The only way to make that result stable is to disable preemption in the
> caller, but most callers don't want to AFAICS, because they know they're
> doing a racy check to begin with.

I'll add that one way I think about this is that when I choose
smp_processor_id(), I am making a claim about the context in which it is
used, and when I use raw_smp_processor_id() I am making a different claim.

smp_processor_id() => I am sampling the CPU index in a critical section
where the result equals the CPU that executes the entire critical section,
and I am relying on that property for the section's correctness. This
claim is verified by debug_smp_processor_id() when DEBUG_PREEMPT=y.

raw_smp_processor_id() => I am sampling the CPU index and using the result
in a way that is safe even if it differs from the CPU(s) on which the
surrounding code actually executes.

This framing doesn't cover all situations, but I've found it to be
generally useful.
[Bug 213837] "Kernel panic - not syncing: corrupted stack end detected inside scheduler" at building via distcc on a G5
https://bugzilla.kernel.org/show_bug.cgi?id=213837

--- Comment #12 from Erhard F. (erhar...@mailbox.org) ---
Created attachment 298963
  --> https://bugzilla.kernel.org/attachment.cgi?id=298963
dmesg (5.15-rc2 + patch, PowerMac G5 11,2) #1

Last stack trace with CONFIG_THREAD_SHIFT=14 however did reveal a bit more data:

[...]
stack: c23effb0: 28022284 (.".
stack: c23effc0: 0c00 3fff 94c95000 ..?...P.
stack: c23effd0: 4200 ffea B...
stack: c23effe0:
stack: c23efff0:
Kernel panic - not syncing: corrupted stack end detected inside scheduler
CPU: 1 PID: 2652 Comm: cc1plus Tainted: G W 5.15.0-rc2-PowerMacG5+ #4
Call Trace:
[c23ef7f0] [c05532d8] .dump_stack_lvl+0x98/0xe0 (unreliable)
[c23ef880] [c0069538] .panic+0x14c/0x3e8
[c23ef930] [c081d5a0] .__schedule+0xc0/0x874
[c23efa00] [c081dea0] .preempt_schedule_common+0x28/0x48
[c23efa80] [c081deec] .__cond_resched+0x2c/0x50
[c23efaf0] [c029579c] .dput+0x40/0x218
[c23efba0] [c0285204] .path_put+0x1c/0x34
[c23efc20] [c027eab8] .do_readlinkat+0xdc/0x124
[c23efcf0] [c027f310] .__se_sys_readlink+0x20/0x30
[c23efd60] [c0022850] .system_call_exception+0x1ac/0x1e4
[c23efe10] [c000b4cc] system_call_common+0xec/0x250
--- interrupt: c00 at 0x3fff95335500
NIP: 3fff95335500 LR: 3fff95273f6c CTR:
REGS: c23efe80 TRAP: 0c00 Tainted: G W (5.15.0-rc2-PowerMacG5+)
MSR: 9200f032 CR: 28022284 XER:
IRQMASK: 0
GPR00: 0055 3fffd5fb3600 3fff95424300 3fffd5fb4040
GPR04: 3fffd5fb3b20 03ff 62697473 2bcf8581
GPR08: d4307a80
GPR12: 3fff95946430 03ff 3fffd5fb4030
GPR16: 3fffd5fb3b20 3fffd5fb3700
GPR20: 002f 3fffd5fb3b10 3fff9593f7b8 3fffd5fb4080
GPR24: 0004 3fffd5fb3710 3fffd5fb3b20 3fffd5fb4040
GPR28: 3fffd5fb4538 3fffd5fb4084 3fffd5fb4040
NIP [3fff95335500] 0x3fff95335500
LR [3fff95273f6c] 0x3fff95273f6c
--- interrupt: c00

Rebooting in 40 seconds..

And another one:

[...]
stack: c0002cd77fb0: 28042822 (.("
stack: c0002cd77fc0: 0c00 3fff 9b2dd000 ..?..-..
stack: c0002cd77fd0: 4200 B...
stack: c0002cd77fe0:
stack: c0002cd77ff0:
Kernel panic - not syncing: corrupted stack end detected inside scheduler
CPU: 1 PID: 2713 Comm: cc1plus Tainted: G W 5.15.0-rc2-PowerMacG5+ #4
Call Trace:
[c0002cd76dd0] [c05532d8] .dump_stack_lvl+0x98/0xe0 (unreliable)
[c0002cd76e60] [c0069538] .panic+0x14c/0x3e8
[c0002cd76f10] [c081d5a0] .__schedule+0xc0/0x874
[c0002cd76fe0] [c081dea0] .preempt_schedule_common+0x28/0x48
[c0002cd77060] [c081deec] .__cond_resched+0x2c/0x50
[c0002cd770d0] [c0327848] .__ext4_handle_dirty_metadata+0x24/0x214
[c0002cd771a0] [c0352a6c] .ext4_mb_mark_diskspace_used+0x3e0/0x41c
[c0002cd77290] [c0355ae4] .ext4_mb_new_blocks+0x580/0xe10
[c0002cd773b0] [c033b2f0] .ext4_ind_map_blocks+0x63c/0xb28
[c0002cd775a0] [c0342bf4] .ext4_map_blocks+0x37c/0x588
[c0002cd77680] [c0342e64] ._ext4_get_block+0x64/0xec
[c0002cd77730] [c02c36a4] .__block_write_begin_int+0x188/0x4a4
[c0002cd77850] [c03480f8] .ext4_write_begin+0x2a8/0x3d0
[c0002cd77970] [c01c9170] .generic_perform_write+0xb8/0x1f4
[c0002cd77a60] [c0333d68] .ext4_buffered_write_iter+0xb8/0x154
[c0002cd77b00] [c0277a14] .new_sync_write+0x94/0xe8
[c0002cd77c00] [c0278d6c] .vfs_write+0x13c/0x140
[c0002cd77ca0] [c0278eb4] .ksys_write+0x78/0xc4
[c0002cd77d60] [c0022850] .system_call_exception+0x1ac/0x1e4
[c0002cd77e10] [c000b4cc] system_call_common+0xec/0x250
--- interrupt: c00 at 0x3fff9b804b00
NIP: 3fff9b804b00 LR: 3fff9b780d04 CTR:
REGS: c0002cd77e80 TRAP: 0c00 Tainted: G W (5.15.0-rc2-PowerMacG5+)
MSR: 9200d032 CR: 28042822 XER:
IRQMASK: 0
GPR00: 0004 3fffe21d3000 3fff9b8f6300 0001
GPR04: 26979780 1000 0001 0036
GPR08:
[Bug 213837] "Kernel panic - not syncing: corrupted stack end detected inside scheduler" at building via distcc on a G5
https://bugzilla.kernel.org/show_bug.cgi?id=213837

--- Comment #11 from Erhard F. (erhar...@mailbox.org) ---
Created attachment 298961
  --> https://bugzilla.kernel.org/attachment.cgi?id=298961
System.map (5.15-rc2 + patch + CONFIG_THREAD_SHIFT=15, PowerMac G5 11,2)

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.
You are watching someone on the CC list of the bug.
[Bug 213837] "Kernel panic - not syncing: corrupted stack end detected inside scheduler" at building via distcc on a G5
https://bugzilla.kernel.org/show_bug.cgi?id=213837

--- Comment #10 from Erhard F. (erhar...@mailbox.org) ---
Created attachment 298959
  --> https://bugzilla.kernel.org/attachment.cgi?id=298959
kernel .config (5.15-rc2 + CONFIG_THREAD_SHIFT=15, PowerMac G5 11,2)

(In reply to mpe from comment #8)
> You could also try changing CONFIG_THREAD_SHIFT to 15, that might keep
> the system running a bit longer and give us some other clues.

The stack seems just large enough with CONFIG_THREAD_SHIFT=15 to not run
into this bug. I let the G5 build stuff via distcc in a zram disk for a
day without an issue. With CONFIG_THREAD_SHIFT=14 I hit the bug within
minutes.

Just for completeness I'll upload the System.map and kernel .config with
CONFIG_THREAD_SHIFT=15.
Re: coherency issue observed after hotplug on POWER8
Hi Cascardo,

Thanks for reporting this.

Thadeu Lima de Souza Cascardo wrote:
> Hi, there.
>
> We have been investigating an issue we have observed on POWER8 POWERNV
> systems. When running the kernel selftests reuseport_bpf_cpu after a CPU
> hotplug, we see crashes, in different forms. [1]

Just to re-confirm: you are only seeing this on P8 powernv, and not in a
P8 guest/LPAR? I haven't been able to reproduce this on a firestone --
can you share more details about your power8 machine? Also, do you only
see this with ubuntu kernels, or are you also able to reproduce this with
the upstream tree?

> I managed to get xmon on that trap, and did some debugging. [2] I tried
> to dump the BPF JIT code, and it looks different when dumped from CPU#0
> and CPU#0x9f (the one that was hotplugged, offlined, then onlined).

Next time you reproduce this, can you try dumping the SLBs for the cpus
(command 'u' in xmon)?

> Here is my partial analysis [3]. Basically, the BPF JIT fills a page
> with invalid instructions (traps, in ppc64 case), and puts the BPF
> program in a random offset of the page. In the case of the hotplugged
> CPU, which was the one that compiled the program, the page had the
> expected contents (BPF program started at the offset used to run the
> program). On the other CPU (in many cases, CPU #0), the same memory
> address/page had different contents, with the program starting at a
> different offset.

From [3], I think fp->aux->jit_data can be NULL if there are subprogs.
But, I find it interesting that you don't always see the correct
bpf_func, as reported in comment #25. Can you also try dumping the full
bpf_prog structure (prog/fp) from xmon?

> Is this a case of a bug in the micro-architecture or the firmware when
> doing the hotplug? Can someone chime in?

It's possible that something is going wrong when offlining the cpu. Can
you try booting the kernel with 'powersave=off' and see if the problem
goes away?

> Notice that we can't reproduce the same issue on a POWER9 system.
>
> Thanks.
> Cascardo.
[1] https://bugs.launchpad.net/ubuntu-kernel-tests/+bug/1927076
[2] https://bugs.launchpad.net/ubuntu-kernel-tests/+bug/1927076/comments/29
[3] https://bugs.launchpad.net/ubuntu-kernel-tests/+bug/1927076/comments/30

- Naveen
[PATCH] powerpc: Activate CONFIG_STRICT_KERNEL_RWX by default
CONFIG_STRICT_KERNEL_RWX should be set by default on every architecture
(see https://github.com/KSPP/linux/issues/4).

On PPC32 we have to find a compromise between performance and/or wasted
memory and the selection of strict kernel rwx, because it implies either
smaller memory chunks or larger alignment between RO memory and RW memory.

For instance the 8xx maps memory with 8M pages. So either the limit
between RO and RW must be 8M aligned, or it falls back on 512k pages,
which implies more pressure on the TLB.

book3s/32 maps memory with BATs as much as possible. BATs can have any
power-of-two size between 128k and 256M, but we have only 4 to 8 BATs so
the alignment must be good enough to allow efficient use of the BATs and
avoid falling back on standard page mapping which would kill performance.

So let's go one step forward and make it the default, but still allow
users to unset it when wanted.

Cc: Kees Cook
Signed-off-by: Christophe Leroy
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index ba5b66189358..79332f51185d 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -148,6 +148,7 @@ config PPC
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select ARCH_OPTIONAL_KERNEL_RWX		if ARCH_HAS_STRICT_KERNEL_RWX
+	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC	if PPC32 || PPC_BOOK3S_64
-- 
2.31.1
Re: [PATCH v2 1/2] serial: 8250: SERIAL_8250_FSL should not default to y when compile-testing
On Fri, Sep 24, 2021 at 09:12:30AM +0200, Geert Uytterhoeven wrote:
> Commit b1442c55ce8977aa ("serial: 8250: extend compile-test coverage")
> added compile-test support to the Freescale 16550 driver. However, as
> SERIAL_8250_FSL is an invisible symbol, merely enabling compile-testing
> now enables this driver.
>
> Fix this by dropping the COMPILE_TEST default again, but making the
> SERIAL_8250_FSL symbol visible instead.
>
> Fixes: b1442c55ce8977aa ("serial: 8250: extend compile-test coverage")
> Signed-off-by: Geert Uytterhoeven
> ---
> v2:
>   - Split in two parts.

I realised that we can do better than this. I've prepared a patch that
preserves the old behaviour of always enabling the option on platforms
that may need it while also not enabling it by default when compile
testing.

Note that SERIAL_8250_FSL only enables a workaround for an erratum in the
Freescale UARTs in the 8250 driver (leaving the later added ACPI support
aside) and we shouldn't make it easier to disable it by mistake.

> ---
>  drivers/tty/serial/8250/Kconfig | 7 +++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
> index 808268edd2e82a45..0af96f3adab517f6 100644
> --- a/drivers/tty/serial/8250/Kconfig
> +++ b/drivers/tty/serial/8250/Kconfig
> @@ -361,9 +361,12 @@ config SERIAL_8250_BCM2835AUX
>  	  If unsure, say N.
>
>  config SERIAL_8250_FSL
> -	bool
> +	bool "Freescale 16550-style UART support (8250 based driver)"
>  	depends on SERIAL_8250_CONSOLE
> -	default PPC || ARM || ARM64 || COMPILE_TEST
> +	default PPC || ARM || ARM64
> +	help
> +	  Selecting this option will add support for the 16550-style serial
> +	  port hardware found on Freescale SoCs.
>
>  config SERIAL_8250_DW
>  	tristate "Support for Synopsys DesignWare 8250 quirks"

Johan
[PATCH 1/3] mm: Make generic arch_is_kernel_initmem_freed() do what it says
Commit 7a5da02de8d6 ("locking/lockdep: check for freed initmem in
static_obj()") added arch_is_kernel_initmem_freed(), which is supposed to
report whether an object is part of already freed init memory.

For the time being, the generic version of arch_is_kernel_initmem_freed()
always reports 'false', although free_initmem() is generically called on
all architectures.

Therefore, change the generic version of arch_is_kernel_initmem_freed()
to check whether free_initmem() has been called. If so, then check if a
given address falls into init memory.

In order to use function init_section_contains(), the function is moved
to the end of asm-generic/sections.h.

Cc: Gerald Schaefer
Signed-off-by: Christophe Leroy
---
 include/asm-generic/sections.h | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/asm-generic/sections.h b/include/asm-generic/sections.h
index d16302d3eb59..d1e5bb2c6b72 100644
--- a/include/asm-generic/sections.h
+++ b/include/asm-generic/sections.h
@@ -80,20 +80,6 @@ static inline int arch_is_kernel_data(unsigned long addr)
 }
 #endif
 
-/*
- * Check if an address is part of freed initmem. This is needed on architectures
- * with virt == phys kernel mapping, for code that wants to check if an address
- * is part of a static object within [_stext, _end]. After initmem is freed,
- * memory can be allocated from it, and such allocations would then have
- * addresses within the range [_stext, _end].
- */
-#ifndef arch_is_kernel_initmem_freed
-static inline int arch_is_kernel_initmem_freed(unsigned long addr)
-{
-	return 0;
-}
-#endif
-
 /**
  * memory_contains - checks if an object is contained within a memory region
  * @begin: virtual address of the beginning of the memory region
@@ -172,4 +158,21 @@ static inline bool is_kernel_rodata(unsigned long addr)
 	       addr < (unsigned long)__end_rodata;
 }
 
+/*
+ * Check if an address is part of freed initmem. This is needed on architectures
+ * with virt == phys kernel mapping, for code that wants to check if an address
+ * is part of a static object within [_stext, _end]. After initmem is freed,
+ * memory can be allocated from it, and such allocations would then have
+ * addresses within the range [_stext, _end].
+ */
+#ifndef arch_is_kernel_initmem_freed
+static inline int arch_is_kernel_initmem_freed(unsigned long addr)
+{
+	if (system_state < SYSTEM_RUNNING)
+		return 0;
+
+	return init_section_contains((void *)addr, 1);
+}
+#endif
+
 #endif /* _ASM_GENERIC_SECTIONS_H_ */
-- 
2.31.1
[PATCH 2/3] powerpc: Use generic version of arch_is_kernel_initmem_freed()
Generic version of arch_is_kernel_initmem_freed() now does the same as the
powerpc version. Remove the powerpc version.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/sections.h | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
index 6e4af4492a14..79cb7a25a5fb 100644
--- a/arch/powerpc/include/asm/sections.h
+++ b/arch/powerpc/include/asm/sections.h
@@ -6,21 +6,8 @@
 #include
 #include
 
-#define arch_is_kernel_initmem_freed arch_is_kernel_initmem_freed
-
 #include
 
-extern bool init_mem_is_free;
-
-static inline int arch_is_kernel_initmem_freed(unsigned long addr)
-{
-	if (!init_mem_is_free)
-		return 0;
-
-	return addr >= (unsigned long)__init_begin &&
-	       addr < (unsigned long)__init_end;
-}
-
 extern char __head_end[];
 
 #ifdef __powerpc64__
-- 
2.31.1
[PATCH 3/3] s390: Use generic version of arch_is_kernel_initmem_freed()
Generic version of arch_is_kernel_initmem_freed() now does the same as the
s390 version. Remove the s390 version.

Cc: Gerald Schaefer
Signed-off-by: Christophe Leroy
---
 arch/s390/include/asm/sections.h | 12 ------------
 arch/s390/mm/init.c              |  3 ---
 2 files changed, 15 deletions(-)

diff --git a/arch/s390/include/asm/sections.h b/arch/s390/include/asm/sections.h
index 85881dd48022..3fecaa4e8b74 100644
--- a/arch/s390/include/asm/sections.h
+++ b/arch/s390/include/asm/sections.h
@@ -2,20 +2,8 @@
 #ifndef _S390_SECTIONS_H
 #define _S390_SECTIONS_H
 
-#define arch_is_kernel_initmem_freed arch_is_kernel_initmem_freed
-
 #include
 
-extern bool initmem_freed;
-
-static inline int arch_is_kernel_initmem_freed(unsigned long addr)
-{
-	if (!initmem_freed)
-		return 0;
-	return addr >= (unsigned long)__init_begin &&
-	       addr < (unsigned long)__init_end;
-}
-
 /*
  * .boot.data section contains variables "shared" between the decompressor and
  * the decompressed kernel. The decompressor will store values in them, and
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index a04faf49001a..8c6f258a6183 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -58,8 +58,6 @@ unsigned long empty_zero_page, zero_page_mask;
 EXPORT_SYMBOL(empty_zero_page);
 EXPORT_SYMBOL(zero_page_mask);
 
-bool initmem_freed;
-
 static void __init setup_zero_pages(void)
 {
 	unsigned int order;
@@ -214,7 +212,6 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
-	initmem_freed = true;
 	__set_memory((unsigned long)_sinittext,
 		     (unsigned long)(_einittext - _sinittext) >> PAGE_SHIFT,
 		     SET_MEMORY_RW | SET_MEMORY_NX);
-- 
2.31.1
Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()
On 9/24/21 4:51 AM, Borislav Petkov wrote:
> On Fri, Sep 24, 2021 at 12:41:32PM +0300, Kirill A. Shutemov wrote:
>> On Thu, Sep 23, 2021 at 08:21:03PM +0200, Borislav Petkov wrote:
>>> On Thu, Sep 23, 2021 at 12:05:58AM +0300, Kirill A. Shutemov wrote:
>>>> Unless we find other way to guarantee RIP-relative access, we must use
>>>> fixup_pointer() to access any global variables.
>>>
>>> Yah, I've asked compiler folks about any guarantees we have wrt
>>> rip-relative addresses but it doesn't look good. Worst case, we'd have
>>> to do the fixup_pointer() thing.
>>>
>>> In the meantime, Tom and I did some more poking at this and here's a
>>> diff ontop.
>>>
>>> The direction being that we'll stick both the AMD and Intel
>>> *cc_platform_has() call into cc_platform.c for which instrumentation
>>> will be disabled so no issues with that.
>>>
>>> And that will keep all that querying all together in a single file.
>>
>> And still do cc_platform_has() calls in __startup_64() codepath?
>>
>> It's broken.
>>
>> Intel detection in cc_platform_has() relies on boot_cpu_data.x86_vendor
>> which is not initialized until early_cpu_init() in setup_arch(). Given
>> that X86_VENDOR_INTEL is 0 it leads to false-positive.
>
> Yeah, Tom, I had the same question yesterday.
>
> /me looks in his direction.
>
> :-)

Yup, all the things we talked about. But we also know that
cc_platform_has() gets called a few other times before boot_cpu_data is
initialized - so more false-positives.

For cc_platform_has() to work properly, given the desire to consolidate
the calls, IMO, Intel will have to come up with some early setting that
can be enabled and checked in place of boot_cpu_data or else you live
with false-positives.

Thanks,
Tom
[PATCH v1 2/3] powerpc: Stop using init_mem_is_free
Generic parts of the kernel, for instance core_kernel_text(), use
'system_state' to check whether init memory has been freed. Do the same
and stop using init_mem_is_free.

Cc: Michael Neuling
Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/sections.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
index 6e4af4492a14..dff55421f76b 100644
--- a/arch/powerpc/include/asm/sections.h
+++ b/arch/powerpc/include/asm/sections.h
@@ -14,7 +14,7 @@ extern bool init_mem_is_free;
 
 static inline int arch_is_kernel_initmem_freed(unsigned long addr)
 {
-	if (!init_mem_is_free)
+	if (system_state < SYSTEM_RUNNING)
 		return 0;
 
 	return addr >= (unsigned long)__init_begin &&
-- 
2.31.1
[PATCH v1 3/3] powerpc: Remove init_mem_is_free
init_mem_is_free is not used anymore. Remove it.

Cc: Michael Neuling
Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/sections.h | 2 --
 arch/powerpc/include/asm/setup.h    | 1 -
 arch/powerpc/mm/mem.c               | 2 --
 3 files changed, 5 deletions(-)

diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
index dff55421f76b..454b8434bfc4 100644
--- a/arch/powerpc/include/asm/sections.h
+++ b/arch/powerpc/include/asm/sections.h
@@ -10,8 +10,6 @@
 
 #include
 
-extern bool init_mem_is_free;
-
 static inline int arch_is_kernel_initmem_freed(unsigned long addr)
 {
 	if (system_state < SYSTEM_RUNNING)
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 6c1a7d217d1a..426a2d8d028f 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -9,7 +9,6 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
-extern bool init_mem_is_free;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
 struct device_node;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index c3c4e31462ec..5b1eae6c0356 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -26,7 +26,6 @@
 #include
 
 unsigned long long memory_limit;
-bool init_mem_is_free;
 
 unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
@@ -312,7 +311,6 @@ void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
 	mark_initmem_nx();
-	init_mem_is_free = true;
 	free_initmem_default(POISON_FREE_INITMEM);
 }
-- 
2.31.1
[PATCH v1 1/3] powerpc/code-patching: Improve verification of patchability
Today, patch_instruction() assumes that it is called exclusively on valid
addresses, and only checks that it is not called on an init address after
the init section has been freed.

Improve the verification by calling kernel_text_address() instead.
kernel_text_address() already includes a verification of initmem release.

Cc: Michael Neuling
Signed-off-by: Christophe Leroy
---
 arch/powerpc/lib/code-patching.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index f9a3019e37b4..20ec6648aacc 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -190,10 +190,9 @@ static int do_patch_instruction(u32 *addr, struct ppc_inst instr)
 int patch_instruction(u32 *addr, struct ppc_inst instr)
 {
 	/* Make sure we aren't patching a freed init section */
-	if (init_mem_is_free && init_section_contains(addr, 4)) {
-		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
+	if (!kernel_text_address((unsigned long)addr))
 		return 0;
-	}
+
 	return do_patch_instruction(addr, instr);
 }
 NOKPROBE_SYMBOL(patch_instruction);
-- 
2.31.1
Re: [PATCH AUTOSEL 5.10 01/26] ibmvnic: check failover_pending in login response
On Thu, Sep 23, 2021 at 09:33:47AM +0200, Pavel Machek wrote:
> Hi!
>
> Something went wrong with this series. I only see first 7 patches. I
> thought it might be local problem, but I only see 7 patches on lore...

Huh, yes, apparently git-send-email timed out. I'll resend. Thanks!

-- 
Thanks,
Sasha
[PATCH v3 2/2] powerpc/powermac: constify device_node in of_irq_parse_oldworld()
of_irq_parse_oldworld() does not modify the passed device_node, so make
it a pointer to const for safety. Drop the extern while modifying the
line.

Signed-off-by: Krzysztof Kozlowski
---
Changes since v1:
1. Drop extern.
---
 arch/powerpc/platforms/powermac/pic.c | 2 +-
 include/linux/of_irq.h                | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/platforms/powermac/pic.c b/arch/powerpc/platforms/powermac/pic.c
index 4921bccf0376..af5ca1f41bb1 100644
--- a/arch/powerpc/platforms/powermac/pic.c
+++ b/arch/powerpc/platforms/powermac/pic.c
@@ -384,7 +384,7 @@ static void __init pmac_pic_probe_oldstyle(void)
 #endif
 }
 
-int of_irq_parse_oldworld(struct device_node *device, int index,
+int of_irq_parse_oldworld(const struct device_node *device, int index,
 			  struct of_phandle_args *out_irq)
 {
 	const u32 *ints = NULL;
diff --git a/include/linux/of_irq.h b/include/linux/of_irq.h
index aaf219bd0354..83fccd0c9bba 100644
--- a/include/linux/of_irq.h
+++ b/include/linux/of_irq.h
@@ -20,12 +20,12 @@ typedef int (*of_irq_init_cb_t)(struct device_node *, struct device_node *);
 #if defined(CONFIG_PPC32) && defined(CONFIG_PPC_PMAC)
 extern unsigned int of_irq_workarounds;
 extern struct device_node *of_irq_dflt_pic;
-extern int of_irq_parse_oldworld(struct device_node *device, int index,
-				 struct of_phandle_args *out_irq);
+int of_irq_parse_oldworld(const struct device_node *device, int index,
+			  struct of_phandle_args *out_irq);
 #else /* CONFIG_PPC32 && CONFIG_PPC_PMAC */
 #define of_irq_workarounds (0)
 #define of_irq_dflt_pic (NULL)
-static inline int of_irq_parse_oldworld(struct device_node *device, int index,
+static inline int of_irq_parse_oldworld(const struct device_node *device, int index,
 					struct of_phandle_args *out_irq)
 {
 	return -EINVAL;
-- 
2.30.2
[PATCH v3 1/2] powerpc/powermac: add missing g5_phy_disable_cpu1() declaration
g5_phy_disable_cpu1() is used outside of platforms/powermac/feature.c, so
it should have a declaration to fix a W=1 warning:

  arch/powerpc/platforms/powermac/feature.c:1533:6: error: no previous
  prototype for ‘g5_phy_disable_cpu1’ [-Werror=missing-prototypes]

Signed-off-by: Krzysztof Kozlowski
---
Changes since v2:
1. Put declaration in powermac/pmac.h

Changes since v1:
1. Drop declaration in powermac/smp.c
---
 arch/powerpc/platforms/powermac/pmac.h | 2 ++
 arch/powerpc/platforms/powermac/smp.c  | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/powermac/pmac.h b/arch/powerpc/platforms/powermac/pmac.h
index 0d715db434dc..be528e0e507f 100644
--- a/arch/powerpc/platforms/powermac/pmac.h
+++ b/arch/powerpc/platforms/powermac/pmac.h
@@ -14,6 +14,8 @@ struct rtc_time;
 
 extern int pmac_newworld;
 
+void g5_phy_disable_cpu1(void);
+
 extern long pmac_time_init(void);
 extern time64_t pmac_get_boot_time(void);
 extern void pmac_get_rtc_time(struct rtc_time *);
diff --git a/arch/powerpc/platforms/powermac/smp.c b/arch/powerpc/platforms/powermac/smp.c
index 3256a316e884..5d0626f432d5 100644
--- a/arch/powerpc/platforms/powermac/smp.c
+++ b/arch/powerpc/platforms/powermac/smp.c
@@ -875,8 +875,6 @@ static int smp_core99_cpu_online(unsigned int cpu)
 
 static void __init smp_core99_bringup_done(void)
 {
-	extern void g5_phy_disable_cpu1(void);
-
 	/* Close i2c bus if it was used for tb sync */
 	if (pmac_tb_clock_chip_host)
 		pmac_i2c_close(pmac_tb_clock_chip_host);
-- 
2.30.2
Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()
On Fri, Sep 24, 2021 at 12:41:32PM +0300, Kirill A. Shutemov wrote:
> On Thu, Sep 23, 2021 at 08:21:03PM +0200, Borislav Petkov wrote:
> > On Thu, Sep 23, 2021 at 12:05:58AM +0300, Kirill A. Shutemov wrote:
> > > Unless we find other way to guarantee RIP-relative access, we must use
> > > fixup_pointer() to access any global variables.
> >
> > Yah, I've asked compiler folks about any guarantees we have wrt
> > rip-relative addresses but it doesn't look good. Worst case, we'd have
> > to do the fixup_pointer() thing.
> >
> > In the meantime, Tom and I did some more poking at this and here's a
> > diff ontop.
> >
> > The direction being that we'll stick both the AMD and Intel
> > *cc_platform_has() call into cc_platform.c for which instrumentation
> > will be disabled so no issues with that.
> >
> > And that will keep all that querying all together in a single file.
>
> And still do cc_platform_has() calls in __startup_64() codepath?
>
> It's broken.
>
> Intel detection in cc_platform_has() relies on boot_cpu_data.x86_vendor
> which is not initialized until early_cpu_init() in setup_arch(). Given
> that X86_VENDOR_INTEL is 0 it leads to false-positive.

Yeah, Tom, I had the same question yesterday.

/me looks in his direction.

:-)

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()
On Thu, Sep 23, 2021 at 08:21:03PM +0200, Borislav Petkov wrote:
> On Thu, Sep 23, 2021 at 12:05:58AM +0300, Kirill A. Shutemov wrote:
> > Unless we find other way to guarantee RIP-relative access, we must use
> > fixup_pointer() to access any global variables.
>
> Yah, I've asked compiler folks about any guarantees we have wrt
> rip-relative addresses but it doesn't look good. Worst case, we'd have
> to do the fixup_pointer() thing.
>
> In the meantime, Tom and I did some more poking at this and here's a
> diff ontop.
>
> The direction being that we'll stick both the AMD and Intel
> *cc_platform_has() call into cc_platform.c for which instrumentation
> will be disabled so no issues with that.
>
> And that will keep all that querying all together in a single file.

And still do cc_platform_has() calls in the __startup_64() codepath?

It's broken.

Intel detection in cc_platform_has() relies on boot_cpu_data.x86_vendor,
which is not initialized until early_cpu_init() in setup_arch(). Given
that X86_VENDOR_INTEL is 0, it leads to a false positive.

I think open-coding these two calls is the way forward. Maybe also move
the check from sme_encrypt_kernel() to __startup_64().

-- 
 Kirill A. Shutemov
[PATCH v2 2/2] serial: 8250: SERIAL_8250_FSL should depend on Freescale platforms
The Freescale 16550-style UART is only present on some Freescale SoCs.
Hence tighten the dependencies to prevent asking the user about this
driver, and possibly defaulting it to be enabled, when configuring a
kernel without appropriate Freescale SoC or ACPI support.

Signed-off-by: Geert Uytterhoeven
---
v2:
  - Split in two parts.
---
 drivers/tty/serial/8250/Kconfig | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
index 0af96f3adab517f6..a2978b31144e94f2 100644
--- a/drivers/tty/serial/8250/Kconfig
+++ b/drivers/tty/serial/8250/Kconfig
@@ -363,7 +363,8 @@ config SERIAL_8250_BCM2835AUX
 config SERIAL_8250_FSL
 	bool "Freescale 16550-style UART support (8250 based driver)"
 	depends on SERIAL_8250_CONSOLE
-	default PPC || ARM || ARM64
+	depends on FSL_SOC || ARCH_LAYERSCAPE || SOC_LS1021A || (ARM64 && ACPI) || COMPILE_TEST
+	default FSL_SOC || ARCH_LAYERSCAPE || SOC_LS1021A || (ARM64 && ACPI)
 	help
 	  Selecting this option will add support for the 16550-style serial
 	  port hardware found on Freescale SoCs.
--
2.25.1
[PATCH v2 1/2] serial: 8250: SERIAL_8250_FSL should not default to y when compile-testing
Commit b1442c55ce8977aa ("serial: 8250: extend compile-test coverage")
added compile-test support to the Freescale 16550 driver. However, as
SERIAL_8250_FSL is an invisible symbol, merely enabling compile-testing
now enables this driver.

Fix this by dropping the COMPILE_TEST default again, but making the
SERIAL_8250_FSL symbol visible instead.

Fixes: b1442c55ce8977aa ("serial: 8250: extend compile-test coverage")
Signed-off-by: Geert Uytterhoeven
---
v2:
  - Split in two parts.
---
 drivers/tty/serial/8250/Kconfig | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
index 808268edd2e82a45..0af96f3adab517f6 100644
--- a/drivers/tty/serial/8250/Kconfig
+++ b/drivers/tty/serial/8250/Kconfig
@@ -361,9 +361,12 @@ config SERIAL_8250_BCM2835AUX
 	  If unsure, say N.

 config SERIAL_8250_FSL
-	bool
+	bool "Freescale 16550-style UART support (8250 based driver)"
 	depends on SERIAL_8250_CONSOLE
-	default PPC || ARM || ARM64 || COMPILE_TEST
+	default PPC || ARM || ARM64
+	help
+	  Selecting this option will add support for the 16550-style serial
+	  port hardware found on Freescale SoCs.

 config SERIAL_8250_DW
 	tristate "Support for Synopsys DesignWare 8250 quirks"
--
2.25.1
Re: [PATCH v2 9/9] iommu/vt-d: Use pci core's DVSEC functionality
On Thu, Sep 23, 2021 at 10:26:47AM -0700, Ben Widawsky wrote:
>  */
>  static int siov_find_pci_dvsec(struct pci_dev *pdev)
>  {
> +	return pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_INTEL, 5);
>  }

I think the siov_find_pci_dvsec helper is pretty pointless now and can
be folded into its only caller.

And independent of that: this capability really needs a symbolic name.
Especially for a vendor like Intel that might have a few, there should
be a list of them somewhere.
Re: [PATCH] powerpc: don't select KFENCE on platform PPC_FSL_BOOK3E
Le 24/09/2021 à 08:39, Liu Shixin a écrit :
> On platform PPC_FSL_BOOK3E, all lowmem is managed by tlbcam. That means
> we didn't really map the kfence pool with page granularity. Therefore,
> if KFENCE is enabled, the system will hit the following panic:

Could you please explain a bit more what the problem is ?

KFENCE has been implemented with the same logic as DEBUG_PAGEALLOC.

DEBUG_PAGEALLOC is enabled on FSL_BOOK3E.

In MMU_setup(), __map_without_ltlbs is set to 1 when KFENCE is enabled.

__map_without_ltlbs should disable the use of tlbcam.

So what's wrong really ?

Does DEBUG_PAGEALLOC work on FSL_BOOK3E ?

Thanks
Christophe

> BUG: Kernel NULL pointer dereference on read at 0x
> Faulting instruction address: 0xc01de598
> Oops: Kernel access of bad area, sig: 11 [#1]
> BE PAGE_SIZE=4K SMP NR_CPUS=4 MPC8544 DS
> Dumping ftrace buffer:
> (ftrace buffer empty)
> Modules linked in:
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3+ #298
> NIP: c01de598 LR: c08ae9c4 CTR:
> REGS: c0b4bea0 TRAP: 0300 Not tainted (5.12.0-rc3+)
> MSR: 00021000 CR: 24000228 XER: 2000
> DEAR: ESR:
> GPR00: c08ae9c4 c0b4bf60 c0ad64e0 ef72 00021000 0200
> GPR08: c0ad5000 0004 008fbb30
> GPR16: c000
> GPR24: c08ca004 c08ca004 c0b6a0e0 c0b6 c0b58f00 c085 c08ca000 ef72
> NIP [c01de598] kfence_protect+0x44/0x6c
> LR [c08ae9c4] kfence_init+0xfc/0x2a4
> Call Trace:
> [c0b4bf60] [efffe160] 0xefffe160 (unreliable)
> [c0b4bf70] [c08ae9c4] kfence_init+0xfc/0x2a4
> [c0b4bfb0] [c0894d3c] start_kernel+0x3bc/0x574
> [c0b4bff0] [c470] set_ivor+0x14c/0x188
> Instruction dump:
> 7c0802a6 8109d594 546a653a 90010014 54630026 3920 7d48502e 2c0a
> 41820010 554a0026 5469b53a 7d295214 <8149> 38831000 554a003c 9149
> random: get_random_bytes called from print_oops_end_marker+0x40/0x78 with crng_init=0
> ---[ end trace ]---
>
> Signed-off-by: Liu Shixin
> ---
>  arch/powerpc/Kconfig | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index d46db0bfb998..cffd57bcb5e4 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -185,7 +185,7 @@ config PPC
>  	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
>  	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
>  	select HAVE_ARCH_KGDB
> -	select HAVE_ARCH_KFENCE			if PPC32
> +	select HAVE_ARCH_KFENCE			if PPC32 && !PPC_FSL_BOOK3E
>  	select HAVE_ARCH_MMAP_RND_BITS
>  	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
>  	select HAVE_ARCH_NVRAM_OPS
[PATCH] powerpc: don't select KFENCE on platform PPC_FSL_BOOK3E
On platform PPC_FSL_BOOK3E, all lowmem is managed by tlbcam. That means
we didn't really map the kfence pool with page granularity. Therefore,
if KFENCE is enabled, the system will hit the following panic:

BUG: Kernel NULL pointer dereference on read at 0x
Faulting instruction address: 0xc01de598
Oops: Kernel access of bad area, sig: 11 [#1]
BE PAGE_SIZE=4K SMP NR_CPUS=4 MPC8544 DS
Dumping ftrace buffer:
(ftrace buffer empty)
Modules linked in:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3+ #298
NIP: c01de598 LR: c08ae9c4 CTR:
REGS: c0b4bea0 TRAP: 0300 Not tainted (5.12.0-rc3+)
MSR: 00021000 CR: 24000228 XER: 2000
DEAR: ESR:
GPR00: c08ae9c4 c0b4bf60 c0ad64e0 ef72 00021000 0200
GPR08: c0ad5000 0004 008fbb30
GPR16: c000
GPR24: c08ca004 c08ca004 c0b6a0e0 c0b6 c0b58f00 c085 c08ca000 ef72
NIP [c01de598] kfence_protect+0x44/0x6c
LR [c08ae9c4] kfence_init+0xfc/0x2a4
Call Trace:
[c0b4bf60] [efffe160] 0xefffe160 (unreliable)
[c0b4bf70] [c08ae9c4] kfence_init+0xfc/0x2a4
[c0b4bfb0] [c0894d3c] start_kernel+0x3bc/0x574
[c0b4bff0] [c470] set_ivor+0x14c/0x188
Instruction dump:
7c0802a6 8109d594 546a653a 90010014 54630026 3920 7d48502e 2c0a
41820010 554a0026 5469b53a 7d295214 <8149> 38831000 554a003c 9149
random: get_random_bytes called from print_oops_end_marker+0x40/0x78 with crng_init=0
---[ end trace ]---

Signed-off-by: Liu Shixin
---
 arch/powerpc/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d46db0bfb998..cffd57bcb5e4 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -185,7 +185,7 @@ config PPC
 	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
 	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
 	select HAVE_ARCH_KGDB
-	select HAVE_ARCH_KFENCE			if PPC32
+	select HAVE_ARCH_KFENCE			if PPC32 && !PPC_FSL_BOOK3E
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
 	select HAVE_ARCH_NVRAM_OPS
--
2.18.0.huawei.25