[PATCH] powerpc/mce: log the error for all unrecoverable errors

2022-11-13 Thread Ganesh Goudar
machine_check_log_err() is not called for all unrecoverable errors, so we fail to log them. Raise irq work in save_mce_event() for unrecoverable errors, so that the error is logged from the MCE event handling block in the timer handler. Signed-off-by: Ganesh Goudar ---
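A minimal sketch of the deferral idea using the generic irq_work API; the work item name and the surrounding function are placeholders, not the actual arch/powerpc/kernel/mce.c symbols:

#include <linux/irq_work.h>
#include <asm/mce.h>

static struct irq_work mce_ue_log_work;   /* hypothetical; initialized elsewhere with the logging callback */

static void save_mce_event_sketch(struct machine_check_event *evt)
{
        /* ... save evt into the per-CPU MCE event slot as before ... */

        /*
         * An unrecoverable error may never reach the normal event-handling
         * path, so queue the logging work here and let it run later from a
         * safe (irq_work / timer) context.
         */
        if (evt->disposition == MCE_DISPOSITION_NOT_RECOVERED)
                irq_work_queue(&mce_ue_log_work);
}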

Re: [PATCH linux next v2] scsi: ibmvfc: use sysfs_emit() instead of scnprintf()

2022-11-13 Thread Christophe Leroy
On 14/11/2022 at 04:38, yang.yan...@zte.com.cn wrote: > From: Xu Panda > > Replace the open-coded scnprintf() with sysfs_emit() to

Re: [PATCH v5 02/16] powerpc: Override __ALIGN and __ALIGN_STR macros

2022-11-13 Thread Sathvika Vasireddy
Hi Peter, On 03/11/22 14:18, Peter Zijlstra wrote: On Wed, Nov 02, 2022 at 12:35:07PM +, Christophe Leroy wrote: On 28/10/2022 at 16:33, Sathvika Vasireddy wrote: In a subsequent patch, we would want to annotate powerpc assembly functions with the SYM_FUNC_START_LOCAL macro. This macro

[PATCH linux next v2] scsi: ibmvfc: use sysfs_emit() instead of scnprintf()

2022-11-13 Thread yang.yang29
From: Xu Panda Replace the open-coded scnprintf() with sysfs_emit() to simplify the code. --- change for v2 - align code --- Signed-off-by: Xu Panda Signed-off-by: Yang Yang --- drivers/scsi/ibmvscsi/ibmvfc.c | 20 1 file changed, 8 insertions(+), 12 deletions(-) diff --git
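For reference, the usual shape of such a conversion in a sysfs show() callback (illustrative device and field names, not the actual ibmvfc hunk):

static ssize_t example_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
        struct example_priv *p = dev_get_drvdata(dev);  /* hypothetical driver data */

        /* before: return scnprintf(buf, PAGE_SIZE, "%d\n", p->value); */
        return sysfs_emit(buf, "%d\n", p->value);       /* after */
}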

Re: [PATCH v5 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

2022-11-13 Thread Anshuman Khandual
On 10/28/22 13:42, Yicong Yang wrote: > +static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm) > +{ > + /* > + * TLB batched flush is proved to be beneficial for systems with large > + * number of CPUs, especially systems with more than 8 CPUs. TLB shootdown > + *
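The helper being quoted roughly amounts to a CPU-count gate; a sketch assuming the 8-CPU threshold from the comment is what gets checked (the final patch may well differ):

static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
        /* Batched/deferred TLB shootdown only pays off with many CPUs online. */
        if (num_online_cpus() <= 8)
                return false;

        return true;
}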

[PATCH v2 17/17] powerpc/qspinlock: provide accounting and options for sleepy locks

2022-11-13 Thread Nicholas Piggin
Finding the owner or a queued waiter on a lock with a preempted vcpu is indicative of an oversubscribed guest causing the lock to get into trouble. Provide some options to detect this situation and have new CPUs avoid queueing for a longer time (more steal iterations) to minimise the problems

[PATCH v2 16/17] powerpc/qspinlock: allow indefinite spinning on a preempted owner

2022-11-13 Thread Nicholas Piggin
Provide an option that holds off queueing indefinitely while the lock owner is preempted. This could reduce queueing latencies for very overcommitted vcpu situations. This is disabled by default. Signed-off-by: Nicholas Piggin --- arch/powerpc/lib/qspinlock.c | 74

[PATCH v2 15/17] powerpc/qspinlock: reduce remote node steal spins

2022-11-13 Thread Nicholas Piggin
Allow for a reduction in the number of times a CPU from a different node than the owner can attempt to steal the lock before queueing. This could bias the transfer behaviour of the lock across the machine and reduce NUMA crossings. Signed-off-by: Nicholas Piggin --- arch/powerpc/lib/qspinlock.c

[PATCH v2 14/17] powerpc/qspinlock: use spin_begin/end API

2022-11-13 Thread Nicholas Piggin
Use the spin_begin/spin_cpu_relax/spin_end APIs in qspinlock, which helps prevent threads from issuing a lot of expensive priority nops that may have little effect because the low- and medium-priority nops execute back to back. Signed-off-by: Nicholas Piggin --- arch/powerpc/lib/qspinlock.c | 41
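The usage pattern of that API in a polling loop looks roughly like this (a sketch, not the actual qspinlock hunk):

static inline void poll_until_clear(u32 *word)
{
        spin_begin();                   /* drop to low SMT priority once, up front */
        while (READ_ONCE(*word))
                spin_cpu_relax();       /* stay at low priority while polling */
        spin_end();                     /* restore medium priority before doing real work */
}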

[PATCH v2 13/17] powerpc/qspinlock: trylock and initial lock attempt may steal

2022-11-13 Thread Nicholas Piggin
This gives trylock slightly more strength, and it also gives most of the benefit of passing 'val' back through the slowpath without the complexity. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/qspinlock.h | 44 +++- arch/powerpc/lib/qspinlock.c |

[PATCH v2 12/17] powerpc/qspinlock: add ability to prod new queue head CPU

2022-11-13 Thread Nicholas Piggin
After the head of the queue acquires the lock, it releases the next waiter in the queue to become the new head. Add an option to prod the new head if its vCPU was preempted. This may only have an effect if queue waiters are yielding. Disable this option by default for now, i.e., no logical

[PATCH v2 11/17] powerpc/qspinlock: allow propagation of yield CPU down the queue

2022-11-13 Thread Nicholas Piggin
Having all CPUs poll the lock word for the owner CPU that should be yielded to defeats most of the purpose of using MCS queueing for scalability. Yet it may be desirable for queued waiters to yield to a preempted owner. s390 addresses this problem by having queued waiters sample the lock word

[PATCH v2 10/17] powerpc/qspinlock: allow stealing when head of queue yields

2022-11-13 Thread Nicholas Piggin
If the head of queue is preventing stealing but it finds the owner vCPU is preempted, it will yield its cycles to the owner which could cause it to become preempted. Add an option to re-allow stealers before yielding, and disallow them again after returning from the yield. Disable this option by

[PATCH v2 09/17] powerpc/qspinlock: implement option to yield to previous node

2022-11-13 Thread Nicholas Piggin
Queued waiters which are not at the head of the queue don't spin on the lock word but their qnode lock word, waiting for the previous queued CPU to release them. Add an option which allows these waiters to yield to the previous CPU if its vCPU is preempted. Signed-off-by: Nicholas Piggin ---

[PATCH v2 08/17] powerpc/qspinlock: paravirt yield to lock owner

2022-11-13 Thread Nicholas Piggin
Waiters spinning on the lock word should yield to the lock owner if the vCPU is preempted. This improves performance when the hypervisor has oversubscribed physical CPUs. Signed-off-by: Nicholas Piggin --- arch/powerpc/lib/qspinlock.c | 101 ++- 1 file changed,
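The core of the yield-to-owner idea, expressed with the existing powerpc paravirt helpers; a sketch with the lock-word re-check elided, not the patch's exact code:

static void yield_to_locked_owner_sketch(int owner_cpu)
{
        u32 yield_count = yield_count_of(owner_cpu);

        /* An even yield count means the owner's vCPU is currently running. */
        if (!(yield_count & 1))
                return;

        /* A real implementation re-checks the lock word here before yielding. */
        yield_to_preempted(owner_cpu, yield_count);
}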

[PATCH v2 07/17] powerpc/qspinlock: store owner CPU in lock word

2022-11-13 Thread Nicholas Piggin
Store the owner CPU number in the lock word so it may be yielded to, as powerpc's paravirtualised simple spinlocks do. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/qspinlock.h | 9 - arch/powerpc/include/asm/qspinlock_types.h | 10 ++

[PATCH v2 06/17] powerpc/qspinlock: theft prevention to control latency

2022-11-13 Thread Nicholas Piggin
Give the queue head the ability to stop stealers. After a number of spins without successfully acquiring the lock, the queue head employs this, which ensures it is the next owner. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/qspinlock_types.h | 10 -

[PATCH v2 05/17] powerpc/qspinlock: allow new waiters to steal the lock before queueing

2022-11-13 Thread Nicholas Piggin
Allow new waiters a number of spins on the lock word before queueing, which particularly helps paravirt performance when physical CPUs are oversubscribed. Signed-off-by: Nicholas Piggin --- arch/powerpc/lib/qspinlock.c | 159 ++- 1 file changed, 140

[PATCH v2 04/17] powerpc/qspinlock: convert atomic operations to assembly

2022-11-13 Thread Nicholas Piggin
This uses more efficient ll/sc-style access patterns (rather than cmpxchg), and also sets the EH=1 lock hint on those operations which acquire ownership of the lock. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/qspinlock.h | 24 +---
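An illustrative larx/stcx. acquire with the EH=1 hint set (sketch only; the real patch's asm, barriers and constraints differ):

static __always_inline int trylock_sketch(u32 *lock, u32 new)
{
        u32 old;

        asm volatile(
"1:     lwarx   %0,0,%1,1       # load-reserve, EH=1: we intend to take the lock\n"
"       cmpwi   0,%0,0\n"
"       bne     2f              # already held, give up\n"
"       stwcx.  %2,0,%1         # store-conditional the new value\n"
"       bne-    1b              # lost the reservation, retry\n"
"2:"
        : "=&r" (old)
        : "r" (lock), "r" (new)
        : "cr0", "memory");

        return old == 0;        /* acquire barrier omitted in this sketch */
}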

[PATCH v2 03/17] powerpc/qspinlock: use a half-word store to unlock to avoid larx/stcx.

2022-11-13 Thread Nicholas Piggin
The first 16 bits of the lock are only modified by the owner, and other modifications always use atomic operations on the entire 32 bits, so unlocks can use plain stores on the 16 bits. This is the same kind of optimisation done by core qspinlock code. Signed-off-by: Nicholas Piggin ---
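The unlock fast path this enables, roughly; a sketch assuming a layout where the owner-written half is addressable as a u16 (endianness details glossed over, names not the patch's):

struct ppc_qspinlock_sketch {
        union {
                u32 val;
                struct {
                        u16 locked;     /* only ever written by the owner */
                        u16 tail;       /* queue tail, modified atomically on val */
                };
        };
};

static inline void unlock_sketch(struct ppc_qspinlock_sketch *lock)
{
        smp_store_release(&lock->locked, 0);    /* plain half-word store, no larx/stcx. */
}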

[PATCH v2 02/17] powerpc/qspinlock: add mcs queueing for contended waiters

2022-11-13 Thread Nicholas Piggin
This forms the basis of the qspinlock slow path. Like generic qspinlocks and unlike the vanilla MCS algorithm, the lock owner does not participate in the queue, only waiters. The first waiter spins on the lock word, then when the lock is released it takes ownership and unqueues the next waiter.
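The shape of a per-CPU MCS queue node used by such a scheme (names illustrative, not the patch's):

struct qnode_sketch {
        struct qnode_sketch *next;      /* next waiter behind us in the queue */
        int locked;                     /* set by the waiter ahead of us to release us */
};

/* Waiters behind the head spin on their own node, not on the lock word:
 *      while (!READ_ONCE(node->locked))
 *              cpu_relax();
 */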

[PATCH v2 01/17] powerpc/qspinlock: powerpc qspinlock implementation

2022-11-13 Thread Nicholas Piggin
Add a powerpc specific implementation of queued spinlocks. This is the build framework with a very simple (non-queued) spinlock implementation to begin with. Later changes add queueing, and other features and optimisations one-at-a-time. It is done this way to more easily see how the queued
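The kind of minimal, non-queued lock the series starts from, sketched on top of the generic atomics:

static inline void simple_spin_lock(atomic_t *lock)
{
        while (atomic_cmpxchg_acquire(lock, 0, 1) != 0)
                cpu_relax();            /* no queueing yet: everyone spins on the word */
}

static inline void simple_spin_unlock(atomic_t *lock)
{
        atomic_set_release(lock, 0);
}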

[PATCH v2 01a/17] powerpc/qspinlock: prepare powerpc qspinlock implementation

2022-11-13 Thread Nicholas Piggin
This is a merge placeholder with a conflicting series of patches to generic qspinlocks. Not intended to be standalone, this should be applied before patch 1. diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild index bcf95ce0964f..813a8c3405ad 100644 ---

[PATCH v2 00/17] powerpc: alternate queued spinlock implementation

2022-11-13 Thread Nicholas Piggin
This replaces the generic queued spinlock code (like s390 does) with our own implementation. There is an extra shim patch 1a to get the series to apply. Generic PV qspinlock code is causing latency / starvation regressions on large systems that are resulting in hard lockups reported (mostly in

Re: [PATCH] powerpc/kernel: fix repeated words in comments

2022-11-13 Thread Christophe Leroy
On 12/11/2022 at 08:58, wangjianli wrote: > Delete the redundant word 'the'. > > Signed-off-by: wangjianli > --- > arch/powerpc/kernel/process.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c > index

Re: [RFC PATCH 3/3] powerpc/bpf: use bpf_jit_binary_pack_[alloc|finalize|free]

2022-11-13 Thread Christophe Leroy
On 10/11/2022 at 19:43, Hari Bathini wrote: > Use bpf_jit_binary_pack_alloc in powerpc jit. The jit engine first > writes the program to the rw buffer. When the jit is done, the program > is copied to the final location with bpf_jit_binary_pack_finalize. > With multiple jit_subprogs,

Re: [RFC PATCH 2/3] powerpc/bpf: implement bpf_arch_text_invalidate for bpf_prog_pack

2022-11-13 Thread Christophe Leroy
On 10/11/2022 at 19:43, Hari Bathini wrote: > Implement bpf_arch_text_invalidate and use it to fill unused part of > the bpf_prog_pack with trap instructions when a BPF program is freed. Same here, although patch_instruction() is nice for a first try, it is not the solution in the long run.

Re: [RFC PATCH 1/3] powerpc/bpf: implement bpf_arch_text_copy

2022-11-13 Thread Christophe Leroy
On 10/11/2022 at 19:43, Hari Bathini wrote: > bpf_arch_text_copy is used to dump JITed binary to RX page, allowing > multiple BPF programs to share the same page. Using patch_instruction > to implement it. Using patch_instruction() is nice for a quick implementation, but it is probably
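A rough shape of a patch_instruction()-based copy loop, for context; this is a sketch (prefixed instructions, alignment and error paths ignored), not the actual patch:

void *bpf_arch_text_copy_sketch(void *dst, void *src, size_t len)
{
        u32 *d = dst, *s = src;
        size_t i;

        /* Copy one instruction word at a time through the patching helper. */
        for (i = 0; i < len / sizeof(u32); i++)
                if (patch_instruction(d + i, ppc_inst(s[i])))
                        return ERR_PTR(-EINVAL);

        return dst;
}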

Re: Writing not working to CPLD/FPGA.

2022-11-13 Thread Christophe Leroy
On 11/11/2022 at 15:27, Steven J. Hill wrote: > On 11/11/22 02:53, Christophe Leroy wrote: >> >> First of all, kernel 3.12 is prehistoric. Have you tried with the latest >> kernel, or at least with one of the long term support releases (see >> https://www.kernel.org/category/releases.html) ? >> > It

[PATCH] macintosh/mac_hid.c: don't load by default

2022-11-13 Thread Thomas Weißschuh
There should be no need to automatically load this driver on *all* machines with a keyboard. This driver is of very limited utility and has to be enabled by the user explicitly anyway. Furthermore, its own header comment has deprecated it for 17 years. Fixes: 99b089c3c38a ("Input: Mac button

[PATCH] powerpc/xmon: Fix array_size.cocci warning

2022-11-13 Thread wangkailong
Fix the following coccicheck warning: arch/powerpc/xmon/ppc-opc.c:957:67-68: WARNING: Use ARRAY_SIZE arch/powerpc/xmon/ppc-opc.c:7280:24-25: WARNING: Use ARRAY_SIZE arch/powerpc/xmon/ppc-opc.c:6972:25-26: WARNING: Use ARRAY_SIZE arch/powerpc/xmon/ppc-opc.c:7211:21-22: WARNING: Use ARRAY_SIZE
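Each warning points at an open-coded array-length computation; the conversion looks like this (illustrative hunk, not the exact lines flagged):

-const int powerpc_num_opcodes = sizeof(powerpc_opcodes) / sizeof(powerpc_opcodes[0]);
+const int powerpc_num_opcodes = ARRAY_SIZE(powerpc_opcodes);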