machine_check_log_err() is not getting called for all
unrecoverable errors, so we fail to log those errors.
Raise irq work in save_mce_event() for unrecoverable errors,
so that the error is logged from the MCE event handling block
in the timer handler.
Signed-off-by: Ganesh Goudar
---
On 14/11/2022 at 04:38, yang.yan...@zte.com.cn wrote:
>
> From: Xu Panda
>
> Replace the open-coded output with sysfs_emit() to simplify the code.
Hi Peter,
On 03/11/22 14:18, Peter Zijlstra wrote:
On Wed, Nov 02, 2022 at 12:35:07PM +, Christophe Leroy wrote:
Le 28/10/2022 à 16:33, Sathvika Vasireddy a écrit :
In a subsequent patch, we would want to annotate powerpc assembly functions
with SYM_FUNC_START_LOCAL macro. This macro
From: Xu Panda
Replace the open-coded output with sysfs_emit() to simplify the code.
---
change for v2
- align code
---
Signed-off-by: Xu Panda
Signed-off-by: Yang Yang
---
drivers/scsi/ibmvscsi/ibmvfc.c | 20
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git
On 10/28/22 13:42, Yicong Yang wrote:
> +static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> +{
> + /*
> + * TLB batched flush is proved to be beneficial for systems with large
> + * number of CPUs, especially system with more than 8 CPUs. TLB shutdown
> + *
Finding the owner or a queued waiter on a lock with a preempted vcpu
is indicative of an oversubscribed guest causing the lock to get into
trouble. Provide some options to detect this situation and have new
CPUs avoid queueing for a longer time (more steal iterations) to
minimise the problems.
Provide an option that holds off queueing indefinitely while the lock
owner is preempted. This could reduce queueing latencies for very
overcommitted vcpu situations.
This is disabled by default.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/lib/qspinlock.c | 74
Allow for a reduction in the number of times a CPU from a different
node than the owner can attempt to steal the lock before queueing.
This could bias the transfer behaviour of the lock across the
machine and reduce NUMA crossings.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/lib/qspinlock.c
Use the spin_begin/spin_cpu_relax/spin_end APIs in qspinlock, which helps
to prevent threads issuing a lot of expensive priority nops which may not
have much effect due to immediately executing low then medium priority.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/lib/qspinlock.c | 41
This gives trylock slightly more strength, and it also gives most
of the benefit of passing 'val' back through the slowpath without
the complexity.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/qspinlock.h | 44 +++-
arch/powerpc/lib/qspinlock.c |
After the head of the queue acquires the lock, it releases the
next waiter in the queue to become the new head. Add an option
to prod the new head if its vCPU was preempted. This may only
have an effect if queue waiters are yielding.
Disable this option by default for now, i.e., no logical change.
Having all CPUs poll the lock word for the owner CPU that should be
yielded to defeats most of the purpose of using MCS queueing for
scalability. Yet it may be desirable for queued waiters to yield
to a preempted owner.
s390 addresses this problem by having queued waiters sample the lock
word
If the head of queue is preventing stealing but it finds the owner vCPU
is preempted, it will yield its cycles to the owner which could cause it
to become preempted. Add an option to re-allow stealers before yielding,
and disallow them again after returning from the yield.
Disable this option by
Queued waiters which are not at the head of the queue don't spin on
the lock word but their qnode lock word, waiting for the previous queued
CPU to release them. Add an option which allows these waiters to yield
to the previous CPU if its vCPU is preempted.
Signed-off-by: Nicholas Piggin
---
Waiters spinning on the lock word should yield to the lock owner if the
vCPU is preempted. This improves performance when the hypervisor has
oversubscribed physical CPUs.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/lib/qspinlock.c | 101 ++-
1 file changed,
Store the owner CPU number in the lock word so it may be yielded to,
as powerpc's paravirtualised simple spinlocks do.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/qspinlock.h | 9 -
arch/powerpc/include/asm/qspinlock_types.h | 10 ++
Give the queue head the ability to stop stealers. After a number of
spins without successfully acquiring the lock, the queue head employs
this, which ensures it is the next owner.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/qspinlock_types.h | 10 -
Allow new waiters a number of spins on the lock word before queueing,
which particularly helps paravirt performance when physical CPUs are
oversubscribed.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/lib/qspinlock.c | 159 ++-
1 file changed, 140
This uses more optimal ll/sc style access patterns (rather than
cmpxchg), and also sets the EH=1 lock hint on those operations
which acquire ownership of the lock.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/qspinlock.h | 24 +--
The first 16 bits of the lock are only modified by the owner, and other
modifications always use atomic operations on the entire 32 bits, so
unlocks can use plain stores on the 16 bits. This is the same kind of
optimisation done by core qspinlock code.
Signed-off-by: Nicholas Piggin
---
This forms the basis of the qspinlock slow path.
Like generic qspinlocks and unlike the vanilla MCS algorithm, the lock
owner does not participate in the queue, only waiters. The first waiter
spins on the lock word, then when the lock is released it takes
ownership and unqueues the next waiter.
Add a powerpc specific implementation of queued spinlocks. This is the
build framework with a very simple (non-queued) spinlock implementation
to begin with. Later changes add queueing, and other features and
optimisations one-at-a-time. It is done this way to more easily see how
the queued
This is a merge placeholder with a conflicting series of patches to
generic qspinlocks. Not intended to be standalone, this should be
applied before patch 1.
diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild
index bcf95ce0964f..813a8c3405ad 100644
---
This replaces the generic queued spinlock code (like s390 does) with
our own implementation. There is an extra shim patch 1a to get the
series to apply.
Generic PV qspinlock code is causing latency / starvation regressions on
large systems that are resulting in hard lockups reported (mostly in
On 12/11/2022 at 08:58, wangjianli wrote:
> Delete the redundant word 'the'.
>
> Signed-off-by: wangjianli
> ---
> arch/powerpc/kernel/process.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index
On 10/11/2022 at 19:43, Hari Bathini wrote:
> Use bpf_jit_binary_pack_alloc in powerpc jit. The jit engine first
> writes the program to the rw buffer. When the jit is done, the program
> is copied to the final location with bpf_jit_binary_pack_finalize.
> With multiple jit_subprogs,
On 10/11/2022 at 19:43, Hari Bathini wrote:
> Implement bpf_arch_text_invalidate and use it to fill unused part of
> the bpf_prog_pack with trap instructions when a BPF program is freed.
Same here; although patch_instruction() is nice for a first try, it is
not the solution in the long run.
On 10/11/2022 at 19:43, Hari Bathini wrote:
> bpf_arch_text_copy is used to dump JITed binary to RX page, allowing
> multiple BPF programs to share the same page. Using patch_instruction
> to implement it.
Using patch_instruction() is nice for a quick implementation, but it is
probably
On 11/11/2022 at 15:27, Steven J. Hill wrote:
> On 11/11/22 02:53, Christophe Leroy wrote:
>>
>> First of all, kernel 3.12 is prehistoric. Have you tried with latest
>> kernel, or at least with one of the long term support releases (see
>> https://www.kernel.org/category/releases.html) ?
>>
> It
There should be no need to automatically load this driver on *all*
machines with a keyboard.
This driver is of very limited utility and has to be enabled by the user
explicitly anyway.
Furthermore its own header comment has deprecated it for 17 years.
Fixes: 99b089c3c38a ("Input: Mac button
Fix the following coccicheck warning:
arch/powerpc/xmon/ppc-opc.c:957:67-68: WARNING: Use ARRAY_SIZE
arch/powerpc/xmon/ppc-opc.c:7280:24-25: WARNING: Use ARRAY_SIZE
arch/powerpc/xmon/ppc-opc.c:6972:25-26: WARNING: Use ARRAY_SIZE
arch/powerpc/xmon/ppc-opc.c:7211:21-22: WARNING: Use ARRAY_SIZE