On Fri Feb 3, 2023 at 4:26 PM AEST, Sachin Sant wrote:
> I am observing an intermittent crash while running powerpc/security
> selftests on a Power10 LPAR booted with powerpc/merge branch code.
>
> [ cut here ]
> WARNING: CPU: 1 PID: 5644 at arch/powerpc/kernel/irq_64.c:278
really require long values here). Let's standardize on "int" here to
> avoid casting the values back and forth between the two types.
>
> Signed-off-by: Thomas Huth
Thanks for the patch. It looks fine to me, it should be okay to
go via Paolo's tree if he's going to take the se
On Fri Feb 3, 2023 at 7:42 PM AEST, Thomas Huth wrote:
> The KVM_GET_NR_MMU_PAGES ioctl is quite questionable on 64-bit hosts
> since it fails to return the full 64 bits of the value that can be
> set with the corresponding KVM_SET_NR_MMU_PAGES call. This ioctl
> likely has also never really been
, so enable
the option. This increases the above benchmark to 118 million context
switches per second.
This generates 314 additional IPI interrupts on a 144 CPU system doing
a kernel compile, which is in the noise in terms of kernel cycles.
Acked-by: Linus Torvalds
Signed-off-by: Nicholas Piggin
Rik came up with basically the same idea a few years ago, so credit
to him for that. ]
Link:
https://lore.kernel.org/linux-mm/20230118080011.2258375-1-npig...@gmail.com/
Link: https://lore.kernel.org/all/20180728215357.3249-11-r...@surriel.com/
Acked-by: Linus Torvalds
Signed-off-by: Nicholas Piggin
---
no functional change.
Acked-by: Linus Torvalds
Signed-off-by: Nicholas Piggin
---
Documentation/mm/active_mm.rst | 6 ++
arch/Kconfig | 17 +
include/linux/sched/mm.h | 18 +++---
3 files changed, 38 insertions(+), 3 deletions(-)
diff
Add explicit _lazy_tlb annotated functions for lazy tlb mm refcounting.
This makes the lazy tlb mm references more obvious, and allows the
refcounting scheme to be modified in later changes. There is no
functional change with this patch.
Acked-by: Linus Torvalds
Signed-off-by: Nicholas Piggin
Signed-off-by: Nicholas Piggin
---
kernel/kthread.c | 14 +-
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index f97fd01a2932..7424a1839e9a 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1410,14 +1410,13 @@ void
and credit to Rik's earlier work in the same vein.
- Did a pass over comments and changelogs to improve readability.
Nicholas Piggin (5):
kthread: simplify kthread_use_mm refcounting
lazy tlb: introduce lazy tlb mm refcount helper functions
lazy tlb: allow lazy tlb mm refcounting
On Fri Feb 3, 2023 at 12:02 PM AEST, Benjamin Gray wrote:
> On Fri, 2023-02-03 at 00:46 +0100, Erhard F. wrote:
> > Happened during boot:
> >
> > [...]
> > Creating 6 MTD partitions on "flash@0":
> > 0x-0x0400 : "PNOR"
> > 0x01b21000-0x03921000 : "BOOTKERNEL"
> >
syscalls do not set the PPR field in their interrupt frame and return
from syscall always sets the default PPR for userspace, so setting the
value in the ret_from_fork frame is not necessary and mildly
inconsistent. Remove it.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/process.c | 5
In the kernel user thread path, don't set _TIF_RESTOREALL because the
thread is required to call kernel_execve() before it returns, which will
set _TIF_RESTOREALL if necessary via start_thread().
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/interrupt_64.S | 5 +
arch/powerpc
threads).
Split these startup functions in two, and catch kernel threads that
improperly return from their function. This is intended to make the
complicated code a little bit easier to understand.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/entry_32.S | 20
never exit and does not require a user
interrupt frame, so it gets a minimal stack frame.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/process.c | 99 +--
1 file changed, 61 insertions(+), 38 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/p
If the system call return path always restores NVGPRs then there is no
need for ret_from_fork to do it. The HANDLER_RESTORE_NVGPRS does the
right thing for this.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/interrupt_64.S | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff
can be stored there.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/entry_32.S | 1 -
arch/powerpc/kernel/interrupt_64.S | 1 -
arch/powerpc/kernel/process.c | 9 +
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/entry_32.S b/arch
The ret_from_fork code for 64e and 32-bit set r3 for
syscall_exit_prepare the same way that 64s does, so there should be no
need to special-case them in copy_thread.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/entry_32.S | 2 +-
arch/powerpc/kernel/process.c | 3 ---
2 files changed
The pkey registers (AMR, IAMR) do not get loaded from the switch frame
so it is pointless to save anything there. Remove the dead code.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/process.c | 12 +---
1 file changed, 1 insertion(+), 11 deletions(-)
diff --git a/arch/powerpc
The way copy_thread works, particularly with kernel threads and
kernel-created user threads, is a bit muddled and confusing. This series
tries to straighten things out a bit.
Thanks,
Nick
Nicholas Piggin (8):
powerpc: copy_thread remove unused pkey code
powerpc: copy_thread make ret_from_fork
On Tue Jan 31, 2023 at 12:54 PM AEST, Andrew Donnellan wrote:
> On Tue, 2023-01-24 at 15:17 +1000, Nicholas Piggin wrote:
> > > +static const char * const plpks_var_names[] = {
> > > + "PK",
> > > + "KEK",
> > >
On Fri Jan 20, 2023 at 5:43 PM AEST, Andrew Donnellan wrote:
> From: Russell Currey
>
> The secvar object format is only in the device tree under powernv.
> We now have an API call to retrieve it in a generic way, so we should
> use that instead of having to handle the DT here.
>
> Add support
On Fri Jan 20, 2023 at 5:43 PM AEST, Andrew Donnellan wrote:
> From: Russell Currey
>
> The pseries platform can support dynamic secure boot (i.e. secure boot
> using user-defined keys) using variables contained with the PowerVM LPAR
> Platform KeyStore (PLPKS). Using the powerpc secvar API,
On Fri Jan 20, 2023 at 5:43 PM AEST, Andrew Donnellan wrote:
> From: Russell Currey
>
> Before interacting with the PLPKS, we ask the hypervisor to generate a
> password for the current boot, which is then required for most further
> PLPKS operations.
>
> If we kexec into a new kernel, the new
On Fri Jan 20, 2023 at 5:43 PM AEST, Andrew Donnellan wrote:
> It seems a bit unnecessary for the PLPKS code to have a user-visible
> config option when it doesn't do anything on its own, and there's existing
> options for enabling Secure Boot-related features.
>
> It should be enabled by
On Fri Jan 20, 2023 at 5:42 PM AEST, Andrew Donnellan wrote:
> From: Nayna Jain
>
> The Platform Keystore provides a signed update interface which can be used
> to create, replace or append to certain variables in the PKS in a secure
> fashion, with the hypervisor requiring that the update be
On Mon Jan 23, 2023 at 6:16 PM AEST, Nadav Amit wrote:
>
>
> On 1/19/23 6:22 AM, Nicholas Piggin wrote:
> > On Thu Jan 19, 2023 at 8:22 AM AEST, Nadav Amit wrote:
> >>
> >>
> >>> On Jan 18, 2023, at 12:00 AM, Nicholas Piggin wrote:
> >
On Mon Jan 23, 2023 at 6:02 PM AEST, Nadav Amit wrote:
>
>
> On 1/23/23 9:35 AM, Nadav Amit wrote:
> >> + if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
> >> + mmdrop(mm);
> >> + } else {
> >> + /*
> >> + * mmdrop_lazy_tlb must provide a full memory barrier, see the
>
a complexity. This
means pending irqs won't be profiled quite so well because perf irqs
can't be taken.
Signed-off-by: Nicholas Piggin
---
Hopefully this will be the last major change to the irq soft masking
code. The patch doesn't look like much, but not having interrupts
and irq replay meddling with our
[EE].
This also tidies some of the warnings; there is no need to duplicate them
in both should_hard_irq_enable() and do_hard_irq_enable().
Signed-off-by: Nicholas Piggin
---
Since v1: just a rebase and re-test.
arch/powerpc/include/asm/hw_irq.h | 41 ++-
arch/powerpc/kernel
cputime_t is no longer a type, so VIRT_CPU_ACCOUNTING_GEN does not
have any effect on the type for 32-bit architectures, so there is
no reason it can't be supported.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig
-by: Nicholas Piggin
---
arch/powerpc/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b8c4ac56bddc..7683788af8cb 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -207,7 +207,7 @@ config PPC
select
, but it
does make the context tracking calls necessary for 32-bit to support
context tracking.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/interrupt.h | 35 +++-
1 file changed, 8 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h
b
,
Nick
Nicholas Piggin (3):
powerpc: Consolidate 32-bit and 64-bit interrupt_enter_prepare
powerpc/32: implement HAVE_CONTEXT_TRACKING_USER support
powerpc/32: select HAVE_VIRT_CPU_ACCOUNTING_GEN
arch/powerpc/Kconfig | 3 ++-
arch/powerpc/include/asm/interrupt.h | 35
d ("powerpc/64s/hash: Make hash faults work in NMI context")
Signed-off-by: Nicholas Piggin
---
Lockup looks like this, note IRQMASK=1 in native_hpte_insert when we
expect it should be 3.
watchdog: CPU 16 Hard LOCKUP
watchdog: CPU 16 TB:6084087529753, last heartbeat TB:6075895318740 (16
On Thu Jan 19, 2023 at 8:22 AM AEST, Nadav Amit wrote:
>
>
> > On Jan 18, 2023, at 12:00 AM, Nicholas Piggin wrote:
> >
> > +static void do_shoot_lazy_tlb(void *arg)
> > +{
> > + struct mm_struct *mm = arg;
> > +
> > + if (current->active_m
On Thu Jan 19, 2023 at 3:30 AM AEST, Linus Torvalds wrote:
> [ Adding a few more x86 and arm64 maintainers - while linux-arch is
> the right mailing list, I'm not convinced people actually follow it
> all that closely ]
>
> On Wed, Jan 18, 2023 at 12:00 AM Nicholas Piggin wrote:
On Wed Jan 18, 2023 at 7:05 PM AEST, David Howells wrote:
> Linus Torvalds wrote:
>
> > And for the kernel, where we don't have bad locking, and where we
> > actually use fine-grained locks that are _near_ the data that we are
> > locking (the lockref of the dcache is obviously one example of
On Wed Jan 18, 2023 at 4:10 PM AEST, Andrew Donnellan wrote:
> From: Russell Currey
>
> The code that handles the format string in secvar-sysfs.c is entirely
> OPAL specific, so create a new "format" op in secvar_operations to make
> the secvar code more generic. No functional change.
>
>
On Wed Jan 18, 2023 at 4:10 PM AEST, Andrew Donnellan wrote:
> plpks_confirm_object_flushed() uses the H_PKS_CONFIRM_OBJECT_FLUSHED hcall
> to check whether changes to an object in the Platform KeyStore have been
> flushed to non-volatile storage.
>
> The hcall returns two output values, the
On Wed Jan 18, 2023 at 4:10 PM AEST, Andrew Donnellan wrote:
> From: Nayna Jain
>
> The Platform Keystore provides a signed update interface which can be used
> to create, replace or append to certain variables in the PKS in a secure
> fashion, with the hypervisor requiring that the update be
On Wed Jan 18, 2023 at 4:10 PM AEST, Andrew Donnellan wrote:
> Currently, the list of variables is populated by calling
> secvar_ops->get_next() repeatedly, which is explicitly modelled on the
> OPAL API (including the keylen parameter).
>
> For the upcoming PLPKS backend, we have a static list of
On Wed Jan 18, 2023 at 4:10 PM AEST, Andrew Donnellan wrote:
> From: Russell Currey
>
> The code that handles the format string in secvar-sysfs.c is entirely
> OPAL specific, so create a new "format" op in secvar_operations to make
> the secvar code more generic. No functional change.
>
>
On Wed Jan 18, 2023 at 4:10 PM AEST, Andrew Donnellan wrote:
> From: Russell Currey
>
> The secvar code only supports one consumer at a time.
>
> Multiple consumers aren't possible at this point in time, but we'd want
> it to be obvious if it ever could happen.
>
> Signed-off-by: Russell Currey
On Thu Jan 19, 2023 at 8:22 AM AEST, Nadav Amit wrote:
>
>
> > On Jan 18, 2023, at 12:00 AM, Nicholas Piggin wrote:
> >
> > +static void do_shoot_lazy_tlb(void *arg)
> > +{
> > + struct mm_struct *mm = arg;
> > +
> > + if (current->active_m
** Not for merge **
CONFIG_MMU_LAZY_TLB_SHOOTDOWN that requires IPIs to clear the "lazy tlb"
references to an mm that is being freed. With the radix MMU, the final
userspace exit TLB flush can be performed with IPIs, and those IPIs can
also clear lazy tlb mm references, which mostly eliminates
for CONFIG_MMU_LAZY_TLB_SHOOTDOWN, so enable
the option. This increases the above benchmark to 118 million context
switches per second.
This generates 314 additional IPI interrupts on a 144 CPU system doing
a kernel compile, which is in the noise in terms of kernel cycles.
Signed-off-by: Nicholas
s
tend not to migrate CPUs much, therefore they don't get much chance to
leave lazy tlb mm references on remote CPUs. There are a lot of options
to reduce them if necessary.
Signed-off-by: Nicholas Piggin
---
arch/Kconfig | 15
kernel/fork.c | 65
code does, so the patch is effectively a no-op.
Rename rq->prev_mm to rq->prev_lazy_mm, because that's what it is.
Signed-off-by: Nicholas Piggin
---
Documentation/mm/active_mm.rst | 6 ++
arch/Kconfig | 17 +
include/linux/sched/mm.h
a regular
reference on mm to use, and mmdrop_lazy_tlb the previous active_mm.
Signed-off-by: Nicholas Piggin
---
arch/arm/mach-rpc/ecard.c| 2 +-
arch/powerpc/kernel/smp.c| 2 +-
arch/powerpc/mm/book3s64/radix_tlb.c | 4 ++--
fs/exec.c| 2
too, which
I would like.
Even without the last patch, the additional IPIs caused by shoot lazy
is down in the noise so I'm not too concerned about it.
Thanks,
Nick
Nicholas Piggin (5):
lazy tlb: introduce lazy tlb mm refcount helper functions
lazy tlb: allow lazy tlb mm refcounting
On Fri Jan 13, 2023 at 2:15 PM AEST, Linus Torvalds wrote:
> On Thu, Jan 12, 2023 at 9:20 PM Nicholas Piggin wrote:
> >
> > Actually what we'd really want is an arch specific implementation of
> > lockref.
>
> The problem is mainly that then you need to generate the
On Fri Jan 13, 2023 at 10:13 AM AEST, Linus Torvalds wrote:
> [ Adding linux-arch, which is relevant but not very specific, and the
> arm64 and powerpc maintainers that are the more specific cases for an
> architecture where this might actually matter.
>
> See
>
>
>
...@https.bugzilla.kernel.org%2F/
Reported-by: kernel test robot
Link: https://lore.kernel.org/oe-lkp/202212251728.8d0872ff-oliver.s...@intel.com
Fixes: 30f3bb09778d ("kallsyms: Add self-test facility")
Signed-off-by: Nicholas Piggin
---
kernel/kallsyms_selftest.c | 21 ++---
1 fi
to the percpu
area which is out of range of this relocation, that case gets converted
to a GOT address (XXX: is this kosher? Must restrict this case to only
percpu so it doesn't paper over any bugs).
Signed-off-by: Nicholas Piggin
---
arch/powerpc/Makefile | 5 +-
arch/powerpc
labels as
non-constant even if alignment is arranged so padding is not required.
This may need toolchain help to solve nicely, for now move the prefix
instruction out of the alternate patch section to work around it.
This reduces kernel text size by about 6%.
Signed-off-by: Nicholas Piggin
it will provide immediates that exceed
the range of simple load/store displacement. Whether this is a
toolchain or a kernel asm problem remains to be seen. For now, these
are replaced with simpler and less efficient direct register addressing
when compiling with prefixed.
Signed-off-by: Nicholas Piggin
This macro is to be used in assembly where C functions are called.
pcrel addressing mode requires branches to functions with a
localentry value of 1 to have either a trailing nop or @notoc.
This macro permits the latter without changing callers.
Signed-off-by: Nicholas Piggin
---
arch/powerpc
addressing already has that the TOC pointer use its virtual
address.
XXX: I expect this to blow up everywhere, I've not tested a huge range
of platforms, and mostly in QEMU.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/head_64.S | 83 +++
1 file changed, 45
A later change moves the non-prom case to run at the virtual address
earlier, which calls for virtual TOC and kernel base. Split these two
calculations for prom and non-prom to make that change simpler.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/head_64.S | 28
As the earlier comment explains, __secondary_hold_spinloop does not have
to be accessed at its virtual address, slightly simplifying code.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/head_64.S | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/arch/powerpc
Move some basic Book3S initialisation after prom to a function similar
to what Book3E looks like. Book3E returns from this function at the
virtual address mapping, and Book3S will do the same in a later change,
so making them look similar helps with that.
Signed-off-by: Nicholas Piggin
---
arch
and will take them to toolchain developers after the kernel work is a bit
further along.
Thanks,
Nick
Nicholas Piggin (9):
crypto: powerpc - Use address generation helper for asm
powerpc/64s: Refactor initialisation after prom
powerpc/64e: Simplify address calculation in secondary hold loop
powerpc/64
Signed-off-by: Nicholas Piggin
---
arch/powerpc/crypto/crc32-vpmsum_core.S | 13 -
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/crypto/crc32-vpmsum_core.S
b/arch/powerpc/crypto/crc32-vpmsum_core.S
index a16a717c809c..b0f87f595b26 100644
--- a/arch
symbols found at the end of a
section.
Signed-off-by: Nicholas Piggin
---
tools/objtool/check.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 4350be739f4f..4b7c8b33069e 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
Cc: Thomas Gleixner
Cc: Frederic Weisbecker
Cc: Sven Schnelle
Cc: Arnd Bergmann
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Signed-off-by: Nicholas Piggin
---
This required a couple of tweaks to s390 includes so we're not pulling
asm/cputime.h into the core header
Stack validation in early boot can just bail out of checking alternate
stacks if they are not validated yet. Checking against a NULL stack
could cause NULLish pointer values to be considered valid.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/process.c | 11 +++
1 file changed
ever worked, because paca_ptrs[0] was never not-poisoned when boot_cpuid
is not 0.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/paca.h| 1 -
arch/powerpc/include/asm/smp.h | 1 +
arch/powerpc/kernel/prom.c | 12 ++--
arch/powerpc/kernel/setup-common.c | 4
ine checks dereferencing that memory
in real mode. Later, validating the current stack pointer against the
idle task of a different secondary will probably cause no stack trace
to be printed.
Fix this by setting thread_info->cpu right after smp_processor_id() is
set to the boot cpuid.
Signed-off
The stress_hpt memblock allocation did not pass in an alignment,
which causes a stack dump in early boot (that I missed, oops).
Fixes: 6b34a099faa1 ("powerpc/64s/hash: add stress_hpt kernel boot option to
increase hash faults")
Signed-off-by: Nicholas Piggin
---
arch/powerpc/m
the rest of the window and makes
the paca allocation a lot better, but it has more possibility for
regressions. Last patch is independent of the rest and should be
quite straightforward.
Thanks,
Nick
Nicholas Piggin (4):
powerpc/64s: Fix stress_hpt memblock alloc alignment
powerpc/64: Fix task_cpu
On Thu Dec 15, 2022 at 10:17 AM AEST, Segher Boessenkool wrote:
> On Wed, Dec 14, 2022 at 09:39:25PM +1000, Nicholas Piggin wrote:
> > What about just copying x86's implementation including using
> > __builtin_frame_address(1/2)? Are those builtins reliable for all
> > our
On Wed Dec 14, 2022 at 6:39 PM AEST, Christophe Leroy wrote:
>
>
> Le 14/12/2022 à 05:42, Nicholas Miehlbradt a écrit :
> > Walks the stack when copy_{to,from}_user address is in the stack to
> > ensure that the object being copied is entirely within a single stack
> > frame.
> >
> > Substantially
On Mon Dec 12, 2022 at 3:57 PM AEST, wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=214913
>
> --- Comment #9 from Michael Ellerman (mich...@ellerman.id.au) ---
> I assume it's an io_uring IO worker.
>
> They're created via create_io_worker() -> create_io_thread().
>
> They pass a non-NULL
On Sun Dec 11, 2022 at 11:19 PM AEST, wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=214913
>
> --- Comment #7 from Zorro Lang (zl...@redhat.com) ---
> (In reply to Michael Ellerman from comment #5)
> > Sorry I don't have any idea which commit could have fixed this.
> >
> > The process
ended
waiters")
Signed-off-by: Nicholas Piggin
---
arch/powerpc/lib/qspinlock.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
index 2eab84774911..c81fe8fff2b2 100644
--- a/arch/powerpc/lib/qspinlock.c
+++ b/arch/powerpc/lib/q
On Sun Dec 4, 2022 at 9:33 PM AEST, Christian Zigotzky wrote:
> Hi All,
>
> We regularly use QEMU with KVM HV on our A-EON AmigaOne X5000 machines
> (book3e). It works fast and without any problems.
>
> Screenshot tour of QEMU/KVM HV on our AmigaOnes:
>
> -
On Thu Dec 1, 2022 at 6:45 AM AEST, Crystal Wood wrote:
> On Mon, 2022-11-28 at 14:36 +1000, Nicholas Piggin wrote:
> > BookE KVM is in a deep maintenance state, I'm not sure how much testing
> > it gets. I don't have a test setup, and it does not look like QEMU has
> > any HV
On Mon Nov 28, 2022 at 12:44 PM AEST, Benjamin Gray wrote:
> Recognise and pass the appropriate signal to the user program when a
> hashchk instruction triggers. This is independent of allowing
> configuration of DEXCR[NPHIE], as a hypervisor can enforce this aspect
> regardless of the kernel.
>
>
t; producing speculation gadgets.
>
> Signed-off-by: Rohan McLure
Acked-by: Nicholas Piggin
> ---
> Resubmitting patches as their own series after v6 partially merged:
> Link:
> https://lore.kernel.org/all/166488988686.779920.13794870102696416283.b4...@ellerman.id.au/t/
>
.
> See ~0.8% performance regression with this mitigation, but this
> indicates the worst-case performance due to heavier-weight interrupt
> handlers. This mitigation is able to be enabled/disabled through
> CONFIG_INTERRUPT_SANITIZE_REGISTERS.
>
> Reviewed-by: Nicholas Piggin
> Signed-off
l
> cause these macros to zeroise architected registers to avoid potential
> speculation influence of user data.
>
> Provide an IOption that signals that r12 must be retained, as the
> interrupt handler assumes it to hold the contents of the MSR.
Reviewed-by: Nicholas Piggin
>
>
estore_nvgprs_\srr
> .Lrestore_nvgprs_\srr\()_cont:
> +#endif
Looks pretty good. You might add a comment here to say nvgprs are always
restored, in the sanitize case. Not that it's hard to grep for.
Reviewed-by: Nicholas Piggin
Thanks,
Nick
n't mind long names
too much.
Reviewed-by: Nicholas Piggin
> ---
> v4: New patch
> ---
> arch/powerpc/include/asm/ppc_asm.h | 17 +
> 1 file changed, 17 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/ppc_asm.h
> b/arch/powerpc/include/asm/ppc_asm.h
On Tue Nov 29, 2022 at 9:44 AM AEST, Nathan Lynch wrote:
> "Nicholas Piggin" writes:
>
> > On Sat Nov 19, 2022 at 1:07 AM AEST, Nathan Lynch wrote:
> >> Call the just-added rtas tracepoints in do_enter_rtas(), taking care
> >> to avoid funct
On Tue Nov 29, 2022 at 4:26 AM AEST, Nathan Lynch wrote:
> "Nicholas Piggin" writes:
> > On Sat Nov 19, 2022 at 1:07 AM AEST, Nathan Lynch wrote:
> >> rtas_os_term() is called during panic. Its behavior depends on a
> >> couple of conditions
BookE KVM is in a deep maintenance state, I'm not sure how much testing
it gets. I don't have a test setup, and it does not look like QEMU has
any HV architecture enabled. It hasn't been too painful but there are
some cases where it causes a bit of problem not being able to test, e.g.,
support, better compatibility with
eBPF tools.
BE+ELFv2 is not officially supported by the GNU toolchain, but it works
fine in testing and has been used by some userspace for some time (e.g.,
Void Linux).
Tested-by: Michal Suchánek
Reviewed-by: Segher Boessenkool
Signed-off-by: Nicholas Piggin
This allows asm generation for big-endian ELFv2 builds.
Signed-off-by: Nicholas Piggin
---
drivers/crypto/vmx/Makefile | 12 +++-
drivers/crypto/vmx/ppc-xlate.pl | 10 ++
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/drivers/crypto/vmx/Makefile b/drivers
elf_validity_check().
Cc: Michael Ellerman
Signed-off-by: Jessica Yu
[np: added changelog, adjust name, rebase]
Acked-by: Luis Chamberlain
Signed-off-by: Nicholas Piggin
---
include/linux/moduleloader.h | 3 +++
kernel/module/main.c | 10 ++
2 files changed, 13 insertions(+)
diff --git
]
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/module_64.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 7e45dc98df8a..ff045644f13f 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel
the EXPERT depends so it's easier to test. It's marked as
experimental, but we should soon make it default and try to deprecate
the v1 ABI so we can eventually remove it.
Thanks,
Nick
Nicholas Piggin (4):
module: add module_elf_check_arch for module-specific checks
powerpc/64: Add module check
g explicitly but
is that for architectures with unsynchronized clocks maybe.
Acked-by: Nicholas Piggin
> ---
> arch/powerpc/platforms/pseries/mobility.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/platforms/pseries/mobility.c
> b/arch/powerpc/platfo
spinlocks are built, and to make performance and correctness
bisects more useful.
Signed-off-by: Nicholas Piggin
---
Missed the first patch sending the series :( Here is the real patch 1.
Thanks,
Nick
arch/powerpc/Kconfig | 1 -
arch/powerpc/include/asm/paravirt.h
On Sat Nov 19, 2022 at 1:07 AM AEST, Nathan Lynch wrote:
> Call the just-added rtas tracepoints in do_enter_rtas(), taking care
> to avoid function name lookups in the CPU offline path.
>
> Signed-off-by: Nathan Lynch
> ---
> arch/powerpc/kernel/rtas.c | 23 +++
> 1 file
__array(__u32, params, 16)
> + ),
> +
> + TP_fast_assign(
> + __entry->token = be32_to_cpu(rtas_args->token);
> + __entry->nargs = be32_to_cpu(rtas_args->nargs);
> + __entry->nret = be32_to_cpu(rtas_args->nret)
_args struct fields.
>
> There's no apparent reason to force inlining of do_enter_rtas()
> either, and it seems to bloat the code a bit. Let the compiler decide.
Reviewed-by: Nicholas Piggin
>
> Signed-off-by: Nathan Lynch
> ---
> arch/powerpc/kernel/rtas.c | 10 +-
>
or using this call
this way, so looks okay to me. Maybe you could open-code an mdelay
though, although I guess firmware should be tolerant of calling it in
a loop.
Reviewed-by: Nicholas Piggin
>
> Signed-off-by: Nathan Lynch
> ---
> arch/powerpc/kernel/rtas.c | 7 ++-
> 1 file
y_read_bool() since
> it is a boolean property, not a RTAS function token.
Small nit, but you could do that at the query site unless you
were going to start using ibm,os-term without the extended
capability.
Reviewed-by: Nicholas Piggin
>
> Signed-off-by: Nathan Lynch
> ---
> arch
On Mon Nov 7, 2022 at 1:32 PM AEST, Rohan McLure wrote:
> Cause pseries platforms to default to zeroising all potentially user-defined
> registers when entering the kernel by means of any interrupt source,
> reducing user-influence of the kernel and the likelihood or producing
> speculation
hen the 64s/e enablement patches are
independent and apply to exactly that subarch.
But code-wise I think this looks good.
Reviewed-by: Nicholas Piggin
> Mitigation defaults to enabled by INTERRUPT_SANITIZE_REGISTERS.
>
> Signed-off-by: Rohan McLure
> ---
> Resubmitting patches