for Book3S/nohash 32-bit systems.
Signed-off-by: Rohan McLure
---
V2: Provide missing pud_user implementations, use p{u,m}d_is_leaf.
V3: Provide missing pmd_user implementations as stubs in 32-bit.
V4: Use pmd_leaf, pud_leaf, and define pmd_user for 32 Book3E with
static inline method rather than macro
arm64/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK")
x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
check")
Signed-off-by: Rohan McLure
---
V2: Update spacing and types assigned to pte_update calls.
V3: Update one last pte_update call to remove __pte invocation.
V5
.
Signed-off-by: Rohan McLure
---
V2: Remove conditional BUILD_BUG and BUG. Instead warn on usage.
V3: Replace WARN with WARN_ONCE, which should suffice to demonstrate
misuse of puds.
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 --
arch/powerpc/include/asm/pgtable.h | 14
of __pmdp_collapse_flush
as access to the mm_struct * is required.
Link:
https://lore.kernel.org/linuxppc-dev/20230214015939.1853438-1-rmcl...@linux.ibm.com/
v5:
Link:
https://lore.kernel.org/linuxppc-dev/20221118002146.25979-1-rmcl...@linux.ibm.com/
Rohan McLure (7):
powerpc: mm: Separate set_pte, set_pte_at
.
Signed-off-by: Rohan McLure
---
v5: Split patch that replaces p{m,u,4}d_is_leaf into two patches, first
replacing callsites and afterward providing generic definition.
Remove ifndef-defines implementing p{m,u}d_leaf in favour of
implementing stubs in headers belonging to the particular platforms,
that they may be referenced in generic code.
the vma and calls __pmdp_collapse_flush. The mm_struct * parameter
is needed in a future patch providing Page Table Check support,
which is defined in terms of mm context objects.
Signed-off-by: Rohan McLure
---
v6: New patch
v7: Remove explicit `return' in macro. Prefix macro args with __,
as done
not to be, permitting for uninstrumented
internal mappings. This distinction in names is also present in x86.
Signed-off-by: Rohan McLure
---
v6: new patch
v7: Remove extern, move set_pte args to be in a single line.
---
arch/powerpc/include/asm/book3s/pgtable.h | 3 +--
arch/powerpc/include/asm
that they may be referenced in generic code.
Signed-off-by: Rohan McLure
---
V4: New patch
V5: Previously replaced stub definition for *_is_leaf with *_leaf. Do
that in a later patch
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 ++--
arch/powerpc/mm/book3s64/radix_pgtable.c | 14
> On 14 Feb 2023, at 5:02 pm, Christophe Leroy
> wrote:
>
>
>
> Le 14/02/2023 à 02:59, Rohan McLure a écrit :
>> pmdp_collapse_flush has references in generic code with just three
>> parameters, due to the choice of mm context being implied by the vm_area
>&
and
calls __pmdp_collapse_flush. The mm_struct * parameter is needed in a
future patch providing Page Table Check support, which is defined in
terms of mm context objects.
Signed-off-by: Rohan McLure
---
v6: New patch
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 14 +++---
1 file changed
not to be, permitting for uninstrumented
internal mappings. This distinction in names is also present in x86.
Signed-off-by: Rohan McLure
---
v6: new patch
---
arch/powerpc/include/asm/book3s/pgtable.h | 4 ++--
arch/powerpc/include/asm/nohash/pgtable.h | 4 ++--
arch/powerpc/include/asm/pgtable.h
.
* Remove instrumentation from set_pte from kernel internal pages.
* 64s: Implement pmdp_collapse_flush in terms of __pmdp_collapse_flush
as access to the mm_struct * is required.
v5:
Link:
Rohan McLure (7):
powerpc: mm: Separate set_pte, set_pte_at for internal, external use
powerpc/64s: mm
> On 8 Feb 2023, at 11:23 pm, Christophe Leroy
> wrote:
>
>
>
> Le 08/02/2023 à 04:21, Rohan McLure a écrit :
>> KCSAN instruments calls to atomic builtins, and will in turn call these
>> builtins itself. As such, architectures supporting KCSAN
for more
information.
Signed-off-by: Rohan McLure
---
v3: Restrict support to 64-bit, as TSAN expects 64-bit __atomic_* compiler
built-ins.
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b8c4ac56bddc..55bc2d724c73
Instrumented memory accesses provided by KCSAN will access core-local
memories (which will save and restore IRQs) as well as restore IRQs
directly. Avoid recursive instrumentation by applying __no_kcsan
annotation to IRQ restore routines.
Signed-off-by: Rohan McLure
---
arch/powerpc/kernel
Annotate memory barriers *mb() with calls to kcsan_mb(), signaling to
compilers supporting KCSAN that the respective memory barrier has been
issued. Rename memory barrier *mb() to __*mb() to opt in for
asm-generic/barrier.h to generate the respective *mb() macro.
Signed-off-by: Rohan McLure
recursive instrumentation, exclude udelay from being instrumented.
Signed-off-by: Rohan McLure
---
arch/powerpc/kernel/time.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index d68de3618741..b894029f53db 100644
Exclude various incompatible compilation units from KCSAN
instrumentation.
Signed-off-by: Rohan McLure
---
arch/powerpc/kernel/Makefile | 10 ++
arch/powerpc/kernel/trace/Makefile | 1 +
arch/powerpc/kernel/vdso/Makefile | 1 +
arch/powerpc/lib/Makefile | 2 ++
arch
A prior patch implemented stubs in place of the __atomic_* builtins in
generic code, as it is useful for other 32-bit architectures such as
32-bit powerpc.
Remove the kcsan-stubs.c translation unit and instead use
the generic implementation.
Signed-off-by: Rohan McLure
---
V4: New patch
the xtensa implementation of these stubs.
Signed-off-by: Rohan McLure
---
v4: New patch
---
kernel/kcsan/Makefile | 3 ++
kernel/kcsan/stubs.c | 78 +++
2 files changed, 81 insertions(+)
create mode 100644 kernel/kcsan/stubs.c
diff --git a/kernel/kcsan/Makefil
Link:
https://lore.kernel.org/linuxppc-dev/20230131234859.1275125-1-rmcl...@linux.ibm.com/
Rohan McLure (7):
kcsan: Add atomic builtin stubs for 32-bit systems
xtensa: kcsan: Remove kcsan stubs for atomic builtins
powerpc: kcsan: Add exclusions from instrumentation
powerpc: kcsan: Exclude udelay
...@linux.ibm.com/
v1:
https://lore.kernel.org/linuxppc-dev/20230131234859.1275125-1-rmcl...@linux.ibm.com/
Rohan McLure (5):
powerpc: kcsan: Add exclusions from instrumentation
powerpc: kcsan: Exclude udelay to prevent recursive instrumentation
powerpc: kcsan: Memory barriers semantics
powerpc: kcsan
Annotate memory barriers *mb() with calls to kcsan_mb(), signaling to
compilers supporting KCSAN that the respective memory barrier has been
issued. Rename memory barrier *mb() to __*mb() to opt in for
asm-generic/barrier.h to generate the respective *mb() macro.
Signed-off-by: Rohan McLure
Exclude various incompatible compilation units from KCSAN
instrumentation.
Signed-off-by: Rohan McLure
---
arch/powerpc/kernel/Makefile | 10 ++
arch/powerpc/kernel/trace/Makefile | 1 +
arch/powerpc/kernel/vdso/Makefile | 1 +
arch/powerpc/lib/Makefile | 2 ++
arch
Enable HAVE_ARCH_KCSAN on all powerpc platforms, permitting use of the
kernel concurrency sanitiser through the CONFIG_KCSAN_* kconfig options.
See documentation in Documentation/dev-tools/kcsan.rst for more
information.
Signed-off-by: Rohan McLure
---
arch/powerpc/Kconfig | 1 +
1 file
to be issued in future patches.
v2: Implement __smp_mb() in terms of __mb() to avoid multiple calls to
kcsan_mb().
v1:
https://lore.kernel.org/linuxppc-dev/20230131234859.1275125-1-rmcl...@linux.ibm.com/
Rohan McLure (5):
powerpc: kcsan: Add exclusions from instrumentation
powerpc: kcsan: Exclude
to be issued in future patches.
Rohan McLure (5):
powerpc: kcsan: Add exclusions from instrumentation
powerpc: kcsan: Exclude udelay to prevent recursive instrumentation
powerpc: kcsan: Memory barriers semantics
powerpc: kcsan: Prevent recursive instrumentation with IRQ
save/restores
powerpc
Great job spotting this. Somehow lost these throughout the revisions. Thanks.
> On 7 Dec 2022, at 9:24 am, Michael Ellerman wrote:
>
> Rohan McLure writes:
>> The check that a higher-level entry in multi-level pages contains a page
>> translation entry (pte) is perfor
asm-generic/qspinlock.h provides an identical implementation of
queued_spin_lock. Remove the variant in asm/qspinlock.h.
Signed-off-by: Rohan McLure
---
arch/powerpc/include/asm/qspinlock.h | 11 ---
1 file changed, 11 deletions(-)
diff --git a/arch/powerpc/include/asm/qspinlock.h
b
Cause pseries and POWERNV platforms to default to zeroising all potentially
user-defined registers when entering the kernel by means of any interrupt
source, reducing user-influence of the kernel and the likelihood of
producing speculation gadgets.
Acked-by: Nicholas Piggin
Signed-off-by: Rohan
safely, without loss of register state prior to the
interrupt, as the common prologue saves the initial values of
non-volatiles, which are unconditionally restored in interrupt_64.S.
Mitigation defaults to enabled by INTERRUPT_SANITIZE_REGISTERS.
Reviewed-by: Nicholas Piggin
Signed-off-by: Rohan
is able to be enabled/disabled through
CONFIG_INTERRUPT_SANITIZE_REGISTERS.
Reviewed-by: Nicholas Piggin
Signed-off-by: Rohan McLure
---
v2: REST_NVGPRS should be conditional on mitigation in scv handler. Fix
improper multi-line preprocessor macro in interrupt_64.S
v4: Split off IMSR_R12 definition
speculation influence of user data.
Provide an IOption that signals that r12 must be retained, as the
interrupt handler assumes it to hold the contents of the MSR.
Reviewed-by: Nicholas Piggin
Signed-off-by: Rohan McLure
---
v4: Split 64s register sanitisation commit to establish this IOption
---
arch
state, meaning that the
mitigation when implemented will have minimal overhead.
Acked-by: Nicholas Piggin
Signed-off-by: Rohan McLure
---
Resubmitting patches as their own series after v6 partially merged:
Link:
https://lore.kernel.org/all/166488988686.779920.13794870102696416283.b4
required by the mitigation.
Reviewed-by: Nicholas Piggin
Signed-off-by: Rohan McLure
---
v4: New patch
v5: Remove unnecessary _ZEROIZE_ parts of macro titles, as the
implementation of how registers are sanitised doesn't need to be
immediately accessible, only that values will be clobbered. Introduce
this configuration option is either disabled or enabled.
Reviewed-by: Nicholas Piggin
Signed-off-by: Rohan McLure
---
v4: New patch
---
arch/powerpc/kernel/interrupt_64.S | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/kernel/interrupt_64.S
b/arch/powerpc/kernel/interrupt_64.S
Cause pseries and POWERNV platforms to default to zeroising all potentially
user-defined registers when entering the kernel by means of any interrupt
source, reducing user-influence of the kernel and the likelihood of
producing speculation gadgets.
Signed-off-by: Rohan McLure
---
Resubmitting
safely, without loss of register state prior to the
interrupt, as the common prologue saves the initial values of
non-volatiles, which are unconditionally restored in interrupt_64.S.
Mitigation defaults to enabled by INTERRUPT_SANITIZE_REGISTERS.
Reviewed-by: Nicholas Piggin
Signed-off-by: Rohan
is able to be enabled/disabled through
CONFIG_INTERRUPT_SANITIZE_REGISTERS.
Reviewed-by: Nicholas Piggin
Signed-off-by: Rohan McLure
---
v2: REST_NVGPRS should be conditional on mitigation in scv handler. Fix
improper multi-line preprocessor macro in interrupt_64.S
v4: Split off IMSR_R12 definition
speculation influence of user data.
Provide an IOption that signals that r12 must be retained, as the
interrupt handler assumes it to hold the contents of the MSR.
Signed-off-by: Rohan McLure
---
v4: Split 64s register sanitisation commit to establish this IOption
---
arch/powerpc/kernel/exceptions
this configuration option is either disabled or enabled.
Signed-off-by: Rohan McLure
---
v4: New patch
---
arch/powerpc/kernel/interrupt_64.S | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/kernel/interrupt_64.S
b/arch/powerpc/kernel/interrupt_64.S
index 978a173eb339
state, meaning that the
mitigation when implemented will have minimal overhead.
Acked-by: Nicholas Piggin
Signed-off-by: Rohan McLure
---
Resubmitting patches as their own series after v6 partially merged:
Link:
https://lore.kernel.org/all/166488988686.779920.13794870102696416283.b4
required by the mitigation.
Signed-off-by: Rohan McLure
---
v4: New patch
---
arch/powerpc/include/asm/ppc_asm.h | 17 +
1 file changed, 17 insertions(+)
diff --git a/arch/powerpc/include/asm/ppc_asm.h
b/arch/powerpc/include/asm/ppc_asm.h
index 753a2757bcd4..272b2795c36a 100644
safely, without loss of register state prior to the
interrupt, as the common prologue saves the initial values of
non-volatiles, which are unconditionally restored in interrupt_64.S.
Mitigation defaults to enabled by INTERRUPT_SANITIZE_REGISTERS.
Signed-off-by: Rohan McLure
---
Resubmitting
Cause pseries platforms to default to zeroising all potentially user-defined
registers when entering the kernel by means of any interrupt source,
reducing user-influence of the kernel and the likelihood of producing
speculation gadgets.
Signed-off-by: Rohan McLure
---
Resubmitting patches
state, meaning that the
mitigation when implemented will have minimal overhead.
Acked-by: Nicholas Piggin
Signed-off-by: Rohan McLure
---
Resubmitting patches as their own series after v6 partially merged:
Link:
https://lore.kernel.org/all/166488988686.779920.13794870102696416283.b4
.
Signed-off-by: Rohan McLure
---
Resubmitting patches as their own series after v6 partially merged:
Link:
https://lore.kernel.org/all/166488988686.779920.13794870102696416283.b4...@ellerman.id.au/t/
v2: REST_NVGPRS should be conditional on mitigation in scv handler. Fix
improper multi-line
> On 7 Nov 2022, at 9:49 am, Rohan McLure wrote:
>
> Replace occurrences of p{u,m,4}d_is_leaf with p{u,m,4}d_leaf, as the
> latter is the name given to checking that a higher-level entry in
> multi-level paging contains a page translation entry (pte). This commit
> allows
arm64/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK")
x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
check")
Signed-off-by: Rohan McLure
Reviewed-by: Russell Currey
Reviewed-by: Christophe Leroy
---
V2: Update spacing and types assigned to pte_update calls.
-nop4d.h which defines a default pud_leaf.
Define pud_leaf preprocessor macro on both Book3E/S 32-bit to avoid
including the default definition in asm/pgtable.h.
Signed-off-by: Rohan McLure
---
V4: new patch.
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 2 +-
arch/powerpc/include/asm
> On 3 Nov 2022, at 7:02 pm, Christophe Leroy
> wrote:
>
>
>
> Le 24/10/2022 à 02:35, Rohan McLure a écrit :
>> Add the following helpers for detecting whether a page table entry
>> is a leaf and is accessible to user space.
>>
>> * pte_user_acce
Cause pseries platforms to default to zeroising all potentially user-defined
registers when entering the kernel by means of any interrupt source,
reducing user-influence of the kernel and the likelihood of producing
speculation gadgets. Interrupt sources include syscalls.
Signed-off-by: Rohan
safely, without loss of register state prior to the
interrupt, as the common prologue saves the initial values of
non-volatiles, which are unconditionally restored in interrupt_64.S.
Mitigation defaults to enabled by INTERRUPT_SANITIZE_REGISTERS.
Signed-off-by: Rohan McLure
---
Resubmitting
state, meaning that the
mitigation when implemented will have minimal overhead.
Signed-off-by: Rohan McLure
Acked-by: Nicholas Piggin
---
Resubmitting patches as their own series after v6 partially merged:
Link:
https://lore.kernel.org/all/166488988686.779920.13794870102696416283.b4
.
Signed-off-by: Rohan McLure
---
Resubmitting patches as their own series after v6 partially merged:
Link:
https://lore.kernel.org/all/166488988686.779920.13794870102696416283.b4...@ellerman.id.au/t/
Standalone series: Now syscall register clearing is included
under the same configuration option
> On 23 Sep 2022, at 6:02 pm, Nicholas Piggin wrote:
>
> On Wed Sep 21, 2022 at 4:56 PM AEST, Rohan McLure wrote:
>> Clear user state in gprs (assign to zero) to reduce the influence of user
>> registers on speculation within kernel syscall handlers. Clears occur
>
the
DEFINE_INTERRUPT_HANDLER macros.
Applying this offset perturbs the layout of interrupt handler stack
frames, rendering the kernel stack more difficult to control by means
of user invoked interrupts.
Link: https://lists.ozlabs.org/pipermail/linuxppc-dev/2022-May/243238.html
Signed-off-by: Rohan McLure
---
arch
.
Signed-off-by: Rohan McLure
---
V2: Remove conditional BUILD_BUG and BUG. Instead warn on usage.
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 --
arch/powerpc/include/asm/pgtable.h | 14 ++
2 files changed, 14 insertions(+), 10 deletions(-)
diff --git
for Book3S/nohash 32-bit systems.
Signed-off-by: Rohan McLure
---
V2: Provide missing pud_user implementations, use p{u,m}d_is_leaf.
V3: Provide missing pmd_user implementations as stubs in 32-bit.
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 4
arch/powerpc/include/asm/book3s/64/pgtable.h
Add the following helpers for detecting whether a page table entry
is a leaf and is accessible to user space.
* pte_user_accessible_page
* pmd_user_accessible_page
* pud_user_accessible_page
Also implement missing pud_user definitions for both Book3S/E 64-bit
systems.
Signed-off-by: Rohan
safely, without loss of register state prior to the
interrupt, as the common prologue saves the initial values of
non-volatiles, which are unconditionally restored in interrupt_64.S.
Mitigation defaults to enabled by INTERRUPT_SANITIZE_REGISTERS.
Signed-off-by: Rohan McLure
---
V4: New patch.
V5
interrupt
handlers. This mitigation is disabled by default, but enabled with
CONFIG_INTERRUPT_SANITIZE_REGISTERS.
Signed-off-by: Rohan McLure
---
V2: Add benchmark data
V3: Use ZEROIZE_GPR{,S} macro renames, clarify
interrupt_exit_user_prepare changes in summary.
V5: Configurable now
state, meaning that the
mitigation when implemented will have minimal overhead.
Signed-off-by: Rohan McLure
Acked-by: Nicholas Piggin
---
V5: New patch
---
arch/powerpc/Kconfig | 9 +
1 file changed, 9 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index
-by: Rohan McLure
---
V2: Update summary
V3: Remove erroneous summary paragraph on syscall_exit_prepare
V4: Use ZEROIZE instead of NULLIFY. Clear r0 also.
V5: Move to end of patch series.
V6: Include clears which were previously in the syscall wrapper patch.
Move comment on r3-r8 register save to when we
xed in a successive patch, but requires spu_sys_callback to
allocate a pt_regs structure to satisfy the wrapped calling convention.
Co-developed-by: Andrew Donnellan
Signed-off-by: Andrew Donnellan
Signed-off-by: Rohan McLure
---
V2: Generate prototypes for symbols produced by the wrapper.
V3: Rebased to rem
frame.
Signed-off-by: Rohan McLure
---
V6: Split off from syscall wrapper patch.
---
arch/powerpc/include/asm/interrupt.h | 3 +--
arch/powerpc/kernel/entry_32.S | 6 +++---
arch/powerpc/kernel/interrupt_64.S | 20 ++--
arch/powerpc/kernel/syscall.c| 10
Remove explicit clearing of the high order-word of user parameters when
handling compatibility syscalls in system_call_exception. The
COMPAT_SYSCALL_DEFINEx macros handle this clearing through an
explicit cast to the signature type of the target handler.
Signed-off-by: Rohan McLure
Reported
.
Fixup comparisons in VDSO to avoid pointer-integer comparison. Introduce
explicit cast on systems with SPUs.
Signed-off-by: Rohan McLure
Reviewed-by: Nicholas Piggin
---
V2: New patch.
V3: Remove unnecessary cast from const syscall_fn to syscall_fn
V5: Update patch description.
V6: Remove
/syscalls.h headers for
prototypes.
Reported-by: Arnd Bergmann
Signed-off-by: Rohan McLure
Reviewed-by: Nicholas Piggin
---
V2: New patch.
V5: For this patch only, represent handler function pointers as
unsigned long. Remove reference to syscall wrappers. Use asm/syscalls.h
which implies asm
handlers are expressed in terms of types in
ppc32.h. Expose this header globally.
Signed-off-by: Rohan McLure
Acked-by: Nicholas Piggin
---
V2: Explicitly include prototypes.
V3: Remove extraneous #include and ppc_fallocate
prototype. Rename header.
V5: Clean. Elaborate comment on long long
, through the
__ARCH_WANT_COMPAT_SYS_... convention.
Signed-off-by: Rohan McLure
Reviewed-by: Nicholas Piggin
---
V2: All syscall handlers wrapped by this macro.
V3: Move creation of do_ppc64_personality helper to prior patch.
V4: Fix parenthesis alignment. Don't emit sys_*** symbols.
V5: Use