> On March 29, 2019 at 3:41 AM Christophe Leroy wrote:
>
>
>
>
> On 29/03/2019 at 05:21, cmr wrote:
> > Operations which write to memory should be restricted on secure systems
> > and, optionally, to avoid self-destructive behaviors.
> >
> > Add a config option, XMON_RO, to control default
> On March 29, 2019 at 12:49 AM Andrew Donnellan
> wrote:
>
>
> On 29/3/19 3:21 pm, cmr wrote:
> > Operations which write to memory should be restricted on secure systems
> > and, optionally, to avoid self-destructive behaviors.
>
> For reference:
> -
> On April 3, 2019 at 12:15 AM Christophe Leroy wrote:
>
>
>
>
> On 03/04/2019 at 05:38, Christopher M Riedl wrote:
> >> On March 29, 2019 at 3:41 AM Christophe Leroy
> >> wrote:
> >>
> >>
> >>
> >>
disable memset
disable memzcan
memex:
no-op'd mwrite
super_regs:
no-op'd write_spr
bpt_cmds:
disable
proc_call:
disable
Signed-off-by: Christopher M. Riedl
---
v1->v2:
Use bool type for xmon_is_ro flag
Replace XMON_RO with XMON_RW con
> On April 8, 2019 at 1:34 AM Oliver wrote:
>
>
> On Mon, Apr 8, 2019 at 1:06 PM Christopher M. Riedl
> wrote:
> >
> > Operations which write to memory and special purpose registers should be
> > restricted on systems with integrity guarantees (such as
> On April 8, 2019 at 2:37 AM Andrew Donnellan
> wrote:
>
>
> On 8/4/19 1:08 pm, Christopher M. Riedl wrote:
> > Operations which write to memory and special purpose registers should be
> > restricted on systems with integrity guarantees (such as Secure Boot)
>
-by: Christopher M. Riedl
---
v2->v3:
Use XMON_DEFAULT_RO_MODE to set xmon read-only mode
Untangle read-only mode from STRICT_KERNEL_RWX and PAGE_KERNEL_ROX
Update printed msg string for write ops in read-only mode
arch/powerpc/Kconfig.debug | 8
arch/powe
-by: Christopher M. Riedl
Reviewed-by: Oliver O'Halloran
---
v3->v4:
Address Andrew's nitpick.
arch/powerpc/Kconfig.debug | 8
arch/powerpc/xmon/xmon.c | 42 ++
2 files changed, 50 insertions(+)
diff --git a/arch/powerpc/Kconfig.debu
Add support for disabling the kernel implemented spectre v2 mitigation
(count cache flush on context switch) via the nospectre_v2 and
mitigations=off cmdline options.
Suggested-by: Michael Ellerman
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
v3->v4:
> On June 3, 2019 at 1:36 AM Andrew Donnellan wrote:
>
>
> On 24/5/19 10:38 pm, Christopher M. Riedl wrote:
> > Xmon should be either fully or partially disabled depending on the
> > kernel lockdown state.
> >
> > Put xmon into read-only mode for lockdown=
(2) lockdown=none -> lockdown=confidentiality
clear all breakpoints, prevent re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
prevent re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christopher M. Riedl
---
Applies on top of this seri
Add support for disabling the kernel implemented spectre v2 mitigation
(count cache flush on context switch) via the nospectre_v2 and
mitigations=off cmdline options.
Suggested-by: Michael Ellerman
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
v4->v5:
(2) lockdown=none -> lockdown=confidentiality
clear all breakpoints, prevent re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
prevent re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christopher M. Riedl
---
Applies on top of this series:
> On May 6, 2019 at 9:29 PM Michael Ellerman wrote:
>
>
> Christopher M Riedl writes:
> >> On May 5, 2019 at 9:32 PM Andrew Donnellan wrote:
> >> On 6/5/19 8:10 am, Christopher M. Riedl wrote:
> >> > Add support for disabling the kernel implemented
Add support for disabling the kernel implemented spectre v2 mitigation
(count cache flush on context switch) via the nospectre_v2 cmdline
option.
Suggested-by: Michael Ellerman
Signed-off-by: Christopher M. Riedl
---
v1->v2:
add call to toggle_count_cache_flush(false)
arch/powe
Add support for disabling the kernel implemented spectre v2 mitigation
(count cache flush on context switch) via the nospectre_v2 cmdline
option.
Suggested-by: Michael Ellerman
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
v2->v3:
Address mpe's nitpick
a
> On May 7, 2019 at 5:54 AM Michael Ellerman wrote:
>
>
> "Christopher M. Riedl" writes:
> > diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
> > index b33bafb8fcea..d775da9b9227 100644
> > --- a/arch/powerpc/kernel/
> On May 5, 2019 at 9:32 PM Andrew Donnellan wrote:
>
>
> On 6/5/19 8:10 am, Christopher M. Riedl wrote:
> > Add support for disabling the kernel implemented spectre v2 mitigation
> > (count cache flush on context switch) via the nospectre_v2 cmdline
> > option.
Add support for disabling the kernel implemented spectre v2 mitigation
(count cache flush on context switch) via the nospectre_v2 cmdline
option.
Suggested-by: Michael Ellerman
Signed-off-by: Christopher M. Riedl
---
reference: https://github.com/linuxppc/issues/issues/236
arch/powerpc/kernel
> On April 11, 2019 at 8:37 AM Michael Ellerman wrote:
>
>
> Christopher M Riedl writes:
> >> On April 8, 2019 at 1:34 AM Oliver wrote:
> >> On Mon, Apr 8, 2019 at 1:06 PM Christopher M. Riedl
> >> wrote:
> ...
> >> >
> >
> On August 2, 2019 at 6:38 AM Michael Ellerman wrote:
>
>
> "Christopher M. Riedl" writes:
> > diff --git a/arch/powerpc/include/asm/spinlock.h
> > b/arch/powerpc/include/asm/spinlock.h
> > index 0a8270183770..6aed8a83b180 100644
> > --- a/arch
> On July 29, 2019 at 2:00 AM Daniel Axtens wrote:
>
> Would you be able to send a v2 with these changes? (that is, not purging
> breakpoints when entering integrity mode)
>
Just sent out a v3 with that change among a few others and a rebase.
Thanks,
Chris R.
(2) lockdown=none -> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christopher
/11049461/
(based on: f632a8170a6b667ee4e3f552087588f0fe13c4bb)
- Do not clear existing breakpoints when transitioning from
lockdown=none to lockdown=integrity
- Remove line continuation and dangling quote (confuses checkpatch.pl)
from the xmon command help/usage string
Christopher M
Xmon can enter read-only mode dynamically due to changes in kernel
lockdown state. This transition does not clear active breakpoints and
any breakpoints should remain visible to the xmon user.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 19 ++-
1 file
in is_shared_processor() in spinlock.h
- Replace empty #define of splpar_*_yield() with actual functions with
empty bodies
Christopher M. Riedl (3):
powerpc/spinlocks: Refactor SHARED_PROCESSOR
powerpc/spinlocks: Rename SPLPAR-only spinlocks
powerpc/spinlocks: Fix oops in shared-processor spinlocks
arch
Determining if a processor is in shared processor mode is not a constant
so don't hide it behind a #define.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 24 ++--
1 file changed, 18 insertions(+), 6 deletions
The __rw_yield and __spin_yield locks only pertain to SPLPAR mode.
Rename them to make this relationship obvious.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 6 --
arch/powerpc/lib/locks.c | 6 +++---
2 files
e8e7 38e70100 <7ca03c2c> 70a70001 78a50020
4d820020
[0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 36 -
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/arch/p
checkpatch.pl)
from the xmon command help/usage string
Christopher M. Riedl (2):
powerpc/xmon: Allow listing active breakpoints in read-only mode
powerpc/xmon: Restrict when kernel is locked down
arch/powerpc/xmon/xmon.c | 104 +++
include/linux/security.h
clear all breakpoints, set xmon read-only mode,
prevent user re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent user re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christopher M. Riedl
Read-only mode should not prevent listing and clearing any active
breakpoints.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 15 ++-
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index
> On August 29, 2019 at 1:40 AM Daniel Axtens wrote:
>
>
> Hi Chris,
>
> > Read-only mode should not prevent listing and clearing any active
> > breakpoints.
>
> I tested this and it works for me:
>
> Tested-by: Daniel Axtens
>
> > + if (xmon_is_ro || !scanhex()) {
>
> It took
> On August 29, 2019 at 2:43 AM Daniel Axtens wrote:
>
>
> Hi,
>
> > Xmon should be either fully or partially disabled depending on the
> > kernel lockdown state.
>
> I've been kicking the tyres of this, and it seems to work well:
>
> Tested-by: Daniel Axtens
>
Thank you for taking the
y: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 92
include/linux/security.h | 2 +
security/lockdown/lockdown.c | 2 +
3 files changed, 76 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arc
lockdown=none to lockdown=integrity
- Remove line continuation and dangling quote (confuses checkpatch.pl)
from the xmon command help/usage string
Christopher M. Riedl (2):
powerpc/xmon: Allow listing and clearing breakpoints in read-only mode
powerpc/xmon: Restrict when kernel is locked down
Read-only mode should not prevent listing and clearing any active
breakpoints.
Tested-by: Daniel Axtens
Reviewed-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 16 +++-
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch
clear all breakpoints, set xmon read-only mode,
prevent user re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent user re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christopher M. Riedl
(confuses checkpatch.pl)
from the xmon command help/usage string
Christopher M. Riedl (2):
powerpc/xmon: Allow listing and clearing breakpoints in read-only mode
powerpc/xmon: Restrict when kernel is locked down
arch/powerpc/xmon/xmon.c | 119
The __rw_yield and __spin_yield locks only pertain to SPLPAR mode.
Rename them to make this relationship obvious.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 6 --
arch/powerpc/lib/locks.c | 6 +++---
2 files changed, 7 insertions(+), 5 deletions
Determining if a processor is in shared processor mode is not a constant
so don't hide it behind a #define.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 21 +++--
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/include
e8e7 38e70100 <7ca03c2c> 70a70001 78a50020
4d820020
[0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 36 -
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/arch/p
Fixes an oops when calling the shared-processor spinlock implementation
from a non-SP LPAR. Also take this opportunity to refactor
SHARED_PROCESSOR a bit.
Reference: https://github.com/linuxppc/issues/issues/229
Christopher M. Riedl (3):
powerpc/spinlocks: Refactor SHARED_PROCESSOR
powerpc
> On July 30, 2019 at 7:11 PM Thiago Jung Bauermann
> wrote:
>
>
>
> Christopher M Riedl writes:
>
> >> On July 30, 2019 at 4:31 PM Thiago Jung Bauermann
> >> wrote:
> >>
> >>
> >>
> >> Christopher M. Riedl
> On July 30, 2019 at 4:31 PM Thiago Jung Bauermann
> wrote:
>
>
>
> Christopher M. Riedl writes:
>
> > Determining if a processor is in shared processor mode is not a constant
> > so don't hide it behind a #define.
> >
> > Signed-off-by: Ch
is
required in is_shared_processor() in spinlock.h
- Replace empty #define of splpar_*_yield() with actual functions with
empty bodies.
Christopher M. Riedl (3):
powerpc/spinlocks: Refactor SHARED_PROCESSOR
powerpc/spinlocks: Rename SPLPAR-only spinlocks
powerpc/spinlocks: Fix oops
The __rw_yield and __spin_yield locks only pertain to SPLPAR mode.
Rename them to make this relationship obvious.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 6 --
arch/powerpc/lib/locks.c | 6 +++---
2 files
Determining if a processor is in shared processor mode is not a constant
so don't hide it behind a #define.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 24 ++--
1 file changed, 18 insertions(+), 6 deletions
e8e7 38e70100 <7ca03c2c> 70a70001 78a50020
4d820020
[0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 36 -
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/arch/p
Determining if a processor is in shared processor mode is not a constant
so don't hide it behind a #define.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 24 ++--
1 file changed, 18 insertions(+), 6 deletions
The __rw_yield and __spin_yield locks only pertain to SPLPAR mode.
Rename them to make this relationship obvious.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 6 --
arch/powerpc/lib/locks.c | 6 +++---
2 files
e8e7 38e70100 <7ca03c2c> 70a70001 78a50020
4d820020
[0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
Signed-off-by: Christopher M. Riedl
---
Changes since v2:
- Directly call splpar_*_yield() to avoid duplicate call to
is_shared_processor() in some cases
arch/powerpc/inclu
to
is_shared_processor() in some cases
Changes since v1:
- Improve comment wording to make it clear why the BOOK3S #ifdef is
required in is_shared_processor() in spinlock.h
- Replace empty #define of splpar_*_yield() with actual functions with
empty bodies
Christopher M. Riedl (3):
powerpc
> On August 6, 2019 at 7:14 AM Michael Ellerman wrote:
>
>
> Christopher M Riedl writes:
> >> On August 2, 2019 at 6:38 AM Michael Ellerman wrote:
> >> "Christopher M. Riedl" writes:
> >>
> >> This leaves us with a double test of
> On March 26, 2020 9:42 AM Christophe Leroy wrote:
>
>
> This patch fixes the RFC series identified below.
> It fixes three points:
> - Failure with CONFIG_PPC_KUAP
> - Failure to write due to lack of DIRTY bit set on the 8xx
> - Inadequately complex WARN post-verification
>
> However, it has
> On April 8, 2020 6:01 AM Christophe Leroy wrote:
>
>
> Le 31/03/2020 at 05:19, Christopher M Riedl wrote:
> >> On March 24, 2020 11:10 AM Christophe Leroy
> >> wrote:
> >>
> >>
> >> On 23/03/2020 at 05:52, Christopher M. Rie
> On March 24, 2020 11:25 AM Christophe Leroy wrote:
>
>
> On 23/03/2020 at 05:52, Christopher M. Riedl wrote:
> > Currently, code patching a STRICT_KERNEL_RWX exposes the temporary
> > mappings to other CPUs. These mappings should be kept local to the CPU
> > d
On Sat Apr 18, 2020 at 12:27 PM, Christophe Leroy wrote:
>
>
>
>
> On 15/04/2020 at 18:22, Christopher M Riedl wrote:
> >> On April 15, 2020 4:12 AM Christophe Leroy wrote:
> >>
> >>
> >> On 15/04/2020 at 07:16, Christopher M Riedl wrote
> On March 24, 2020 11:10 AM Christophe Leroy wrote:
>
>
> On 23/03/2020 at 05:52, Christopher M. Riedl wrote:
> > When code patching a STRICT_KERNEL_RWX kernel the page containing the
> > address to be patched is temporarily mapped with permissive memory
> >
> On March 24, 2020 11:07 AM Christophe Leroy wrote:
>
>
> On 23/03/2020 at 05:52, Christopher M. Riedl wrote:
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situations
> >
nel text is now overwritten.
How to run the test:
mount -t debugfs none /sys/kernel/debug
(echo HIJACK_PATCH > /sys/kernel/debug/provoke-crash/DIRECT)
Signed-off-by: Christopher M. Riedl
---
drivers/misc/lkdtm/core.c | 1 +
drivers/misc/lkdtm/lkdt
. Choose a
randomized patching address inside the temporary mm userspace address
portion. The next patch uses the temporary mm and patching address for
code patching.
Based on x86 implementation:
commit 4fc19708b165
("x86/alternatives: Initialize temporary mm for patching")
Signed-off-by: Chr
her CPU.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/lib/code-patching.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 26f06cdb5d7e..cfbdef90384e 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/a
implementation:
commit cefa929c034e
("x86/mm: Introduce temporary mm structs")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/debug.h | 1 +
arch/powerpc/include/asm/mmu_context.h | 54 ++
arch/powerpc/kernel/process.c | 5 +++
3 files c
224
[1]:
https://lore.kernel.org/kernel-hardening/20190426232303.28381-1-nadav.a...@gmail.com/
Christopher M. Riedl (5):
powerpc/mm: Introduce temporary mm
powerpc/lib: Initialize a temporary mm for code patching
powerpc/lib: Use a temporary mm for code patching
powerpc/lib: Add LKDTM acc
is ignored (see PowerISA v3.0b, Fig. 35).
Based on x86 implementation:
commit b3fd8e83ada0
("x86/alternatives: Use temporary mm for text poking")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/lib/code-patching.c | 149 ---
1 file changed, 55 inserti
On Fri Apr 24, 2020 at 9:15 AM, Steven Rostedt wrote:
> On Thu, 23 Apr 2020 18:21:14 +0200
> Christophe Leroy wrote:
>
>
> > On 23/04/2020 at 17:09, Naveen N. Rao wrote:
> > > With STRICT_KERNEL_RWX, we are currently ignoring return value from
> > > __patch_instruction() in
On Wed Apr 29, 2020 at 7:39 AM, Christophe Leroy wrote:
>
>
>
>
> On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situa
On Wed Apr 29, 2020 at 7:48 AM, Christophe Leroy wrote:
>
>
>
>
> On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situa
On Wed Apr 29, 2020 at 7:52 AM, Christophe Leroy wrote:
>
>
>
>
> On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
> > Currently, code patching a STRICT_KERNEL_RWX exposes the temporary
> > mappings to other CPUs. These mappings should be kept local to the CPU
> On April 15, 2020 4:12 AM Christophe Leroy wrote:
>
>
> On 15/04/2020 at 07:16, Christopher M Riedl wrote:
> >> On March 26, 2020 9:42 AM Christophe Leroy wrote:
> >>
> >>
> >> This patch fixes the RFC series identified b
> On April 15, 2020 3:45 AM Christophe Leroy wrote:
>
>
> On 15/04/2020 at 07:11, Christopher M Riedl wrote:
> >> On March 24, 2020 11:25 AM Christophe Leroy
> >> wrote:
> >>
> >>
> >> On 23/03/2020 at 05:52, Chri
.
Based on x86 implementation:
commit b3fd8e83ada0
("x86/alternatives: Use temporary mm for text poking")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/lib/code-patching.c | 128 ++-
1 file changed, 57 insertions(+), 71 deletions(-)
diff --git a/arch/p
@gmail.com/
Christopher M. Riedl (3):
powerpc/mm: Introduce temporary mm
powerpc/lib: Initialize a temporary mm for code patching
powerpc/lib: Use a temporary mm for code patching
arch/powerpc/include/asm/debug.h | 1 +
arch/powerpc/include/asm/mmu_context.h | 56 +-
arch/powe
. Choose a
randomized patching address inside the temporary mm userspace address
portion. The next patch uses the temporary mm and patching address for
code patching.
Based on x86 implementation:
commit 4fc19708b165
("x86/alternatives: Initialize temporary mm for patching")
Signed-off-by: Chr
implementation:
commit cefa929c034e
("x86/mm: Introduce temporary mm structs")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/debug.h | 1 +
arch/powerpc/include/asm/mmu_context.h | 56 +-
arch/powerpc/kernel/process.c | 5 +++
3 files c
On Thu Aug 27, 2020 at 11:15 AM CDT, Jann Horn wrote:
> On Thu, Aug 27, 2020 at 7:24 AM Christopher M. Riedl
> wrote:
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situations
>
with their 'unsafe'
versions which avoid the repeated uaccess switches.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 68 -
1 file changed, 41 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc
with their 'unsafe' versions
which avoid the repeated uaccess switches.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 71 -
1 file changed, 44 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel
cess block.
Signed-off-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 54 -
1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index 6d
ed() calls
__copy_tofrom_user() internally, but this is still safe to call in user
access blocks formed with user_*_access_begin()/user_*_access_end()
since asm functions are not instrumented for tracing.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/uaccess.h | 28 +++--
uaccess functions with their 'unsafe'
versions to avoid the repeated uaccess switches.
Signed-off-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 32 +++-
1 file changed, 19 insertions(+), 13 deletions(-)
diff --git a/arch
| linuxppc/next | 289014 | 158408 |
| unsafe-signal64 | 298506 | 253053 |
[0]: https://github.com/linuxppc/issues/issues/277
[1]: https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=196278
[2]: https://github.com/antonblanchard/will-it-scale/blob/master/tests/sign
From: Daniel Axtens
Add uaccess blocks and use the 'unsafe' versions of functions doing user
access where possible to reduce the number of times uaccess has to be
opened/closed.
Signed-off-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 23
ignificantly reduces signal handling performance.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal.h | 33 +
1 file changed, 33 insertions(+)
diff --git a/arch/powerpc/kernel/signal.h b/arch/powerpc/kernel/signal.h
index 2559a681536e..e9aaeac0da
-by: Christopher M. Riedl
---
arch/powerpc/kernel/process.c | 20 ++--
arch/powerpc/mm/mem.c | 4 ++--
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index ba2c987b8403..bf5d9654bd2c 100644
--- a/arch
On Fri Oct 16, 2020 at 10:48 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > Reuse the "safe" implementation from signal.c except for calling
> > unsafe_copy_from_user() to copy into a local buffer. Unlike the
&g
On Fri Oct 16, 2020 at 10:56 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > From: Daniel Axtens
> >
> > Previously setup_trampoline() performed a costly KUAP switch on every
> > uaccess operation. Thes
On Fri Oct 16, 2020 at 11:00 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > From: Daniel Axtens
> >
> > Add uaccess blocks and use the 'unsafe' versions of functions doing user
> > access where possible to reduc
On Fri Oct 16, 2020 at 11:07 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > From: Daniel Axtens
> >
> > Add uaccess blocks and use the 'unsafe' versions of functions doing user
> > access where possible to reduc
On Fri Oct 16, 2020 at 10:17 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > Implement raw_copy_from_user_allowed() which assumes that userspace read
> > access is open. Use this new function to implement raw_copy_from_user
On Fri Oct 16, 2020 at 4:02 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > Functions called between user_*_access_begin() and user_*_access_end()
> > should be either inlined or marked 'notrace' to prevent leaving
> > use
).
Based on x86 implementation:
commit b3fd8e83ada0
("x86/alternatives: Use temporary mm for text poking")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/lib/code-patching.c | 153 +++
1 file changed, 54 insertions(+), 99 deletions(-)
diff --git a/ar
her CPU.
Signed-off-by: Christopher M. Riedl
---
arch/x86/include/asm/text-patching.h | 4
arch/x86/kernel/alternative.c| 7 +++
2 files changed, 11 insertions(+)
diff --git a/arch/x86/include/asm/text-patching.h
b/arch/x86/include/asm/text-patching.h
index 6593b42cb379.
'memcmp' where a simple comparison is appropriate
* Simplify expression for patch address by removing pointer maths
* Add LKDTM test
[0]: https://github.com/linuxppc/issues/issues/224
[1]:
https://lore.kernel.org/kernel-hardening/20190426232303.28381-1-nadav.a...@gmail.com/
Christopher M. Rie
implementation:
commit cefa929c034e
("x86/mm: Introduce temporary mm structs")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/debug.h | 1 +
arch/powerpc/kernel/process.c | 5 +++
arch/powerpc/lib/code-patching.c | 65
3 files changed, 71
her CPU.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/code-patching.h | 4
arch/powerpc/lib/code-patching.c | 7 +++
2 files changed, 11 insertions(+)
diff --git a/arch/powerpc/include/asm/code-patching.h
b/arch/powerpc/include/asm/code-patching.h
index eacc91
How to run the test:
mount -t debugfs none /sys/kernel/debug
(echo HIJACK_PATCH > /sys/kernel/debug/provoke-crash/DIRECT)
Signed-off-by: Christopher M. Riedl
---
drivers/misc/lkdtm/core.c | 1 +
drivers/misc/lkdtm/lkdtm.h | 1 +
drivers/misc/lkdtm/perms.c | 146 +++