On Thu, Jul 11, 2019 at 07:11:19AM +0000, Nadav Amit wrote:
> > On Jul 10, 2019, at 7:22 AM, Jiri Kosina <ji...@kernel.org> wrote:
> > 
> > On Wed, 10 Jul 2019, Peter Zijlstra wrote:
> > 
> >> If we mark the key as RO after init, and then try to modify the key to
> >> link module usage sites, things might go bang as described.
> >> 
> >> Thanks!
> >> 
> >> 
> >> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> >> index 27d7864e7252..5bf7a8354da2 100644
> >> --- a/arch/x86/kernel/cpu/common.c
> >> +++ b/arch/x86/kernel/cpu/common.c
> >> @@ -366,7 +366,7 @@ static __always_inline void setup_umip(struct 
> >> cpuinfo_x86 *c)
> >>    cr4_clear_bits(X86_CR4_UMIP);
> >> }
> >> 
> >> -DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
> >> +DEFINE_STATIC_KEY_FALSE(cr_pinning);
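
For reference, the only difference between the two macros is where the
key lands: the _RO variant tags it __ro_after_init, so mark_rodata_ro()
write-protects it once init is done. Roughly, paraphrasing
include/linux/jump_label.h (not the exact tree):

        #define DEFINE_STATIC_KEY_FALSE(name)                          \
                struct static_key_false name = STATIC_KEY_FALSE_INIT

        #define DEFINE_STATIC_KEY_FALSE_RO(name)                       \
                struct static_key_false name __ro_after_init =         \
                        STATIC_KEY_FALSE_INIT

Loading a module that uses the key later makes jump_label write into
the key itself, to link the module's branch sites into it, and that
write faults on the now read-only page -- hence the "go bang" above.
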
> > 
> > Good catch, I guess that is going to fix it.
> > 
> > At the same time though, it sort of destroys the original intent of Kees'
> > patch, right? An exploit would just have to call static_key_disable()
> > before calling native_write_cr4() again, and the protection is gone.
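
Concretely, the sequence Jiri describes would be something like the
below, assuming the attacker already has arbitrary kernel code
execution (illustrative only; cr_pinning and native_write_cr4() are
the symbols from the patch, SMEP just a convenient pinned bit to
clear):

        unsigned long cr4 = native_read_cr4();

        static_branch_disable(&cr_pinning);     /* pinning is now off    */
        native_write_cr4(cr4 & ~X86_CR4_SMEP);  /* and this write sticks */
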
> 
> Even with DEFINE_STATIC_KEY_FALSE_RO(), I presume you can just call
> set_memory_rw() to make the page that holds the key writable, then call
> static_key_disable(), followed by a call to native_write_cr4().
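
Right, with the _RO variant in place that route would look roughly
like this (again purely illustrative, same assumptions as above):

        /* Lift the ro-after-init protection from the page holding the
         * key, then flip the key as before. */
        set_memory_rw((unsigned long)&cr_pinning & PAGE_MASK, 1);

        static_branch_disable(&cr_pinning);
        native_write_cr4(native_read_cr4() & ~X86_CR4_SMEP);
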

Or call text_poke_bp() with the right set of arguments.
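
That is, patch the jump-label site for cr_pinning in the kernel text
directly: text_poke_bp() writes through a temporary mapping, so neither
the read-only key nor the read-only text mapping gets in the way. A
sketch, with the site address left as a hypothetical argument; whether
the key-disabled state at the site is a NOP or a JMP depends on how the
branch was laid out, a NOP is assumed here:

        /* Illustrative only: NOP out the 5-byte jump-label site for
         * the cr_pinning branch; finding 'site' is omitted here. */
        static void nop_out_pinning_branch(void *site)
        {
                static const unsigned char nop5[] =
                        { 0x0f, 0x1f, 0x44, 0x00, 0x00 };

                /* Last argument: where execution resumes while the
                 * int3 breakpoint is in place; for a plain overwrite
                 * that is just past the patched instruction. */
                text_poke_bp(site, nop5, sizeof(nop5),
                             site + sizeof(nop5));
        }
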
