On Tue, Jun 16, 2020 at 12:52:00PM +0200, Christoph Hellwig wrote:

> I think something like this should solve the issue:
> 
> --
> diff --git a/arch/x86/include/asm/module.h b/arch/x86/include/asm/module.h
> index e988bac0a4a1c3..716e4de44a8e78 100644
> --- a/arch/x86/include/asm/module.h
> +++ b/arch/x86/include/asm/module.h
> @@ -13,4 +13,6 @@ struct mod_arch_specific {
>  #endif
>  };
>  
> +void *module_alloc_prot(unsigned long size, pgprot_t prot);
> +
>  #endif /* _ASM_X86_MODULE_H */
> diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
> index 34b153cbd4acb4..4db6e655120960 100644
> --- a/arch/x86/kernel/module.c
> +++ b/arch/x86/kernel/module.c
> @@ -65,8 +65,10 @@ static unsigned long int get_module_load_offset(void)
>  }
>  #endif
>  
> -void *module_alloc(unsigned long size)
> +void *module_alloc_prot(unsigned long size, pgprot_t prot)
>  {
> +     unsigned int flags = (pgprot_val(prot) & _PAGE_NX) ?
> +                     0 : VM_FLUSH_RESET_PERMS;
>       void *p;
>  
>       if (PAGE_ALIGN(size) > MODULES_LEN)
> @@ -75,7 +77,7 @@ void *module_alloc(unsigned long size)
>       p = __vmalloc_node_range(size, MODULE_ALIGN,
>                                   MODULES_VADDR + get_module_load_offset(),
>                                   MODULES_END, GFP_KERNEL,
> -                                 PAGE_KERNEL, 0, NUMA_NO_NODE,
> +                                 prot, flags, NUMA_NO_NODE,
>                                   __builtin_return_address(0));
>       if (p && (kasan_module_alloc(p, size) < 0)) {
>               vfree(p);

Hurmm.. Yes it would. It just doesn't feel right though. Can't we
unconditionally set the flag? At worst it makes free a little bit more
expensive.

The thing is, I don't think _NX is the only prot bit that needs restoring.
Any prot other than the default (RW, IIRC) needs restoring on free.
