Russell King - ARM Linux admin <li...@armlinux.org.uk> writes:

> On Fri, Aug 30, 2019 at 09:31:17PM +0800, Jing Xiangfeng wrote:
>> The function do_alignment can handle misaligned addresses for both
>> user and kernel space. For a userspace access, do_alignment may fail
>> in a low-memory situation, because page faults are disabled in
>> probe_kernel_address.
>> 
>> Fix this by using __copy_from_user instead of probe_kernel_address.
>> 
> Fixes: b255188 ("ARM: fix scheduling while atomic warning in alignment handling code")
>> Signed-off-by: Jing Xiangfeng <jingxiangf...@huawei.com>
>
> NAK.
>
> The "scheduling while atomic warning in alignment handling code" is
> caused by fixing up the page fault while trying to handle the
> mis-alignment fault generated from an instruction in atomic context.
>
> Your patch re-introduces that bug.

And the patch that fixed the scheduling-while-atomic warning apparently
introduced a regression of its own.  Admittedly a regression that took
six years to track down, but still.

So it looks like the code needs to do something like:

diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
index 04b36436cbc0..5e2b8623851e 100644
--- a/arch/arm/mm/alignment.c
+++ b/arch/arm/mm/alignment.c
@@ -784,6 +784,9 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
        instrptr = instruction_pointer(regs);
 
+       if (user_mode(regs))
+               goto user;
+
        if (thumb_mode(regs)) {
                u16 *ptr = (u16 *)(instrptr & ~1);
                fault = probe_kernel_address(ptr, tinstr);
@@ -933,6 +936,34 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
        return 1;
 
  user:
+       if (thumb_mode(regs)) {
+               u16 *ptr = (u16 *)(instrptr & ~1);
+               fault = get_user(tinstr, ptr);
+               tinstr = __mem_to_opcode_thumb16(tinstr);
+               if (!fault) {
+                       if (cpu_architecture() >= CPU_ARCH_ARMv7 &&
+                           IS_T32(tinstr)) {
+                               /* Thumb-2 32-bit */
+                               u16 tinst2 = 0;
+                               fault = get_user(tinst2, ptr + 1);
+                               tinst2 = __mem_to_opcode_thumb16(tinst2);
+                               instr = __opcode_thumb32_compose(tinstr, tinst2);
+                               thumb2_32b = 1;
+                       } else {
+                               isize = 2;
+                               instr = thumb2arm(tinstr);
+                       }
+               }
+       } else {
+               fault = get_user(instr, (u32 *)instrptr);
+               instr = __mem_to_opcode_arm(instr);
+       }
+
+       if (fault) {
+               type = TYPE_FAULT;
+               goto bad_or_fault;
+       }
+
        ai_user += 1;
 
        if (ai_usermode & UM_WARN)

Eric
