At the moment, the logic in arch_get_unmapped_area{,_topdown}() for mmaps
with the MAP_32BIT flag checks TIF_ADDR32, which means:
o if a 32-bit ELF changes mode to 64-bit on x86_64 and then calls
  mmap() with MAP_32BIT, it will get an address above 4 GB (as the
  default is top-down allocation);
o if a 64-bit ELF changes mode to 32-bit and calls mmap() with
  MAP_32BIT, it will allocate memory only in the 1 GB range
  [0x40000000, 0x80000000).

Fix it by handling MAP_32BIT in 64-bit syscalls only.
As a small bonus, this makes the thread flag a little less used.
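
For illustration only (not part of this patch): a minimal, hypothetical
userspace sketch of what MAP_32BIT should guarantee for a native 64-bit
caller, checking that the returned mapping lands below the 2 GB boundary:

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <stdint.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* Ask for an anonymous page in the low 2 GB of the address space. */
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		printf("MAP_32BIT mapping at %p (%s 2 GB)\n", p,
		       (uintptr_t)p < (1UL << 31) ? "below" : "above");
		munmap(p, 4096);
		return 0;
	}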

Signed-off-by: Dmitry Safonov <[email protected]>
---
 arch/x86/kernel/sys_x86_64.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 1bf90cd1400c..e7d4bbbe6175 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -100,7 +100,7 @@ SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
 static void find_start_end(unsigned long flags, unsigned long *begin,
                           unsigned long *end)
 {
-       if (!test_thread_flag(TIF_ADDR32) && (flags & MAP_32BIT)) {
+       if (!in_compat_syscall() && (flags & MAP_32BIT)) {
                /* This is usually used needed to map code in small
                   model, so it needs to be in the first 31bit. Limit
                   it to that.  This means we need to move the
@@ -213,7 +213,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
                return addr;
 
        /* for MAP_32BIT mappings we force the legacy mmap base */
-       if (!test_thread_flag(TIF_ADDR32) && (flags & MAP_32BIT))
+       if (!in_compat_syscall() && (flags & MAP_32BIT))
                goto bottomup;
 
        /* requesting a specific address */
-- 
2.11.0
