On Mon 16-10-17 21:02:09, James Hogan wrote:
> On Mon, Oct 16, 2017 at 09:00:47PM +0200, Michal Hocko wrote:
> > [CCing metag people for the metag elf_map implementation specifics. The
> > thread starts here
> > http://lkml.kernel.org/r/[email protected]]
> > 
> > On Mon 16-10-17 09:39:14, Kees Cook wrote:
> > > On Mon, Oct 16, 2017 at 6:44 AM, Michal Hocko <[email protected]> wrote:
> > > > +               return -EAGAIN;
> > > > +       }
> > > > +
> > > > +       return map_addr;
> > > > +}
> > > > +
> > > >  static unsigned long elf_map(struct file *filep, unsigned long addr,
> > > >                 struct elf_phdr *eppnt, int prot, int type,
> > > >                 unsigned long total_size)
> > > > @@ -366,11 +389,11 @@ static unsigned long elf_map(struct file *filep, unsigned long addr,
> > > 
> > > elf_map is redirected on metag -- it should probably have its vm_mmap
> > > calls adjusted too.
> > 
> > Thanks for spotting this. I am not really familiar with metag. It seems
> > to clear MAP_FIXED already
> >     tcm_tag = tcm_lookup_tag(addr);
> > 
> >     if (tcm_tag != TCM_INVALID_TAG)
> >             type &= ~MAP_FIXED;
> > 
> > So if there is a tag, the flag is cleared. I do not understand this code
> > (and git log doesn't help) but why is this MAP_FIXED code really needed?
> 
> This function was added to the metag port in mid-2010 to support ELFs
> with tightly coupled memory (TCM) segments. For example, the metag "core"
> memories are at fixed virtual addresses, aren't MMU mappable (i.e. they
> are globally accessible), sit outside of the usual userland address
> range, but are as fast as cache. The commit message says this:
> 
> > Override the definition of the elf_map() function to special case
> > sections that are loaded at the address of the internal memories.
> > If we have such a section, map it at a different address and copy
> > the contents of the section into the appropriate memory.
> 
> So yeah, it looks like if the section is meant to use TCM based on the
> virtual address, it drops MAP_FIXED so that the vm_mmap can succeed
> (because it's outside the normally valid range), and then copies it
> directly to the desired TCM so the program can use it.
> 
> Hope that helps add some context to understand what's needed.

Hmm, so IIUC then we need the same fix as
http://lkml.kernel.org/r/[email protected],
right?

This would be something like the patch below. I wanted to share
elf_vm_mmap but didn't find a proper place for it without causing include
dependency hell, so I bailed out and went with copy&paste.
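
For completeness, the clobbering the new helper below tries to prevent is
easy to see from userspace (an illustrative snippet only, nothing metag
specific about it):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;

	/* first mapping, the kernel picks the address */
	char *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	strcpy(a, "precious data");

	/*
	 * MAP_FIXED over the same range silently discards whatever was
	 * mapped there before, no error, no warning.
	 */
	char *b = mmap(a, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

	/* same address, but the original contents are gone */
	printf("a == b: %d, a now contains: \"%s\"\n", a == b, a);
	return 0;
}
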
---
diff --git a/arch/metag/kernel/process.c b/arch/metag/kernel/process.c
index c4606ce743d2..b20596b4c4c2 100644
--- a/arch/metag/kernel/process.c
+++ b/arch/metag/kernel/process.c
@@ -378,6 +378,29 @@ int dump_fpu(struct pt_regs *regs, elf_fpregset_t *fpu)
 
 #define BAD_ADDR(x) ((unsigned long)(x) >= TASK_SIZE)
 
+static unsigned long elf_vm_mmap(struct file *filep, unsigned long addr,
+               unsigned long size, int prot, int type, unsigned long off)
+{
+       unsigned long map_addr;
+
+       /*
+        * If caller requests the mapping at a specific place, make sure we fail
+        * rather than potentially clobber an existing mapping which can have
+        * security consequences (e.g. smash over the stack area).
+        */
+       map_addr = vm_mmap(filep, addr, size, prot, type & ~MAP_FIXED, off);
+       if (BAD_ADDR(map_addr))
+               return map_addr;
+
+       if ((type & MAP_FIXED) && map_addr != addr) {
+               pr_info("Uhuuh, elf segment at %p requested but the memory is mapped already\n",
+                               (void *)addr);
+               return -EAGAIN;
+       }
+
+       return map_addr;
+}
+
 unsigned long __metag_elf_map(struct file *filep, unsigned long addr,
                              struct elf_phdr *eppnt, int prot, int type,
                              unsigned long total_size)
@@ -410,11 +433,11 @@ unsigned long __metag_elf_map(struct file *filep, unsigned long addr,
        */
        if (total_size) {
                total_size = ELF_PAGEALIGN(total_size);
-               map_addr = vm_mmap(filep, addr, total_size, prot, type, off);
+               map_addr = elf_vm_mmap(filep, addr, total_size, prot, type, off);
                if (!BAD_ADDR(map_addr))
                        vm_munmap(map_addr+size, total_size-size);
        } else
-               map_addr = vm_mmap(filep, addr, size, prot, type, off);
+               map_addr = elf_vm_mmap(filep, addr, size, prot, type, off);
 
        if (!BAD_ADDR(map_addr) && tcm_tag != TCM_INVALID_TAG) {
                struct tcm_allocation *tcm;

-- 
Michal Hocko
SUSE Labs
