Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

2018-04-02 Thread Ilya Smith
> On 29 Mar 2018, at 00:07, Luck, Tony  wrote:
> 
>> The default limit of only 65536 VMAs will also quickly come into play
>> if consecutive anon mmaps don't get merged. Of course this can be
>> raised, but it has significant resource and performance (fork) costs.
> 
> Could the random mmap address chooser look for how many existing
> VMAs have space before/after and the right attributes to merge with the
> new one you want to create? If this is above some threshold (100?) then
> pick one of them randomly and allocate the new address so that it will
> merge from below/above with an existing one.
> 
> That should still give you a very high degree of randomness, but prevent
> out of control numbers of VMAs from being created.

I don't think this would work. For example, those 100 allocations may all
happen during process initialization; once an attacker comes to the server,
all of his allocations would land at predictable offsets from each other. The
net result is that we gained nothing and merely slowed down the first 100
allocations. I could add an ioctl to turn this randomization off per process,
to be used when needed - for example, when an application is about to
allocate a big chunk of memory or otherwise create a lot of memory pressure.

Best regards,
Ilya


Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

2018-03-30 Thread Ilya Smith
Hi

> On 30 Mar 2018, at 10:55, Pavel Machek  wrote:
> 
> Hi!
> 
>> The current implementation doesn't randomize the address returned by mmap.
>> All the entropy ends with choosing mmap_base_addr at process creation.
>> After that, mmap builds a very predictable layout of the address space,
>> which allows ASLR to be bypassed in many cases. This patch randomizes the
>> address on every mmap call.
> 
> How will this interact with people debugging their application, and
> getting different behaviours based on memory layout?
> 
> strace, strace again, get different results?
> 

Honestly, I'm confused by this question. If the only way to debug an
application is to rely on predictable mmap behaviour, then something has gone
badly wrong and we should stop using computers altogether.
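For what it's worth, debuggers already pin the layout by disabling ASLR per
process (gdb does this by default), and that escape hatch keeps working here:
clearing the randomization bit via personality(2) also clears PF_RANDOMIZE,
which gates this patch. A minimal sketch:

#include <stdio.h>
#include <unistd.h>
#include <sys/personality.h>

int main(int argc, char *argv[])
{
	unsigned long persona = personality(0xffffffff);	/* query */

	if (!(persona & ADDR_NO_RANDOMIZE)) {
		/* disable ASLR for this process and re-exec so the new
		 * image starts with PF_RANDOMIZE cleared */
		personality(persona | ADDR_NO_RANDOMIZE);
		execv("/proc/self/exe", argv);
	}
	printf("running with a stable address-space layout\n");
	return 0;
}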

Thanks,
Ilya


Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

2018-03-28 Thread Ilya Smith

> On 28 Mar 2018, at 02:49, Matthew Wilcox  wrote:
> 
> On Tue, Mar 27, 2018 at 03:53:53PM -0700, Kees Cook wrote:
>> I agree: pushing this off to libc leaves a lot of things unprotected.
>> I think this should live in the kernel. The question I have is about
>> making it maintainable/readable/etc.
>> 
>> The state-of-the-art for ASLR is moving to finer granularity (over
>> just base-address offset), so I'd really like to see this supported in
>> the kernel. We'll be getting there for other things in the future, and
>> I'd like to have a working production example for researchers to
>> study, etc.
> 
> One thing we need is to limit the fragmentation of this approach.
> Even on 64-bit systems, we can easily get into a situation where there isn't
> space to map a contiguous terabyte.

As I wrote before, shift_random was introduced precisely to limit
fragmentation. Even without it, the main question here is: "if we can't
allocate N bytes, how much memory have we already allocated?". From this
point of view I already showed, in the previous version of the patch, that an
application using modest allocation sizes still has plenty of address space
to work with. If an application maps tens of gigabytes or terabytes, it
stands a good chance of being exploited with or without full randomization,
simply because it becomes much easier to find (or guess) a usable pointer.
For instance, user space has only 128 terabytes (2^47 bytes) of address
space, so blindly guessing the location of a 1 TB region succeeds with
probability about 2^40 / 2^47 = 1/128, which is not secure at all. This is a
very rough estimate, but it makes the point easier to see.

Best regards,
Ilya




Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

2018-03-28 Thread Ilya Smith
> On 28 Mar 2018, at 01:16, Theodore Y. Ts'o <ty...@mit.edu> wrote:
> 
> On Tue, Mar 27, 2018 at 04:51:08PM +0300, Ilya Smith wrote:
>>> /dev/[u]random is not sufficient?
>> 
>> Using /dev/[u]random makes 3 syscalls - open, read, close. This is a
>> performance issue.
> 
> You may want to take a look at the getrandom(2) system call, which is
> the recommended way getting secure random numbers from the kernel.
> 
>>> Well, I am pretty sure userspace can implement proper free ranges
>>> tracking…
>> 
>> I think we need to know what the libc developers would say about
>> implementing ASLR in user mode. I am pretty sure they would say 'never'
>> or 'some day', and the problem of ASLR would stay with us forever.
> 
> Why can't you send patches to the libc developers?
> 
> Regards,
> 
>   - Ted

I still believe the issue is on the kernel side, not in the library.
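(For reference, the getrandom(2) call Ted points to - exposed via
<sys/random.h> since glibc 2.25 - pulls kernel entropy in a single syscall,
with no file descriptors; a minimal sketch:)

#include <stdio.h>
#include <sys/random.h>

int main(void)
{
	unsigned long entropy[2];

	/* one syscall, no file descriptors; draws from the same pool
	 * that backs /dev/urandom */
	if (getrandom(entropy, sizeof(entropy), 0) != sizeof(entropy))
		return 1;
	printf("%lx %lx\n", entropy[0], entropy[1]);
	return 0;
}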

Best regards,
Ilya



Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

2018-03-28 Thread Ilya Smith

> On 27 Mar 2018, at 17:38, Michal Hocko <mho...@kernel.org> wrote:
> 
> On Tue 27-03-18 16:51:08, Ilya Smith wrote:
>> 
>>> On 27 Mar 2018, at 10:24, Michal Hocko <mho...@kernel.org> wrote:
>>> 
>>> On Mon 26-03-18 22:45:31, Ilya Smith wrote:
>>>> 
>>>>> On 26 Mar 2018, at 11:46, Michal Hocko <mho...@kernel.org> wrote:
>>>>> 
>>>>> On Fri 23-03-18 20:55:49, Ilya Smith wrote:
>>>>>> 
>>>>>>> On 23 Mar 2018, at 15:48, Matthew Wilcox <wi...@infradead.org> wrote:
>>>>>>> 
>>>>>>> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
>>>>>>>> The current implementation doesn't randomize the address returned
>>>>>>>> by mmap. All the entropy ends with choosing mmap_base_addr at
>>>>>>>> process creation. After that, mmap builds a very predictable layout
>>>>>>>> of the address space, which allows ASLR to be bypassed in many
>>>>>>>> cases. This patch randomizes the address on every mmap call.
>>>>>>> 
>>>>>>> Why should this be done in the kernel rather than libc?  libc is
>>>>>>> perfectly capable of specifying random numbers in the first argument
>>>>>>> of mmap.
>>>>>> Well, there are the following reasons:
>>>>>> 1. It would have to be done in every libc implementation, which IMO
>>>>>> is not realistic;
>>>>> 
>>>>> Is this really so helpful?
>>>> 
>>>> Yes. ASLR is one of the most important mitigation techniques actually
>>>> used to protect applications. Without ASLR it is very easy to exploit a
>>>> vulnerable application and compromise the system. We can't just fix all
>>>> the vulnerabilities right now; that's why we have mitigations -
>>>> techniques that make exploitation harder or, in some cases, impossible.
>>>> 
>>>> That's why it is helpful.
>>> 
>>> I am not questioning ASLR in general. I am asking whether we really need
>>> per-mmap ASLR in general. I can imagine that some environments want to
>>> pay the additional price and other side effects, but considering this
>>> can be achieved by libc, why add more code to the kernel?
>> 
>> I believe the kernel is the only right place for this. By adding these
>> 200+ lines of code we give the feature to every user - on desktops,
>> servers, IoT devices, SCADA systems, etc. But if only glibc implements a
>> 'user-mode ASLR', IoT and SCADA devices will never get it.
> 
> I guess it would really help if you could be more specific about the
> class of security issues this would help to mitigate. My first
> understanding was that we need some randomization between program
> executable segments to reduce the attack space when a single address
> leaks and you know the segment layout (ordering). But why do we need
> _all_ mmaps to be randomized? Because that complicates the
> implementation considerably, for the different reasons you have mentioned
> earlier.
> 

There are the following reasons:
1) To protect the layout if one region has leaked (as you said).
2) To protect against exploitation of out-of-bounds vulnerabilities in some
cases (CWE-125, CWE-787).
3) To protect against exploitation of buffer overflows in some cases
(CWE-120).
4) To protect the application when the attacker has to guess an address
(see the ASLR-NG paper by Hector Marco-Gisbert and Ismael Ripoll-Ripoll).
And there may be more cases.

> Do you have any specific CVE that would be mitigated by this
> randomization approach?
> I am sorry, I am not a security expert who can see all the consequences,
> but a vague "the more randomization the better" sounds rather weak to me.

It is hard to name a concrete CVE number, sorry. Mitigations are meant to
prevent exploitation, not to fix vulnerabilities. A good mitigation makes a
vulnerable application crash rather than be compromised in most cases, so
the better the randomization, the lower the rate of successful exploitation.


Thanks,
Ilya



Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

2018-03-23 Thread Ilya Smith

> On 23 Mar 2018, at 15:48, Matthew Wilcox <wi...@infradead.org> wrote:
> 
> On Thu, Mar 22, 2018 at 07:36:36PM +0300, Ilya Smith wrote:
>> The current implementation doesn't randomize the address returned by mmap.
>> All the entropy ends with choosing mmap_base_addr at process creation.
>> After that, mmap builds a very predictable layout of the address space,
>> which allows ASLR to be bypassed in many cases. This patch randomizes the
>> address on every mmap call.
> 
> Why should this be done in the kernel rather than libc?  libc is perfectly
> capable of specifying random numbers in the first argument of mmap.
Well, there are the following reasons:
1. It would have to be done in every libc implementation, which IMO is not
realistic;
2. User mode is not the layer that should be responsible for choosing a
random address or handling the entropy;
3. Memory fragmentation becomes unpredictable in that case.

Of course user mode could pass a random 'hint' address, but the kernel may
discard that hint - for example when the address is occupied - and allocate
just before the closest vma instead. So this approach doesn't give nearly as
much security as randomizing the address inside the kernel.
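The advisory nature of the hint is easy to demonstrate from user space;
without MAP_FIXED the kernel is free to ignore it:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* an arbitrary page-aligned "random" address chosen in user mode */
	void *hint = (void *)0x7f1234560000UL;
	size_t len = 1UL << 20;

	void *addr = mmap(hint, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return 1;
	/* if the hinted range was busy, the kernel silently fell back to
	 * its own (predictable) choice next to an existing vma */
	printf("hint %p -> got %p%s\n", hint, addr,
	       addr == hint ? "" : " (hint discarded)");
	return 0;
}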

Re: [RFC PATCH v2 2/2] Architecture defined limit on memory region random shift.

2018-03-23 Thread Ilya Smith

> On 22 Mar 2018, at 23:54, Andrew Morton  wrote:
> 
> 
> Please add changelogs.  An explanation of what a "limit on memory
> region random shift" is would be nice ;) Why does it exist, why are we
> doing this, etc.  Surely there's something to be said - at present this
> is just a lump of random code?
> 
> 
> 
Sorry, my bad. The main idea of this limit is to reduce possible memory
fragmentation. Fragmentation is not a big problem for 64-bit processes, but
it is a real one for 32-bit processes, where it can make memory allocations
fail. This limit was introduced to control fragmentation and protect 32-bit
systems (or architectures). It could also be made a CONFIG_ option.
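The hunk of unmapped_area_random() that consumes this field is cut off in
the archive, so the following is only a guess at the intended use, not the
actual patch code: the arch-chosen random_shift would simply cap how far the
mapping is shifted inside the selected gap.

/* Sketch only (kernel-style C, hypothetical helper): cap the random
 * shift, in pages, with the arch-defined fragmentation limit. */
static unsigned long cap_shift(unsigned long rnd, unsigned long gap_start,
			       unsigned long gap_end, unsigned long length,
			       unsigned long random_shift)
{
	unsigned long room = (gap_end - gap_start - length) >> PAGE_SHIFT;
	unsigned long shift = room ? rnd % room : 0;

	if (random_shift)		/* arch-defined fragmentation limit */
		shift = min(shift, random_shift);
	return gap_end - length - (shift << PAGE_SHIFT);
}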


Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.

2018-03-23 Thread Ilya Smith
Hello, Andrew

Thanks for reading this patch.

> On 22 Mar 2018, at 23:57, Andrew Morton <a...@linux-foundation.org> wrote:
> 
> On Thu, 22 Mar 2018 19:36:36 +0300 Ilya Smith <blackz...@gmail.com> wrote:
> 
>> The current implementation doesn't randomize the address returned by mmap.
>> All the entropy ends with choosing mmap_base_addr at process creation.
>> After that, mmap builds a very predictable layout of the address space,
>> which allows ASLR to be bypassed in many cases.
> 
> Perhaps some more effort on the problem description would help.  *Are*
> people predicting layouts at present?  What problems does this cause? 
> How are they doing this and are there other approaches to solving the
> problem?
> 
Sorry, I lost that from the first version. In short: the whole memory layout
can be recovered from a single leaked pointer, and any out-of-bounds error
is easy to exploit under the current implementation, because mmap places
each new allocation just before the previously allocated segment. You can
read more about it here:
http://www.openwall.com/lists/oss-security/2018/02/27/5
Some tests are available here: https://github.com/blackzert/aslur.
To solve the problem, the kernel should randomize the address on every mmap
call, so an attacker can never easily obtain the addresses he needs.
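The adjacency is trivial to observe: two consecutive anonymous mmaps
typically land back to back, so one leaked pointer gives away the other:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 20;
	void *a = mmap(0, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	void *b = mmap(0, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* with the current top-down allocator, b typically lands
	 * immediately below a */
	printf("a=%p b=%p delta=%ld\n", a, b, (char *)a - (char *)b);
	return 0;
}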

> Mainly: what value does this patchset have to our users?  This reader
> is unable to determine that from the information which you have
> provided.  Full details, please.

The value of this patchset is that it lowers the rate of successful
exploitation of vulnerable applications, for both remote and local attack
vectors.



[RFC PATCH v2 1/2] Randomization of address chosen by mmap.

2018-03-22 Thread Ilya Smith
Signed-off-by: Ilya Smith <blackz...@gmail.com>
---
 include/linux/mm.h |  16 --
 mm/mmap.c  | 164 +
 2 files changed, 175 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ad06d42..c716257 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -25,6 +25,7 @@
 #include 
 #include 
 #include 
+#include 
 
 struct mempolicy;
 struct anon_vma;
@@ -2253,6 +2254,13 @@ struct vm_unmapped_area_info {
unsigned long align_offset;
 };
 
+#ifndef CONFIG_MMU
+#define randomize_va_space 0
+#else
+extern int randomize_va_space;
+#endif
+
+extern unsigned long unmapped_area_random(struct vm_unmapped_area_info *info);
 extern unsigned long unmapped_area(struct vm_unmapped_area_info *info);
 extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
 
@@ -2268,6 +2276,9 @@ extern unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
 static inline unsigned long
 vm_unmapped_area(struct vm_unmapped_area_info *info)
 {
+   /* How about 32 bit process?? */
+   if ((current->flags & PF_RANDOMIZE) && randomize_va_space > 3)
+   return unmapped_area_random(info);
if (info->flags & VM_UNMAPPED_AREA_TOPDOWN)
return unmapped_area_topdown(info);
else
@@ -2529,11 +2540,6 @@ int drop_caches_sysctl_handler(struct ctl_table *, int,
 void drop_slab(void);
 void drop_slab_node(int nid);
 
-#ifndef CONFIG_MMU
-#define randomize_va_space 0
-#else
-extern int randomize_va_space;
-#endif
 
 const char * arch_vma_name(struct vm_area_struct *vma);
 void print_vma_addr(char *prefix, unsigned long rip);
diff --git a/mm/mmap.c b/mm/mmap.c
index 9efdc021..ba9cebb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -45,6 +45,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1780,6 +1781,169 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
return error;
 }
 
+unsigned long unmapped_area_random(struct vm_unmapped_area_info *info)
+{
+   struct mm_struct *mm = current->mm;
+   struct vm_area_struct *vma = NULL;
+   struct vm_area_struct *visited_vma = NULL;
+   unsigned long entropy[2];
+   unsigned long length, low_limit, high_limit, gap_start, gap_end;
+   unsigned long addr = 0;
+
+   /* get entropy with prng */
+   prandom_bytes(&entropy, sizeof(entropy));
+   /* small hack to prevent EPERM result */
+   info->low_limit = max(info->low_limit, mmap_min_addr);
+
+   /* Adjust search length to account for worst case alignment overhead */
+   length = info->length + info->align_mask;
+   if (length < info->length)
+   return -ENOMEM;
+
+   /*
+* Adjust search limits by the desired length.
+* See implementation comment at top of unmapped_area().
+*/
+   gap_end = info->high_limit;
+   if (gap_end < length)
+   return -ENOMEM;
+   high_limit = gap_end - length;
+
+   low_limit = info->low_limit + info->align_mask;
+   if (low_limit >= high_limit)
+   return -ENOMEM;
+
+   /* Choose random addr in limit range */
+   addr = entropy[0] % ((high_limit - low_limit) >> PAGE_SHIFT);
+   addr = low_limit + (addr << PAGE_SHIFT);
+   addr += (info->align_offset - addr) & info->align_mask;
+
+   /* Check if rbtree root looks promising */
+   if (RB_EMPTY_ROOT(&mm->mm_rb))
+   return -ENOMEM;
+
+   vma = rb_entry(mm->mm_rb.rb_node, struct vm_area_struct, vm_rb);
+   if (vma->rb_subtree_gap < length)
+   return -ENOMEM;
+   /* use randomly chosen address to find closest suitable gap */
+   while (true) {
+   gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
+   gap_end = vm_start_gap(vma);
+   if (gap_end < low_limit)
+   break;
+   if (addr < vm_start_gap(vma)) {
+   /* random said check left */
+   if (vma->vm_rb.rb_left) {
+   struct vm_area_struct *left =
+   rb_entry(vma->vm_rb.rb_left,
+struct vm_area_struct, vm_rb);
+   if (addr <= vm_start_gap(left) &&
+   left->rb_subtree_gap >= length) {
+   vma = left;
+   continue;
+   }
+   }
+   } else if (addr >= vm_end_gap(vma)) {
+   /* random said check right */
+   if (vma->vm_rb.rb_right) {
+   struct vm_area_struct *right =
+   rb_en

[RFC PATCH v2 2/2] Architecture defined limit on memory region random shift.

2018-03-22 Thread Ilya Smith
Signed-off-by: Ilya Smith <blackz...@gmail.com>
---
 arch/alpha/kernel/osf_sys.c | 1 +
 arch/arc/mm/mmap.c  | 1 +
 arch/arm/mm/mmap.c  | 2 ++
 arch/frv/mm/elf-fdpic.c | 1 +
 arch/ia64/kernel/sys_ia64.c | 1 +
 arch/ia64/mm/hugetlbpage.c  | 1 +
 arch/metag/mm/hugetlbpage.c | 1 +
 arch/mips/mm/mmap.c | 1 +
 arch/parisc/kernel/sys_parisc.c | 2 ++
 arch/powerpc/mm/hugetlbpage-radix.c | 1 +
 arch/powerpc/mm/mmap.c  | 2 ++
 arch/powerpc/mm/slice.c | 2 ++
 arch/s390/mm/mmap.c | 2 ++
 arch/sh/mm/mmap.c   | 2 ++
 arch/sparc/kernel/sys_sparc_32.c| 1 +
 arch/sparc/kernel/sys_sparc_64.c| 2 ++
 arch/sparc/mm/hugetlbpage.c | 2 ++
 arch/tile/mm/hugetlbpage.c  | 2 ++
 arch/x86/kernel/sys_x86_64.c| 4 
 arch/x86/mm/hugetlbpage.c   | 4 
 fs/hugetlbfs/inode.c| 1 +
 include/linux/mm.h  | 1 +
 mm/mmap.c   | 3 ++-
 23 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index fa1a392..0ab9f31 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -1301,6 +1301,7 @@ arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
info.high_limit = limit;
info.align_mask = 0;
info.align_offset = 0;
+   info.random_shift = 0;
return vm_unmapped_area(&info);
 }
 
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 2e13683..45225fc 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -75,5 +75,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.high_limit = TASK_SIZE;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+   info.random_shift = 0;
return vm_unmapped_area(&info);
 }
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index eb1de66..1eb660c 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -101,6 +101,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.high_limit = TASK_SIZE;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+   info.random_shift = 0;
return vm_unmapped_area(&info);
 }
 
@@ -152,6 +153,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
info.high_limit = mm->mmap_base;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
+   info.random_shift = 0;
addr = vm_unmapped_area(&info);
 
/*
diff --git a/arch/frv/mm/elf-fdpic.c b/arch/frv/mm/elf-fdpic.c
index 46aa289..a2ce2ce 100644
--- a/arch/frv/mm/elf-fdpic.c
+++ b/arch/frv/mm/elf-fdpic.c
@@ -86,6 +86,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
info.high_limit = (current->mm->start_stack - 0x0020);
info.align_mask = 0;
info.align_offset = 0;
+   info.random_shift = 0;
addr = vm_unmapped_area(&info);
if (!(addr & ~PAGE_MASK))
goto success;
diff --git a/arch/ia64/kernel/sys_ia64.c b/arch/ia64/kernel/sys_ia64.c
index 085adfc..15fa4fb 100644
--- a/arch/ia64/kernel/sys_ia64.c
+++ b/arch/ia64/kernel/sys_ia64.c
@@ -64,6 +64,7 @@ arch_get_unmapped_area (struct file *filp, unsigned long addr, unsigned long len
info.high_limit = TASK_SIZE;
info.align_mask = align_mask;
info.align_offset = 0;
+   info.random_shift = 0;
return vm_unmapped_area(&info);
 }
 
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index d16e419..ec7822d 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -162,6 +162,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, u
info.high_limit = HPAGE_REGION_BASE + RGN_MAP_LIMIT;
info.align_mask = PAGE_MASK & (HPAGE_SIZE - 1);
info.align_offset = 0;
+   info.random_shift = 0;
return vm_unmapped_area(&info);
 }
 
diff --git a/arch/metag/mm/hugetlbpage.c b/arch/metag/mm/hugetlbpage.c
index 012ee4c..babd325 100644
--- a/arch/metag/mm/hugetlbpage.c
+++ b/arch/metag/mm/hugetlbpage.c
@@ -191,6 +191,7 @@ hugetlb_get_unmapped_area_new_pmd(unsigned long len)
info.high_limit = TASK_SIZE;
info.align_mask = PAGE_MASK & HUGEPT_MASK;
info.align_offset = 0;
+   info.random_shift = 0;
return vm_unmapped_area(&info);
 }
 
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 33d3251..5a3d384 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -122,6 +122,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
info.flags = 0;
info.low_limit = mm->mmap_base;
info.high_limi

[RFC PATCH v2 0/2] Randomization of address chosen by mmap.

2018-03-22 Thread Ilya Smith
The current implementation doesn't randomize the address returned by mmap.
All the entropy ends with choosing mmap_base_addr at process creation.
After that, mmap builds a very predictable layout of the address space,
which allows ASLR to be bypassed in many cases. This patch randomizes the
address on every mmap call.

---
v2: Changed the way the gap is chosen. Instead of collecting all possible
gaps, a random address is generated and used as the tree-walking direction;
the tree is walked, with backtracking, until a suitable gap is found. Once
a gap is found, the address is randomly shifted back from the next vma's
start.

The vm_unmapped_area_info structure was extended with a new field,
random_shift, which can be used to set an arch-dependent limit on the shift
from the next vma's start. On x86-64 this shift is 256 pages for 32-bit
applications and 0x100 pages for 64-bit.
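A simplified user-space model of that walk - a sorted array of free gaps in
place of the kernel's VMA rbtree, and rand() in place of prandom - showing
how one random value picks the gap and a second shifts the mapping inside it:

#include <stdio.h>
#include <stdlib.h>

struct gap { unsigned long start, end; };	/* free ranges, sorted */

static unsigned long pick(const struct gap *g, int n, unsigned long len,
			  unsigned long low, unsigned long high)
{
	unsigned long addr = low + (unsigned long)rand() % (high - low);
	int i;

	for (i = 0; i < n; i++) {
		unsigned long room;

		if (g[i].end - g[i].start < len)
			continue;		/* too small: backtrack/skip */
		if (g[i].end <= addr && i + 1 < n)
			continue;		/* random address says: go right */
		room = g[i].end - g[i].start - len;
		/* second random value: shift back from the top of the gap */
		return g[i].end - len - (room ? rand() % room : 0);
	}
	return 0;				/* nothing fits: -ENOMEM */
}

int main(void)
{
	struct gap gaps[] = { { 0x10000, 0x40000 }, { 0x80000, 0x200000 } };

	printf("placed at %#lx\n", pick(gaps, 2, 0x2000, 0x10000, 0x200000));
	return 0;
}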

To get the entropy, a pseudo-random generator is used. This is because on
Intel x86-64 processors the RDRAND instruction becomes very slow once its
internal entropy buffer is consumed.

This feature can be enabled by setting randomize_va_space to 4.

---
Performance:
After applying this patch, a single mmap took about 7% longer according to
the following test:

unsigned long long one_iteration(void)
{
	unsigned long long before, after;
	void *addr;

	before = rdtsc();
	addr = mmap(0, SIZE, PROT_READ | PROT_WRITE,
		    MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	after = rdtsc();
	munmap(addr, SIZE);
	return after - before;
}
...
unsigned long long total = 0;

for (int i = 0; i < count; ++i)
	total += one_iteration();
printf("%llu\n", total);

Most of the extra time is spent in the div instruction used to compute the
address.
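A possible div-free alternative (not part of the patch) is a multiply-shift
range reduction, which maps a 64-bit random value onto [0, range) with one
multiplication:

#include <stdint.h>

/* Lemire-style range reduction: same slight bias as the modulo, but no
 * hardware divide. Requires a compiler with __int128 (gcc/clang, 64-bit). */
static inline uint64_t random_below(uint64_t rnd, uint64_t range)
{
	return (uint64_t)(((unsigned __int128)rnd * range) >> 64);
}

/* usage, mirroring the patch's address computation:
 *   pages = random_below(entropy[0], (high_limit - low_limit) >> PAGE_SHIFT);
 *   addr  = low_limit + (pages << PAGE_SHIFT);
 */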

make kernel:
echo 2 > /proc/sys/kernel/randomize_va_space
make mrproper && make defconfig && time make
real	11m9.925s
user	10m17.829s
sys	1m4.969s

echo 4 > /proc/sys/kernel/randomize_va_space
make mrproper && make defconfig && time make
real	11m12.806s
user	10m18.305s
sys	1m4.281s


Ilya Smith (2):
  Randomization of address chosen by mmap.
  Architecture defined limit on memory region random shift.

 arch/alpha/kernel/osf_sys.c |   1 +
 arch/arc/mm/mmap.c  |   1 +
 arch/arm/mm/mmap.c  |   2 +
 arch/frv/mm/elf-fdpic.c |   1 +
 arch/ia64/kernel/sys_ia64.c |   1 +
 arch/ia64/mm/hugetlbpage.c  |   1 +
 arch/metag/mm/hugetlbpage.c |   1 +
 arch/mips/mm/mmap.c |   1 +
 arch/parisc/kernel/sys_parisc.c |   2 +
 arch/powerpc/mm/hugetlbpage-radix.c |   1 +
 arch/powerpc/mm/mmap.c  |   2 +
 arch/powerpc/mm/slice.c |   2 +
 arch/s390/mm/mmap.c |   2 +
 arch/sh/mm/mmap.c   |   2 +
 arch/sparc/kernel/sys_sparc_32.c|   1 +
 arch/sparc/kernel/sys_sparc_64.c|   2 +
 arch/sparc/mm/hugetlbpage.c |   2 +
 arch/tile/mm/hugetlbpage.c  |   2 +
 arch/x86/kernel/sys_x86_64.c|   4 +
 arch/x86/mm/hugetlbpage.c   |   4 +
 fs/hugetlbfs/inode.c|   1 +
 include/linux/mm.h  |  17 ++--
 mm/mmap.c   | 165 
 23 files changed, 213 insertions(+), 5 deletions(-)

-- 
2.7.4

