On Tue, Jul 12, 2011 at 01:23:52PM +0200, Otto Moerbeek wrote:
> Hi,
> 
> At the cost of some speed, this reduces the malloc cache size to 0
> under flag 'S'.  This means that pages that become free are unmapped
> as soon as possible, which detects more use-after-free bugs.  The
> slowdown comes from the additional munmap/mmap calls.
> 
> ok?

I like this and don't mind the slowdown.
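
As a quick illustration (my own sketch, not part of the diff), a test
program like the one below makes the difference visible; I'm assuming
the option is enabled via the MALLOC_OPTIONS environment variable:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int
	main(void)
	{
		/* A multi-page allocation, so free() hands whole pages back. */
		char *p = malloc(4 * 4096);

		if (p == NULL)
			return 1;
		memset(p, 'A', 4 * 4096);
		free(p);

		/*
		 * Use after free.  Under the default cache size this read
		 * may silently succeed because the page can still be
		 * mapped; with the diff applied and 'S' set, the page has
		 * been unmapped right away, so this should fault.
		 */
		printf("%c\n", p[0]);
		return 0;
	}

With the diff applied, running it as MALLOC_OPTIONS=S ./a.out should
die with SIGSEGV on the stale read, while under the default cache the
read can go unnoticed.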

> 
>       -Otto
> 
> Index: malloc.c
> ===================================================================
> RCS file: /cvs/src/lib/libc/stdlib/malloc.c,v
> retrieving revision 1.138
> diff -u -p -r1.138 malloc.c
> --- malloc.c  20 Jun 2011 18:04:06 -0000      1.138
> +++ malloc.c  12 Jul 2011 11:18:41 -0000
> @@ -68,6 +68,8 @@
>  #define MALLOC_MAXCACHE              256
>  #define MALLOC_DELAYED_CHUNKS        15      /* max of getrnibble() */
>  #define MALLOC_INITIAL_REGIONS       512
> +#define MALLOC_DEFAULT_CACHE 64
> +
>  /*
>   * When the P option is active, we move allocations between half a page
>   * and a whole page towards the end, subject to alignment constraints.
> @@ -461,7 +463,7 @@ omalloc_init(struct dir_info **dp)
>        */
>       mopts.malloc_abort = 1;
>       mopts.malloc_move = 1;
> -     mopts.malloc_cache = 64;
> +     mopts.malloc_cache = MALLOC_DEFAULT_CACHE;
>  
>       for (i = 0; i < 3; i++) {
>               switch (i) {
> @@ -551,10 +553,12 @@ omalloc_init(struct dir_info **dp)
>                       case 's':
>                               mopts.malloc_freeprot = mopts.malloc_junk = 0;
>                               mopts.malloc_guard = 0;
> +                             mopts.malloc_cache = MALLOC_DEFAULT_CACHE;
>                               break;
>                       case 'S':
>                               mopts.malloc_freeprot = mopts.malloc_junk = 1;
>                               mopts.malloc_guard = MALLOC_PAGESIZE;
> +                             mopts.malloc_cache = 0;
>                               break;
>                       case 'x':
>                               mopts.malloc_xmalloc = 0;
