Re: malloc: prep for immutable pages

2022-10-06 Thread Theo de Raadt
Marc Espie  wrote:

> On Wed, Oct 05, 2022 at 07:54:41AM -0600, Theo de Raadt wrote:
> > Marc Espie  wrote:
> > 
> > > On Tue, Oct 04, 2022 at 10:15:51AM -0600, Theo de Raadt wrote:
> > > > A note on why this change is coming.
> > > > 
> > > > malloc.c (as it is today) does mprotect calls back and forth between RW
> > > > and R, to protect an internal object.  This object is in bss; it is not
> > > > allocated with mmap.  With the upcoming mimmutable change, the bss will
> > > > become immutable by default, at program load time.  mimmutable even
> > > > prevents changing a RW object to R.
> > > 
> > > I'm probably missing something here, but for me, traditionally,
> > > BSS is the "set to 0" section of global variables of a program... which
> > > are usually going to be changed to some other value.
> > > 
> > > Or are we talking at cross purposes?
> > 
> > If you read the mimmutable diff, it has a manual page, and the answer is in
> > there.
> 
> Ah, my mistake. I read a bit fast, and I thought the *pages* themselves were
> immutable.
> 
> Stupid question time: is there any reason not to allow further changes that
> would *restrict* the pages further?
> 
> A bit like pledge works.
> 
> Like, say you mark a region "immutable" with RW rights, then later on
> use mprotect to mark it down to RO, and you can never get back?


I considered this when I first ran into the malloc and relro issues, but
after consideration I found no use for such a semantic.

1. that is even more difficult to explain
2. if permissions are reduced, the resulting faults can still drive program
   control flow via a signal handler
3. mimmutable's job is to change mmap/munmap/mprotect to return -1; it does not
   terminate programs
4. many programs don't check for mprotect failure, so the addition of mimmutable
   is already going to be interesting enough without adding demotion to the
   picture (a sketch of the resulting semantics follows)
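
To make the semantics concrete, here is a minimal sketch (assuming the
mimmutable(2) interface described in the posted diff's manual page; this is
not code from the tree):

#include <sys/mman.h>
#include <err.h>
#include <unistd.h>

int
main(void)
{
	size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);

	if (p == MAP_FAILED)
		err(1, "mmap");
	if (mimmutable(p, pagesz) == -1)
		err(1, "mimmutable");

	/* Even reducing permissions (RW -> R) is refused: no demotion. */
	if (mprotect(p, pagesz, PROT_READ) == -1)
		warn("mprotect on immutable mapping fails, as expected");
	return 0;
}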



Re: malloc: prep for immutable pages

2022-10-06 Thread Marc Espie
On Wed, Oct 05, 2022 at 07:54:41AM -0600, Theo de Raadt wrote:
> Marc Espie  wrote:
> 
> > On Tue, Oct 04, 2022 at 10:15:51AM -0600, Theo de Raadt wrote:
> > > A note on why this change is coming.
> > > 
> > > malloc.c (as it is today) does mprotect calls back and forth between RW
> > > and R, to protect an internal object.  This object is in bss; it is not
> > > allocated with mmap.  With the upcoming mimmutable change, the bss will
> > > become immutable by default, at program load time.  mimmutable even
> > > prevents changing a RW object to R.
> > 
> > I'm probably missing something here, but for me, traditionally,
> > BSS is the "set to 0" section of global variables of a program... which are
> > usually going to be changed to some other value.
> > 
> > Or are we talking at cross purposes?
> 
> If you read the mimmutable diff, it has a manual page, and the answer is in
> there.

Ah, my mistake. I read a bit fast, and I thought the *pages* themselves were
immutable.

Stupid question time: is there any reason not to allow further changes that
would *restrict* the pages further?

A bit like pledge works.

Like, say you mark a region "immutable" with RW rights, then later on
use mprotect to mark it down to RO, and you can never get back?



Re: malloc: prep for immutable pages

2022-10-05 Thread Theo de Raadt
Marc Espie  wrote:

> On Tue, Oct 04, 2022 at 10:15:51AM -0600, Theo de Raadt wrote:
> > A note on why this change is coming.
> > 
> > malloc.c (as it is today) does mprotect calls back and forth between RW and
> > R, to protect an internal object.  This object is in bss; it is not
> > allocated with mmap.  With the upcoming mimmutable change, the bss will
> > become immutable by default, at program load time.  mimmutable even prevents
> > changing a RW object to R.
> 
> I'm probably missing something here, but for me, traditionally,
> BSS is the "set to 0" section of global variables of a program... which are
> usually going to be changed to some other value.
> 
> Or are we talking at cross purposes?

If you read the mimmutable diff, it has a manual page, and the answer is in
there.



Re: malloc: prep for immutable pages

2022-10-05 Thread Otto Moerbeek
On Wed, Oct 05, 2022 at 02:47:19PM +0200, Marc Espie wrote:

> On Tue, Oct 04, 2022 at 10:15:51AM -0600, Theo de Raadt wrote:
> > A note on why this change is coming.
> > 
> > malloc.c (as it is today) does mprotect calls back and forth between RW and
> > R, to protect an internal object.  This object is in bss; it is not
> > allocated with mmap.  With the upcoming mimmutable change, the bss will
> > become immutable by default, at program load time.  mimmutable even prevents
> > changing a RW object to R.
> 
> I'm probably missing something here, but for me, traditionally,
> BSS is the "set to 0" section of global variables of a program... which are
> usually going to be changed to some other value.
> 
> Or are we talking at cross purposes?
> 

malloc sets up a few values in a bss struct and then sets the page
it's in to r/o.

But when switching to multi-threaded mode a few variables in the struct
are changed (in current), so the page has to be made r/w again,
modified, and then set back to r/o.

The diff changes things so that the pages can be set to r/o and
immutable after malloc init.
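
A sketch of the pattern being replaced (hypothetical names, not the actual
malloc.c internals): a page-aligned struct in bss is briefly flipped writable
when the process goes multi-threaded, which an immutable bss would forbid:

#include <sys/mman.h>

#define PAGE_SZ 4096	/* assumption; the real code uses MALLOC_PAGESIZE */

static struct config {
	int multi_threaded;
	/* ... other settings fixed at init time ... */
} cfg __attribute__((aligned(PAGE_SZ)));	/* zero-initialized, in bss */

void
go_multithreaded(void)
{
	/* The flip the diff removes: briefly RW, modify, back to R. */
	mprotect(&cfg, sizeof(cfg), PROT_READ | PROT_WRITE);
	cfg.multi_threaded = 1;
	mprotect(&cfg, sizeof(cfg), PROT_READ);
}

With the diff, the multi-threaded flag instead lives in the per-pool dir_info
(which is mapped r/w anyway), so the r/o page never has to change after init.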

-Otto



Re: malloc: prep for immutable pages

2022-10-05 Thread Marc Espie
On Tue, Oct 04, 2022 at 10:15:51AM -0600, Theo de Raadt wrote:
> A note on why this change is coming.
> 
> malloc.c (as it is today) does mprotect calls back and forth between RW and
> R, to protect an internal object.  This object is in bss; it is not
> allocated with mmap.  With the upcoming mimmutable change, the bss will
> become immutable by default, at program load time.  mimmutable even prevents
> changing a RW object to R.

I'm probably missing something here, but for me, traditionally,
BSS is the "set to 0" section of global variables of a program... which are
usually going to be changed to some other value.

Or are we talking at cross purposes?
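
For reference, that traditional definition in C terms (a generic illustration,
nothing OpenBSD-specific):

static int counter;		/* zero-initialized: lands in .bss */
static char buf[4096];		/* also .bss: no bytes stored in the binary */
static int answer = 42;		/* explicitly initialized: lands in .data */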



Re: malloc: prep for immutable pages

2022-10-04 Thread Theo de Raadt
A note on why this change is coming.

malloc.c (as it is today) does mprotect calls back and forth between RW and
R, to protect an internal object.  This object is in bss; it is not
allocated with mmap.  With the upcoming mimmutable change, the bss will
become immutable by default, at program load time.  mimmutable even prevents
changing a RW object to R.

So I have added an ELF section which marks non-immutable regions in
the address space, and placed this object there.  That solves the problem
at hand.
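
Roughly, the opt-out looks like this (a sketch; the section name
.openbsd.mutable is my reading of what landed and should be treated as an
assumption, and the struct is a stand-in for malloc's internal object):

#define PAGE_SZ 4096	/* assumption; libc uses MALLOC_PAGESIZE */
#define MUTABLE __attribute__((section(".openbsd.mutable")))

struct settings {	/* stand-in for the real bookkeeping struct */
	unsigned int mutexes;
	int junk;
};

/*
 * Padded and aligned to whole pages so mprotect can operate on it.
 * Because it sits in the special section, it is skipped when the
 * rest of data/bss is made immutable at program load time.
 */
static union {
	struct settings s;
	unsigned char pad[PAGE_SZ];
} cfg MUTABLE __attribute__((aligned(PAGE_SZ)));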

However, this is leading otto to look at the life-cycle of this object,
to change when its protection is changed, and to consider that the code can
make the object immutable by itself, at the right time.

This diff should be tested in isolation from the immutable changes.  The
immutable changes will start coming into the tree soon...


Otto Moerbeek  wrote:

> Hi,
> 
> Rearrange things so that we do not have to flip protection of r/o
> pages back and forth when switching from single-threaded to
> multi-threaded.  Also saves work in many cases by not initing pools
> until they are used: the pool used for MAP_CONCEAL pages is not used
> by very many processes, and if you have just a few threads only a few
> pools will be set up.
> 
> The actual mimmutable calls are to be added later. I marked the places where
> they should end up. 
> 
> Please test, 
> 
>   -Otto
> 
> 
> Index: stdlib/malloc.c
> ===
> RCS file: /cvs/src/lib/libc/stdlib/malloc.c,v
> retrieving revision 1.274
> diff -u -p -r1.274 malloc.c
> --- stdlib/malloc.c   30 Jun 2022 17:15:48 -  1.274
> +++ stdlib/malloc.c   29 Sep 2022 09:42:57 -
> @@ -142,6 +142,7 @@ struct dir_info {
>   int malloc_junk;/* junk fill? */
>   int mmap_flag;  /* extra flag for mmap */
>   int mutex;
> + int malloc_mt;  /* multi-threaded mode? */
>   /* lists of free chunk info structs */
>   struct chunk_head chunk_info_list[MALLOC_MAXSHIFT + 1];
>   /* lists of chunks with free slots */
> @@ -181,8 +182,6 @@ struct dir_info {
>  #endif /* MALLOC_STATS */
>   u_int32_t canary2;
>  };
> -#define DIR_INFO_RSZ ((sizeof(struct dir_info) + MALLOC_PAGEMASK) & \
> - ~MALLOC_PAGEMASK)
>  
>  static void unmap(struct dir_info *d, void *p, size_t sz, size_t clear);
>  
> @@ -208,7 +207,6 @@ struct malloc_readonly {
>   /* Main bookkeeping information */
>   struct dir_info *malloc_pool[_MALLOC_MUTEXES];
>   u_int   malloc_mutexes; /* how much in actual use? */
> - int malloc_mt;  /* multi-threaded mode? */
>   int malloc_freecheck;   /* Extensive double free check */
>   int malloc_freeunmap;   /* mprotect free pages PROT_NONE? */
>   int def_malloc_junk;/* junk fill? */
> @@ -257,7 +255,7 @@ static void malloc_exit(void);
>  static inline void
>  _MALLOC_LEAVE(struct dir_info *d)
>  {
> - if (mopts.malloc_mt) {
> + if (d->malloc_mt) {
>   d->active--;
>   _MALLOC_UNLOCK(d->mutex);
>   }
> @@ -266,7 +264,7 @@ _MALLOC_LEAVE(struct dir_info *d)
>  static inline void
>  _MALLOC_ENTER(struct dir_info *d)
>  {
> - if (mopts.malloc_mt) {
> + if (d->malloc_mt) {
>   _MALLOC_LOCK(d->mutex);
>   d->active++;
>   }
> @@ -291,7 +289,7 @@ hash(void *p)
>  static inline struct dir_info *
>  getpool(void)
>  {
> - if (!mopts.malloc_mt)
> + if (mopts.malloc_pool[1] == NULL || !mopts.malloc_pool[1]->malloc_mt)
>   return mopts.malloc_pool[1];
>   else/* first one reserved for special pool */
>   return mopts.malloc_pool[1 + TIB_GET()->tib_tid %
> @@ -496,46 +494,22 @@ omalloc_init(void)
>  }
>  
>  static void
> -omalloc_poolinit(struct dir_info **dp, int mmap_flag)
> +omalloc_poolinit(struct dir_info *d, int mmap_flag)
>  {
> - char *p;
> - size_t d_avail, regioninfo_size;
> - struct dir_info *d;
>   int i, j;
>  
> - /*
> -  * Allocate dir_info with a guard page on either side. Also
> -  * randomise offset inside the page at which the dir_info
> -  * lies (subject to alignment by 1 << MALLOC_MINSHIFT)
> -  */
> - if ((p = MMAPNONE(DIR_INFO_RSZ + (MALLOC_PAGESIZE * 2), mmap_flag)) ==
> - MAP_FAILED)
> - wrterror(NULL, "malloc init mmap failed");
> - mprotect(p + MALLOC_PAGESIZE, DIR_INFO_RSZ, PROT_READ | PROT_WRITE);
> - d_avail = (DIR_INFO_RSZ - sizeof(*d)) >> MALLOC_MINSHIFT;
> - d = (struct dir_info *)(p + MALLOC_PAGESIZE +
> - (arc4random_uniform(d_avail) << MALLOC_MINSHIFT));
> -
> - rbytes_init(d);
> - d->regions_free = d->regions_total = MALLOC_INITIAL_REGIONS;
> - regioninfo_size = d->regions_total * sizeof(struct region_info);
> > d->r = MMAP(regioninfo_size, mmap_flag);

malloc: prep for immutable pages

2022-09-29 Thread Otto Moerbeek
Hi,

Rearrange things so that we do not have to flip protection of r/o
pages back and forth when switching from single-threaded to
multi-threaded.  Also saves work in many cases by not initing pools
until they are used: the pool used for MAP_CONCEAL pages is not used
by very many processes, and if you have just a few threads only a few
pools will be set up.

The actual mimmutable calls are to be added later. I marked the places where
they should end up. 

Please test, 

-Otto


Index: stdlib/malloc.c
===
RCS file: /cvs/src/lib/libc/stdlib/malloc.c,v
retrieving revision 1.274
diff -u -p -r1.274 malloc.c
--- stdlib/malloc.c 30 Jun 2022 17:15:48 -  1.274
+++ stdlib/malloc.c 29 Sep 2022 09:42:57 -
@@ -142,6 +142,7 @@ struct dir_info {
int malloc_junk;/* junk fill? */
int mmap_flag;  /* extra flag for mmap */
int mutex;
+   int malloc_mt;  /* multi-threaded mode? */
/* lists of free chunk info structs */
struct chunk_head chunk_info_list[MALLOC_MAXSHIFT + 1];
/* lists of chunks with free slots */
@@ -181,8 +182,6 @@ struct dir_info {
 #endif /* MALLOC_STATS */
u_int32_t canary2;
 };
-#define DIR_INFO_RSZ   ((sizeof(struct dir_info) + MALLOC_PAGEMASK) & \
-   ~MALLOC_PAGEMASK)
 
 static void unmap(struct dir_info *d, void *p, size_t sz, size_t clear);
 
@@ -208,7 +207,6 @@ struct malloc_readonly {
/* Main bookkeeping information */
struct dir_info *malloc_pool[_MALLOC_MUTEXES];
u_int   malloc_mutexes; /* how much in actual use? */
-   int malloc_mt;  /* multi-threaded mode? */
int malloc_freecheck;   /* Extensive double free check */
int malloc_freeunmap;   /* mprotect free pages PROT_NONE? */
int def_malloc_junk;/* junk fill? */
@@ -257,7 +255,7 @@ static void malloc_exit(void);
 static inline void
 _MALLOC_LEAVE(struct dir_info *d)
 {
-   if (mopts.malloc_mt) {
+   if (d->malloc_mt) {
d->active--;
_MALLOC_UNLOCK(d->mutex);
}
@@ -266,7 +264,7 @@ _MALLOC_LEAVE(struct dir_info *d)
 static inline void
 _MALLOC_ENTER(struct dir_info *d)
 {
-   if (mopts.malloc_mt) {
+   if (d->malloc_mt) {
_MALLOC_LOCK(d->mutex);
d->active++;
}
@@ -291,7 +289,7 @@ hash(void *p)
 static inline struct dir_info *
 getpool(void)
 {
-   if (!mopts.malloc_mt)
+   if (mopts.malloc_pool[1] == NULL || !mopts.malloc_pool[1]->malloc_mt)
return mopts.malloc_pool[1];
else/* first one reserved for special pool */
return mopts.malloc_pool[1 + TIB_GET()->tib_tid %
@@ -496,46 +494,22 @@ omalloc_init(void)
 }
 
 static void
-omalloc_poolinit(struct dir_info **dp, int mmap_flag)
+omalloc_poolinit(struct dir_info *d, int mmap_flag)
 {
-   char *p;
-   size_t d_avail, regioninfo_size;
-   struct dir_info *d;
int i, j;
 
-   /*
-* Allocate dir_info with a guard page on either side. Also
-* randomise offset inside the page at which the dir_info
-* lies (subject to alignment by 1 << MALLOC_MINSHIFT)
-*/
-   if ((p = MMAPNONE(DIR_INFO_RSZ + (MALLOC_PAGESIZE * 2), mmap_flag)) ==
-   MAP_FAILED)
-   wrterror(NULL, "malloc init mmap failed");
-   mprotect(p + MALLOC_PAGESIZE, DIR_INFO_RSZ, PROT_READ | PROT_WRITE);
-   d_avail = (DIR_INFO_RSZ - sizeof(*d)) >> MALLOC_MINSHIFT;
-   d = (struct dir_info *)(p + MALLOC_PAGESIZE +
-   (arc4random_uniform(d_avail) << MALLOC_MINSHIFT));
-
-   rbytes_init(d);
-   d->regions_free = d->regions_total = MALLOC_INITIAL_REGIONS;
-   regioninfo_size = d->regions_total * sizeof(struct region_info);
-   d->r = MMAP(regioninfo_size, mmap_flag);
-   if (d->r == MAP_FAILED) {
-   d->regions_total = 0;
-   wrterror(NULL, "malloc init mmap failed");
-   }
+   d->r = NULL;
+   d->rbytesused = sizeof(d->rbytes);
+   d->regions_free = d->regions_total = 0;
for (i = 0; i <= MALLOC_MAXSHIFT; i++) {
		LIST_INIT(&d->chunk_info_list[i]);
for (j = 0; j < MALLOC_CHUNK_LISTS; j++)
			LIST_INIT(&d->chunk_dir[i][j]);
}
-   STATS_ADD(d->malloc_used, regioninfo_size + 3 * MALLOC_PAGESIZE);
d->mmap_flag = mmap_flag;
d->malloc_junk = mopts.def_malloc_junk;
d->canary1 = mopts.malloc_canary ^ (u_int32_t)(uintptr_t)d;
d->canary2 = ~d->canary1;
-
-   *dp = d;
 }
 
 static int
@@ -550,7 +524,8 @@ omalloc_grow(struct dir_info *d)
if (d->regions_total > SIZE_MAX / sizeof(struct region_info) / 2)
return 1;
 
-
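
For the archive, a self-contained illustration (hypothetical names, not the
libc code) of the two ideas the diff combines: pools are initialized lazily
on first use, and the multi-threaded flag lives per pool, so the read-only
settings page can be locked down right after malloc init (eventually with
mimmutable) and never needs another mprotect flip:

#include <stdio.h>
#include <stdlib.h>

#define NPOOLS 4

struct pool {
	int malloc_mt;		/* per-pool flag, as in the diff */
	/* ... free lists, canaries, etc. ... */
};

static struct pool *pools[NPOOLS];

static struct pool *
getpool(int idx)
{
	if (pools[idx] == NULL) {	/* first use: set it up now */
		pools[idx] = calloc(1, sizeof(struct pool));
		if (pools[idx] == NULL)
			abort();
	}
	return pools[idx];
}

int
main(void)
{
	getpool(1);		/* only the default pool gets created */
	for (int i = 0; i < NPOOLS; i++)
		printf("pool %d: %s\n", i,
		    pools[i] != NULL ? "initialized" : "never set up");
	return 0;
}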