On Mon, Oct 15, 2018 at 09:11:51AM -0700, Kees Cook wrote:
> On Mon, Oct 15, 2018 at 3:55 AM, Christian Brauner <christ...@brauner.io> 
> wrote:
> > Currently, when writing
> >
> > echo 18446744073709551616 > /proc/sys/fs/file-max
> >
> > /proc/sys/fs/file-max will overflow and be set to 0. That quickly
> > crashes the system.
> > This commit explicitly caps the value for file-max to ULONG_MAX.
> >
> > Note, this isn't technically necessary since proc_get_long() will already
> > return ULONG_MAX. However, there are two reasons why we should still do this:
> > 1. it makes it explicit what the upper bound of file-max is instead of
> >    making readers of the code infer it from proc_get_long() themselves
> > 2. tunables other than file-max may want to set a lower max value than
> >    ULONG_MAX and we need to enable __do_proc_doulongvec_minmax() to handle
> >    such cases too
> >
> > Cc: Kees Cook <keesc...@chromium.org>
> > Signed-off-by: Christian Brauner <christ...@brauner.io>
> > ---
> > v0->v1:
> > - if max value is less than ULONG_MAX, use max as upper bound
> > - (Dominik) remove double "the" from commit message
> > ---
> >  kernel/sysctl.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> > index 97551eb42946..226d4eaf4b0e 100644
> > --- a/kernel/sysctl.c
> > +++ b/kernel/sysctl.c
> > @@ -127,6 +127,7 @@ static int __maybe_unused one = 1;
> >  static int __maybe_unused two = 2;
> >  static int __maybe_unused four = 4;
> >  static unsigned long one_ul = 1;
> > +static unsigned long ulong_max = ULONG_MAX;
> >  static int one_hundred = 100;
> >  static int one_thousand = 1000;
> >  #ifdef CONFIG_PRINTK
> > @@ -1696,6 +1697,7 @@ static struct ctl_table fs_table[] = {
> >                 .maxlen         = sizeof(files_stat.max_files),
> >                 .mode           = 0644,
> >                 .proc_handler   = proc_doulongvec_minmax,
> > +               .extra2         = &ulong_max,
> 
> Don't we want this capped lower? The percpu comparisons, for example,
> are all signed long. And there is at least this test, which could
> overflow:
> 
>         if (atomic_long_read(&unix_nr_socks) > 2 * get_max_files())
>                 goto out;

Does that check even make sense?
Commit 518de9b39e854542de59bfb8b9f61c8f7ecf808b made get_max_files()
return an unsigned long to bump the number of allowed files to more than 2^31.

But assuming a platform where unsigned long is 64 bit, which is what
get_max_files() returns, and atomic_long_read() is 64 bit too, this is
guaranteed to overflow, no? So I'm not clear what this is trying to do.
Seems this should simply be:

if (atomic_long_read(&unix_nr_socks) > get_max_files())
        goto out;

or am I missing a crucial point?
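
For illustration, here is a minimal userspace sketch (not the kernel code
itself) of the wraparound in question, assuming a 64-bit unsigned long;
max_files below is just a local stand-in for the file-max sysctl value:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* file-max capped at ULONG_MAX, as in the patch above */
	unsigned long max_files = ULONG_MAX;

	/* 2 * (2^64 - 1) wraps modulo 2^64 to 2^64 - 2 */
	unsigned long twice = 2 * max_files;

	printf("max_files     = %lu\n", max_files);
	printf("2 * max_files = %lu\n", twice); /* prints ULONG_MAX - 1 */

	return 0;
}

So once file-max can be as large as ULONG_MAX, "2 * get_max_files()" no
longer expresses "twice the file limit" for large values.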

> 
> Seems like max-files should be SLONG_MAX / 2 or something instead?

Hm. Isn't that a bit low? Iiuc, this would mean cutting the maximum
number of open files in half. If anything, shouldn't it be LONG_MAX?
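
As a back-of-the-envelope check on the two caps being discussed (again
assuming 64-bit longs; neither value is what the patch currently uses):

#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* a cap that still fits a signed long after doubling */
	unsigned long cap_half = LONG_MAX / 2;

	/* the alternative raised above: cap at LONG_MAX itself */
	unsigned long cap_full = LONG_MAX;

	/* 2 * (LONG_MAX / 2) == LONG_MAX - 1, still a valid signed long */
	printf("2 * (LONG_MAX / 2) = %lu\n", 2 * cap_half);

	/* 2 * LONG_MAX == ULONG_MAX - 1, no longer representable as signed long */
	printf("2 * LONG_MAX       = %lu\n", 2 * cap_full);

	return 0;
}

If I read the concern about the signed comparisons correctly, that is the
point of the / 2: LONG_MAX / 2 is the largest cap whose doubled value still
fits a signed long, whereas LONG_MAX only keeps the doubling from wrapping
the unsigned arithmetic.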

> 
> >         },
> >         {
> >                 .procname       = "nr_open",
> > @@ -2795,6 +2797,8 @@ static int __do_proc_doulongvec_minmax(void *data, 
> > struct ctl_table *table, int
> >                                 break;
> >                         if (neg)
> >                                 continue;
> > +                       if (max && val > *max)
> > +                               val = *max;
> >                         val = convmul * val / convdiv;
> >                         if ((min && val < *min) || (max && val > *max))
> >                                 continue;
> > --
> > 2.17.1
> >
> 
> -Kees
> 
> -- 
> Kees Cook
> Pixel Security
