Re: increase i386 data size

2011-01-06 Thread Ariane van der Steldt
On Fri, Dec 24, 2010 at 05:14:13PM -0500, Ted Unangst wrote:
 On Fri, Dec 24, 2010 at 5:02 PM, Mark Kettenis mark.kette...@xs4all.nl
 wrote:
  Date: Fri, 24 Dec 2010 16:54:23 -0500 (EST)
  From: Ted Unangst ted.unan...@gmail.com
 
  increase the hard limit on i386 max data size to 2GB-1.  This will allow
  memory hungry processes to potentially use more RAM if you increase data
  limits appropriately.
 
  I really think that -1 is odd.  Where would those potential overflows be?
 
 Anyone who stores the limit in a signed int (or long).  Do I know of
 any such software?  No.  Am I willing to risk the possibility of such
 existing to squeeze out a few more bytes?  No.
 
 I will happily set it to straight 2GB, or even higher if we don't care
 about possible trouble, so long as everybody promises not to complain
 if an issue is found. :)

I object to the -1. MAXDSIZ is always compared against multiples of the
page size, so there is no reason to make it anything but a multiple of
the page size. Furthermore, with the -1, a calculation like "what's the
first page after data?" becomes hell.

I have no objection to -PAGE_SIZE. But for that matter, I don't
object to plain 2GB either. It shouldn't end up in a signed value
anyway.

-- 
Ariane



Re: increase i386 data size

2011-01-06 Thread Ted Unangst
On Thu, Jan 6, 2011 at 9:34 PM, Ariane van der Steldt ari...@stack.nl wrote:
 I have no objection to -PAGE_SIZE. But for that matter, I don't
 object to plain 2GB either. It shouldn't end up in a signed value
 anyway.

we're going to go with flat 2GB after all.



Re: increase i386 data size

2010-12-24 Thread Mark Kettenis
 Date: Fri, 24 Dec 2010 16:54:23 -0500 (EST)
 From: Ted Unangst ted.unan...@gmail.com
 
 increase the hard limit on i386 max data size to 2GB-1.  This will allow 
 memory hungry processes to potentially use more RAM if you increase data 
 limits appropriately.

I really think that -1 is odd.  Where would those potential overflows be?

 Index: vmparam.h
 ===================================================================
 RCS file: /home/tedu/cvs/src/sys/arch/i386/include/vmparam.h,v
 retrieving revision 1.45
 diff -u -r1.45 vmparam.h
 --- vmparam.h 15 Dec 2010 05:30:19 -  1.45
 +++ vmparam.h 24 Dec 2010 21:52:07 -
 @@ -63,7 +63,7 @@
  #define	DFLDSIZ		(64*1024*1024)		/* initial data size limit */
  #endif
  #ifndef MAXDSIZ
 -#define	MAXDSIZ		(1024*1024*1024)	/* max data size */
 +#define	MAXDSIZ		(2UL*1024*1024*1024-1)	/* max data size. -1 to avoid overflow */
  #endif
  #ifndef BRKSIZ
  #define	BRKSIZ		(1024*1024*1024)	/* heap gap size */



Re: increase i386 data size

2010-12-24 Thread Ted Unangst
On Fri, Dec 24, 2010 at 5:02 PM, Mark Kettenis mark.kette...@xs4all.nl
wrote:
 Date: Fri, 24 Dec 2010 16:54:23 -0500 (EST)
 From: Ted Unangst ted.unan...@gmail.com

 increase the hard limit on i386 max data size to 2GB-1.  This will allow
 memory hungry processes to potentially use more RAM if you increase data
 limits appropriately.

 I really think that -1 is odd.  Where would those potential overflows be?

Anyone who stores the limit in a signed int (or long).  Do I know of
any such software?  No.  Am I willing to risk the possibility of such
existing to squeeze out a few more bytes?  No.

I will happily set it to straight 2GB, or even higher if we don't care
about possible trouble, so long as everybody promises not to complain
if an issue is found. :)



Re: increase i386 data size

2010-12-24 Thread Ted Unangst
On Fri, Dec 24, 2010 at 5:14 PM, Ted Unangst ted.unan...@gmail.com wrote:
 Anyone who stores the limit in a signed int (or long).  Do I know of
 any such software?  No.  Am I willing to risk the possibility of such
 existing to squeeze out a few more bytes?  No.

 I will happily set it to straight 2GB, or even higher if we don't care
 about possible trouble, so long as everybody promises not to complain
 if an issue is found. :)

To phrase it another way, I was actually hoping that by avoiding the
"what about overflow?" worries, we could move forward faster.  If
that's not a worry, great, I just didn't want to get tied down.



Re: increase i386 data size

2010-12-24 Thread Mark Kettenis
 Date: Fri, 24 Dec 2010 17:17:51 -0500
 From: Ted Unangst ted.unan...@gmail.com
 
 On Fri, Dec 24, 2010 at 5:14 PM, Ted Unangst ted.unan...@gmail.com wrote:
  Anyone who stores the limit in a signed int (or long).  Do I know of
  any such software?  No.  Am I willing to risk the possibility of such
  existing to squeeze out a few more bytes?  No.

You mean, in the kernel?  There the limits are stored in rlim_t, which
is a 64-bit type on all our architectures.  There is one comparison in
uvm_mmap.c that had me worried for a bit:

	if (size >
	    (p->p_rlimit[RLIMIT_DATA].rlim_cur - ptoa(p->p_vmspace->vm_dused))) {

but this is safe since ptoa() casts to paddr_t, which is unsigned long.

There is also 'struct orlimit' in sys/resource.h, which is used for
BSD4.3 compat code in compat/common/kern_resource_43.c.  But
RLIMIT_DATA isn't the only resource limit that can be set to 2GB or
beyond.  So I'm happy to ignore that issue.

For userland, I have very little sympathy.  If stuff doesn't run with
the limits cranked up all the way to 2GB, fix it, or crank the limit
down a tad bit.

  I will happily set it to straight 2GB, or even higher if we don't care
  about possible trouble, so long as everybody promises not to complain
  if an issue is found. :)
 
 To phrase it another way, I was actually hoping that by avoiding the
 "what about overflow?" worries, we could move forward faster.  If
 that's not a worry, great, I just didn't want to get tied down.

I don't think this is a worry.  Wouldn't mind if somebody else takes a
look at this as well.