Re: max open files

2017-09-29 Thread Taylor R Campbell
> Date: Fri, 29 Sep 2017 13:13:42 +0100
> From: Robert Swindells 
> 
> co...@sdf.org wrote:
> >This number is way too easy to hit just linking things in pkgsrc. Can we
> >raise it? Things fail hard when it is hit.
> >
> >I've seen people say 'when your build dies, restart it with MAKE_JOBS=1
> >so it doesn't link in parallel'.
> 
> I saw this for the first time today too.
> 
> Maybe something has started leaking file descriptors.

Run a build with a lot of jobs in parallel, and it's not hard to hit
the process and file descriptor limit.  I started seeing this a few
years ago when I began using a 24-core machine for pkgsrc builds.
Maybe there are also some things that leak file descriptors, but the
maximum number of processes and descriptors should really be scaled by
available RAM by default.
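
As a rough illustration of what RAM-based scaling could look like, here is
a sketch that derives a candidate kern.maxfiles from physical memory.  It
is not the kernel's actual formula; the 64 KiB divisor and the floor of
1024 are arbitrary assumptions.

    # hw.physmem64 and kern.maxfiles are standard NetBSD sysctls; the
    # "one open file per 64 KiB of RAM" rule of thumb is made up purely
    # for illustration.
    physmem=$(/sbin/sysctl -n hw.physmem64)
    candidate=$((physmem / 65536))
    [ "$candidate" -lt 1024 ] && candidate=1024
    echo "suggested kern.maxfiles: $candidate"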


Re: max open files

2017-09-29 Thread Joerg Sonnenberger
On Tue, Sep 26, 2017 at 09:53:29AM +0200, Patrick Welche wrote:
> On Tue, Sep 26, 2017 at 06:33:01AM +, co...@sdf.org wrote:
> > This number is way too easy to hit just linking things in pkgsrc. Can we
> > raise it? Things fail hard when it is hit.
> >
> > I've seen people say 'when your build dies, restart it with MAKE_JOBS=1
> > so it doesn't link in parallel'.
> 
> In the same vein, would a kernel with say "maxusers 256" make sense?

IMO maxusers should be removed and replaced by proper scaling of data
structures and limits based on RAM, nothing else.

Joerg
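
(For reference, the current values of these knobs can be inspected on a
running system; whether and how each one is derived from maxusers varies,
but the sysctl nodes themselves exist on NetBSD.)

    # Read-only check of the limits on the running kernel.
    sysctl kern.maxusers kern.maxproc kern.maxfiles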


Re: max open files

2017-09-29 Thread Robert Swindells

co...@sdf.org wrote:
>This number is way too easy to hit just linking things in pkgsrc. Can we
>raise it? Things fail hard when it is hit.
>
>I've seen people say 'when your build dies, restart it with MAKE_JOBS=1
>so it doesn't link in parallel'.

I saw this for the first time today too.

Maybe something has started leaking file descriptors.
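
One way to tell whether the global file table is genuinely near its limit,
or whether a single process is hoarding descriptors, is sketched below; the
awk field assumes fstat's default output with the PID in the third column.

    # System-wide file-table usage versus the limit.
    sysctl kern.maxfiles
    pstat -T
    # Rough per-process open-file counts (PID assumed to be field 3).
    fstat | awk 'NR > 1 { n[$3]++ } END { for (p in n) print n[p], p }' | sort -rn | head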


Re: max open files

2017-09-26 Thread Patrick Welche
On Tue, Sep 26, 2017 at 06:33:01AM +, co...@sdf.org wrote:
> This number is way too easy to hit just linking things in pkgsrc. Can we
> raise it? Things fail hard when it is hit.
>
> I've seen people say 'when your build dies, restart it with MAKE_JOBS=1
> so it doesn't link in parallel'.

In the same vein, would a kernel with, say, "maxusers 256" make sense?

Cheers,

Patrick
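
For anyone who wants to try that, maxusers is a single line in the kernel
configuration file, e.g. in a GENERIC-derived config (the config name below
is only an example):

    # sys/arch/amd64/conf/BUILDHOST -- a copy of GENERIC with the knob raised
    maxusers        256

followed by the usual config(1) and build cycle.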


Re: max open files

2017-09-26 Thread Paul Goyette

On Tue, 26 Sep 2017, co...@sdf.org wrote:


> Hi,
>
> This number is way too easy to hit just linking things in pkgsrc. Can we
> raise it? Things fail hard when it is hit.
>
> I've seen people say 'when your build dies, restart it with MAKE_JOBS=1
> so it doesn't link in parallel'.


I run my machine with kern.maxfiles=5120
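
(For anyone following along, that can be set at runtime and made persistent
roughly as follows; the paths are the stock NetBSD ones and the value is
just the one quoted above.)

    # Raise the limit on the running system:
    sysctl -w kern.maxfiles=5120
    # ...and keep it across reboots:
    echo 'kern.maxfiles=5120' >> /etc/sysctl.conf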

I'm sure I could raise it much higher, but it hasn't been necessary.

The only time I ever ran up against the limit was when there was a bug
in Nat's new wsbell code (which opened /dev/audio to issue the "beep"
tone but never closed the device).  Otherwise I haven't hit it, and I'm
always running large pkgsrc rebuilds and/or NetBSD builds, always with
many parallel jobs.  (I default to MAKE_JOBS=8 for pkgsrc, and -J 24 for
release builds.)


So, I'm not against raising it, but we would need some specific value
that works on small systems (ARM SoCs?) as well as on large systems such
as mine!  Perhaps some algorithmic calculation based on other system
parameters?



+------------------+--------------------------+----------------------------+
| Paul Goyette     | PGP Key fingerprint:     | E-mail addresses:          |
| (Retired)        | FA29 0E3B 35AF E8AE 6651 | paul at whooppee dot com   |
| Kernel Developer | 0786 F758 55DE 53BA 7731 | pgoyette at netbsd dot org |
+------------------+--------------------------+----------------------------+


max open files

2017-09-26 Thread coypu
Hi,

This number is way too easy to hit just linking things in pkgsrc. Can we
raise it? Things fail hard when it is hit.

I've seen people say 'when your build dies, restart it with MAKE_JOBS=1
so it doesn't link in parallel'.
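
For context, MAKE_JOBS is the pkgsrc knob being referred to; it is set in
mk.conf, and the values below are only examples:

    # /etc/mk.conf (pkgsrc)
    MAKE_JOBS=8     # parallel make jobs per package build
    #MAKE_JOBS=1    # serial fallback if parallel links exhaust kern.maxfiles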