Re: memory compile sizes
On Thu, Apr 19, 2012 at 7:26 AM, Marc Espie <es...@nerim.net> wrote:
> On Thu, Apr 19, 2012 at 02:57:57PM +0100, Stuart Henderson wrote:
>> On 2012/04/19 15:42, Marc Espie wrote:
>>> Yeah, the only issue so far is that the make-wrapper process is C code,
>>> so it needs to be compiled and deployed on every host
>> we ship with a wrapper in base ;)
>> /usr/bin/time -l
> Up to a point. This means parsing the log files to find that entry.

Nah, just direct time's output to a different file:

$ cat logtime
#!/bin/sh
fn=$1; shift
exec /usr/bin/time -l sh -c 'exec "$@" 2>&9 9>&-' -- "$@" 9>&2 2>$fn
$ ./logtime time.out sh -c 'echo stdout; echo stderr >&2'
stdout
stderr
$ cat time.out
        0.00 real         0.00 user         0.00 sys
       504  maximum resident set size
         0  average shared memory size
         0  average unshared data size
         0  average unshared stack size
       146  minor page faults
         0  major page faults
         0  swaps
         0  block input operations
         0  block output operations
         0  messages sent
         0  messages received
         0  signals received
         0  voluntary context switches
         0  involuntary context switches

:)

> It's feasible, but not really nice... and /usr/bin/time will fail on
> some signals.

Hm, how so?
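[Editor's note: the file-descriptor shuffle in logtime is terse, so here is a commented re-creation of just the redirection trick, without /usr/bin/time itself. The function name `wrap`, the file name `report.out`, and the "wrapper report" text are invented for the illustration; the fd moves are the same as in logtime.]

```shell
# Re-creation of logtime's fd shuffle, with the wrapper's own stderr
# standing in for the report that time(1) would write.
wrap() {
  fn=$1; shift
  # On the outer command: 9>&2 saves the original stderr in fd 9, then
  # 2>"$fn" points fd 2 (where time's report would go) at the file.
  # Inside: 2>&9 restores the job's stderr, 9>&- closes the spare fd.
  sh -c 'echo "wrapper report" >&2; exec "$@" 2>&9 9>&-' -- "$@" 9>&2 2>"$fn"
}

wrap report.out sh -c 'echo to-stdout; echo to-stderr >&2'
cat report.out
```

Running it prints to-stdout on stdout and to-stderr on the real stderr, while report.out ends up containing only the wrapper's own report line.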
Re: memory compile sizes
On Tue, Apr 24, 2012 at 02:33:15PM -0700, Matthew Dempsky wrote:
> On Thu, Apr 19, 2012 at 7:26 AM, Marc Espie <es...@nerim.net> wrote:
>> On Thu, Apr 19, 2012 at 02:57:57PM +0100, Stuart Henderson wrote:
>>> On 2012/04/19 15:42, Marc Espie wrote:
>>>> Yeah, the only issue so far is that the make-wrapper process is C code,
>>>> so it needs to be compiled and deployed on every host
>>> we ship with a wrapper in base ;)
>>> /usr/bin/time -l
>> Up to a point. This means parsing the log files to find that entry.
>
> Nah, just direct time's output to a different file:
>
> $ cat logtime
> #!/bin/sh
> fn=$1; shift
> exec /usr/bin/time -l sh -c 'exec "$@" 2>&9 9>&-' -- "$@" 9>&2 2>$fn
> $ ./logtime time.out sh -c 'echo stdout; echo stderr >&2'
> stdout
> stderr
> $ cat time.out
>         0.00 real         0.00 user         0.00 sys
>        504  maximum resident set size
>          0  average shared memory size
>          0  average unshared data size
>          0  average unshared stack size
>        146  minor page faults
>          0  major page faults
>          0  swaps
>          0  block input operations
>          0  block output operations
>          0  messages sent
>          0  messages received
>          0  signals received
>          0  voluntary context switches
>          0  involuntary context switches
>
> :)

Well, yuck, this makes for way too many extra processes just for that.
If I have to deploy code, it's almost as easy to compile an extra
program on the fly...

>> It's feasible, but not really nice... and /usr/bin/time will fail on
>> some signals.
>
> Hm, how so?

When you kill a job, you want to report the current rss to extract
stats later. /usr/bin/time will die without having reported anything on
at least some common signals; namely, it only catches INT and QUIT...
Re: memory compile sizes
On Tue, Apr 24, 2012 at 2:42 PM, Marc Espie <es...@nerim.net> wrote:
> Well, yuck, this makes for way too many extra processes just for that.

There are no more processes than your wrapper program, and only two
more execs of /bin/sh. Considering make's primary task is forking and
exec'ing /bin/sh processes, I doubt you'd notice the overhead. :P

Anyway, it was a tongue-in-cheek proposal.
Re: memory compile sizes
On Wed, Apr 18, 2012 at 06:26:58PM -0500, Amit Kulkarni wrote:
>> This is very cheap to compute, this just requires an extra
>> wrapper-process around make to compute those numbers.
>>
>> Stuff > 30 more or less corresponds to VMEM_WARNING ports.
>>
>> One thing worth doing will be to set a threshold in dpb, and not allow
>> two large ports to build at the same time on a single host (just by
>> looking at the next ports in the queue, plenty of candidates usually)
>
> Err, you will look at ncpu and available memory before you put that in,
> right? If you have quad core and up, you could have 4 large ports going
> at the same time as of Feb 2012. This is just a temporary dpb issue
> while the rthreads/vmmap issues are being sorted out, right?

No, this is not a temporary issue. Look at the SIZES. If you are
compiling the 4 biggest ports and they start linking at the same time,
they WILL consume about 6GB of memory.

And look at dpb's graphs. Postponing such ports slightly will have
absolutely no impact on the total build time. Heck, less memory
pressure and less cache pressure will probably make things faster.
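[Editor's note: the scheduling idea Marc describes, letting at most one "large" port build at a time while pulling smaller candidates forward from the queue, can be sketched in a few lines of awk. Everything below is hypothetical: the queue file, the port names, the maxrss figures, and the 1.5 GB cutoff are invented for illustration; dpb's real logic lives in its Perl engine.]

```shell
# Hypothetical sketch: given "port maxrss-in-kb" pairs, emit the ports
# to start now, allowing at most one port above the threshold per pass.
cat > queue <<'EOF'
editors/libreoffice 4000000
www/chromium 3500000
shells/bash 120000
lang/python 300000
EOF

# t is the "large port" cutoff in KB (an assumed value, roughly 1.5 GB)
awk -v t=1500000 '
  $2 > t { if (big++) next }   # skip every large port after the first
  { print $1 }
' queue
```

This picks editors/libreoffice plus the two small ports and postpones www/chromium to a later pass, which is the "plenty of candidates usually" effect: the big ports never link at the same time, and the small ones fill the idle slots.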
Re: memory compile sizes
On Thursday 19 April 2012 01:27:01 Marc Espie wrote:
> Thanks to Ariane's changes, we can now monitor maxrss thru compiles.
> Here is a full list of a bulk build on amd64, ordered by size.
> (warning, very long post)
>
> Note that the number for libreoffice might be a bit small, because I
> ran out of diskspace and rebuilt it later, but most of the rest is
> fairly interesting.
>
> This is very cheap to compute, this just requires an extra
> wrapper-process around make to compute those numbers.

Does this mean it's going to be at least somewhat dynamic and the list
that you attached will be regenerated once in a while?

> Stuff > 30 more or less corresponds to VMEM_WARNING ports.
>
> One thing worth doing will be to set a threshold in dpb, and not allow
> two large ports to build at the same time on a single host (just by
> looking at the next ports in the queue, plenty of candidates usually)

--
Antti Harri
Re: memory compile sizes
On Thu, Apr 19, 2012 at 12:58:43PM +0300, Antti Harri wrote:
> On Thursday 19 April 2012 01:27:01 Marc Espie wrote:
>> Thanks to Ariane's changes, we can now monitor maxrss thru compiles.
>> Here is a full list of a bulk build on amd64, ordered by size.
>> (warning, very long post)
>>
>> Note that the number for libreoffice might be a bit small, because I
>> ran out of diskspace and rebuilt it later, but most of the rest is
>> fairly interesting.
>>
>> This is very cheap to compute, this just requires an extra
>> wrapper-process around make to compute those numbers.
>
> Does this mean it's going to be at least somewhat dynamic and the list
> that you attached will be regenerated once in a while?

Yeah, the only issue so far is that the make-wrapper process is C code,
so it needs to be compiled and deployed on every host, but then
managing this full rss list just requires similar techniques to the
persistent engine database under build-stats.
Re: memory compile sizes
On 2012/04/19 15:42, Marc Espie wrote:
> Yeah, the only issue so far is that the make-wrapper process is C code,
> so it needs to be compiled and deployed on every host

we ship with a wrapper in base ;)

/usr/bin/time -l
Re: memory compile sizes
On Thu, Apr 19, 2012 at 02:57:57PM +0100, Stuart Henderson wrote:
> On 2012/04/19 15:42, Marc Espie wrote:
>> Yeah, the only issue so far is that the make-wrapper process is C code,
>> so it needs to be compiled and deployed on every host
>
> we ship with a wrapper in base ;)
>
> /usr/bin/time -l

Up to a point. This means parsing the log files to find that entry.
It's feasible, but not really nice... and /usr/bin/time will fail on
some signals.
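[Editor's note: Marc's objection is about fishing the entry back out of the build log. As an illustration, once the report is in `time -l` format, extracting maxrss is a one-line awk; the file name and the figures below are invented for the example.]

```shell
# Simulated /usr/bin/time -l report (values made up for the example)
cat > time.out <<'EOF'
        0.05 real         0.02 user         0.01 sys
      5043  maximum resident set size
       146  minor page faults
EOF

# The maxrss value is the first field of its line
awk '/maximum resident set size/ { print $1 }' time.out
```

The awkward part is not the parsing but interleaving: if the report is mixed into a build log alongside compiler output, the pattern can collide with unrelated lines, which is why directing the report to its own file is the cleaner route.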
Re: memory compile sizes
> This is very cheap to compute, this just requires an extra
> wrapper-process around make to compute those numbers.
>
> Stuff > 30 more or less corresponds to VMEM_WARNING ports.
>
> One thing worth doing will be to set a threshold in dpb, and not allow
> two large ports to build at the same time on a single host (just by
> looking at the next ports in the queue, plenty of candidates usually)

Err, you will look at ncpu and available memory before you put that in,
right? If you have quad core and up, you could have 4 large ports going
at the same time as of Feb 2012. This is just a temporary dpb issue
while the rthreads/vmmap issues are being sorted out, right?