Re: Variability in build times.

2010-10-19 Thread Aleksandar Kuktin
On Tue, 19 Oct 2010 05:03:15 +0200 (CEST)
Uwe Düffert dueff...@uwe-dueffert.de wrote:

 Same with glibc here: The difference between the slowest and the
 fastest run is just 0.3% (16m53.825 vs 16m57.186).
 
 Uwe

Well, my results are way more interesting. :)

Overnight builds, done while I was sleeping and therefore not loading
the machine, are uniform (I have not done a full statistical analysis):
490 ± 10 seconds.
BUT, when I am actively using my single core computer, the times range
from 502 seconds to 675 seconds.
In detail: min: 502, max: 675, avrg: 560, sd: 53 seconds.
That's with Mplayer and/or the browser.
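
For anyone who wants to reproduce the summary figures, something along
these lines should do (assuming the per-run elapsed times are collected
in seconds, one per line, in a file; times.txt here is just a placeholder):

# times.txt (placeholder) holds one elapsed time in seconds per line
awk '{ sum += $1; sumsq += $1 * $1
       if (NR == 1 || $1 < min) min = $1
       if ($1 > max) max = $1 }
     END { avg = sum / NR; sd = sqrt(sumsq / NR - avg * avg)
           printf "min: %d, max: %d, avg: %.0f, sd: %.0f seconds\n", \
                  min, max, avg, sd }' times.txt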

-- 
-Aleksandar Kuktin


Re: Variability in build times.

2010-10-19 Thread Ken Moffat
On Tue, Oct 19, 2010 at 02:18:47AM +0200, Uwe Düffert wrote:
 
 On Tue, 19 Oct 2010, Ken Moffat wrote:
 
  But, I'm only ever comparing times on the same system, and this
  package (abiword) was almost at the end of my build, so the amount
  of '/' used is almost the same (for me, /home is always a separate
  filesystem - ok, I've probably extended the usage of /var/log - and
  yes, updating the browser cache involves head movement, but such a
  large variation ?
 No, that does not sound reasonable. Does that only happen with abiword or 
 are there other examples (worth mentioning)?

 At the moment, this is the first package where I've *noticed* the
difference.  I left a script running overnight, but the script was
buggy.
 
 Maybe that's the reason: What about throttling because of high 
 temperature? Or running *something* in the background forcing task 
 switches?
 
 I think some Intel processors throttle because of temperature.
Mine (both machines) are Athlon64.  Yes, task switching happens -
I've got 110 tasks at the moment (cc1plus, doltcompile, make - all
from abiword's build), X, firefox-bin, icewm, rxvt-unicode, and the
normal system processes.

ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce

Re: Variability in build times.

2010-10-19 Thread Ken Moffat
On Tue, Oct 19, 2010 at 01:07:02PM +0200, Aleksandar Kuktin wrote:
 On Tue, 19 Oct 2010 05:03:15 +0200 (CEST)
 Uwe Düffert dueff...@uwe-dueffert.de wrote:
 
  Same with glibc here: The difference between the slowest and the
  fastest run is just 0.3% (16m53.825 vs 16m57.186).
  
  Uwe
 
 Well, my results are way more interesting. :)
 
 Overnight builds, done while I was sleeping and therefore not loading
 the machine are uniform (I have not done a full statistical analysis):
 490 sec +- 10 seconds.
 BUT, when I am actively using my single core computer, the times range
 from 502 seconds to 675 seconds.
 In detail: min: 502, max: 675, avrg: 560, sd: 53 seconds.
 That's with Mplayer and/or the browser.
 
 Thanks.  So, it's a uniprocessor problem.

ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce

Re: Variability in build times.

2010-10-19 Thread Ken Moffat
On Tue, Oct 19, 2010 at 03:49:35PM +0100, Ken Moffat wrote:
  
  Thanks.  So, it's a uniprocessor problem.
 
 /me is still puzzled that my *original* builds of abiword on these
two machines (i.e. during the installation of my normal packages)
are faster than any of my repeats, but that's life.

 FWIW, my estimate for configure and make during the initial build
on the current system is around 595 seconds (full install was 618
seconds).  Today I've run configure and make 6 times with the same
settings.  For the first, I was watching something on xine, and
firefox (with gnash plugin available) was open. For the second,
firefox was open.  For the last 4 builds, only 3 urxvt terms (and
possibly a screensaver).

build   mm:ss   relative to baseline
0   11:48   +19%
1   11:32   +16%
2   10:29   +6%
3   11:17   +14%
4   10:43   +8%
5   10:11   +3%
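
For reference, the percentages above can be re-derived from the mm:ss
figures and the 595-second baseline with a quick one-liner (the times
file, holding one mm:ss value per line, is just a placeholder):

# 'times' (placeholder) holds one mm:ss value per line
awk -v base=595 '{ split($1, m, ":"); secs = m[1]*60 + m[2]
                   printf "%s  %4ds  %+.0f%%\n", $1, secs, 100*(secs-base)/base }' times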

 So, on the assumption that this variability is a uniprocessor
feature, I'll conclude that a lightly loaded desktop can have at
least a 15% range of build times.

 This range of variation *probably* only applies to complex builds
(lots of c++ and libtool) and is perhaps magnified by whatever
abiword does for its own 'dolt' versions of libtool and friends.
Thanks for all the helpful comments.


ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce

Re: Variability in build times.

2010-10-19 Thread Aleksandar Kuktin
On Tue, 19 Oct 2010 17:28:33 +0100
Ken Moffat k...@linuxfromscratch.org wrote:

  This range of variation *probably* only applies to complex builds
 (lots of c++ and libtool) and is perhaps magnified by whatever
 abiword does for its own 'dolt' versions of libtool and friends.
 Thanks for all the helpful comments.
 
 
 ĸen

I concur.
ldd reports that neither gcc nor g++ links against libpthread (the only
multithreading library I have installed), which would indicate they are
single-threaded compilers and as such can fit themselves on one core in
multi-core setups. The other core(s) can take over other tasks, a
capability single-core CPUs don't have.
So, on single-core CPUs, any additional tasks will affect GCC
execution times, while on multi-core CPUs this should not occur.
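
The check itself, for anyone who wants to repeat it, is roughly this
(note that gcc and g++ are only driver programs; the actual compilers
are cc1 and cc1plus):

# check whether the gcc/g++ driver binaries link against libpthread
for bin in gcc g++; do
    echo "== $bin =="
    ldd "$(which $bin)" | grep pthread || echo "   (no libpthread)"
done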

-- 
-Aleksandar Kuktin

Re: Variability in build times.

2010-10-19 Thread linux fan
On 10/19/10, Ken Moffat k...@linuxfromscratch.org wrote:

  /me is still puzzled that my *original* builds of abiword on these
 two machines (i.e. during the installation of my normal packages)
 are faster than any of my repeats, but that's life.


Make sure 1 SBU = 1 SBU.
Some versions (gcc, etc.) may take more time.
If web browsing, there is a lot more time-killing scripting lately,
even on Google. It's getting so bad that I'm usually turning off JavaScript.


Re: Variability in build times.

2010-10-19 Thread Ken Moffat
On Tue, Oct 19, 2010 at 02:02:01PM -0400, linux fan wrote:
 On 10/19/10, Ken Moffat k...@linuxfromscratch.org wrote:
 
   /me is still puzzled that my *original* builds of abiword on these
  two machines (i.e. during the installation of my normal packages)
  are faster than any of my repeats, but that's life.
 
 
 Make sure 1 SBU = 1 SBU.
 Some versions (gcc, etc.) may take more time.

 Umm, yes.  Actually, I was really querying the *elapsed* time.  But
yes, usually a new gcc version is slower (although 4.5 does seem to
be faster than 4.4 overall).
 If web browsing, there is a lot more time-killing scripting lately,
 even on Google. It's getting so bad that I'm usually turning off JavaScript.

 Tell me about it ;)

ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce

Re: Variability in build times.

2010-10-19 Thread linux fan
On 10/19/10, Ken Moffat k...@linuxfromscratch.org wrote:


  Umm, yes.  Actually, I was really querying the *elapsed* time.  But

I never can keep track of which apples are apples and which ones are
oranges, but just in case the timing happens to include find, du, and
other such operations, I notice that they do seem to benefit from
caching. The first find of the day (boot-wise) takes a lot longer to
find the same collection than it does on subsequent runs, and the same
goes for du.


Re: Variability in build times.

2010-10-19 Thread Uwe Düffert
On Tue, 19 Oct 2010, Ken Moffat wrote:

 So, on the assumption that this variability is a uniprocessor
 feature, I'll conclude that a lightly loaded desktop can have at
 least a 15% range of build times.
Well, after all, do you really think this is astonishing? On a single-core 
machine, just (constantly) moving the mouse can easily (depending on the 
system) cost a few percent of CPU resources. If consistent build times were 
a concern, I would never have thought about starting X at all. Even on my 
dual-core machine I only have a few console screens (maybe running 
midnight commander) during builds. It's not really a concern with multiple 
cores (as measurements show), but on a single one it is for sure.

Uwe


Re: Variability in build times.

2010-10-19 Thread Andrew Benton
On Mon, 18 Oct 2010 18:38:48 -0400
linux fan linuxscra...@gmail.com wrote:

 start=$(date +%s)
 ... stuff happens ...
 end=$(date +%s)
 elaps=$(( end - start ))
 
That's an interesting suggestion. Thanks!

Andy


Re: Variability in build times.

2010-10-18 Thread Ken Moffat
On Mon, Oct 18, 2010 at 10:19:58PM +0100, Ken Moffat wrote:
  I've never imagined that build times were exactly repeatable, but
 I'd assumed that, *on an unloaded system with only a browser
 running* the times would be fairly close, say plus or minus 3%.
 Now, I start to wonder.
 
 I forgot to add that the original builds were probably into swap
(abiword is almost at the end of my build, and if I look, I usually
see the box is using swap during a build), but no swap was used
after the most recent [slow] build.

 Also, I'm deleting the source directory between builds.

ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce

Re: Variability in build times.

2010-10-18 Thread Bruce Dubbs
Ken Moffat wrote:
 On Mon, Oct 18, 2010 at 10:19:58PM +0100, Ken Moffat wrote:
  I've never imagined that build times were exactly repeatable, but
 I'd assumed that, *on an unloaded system with only a browser
 running* the times would be fairly close, say plus or minus 3%.
 Now, I start to wonder.

  I forgot to add that the original builds were probably into swap
 (abiword is almost at the end of my build, and if I look, I usually
 see the box is using swap during a build), but no swap was used
 after the most recent [slow] build.
 
  Also, I'm deleting the source directory between builds.

I haven't looked at it in detail, but I think files in memory are kept 
around for a while until the memory is needed for something else.  About 
the only way to really get repeatability is to reboot between tries, and 
that isn't realistic.
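
For what it's worth, a lighter-weight alternative to rebooting, assuming
root access and a 2.6.16-or-later kernel, is to drop the page cache
between runs:

sync
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes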

In any case, I use a script like the following one for each program.
Installing the package on /tmp allows for repeatability in size, if not 
SBU time.

   -- Bruce

#!/bin/bash

source /usr/src/stats

###
# Installing openssh

DIR=`pwd`
PROGRAM=openssh-4.5p1
LOG=$DIR/$PROGRAM.log
TITLE=$PROGRAM
TIMEFORMAT="$TIMEFMT $TITLE"

BUILDDIR=/tmp/openssh
DEST=$BUILDDIR/install
rm -rf $BUILDDIR
mkdir $BUILDDIR
cd $BUILDDIR

before=`df -k / | grep / | sed -e "s/ \{2,\}/ /g" | cut -d' ' -f3`


tar -xf $DIR/$PROGRAM.tar.?z* || exit 1

cd $PROGRAM
{ time \
   {
 echo "Making $TITLE"
 date

 sed -i s:-lcrypto:/usr/lib/libcrypto.a:g configure
 ./configure --prefix=/usr \
 --sysconfdir=/etc/ssh \
 --libexecdir=/usr/lib/openssh \
 --with-md5-passwords \
 --with-privsep-path=/var/lib/sshd &&
 make &&
 make DESTDIR=$DEST install &&
 install -v -m755 -d $DEST/usr/share/doc/$PROGRAM &&
 install -v -m644 INSTALL LICENCE OVERVIEW README* WARNING.RNG \
 $DEST/usr/share/doc/$PROGRAM

   }
} 2>&1 | tee -a $LOG

if [ $PIPESTATUS -ne 0 ]; then exit 1; fi;

stats $LOG $DIR/$PROGRAM.tar.?z* $before

exit 0
---
$ cat stats

#!/bin/bash

function stats()
{
   log=$1
   tarball=$2
   b4=$3

   free_now=`df -k / | grep / | sed -e "s/ \{2,\}/ /g" | cut -d" " -f3`

   buildtime=`tail -n1 $log | cut -f1 -d" "`
   sbu=`echo "scale=3; $buildtime / 132.5" | bc`

   psizeK=`du -k $tarball | cut -f1`
   psizeM=`echo "scale=3; $psizeK / 1024" | bc`

   bsizeK=`echo "$free_now - $b4" | bc`
   bsizeM=`echo "scale=3; $bsizeK / 1024" | bc`

   echo "SBU=$sbu"                                  | tee -a $log
   echo "$psizeK $tarball size ($psizeM MB)"        | tee -a $log
   echo "$bsizeK kilobytes build size ($bsizeM MB)" | tee -a $log
   (echo -n "md5sum : "; md5sum $tarball)           | tee -a $log
   (echo -n "sha1sum: "; sha1sum $tarball)          | tee -a $log

   echo `date` $tarball >> /usr/src/packages.log
}

TIMEFMT='%1R Elapsed Time - '


Re: Variability in build times.

2010-10-18 Thread Aleksandar Kuktin
On Mon, 18 Oct 2010 22:19:58 +0100
Ken Moffat k...@linuxfromscratch.org wrote:

  I've never imagined that build times were exactly repeatable, but
 I'd assumed that, *on an unloaded system with only a browser
 running* the times would be fairly close, say plus or minus 3%.
 Now, I start to wonder.

I have the same feeling from time to time.
It's like every rebuild may be taking a bit longer than the
previous one.

  Has anybody run a series of builds on any package (on the same
 system, with the same options) to see how much variation occurs in
 the elapsed time ?

I'll make a setup tonight and let it run overnight with binutils and
report my findings in the morning (CEST).

-- 
-Aleksandar Kuktin


Re: Variability in build times.

2010-10-18 Thread Ken Moffat
On Mon, Oct 18, 2010 at 04:46:51PM -0500, Bruce Dubbs wrote:
 
 I haven't looked at it in detail, but I think files in memory are kept 
 around for a while until it's needed for something else.  About the only 
 way to really get repeatability is to reboot between tries and that 
 isn't realistic.
 
 That accords with what I thought.

 In any case, I use a script like the following one for each program.
 Installing the package on /tmp allows for repeatability in size, if not 
 SBU time.
 For trial builds, I mostly use DESTDIR (or INSTALL_ROOT), so that I
can, if needed, compare what got installed with different options.
I don't normally bother with a script, just separately time the
various stages (unlike my own builds, which time from configure to
the end in whole seconds).  Then du -sk for both the source and the
installed files.  Nice script though.

ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce

Re: Variability in build times.

2010-10-18 Thread Ken Moffat
On Tue, Oct 19, 2010 at 12:01:00AM +0200, Aleksandar Kuktin wrote:
 
   Has anybody run a series of builds on any package (on the same
  system, with the same options) to see how much variation occurs in
  the elapsed time ?
 
 I'll make a setup tonight and let it run overnight with binutils and
 report my findings in the morning (CEST).
 
 Thanks, that will be interesting.

ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce

Re: Variability in build times.

2010-10-18 Thread linux fan
start=$(date +%s)
... stuff happens ...
end=$(date +%s)
elaps=$(( end - start ))

Just thinking about eliminating the finger counting that time may do.
My experience with time is that some of the time floats between user and
sys on different runs that have an equal real time.
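
As a usage example, the same arithmetic wrapped around a repeated build;
the package name and configure options are only placeholders:

# placeholder package: any tarball in the current directory will do
for run in 1 2 3 4 5; do
    rm -rf binutils-2.20.1
    tar -xf binutils-2.20.1.tar.bz2
    cd binutils-2.20.1
    start=$(date +%s)
    ./configure --prefix=/usr > ../run$run.log 2>&1
    make >> ../run$run.log 2>&1
    end=$(date +%s)
    echo "run $run: $(( end - start )) seconds" >> ../elapsed.log
    cd ..
done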


Re: Variability in build times.

2010-10-18 Thread Ken Moffat
On Mon, Oct 18, 2010 at 06:38:48PM -0400, linux fan wrote:
 start=$(date +%s)
 ... stuff happens ...
 end=$(date +%s)
 elaps=$(( end - start ))
 
 Just thinking about eliminating the finger counting that time may do.
 My experience with time is that some time floats between user and sys
 on different runs that have an equal real time.

 Yes, but ... For my buildscripts, elapsed time.  For the figures
I've been taking from 'time' on subsequent scripts, the first field,
which is 'real', unless the order of the fields has changed recently.
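
With bash's 'time' reserved word the format can be pinned down explicitly,
so there is no field order to worry about, e.g.:

TIMEFORMAT='%1R'     # elapsed (real) seconds only, one decimal place
time {
    ./configure --prefix=/usr > cfg.log 2>&1
    make > make.log 2>&1
}                    # the timing line goes to stderr, e.g. 629.4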

ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce

Re: Variability in build times.

2010-10-18 Thread Uwe Düffert
On Mon, 18 Oct 2010, Ken Moffat wrote:

 Any thoughts, or alternative suggestions ?
Yes, a few:

Over the past years I have looked at the build times quite often. Never 
systematically, but this is what I experienced: most of the time I'm 
building a package, it's because of a new version. I'm using per-package 
scripts close to LFS, but package-version independent and with 
optimization, and I only run a few of them in a row, just as many as I 
have prepared in advance. As I tend to be impatient, I quite often tail 
the previous log file of the package currently building to see when the 
build running right now is expected to finish. Most of the time the build 
times are pretty close to the previous attempt, even though I'm building a 
newer version of that package. So in my case a certain package is 
usually built for the first time after booting the machine, and the build 
times are pretty close even after (minor) version updates.

As others already suspected, I would expect latency (like seek times) of 
the hard drive to have a significant influence on build times, and thus 
times might be influenced by caching as well as by a significantly changed 
fill level of a partition.

If that's the case then using an SSD should improve the reliability of 
build times. I'll run a series of builds of the same package tonight 
(binutils as well, for comparison) on a quite fast SSD and report back.

If the HD turns out not to be the problem: what about those modern processors 
that overclock some cores automatically? That might easily be influenced 
by running *something* in the background that keeps yet another core 
active and prevents the package-building core(s) from overclocking...

Uwe


Re: Variability in build times.

2010-10-18 Thread Aleksandar Kuktin
On Tue, 19 Oct 2010 01:12:50 +0200 (CEST)
Uwe Düffert dueff...@uwe-dueffert.de wrote:

 If that's the case then using an SSD should improve the reliability of 
 build times. I'll run a series of builds of the same package
 tonight (binutils as well, for comparison) on a quite fast SSD and
 report back.

If you'll be comparing with my times, be advised that I changed my mind
about binutils and will use glibc in order to minimise caching. The
logic being that, since it is bigger, the main memory won't be able to
stretch and encompass all the files. Maybe.

I'm also thinking about adding something C++ based in the mix too.

-- 
-Aleksandar Kuktin

Re: Variability in build times.

2010-10-18 Thread Ken Moffat
On Tue, Oct 19, 2010 at 01:12:50AM +0200, Uwe Düffert wrote:
 On Mon, 18 Oct 2010, Ken Moffat wrote:
 
  Any thoughts, or alternative suggestions ?
 Yes, a few:
 
 As others already suspected, I would expect latency (like seek times) of 
 the hard drive to have a significant influence on build times and thus 
 times might be influenced by caching as well as by significantly changed 
 fill level of a partition.

 But, I'm only ever comparing times on the same system, and this
package (abiword) was almost at the end of my build, so the amount
of '/' used is almost the same (for me, /home is always a separate
filesystem - ok, I've probably extended the usage of /var/log - and
yes, updating the browser cache involves head movement, but such a
large variation ?
 
 If that's the case then using an SSD should improve the reliability of 
 build times. I'll run a series of builds of the same package tonight 
 (binutils as well, for comparison) on a quite fast SSD and report back.
 

 With respect, if we *have to* use SSDs then most of us won't be
able to edit the book and it will wither even more than it has so
far.

 If HD turns out not to be the problem: what about those modern processors 
 that overclock some cores automatically? That might easily be influenced 
 by running *something* in the background that keeps yet another core 
 active and prevents the package building core(s) from overclocking...
 
 Uwe

 Modern processors ?  Multiple cores ?  I'm using single-processors
from somewhere between 4 and 5 years ago ;)

ĸen
-- 
for applications, 31 bits are enough ;)

Re: Variability in build times.

2010-10-18 Thread DJ Lucas
On 10/18/2010 04:46 PM, Bruce Dubbs wrote:
 #!/bin/bash
 


Snip
Looks much cleaner than my own.  Included only for reference... different
ideas while we're sharing scripts (I use a top-level wrapper called lspec,
and llog is a timestamp-based logging script I slapped together to keep
the goofy names used by scrollkeeper from screwing up install-log).

#!/bin/sh

# Package Information
export version=8.10
export tarball=pcre-${version}.tar.bz2
export dir=pcre-${version}
export md5sum=780867a700e9d4e4b9cb47aa5453e4b2
export download="http://downloads.sourceforge.net/pcre/${tarball}"

. /etc/lspec.conf
. ${lspec_dir}/.count
let count++

if [ ${count} -lt 10 ]; then
lcount=000${count}
elif [ ${count} -lt 100 ]; then
lcount=00${count}
elif [ ${count} -lt 1000 ]; then
lcount=0${count}
else
lcount=${count}
fi

# I already have package
# get_package

cat ${lspec_dir}/pcre > ${build_dir}/logs/$lcount-pcre-${version} &&

{
cd ${build_dir} &&
sudo /sbin/llog -p &&

tar -xf ${source_dir}/${tarball} &&
# Account for cp -R for install logging
for file in `find . -type f`
do
touch $file
done
cd ${dir} &&
time {
echo "## PREP AND CONFIGURE ##" &&
./configure --prefix=/usr \
--docdir=/usr/share/doc/pcre-${version} \
--enable-utf8 --enable-unicode-properties \
--enable-pcregrep-libz \
--enable-pcregrep-libbz2 &&
make &&
# Checking build dir size is negligible
BUILD_SIZE=$(du -shb ./ | awk '{print $1}') &&
echo "## BEGINNING TESTSUITE ##" &&
time {
make check || true
} &&
echo "## BEGINNING INSTALL ##" &&
sudo make install &&
sudo mv -v /usr/lib/libpcre.so.* /lib/ &&
sudo ln -v -sf ../../lib/libpcre.so.0 /usr/lib/libpcre.so
} &&
sudo /sbin/llog pcre-${version} &&
echo "count=${count}" > ${lspec_dir}/.count &&
INSTALL_SIZE=$( du -shb `sudo awk '{print $1}' /var/log/llog/pcre-${version}.llog | sed /\(M\)/d ` | awk '{print $1}' | paste -sd+ | bc ) &&
FULL_BUILD_SIZE=$(du -shb ./ | awk '{print $1}') &&
cd .. &&
rm -rf ${build_dir}/${dir} &&
echo &&
echo -n "Diskspace required with testsuite is: " &&
echo -e "$FULL_BUILD_SIZE\n$INSTALL_SIZE" | paste -sd+ | bc &&
echo -n "Diskspace required without testsuite is: " &&
echo -e "$BUILD_SIZE\n$INSTALL_SIZE" | paste -sd+ | bc &&
echo
} 2>&1 | tee -a ${build_dir}/logs/$lcount-pcre-${version}




Re: Variability in build times.

2010-10-18 Thread Uwe Düffert



On Tue, 19 Oct 2010, Aleksandar Kuktin wrote:


On Tue, 19 Oct 2010 01:12:50 +0200 (CEST)
Uwe Düffert dueff...@uwe-dueffert.de wrote:

If that's the case then using an SSD should improve the reliability of
build times. I'll run a series of builds of the same package
tonight (binutils as well, for comparison) on a quite fast SSD and
report back.

If you'll be comparing with my times, be advised that I changed my mind
about binutils and will use glibc in order to minimise caching. The
logic being that, since it is bigger, the main memory won't be able to
stretch and encompass all the files. Maybe.
I stopped testing binutils anyway. All results were very close, just as I 
have been used to for years. The difference between the slowest and the 
fastest run was about 0.6% (3m04.095 vs 3m05.282 real time, measured by 
time), no matter what I was doing in the background (console only, 4850e 
(dual core), only 1 core compiling). I'll run glibc next, but I don't 
expect much difference. Maybe we should all test abiword, but I don't 
usually build it, and probably not many of its prerequisites either...


Uwe

Re: Variability in build times.

2010-10-18 Thread Uwe Düffert

On Tue, 19 Oct 2010, Ken Moffat wrote:

 But, I'm only ever comparing times on the same system, and this
 package (abiword) was almost at the end of my build, so the amount
 of '/' used is almost the same (for me, /home is always a separate
 filesystem - ok, I've probably extended the usage of /var/log - and
 yes, updating the browser cache involves head movement, but such a
 large variation ?
No, that does not sound reasonable. Does that only happen with abiword or 
are there other examples (worth mentioning)?

 With respect, if we *have to* use SSDs then most of us won't be
 able to edit the book
Well, it was just an idea in case the HD would be the reason. And even then it 
would only be needed for build times. So far we do not even have any proof 
of that. That's why we are running tests right now...

 If HD turns out not to be the problem: what about those modern processors
 that overclock some cores automatically? That might easily be influenced
 by running *something* in the background that keeps yet another core
 active and prevents the package building core(s) from overclocking...
 Modern processors ?  Multiple cores ?  I'm using single-processors
 from somewhere between 4 and 5 years ago ;)
Maybe that's the reason: What about throttling because of high 
temperature? Or running *something* in the background forcing task 
switches?

Uwe


Re: Variability in build times.

2010-10-18 Thread Uwe Düffert


On Tue, 19 Oct 2010, Uwe Düffert wrote:

I stopped testing binutils anyway. All results were very close, just as I have 
been used to for years. The difference between the slowest and the fastest run 
was about 0.6% (3m04.095 vs 3m05.282 real time, measured by time), no matter 
what I was doing in the background (console only, 4850e (dual core), only 1 
core compiling). I'll run glibc next, but I don't expect much difference. 
Maybe we should all test abiword, but I don't usually build it, and probably 
not many of its prerequisites either...
Same with glibc here: The difference between the slowest and the fastest 
run is just 0.3% (16m53.825 vs 16m57.186).


Uwe