On Sun, Aug 18, 2013 at 01:49:17PM -0300, Fernando de Oliveira wrote:
> 
> All 5 VMs have finished. They run with 4 CPUs and are i686. Two, as I
> said before, have 1GB RAM and run on the other host; 3 have 1.5GB and run
> on this host, which has many other things running as well, including FF
> and TB.
> 
> On the other host, the times were
> 
> SBU_TIME: 36.00411522
> SBU_TIME: 35.76518218
> 
> On this host,
> SBU_TIME: 49.10614525
> SBU_TIME: 52.37500000
> SBU_TIME: 59.61038961
> 
> Perhaps, again, the fact that I have an AMD64 build running on an i686
> host, inside a 32-bit vmplayer, explains why this machine has always
> given me trouble. It was built to replace this host: I would copy it and
> change whatever was needed for physical instead of virtual hardware,
> which, as done before, is more practical than what I did with LFS-7.2,
> built directly on a physical machine. But as I have written before, I
> discovered that, for many reasons, I still need 32-bit. So I waited for
> LFS-7.4 to build a complete new 32-bit system. I only need 64-bit for
> creating the OJDK binary for the book. I hope to start building tomorrow.
> 
 A couple of comments on SBUs - not sure if they relate to what you
wrote there, but I'd like to record what I do for package edits.

 This is prompted by my LFS-7.4 build on i686.  The host was LFS-7.2
and a single-threaded initial SBU was 78.392 seconds (whoot! fast!).
But one of the first things I do on a new system is remove /tools,
recreate an empty /tools, and run the SBU commands as root.  Usually
a gcc version increase means things run slower, and in this case
I've gone from 4.7.1 to 4.8.1.  My SBU on this machine is now
150.849 seconds.

 When I'm putting a new version into BLFS I always try to measure it
on the current release.  So my recent changes were measured on 7.3,
and whatever I do now will be measured on 7.4.

ĸen
-- 
the first time as tragedy, this time as farce
-- 
http://linuxfromscratch.org/mailman/listinfo/blfs-dev
FAQ: http://www.linuxfromscratch.org/blfs/faq.html