On Sun, Oct 19, 2014 at 9:12 PM, Bruce Dubbs <[email protected]> wrote:
> Ken Moffat wrote:
>>
>> On Fri, Oct 17, 2014 at 11:45:06AM -0500, Bruce Dubbs wrote:
>>>
>>> Kenneth Harrison wrote:
>>>>
>>>>
>>>> The build and install of binutils in Chapter 5.4 is set at 1.0 SBU as
>>>> a baseline for your calculation needs. Usage of the listed "time"
>>>> command per the example given should give you an approximate time
>>>> frame for calculating 1.0 SBU on your system that will be installing
>>>> LFS. Once you have acquired what 1.0 SBU will be on your system, you
>>>> can then calculate the overall time to build LFS completely."
>>>
>>>
>>> There are some good things here that I'll incorporate.  Thanks.
>>>
>>>    -- Bruce
>>>
>>   I did not reply to this thread earlier, but I'm now beginning to
>> question exactly how consistent SBU values really are.  Do not
>> misunderstand me: on the same installation, times are consistent to
>> within a few seconds (for almost any package) when repeated.  But
>> for different machines, times can vary widely.
>>
>>   People who recall all my previous posts will know that I repeat
>> pass-1 binutils after I have booted a new system, to get an accurate
>> figure from that toolchain, as if it was building itself.  In
>> general, newer toolchains (particularly newer gcc) take longer
>> to build binutils, and anyone starting from an old version of gcc
>> will find that their apparent SBU from LFS chapter 5 has little
>> relevance to what happens in BLFS.
>>
>>   Then I discovered that some of my initial attempts to rebuild
>> binutils were resulting in excessively large SBU times - I
>> eventually blamed that on my fcrontab job to (daily) run updatedb &&
>> mandb, which gets scheduled just after the first boot and uses a lot
>> of memory.  If I wait until it completes, the repeat-SBU value is
>> less.
>>
>>   A couple of years ago, I got an AMD phenom with nearly 8GB of RAM
>> (vga takes about 512K here) as my main build/test machine, and an i3
>> SandyBridge with only 4GB as a lower-power alternative.  I prefer
>> to run with cpufreq scaling down my clock speeds when idle, because
>> electricity costs money as well as mostly adding to greenhouse
>> gases.  In those days, the phenom was noticeably faster for a -j1
>> SBU.  But things have changed: the SandyBridge changed to a newer
>> driver which only offers 'performance' and 'powersave'.  That was
>> not a problem - even with 'performance' it still tends to fall back
>> to half of maximum frequency when quiet.  The Phenom, meanwhile,
>> changed from the K8 driver to acpi-cpufreq.  That _appeared_ to be
>> a problem (SBU times
>> started to grow a lot).  For a while, echoing 100 to the ondemand
>> sampling_down_factor appeared to keep the phenom in the same
>> ballpark as the i3.
>>
>>   But for the last few months, my SBU measurements on the Phenom have
>> been growing.  I tried changing to the performance governor (run all
>> four cores flat out, and damn the expense) for measurements, but
>> even there it was taking more time than the i3.  Then I looked at
>> alternative schedulers, such as Con Kolivas's bfs which is supposed
>> to make things better on desktops - that showed a very slight
>> improvement in my SBU, but nothing to write home about.
>>
>>   At that point, I started to think about replacing the mobo and cpu,
>> even though it is not yet 3 years old [ an AMD FX6 or FX8 to get more
>> cores and slightly faster processing than with my previous-generation
>> Bulldozer processor ].  However, I noticed that building a new kernel
>> was still significantly faster on the AMD than the i3 : perhaps it is
>> only c++ which got slower (actually, I didn't think pass 1 binutils
>> used c++, but who knows ?).
>>
>>   FWIW, on the phenom in 7.6 the SBU is now 129.229s, while on the
>> i3 it is 107.754s (although the chapter 5 value was 62.711s which is
>> frankly unbelievable).  No, I do not believe the decimals, or even
>> the units of the seconds, are particularly useful, that is only how
>> my scripts measure them.
>>
>>   And then tonight I have been updating firefox on my 7.6 systems.
>> For each of them, the script builds nss, updates the certificates,
>> and then builds firefox with the same config.  Using -j4, on the
>> phenom the script took 41m40.something while on the i3 it took
>> 57m17.something.  From that, I conclude that in practice the AMD
>> continues to be faster, even though its SBU is now significantly
>> longer.
>>
>>   So, for me in BLFS the SBU _will_ differ depending on which machine
>> I use.  Which makes me wonder how useful the measurement now is.
>
>
> It's quite useful when looking at it as an order of magnitude.  If the SBU
> value is 1.0 or less, I figure two minutes or less.  If it's 20 SBU, then
> it's time to take a break while it churns away.
>
> On my current laptop (i7), the SBU is 120 seconds; on my core2 it's
> 118 (go figure).  My 686 is 230 seconds, but I don't build much on
> that any more.  On the core2
> in a qemu session, the time is 210 seconds.
>
> So things vary a bit, but the SBU times are reasonably consistent in that on
> any given system, 2 SBU is probably about twice as long as 1 SBU.
>
>   -- Bruce
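
For anyone joining the thread here: the 1.0 SBU everyone is quoting is
simply the wall-clock time of the chapter 5.4 pass-1 binutils build,
measured with the shell's 'time' keyword.  The book lists the full set
of configure options; the shape of the measurement is just the timing
wrapper around them, roughly:

  time { ./configure --prefix=/tools && make && make install; }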

Likewise, 1.0 SBU on my system, using make -j3, is about 2 minutes
37 seconds.  My system is an Athlon64 X2 at 2.5 GHz with 4 GB of
DDR2-667 RAM in dual channel mode.
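
To turn a book SBU figure into an estimate for this box I just scale
it by that baseline.  For example (the 4.5 SBU value below is made up,
purely to show the arithmetic):

  SBU=157       # seconds measured here for 1.0 SBU (2m37s with make -j3)
  PKG_SBU=4.5   # hypothetical SBU figure listed for some package
  awk -v s=$SBU -v p=$PKG_SBU 'BEGIN { printf "about %.0f minutes\n", s*p/60 }'

which works out to roughly 12 minutes on this machine.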

I have noticed that in a virtual machine (such as VMware, VirtualBox,
or Qemu), 1.0 SBU can stretch to as much as 6 minutes 16 seconds, but
that is to be expected from a virtualized system.
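
(As an aside on the cpufreq settings Ken mentions above: on my kernels
those are plain sysfs writes, something like the lines below, though
the exact paths vary with the driver and kernel version:

  # put every core on the 'performance' governor for a measurement
  for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
      echo performance > "$g"
  done

  # or stay with 'ondemand' but make it hold the top frequency longer
  echo 100 > /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor

so it is easy to experiment before concluding that the hardware itself
has slowed down.)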

-Jim