On 10-12-2014 11:49, Fernando de Oliveira wrote:
> On 10-12-2014 11:22, William Harrington wrote:
>> On Wed, December 10, 2014 11:33, Armin K. wrote:
>>> On 12/10/2014 12:28 PM, Armin K. wrote:
>>>> On 12/09/2014 08:33 PM, [email protected] wrote:
>>>>> Author: fernando
>>>>>
>>>>> Modified:
>>>>>    trunk/BOOK/xsoft/other/thunderbird.xml
>>>>>
>>>>> Modified: trunk/BOOK/xsoft/other/thunderbird.xml
>>>>> ==============================================================================
>>>>> --- trunk/BOOK/xsoft/other/thunderbird.xml  Tue Dec  9 11:19:25 2014  (r15176)
>>>>> +++ trunk/BOOK/xsoft/other/thunderbird.xml  Tue Dec  9 11:33:19 2014  (r15177)
>>>>> @@ -11,7 +11,7 @@
>>>>>    <!ENTITY thunderbird-md5sum    "3781dfb541412c7f6b530a654b834ce5">
>>>>>    <!ENTITY thunderbird-size      "164 MB">
>>>>>    <!ENTITY thunderbird-buildsize "3.6 GB (68 MB installed)">
>>>>> -  <!ENTITY thunderbird-time      "30 SBU (using parallelism=4)">
>>>>> +  <!ENTITY thunderbird-time      "16 SBU (using parallelism=4)">
>>>>
>>>> As you can see, this largely depends on what's running on the
>>>> system. Why bother updating it this often? I only check it at each
>>>> major release. Seems like too much work.
>>>>
>>>
>>> Don't get me wrong, though, I have nothing against you doing this,
>>> but doesn't it make it a little less true for someone? I mean,
>>> obviously the former value was from a system under (average) load,
>>> and the second one is from more or less a base system with no
>>> additional software running. Why not add these two, then divide
>>> them by two, to get a value that's mostly true both for a base
>>> system and for systems under (average) load (i.e. a wm, a desktop,
>>> or simply just X)?
>
> The problem has been discussed before, by Ken, Bruce, Igor, and me.
> In one of those discussions, we talked about the time taken for
> libxul.so to link. In the case you mention, the difference in time is
> exactly that: first, it took an extra 14 SBU for linking. Depending
> on the hardware, it will take less or more. I got 18 SBU on my host
> the same day I got 30 SBU in the VM.
>
> We have discussed these differing values. I think the situation is
> not satisfactory when about half (I have even seen 2/3) of the time
> is spent on the linking.
>
> I believe you missed those discussions.
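(For concreteness, Armin's averaging proposal applied to the two
timings in the diff above would give

    (30 SBU + 16 SBU) / 2 = 23 SBU

The two timings are from the commit itself; the averaging is only his
suggested convention, which the rest of the thread disputes.)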
I'm sorry. Please ignore the previous message.

>>>
>>> --
>>> Note: My last name is not Krejzi.
>>
>> Not to mention, having an SBU measured with multiple CPUs or cores
>> when the rest of the book bases its SBUs on one CPU or core, and
>> when the LFS book's SBU is measured with one core/CPU. We may as
>> well put an SBU based on multiple jobs in every book entry for
>> systems with that many CPUs or cores. I suggest leaving the SBU as
>> if someone were using one CPU or one core.
>>
>> Here is one major issue with the SBU and then stating the SBU when
>> using 4 CPUs or cores:
>>
>> The first SBU is measured using one CPU or one core. When using
>> multiple jobs, the SBU is not going to be accurate, especially when
>> someone moves the build to a different machine.
>>
>> I would suggest leaving the SBU with one CPU or core. A note with
>> machine specifics could be used for SBUs measured with multiple
>> CPUs or cores.
>>
>> Sincerely,
>>
>> William Harrington
>
> The way I am doing it was introduced by Bruce. Later, there was a
> discussion about it, I think on the dev list, and it is documented
> under "SBU values in BLFS", at:
>
> http://www.linuxfromscratch.org/blfs/view/svn/introduction/conventions.html
>
> There are many packages with that, either in the build SBU or the
> test SBU.
>
> I believe you missed all of those.

I'm sorry. Please ignore the previous message.

--
[]s,
Fernando
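(Background for readers joining the thread: in LFS, one SBU is the
time the first pass of Binutils takes to build on the machine at hand,
and every other build time is expressed as a multiple of it. A minimal
sketch of the conversion being argued about follows; the 300-second
base value and the bare make invocation are placeholders, not
measurements from this thread:

    # 1 SBU on this machine, previously obtained by timing the first
    # pass of Binutils (placeholder value).
    BASE_SECONDS=300

    # Time the package build; -j4 matches the "parallelism=4" note
    # in the book entry quoted above.
    START=$(date +%s)
    make -j4
    END=$(date +%s)

    # Express the elapsed time as a multiple of the base unit,
    # e.g. "16.0" SBU.
    echo "scale=1; ($END - $START) / $BASE_SECONDS" | bc

Because the build here uses -j4 while the base unit is measured with
one core, as William notes, the resulting figure only transfers to
machines with a similar core count, which is exactly his objection.)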
