On Wed, Apr 17, 2019 at 04:50:54AM +0000, DJ Lucas via lfs-dev wrote:
> On 4/12/2019 7:47 AM, Pierre Labastie via lfs-dev wrote:
> > On 12/04/2019 12:37, DJ Lucas via lfs-dev wrote:
> > >
> > Comparison does not work anymore because of some randomization in code
> > generation... Maybe it could be disabled by some switch, though. We may
> > look at what is done for comparing the second and third builds of gcc
> > at the end of a bootstrap.
>
> No, you can't do direct 1:1 comparisons, but you can still compare them
> using other tools and attributes. You can't guarantee that they are
> identical (they aren't), but you should be able to glean reasonable
> assurance that they are functionally equivalent by maybe comparing the
> symbol table for executables, and using a tool like abidiff for
> libraries (shot in the dark, I haven't actually verified either is a
> viable test case).
>
> > > I'm sure there are lots of holes in the above, being that I woke
> > > with it, but thought I'd throw it out there anyway. I think that
> > > ticks off all of the boxes identified in the earlier thread without
> > > introducing more exotic flags, or even existing hacks we currently
> > > use for the toolchain. Thoughts?
> >
> > It is certainly worth trying, but who will be able to find the time
> > for doing this? I'm working steadily towards a complete jhalfs
> > automated LFS/BLFS (some kind of reference build), trying not to ask
> > my fellow editors to modify their habits, and it takes almost all my
> > free time.
>
> You are absolutely right; I think Ken was alluding to the same. We need
> new blood to go and experiment. Look at the changes Xi has put in; they
> have caught me depending on old assumptions more than once. Jeremy is
> coming back to assist with JHALFS in whatever capacity he sees fit.
> More capable eyes are just plain helpful.
> Just putting the ideas out there for now; maybe somebody finds it worth
> the time, maybe even somebody outside of the core devs will find it an
> interesting experiment... or not. :-/
>
> --DJ
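[Editorial sketch: DJ's suggestion of comparing symbol tables rather than doing a 1:1 binary diff can be sketched as a small script. This is hypothetical and unverified, as DJ himself hedges; the library paths in the comments are made-up examples, and abidiff (from libabigail) is a separate, more thorough check for libraries.]

```shell
#!/bin/bash
# Sketch of DJ's idea: judge two builds of the same binary "functionally
# equivalent" by comparing their exported symbols instead of their bytes.
# All paths in the comments below are hypothetical examples.

# List defined dynamic symbols, keeping only symbol type and name;
# addresses differ between builds (ASLR, layout), so they are discarded.
syms() {
    nm -D --defined-only "$1" 2>/dev/null | awk '{print $2, $3}' | sort -u
}

compare_syms() {
    if diff <(syms "$1") <(syms "$2") >/dev/null; then
        echo "symbol tables match"
    else
        echo "symbol tables differ"
    fi
}

# Example usage (hypothetical paths):
#   compare_syms build1/usr/lib/libfoo.so build2/usr/lib/libfoo.so
# For shared libraries, abidiff compares the ABI in much more depth:
#   abidiff build1/usr/lib/libfoo.so build2/usr/lib/libfoo.so
```

A symbol-table match only shows the two builds export the same interface, not that the code behaves identically, which is why DJ calls it "reasonable assurance" rather than a guarantee.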
I'm now coming back to this to summarise the reason why I gave up trying
to use farce to compare an initial build against the results of using
that build to build itself: unexplained variation with a more-recent
toolchain (this was 10 years ago, or maybe more, and might have been
documented on cross-lfs rather than lfs).

Since then, the concept of reproducible builds has been developed (check
that, using the same toolchain and scripts, the distributed binary
packages match). It has some of the same issues, and apparently tools
have been created to help with some of them. Certainly, ASLR (put
variables in random places, and therefore use different addresses when
accessing them) is a large part of this (see wikipedia).

Related to that, I found various posts. One mentioned using a seeded
random number generator so that builds were deterministic.

Having suffered from a lack of entropy last year (on some of my machines
with SSDs, not enough entropy to run all my BLFS bootscripts - solved by
adding weak entropy from haveged), and given general upstream efforts to
harden builds and increase randomness, I think that the "can it build
itself" tests might be possible, but only in a very constrained
environment where special measures are taken to seed randomness.

However, the problem of building one package repeatably is somewhat
simpler than building the base LFS system repeatably, and the latter
might be impractical.

ĸen
-- 
With a few red lights, a few old bits, we made the place to sweat.
No matter what we get out of this, I know, I know we'll never forget.
Smoke on the water, a fire in the sky. Smoke, on the water.
--
http://lists.linuxfromscratch.org/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page
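[Editorial sketch: the "very constrained environment" Ken describes is roughly what the reproducible-builds effort has since standardised. Below is a minimal, hypothetical sketch of the usual knobs for rebuilding a single package repeatably; SOURCE_DATE_EPOCH is the reproducible-builds.org convention for pinning embedded timestamps, and `setarch -R` (from util-linux) disables ASLR for the processes it launches. The build command itself is a made-up placeholder.]

```shell
#!/bin/bash
# Sketch of constraining a single package build for repeatable output.
# The environment variables are reproducible-builds conventions; the
# commented-out build command is a hypothetical example, not from LFS.

# Pin timestamps that tools (gcc's __DATE__/__TIME__, tar, gzip ...)
# would otherwise take from the clock. The value is an arbitrary fixed
# date in seconds since the epoch:
export SOURCE_DATE_EPOCH=1555477854

# Normalise locale and timezone, two common sources of byte differences:
export LC_ALL=C
export TZ=UTC
umask 022

# Disable ASLR for the build, so that any address-dependent behaviour in
# the tools (the randomness Ken mentions) does not vary between runs:
#   setarch "$(uname -m)" -R make

echo "pinned: SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH TZ=$TZ"
```

Even with all of this pinned, a full-system rebuild can still diverge (filesystem ordering, build paths, parallelism), which matches Ken's point that one package is a much easier target than the whole base LFS system.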
