On Mon, Jan 07, 2019 at 10:28:31AM +0000, Luke Kenneth Casson Leighton wrote:
> On Sun, Jan 6, 2019 at 11:46 PM Steve McIntyre <st...@einval.com> wrote:
> >
> > [ Please note the cross-post and respect the Reply-To... ]
> >
> > Hi folks,
> >
> > This has taken a while in coming, for which I apologise. There's a lot
> > of work involved in rebuilding the whole Debian archive, and many many
> > hours spent analysing the results. You learn quite a lot, too! :-)
> >
> > I promised way back before DC18 that I'd publish the results of the
> > rebuilds that I'd just started. Here they are, after a few false
> > starts. I've been rebuilding the archive *specifically* to check if we
> > would have any problems building our 32-bit Arm ports (armel and
> > armhf) using 64-bit arm64 hardware. I might have found other issues
> > too, but that was my goal.
> 
>  very cool.
> 
>  steve, this is probably as good a time as any to mention a very
> specific issue with binutils (ld) that has been slowly and inexorably
> creeping up on *all* distros - both 64 and 32 bit - with the 32-bit
> arches beginning to hit it first.
> 
>  it's a 4GB variant of the "640k should be enough for anyone" problem,
> as applied to linking.
> 
>  i spoke with dr stallman a couple of weeks ago and confirmed that in
> the original version of ld that he wrote, he very specifically made
> sure of two things: that it ONLY allocated memory up to the amount of
> *physically* resident RAM available (i.e. went into swap only as an
> absolute last resort), and that the number of object files held in
> memory at any one time was likewise kept to what the spare resident
> RAM could handle.
> 
>  some... less-experienced people, somewhere in the late 1990s, ripped
> all of that code out ["what's all this crap? why are we not just
> relying on swap? 4GB of swap will surely be enough for anybody!!!!"]
> 
>  by 2008 i experienced a complete melt-down on a 2GB system when
> compiling webkit.  i tracked it down to "-g -g -g" in the Makefile:
> i had enabled it specifically for one file, forgotten about it, and
> then accidentally recompiled everything with it.
> 
>  that resulted in an absolute thrashing meltdown that nearly took out
> the entire laptop.
> 
>  the problem is that the link phase of any application is so heavy
> on cross-references that the moment the memory allocated by the
> linker exceeds the available resident RAM, it is ABSOLUTELY
> GUARANTEED to descend into permanent, sustained swap thrashing.
> 
>  i cannot emphasise enough how absolutely critical it is for EVERY
> distribution to get this fixed.
> 
> resources world-wide are being completely wasted - power, time, and
> prematurely worn-out HDDs and SSDs - because systems which should
> really only take an hour to do a link are instead often taking FIFTY
> times longer due to swap thrashing.
> 
> not only that, but the poor design of ld is beginning to stop certain
> packages from even *linking* on 32-bit systems!  firefox, i hear, now
> requires SEVEN GIGABYTES during the linker phase!
> 
> and it's all down to that very short-sighted decision, made back in
> the late 1990s, to remove the code dr stallman wrote.
> 
> it would be extremely useful to confirm that 32-bit builds can in
> fact complete, simply by adding "-Wl,--no-keep-memory" to any that
> are failing at the linker phase due to lack of memory.
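> 
> as a concrete illustration (a hypothetical fragment, assuming gcc as
> the compiler driver and GNU BFD ld as the linker), that is a one-line
> change to a package's link flags:
> 
>     # gcc passes everything after -Wl, straight through to ld;
>     # --no-keep-memory tells BFD ld to re-read the symbol tables
>     # of input files as needed instead of caching them all in RAM
>     export LDFLAGS="$LDFLAGS -Wl,--no-keep-memory"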

Note that Firefox is built with --no-keep-memory
--reduce-memory-overheads, and that was still not enough for 32-bit
builds. GNU gold instead of BFD ld was also given a shot. That didn't
work either. Presently, to make things link at all on 32-bit platforms,
debug info is entirely disabled. I still need to figure out what minimal
debug info can be enabled without incurring too much memory usage
during linking.
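
For reference, the current arrangement looks roughly like this (a
sketch only, not the actual Firefox build configuration):

    # sketch: -g0 disables debug info at compile time, so no DWARF
    # sections ever reach the linker; the -Wl, options reduce BFD
    # ld's own memory use, but were not enough on their own
    export CFLAGS="-O2 -g0"
    export LDFLAGS="-Wl,--no-keep-memory -Wl,--reduce-memory-overheads"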

Mike
