On Sat, Aug 15, 2020 at 11:37:23PM +0200, Elias Rudberg via blfs-support wrote:
> Hi Hans,
> 
> > How can I verify whether I ran out of memory or not?
> 
> That's something I would like to know also. I don't know how to do that.
> 
> Some things you could try:
> 
> - monitor the memory usage using "top" or similar, while the compilation is
> going on
> 

With modern versions of top, you may need to spend some time
configuring it to your preferences, but you should be able to get it
to show memory and swap at the bottom of the upper part of its
display, below the CPU activity if that is being shown.

Mine is configured to show the active processes (not the default
'tree' style), so when it is running with processes which use a lot
of memory (graphical browsers, C++ compiles, rust compiles) I can
see how much memory they are using.
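For example, with a recent procps-ng top (the key bindings may differ
on older versions) something like this should work:

  # sort the task list by resident memory, biggest users at the top
  top -o %MEM
  # or, inside a running top, press 'M' to sort by memory and 'm' to
  # cycle the style of the memory/swap summary lines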

You can also use 'free' to get the memory/swap usage.
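For example (flags are from procps-ng free, check the man page if
yours differs):

  # human-readable totals for memory and swap, refreshed every 5 seconds
  free -h -s 5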

On sysv I would look at dmesg to see if it mentioned an OOM (out of
memory).  If you are running systemd then 'journalctl -k' might show
you the dmesg output (based on a quick google for systemd dmesg, not
from experience, so could well be wrong).
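If the OOM killer did fire, something along these lines ought to find
it in the logs (the journalctl variant again from reading, not from
experience):

  # sysv / plain dmesg
  dmesg | grep -i -E 'out of memory|oom'
  # systemd: kernel messages via the journal
  journalctl -k | grep -i -E 'out of memory|oom'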

> - try to free up some memory and then try again, for example if you were
> using a window manager you could try without that (boot to runlevel 3 or
> something)
> 
> - try to reduce the amount of memory it is using when compiling, for example
> turn off parallel make (if that was used) and/or change compiler
> optimization flags to use less optimizations, e.g. -O1 instead of -O2.
> 
Playing with CFLAGS does not always do what you expect (it depends
on the individual package whether you need to take special action to
force your own CFLAGS), and trying to detune released packages seems
like a bad idea.  For what little it is worth, I did some
experiments just over a year ago with the aim of forcing my own
CFLAGS and CXXFLAGS and exploring some of the options.  The results
(basically one run of each variation, but with some upgrades along
the way) were mostly inconclusive, but somewhere in there are
details of what I had to do to the packages I build to get them to
obey my CFLAGS (or, in some cases, to not use my optimization of -O2
or -O3), because some packages default to -O3 but will detune to -O2
if you pass that, and on some of my less-powerful machines I do
generally use -O2.
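As a rough sketch of the sort of thing I mean, for a typical
autotools package (details vary, and some packages ignore these
settings entirely):

  # gentler optimization and a serial build to reduce peak memory use
  CFLAGS="-O1" CXXFLAGS="-O1" ./configure --prefix=/usr
  make -j1
  # some packages only honour flags forced on the make command line:
  # make CFLAGS="-O1" CXXFLAGS="-O1"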

But the problem was in nss.  I do not regard that as a large
package, although it is a slow one when built using -j1.
AFAICS building nss-3.55 uses less than 300 MB, which should be
trivial.

> - try in another computer that has more memory to see if it works there
> 
> Hope this helps
> 
> / Elias

Another thought is that there might have been a RAM problem which
caused g++ to be killed.  Again, dmesg or equivalent might show what
happened.  If you got a segmentation fault on an LFS system, then
either something was miscompiled using opcodes which are not
available (that has been known with e.g. gmp using its default
configure scripts on low-end Intel machines, where the gmp
developers assumed that opcodes from the higher-end versions of that
generation were present in all models), or else there might be a
memory fault.

For the opcode issue, it seems unlikely that you would manage to
complete LFS and then on the same machine hit this when trying to
build NSS.  But if you built LFS on one machine and then tried to
run it on a different machine it could happen (e.g. I took a binary
from an old LFS optimised for a Kaveri and eventually managed to get
it working-enough to build LFS on a Ryzen+, but getting there was
'fun' with no guarantee of success).
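If you suspect that sort of mismatch, one quick check is to compare
the CPU flags on the machine which built the system with those on
the machine running it:

  # list the instruction-set extensions (sse2, avx, ...) this CPU advertises
  grep -m1 '^flags' /proc/cpuinfo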

For memory problems, on a past x86_64 AMD machine (I think it was
some sort of old phenom with 4 cores) I had a low-end motherboard
and from time to time it would segfault if using -j4, or very
occasionally it would segfault even on -j1 and need to be turned off
for a while.  I suspect that insufficient voltage in the RAM was the
problem (someone else had similar results), but power supply issues,
or cooling, can also do this.  Beyond that, you can run memtest86
(or the old unmaintained memtest86+, with the reservation that
running it on all cores usually locks up).  If RAM has gone bad (or
is being run with too-optimistic settings) it usually shows up
within a couple of passes.

ĸen
-- 
Juliet's version of cleanliness was next to godliness, which was to
say it was erratic, past all understanding and was seldom seen.
                          -- Unseen Academicals