On 9/17/19 2:36 PM, Ken Moffat via blfs-support wrote:
On Tue, Sep 17, 2019 at 01:59:05AM -0400, Jared Stevens via blfs-support wrote:

(I am attempting the suggested "triple vertical dots" reply in Gmail
-- hopefully this works; otherwise I will use a different method of
replying.)


That part works, now it's just remembering to trim what has become
no longer relevant ;)  And yes, that can be a hard thing to decide,
so leaving too much is probably better than removing too much.

More to the point, some observations and suggestions below.

This time, the build got frustratingly close to the end (11172/12514)
before I received yet another fatal error. Apparently, my LFS system ran
out of space on the drive while it was creating the thousands of temp files
for the build:

as: BFD (GNU Binutils) 2.32 assertion fail ../../bfd/elf.c:3103
{standard input}: Fatal error: can't close
obj/gpu/command_buffer/service/gles2_sources/gles2_sources_jumbo_2.o: No
space left on device

Too late for this build, and dealing with the other problems as you
later suggested is a better way forward.  But some comments:

You mention tmpfs - any build will put temporary files in /tmp, and
conventionally /tmp is a tmpfs which gets cleared on reboot.

But any tmpfs is conventionally sized as half of the available real
RAM (i.e. whatever is installed, less reserved areas such as for
integrated graphics).  So, on a machine with a nominal 16GB, the
maximum size for a tmpfs may be between 7.0 and 7.8GB.
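
If that turns out to be too small, a tmpfs can be given an explicit
size.  As a rough sketch (adjust the size and the mount point to
whatever your own fstab uses):

  # see how big /tmp currently is, and how full
  df -h /tmp

  # example fstab entry giving /tmp an explicit 12G limit - a tmpfs
  # only consumes memory/swap for what is actually stored in it
  tmpfs  /tmp  tmpfs  size=12G,nosuid,nodev  0  0

  # or resize an already-mounted tmpfs on the fly
  mount -o remount,size=12G /tmp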

If you compile in /tmp (or indeed in a separate tmpfs) that space is
enough for many packages - and of course writing to memory is fast.
But the memory used (and anything left over from previous compiles)
is not available to the compiler (or, if you have enough swap,
things might get swapped out).

Building qtwebengine is painful - it uses a lot of memory for the
compile, quite apart from the space used for the resulting files.

$ ls /tmp/
bash: ls: command not found

The only solution is to log out of the chroot, unmount the disk (I
had to force the unmount, as a regular unmount gave a "target is
busy" message), and then remount and enter the chroot again.


Actually, if /tmp on the host system has space (if it doesn't, you
probably can't open a new login), you could run df from the host to
see what is full and then perhaps clear old things out.

Here is the output of `df -h` on my LFS system:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd2       442G  101G  319G  24% /
udev            5.8G     0  5.8G   0% /dev
tmpfs           5.8G     0  5.8G   0% /run

The drive is 500 GB in total, with roughly 480 GB allocated to the
root partition; the remaining space is taken up by my EFI vfat
partition (500 MB) and the swap partition (16 GB).


I think 442GB is more than enough, but 101GB used suggests there is
a lot of space used by something.  My own sources, scripts and notes
are mounted over nfs - but they live on a 50GB filesystem.  For
desktop builds I now prefer a 25GB '/' for a system.  If you have
debug information I guess that could need a bigger system.  For many
purposes, a stripped desktop can fit in 10GB.

The '5.8G' figures above suggest you have nominally 12GB of RAM.
But your running host system will also be using RAM, and ninja
compiles will default to scheduling one job per CPU.  On x86_64 a
ballpark figure for the required memory is 2GB per core, so with
less than that you may need to reduce how many jobs are run (export
NINJAJOBS - see LFS for the fix which allows this).
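
As a rough sketch (this assumes ninja was built with the LFS change
so that it honours NINJAJOBS):

  # limit ninja to 4 parallel compile jobs
  export NINJAJOBS=4

(For a package where you run ninja directly, 'ninja -j4' does the
same thing.)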

I am led to believe that the build attempted to create all of the
necessary files in the tmpfs filesystem and simply ran out of the
given 5.8 GB of space, but I am not certain.  I hope it did not
manage to completely fill the 319 GB available on the root partition
at the time.
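
Whether it was the tmpfs or the root partition that filled can be
checked with a single df call against both mount points, e.g.

  # show usage for / and /tmp together - adjust the paths to
  # wherever the build's temporary files actually live
  df -h / /tmp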


What matters is the amount of system memory, and swap, and their
usage.  When I'm building big packages, either LFS in chroot or
later BLFS (usually after booting it), I often keep 'top' running to
see what is going on.  Getting modern versions of top to give you
useful data and readable colours (I use black and white because I
run ttys and terms with a black background) is an interesting
exercise, but once it is done (for your normal user) write out the
config.

In my current settings (very different from the defaults) I have:

(top part)
a line showing this is top, with date, uptime, ... load average
a line showing the number of tasks
one line per cpu showing usage, with a graph of '|' for use
two lines showing memory and swap usage

(main part) - headings, and details for processes.  The default
produces a 'tree' structure, which I find useless (almost all user
processes are off the bottom of the display).

Running top while I write this on my laptop I've got firefox
processes, xfce processes, falkon, qtwebengine, nm-applet and top
sometimes showing up (as well as system processes).

Getting top to that state took some time, so it is well worth saving
the details of whatever options work for you.  I can also remember
that it has taken a few attempts on different machines (looked ok,
then later I realised things were not quite as I intended - the top
manpage covers it all but is not easy reading).
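
From memory (check the manpage for the version you have), the
relevant procps-ng top keys are roughly these - treat it as a
sketch rather than a recipe:

  1   toggle a separate usage line for each cpu
  t   cycle the cpu summary display (including the '|' graphs)
  m   cycle the memory/swap summary display
  V   toggle the 'forest' (tree) view of processes
  z   toggle colour/mono
  W   write the current settings to your toprc so they persist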

Finally - the reason I particularly need this on my laptop is for
updating firefox and/or qtwebengine.  I have 8 cores on a ryzen
2500u with nominal 8GB RAM (6.7GB total according to top).  When I
update those packages I need to kill both of them to clear out
memory.  With xfce, some xfce panel applets, nm-applet and a few
xfce-term open I updated to qtwebengine-5.12.5 a few days ago (this
is on 8.4) - 0.1 GB swap got used.

Oh, and for this machine I need to restrict the number of jobs to 4
for both those packages - otherwise their builds will swap badly and
probably OOM.
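
One way to do that, as a sketch (adjust the numbers to your own
machine): set NINJAJOBS=4 as above for qtwebengine, and for firefox
add a line to the mozconfig used for the build:

  # limit the firefox build to 4 parallel jobs
  mk_add_options MOZ_MAKE_FLAGS="-j4"

Keeping 'free -h' (or top) in view while it runs shows how close to
swapping the build is getting.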

Good luck, when you get back to qtwebengine.

Just for comparison, I do things a bit differently.

First, I do use an nfs mount for sources and scripts and logs, but I don't build there:

SOURCE         TARGET                 FSTYPE     SIZE   USED  AVAIL USE%
lfs84:/srv/src /usr/src               nfs        246G 199.3G  34.2G  81%

The reason it has 200G used is that I have a very long history of older versions of packages. There are 875 directories in that partition.

I do recommend using a separate /tmp for an LFS/BLFS environment. I used to build in /tmp but not any more:
/dev/sdb14     /tmp                   ext4      29.4G     2G  25.9G   7%

I now build in a custom directory /build:
/dev/sdc4      /build                 ext4     491.2G 109.5G 356.7G  22%
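
As a sketch, the /etc/fstab entries for a layout like this would be something along these lines (device names taken from the listings above - match them to your own disks):

/dev/sdb14  /tmp    ext4  defaults  0  2
/dev/sdc4   /build  ext4  defaults  0  2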

I don't delete packages after building because I keep them for possible reference. If I rebuild a package, the previous build is removed first.

I've started using 30G for the / partition. It is enough for all the BLFS packages:
/dev/sda14     /                      ext4      29.4G  25.1G   2.8G  86%

I do have multiple copies of some large installations in /opt: libreoffice, kde, qt. /opt/texlive by itself is 2.1G.
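
A quick way to see where that space goes is something like:

du -sh /opt/* | sort -h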

  -- Bruce