On Sat, Nov 28, 2020 at 10:14:10AM +0100, Pierre Labastie wrote:
>
> Sorry, but for me, the answer to the "has anybody..." question
> (translated to "have you...") is "no"... (and the answer to "actually
> broken..." is probably "yes", since there is no provision for that in
> jhalfs). The only packages that are built in chroot when I do a jhalfs
> build are:
> libxml2
> libxslt
> DocBook DTD 4.5
> lynx
> sudo
> wget
> GPM
> subversion
>
> and their required and recommended dependencies (a dozen more
> packages). When all that is built, I switch to the fresh system, using
> lynx when I need a browser (not always easy, especially on sourceforge
> or github).
>
If you are able to build those packages in chroot then I suspect I
can hack the system to ensure the necessary mounts and SHELL are in
place. I'm not initially going to use jhalfs to build LFS itself.
> > 2. Does jhalfs easily support -jN builds ? I could not see that
> > mentioned in the svn docs, apologies if I've missed it. My plan
> > would probably be to build a base LFS system (initially using my own
> > scripts to tune it for the box in terms of things like firmware and
> > kernel config), image that, then use it to build "minimal" Xorg
> > (minimal in jhalfs terms, probably builds a lot of things that I
> > currently ignore at that stage). Then image that and build
> > different desktop environments.
>
> -jN is implemented for make (through MAKEFLAGS, which is set in
> envars.conf). I guess you can set NINJAJOBS in the environment (can be
> added to envars.conf).
>
Thanks.
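For the record, my understanding is that something like this would
go in envars.conf (MAKEFLAGS is the documented knob; NINJAJOBS is my
guess and may need exporting elsewhere, so check your jhalfs copy):

```shell
# envars.conf fragment -- MAKEFLAGS is documented by jhalfs;
# adding NINJAJOBS here is an assumption on my part.
export MAKEFLAGS='-j4'
export NINJAJOBS=4
```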
> >
> > Using -jN and only building one system at a time is important
> > because of things like rust. I've spent a few days trying to use
> > libcgroup on sysv (rust apparently accepts cpu restrictions from
> > that) but given up in disgust (it's too hard to configure). So
> > anytime I build rust it thinks it can use N+2 cores of my machine.
> > That also means that a big machine will need a lot of memory.
>
> As you might guess, there is nothing in jhalfs concerning either cargo
> or libcgroup. I think I had once to use the
> /sys/devices/system/cpu/{on,off}line files to allow building rust, but
> I haven't done that lately (4-core Haswell with HT and 16GB RAM, but
> the rust problem could have come from building in a VM with not much
> allocated memory).
>
For up to 8 cores/threads, my rule of thumb is 2GB per core for
rust. On my laptop I have to take 4 cores offline, shut down the
desktop except for xfwm and a couple of terms, and it still tends to
swap when building firefox. But then the nominal 8GB RAM is only
6.7GB after video and iommu take their chunks.
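As a sketch of that rule of thumb (the numbers are mine, nothing
official):

```shell
# Rough sketch of my 2GB-per-rustc-job rule of thumb; adjust to taste.
rust_jobs() {
  # $1 = usable RAM in whole GB; allow one job per 2GB, minimum 1
  local j=$(( $1 / 2 ))
  [ "$j" -lt 1 ] && j=1
  echo "$j"
}

# Taking cores offline the way Pierre describes (needs root):
#   echo 0 > /sys/devices/system/cpu/cpu7/online
```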
> >
> > Of course, an alternative use for a build machine is to allocate N
> > systems and build something on each of them using -j1 (just to test
> > if it builds). Because of rust (and ninja in qtwebengine) I don't
> > think I'll ever do that.
> >
> > 3. I recall that DejaVu is recommended as a runtime dependency for
> > some things. I'm sort-of thinking about building desktops and then
> > seeing if they seem to work - I suppose I'd have to provide my own
> > extra scripts to install dejavu ? If so, I might cheat and just
> > copy the TTFs from local storage into /usr/share/fonts.
>
> That's what I do. When I need a ttf font, I just copy paste
> instructions from the Xorg ttf page (using lynx...).
>
OK - if I get this going I'll hope to prepare some standard "extras"
which I can drop in.
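i.e. something along these lines (the paths are made up, and the
fc-cache call is from memory of the BLFS TTF instructions):

```shell
# Install *.ttf from a local stash into a fonts directory and refresh
# the font cache.  Paths in the usage example are hypothetical.
install_ttf() {
  install -d -m755 "$2" &&
  install -m644 "$1"/*.ttf "$2" &&
  { command -v fc-cache > /dev/null && fc-cache -f "$2" || true; }
}

# e.g. install_ttf /srv/fonts/dejavu /usr/share/fonts/dejavu
```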
> >
> > At this point I have doubts about all of this, even about what will
> > be a useful spec for the machine (even how much local storage will
> > be useful), so I don't think this is going to happen very quickly.
>
> I think the most storage consuming package is qtwebengine, which can go
> up to more than 2GB per job (but how many jobs consume that? Bruce
> seems to be able to build at -j22 with 32GB RAM). Other packages do not
> seem to go much higher than 1GB per job.
>
I hadn't thought about that, but it explains why, when building
current LFS/BLFS in chroot yesterday, the box had started to use
swap (4-core machine, 16GB RAM, firefox and falkon both running on
the host).
But my thoughts on storage included disk sizes - I think my next
step should be to find some space on my Haswell (I've got a second
drive mostly set aside for this sort of thing, from when I was
examining CFLAGS etc).
> >
> > TIA
>
> As a general warning about jhalfs (for BLFS): the book instructions are
> for a first build, and there are not more than a dozen packages needing
> editing the scripts in that case. But when rebuilding, there are many
> more problems (one example is when a system group or user has already
> been created, it cannot be created again, so the instructions for that
> have to be commented out). Also, usually, you do not want to run the
> configuration part again (especially instructions of the form:
> "cat >> somefile << EOF", because then "somefile" would contain several
> times the same stanza. Not usually a problem, but not clean). If you
> wipe out everything that has been done after "minimal" Xorg, several of
> those annoyances would disappear, though. Still, some packages do need
> editing the scripts... Actually, kf5 and plasma scripts are among
> those. Complete automation has not been reached yet (not enough
> information in the book XML, I'd say).
>
Yes, I was vaguely aware of some of that.
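One way I might guard against those re-run problems (a sketch only;
the marker string and the group in the comment are invented):

```shell
# Append a stanza to a file only if a marker string is not already
# there, so re-running the build script does not duplicate it.
append_once() {
  # $1 = target file, $2 = marker string; stanza is read from stdin
  grep -qs -- "$2" "$1" || cat >> "$1"
}

# Similarly, only create a group (or user) if it does not yet exist:
#   getent group netdev > /dev/null || groupadd -g 86 netdev
```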
> Also, tests and documentation building do not work well with jhalfs,
> unless optional dependencies are enabled, but even in that case, some
> dependencies like sphinx are not in the book (and so are not built
> unless a script is added). The problem with optional deps is that they
> lead to a lot of circular deps, and the decision jhalfs makes is not
> always sound
> when that occurs.
>
I don't aim to run tests or build documentation at this stage. I'm
more interested in finding out when a package which built OK when
last tried, and has not changed, has now been broken by something
else.
> BTW, jhalfs for LFS is OK. CFLAGS and the like can be passed through
> the "optimization" part (not very well documented), and -jN is an
> option in one of the submenus.
>
Thanks. For this I don't intend to alter CFLAGS or CXXFLAGS in
BLFS, and I'm not sure if I'll do that on the LFS system - maybe add
stack protection etc. For the host LFS system, yes, but not for the
build system.
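If I do add hardening on the host, it would be something like this
(the flag choice is mine, not a jhalfs default):

```shell
# Hypothetical hardening flags for the host LFS build only; the BLFS
# build machine would keep the plain book defaults.
export CFLAGS='-O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2'
export CXXFLAGS="$CFLAGS"
```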
> Not sure I've helped you make up your mind, but let me (us) know how
> you
> progress with that project.
>
> Regards
> Pierre
>
Thanks, I now see that I need to experiment using my Haswell to get
a feel for how this will turn out. I also need to review some old
desktop cases I've got (can't remember what size motherboards they
will take), and ask more generally about video cards.
ĸen
--
Internal error in fortune program:
fnum=2987 n=45 flag=1 goose_level=-232323
Please write down these values and notify fortune program admin.