On 2/10/19 4:37 PM, Ken Moffat via lfs-dev wrote:
On Thu, Feb 07, 2019 at 02:47:21AM +0000, DJ Lucas via lfs-dev wrote:
On 2/6/2019 7:11 PM, Ken Moffat via lfs-dev wrote:
no existing mounts in /mnt
Actually, there is a reason that /mnt hasn't been populated for a long
time now. In FHS 2.2, IIRC, possibly earlier (I was unable to find 2.1),
'/mnt' was redefined as a mount point for temporarily mounted
filesystems (e.g. mount /dev/sda4 /mnt), but they left in a clause
allowing sub-directories, so that you were still in compliance if you
kept the old directory layout.
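To make the two conventions concrete, here is an illustrative sketch
(device names are placeholders, not from the thread):

```shell
# FHS 2.2 style: /mnt itself is the temporary mount point
mount /dev/sda4 /mnt

# Older layout, still compliant under the grandfather clause:
# per-device sub-directories under /mnt
mkdir -p /mnt/cdrom /mnt/usb
mount /dev/sr0 /mnt/cdrom
mount /dev/sdb1 /mnt/usb
```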

Interesting, but I think you misunderstood why I mentioned that: on
previous versions of SystemRescueCD, based on gentoo, the rescue
system itself had various things mounted under /mnt.  The current
Arch-based version looks as if it will be easier on those machines
where it works ...
Anyway, as to a host to build from: for the last several bare-metal
builds I did, I just did a quick Arch install (whose partition I later
repurposed for /boot or /var). It's kind of my go-to for something
quick nowadays. It typically only takes about 10 minutes to install and
reboot... much quicker than any installer IME, but that's probably only
because I've done it so many times.
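For anyone who hasn't done it, the sort of quick install I mean boils
down to roughly this (a hedged sketch from the Arch live ISO; /dev/sdX2
and the package set are placeholders, adjust to your disk and needs):

```shell
# Format and mount the partition that will later be repurposed
mkfs.ext4 /dev/sdX2
mount /dev/sdX2 /mnt

# Install a minimal base system and generate an fstab
pacstrap /mnt base linux linux-firmware
genfstab -U /mnt >> /mnt/etc/fstab

# Chroot in to set the root password and install a bootloader
arch-chroot /mnt
```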
Please note that all my testing on the new machine was before I had
freed up space in which to create new partitions, and also that the
machine has slightly broken ioapic mapping in its firmware.  With
both SRCD and Arch itself, approximately half of the attempts to
boot from the stick failed with reports that a systemd unit for LVM2
was running (with no timeout).


This is actually an upstream systemd bug. I don't believe we've patched it in LFS yet, as I haven't been able to confirm it with our builds, but I know it exists and have heard of it. It's actually a udev issue, caused by their rewrite.

Hopefully Fedora won't have that problem :-)

--
http://lists.linuxfromscratch.org/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page
