On 10/07/2018 12:54 PM, Richard Melville wrote:
On Sun, 7 Oct 2018 at 16:43, Bruce Dubbs <[email protected] <mailto:[email protected]>> wrote:

    On 10/07/2018 12:58 AM, Ken Moffat wrote:
     > On Sun, Oct 07, 2018 at 07:25:55AM +0200, Theodore Driscoll wrote:
     >>     Hi
     >>
     >>     my daily-use machine runs openSuse.  To experiment with LFS I
     >>     added a second SATA HDD.
     >>
     >>     Initially the Suse drive was /dev/sda; the new, unpartitioned
     >>     drive was /dev/sdb.
     >>
     >>     But I rebooted once more.  Now Suse has swapped the names, so
     >>     what used to be /dev/sda is now /dev/sdb and vice-versa.  If I
     >>     reboot again, the names sometimes swap back, sometimes do not.
     >>
     >>     Section 2.4, Creating a New Partition, doesn't allow for this
     >>     situation - it assumes the drive names are persistent.
     >>
     >>     Has anyone else encountered this situation, and found a way
     >>     around it?
     >>
     >>     Cheers
     >>     Ted
     >
     > I've never seen an _internal_ drive change its device across reboots
     > on the same kernel, except when other internal drives were added or
     > removed.  And in that situation the device names persist for as long
     > as "this drive is plugged in here, that drive is plugged in there".
     >
     > Hmm, I suppose it is possible if all the drivers are on an initrd
     > (most distros, including Suse) and there is some variability in
     > timings - but it seems unlikely.
     >
     > So I guess that this is an external drive?  If so, expect pain when
     > you complete LFS and try to boot an external drive from grub.
     >
     > With Suse, I assume it mounts by UUID - but I've been able to avoid
     > that approach on my own systems (I think there are two variants of
     > UUID-style, only one of which is supported without an initrd - and
     > LFS doesn't use initrds.)
     >
     > For ext4, you might be able to use e2label to label the LFS
     > filesystem, and then mount with label= instead of /dev/sdXN.  But
     > I'm not sure about how well that would work when you finish LFS and
     > try to boot it, if you have other partitions which you wish to mount
     > on the Suse drive (e.g. /boot, /home) and it won't work for swap.
     >
     > But I'm sure somebody will be able to offer details on the variants
     > of UUID mounting and how to fix your problem.  In the meantime,
     > "patience, and good luck!".

    I agree with Ken.  I've never had the drive order change for internal
    SATA (or older SCSI or IDE) drives.  USB drives, yes.  For the older
    SCSI drives the order was determined by hardware, and I suspect the
    same is true for SATA.

    It is usually possible to change the boot drive in most system firmware.

    The way to ensure that GRUB loads the correct root file system is to
    use PARTUUID=<uuid> on the kernel command line.  No initrd is
    required, but a GUID Partition Table (GPT) is.  Most distros use
    filesystem UUIDs instead, but that approach requires an initrd.

    Booting from a USB drive may require rootdelay=10 on the kernel
    command line.
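A sketch of the PARTUUID approach described above; the blkid output line and UUID values below are illustrative, not real:

```shell
# Illustrative blkid output for the LFS partition (values are made up);
# in practice this line comes from running: blkid /dev/sdXN
line='/dev/sdb2: UUID="0a1b2c3d-ffff" TYPE="ext4" PARTUUID="b1c26b8e-0001"'

# Extract the PARTUUID and build the kernel argument GRUB would pass;
# with root=PARTUUID=... no initrd is needed, but the disk must use GPT.
partuuid=$(printf '%s\n' "$line" | sed 's/.*PARTUUID="\([^"]*\)".*/\1/')
echo "root=PARTUUID=$partuuid"
```

The resulting argument goes on the `linux` line in grub.cfg, e.g. `linux /boot/vmlinuz root=PARTUUID=<uuid> ro`.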


If using gdisk, a label can also be applied at the same time as partitioning.
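A sketch of a gdisk session doing this; /dev/sdb and the name "lfsroot" are assumed examples. In gdisk the `c` command sets the GPT partition name, which later shows up as the partition's PARTLABEL:

```shell
gdisk /dev/sdb
# Command (? for help): n        <- create the new partition
# Command (? for help): c        <- name (label) the partition
# Enter name: lfsroot
# Command (? for help): w        <- write the table and quit

# The partition can then be referenced independently of sda/sdb order:
ls -l /dev/disk/by-partlabel/lfsroot
```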

I'm pretty sure that using a partition-table label for the root filesystem also requires an initrd.  There is no problem using it in fstab, though.

  -- Bruce

--
http://lists.linuxfromscratch.org/listinfo/lfs-support
FAQ: http://www.linuxfromscratch.org/blfs/faq.html
Unsubscribe: See the above information page
