On 16/2/19 2:44 pm, Craig Sanders via luv-main wrote:
On Sat, Feb 16, 2019 at 01:02:44PM +1100, Andrew Greig wrote:
I have had some disks "ready to go" for a couple of months, meaning all that
was required was to plug the SATA cables into the MB. I plugged them in
today and booted the machine, except that it did not boot up. Ubuntu 18.04,
it stopped at the Ubuntu burgundy screen and then went black and nowhere
from that state.

I shut it down and removed the 2 SATA cables from the MB and booted up -
successfully.

It is apparent that I lack understanding; hoping for enlightenment.

Is your /etc/fstab configured to mount the root fs (and any other filesystems)
by device node (e.g. /dev/sda1), or by the UUID or LABEL?

If you're using device node names, then you've run into the well-known fact
that linux does not guarantee that device names will remain the same across
reboots.  This is why you should always either use the filesystems' UUIDs or
create labels on the filesystems and use those.


The device node may change because the hardware has changed - e.g. you've
added or removed drive(s) from the system (this is likely to be the case for
your system).  They may also change because the load order of driver modules
has changed, or because of timing issues in exactly when a particular drive
is detected by linux.  They may also change after a kernel upgrade.  Or they
may change for no reason at all.  They are explicitly not guaranteed to be
consistent across reboots.

For over a decade now, the advice from linux kernel devs and pretty much
everyone else has been:

DEVICE NODES CAN AND WILL CHANGE WITHOUT WARNING.  NEVER USE THE DEVICE NODE
IN /etc/fstab.  ALWAYS USE UUID OR LABEL.

BTW, if you want to read up on what a UUID is, start here:

https://en.wikipedia.org/wiki/Universally_unique_identifier


Note: it's not uncommon for device node names to remain the same for months
or years, even with drives being added to or removed from the system.  That's
nice, but it doesn't matter - think of it as a happy coincidence, certainly
not as something that can be relied upon.



To fix, you'll need to boot a "Live" CD or USB stick (the gparted and
clonezilla ISOs make good rescue systems), mount your system's root fs
somewhere (e.g. as "/target"), and edit "/target/etc/fstab" so that it refers
to all filesystems and swap partitions by UUID or LABEL.

If you don't have a live CD (and can't get one because you can't boot your
system), you should be able to do the same from the initrd bash shell, or by
adding "init=/bin/bash" to the kernel command line from the grub menu.  You'd
need to run "mount -o rw,remount /" to remount the root fs as RW before you
can edit /etc/fstab.  Any method which gets you your system's root fs mounted
RW will work.
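
Put together, the procedure is roughly the following (a sketch only: /dev/sde3
stands in for whatever your real root partition is, and the editor is whatever
you have available):

```shell
# Live-CD route: mount the installed system's root fs and edit its fstab.
# Run as root; substitute your actual root partition for /dev/sde3.
mkdir -p /target
mount /dev/sde3 /target
${EDITOR:-nano} /target/etc/fstab    # change device nodes to UUID= or LABEL=
sync
umount /target

# initrd / init=/bin/bash route: the root fs is already mounted (read-only)
# at /, so remount it read-write and edit in place.
mount -o rw,remount /
${EDITOR:-nano} /etc/fstab
sync
```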


To find the UUID or LABEL for a filesystem, run "blkid".  It will produce
output like this:


# blkid
/dev/sde1: LABEL="i_boot" UUID="69b22c56-2f10-45e8-ad0e-46a7c7dd1b43" TYPE="ext4" PARTUUID="1dbd3d85-01"
/dev/sde2: LABEL="i_swap" UUID="a765866d-3444-48a1-a598-b8875d508c7d" TYPE="swap" PARTUUID="1dbd3d85-02"
/dev/sde3: LABEL="i_root" UUID="198c2087-85bb-439c-9d97-012a87b95f0c" TYPE="ext4" PARTUUID="1dbd3d85-03"

If blkid isn't available, try 'lsblk -f'.  Both blkid and lsblk will be on a
system rescue disk, but may not be available from an initrd shell.  If udev
has already run, you can find symlinks linking the UUID to the device name in
/dev/disk/by-uuid.
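
Those symlinks are just UUID-named links pointing back at the device nodes, so
resolving one gives you the current device name. A small sketch (it builds a
mock directory so it runs without real disks; on a real system just point
readlink at /dev/disk/by-uuid directly, and the UUID here is the example one
from the blkid output above):

```shell
# Mock up the /dev/disk/by-uuid layout: each entry is a symlink named
# after the UUID, pointing at the device node.
mkdir -p /tmp/mock-by-uuid
ln -sf ../../sda1 /tmp/mock-by-uuid/198c2087-85bb-439c-9d97-012a87b95f0c

# Resolve a UUID to its device name (the basename of the link target):
basename "$(readlink /tmp/mock-by-uuid/198c2087-85bb-439c-9d97-012a87b95f0c)"
# prints: sda1
```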

NOTE: UUIDs will *always* exist for a filesystem; they are created
automatically when the fs is created.  Labels will only exist if you've
created them (the exact method varies according to the filesystem - e.g. for
ext4, by using the "-L" option when you create a fs with mkfs.ext4, or by
using "tune2fs" any time after the fs has been created).
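
For ext4, the two methods look like this (a sketch using a file-backed image
so it can be tried without touching real hardware; on a real disk you'd point
these at the partition device, e.g. /dev/sde3, and the label names are just
examples):

```shell
# Create a small scratch image to stand in for a partition.
truncate -s 16M /tmp/demo.img

# Method 1: set the label at filesystem-creation time with -L.
# (-F forces mkfs to accept a regular file, -q quietens it.)
mkfs.ext4 -q -F -L i_root /tmp/demo.img

# Method 2: change (or set) the label any time later with tune2fs.
tune2fs -L i_root_new /tmp/demo.img

# Verify with blkid, as above.
blkid -s LABEL -o value /tmp/demo.img    # should print: i_root_new
```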



Using the above as an example, if your fstab wanted to mount /dev/sde3 as /,
change /dev/sde3 to UUID=198c2087-85bb-439c-9d97-012a87b95f0c - e.g.

          UUID=198c2087-85bb-439c-9d97-012a87b95f0c    /    ext4    defaults,relatime,nodiratime 0 1

alternatively, if you've created labels for the filesystems, you could use something like:

          LABEL=i_root    /    ext4    defaults,relatime,nodiratime 0 1


Do this for **ALL** filesystems and swap devices listed in /etc/fstab.
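
If you'd rather script the edit than do it by hand, something like the
following works (a sketch on a scratch copy, using the example device node
and UUID from the blkid output above; the sed syntax assumes GNU sed, and
you'd review the result before copying it over /etc/fstab):

```shell
# Build a demo fstab line using the old device-node style.
printf '/dev/sde3\t/\text4\tdefaults,relatime,nodiratime 0 1\n' > /tmp/fstab.demo

# Rewrite the device node to its UUID= form in place.
sed -i 's|^/dev/sde3\b|UUID=198c2087-85bb-439c-9d97-012a87b95f0c|' /tmp/fstab.demo

cat /tmp/fstab.demo
```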


Save the edited fstab, run "sync", and then unmount the filesystem.  You
should then be able to boot into your system.

craig

--
craig sanders <c...@taz.net.au>
_______________________________________________
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main

Thanks Craig,

This my /etc/fstab

andrew@andrew-desktop:~$ sudo cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/ubuntu--vg-root /               ext4    errors=remount-ro 0       1
/dev/mapper/ubuntu--vg-swap_1 none            swap    sw              0       0

and this is the results of:

andrew@andrew-desktop:~$ blkid
/dev/sda1: UUID="sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5" TYPE="LVM2_member" PARTUUID="92e664e1-01"
/dev/mapper/ubuntu--vg-root: UUID="b0738928-9c7a-4127-9f79-99f61a77f515" TYPE="ext4"

after hot-plugging the two drives (I chose to try this to see whether they would be picked up and configured the same way a USB key is detected). It seems that sdb and sdc have been detected.

dmesg gives this:

[  279.911371] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  279.912343] ata5.00: ATA-9: ST2000DM006-2DM164, CC26, max UDMA/133
[  279.912349] ata5.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[  279.913492] ata5.00: configured for UDMA/133
[  279.913503] ata5: EH complete
[  279.913799] scsi 4:0:0:0: Direct-Access     ATA      ST2000DM006-2DM1 CC26 PQ: 0 ANSI: 5
[  279.914390] sd 4:0:0:0: Attached scsi generic sg4 type 0
[  279.914487] sd 4:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[  279.914494] sd 4:0:0:0: [sdb] 4096-byte physical blocks
[  279.914557] sd 4:0:0:0: [sdb] Write Protect is off
[  279.914562] sd 4:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[  279.914647] sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[  279.926227]  sdb: sdb1
[  279.926570] sd 4:0:0:0: [sdb] Attached SCSI disk
[  330.877835] ata4: exception Emask 0x10 SAct 0x0 SErr 0x40d0000 action 0xe frozen
[  330.877846] ata4: irq_stat 0x00400040, connection status changed
[  330.877855] ata4: SError: { PHYRdyChg CommWake 10B8B DevExch }
[  330.877868] ata4: hard resetting link
[  331.750805] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  331.751777] ata4.00: ATA-9: ST2000DM006-2DM164, CC26, max UDMA/133
[  331.751784] ata4.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[  331.752909] ata4.00: configured for UDMA/133
[  331.752920] ata4: EH complete
[  331.753212] scsi 3:0:0:0: Direct-Access     ATA      ST2000DM006-2DM1 CC26 PQ: 0 ANSI: 5
[  331.753808] sd 3:0:0:0: Attached scsi generic sg5 type 0
[  331.754015] sd 3:0:0:0: [sdc] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[  331.754022] sd 3:0:0:0: [sdc] 4096-byte physical blocks
[  331.754069] sd 3:0:0:0: [sdc] Write Protect is off
[  331.754074] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[  331.754155] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[  331.779255] sd 3:0:0:0: [sdc] Attached SCSI disk

Since the drives have not been partitioned or formatted, should I just download the latest Ubuntu and install it as a server, with the two drives set up in a RAID configuration?

Or could I just run gparted and partition and format those disks alone?

There is something else I noticed in dmesg

[   65.106562] EDAC amd64: ECC disabled in the BIOS or no ECC capability, module will not load.
                Either enable ECC checking or force module loading by setting 'ecc_enable_override'.
                (Note that use of the override may cause unknown side effects.)
Is this something that I should address?

I am puzzled by the almost empty fstab - when I was running openSUSE the fstab was quite large.

Many thanks

Andrew

