On Thu, Feb 21, 2019 at 11:14:13PM +1100, Andrew Greig wrote:
> Looking at the disks in gparted I have:
>
> /dev/sda1
> File system lvm2 pv
> Label
> UUID sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5
> Volume Group ubuntu-vg
> Members /dev/sda1  /dev/sdb1
> Partition /dev/sda1
> Name
> Flags boot/lvm
>
> /dev/sdb1
> File system lvm2 pv
> Label
> UUID  9HV3H6-JIYu-IdaS-2CGr-lkZQ-9xcB-RVu9Ks
> Status  Active
> Volume group /dev/sda1  /dev/sdb1
> Logical Volumes root  swap-1
> Partition Path /dev/sdb1
> Name
> Flags lvm
>
> /dev/sdc1
> File system  lvm2 pv
> Label
> UUID mqbYsB-xpm2-7c11-RLN5-q47a-A0bB-wcefad
> Status Not active (not a member of any volume group)
> Volume Group
> Members
> Logical Volumes
> Partition     Path /dev/sdc1
> Name
> Flags lvm

It looks like you've added one of the two new 3TB drives to the same volume
group as your root fs and swap partition.  The other 3TB drive has been set up
as an LVM physical volume that isn't in any volume group.   Why?

Which drive is the old 1TB drive, and which are the new 3TB drives?

My *guess* is that sdb1 is the old 1TB drive (because that's the only one
where the root and swap-1 LVs are mentioned).  If that's the case, then I'll
also guess that the 1TB drive is plugged into the second SATA port....so when
you plugged the new drives in, you plugged one of them into the first SATA
port.  Try swapping the cables for those two drives around so that the 1TB
drive is in the first port.

Try running 'fdisk -l'.  That will show each disk and all partitions on
it, including the brand, model, and size of the drive.  Knowing the logical
identifiers is only half the story; you also need to know which physical drive
corresponds to those identifiers.

Once you have this information, I strongly recommend writing it down or
printing it so you always have it available when planning what to do.
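For example, these read-only commands will give you the mapping (device names
will of course differ on your system, and the LVM ones need root):

```shell
# Show every disk and its partitions, including model and size
sudo fdisk -l

# A more compact view: device name, size, type, model, serial number
lsblk -o NAME,SIZE,TYPE,MODEL,SERIAL

# LVM's own view: which physical volumes belong to which volume group,
# and which logical volumes live in it
sudo pvs -o pv_name,vg_name,pv_size
sudo vgs
sudo lvs
```

Cross-referencing the MODEL/SERIAL columns from lsblk against the stickers on
the physical drives is the easiest way to tell which sdX is which.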


> My current fstab is this
> andrew@andrew-desktop:~$ cat /etc/fstab
> # /etc/fstab: static file system information.
> #
> # Use 'blkid' to print the universally unique identifier for a
> # device; this may be used with UUID= as a more robust way to name devices
> # that works even if disks are added and removed. See fstab(5).
> #
> # <file system> <mount point>   <type>  <options>       <dump>  <pass>
> /dev/mapper/ubuntu--vg-root /               ext4    errors=remount-ro 0 1
> /dev/mapper/ubuntu--vg-swap_1 none            swap    sw              0 0
> andrew@andrew-desktop:~$
>
> So /dev/sdb1 is part of a lvm group but /dev/sdc1 is not
>
> What command do I use to get these added to the fstab? I haven't consciously
> formatted either of the two new drives,is there a step I have missed?

Dunno - there isn't enough info to safely give any direct instructions.  The
best I can offer is generic advice that you'll have to adapt to your hardware
and circumstances.

But the first thing you need to do is undo the existing mess.  Why did you add
one of the new drives to the existing volume group (VG)?  And since you added
the new drive anyway, why didn't you just create a new logical volume (LV),
format it, and start using it?

You'll need to check that it isn't being actively used in the VG, and then
remove that drive from the VG before you do anything else.
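Roughly, the removal goes like this.  /dev/sdX1 is a placeholder for whichever
3TB drive ended up in ubuntu-vg - check the pvdisplay output carefully before
running anything, because these commands modify your volume group:

```shell
# 1. Check whether any extents on the new drive are actually in use
sudo pvdisplay /dev/sdX1       # look at the "Allocated PE" line

# 2. If extents are allocated, migrate them to the other PV first
#    (this only works if the other PV has enough free space)
sudo pvmove /dev/sdX1

# 3. Remove the drive from the volume group
sudo vgreduce ubuntu-vg /dev/sdX1

# 4. Wipe the LVM metadata so the drive is a blank disk again
sudo pvremove /dev/sdX1
```

If pvdisplay shows zero allocated extents you can skip the pvmove step.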


> I haven't got the dollars for a M/B upgrade so I will purchase some more
> DDR3 RAM to get me to the limit of the motherboard, and I will purchase an
> SSD as recommended. It would be nice to get these disks running so that
> I can dump my data on to them and then add the SSD and do a fresh install
> using btrfs, which, I believe, will give me an effective RAID 1 config.

The SSD or SSDs should be used for grub, the root fs /, the EFI partition (if
any), /boot (if it's a separate partition and not just part of /), and swap
space.  The 3TB drives are for your home directory and data.

You don't want to mix the SSD(s) and the hard drives into the same btrfs
array.

You can, however, have two btrfs arrays: one for the boot+OS SSD(s), the other
for your bulk data (the 3TB drives).  If all your data is going to be under
your home directory then mount the latter as /home.  If you're going to use it
for other stuff too, mount it as /data or something and symlink into it (e.g.
while booted in recovery mode, or logged in as root with nothing running as
your non-root user: "mv /home /data/; ln -sf /data/home/ /")
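If you want to see what that mv + symlink dance does before running it for
real, here's a harmless dry run in a scratch directory (no root needed; the
/tmp paths and the "andrew" directory are just for illustration):

```shell
#!/bin/sh
set -e
# Build a fake filesystem layout under a scratch directory
root=$(mktemp -d)
mkdir -p "$root/home/andrew" "$root/data"
echo "hello" > "$root/home/andrew/file.txt"

# The same two commands as above, relative to the scratch root
mv "$root/home" "$root/data/"
ln -s "$root/data/home" "$root/home"

# The file is still reachable at its old path, via the symlink
cat "$root/home/andrew/file.txt"    # prints "hello"
```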

BTW, if you only get one SSD but plan to get another one later, btrfs allows
you to convert it to RAID-1 at any time.  So does ZFS: you can always attach a
mirror to a single drive.  To do the same with mdadm, you have to plan ahead
and create a degraded mdadm RAID-1 array (i.e. with a missing drive) when you
partition and format the drive.
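For reference, the btrfs single-to-RAID-1 conversion looks like this (device
names are placeholders; run this only after the second SSD is installed):

```shell
# Add the second SSD to the existing btrfs filesystem mounted at /
sudo btrfs device add /dev/sdY /

# Rebalance, converting both data and metadata to RAID-1
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /

# The mdadm equivalent has to be created degraded from day one,
# with "missing" standing in for the drive you'll add later:
#   sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 missing
```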


Probably the easiest way to do this is to remove ALL drives from the system,
install the SSD(s) into the first (and second) SATA ports on the motherboard,
and the two 3TB drives into the third and fourth SATA ports.  Examine the
motherboard carefully and check the m/b's manual when choosing which port to
plug each drive into - the first port will probably be labelled SATA_0 or
similar.

Boot up with the installer USB or DVD and tell it to format the SSD(s) as the
root fs with btrfs, and the two 3TB drives with btrfs (to be mounted as /home
or /data as mentioned above).

MAKE SURE YOU DELETE ANY EXISTING PARTITION TABLES AND CREATE NEW, EMPTY
PARTITION TABLES ON ALL DRIVES.  The partition tables on the SSDs should be
identical to each other (you'll need a small FAT-32 partition for EFI, a
swap partition - 4GB should be enough - and the remainder of the disk as a
partition for the root fs); the partition tables on the 3TB drives should
also be identical to each other (you probably only need one big partition on
these).
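As a sketch, that SSD layout could be created with sgdisk like so.  The sizes
are suggestions and /dev/sdX is a placeholder - and, as above, THIS DESTROYS
THE EXISTING PARTITION TABLE on that drive:

```shell
# Wipe any old partition tables (MBR and GPT)
sudo sgdisk --zap-all /dev/sdX

# EFI system partition (FAT-32), swap, then root fs on the rest of the disk
sudo sgdisk -n 1:0:+512M -t 1:ef00 /dev/sdX   # EFI
sudo sgdisk -n 2:0:+4G   -t 2:8200 /dev/sdX   # swap
sudo sgdisk -n 3:0:0     -t 3:8300 /dev/sdX   # root fs
```

The installer's manual partitioner can do the same thing interactively if you
prefer.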

When the system is installed and boots up successfully, power down, plug in
the old 1TB drive, reboot, mount it somewhere convenient (e.g. mkdir /old, and
mount the old root fs as /old), and then copy your data from it.
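Since the old root fs lives inside LVM, mounting it takes an extra step.  The
VG and LV names here come from the gparted output above; the rsync paths are
just an example:

```shell
# Make the old volume group's logical volumes visible, then mount the old root
sudo vgchange -ay ubuntu-vg
sudo mkdir -p /old
sudo mount /dev/ubuntu-vg/root /old

# Copy your data across, preserving permissions, ownership, and timestamps
sudo rsync -a /old/home/andrew/ /home/andrew/
```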

If you're going to copy your entire home directory (i.e. to keep your old
config files as well as your data) from the old drive to the new 3TB btrfs
array, you should do it while logged in as root with no processes running
as your non-root user.  IIRC, the Ubuntu installer doesn't normally prompt
you to create a password for root, so you'll need to set one yourself (e.g.
by running "su" or "sudo -i" and then "passwd root").  Don't log in as root
under X; switch to a virtual terminal with Ctrl-Alt-F1 and log in on the
text console.

Once you've copied the data from it, you should probably retire that old 1TB
drive.  Unplug it and put it away somewhere safe.  Write the date on it.  It's
effectively a backup of your data as at that date.



Speaking of backups, you should back up your data regularly.  Get a USB drive
box and another drive at least 3TB in size (e.g. a 4TB drive with a 1TB
partition and a 3TB partition will allow you to have multiple backups of /
and at least one backup of /home).  Use btrfs snapshots and 'btrfs send' to
back up to the USB drive.
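A send-based backup cycle looks roughly like this.  The mount points and
snapshot names are assumptions, and it requires /home to be a btrfs subvolume
and the USB drive to be formatted btrfs:

```shell
# Take a read-only snapshot of the data filesystem
sudo mkdir -p /home/.snapshots
sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-2019-02-22

# First (full) backup to the USB drive mounted at /mnt/backup
sudo btrfs send /home/.snapshots/home-2019-02-22 | \
    sudo btrfs receive /mnt/backup

# Later backups are incremental: -p sends only the changes since the
# previous snapshot, which both sides must still have
sudo btrfs send -p /home/.snapshots/home-2019-02-22 \
    /home/.snapshots/home-2019-03-01 | \
    sudo btrfs receive /mnt/backup
```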

IMO you're better off getting a generic USB drive box that lets you easily
swap the drive in it than getting an "external" drive.  That will allow
you to have multiple backup drives, so you can store one off-site.  (Also,
"external drive" products sold as a single self-contained unit tend to have
proprietary garbage firmware that lies to the OS and holds your data to ransom
- e.g. if the external drive box dies, you can't just pull the drive out, put
it into another drive box, and keep using it.)


craig

--
craig sanders <[email protected]>
_______________________________________________
luv-main mailing list
[email protected]
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main
