On Sun 2020-05-10 08:53:16, James Courtier-Dutton wrote:

So, there is another solution: LVM has a "mirror" option.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/mirrored_volumes
If you used that, you would not need a RAID layer at all.
You would add all the disks to an LVM volume group, and then create
logical volumes using the LVM mirror option.
An LVM mirror divides the device being copied into regions that are
typically 512KB in size, which is a big improvement over the 500GB
chunks suggested above.
This would also give you flexibility: you could choose some of your
data to be mirrored and some not.
LVM "mirror" also lets you migrate data while it is still mounted.
You have the original LVM volume, mirror it onto a new disk, remove
the original copy.
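
For example (illustrative only; vg/lv and the disk names are
placeholders):

lvconvert -m1 vg/lv /dev/new_disk   # add a mirror leg on the new disk
lvs -o+copy_percent vg/lv           # wait until the copy reaches 100%
lvconvert -m0 vg/lv /dev/old_disk   # drop the leg on the old disk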

So, I think moving to an "LVM mirror" solution is your best bet for
future extensibility.

After reviewing all the options, this does indeed seem to be the best one in
my case. As I am doing this for the first time: are these the correct steps?

* additional sources I looked at:
- https://wiki.debian.org/LVM
- https://www.debian.org/doc/manuals/debian-handbook/advanced-administration.en.html#sect.raid-and-lvm

[new hard disk drives, still blank: sdc, sde; old hard disk drives as RAID-1 
system md127p1]

pvcreate /dev/sdc
pvcreate /dev/sde

vgcreate vg_blobs /dev/sdc /dev/sde

lvcreate -n lv_blobs -m1 vg_blobs
# How do I make the new volume use all available space? Will mirroring
# choose the physical volumes automatically? Where does the log go,
# i.e. do I need a partition on a separate disk for it? How large
# should it be, and how do I incorporate it? (My current guess is
# sketched below.)
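
From lvcreate(8) and the Red Hat document above, my current understanding
is roughly the following (untested, so please correct me):

# -l 100%FREE requests all remaining free space; with only sdc and sde
# in the VG, the two mirror legs should be placed on them automatically.
# Recent lvm2 implements -m1 with the raid1 segment type, which keeps a
# small metadata area on each leg instead of a separate mirror log, so
# no third disk is needed (the legacy --type mirror wants a log,
# selectable via --mirrorlog disk|core).
lvcreate --type raid1 -m1 -l 100%FREE -n lv_blobs vg_blobs
# If an older lvm2 rejects 100%FREE because the mirror needs twice the
# space, give an explicit size instead, e.g. -L 900G (made-up figure).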

mkfs.ext4 /dev/vg_blobs/lv_blobs
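
Presumably I should mount the new volume and let the mirror finish its
initial sync before copying; something like this (/mnt/blobs is just an
example mountpoint):

lvs -a -o name,size,copy_percent,devices vg_blobs  # wait for 100% sync
mkdir -p /mnt/blobs
mount /dev/vg_blobs/lv_blobs /mnt/blobs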

rsync … # old RAID-1 system ↔ new mirrored volume group
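
For reference, the kind of invocation I have in mind (the mountpoints
/mnt/old and /mnt/blobs are hypothetical):

# -a preserves permissions, ownership, times and symlinks; -H keeps
# hard links; -A ACLs; -X extended attributes
rsync -aHAX --numeric-ids /mnt/old/ /mnt/blobs/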

# ? remove sda + sdb from RAID-1 md127p1
# ? eliminate RAID-1 md127p1
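
If I read mdadm(8) correctly, the teardown would look something like
this (assuming sda and sdb are the member devices; /proc/mdstat or
"mdadm --detail /dev/md127" will confirm):

umount /dev/md127p1               # stop using the old filesystem
mdadm --stop /dev/md127           # stop the array
mdadm --zero-superblock /dev/sda  # wipe the RAID metadata so the disks
mdadm --zero-superblock /dev/sdb  # are no longer seen as array members
# then drop the ARRAY line from /etc/mdadm/mdadm.conf and run
# "update-initramfs -u" so the array is not assembled at boot (Debian)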

pvcreate /dev/sda
pvcreate /dev/sdb

vgextend vg_blobs /dev/sda /dev/sdb # or vgcreate vg_blobs2 ?

lvextend / lvresize # ? (see the sketch below)

# grow filesystem
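
My guess for these last steps, assuming I go with vgextend (a single VG
lets the existing mirrored LV simply grow into the new space, which a
second VG would not):

# grow the LV into all newly added free space and resize the ext4
# filesystem in one step (-r runs resize2fs); as above, if 100%FREE is
# rejected because the mirror needs twice the space, use an explicit -L
lvextend -r -l +100%FREE vg_blobs/lv_blobs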


-- 
GLLUG mailing list
GLLUG@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/gllug
