Thanks Craig,

I had a look at RAID 1 + Tumbleweed, and it seems that there are a few "gotchas" in using btrfs on openSUSE TW.

It is interesting that Yast2 is held in such high regard for its installation and configuration abilities. But for my money the gurpmi command (and the Mandriva Control Centre) was the slickest and easiest I have ever seen. Hardware was never a problem for Mandrake/Mandriva, and installation was much faster.

Sorry about that digression.  So is there a platform that handles RAID really well? I went to openSUSE for mapping. But now photography is my main work, so maybe I should be looking at installing the distro preferred by the developers of GIMP and Darktable.

I just can't afford to fiddle for two weeks to get a result, and the openSUSE and Ubuntu fora have freaked me out somewhat.

Many thanks for persevering with me

Andrew



On 03/06/2018 10:28 PM, Craig Sanders via luv-main wrote:
[ replying back to the list. i think you accidentally replied only to me ]

On Tue, Mar 06, 2018 at 08:36:21PM +1100, pushin.linux wrote:
Thanks to all contributors to this thread, I think I have a light grip on
this.

Recipe: Using Yast2, download and install mdadm on my 1 TB SATA disk sda.

Power down and install 2 x 2 TB Seagate SATA drives.

Boot up

Somehow the system will recognise the 2 drives and then I can use mdadm to
format the drives for btrfs, nominating RAID 1 mirrored as the framework
and /data as the name of this "partition". Then set up my directories and
transfer data.

If you're using btrfs, you don't need mdadm. btrfs does its own RAID.

Just partition both new drives (with fdisk or gdisk or parted or whatever),
and then mkfs.btrfs the filesystem. e.g. if the new drives are /dev/sdb and
/dev/sdc:

     mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1

you don't actually need to partition the new drives, but a partition table
takes a tiny amount of space and makes it obvious that the disk is already in
use and shouldn't be re-formatted.  Then mount it as, e.g., /data.
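
For example, the whole sequence might look something like this (a sketch only,
assuming GPT labels and a single partition spanning each disk - adjust device
names and the mountpoint to suit):

     parted -s /dev/sdb mklabel gpt mkpart data btrfs 1MiB 100%
     parted -s /dev/sdc mklabel gpt mkpart data btrfs 1MiB 100%
     mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1
     mkdir -p /data
     mount /dev/sdb1 /data    # either device will do, btrfs finds the other half

plus a UUID= line in /etc/fstab if you want it mounted at boot.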

You can create sub-volumes with, e.g.

     btrfs subvolume create /data/video

The main advantages of a sub-volume over a subdirectory are that you can set
some attributes individually on a sub-volume, and you can snapshot them
individually too.  The disadvantage is that sub-volumes are like a separate
partition, so mv-ing a file from one sub-volume to another is actually a
copy-and-delete.
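
As an example of a per-sub-volume attribute (the path here is just an
illustration), you can turn off copy-on-write for a sub-volume holding things
like VM images - note that chattr +C only affects files created after the flag
is set:

     btrfs subvolume create /data/vms
     chattr +C /data/vms    # new files in here won't be copy-on-write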

I was going to show an example of creating a btrfs subvolume for videos with
compression disabled, but it seems I'd forgotten that btrfs doesn't support
setting compression on or off for each sub-volume.  It's all or nothing.  I'll
show how to do it with ZFS instead - BTW, btrfs calls them "sub-volumes",
ZFS calls them "datasets".  They're pretty much the same thing - different
features, of course, but they serve the same purpose.


You can create a snapshot at any time with `btrfs subvolume snapshot` and if
you have another machine with a btrfs pool (or another pool on the same
machine), you can send that snapshot to it with `btrfs send`.  Very useful for
backups.  There are various scripts around which can automatically create and
expire snapshots via cron (e.g. you can set up a schedule to keep the last 24
hourly snapshots, 7 daily, 4 weekly, and 12 monthly), making it very easy to
recover files if you accidentally delete or overwrite something.
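
As a sketch (the snapshot name and backup host below are made up):

     # btrfs send needs a read-only snapshot, hence the -r
     btrfs subvolume snapshot -r /data/video /data/video-20180306
     btrfs send /data/video-20180306 | ssh backuphost btrfs receive /backup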


Note that if you delete a file, it will still consume space on the disk as
long as there is at least one snapshot containing it.  To really delete it,
you have to delete all snapshots it appears in.  Or do nothing and let your
automatic snapshot cron job eventually get around to auto-expiring it.
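
e.g. (using the made-up snapshot name from above):

     btrfs subvolume list /data                    # see which snapshots exist
     btrfs subvolume delete /data/video-20180306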




For ZFS (assuming you'd already installed the tumbleweed ZFS packages[1]), it
would be:

     zpool create -o ashift=12 data mirror /dev/sdb /dev/sdc

"data" is the name of the pool and "mirror /dev/sdb /dev/sdc" telss zpool that
we want to create a mirrored pool using those two disks.

If you can, use the symlinks in /dev/disk/by-id/ instead of the direct device
names.  This will avoid potential problems if the devices get renumbered by
the kernel in future (the kernel explicitly does not guarantee that drives
will keep the same name across reboots, which is why it is recommended to use
UUIDs or LABELs instead of device names in /etc/fstab).
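
For example, creating the pool with by-id names from the start would look
something like this (the IDs below are invented - `ls -l /dev/disk/by-id/`
will show your real ones, which include the model and serial number):

     zpool create -o ashift=12 data mirror \
         /dev/disk/by-id/ata-ST2000DM006-2DM164_SERIAL1 \
         /dev/disk/by-id/ata-ST2000DM006-2DM164_SERIAL2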

Otherwise, just run:

     zpool export data
     zpool import -d /dev/disk/by-id/ data

That will export the pool and re-import it using the /dev/disk/by-id/ device
symlinks.

The ashift=12 tells ZFS to create the pool aligned to 4096 byte sectors - this
will work perfectly even if your drives have 512 byte sectors (because 4096 is
a multiple of 512), and will make it possible to add or replace 4K sector
drives later without compromising performance.
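
If you want to double-check afterwards, zdb can show the pool's ashift
(assuming the pool is in the default cache file):

     zdb -C data | grep ashift    # should print ashift: 12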

zpool automatically partitions the drives with a GPT partition table.

The pool is auto-mounted as /data, and any data sets you create are
auto-mounted underneath there (both are easily changed). e.g. to create a
data-set for video files with compression disabled:

     zfs set compression=on data         # enable compression by default
     zfs create data/video
     zfs set compression=off data/video  # disable it for this dataset.

You won't need to add an /etc/fstab entry for /data or any of the datasets
underneath it.  ZFS will take care of auto-mounting them.
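
You can check what's mounted where (and the compression setting of each
dataset) with:

     zfs list -o name,mountpoint,compression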


ZFS also has snapshot and send capabilities, offering the same kind of
benefits.
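
e.g. (snapshot name and backup host are placeholders again):

     zfs snapshot data/video@20180306
     zfs send data/video@20180306 | ssh backuphost zfs receive backup/video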


[1]: https://software.opensuse.org/download.html?project=filesystems&package=zfs


craig

--
craig sanders <c...@taz.net.au>
_______________________________________________
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main
