On 12/30/2015 10:14 PM, lee wrote:
> Hi,
> 
> soon I'll be replacing the system disks and will copy over the existing
> system to the new disks.  I'm wondering how much merit there would be in
> being able to make snapshots to be able to revert back to a previous
> state when updating software or when installing packages to just try
> them out.
> 
> To be able to make snapshots, I could use btrfs on the new disks.  When
> using btrfs, I could use the hardware RAID-1 as I do now, or I could use
> the raid features of btrfs instead to create a RAID-1.
> 
> 
> Is it worthwhile to use btrfs?

Yes.

;-)

> Am I going to run into problems when trying to boot from the new disks
> when I use btrfs?

Yes.

;-)

well ... maybe.

Prepare for some learning curve, but it is worth it!

> Am I better off using the hardware raid or software raid if I use btrfs?

I would be picky here and separate "software raid" from "btrfs raid":

Software RAID: do you mean mdadm-based software RAID as we know it in
the Linux world?

btrfs offers RAID-like redundancy as well, no mdadm involved here.

The general recommendation is to stay at RAID-1 for now. That fits
your 2-disk situation.
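If you ever need to check or change the RAID profile of an existing btrfs filesystem, that can be done online. A hedged sketch (device names and the mount point /mnt are examples, not from your setup):

```shell
# Show how data and metadata chunks are laid out on the filesystem:
btrfs fi df /mnt

# Example: add a second device and convert data + metadata to RAID-1
# online. Destructive-ish operation -- double-check device names first.
btrfs device add /dev/sdb3 /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```

The balance can take a while on a full filesystem, but the filesystem stays usable during the conversion.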

> The installation/setup is simple: 2x3.5" are to be replaced by 2x2.5",
> each 15krpm, 72GB SAS disks, so no fancy partioning is involved.
> 
> (I need the physical space to plug in more 3.5" disks for storage.  Sure
> I have considered SSDs, but they would cost 20 times as much and provide
> no significant advantage in this case.)
> 
> 
> I could just replace one disk after the other and let the hardware raid
> do it all for me.  A rebuilt takes only 10 minutes or so.  Then I could
> convert the file system to btrfs, or leave it as is.  That might even be
> the safest bet because I can't miss anything when copying.  (What the
> heck do I have it for? :) )
> 
> 
> Suggestions?

I would avoid in-place conversion and the like.

Why not try a fresh install on the new disks with btrfs?
You can always step back and plug in the old disks.
You could even add the new disks beside the existing system and set up
a new rootfs alongside (I did that several times here).

-

There is nearly no partitioning needed with btrfs (one of the great
benefits).

I never had /boot on btrfs so far, maybe others can guide you with this.

My /boot is plain extX, sometimes on RAID-1 (it differs between my
laptops, desktops and servers). I size it 500 MB to have space for
multiple kernels (especially on dual-boot systems).

Then some swap-partitions, and the rest for btrfs.

So you will have something like /dev/sd[ab]3 for btrfs then.
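As a sketch of such a layout, assuming GPT disks -- the device name, partition sizes and the parted invocation are just an example, adjust to your hardware:

```shell
# Hypothetical layout: small /boot, some swap, rest for btrfs.
# Run the equivalent for both disks (sda and sdb). This destroys
# existing data on the disk!
parted --script /dev/sda \
  mklabel gpt \
  mkpart boot ext4 1MiB 501MiB \
  mkpart swap linux-swap 501MiB 4597MiB \
  mkpart root btrfs 4597MiB 100%
```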

Create your btrfs-"pool" with:

# mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3

Then check for your btrfs-fs with:

# btrfs fi show
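From there you would typically create subvolumes instead of partitions. A sketch, assuming the filesystem from above and a temporary mount at /mnt (the subvolume names "rootfs" and "home" are just a common convention, not required):

```shell
# Mount the top level of the new filesystem and create subvolumes:
mount /dev/sda3 /mnt
btrfs subvolume create /mnt/rootfs
btrfs subvolume create /mnt/home
umount /mnt

# Later, mount a specific subvolume (optionally with compression):
mount -o subvol=rootfs,compress=lzo /dev/sda3 /mnt
```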

Oh: I realize that I start writing a howto here ;-)

In short:

In my opinion it is worth learning to use btrfs:
checksums, snapshots, subvolumes, compression, and so on.
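The snapshots are what answer your original question about reverting after an update. A minimal sketch, assuming a subvolume layout like the above (the /snapshots path is just an example):

```shell
# Take a read-only (-r) snapshot of the root subvolume before an update:
btrfs subvolume snapshot -r / /snapshots/root-before-update

# List subvolumes and snapshots:
btrfs subvolume list /
```

Snapshots are cheap (copy-on-write), so taking one before every package install costs almost nothing.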

It has some learning curve, especially with a distro like Gentoo.
But it is manageable.

As mentioned here several times, I have been using btrfs on more than
six of my systems for years now, and I haven't looked back so far.

-

look up:

https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices

