"Stefan G. Weichinger" <[email protected]> writes: > On 12/30/2015 10:14 PM, lee wrote: >> Hi, >> >> soon I'll be replacing the system disks and will copy over the existing >> system to the new disks. I'm wondering how much merit there would be in >> being able to make snapshots to be able to revert back to a previous >> state when updating software or when installing packages to just try >> them out. >> >> To be able to make snapshots, I could use btrfs on the new disks. When >> using btrfs, I could use the hardware RAID-1 as I do now, or I could use >> the raid features of btrfs instead to create a RAID-1. >> >> >> Is it worthwhile to use btrfs? > > Yes. > > ;-) > >> Am I going to run into problems when trying to boot from the new disks >> when I use btrfs? > > Yes. > > ;-) > > well ... maybe. > > prepare for some learning curve. but it is worth it!
So how does that go?  Trouble booting is something I really don't need.

>> Am I better off using the hardware raid or software raid if I use
>> btrfs?
>
> I would be picky here and separate "software raid" from "btrfs raid":
>
> software raid .. you think of mdadm-based software RAID as we know it
> in the linux world?

I'm referring to the software raid btrfs uses.

> btrfs offers RAID-like redundancy as well, no mdadm involved here.
>
> The general recommendation now is to stay at level-1 for now. That
> fits your 2-disk-situation.

Well, which performs better: btrfs without its own raid on top of
hardware raid, or btrfs raid on a JBOD?

>> Suggestions?
>
> I would avoid converting and stuff.
>
> Why not try a fresh install on the new disks with btrfs?

Why would I want to spend another year to get back to where I am now?

> You can always step back and plug in the old disks.
> You could even add your new disks *beside the existing system and set
> up a new rootfs alongside (did that several times here).

The plan is to replace the 3.5" SAS disks with 1TB disks.  There is no
room to fit any more 3.5" disks, and switching disks all the time is
not an option.  That's why I want to use the 2.5" SAS disks.  But I
found out that I can't fit those as planned.  Unless I tape them to the
bottom of the case or something, I'm out of options :(  However, if I
tape them, I could use 4 instead of two ...

> There is nearly no partitioning needed with btrfs (one of the great
> benefits).

That depends.  Try to install on btrfs when you have 4TB disks.  That
totally sucks, even without btrfs.  Add btrfs and it doesn't work at
all --- at least not with Debian, though I was thinking all the time
that if it wasn't Debian but Gentoo, it would just work ...

With 72GB disks, there's nearly no partitioning involved, either.  And
the system is currently only 20GB, including two VMs.

> I never had /boot on btrfs so far, maybe others can guide you with
> this.
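For what it's worth, one argument for btrfs raid on a JBOD rather than
on top of hardware raid is independent of raw performance: with its own
raid1, btrfs keeps a checksum for every block and can repair a corrupted
copy from the good mirror, which a hardware controller cannot do because
it never sees the checksums.  A sketch of how you would exercise that
(the mount point is an example, run as root on a mounted btrfs raid1):

```shell
# Sketch only -- adjust the mount point to your setup.
# A scrub reads every block, verifies it against its checksum, and
# rewrites any corrupted copy from the intact mirror:
btrfs scrub start /
btrfs scrub status /

# Per-device error counters show which disk has been returning bad data:
btrfs device stats /
```

On top of hardware raid, btrfs would still *detect* corruption via
checksums, but with only a single logical device it has no second copy
to repair from.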
> My /boot is plain extX on maybe RAID1 (differs on
> laptops/desktop/servers), I size it 500 MB to have space for multiple
> kernels (especially on dualboot-systems).
>
> Then some swap-partitions, and the rest for btrfs.

There you go, you end up with an odd setup.  I don't like /boot
partitions.  Like swap partitions, they need to be on raid.  So unless
you use hardware raid, you end up with mdadm /and/ btrfs /and/ perhaps
ext4, /and/ multiple partitions.

When you use hardware raid, it can be disadvantageous compared to
btrfs-raid --- and when you use it anyway, things are suddenly much more
straightforward because everything is on raid to begin with.

We should be able to get away with something really straightforward,
like btrfs-raid on unpartitioned devices and special provisions in
btrfs for swap space so that we don't need extra swap partitions
anymore.  The swap space could even be allowed to grow (to some limit)
and shrink back to a starting size after a reboot.

> So you will have something like /dev/sd[ab]3 for btrfs then.

But I want straightforward :)

> Create your btrfs-"pool" with:
>
> # mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3
>
> Then check for your btrfs-fs with:
>
> # btrfs fi show
>
> Oh: I realize that I start writing a howto here ;-)

That doesn't work without an extra /boot partition?

How does btrfs perform when you use swap files instead of swap
partitions to avoid the need for mdadm?

> In short:
>
> In my opinion it is worth learning to use btrfs.
> checksums, snapshots, subvolumes, compression ... bla ...
>
> It has some learning curve, especially with a distro like gentoo.
> But it is manageable.

Well, I found it pretty easy since you can always look up how to do
something.  The question is whether it's worthwhile or not.  If I had
time, I could do some testing ...

Now I understand that it's apparently not possible to simply make a
btrfs-raid1 from the two raw disks, copy the system over, install grub
and boot from that.
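For reference, the naive recipe I had in mind would look roughly like
this -- a sketch only, not something I have tested.  Device names and
the mount point are examples, and whether GRUB2 can actually read the
raid1 profile on your distro is exactly the part in doubt (btrfs leaves
the first 1 MiB of each device free, so grub-install should have room
to embed itself even on unpartitioned disks):

```shell
# Sketch, not a tested recipe.  /dev/sda and /dev/sdb are examples.
# 1. btrfs raid1 across the two raw, unpartitioned disks:
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb

# 2. Mount it and copy the running system over:
mount /dev/sda /mnt
rsync -aHAXx --numeric-ids / /mnt/

# 3. Make grub available inside the copy and install it on both
#    disks so either one remains bootable if the other fails:
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt grub-install /dev/sda
chroot /mnt grub-install /dev/sdb
chroot /mnt grub-mkconfig -o /boot/grub/grub.cfg
```

If that last grub-mkconfig step chokes on btrfs, that would be the
point where the separate extX /boot partition becomes unavoidable.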
(I could live with swap files instead of swap partitions.)

> As mentioned here several times I am using btrfs on >6 of my systems
> for years now.  And I don't look back so far.

And has it always been reliable?
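On the swap-file idea above: as far as I can tell, current kernels
don't support swap files on btrfs at all, and when support does arrive
the file will have to be NOCOW, uncompressed, and on a single device
(so not on the raid1 data profile).  Assuming a kernel with that
support, the setup would be a sketch like:

```shell
# Sketch; assumes a kernel that supports swap files on btrfs.
# The NOCOW attribute must be set while the file is still empty,
# which is why dd is used instead of creating the file with content.
truncate -s 0 /swapfile
chattr +C /swapfile                       # disable copy-on-write
dd if=/dev/zero of=/swapfile bs=1M count=8192   # 8G is an example size
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```

That still leaves swap unprotected by raid, of course -- which is why
built-in swap provisions in btrfs would be the really straightforward
answer.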

