On 20/01/15 05:10, Rich Freeman wrote:
> On Mon, Jan 19, 2015 at 11:50 AM, James <wirel...@tampabay.rr.com> wrote:
>> Bill Kenworthy <billk <at> iinet.net.au> writes:
>>
>> I was wondering what my /etc/fstab should look like using uuids, raid 1 and
>> btrfs.
> 
> From mine:
> /dev/disk/by-uuid/7d9f3772-a39c-408b-9be0-5fa26eec8342  /      btrfs  noatime,ssd,compress=none
> /dev/disk/by-uuid/cd074207-9bc3-402d-bee8-6a8c77d56959  /data  btrfs  noatime,compress=none
> 
> The first is a single disk, the second is 5-drive raid1.
> 
> I disabled compression due to some bugs a few kernels ago.  I need to
> look into whether those were fixed - normally I'd use lzo.
> 
> I use dracut - obviously you need to use some care when running root
> on a disk identified by uuid since this isn't a kernel feature.  With
> btrfs as long as you identify one device in an array it will find the
> rest.  They all have the same UUID though.
> 
> Probably also worth noting that if you try to run btrfs on top of lvm
> and then create an lvm snapshot btrfs can cause spectacular breakage
> when it sees two devices whose metadata identify them as being the
> same - I don't know where it went but there was talk of trying to use
> a generation id/etc to keep track of which ones are old vs recent in
> this scenario.
> 
>>
>> Eventually, I want to run CephFS on several of these raid one btrfs
>> systems for some clustering code experiments. I'm not sure how that
>> will affect, if at all, the raid 1-btrfs-uuid setup.
>>
> 
> Btrfs would run below CephFS I imagine, so it wouldn't affect it at all.
> 
> The main thing keeping me away from CephFS is that it has no mechanism
> for resolving silent corruption.  Btrfs underneath it would obviously
> help, though not for failure modes that involve CephFS itself.  I'd
> feel a lot better if CephFS had some way of determining which copy was
> the right one other than "the master server always wins."
> 

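(On the fstab/uuid question: the UUID that goes in fstab is the one btrfs
reports for the whole filesystem - every member of the raid1 shows the same
one, as Rich says above.  A quick sketch, reusing the UUID from his /data
line as a placeholder only:

    # either of these shows the filesystem UUID
    blkid /dev/sdb1
    btrfs filesystem show /data

    # one fstab line covers the whole raid1; the initramfs (dracut here)
    # scans and registers the other member devices before mounting
    UUID=cd074207-9bc3-402d-bee8-6a8c77d56959  /data  btrfs  noatime,compress=none  0 0

)
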
Forget ceph on btrfs for the moment - the COW kills it stone dead under
real use.  Running even a small handful of VMs on a raid1 with ceph was
sloooooooooooow :)

You can turn off COW and use the single profile on btrfs to speed it up,
but bugs in ceph and btrfs will lose data real fast!

ceph itself (my last setup trashed itself 6 months ago and I've given
up!) will only work under real use/heavy loads with lots of discrete
systems, ideally a 10G network, and small disks to spread the failure
domain.  Using 3 hosts and 2x2g disks per host wasn't anywhere near big
enough :(  Its design means that small-scale trials just won't work.

It's not designed for small-scale/low-end hardware, no matter how
attractive the idea is :(

BillK
