Andrew Savchenko <bircoph <at> gmail.com> writes:

> Ceph is optimized for btrfs by design, it has no configure options
> to enable or disable btrfs-related stuff:
> https://github.com/ceph/ceph/blob/master/configure.ac
> No configure option => no use flag.

Good to know; nice script.

> Just use the latest (0.80.7 ATM). You may just rename and rehash
> the 0.80.5 ebuild (usually this works fine). Or you may stay with
> 0.80.5, but with fewer bug fixes.

So should I just download from ceph.com, put it in distfiles, and
copy-edit the ebuild as ceph-0.80.7 in my /usr/local/portage, or is
there an overlay somewhere I missed?
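For the record, my understanding of the rename-and-rehash bump is roughly
the following (paths and the sys-cluster category are my assumptions;
adjust to however /usr/local/portage is registered on the box):

```shell
# Sketch of a local version bump -- assumes /usr/local/portage is already
# set up as an overlay and ceph lives under sys-cluster.
mkdir -p /usr/local/portage/sys-cluster/ceph
cp /usr/portage/sys-cluster/ceph/ceph-0.80.5.ebuild \
   /usr/local/portage/sys-cluster/ceph/ceph-0.80.7.ebuild
cd /usr/local/portage/sys-cluster/ceph
# Regenerates the Manifest; portage fetches the 0.80.7 tarball into
# distfiles and records its hashes.
ebuild ceph-0.80.7.ebuild manifest
```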

> If raid is supposed to be read more frequently than written to,
> then my favourite solution is raid-10-f2 (2 far copies, perfectly
> fine for 2 disks). This will give you read performance of raid-0 and
> robustness of raid-1. Though write i/o will be somewhat slower due
> to more seeks. Also it depends on workload: if you'll have a lot of  
> independent read requests, raid-1 will be fine too. But for large read  
> i/o from a single or few clients raid-10-f2 is the best imo.
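For anyone following along, the raid-10-f2 layout described above is
selected with mdadm's --layout flag; a minimal sketch, with the device
names being placeholders for real partitions:

```shell
# Two-disk RAID-10 with the "far 2" layout: two copies of each block,
# placed far apart on the disks so sequential reads stripe like RAID-0.
mdadm --create /dev/md0 --level=10 --layout=f2 \
      --raid-devices=2 /dev/sda2 /dev/sdb2
# Inspect the array and confirm the layout that was applied:
mdadm --detail /dev/md0
```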

Interesting. For now I'm going to stay with simple mirroring. After
some time I might migrate to a more aggressive RAID layout, once
I have a better idea of the i/o needs. With Spark (RDD) on top of Mesos,
I'm shooting for mostly "in-memory" usage, so i/o is not very heavily
used. We'll just have to see how things work out.

Last point: I'm using openrc, not systemd, at this time. Are there any
known ceph issues with openrc? I do see systemd-related items in ceph.
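In case it helps: I'd expect to wire the daemons in with the usual
openrc tools, along these lines (the exact service names installed by
the ebuild are an assumption on my part; check /etc/init.d first):

```shell
# Hedged sketch: enabling ceph under openrc. Whether the ebuild installs
# a single "ceph" init script or per-daemon scripts depends on the
# version, so list what's actually there before adding anything.
ls /etc/init.d/ | grep ceph
rc-update add ceph default   # start at boot
rc-service ceph start        # start now
```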


> Andrew Savchenko


Very good advice.
Thanks,
James



