I've tested it on my home cluster: 8 OSDs (4 nodes with 2x4TB OSDs each,
Celeron J1900 and 8GB RAM) + 4 cache-tier OSDs (2 nodes with 2x250GB SSD
OSDs each, Atom D2500 and 4GB RAM).
The HDD OSDs were very slow. And the SSD OSDs sometimes stopped working
because btrfs couldn't rebalance quickly enough and filled the SSDs to
100% used space. To bring them back to life, I had to go through a
complicated procedure of freeing some space and rebalancing the btrfs tree.
Maybe real production hardware doesn't have such problems with btrfs, but I
don't think production cluster stability should depend on hardware
performance.
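For anyone who hits the same 100%-full btrfs state, the recovery I describe above looked roughly like the sketch below. The mountpoint and the exact usage thresholds are assumptions for illustration; this needs root and a real btrfs mount, so adapt it to your OSD data path before running anything.

```shell
#!/bin/sh
# Hypothetical recovery sketch for a btrfs OSD filesystem at 100% usage.
# The mountpoint argument is an assumption -- point it at your OSD path.
MNT="${1:?usage: $0 <btrfs-mountpoint>}"

# 1. See how space is allocated (data vs. metadata chunks). A filesystem
#    can report 100% used while chunks are mostly empty but fully allocated.
btrfs filesystem df "$MNT"

# 2. Free some actual space first (e.g. temporarily move data off the
#    device) -- balance needs unallocated space to relocate chunks into.

# 3. Reclaim nearly-empty data chunks, raising the usage filter gradually
#    so each pass has room to work with.
for u in 0 5 10 20; do
    btrfs balance start -dusage="$u" "$MNT"
done
```

The `-dusage=N` filter only touches data chunks that are at most N% full, which is why starting at 0 and stepping up is the usual way to unwedge a filesystem that has no unallocated space left.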

Fri, Aug 7, 2015 at 23:05, Ben Hines <[email protected]>:

> Howdy,
>
> The Ceph docs still say btrfs is 'experimental' in one section, but
> say it's the long-term ideal for Ceph in a later section. Is this
> still accurate with Hammer? Is it mature enough on CentOS 7.1 for
> production use?
>
> (kernel is  3.10.0-229.7.2.el7.x86_64 )
>
> thanks-
>
> -Ben
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
