I can confirm this; we used to run all our nodes on btrfs, and I cannot
recommend that anyone do that at this time. We had problems with deadlocks,
very slow performance, and even corruption over time, all the way up to
kernel 3.13. I haven't tried 3.14, but there are some patches mentioning
performance and deadlocks.

But again, I would not recommend it. And even if you like to live
dangerously, try it thoroughly in a test environment for a couple of months.
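
To make "test it thoroughly" a bit more concrete, here is a rough sketch of
the kind of aging test I mean, assuming a btrfs-mounted test directory and
the filefrag tool from e2fsprogs; the path, object count and sizes are
made-up values for illustration, not anything from our setup:

#!/usr/bin/env python3
# Rough aging test: simulate the small random overwrites an RBD-backed
# OSD sees, so fragmentation shows up in days rather than months.
# TEST_DIR, NUM_OBJS and the sizes are assumed values, adjust as needed.
import os, random, subprocess

TEST_DIR = "/mnt/btrfs-test/aging"      # assumed btrfs mount point
OBJ_SIZE = 4 * 1024 * 1024              # 4 MiB, like a default RBD object
NUM_OBJS = 256
WRITE_SIZE = 16 * 1024                  # small 16 KiB overwrites

os.makedirs(TEST_DIR, exist_ok=True)
paths = []
for i in range(NUM_OBJS):
    p = os.path.join(TEST_DIR, "obj_%04d" % i)
    with open(p, "wb") as f:
        f.write(b"\0" * OBJ_SIZE)
    paths.append(p)

# Age the files: many rounds of small overwrites at random offsets,
# fsynced so the COW filesystem allocates new extents each time.
for _ in range(20000):
    p = random.choice(paths)
    with open(p, "r+b") as f:
        f.seek(random.randrange(0, OBJ_SIZE - WRITE_SIZE))
        f.write(os.urandom(WRITE_SIZE))
        f.flush()
        os.fsync(f.fileno())

# Spot-check fragmentation with filefrag; a fresh 4 MiB file has a
# handful of extents, a badly aged one can have hundreds.
for p in paths[:5]:
    print(subprocess.run(["filefrag", p], capture_output=True,
                         text=True).stdout.strip())

Watching the extent counts and the latencies over a few weeks of that kind
of load tells you a lot more than a benchmark on a fresh filesystem does.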


On Wed, May 28, 2014 at 8:58 PM, Mark Nelson <[email protected]>
wrote:

> On 05/28/2014 09:19 AM, Cedric Lemarchand wrote:
>
>>
>> On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote:
>>
>>> Am 28.05.2014 16:13, schrieb Wido den Hollander:
>>>
>>>> On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote:
>>>>
>>>>> Is someone using btrfs in production?
>>>>> I know people say it’s still not stable. But do we use that many of its
>>>>> features with Ceph? And Facebook also uses it in production. It would be
>>>>> a big speed gain.
>>>>>
>>>> As far as I know, the main problem is still performance degradation over
>>>> time. On an SSD-only cluster this would be less of an issue, since seek
>>>> times on SSDs aren't a big problem, but on spinning disks they are.
>>>>
>>>> I haven't seen btrfs in production on any Ceph cluster I encountered.
>>>>
>>> It heavily fragments over time.
>>>
>> I would just add that this is inherent to *all* COW-based file systems,
>> not specific to BTRFS ;-)
>>
>
> I think the big issue is whether the BTRFS defragmentation tools can be
> made safe when lots of snapshots are used.  BTRFS tends to be very fast
> with Ceph on fresh filesystems, but the fragmentation, especially with
> small writes to RBD objects, can just kill it.
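
A quick way to see that fragmentation on an existing cluster is to count
extents per object file. Here is a small sketch, assuming a default OSD
data directory (adjust the path to your layout) and filefrag from
e2fsprogs, run as root so the object files are readable:

#!/usr/bin/env python3
# Survey extent counts of the object files under one OSD, to see how
# fragmented they are. The OSD path is an assumption for a typical
# FileStore install; point it at your own data dir.
import os, re, subprocess

OSD_DIR = "/var/lib/ceph/osd/ceph-0/current"   # assumed default location

counts = []
for root, _dirs, files in os.walk(OSD_DIR):
    for name in files:
        out = subprocess.run(["filefrag", os.path.join(root, name)],
                             capture_output=True, text=True).stdout
        m = re.search(r"(\d+) extents? found", out)
        if m:
            counts.append(int(m.group(1)))

counts.sort()
if counts:
    print("files: %d, median extents: %d, worst: %d"
          % (len(counts), counts[len(counts) // 2], counts[-1]))
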
>
>
>
>> Cheers
>>
>> Cédric
>>
>>> Also, no backports to stable kernels are available. So which kernel
>>> would you choose?
>>>
>>> Stefan
>>>
>>>
>>>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
