If RBDs work for your use case, I would recommend them. Networked POSIX
filesystems are a last resort, imo. Generally you use them when proprietary
software will not work with an RBD or object store (without having to run
NFS or something else on top of an RBD).
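For anyone new to the RBD path being recommended here, a minimal sketch of
the usual workflow looks roughly like this (the pool and image names, size,
and mount point are made-up examples, not from this thread):

```shell
# Hypothetical sketch: create an RBD image, map it to a block device,
# and put a regular local filesystem on top of it.
rbd create mypool/myimage --size 10240   # size is in MB, so ~10 GiB
rbd map mypool/myimage                   # prints the device, e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0                      # any local filesystem works here
mount /dev/rbd0 /mnt/myimage
```

This is the "block device per consumer" model; unlike CephFS, the image is
normally attached to a single client at a time.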

On Wed, Dec 13, 2017, 4:22 AM Bogdan SOLGA <[email protected]> wrote:

> Thanks a lot for the info, David!
>
> Have you encountered any file sync issues while using CephFS from the
> containers?
> How is the overall performance? Have you also used RBD images? If yes,
> how does the CephFS performance compare to the RBD performance?
>
> Thank you,
> Bogdan
>
> On Tue, Dec 12, 2017 at 7:53 PM, David Turner <[email protected]>
> wrote:
>
>> We have a project using CephFS (ceph-fuse) in Kubernetes containers.  For
>> us the throughput was limited by the mount point, not the cluster: sharing
>> a single mount point across containers would cap everything at the
>> throughput of one mount point, so we ended up mounting CephFS inside the
>> containers instead.  The initial reason we used Kubernetes with CephFS was
>> multi-tenancy benchmarking, and we found that on our infrastructure each
>> of 20 mount points had the same throughput as a single mount point (so 20
>> mount points gave 20x the total throughput of 1 mount point).  It wasn't
>> until we got up to about 100 concurrent mount points that we capped our
>> total throughput; until then, aggregate throughput just kept going up the
>> more ceph-fuse mount points of CephFS we had.
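A rough sketch of the one-mount-per-consumer pattern described above, using
ceph-fuse (the client id, keyring path, and mount point naming are
assumptions for illustration, not details from this thread):

```shell
# Hypothetical sketch: give each consumer its own ceph-fuse mount so
# throughput scales with the number of mounts rather than being capped
# by a single shared one. Keyring and id are assumed placeholders.
for i in 1 2 3; do
  mkdir -p /mnt/cephfs-$i
  ceph-fuse --id admin -k /etc/ceph/ceph.client.admin.keyring /mnt/cephfs-$i
done
```

In the containerized setup described above, each such mount would live
inside its own container rather than in a shared loop on the host.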
>>
>> On Tue, Dec 12, 2017 at 12:06 PM Bogdan SOLGA <[email protected]>
>> wrote:
>>
>>> Hello, everyone!
>>>
>>> We have recently started to use CephFS (Luminous, v12.2.1) from a few LXD
>>> containers. We have mounted it on the host servers and then exposed it in
>>> the LXD containers.
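The host-mount-plus-LXD-exposure setup described above can be sketched
roughly as follows (the monitor address, secret file, container name, and
paths are assumed placeholders, not details from this thread):

```shell
# Hypothetical sketch: mount CephFS on the host with the kernel client,
# then expose that mount to an LXD container as a disk device.
mount -t ceph mon1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
lxc config device add mycontainer cephfs disk \
    source=/mnt/cephfs path=/mnt/cephfs
```

One design note: with this layout the ceph client runs once on the host, so
all containers share that single mount point's throughput, which is exactly
the bottleneck David describes avoiding by mounting inside the containers.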
>>>
>>> Do you have any recommendations (dos and don'ts) on this way of using
>>> CephFS?
>>>
>>> Thank you, in advance!
>>>
>>> Kind regards,
>>> Bogdan
>>> _______________________________________________
>>> ceph-users mailing list
>>> [email protected]
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>