Hi Anish, in case you're still interested, we've been using CephFS in
production since Jewel 10.2.1.

I have a few similar clusters with some small setup variations. They're
not very big, but they're under heavy workload.

- 15~20 x 6TB HDD OSDs (5 per node), ~4 x 480GB SSD OSDs (2 per node,
reserved for the cache tier pool)
- About 4 mount points per cluster, so I assume that translates to 4
clients per cluster
- Running Ceph 10.2.9 on Ubuntu with kernel 4.4.0-24-generic.
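
For what it's worth, a CephFS kernel-client mount on a setup like ours looks
roughly like this (the monitor address, paths, and user name are placeholders,
not details from this thread; the `fsc` option is what enables the kernel
FSCache integration Anish asked about, and it requires cachefilesd to be
running on the client):

```shell
# Hypothetical CephFS kernel mount; adjust monitor address, user, and
# secretfile path for your cluster. These commands need a live cluster
# and are not runnable as-is.
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,fsc
```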

Cache tiering is enabled for the CephFS data pool, backed by a separate
pool that uses the SSDs as OSDs, if that's really what you want to know.



Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*

On Mon, Jul 24, 2017 at 3:27 PM, Anish Gupta <[email protected]> wrote:

> Hello,
>
> Can you kindly share your experience with the built-in FSCache support
> with Ceph?
>
> Interested in knowing the following:
> - Are you using FSCache in production environment?
> - How large is your Ceph deployment?
> - If with CephFS, how many Ceph clients are using FSCache?
> - Which version of Ceph and Linux kernel are you running?
>
>
> thank you.
> Anish
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>