The cephfs_metadata pool makes sense on SSD, but it won't need a lot of
space. Chances are that you'll have plenty of SSD storage to spare for
other uses.
Personally, I'm migrating away from a cache tier and rebuilding my OSDs. I
am finding that performance with Bluestore OSDs with the block.db on SSD is
better than what the cache tier was giving me.
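For what it's worth, a minimal sketch of that pool placement using a
device-class CRUSH rule; the rule name and the disk paths in the last
command are only examples, and it assumes the SSD OSDs already report the
"ssd" class:

  # Create a replicated CRUSH rule that only selects OSDs of class "ssd".
  # Arguments: rule name, CRUSH root, failure domain, device class.
  ceph osd crush rule create-replicated replicated_ssd default host ssd

  # Point the CephFS metadata pool at that rule; its PGs get remapped onto
  # the SSD OSDs in the background.
  ceph osd pool set cephfs_metadata crush_rule replicated_ssd

  # When rebuilding an HDD OSD as Bluestore with block.db on SSD,
  # ceph-volume takes the DB device explicitly (device paths here are
  # just examples):
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1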
Hello,
sorry to jump in.
I'm looking to expand with SSDs on an HDD cluster.
I'm thinking about moving cephfs_metadata to the SSDs (maybe with a device
class?) or using them as a cache layer in front of the cluster.
Any tips on how to do it with ceph-ansible?
I can share the config I currently have.
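Before picking an approach, a few read-only commands show what device
classes the cluster already advertises (nothing cluster-specific assumed
here):

  # Show which device classes the OSDs currently advertise.
  ceph osd crush class ls

  # Show the per-class shadow hierarchy, i.e. which hosts/OSDs a
  # class-restricted CRUSH rule would actually select.
  ceph osd crush tree --show-shadow

  # List the OSD ids belonging to a given class, e.g. the SSDs.
  ceph osd crush class ls-osd ssd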
I have deployed, expanded and upgraded multiple Ceph clusters using
ceph-ansible. Works great.
What information are you looking for?
--
Sinan
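For context, the expansion flow with ceph-ansible is roughly the
following; the inventory path and host names are illustrative, and
whether --limit is enough (rather than a full site.yml run or the
add-osd playbook some releases ship) depends on the ceph-ansible
version, so treat this as a sketch:

  # 1. Add the new OSD hosts to the inventory under [osds], e.g.
  #    osd-node-07 and osd-node-08 (example names).
  # 2. Describe their disks in group_vars/osds.yml, following the
  #    osds.yml.sample shipped with your ceph-ansible release.
  # 3. Re-run the playbook, limited to the new hosts:
  ansible-playbook -i hosts site.yml --limit "osd-node-07,osd-node-08"

  # 4. Watch the cluster take the new OSDs and rebalance:
  ceph -s
  ceph osd df tree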
On 17 Apr 2019 at 16:24, Francois Lafont wrote:
Hi,
+1 for ceph-ansible too. ;)
--
François (flaf)
On 4/17/19 4:24 AM, John Molefe wrote:
Hi everyone,
I currently have a ceph cluster running on SUSE and I have an expansion
project that I will be starting around June.
Has anybody here deployed (from scratch) or expanded their ceph cluster via
ansible? I would appreciate it if you'd share your experiences, challenges,
and lessons learned.
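In case it helps as a starting point, a from-scratch ceph-ansible run
looks roughly like this; the branch and file names below are the upstream
samples, so match them to the Ceph release you're targeting:

  # Fetch ceph-ansible and pick the stable branch matching the target
  # Ceph release (stable-4.0 is just an example).
  git clone https://github.com/ceph/ceph-ansible.git
  cd ceph-ansible
  git checkout stable-4.0

  # Start from the shipped samples.
  cp site.yml.sample site.yml
  cp group_vars/all.yml.sample group_vars/all.yml
  cp group_vars/osds.yml.sample group_vars/osds.yml

  # Fill in an inventory with your mons, mgrs, osds, mdss, ... then:
  ansible-playbook -i hosts site.yml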