Our use case is not OpenStack, but we have a cluster of a similar size to
what you are looking at. Our cluster currently has 540 OSDs with 4 PB of raw
storage spread across 9 nodes.
2 pools:
- 512 PGs, 3-way replication
- 32768 PGs, RS(6,3) erasure coding (99.9% of the data lives in this pool)
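For what it's worth, counting each RS(6,3) PG against k + m = 9 OSDs and
each replicated PG against 3, that works out to roughly
(512 * 3 + 32768 * 9) / 540, or about 549 PG shards per OSD on our cluster.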
The general recommendation is to target around 100 PG/OSD. Have you tried
the https://ceph.com/pgcalc/ tool?
On Wed, 4 Apr 2018 at 21:38, Osama Hasebou wrote:
> Hi Everyone,
>
> I would like to know what kind of setup the Ceph community has been using
> for their OpenStack Ceph configurations when it comes to the number of
> pools & OSDs and their PGs.
>
> Ceph documentation briefly mentions this for small clusters, and I would
> like to know from your experience,