Hi Oliver,

a good value is 100-150 PGs per OSD. So in your case, with roughly 200 OSDs, that works out to between 20k and 30k in total.
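For illustration, here is a minimal Python sketch of that back-of-the-envelope calculation. The 200-OSD figure comes from Oliver's description below, and the power-of-two rounding follows the recommendation he mentions; treat the numbers as a starting point, not a prescription.

# Back-of-the-envelope PG sizing using the 100-150 PGs/OSD rule of thumb above.
# The 200-OSD figure comes from Oliver's mail; everything else is illustrative.

def nearest_powers_of_two(target: int) -> tuple[int, int]:
    """Return the powers of two directly below and at-or-above target."""
    lower = 1
    while lower * 2 <= target:
        lower *= 2
    upper = lower if lower == target else lower * 2
    return lower, upper

osds = 200
for pgs_per_osd in (100, 150):
    total = osds * pgs_per_osd
    lo, hi = nearest_powers_of_two(total)
    print(f"{pgs_per_osd} PGs/OSD -> {total} PGs; nearest powers of two: {lo} / {hi}")

Both ends of the range land between 16384 and 32768, the nearest powers of two.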
You can increase your PGs, but keep in mind that this will keep the cluster quite busy for a while. That said, I would rather increase in smaller steps than in one large move (see the sketch of a stepped increase below the quote).

Kai

On 17.05.2018 01:29, Oliver Schulz wrote:
> Dear all,
>
> we have a Ceph cluster that has slowly evolved over several
> years and Ceph versions (started with 18 OSDs and 54 TB
> in 2013, now about 200 OSDs and 1.5 PB, still the same
> cluster, with data continuity). So there are some
> "early sins" in the cluster configuration, left over from
> the early days.
>
> One of these sins is the number of PGs in our CephFS "data"
> pool, which is 7200 and therefore not (as recommended)
> a power of two. Pretty much all of our data is in the
> "data" pool, the only other pools are "rbd" and "metadata",
> both contain little data (and they have way too many PGs
> already, another early sin).
>
> Is it possible - and safe - to change the number of "data"
> pool PGs from 7200 to 8192 or 16384? As we recently added
> more OSDs, I guess it would be time to increase the number
> of PGs anyhow. Or would we have to go to 14400 instead of
> 16384?
>
> Thanks for any advice,
>
> Oliver

--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
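A minimal sketch of the stepped increase mentioned above. It assumes the pool is still called "data" (per Oliver's mail), a pre-Nautilus release where pgp_num has to be raised by hand alongside pg_num, and an arbitrary step size; it only prints the commands rather than running them.

# Illustrative only: print the "ceph osd pool set" commands for a stepped
# pg_num increase instead of jumping to the target in one move.
# Pool name "data" and the 8192 target come from Oliver's mail; the step
# size and the need to bump pgp_num alongside pg_num (pre-Nautilus
# behaviour) are assumptions to adapt to your own cluster.

POOL = "data"
CURRENT_PG_NUM = 7200
TARGET_PG_NUM = 8192     # could equally be 16384
STEP = 256               # arbitrary example step; pick what your cluster tolerates

pg_num = CURRENT_PG_NUM
while pg_num < TARGET_PG_NUM:
    pg_num = min(pg_num + STEP, TARGET_PG_NUM)
    print(f"ceph osd pool set {POOL} pg_num {pg_num}")
    print(f"ceph osd pool set {POOL} pgp_num {pg_num}")
    print("# wait for the cluster to return to HEALTH_OK before the next step")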