Hi,
Since you didn't get an immediate reply from a developer, I'm going to be bold 
and repeat my interpretation that the documentation implies, perhaps not 
clearly enough, that the 50-100 PGs per OSD rule should be applied to the 
total across all pools, not per pool. I hope a dev will correct me if I'm wrong.
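
In other words (my reading of the docs, not an official formula): sum 
pg_num * replica size over all the pools, divide by the number of OSDs, and 
aim for the result to land in that 50-100 range:

    PGs per OSD = (sum over pools of pg_num * size) / num_osds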

With your config you must have an average of roughly 400 PG replicas per OSD 
(3072 PGs * 3 replicas / 24 OSDs = 384). Do you find 
peering/backfilling/recovery to be responsive? How is the CPU and memory usage 
of your OSDs during backfilling?

Cheers, Dan

-- Dan van der Ster || Data & Storage Services || CERN IT Department --


-------- Original Message --------
From: "McNamara, Bradley" <[email protected]>
Sent: Thursday, March 13, 2014 08:03 PM
To: [email protected]
Subject: [ceph-users] PG Calculations

There was a very recent thread discussing PG calculations, and it made me doubt 
my cluster setup.  So, Inktank, please provide some clarification.

I followed the documentation and interpreted it to mean that the PG and PGP 
counts are calculated on a per-pool basis.  The recent discussion introduced a 
slightly different formula that adds in the total number of pools:

# OSDs * 100 / 3

vs.

# OSDs * 100 / (3 * # pools)
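
To make sure I'm comparing the two correctly, here is a small Python sketch of 
both readings (the function name and the round-up-to-a-power-of-two step are 
my own assumptions from reading the docs, not anything official):

    import math

    def pg_target(num_osds, replica_size, num_pools=1, per_osd=100):
        # Budget of ~100 PGs per OSD, divided by the replica count
        # (each PG is stored replica_size times) and, in the second
        # reading, split evenly across the pools.
        raw = num_osds * per_osd / (replica_size * num_pools)
        # The docs suggest rounding pg_num up to the nearest power of two.
        return 2 ** math.ceil(math.log2(raw))

    print(pg_target(24, 3))               # whole cluster: 800 -> 1024
    print(pg_target(24, 3, num_pools=3))  # per pool: ~267 -> 512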

My current cluster has 24 OSDs, a replica size of 3, and the standard three 
pools, RBD, DATA, and METADATA.  My current total PG count is 3072, which by 
the second formula is way too many.  So, do I have too many?  Does it need to 
be addressed, or can it wait until I add more OSDs, which will bring the ratio 
closer to ideal?  I'm currently using only RBD and CephFS, no RadosGW.
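
If I'm applying the second formula correctly, it gives 24 * 100 / (3 * 3) ~ 267 
PGs per pool (256 or 512 after rounding to a power of two), whereas, assuming 
the 3072 is split evenly, I currently have 1024 in each pool.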

Thank you!

Brad
