Hello,

On Tue, 9 Sep 2014 09:42:13 +0100 Luis Periquito wrote:

> I was reading on the number of PGs we should have for a cluster, and I
> found the formula to place 100 PGs in each OSD (
> http://ceph.com/docs/master/rados/operations/placement-groups/).
> 
> Now this formula has generated some discussion as to how many PGs we
> should have in each pool.
> 
> Currently our main cluster is being used for S3, cephFS and RBD data type
> usage. So we have 3 very big pools (data, .rgw.buckets and rbd) and 9
> small pools (all the remaining ones).
> 
> As we have a total of 60 OSDs we've been discussing how many PGs we
> should really have. We are using a replication of 4.
> 
> Should we have a total around 1500 PGs distributed over all the pools
> (total PGs) or should we have the big pools each with 1500 PGs for a
> total around 5000 PGs on the cluster?
> 
As it says in the documentation you've linked up there:
---
When using multiple data pools for storing objects, you need to ensure
that you balance the number of placement groups per pool with the number
of placement groups per OSD so that you arrive at a reasonable total
number of placement groups that provides reasonably low variance per OSD
without taxing system resources or making the peering process too slow.
---

Also as stated on the same page, you will want to round that 1500 up to
the nearest power of 2, which gives you 2048 for starters.
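
To make the math explicit, here's a quick back-of-the-envelope sketch in
Python; it just restates the docs' rule of thumb with your numbers (60
OSDs, replication 4), nothing official:

  # Rule of thumb from the docs: total PGs ~= (OSDs * 100) / replicas,
  # then round up to the next power of 2.
  def total_pgs(num_osds, replicas, pgs_per_osd=100):
      target = num_osds * pgs_per_osd / replicas
      pgs = 1
      while pgs < target:
          pgs *= 2
      return pgs

  print(total_pgs(60, 4))  # target is 1500 -> rounds up to 2048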

With smaller clusters, it is beneficial to overprovision PGs for various
reasons (smoother data distribution, etc.).

The larger the cluster gets, the closer you will want to adhere to that 100
PGs per OSD, as the resource usage (memory, CPU, network peering traffic)
creeps up.

So as Wido just wrote (I'm clearly typing too slowly ^o^), balance it out
according to usage. I only use RBD, so my 2 other default pools stay at a
measly 64 PGs while RBD gets all the PG loving.

In your case (it really depends on how much data is in each pool) you
could do 512 PGs for each of the 3 big pools and 64 PGs for the small
ones and stay within the recommended limits.

However, if you're planning on growing this cluster further and your
current hardware has plenty of reserves, I would go with 1024 PGs for the
big pools and 128 or 256 for the small ones.
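
Since it's easy to get the per-OSD total wrong once replication enters
the picture, here's the same sort of sketch for the two layouts above.
Each PG lives on 4 OSDs, so every copy counts towards an OSD's tally; the
layouts are just the illustrative numbers from this mail:

  def pgs_per_osd(pool_pgs, replicas=4, num_osds=60):
      # Every PG is stored on `replicas` OSDs, so count each copy.
      return sum(pool_pgs) * replicas / num_osds

  conservative = [512] * 3 + [64] * 9    # 3 big pools, 9 small ones
  roomier      = [1024] * 3 + [128] * 9

  print(pgs_per_osd(conservative))  # ~141 PGs per OSD
  print(pgs_per_osd(roomier))       # ~282 PGs per OSD

The roomier layout is well past the 100 PGs per OSD mark, so as said,
only go there if your hardware really does have the reserves for it.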

Christian
-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
