Correction: Sorry, min_size is set to 1 everywhere.

Thank you.

Karol Kozubal

From: Karol Kozubal <karol.kozu...@elits.com>
Date: Wednesday, March 12, 2014 at 12:06 PM
To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: PG Scaling

Hi Everyone,

I am deploying OpenStack with Fuel 4.1 on a 20-node Ceph cluster of C6220s, with 
3 OSDs and 1 journal disk per node. When first deployed, each storage pool is 
configured with the correct size and min_size attributes; however, Fuel doesn't 
seem to apply the correct number of PGs to the pools given the number of OSDs we 
actually have.

I make the adjustments using the following calculation:

(20 nodes * 3 OSDs) * 100 / 3 replicas = 2000
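
For reference, the same arithmetic as a quick shell check (nothing Ceph-specific, 
just the numbers above; the usual guidance is to then round the result up to the 
nearest power of two, which here would be 2048):

# 20 nodes x 3 OSDs each, ~100 PGs per OSD, divided by the replica count
osds=$((20 * 3))
replicas=3
echo $(( osds * 100 / replicas ))    # prints 2000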

ceph osd pool set volumes size 3
ceph osd pool set volumes min_size 3
ceph osd pool set volumes pg_num 2000
ceph osd pool set volumes pgp_num 2000

ceph osd pool set images size 3
ceph osd pool set images min_size 3
ceph osd pool set images pg_num 2000
ceph osd pool set images pgp_num 2000

ceph osd pool set compute size 3
ceph osd pool set compute min_size 3
ceph osd pool set compute pg_num 2000
ceph osd pool set compute pgp_num 2000
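
For completeness, the new values can be read back with the matching get commands 
(shown for the volumes pool; images and compute are the same):

ceph osd pool get volumes size
ceph osd pool get volumes min_size
ceph osd pool get volumes pg_num
ceph osd pool get volumes pgp_num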

Here are the questions I am left with concerning these changes:

  1.  How long does it take for Ceph to apply the changes and recalculate the PGs? (See the status commands below.)
  2.  When is it safe to do this type of operation? Before any data is written to the pools, or is doing this while the pools are in use also acceptable?
  3.  Is it possible to scale down the number of PGs?
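
The status commands referenced in question 1, for watching the new PGs being 
created and the data rebalancing (standard ceph CLI, nothing pool-specific):

ceph -s              # one-shot summary; new PGs show up as creating/peering
ceph -w              # follow cluster events live while the PGs settle
ceph health detail   # lists any PGs that are stuck or degraded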

Thank you for your input.

Karol Kozubal
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
