I installed a Ceph cluster on 2 machines with 8 OSDs, 3 pools, and a
replication size of 1 (as osd_pool_default_size).

I followed the formula in
http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/#choosing-the-number-of-placement-groups
to calculate pg_num. That gave me a total pg_num of 192, so I set each pool's
pg_num to 64.
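
In code form, this is my understanding of that formula (treating "pool size"
as the replica count, and rounding up to the nearest power of two as the doc
advises — the exact rounding step is my own reading):

import math

# My reading of the doc's formula: target ~100 PG copies per OSD,
# divided by the replica count, rounded up to a power of two.
def total_pg_num(osds, replicas, target_per_osd=100):
    raw = osds * target_per_osd / replicas
    return 2 ** math.ceil(math.log2(raw))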

I got the warning below:
$ ceph -s
  cluster:
    id:     fd64d9e4-e33b-4e1c-927c-c0bb56d072cf
    health: HEALTH_WARN
            too few PGs per OSD (24 < min 30)
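
Doing the arithmetic myself (just a back-of-envelope check in Python, not the
actual health-check code), the 24 seems to come from:

pools, pg_per_pool, size, osds = 3, 64, 1, 8
pg_copies = pools * pg_per_pool * size    # 192 PG copies cluster-wide
print(pg_copies / osds)                   # 24.0 PGs per OSD, below the min of 30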

Then I changed osd_pool_default_size to 2 and the warning disappeared, which
confuses me.
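
My guess at what happened (again my own arithmetic, not the real check):

pools, pg_per_pool, size, osds = 3, 64, 2, 8
print(pools * pg_per_pool * size / osds)  # 48.0 PGs per OSD, now above 30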

I read the docs again, and I have the questions below:

1. The doc says "Between 5 and 10 OSDs set pg_num to 512". Is this pg_num the
total PG count?
If so, then with 2 replicas the PGs per OSD would be too low.
If not, it means pg_num per pool, and with more pools the PGs per OSD would be
too high.

2. How is the minimum of 30 calculated?

3. Why does changing only the replication size make the warning disappear? It
seems the warning is not calculated from the formula.

4. The formula does not take the number of pools into account, only the
replication size and the OSD count. So with more pools, the formula needs to
divide by the pool count too, right? (See the sketch after these questions.)

5. In http://docs.ceph.com/docs/mimic/rados/configuration/pool-pg-config-ref/,
it says the default is 250. That number is not a power of 2, so why is it the
default? Is it right?
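
For question 4, this is the adjusted formula I have in mind (dividing by the
pool count is purely my own guess, not something from the doc):

import math

# My proposed variant: spread the ~100-PGs-per-OSD target across
# replicas AND pools before rounding up to a power of two.
def pg_num_per_pool(osds, replicas, pools, target_per_osd=100):
    raw = osds * target_per_osd / replicas / pools
    return 2 ** math.ceil(math.log2(raw))

print(pg_num_per_pool(osds=8, replicas=1, pools=3))  # 512 for my cluster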

Finally, if I set osd_pool_default_size to 2, does that mean I need to set
osd_pool_default_min_size to 1?
If so, then when osd_pool_default_size is 1, should osd_pool_default_min_size
be zero?
If not, then for 2 machines:
1) Setting osd_pool_default_size to 2 is meaningless, but it does clear the
ceph status warning.
2) Should I set osd_pool_default_size and osd_pool_default_min_size both to 1?
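
For reference, option 2) in ceph.conf would look like this (I am assuming
these defaults belong under [global]):

[global]
osd_pool_default_size = 1
osd_pool_default_min_size = 1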
