Hi, see comments below.
JC

> On Jun 14, 2017, at 07:23, Stéphane Klein <[email protected]> wrote:
>
> Hi,
>
> I have this parameter in my Ansible configuration:
>
>     pool_default_pg_num: 300 # (100 * 6) / 2 = 300
>
> But I have this error:
>
>     # ceph status
>         cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
>          health HEALTH_ERR
>                 73 pgs are stuck inactive for more than 300 seconds
>                 22 pgs degraded
>                 9 pgs peering
>                 64 pgs stale
>                 22 pgs stuck degraded
>                 9 pgs stuck inactive
>                 64 pgs stuck stale
>                 31 pgs stuck unclean
>                 22 pgs stuck undersized
>                 22 pgs undersized
>                 too few PGs per OSD (16 < min 30)
>          monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
>                 election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
>          osdmap e41: 12 osds: 6 up, 6 in; 8 remapped pgs
>                 flags sortbitwise,require_jewel_osds
>           pgmap v79: 64 pgs, 1 pools, 0 bytes data, 0 objects

As this line shows, you only have 64 PGs in your cluster so far, hence the warning. This parameter must be set before you deploy your cluster, or at least before you create your first pool.

>                 30919 MB used, 22194 GB / 22225 GB avail
>                       33 stale+active+clean
>                       22 stale+active+undersized+degraded
>                        9 stale+peering
>
> I have 2 hosts with 3 partitions, then 3 x 2 OSD?
>
> Why 16 < min 30? I set 300 pg_num.
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <[email protected]>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
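For reference, the usual pre-Luminous rule of thumb behind that Ansible comment is (OSDs x 100) / replica size, rounded up to the next power of two. A minimal sketch (the function name and the target of 100 PGs per OSD are just the common guidance from the Ceph docs, not anything ceph-ansible ships):

```python
def recommended_pg_num(num_osds, pool_size, target_pgs_per_osd=100):
    """Rule-of-thumb total PG count for a single-pool cluster,
    rounded up to the next power of two as the Ceph docs suggest."""
    raw = num_osds * target_pgs_per_osd / pool_size
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# 6 OSDs in, replica size 2 -> raw value 300, rounded up to 512
print(recommended_pg_num(6, 2))
```

If the pool already exists, the PG count can also be raised after the fact with `ceph osd pool set <pool> pg_num <value>` (followed by the matching `pgp_num` change), keeping in mind that pg_num can only be increased, never decreased, on Jewel.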
_______________________________________________ ceph-users mailing list [email protected] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
