These settings moved to the PG map in Luminous. I think they might have been there in Jewel as well.
http://docs.ceph.com/docs/luminous/man/8/ceph/#pg

ceph pg set_full_ratio <float[0.0-1.0]>
ceph pg set_backfillfull_ratio <float[0.0-1.0]>
ceph pg set_nearfull_ratio <float[0.0-1.0]>

On Thu, Aug 30, 2018, 1:57 PM David C <[email protected]> wrote:

> Hi All
>
> I feel like this is going to be a silly query with a hopefully simple
> answer. I don't seem to have the osd_backfill_full_ratio config option on
> my OSDs and can't inject it. This is a Luminous 12.2.1 cluster that was
> upgraded from Jewel.
>
> I added an OSD to the cluster and woke up the next day to find the OSD had
> hit OSD_FULL. I'm pretty sure the reason it filled up was that the new
> host was weighted too high (I initially added two OSDs but decided to only
> backfill one at a time). The thing that surprised me was why a backfill
> full ratio didn't kick in to prevent this from happening.
>
> One potentially key piece of info is that I haven't run the "ceph osd
> require-osd-release luminous" command yet (I wasn't sure what impact this
> would have, so I was waiting for a window with quiet client I/O).
>
> ceph osd dump is showing zero for all full ratios:
>
> # ceph osd dump | grep full_ratio
> full_ratio 0
> backfillfull_ratio 0
> nearfull_ratio 0
>
> Do I simply need to run ceph osd set-backfillfull-ratio? Or am I missing
> something here? I don't understand why I don't have a default backfill_full
> ratio on this cluster.
>
> Thanks,
>
>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
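For what it's worth, on a Luminous cluster the same ratios can also be set via the `ceph osd` subcommands, and the result is visible in the OSD map. A rough sketch, assuming admin access and a cluster that has completed the Luminous upgrade (the 0.85/0.90/0.95 values below are the usual defaults, shown here only as illustration):

```shell
# Set the nearfull/backfillfull/full ratios stored in the map.
ceph osd set-nearfull-ratio 0.85
ceph osd set-backfillfull-ratio 0.90
ceph osd set-full-ratio 0.95

# Verify that the map now carries non-zero ratios.
ceph osd dump | grep full_ratio
```

After this, `ceph osd dump | grep full_ratio` should no longer report all zeros.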
