Hi,

That makes sense.

How can I adjust the OSD nearfull ratio? I tried this, but it didn't
change:

$ ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .86"
mon.mon-a1: injectargs:mon_osd_nearfull_ratio = '0.860000' (not observed, change may require restart)
mon.mon-a2: injectargs:mon_osd_nearfull_ratio = '0.860000' (not observed, change may require restart)
mon.mon-a3: injectargs:mon_osd_nearfull_ratio = '0.860000' (not observed, change may require restart)
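
From what I can tell, in Luminous the nearfull ratio is stored in the
OSDMap rather than in the monitor configuration, which would explain the
"(not observed)" response above. Presumably the OSDMap-level command is
what is needed instead (a guess on my part; the current values can be
checked with ceph osd dump):

$ ceph osd set-nearfull-ratio 0.86
$ ceph osd dump | grep ratio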


Karun Josy

On Tue, Dec 19, 2017 at 10:05 PM, Jean-Charles Lopez <[email protected]>
wrote:

> OK, so it’s telling you that the near-full OSD holds PGs for these three
> pools.
>
> JC
>
> On Dec 19, 2017, at 08:05, Karun Josy <[email protected]> wrote:
>
> No, I haven't.
>
> Interestingly, the POOL_NEARFULL flag is shown only when there is an
> OSD_NEARFULL flag. I have recently upgraded to Luminous 12.2.2 and hadn't
> seen this flag in 12.2.1.
>
>
>
> Karun Josy
>
> On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez <[email protected]>
> wrote:
>
>> Hi,
>>
>> Did you set quotas on these pools?
>>
>> See this page for an explanation of most error messages:
>> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
>>
>> JC
>>
>> On Dec 19, 2017, at 01:48, Karun Josy <[email protected]> wrote:
>>
>> Hello,
>>
>> In one of our clusters, health is showing these warnings:
>> ---------
>> OSD_NEARFULL 1 nearfull osd(s)
>>     osd.22 is near full
>> POOL_NEARFULL 3 pool(s) nearfull
>>     pool 'templates' is nearfull
>>     pool 'cvm' is nearfull
>>     pool 'ecpool' is nearfull
>> ------------
>>
>> One OSD is above 85% used, which I know caused the OSD_NEARFULL flag.
>> But what does pool(s) nearfull mean?
>> And how can I correct it?
>>
>> $ ceph df
>> GLOBAL:
>>     SIZE       AVAIL      RAW USED     %RAW USED
>>     31742G     11147G       20594G         64.88
>> POOLS:
>>     NAME          ID       USED     %USED     MAX AVAIL     OBJECTS
>>     templates      5       196G     23.28          645G       50202
>>     cvm            6       6528         0         1076G         770
>>     ecpool         7     10260G     83.56         2018G     3004031
>>
>>
>>
>> Karun