Re: [ceph-users] POOL_NEARFULL

2018-01-29 Thread Konstantin Shalygin

On 01/29/2018 04:25 PM, Karun Josy wrote:

In Luminous, we have to use the "ceph osd set-*-ratio" commands



Yep. Since Luminous, the *_full options are stored in the osdmap.
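
Since the values live in the osdmap, a quick way to confirm what is currently
active is something like this (a sketch; the exact wording of the dump output
may differ slightly between 12.2.x releases, and the ratios shown are the ones
set elsewhere in this thread):

--
$ ceph osd dump | grep ratio
full_ratio 0.96
backfillfull_ratio 0.89
nearfull_ratio 0.84
--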



k


Re: [ceph-users] POOL_NEARFULL

2018-01-29 Thread Karun Josy
In Luminous, we have to use the "ceph osd set-*-ratio" commands

--
ceph osd set-backfillfull-ratio .89
ceph osd set-nearfull-ratio .84
ceph osd set-full-ratio .96
--

Karun Josy

On Thu, Dec 21, 2017 at 4:29 PM, Konstantin Shalygin  wrote:

> Update your ceph.conf file
>
> This also does not help. I created a ticket: http://tracker.ceph.com/issues/22520
>
>


Re: [ceph-users] POOL_NEARFULL

2017-12-21 Thread Konstantin Shalygin

Update your ceph.conf file


This also does not help. I created a ticket:
http://tracker.ceph.com/issues/22520




Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Nghia Than
You may try these commands:

ceph pg set_nearfull_ratio 0.86
ceph pg set_full_ratio 0.9

On Wed, Dec 20, 2017 at 12:45 AM, Jean-Charles Lopez 
wrote:

> Update your ceph.conf file
>
> JC
>
> On Dec 19, 2017, at 09:03, Karun Josy  wrote:
>
> Hi ,
>
> That makes sense.
>
> How can I adjust the OSD nearfull ratio? I tried this, however it didn't
> change.
>
> $ ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .86"
> mon.mon-a1: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
> change may require restart)
> mon.mon-a2: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
> change may require restart)
> mon.mon-a3: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
> change may require restart)
>
>
> Karun Josy
>
> On Tue, Dec 19, 2017 at 10:05 PM, Jean-Charles Lopez 
> wrote:
>
>> OK so it’s telling you that the near full OSD holds PGs for these three
>> pools.
>>
>> JC
>>
>> On Dec 19, 2017, at 08:05, Karun Josy  wrote:
>>
>> No, I haven't.
>>
>> Interestingly, the POOL_NEARFULL flag is shown only when there is an
>> OSD_NEARFULL flag.
>> I have recently upgraded to Luminous 12.2.2; I hadn't seen this flag in
>> 12.2.1.
>>
>>
>>
>> Karun Josy
>>
>> On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez 
>> wrote:
>>
>>> Hi
>>>
>>> did you set quotas on these pools?
>>>
>>> See this page for an explanation of most error messages:
>>> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
>>>
>>> JC
>>>
>>> On Dec 19, 2017, at 01:48, Karun Josy  wrote:
>>>
>>> Hello,
>>>
>>> In one of our clusters, health is showing these warnings :
>>> -
>>> OSD_NEARFULL 1 nearfull osd(s)
>>> osd.22 is near full
>>> POOL_NEARFULL 3 pool(s) nearfull
>>> pool 'templates' is nearfull
>>> pool 'cvm' is nearfull
>>> pool 'ecpool' is nearfull
>>> 
>>>
>>> One osd is above 85% used, which I know caused the OSD_Nearfull flag.
>>> But what does pool(s) nearfull mean ?
>>> And how can I correct it ?
>>>
>>> ]$ ceph df
>>> GLOBAL:
>>> SIZE   AVAIL  RAW USED %RAW USED
>>> 31742G 11147G   20594G 64.88
>>> POOLS:
>>> NAMEID USED   %USED MAX AVAIL OBJECTS
>>> templates  5196G 23.28  645G   50202
>>> cvm   66528 0 1076G 770
>>> ecpool   7  10260G 83.56 2018G 3004031
>>>
>>>
>>>
>>> Karun
>>>
>>>
>>>
>>
>>
>
>
>
>


-- 
==
Nghia Than


Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Jean-Charles Lopez
Update your ceph.conf file

JC

> On Dec 19, 2017, at 09:03, Karun Josy  wrote:
> 
> Hi ,
> 
> That makes sense.
> 
> How can I adjust the OSD nearfull ratio? I tried this, however it didn't
> change.
> 
> $ ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .86"
> mon.mon-a1: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed, 
> change may require restart)
> mon.mon-a2: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed, 
> change may require restart)
> mon.mon-a3: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed, 
> change may require restart)
> 
> 
> Karun Josy
> 
> On Tue, Dec 19, 2017 at 10:05 PM, Jean-Charles Lopez wrote:
> OK so it’s telling you that the near full OSD holds PGs for these three pools.
> 
> JC
> 
>> On Dec 19, 2017, at 08:05, Karun Josy wrote:
>> 
>> No, I haven't.
>> 
>> Interestingly, the POOL_NEARFULL flag is shown only when there is an
>> OSD_NEARFULL flag.
>> I have recently upgraded to Luminous 12.2.2; I hadn't seen this flag in 12.2.1.
>> 
>> 
>> 
>> Karun Josy
>> 
>> On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez wrote:
>> Hi
>> 
>> did you set quotas on these pools?
>> 
>> See this page for an explanation of most error messages:
>> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
>> 
>> JC
>> 
>>> On Dec 19, 2017, at 01:48, Karun Josy wrote:
>>> 
>>> Hello,
>>> 
>>> In one of our clusters, health is showing these warnings :
>>> -
>>> OSD_NEARFULL 1 nearfull osd(s)
>>> osd.22 is near full
>>> POOL_NEARFULL 3 pool(s) nearfull
>>> pool 'templates' is nearfull
>>> pool 'cvm' is nearfull
>>> pool 'ecpool' is nearfull
>>> 
>>> 
>>> One osd is above 85% used, which I know caused the OSD_Nearfull flag.
>>> But what does pool(s) nearfull mean ?
>>> And how can I correct it ?
>>> 
>>> ]$ ceph df
>>> GLOBAL:
>>> SIZE   AVAIL  RAW USED %RAW USED
>>> 31742G 11147G   20594G 64.88
>>> POOLS:
>>> NAMEID USED   %USED MAX AVAIL OBJECTS
>>> templates  5196G 23.28  645G   50202
>>> cvm   66528 0 1076G 770
>>> ecpool   7  10260G 83.56 2018G 3004031
>>> 
>>> 
>>> 
>>> Karun 
>>> 
>> 
>> 
> 
> 



Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Karun Josy
Hi,

That makes sense.

How can I adjust the OSD nearfull ratio? I tried this, however it didn't
change.

$ ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .86"
mon.mon-a1: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
change may require restart)
mon.mon-a2: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
change may require restart)
mon.mon-a3: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
change may require restart)


Karun Josy

On Tue, Dec 19, 2017 at 10:05 PM, Jean-Charles Lopez 
wrote:

> OK so it’s telling you that the near full OSD holds PGs for these three
> pools.
>
> JC
>
> On Dec 19, 2017, at 08:05, Karun Josy  wrote:
>
> No, I haven't.
>
> Interestingly, the POOL_NEARFULL flag is shown only when there is an
> OSD_NEARFULL flag.
> I have recently upgraded to Luminous 12.2.2; I hadn't seen this flag in
> 12.2.1.
>
>
>
> Karun Josy
>
> On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez 
> wrote:
>
>> Hi
>>
>> did you set quotas on these pools?
>>
>> See this page for an explanation of most error messages:
>> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
>>
>> JC
>>
>> On Dec 19, 2017, at 01:48, Karun Josy  wrote:
>>
>> Hello,
>>
>> In one of our clusters, health is showing these warnings :
>> -
>> OSD_NEARFULL 1 nearfull osd(s)
>> osd.22 is near full
>> POOL_NEARFULL 3 pool(s) nearfull
>> pool 'templates' is nearfull
>> pool 'cvm' is nearfull
>> pool 'ecpool' is nearfull
>> 
>>
>> One osd is above 85% used, which I know caused the OSD_Nearfull flag.
>> But what does pool(s) nearfull mean ?
>> And how can I correct it ?
>>
>> ]$ ceph df
>> GLOBAL:
>> SIZE   AVAIL  RAW USED %RAW USED
>> 31742G 11147G   20594G 64.88
>> POOLS:
>> NAMEID USED   %USED MAX AVAIL OBJECTS
>> templates  5196G 23.28  645G   50202
>> cvm   66528 0 1076G 770
>> ecpool   7  10260G 83.56 2018G 3004031
>>
>>
>>
>> Karun
>>
>>
>>
>
>


Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Jean-Charles Lopez
OK so it’s telling you that the near full OSD holds PGs for these three pools.

JC

> On Dec 19, 2017, at 08:05, Karun Josy  wrote:
> 
> No, I haven't.
> 
> Interestingly, the POOL_NEARFULL flag is shown only when there is an
> OSD_NEARFULL flag.
> I have recently upgraded to Luminous 12.2.2; I hadn't seen this flag in 12.2.1.
> 
> 
> 
> Karun Josy
> 
> On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez wrote:
> Hi
> 
> did you set quotas on these pools?
> 
> See this page for an explanation of most error messages:
> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
> 
> 
> JC
> 
>> On Dec 19, 2017, at 01:48, Karun Josy wrote:
>> 
>> Hello,
>> 
>> In one of our clusters, health is showing these warnings :
>> -
>> OSD_NEARFULL 1 nearfull osd(s)
>> osd.22 is near full
>> POOL_NEARFULL 3 pool(s) nearfull
>> pool 'templates' is nearfull
>> pool 'cvm' is nearfull
>> pool 'ecpool' is nearfull
>> 
>> 
>> One osd is above 85% used, which I know caused the OSD_Nearfull flag.
>> But what does pool(s) nearfull mean ?
>> And how can I correct it ?
>> 
>> ]$ ceph df
>> GLOBAL:
>> SIZE   AVAIL  RAW USED %RAW USED
>> 31742G 11147G   20594G 64.88
>> POOLS:
>> NAMEID USED   %USED MAX AVAIL OBJECTS
>> templates  5196G 23.28  645G   50202
>> cvm   66528 0 1076G 770
>> ecpool   7  10260G 83.56 2018G 3004031
>> 
>> 
>> 
>> Karun 
>> 
> 
> 



Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Cary
Karun,

 You can check how much data each OSD has with "ceph osd df"

ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
 1   hdd 1.84000  1.0     1885G   769G  1115G  40.84 0.97 101
 3   hdd 4.64000  1.0     4679G  2613G  2065G  55.86 1.33 275
 4   hdd 4.64000  1.0     4674G  1914G  2759G  40.96 0.97 193
 5   hdd 4.64000  1.0     4668G  1434G  3234G  30.72 0.73 148
 8   hdd 1.84000  1.0     1874G   742G  1131G  39.61 0.94  74
 0   hdd 4.64000  1.0     4668G  2331G  2337G  49.94 1.19 268
 2   hdd 1.84000  1.0     4668G   868G  3800G  18.60 0.44  99
 6   hdd 4.64000  1.0     4668G  2580G  2087G  55.28 1.32 275
 7   hdd 1.84000  1.0     1874G   888G   985G  47.43 1.13 107
                   TOTAL 33661G 14144G 19516G  42.02
MIN/MAX VAR: 0.44/1.33  STDDEV: 11.27

 The "%USE" column shows how much space is used on each OSD. You may
need to change the weight of some of the OSDs so the data balances out
correctly with "ceph osd crush reweight osd.N W".Change the N to the
number of OSD and W to the new weight.

 As you can see above, even though the weight of my 4.6TB OSDs is the
same for all of them, they have different %USE. So I could lower the
weight of the OSDs with more data, and Ceph will rebalance the cluster.
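
For example, a minimal sketch using osd.3 from the listing above (the target
weight of 4.50 is only an illustrative value, not a recommendation):

--
# Lower the CRUSH weight of a relatively full OSD a little, then let the
# cluster rebalance; repeat in small steps while watching "ceph osd df".
ceph osd crush reweight osd.3 4.50
ceph osd df
--

Luminous also has "ceph osd reweight-by-utilization", which automatically
adjusts the override reweight of the most-utilized OSDs, if stepping CRUSH
weights by hand becomes tedious.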

 I am not too sure why this happens.

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008623.html

Cary
-Dynamic

On Tue, Dec 19, 2017 at 3:57 PM, Jean-Charles Lopez  wrote:
> Hi
>
> did you set quotas on these pools?
>
> See this page for explanation of most error messages:
> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
>
> JC
>
> On Dec 19, 2017, at 01:48, Karun Josy  wrote:
>
> Hello,
>
> In one of our clusters, health is showing these warnings :
> -
> OSD_NEARFULL 1 nearfull osd(s)
> osd.22 is near full
> POOL_NEARFULL 3 pool(s) nearfull
> pool 'templates' is nearfull
> pool 'cvm' is nearfull
> pool 'ecpool' is nearfull
> 
>
> One osd is above 85% used, which I know caused the OSD_Nearfull flag.
> But what does pool(s) nearfull mean ?
> And how can I correct it ?
>
> ]$ ceph df
> GLOBAL:
> SIZE   AVAIL  RAW USED %RAW USED
> 31742G 11147G   20594G 64.88
> POOLS:
> NAMEID USED   %USED MAX AVAIL OBJECTS
> templates  5196G 23.28  645G   50202
> cvm   66528 0 1076G 770
> ecpool   7  10260G 83.56 2018G 3004031
>
>
>
> Karun
>
>
>
>


Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Karun Josy
No, I haven't.

Interestingly, the POOL_NEARFULL flag is shown only when there is an
OSD_NEARFULL flag.
I have recently upgraded to Luminous 12.2.2; I hadn't seen this flag in
12.2.1.



Karun Josy

On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez 
wrote:

> Hi
>
> did you set quotas on these pools?
>
> See this page for an explanation of most error messages:
> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
>
> JC
>
> On Dec 19, 2017, at 01:48, Karun Josy  wrote:
>
> Hello,
>
> In one of our clusters, health is showing these warnings :
> -
> OSD_NEARFULL 1 nearfull osd(s)
> osd.22 is near full
> POOL_NEARFULL 3 pool(s) nearfull
> pool 'templates' is nearfull
> pool 'cvm' is nearfull
> pool 'ecpool' is nearfull
> 
>
> One osd is above 85% used, which I know caused the OSD_Nearfull flag.
> But what does pool(s) nearfull mean ?
> And how can I correct it ?
>
> ]$ ceph df
> GLOBAL:
> SIZE   AVAIL  RAW USED %RAW USED
> 31742G 11147G   20594G 64.88
> POOLS:
> NAMEID USED   %USED MAX AVAIL OBJECTS
> templates  5196G 23.28  645G   50202
> cvm   66528 0 1076G 770
> ecpool   7  10260G 83.56 2018G 3004031
>
>
>
> Karun
>
>
>


Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Jean-Charles Lopez
Hi

Did you set quotas on these pools?

See this page for an explanation of most error messages:
http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full


JC

> On Dec 19, 2017, at 01:48, Karun Josy  wrote:
> 
> Hello,
> 
> In one of our clusters, health is showing these warnings :
> -
> OSD_NEARFULL 1 nearfull osd(s)
> osd.22 is near full
> POOL_NEARFULL 3 pool(s) nearfull
> pool 'templates' is nearfull
> pool 'cvm' is nearfull
> pool 'ecpool' is nearfull
> 
> 
> One osd is above 85% used, which I know caused the OSD_Nearfull flag.
> But what does pool(s) nearfull mean ?
> And how can I correct it ?
> 
> ]$ ceph df
> GLOBAL:
> SIZE   AVAIL  RAW USED %RAW USED
> 31742G 11147G   20594G 64.88
> POOLS:
> NAMEID USED   %USED MAX AVAIL OBJECTS
> templates  5196G 23.28  645G   50202
> cvm   66528 0 1076G 770
> ecpool   7  10260G 83.56 2018G 3004031
> 
> 
> 
> Karun 
