Re: [ceph-users] Move ceph admin node to new other server

2018-04-10 Thread Nghia Than
I appreciate your kind help, Paul.

On Wed, Apr 11, 2018 at 1:47 AM, Paul Emmerich <paul.emmer...@croit.io>
wrote:

> http://docs.ceph.com/ceph-deploy/docs/gatherkeys.html
>
> 2018-04-10 20:39 GMT+02:00 Nghia Than <cont...@trungnghia.info>:
>
>> Hi Paul,
>>
>> Thanks for your information.
>>
>> May I know: if I destroy this node, how can gatherkeys work, given that the
>> node is already terminated and no data is available? As you said the keys
>> are fetched from the cluster, will it get them all, or do I have to back
>> them up manually (surely I will do this before terminating any server) and
>> copy them to the new node?
>>
>> Thanks,
>>
>>
>> On Wed, Apr 11, 2018 at 1:25 AM, Paul Emmerich <paul.emmer...@croit.io>
>> wrote:
>>
>>> Hi,
>>>
>>> yes, that folder contains everything you need. You can also use
>>> ceph-deploy gatherkeys to get them from your cluster.
>>>
>>>
>>> Paul
>>>
>>>
>>> 2018-04-09 10:04 GMT+02:00 Nghia Than <cont...@trungnghia.info>:
>>>
>>>> Hello,
>>>>
>>>> We use one server as the deploy node (called ceph-admin-node) for 3 MON
>>>> and 4 OSD nodes.
>>>>
>>>> We created a folder called *ceph-deploy* to deploy all the node
>>>> members. May we move this folder to another server?
>>>>
>>>> This folder contains the following files:
>>>>
>>>> total 1408
>>>> -rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-mds.keyring
>>>> -rw------- 1 root root      71 Oct 26 16:48 ceph.bootstrap-mgr.keyring
>>>> -rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-osd.keyring
>>>> -rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-rgw.keyring
>>>> -rw------- 1 root root     129 Oct 26 16:48 ceph.client.admin.keyring
>>>> -rw-r--r-- 1 root root     818 Oct 26 12:20 ceph.conf
>>>> -rw-r--r-- 1 root root 1405043 Apr  2 14:16 ceph-deploy-ceph.log
>>>> -rw------- 1 root root      73 Oct 26 12:19 ceph.mon.keyring
>>>>
>>>> Any guide for this task?
>>>>
>>>> Thanks,
>>>>
>>>> ___
>>>> ceph-users mailing list
>>>> ceph-users@lists.ceph.com
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>>
>>>
>>>
>>> --
>>> --
>>> Paul Emmerich
>>>
>>> croit GmbH
>>> Freseniusstr. 31h
>>> 81247 München
>>> www.croit.io
>>> Tel: +49 89 1896585 90
>>>
>>
>>
>>
>> --
>> ==
>> Nghia Than
>>
>
>
>
> --
> --
> Paul Emmerich
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>



-- 
==
Nghia Than
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Move ceph admin node to new other server

2018-04-10 Thread Nghia Than
Hi Paul,

Thanks for your information.

May I know: if I destroy this node, how can gatherkeys work, given that the
node is already terminated and no data is available? As you said the keys are
fetched from the cluster, will it get them all, or do I have to back them up
manually (surely I will do this before terminating any server) and copy them
to the new node?

Thanks,
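
For reference, a rough sketch of the two options discussed below; hostnames
and paths are placeholders, and this assumes ceph-deploy is installed on the
new admin node:

    # Option 1: copy the existing ceph-deploy working directory to the new node
    rsync -a /root/ceph-deploy/ new-admin-node:/root/ceph-deploy/

    # Option 2: recreate the files on the new node from the running cluster
    mkdir ~/ceph-deploy && cd ~/ceph-deploy
    ceph-deploy config pull mon1     # pull ceph.conf from an existing monitor
    ceph-deploy gatherkeys mon1      # regenerate the admin/bootstrap keyrings

Either way, keeping a copy of the directory somewhere safe before terminating
the old server costs nothing.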


On Wed, Apr 11, 2018 at 1:25 AM, Paul Emmerich <paul.emmer...@croit.io>
wrote:

> Hi,
>
> yes, that folder contains everything you need. You can also use
> ceph-deploy gatherkeys to get them from your cluster.
>
>
> Paul
>
>
> 2018-04-09 10:04 GMT+02:00 Nghia Than <cont...@trungnghia.info>:
>
>> Hello,
>>
>> We use one server as the deploy node (called ceph-admin-node) for 3 MON
>> and 4 OSD nodes.
>>
>> We created a folder called *ceph-deploy* to deploy all the node
>> members. May we move this folder to another server?
>>
>> This folder contains the following files:
>>
>> total 1408
>> -rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-mds.keyring
>> -rw------- 1 root root      71 Oct 26 16:48 ceph.bootstrap-mgr.keyring
>> -rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-osd.keyring
>> -rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-rgw.keyring
>> -rw------- 1 root root     129 Oct 26 16:48 ceph.client.admin.keyring
>> -rw-r--r-- 1 root root     818 Oct 26 12:20 ceph.conf
>> -rw-r--r-- 1 root root 1405043 Apr  2 14:16 ceph-deploy-ceph.log
>> -rw------- 1 root root      73 Oct 26 12:19 ceph.mon.keyring
>>
>> Any guide for this task?
>>
>> Thanks,
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
> --
> --
> Paul Emmerich
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>



-- 
==
Nghia Than
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Move ceph admin node to new other server

2018-04-09 Thread Nghia Than
Hello,

We use one server as the deploy node (called ceph-admin-node) for 3 MON and 4
OSD nodes.

We created a folder called *ceph-deploy* to deploy all the node members.
May we move this folder to another server?

This folder contains the following files:

total 1408
-rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-mds.keyring
-rw------- 1 root root      71 Oct 26 16:48 ceph.bootstrap-mgr.keyring
-rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-osd.keyring
-rw------- 1 root root     113 Oct 26 16:48 ceph.bootstrap-rgw.keyring
-rw------- 1 root root     129 Oct 26 16:48 ceph.client.admin.keyring
-rw-r--r-- 1 root root     818 Oct 26 12:20 ceph.conf
-rw-r--r-- 1 root root 1405043 Apr  2 14:16 ceph-deploy-ceph.log
-rw------- 1 root root      73 Oct 26 12:19 ceph.mon.keyring

Any guide for this task?

Thanks,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] DELL R620 - SSD recommendation

2018-03-21 Thread Nghia Than
If you want speed and IOPS, try the Samsung PM863a or SM863a (the PM863a is
slightly cheaper).

If you want high endurance, try the Intel DC S3700 series.

Do not use consumer SSDs for caching, or desktop HDDs for OSDs.

> what is the highest HDD capacity that you were able to use in the R620 ?

This depends on your RAID controller, not on your server.

On Thu, Mar 22, 2018 at 2:40 AM, Steven Vacaroaia <ste...@gmail.com> wrote:

> Hi,
>
> It will be appreciated if you could recommend some SSD models ( 200GB or
> less)
>
> I am planning to deploy 2 SSD and 6 HDD ( for a 1 to 3 ratio) in few DELL
> R620 with 64GB RAM
>
> Also, what is the highest HDD capacity that you were able to use in the
> R620 ?
>
>
> Note
> I apologize for asking "research easy" kind of questions but I am hoping
> for confirmed / hands on info / details
>
> Many thanks
> Steven
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
==
Nghia Than
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Performance issues on Luminous

2018-01-05 Thread Nghia Than
Do not use consumer SSDs for OSDs, especially not for the journal disks.

If you do use consumer SSDs, please consider adding some dedicated enterprise
SSDs for the journals. The ratio should be around 1:2 or 1:4 (one enterprise
SSD in front of two to four consumer SSDs).
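
As a rough check of whether a given SSD can sustain journal writes, the usual
test (along the lines of the blog post quoted below) is a direct, synced dd
write. This is only a sketch: /dev/sdX is a placeholder and the command
overwrites the target device, so run it only against a blank test disk:

    # O_DSYNC sequential writes are what a Ceph journal has to sustain
    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync
    dd if=/dev/zero of=/dev/sdX bs=4M count=1000 oflag=direct,dsync

Enterprise SSDs with power-loss protection usually stay near their rated
speed here, while many consumer drives collapse to a few MB/s.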

Best Regards,

On Fri, Jan 5, 2018 at 3:20 PM, Marc Roos <m.r...@f1-outsourcing.eu> wrote:

>
>
> Maybe because of this: the 850 EVO / 850 PRO are listed here at 1.9 MB/s and
> 1.5 MB/s
>
> http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
>
>
>
>
> -----Original Message-----
> From: Rafał Wądołowski [mailto:rwadolow...@cloudferro.com]
> Sent: Thursday, 4 January 2018 16:56
> To: c...@elchaka.de; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Performance issues on Luminous
>
> I have size of 2.
>
> We know about this risk and we accept it, but we still don't know why
> performance is so bad.
>
>
> Cheers,
>
> Rafał Wądołowski
>
>
> On 04.01.2018 16:51, c...@elchaka.de wrote:
>
>
>     I assume you have a size of 3; then divide your expected 400 by 3 and
>     you are not far away from what you get...
>
>     In addition, you should never use consumer-grade SSDs for Ceph, as they
>     will reach their DWPD limit very soon...
>
>
>     On 4 January 2018 at 09:54:55 CET, "Rafał Wądołowski"
>     <rwadolow...@cloudferro.com> wrote:
>
> Hi folks,
>
>     I am currently benchmarking my cluster for a performance issue and I
>     have no idea what is going on. I am using these devices in QEMU.
>
> Ceph version 12.2.2
>
> Infrastructure:
>
> 3 x Ceph-mon
>
> 11 x Ceph-osd
>
> Ceph-osd has 22x1TB Samsung SSD 850 EVO 1TB
>
> 96GB RAM
>
> 2x E5-2650 v4
>
>     4x10G network (2 separate bonds for cluster and public) with MTU 9000
>
>
> I had tested it with rados bench:
>
> # rados bench -p rbdbench 30 write -t 1
>
> Total time run: 30.055677
> Total writes made:  1199
> Write size: 4194304
> Object size:4194304
> Bandwidth (MB/sec): 159.571
> Stddev Bandwidth:   6.83601
> Max bandwidth (MB/sec): 168
> Min bandwidth (MB/sec): 140
> Average IOPS:   39
> Stddev IOPS:1
> Max IOPS:   42
> Min IOPS:   35
> Average Latency(s): 0.0250656
> Stddev Latency(s):  0.00321545
> Max latency(s): 0.0471699
> Min latency(s): 0.0206325
>
> # ceph tell osd.0 bench
> {
>  "bytes_written": 1073741824,
>  "blocksize": 4194304,
>  "bytes_per_sec": 414199397
> }
>
> Testing osd directly
>
> # dd if=/dev/zero of=/dev/sdc bs=4M oflag=direct count=100
> 100+0 records in
> 100+0 records out
> 419430400 bytes (419 MB, 400 MiB) copied, 1.0066 s, 417
> MB/s
>
>     When I do dd inside the VM (bs=4M with direct), I get results like those
>     from rados bench.
>
>     I think that the speed should be around ~400 MB/s.
>
>     Are there any new parameters for RBD in Luminous? Maybe I forgot about
>     some performance tricks? If more information is needed, feel free to
>     ask.
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
==
Nghia Than
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] active+remapped+backfill_toofull

2017-12-20 Thread Nghia Than
May I know which OSDs I have to restart in this case?
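
For reference, a sketch of one way to find and restart them, assuming a
pre-Luminous cluster where osd_backfill_full_ratio has to be picked up from
ceph.conf at startup (the OSD IDs below are only examples taken from the
health output):

    # The acting sets of the backfill_toofull PGs name the OSDs involved
    ceph health detail | grep backfill_toofull

    # On those OSD hosts, add the new ratio to ceph.conf so it survives restarts:
    #   [osd]
    #   osd_backfill_full_ratio = 0.92

    # Then restart those OSDs one at a time (or your init system's equivalent)
    systemctl restart ceph-osd@24
    systemctl restart ceph-osd@9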

On Wed, Dec 20, 2017 at 9:14 PM David C <dcsysengin...@gmail.com> wrote:

> You should just need to restart the relevant OSDs for the new backfill
> threshold to kick in.
>
> On 20 Dec 2017 00:14, "Nghia Than" <cont...@trungnghia.info> wrote:
>
> I added more OSDs a few days ago to reduce usage to under 70% (the nearfull
> and full ratios are higher than this value), and it is still stuck at
> backfill_toofull while rebalancing data.
>
> I tried to change the backfill full ratio and it shows an "(unchangeable)"
> error, as below:
>
> [root@storcp ~]# ceph tell osd.\* injectargs '--osd_backfill_full_ratio
> 0.92'
>
> osd.0: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.1: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.2: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.3: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.4: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.5: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.6: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.7: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.8: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.9: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.10: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.11: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.12: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.13: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.14: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.15: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.16: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.17: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.18: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.19: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.20: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.21: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.22: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.23: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.24: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.25: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.26: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.27: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> osd.28: osd_backfill_full_ratio = '0.92' (unchangeable)
>
> [root@storcp ~]#
>
> On Wed, Dec 20, 2017 at 1:57 AM, David C <dcsysengin...@gmail.com> wrote:
>
>> What's your backfill full ratio? You may be able to get healthy by
>> increasing your backfill full ratio (in small increments). But your next
>> immediate task should be to add more OSDs or remove data.
>>
>>
>> On 19 Dec 2017 4:26 p.m., "Nghia Than" <cont...@trungnghia.info> wrote:
>>
>> Hi,
>>
>> My Ceph cluster has been stuck like this for a few days; we added new OSDs
>> and nothing changed:
>>
>>
>>- *17 pgs backfill_toofull*
>>- *17 pgs stuck unclean*
>>- *recovery 21/5156264 objects degraded (0.000%)*
>>- *recovery 52908/5156264 objects misplaced (1.026%)*
>>- *8 near full osd(s)*
>>
>>
>> And here is my ceph health detail:
>>
>> HEALTH_WARN 17 pgs backfill_toofull; 17 pgs stuck unclean; recovery
>> 21/5156264 objects degraded (0.000%); recovery 52908/5156264 objects
>> misplaced (1.026%); 8 near full osd(s)
>>
>> pg 1.231 is stuck unclean for 4367.09, current state
>> active+remapped+backfill_toofull, last acting [24,9]
>>
>> pg 1.1e8 is stuck unclean for 7316.364770, current state
>> active+remapped+backfill_toofull, last acting [16,3]
>>
>> pg 1.188 is stuck unclean for 7315.400227, current state
>> active+remapped+backfill_toofull, last acting [11,7]
>>
>> pg 1.158 is stuck unclean for 7321.511627, current state
>> active+remapped+backfill_toofull, last acting [11,17]
>>
>> pg 1.81 is stuck unclean for 4366.683703, current state
>> active+remapped+backfill_toofull, last acting [10,24]
>>
>> pg 1.332 is stuck unclean for 7315.248115, current state
>> active+remapped+backfill_toofull, last acting [23,1]
>>
>> pg 1.2c2 is stuck unclean for 4365.635413, current state
>> active+remapped+backfill_toofull, last acting [24,13]
>>
>> pg 1.3c6 is stuck unclean for 7320.816089, current state
>> active+remapped+backfill_toofull, last acting [11,20]
>>
>> pg 1.26f is stuck unclean for 7315.882215, current state
>> active+remapped+backfill_toofull, last acting [28,8]
>>
>> pg

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Nghia Than
You may try these commands:

  ceph pg set_nearfull_ratio 0.86
  ceph pg set_full_ratio 0.9
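
Note that on Luminous (which Karun's cluster is running) the PG-map based
commands above were replaced by OSDMap-level ones; as far as I know the
equivalents are:

  ceph osd set-nearfull-ratio 0.86
  ceph osd set-full-ratio 0.9
  ceph osd dump | grep ratio    # verify the ratios now stored in the OSDMap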

On Wed, Dec 20, 2017 at 12:45 AM, Jean-Charles Lopez <jelo...@redhat.com>
wrote:

> Update your ceph.conf file
>
> JC
>
> On Dec 19, 2017, at 09:03, Karun Josy <karunjo...@gmail.com> wrote:
>
> Hi,
>
> That makes sense.
>
> How can I adjust the OSD nearfull ratio? I tried this, however it didn't
> change.
>
> $ ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .86"
> mon.mon-a1: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
> change may require restart)
> mon.mon-a2: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
> change may require restart)
> mon.mon-a3: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed,
> change may require restart)
>
>
> Karun Josy
>
> On Tue, Dec 19, 2017 at 10:05 PM, Jean-Charles Lopez <jelo...@redhat.com>
> wrote:
>
>> OK so it’s telling you that the near full OSD holds PGs for these three
>> pools.
>>
>> JC
>>
>> On Dec 19, 2017, at 08:05, Karun Josy <karunjo...@gmail.com> wrote:
>>
>> No, I haven't.
>>
>> Interestingly, the POOL_NEARFULL flag is shown only when there is an
>> OSD_NEARFULL flag.
>> I recently upgraded to Luminous 12.2.2; I hadn't seen this flag in 12.2.1.
>>
>>
>>
>> Karun Josy
>>
>> On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez <jelo...@redhat.com>
>> wrote:
>>
>>> Hi
>>>
>>> did you set quotas on these pools?
>>>
>>> See this page for an explanation of most error messages:
>>> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
>>>
>>> JC
>>>
>>> On Dec 19, 2017, at 01:48, Karun Josy <karunjo...@gmail.com> wrote:
>>>
>>> Hello,
>>>
>>> In one of our clusters, health is showing these warnings:
>>> -
>>> OSD_NEARFULL 1 nearfull osd(s)
>>> osd.22 is near full
>>> POOL_NEARFULL 3 pool(s) nearfull
>>> pool 'templates' is nearfull
>>> pool 'cvm' is nearfull
>>> pool 'ecpool' is nearfull
>>> 
>>>
>>> One OSD is above 85% used, which I know caused the OSD_NEARFULL flag.
>>> But what does pool(s) nearfull mean?
>>> And how can I correct it?
>>>
>>> ]$ ceph df
>>> GLOBAL:
>>> SIZE   AVAIL  RAW USED %RAW USED
>>> 31742G 11147G   20594G 64.88
>>> POOLS:
>>>     NAME        ID  USED    %USED  MAX AVAIL  OBJECTS
>>>     templates   5   196G    23.28  645G       50202
>>>     cvm         6   6528    0      1076G      770
>>>     ecpool      7   10260G  83.56  2018G      3004031
>>>
>>>
>>>
>>> Karun
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>>
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
==
Nghia Than
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread Nghia Than
I added more OSDs a few days ago to reduce usage to under 70% (the nearfull
and full ratios are higher than this value), and it is still stuck at
backfill_toofull while rebalancing data.

I tried to change the backfill full ratio and it shows an "(unchangeable)"
error, as below:

[root@storcp ~]# ceph tell osd.\* injectargs '--osd_backfill_full_ratio
0.92'

osd.0: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.1: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.2: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.3: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.4: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.5: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.6: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.7: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.8: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.9: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.10: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.11: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.12: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.13: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.14: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.15: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.16: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.17: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.18: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.19: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.20: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.21: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.22: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.23: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.24: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.25: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.26: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.27: osd_backfill_full_ratio = '0.92' (unchangeable)

osd.28: osd_backfill_full_ratio = '0.92' (unchangeable)

[root@storcp ~]#

On Wed, Dec 20, 2017 at 1:57 AM, David C <dcsysengin...@gmail.com> wrote:

> What's your backfill full ratio? You may be able to get healthy by
> increasing your backfill full ratio (in small increments). But your next
> immediate task should be to add more OSDs or remove data.
>
>
> On 19 Dec 2017 4:26 p.m., "Nghia Than" <cont...@trungnghia.info> wrote:
>
> Hi,
>
> My Ceph cluster has been stuck like this for a few days; we added new OSDs
> and nothing changed:
>
> - *17 pgs backfill_toofull*
> - *17 pgs stuck unclean*
> - *recovery 21/5156264 objects degraded (0.000%)*
> - *recovery 52908/5156264 objects misplaced (1.026%)*
> - *8 near full osd(s)*
>
> And here is my ceph health detail:
>
> HEALTH_WARN 17 pgs backfill_toofull; 17 pgs stuck unclean; recovery
> 21/5156264 objects degraded (0.000%); recovery 52908/5156264 objects
> misplaced (1.026%); 8 near full osd(s)
>
> pg 1.231 is stuck unclean for 4367.09, current state
> active+remapped+backfill_toofull, last acting [24,9]
>
> pg 1.1e8 is stuck unclean for 7316.364770, current state
> active+remapped+backfill_toofull, last acting [16,3]
>
> pg 1.188 is stuck unclean for 7315.400227, current state
> active+remapped+backfill_toofull, last acting [11,7]
>
> pg 1.158 is stuck unclean for 7321.511627, current state
> active+remapped+backfill_toofull, last acting [11,17]
>
> pg 1.81 is stuck unclean for 4366.683703, current state
> active+remapped+backfill_toofull, last acting [10,24]
>
> pg 1.332 is stuck unclean for 7315.248115, current state
> active+remapped+backfill_toofull, last acting [23,1]
>
> pg 1.2c2 is stuck unclean for 4365.635413, current state
> active+remapped+backfill_toofull, last acting [24,13]
>
> pg 1.3c6 is stuck unclean for 7320.816089, current state
> active+remapped+backfill_toofull, last acting [11,20]
>
> pg 1.26f is stuck unclean for 7315.882215, current state
> active+remapped+backfill_toofull, last acting [28,8]
>
> pg 1.236 is stuck unclean for 7322.152706, current state
> active+remapped+backfill_toofull, last acting [8,26]
>
> pg 1.249 is stuck unclean for 4366.885751, current state
> active+remapped+backfill_toofull, last acting [9,24]
>
> pg 1.7b is stuck unclean for 7315.353072, current state
> active+remapped+backfill_toofull, last acting [28,3]
>
> pg 1.1ec is stuck unclean for 7315.981062, current state
> active+remapped+backfill_toofull, last acting [16,0]
>
> pg 1.248 is stuck unclean for 7324.062482, current state
> active+remapped+backfill_toofull, last acting [16,3]
>
> pg 1.e4 is stuck unclean for 4370.009328, current state
> active+remapped+backfill_toofull, last acting [21,24]
>
> pg 1.144 is stuck unclean for 7317.998393, current state
> active+remapped+backfill_toofull, last acting [26,3]
>
> pg 0.5f is stuck unclean for 5

[ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread Nghia Than
   160G 81.90 1.05  90

15 0.86800  1.0   888G   627G   260G 70.64 0.91  91

16 0.86800  1.0   888G   668G   220G 75.19 0.96  81

17 0.86800  1.0   888G   764G   124G 86.04 1.10  92

20 0.86800  1.0   888G   598G   289G 67.37 0.86  78

21 0.86800  1.0   888G   726G   162G 81.73 1.05  87

22 0.86800  1.0   888G   707G   181G 79.60 1.02  92

23 0.86800  1.0   888G   804G 85861M 90.57 1.16 104

24 0.86800  0.7   888G   726G   162G 81.73 1.05  90

25 0.86800  1.0   888G   579G   308G 65.24 0.84  80

26 0.86800  1.0   888G   696G   192G 78.36 1.00  95

27 0.86800  1.0   888G   757G   131G 85.20 1.09  98

28 0.86800  1.0   888G   758G   130G 85.29 1.09 104

  TOTAL 25775G 20115G  5660G 78.04

MIN/MAX VAR: 0.78/1.19  STDDEV: 9.24
[root@storcp ~]#

May I know how to get past this?
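
One thing that often helps while waiting for backfill space is to push data
away from the fullest OSDs; a sketch only, with OSD IDs and values taken as
examples from the listing above:

    # Lower the reweight of the fullest OSD (e.g. osd.23 at ~90%)
    ceph osd reweight 23 0.85

    # Or let Ceph pick the most over-utilized OSDs automatically
    # (targets OSDs above 110% of the average utilization)
    ceph osd reweight-by-utilization 110

Lowering a reweight moves PGs off that OSD, so expect extra backfill traffic
while the data migrates.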

--
==
Nghia Than
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com