Hi,

That seems very odd - what do the logs say for the OSDs with slow requests?
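
For example, something like this should surface them (osd.12 is just a
placeholder for one of the affected OSDs, and the log path assumes a
default install):

grep 'slow request' /var/log/ceph/ceph-osd.12.log
ceph daemon osd.12 dump_ops_in_flight
ceph daemon osd.12 dump_historic_ops

The ops dumps show which requests are stuck and at what stage they are
waiting.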

Thanks

On Tue, Nov 24, 2015 at 2:20 AM, Mika c <[email protected]> wrote:

> Hi Sean,
>    Yes, the cluster scrubbing status(scrub + deep scrub) is almost two
> weeks.
>    And the result of execute `ceph pg dump | grep scrub` is empty.
>    But command of "ceph health" show there is "*16 pgs
> active+clean+scrubbing+deep, 2** pgs active+clean+scrubbing*".
>    I have 2 osds have slow requests warning.
>    Is it releated?
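>
>    In case it helps, these are the commands I ran to try to match the
> health output to specific PGs (the pgid 2.1f below is just a made-up
> example):
>
>    ceph pg dump pgs_brief | grep -i scrub
>    ceph pg 2.1f query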
>
>
>
> Best wishes,
> Mika
>
>
> 2015-11-23 17:59 GMT+08:00 Sean Redmond <[email protected]>:
>
>> Hi Mika,
>>
>> Have the scrubs been running for a long time? Can you see which pool
>> they are running on? You can check using `ceph pg dump | grep scrub`.
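>>
>> The pool is encoded in the pgid, so a PG listed as 5.3a, for example,
>> belongs to pool 5. The pool ids can be mapped to names with:
>>
>>    ceph osd lspools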
>>
>> Thanks
>>
>> On Mon, Nov 23, 2015 at 9:32 AM, Mika c <[email protected]> wrote:
>>
>>> Hi cephers,
>>>  We are facing a scrub issue. Our Ceph cluster is running Ubuntu
>>> Trusty / Hammer 0.94.1 and has almost 320 OSD disks across 10 nodes.
>>>  There are more than 30,000 PGs in the cluster.
>>>  The cluster worked fine until last week, when the health status
>>> started to show "active+clean+scrubbing+deep".
>>>  Some PGs finish scrubbing, but the next second another PG starts
>>> scrubbing (or deep scrubbing); this happens every day.
>>>  We did not change the scrub parameters, so it should scrub once per
>>> day and deep scrub once per week.
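>>>
>>>  For reference, the current settings can be checked on a running OSD
>>> like this (osd.0 is just an example; run it on the node hosting that
>>> OSD, and the defaults quoted are Hammer's):
>>>
>>>    ceph daemon osd.0 config show | grep scrub_interval
>>>    # Hammer defaults: osd_scrub_min_interval = 86400 (one day),
>>>    # osd_deep_scrub_interval = 604800 (one week)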
>>>  Has anyone experienced this issue?
>>>
>>>
>>> Best wishes,
>>> Mika
>>>
>>>
>>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
