Re: [ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-28 Thread Arvydas Opulskis
Hi,

we have the same situation with one PG on a different cluster of ours. Scrubs
and deep-scrubs are running over and over for the same PG (38.34). I've
logged a period with a deep-scrub and several scrubs repeating. The OSD log
from the primary OSD can be found here:
https://www.dropbox.com/s/njmixbgzkfo1wws/ceph-osd.377.log.gz?dl=0

The cluster is Jewel 10.2.2. Btw, restarting the primary OSD service doesn't help.

Br,
Arvydas
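
A minimal sketch of confirming this kind of scrub churn on a specific PG,
assuming the standard ceph CLI and using 38.34 from this report:

    # Inspect the PG's scrub history; under default settings these stamps
    # should advance no faster than roughly once per day
    ceph pg 38.34 query | grep -E 'scrub_stamp|last_scrub'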


On Wed, Sep 21, 2016 at 2:35 PM, Samuel Just wrote:

> Ah, same question then.  If we can get logging on the primary for one
> of those pgs, it should be fairly obvious.
> -Sam

Re: [ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-21 Thread Samuel Just
Ah, same question then.  If we can get logging on the primary for one
of those pgs, it should be fairly obvious.
-Sam
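
A sketch of how to locate that primary, assuming the standard ceph CLI and
using pg 25.3f / osd.12 from the original report as example names:

    # The first OSD id in the acting set is the primary
    ceph pg map 25.3f

    # Find the host running that OSD
    ceph osd find 12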



Re: [ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-21 Thread Pavan Rallabhandi
We see this as well in our freshly built Jewel clusters, and it seems to happen
only with a handful of PGs from a couple of pools.

Thanks!



Re: [ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-21 Thread Samuel Just
Can you reproduce with logging on the primary for that pg?

debug osd = 20
debug filestore = 20
debug ms = 1

Since restarting the osd may be a workaround, can you inject the debug
values without restarting the daemon?
-Sam
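
A minimal sketch of injecting those values at runtime, assuming osd.12 is the
primary of the affected PG; the injectargs form works from any admin node,
while the admin-socket form must run on the OSD's own host:

    # Raise the debug levels without a restart
    ceph tell osd.12 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'

    # Equivalent per-option form via the admin socket
    ceph daemon osd.12 config set debug_osd 20

Debug 20 logging is very verbose, so the levels should be dropped back down
once the loop has been captured.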



Re: [ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-21 Thread Tobias Böhm
Hi,

there is an open bug in the tracker: http://tracker.ceph.com/issues/16474

It also suggests restarting OSDs as a workaround. We faced the same issue after
increasing the number of PGs in our cluster, and restarting the OSDs solved it
as well.

Tobias
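
A sketch of the restart workaround itself, assuming a systemd-managed Jewel
node and osd.12 as the primary of the affected PG:

    # Restart the primary OSD for the looping PG
    systemctl restart ceph-osd@12
    # (on sysvinit installs: /etc/init.d/ceph restart osd.12)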



Re: [ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-21 Thread Dan van der Ster
There was a thread about this a few days ago:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012857.html
And the OP found a workaround.
Looks like a bug, though (by default, PGs scrub at most once per day).

-- dan
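
For comparison, a sketch of checking the effective scrub intervals on an OSD
via its admin socket (osd.12 is an example id; the defaults noted are the
stock Jewel values):

    # Dump the scrub interval settings
    ceph daemon osd.12 config show | grep -E 'osd_(deep_)?scrub.*interval'
    # Expected Jewel defaults:
    #   osd_scrub_min_interval  = 86400   (one day)
    #   osd_scrub_max_interval  = 604800  (one week)
    #   osd_deep_scrub_interval = 604800  (one week)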





[ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-20 Thread Martin Bureau
Hello,


I noticed that the same pg gets scrubbed repeatedly on our new Jewel cluster:


Here's an excerpt from the log:


2016-09-20 20:36:31.236123 osd.12 10.1.82.82:6820/14316 150514 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:32.232918 osd.12 10.1.82.82:6820/14316 150515 : cluster [INF] 25.3f scrub starts
2016-09-20 20:36:32.236876 osd.12 10.1.82.82:6820/14316 150516 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:33.233268 osd.12 10.1.82.82:6820/14316 150517 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:33.242258 osd.12 10.1.82.82:6820/14316 150518 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:36.233604 osd.12 10.1.82.82:6820/14316 150519 : cluster [INF] 25.3f scrub starts
2016-09-20 20:36:36.237221 osd.12 10.1.82.82:6820/14316 150520 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:41.234490 osd.12 10.1.82.82:6820/14316 150521 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:41.243720 osd.12 10.1.82.82:6820/14316 150522 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:45.235128 osd.12 10.1.82.82:6820/14316 150523 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:45.352589 osd.12 10.1.82.82:6820/14316 150524 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:47.235310 osd.12 10.1.82.82:6820/14316 150525 : cluster [INF] 25.3f scrub starts
2016-09-20 20:36:47.239348 osd.12 10.1.82.82:6820/14316 150526 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:49.235538 osd.12 10.1.82.82:6820/14316 150527 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:49.243121 osd.12 10.1.82.82:6820/14316 150528 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:51.235956 osd.12 10.1.82.82:6820/14316 150529 : cluster [INF] 25.3f deep-scrub starts
2016-09-20 20:36:51.244201 osd.12 10.1.82.82:6820/14316 150530 : cluster [INF] 25.3f deep-scrub ok
2016-09-20 20:36:52.236076 osd.12 10.1.82.82:6820/14316 150531 : cluster [INF] 25.3f scrub starts
2016-09-20 20:36:52.239376 osd.12 10.1.82.82:6820/14316 150532 : cluster [INF] 25.3f scrub ok
2016-09-20 20:36:56.236740 osd.12 10.1.82.82:6820/14316 150533 : cluster [INF] 25.3f scrub starts


How can I troubleshoot / resolve this?


Regards,

Martin
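
A sketch of watching the loop live while troubleshooting, using pg 25.3f from
the log above:

    # Stream the cluster log and filter for the affected PG; repeated
    # "scrub starts" lines seconds apart confirm the loop
    ceph -w | grep 25.3f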
