Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Andrija Panic
Hm... nice. Thx guys


On 13 March 2015 at 12:33, Henrik Korkuc  wrote:

>  I think settings apply to both kinds of scrubs
>
>
> On 3/13/15 13:31, Andrija Panic wrote:
>
> Interesting... thx for that Henrik.
>
>  BTW, my placement groups hold around 1800 objects each (ceph pg dump) -
> meaning a max of about 7GB of data at the moment,
>
>  a regular scrub just took 5-10 sec to finish. A deep scrub would, I guess,
> take some minutes for sure.
>
>  What about deep scrub - its timestamp is still from some months ago, but
> the regular scrub now shows a fresh timestamp...?
>
>  I don't see separate max deep scrub settings - or do these settings apply
> in general to both kinds of scrubs?
>
>  Thanks
>
>
>
> On 13 March 2015 at 12:22, Henrik Korkuc  wrote:
>
>>  I think that there will be no big scrub, as there are limits of maximum
>> scrubs at a time.
>> http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
>>
>> If we take "osd max scrubs" which is 1 by default, then you will not get
>> more than 1 scrub per OSD.
>>
>> I couldn't quickly find if there are cluster wide limits.
>>
>>
>> On 3/13/15 10:46, Wido den Hollander wrote:
>>
>> On 13-03-15 09:42, Andrija Panic wrote:
>>
>>  Hi all,
>>
>> I have set nodeep-scrub and noscrub while I had small/slow hardware for
>> the cluster.
>> It has been off for a while now.
>>
>> Now we are upgraded with hardware/networking/SSDs and I would like to
>> activate - or unset these flags.
>>
>> Since I now have 3 servers with 12 OSDs each (SSD based Journals) - I
>> was wondering what is the best way to unset flags - meaning if I just
>> unset the flags, should I expect that the SCRUB will start all of the
>> sudden on all disks - or is there way to let the SCRUB do drives one by
>> one...
>>
>>
>>  So, I *think* that unsetting these flags will trigger a big scrub, since
>> all PGs have a very old last_scrub_stamp and last_deepscrub_stamp
>>
>> You can verify this with:
>>
>> $ ceph pg  query
>>
>> A solution would be to scrub each PG manually first in a timely fashion.
>>
>> $ ceph pg scrub 
>>
>> That way you set the timestamps and slowly scrub each PG.
>>
>> When that's done, unset the flags.
>>
>> Wido
>>
>>
>>  In other words - should I expect a BIG performance impact or not?
>>
>> Any experience is very appreciated...
>>
>> Thanks,
>>
>> --
>>
>> Andrija Panić
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
>  --
>
> Andrija Panić
>
>
>


-- 

Andrija Panić
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Henrik Korkuc

I think settings apply to both kinds of scrubs
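
So the deep-scrub timestamps can be refreshed the same way, one PG at a
time. A rough, untested sketch (0.7f is only an example PG id, and the
field names in the grep are my assumption - check your own query output):

$ ceph pg deep-scrub 0.7f
$ ceph pg 0.7f query | grep -E 'last_scrub_stamp|last_deep_scrub_stamp'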

On 3/13/15 13:31, Andrija Panic wrote:

Interesting... thx for that Henrik.

BTW, my placement groups hold around 1800 objects each (ceph pg dump) -
meaning a max of about 7GB of data at the moment,


a regular scrub just took 5-10 sec to finish. A deep scrub would, I guess,
take some minutes for sure.


What about deep scrub - its timestamp is still from some months ago, but
the regular scrub now shows a fresh timestamp...?


I don't see separate max deep scrub settings - or do these settings apply
in general to both kinds of scrubs?


Thanks



On 13 March 2015 at 12:22, Henrik Korkuc > wrote:


I think that there will be no big scrub, as there are limits of
maximum scrubs at a time.
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing

If we take "osd max scrubs" which is 1 by default, then you will
not get more than 1 scrub per OSD.

I couldn't quickly find if there are cluster wide limits.


On 3/13/15 10:46, Wido den Hollander wrote:

On 13-03-15 09:42, Andrija Panic wrote:

Hi all,

I have set nodeep-scrub and noscrub while I had small/slow hardware for
the cluster.
It has been off for a while now.

Now we are upgraded with hardware/networking/SSDs and I would like to
activate - or unset these flags.

Since I now have 3 servers with 12 OSDs each (SSD based Journals) - I
was wondering what is the best way to unset flags - meaning if I just
unset the flags, should I expect that the SCRUB will start all of the
sudden on all disks - or is there way to let the SCRUB do drives one by
one...


So, I *think* that unsetting these flags will trigger a big scrub, since
all PGs have a very old last_scrub_stamp and last_deepscrub_stamp

You can verify this with:

$ ceph pg  query

A solution would be to scrub each PG manually first in a timely fashion.

$ ceph pg scrub 

That way you set the timestamps and slowly scrub each PG.

When that's done, unset the flags.

Wido


In other words - should I expect a BIG performance impact or not?

Any experience is very appreciated...

Thanks,

-- 


Andrija Panić


___
ceph-users mailing list
ceph-users@lists.ceph.com  
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com  
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--

Andrija Panić


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Andrija Panic
Interesting... thx for that Henrik.

BTW, my placement groups hold around 1800 objects each (ceph pg dump) - meaning
a max of about 7GB of data at the moment,

a regular scrub just took 5-10 sec to finish. A deep scrub would, I guess,
take some minutes for sure.

What about deep scrub - its timestamp is still from some months ago, but
the regular scrub now shows a fresh timestamp...?

I don't see separate max deep scrub settings - or do these settings apply
in general to both kinds of scrubs?

Thanks



On 13 March 2015 at 12:22, Henrik Korkuc  wrote:

>  I think that there will be no big scrub, as there are limits of maximum
> scrubs at a time.
> http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
>
> If we take "osd max scrubs" which is 1 by default, then you will not get
> more than 1 scrub per OSD.
>
> I couldn't quickly find if there are cluster wide limits.
>
>
> On 3/13/15 10:46, Wido den Hollander wrote:
>
>
> On 13-03-15 09:42, Andrija Panic wrote:
>
>  Hi all,
>
> I have set nodeep-scrub and noscrub while I had small/slow hardware for
> the cluster.
> It has been off for a while now.
>
> Now we are upgraded with hardware/networking/SSDs and I would like to
> activate - or unset these flags.
>
> Since I now have 3 servers with 12 OSDs each (SSD based Journals) - I
> was wondering what is the best way to unset flags - meaning if I just
> unset the flags, should I expect that the SCRUB will start all of the
> sudden on all disks - or is there way to let the SCRUB do drives one by
> one...
>
>
>  So, I *think* that unsetting these flags will trigger a big scrub, since
> all PGs have a very old last_scrub_stamp and last_deepscrub_stamp
>
> You can verify this with:
>
> $ ceph pg  query
>
> A solution would be to scrub each PG manually first in a timely fashion.
>
> $ ceph pg scrub 
>
> That way you set the timestamps and slowly scrub each PG.
>
> When that's done, unset the flags.
>
> Wido
>
>
>  In other words - should I expect a BIG performance impact or not?
>
> Any experience is very appreciated...
>
> Thanks,
>
> --
>
> Andrija Panić
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 

Andrija Panić
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Henrik Korkuc
I think that there will be no big scrub, as there are limits on how many
scrubs can run at a time.

http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing

If we take "osd max scrubs" which is 1 by default, then you will not get 
more than 1 scrub per OSD.


I couldn't quickly find if there are cluster wide limits.
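
For reference, the per-OSD scrub throttles can be inspected on the node
hosting an OSD via its admin socket (osd.0 is just an example target, and
the exact option names may differ between releases):

$ ceph daemon osd.0 config show | grep -E 'osd_max_scrubs|osd_scrub|osd_deep_scrub'

and, if needed, adjusted at runtime:

$ ceph tell osd.\* injectargs '--osd-max-scrubs 1'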

On 3/13/15 10:46, Wido den Hollander wrote:


On 13-03-15 09:42, Andrija Panic wrote:

Hi all,

I have set nodeep-scrub and noscrub while I had small/slow hardware for
the cluster.
It has been off for a while now.

Now we are upgraded with hardware/networking/SSDs and I would like to
activate - or unset these flags.

Since I now have 3 servers with 12 OSDs each (SSD based Journals) - I
was wondering what is the best way to unset flags - meaning if I just
unset the flags, should I expect that the SCRUB will start all of the
sudden on all disks - or is there way to let the SCRUB do drives one by
one...


So, I *think* that unsetting these flags will trigger a big scrub, since
all PGs have a very old last_scrub_stamp and last_deepscrub_stamp

You can verify this with:

$ ceph pg  query

A solution would be to scrub each PG manually first in a timely fashion.

$ ceph pg scrub 

That way you set the timestamps and slowly scrub each PG.

When that's done, unset the flags.

Wido


In other words - should I expect a BIG performance impact or not?

Any experience is very appreciated...

Thanks,

--

Andrija Panić


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Andrija Panic
Will do, of course :)

Thx Wido for the quick help, as always!

On 13 March 2015 at 12:04, Wido den Hollander  wrote:

>
>
> On 13-03-15 12:00, Andrija Panic wrote:
> > Nice - so I just realized I need to manually scrub 1216 placement
> > groups :)
> >
>
> With manual I meant using a script.
>
> Loop through 'ceph pg dump', get the PGid, issue a scrub, sleep for X
> seconds and issue the next scrub.
>
> Wido
>
> >
> > On 13 March 2015 at 10:16, Andrija Panic  > > wrote:
> >
> > Thanks Wido - I will do that.
> >
> > On 13 March 2015 at 09:46, Wido den Hollander  > > wrote:
> >
> >
> >
> > On 13-03-15 09:42, Andrija Panic wrote:
> > > Hi all,
> > >
> > > I have set nodeep-scrub and noscrub while I had small/slow
> hardware for
> > > the cluster.
> > > It has been off for a while now.
> > >
> > > Now we are upgraded with hardware/networking/SSDs and I would
> like to
> > > activate - or unset these flags.
> > >
> > > Since I now have 3 servers with 12 OSDs each (SSD based
> Journals) - I
> > > was wondering what is the best way to unset flags - meaning if
> I just
> > > unset the flags, should I expect that the SCRUB will start all
> of the
> > > sudden on all disks - or is there way to let the SCRUB do
> drives one by
> > > one...
> > >
> >
> > So, I *think* that unsetting these flags will trigger a big
> > scrub, since
> > all PGs have a very old last_scrub_stamp and last_deepscrub_stamp
> >
> > You can verify this with:
> >
> > $ ceph pg  query
> >
> > A solution would be to scrub each PG manually first in a timely
> > fashion.
> >
> > $ ceph pg scrub 
> >
> > That way you set the timestamps and slowly scrub each PG.
> >
> > When that's done, unset the flags.
> >
> > Wido
> >
> > > In other words - should I expect a BIG performance impact or not?
> > >
> > > Any experience is very appreciated...
> > >
> > > Thanks,
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> > >
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com 
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com 
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
> >
> >
> >
> > --
> >
> > Andrija Panić
>



-- 

Andrija Panić
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Wido den Hollander


On 13-03-15 12:00, Andrija Panic wrote:
> Nice - so I just realized I need to manually scrub 1216 placement groups :)
> 

With manual I meant using a script.

Loop through 'ceph pg dump', get the PGid, issue a scrub, sleep for X
seconds and issue the next scrub.
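
Something along these lines should do it (untested sketch - tune the sleep
interval to your cluster, and note that the 'ceph pg dump' output format
can vary a bit between releases):

#!/bin/bash
# Scrub every PG one by one, pausing between scrubs to spread the load.
SLEEP=30   # seconds between scrubs - pick what your disks can handle

# PG ids in 'ceph pg dump' look like "0.7f"; take the first column of those lines.
for pg in $(ceph pg dump 2>/dev/null | awk '/^[0-9]+\.[0-9a-f]+/ {print $1}'); do
    echo "scrubbing ${pg}"
    ceph pg scrub "${pg}"
    sleep "${SLEEP}"
done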

Wido

> 
> On 13 March 2015 at 10:16, Andrija Panic  > wrote:
> 
> Thanks Wido - I will do that.
> 
> On 13 March 2015 at 09:46, Wido den Hollander  > wrote:
> 
> 
> 
> On 13-03-15 09:42, Andrija Panic wrote:
> > Hi all,
> >
> > I have set nodeep-scrub and noscrub while I had small/slow hardware 
> for
> > the cluster.
> > It has been off for a while now.
> >
> > Now we are upgraded with hardware/networking/SSDs and I would like 
> to
> > activate - or unset these flags.
> >
> > Since I now have 3 servers with 12 OSDs each (SSD based Journals) - 
> I
> > was wondering what is the best way to unset flags - meaning if I 
> just
> > unset the flags, should I expect that the SCRUB will start all of 
> the
> > sudden on all disks - or is there way to let the SCRUB do drives 
> one by
> > one...
> >
> 
> So, I *think* that unsetting these flags will trigger a big
> scrub, since
> all PGs have a very old last_scrub_stamp and last_deepscrub_stamp
> 
> You can verify this with:
> 
> $ ceph pg  query
> 
> A solution would be to scrub each PG manually first in a timely
> fashion.
> 
> $ ceph pg scrub 
> 
> That way you set the timestamps and slowly scrub each PG.
> 
> When that's done, unset the flags.
> 
> Wido
> 
> > In other words - should I expect a BIG performance impact or not?
> >
> > Any experience is very appreciated...
> >
> > Thanks,
> >
> > --
> >
> > Andrija Panić
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com 
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 
> 
> -- 
> 
> Andrija Panić
> 
> 
> 
> 
> -- 
> 
> Andrija Panić
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Andrija Panic
Nice - so I just realized I need to manually scrub 1216 placement groups :)


On 13 March 2015 at 10:16, Andrija Panic  wrote:

> Thanks Wido - I will do that.
>
> On 13 March 2015 at 09:46, Wido den Hollander  wrote:
>
>>
>>
>> On 13-03-15 09:42, Andrija Panic wrote:
>> > Hi all,
>> >
>> > I have set nodeep-scrub and noscrub while I had small/slow hardware for
>> > the cluster.
>> > It has been off for a while now.
>> >
>> > Now we are upgraded with hardware/networking/SSDs and I would like to
>> > activate - or unset these flags.
>> >
>> > Since I now have 3 servers with 12 OSDs each (SSD based Journals) - I
>> > was wondering what is the best way to unset flags - meaning if I just
>> > unset the flags, should I expect that the SCRUB will start all of the
>> > sudden on all disks - or is there way to let the SCRUB do drives one by
>> > one...
>> >
>>
>> So, I *think* that unsetting these flags will trigger a big scrub, since
>> all PGs have a very old last_scrub_stamp and last_deepscrub_stamp
>>
>> You can verify this with:
>>
>> $ ceph pg  query
>>
>> A solution would be to scrub each PG manually first in a timely fashion.
>>
>> $ ceph pg scrub 
>>
>> That way you set the timestamps and slowly scrub each PG.
>>
>> When that's done, unset the flags.
>>
>> Wido
>>
>> > In other words - should I expect a BIG performance impact or not?
>> >
>> > Any experience is very appreciated...
>> >
>> > Thanks,
>> >
>> > --
>> >
>> > Andrija Panić
>> >
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>
> --
>
> Andrija Panić
>



-- 

Andrija Panić
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Andrija Panic
Thanks Wido - I will do that.

On 13 March 2015 at 09:46, Wido den Hollander  wrote:

>
>
> On 13-03-15 09:42, Andrija Panic wrote:
> > Hi all,
> >
> > I have set nodeep-scrub and noscrub while I had small/slow hardware for
> > the cluster.
> > It has been off for a while now.
> >
> > Now we are upgraded with hardware/networking/SSDs and I would like to
> > activate - or unset these flags.
> >
> > Since I now have 3 servers with 12 OSDs each (SSD based Journals) - I
> > was wondering what is the best way to unset flags - meaning if I just
> > unset the flags, should I expect that the SCRUB will start all of the
> > sudden on all disks - or is there way to let the SCRUB do drives one by
> > one...
> >
>
> So, I *think* that unsetting these flags will trigger a big scrub, since
> all PGs have a very old last_scrub_stamp and last_deepscrub_stamp
>
> You can verify this with:
>
> $ ceph pg  query
>
> A solution would be to scrub each PG manually first in a timely fashion.
>
> $ ceph pg scrub 
>
> That way you set the timestamps and slowly scrub each PG.
>
> When that's done, unset the flags.
>
> Wido
>
> > In other words - should I expect a BIG performance impact or not?
> >
> > Any experience is very appreciated...
> >
> > Thanks,
> >
> > --
> >
> > Andrija Panić
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 

Andrija Panić
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Turning on SCRUB back on - any suggestion ?

2015-03-13 Thread Wido den Hollander


On 13-03-15 09:42, Andrija Panic wrote:
> Hi all,
> 
> I have set nodeep-scrub and noscrub while I had small/slow hardware for
> the cluster.
> It has been off for a while now.
> 
> Now we are upgraded with hardware/networking/SSDs and I would like to
> activate - or unset these flags.
> 
> Since I now have 3 servers with 12 OSDs each (SSD based Journals) - I
> was wondering what is the best way to unset flags - meaning if I just
> unset the flags, should I expect that the SCRUB will start all of a
> sudden on all disks - or is there a way to let the SCRUB do drives one by
> one...
> 

So, I *think* that unsetting these flags will trigger a big scrub, since
all PGs have a very old last_scrub_stamp and last_deep_scrub_stamp.

You can verify this with:

$ ceph pg <pgid> query

A solution would be to scrub each PG manually first in a timely fashion.

$ ceph pg scrub <pgid>

That way you set the timestamps and slowly scrub each PG.

When that's done, unset the flags.
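
(i.e. once every PG carries a fresh stamp, clear the flags with:)

$ ceph osd unset noscrub
$ ceph osd unset nodeep-scrub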

Wido

> In other words - should I expect a BIG performance impact or not?
> 
> Any experience is very appreciated...
> 
> Thanks,
> 
> -- 
> 
> Andrija Panić
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com