I think the settings apply to both kinds of scrubs.
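For example, the scrub-related knobs live in the [osd] section of ceph.conf and would look roughly like this (values shown are the stock defaults, purely for illustration, not a recommendation):

[osd]
osd max scrubs = 1                 # max concurrent scrub operations per OSD (regular and deep)
osd scrub min interval = 86400     # don't start a regular scrub of a PG more than once a day
osd deep scrub interval = 604800   # deep-scrub each PG roughly once a week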
On 3/13/15 13:31, Andrija Panic wrote:
Interesting... thanks for that, Henrik.
BTW, my placement groups are around 1800 objects each (ceph pg dump) -
meaning a max of 7GB of data at the moment.
A regular scrub just took 5-10 seconds to finish; a deep scrub would, I
guess, take some minutes for sure.
What about deep scrub - its timestamp is still from some months ago, while
the regular scrub is fine now with a fresh timestamp...?
I don't see any max deep scrub settings - or are these settings applied
in general to both kinds of scrubs?
Thanks
On 13 March 2015 at 12:22, Henrik Korkuc <[email protected]> wrote:
I think there will be no big scrub, as there is a limit on the maximum
number of scrubs running at a time.
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
If we take "osd max scrubs", which is 1 by default, then you will
not get more than 1 scrub per OSD at a time.
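For example (just a sketch, assuming you can reach the OSD admin socket on its host), you can check the current value and change it at runtime with something like:

$ ceph daemon osd.0 config get osd_max_scrubs
$ ceph tell osd.* injectargs '--osd-max-scrubs 1'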
I couldn't quickly find whether there are cluster-wide limits.
On 3/13/15 10:46, Wido den Hollander wrote:
On 13-03-15 09:42, Andrija Panic wrote:
Hi all,
I set the nodeep-scrub and noscrub flags while I had small/slow hardware for
the cluster.
Scrubbing has been off for a while now.
Now we have upgraded the hardware/networking/SSDs and I would like to
activate scrubbing again - i.e. unset these flags.
Since I now have 3 servers with 12 OSDs each (SSD-based journals), I
was wondering what the best way to unset the flags is - meaning, if I just
unset them, should I expect scrubbing to start all of a sudden on all
disks, or is there a way to let the scrub work through the drives one by
one...
So, I *think* that unsetting these flags will trigger a big scrub, since
all PGs have a very old last_scrub_stamp and last_deep_scrub_stamp.
You can verify this with:
$ ceph pg <pgid> query
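For example, something along these lines (field names may vary slightly between releases; 0.1a is just a placeholder pgid) pulls out only the timestamps:

$ ceph pg 0.1a query | grep -E 'last_scrub_stamp|last_deep_scrub_stamp'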
A solution would be to scrub each PG manually first in a timely fashion.
$ ceph pg scrub <pgid>
That way you set the timestamps and slowly scrub each PG.
When that's done, unset the flags.
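As a rough sketch of how that could look (assuming the first column of "ceph pg dump pgs_brief" is the pgid; adjust the sleep to a pace your cluster can handle):

$ for pg in $(ceph pg dump pgs_brief 2>/dev/null | awk '$1 ~ /^[0-9]+\./ {print $1}'); do
>     ceph pg scrub $pg
>     sleep 60
> done
$ ceph osd unset noscrub
$ ceph osd unset nodeep-scrub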
Wido
In other words - should I expect a BIG performance impact, or... not?
Any experience is much appreciated...
Thanks,
--
Andrija Panić
--
Andrija Panić
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com