Hi all!
I have a Ceph cluster with 10 OSDs, all of them in a single node.
Since the cluster was built from the beginning with just one OSD node,
the crush map defaulted to replicating across OSDs.
Here is the relevant part from my crushmap:
# rules
rule replicated_ruleset {
You just need to change your rule from

    step chooseleaf firstn 0 type osd

to

    step chooseleaf firstn 0 type host
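
In case it is useful, the usual way to make that change is to pull the
CRUSH map out, decompile it, edit the rule, and inject it back in (the
file names below are arbitrary):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: in replicated_ruleset change "type osd" to "type host"
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Make sure the new host and its OSDs already show up in "ceph osd tree"
before injecting the new map, otherwise there is no second host for the
rule to choose from.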
There will be data movement, as Ceph will want to move about half of the
objects to the new host. There will also be new data written as you go
from size 1 to size 2, since the second replica has to be created. As
far as I know a deep
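
The size 1 to size 2 change itself is just a per-pool setting; "rbd"
below is only a placeholder for your pool name:

    ceph osd pool set rbd size 2
    # optional, makes the minimum explicit
    ceph osd pool set rbd min_size 1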
Georgios,
it really depends on how busy and powerful your cluster is, as Robert
wrote.
If in doubt, lower the backfill value as pointed out by Robert.
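
For example, something along these lines can be injected at runtime and
raised again once the cluster has settled (the values are only examples):

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'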
Look at the osd_scrub_load_threshold and, with new enough versions of
Ceph, at the osd_scrub_sleep setting; these are very helpful in keeping
deep scrubs from hurting client I/O too much.
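
For reference, those settings live in the [osd] section of ceph.conf;
the numbers below are only examples, tune them to your hardware:

    [osd]
    osd scrub load threshold = 0.5
    osd scrub sleep = 0.1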
I don't believe that you can set the schedule of the deep scrubs.
People who want that kind of control disable deep scrubs and run a
script to scrub all PGs. For the other options, you should look
through http://ceph.com/docs/master/rados/configuration/osd-config-ref/
and find what you feel might be worth tuning.
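
A rough sketch of the "disable it and script it" approach, in case it
helps (untested, and the sleep is only there to spread the load out):

    # stop the OSDs from scheduling deep scrubs on their own
    ceph osd set nodeep-scrub

    # from cron, during a quiet window: deep-scrub every PG, slowly
    for pg in $(ceph pg dump pgs_brief 2>/dev/null | awk '$1 ~ /^[0-9]+\./ {print $1}'); do
        ceph pg deep-scrub "$pg"
        sleep 30
    done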