1 backfill per OSD has never severely impacted performance, as far as I know. That is
a very small amount of I/O. I run with 2-5 in each of my clusters. When an
OSD comes up, the map changes enough that more PGs will move than just
those backfilling onto the new OSD.
To modify how many backfills are allowed, change the osd_max_backfills setting.
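For reference, a sketch of how that setting can be changed (the exact syntax can vary by Ceph release, so treat these commands as an illustration rather than the authoritative form):

```shell
# Change osd_max_backfills on all running OSDs at runtime.
# Note: injectargs does not persist across OSD restarts.
ceph tell osd.* injectargs '--osd_max_backfills 1'

# To make the change persistent, set it in ceph.conf under [osd]:
#   [osd]
#   osd max backfills = 1
```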
The explanation of osd_max_backfills from the docs is below.

osd max backfills
Description: The maximum number of backfills allowed to or from a single OSD.
Type:        64-bit Unsigned Integer
Default:     1
So, I think this option does not limit the number of OSDs involved in backfill activity.
Thank you for the comment.
I understand what you mean.
When one OSD goes down, its PGs are spread across the whole cluster, so
each node can run one backfill/recovery per OSD, and the cluster shows
many backfills/recoveries at once.
On the other side, when one OSD comes up, that OSD needs to copy data for
all of the PGs mapped to it.
osd_max_backfills is a per-OSD setting. With it set to 1, each OSD will
only be involved in a single backfill/recovery at a time. However, the
cluster as a whole will run as many backfills as it can while each
individual OSD is only involved in one.
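You can see this for yourself while a backfill is in progress: even with osd_max_backfills = 1, many PGs are typically in a backfilling state at once, because different PGs involve different OSDs. A minimal sketch (command output will of course depend on your cluster):

```shell
# Cluster-wide status; the recovery/backfill line shows overall activity.
ceph -s

# Count PGs currently backfilling across the whole cluster.
ceph pg dump pgs_brief 2>/dev/null | grep -c backfilling
```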
On Wed, Aug 9, 2017 at 10:58 PM, 하현 wrote:
Hi ceph experts.
I am confused about setting a limit with osd max backfills.
When an OSD goes down, recovery occurs, and the same happens when an OSD comes up.
I want to limit backfills to 1.
So, I set the config as below.
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show|egrep
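The command above is cut off in the quote; a plausible way to check the running value via the admin socket looks like this (the egrep pattern is my guess, not the original):

```shell
# Show backfill-related settings for osd.0 via the admin socket
# (the egrep pattern here is an assumption, not from the original mail).
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | egrep backfill

# Or fetch just the single option:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_max_backfills
```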