Ceph is gradually migrating object data to the new placement groups.
Eventually pgp_num will reach 256. It might take a few days.
I don't know about removed_snaps_queue, but I don't think it is related to the
placement group change. You can search the mailing list archive for more
information.
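If you want to keep an eye on the progress, something along these lines
should work (assuming the pool is called 'volumes', as in your case):

  ceph osd pool ls detail | grep volumes   # pgp_num vs pgp_num_target
  ceph osd pool get volumes pgp_num        # current effective pgp_num
  ceph -s | grep misplaced                 # current misplaced object ratio

pgp_num should creep up towards pg_num each time the misplaced ratio drops
back under the throttle threshold.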
On Sat
Hello Pierre,
Thank you for your reply. This is the output of the above command:
pool 6 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins
pg_num 256 pgp_num 174 pgp_num_target 256 autoscale_mode off
last_change 132132 lfor 0/0/130849 flags hashpspool,selfmanaged_snaps stripe_
Hi Michel,
This is expected behaviour. As described in Nautilus release notes [1], the
`target_max_misplaced_ratio` option throttles both balancer activity and
automated adjustments to pgp_num (normally as a result of pg_num changes).
Its default value is 0.05 (5%).
Use `ceph osd pool ls detail` to see how pgp_num is progressing towards
pgp_num_target.
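If the process is taking too long for you, the throttle can be inspected and,
if you are willing to accept more concurrent data movement, raised. A rough
sketch (target_max_misplaced_ratio is a mgr setting; 0.10 below is just an
example value):

  ceph config get mgr target_max_misplaced_ratio       # default is 0.05
  ceph config set mgr target_max_misplaced_ratio 0.10

Setting it back to 0.05 afterwards restores the default behaviour.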
Hello,
This is expected behaviour, as the misplaced ratio in the balancer is set to
5% by default.
See older threads about that [1].
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/D6VWHYKJGJQ22P2OGW5RZAHBWXB354S4/
Best regards
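To confirm it is this throttle at work rather than the balancer itself moving
data around, checking both is cheap (commands as in recent releases):

  ceph balancer status   # whether the balancer is active, and its mode
  ceph pg stat           # quick view of the current misplaced object count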
> On 9 Mar 2024, at 09:38, Michel Niyoyita wrote:
>
> Hello team
Hello team,
I have increased my volumes pool from 128 PGs to 256 PGs. The activity
started yesterday at 5 PM. It started with 5.733% of misplaced objects;
after 4 to 5 hours it had come down to 5.022%, but after that it went back
to the initial percentage of 5.733%. Kindly help me solve the issue. I