Hi,

I wonder whether anyone has run into issues with snaptrimming while following 
the Ceph PG allocation recommendation (~100 PGs per OSD)?
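
For context, the per-OSD PG count is visible in the PGS column of ceph osd df, 
and the per-pool pg_num can be checked directly (a sketch; "db1" stands in for 
one of our pool names):

    # PGS column shows how many PG replicas each OSD currently holds
    ceph osd df tree

    # current pg_num of a given pool
    ceph osd pool get db1 pg_num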

We have a Nautilus cluster, and we are afraid to increase the PG counts of the 
pools, because it seems that even with 4 OSDs per NVMe, a higher PG number 
means slower snaptrimming.
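
As far as we understand, these are the Nautilus-era options that throttle 
snaptrim; a sketch only, and the values below are illustrative rather than 
recommendations:

    # seconds each OSD sleeps between snap trim operations
    # (higher = slower trimming, less client impact)
    ceph config set osd osd_snap_trim_sleep 0.5

    # how many PGs one OSD will trim concurrently (default 2)
    ceph config set osd osd_max_trimming_pgs 2

    # priority of snap trim work relative to client I/O (default 5)
    ceph config set osd osd_snap_trim_priority 5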

For example, we have these pools:

Db1: pool size 64,504G with 512 PGs
Db2: pool size 92,242G with 256 PGs
Snapshots on Db2 are removed faster than on Db1.
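
From the raw numbers, the average PG size works out to:

    Db1: 64,504G / 512 PGs ~ 126G per PG
    Db2: 92,242G / 256 PGs ~ 360G per PG

So Db2's PGs are roughly three times larger, yet its snapshots still trim 
faster, which is why we suspect the PG count itself rather than the PG size.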

For this reason our OSDs are very underutilized from a PG point of view: each 
OSD holds at most ~25 gigantic PGs, which makes all maintenance very difficult 
due to backfillfull and OSD-full issues.
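
If we ever do split, presumably it would look roughly like this on Nautilus (a 
sketch; db1 and the step sizes are illustrative, and as far as we know the mgr 
ramps pgp_num toward pg_num automatically on Nautilus):

    # raise pg_num in small steps instead of jumping straight to the target
    ceph osd pool set db1 pg_num 768
    # let the cluster settle, then continue toward e.g. 1024
    ceph osd pool set db1 pg_num 1024

    # keep backfill gentle to avoid backfillfull during the splits
    ceph config set osd osd_max_backfills 1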

Do you have any recommendations if you use this feature?

Thank you
