Shoot for an average across your OSD sizes, or let the autoscaler handle it.
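
If you go the autoscaler route, a minimal sketch (the pool name "mypool" is just a
placeholder for illustration) to enable it per pool and to hint that a pool will
hold the bulk of the data:

# "mypool" is a hypothetical pool name - substitute your own
ceph osd pool set mypool pg_autoscale_mode on
ceph osd pool set mypool bulk true
ceph osd pool autoscale-status    # review what the autoscaler intends to do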

Think of the PG replica count on a given OSD as its share of the workload.
Larger OSDs will naturally receive more PGs, and thus more workload.
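
As a back-of-the-envelope illustration (the numbers below are made up, not taken
from your cluster): with 20 OSDs, a single replicated pool with size=3, and a
target of 300 PG replicas per OSD, you'd want roughly 300 * 20 / 3 = 2000 PGs,
rounded to a power of two.  With multiple pools, split that budget by each
pool's expected share of the data.

# Hypothetical numbers and pool name - substitute your own
#   target_pg_per_osd * num_osds / replica_size = 300 * 20 / 3 = 2000 -> 2048
ceph osd pool set mypool pg_num 2048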

# The default is way too low
ceph config set global mon_target_pg_per_osd                      300

# Instruct the balancer to take smaller bites
ceph config set global target_max_misplaced_ratio                0.020000

# Guardrail - when OSD sizes vary it's easy to overshoot the per-OSD PG count,
# especially when something fails.  This does not affect placement, but it
# prevents awkward failures of PGs to activate.
ceph config set global mon_max_pg_per_osd                        1000

# Varying OSD sizes can result in suboptimal balancing by default
ceph config set mgr    mgr/balancer/upmap_max_deviation              1
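
The deviation setting only matters if the upmap balancer is actually running; a
quick check, in case it isn't already on (stock commands, nothing
cluster-specific):

# Confirm the balancer is enabled and in upmap mode
ceph balancer mode upmap
ceph balancer on
ceph balancer status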

Your list of sizes implies that you have both mixed-use 3DWPD SSDs and
read-intensive 1DWPD SSDs.  Is this intentional?  I've yet to see a Ceph
deployment that justified the extra cost of MU SSDs, which are the same
hardware as RI SSDs, just with a different overprovisioning setting.


> On Aug 7, 2025, at 8:08 AM, Albert Shih <albert.s...@obspm.fr> wrote:
> 
> Hi,
> 
> Stupid question but how can I set the number of pg for a pool when I got
> differents (1.7 To , 2.7T, 2.9To and 3.4To) size of ssd ?
> 
> Regards
> -- 
> Albert SHIH 🦫 🐸
> France
> Heure locale/Local time:
> jeu. 07 août 2025 14:06:35 CEST