Hello Alwin,

In this post https://forum.proxmox.com/threads/ceph-octopus-upgrade-notes-think-twice-before-enabling-auto-scale.80105/#post-399654

I read the advice "set the target ratio to 1 and call it a day". In my case I set the target ratio of vm.pool to 1:

ceph osd pool autoscale-status
POOL                    SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics   22216k  500.0G       2.0   106.4T        0.0092                                 1.0   8                   on
vm.pool                 2734G                3.0   106.4T        0.0753  1.0000        0.8180           1.0   512                 on
cephfs_data             0                    2.0   106.4T        0.0000  0.2000        0.1636           1.0   128                 on
cephfs_metadata         27843k  500.0G       2.0   106.4T        0.0092                                 4.0   32                  on
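For reference, the values in that output correspond to settings like these (example commands from my side, the equivalent can also be set in the Proxmox GUI):

  ceph osd pool set vm.pool target_size_ratio 1.0
  ceph osd pool set cephfs_data target_size_ratio 0.2
  ceph osd pool set device_health_metrics target_size_bytes 500G
  ceph osd pool set cephfs_metadata target_size_bytes 500G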

Do you think I also need to set a target ratio on cephfs_metadata and device_health_metrics?

On the cephfs_data pool I set the target ratio to 0.2.

Or does the target ratio on vm.pool need to be not 1 but something higher?
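As far as I understand, the target ratios are only relative to each other, so with the values above the autoscaler should work it out roughly like this (my own calculation from the output, please correct me if I am wrong):

  sum of target ratios:           1.0 (vm.pool) + 0.2 (cephfs_data) = 1.2
  reserved by target_size pools:  2 x 500G x replica 2 = ~2T of 106.4T = ~0.018
  effective ratio vm.pool:        1.0 / 1.2 * (1 - 0.018) = ~0.818
  effective ratio cephfs_data:    0.2 / 1.2 * (1 - 0.018) = ~0.164

That matches the EFFECTIVE RATIO column, so whether I use 1 or a bigger number should only matter relative to the ratio on cephfs_data, if I read the docs right.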



On 31.01.2022 15:05, Alwin Antreich wrote:
Hello Sergey,

January 31, 2022 9:58 AM, "Сергей Цаболов" <[email protected]> wrote:
My question is how I can decrease MAX AVAIL on the default pools
device_health_metrics + cephfs_metadata and give it to vm.pool and
cephfs_data.
The max_avail is calculated from the cluster-wide AVAIL and the pool's USED,
taking the replication size / EC profile into account.

Cheers,
Alwin
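
If I follow that correctly, a very rough check with my numbers (ignoring the full ratio, uneven OSD usage and the tiny pools) would be:

  raw capacity:                 ~106.4T
  raw used by vm.pool:          2734G x 3 replicas = ~8.0T
  cluster-wide AVAIL:           ~106.4T - 8.0T = ~98.4T
  MAX AVAIL for a size-3 pool:  ~98.4T / 3 = ~32.8T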

Best regards,
Sergey TS

_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
