Ceph does a quick benchmark when creating a new OSD and stores the
osd_mclock_max_capacity_iops_{ssd,hdd} settings in the config DB.

When destroying the OSD, Ceph does not automatically remove these
settings. Keeping them around can be problematic if a new, potentially
faster OSD is added later and ends up getting the same OSD ID, as the
stale value would then apply to it.

Therefore, we remove these settings ourselves when destroying an OSD.
Removing both variants, hdd and ssd, should be fine, as the MON does not
complain if the setting does not exist.

Signed-off-by: Aaron Lauterer <a.laute...@proxmox.com>
---
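For anyone wanting to clean up leftover settings of already destroyed
OSDs by hand, here is a minimal sketch of the equivalent cleanup. It
assumes a connected PVE::RADOS instance and a hypothetical OSD ID 1
(adjust as needed); "config rm" is a no-op if the key was never set:

    use PVE::RADOS;

    # connect to the cluster's MONs via librados
    my $rados = PVE::RADOS->new();
    my $osdsection = "osd.1";    # hypothetical ID for illustration

    # remove both device-class variants; the MON does not complain
    # if the setting does not exist
    for my $variant (qw(ssd hdd)) {
        $rados->mon_command({
            prefix => 'config rm',
            who => $osdsection,
            name => "osd_mclock_max_capacity_iops_$variant",
        });
    }
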
 PVE/API2/Ceph/OSD.pm | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index 0c07e7ce..2893456a 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -985,6 +985,10 @@ __PACKAGE__->register_method ({
            print "Remove OSD $osdsection\n";
            $rados->mon_command({ prefix => "osd rm", ids => [ $osdsection ], 
format => 'plain' });
 
+           print "Remove $osdsection mclock max capacity iops settings from config\n";
+           $rados->mon_command({ prefix => "config rm", who => $osdsection, name => 'osd_mclock_max_capacity_iops_ssd' });
+           $rados->mon_command({ prefix => "config rm", who => $osdsection, name => 'osd_mclock_max_capacity_iops_hdd' });
+
            # try to unmount from standard mount point
            my $mountpoint = "/var/lib/ceph/osd/ceph-$osdid";
 
-- 
2.39.2