I was able to get the osdmaps to slowly trim (maybe 50 would trim with each 
change) by making small changes to the CRUSH map like this:

# each tiny reweight bumps the osdmap epoch; roughly 50 old maps trimmed per change here
for i in {1..100}; do
    ceph osd crush reweight osd.1754 4.00001
    sleep 5
    ceph osd crush reweight osd.1754 4
    sleep 5
done
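
To watch the trimming progress while the loop runs, something like this works 
(reusing the find command from my original message below; the path assumes the 
FileStore layout and the right OSD id for your host):

watch -n 30 "find /var/lib/ceph/osd/ceph-1754/current/meta -name 'osdmap*' | wc -l"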

I believe this was the solution Dan came across back in the Hammer days.  It 
works, but it's certainly not ideal.  Across the cluster it freed up around 
50TB of data!

Bryan

From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Bryan 
Stillwell <bstillw...@godaddy.com>
Date: Monday, January 7, 2019 at 2:40 PM
To: ceph-users <ceph-users@lists.ceph.com>
Subject: [ceph-users] osdmaps not being cleaned up in 12.2.8

I have a cluster with over 1900 OSDs running Luminous (12.2.8) that isn't 
cleaning up old osdmaps after doing an expansion.  This is even after the 
cluster became 100% active+clean:

# find /var/lib/ceph/osd/ceph-1754/current/meta -name 'osdmap*' | wc -l
46181
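
To get the counts for every OSD on a host, a quick loop over the data dirs 
works (a sketch assuming the standard FileStore paths; adjust the glob if 
your OSDs live elsewhere):

for d in /var/lib/ceph/osd/ceph-*/current/meta; do
    echo -n "$d: "
    find "$d" -name 'osdmap*' | wc -l
done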

With each osdmap being over 600KB in size, this adds up:

# du -sh /var/lib/ceph/osd/ceph-1754/current/meta
31G        /var/lib/ceph/osd/ceph-1754/current/meta
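
The monitors' view of the committed osdmap range should show how far behind 
the trimming is; if I remember right it's in the ceph report output (field 
names from memory, so treat this as a sketch):

ceph report 2>/dev/null | jq '.osdmap_first_committed, .osdmap_last_committed'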

I remember running into this during the Hammer days:

http://tracker.ceph.com/issues/13990

Did something change recently that may have broken this fix?

Thanks,
Bryan
